# ABC: Attention with Bounded-Memory Control

Hao Peng\* Jungo Kasai\* Nikolaos Pappas\* Dani Yogatama\* Zhaofeng Wu\* Lingpeng Kong\* Roy Schwartz\* Noah A. Smith\*

$\spadesuit$ Paul G. Allen School of Computer Science & Engineering, University of Washington
$^{\star}$ Amazon Web Services
$\clubsuit$ DeepMind
$\diamond$ Allen Institute for Artificial Intelligence
$\diamond$ School of Computer Science & Engineering, Hebrew University of Jerusalem
$\diamond$ Department of Computer Science, The University of Hong Kong

{hapeng, jkasai, npappas, zfw7, nasmith}@cs.washington.edu
dyogatama@deepmind.com, lpk@cs.hku.hk, roy.schwartz1@mail.huji.ac.il

# Abstract

Transformer architectures have achieved state-of-the-art results on a variety of natural language processing (NLP) tasks. However, their attention mechanism comes with a quadratic complexity in sequence lengths, making the computational overhead prohibitive, especially for long sequences. Attention context can be seen as a random-access memory with each token taking a slot. Under this perspective, the memory size grows linearly with the sequence length, and so does the overhead of reading from it. One way to improve the efficiency is to bound the memory size. We show that disparate approaches can be subsumed into one abstraction, attention with bounded-memory control (ABC), and that they vary in their organization of the memory. ABC reveals new, unexplored possibilities. First, it connects several efficient attention variants that would otherwise seem distinct. Second, this abstraction gives new insights: an established approach (Wang et al., 2020b), previously thought not to be applicable in causal attention, actually is. Last, we present a new instance of ABC, which draws inspiration from existing ABC approaches, but replaces their heuristic memory-organizing functions with a learned, contextualized one.
Our experiments on language modeling, machine translation, and masked language model finetuning show that our approach outperforms previous efficient attention models; compared to strong transformer baselines, it significantly improves the inference time and space efficiency with no or negligible accuracy loss.

# 1 Introduction

Transformer architectures are now central in natural language processing (Vaswani et al., 2017). They rely on the attention mechanism (Bahdanau et al., 2015) to contextualize the input. The context can be seen as a random-access memory whose size grows linearly with the sequence length; each query reads from it using a softmax-normalized linear combination, with overhead linear in the memory size. This amounts to a quadratic complexity overall, making transformers' computational overhead prohibitive, especially for long sequences.

One way to improve attention's efficiency is to bound its memory size. Imposing a constant-size constraint on the memory ensures that reading from it has constant time and space overhead, yielding an overall complexity linear in the sequence length. This is in fact a common strategy adopted by several recent works. In this work, we show that some of these works are closely connected in ways that, to date, have gone unremarked. We propose attention with bounded-memory control (ABC), a unified abstraction over them. In ABC, constant-sized memories are organized with various control strategies, e.g., induced from heuristic patterns (Beltagy et al., 2020; Zaheer et al., 2020; Ainslie et al., 2020; Rae et al., 2020, inter alia), locality assumptions (Parmar et al., 2018; Liu et al., 2018), or positions (Wang et al., 2020b).

These strategies, by and large, are "context-agnostic." In response to this, we propose $\mathrm{ABC}_{\mathrm{MLP}}$, a particular instance of ABC that learns a contextualized control strategy from data.
Specifically, $\mathrm{ABC}_{\mathrm{MLP}}$ uses a neural network to determine how to store each token into the memory (if at all). Compared to previous bounded-memory models, it strikes a better trade-off between accuracy and efficiency: controlling for accuracy, $\mathrm{ABC}_{\mathrm{MLP}}$ can get away with much smaller memory sizes.

ABC models (including $\mathrm{ABC}_{\mathrm{MLP}}$) come with a linear complexity in sequence lengths, and admit recurrent computation graphs in causal attention (self-attention over the prefix). They are therefore appealing choices in a variety of applications, including text encoding, language modeling, and text generation. This leads to a surprising finding. Linformer (Wang et al., 2020b), an established efficient attention method, was previously thought not to be applicable in causal attention or autoregressive decoding (Tay et al., 2020). Through the ABC view, we show that it actually is, and that it achieves competitive performance in our machine translation experiments.

ABC connects existing models that would otherwise seem distinct, reveals new insights into established methods, and inspires new efficient attention architectures. We explore its applications in transformers, as a drop-in substitute for the canonical softmax attention. ABC offers a novel lens that can help future research in the analysis of transformers, where the theoretical insights are still catching up with the empirical success. Experiments on language modeling, machine translation, and masked language model finetuning show that our $\mathrm{ABC}_{\mathrm{MLP}}$ model outperforms previous ABC approaches in accuracy with a much smaller memory size. Compared to the strong transformer baseline, $\mathrm{ABC}_{\mathrm{MLP}}$ achieves a significant speedup and memory savings at inference time, with no or negligible accuracy loss.
The efficiency improvements are most prominent for long sequences, where the asymptotic savings matter most. We release our code at https://github.com/Noahs-ARK/ABC.

# 2 An Outer-Product View of Attention

This section presents our outer-product memory perspective of attention, which eases the transition to the later discussion.

In attention, a sequence of queries $\{\mathbf{q}_i\}_{i=1}^N$ attend to a memory with $N$ slots, each storing a key and value pair: $\mathbf{K} = [\mathbf{k}_1, \dots, \mathbf{k}_N]^\top$, $\mathbf{V} = [\mathbf{v}_1, \dots, \mathbf{v}_N]^\top \in \mathbb{R}^{N \times d}$. A query $\mathbf{q}$ reads from the memory using a softmax-normalized linear combination, producing a $d$-dimensional vector:

$$
\operatorname{attn}(\mathbf{q}, \{\mathbf{k}_i\}, \{\mathbf{v}_i\}) = \mathbf{V}^{\top} \operatorname{softmax}(\mathbf{K}\mathbf{q}). \tag{1}
$$

This takes $\mathcal{O}(N)$ time and space. When the attention with $N$ queries can be parallelized (e.g., in text encoding), it takes linear time and quadratic space; when it cannot be (e.g., in decoding), it takes quadratic time and linear space.

The memory can be equivalently represented as sums of vector outer products: $\mathbf{K} = \mathbf{IK} = \sum_{i=1}^{N} \mathbf{e}_i \otimes \mathbf{k}_i$, $\mathbf{V} = \sum_{i=1}^{N} \mathbf{e}_i \otimes \mathbf{v}_i$. Here $\mathbf{I}$ is the identity matrix, and $\otimes$ denotes the outer product: $[\mathbf{x} \otimes \mathbf{y}]_{i,j} = x_i y_j$. The $N$-dimensional vectors $\{\mathbf{e}_i\}$ form the standard basis: $\mathbf{e}_i$ has its $i$th element equal to one and all others zero.
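The outer-product decomposition and the attention read of Eq. 1 can be checked numerically. The following is a minimal NumPy sketch (toy dimensions and random values, chosen purely for illustration, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 6, 4

K = rng.normal(size=(N, d))   # keys, one per memory slot
V = rng.normal(size=(N, d))   # values
q = rng.normal(size=d)        # a single query

# K = sum_i e_i (outer) k_i, with {e_i} the standard basis (Section 2).
E = np.eye(N)                 # row i is the basis vector e_i
K_outer = sum(np.outer(E[i], K[i]) for i in range(N))
assert np.allclose(K_outer, K)

# Eq. 1: attn(q, {k_i}, {v_i}) = V^T softmax(K q).
scores = K @ q
weights = np.exp(scores - scores.max())   # stable softmax
weights /= weights.sum()
out = V.T @ weights           # d-dimensional attention output
```

Reading once costs $\mathcal{O}(N)$ here, since the score vector has one entry per memory slot.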
We can view $\mathbf{e}_i$ as a control vector that determines where to store $\mathbf{k}_i$ and $\mathbf{v}_i$:

$$
\mathbf{e}_i \otimes \mathbf{k}_i = \Big[\underbrace{0, \ldots, 0}_{i-1}, 1, \underbrace{0, \ldots, 0}_{N-i}\Big]^{\top} \otimes \mathbf{k}_i = \Big[\underbrace{\mathbf{0}}_{d \times (i-1)};\; \mathbf{k}_i;\; \underbrace{\mathbf{0}}_{d \times (N-i)}\Big]^{\top}. \tag{2}
$$

The $N$-by-$d$ matrix on the last line has its $i$th row equal to $\mathbf{k}_i^\top$ and all other rows zero; in this sense, $\mathbf{e}_i$ stores $\mathbf{k}_i$ in the $i$th slot without affecting the others.

# 3 Attention with Bounded Memory

A straightforward way to improve attention's efficiency is to bound its memory size. Our outer-product view suggests how: replace $\{\mathbf{e}_i\}$ with control vectors that select $n \ll N$ slots to attend to. We dub this approach attention with bounded-memory control (ABC). Concretely, let $\widetilde{\mathbf{K}}, \widetilde{\mathbf{V}} \in \mathbb{R}^{n \times d}$ denote a constant-size memory with $n$ slots, with $n$ set a priori:

$$
\widetilde{\mathbf{K}} = \sum_{i=1}^{N} \phi_i \otimes \mathbf{k}_i, \quad \widetilde{\mathbf{V}} = \sum_{i=1}^{N} \phi_i \otimes \mathbf{v}_i. \tag{3}
$$

$\{\phi_i \in \mathbb{R}^n\}_{i=1}^N$ is a sequence of control vectors. The output is calculated by attending to $\widetilde{\mathbf{K}}$ and $\widetilde{\mathbf{V}}$:

$$
\operatorname{ABC}(\mathbf{q}, \{\mathbf{k}_i\}, \{\mathbf{v}_i\}, \{\phi_i\}) = \widetilde{\mathbf{V}}^{\top} \operatorname{softmax}(\widetilde{\mathbf{K}}\mathbf{q}). \tag{4}
$$

We will discuss various ways to construct $\{\phi_i\}$ in the subsequent sections.
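Before turning to concrete choices of $\{\phi_i\}$, the bounded-memory computation of Eqs. 3 and 4 can be sketched in a few lines of NumPy. The control vectors below are random placeholders standing in for any strategy; all dimensions are toy values for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, d = 8, 3, 4             # sequence length N, memory size n << N

K = rng.normal(size=(N, d))   # keys
V = rng.normal(size=(N, d))   # values
q = rng.normal(size=d)        # one query
phi = rng.random(size=(N, n)) # one control vector phi_i per token (placeholder strategy)

# Eq. 3: n-slot memories as sums of outer products phi_i (outer) k_i / v_i.
K_tilde = sum(np.outer(phi[i], K[i]) for i in range(N))  # n x d
V_tilde = sum(np.outer(phi[i], V[i]) for i in range(N))  # n x d

# Eq. 4: the query attends to n slots instead of N tokens -> O(n) per query.
scores = K_tilde @ q
w = np.exp(scores - scores.max())
w /= w.sum()
out = V_tilde.T @ w           # d-dimensional output
```

Stacking the control vectors into an $N \times n$ matrix, the memory construction is just one matrix product (`phi.T @ K`), which is how the parallel (non-causal) case would be computed in practice.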
Reading from the memory takes constant $\mathcal{O}(n)$ time and space; ABC's overall complexity is therefore $\mathcal{O}(Nn)$, linear in the sequence length.

Eq. 3 admits an equivalent recurrent computation, which is particularly useful in causal attention, where only the prefix is attended to:

$$
\widetilde{\mathbf{K}}_{t+1} = \widetilde{\mathbf{K}}_t + \phi_{t+1} \otimes \mathbf{k}_{t+1}, \tag{5}
$$

and likewise for $\widetilde{\mathbf{V}}_t$. $\widetilde{\mathbf{K}}_t$ and $\widetilde{\mathbf{V}}_t$ can be seen as a recurrent hidden state that encodes the prefix.

In what follows, we study several existing efficient attention approaches and show that they are in fact instances of the ABC abstraction.

# 3.1 Linformer

Linformer (Wang et al., 2020b) is an established efficient transformer variant that has proven successful in masked language modeling and text encoding. It assumes fixed-length inputs and learns a low-rank approximation of the attention weights. A learned $n$-by-$N$ matrix $\mathbf{W}^{\mathrm{LF}}$ down-projects the $N$-by-$d$ keys and values along the timestep dimension to an $n$-by-$d$ memory: $\widetilde{\mathbf{K}}^{\mathrm{LF}} = \mathbf{W}^{\mathrm{LF}}\mathbf{K}$, $\widetilde{\mathbf{V}}^{\mathrm{LF}} = \mathbf{W}^{\mathrm{LF}}\mathbf{V}$; these are then used for attention computation with Eq. 4. This yields a linear complexity in the input length. Linformer is an ABC instance with $\phi_{i}^{\mathrm{LF}} = \mathbf{W}_{:,i}^{\mathrm{LF}}$ (the $i$th column), and in this sense, it learns a control vector for each position.

Previous work has noted that Linformer cannot be efficiently applied in causal attention (Table 1 of Tay et al., 2020). Indeed, it is not straightforward to avoid mixing the future with the past when projecting along the timestep dimension. ABC reveals that, in fact, Linformer is applicable in causal attention.
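As a sanity check, the recurrence in Eq. 5 can be verified to reproduce the parallel memory of Eq. 3; a minimal NumPy sketch (random toy values, any control strategy) makes the equivalence concrete:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, d = 8, 3, 4
K = rng.normal(size=(N, d))   # keys k_1 ... k_N
phi = rng.random(size=(N, n)) # control vectors phi_1 ... phi_N

# Parallel form (Eq. 3): K_tilde = sum_t phi_t (outer) k_t.
K_tilde = phi.T @ K           # n x d

# Recurrent form (Eq. 5): a constant-size state updated once per token,
# as in causal attention / autoregressive decoding.
state = np.zeros((n, d))
for t in range(N):
    state = state + np.outer(phi[t], K[t])

assert np.allclose(state, K_tilde)
```

Each decoding step touches only the $n \times d$ state, which is what gives ABC models constant per-step overhead regardless of the prefix length.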
Like all ABC models, it admits a linear-complexity recurrent computation (Eq. 5): $\widetilde{\mathbf{K}}_{t + 1}^{\mathrm{LF}} = \widetilde{\mathbf{K}}_t^{\mathrm{LF}} + \phi_{t + 1}^{\mathrm{LF}}\otimes \mathbf{k}_{t + 1}$. This confirms ABC's benefits: it reveals new insights about existing models and reassesses their applications and impact. Our experiments show that Linformer achieves competitive performance in machine translation.

# 3.2 Clustering-Based Attention

Improving attention's efficiency with clustering has received an increasing amount of interest (Kitaev et al., 2020; Roy et al., 2020; Wang et al., 2020a, inter alia). ABC bears interesting connections to clustering-based methods. Here we discuss an approach that closely follows Vyas et al. (2020), except that it clusters keys and values instead of queries, and only attends to the centroids to reduce the effective context size. Formally, keys and values are grouped into $n < N$ clusters $\{\widetilde{\mathbf{k}}_j^{\mathrm{CL}}\}_{j=1}^n$, $\{\widetilde{\mathbf{v}}_j^{\mathrm{CL}}\}_{j=1}^n$. Let an $N$-by-$n$ binary matrix $\mathbf{M}$ denote the cluster membership, shared between keys and values: $M_{i,j} = 1$ iff $\mathbf{k}_i$ is assigned to cluster $\widetilde{\mathbf{k}}_j^{\mathrm{CL}}$ and $\mathbf{v}_i$ to $\widetilde{\mathbf{v}}_j^{\mathrm{CL}}$. The $j$th centroid for the keys is

$$
\widetilde{\mathbf{k}}_j^{\mathrm{CL}} = \sum_{i=1}^{N} \frac{M_{i,j}}{\sum_{\ell=1}^{N} M_{\ell,j}} \mathbf{k}_i; \tag{6}
$$

likewise for the values. It then attends over the centroids using Eq.
4, with $\widetilde{\mathbf{K}}^{\mathrm{CL}} = [\widetilde{\mathbf{k}}_1^{\mathrm{CL}},\dots ,\widetilde{\mathbf{k}}_n^{\mathrm{CL}}]^{\top}$:

$$
\begin{aligned}
\widetilde{\mathbf{K}}^{\mathrm{CL}} &= \sum_{j=1}^{n} \mathbf{e}_j \otimes \widetilde{\mathbf{k}}_j^{\mathrm{CL}} = \sum_{j=1}^{n} \mathbf{e}_j \otimes \sum_{i=1}^{N} \frac{M_{i,j}}{\sum_{\ell=1}^{N} M_{\ell,j}} \mathbf{k}_i \\
&= \sum_{i=1}^{N} \Bigg( \sum_{j=1}^{n} \frac{M_{i,j}}{\sum_{\ell=1}^{N} M_{\ell,j}}\, \mathbf{e}_j \Bigg) \otimes \mathbf{k}_i.
\end{aligned}
$$

The last line indicates that this model is an instance of ABC, with $\phi_{i} = \sum_{j=1}^{n}(M_{i,j} / \sum_{\ell=1}^{N}M_{\ell,j})\,\mathbf{e}_{j}$. The stack of centroids can be seen as the constant-size memory. Putting aside the clustering overhead (i.e., constructing $\mathbf{M}$ and computing the centroids), it has a linear complexity in the sequence length.

# 3.3 Sliding-Window Attention

In some applications, being able to remove entries from the memory can be beneficial: clearing out older context frees slots for more recent tokens, promoting a locality inductive bias. ABC offers this capability when augmented with an additional matrix multiplication. We use sliding-window attention as an example.

Attending to the most recent $n$ input tokens (Beltagy et al., 2020; Zaheer et al., 2020; Sukhbaatar et al., 2021, inter alia) can be seen as a first-in-first-out queue that "pops" the oldest token while "pushing" in the most recent one: $\widetilde{\mathbf{K}}_t^{\mathrm{WD}} = [\mathbf{k}_{t - n + 1},\dots,\mathbf{k}_t]^\top$. The pop operation can be achieved by multiplying by an $n$-by-$n$ upper shift matrix $\mathbf{U}$, with $U_{i,j} = \delta_{i + 1,j}$ and $\delta$ the Kronecker delta (i.e., $\mathbf{U}$ has ones only on the superdiagonal and zeros elsewhere).
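The shift matrix and the pop-then-push update it enables are easy to see in code. A minimal NumPy sketch (toy sizes, a counting matrix as the current window, purely for illustration):

```python
import numpy as np

n, d = 4, 3
# Upper shift matrix U: ones on the superdiagonal, zeros elsewhere.
U = np.eye(n, k=1)

K_tilde = np.arange(n * d, dtype=float).reshape(n, d)  # current n-slot window
popped = U @ K_tilde          # all rows shift up one slot; last row becomes zeros
assert np.allclose(popped[:-1], K_tilde[1:])
assert np.allclose(popped[-1], 0.0)

# Push the newest key into the freed last slot (phi_t = e_n):
k_new = np.ones(d)
e_n = np.eye(n)[-1]
K_next = popped + np.outer(e_n, k_new)
```

Applying `U` repeatedly keeps shifting the queue, so a key written $n$ steps ago has been shifted out entirely, which is exactly the fixed-size sliding window.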
Left-multiplying $\widetilde{\mathbf{K}}_t^{\mathrm{WD}}$ by $\mathbf{U}$ shifts its rows up by one position, with zeros filling the last row:

$$
\mathbf{U}\widetilde{\mathbf{K}}_t^{\mathrm{WD}} = \mathbf{U}\Big[\underbrace{\mathbf{k}_{t-n+1}, \ldots, \mathbf{k}_t}_{n}\Big]^{\top} = \Big[\underbrace{\mathbf{k}_{t-n+2}, \ldots, \mathbf{k}_{t-1}, \mathbf{k}_t}_{n-1}, \mathbf{0}\Big]^{\top} \in \mathbb{R}^{n \times d}.
$$

The most recent token can then be put into the freed slot: $\widetilde{\mathbf{K}}_{t + 1}^{\mathrm{WD}} = \mathbf{U}\widetilde{\mathbf{K}}_t^{\mathrm{WD}} + \mathbf{e}_n\otimes \mathbf{k}_{t + 1}$. Together, $\mathbf{U}$ and $\phi_t = \mathbf{e}_n$ implement a first-in-first-out queue. Dilated and strided convolution patterns (Beltagy et al., 2020) can be similarly recovered (§A.4).

Recurrently multiplying by $\mathbf{U}$ simulates the discrete pop operation (Grefenstette et al., 2015; Joulin and Mikolov, 2015; Yogatama et al., 2018) in a differentiable way. This is reminiscent of recurrent neural networks, although here $\mathbf{U}$ is never updated as a parameter. Learning $\mathbf{U}$ is an exciting direction, but beyond the scope of this work.

Discussion. Besides the models discussed above, certain variants of Rae et al. (2020) and sparse attention patterns (local-to-global attention; Beltagy et al., 2020; Zaheer et al., 2020; Ainslie et al., 2020) can also be seen as instances of ABC (§A). ABC provides a unified perspective on these methods, and at the same time points out their shared limitation: their control strategies are context-agnostic. In response, in §4 we propose to learn a contextualized strategy from data. Table 1 analyzes various ABC models, and Table 2 details their complexity.

# 4 Learned Memory Control

The ABC abstraction connects several existing approaches that would otherwise seem distinct.
This inspires the design of new architectures. We hypothesize that learning a contextualized strategy can achieve better performance. This section introduces $\mathrm{ABC}_{\mathrm{MLP}}$, which parameterizes $\phi$ with a single-layer multi-layer perceptron (MLP) that takes as input the token's representation $\mathbf{x}_i$ and determines which slots to write it into, and how much:

$$
\boldsymbol{\alpha}_i = \exp(\mathbf{W}_{\phi}\mathbf{x}_i), \quad \phi_i = \boldsymbol{\alpha}_i \Big/ \sum_{j=1}^{N} \boldsymbol{\alpha}_j. \tag{7}
$$

Matrix $\mathbf{W}_{\phi}$ is learned, and $\exp$ is an elementwise activation function. The motivation is to allow storing a "fractional" (but never negative) amount of each input in the memory. Using a non-negative activation, however, has a drawback: the scales of $\sum_{i}\phi_{i}\otimes \mathbf{k}_{i}$ and $\sum_{i}\phi_{i}\otimes \mathbf{v}_{i}$ would grow with the sequence length, making training less stable. To overcome this, we divide the $\boldsymbol{\alpha}_{i}$ vectors by their sum. This functions as a normalization and offsets the impact of varying sequence lengths. The model admits the recurrent computation graph of Eq. 5, and has a linear complexity in the sequence length.

A key design choice of $\mathrm{ABC}_{\mathrm{MLP}}$ is that $\phi_{i}$ depends only on the current input $\mathbf{x}_i$. This helps (1) keep the recurrent computation efficient in practice (Lei et al., 2018), and (2) make it applicable not only in encoder self-attention and cross attention, but also in causal attention. Concurrently to this work, Goyal et al. (2021) and Ma et al. (2021) also proposed methods to learn contextualized control. They compute $\phi_{i}$ from the previous layer's memory, revealing the full sequence to the control vectors. As a result, these two approaches are unsuitable for causal attention.
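Eq. 7 can be sketched directly in NumPy. The sketch below uses the full-sequence normalization of Eq. 7 (as in encoder attention); all shapes and the random weight matrix are toy values standing in for a trained $\mathbf{W}_{\phi}$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, dx = 8, 3, 5            # sequence length, memory size, input dimension
X = rng.normal(size=(N, dx))  # token representations x_1 ... x_N
W_phi = 0.1 * rng.normal(size=(n, dx))  # stand-in for the learned phi-MLP weights

# Eq. 7: alpha_i = exp(W_phi x_i); phi_i = alpha_i / sum_j alpha_j.
alpha = np.exp(X @ W_phi.T)   # N x n, strictly positive
phi = alpha / alpha.sum(axis=0, keepdims=True)

# After normalization, each slot's write weights sum to one over the
# sequence, offsetting the effect of sequence length on the memory's scale.
assert np.allclose(phi.sum(axis=0), np.ones(n))
```

Because each $\phi_i$ is a function of $\mathbf{x}_i$ alone, all control vectors can be computed in one batched matrix product, with no dependence between timesteps.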
$\mathrm{ABC}_{\mathrm{MLP}}$, like other ABC models, can be used as a drop-in replacement for the canonical softmax attention, and we apply its multihead variant in transformers. With proper parameter sharing, the number of additional parameters $\mathrm{ABC}_{\mathrm{MLP}}$ incurs is small: inspired by Wang et al. (2020b), we tie the $\phi$-MLP's parameters across layers, which adds less than 1% parameters to the models.

$\mathrm{ABC}_{\mathrm{MLP}}$: context-agnostic then context-dependent attention. We now dissect $\mathrm{ABC}_{\mathrm{MLP}}$ and show that it can be seen as a cascade of two attention mechanisms: one with a learned, context-agnostic "pseudo-query," followed by one with a context-dependent query. Our analysis starts with a one-dimensional example; the conclusion generalizes to higher-dimensional cases.

Example 1. Consider $\mathrm{ABC}_{\mathrm{MLP}}$ with a single memory slot ($n = 1$). It is parameterized with a learned vector $\mathbf{w}_{\phi}$, and $\phi_{i} = \exp(\mathbf{w}_{\phi} \cdot \mathbf{x}_{i}) / \sum_{j=1}^{N} \exp(\mathbf{w}_{\phi} \cdot \mathbf{x}_{j})$. Since $\phi_{i}$ is a scalar here, $\phi_{i} \otimes \mathbf{k}_{i} = \phi_{i} \mathbf{k}_{i}^{\top}$:

$$
\begin{aligned}
\widetilde{\mathbf{K}}^{\top} &= \sum_{i=1}^{N} (\phi_i \otimes \mathbf{k}_i)^{\top} = \sum_{i=1}^{N} \frac{\exp(\mathbf{w}_{\phi} \cdot \mathbf{x}_i)}{\sum_{j=1}^{N} \exp(\mathbf{w}_{\phi} \cdot \mathbf{x}_j)}\, \mathbf{k}_i \\
&= \operatorname{attn}\big(\mathbf{w}_{\phi}, \{\mathbf{x}_i\}_{i=1}^{N}, \{\mathbf{k}_i\}_{i=1}^{N}\big).
\end{aligned}
$$

In other words, $\widetilde{\mathbf{K}}$ uses $\mathbf{w}_{\phi}$ as a "pseudo-query" to attend to $\{\mathbf{x}_i\}$ and read out $\{\mathbf{k}_i\}$. Likewise, $\widetilde{\mathbf{V}}^{\top} = \operatorname{attn}(\mathbf{w}_{\phi}, \{\mathbf{x}_i\}_{i=1}^{N}, \{\mathbf{v}_i\}_{i=1}^{N})$.
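The equivalence in Example 1 is mechanical to verify: with $n = 1$, building the memory via $\phi$ is literally softmax attention with $\mathbf{w}_{\phi}$ as the query. A minimal NumPy sketch (random toy values throughout):

```python
import numpy as np

rng = np.random.default_rng(0)
N, dx, d = 6, 5, 4
X = rng.normal(size=(N, dx))  # token representations x_i
K = rng.normal(size=(N, d))   # keys k_i
w_phi = rng.normal(size=dx)   # learned pseudo-query (single memory slot, n = 1)

# ABC_MLP route: scalar phi_i, one-slot memory K_tilde = sum_i phi_i k_i.
alpha = np.exp(X @ w_phi)
phi = alpha / alpha.sum()
K_tilde = phi @ K             # a single d-dimensional slot

# Attention route: softmax attention with w_phi as the query over {x_i},
# reading out {k_i} (Eq. 1 with q = w_phi).
w = np.exp(X @ w_phi)
w /= w.sum()
attn_out = K.T @ w

assert np.allclose(K_tilde, attn_out)
```

With $n > 1$, each row of $\mathbf{W}_{\phi}$ plays the role of `w_phi`, and the $n$ resulting summaries are stacked into the memory.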
Despite its similarity to standard softmax attention, the computation in Example 1 has a more efficient, linear complexity in the sequence length; $\mathbf{w}_{\phi}$ being context-independent is the key to the savings. Table 2 details the complexity.

Example 1's conclusion generalizes to higher-dimensional cases: the $j$th dimension of $\{\phi_i\}$ attends to $\{\mathbf{x}_i\}$ and $\{\mathbf{k}_i\}$ using the $j$th row of $\mathbf{W}_{\phi}$ as a context-independent pseudo-query; $n$ such attention mechanisms run in parallel, stacking the
| Model | Section | $\phi_t$ | Mem. Control |
| --- | --- | --- | --- |
| Sliding-window | §3.3 | $\mathbf{e}_n$ | $\widetilde{\mathbf{K}}_{t+1} = \mathbf{U}\widetilde{\mathbf{K}}_t + \phi_{t+1} \otimes \mathbf{k}_{t+1}$ |
| Linformer | §3.1 | $\mathbf{W}^{\mathrm{LF}}_{:,t}$ | |
| L2G Pattern | §A.1 | $\mathbf{e}_i$ if $\mathbf{x}_t$ is the $i$th global token | |
| $\mathrm{ABC}_{\mathrm{RD}}$ | §A.2 | $\mathbf{e}_{i_t}$, where $i_t \sim \operatorname{unif}\{1, n\}$ | $\widetilde{\mathbf{K}}_{t+1} = \widetilde{\mathbf{K}}_t + \phi_{t+1} \otimes \mathbf{k}_{t+1}$ |
| Comp. Trans. | §A.3 | $\mathbf{e}_{\lceil nt/N \rceil}$ | |
| Clustering | §3.2 | $\sum_{j=1}^{n} \big(M_{t,j} / \sum_{\ell=1}^{N} M_{\ell,j}\big)\, \mathbf{e}_j$ | |
| $\mathrm{ABC}_{\mathrm{MLP}}$ | §4 | $\exp(\mathbf{W}_{\phi}\mathbf{x}_t) \big/ \sum_{i=1}^{t} \exp(\mathbf{W}_{\phi}\mathbf{x}_i)$ | |

Table 1: A comparison of different ABC models. $N$ denotes the sequence length, and $n$ the memory size. $\phi_t$ denotes the memory control vector for $\mathbf{k}_t$ and $\mathbf{v}_t$, and unif denotes the discrete uniform distribution.
| Model | Time: Mem. | Time: Per Query | Time: Overall | Space: Mem. | Space: Per Query | Space: Overall |
| --- | --- | --- | --- | --- | --- | --- |
| Softmax Attention | – | $\mathcal{O}(N)$ | $\mathcal{O}(N^2)$ | – | $\mathcal{O}(N)$ | $\mathcal{O}(N^2)$ |
| ABC | $\mathcal{O}(N)$ | $\mathcal{O}(n)$ | $\mathcal{O}(nN)$ | $\mathcal{O}(n)$ | $\mathcal{O}(n)$ | $\mathcal{O}(nN)$ |

Table 2: ABC's time and space complexity in the sequence length, against softmax attention's. "Mem." indicates the time and space needed to calculate and store the memory $\widetilde{\mathbf{K}}, \widetilde{\mathbf{V}}$. $N$ denotes the sequence length, and $n$ the memory size. The time complexity analysis assumes that softmax attention cannot be parallelized across the queries. In practice, this is common in autoregressive decoding, or for long sequences where the accelerators (e.g., GPUs) do not have enough threads to fully parallelize softmax attention's computation across different queries.

results into the $n$-by-$d$ memories $\widetilde{\mathbf{K}}$ and $\widetilde{\mathbf{V}}$. Intuitively, it is the "real" queries $\{\mathbf{q}_i\}$ that encode what information is useful for the prediction task. Without access to them, $\mathrm{ABC}_{\mathrm{MLP}}$ summarizes the input $n$ times using different pseudo-queries, aiming to preserve enough information in the memory for onward computation. The attention output is then calculated with the context-dependent real queries using Eq. 4. §B.2 presents a detailed derivation.

Connections to other prior works. Although starting from a distinct motivation, $\mathrm{ABC}_{\mathrm{MLP}}$ closely relates to hierarchical attention (HA; Yang et al., 2016). HA summarizes the context into higher-level representations with a cascade of attention mechanisms, e.g., words to sentences, and then to documents. $\mathrm{ABC}_{\mathrm{MLP}}$ applies two types of attention: the first learns context-agnostic pseudo-queries and attends to the same sequence $n$ times in parallel, while the second retrieves from the memory with real queries. HA, in contrast, summarizes non-overlapping segments at each level.

The learned pseudo-queries closely relate to the inducing-point method in set attention (ISA; Lee et al., 2019). ISA applies a non-linear feedforward network between a cascade of two attention modules.
This precludes the outer-product memory computation and the efficient recurrence available in ABC.

Another line of work "linearizes" attention through kernel tricks and also applies bounded memory: their feature-map dimensions are analogous to memory sizes. These methods substitute the softmax with approximations (Peng et al., 2021; Choromanski et al., 2021), or with heuristically designed (Katharopoulos et al., 2020; Schlag et al., 2021) or learned (Kasai et al., 2021b) functions. $\mathrm{ABC}_{\mathrm{MLP}}$ keeps the softmax, but applies it over a smaller, constant-sized context. This can be useful in practice: (1) ABC provides a unified perspective over several efficient attention methods, allowing for borrowing from existing wisdom to design new architectures; (2) it draws a close analogy to the canonical softmax attention, and is better suited as its drop-in substitute in various application settings, as we will show in the experiments; (3) empirically, we find that $\mathrm{ABC}_{\mathrm{MLP}}$ can get away with a much smaller memory size while retaining accuracy. Peng et al. (2021) and Schlag et al. (2021) use gating to promote a recency bias; the same technique is equally applicable to ABC models.

The learned contextualized memory control is reminiscent of the content-based addressing in neural Turing machines (NTM; Graves et al., 2014). $\mathrm{ABC_{MLP}}$ computes the control vectors $\{\phi_i\}$ as a function of the input, but not of the memory as in NTM. This ensures that the control vectors at different timesteps can be computed in parallel, improving the time efficiency in practice (Lei et al., 2018; Peng et al., 2018). Analogies between memory and neural architectures are also made by other previous works (Hochreiter and Schmidhuber, 1997; Weston et al., 2015; Le et al., 2020, inter alia).

# 5 Experiments

We evaluate ABC models on language modeling (§5.1), sentence-level and document-level machine translation (§5.2), and masked language model finetuning (§5.3). Dataset statistics and implementation details are summarized in §C.

# 5.1 Language Modeling

Setting. We experiment with WikiText-103, sampled text from English Wikipedia (Merity et al., 2017). The BASE model with standard softmax attention is the strong transformer-based language model of Baevski and Auli (2019). We compare the following ABC variants, which build on BASE but replace the softmax attention with linear-complexity bounded-memory attention alternatives, keeping other components the same.

- $\mathrm{ABC}_{\mathrm{MLP}}$, as described in §4, learns a contextualized exp-MLP as the $\phi$ function.
- Linformer (§3.1; Wang et al., 2020b).
- $\mathrm{ABC}_{\mathrm{RD}}$ stores each token in a randomly selected memory slot, with $\phi_t = \mathbf{e}_{i_t}$ and $i_t$ uniformly drawn from $\{1, \dots, n\}$ at each timestep. This helps us quantify the differences between random and learned bounded-memory control.

We consider two model size settings:

- 16 layers (Baevski and Auli, 2019). All models have around $\sim$242M parameters. They train with 512-token segments, and are evaluated with 0 or 480 context sizes: a 0- or 480-length prefix precedes each evaluation segment.
- 32 layers (Kasai et al., 2021b). All models have $\sim$484M parameters. This setting applies layer dropout (Fan et al., 2020), and evaluates with a 256 context size. It aims to compare $\mathrm{ABC}_{\mathrm{MLP}}$ to several kernel-based efficient attention variants: ELU (Katharopoulos et al., 2020), RFA (Peng et al., 2021), and T2R (Kasai et al., 2021b).

Results. Table 3a compares ABC variants using Baevski and Auli (2019)'s 16-layer setting. Among
| Model | $n$ | Dev. (0) | Dev. (480) | Test (0) | Test (480) |
| --- | --- | --- | --- | --- | --- |
| BASE | – | 19.8 | 18.4 | 20.5 | 19.0 |
| Linformer | 64 | 26.5 | 27.1 | 27.2 | 30.7 |
| $\mathrm{ABC}_{\mathrm{RD}}$ | 64 | 23.2 | 22.3 | 24.0 | 23.1 |
| $\mathrm{ABC}_{\mathrm{MLP}}$ | 32 | 21.2 | 19.7 | 21.9 | 20.5 |
| $\mathrm{ABC}_{\mathrm{MLP}}$ | 64 | **20.4** | **18.9** | **21.1** | **19.5** |

(a) 16-layer setting. 0/480 indicate evaluation context sizes.
| Model | $n$ | Dev. | Test |
| --- | --- | --- | --- |
| BASE† | – | 17.9 | 18.5 |
| ELU† | 128 | 22.0 | 22.8 |
| RFA† | 32 | 20.4 | 21.3 |
| T2R† | 32 | 20.1 | 20.8 |
| $\mathrm{ABC}_{\mathrm{MLP}}$ | 32 | **19.2** | **19.9** |

(b) 32-layer setting. A 256-length context is used at evaluation time. † numbers are due to Kasai et al. (2021b).

Table 3: WikiText-103 language modeling perplexity (lower is better). $n$ denotes the memory size. Bold numbers perform the best among linear-complexity models.

ABC models, $\mathrm{ABC}_{\mathrm{MLP}}$ achieves the best performance for both context sizes. With a memory size $n = 64$, $\mathrm{ABC}_{\mathrm{MLP}}$ outperforms both Linformer and $\mathrm{ABC}_{\mathrm{RD}}$ by more than 2.9 test perplexity, and the gap is larger with the longer 480-length context: more than 3.6 test perplexity. $\mathrm{ABC}_{\mathrm{MLP}}$-32 outperforms its larger-memory ABC counterparts by more than 2.1 test perplexity. These results confirm $\mathrm{ABC}_{\mathrm{MLP}}$'s advantage of using a contextualized strategy. Surprisingly, Linformer underperforms $\mathrm{ABC}_{\mathrm{RD}}$, and its performance drops with the larger 480-length context window. This suggests that, while successful in text encoding, Linformer's position-based strategy is a suboptimal design choice for causal attention, at least for long contexts. All ABC models underperform BASE, with $\mathrm{ABC}_{\mathrm{MLP}}$-64 having the smallest gap of 0.5 perplexity. $\mathrm{ABC}_{\mathrm{MLP}}$-32 outperforms the kernel-based methods by more than 0.9 test perplexity, using Kasai et al. (2021b)'s 32-layer setting (Table 3b).

# 5.2 Machine Translation

Datasets. To assess performance over various output lengths, we compare ABC models on sentence- and document-level machine translation.

- Sentence-level translation with WMT14 EN-DE
| Model | Cross $n$ | Causal $n$ | BLEU |
| --- | --- | --- | --- |
| BASE | – | – | 27.2 |
| $\mathrm{ABC}_{\mathrm{RD}}$ | 32 | 32 | 25.7 |
| $\mathrm{ABC}_{\mathrm{RD}}$ | 64 | 64 | 26.2 |
| Linformer | 32 | 32 | 26.6 |
| Linformer | 64 | 64 | 26.7 |
| $\mathrm{ABC}_{\mathrm{MLP}}$ | 32 | 8 | 27.1 |
| $\mathrm{ABC}_{\mathrm{MLP}}$ | 32 | 32 | **27.3** |

(a) Bolded number outperforms BASE.
| Model | Cross $n$ | Causal $n$ | BLEU |
| --- | --- | --- | --- |
| BASE | – | – | 39.9 |
| Linformer | 128 | 64 | – |
| $\mathrm{ABC}_{\mathrm{RD}}$ | 128 | 64 | 38.6 |
| $\mathrm{ABC}_{\mathrm{MLP}}$ | 128 | 64 | **39.7** |
(b) Linformer fails to converge even with multiple random seeds. The bold number performs the best among ABC models.

Table 4: Machine translation test SacreBLEU. Left: sentence-level translation with WMT14 EN-DE; right: document-level translation with IWSLT14 ES-EN.

(Bojar et al., 2014). The preprocessing and data splits follow Vaswani et al. (2017).

- Document-level translation with IWSLT14 ES-EN (Cettolo et al., 2014). We use Miculicich et al. (2018)'s data splits and preprocessing. Following standard practice (Voita et al., 2019), a 4-sentence sliding window is used to create the dataset, i.e., each instance has 4 sentences.

Setting. We compare the same ABC variants as in §5.1; §C.2 further compares to the clustering-based (§3.2) and sliding-window (§3.3) ABC variants.

The BASE model they build on is our implementation of transformer-base (Vaswani et al., 2017). ABC variants replace the decoder's cross attention and causal attention with bounded-memory attention, while keeping softmax attention for the encoder, since its overhead is much less significant (Kasai et al., 2021a); other components are kept the same. §C.2 studies a model that replaces all softmax attention with $\mathrm{ABC_{MLP}}$; it performs on par with BASE, confirming $\mathrm{ABC_{MLP}}$'s broad applicability in various application scenarios. We evaluate with SacreBLEU (Post, 2018).

Results. Table 4a summarizes sentence-level machine translation results on the WMT14 EN-DE test set. Overall, $\mathrm{ABC}_{\mathrm{MLP}}$ performs on par with BASE, with either 32-32 cross-causal memory sizes or 32-8. Differently from the trend in the language modeling experiments (§5.1), Linformer outperforms $\mathrm{ABC}_{\mathrm{RD}}$ by more than 0.5 BLEU. We attribute this to the smaller sequence lengths of this dataset.
$\mathrm{ABC}_{\mathrm{MLP}}$ outperforms other ABC models by more than 0.4 BLEU, even with smaller memory sizes.

The trend is similar on document-level translation with IWSLT14 ES-EN (Table 4b), except that $\mathrm{ABC}_{\mathrm{MLP}}$ slightly underperforms BASE, by 0.2 BLEU. This suggests that even with longer sequences, $\mathrm{ABC}_{\mathrm{MLP}}$ is effective despite its bounded memory size. Linformer fails to converge even with multiple random seeds, suggesting the limitations of its purely position-based strategy in tasks involving decoding varying-length text.

# 5.3 Masked Language Model Finetuning

Setting. We compare the ABC variants as in §5.1. Pretraining ABC from scratch would be interesting, but we lack the resources to do so. Instead, we warm-start from a pretrained RoBERTa-base (Liu et al., 2019) trained with the softmax transformer, swap its attention with ABC variants, and continue pretraining with the masked language modeling (MLM) objective on a concatenation of BookCorpus (Zhu et al., 2015), English Wikipedia, OpenWebText (Gokaslan and Cohen, 2019), and RealNews (Zellers et al., 2019). The models are then finetuned and evaluated on downstream classification datasets from the GLUE benchmark (Wang et al., 2019). This is an appealing setting, since it avoids reinvesting the huge amounts of resources already put into pretraining.

Results. Table 5 compares downstream text classification performance. BASE indicates a baseline that continues pretraining RoBERTa-base on our data. Following standard practice, we report development accuracy. Linformer achieves competitive
| Model | $n$ | MNLI | QNLI | QQP | SST | Avg. |
| --- | --- | --- | --- | --- | --- | --- |
| BASE | - | 87.2 | 92.4 | 91.7 | 94.3 | 91.4 |
| Linformer | 64 | 85.3 | 91.8 | 90.8 | 92.4 | 90.1 |
| Linformer | 128 | 86.1 | 91.9 | 91.4 | 93.7 | 90.8 |
| $\mathrm{ABC}_{\mathrm{MLP}}$ | 64 | 85.6 | 91.8 | 91.7 | 93.8 | 90.7 |
| $\mathrm{ABC}_{\mathrm{MLP}}$ | 128 | 87.1 | 92.6 | 91.8 | 94.4 | 91.5 |
+ +Table 5: Text classification development set accuracy. All models continue pretraining RoBERTa-base on our data with the MLM objective. Bold numbers perform the best among ABC models, and underlined ones perform on par with or better than BASE. + +performance, aligned with Wang et al. (2020b)'s results. $\mathrm{ABC}_{\mathrm{MLP}}$ outperforms Linformer, and performs on par with or better than BASE, affirming the benefits of using contextualized memory organization in MLM. $\mathrm{ABC}_{\mathrm{RD}}$ fails to converge in continued pretraining even with multiple seeds. + +Based on the above results, we think $\mathrm{ABC}_{\mathrm{MLP}}$ can achieve competitive performance when pretrained from scratch, just as Linformer does (Wang et al., 2020b). Further empirical exploration is beyond our budget and left for future work. + +# 6 Analysis + +Decoding efficiency over varying sequence lengths. ABC's efficiency gains can be more prominent for long sequences. We study $\mathrm{ABC}_{\mathrm{MLP}}$ 's decoding overhead with varying sequence lengths. Following Kasai et al. (2021b), we consider a sequence-to-sequence generation experiment. Three linear-complexity models are compared: RFA (with 256/128 cross/causal memory sizes; Peng et al., 2021), T2R (32/4; Kasai et al., 2021b), and $\mathrm{ABC}_{\mathrm{MLP}}$ (32/8). The sizes are chosen to maximize efficiency without accuracy drop. T2R needs to be finetuned from a pretrained transformer to match its performance, while others don't. + +All linear-time models achieve consistent decoding speed for different lengths (Figure 1a), substantially outpacing the softmax attention baseline, especially for long sequences. In particular, $\mathrm{ABC}_{\mathrm{MLP}}$ decodes $\sim 1.25$ times faster than RFA, another competitive model that can match transformer's accuracy without a warm start from a pretrained model. 
This can be attributed to the fact that $\mathrm{ABC}_{\mathrm{MLP}}$ achieves similar accuracy with a much smaller memory. T2R's memory sizes are similar to $\mathrm{ABC}_{\mathrm{MLP}}$'s, but T2R decodes about $20\%$ faster. This is because it does not compute the softmax when calculating the attention output, while $\mathrm{ABC}_{\mathrm{MLP}}$ does (Eq. 4). These results show that $\mathrm{ABC}_{\mathrm{MLP}}$ is an appealing modeling choice for decoding tasks, especially when training from scratch is desired.

![](images/923142fc51328c0d7e65ae5d2e884d74e92245d0e28bd2d89f783eb9c9bb75b2.jpg)
(a) Decoding speed.

![](images/00cb2bc3d969902934c975f32ac77ef9a1952aded12123a2fe0a694c05e9b17b.jpg)
(b) Decoding memory overhead.
Figure 1: Sequence-to-sequence decoding speed (top) and memory consumption (bottom) with varying sequence lengths. Greedy decoding is used, with batch size 16.

$\mathrm{ABC}_{\mathrm{MLP}}$ also achieves significant savings in terms of memory overhead (Figure 1b); $\mathrm{ABC}_{\mathrm{MLP}}$'s, RFA's, and T2R's curves are similar.

Text encoding efficiency. We compare the efficiency of $\mathrm{ABC_{MLP}}$ against softmax attention and Linformer when used as text encoders. The models' sizes mirror those in the MLM experiment (§5.3). Table 6 summarizes inference time and memory overhead with 512-length inputs and batch size 16; inference speed is measured on the same V100 GPU. Both $\mathrm{ABC_{MLP}}$ and Linformer achieve inference speed gains and memory savings over BASE. Linformer is faster, since its linear projection is cheaper to compute than $\mathrm{ABC_{MLP}}$'s MLP. The trend in memory overhead is similar.

Although $\mathrm{ABC}_{\mathrm{MLP}}$ slightly underperforms Linformer in terms of inference speed, it can be a more appealing architectural choice in practice: in all of our 5 experiments, $\mathrm{ABC}_{\mathrm{MLP}}$ outperforms other ABC models in accuracy.
Linformer, in contrast, fails to converge or yields sub-optimal performance on some tasks. This confirms $\mathrm{ABC}_{\mathrm{MLP}}$'s flexibility and applicability in various settings.

|  | BASE | Linformer $n{=}64$ | Linformer $n{=}128$ | $\mathrm{ABC}_{\mathrm{MLP}}$ $n{=}64$ | $\mathrm{ABC}_{\mathrm{MLP}}$ $n{=}128$ |
| --- | --- | --- | --- | --- | --- |
| Speed | 1.0× | 1.7× | 1.5× | 1.5× | 1.3× |
| Memory | 1.0× | 0.5× | 0.6× | 0.5× | 0.6× |

Table 6: Text encoding inference speed (higher is better) and memory (lower is better). Inputs are text segments with 512 tokens and batch size 16.

| Causal $n$ | Cross $n{=}8$ | 16 | 32 | 64 |
| --- | --- | --- | --- | --- |
| 8 | 24.7 | 25.2 | 25.6 | 25.5 |
| 16 | - | 25.4 | 25.7 | 25.6 |
| 32 | - | - | 25.7 | 25.8 |
| 64 | - | - | - | 25.8 |

Table 7: $\mathrm{ABC}_{\mathrm{MLP}}$'s SacreBLEU on WMT14 EN-DE development data, varying memory sizes.

Memory size's impact on accuracy. Practically, one may want to minimize the memory size to improve efficiency. We use the WMT14 EN-DE experiment to investigate how memory size affects accuracy. Using §5.2's setup, we vary $\mathrm{ABC}_{\mathrm{MLP}}$'s cross and causal attention memory sizes and compare translation quality on the development data. The sizes are selected from $\{8, 16, 32, 64\}$, with cross attention's equal to or larger than causal's: cross attention is more important than causal attention in machine translation (Michel et al., 2019). Our results (Table 7) align with this observation: when the cross attention memory is large enough, reducing the causal attention memory size from 64 to 8 costs only a minor 0.3 BLEU drop. Surprisingly, $\mathrm{ABC}_{\mathrm{MLP}}$ with 8-8 sized cross-causal memory is only 1.1 BLEU behind the best-performing configuration.

# 7 Conclusion

We presented attention with bounded-memory control (ABC). It provides a unified perspective of several recently-proposed models, and shows that they vary in the organization of the bounded memory. ABC reveals new insights into established methods and inspires new architectures. We proposed $\mathrm{ABC}_{\mathrm{MLP}}$, a particular instance of ABC that learns a contextualized memory control. On language modeling, machine translation, and masked language model finetuning, $\mathrm{ABC}_{\mathrm{MLP}}$ outperforms previous ABC models. Compared to the strong transformer baseline, $\mathrm{ABC}_{\mathrm{MLP}}$ achieves substantial efficiency improvements with no or negligible accuracy loss.

# Acknowledgments

We would like to thank the ARK group at the University of Washington for their helpful feedback, and the anonymous reviewers for their thoughtful comments.
This work was supported in part by NSF grant 2113530 and a Google Fellowship. Nikolaos Pappas was supported by the Swiss National Science Foundation grant P400P2_183911.

# References

Joshua Ainslie, Santiago Ontanon, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, and Li Yang. 2020. ETC: Encoding long and structured inputs in transformers. In Proc. of EMNLP.
Alexei Baevski and Michael Auli. 2019. Adaptive input representations for neural language modeling. In Proc. of ICLR.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR.
Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer.
Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Aleš Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proc. of WMT.
Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th IWSLT evaluation campaign. In Proc. of IWSLT.
Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy Colwell, and Adrian Weller. 2021. Rethinking attention with performers. In Proc. of ICLR.
Kornél Csernai. 2017, accessed September 1, 2020. First Quora Dataset Release: Question Pairs.
Angela Fan, Edouard Grave, and Armand Joulin. 2020. Reducing transformer depth on demand with structured dropout. In Proc. of ICLR.
Aaron Gokaslan and Vanya Cohen. 2019. OpenWebText corpus. http://Skylion007.github.io/OpenWebTextCorpus.
Anirudh Goyal, Aniket Didolkar, Alex Lamb, Kartikeya Badola, Nan Rosemary Ke, Nasim Rahaman, Jonathan Binas, Charles Blundell, Michael Mozer, and Yoshua Bengio. 2021. Coordination among neural modules through a shared global workspace.
Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural Turing machines.
Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. 2015. Learning to transduce with unbounded memory. In Proc. of NeurIPS.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation.
Armand Joulin and Tomás Mikolov. 2015. Inferring algorithmic patterns with stack-augmented recurrent nets. In Proc. of NeurIPS.
Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, and Noah A. Smith. 2021a. Deep encoder, shallow decoder: Reevaluating non-autoregressive machine translation. In Proc. of ICLR.
Jungo Kasai, Hao Peng, Yizhe Zhang, Dani Yogatama, Gabriel Ilharco, Nikolaos Pappas, Yi Mao, Weizhu Chen, and Noah A. Smith. 2021b. Finetuning pretrained transformers into RNNs. In Proc. of EMNLP.
Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and Francois Fleuret. 2020. Transformers are RNNs: Fast autoregressive transformers with linear attention. In Proc. of ICML.
Nikita Kitaev, Łukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficient transformer. In Proc. of ICLR.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. of ACL.
Hung Le, Truyen Tran, and Svetha Venkatesh. 2020. Self-attentive associative memory. In Proc. of ICML.
Juho Lee, Yoonho Lee, Jungtaek Kim, Adam Kosiorek, Seungjin Choi, and Yee Whye Teh. 2019. Set transformer: A framework for attention-based permutation-invariant neural networks. In Proc. of ICML.
Tao Lei, Yu Zhang, Sida I.
Wang, Hui Dai, and Yoav Artzi. 2018. Simple recurrent units for highly parallelizable recurrence. In Proc. of EMNLP. +Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating Wikipedia by summarizing long sequences. In Proc. of ICLR. + +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. +Xuezhe Ma, Xiang Kong, Sinong Wang, Chunting Zhou, Jonathan May, Hao Ma, and Luke Zettlemoyer. 2021. Luna: Linear unified nested attention. In Proc. of NeurIPS. +Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture models. In Proc. of ICLR. +Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? In Proc. of NeurIPS. +Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-level neural machine translation with hierarchical attention networks. In Proc. of EMNLP. +Sebastian Nagel. 2016. News dataset available. https://commoncrawl.org/2016/10/news-dataset-available/. +Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. 2018. Image transformer. In Proc. of ICML. +Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah Smith, and Lingpeng Kong. 2021. Random feature attention. In Proc. of ICLR. +Hao Peng, Roy Schwartz, Sam Thomson, and Noah A. Smith. 2018. Rational recurrences. In Proc. of EMNLP. +Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proc. of WMT. +Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Chloe Hillier, and Timothy P. Lillicrap. 2020. Compressive transformers for long-range sequence modelling. In Proc. of ICLR. +Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proc. of EMNLP. 
Aurko Roy, Mohammad Taghi Saffar, David Grangier, and Ashish Vaswani. 2020. Efficient content-based sparse attention with routing transformers. TACL.
Imanol Schlag, Kazuki Irie, and Jürgen Schmidhuber. 2021. Linear transformers are secretly fast weight programmers. In Proc. of ICML.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proc. of ACL.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. of EMNLP.
Sainbayar Sukhbaatar, Da Ju, Spencer Poff, Stephen Roller, Arthur Szlam, Jason Weston, and Angela Fan. 2021. Not all memories are created equal: Learning to forget by expiring. In Proc. of ICML.
Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. 2020. Efficient transformers: A survey.
Trieu H. Trinh and Quoc V. Le. 2018. A simple method for commonsense reasoning.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. of NeurIPS.
Elena Voita, Rico Sennrich, and Ivan Titov. 2019. When a good translation is wrong in context: Context-aware machine translation improves on deixis, ellipsis, and lexical cohesion. In Proc. of ACL.
Apoorv Vyas, Angelos Katharopoulos, and François Fleuret. 2020. Fast transformers with clustered attention. In Proc. of NeurIPS.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proc. of ICLR.
Shuohang Wang, Luowei Zhou, Zhe Gan, Yen-Chun Chen, Yuwei Fang, Siqi Sun, Yu Cheng, and Jingjing Liu. 2020a. Cluster-Former: Clustering-based sparse transformer for long-range dependency encoding. Findings of ACL.
Sinong Wang, Belinda Z.
Li, Madian Khabsa, Han Fang, and Hao Ma. 2020b. Linformer: Self-attention with linear complexity.
Jason Weston, Sumit Chopra, and Antoine Bordes. 2015. Memory networks. In Proc. of ICLR.
Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proc. of NAACL.
Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proc. of NAACL.
Dani Yogatama, Yishu Miao, Gabor Melis, Wang Ling, Adhiguna Kuncoro, Chris Dyer, and Phil Blunsom. 2018. Memory architectures in recurrent neural network language models. In Proc. of ICLR.
Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. 2020. Big bird: Transformers for longer sequences. In Proc. of NeurIPS.
Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In Proc. of NeurIPS.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proc. of ICCV.

# Appendices

# A Other ABC Models

# A.1 Sparse Local-to-global Attention

Local-to-global attention sparsifies the attention pattern to reduce the number of tokens that are attended to (Beltagy et al., 2020; Zaheer et al., 2020, inter alia). All queries attend to a subset of $n < N$ "global tokens," while ignoring others. Therefore the effective context size is reduced to $n$. The global tokens are usually pre-selected by position according to some heuristics. Local-to-global attention is an instance of ABC: it can be recovered by letting $\phi_t = \mathbf{e}_i$ if $x_t$ is the $i$th global token $(i = 1, \dots, n)$, and the zero vector otherwise.
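As a toy sanity check (the sizes and global-token positions here are made-up assumptions, not from the paper), the construction can be verified numerically: with these one-hot $\phi_t$, the ABC memory $\widetilde{\mathbf{K}} = \sum_t \phi_t \otimes \mathbf{k}_t$ reduces exactly to gathering the global tokens' keys.

```python
import numpy as np

# Toy sizes (assumptions for illustration): sequence length N,
# memory slots n, key dimension d.
N, n, d = 10, 3, 4
rng = np.random.default_rng(0)
K = rng.standard_normal((N, d))  # keys k_1 .. k_N
global_idx = [0, 4, 7]           # hypothetical positions of the n global tokens

# Control vectors: phi_t = e_i if x_t is the i-th global token, else 0.
phi = np.zeros((N, n))
for i, t in enumerate(global_idx):
    phi[t, i] = 1.0

# Bounded memory: K_tilde = sum_t phi_t (outer product) k_t, an (n x d) matrix.
K_tilde = phi.T @ K

# With one-hot phi, the memory holds exactly the global tokens' keys.
assert np.allclose(K_tilde, K[global_idx])
```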
# A.2 Random Memory Control

As a baseline, $\mathrm{ABC}_{\mathrm{RD}}$ stores each token in a randomly-selected memory slot. This is achieved by letting $\phi_t = \mathbf{e}_{i_t}$, where $i_t$ is uniformly drawn from $\{1,\dots,n\}$ for each $t$. It is designed as a baseline to $\mathrm{ABC}_{\mathrm{MLP}}$ and Linformer, to quantify the differences between random and learned bounded-memory control.

Random sparse attention patterns are explored by Zaheer et al. (2020), where a subset of $n < N$ tokens is randomly selected to be attended to by all tokens. $\mathrm{ABC}_{\mathrm{RD}}$ is different: it attends to all tokens, but randomly "squashes" them into an $n$-slot memory.

# A.3 Compressive Transformer with Mean Pooling

The compressive transformer (Rae et al., 2020) explores various ways to "squash" long context into smaller and more compact representations. It achieves state-of-the-art performance on several language modeling benchmarks. We show that at least the mean-pooling variant of the compressive transformer can be seen as an ABC instance.

The mean-pooling variant of the compressive transformer compresses the context by

$$
\begin{array}{l}
\mathbf{K} = \left[ \mathbf{k}_1, \dots, \mathbf{k}_N \right]^\top \in \mathbb{R}^{N \times d} \\
\rightarrow \widetilde{\mathbf{K}} = \Big[ \underbrace{(\mathbf{k}_1 + \cdots + \mathbf{k}_c)}_{c \text{ terms}} / c, \\
\quad \underbrace{(\mathbf{k}_{c+1} + \cdots + \mathbf{k}_{2c})}_{c \text{ terms}} / c, \; \dots, \\
\quad \underbrace{(\mathbf{k}_{N-c+1} + \cdots + \mathbf{k}_N)}_{c \text{ terms}} / c \Big]^\top \in \mathbb{R}^{n \times d},
\end{array}
$$

where $c = N / n$ is the compression ratio. Here $N \bmod n = 0$ is assumed; otherwise the sequence can be padded.

The above model is an ABC instance by letting

$$
\phi_{i} = \mathbf{e}_{\lfloor (i - 1) / c \rfloor + 1} / c.
\tag{8}
$$

# A.4 Dilated Convolution Attention Patterns

The dilated attention pattern is similar to the sliding-window attention in that it only considers the context within a predefined window. It differs in that it attends to every other token:

$$
\widetilde{\mathbf{K}}_t = \left[ \mathbf{k}_{t-2n+2}, \mathbf{k}_{t-2n+4}, \dots, \mathbf{k}_{t-2}, \mathbf{k}_t \right]^\top. \tag{9}
$$

It can be simulated with two separate queues $\widetilde{\mathbf{K}}^{\mathrm{odd}}$ and $\widetilde{\mathbf{K}}^{\mathrm{even}}$:

$$
\widetilde{\mathbf{K}}_t^{\mathrm{odd}} = \begin{cases} \mathbf{U} \widetilde{\mathbf{K}}_{t-1}^{\mathrm{odd}} + \mathbf{e}_n \otimes \mathbf{k}_t, & \text{if } t \text{ is odd} \\ \widetilde{\mathbf{K}}_{t-1}^{\mathrm{odd}}, & \text{otherwise} \end{cases}
$$

$$
\widetilde{\mathbf{K}}_t^{\mathrm{even}} = \begin{cases} \mathbf{U} \widetilde{\mathbf{K}}_{t-1}^{\mathrm{even}} + \mathbf{e}_n \otimes \mathbf{k}_t, & \text{if } t \text{ is even} \\ \widetilde{\mathbf{K}}_{t-1}^{\mathrm{even}}, & \text{otherwise} \end{cases}
$$

Likewise for the values. Depending on $t$, the query attends to one of the two queues:

$$
\mathrm{output} = \begin{cases} \left(\widetilde{\mathbf{V}}^{\mathrm{odd}}\right)^\top \operatorname{softmax}(\widetilde{\mathbf{K}}^{\mathrm{odd}} \mathbf{q}_t), & \text{if } t \text{ is odd} \\ \left(\widetilde{\mathbf{V}}^{\mathrm{even}}\right)^\top \operatorname{softmax}(\widetilde{\mathbf{K}}^{\mathrm{even}} \mathbf{q}_t), & \text{otherwise}. \end{cases}
$$

The above implementation could incur a considerable amount of overhead and may actually be more expensive than the original dilated-window formulation.
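The two-queue simulation of the key memory can be sketched with a small numpy check (all sizes are made-up assumptions for illustration; `U` is the shift matrix that discards the oldest slot):

```python
import numpy as np

# Toy sizes (assumptions): n memory slots per queue, key dimension d, T steps.
n, d, T = 4, 2, 12
U = np.eye(n, k=1)                 # shift matrix: drops the oldest slot
e_n = np.zeros(n); e_n[-1] = 1.0   # writes the new key into the newest slot

rng = np.random.default_rng(1)
keys = rng.standard_normal((T, d))  # k_1 .. k_T (k_t is keys[t - 1])

K_odd = np.zeros((n, d))
K_even = np.zeros((n, d))
for t in range(1, T + 1):
    k_t = keys[t - 1]
    if t % 2 == 1:                  # update the queue matching t's parity
        K_odd = U @ K_odd + np.outer(e_n, k_t)
    else:
        K_even = U @ K_even + np.outer(e_n, k_t)

# At t = T (even here), the even queue holds [k_{T-2n+2}, k_{T-2n+4}, ..., k_T],
# i.e., Eq. 9's dilated window over every other token.
assert np.allclose(K_even, keys[[T - 2 * n + 1 + 2 * i for i in range(n)]])
```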
Therefore it has more conceptual than practical value.

# A.5 Shared Workspace and Linear Unified Nested Attention

Concurrently to this work, shared workspace (SW; Goyal et al., 2021) and linear unified nested attention (LUNA; Ma et al., 2021) also proposed methods to learn contextualized memory control strategies. Both can be seen as instances of ABC. At layer $\ell$, their $\phi_i^\ell$ is a function of the previous layer's memory $\widetilde{\mathbf{X}}^{\ell-1} \in \mathbb{R}^{n \times d}$ and the current layer's input $\mathbf{X}^\ell \in \mathbb{R}^{N \times d}$:

$$
\phi_{i} = \left[ \operatorname{softmax}\left(\widetilde{\mathbf{X}}^{\ell-1} \mathbf{X}^{\ell\top}\right) \right]_{:, i}, \tag{10}
$$

where $[\cdot]_{:,i}$ denotes the $i$th column of a matrix. Query, key, and value projections are suppressed for notational clarity.

SW and LUNA reveal the entire sequence to the control vectors by constructing $\phi$ as a function of the previous layer's memory. Although both admit the recurrent computation as all ABC models do, they are ill-suited for causal attention and autoregressive decoding, since future information is "leaked" to $\phi_{i}$ from the previous layer. LUNA resorts to a variant of Katharopoulos et al. (2020) in causal attention (Ma et al., 2021). In contrast, $\mathrm{ABC}_{\mathrm{MLP}}$ never conditions $\phi_{i}$ on the previous layer's memory, but only on the current layer's input.

# B More Details about ABC-MLP

# B.1 Normalization in Causal Attention

An equivalent implementation to Eq. 7 is to normalize $\widetilde{\mathbf{K}}$ and $\widetilde{\mathbf{V}}$ instead of the $\phi_{i}$ vectors:

$$
\boldsymbol{\alpha}_{i} = \exp\left(\mathbf{W}_{\phi} \mathbf{x}_{i}\right), \quad \phi_{i} = \boldsymbol{\alpha}_{i},
$$

$$
\bar{\mathbf{K}} = \widetilde{\mathbf{K}} \Big/ \sum_{j=1}^{N} \boldsymbol{\alpha}_{j}, \quad \bar{\mathbf{V}} = \widetilde{\mathbf{V}} \Big/ \sum_{j=1}^{N} \boldsymbol{\alpha}_{j},
$$

$$
\mathrm{output} = \bar{\mathbf{V}}^{\top} \operatorname{softmax}(\bar{\mathbf{K}} \mathbf{q}).
$$

Here $\mathbf{M} / \mathbf{z}$ divides the $\ell$th row of matrix $\mathbf{M}$ by vector $\mathbf{z}$'s $\ell$th dimension. This admits a linear-complexity computation graph for the causal variant of $\mathrm{ABC}_{\mathrm{MLP}}$.

# B.2 Higher-Dimensional Case of Example 1

This section generalizes Example 1 to higher-dimensional cases. Assume that the constant-sized memory has $n$ slots, and $\phi_{i}$ is calculated as in Eq. 7. Then $\widetilde{\mathbf{K}} = \sum_{i=1}^{N} \phi_{i} \otimes \mathbf{k}_{i} \in \mathbb{R}^{n \times d}$. Each row of $\widetilde{\mathbf{K}}$ can be seen as a separate attention mechanism with a pseudo-query. Let $[\cdot]_{\ell}$ denote the $\ell$th row/dimension of a matrix/vector. Then for any $\ell = 1, \ldots, n$,

$$
\begin{array}{l}
\left[ \widetilde{\mathbf{K}} \right]_{\ell} = \sum_{i=1}^{N} \left[ \phi_{i} \right]_{\ell} \otimes \mathbf{k}_{i} \\
\quad = \sum_{i=1}^{N} \frac{\exp\left(\left[\mathbf{W}_{\phi}\right]_{\ell} \cdot \mathbf{x}_{i}\right)}{\sum_{j=1}^{N} \exp\left(\left[\mathbf{W}_{\phi}\right]_{\ell} \cdot \mathbf{x}_{j}\right)} \mathbf{k}_{i}^{\top} \\
\quad = \operatorname{attn}\left(\left[\mathbf{W}_{\phi}\right]_{\ell}, \{\mathbf{x}_{i}\}_{i=1}^{N}, \{\mathbf{k}_{i}\}_{i=1}^{N}\right)^{\top} \in \mathbb{R}^{1 \times d}.
\end{array}
$$

In other words, there are $n$ attention mechanisms in total, each with a separately-parameterized pseudo-query $[\mathbf{W}_{\phi}]_{\ell}$. They summarize the context $n$ times in parallel, each producing a $d$-dimensional vector. These output vectors are then stacked into the $n$-by-$d$ memory $\widetilde{\mathbf{K}}$. $\widetilde{\mathbf{V}}$ is similar.
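This pseudo-query view can be verified numerically with a toy numpy sketch (made-up sizes, not the paper's implementation): each row of $\widetilde{\mathbf{K}}$ built from the normalized $\phi_i$ matches a softmax attention whose query is the corresponding row of $\mathbf{W}_\phi$.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Toy sizes (assumptions): N tokens, n memory slots, dimension d.
rng = np.random.default_rng(2)
N, n, d = 6, 3, 5
X = rng.standard_normal((N, d))      # inputs x_1 .. x_N
K = rng.standard_normal((N, d))      # keys  k_1 .. k_N
W_phi = rng.standard_normal((n, d))  # control parameters W_phi

# Eq. 7's parameterization: [phi_i]_l = exp([W_phi]_l . x_i), normalized over i.
phi = softmax(X @ W_phi.T, axis=0)   # (N, n)

# Memory K_tilde = sum_i phi_i (outer product) k_i, an (n x d) matrix.
K_tilde = phi.T @ K

# Row l equals attention with pseudo-query [W_phi]_l over {x_i}, "values" {k_i}.
for l in range(n):
    attn_out = softmax(X @ W_phi[l]) @ K
    assert np.allclose(K_tilde[l], attn_out)
```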
+ +# C Experimental Details + +# C.1 Language Modeling + +We closely build on Baevski and Auli (2019) and Kasai et al. (2021b). The hyperparameters are summarized in Table 10. All models are trained on 4 A100 GPUs. + +# C.2 Machine Translation + +We experiment with a sentence-level (WMT14 EN-DE, Bojar et al., 2014) and a document-level benchmark (IWSLT14 ES-EN, Cettolo et al., 2014) to assess model performance over various sequence lengths. The preprocessing and data splits of WMT14 EN-DE follow Vaswani et al. (2017). A 32,768 byte pair encoding (BPE; Sennrich et al., 2016) vocabulary is shared between source and target languages. For IWSLT14, we follow Miculicich et al. (2018) and use the dev2010 subset for development and tst2010-2012 for testing. The tokenization is also the same as Miculicich et al. (2018): we tokenize and truecase Spanish and English with Moses (Koehn et al., 2007) and run byte-pair encoding with 30k splits, shared between the two languages. The final dataset contains 1421, 8, and 42 documents for training, development, and testing. On average, each document contains 126.7 sentences, and each sentence contains 21.7(ES)/22.5(EN) BPE subwords. We use a sliding window with length-4 and stride-one to generate our dataset. During inference, we use predicted context on the target side. + +We average the checkpoints from the last five epochs to obtain the final model (Vaswani et al., 2017). In inference, we apply beam search with size 5 and length penalty 0.6. Other hyperparameters are summarized in Table 11. All models are trained on 4 RTX 2080 Ti GPUs. + +Additional machine translation results. In addition to the results presented in §5.2, Table 8 further compares, on the WMT14 EN-DE dataset, the clustering-based (§3.2) and sliding-window (§3.3) models of ABC, as well as ReLU and sigmoid variants of $\mathsf{ABC}_{\mathsf{MLP}}$ . 
Clustering and sliding-window ABC variants underperform $\mathsf{ABC}_{\mathsf{MLP}}$ with the same memory sizes by more than 0.5 BLEU. Both ReLU and sigmoid underperform their exp counterpart. + +MLP-exp-all replaces the encoder's softmax attention modules with ABC, in addition to the decoder's. It underperforms $\mathrm{ABC}_{\mathrm{MLP}}$ by only 0.3 BLEU. + +
| Model | $\phi$ | Cross $n$ | Causal $n$ | Encoder $n$ | BLEU |
| --- | --- | --- | --- | --- | --- |
| BASE | - | - | - | - | 27.2 |
| ABC | Window | 32 | 32 | - | 26.3 |
| ABC | Cluster | 32 | 32 | - | 26.8 |
| ABC | MLP-ReLU | 32 | 8 | - | - |
| ABC | MLP-ReLU | 32 | 32 | - | 26.4 |
| ABC | MLP-sigmoid | 32 | 8 | - | 26.8 |
| ABC | MLP-sigmoid | 32 | 32 | - | 27.0 |
| ABC | MLP-exp | 32 | 8 | - | 27.1 |
| ABC | MLP-exp | 32 | 32 | - | 27.3 |
| ABC | MLP-exp-all | 32 | 32 | 32 | 27.0 |
Table 8: ABC variants' performance (SacreBLEU) on the WMT14 EN-DE test set for sentence-level machine translation. MLP-ReLU with 32/8 memory sizes fails to converge. MLP-exp-all applies ABC in both the encoder and the decoder, while the others apply it only in the decoder.

Figure 1b compares $\mathrm{ABC}_{\mathrm{MLP}}$'s (32-8 memory sizes) attention memory overhead with softmax attention's. Following Kasai et al. (2021b), we consider a synthetic sequence-to-sequence generation task with varying sequence lengths. A batch size of 16 and greedy decoding are used. The models are of the same size as those in §5.2.

# C.3 Masked Language Model Finetuning

Our data for continued pretraining is a concatenation of BookCorpus (Zhu et al., 2015), English Wikipedia, OpenWebText (Gokaslan and Cohen, 2019), and RealNews (Zellers et al., 2019). Our data differs from RoBERTa's pretraining data, which we do not have access to. We replace their CC-News (Nagel, 2016) with RealNews, and drop Stories (Trinh and Le, 2018); at the time of this project, public access to the Stories dataset was broken.[10] Our machine does not have a large enough memory to load all the data. We therefore split the training data into 20 shards, after shuffling. Other preprocessing is the same as Liu et al. (2019).[11] The hyperparameters for continued pretraining follow base-sized RoBERTa, part of which are summarized in Table 12. All models are trained on a single TPU v3 accelerator.

For downstream task finetuning, we use the same hyperparameters as Liu et al. (2019).[12] Table 13 briefly describes the tasks. Readers are referred to Wang et al. (2019) for further details.
| Data | Train | Dev. | Test | Vocab. | Sent./doc |
| --- | --- | --- | --- | --- | --- |
| WikiText-103 | 103M | 218K | 246K | 268K | - |
| WMT14 EN-DE | 4.5M | 3K | 3K | 32K | - |
| IWSLT14 ES-EN | 1713 | 8 | 56 | 30K | 121.5 |
+ +Table 9: Statistics for the datasets. WikiText-103 split sizes are in number of tokens, WMT14 in number of sentences, and IWSLT14 in number of documents. + +
| Hyperparams. | B&A | Kasai |
| --- | --- | --- |
| # Layers | 16 | 32 |
| # Heads | 8 | 8 |
| Embedding Size | 1024 | 1024 |
| Head Size | 128 | 128 |
| FFN Size | 4096 | 4096 |
| Batch Size | 64 | 64 |
| Learning Rate | 1.0 | 1.0 |
| Dropout | 0.3 | 0.3 |
| Layer Dropout | - | 0.2 |
| Memory Size | [32, 64] | 64 |
+ +Table 10: Hyperparameters used in the language modeling experiments. B&A: Baevski and Auli (2019); Kasai: Kasai et al. (2021b). + +
| Hyperparams. | WMT14 | IWSLT14 |
| --- | --- | --- |
| # Layers | 6 | 6 |
| # Heads | 8 | 8 |
| Embedding Size | 512 | 512 |
| Head Size | 64 | 64 |
| FFN Size | 2048 | 1024 |
| Warmup Steps | 6000 | 4000 |
| Dropout | 0.1 | 0.3 |
| Cross Attn. $n$ | 32 | 128 |
| Causal Attn. $n$ | 8 | 64 |
+ +Table 11: Hyperparameters used in the machine translation experiments. + +
| Hyperparams. | Values |
| --- | --- |
| # Layers | 12 |
| # Heads | 12 |
| Embedding Size | 768 |
| Head Size | 64 |
| FFN Size | 3072 |
| Dropout | 0.1 |
| Memory Size | [64, 128] |
+ +Table 12: Hyperparameters for continued pretraining in the masked language model finetuning experiments. + +
| Data | Task | Train | Dev. |
| --- | --- | --- | --- |
| MNLI | Entailment | 392K | 9.8K |
| QNLI | Entailment | 105K | 5.5K |
| QQP | Paraphrase | 363K | 40K |
| SST-2 | Sentiment | 67K | 873 |
+ +Table 13: GLUE datasets and statistics. MNLI: Williams et al. (2018); QNLI is compiled by GLUE's authors using Rajpurkar et al. (2016); QQP: Csernai (2017, accessed September 1, 2020); SST-2: Socher et al. (2013). \ No newline at end of file diff --git a/abcattentionwithboundedmemorycontrol/images.zip b/abcattentionwithboundedmemorycontrol/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..1ff24f7a409f0eb1bb4a1d38c14bd37104c243df --- /dev/null +++ b/abcattentionwithboundedmemorycontrol/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2cc3c202f904c4379f6c715884fb4b0e3c5a146ba0c07295aabdfbf975c644ee +size 569880 diff --git a/abcattentionwithboundedmemorycontrol/layout.json b/abcattentionwithboundedmemorycontrol/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..76ae8b5ddf0eeda06ccb4e45a1491fb53de78de1 --- /dev/null +++ b/abcattentionwithboundedmemorycontrol/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1932cf95d144d87e710c51efb208f75fa89b5f61c585cd0bfb624e43b0fc344 +size 675351 diff --git a/acceleratingcodesearchwithdeephashingandcodeclassification/995cb5a3-7043-440d-8e30-2e5ef65f0c39_content_list.json b/acceleratingcodesearchwithdeephashingandcodeclassification/995cb5a3-7043-440d-8e30-2e5ef65f0c39_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..c224db71d2c0f4aebd4abdf83802baf597f6ece7 --- /dev/null +++ b/acceleratingcodesearchwithdeephashingandcodeclassification/995cb5a3-7043-440d-8e30-2e5ef65f0c39_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e4c58deb93ed24478726419898949609975b955303226aa218d2e4bf0f04213b +size 73965 diff --git a/acceleratingcodesearchwithdeephashingandcodeclassification/995cb5a3-7043-440d-8e30-2e5ef65f0c39_model.json b/acceleratingcodesearchwithdeephashingandcodeclassification/995cb5a3-7043-440d-8e30-2e5ef65f0c39_model.json new file 
# Accelerating Code Search with Deep Hashing and Code Classification

Wenchao Gu $^{1*}$ , Yanlin Wang $^{2,\dagger}$ , Lun Du $^{2}$ , Hongyu Zhang $^{3}$ , Shi Han $^{2}$ , Dongmei Zhang $^{2}$ , and Michael R. Lyu $^{1}$

$^{1}$ Department of Computer Science and Engineering, The Chinese University of Hong Kong, China

$^{2}$ Microsoft Research Asia, Beijing, China

$^{3}$ The University of Newcastle, Australia

# Abstract

Code search aims to retrieve reusable code snippets from a source code corpus based on natural language queries. Deep learning-based methods for code search have shown promising results.
However, previous methods have focused on retrieval accuracy while paying little attention to the efficiency of the retrieval process. We propose a novel method, CoSHC, to accelerate code search with deep hashing and code classification, aiming to perform efficient code search without sacrificing much accuracy. To evaluate the effectiveness of CoSHC, we apply it to five code search models. Extensive experimental results indicate that, compared with previous code search baselines, CoSHC saves more than $90\%$ of retrieval time while preserving at least $99\%$ of retrieval accuracy.

# 1 Introduction

Code reuse is a common practice during the software development process. It improves programming productivity, as developers' time and energy can be saved by reusing existing code. According to previous studies (Brandt et al., 2009; Lv et al., 2015), many developers tend to use natural language to describe the functionality of desired code snippets and search the Internet or a code corpus for code to reuse.

Many code search approaches (Brandt et al., 2009; McMillan et al., 2011; Lv et al., 2015; Du et al., 2021) have been proposed over the years. With the rapid growth of open-source code bases and the development of deep learning technology, deep learning-based approaches have recently become popular for tackling the code search problem (Gu et al., 2018; Husain et al., 2019; Gu et al., 2021). Some of these approaches adopt neural network models to encode source code and query descriptions into representation vectors in the same embedding space; representation vectors whose underlying code or description are semantically similar should be close to each other. Other approaches (Feng et al., 2020; Guo et al., 2021; Du et al., 2021) regard code search as a binary classification task and calculate the probability of the code matching the query.
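To make the bi-encoder setup concrete, here is a minimal NumPy sketch (illustrative only; the embeddings, dimensions, and function name are hypothetical, not from the paper) that ranks code snippets by cosine similarity between pre-computed embeddings:

```python
import numpy as np

def cosine_retrieve(query_vec, code_vecs, k=3):
    """Rank code snippets by cosine similarity to the query (bi-encoder search)."""
    q = query_vec / np.linalg.norm(query_vec)
    C = code_vecs / np.linalg.norm(code_vecs, axis=1, keepdims=True)
    sims = C @ q                  # one dot product per code snippet
    return np.argsort(-sims)[:k]  # indices of the top-k candidates

# Toy corpus: 5 random 768-dim "code embeddings"; the query is built to be
# nearly identical to snippet 2, so it should rank first.
rng = np.random.default_rng(0)
codes = rng.normal(size=(5, 768))
query = codes[2] + 0.01 * rng.normal(size=768)
print(cosine_retrieve(query, codes))  # snippet 2 ranks first
```

Every query therefore pays one dot product per corpus entry, which is exactly the linear-scan cost discussed next.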
In the past, deep learning-based methods focused on retrieval accuracy but paid little attention to retrieval efficiency on large-scale code corpora. Both types of deep learning-based approaches directly rank all the source code snippets in the corpus during a search, which incurs a large computational cost. For approaches that separately encode code and description representation vectors, the similarity between the target query vector and all code representation vectors in the corpus must be calculated for every single retrieval. To pursue high retrieval accuracy, a high dimension is often chosen for the representation vectors; in CodeBERT, for example, the dimension of the final representation vector is 768. The similarity calculation between one pair of code and query vectors then takes 768 multiplications and 768 additions over double-precision values, so a single linear scan over a code corpus of around 1 million snippets requires roughly 1 billion multiplications and additions in total. As for the approaches adopting binary classification, no representation vectors are stored in advance, and the given query must be jointly encoded with every candidate code snippet in real time for every single retrieval. Due to the large number of parameters in current deep learning models, this computation cost is significant.

Hashing is a promising approach to improving retrieval efficiency and is widely adopted in other retrieval tasks such as image-text search and image-image search. Hashing techniques convert high-dimensional vectors into low-dimensional binary hash codes, which greatly reduces the cost of storage and calculation (Luo et al., 2020).
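As a concrete illustration of the storage reduction just mentioned (a sketch with assumed sizes: a 768-dim float64 vector compressed to a 128-bit code; `pack_hash` is a hypothetical helper, not from the paper):

```python
import numpy as np

def pack_hash(code_bits):
    """Pack a {-1,+1}-valued hash code into bytes (1 bit per dimension)."""
    bits = (np.asarray(code_bits) > 0).astype(np.uint8)
    return np.packbits(bits)  # d bits -> d/8 bytes

dense = np.random.default_rng(0).normal(size=768)  # original float64 vector
packed = pack_hash(np.sign(dense)[:128])           # 128-bit binary hash code
print(dense.nbytes, packed.nbytes)                 # 6144 bytes vs 16 bytes
```

A 384x reduction per vector under these assumptions, before even counting the cheaper distance computation.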
The Hamming distance between two binary hash codes can also be computed very efficiently using the XOR instruction on modern computer architectures (Wang et al., 2016). However, some performance degradation is unavoidable in the conversion from representation vectors to binary hash codes, even when state-of-the-art hashing models are adopted. Most users have a low tolerance for performance degradation and are unwilling to trade accuracy for efficiency. In order to preserve as much as possible the performance of the original code search models that adopt bi-encoders for code-query encoding, we integrate deep hashing techniques with code classification, which mitigates the performance degradation of the hashing model in the recall stage by filtering out irrelevant data.

Specifically, in this paper, we propose a novel approach, CoSHC (Accelerating Semantic Code Search with Deep Hashing and Code Classification), to accelerate the retrieval of deep learning-based code search approaches. CoSHC first clusters the representation vectors into different categories. It then generates binary hash codes for both source code and queries from the representation vectors of the original models. Finally, given a query, CoSHC predicts a normalized probability for each category and uses this probability to decide the number of code candidates to recall from each category. Comprehensive experiments have been conducted to validate the proposed approach. The evaluation results show that CoSHC preserves more than $99\%$ of the performance of most baseline models. We summarize the main contributions of this paper as follows:

- We propose a novel approach, CoSHC, to improve the retrieval efficiency of previous deep learning-based approaches.
CoSHC is the first approach that adopts a recall-and-re-rank mechanism integrating code clustering and deep hashing to improve the retrieval efficiency of deep learning-based code search models.
- We conduct a comprehensive experimental evaluation on public benchmarks. The results demonstrate that CoSHC greatly improves retrieval efficiency while preserving almost the same performance as the baseline models.

# 2 Background

# 2.1 Code Search

In this subsection, we briefly review some deep learning-based code search approaches. Sachdev et al. (2018) first propose the neural network-based model NCS to retrieve source code from a large corpus according to given natural language descriptions. Cambronero et al. (2019) propose a bag-of-words-based neural model, UNIF, which embeds code snippets and natural language descriptions into a shared embedding space. Gu et al. (2018) propose to encode the source code representation with API sequences, method name tokens, and code tokens. Yao et al. (2019) treat code annotation and code search as dual tasks and utilize generated code annotations to improve code search performance. Husain et al. (2019) explore different neural architectures for source code representation and find that the self-attention model achieves the best performance. Gu et al. (2021) extract the program dependency graph from the source code and adopt long short-term memory (LSTM) networks to model this relationship. Feng et al. (2020) propose a pre-trained model for source code representation and demonstrate its effectiveness on the code search task.

# 2.2 Deep Hashing

In this subsection, we briefly introduce some representative unsupervised cross-modal hashing methods. In order to learn a unified hash code, Ding et al. (2014) propose to adopt collective matrix factorization with a latent factor model across modalities to merge multiple sources of view information. Zhou et al.
(2014) first utilize sparse coding and matrix factorization to extract latent features for images and texts, respectively; the learned latent semantic features are then mapped to a shared space and quantized into binary hash codes. Wang et al. (2014) suggest using stacked auto-encoders to capture the intra- and inter-modal semantic relationships of data from heterogeneous sources. He et al. (2017) and Zhang et al. (2018) adopt adversarial learning for cross-modal hash code generation. Wu et al. (2018) propose an approach named UDCMH that integrates deep learning and matrix factorization with binary latent factor models to generate binary hash codes for multimodal data retrieval. By incorporating Laplacian constraints into the objective function, UDCMH preserves not only the nearest neighbors but also the farthest neighbors of the data. Rather than using Laplacian constraints in the loss function, Su et al. (2019) construct a joint-semantic affinity matrix that integrates the original neighborhood information from different modalities to guide the learning of unified binary hash codes.

# 3 Method

We propose a general framework to accelerate existing Deep Code Search (DCS) models by decoupling the search procedure into a recall stage and a re-rank stage. Our main technical contribution lies in the recall stage. Figure 1 illustrates the overall framework of the proposed approach. CoSHC consists of two components, i.e., Offline and Online. In the Offline component, we take the code and description embeddings learned by the given DCS model as input and learn the corresponding hash codes by preserving the relations between the code and description embeddings. In the Online component, we recall a candidate set of code snippets according to the Hamming distance between the query and the code, and then use the original DCS model to re-rank the candidates.
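The Hamming-distance recall just described can be sketched in a few lines (illustrative only; the hash codes are toy 8-bit values and `recall_candidates` is a hypothetical helper, not part of CoSHC):

```python
def hamming(a, b):
    """Hamming distance between two hash codes stored as Python ints: XOR + popcount."""
    return bin(a ^ b).count("1")

def recall_candidates(query_code, corpus_codes, r):
    """Recall the r corpus entries closest to the query in Hamming distance."""
    ranked = sorted(range(len(corpus_codes)),
                    key=lambda i: hamming(query_code, corpus_codes[i]))
    return ranked[:r]

corpus = [0b10110010, 0b10110011, 0b01001101]  # toy 8-bit hash codes
query = 0b10110011
print(recall_candidates(query, corpus, 2))     # -> [1, 0]
```

Each distance costs one XOR and one popcount, which is what makes the recall stage so much cheaper than dense cosine scans.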
# 3.1 Offline Stage

Multiple Code Hashing Design with Code Classification Module Since the capacity of the binary hashing space is very limited compared to the Euclidean space, the Hamming distances between similar code snippets would be too small to be distinguishable if we adopted a single hashing model. We therefore first partition the codebase into categories. To be specific, we cluster the codebase using the K-Means algorithm on the code embeddings learned by the given DCS model. Code snippets whose representation vectors are close to each other are assigned to the same category after clustering.

Deep Hashing Module The deep hashing module aims to generate the corresponding binary hash codes for the code and description embeddings from the original DCS model. Figure 2 illustrates the framework of the deep hashing module. To be specific, three fully-connected (FC) layers with the $\tanh (\cdot)$ activation function are adopted to replace the output layer of the original DCS model, converting the original representation vectors into soft binary hash codes.

The objective of the deep hashing module is to force the Hamming distance between the hash representations of code pairs and description pairs to approximate the Euclidean distance between the corresponding embeddings. Thus, we first need to calculate a ground-truth similarity matrix over code pairs and description pairs. For efficiency, we calculate the similarity matrix within a mini-batch.

To construct such a matrix, we first define the code representation vectors and the description representation vectors of the original code search model as $V_{C} = \{v_{c}^{(1)},\dots,v_{c}^{(n)}\}$ and $V_{D} = \{v_{d}^{(1)},\dots,v_{d}^{(n)}\}$ , respectively. $V_{C}$ and $V_{D}$ denote the representation matrices for the entire batch, while $v_{c}^{(i)}$ and $v_{d}^{(i)}$ denote the representation vectors of a single code snippet or query.
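The similarity-matrix construction that is formalized below (Eqs. 1-3) can be sketched as follows (a NumPy sketch under assumed shapes; the function name and example weights $\beta$, $\eta$ are illustrative):

```python
import numpy as np

def joint_similarity(V_C, V_D, beta=0.6, eta=0.4):
    """Joint similarity target S_F from code/description embeddings (cf. Eqs. 1-3)."""
    m = V_C.shape[0]                                         # mini-batch size
    Vc = V_C / np.linalg.norm(V_C, axis=1, keepdims=True)    # l2-normalize rows
    Vd = V_D / np.linalg.norm(V_D, axis=1, keepdims=True)
    S = beta * (Vc @ Vc.T) + (1 - beta) * (Vd @ Vd.T)        # weighted sum (Eq. 1)
    S = (1 - eta) * S + eta * (S @ S.T) / m                  # high-order term (Eq. 2)
    np.fill_diagonal(S, 1.0)  # paired code/description get similarity 1 (Eq. 3)
    return S

rng = np.random.default_rng(0)
S_F = joint_similarity(rng.normal(size=(8, 768)), rng.normal(size=(8, 768)))
print(S_F.shape)  # (8, 8), with an all-ones diagonal
```

The resulting matrix serves as the training target for the hash codes in the loss below.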
After normalizing $V_{C}, V_{D}$ to $\hat{V}_{C}, \hat{V}_{D}$ with the $l_{2}$ -norm, we can calculate the code similarity matrix $S_{C} = \hat{V}_{C}\hat{V}_{C}^{T}$ and the description similarity matrix $S_{D} = \hat{V}_{D}\hat{V}_{D}^{T}$ to describe the similarity among code representation vectors and among description representation vectors, respectively. In order to integrate the similarity information from both $S_{C}$ and $S_{D}$ , we combine them with a weighted sum:

$$
\tilde {S} = \beta S _ {C} + (1 - \beta) S _ {D}, \quad \beta \in [ 0, 1 ], \tag {1}
$$

where $\beta$ is the weight parameter. Since the pairwise similarities among the code representation vectors and the description representation vectors still cannot fully capture their distribution in the whole embedding space, we introduce the matrix $\tilde{S}\tilde{S}^T$ to describe a high-order neighborhood similarity: two vectors with high similarity should also have similar similarities to all other vectors. Finally, we combine the two matrices with a weighted sum:

$$
S = (1 - \eta) \tilde {S} + \eta \frac {\tilde {S} \tilde {S} ^ {T}}{m}, \tag {2}
$$

where $\eta$ is a hyper-parameter and $m$ is the batch size, which normalizes the second term. Since we want the binary hash codes of a source code snippet and its corresponding description to be identical, we replace the diagonal elements of the similarity matrix with one. The final high-order similarity matrix is:

$$
S _ {F _ {i j}} = \left\{ \begin{array}{l l} 1, & i = j \\ S _ {i j}, & \text {otherwise} \end{array} \right. \tag {3}
$$

Binary Hash Code Training We propose to replace the output layer of the original code search

![](images/72687c5e757dbc13f2bf9bf40eade1ea38242f67d4a4bc0b82a53c03d18da92b.jpg)
Figure 1: Overview of the proposed CoSHC. 1 Encoding the code token sequence and description token sequence via the original code retrieval models.
2 Clustering the code representation vectors into several categories. 3 Converting the original code representation vectors into binary hash codes. 5 Predicting the category of the query given by the user and setting the number of code candidates for each category. 7 Converting the input query into a binary hash code. 8 Recalling the code candidates according to the Hamming distance and the number of code candidates for each category. 9 Re-ranking all the code candidates according to the cosine similarity between the input query's description vector and the code candidates' representation vectors, and returning the results to the user.

model with three FC layers with the $\tanh (\cdot)$ activation function. We define the trained binary hash codes for code and description as $B_{C} = \{b_{c}^{(1)},\dots,b_{c}^{(n)}\}$ and $B_{D} = \{b_{d}^{(1)},\dots,b_{d}^{(n)}\}$ , respectively. To ensure that the relative distribution of the binary hash codes is similar to the distribution of the representation vectors in the original embedding space, the following loss function is used for the deep hashing module:

$$
\begin{aligned} \mathcal {L} (\theta) = \min _ {B _ {C}, B _ {D}} \; & \left\| \min (\mu S _ {F}, 1) - \frac {B _ {C} B _ {D} ^ {T}}{d} \right\| _ {F} ^ {2} \\ & + \lambda_ {1} \left\| \min \left(\mu S _ {F}, 1\right) - \frac {B _ {C} B _ {C} ^ {T}}{d} \right\| _ {F} ^ {2} \\ & + \lambda_ {2} \left\| \min (\mu S _ {F}, 1) - \frac {B _ {D} B _ {D} ^ {T}}{d} \right\| _ {F} ^ {2}, \end{aligned} \tag {4}
$$

$$
s.t. \; B _ {C}, B _ {D} \in \{- 1, + 1 \} ^ {m \times d},
$$

where $\theta$ are the model parameters, $\mu$ is a weighting parameter that adjusts the similarity score between different pairs of code and description, $\lambda_1, \lambda_2$ are trade-off parameters that weight the terms of the loss function, and $d$ is the dimension of the binary hash code generated by this deep hashing module.
These three terms in the loss function restrict the similarity between the binary hash codes of source code and description, the similarity among the binary hash codes of the source code, and the similarity among the binary hash codes of the descriptions, respectively.

Note that we adopt $B_{C}B_{D}^{T} / d$ in place of $\cos (B_C,B_D)$ because $\cos (B_C,B_D)$ only measures the angle between two vectors and neglects their length, so $\cos (B_C,B_D)$ can still be very large even when every hash bit is close to zero. Unlike $\cos (B_C,B_D)$ , $B_{C}B_{D}^{T} / d$ can only achieve a high value when every bit of the binary hash code is 1 or -1, since $B_{C}B_{D}^{T} / d$ is close to zero whenever every hash bit is close to zero.

Since it is impractical to force the output of a neural network to take discrete values such as 1 and -1, we use the following equation to convert the output of the deep hashing module into a strict binary hash code:

$$
B = \operatorname {sgn} (H) \in \{- 1, + 1 \} ^ {m \times d}, \tag {5}
$$

where $H$ is the output of the last hidden layer of the deep hashing module without the activation function, and $\operatorname{sgn}(\cdot)$ is the sign function, which outputs 1 if the input is positive and -1 otherwise.

However, the gradient of the sign function is zero in backward propagation, which will induce

![](images/c83080d41e1ef725532a13143912876a63e406d7036934e4644e5ac9ac2986ea.jpg)
Figure 2: Architecture of the hashing module. The original representation vectors are first used to construct the joint-similarity matrix. The joint-similarity matrix is then used as the label for training binary hash code generation. The training objective is to make the Hamming distance similarity matrix identical to the joint-similarity matrix.
the vanishing gradient problem and hinder model convergence. To address this problem, we follow previous research (Cao et al., 2017; Hu et al., 2019) and adopt a scaling function:

$$
B = \tanh (\alpha H) \in \{- 1, + 1 \} ^ {m \times d}, \tag {6}
$$

where $\alpha$ is a parameter that is increased during training. $\tanh (\alpha H)$ approximates $\operatorname{sgn}(H)$ when $\alpha$ is large enough. Therefore, the output of Eq. 6 gradually converges to 1 or -1 as $\alpha$ grows during training, which resolves the problem above.

# 3.2 Online Stage

Recall and Re-rank Mechanism The incoming query from the user is first fed into the description category prediction module to compute a normalized probability distribution over categories. The number of code candidates $R_{i}$ for each category $i$ is then determined according to this distribution. Next, the Hamming distance between the hash code of the given query and the hash codes of all code snippets in the database is calculated; the code candidates are sorted by Hamming distance in ascending order, and the top $R_{i}$ candidates in each category $i$ are recalled. In the re-rank step, the original representation vectors of the recalled code candidates are retrieved and used for cosine similarity calculation. Finally, code snippets are returned to the user in descending order of cosine similarity.

Description Category Prediction Module The description category prediction module aims to predict the category of source code that meets the user's requirements according to the given natural language description. The model adopted for category prediction is the same as the original code search model, except that the output layer is replaced with a one-hot category prediction layer and the cross-entropy function is adopted as the loss function of the model.

Since the accuracy of the description category prediction module is not perfect, we use the probability distribution over categories, rather than only the category with the highest predicted probability, as the recall strategy for code search. We define the total recall number of source code as $N$ and the normalized predicted probability of each code category as $P = \{p_1, \dots, p_k\}$ , where $k$ is the number of categories. The recall number of source code in each category is:

$$
R _ {i} = \max (\left\lfloor p _ {i} \cdot (N - k) \right\rfloor , 1), \quad i \in 1, \dots , k, \tag {7}
$$

where $R_{i}$ is the recall number of source code in category $i$ . To ensure that the proposed approach recalls at least one source code snippet from each category, we set the minimum recall number for a single category to 1.

# 4 Experiments

# 4.1 Dataset

We use two datasets (Python and Java) provided by CodeBERT (Feng et al., 2020) to evaluate the performance of CoSHC. CodeBERT selects the data from the CodeSearchNet (Husain et al., 2019) dataset and creates both positive and negative example pairs. Since all the baselines in our experiments are bi-encoder models, we do not need to predict relevance scores for mismatched pairs, so we remove all negative examples from the dataset. We end up with 412,178 pairs as the training set, 23,107 pairs as the validation set, and 22,176 pairs as the test set for the Python dataset, and 454,451 pairs as the training set, 15,328 pairs as the validation set, and 26,909 pairs as the test set for the Java dataset.

# 4.2 Experimental Setup

In the code classification module, we set the number of clusters to 10. In the deep hashing module, we add three fully connected (FC) layers to all the baselines; the hidden size of each FC layer equals the dimension of the original representation vectors. Specifically, the hidden size of the FC layers for CodeBERTa, CodeBERT, and GraphCodeBERT is 768.
The hidden size of the FC layers for UNIF is 512, and for the RNN it is 2048. The size of the output binary hash code for all baselines is 128. The hyperparameters $\beta, \eta, \mu, \lambda_1, \lambda_2$ are 0.6, 0.4, 1.5, 0.1, and 0.1, respectively. The parameter $\alpha$ is set to the epoch number and thus increases linearly during training. In the query category prediction module, a cross-entropy loss is adopted and the total recall number is 100.

The learning rate for CodeBERTa, CodeBERT, and GraphCodeBERT is 1e-5, and the learning rate for UNIF and the RNN is $1.34\mathrm{e-}4$ . All models are trained with the AdamW algorithm (Kingma and Ba, 2015).

We train our models on a server with 4x Tesla V100 GPUs with NVLink and 32GB of memory. Each module based on CodeBERT, GraphCodeBERT, or CodeBERTa is trained for 10 epochs, and each module based on the RNN or UNIF is trained for 50 epochs. An early stopping strategy is adopted to avoid overfitting for all baselines. The time efficiency experiment is conducted on a server with an Intel Xeon E5-2698v4 2.2GHz 20-core CPU. The evaluation program is written in $\mathrm{C++}$ and restricted to a single CPU thread.

# 4.3 Baselines

We apply CoSHC to several state-of-the-art and representative baseline models. UNIF (Cambronero et al., 2019) regards code as a sequence of tokens and embeds the code token sequence and the description token sequence into representation vectors via a fully connected layer with an attention mechanism. The RNN baseline adopts a two-layer bi-directional LSTM (Cho et al., 2014) to encode the input sequences. CodeBERTa is a 6-layer Transformer-based model trained on the CodeSearchNet dataset. CodeBERT (Feng et al., 2020) is a pre-trained Transformer-based model with 12 layers. Similar to CodeBERT, GraphCodeBERT (Guo et al., 2021) is a Transformer-based model pre-trained with not only token information but also the data flow of the code snippets.
As we noted, the inference efficiency of cross-encoder-based models like CodeBERT is quite low, and the purpose of our approach is to improve the efficiency of the similarity calculation between the representation vectors of code and queries. We therefore slightly change the model structure of CodeBERTa, CodeBERT, and GraphCodeBERT: rather than concatenating code and query and feeding them into a single encoder that predicts a relevance score for the pair, we adopt a bi-encoder architecture that uses independent encoders to encode code and queries into representation vectors. Also, cosine similarity between representation vector pairs is adopted in the training loss function, replacing the cross-entropy over output relevance scores.

# 4.4 Evaluation Metric

SuccessRate@k is widely used by many previous studies (Haldar et al., 2020; Shuai et al., 2020; Fang et al., 2021; Heyman and Cutsem, 2020). The metric is calculated as follows:

$$
\text {SuccessRate@} k = \frac {1}{| Q |} \sum_ {q = 1} ^ {Q} \delta \left(F \operatorname {Rank} _ {q} \leq k\right), \tag {8}
$$

where $Q$ denotes the query set and $F \operatorname{Rank}_q$ is the rank of the correct answer for query $q$ . If the correct result is within the top-$k$ returned results, $\delta(F \operatorname{Rank}_q \leq k)$ is 1; otherwise it is 0. A higher $R@k$ indicates better performance.
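Eq. 8 can be computed directly from the rank of the correct answer for each query (a minimal sketch; the function name and toy ranks are illustrative):

```python
def success_rate_at_k(ranks, k):
    """SuccessRate@k (Eq. 8): fraction of queries whose correct answer ranks in the top k."""
    return sum(r <= k for r in ranks) / len(ranks)

# ranks[i] = position of the correct snippet for query i (1 = best)
ranks = [1, 3, 12, 2, 7]
print(success_rate_at_k(ranks, 5))  # 3 of 5 queries hit the top 5 -> 0.6
```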
| | Python | Java |
| --- | --- | --- |
| **Total Time** | | |
| CodeBERT | 572.97s | 247.78s |
| CoSHC | 33.87s (↓94.09%) | 15.78s (↓93.51%) |
| **(1) Vector Similarity Calculation** | | |
| CodeBERT | 531.95s | 234.08s |
| CoSHC | 14.43s (↓97.29%) | 7.25s (↓96.90%) |
| **(2) Array Sorting** | | |
| CodeBERT | 41.02s | 13.70s |
| CoSHC | 19.44s (↓53.61%) | 8.53s (↓37.74%) |
Table 1: Time efficiency of CoSHC.

# 4.5 Experimental Results

In this section, we present the experimental results and evaluate CoSHC in terms of retrieval efficiency, overall retrieval performance, and the effectiveness of the internal classification module.

# 4.5.1 RQ1: How much faster is CoSHC than the original code search models?

Table 1 shows the efficiency comparison between the original code search models and CoSHC. Once the representation vectors of code and descriptions are stored in memory, retrieval efficiency mainly depends on the dimension of the representation vectors rather than on the complexity of the original retrieval model; we therefore select CodeBERT as the baseline model for the efficiency comparison. Since the code search process in both approaches consists of vector similarity calculation and array sorting, we split the retrieval process into these two steps when measuring time cost.

In the vector similarity calculation step, CoSHC reduces the time cost by $97.29\%$ and $96.90\%$ on the Python and Java datasets, respectively, which demonstrates that binary hash codes can effectively reduce the vector similarity calculation cost in the code retrieval process.

In the array sorting step, CoSHC reduces the time cost by $53.61\%$ and $37.74\%$ on the Python and Java datasets, respectively. The classification module contributes most of this improvement in sorting efficiency. The sorting algorithm in both the original code search model and CoSHC is quick sort, whose time complexity is $O(n \log n)$ . The classification module divides one large code dataset into several small ones, reducing the average sorting complexity to $O(n \log \frac{n}{m})$ , where $m$ is the number of categories. The improvement in sorting on the Java dataset is less significant than on the Python dataset because the Java dataset is much smaller than the Python dataset.
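For the sorting step, a common alternative to fully sorting all candidates is heap-based top-k selection, which costs only $O(n \log k)$; a minimal sketch (toy distances and a hypothetical helper name, for illustration):

```python
import heapq

def topk_by_hamming(dists, k):
    """Indices of the k smallest distances via a heap, O(n log k),
    instead of fully sorting the candidate list, O(n log n)."""
    return heapq.nsmallest(k, range(len(dists)), key=dists.__getitem__)

dists = [5, 1, 4, 1, 5, 9, 2, 6]  # toy Hamming distances to a query
print(topk_by_hamming(dists, 3))  # -> [1, 3, 6]
```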
In practice, however, large-scale sorting is usually implemented with a combination of divide-and-conquer and a max-heap rather than a full quick sort, which can greatly shrink the sorting-efficiency gap between the two approaches. Therefore, the real-world efficiency improvement in the sorting step may not be as large as shown in Table 1.

In the overall code retrieval process, the time cost is reduced by $94.09\%$ and $93.51\%$ on the Python and Java datasets, respectively. Since vector similarity calculation accounts for most of the time cost in code retrieval, CoSHC still cuts at least $90\%$ of the total time cost, which demonstrates its effectiveness at improving efficiency on the code search task.

# 4.5.2 RQ2: How does CoSHC affect the accuracy of the original models?

Table 2 shows the retrieval performance comparison between the original code search models and CoSHC. We note that the performance of conventional approaches such as BM25 (Robertson and Zaragoza, 2009) is not good enough. For example, when we set the token length for both code and queries to 50 (the same setting as in CodeBERT) and apply BM25 to recall the top 100 code candidates for the re-rank step on the Python dataset, BM25 retains only $99.3\%$ , $95.6\%$ , and $92.4\%$ of CodeBERT's retrieval accuracy in terms of $R@1$ , $R@5$ , and $R@10$ . Here we only compare our approach with the original code search models, since the purpose of our approach is to preserve their performance. As can be observed, CoSHC retains at least $99.5\%$ , $99.0\%$ , and $98.4\%$ of the retrieval accuracy of most original code search models in terms of $R@1$ , $R@5$ , and $R@10$ on the Python dataset, and at least $99.2\%$ , $98.2\%$ , and $97.7\%$ of the retrieval accuracy across all original code search baselines in terms of $R@1$ , $R@5$ , and $R@10$ on the Java dataset, respectively.
Overall, CoSHC retains more than $97.7\%$ of performance on all metrics. $R@1$ is the most important and useful of these metrics, since most users hope that the first returned answer is the correct one. CoSHC retains at least $99.2\%$ of performance on $R@1$ on both datasets, which demonstrates that CoSHC preserves almost the same performance as the original code search models.
| Model | Python R@1 | Python R@5 | Python R@10 | Java R@1 | Java R@5 | Java R@10 |
| --- | --- | --- | --- | --- | --- | --- |
| UNIF | 0.071 | 0.173 | 0.236 | 0.084 | 0.193 | 0.254 |
| $\mathrm{CoSHC_{UNIF}}$ | 0.072 (↑1.4%) | 0.177 (↑2.3%) | 0.241 (↑2.1%) | 0.086 (↑2.4%) | 0.198 (↑2.6%) | 0.264 (↑3.9%) |
| - w/o classification | 0.071 (0.0%) | 0.174 (↑0.6%) | 0.236 (0.0%) | 0.085 (↑1.2%) | 0.193 (0.0%) | 0.254 (0.0%) |
| - one classification | 0.069 (↓2.8%) | 0.163 (↓5.8%) | 0.216 (↓8.5%) | 0.083 (↓1.2%) | 0.183 (↓5.2%) | 0.236 (↓7.1%) |
| - ideal classification | **0.077 (↑6.9%)** | **0.202 (↑16.8%)** | **0.277 (↑17.4%)** | **0.093 (↑10.7%)** | **0.222 (↑15.0%)** | **0.296 (↑16.5%)** |
| RNN | 0.111 | 0.253 | 0.333 | 0.073 | 0.184 | 0.250 |
| $\mathrm{CoSHC_{RNN}}$ | 0.112 (↑0.9%) | 0.259 (↑2.4%) | 0.343 (↑5.0%) | 0.076 (↑4.1%) | 0.194 (↑5.4%) | 0.265 (↑6.0%) |
| - w/o classification | 0.112 (↑0.9%) | 0.254 (↑0.4%) | 0.335 (↑0.6%) | 0.073 (0.0%) | 0.186 (↑1.1%) | 0.253 (↑1.2%) |
| - one classification | 0.112 (↑0.9%) | 0.243 (↓4.0%) | 0.311 (↓6.6%) | 0.075 (↑2.7%) | 0.182 (↓1.1%) | 0.240 (↓4.0%) |
| - ideal classification | **0.123 (↑10.8%)** | **0.289 (↑14.2%)** | **0.385 (↑15.6%)** | **0.084 (↑15.1%)** | **0.221 (↑20.1%)** | **0.302 (↑20.8%)** |
| CodeBERTa | 0.124 | 0.250 | 0.314 | 0.089 | 0.203 | 0.264 |
| $\mathrm{CoSHC_{CodeBERTa}}$ | 0.123 (↓0.8%) | 0.247 (↓1.2%) | 0.309 (↓1.6%) | 0.090 (↑1.1%) | 0.210 (↑3.4%) | 0.272 (↑3.0%) |
| - w/o classification | 0.122 (↓1.6%) | 0.242 (↓3.2%) | 0.302 (↓3.8%) | 0.089 (0.0%) | 0.201 (↓1.0%) | 0.258 (↓2.3%) |
| - one classification | 0.116 (↓6.5%) | 0.221 (↓11.6%) | 0.271 (↓13.7%) | 0.085 (↓4.5%) | 0.189 (↓6.9%) | 0.238 (↓9.8%) |
| - ideal classification | **0.135 (↑8.9%)** | **0.276 (↑10.4%)** | **0.346 (↑10.2%)** | **0.100 (↑12.4%)** | **0.235 (↑15.8%)** | **0.305 (↑15.5%)** |
| CodeBERT | 0.451 | 0.683 | 0.759 | 0.319 | 0.537 | 0.608 |
| $\mathrm{CoSHC_{CodeBERT}}$ | 0.451 (0.0%) | 0.679 (↓0.6%) | 0.750 (↓1.2%) | 0.318 (↓0.3%) | 0.533 (↓0.7%) | 0.602 (↓1.0%) |
| - w/o classification | 0.449 (↓0.4%) | 0.673 (↓1.5%) | 0.742 (↓2.2%) | 0.316 (↓0.9%) | 0.527 (↓1.9%) | 0.593 (↓2.5%) |
| - one classification | 0.425 (↓5.8%) | 0.613 (↓10.2%) | 0.665 (↓12.4%) | 0.304 (↓4.7%) | 0.483 (↓10.1%) | 0.532 (↓12.5%) |
| - ideal classification | **0.460 (↑2.0%)** | **0.703 (↑2.9%)** | **0.775 (↑2.1%)** | **0.329 (↑3.1%)** | **0.555 (↑3.4%)** | **0.627 (↑3.1%)** |
| GraphCodeBERT | 0.485 | 0.726 | 0.792 | 0.353 | 0.571 | 0.640 |
| $\mathrm{CoSHC_{GraphCodeBERT}}$ | 0.483 (↓0.4%) | 0.719 (↓1.0%) | 0.782 (↓1.3%) | 0.350 (↓0.8%) | 0.561 (↓1.8%) | 0.625 (↓2.3%) |
| - w/o classification | 0.481 (↓0.8%) | 0.713 (↓1.8%) | 0.774 (↓2.3%) | 0.347 (↓1.7%) | 0.553 (↓3.2%) | 0.616 (↓3.7%) |
| - one classification | 0.459 (↓5.4%) | 0.653 (↓10.1%) | 0.698 (↓11.9%) | 0.329 (↓7.8%) | 0.505 (↓11.6%) | 0.551 (↓13.9%) |
| - ideal classification | **0.494 (↑1.9%)** | **0.741 (↑2.1%)** | **0.803 (↑1.4%)** | **0.361 (↑2.3%)** | **0.585 (↑2.5%)** | **0.649 (↑1.4%)** |
+ +It is interesting that CoSHC presents a relatively better performance when the performance of the original code retrieval models is worse. $\mathrm{CoSHC_{CodeBERTa}}$ even outperforms the original baseline model in Java dataset. $\mathrm{CoSHC_{RNN}}$ and $\mathrm{CoSHC_{UNIF}}$ outperform the original model in both Python and Java datasets. The integration of deep learning and code classification with recall make the contribution on this result. The worse performance indicates more misalignment between the code representation vectors and description representation vectors. Since the code classification and deep hashing will filter out most of irrelevant codes in the recall stage, some irrelevant code representation vectors but has high cosine similarity with the target description representation vectors are filtered, which leads the improvement on the final retrieval performance. + +# 4.5.3 RQ3: Can the classification module help improve performance? + +Table 2 illustrates the performance comparison between the CoSHC variants which adopt different recall strategies with query category prediction results. $\mathrm{CoSHC}_{\mathrm{w/o classification}}$ represents CoSHC + +Table 2: Results of code search performance comparison. The best results among the three CoSHC variants are highlighted in bold font. + +
without the code classification and description category prediction modules. $\mathrm{CoSHC_{one\ classification}}$ represents the CoSHC variant that recalls $N - k + 1$ candidates from the code category with the highest prediction probability and one from each of the remaining categories. $\mathrm{CoSHC_{ideal\ classification}}$ is an idealized setting: assuming the correct description category is known, $N - k + 1$ candidates are recalled from the correct category and one candidate from each of the remaining categories. Note that $\mathrm{CoSHC_{ideal\ classification}}$ is shown only to explore the upper bound of the performance improvement attainable by the category prediction module and is not counted among the CoSHC variants we compare.

| Model | Python Acc. | Java Acc. |
| --- | --- | --- |
| $\mathrm{CoSHC_{UNIF}}$ | 0.558 | 0.545 |
| $\mathrm{CoSHC_{RNN}}$ | 0.610 | 0.535 |
| $\mathrm{CoSHC_{CodeBERTa}}$ | 0.591 | 0.571 |
| $\mathrm{CoSHC_{CodeBERT}}$ | 0.694 | 0.657 |
| $\mathrm{CoSHC_{GraphCodeBERT}}$ | 0.713 | 0.653 |

Table 3: Classification accuracy of the code classification module in each model.

By comparing the performance between $\mathrm{CoSHC_{ideal\ classification}}$ and $\mathrm{CoSHC_{w/o\ classification}}$ , we find that correct classification can significantly improve retrieval performance. With the ideal category labels, CoSHC can even outperform all baseline models. As mentioned in Sec. 4.5.2, code classification mitigates the vector-pair misalignment problem by filtering out, in the recall stage, wrong candidates whose representation vectors have high cosine similarity with the target representation vectors. The more serious the misalignment problem, the more effective the code classification. That is why the improvement of CoSHC with ground-truth labels on UNIF, RNN, and CodeBERTa is more significant than on CodeBERT and GraphCodeBERT: the retrieval accuracy of the former models is much lower than that of the latter.
Similar conclusions can also be drawn for the distribution of binary hash codes from the comparison between CoSHC and $\mathrm{CoSHC_{ideal\ classification}}$ : since CoSHC uses the distribution of the original representation vectors as guidance for model training, the distribution of the binary hash codes will be similar to that of the original representation vectors.

Having explored the theoretical upper limit of the benefit of code classification for code retrieval, we now validate its effectiveness in a realistic setting. By comparing the experimental results of $\mathrm{CoSHC_{w/o\ classification}}$ and $\mathrm{CoSHC_{one\ classification}}$ , we find that CoSHC with predicted labels performs even worse than CoSHC without the code classification module. The reason is that the accuracy of description category prediction is far from satisfactory. Table 3 reports the accuracy of the description category prediction module for all baseline models: we regard the category with the highest probability as the predicted category and check whether this prediction is correct. The classification accuracy is not very high (less than $75\%$ ). From the results of CoSHC on GraphCodeBERT on the Java dataset, we can also see that low accuracy greatly hurts $\mathrm{CoSHC_{one\ classification}}$ , causing performance drops of $7.8\%$ , $11.6\%$ , and $13.9\%$ in terms of $R@1$ , $R@5$ , and $R@10$ , respectively.

Fortunately, although the description category prediction module cannot reliably identify the exact category to which a description belongs, it still assigns a relatively high predicted probability to the correct category.
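Building on this observation, the recall budget can be split across categories by the normalized predicted probabilities instead of giving almost all of it to the argmax category. A sketch of both allocation strategies follows; the one-candidate-per-category floor and the handling of rounding leftovers are illustrative assumptions, not CoSHC's exact allocation rule:

```python
import numpy as np

def allocate_recalls(probs, total, min_per_category=1):
    """Split a recall budget of `total` candidates across k code categories.

    Returns the 'one classification' split (all budget beyond the
    per-category minimum goes to the argmax category) and a proportional
    split driven by the normalized predicted probabilities.
    """
    probs = np.asarray(probs, dtype=float)
    k = len(probs)
    # 'one classification': total - k + 1 candidates for the top category,
    # one candidate for each remaining category.
    one_hot = np.full(k, min_per_category)
    one_hot[np.argmax(probs)] = total - (k - 1) * min_per_category
    # proportional: keep the minimum everywhere, split the rest by
    # normalized probability, and give rounding leftovers to the argmax.
    rest = total - k * min_per_category
    prop = np.full(k, min_per_category) + np.floor(rest * probs / probs.sum()).astype(int)
    prop[np.argmax(probs)] += total - prop.sum()
    return one_hot, prop
```

For predicted probabilities `[0.5, 0.3, 0.1, 0.1]` and a budget of 100, the proportional split keeps every category represented while still concentrating roughly half of the budget on the most likely category.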
By comparing the experimental results among all the variants of CoSHC, we find that performance increases significantly once the recall strategy is changed so that the number of code candidates for each category is determined by the normalized prediction probability. CoSHC with the new recall strategy achieves almost the best performance on all metrics across all baseline models. Even on RNN in the Python dataset, CoSHC still matches CoSHC without classification on $R@1$ and achieves similar performance on the other metrics. These experimental results demonstrate the effectiveness of adopting code classification in code search.

# 5 Conclusion

To accelerate code search, we present CoSHC, a general method that incorporates deep hashing techniques and code classification. We leverage the two-stage recall and re-rank paradigm from the information retrieval field and apply deep hashing for fast recall. Furthermore, we propose a code classification module to retrieve higher-quality code snippets. Experiments on five code search models show that, compared with the original models, CoSHC greatly improves retrieval efficiency while preserving almost the same performance.

# 6 Acknowledgements

Wenchao Gu's and Michael R. Lyu's work described in this paper was in part supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (CUHK 14210920 of the General Research Fund).

# References

Joel Brandt, Philip J. Guo, Joel Lewenstein, Mira Dontcheva, and Scott R. Klemmer. 2009. Two studies of opportunistic programming: interleaving web foraging, learning, and writing code. In Proceedings of the 27th International Conference on Human Factors in Computing Systems, CHI 2009, Boston, MA, USA, April 4-9, 2009, pages 1589-1598. ACM.
Jose Cambronero, Hongyu Li, Seohyun Kim, Koushik Sen, and Satish Chandra. 2019. When deep learning met code search.
In Proceedings of the ACM + +Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/SIGSOFT FSE 2019, Tallinn, Estonia, August 26-30, 2019, pages 964-974. ACM. +Zhangjie Cao, Mingsheng Long, Jianmin Wang, and Philip S. Yu. 2017. Hashnet: Deep learning to hash by continuation. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 5609-5618. IEEE Computer Society. +Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST@EMNLP 2014, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, Doha, Qatar, 25 October 2014, pages 103-111. Association for Computational Linguistics. +Guiguang Ding, Yuchen Guo, and Jile Zhou. 2014. Collective matrix factorization hashing for multimodal data. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014, Columbus, OH, USA, June 23-28, 2014, pages 2083-2090. IEEE Computer Society. +Lun Du, Xiaozhou Shi, Yanlin Wang, Ensheng Shi, Shi Han, and Dongmei Zhang. 2021. Is a single model enough? mucos: A multi-model ensemble learning approach for semantic code search. In CIKM '21: The 30th ACM International Conference on Information and Knowledge Management, Virtual Event, Queensland, Australia, November 1 - 5, 2021, pages 2994-2998. ACM. +Sen Fang, Youshuai Tan, Tao Zhang, and Yepang Liu. 2021. Self-attention networks for code search. Inf. Softw. Technol., 134:106542. +Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. Codebert: A pre-trained model for programming and natural languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pages 1536-1547. 
Association for Computational Linguistics. +Wenchao Gu, Zongjie Li, Cuiyun Gao, Chaozheng Wang, Hongyu Zhang, Zenglin Xu, and Michael R. Lyu. 2021. Cradle: Deep code retrieval based on semantic dependency learning. *Neural Networks*, 141:385-394. +Xiaodong Gu, Hongyu Zhang, and Sunghun Kim. 2018. Deep code search. In Proceedings of the 40th International Conference on Software Engineering, ICSE 2018, Gothenburg, Sweden, May 27 - June 03, 2018, pages 933-944. ACM. + +Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin B. Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, and Ming Zhou. 2021. Graphcodebert: Pre-training code representations with data flow. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. +Rajarshi Haldar, Lingfei Wu, Jinjun Xiong, and Julia Hockenmaier. 2020. A multi-perspective architecture for semantic code search. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 8563-8568. Association for Computational Linguistics. +Li He, Xing Xu, Huimin Lu, Yang Yang, Fumin Shen, and Heng Tao Shen. 2017. Unsupervised cross-modal retrieval through adversarial learning. In 2017 IEEE International Conference on Multimedia and Expo, ICME 2017, Hong Kong, China, July 10-14, 2017, pages 1153-1158. IEEE Computer Society. +Geert Heyman and Tom Van Cutsem. 2020. Neural code search revisited: Enhancing code snippet retrieval through natural language intent. CoRR, abs/2008.12193. +Di Hu, Feiping Nie, and Xuelong Li. 2019. Deep binary reconstruction for cross-modal hashing. IEEE Trans. Multim., 21(4):973-985. +Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. 2019. Code-searchnet challenge: Evaluating the state of semantic code search. CoRR, abs/1909.09436. 
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. +Xiao Luo, Chong Chen, Huasong Zhong, Hao Zhang, Minghua Deng, Jianqiang Huang, and Xiansheng Hua. 2020. A survey on deep hashing methods. CoRR, abs/2003.03369. +Fei Lv, Hongyu Zhang, Jian-Guang Lou, Shaowei Wang, Dongmei Zhang, and Jianjun Zhao. 2015. Codehow: Effective code search based on API understanding and extended boolean model (E). In 30th IEEE/ACM International Conference on Automated Software Engineering, ASE 2015, Lincoln, NE, USA, November 9-13, 2015, pages 260-270. IEEE Computer Society. +Collin McMillan, Mark Grechanik, Denys Poshyvanyk, Qing Xie, and Chen Fu. 2011. Portfolio: finding relevant functions and their usage. In Proceedings of + +the 33rd International Conference on Software Engineering, ICSE 2011, Waikiki, Honolulu, HI, USA, May 21-28, 2011, pages 111-120. ACM. +Stephen E. Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Found. Trends Inf. Retr., 3(4):333-389. +Saksham Sachdev, Hongyu Li, Sifei Luan, Seohyun Kim, Koushik Sen, and Satish Chandra. 2018. Retrieval on source code: a neural code search. In Proceedings of the 2nd ACM SIGPLAN International Workshop on Machine Learning and Programming Languages, MAPL@PLDI 2018, Philadelphia, PA, USA, June 18-22, 2018, pages 31-41. ACM. +Jianhang Shuai, Ling Xu, Chao Liu, Meng Yan, Xin Xia, and Yan Lei. 2020. Improving code search with co-attentive representation learning. In ICPC '20: 28th International Conference on Program Comprehension, Seoul, Republic of Korea, July 13-15, 2020, pages 196-207. ACM. +Shupeng Su, Zhisheng Zhong, and Chao Zhang. 2019. Deep joint-semantics reconstructing hashing for large-scale unsupervised cross-modal retrieval. 
In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pages 3027-3035. IEEE. +Jun Wang, Wei Liu, Sanjiv Kumar, and Shih-Fu Chang. 2016. Learning to hash for indexing big data - A survey. Proc. IEEE, 104(1):34-57. +Wei Wang, Beng Chin Ooi, Xiaoyan Yang, Dongxiang Zhang, and Yueting Zhuang. 2014. Effective multi-modal retrieval based on stacked autoencoders. Proc. VLDB Endow., 7(8):649-660. +Gengshen Wu, Zijia Lin, Jungong Han, Li Liu, Guiguang Ding, Baochang Zhang, and Jialie Shen. 2018. Unsupervised deep hashing via binary latent factor models for large-scale cross-modal retrieval. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 2854-2860. ijcai.org. +Ziyu Yao, Jayavardhan Reddy Peddamail, and Huan Sun. 2019. Coacor: Code annotation for code retrieval with reinforcement learning. In The World Wide Web Conference, WWW 2019, San Francisco, CA, USA, May 13-17, 2019, pages 2203-2214. ACM. +Jian Zhang, Yuxin Peng, and Mingkuan Yuan. 2018. Unsupervised generative adversarial cross-modal hashing. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 539-546. AAAI Press. + +Jile Zhou, Guiguang Ding, and Yuchen Guo. 2014. Latent semantic sparse hashing for cross-modal similarity search. In The 37th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '14, Gold Coast, QLD, Australia - July 06 - 11, 2014, pages 415-424. ACM. 
\ No newline at end of file diff --git a/acceleratingcodesearchwithdeephashingandcodeclassification/images.zip b/acceleratingcodesearchwithdeephashingandcodeclassification/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4e386e8bef68d74a40b38feb113cd08a5e695da9 --- /dev/null +++ b/acceleratingcodesearchwithdeephashingandcodeclassification/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6c2115a7261dd4dac78f4e30c98086af6d38a4426254124d519bec8ec7dbc8b +size 545067 diff --git a/acceleratingcodesearchwithdeephashingandcodeclassification/layout.json b/acceleratingcodesearchwithdeephashingandcodeclassification/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b23c5090c2da6c3ef368903356c3a04d0b0c4f1a --- /dev/null +++ b/acceleratingcodesearchwithdeephashingandcodeclassification/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e5abf14dfe5d6449ee90a5eed0a0af7a97de32eef5fb724ce08020234380f4a2 +size 349931 diff --git a/accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/1acb5d0e-647e-4740-bb65-fdd39858d3eb_content_list.json b/accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/1acb5d0e-647e-4740-bb65-fdd39858d3eb_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ef2d2968e852068620bc9eb8c521cb9c1bc09472 --- /dev/null +++ b/accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/1acb5d0e-647e-4740-bb65-fdd39858d3eb_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1fa3c6cfb6b0c994c5f1f9fd0ff323f2a0b5dc9b72b0ecfba2fe241a97dd5ed4 +size 118094 diff --git a/accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/1acb5d0e-647e-4740-bb65-fdd39858d3eb_model.json b/accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/1acb5d0e-647e-4740-bb65-fdd39858d3eb_model.json new file 
mode 100644 index 0000000000000000000000000000000000000000..4c1640ef87f0f0b1a0f493eafa90829f38a7eb9b --- /dev/null +++ b/accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/1acb5d0e-647e-4740-bb65-fdd39858d3eb_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:237f117da61941e700cec1783b27d65e767cad9f9ea4d60025a0e8affa486664 +size 136573 diff --git a/accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/1acb5d0e-647e-4740-bb65-fdd39858d3eb_origin.pdf b/accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/1acb5d0e-647e-4740-bb65-fdd39858d3eb_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a9af335935c226ffd8eec1839f5bd46983a49932 --- /dev/null +++ b/accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/1acb5d0e-647e-4740-bb65-fdd39858d3eb_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e6eb5571a3720093fd894c24d7e53e2c9cc2d5c36662318db06f190f5ccc91ab +size 533807 diff --git a/accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/full.md b/accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/full.md new file mode 100644 index 0000000000000000000000000000000000000000..9eb0e2a5cb8d05b46a652aedb882f5e5ac3a5a98 --- /dev/null +++ b/accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/full.md @@ -0,0 +1,424 @@ +# Accurate Online Posterior Alignments for Principled Lexically-Constrained Decoding + +Soumya Chatterjee + +IIT Bombay + +Sunita Sarawagi + +IIT Bombay + +Preethi Jyothi + +IIT Bombay + +soumya@cse.iitb.ac.in + +sunita@iitb.ac.in + +pjyothi@cse.iitb.ac.in + +# Abstract + +Online alignment in machine translation refers to the task of aligning a target word to a source word when the target sequence has only been partially decoded. 
Good online alignments facilitate important applications such as lexically constrained translation, where user-defined dictionaries are used to inject lexical constraints into the translation model. We propose a novel posterior alignment technique that is truly online in its execution and superior in terms of alignment error rates compared to existing methods. Our proposed inference technique jointly considers alignment and token probabilities in a principled manner and can be seamlessly integrated within existing constrained beam-search decoding algorithms. On five language pairs, including two distant language pairs, we achieve a consistent drop in alignment error rates. When deployed on seven lexically constrained translation tasks, we achieve significant improvements in BLEU, specifically around the constrained positions.
These applications need to infer the specific source token that aligns with each output token; thus, alignment and translation must be performed simultaneously.

Existing online alignment methods can be categorized into prior and posterior alignment methods. Prior alignment methods (Garg et al., 2019; Song et al., 2020) extract the alignment from the attention at time step $t$ , when outputting token $y_{t}$ . The attention probabilities at time step $t$ are conditioned on the tokens output before time $t$ ; thus, the alignment is estimated prior to observing $y_{t}$ . Naturally, alignment quality can be improved by also conditioning on the target token $y_{t}$ (Shankar and Sarawagi, 2019). This motivated Chen et al. (2020) to propose a posterior alignment method where the alignment is calculated from the attention probabilities at the next decoder step, $t + 1$ . While alignment quality improved as a result, their method is not truly online, since it does not generate the alignment synchronously with the token. The delay of one step makes it difficult and cumbersome to incorporate terminology constraints during beam decoding.

We propose a truly online posterior alignment method that provides higher alignment accuracy than existing online methods while also being synchronous. This allows us to easily integrate posterior alignments into state-of-the-art constrained beam-search algorithms such as VDBA (Hu et al., 2019) to improve lexicon-constrained translation. Our method (Align-VDBA) presents a significant departure from existing papers on alignment-guided constrained translation (Chen et al., 2020; Song et al., 2020), which employ a greedy algorithm with a poor constraint satisfaction rate (CSR). For example, on a ja→en task, their CSR is 20 points lower than ours. Moreover, the latter methods do not benefit from larger beam sizes, unlike VDBA-based methods, which improve significantly with larger beam widths. Compared to Chen et al.
(2020), our method improves average overall BLEU scores by 1.2 points and average BLEU scores around the constrained span by up to 9 points. In the evaluations performed in these earlier works, VDBA was not allocated the slightly higher beam size needed to pro-actively enforce constraints without compromising BLEU. Compared to Hu et al. (2019) (VDBA), this paper's contributions include online alignments and their use for more fluent constraint placement and efficient allocation of beams.

# Contributions

- A truly online posterior alignment method that integrates into existing NMT systems via a trainable, light-weight module.
- Higher online alignment accuracy on five language pairs, including two distant language pairs, where we improve over the best existing method on seven out of ten translation tasks.
- A principled method of modifying VDBA to incorporate posterior alignment probabilities in lexically-constrained decoding. VDBA enforces constraints while ignoring source alignments; our change (Align-VDBA) leads to more fluent constraint placement and significant BLEU increases, particularly for smaller beams.
- Establishing that VDBA-based pro-active constrained inference should be preferred over the prevailing greedy alignment-guided inference (Chen et al., 2021; Song et al., 2020). Further, VDBA and our Align-VDBA inference with beam size 10 provide a 1.2 BLEU increase over these methods with the same beam size.

# 2 Posterior Online Alignment

Given a sentence $\mathbf{x} = x_{1},\ldots ,x_{S}$ in the source language and a sentence $\mathbf{y} = y_{1},\dots ,y_{T}$ in the target language, an alignment $\mathcal{A}$ between the two word strings is a subset of the Cartesian product of the word positions (Brown et al., 1993; Och and Ney, 2003): $\mathcal{A}\subseteq \{(s,t):s = 1,\dots ,S;t = 1,\dots ,T\}$ , such that the aligned words can be considered translations of each other.
An online alignment at timestep $t$ commits to the alignment of the $t^{\mathrm{th}}$ output token conditioned only on $\mathbf{x}$ and $\mathbf{y}_{< t} = y_1,y_2,\dots ,y_{t - 1}$ . Additionally, if token $y_{t}$ is also available, we call it a posterior online alignment. We seek to embed online alignment within existing NMT systems. We will first briefly describe the architecture of state-of-the-art NMT systems. We will then elaborate on how alignments are computed from attention distributions in prior work and highlight some limitations, before describing our proposed approach.

# 2.1 Background

Transformers (Vaswani et al., 2017) adopt the popular encoder-decoder paradigm used for sequence-to-sequence modeling (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015). The encoder and decoder are both multi-layered networks, with each layer consisting of a multi-headed self-attention and a feedforward module. The decoder layers additionally use multi-headed attention over the encoder states. We elaborate on this mechanism next, since it plays an important role in alignments.

# 2.1.1 Decoder-Encoder Attention in NMTs

The encoder transforms the $S$ input tokens into a sequence of token representations $\mathbf{H} \in \mathbb{R}^{S \times d}$ . Each decoder layer (indexed by $\ell \in \{1, \dots, L\}$ ) computes multi-head attention over $\mathbf{H}$ by aggregating outputs from a set of $\eta$ independent attention heads. The attention output from a single head $n \in \{1, \dots, \eta\}$ in decoder layer $\ell$ is computed as follows. Let the output of the self-attention sub-layer in decoder layer $\ell$ at the $t^{\text{th}}$ target token be denoted $\mathbf{g}_t^\ell$ .
Using three projection matrices $\mathbf{W}_Q^{\ell,n}$ , $\mathbf{W}_K^{\ell,n}$ , $\mathbf{W}_V^{\ell,n} \in \mathbb{R}^{d \times d_n}$ , the query vector $\mathbf{q}_t^{\ell,n} \in \mathbb{R}^{1 \times d_n}$ and the key and value matrices, $\mathbf{K}^{\ell,n} \in \mathbb{R}^{S \times d_n}$ and $\mathbf{V}^{\ell,n} \in \mathbb{R}^{S \times d_n}$ , are computed using the following projections: $\mathbf{q}_t^{\ell,n} = \mathbf{g}_t^\ell \mathbf{W}_Q^{\ell,n}$ , $\mathbf{K}^{\ell,n} = \mathbf{H} \mathbf{W}_K^{\ell,n}$ , and $\mathbf{V}^{\ell,n} = \mathbf{H} \mathbf{W}_V^{\ell,n}$ . These are used to calculate the attention output from head $n$ :

$$\mathbf{Z}_t^{\ell,n} = P(\mathbf{a}_t^{\ell,n} \mid \mathbf{x}, \mathbf{y}_{<t})\, \mathbf{V}^{\ell,n}, \qquad P(\mathbf{a}_t^{\ell,n} \mid \mathbf{x}, \mathbf{y}_{<t}) = \mathrm{softmax}\left(\frac{\mathbf{q}_t^{\ell,n} (\mathbf{K}^{\ell,n})^\top}{\sqrt{d_n}}\right),$$

where $P(\mathbf{a}_t^{\ell,n} \mid \mathbf{x}, \mathbf{y}_{<t})$ is the attention distribution over the $S$ source positions.

6: if … > threshold then
7: candidates.append((k, y, scores[k, y] × alignProb, beam[k].constraints.add(y)))
8: candidates.append((k, y, scores[k, y], beam[k].constraints.add(y)))
9: w = ARGMAX(scores[k, :])
10: candidates.append((k, w, scores[k, w], beam[k].constraints.add(w)))
11: newBeam ← ALLOCATE(candidates, K)

Currently, VDBA attempts beam allocation for each unmet constraint, since it has no way to discriminate among them. In Align-VDBA we allocate only when the alignment probability is greater than a threshold. When the beam size is small (say 5), this yields higher accuracy due to more efficient beam utilization. We used a threshold of 0.1 for all language pairs other than ro→en, for which a threshold of 0.3 was used. Further, the thresholds were used for the smaller beam size of 5 and not for the larger beam sizes of 10 and 20.

We present the pseudocode of our modification (steps 5, 6 and 7, in blue) to DBA in Algorithm 1. Other details of the algorithm, including the handling of constraints and the allocation steps (step 11), are involved, and we refer the reader to Post and Vilar (2018) and Hu et al. (2019) to understand these details.
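A minimal NumPy sketch of the single-head decoder-to-encoder attention computation described above; random projections stand in for the trained $\mathbf{W}$ matrices, and the scaled dot-product softmax is the standard Transformer form:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cross_attention_head(g_t, H, W_Q, W_K, W_V):
    """One decoder-to-encoder attention head at target step t.

    g_t: (d,) decoder self-attention output; H: (S, d) encoder states;
    W_*: (d, d_n) projections. Returns the attention distribution over
    the S source tokens and the head output Z_t.
    """
    d_n = W_Q.shape[1]
    q = g_t @ W_Q                          # query, shape (d_n,)
    K, V = H @ W_K, H @ W_V                # keys/values, shape (S, d_n)
    P = softmax(q @ K.T / np.sqrt(d_n))    # P(a_t | x, y_<t), sums to 1
    return P, P @ V                        # Z_t = P V
```

It is this distribution `P` over source positions that prior alignment methods read off directly, and that posterior methods recompute after conditioning on the emitted token.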
The point of this code is to show that our proposed posterior alignment method can be easily incorporated into these algorithms so as to provide a more principled scoring of constrained hypotheses in a beam than the ad hoc revision-based method of Chen et al. (2021). Additionally, posterior alignments lead to better placement of constraints than in the original VDBA algorithm.

# 4 Experiments

We first compare our proposed posterior online alignment method against existing methods on alignment quality in Section 4.2, and in Section 4.3 we demonstrate the impact of the improved alignments on the lexicon-constrained translation task.

# 4.1 Setup

We deploy the fairseq toolkit (Ott et al., 2019) and use the preconfigured transformer_iwslt_de_en model for all our experiments. Other configuration parameters include: the Adam optimizer with $\beta_{1} = 0.9$ , $\beta_{2} = 0.98$ , a learning rate of $5\mathrm{e}-4$
| | de-en | en-fr | ro-en | en-hi | ja-en |
| --- | --- | --- | --- | --- | --- |
| Training | 1.9M | 1.1M | 0.5M | 1.6M | 0.3M |
| Validation | 994 | 1000 | 999 | 25 | 1166 |
| Test | 508 | 447 | 248 | 140 | 1235 |

Table 1: Number of sentence pairs for the five datasets used. Note that gold alignments are available only for the handful of sentence pairs in the test set.

with 4000 warm-up steps, an inverse square root schedule, weight decay of $1\mathrm{e} - 4$ , label smoothing of 0.1, a dropout probability of 0.3, and a batch size of 4500 tokens. The transformer models are trained for 50,000 iterations. Then, the alignment module is trained for 10,000 iterations, keeping the other model parameters fixed. A joint byte pair encoding (BPE) is learned for the source and the target languages with 10k merge operations (Sennrich et al., 2016) using subword-nmt.

All experiments were run on a single 11GB Nvidia GeForce RTX 2080 Ti GPU on a machine with a 64-core Intel Xeon CPU and 755 GB of memory. The vanilla Transformer models take between 15 and 20 hours to train on the different datasets. Starting from the alignments extracted from these models, the POSTALN alignment module trains in about 3 to 6 hours, depending on the dataset.

# 4.2 Alignment Task

We evaluate online alignments on ten translation tasks spanning five language pairs. Three of these are popular in alignment papers (Zenkel et al., 2019): German-English (de-en), English-French (en-fr), and Romanian-English (ro-en). These are all European languages that follow the same subject-verb-object (SVO) ordering. We also present results on two distant language pairs, English-Hindi (en-hi) and English-Japanese (ja-en), that follow an SOV word order which is different from the SVO
| Method | Delay | de→en | en→de | en→fr | fr→en | ro→en | en→ro | en→hi | hi→en | ja→en | en→ja |
|---|---|---|---|---|---|---|---|---|---|---|---|
| **Statistical Methods (Not Online)** | | | | | | | | | | | |
| GIZA++ (Och and Ney, 2003) | End | 18.9 | 19.7 | 7.3 | 7.0 | 27.6 | 28.3 | 35.9 | 36.4 | 41.8 | 39.0 |
| FastAlign (Dyer et al., 2013) | End | 28.4 | 32.0 | 16.4 | 15.9 | 33.8 | 35.5 | - | - | - | - |
| **No Alignment Training** | | | | | | | | | | | |
| NAIVEATT (Garg et al., 2019) | 0 | 32.4 | 40.0 | 24.0 | 31.2 | 37.3 | 33.2 | 49.1 | 53.8 | 62.2 | 63.5 |
| SHIFTATT (Chen et al., 2020) | +1 | 20.0 | 22.9 | 14.7 | 20.4 | 26.9 | 27.4 | 35.3 | 38.6 | 53.6 | 48.6 |
| **With Alignment Training** | | | | | | | | | | | |
| PRIORATT | 0 | 23.4 | 25.8 | 14.0 | 16.6 | 29.3 | 27.2 | 36.4 | 35.1 | 52.7 | 50.9 |
| SHIFTAET (Chen et al., 2020) | +1 | 15.8 | 19.5 | 10.3 | 10.4 | 22.4 | 23.7 | 29.3 | 29.3 | 42.5 | 41.9 |
| POSTALN [Ours] | 0 | 15.5 | 19.5 | 9.9 | 10.4 | 21.8 | 23.2 | 28.7 | 28.9 | 41.2 | 42.2 |
Table 2: AER for de-en, en-fr, ro-en, en-hi, and ja-en language pairs. "Delay" indicates the decoding step at which the alignment of the target token is available. NAIVEATT, PRIORATT, and POSTALN are truly online and output the alignment at the same time step (delay = 0), while SHIFTATT and SHIFTAET output it one decoding step later.

word order of English. Data statistics are shown in Table 1 and details are in Appendix C.

Evaluation Method: For evaluating alignment performance, the target sentence must be exactly the one for which the gold alignments are provided. Thus, for the alignment experiments, we force the output tokens to be from the gold target and only infer the alignment. We then report the Alignment Error Rate (AER) (Och and Ney, 2000) between the gold and predicted alignments for each method. Though our focus is online alignment, for comparison with previous work, we also report results on bidirectional symmetrized alignments in Appendix D.

Methods compared: We compare our method with existing statistical alignment models, namely GIZA++ (Och and Ney, 2003) and FastAlign (Dyer et al., 2013), and with the recent Transformer-based alignment methods of Garg et al. (2019) (NAIVEATT) and Chen et al. (2020) (SHIFTATT and SHIFTAET). SHIFTAET is a variant of SHIFTATT that likewise delays computation by one time step but additionally includes a learned attention sublayer to compute alignment probabilities. We also present results on PRIORATT, which is similar to POSTALN but does not use $\mathbf{y}_t$.

Results: The alignment results are shown in Table 2. First, AERs for the statistical methods FastAlign and GIZA++ are shown. Here, for a fair comparison, the IBM models used by GIZA++ are trained on the same sub-word units as the Transformer models, and sub-word alignments are converted to word-level alignments for the AER calculations.
(GIZA++ has remained a state-of-the-art alignment technique and continues to be compared against.) Next, we present alignment results for two vanilla Transformer models, NAIVEATT and SHIFTATT, that do not train a separate alignment module. The high AER of NAIVEATT shows that attention as-is is far from alignment, but posterior attention is closer to alignment than prior attention. Next, we look at methods that train alignment-specific parameters: PRIORATT, a prior attention method, and SHIFTAET and POSTALN, both posterior alignment methods. We observe that, with training, even the prior attention method PRIORATT surpasses the untrained posterior methods. The posterior attention methods outperform the prior attention methods by a large margin, with an improvement of 4.0 to 8.0 points. Within each group, the methods with a trained alignment module outperform those without by a wide margin. POSTALN matches or outperforms SHIFTAET (achieving the lowest AER in nine out of ten cases in Table 2) while avoiding the one-step delay in alignment generation. Even on the distant language pairs, POSTALN achieves significant error reductions. For ja→en, we achieve a 1.3-point AER reduction compared to SHIFTAET, which is not a truly online method. Figure 2 shows examples illustrating the superior alignments of POSTALN compared to NAIVEATT and PRIORATT.

# 4.3 Impact of POSTALN on Lexicon-Constrained Translation

We next measure the impact of the improved AERs from our posterior alignment method on a downstream lexicon-constrained translation task. Following previous work (Hokamp and Liu, 2017; Post and Vilar, 2018; Song et al., 2020; Chen et al., 2020, 2021), we extract constraints using the gold alignments and gold translations. Up to three constraints of up to three words each are used for each sentence.
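The constraint-extraction procedure above can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the link format, the substring test against the greedy output, and the random sampling policy are our assumptions; the paper only states that up to three constraints of up to three words each are drawn from the gold alignments, and that spans already produced by greedy decoding are excluded.

```python
import random

def sample_constraints(src_tokens, tgt_tokens, gold_links, greedy_out,
                       max_constraints=3, max_len=3, seed=0):
    """Sample target spans (with their gold-aligned source words) as constraints.
    `gold_links` holds 1-indexed (src, tgt) word links."""
    rng = random.Random(seed)
    greedy_text = " ".join(greedy_out)
    candidates = []
    for start in range(len(tgt_tokens)):
        for end in range(start + 1, min(start + max_len, len(tgt_tokens)) + 1):
            span = tgt_tokens[start:end]
            # spans the greedy translation already got right are not constraints
            if " ".join(span) in greedy_text:
                continue
            # source words gold-aligned to any token of this target span
            src_idx = sorted({j for (j, i) in gold_links if start + 1 <= i <= end})
            if src_idx:
                candidates.append((start, end,
                                   tuple(src_tokens[j - 1] for j in src_idx),
                                   tuple(span)))
    rng.shuffle(candidates)
    picked, covered = [], set()
    for start, end, src_span, tgt_span in candidates:
        if all(p not in covered for p in range(start, end)):
            picked.append((src_span, tgt_span))
            covered.update(range(start, end))
        if len(picked) == max_constraints:
            break
    return picked
```

For example (toy data), with source `["der", "dealer", "kam"]`, gold target `["the", "pusher", "came"]`, and greedy output `["the", "dealer", "came"]`, every candidate span must cover the mistranslated word "pusher", so a single constraint is returned.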
Spans correctly translated by greedy decoding are not selected as constraints.

![](images/5ada8c4056d6ffa85ed4f9251ef27356d9c05993dc89fd400bce9b8673f56003.jpg)

![](images/b8602ebee6625cbcac78935ae093d2b3eeb849394d9950adf577d5623d32859b.jpg)

![](images/f441193c38b833860755f27fea41f6de0c9ca0fbd36d61cee011adb39f76f695.jpg)

![](images/b14c704d764aff2c7ffc35bc20b4b81cdb4f5bcb1b839f51d8ad9316f45d7470.jpg)

![](images/0630ee0e5bf20214d8d50aa5cce1e8475b13b6bfbbf4f1a4a5282dd5288f3ddf.jpg)
Figure 2: Alignments for de→en (top row) and en→hi (bottom row) by NAIVEATT, PRIORATT, and POSTALN. Note that POSTALN is most similar to the gold alignments in the last column.

![](images/92c91267867736f679785ebdc39754dfd3cd2a2c742dfffebb887001543d4a47.jpg)

![](images/9bd1e2f7dd7331369b910e5e9f6d479ea9a3703a514aeceac32614497c937e40.jpg)

![](images/91037571b76b311e7de4b926aa38e512e5359ca080b12395950fc43d1ce2e211.jpg)
Each cell reports BLEU-C / CSR / BLEU / decoding time (s).

| Method | de→en | en→fr | ro→en | en→hi | ja→en |
|---|---|---|---|---|---|
| No constraints | 0.0 / 4.6 / 32.9 / 87 | 0.0 / 8.7 / 34.8 / 64 | 0.0 / 8.8 / 33.4 / 47 | 0.0 / 6.3 / 19.7 / 21 | 0.0 / 8.8 / 18.9 / 237 |
| NAIVEATT | 28.7 / 86.1 / 36.6 / 147 | 36.5 / 88.0 / 38.3 / 93 | 33.3 / 92.3 / 36.5 / 99 | 22.5 / 88.4 / 23.6 / 27 | 15.1 / 75.9 / 20.2 / 315 |
| PRIORATT | 35.0 / 92.8 / 37.6 / 159 | 42.1 / 94.4 / 38.9 / 97 | 36.0 / 91.2 / 37.2 / 100 | 27.2 / 91.5 / 24.4 / 28 | 16.7 / 79.7 / 20.4 / 326 |
| SHIFTATT | 41.0 / 96.6 / 38.7 / 443 | 45.0 / 93.5 / 38.7 / 239 | 39.2 / 94.2 / 37.4 / 241 | 23.2 / 78.7 / 21.9 / 58 | 15.2 / 72.7 / 19.3 / 567 |
| SHIFTAET | 43.1 / 97.5 / 39.1 / 458 | 46.6 / 94.3 / 39.0 / 235 | 40.8 / 94.4 / 37.6 / 263 | 24.3 / 80.2 / 22.0 / 62 | 18.1 / 75.9 / 19.7 / 596 |
| POSTALN | 42.7 / 97.2 / 39.0 / 399 | 46.3 / 94.1 / 38.7 / 218 | 40.0 / 93.5 / 37.4 / 226 | 23.8 / 79.0 / 22.0 / 47 | 18.2 / 75.7 / 19.7 / 460 |
| VDBA | 44.5 / 98.9 / 38.5 / 293 | 51.9 / 98.5 / 39.5 / 160 | 43.1 / 99.1 / 37.9 / 165 | 29.8 / 92.3 / 24.5 / 49 | 24.3 / 95.6 / 21.6 / 494 |
| Align-VDBA | 44.5 / 98.6 / 38.6 / 357 | 52.9 / 98.4 / 39.7 / 189 | 44.1 / 98.9 / 38.1 / 203 | 30.5 / 91.5 / 24.7 / 70 | 25.1 / 95.5 / 21.8 / 630 |
Table 3: Constrained translation results showing BLEU-C, CSR (Constraint Satisfaction Rate), BLEU, and total decoding time (in seconds) for the test set; each cell reports BLEU-C / CSR / BLEU / Time. Align-VDBA has the highest BLEU-C on all datasets.

Metrics: Following prior work (Song et al., 2020), we report BLEU (Papineni et al., 2002), the time to translate all test sentences, and the Constraint Satisfaction Rate (CSR). However, since it is trivial to reach $100\%$ CSR by always copying, we report another metric to evaluate the appropriateness of constraint placement: BLEU-C, computed as the BLEU of the constraint (when satisfied) and a window of three words around it. All numbers are averages over five different sets of randomly sampled constraints. The beam size is set to ten by default; results for other beam sizes appear in Appendix E.

Methods Compared: First, we compare all the alignment methods presented in Section 4.2 on the constrained translation task, using the alignment-based token-replacement algorithm of Song et al. (2020) described in Section 3.1. Next, we present a comparison between VDBA (Hu et al., 2019) and our modification, Align-VDBA.

Results: Table 3 shows that VDBA and our Align-VDBA, which proactively enforce constraints, have much higher CSR and BLEU-C than the other, lazy constraint-enforcement methods. For example, for ja→en, greedy methods only achieve a CSR of $76\%$, compared to $96\%$ for the VDBA-based methods. In terms of overall BLEU, too, these methods provide an average increase of 1.2 BLEU points and an average increase of 5 BLEU-C points. On average, Align-VDBA has a 0.7-point higher BLEU-C than VDBA. It also has a higher BLEU than VDBA on all five datasets. In Table 9 of the Appendix, we show that for a smaller beam size of 5, the gap between Align-VDBA and VDBA is even larger (2.1 points greater BLEU-C and 0.4
points greater BLEU). Table 4 lists some example translations by VDBA vs. Align-VDBA. We observe that VDBA places constraints at the end of the translated sentence (e.g., "pusher", "development"), unlike Align-VDBA. In some cases where constraints contain frequent words (like of, the, etc.), VDBA picks a token in the wrong position to tack on the constraint (e.g., "strong backing of", "of qualified"), while Align-VDBA places the constraint correctly.

| | |
|---|---|
| **Constraints** | (gesetz zur, law also), (dealer, pusher) |
| Gold | of course, if a drug addict becomes a pusher, then it is right and necessary that he should pay and answer before the law also. |
| VDBA | certainly, if a drug addict becomes a dealer, it is right and necessary that he should be brought to justice before the law also pusher. |
| Align-VDBA | certainly, if a drug addict becomes a pusher, then it is right and necessary that he should be brought to justice before the law also. |
| **Constraints** | (von mehrheitsverfahren, of qualified) |
| Gold | ... whether this is done on the basis of a vote or of consensus, and whether unanimity is required or some form of qualified majority. |
| VDBA | ... whether this is done by means of qualified votes or consensus, and whether unanimity or form of majority procedure apply. |
| Align-VDBA | ... whether this is done by voting or consensus, and whether unanimity or form of qualified majority voting are valid. |
| **Constraints** | (zustimmung der, strong backing of) |
| Gold | ... which were adopted with the strong backing of the ppe group and the support of the socialist members. |
| VDBA | ... which were then adopted with broad agreement from the ppe group and with the strong backing of the socialist members. |
| Align-VDBA | ... which were then adopted with strong backing of the ppe group and with the support of the socialist members. |
| **Constraints** | (den usa, the usa), (sicherheitssystems an, security system that), (entwicklung, development) |
| Gold | matters we regard as particularly important are improving the working conditions between the weu and the eu and the development of a European security system that is not dependent on the usa . |
| VDBA | we consider the usa 's european security system to be particularly important in improving working conditions between the weu and the eu and developing a European security system that is independent of the united states development . |
| Align-VDBA | we consider the development of the security system that is independent of the usa to be particularly important in improving working conditions between the weu and the eu . |

Table 4: Anecdotes showing constrained translations produced by VDBA vs. Align-VDBA.
| Method (Beam Size) | IATE (414) BLEU (Δ) | IATE CSR | Wiktionary (727) BLEU (Δ) | Wiktionary CSR |
|---|---|---|---|---|
| Baseline (5) | 25.8 | 76.3 | 26.0 | 76.9 |
| Train-by-app. (5) | 26.0 (+0.2) | 92.9 | 26.9 (+0.9) | 90.7 |
| Train-by-rep. (5) | 26.0 (+0.2) | 94.5 | 26.3 (+0.3) | 93.4 |
| No constraints (10) | 29.7 | 77.0 | 29.9 | 72.4 |
| SHIFTAET (10) | 29.9 | 95.9 | 30.4 | 97.2 |
| VDBA (10) | 30.9 | 99.8 | 30.9 | 99.4 |
| Align-VDBA (10) | 30.9 (+1.2) | 99.8 | 31.1 (+1.2) | 99.5 |
Table 5: Constrained translation results on the two real-world constraint sets from Dinu et al. (2019).

Real World Constraints: We also evaluate our method using real-world constraints extracted from the IATE and Wiktionary datasets by Dinu et al. (2019). Table 5 compares Align-VDBA with the soft-constraints method of Dinu et al. (2019), which requires special retraining to teach the model to copy constraints. We reproduce the numbers from their paper in the first three rows. Their baseline is almost 4 BLEU points worse than ours since they used a smaller Transformer NMT model, making running times incomparable. When we compare the increment $\Delta$ in BLEU over the respective baselines, Align-VDBA shows much greater gains: $+1.2$ vs. their $+0.5$. Align-VDBA also provides a higher CSR of 99.6, compared to their 92. Results for other beam sizes, methods, and metrics appear in Appendix F.

# 5 Related Work

Online Prior Alignment from NMTs: Zenkel et al. (2019) find alignments using a single-head attention submodule, optimized to predict the next token. Garg et al. (2019) and Song et al. (2020) supervise a single alignment head of the penultimate multi-head attention layer with prior alignments from GIZA++ or FastAlign. Bahar et al. (2020) and Shankar et al. (2018) treat alignment as a latent variable and impose a joint distribution over tokens and alignments, while supervising on the token marginal of the joint distribution.

Online Posterior Alignment from NMTs: Shankar and Sarawagi (2019) first identified the role of posterior attention for more accurate alignment. However, their NMT was a single-headed RNN. Chen et al. (2020) implement posterior attention in a multi-headed Transformer, but they incur a delay of one step between token output and alignment. We are not aware of any prior work that extracts truly online posterior alignments in modern NMTs.
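The prior-vs-posterior distinction can be made concrete with a small Bayes-rule sketch. This is our illustration of the general idea in Shankar and Sarawagi (2019), not the paper's implementation: the posterior alignment of a source position combines the prior attention weight with how well that position explains the token actually emitted.

```python
def posterior_alignment(prior, token_likelihood):
    """Given prior attention over source positions and the likelihood of the
    emitted target token under each position, return the posterior alignment
    distribution via Bayes' rule."""
    joint = [p * l for p, l in zip(prior, token_likelihood)]
    z = sum(joint)
    return [j / z for j in joint]

# A flat prior sharpens toward the position that best explains the output token.
print(posterior_alignment([0.5, 0.5], [0.9, 0.1]))  # [0.9, 0.1]
```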
Offline Alignment Systems: Several recent methods apply only in the offline setting: Zenkel et al. (2020) extend an NMT model with an alignment module; Nagata et al. (2020) frame alignment as a question answering task; and Jalili Sabet et al. (2020) and Dou and Neubig (2021) leverage similarity between contextual embeddings from pretrained multilingual models (Devlin et al., 2019).

Lexicon-Constrained Translation: Hokamp and Liu (2017), Post and Vilar (2018), and Hu et al. (2019) modify beam search to ensure that target phrases from a given constraint lexicon are present in the translation. These methods ignore alignment with the source but ensure a high success rate for the appearance of the target phrases from the constraints. Song et al. (2020) and Chen et al. (2021) do consider source alignments but do not enforce constraints, leading to lower CSR. Dinu et al. (2019) and Lee et al. (2021) propose alternative training strategies for constraints, whereas we focus on working with existing models. Recently, non-autoregressive methods have been proposed for enforcing target constraints, but they require that the constraints be given in the order in which they appear in the target translation (Susanto et al., 2020).

# 6 Conclusion

In this paper, we proposed a simple architectural modification to modern NMT systems to obtain accurate online alignments. The key idea that led to high alignment accuracy is conditioning on the output token. Further, our alignment module enables such conditioning to be performed synchronously with token generation. This property led us to Align-VDBA, a principled decoding algorithm for lexically constrained translation based on the joint distribution of target tokens and source alignments. Future work includes increasing the efficiency of constrained inference and harnessing such joint distributions for other forms of constraints, for example, nested constraints.
+ +Limitations: All existing methods for hard constrained inference, including ours, come with considerable runtime overheads. Soft constrained methods are not accurate enough. + +# Acknowledgements + +We are grateful to the reviewers for their detailed analysis, thoughtful comments and insightful questions which have helped us improve the paper. We are grateful to Priyesh Jain for providing alignment annotations for 50 English-Hindi sentences. + +# References + +Tamer Alkhouli, Gabriel Bretschner, and Hermann Ney. 2018. On the alignment problem in multi-head attention-based neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 177-185, Brussels, Belgium. Association for Computational Linguistics. + +Parnia Bahar, Nikita Makarov, and Hermann Ney. 2020. Investigation of transformer-based latent attention models for neural machine translation. In Proceedings of the 14th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), pages 7-20, Virtual. Association for Machine Translation in the Americas. +Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. +Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311. +Guanhua Chen, Yun Chen, and Victor O.K. Li. 2021. Lexically constrained neural machine translation with explicit alignment guidance. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14):12630-12638. +Yun Chen, Yang Liu, Guanhua Chen, Xin Jiang, and Qun Liu. 2020. Accurate word alignment induction from neural machine translation. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 566-576, Online. Association for Computational Linguistics.
Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103-111, Doha, Qatar. Association for Computational Linguistics.
Josep Crego, Jungi Kim, Guillaume Klein, Anabel Rebollo, Kathy Yang, Jean Senellart, Egor Akhanov, Patrice Brunelle, Aurelien Coquard, Yongchao Deng, Satoshi Enoue, Chiyo Geiss, Joshua Johanson, Ardas Khalsa, Raoum Khiari, Byeongil Ko, Catherine Kobus, Jean Lorieux, Leidiana Martins, Dang-Chuan Nguyen, Alexandra Priori, Thomas Riccardi, Natalia Segal, Christophe Servan, Cyril Tiquet, Bo Wang, Jin Yang, Dakun Zhang, Jing Zhou, and Peter Zoldan. 2016. Systran's pure neural machine translation systems.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Shuoyang Ding, Hainan Xu, and Philipp Koehn. 2019. Saliency-driven word alignment interpretation for neural machine translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 1-12, Florence, Italy. Association for Computational Linguistics.
Georgiana Dinu, Prashant Mathur, Marcello Federico, and Yaser Al-Onaizan. 2019. Training neural machine translation to apply terminology constraints. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3063-3068, Florence, Italy.
Association for Computational Linguistics. +Zi-Yi Dou and Graham Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2112-2128, Online. Association for Computational Linguistics. +Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-648, Atlanta, Georgia. Association for Computational Linguistics. +Sarthak Garg, Stephan Peitz, Udhyakumar Nallasamy, and Matthias Paulik. 2019. Jointly learning to align and translate with transformer models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4453-4462, Hong Kong, China. Association for Computational Linguistics. +Eva Hasler, Adrià de Gispert, Gonzalo Iglesias, and Bill Byrne. 2018. Neural machine translation decoding with terminology constraints. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 506-512, New Orleans, Louisiana. Association for Computational Linguistics. +Felix Hieber, Tobias Domhan, Michael Denkowski, and David Vilar. 2020. Sockeye 2: A toolkit for neural machine translation. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, pages 457-458, Lisboa, Portugal. European Association for Machine Translation. +Chris Hokamp and Qun Liu. 2017. Lexically constrained decoding for sequence generation using grid beam search. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1535-1546, Vancouver, Canada. Association for Computational Linguistics. +J. Edward Hu, Huda Khayrallah, Ryan Culkin, Patrick Xia, Tongfei Chen, Matt Post, and Benjamin + +Van Durme. 2019. Improved lexically constrained decoding for translation and monolingual rewriting. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 839-850, Minneapolis, Minnesota. Association for Computational Linguistics. +Masoud Jalili Sabet, Philipp Dufter, François Yvon, and Hinrich Schütze. 2020. SimAlign: High quality word alignments without parallel training data using static and contextualized embeddings. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1627-1643, Online. Association for Computational Linguistics. +Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1700-1709, Seattle, Washington, USA. Association for Computational Linguistics. +Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388-395, Barcelona, Spain. Association for Computational Linguistics. +Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh system description for the 2005 iwslt speech translation evaluation. In International Workshop on Spoken Language Translation (IWSLT) 2005. +Anoop Kunchukuttan, Pratik Mehta, and Pushpak Bhattacharyya. 2018. The IIT Bombay English-Hindi parallel corpus. 
In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). +Gyubok Lee, Seongjun Yang, and Edward Choi. 2021. Improving lexically constrained neural machine translation with source-conditioned masked span prediction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 743-753, Online. Association for Computational Linguistics. +Joel Martin, Rada Mihalcea, and Ted Pedersen. 2005. Word alignment for languages with scarce resources. In Proceedings of the ACL Workshop on Building and Using Parallel Texts, pages 65-74, Ann Arbor, Michigan. Association for Computational Linguistics. +Rada Mihalcea and Ted Pedersen. 2003. An evaluation exercise for word alignment. In Proceedings of the HLT-NAACL 2003 Workshop on Building and + +Using Parallel Texts: Data Driven Machine Translation and Beyond, pages 1-10. +Mathias Müller. 2017. Treatment of markup in statistical machine translation. In Proceedings of the Third Workshop on Discourse in Machine Translation, pages 36-46, Copenhagen, Denmark. Association for Computational Linguistics. +Masaaki Nagata, Katsuki Chousa, and Masaaki Nishino. 2020. A supervised word alignment method based on cross-language span prediction using multilingual BERT. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 555-565, Online. Association for Computational Linguistics. +Graham Neubig. 2011. The Kyoto free translation task. +Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 440-447, Hong Kong. Association for Computational Linguistics. +Franz Josef Och and Hermann Ney. 2003. 
A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics.
Matt Post and David Vilar. 2018. Fast lexically constrained decoding with dynamic beam allocation for neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1314-1324, New Orleans, Louisiana. Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
Shiv Shankar, Siddhant Garg, and Sunita Sarawagi. 2018. Surprisingly easy hard-attention for sequence to sequence learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 640-645, Brussels, Belgium.
Association for Computational Linguistics. +Shiv Shankar and Sunita Sarawagi. 2019. Posterior attention models for sequence to sequence learning. In International Conference on Learning Representations. +Xiaoyu Shen, Yang Zhao, Hui Su, and Dietrich Klakow. 2019. Improving latent alignment in text summarization by generalizing the pointer generator. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3762-3773, Hong Kong, China. Association for Computational Linguistics. +Kai Song, Kun Wang, Heng Yu, Yue Zhang, Zhongqiang Huang, Weihua Luo, Xiangyu Duan, and Min Zhang. 2020. Alignment-enhanced transformer for constraining nmt with pre-specified translations. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8886-8893. +Raymond Hendy Susanto, Shamil Chollampatt, and Liling Tan. 2020. Lexically constrained neural machine translation with Levenshtein transformer. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3536-3543, Online. Association for Computational Linguistics. +Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. +David Vilar, Maja Popović, and Hermann Ney. 2006. AER: Do we need to "improve" our alignments? In International Workshop on Spoken Language Translation (IWSLT) 2006. +Thomas Zenkel, Joern Wuebker, and John DeNero. 2019. Adding interpretable attention to neural translation models improves word alignment. +Thomas Zenkel, Joern Wuebker, and John DeNero. 2020. 
End-to-end neural word alignment outperforms GIZA++. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1605-1617, Online. Association for Computational Linguistics.

# A Alignment Error Rate

Given gold alignments consisting of sure alignments $\mathcal{S}$ and possible alignments $\mathcal{P}$, and the predicted alignments $\mathcal{A}$, the Alignment Error Rate (AER) is defined as (Och and Ney, 2000):

$$
\mathrm{AER} = 1 - \frac{|\mathcal{A} \cap \mathcal{P}| + |\mathcal{A} \cap \mathcal{S}|}{|\mathcal{A}| + |\mathcal{S}|}
$$

Note that here $\mathcal{S} \subseteq \mathcal{P}$. Also note that since our models are trained on sub-word units but gold alignments are over words, we need to convert alignments between word pieces into alignments between words. A source word and a target word are said to be aligned if there exists an alignment link between any of their respective word pieces.

# B BLEU-C

Given a reference sentence, a predicted translation, and a set of constraints, for each constraint a segment of the sentence is chosen that contains the constraint words and up to window-size words (when available) on either side. Such segments, called spans, are collected over the reference and predicted sentences in the test set, and BLEU is computed over these spans. If a constraint is not satisfied in the prediction, the corresponding span is taken to be the empty string. An example is shown in Table 6. Table 7 shows how BLEU-C varies as a function of window size for a fixed English-French constraint set with the beam size set to 10.
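The definition above translates directly into code. The following is our minimal sketch (not the paper's implementation) of both the AER computation and the word-piece-to-word conversion; the dictionary-based piece-to-word maps are an illustrative representation.

```python
def aer(sure, possible, predicted):
    """Alignment Error Rate (Och and Ney, 2000). All arguments are sets of
    (src, tgt) word-index links, with `sure` a subset of `possible`."""
    a, s, p = set(predicted), set(sure), set(possible)
    return 1.0 - (len(a & p) + len(a & s)) / (len(a) + len(s))

def piece_links_to_word_links(piece_links, src_word_of, tgt_word_of):
    """Two words are aligned iff any of their word pieces are linked;
    `src_word_of[j]` maps a source piece index to its word index (likewise tgt)."""
    return {(src_word_of[j], tgt_word_of[i]) for (j, i) in piece_links}

# A perfect prediction gives AER = 0; a fully wrong one gives AER = 1.
assert aer({(1, 1)}, {(1, 1), (2, 2)}, {(1, 1), (2, 2)}) == 0.0
assert aer({(1, 1)}, {(1, 1)}, {(2, 2)}) == 1.0
```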
| Window Size → | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|
| No constraints | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| NAIVEATT | 34.4 | 32.0 | 30.4 | 29.5 | 29.4 | 29.5 | 29.7 |
| PRIORATT | 41.5 | 38.7 | 36.4 | 35.1 | 34.9 | 35.0 | 35.2 |
| SHIFTATT | 44.9 | 41.5 | 38.9 | 37.3 | 36.4 | 36.2 | 36.0 |
| SHIFTAET | 47.0 | 43.2 | 40.4 | 38.7 | 38.0 | 37.6 | 37.4 |
| POSTALN | 46.4 | 42.7 | 39.8 | 38.0 | 37.1 | 36.9 | 36.6 |
| VDBA | 54.9 | 50.5 | 46.8 | 44.6 | 43.5 | 43.0 | 42.6 |
| Align-VDBA | 56.4 | 51.7 | 47.9 | 45.6 | 44.4 | 43.7 | 43.3 |

Table 7: BLEU-C vs. window size (en-fr, beam size 10).
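The span extraction behind BLEU-C (Appendix B) can be sketched as follows. This is our illustrative reconstruction; the whitespace tokenization and exact-match policy are simplifying assumptions.

```python
def bleu_c_span(tokens, constraint, window=3):
    """Return the constraint plus up to `window` tokens on either side,
    or "" when the constraint is not satisfied in `tokens`."""
    n, m = len(tokens), len(constraint)
    for start in range(n - m + 1):
        if tokens[start:start + m] == list(constraint):
            lo = max(0, start - window)
            hi = min(n, start + m + window)
            return " ".join(tokens[lo:hi])
    return ""

toks = "a b c d e f g".split()
print(bleu_c_span(toks, ["d"], window=2))   # b c d e f
print(bleu_c_span(toks, ["z"], window=2))   # (empty string)
```

BLEU-C is then the corpus BLEU computed over the reference spans and the corresponding predicted spans.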
# C Description of the Datasets

The European-language data consist of parallel sentences for three language pairs from the Europarl corpus, with gold alignments from Mihalcea and Pedersen (2003), Och and Ney (2000), and Vilar et al. (2006). Following previous work (Ding et al., 2019; Chen et al., 2020), the last 1000 sentences of the training data are used as validation data.

For English-Hindi, we use the dataset from Martin et al. (2005), consisting of 3440 training sentence pairs, 25 validation sentences, and 90 test sentences with gold alignments. Since training Transformers requires much larger datasets, we augment the training set with 1.6 million sentences from the IIT Bombay Parallel Corpus (Kunchukuttan et al., 2018). We also add the first 50 sentences of the IIT Bombay dev set, with manually annotated alignments, to the test set, giving a total of 140 test sentences.

For Japanese-English, we use the Kyoto Free Translation Task (Neubig, 2011). It comprises roughly 330K training, 1166 validation, and 1235 test sentences. As with the other datasets, gold alignments are available only for the test sentences. The Japanese text is already segmented, and we use it without additional changes.

The real-world constraint datasets of Dinu et al. (2019) are extracted from the German-English WMT newstest 2017 task, with the IATE dataset consisting of 414 sentences (451 constraints) and the Wiktionary dataset of 727 sentences (879 constraints). The constraints come from the IATE and Wiktionary terminology databases.

All datasets were processed using the scripts provided by Zenkel et al. (2019) at https://github.com/lilt/alignment-scripts. Computation of BLEU and BLEU-C, and the paired significance tests, were performed using sacrebleu (Post, 2018).

# D Bidirectional Symmetrized Alignment

We report AERs using bidirectional symmetrized alignments in Table 8 in order to provide fair comparisons to results in prior literature.
The symmetrization is done using the grow-diagonal heuristic (Koehn et al., 2005; Och and Ney, 2000). Since bidirectional alignments need the entire text in both languages, these are not online alignments. + +Table 7: BLEU-C vs Window Size + +
| Method | de-en | en-fr | ro-en | en-hi | ja-en |
|---|---|---|---|---|---|
| *Statistical Methods* | | | | | |
| GIZA++ | 18.6 | 5.5 | 26.3 | 35.9 | 39.7 |
| FastAlign | 27.0 | 10.5 | 32.1 | - | - |
| *No Alignment Training* | | | | | |
| NAIVEATT | 29.2 | 16.9 | 31.4 | 43.8 | 57.1 |
| SHIFTATT | 16.9 | 7.8 | 24.3 | 30.9 | 46.2 |
| *With Alignment Training* | | | | | |
| PRIORATT | 22.0 | 10.1 | 26.3 | 32.1 | 48.2 |
| SHIFTAET | 15.4 | 5.6 | 21.0 | 26.7 | 40.1 |
| POSTALN | 15.3 | 5.5 | 21.0 | 26.1 | 39.5 |
+ +Table 8: AERs for bidirectional symmetrized alignments. POSTALN consistently performs the best. + +
| | |
|---|---|
| Reference | we consider the development of a robust security system that is independent of the |
| Prediction | we consider developing a robust security system which is independent of the |

BLEU-C (Window Size = 2):

| Cons. No | Reference Spans | Predicted Spans |
|---|---|---|
| 1 | consider the development of a robust security system that is | (empty sentence) |
| 2 | | a robust security system which is |

BLEU-C = BLEU(Reference Spans, Predicted Spans)
Table 6: An example BLEU-C computation

# E Additional Lexicon-Constrained Translation Results

Constrained translation results for beam sizes 5 and 10 are shown in Table 9. We also present results for Align-VDBA without the alignment probability based beam allocation as Align-VDBA* in Table 9. We can see that our beam allocation technique results in better beam utilization, as evidenced by improvements in BLEU and BLEU-C and a reduction in total decoding time.

Paired bootstrap resampling test (Koehn, 2004) results with respect to Align-VDBA for beam size 10 are shown in Table 10.

# F Additional Real World Constrained Translation Results

Results on the real-world constrained translation datasets of Dinu et al. (2019) for all the methods in Table 3 with beam sizes 5, 10 and 20 are presented in Table 11. Paired bootstrap resampling test (Koehn, 2004) results with respect to Align-VDBA for beam size 5 are shown in Table 12.

# G Alignment-based Token Replacement Algorithm

The pseudocode for the algorithm used in Song et al. (2020); Chen et al. (2021) and our non-VDBA based methods in Section 4.3 is presented in Algorithm 2. As described in Section 3.1, at each decoding step, if the source token having the maximum alignment at the current step lies in some constraint span, the constraint in question is decoded until completion before resuming normal decoding.

Though different alignment methods are represented using a call to the same ATTENTION function in Algorithm 2, these methods incur varying computational overheads. For instance, NAIVEATT incurs little additional cost, while PRIORATT and POSTALN involve a multi-head attention computation. For SHIFTATT and SHIFTAET, an entire decoder pass is done when ATTENTION is called, thereby incurring a large overhead, as shown in Table 3.
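The windowed span extraction behind the BLEU-C example of Table 6 can be sketched as below. This is a minimal sketch: the helper name, whitespace tokenization, and the empty-string treatment of unmet constraints are our own simplifications; the final score is then ordinary corpus BLEU (e.g. via sacrebleu, not shown) over the extracted span pairs.

```python
def constraint_spans(tokens, constraints, window):
    """Collect window-extended spans around each constraint occurrence.

    tokens: token list for one sentence; constraints: list of token lists.
    Returns one (possibly empty) span string per constraint.
    """
    spans = []
    for cons in constraints:
        found = []
        for i in range(len(tokens) - len(cons) + 1):
            if tokens[i:i + len(cons)] == cons:
                lo = max(0, i - window)
                hi = min(len(tokens), i + len(cons) + window)
                found.append(" ".join(tokens[lo:hi]))
        # an unsatisfied constraint contributes an empty "sentence"
        spans.append(" ".join(found) if found else "")
    return spans
```

On the Table 6 prediction, the constraint "security system" yields the span "a robust security system which is", while the unmet constraint "development" yields an empty span.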
# H Layer Selection for Alignment Supervision of Distant Language Pairs

For the alignment supervision, we used alignments extracted from vanilla Transformers using the SHIFTATT method. To do so, however, we need to choose the decoder layers from which to extract the alignments. The validation AERs could be used for this purpose, but since gold validation alignments are not available, Chen et al. (2020) suggest selecting the layers with the best consistency between the alignment predictions from the two translation directions.

For the European language pairs, this turns out to be layer 3, as suggested by Chen et al. (2020). However, for the distant language pairs Hindi-English and Japanese-English, this is not the case and layer selection needs to be done. The AER between the two translation directions on the validation set, with alignments obtained from different decoder layers, is shown in Tables 13 and 14.
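The layer-selection criterion above compares the two translation directions with AER. For reference, AER (Och and Ney, 2000) over sure links $S$, possible links $P$ (with $S \subseteq P$), and predicted links $A$ is $1 - (|A \cap S| + |A \cap P|)/(|A| + |S|)$; when comparing two sets of predicted alignments, one direction's links can stand in for both $S$ and $P$. A minimal sketch (function name ours):

```python
def aer(sure, possible, predicted):
    """Alignment Error Rate (Och and Ney, 2000).

    sure and possible are gold link sets (sure is a subset of possible);
    all arguments are sets of (source_index, target_index) pairs.
    """
    a = set(predicted)
    return 1.0 - (len(a & sure) + len(a & possible)) / (len(a) + len(sure))
```

For layer selection without gold links, one would call `aer(links_fwd, links_fwd, links_bwd)` for every pair of decoder layers and pick the minimum.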
Each cell reports BLEU-C / CSR / BLEU / Time (s).

| Beam Size | Method | de→en | en→fr | ro→en | en→hi | ja→en |
|---|---|---|---|---|---|---|
| 5 | No constraints | 0.0 / 5.0 / 32.9 / 78 | 0.0 / 8.7 / 34.6 / 61 | 0.0 / 8.4 / 33.3 / 45 | 0.0 / 5.6 / 19.7 / 18 | 0.0 / 7.9 / 19.1 / 221 |
| 5 | NAIVEATT | 28.9 / 86.2 / 36.7 / 127 | 36.7 / 88.6 / 38.0 / 87 | 32.9 / 91.8 / 36.3 / 88 | 23.0 / 89.9 / 23.9 / 25 | 15.1 / 77.0 / 20.3 / 398 |
| 5 | PRIORATT | 35.3 / 93.0 / 37.7 / 136 | 42.2 / 94.7 / 38.6 / 89 | 36.0 / 91.6 / 37.0 / 89 | 27.6 / 91.7 / 24.7 / 26 | 16.8 / 80.2 / 20.6 / 353 |
| 5 | SHIFTATT | 41.0 / 96.7 / 38.7 / 268 | 45.2 / 93.8 / 38.4 / 167 | 39.2 / 94.4 / 37.2 / 160 | 23.8 / 81.8 / 22.0 / 42 | 15.1 / 72.6 / 19.3 / 664 |
| 5 | SHIFTAET | 43.1 / 97.6 / 39.1 / 291 | 46.5 / 94.8 / 38.6 / 165 | 40.8 / 94.7 / 37.5 / 163 | 24.5 / 83.6 / 22.1 / 44 | 18.0 / 76.5 / 19.6 / 583 |
| 5 | POSTALN | 42.7 / 97.3 / 39.0 / 252 | 46.1 / 93.9 / 38.5 / 151 | 39.8 / 93.5 / 37.3 / 141 | 23.3 / 79.7 / 21.7 / 39 | 17.9 / 75.3 / 19.6 / 469 |
| 5 | VDBA | 39.6 / 99.4 / 37.8 / 203 | 45.9 / 99.5 / 38.5 / 109 | 36.6 / 99.2 / 36.7 / 117 | 27.3 / 96.6 / 24.2 / 37 | 22.1 / 96.9 / 20.9 / 397 |
| 5 | Align-VDBA* | 40.3 / 99.0 / 38.0 / 244 | 47.4 / 99.3 / 38.7 / 132 | 37.6 / 99.7 / 36.8 / 139 | 27.2 / 95.6 / 24.1 / 46 | 22.5 / 97.2 / 21.0 / 460 |
| 5 | Align-VDBA | 41.3 / 98.8 / 38.2 / 236 | 48.0 / 98.9 / 38.7 / 128 | 42.0 / 96.6 / 37.5 / 134 | 28.2 / 91.3 / 24.7 / 45 | 22.6 / 93.9 / 21.2 / 445 |
| 10 | No constraints | 0.0 / 4.6 / 32.9 / 87 | 0.0 / 8.7 / 34.8 / 64 | 0.0 / 8.8 / 33.4 / 47 | 0.0 / 6.3 / 19.7 / 21 | 0.0 / 8.8 / 18.9 / 237 |
| 10 | NAIVEATT | 28.7 / 86.1 / 36.6 / 147 | 36.5 / 88.0 / 38.3 / 93 | 33.3 / 92.3 / 36.5 / 99 | 22.5 / 88.4 / 23.6 / 27 | 15.1 / 75.9 / 20.2 / 315 |
| 10 | PRIORATT | 35.0 / 92.8 / 37.6 / 159 | 42.1 / 94.4 / 38.9 / 97 | 36.0 / 91.2 / 37.2 / 100 | 27.2 / 91.5 / 24.4 / 28 | 16.7 / 79.7 / 20.4 / 326 |
| 10 | SHIFTATT | 41.0 / 96.6 / 38.7 / 443 | 45.0 / 93.5 / 38.7 / 239 | 39.2 / 94.2 / 37.4 / 241 | 23.2 / 78.7 / 21.9 / 58 | 15.2 / 72.7 / 19.3 / 567 |
| 10 | SHIFTAET | 43.1 / 97.5 / 39.1 / 458 | 46.6 / 94.3 / 39.0 / 235 | 40.8 / 94.4 / 37.6 / 263 | 24.3 / 80.2 / 22.0 / 62 | 18.1 / 75.9 / 19.7 / 596 |
| 10 | POSTALN | 42.7 / 97.2 / 39.0 / 399 | 46.3 / 94.1 / 38.7 / 218 | 40.0 / 93.5 / 37.4 / 226 | 23.8 / 79.0 / 22.0 / 47 | 18.2 / 75.7 / 19.7 / 460 |
| 10 | VDBA | 44.5 / 98.9 / 38.5 / 293 | 51.9 / 98.5 / 39.5 / 160 | 43.1 / 99.1 / 37.9 / 165 | 29.8 / 92.3 / 24.5 / 49 | 24.3 / 95.6 / 21.6 / 494 |
| 10 | Align-VDBA | 44.5 / 98.6 / 38.6 / 357 | 52.9 / 98.4 / 39.7 / 189 | 44.1 / 98.9 / 38.1 / 203 | 30.5 / 91.5 / 24.7 / 70 | 25.1 / 95.5 / 21.8 / 630 |
Table 9: Lexically Constrained Translation Results with different beam sizes. All numbers are averages over 5 randomly sampled constraint sets; running times are in seconds. Align-VDBA* denotes Align-VDBA without alignment probability based beam allocation (i.e. with threshold set to 0).
| Layer | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| 1 | 65.5 | 55.8 | 56.1 | 95.2 | 94.6 | 96.6 |
| 2 | 59.2 | 47.5 | 44.5 | 95.1 | 91.9 | 95.8 |
| 3 | 62.6 | 52.1 | 48.3 | 93.7 | 91.4 | 95.2 |
| 4 | 88.6 | 83.3 | 82.1 | 89.9 | 88.0 | 90.3 |
| 5 | 91.6 | 87.7 | 88.5 | 91.4 | 88.8 | 90.2 |
| 6 | 93.5 | 91.1 | 92.5 | 92.5 | 90.5 | 90.7 |
Table 13: AER between en→hi and hi→en SHIFTATT alignments on the validation set for En-Hi
| Layer | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| 1 | 93.5 | 90.0 | 94.4 | 92.2 | 95.1 | 95.1 |
| 2 | 86.5 | 58.7 | 86.9 | 69.4 | 87.2 | 86.2 |
| 3 | 87.4 | 59.4 | 87.1 | 69.1 | 87.1 | 86.2 |
| 4 | 89.1 | 69.1 | 85.9 | 74.2 | 84.9 | 85.4 |
| 5 | 93.4 | 88.5 | 89.1 | 87.1 | 86.8 | 88.1 |
| 6 | 93.5 | 89.4 | 90.0 | 88.1 | 87.7 | 88.7 |
Table 14: AER between ja→en and en→ja SHIFTATT alignments on the validation set for Ja-En
| Method | de→en | en→fr | ro→en |
|---|---|---|---|
| No constraints | 0.0001* | 0.0001* | 0.0001* |
| NAIVEATT | 0.0001* | 0.0001* | 0.0001* |
| PRIORATT | 0.0001* | 0.0001* | 0.0001* |
| SHIFTATT | 0.1700 | 0.0001* | 0.0001* |
| SHIFTAET | 0.0015* | 0.0001* | 0.0018* |
| POSTALN | 0.0032* | 0.0001* | 0.0003* |
| VDBA | 0.2666 | 0.0020* | 0.0229* |
Table 10: $p$-values from paired bootstrap resampling tests with 10000 bootstrap samples for BLEU on Table 3 datasets for beam size 10. Tests are performed with respect to Align-VDBA. * denotes a statistically significant difference from Align-VDBA at significance level 0.05 (p-value < 0.05).
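The significance test used here and in Table 12 is paired bootstrap resampling (Koehn, 2004): resample the test set with replacement many times and count how often one system fails to beat the other. A minimal sketch, assuming per-sentence scores are available and using summed scores in place of recomputing corpus BLEU from resampled sufficient statistics (a simplification, not the exact sacrebleu procedure):

```python
import random

def paired_bootstrap_p(scores_a, scores_b, n_samples=10000, seed=0):
    """One-sided p-value estimate for "system B is not better than system A".

    scores_a / scores_b: per-sentence scores of the two systems on the
    same test set, aligned by sentence index.
    """
    rng = random.Random(seed)
    n, losses = len(scores_a), 0
    for _ in range(n_samples):
        idx = [rng.randrange(n) for _ in range(n)]  # resample with replacement
        if sum(scores_b[i] for i in idx) <= sum(scores_a[i] for i in idx):
            losses += 1  # B failed to beat A on this resample
    return losses / n_samples
```

A p-value below 0.05 from this procedure corresponds to the starred entries in the tables.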
Each cell reports BLEU-C / CSR / BLEU / Time (s).

| Beam Size | Method | IATE.414 | Wiktionary.727 |
|---|---|---|---|
| 5 | No constraints | 27.9 / 76.6 / 29.7 / 134 | 26.3 / 72.0 / 29.9 / 217 |
| 5 | NAIVEATT | 29.2 / 96.9 / 29.2 / 175 | 29.0 / 95.3 / 29.1 / 341 |
| 5 | PRIORATT | 31.2 / 97.1 / 29.7 / 198 | 32.2 / 95.9 / 29.9 / 306 |
| 5 | SHIFTATT | 34.9 / 96.7 / 29.9 / 355 | 35.3 / 96.5 / 30.0 / 568 |
| 5 | SHIFTAET | 35.2 / 96.3 / 30.0 / 378 | 35.8 / 97.1 / 30.2 / 637 |
| 5 | POSTALN | 35.3 / 96.7 / 30.0 / 272 | 35.8 / 96.7 / 30.2 / 467 |
| 5 | VDBA | 35.3 / 98.8 / 29.8 / 258 | 35.0 / 99.2 / 30.4 / 442 |
| 5 | Align-VDBA* | 35.4 / 99.8 / 29.8 / 280 | 35.1 / 99.3 / 30.3 / 534 |
| 5 | Align-VDBA | 36.1 / 98.3 / 30.1 / 268 | 35.9 / 98.8 / 30.6 / 523 |
| 10 | No constraints | 28.3 / 77.0 / 29.7 / 113 | 26.3 / 72.4 / 29.9 / 164 |
| 10 | NAIVEATT | 28.9 / 97.3 / 29.1 / 145 | 29.2 / 95.3 / 29.1 / 269 |
| 10 | PRIORATT | 31.3 / 96.9 / 29.5 / 155 | 32.3 / 96.0 / 29.9 / 260 |
| 10 | SHIFTATT | 34.9 / 96.3 / 29.8 / 345 | 35.3 / 96.8 / 30.3 / 600 |
| 10 | SHIFTAET | 35.2 / 95.9 / 29.9 / 350 | 35.9 / 97.2 / 30.4 / 664 |
| 10 | POSTALN | 35.1 / 95.9 / 29.9 / 287 | 35.8 / 97.0 / 30.3 / 458 |
| 10 | VDBA | 37.6 / 99.8 / 30.9 / 257 | 36.9 / 99.4 / 30.9 / 451 |
| 10 | Align-VDBA | 37.5 / 99.8 / 30.9 / 353 | 37.2 / 99.5 / 31.1 / 540 |
| 20 | No constraints | 28.4 / 77.2 / 29.9 / 103 | 26.3 / 72.1 / 30.0 / 177 |
| 20 | NAIVEATT | 28.9 / 96.9 / 29.0 / 188 | 29.1 / 95.4 / 29.3 / 325 |
| 20 | PRIORATT | 31.3 / 96.9 / 29.6 / 203 | 32.6 / 96.4 / 30.1 / 338 |
| 20 | SHIFTATT | 34.7 / 96.1 / 29.8 / 528 | 35.3 / 96.8 / 30.2 / 892 |
| 20 | SHIFTAET | 35.0 / 95.8 / 29.9 / 539 | 36.1 / 97.3 / 30.4 / 923 |
| 20 | POSTALN | 35.1 / 96.1 / 29.9 / 420 | 36.0 / 97.0 / 30.4 / 751 |
| 20 | VDBA | 37.8 / 99.8 / 30.9 / 381 | 37.4 / 99.2 / 31.2 / 680 |
| 20 | Align-VDBA | 37.9 / 99.8 / 30.9 / 465 | 38.0 / 99.5 / 31.3 / 818 |
Table 11: Additional results for the real-world constraints for all methods and different beam sizes. Align-VDBA* denotes Align-VDBA without alignment probability based beam allocation.

Algorithm 2: $k$-best extraction with argmax replacement decoding.
Inputs: A $k \times |V_T|$ matrix of scores (for all tokens up to the currently decoded ones); $k$ beam states.

1: function SEARCH_STEP(beam, scores)
2:     next_tokens, next_scores ← ARGMAX_K(scores, k=2, dim=1)    ▷ best 2 tokens for each beam
3:     candidates ← []
4:     for 0 ≤ h < 2·k do
5:         candidate ← beam[h/2]
6:         candidate.tokens.append(next_tokens[h/2, h%2])
7:         candidate.score ← next_scores[h/2, h%2]
8:         candidates.append(candidate)
9:     attention ← ATTENTION(candidates)
10:    aligned_x ← ARGMAX(attention, dim=1)
11:    for 0 ≤ h < 2·k do
12:        if aligned_x[h] $\in \mathcal{C}_i^x$ for some $i$ and not candidates[h].inprogress then    ▷ start constraint
13:            candidates[h].inprogress ← True
14:            candidates[h].constraintNum ← $i$
15:            candidates[h].tokenNum ← 0
16:        if candidates[h].inprogress then    ▷ replace token with constraint tokens
17:            consNum ← candidates[h].constraintNum
18:            candidates[h].tokens[-1] ← constraints[consNum][candidates[h].tokenNum]
19:            candidates[h].tokenNum ← candidates[h].tokenNum + 1
20:            if constraints[consNum].length == candidates[h].tokenNum then
21:                candidates[h].inprogress ← False    ▷ finish current constraint
22:    candidates ← REMOVE_DUPLICATE(candidates)
23:    newBeam ← TOP_K(candidates)
24:    return newBeam
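Lines 11-21 of Algorithm 2 (the replacement step proper) can be rendered in Python roughly as follows; the candidate dictionary layout and the helper name are our own illustrative choices, not the paper's code:

```python
def apply_constraint_replacement(cand, aligned_src, span_to_cons, constraints):
    """Replace one candidate's newest token with constraint tokens.

    cand: dict with keys 'tokens', 'inprogress', 'cons', 'tok_num'.
    aligned_src: argmax-aligned source position of the newly added token.
    span_to_cons: maps source positions inside constraint spans to the
    constraint index; constraints: target-side constraint token lists.
    """
    i = span_to_cons.get(aligned_src)
    if i is not None and not cand["inprogress"]:      # start a constraint
        cand.update(inprogress=True, cons=i, tok_num=0)
    if cand["inprogress"]:                            # overwrite the last token
        c = cand["cons"]
        cand["tokens"][-1] = constraints[c][cand["tok_num"]]
        cand["tok_num"] += 1
        if cand["tok_num"] == len(constraints[c]):    # constraint finished
            cand["inprogress"] = False
    return cand
```

Calling this once per decoding step forces a started constraint (e.g. the two-token "New York") to be emitted to completion before normal decoding resumes.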
| Method | IATE.414: BLEU (μ ± 95% CI) | p-value | Wiktionary.727: BLEU (μ ± 95% CI) | p-value |
|---|---|---|---|---|
| Align-VDBA | 30.1 (30.0 ± 1.7) | | 30.6 (30.6 ± 1.2) | |
| No constraints | 29.7 (29.7 ± 1.7) | 0.1059 | 29.9 (29.9 ± 1.2) | 0.0054* |
| NAIVEATT | 29.2 (29.2 ± 1.7) | 0.0121* | 29.1 (29.1 ± 1.2) | 0.0001* |
| PRIORATT | 29.7 (29.6 ± 1.6) | 0.0829 | 29.9 (29.8 ± 1.2) | 0.0041* |
| SHIFTATT | 29.9 (29.8 ± 1.6) | 0.1827 | 30.0 (30.0 ± 1.2) | 0.0229* |
| SHIFTAET | 30.0 (29.9 ± 1.6) | 0.2824 | 30.2 (30.2 ± 1.2) | 0.0588 |
| POSTALN | 30.0 (30.0 ± 1.6) | 0.3813 | 30.2 (30.2 ± 1.2) | 0.0646 |
| VDBA | 29.8 (29.7 ± 1.6) | 0.0849 | 30.4 (30.4 ± 1.2) | 0.0960 |
Table 12: Paired bootstrap resampling tests with 10000 bootstrap samples for BLEU on Dinu et al. (2019) datasets for beam size 5. * denotes a statistically significant difference from Align-VDBA at significance level 0.05 (p-value < 0.05). \ No newline at end of file diff --git a/accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/images.zip b/accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..2ca87b651245783e54099186d2dee2f713eda541 --- /dev/null +++ b/accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f1de25ed45d623b2e00b370aa5e31122353ea00db5ac7c22c00dc25d52b99c68 +size 1097251 diff --git a/accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/layout.json b/accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..c4b946b3fb1526476262da466402e5c18a6829a3 --- /dev/null +++ b/accurateonlineposterioralignmentsforprincipledlexicallyconstraineddecoding/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f6f484f523440918794737c0380e4ef146a8a2af1bb98bd39e206c39ba3ff53e +size 550967 diff --git a/achievingconversationalgoalswithunsupervisedposthocknowledgeinjection/978b9df5-66d6-42b3-b508-717e0470bb99_content_list.json b/achievingconversationalgoalswithunsupervisedposthocknowledgeinjection/978b9df5-66d6-42b3-b508-717e0470bb99_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..0c7e0f1129fa65e1f6d671ee5112a0da2b7583e0 --- /dev/null +++ b/achievingconversationalgoalswithunsupervisedposthocknowledgeinjection/978b9df5-66d6-42b3-b508-717e0470bb99_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid
sha256:6022bc848ad7f268a099b52bb4298add09ecf1440ef71ebfee897aaeea75deea +size 95797 diff --git a/achievingconversationalgoalswithunsupervisedposthocknowledgeinjection/978b9df5-66d6-42b3-b508-717e0470bb99_model.json b/achievingconversationalgoalswithunsupervisedposthocknowledgeinjection/978b9df5-66d6-42b3-b508-717e0470bb99_model.json new file mode 100644 index 0000000000000000000000000000000000000000..4d053a04a9145efa5d65c72a9912a0603382c132 --- /dev/null +++ b/achievingconversationalgoalswithunsupervisedposthocknowledgeinjection/978b9df5-66d6-42b3-b508-717e0470bb99_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2cccc7ba31469633572054dde38c532f14c9b5babe5e67946ec55f1b5b76fa1f +size 119244 diff --git a/achievingconversationalgoalswithunsupervisedposthocknowledgeinjection/978b9df5-66d6-42b3-b508-717e0470bb99_origin.pdf b/achievingconversationalgoalswithunsupervisedposthocknowledgeinjection/978b9df5-66d6-42b3-b508-717e0470bb99_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..97e5c53aaca2c675c961a91dbb0d1139844026eb --- /dev/null +++ b/achievingconversationalgoalswithunsupervisedposthocknowledgeinjection/978b9df5-66d6-42b3-b508-717e0470bb99_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e84029a1ea6a23a5b8a4677043f209455eb092f65707eada0ab510dcf90df80d +size 3142005 diff --git a/achievingconversationalgoalswithunsupervisedposthocknowledgeinjection/full.md b/achievingconversationalgoalswithunsupervisedposthocknowledgeinjection/full.md new file mode 100644 index 0000000000000000000000000000000000000000..9dfeca8405e979b1ed42592a0dc290f5a505db52 --- /dev/null +++ b/achievingconversationalgoalswithunsupervisedposthocknowledgeinjection/full.md @@ -0,0 +1,392 @@ +# Achieving Conversational Goals with Unsupervised Post-hoc Knowledge Injection + +Bodhisattwa Prasad Majumder\* Harsh Jhamtani\* Taylor Berg-Kirkpatrick\* Julian McAuley\* + +\*Department of Computer Science and 
Engineering, UC San Diego {bmajumde, tberg, jmcauley}@eng.ucsd.edu School of Computer Science, Carnegie Mellon University jharsh@cs.cmu.edu + +# Abstract + +A limitation of current neural dialog models is that they tend to suffer from a lack of specificity and informativeness in generated responses, primarily due to dependence on training data that covers a limited variety of scenarios and conveys limited knowledge. One way to alleviate this issue is to extract relevant knowledge from external sources at decoding time and incorporate it into the dialog response. In this paper, we propose a post-hoc knowledge-injection technique where we first retrieve a diverse set of relevant knowledge snippets conditioned on both the dialog history and an initial response from an existing dialog model. We construct multiple candidate responses, individually injecting each retrieved snippet into the initial response using a gradient-based decoding method, and then select the final response with an unsupervised ranking step. Our experiments in goal-oriented and knowledge-grounded dialog settings demonstrate that human annotators judge the outputs from the proposed method to be more engaging and informative compared to responses from prior dialog systems. We further show that knowledge-augmentation promotes success in achieving conversational goals in both experimental settings. + +# 1 Introduction + +Generic responses which lack specificity have been a major issue in existing dialog models (Hosseini-Asl et al., 2020; Dinan et al., 2019a). The issue in part stems from bottlenecks in dialog models due to a limited scope of scenarios and access to limited knowledge available during training. 
On the other hand, encoding all possible world knowledge at training time is not feasible, and even undesirable in cases where knowledge sources are dynamically varying (Ghazvininejad et al., 2018; Majumder et al., 2020b; Zhao et al., 2020; Bruyn et al., 2020; Kim et al., 2020; Prabhumoye et al., 2021). One possible approach is to incorporate relevant knowledge at decoding time. For example, in Figure 1, the user is seeking options for a fun activity around Cambridge. While the initial dialog response suggests watching a movie as an option, it does not provide any information behind that choice.

![](images/605870a9693eb116ad5ee6d555907df83a639f40f60ac3df4aa918b444e52a7f.jpg)
Figure 1: Augmenting the initial response from an existing dialog model with relevant external knowledge leads to more engaging and informative responses, improving the success in achieving the conversational goal (here, finding a fun activity).

We propose and evaluate an approach for unsupervised knowledge injection into a dialog model's response at decoding time $^{1}$ —not addressed in any previous work. We first sample a response from the model (trained on dialog data) conditioned on the dialog context. Next, we utilize the dialog context and the sampled response to query external knowledge sources. Finally, the retrieved knowledge is used to construct a more informative and engaging response (Figure 1). A major advantage of such post-hoc knowledge injection is its flexibility in adding newer knowledge sources, especially where the success of achieving conversational goals relies upon the availability of relevant knowledge. Post-hoc injection also promotes efficiency in NLP applications (Schwartz et al., 2020; Strubell et al., 2019): it mitigates the need to retrain dialog models to accommodate dynamically evolving knowledge.
We experiment with two types of knowledge sources: language models, which we treat as parametric knowledge bases (Petroni et al., 2019; Brown et al., 2020); and user review datasets such as Yelp reviews (Hajas et al., 2014) as non-parametric knowledge sources ( $\S 2$ ). Since it is possible to gather a large amount of related knowledge given a query, we select a relevant and diverse (estimated via information-theoretic measures) subset of knowledge snippets using an unsupervised method ( $\S 3.1$ ). Then, a gradient-based inference approach is used to construct an updated response that incorporates the selected knowledge ( $\S 3.2$ ). Note that our framework does not require retraining the existing dialog model—it only relies upon updating the model's output hidden states at decoding time for unsupervised knowledge injection.

![](images/66bc2899ce4157ed224430910be2ccf6278a61be564a963ca285740a7f897a92.jpg)
Figure 2: Pipeline of POKI: It first retrieves post-hoc knowledge from external sources based on dialog history and an initial response from a dialog model. Then the most relevant and diverse knowledge snippets are selected from the retrieved set. Each selected snippet is individually combined with the initial response through constrained decoding to generate a candidate final response. Finally, the final response is selected via an unsupervised ranking step. Note that POKI requires no additional training.

![](images/67e0570ee422b6ddb70d172e9ce39cb2bf584dcc4167f201aa7c4b43e9eba63b.jpg)

![](images/0a5382d41d6deff58d6c5e173417548c284df2a78b27a80a7c492f82609e6fe5.jpg)

We experiment with two scenarios: goal-oriented and knowledge-grounded dialog, where the training data covers only a fraction of the needed knowledge. Automatic evaluation reveals that our method is capable of generating highly diverse responses in both settings.
In some cases, the generated response shows high overlap with the original target response, showing that our unsupervised method bridges the knowledge gap between available knowledge and human-written responses present in the existing dialog corpus. An extensive human evaluation confirms that generated responses are indeed engaging, interesting, and human-like without any loss in fluency.

To pinpoint the usefulness of knowledge injection in the above settings, we design a real-time study (§5.3) where users interact with our system to reach a conversational goal (e.g. planning a holiday or knowing more about the solar system). We find that external knowledge enables users to achieve their goals more efficiently. Additionally, we observe that our approach of sub-selecting relevant but diverse knowledge leads to responses that promote success in achieving conversational goals.

# 2 Post-hoc Knowledge for Dialog

Our goal is to construct a dialog response by injecting knowledge (from external textual sources) at decoding time, without having to retrain the models. Consider a dialog model $\mathcal{M}$ from which we can sample a dialog response $x^{d}$ given a dialog history $\mathcal{H}$ . We shall refer to the response $x^{d}$ sampled from such a model without any decoding-time knowledge injection as the initial response.

However, as motivated earlier, samples from such a dialog model often lack detail. To improve such responses, we retrieve and incorporate relevant external knowledge $k$ into the initial response. To achieve our goal, we construct a query using both the dialog history $\mathcal{H}$ and the initial response $x^{d}$ , and gather a relevant knowledge candidate $k$ from a knowledge source $\mathcal{K}$ . The retrieved snippet can provide useful information to the end-user to achieve the conversational goal (see §5.3). We explore both parametric (e.g. querying a language model) and non-parametric (e.g.
deterministic retrieval using word-overlap) ways to obtain post-hoc knowledge.

# 2.1 Parametric knowledge sources

Pretrained language models (PTLM) are typically trained with a vast amount of text that spans a diverse range of domains. Petroni et al. (2019); Brown et al. (2020) showed that such PTLMs can be used as a source of knowledge when queried with suitable textual prompts (e.g. Seattle is famous for ). To use PTLMs in our use-case, we construct useful prompts from the dialog history and the initial response. We assemble simple prompts inspired by various knowledge-seeking situations in dialog (Shwartz et al., 2020) such as [KP] is famous for , Here is what I know about [KP]: ,

where $[\mathrm{KP}]$ is a key-phrase $^2$ extracted from dialog context. We use gpt2-large as the PTLM. For example, a query "Here is what I know about fun things around Cambridge:" results in "There are plenty of museums to visit around Cambridge. If you love hiking, you can enjoy the trails alongside the river..." as shown in Figure 1. A complete list of prompts is provided in Appendix B. We finally rank each knowledge snippet $k$ using the likelihood obtained from the PTLM for a concatenated input of $k$ and the dialog history, and choose the most likely one.

# 2.2 Non-parametric knowledge sources

External knowledge in the form of a text corpus can be used as a non-parametric knowledge source available at decoding time. Compared to parametric knowledge sources, such sources do not generate text as knowledge snippets, but offer the advantage of the high quality and reliability of human-written text. We consider the dialog history and the initial response as a query to retrieve relevant knowledge instances from the corpus. Next, we identify the top relevant instances in the given corpus with respect to the constructed query using cosine similarity on TF-IDF based representations (Robertson et al., 1995).
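The TF-IDF retrieval just described can be sketched from scratch as below; this is a minimal stand-in (whitespace tokenization, raw-count TF, and the function name are our own assumptions), not the paper's implementation:

```python
import math
from collections import Counter

def tfidf_retrieve(query, corpus, top_k=2):
    """Rank corpus snippets by TF-IDF cosine similarity to the query."""
    docs = [doc.lower().split() for doc in corpus + [query]]
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    idf = {t: math.log(n / df[t]) for t in df}

    def vec(doc):
        tf = Counter(doc)
        return {t: tf[t] * idf[t] for t in tf}

    def cos(u, v):
        dot = sum(u[t] * v.get(t, 0.0) for t in u)
        norm = (math.sqrt(sum(x * x for x in u.values()))
                * math.sqrt(sum(x * x for x in v.values())))
        return dot / norm if norm else 0.0

    q = vec(docs[-1])
    scored = sorted(((cos(vec(d), q), i) for i, d in enumerate(docs[:-1])),
                    reverse=True)
    return [corpus[i] for _, i in scored[:top_k]]
```

With the running example, a query about fun things around Cambridge surfaces the Cambridge-museum snippet ahead of unrelated text.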
# 3 Unsupervised Knowledge Injection in Generated Dialog

Effectively utilizing the retrieved knowledge snippets to construct an enriched dialog response encompasses two major challenges. Firstly, it is not practical to use potentially hundreds of knowledge snippets obtained from the retrieval step for a single response generation. Thus, we need to find a relevant but diverse subset of the snippets. Secondly, the dialog model $\mathcal{M}$ is trained to condition only on the dialog context, and not on the external knowledge. Hence, to leverage the knowledge snippets, we need a decoding strategy to rewrite the initial response $x^{d}$ such that the resulting final response $x^{f}$ closely follows the knowledge snippet to be injected without a loss in fluency and consistency. Thus, our method requires no additional training and only assumes a language model trained on dialog context (i.e. $\mathcal{M}$ ). We refer to our proposed framework (Figure 2) as POKI (Post-hoc Knowledge Injection in Generated Dialog).

# 3.1 Relevance-Redundancy Tradeoff for Knowledge Selection

At each turn, we obtain $N$ knowledge snippets from both the parametric and non-parametric sources. We wish to select a subset of $B$ (out of $N$ ) relevant but diverse knowledge snippets.

We define the relevance score of a snippet $k_{i}$ with respect to the dialog history $\mathcal{H}$ using pointwise mutual information (PMI) as follows:

$$
\mathrm{REL}_{i} = \mathrm{PMI}(k_{i}, \mathcal{H}) = \log\left(\frac{p(\mathcal{H} \mid k_{i})}{p(\mathcal{H})}\right).
$$

Thus, a high PMI score would imply a larger semantic similarity between the snippet $k_{i}$ and $\mathcal{H}$ . To account for redundancy between a snippet pair $k_{i}, k_{j}$ we again use the PMI score as follows:

$$
\mathrm{RED}_{ij,\, j > i} = \mathrm{PMI}(k_{i}, k_{j}) = \log\left(\frac{p(k_{j} \mid k_{i})}{p(k_{j})}\right).
$$

The redundancy score is symmetric, i.e. $\mathrm{RED}_{ij} = \mathrm{RED}_{ji}$ , as PMI is a symmetric measure.

We estimate the probabilities (both conditional and marginal) $p(\cdot)$ in the above equations using the GPT2 language model, following past work (Padmakumar and He, 2021). The PMI measure is often considered better than other n-gram-based overlap metrics for measuring the degree of association between two sentences (Kedzie et al., 2018; Padmakumar and He, 2021), since semantically similar phrases occurring in both sentences can easily be missed by overlap-based metrics.

Selection via Determinantal Point Processes. To select $B$ knowledge snippets out of $N$ with a relevance-redundancy trade-off, we use a subset selection process named Determinantal Point Process (DPP) (Kulesza and Taskar, 2011). A DPP employs a non-uniform selection that assigns low probability to subsets (here, of knowledge snippets) that are less diverse, by modeling the repulsive correlation between independently occurring datapoints (see Figure 2).

We build an $N\times N$ kernel matrix $\mathcal{D}$ , which is real, symmetric and positive semi-definite. The diagonal entries $\mathcal{D}_{ii}$ are populated by the squared relevance score of the $i$-th snippet, $\mathrm{REL}_i^2$ , and the off-diagonal entries $\mathcal{D}_{ij}$ are $\beta \times$ the squared redundancy scores $\mathrm{RED}_{ij}^2$ . We adjust $\beta$ in such a way that $\mathcal{D}$ always remains positive semi-definite (more details in Wilhelm et al. (2018)). To select a subset of size $B$ , a DPP assigns a probability of sampling such a subset proportional to the determinant of the submatrix $\mathcal{D}_B$ of $\mathcal{D}$ , constructed using the indices of the subsetted items. The DPP probability is geometrically related to the volume of the parallelepiped spanned by the selected knowledge snippets.
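The determinant-based selection can be sketched with a greedy MAP approximation in the spirit of Wilhelm et al. (2018); the kernel construction follows the description above (diagonal = squared relevance, off-diagonal = β × squared redundancy), while the naive determinant routine and all names are our own:

```python
def greedy_dpp(rel, red, b, beta=0.5):
    """Greedy MAP subset selection from a DPP kernel.

    rel: per-snippet relevance scores (kernel diagonal = rel[i] ** 2);
    red: symmetric redundancy matrix (off-diagonal = beta * red[i][j] ** 2,
    with beta assumed small enough to keep the kernel PSD).
    Returns indices of b snippets trading relevance against redundancy.
    """
    n = len(rel)
    kernel = [[rel[i] ** 2 if i == j else beta * red[i][j] ** 2
               for j in range(n)] for i in range(n)]

    def det(m):
        # Naive Gaussian elimination; fine for the small N used here.
        m = [row[:] for row in m]
        d = 1.0
        for c in range(len(m)):
            if abs(m[c][c]) < 1e-12:
                return 0.0
            d *= m[c][c]
            for r in range(c + 1, len(m)):
                f = m[r][c] / m[c][c]
                for x in range(c, len(m)):
                    m[r][x] -= f * m[c][x]
        return d

    selected = [max(range(n), key=lambda i: rel[i])]  # most relevant first
    while len(selected) < b:
        def gain(j):
            idx = selected + [j]
            return det([[kernel[p][q] for q in idx] for p in idx])
        selected.append(max((j for j in range(n) if j not in selected), key=gain))
    return selected
```

With three snippets where the first two are highly redundant, the greedy pass picks the most relevant one and then the diverse third one, rather than the redundant runner-up.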
Diverse knowledge snippets tend to be orthogonal in their representation space and hence span a larger volume (Kulesza and Taskar, 2012).

Choosing a $B$-size submatrix from the $N$-size $\mathcal{D}$ is a combinatorial problem and can become prohibitively costly when $N$ is very high. Hence, we use a greedy method (Wilhelm et al., 2018) where we initialize the selection with the most relevant $k_{i}$ and subsequently select the next $k_{j}$ that maximizes the determinant of the resultant submatrix.

# 3.2 Gradient-based Constrained Decoding for Knowledge Injection

Upon selecting $B$ knowledge snippets, we want to individually inject each knowledge snippet into $x^{d}$ to construct a candidate final response $x^{f}$ at inference time.

Previous work has addressed the problem of unsupervised modification of already-generated text using gradient-based decoding (Dathathri et al., 2020; Qin et al., 2020) that employs an iterative procedure consisting of a forward and a backward pass. The forward pass on the generative model (here, $\mathcal{M}$ ) encourages fluency of the generated text while the backward pass performs gradient ascent on certain desired constraints. Note that due to the discrete nature of $x^{d}$ , it is not possible to directly update it via back-propagation. Therefore, we maintain the sequence of hidden representations of each output token as $z$ from the dialog model. Each output token $x_{(t)}^{d}$ is realized via $p(x_{(t)}^{d}) \sim \mathrm{softmax}(Wz_{(t)} / \tau)$ , where $\tau$ is the temperature hyperparameter, $W$ is the output embedding matrix (shared with the input), and $Wz_{(t)} \in \mathcal{R}^V$ ( $V$ is the size of the vocabulary).

Constraints. Following Majumder et al. (2021a), we define a knowledge fidelity objective that encourages $x^{f}$ to be minimally different from the knowledge snippet $k$ .
We achieve this by minimizing the cross entropy loss (CE) between knowledge tokens $k_{(1)},\ldots ,k_{(T)}$ as labels and $Wz_{(1)},\ldots ,Wz_{(T)}$ as the logits.

We further notice that injected knowledge can influence the generation in such a way that it contradicts responses uttered during previous turns. Hence, we also want $x^{f}$ to be entailed by the dialog history $\mathcal{H}$ . We build an entailment classifier $\theta(z, \mathcal{H})$ that predicts the probability of $x^f$ (ideally, the hidden representation $z$ of $x^f$ ) entailing $\mathcal{H}$ . The classifier $\theta(z, \mathcal{H})$ is a bag-of-words classification layer over hidden states $z$ from $\mathcal{M}$ , fine-tuned using the DNLI dataset (Welleck et al., 2019) to predict whether the current response is entailed by previous responses or not.

Decoding. In the subsequent forward and backward passes, the hidden representation $z$ is gradually perturbed via gradient ascent on the respective objectives. During the backward pass, the objective with constraints is

$$
\mathcal{L}(\mathcal{H}, k; z) = \alpha \log \theta(z, \mathcal{H}) - \lambda \operatorname{CE}(k, Wz)
$$

with hyperparameters $\alpha$ and $\lambda$ . We use back-propagation to update $z$ with the gradient $\nabla_{z}\mathcal{L}(\mathcal{H},k;z)$ while the parameters of $\mathcal{M}$ remain fixed. The updated latent representations of $z$ after the backward pass are denoted as $z^{bw}$ .

A forward pass with $\mathcal{M}$ is required to regularize the hidden states $z$ toward the original dialog model objective to obtain $z^{fw}$ . Corresponding to the $t^{\mathrm{th}}$ token, the hidden states for the $t + 1^{\mathrm{th}}$ time step are computed via a weighted addition of backward and forward hidden states, i.e., $z_{(t + 1)} = \gamma \times z_{(t)}^{bw} + (1 - \gamma)\times z_{(t)}^{fw}$ where $\gamma \in (0,1)$ is a hyperparameter.
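The weighted addition of backward and forward hidden states is a per-dimension interpolation; a toy sketch with plain Python lists follows (the function name is ours; the paper's $\gamma = 0.45$ is used as the default):

```python
def mix_hidden_states(z_bw, z_fw, gamma=0.45):
    """Combine constraint-perturbed (backward) and fluency-regularized
    (forward) hidden states: z = gamma * z_bw + (1 - gamma) * z_fw.

    z_bw, z_fw: per-token hidden-state vectors (lists of floats).
    """
    return [[gamma * b + (1.0 - gamma) * f for b, f in zip(tb, tf)]
            for tb, tf in zip(z_bw, z_fw)]
```

In the real system the same operation is applied to model tensors between each backward and forward pass before re-sampling tokens from $\mathrm{softmax}(Wz/\tau)$.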
+ +During generation, we start by sampling the initial response $x^{d}$ with greedy decoding from $\mathcal{M}$ . The hidden states $z$ (of $x^{d}$ ) are iteratively updated by alternate backward and forward passes. The final response is sampled as $x^{f} \sim \mathrm{softmax}(Wz / \tau)$ . The number of iterations $(= 5)$ and the $\gamma$ $(= 0.45)$ were chosen by maximizing the Z-normalized sum of dialog model perplexity and linguistic diversity (% of distinct bigrams) in a greedy hyperparameter search. More details are in Appendix B. + +# 3.3 Unsupervised Ranking of Candidate Final Responses + +Several previous works often over-generate and use an additional ranking step in order to select the final candidate in unsupervised text generation (Qin et al., 2020; Shwartz et al., 2020; Paranjape and Manning, 2021). Similarly, here we want to rank the generated candidate final responses according to the diversity of the generated text as well as the conditional likelihood of generation given the dialog history. For diversity, we measure the percentage of distinct bigrams present in the response. For conditional likelihood, we use + +
| System | Acc | BLEU | BRTSc | D-2 | ENTR |
|---|---|---|---|---|---|
| KCopy | 70.1 | 4.1 | 62.3 | 3.16 | 2.41 |
| SimpleTOD (2020) | 70.1 | 15.0 | 79.2 | 0.56 | 0.90 |
| SimpleTOD+ (2021) | 69.8 | 12.1 | 68.1 | 0.81 | 1.11 |
| Arranger (2021) | 70.2 | 12.3 | 68.5 | 0.93 | 1.15 |
| Rewriter (2021) | 70.2 | 12.1 | 69.4 | 1.03 | 1.45 |
| POKI | 71.1 | 13.7 | 74.5 | 3.78 | 2.67 |
| w/o Entailment | 69.9 | 10.9 | 67.8 | 3.67 | 2.56 |
| w/o Kw Fidelity | 70.0 | 12.3 | 71.2 | 0.95 | 1.19 |
| Gold | 100 | 100 | 100 | 0.78 | 0.86 |
the pre-trained GPT2 model to obtain the log probability when the dialog history, followed by the generated response, is passed as a concatenated input. Since these two scores can have different scales, we perform Z-normalization on the individual scores and add them to obtain a single ranking score. The highest-ranked candidate response is finally rendered to the user.

# 4 Experimental Setup

# 4.1 Scenarios and Datasets

We experiment with two dialog scenarios: goal-oriented and knowledge-grounded. Both setups are knowledge-intensive, but the training data in such setups often contains only a fraction of the needed knowledge. For the goal-oriented setting, we use the Multi-domain Wizard-of-Oz (Budzianowski et al., 2018) dataset. For knowledge-grounded dialog, we use the Wizard-of-Wikipedia (Dinan et al., 2019b) dataset. More details are in Appendix A.

Multi-domain Wizard-of-Oz (MultiWOZ) is a multi-domain dialog dataset (we use v2.0 (Hosseini-Asl et al., 2020)) consisting of goal-oriented human-human conversations. The dataset spans seven domains (restaurant, train, attraction, hotel, taxi, hospital, police) and contains 10,438 dialogs with an average of 13.68 turns. Since we do not need any training data, we only use an evaluation set (of 7K utterances).

Wizard-of-Wikipedia (WoW) is a knowledge-grounded dialog dataset that involves retrieving relevant knowledge from Wikipedia, reading and conditioning on it, and finally generating dialog responses (Dinan et al., 2019b). The dataset contains 201K utterances from 22K dialogs spanning 1,300 diverse topics, of which we use only the test set. The associated Wikipedia knowledge base has 5.4M articles and 93M sentences.

Table 1: Automatic metrics on the test set of MultiWOZ. Difference between bold and non-bold numbers is statistically significant $(p < 0.001)$.
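The candidate-ranking score from §3.3 (Z-normalized diversity plus Z-normalized likelihood) can be sketched as follows. This is a minimal sketch: `loglik` is a hypothetical stand-in for the GPT2 conditional log-probability of the history plus response, and the toy inputs are illustrative only.

```python
from statistics import mean, pstdev

def distinct_2(tokens):
    """Diversity score: fraction of distinct bigrams in a response."""
    bigrams = list(zip(tokens, tokens[1:]))
    return len(set(bigrams)) / len(bigrams) if bigrams else 0.0

def znorm(scores):
    """Z-normalize so scores on different scales can be summed."""
    mu, sd = mean(scores), pstdev(scores)
    return [(s - mu) / sd if sd > 0 else 0.0 for s in scores]

def rank_candidates(candidates, loglik):
    """Sort candidates by Z-normalized diversity + likelihood, best first.
    `loglik` stands in for the GPT2 conditional log-probability."""
    d = znorm([distinct_2(c.split()) for c in candidates])
    l = znorm([loglik(c) for c in candidates])
    combined = [di + li for di, li in zip(d, l)]
    return [c for _, c in sorted(zip(combined, candidates), reverse=True)]

cands = ["good good good good",                      # repetitive
         "the food is good",                         # diverse and short
         "the food there is fresh and very tasty"]   # diverse but long
best = rank_candidates(cands, loglik=lambda c: -len(c.split()))  # toy likelihood
```

With the toy length-penalizing likelihood, the short but diverse candidate ranks first, showing how the two Z-normalized scores trade off.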
| System | BLEU | BRTSc | D-2 | ENTR |
|---|---|---|---|---|
| KCopy | 13.4 | 74.3 | 3.64 | 3.12 |
| KGuide (2017) | 16.7 | 71.5 | 2.54 | 2.12 |
| KGround (2019) | 18.3 | 72.5 | 2.87 | 2.35 |
| BART (2020a) | 19.8 | 73.4 | 2.97 | 2.55 |
| RAG (2020b) | 19.9 | 73.1 | 1.03 | 1.45 |
| POKI | 19.4 | 76.8 | 3.65 | 3.44 |
| w/o Entailment | 18.1 | 74.2 | 3.17 | 3.39 |
| w/o Kw Fidelity | 18.8 | 73.3 | 2.75 | 2.54 |
| Gold | 100 | 100 | 2.98 | 2.59 |
Table 2: Automatic metrics on the test set of Wizard-of-Wikipedia. Difference between bold and non-bold numbers is statistically significant $(p < 0.001)$.

# 4.2 Baselines and Ablations

Baselines for MultiWOZ. For MultiWOZ, we consider several baselines for knowledge injection following Sun et al. (2021). First, we use the current state-of-the-art model for goal-oriented dialog, SimpleTOD (Hosseini-Asl et al., 2020). Sun et al. (2021) extend SimpleTOD by adding chitchat candidates to dialog histories during training (SimpleTOD+). They also propose variants that either concatenate the output of SimpleTOD with candidate chitchat (Arranger) or rewrite the response by combining the output and chitchat snippets (Rewriter). We also include a trivial baseline (KCopy), which appends the knowledge snippet $k$ retrieved by POKI to the initial response $x^{d}$.

Baselines for WoW. For WoW, we use two current-best knowledge-grounded models, KGround (Wolf et al., 2019) and BART (Lewis et al., 2020a), which concatenate the associated knowledge snippets (present in WoW) and the dialog history as inputs to generate the response with supervision. KGuide (Zhao et al., 2017) and RAG (Lewis et al., 2020b) add a knowledge selection step modeled by a latent variable before response generation, similar to the knowledge-grounded models. We also use the KCopy baseline described for MultiWOZ.

Variants of POKI. To investigate the impact of the decoding constraints in POKI, we consider two ablations: w/o Entailment and w/o Knowledge (Kw) Fidelity (§3.2). POKI uses SimpleTOD as the base dialog model in the goal-oriented scenario and BART (the state-of-the-art model for WoW) in the knowledge-grounded scenario. For all variants of POKI, we use gradient-based inference for decoding the final response.
**MultiWOZ** (each cell: win % / loss % (κ)):

| POKI vs. | SimpleTOD | Rewriter | w/o Entailment | w/o Kw Fidelity | Gold |
|---|---|---|---|---|---|
| Coherent | 93.2 / 4.4 (0.76) | 85.6 / 10.2 (0.75) | 98.7 / 0.8 (0.72) | 77.8 / 17.8 (0.78) | 26.2 / 34.4 (0.69) |
| Engaging | 94.3 / 4.5 (0.78) | 89.7 / 7.9 (0.79) | 98.7 / 0.6 (0.80) | 71.5 / 20.5 (0.80) | 42.4 / 37.4 (0.78) |
| Interesting | 92.7 / 5.4 (0.72) | 91.2 / 8.3 (0.73) | 88.6 / 8.9 (0.68) | 98.7 / 0.8 (0.75) | 49.7 / 45.6 (0.67) |
| Humanlike | 85.4 / 10.7 (0.68) | 87.4 / 7.3 (0.65) | 61.9 / 30.5 (0.71) | 81.7 / 14.0 (0.74) | 29.7 / 37.8 (0.66) |

**WoW** (each cell: win % / loss % (κ)):

| POKI vs. | RAG | BART | w/o Entailment | w/o Kw Fidelity | Gold |
|---|---|---|---|---|---|
| Coherent | 95.4 / 4.5 (0.78) | 88.5 / 9.6 (0.72) | 94.3 / 3.4 (0.68) | 83.6 / 10.7 (0.65) | 23.8 / 25.3 (0.73) |
| Engaging | 89.3 / 7.7 (0.72) | 87.8 / 8.3 (0.71) | 97.7 / 0.8 (0.70) | 71.5 / 25.4 (0.69) | 25.4 / 26.7 (0.73) |
| Interesting | 96.3 / 3.5 (0.74) | 83.3 / 9.9 (0.75) | 79.8 / 17.2 (0.70) | 93.5 / 4.5 (0.71) | 35.9 / 37.8 (0.76) |
| Humanlike | 91.4 / 7.1 (0.68) | 92.4 / 6.5 (0.66) | 84.5 / 10.5 (0.67) | 81.8 / 13.5 (0.71) | 42.3 / 41.9 (0.68) |
Table 3: Pairwise comparison (% win/loss cases; ties not reported) between responses from POKI and from other baselines as well as the ground truth. Difference between bold and non-bold numbers is statistically significant $(p < 0.001)$. $\kappa$ denotes Cohen's Kappa (Cohen, 1960) between a pair of annotators. Complete details of the human evaluation are in Appendix C.

# 5 Results and Discussion

# 5.1 Automatic Evaluation

Our primary goal is to generate responses enriched with relevant external knowledge. Arguably, a system that can effectively leverage additional knowledge at decoding time should generate more diverse responses. For diversity, we measure the percentage of distinct bigrams (Distinct-2, D-2) (Li et al., 2016) and the geometric mean of the entropies of the empirical frequency distributions of n-grams ($n = 1,2,3$), denoted Entropy (ENTR) (Jhamtani et al., 2018). Additionally, we report the overlap between generated responses and the corresponding ground truth with BLEU and BERTScore (BRTSc). For MultiWOZ, we also report the final goal accuracy (Acc) following Hosseini-Asl et al. (2020).

MultiWOZ. Table 1 shows that POKI outperforms all baselines in terms of the diversity of generated responses. More importantly, POKI improves the accuracy of reaching the final dialog state, i.e., the goal. For the ablated versions of POKI, we find that the entailment constraint has little effect on diversity, while dropping the knowledge adherence constraint hurts both accuracy and diversity. All variants of SimpleTOD and all versions of POKI depart from SimpleTOD's BLEU and BERTScore, since all of these versions add external knowledge that was not explicitly present in the data. However, the departure is not substantial, and POKI achieves a much closer BERTScore to SimpleTOD than the baselines do.

WoW.
Although all systems for WoW use knowledge explicitly in the knowledge-grounded dialog generation task, Table 2 shows that POKI generates the most diverse responses. As in MultiWOZ, the knowledge adherence constraint remains a significant factor for increasing diversity, one of the main goals of knowledge injection. For WoW, POKI even outperforms BART (the previous state of the art) in terms of BERTScore when injected with external knowledge, indicating the need for external knowledge when modeling WoW dialogs.

# 5.2 Human Evaluation

We conduct a comparative human evaluation with 300 samples to evaluate the quality of the generated dialog responses, following ACUTE-Eval (Li et al., 2019). We show annotators a response generated by POKI together with its dialog history and ask whether knowledge injection makes the final response more engaging, interesting, and humanlike than a baseline response. As a sanity check, we also ask whether the response remains coherent after knowledge injection. Each sample is evaluated by two annotators.

MultiWOZ. Table 3 records the pairwise comparison, showing that POKI consistently outperforms the baselines on all criteria. Responses from POKI are more engaging and interesting than those from SimpleTOD and Rewriter, demonstrating that gradient-based decoding is effective for knowledge injection. In POKI, the entailment constraint mostly influences coherence, whereas the knowledge fidelity constraint is important for engagingness and interestingness.

WoW. Table 3 shows that POKI outperforms baselines that use grounding knowledge during training on all criteria, showing that external knowledge can be useful even in the knowledge-grounded setting to make the conversation engaging and interesting. It also indicates the limitation of the training signal or lack of access to sufficient knowledge and
| Dialog (user turn and POKI's final response) | Initial response | Injected knowledge |
|---|---|---|
| **User:** I need a place to eat that is cheap. **POKI:** Do you have a location preference? I have several options for inexpensive restaurants. | Do you have a location preference? I have several options for you. | None |
| **User:** Center of the town in Cambridge. **POKI:** I think you can go with Asian cuisines such as Chinese or Indian cuisines. They are value for money since they are known for their great taste. | I can book a Chinese or Indian restaurant near centre for you. | Asian cuisines such as Chinese or Indian cuisines are inexpensive. They are value for money since they are known for their great taste. |
| **User:** Do you have any restaurants that serve English food? **POKI:** I do not have an inexpensive restaurant that serves English food in the centre area. Indian food is very popular in the centre area. Many people who like English food also enjoy Indian food. | I do not have an inexpensive restaurant that serves English food in the centre area. | Indian food is very popular in the centre area, Cambridge. I know many friends who like English food also enjoy Indian food. Indian food is great value for money. |
| **User:** Awesome. I can go with Indian food. **POKI:** Great. I can book a reservation for you at Kohinoor. The table will be reserved for 15 minutes. They have a vibrant environment and they are known for their starters. | I can book a reservation for you at Kohinoor. The table will be reserved for 15 minutes. | Kohinoor has a vibrant environment in the evening. They are best known for their starters. Do check them out. Also Indian sweets are great as desserts. |
Figure 3: POKI converses with a user who is looking for restaurant options (left column). In each turn, a knowledge snippet (right column) is injected into an initial response (middle column). More examples are in Appendix D.

room for improvement in terms of how knowledge is utilized. A large gap in win percentages in favor of POKI when evaluating how humanlike a response is, compared with state-of-the-art methods, suggests that knowledge injection leads to more natural conversation. Here too, both decoding constraints show trends similar to MultiWOZ.

Qualitative Analysis. Figure 3 shows a conversation between POKI and a user who seeks restaurant options around Cambridge. In most turns, the injected knowledge appears as additional justification over the initial response, making the dialog engaging and effective in reaching the user's goal (also noted by human judges in §5.3). For example, in turn 3, adding the extra information about Indian cuisine helped the user reach a conclusion when their original choice of English cuisine was unavailable.

Effect of Response Length. Qualitatively, as seen in Figure 3, responses generated by POKI are longer than the initial responses due to the post-hoc knowledge injection. In the human evaluation sample, we found that $37\%$ of responses from POKI are similar in length to or shorter than responses from the best baseline. We investigated whether response length acted as a confounding factor during human evaluation. Among all cases where POKI lost to a baseline, $45\%$ ($\pm 2\%$ when bootstrapped with 1000 subsets of size 50) of POKI's responses were longer than those of the comparison method. Among the win cases for POKI, $49\%$ ($\pm 3\%$ when bootstrapped with 1000 subsets of size 50) of POKI's responses were longer. This indicates that human users did not simply prefer longer responses.
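The bootstrap estimates reported above (a proportion together with its spread over 1000 resamples of size 50) can be sketched as:

```python
import random

def bootstrap_proportion(flags, n_resamples=1000, size=50, seed=0):
    """Estimate a proportion and its spread by resampling with replacement.
    `flags` marks, e.g., whether a POKI response was longer than the baseline's."""
    rng = random.Random(seed)
    props = []
    for _ in range(n_resamples):
        sample = [rng.choice(flags) for _ in range(size)]
        props.append(sum(sample) / size)
    mean = sum(props) / n_resamples
    spread = (sum((p - mean) ** 2 for p in props) / n_resamples) ** 0.5
    return mean, spread

# Toy data: 45% of 200 hypothetical win/loss cases flagged as "longer".
flags = [1] * 90 + [0] * 110
mean, spread = bootstrap_proportion(flags)
```

The returned spread is the standard deviation of the resampled proportions, matching the ±2-3% intervals quoted in the analysis above.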
# 5.3 User Study for Effectiveness of Knowledge Injection

Relevant knowledge injection adds justification to terse dialog outputs and can hence influence the task outcome positively. Mirroring observations from Ghandeharioun et al. (2019), a real-time full-conversation evaluation is needed to investigate whether POKI achieves the conversational goal better than the baselines.

We recruited 60 users for this study. Half of the users interacted with POKI, while the other half interacted with the best baseline model that does not augment dialog responses with external knowledge. We construct a speculative goal for each user to accomplish via the conversation. We allow users to end the conversation at any time, and we ask them whether the system helped them reach their conversational goal, along with additional comments justifying their annotation. Users who interacted with a knowledge-augmented system were also asked whether the system provided any knowledge that they had not explicitly asked for but that nevertheless helped them reach the conversational goal (Majumder et al., 2021b). Finally, we also ask whether they would like to engage with the system again in the future.

For goal-oriented dialog, we construct speculative goals (e.g., looking for entertainment options) manually from the ground truth for 300 dialog samples. Since we are not using the underlying databases, we made sure speculative goals do not require specific information (e.g., booking availability, flight information). For knowledge-grounded dialog, we provide the intended topic of
| MultiWOZ | # turns ↓ | Goal | Know | Would use |
|---|---|---|---|---|
| Rewriter | 8 ± 2 | 69% | 35% | 56% |
| POKI | 4 ± 3 | 86% | 84% | 76% |

| WoW | # turns ↑ | Goal | Know | Would use |
|---|---|---|---|---|
| BART | 10 ± 2 | 56% | 70% | 48% |
| POKI | 16 ± 3 | 76% | 89% | 71% |
discussion (e.g., science fiction) present in the data; the speculative goal here is to know more about, or to have an engaging conversation about, the topic.

Results. First of all, we find that POKI is unanimously preferred by users over the baseline in the user study. More importantly, when users successfully accomplished their goal in the goal-oriented setting (MultiWOZ), $84\%$ of the time they found the additional knowledge helpful, compared with the baseline (Rewriter) that did not use any external knowledge. Most importantly, POKI takes significantly fewer turns for users to accomplish the goal than Rewriter, implicitly indicating that injected knowledge (we observe a high correlation, 0.67) contributes toward more efficient conversations.

For the knowledge-grounded setting (WoW), both BART and POKI have access to external knowledge sources. However, $89\%$ (compared to $70\%$) of success scenarios were directly influenced by the additional post-hoc knowledge. For knowledge-grounded dialog, a longer conversation is indicative of engagingness on a particular topic (Gopalakrishnan et al., 2019); hence users preferred to converse with POKI for more turns than with the BART baseline. We quote a comment from a user who found a conversation about Korean culture with POKI particularly engaging: "Before this conversation, I had less knowledge about Korean movies and art-forms. This gave me a new perspective and a handful of popular opinions to look at it."

# 5.4 Discussion

Performance of Knowledge Selection.
The knowledge selection step in POKI acts as an information bottleneck: the quality of the generated response directly depends on the quality of the

Table 4: Real-time user study, with the average # of turns for successful goal completion, the % of the time the goal was achieved, the % of success cases in which users were helped by additional knowledge (Know) they had not explicitly asked for, and whether users would like to use the system in the future.
| Source | Relevant (Random) | Relevant (DPP) | Factual (Random) | Factual (DPP) | BRTSc for WoW (Random) | BRTSc for WoW (DPP) |
|---|---|---|---|---|---|---|
| Parametric | 82% | 89% | 65% | 83% | 74.2 | 81.3 |
| Non-parametric | 81% | 83% | 97% | 98% | 65.2 | 76.8 |
Table 5: Evaluation of the quality of the knowledge snippets under random and DPP-based selection.
| System | MultiWOZ | WoW |
|---|---|---|
| Supervised | 17.6 ± 5.2 ms | 23.6 ± 4.6 ms |
| PPCM (2020) | 30.9 ± 7.5 ms | 32.6 ± 4.2 ms |
| POKI | 34.2 ± 8.4 ms | 35.7 ± 5.7 ms |
| POKI, only decoding | 31.6 ± 2.7 ms | 32.3 ± 3.4 ms |
Table 6: Mean and standard error of wall-clock time taken per token.

selected knowledge. We perform a human evaluation on 200 snippets to measure relevance and factual correctness in two scenarios: when we randomly select a retrieved snippet, and when we select via DPP. In Table 5, we see that the parametric knowledge source (gpt2-large) generates more relevant knowledge snippets than the non-parametric one. We attribute this to 1) the large and diverse dataset (WebText) used during the pretraining of GPT2, as compared to the Yelp reviews (restricted domains) we use for retrieval, and 2) the limited recall of relevant knowledge when using word-overlap-based retrieval. However, large language models are still prone to generating non-factual knowledge. We observe that the DPP-based selection in POKI sub-selects more factual knowledge, which in turn positively influences the final response quality. For WoW, we also compare the selected snippets with the gold knowledge available in the dataset, which shows high fidelity in terms of BERTScore.

Time Complexity. Madotto et al. (2020) show that iterative gradient-based decoding can be slower than generating a response with a single forward pass of an existing model. Benchmarking POKI on an Nvidia 2080Ti GPU (Table 6), we see that knowledge generation (or retrieval) can be a computational bottleneck for POKI. However, the greedy selection and constrained decoding steps do not add significant computational load. Furthermore, POKI's performance is comparable with PPCM (Madotto et al., 2020), a more efficient version of gradient-based decoding. The efficiency of the knowledge retrieval step can be improved with better indexing (Johnson et al., 2021), which we leave as future work.
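The DPP-based selection discussed above can be approximated with a greedy MAP procedure: repeatedly add the snippet whose inclusion most increases the determinant of the selected kernel submatrix, which penalizes redundant (highly similar) snippets. Below is a minimal sketch with a toy similarity kernel; the paper's implementation uses the DPPy library, so this greedy routine and the kernel values are illustrative stand-ins.

```python
def det(m):
    """Determinant via Gaussian elimination (small matrices only)."""
    m = [row[:] for row in m]
    n, d = len(m), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))  # partial pivoting
        if abs(m[p][i]) < 1e-12:
            return 0.0
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

def greedy_dpp(L, k):
    """Greedily pick k items maximizing det(L_S), a MAP approximation."""
    selected = []
    for _ in range(k):
        best, best_det = None, 0.0
        for i in range(len(L)):
            if i in selected:
                continue
            S = selected + [i]
            sub = [[L[a][b] for b in S] for a in S]
            d = det(sub)
            if d > best_det:
                best, best_det = i, d
        if best is None:
            break
        selected.append(best)
    return selected

# Toy kernel: snippets 0 and 1 are near-duplicates; snippet 2 is distinct.
L = [[1.0, 0.9, 0.1],
     [0.9, 1.0, 0.1],
     [0.1, 0.1, 1.0]]
chosen = greedy_dpp(L, 2)
```

Because the determinant of a submatrix shrinks when its rows are nearly parallel, the greedy step skips the near-duplicate snippet 1 in favor of the distinct snippet 2.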
# 6 Related Work

Knowledge-grounded dialog datasets such as Wizard-of-Wikipedia (Dinan et al., 2019a) and Topical-Chat (Gopalakrishnan et al., 2019) typically consist of dialog responses paired with relevant knowledge collected as annotations. Hence, models trained on such datasets are restricted to the knowledge sources they were exposed to at training time. Past work (Sun et al., 2021; Majumder et al., 2020a; Su et al., 2020; Komeili et al., 2021; Adolphs et al., 2021; Ghazvininejad et al., 2018; Tuan et al., 2020; Lewis et al., 2020c; Guu et al., 2020) has looked into injecting extra knowledge sources at training time in a bid to add knowledge not originally paired with dialog responses. However, such approaches require re-training the model whenever a new knowledge source is used. Moreover, while previous work focuses on improving the specificity of dialog responses using external knowledge, we also study the effect of additional knowledge on achieving conversational goals.

Improving the diversity of dialog responses with diversity-promoting sampling has been explored in past work (Fan et al., 2018; Holtzman et al., 2020). We use a gradient-based decoding method, building on past work in this direction (Dathathri et al., 2020; Qin et al., 2020; Madotto et al., 2020; Majumder et al., 2021a). However, we propose new objectives to inject post-hoc knowledge obtained based on the already generated dialog, an unsupervised knowledge injection method that has not been explored so far.

# 7 Conclusion

We propose a framework for unsupervised knowledge injection into dialog responses. We show that knowledge can be obtained post hoc from any knowledge source and that it can improve users' ability to reach their conversational goals more effectively. In the future, our idea can be generalized to setups where external knowledge can justify a model's predictions, such as conversational recommendation.
+ +# Acknowledgements + +We thank anonymous reviewers for providing valuable feedback. BPM is partly supported by a Qualcomm Innovation Fellowship, a Friends of the International Center Fellowship-UC San Diego, NSF Award #1750063, and MeetElise. + +# References + +Leonard Adolphs, Kurt Shuster, Jack Urbanek, Arthur Szlam, and Jason Weston. 2021. Reason first, then respond: Modular generation for knowledge-infused dialogue. CoRR, abs/2111.05204. +Tom B. Brown, Benjamin Mann, Nick Ryder, et al. 2020. Language models are few-shot learners. In NeurIPS. +Maxime De Bruyn, Ehsan Lotfi, Jeska Buhmann, and Walter Daelemans. 2020. BART for knowledge grounded conversations. In Converse@KDD, volume 2666. CEUR-WS.org. +Pawel Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Inigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gasic. 2018. Multiwoz - A large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling. In EMNLP. +Ricardo Campos, Vitor Mangaravite, Arian Pasquali, Alipio Jorge, Celia Nunes, and Adam Jatowt. 2020. Yake! keyword extraction from single documents using multiple local features. Information Sciences, 509. +Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and psychological measurement, 20(1):37-46. +Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and play language models: A simple approach to controlled text generation. In ICLR. +Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019a. Wizard of wikipedia: Knowledge-powered conversational agents. In ICLR. +Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019b. Wizard of wikipedia: Knowledge-powered conversational agents. In ICLR. +Angela Fan, Mike Lewis, and Yann N. Dauphin. 2018. Hierarchical neural story generation. In ACL. +Guillaume Gautier, Guillermo Polito, Rémi Bardenet, and Michal Valko. 2019. 
DPPy: DPP Sampling with Python. Journal of Machine Learning Research - Machine Learning Open Source Software (JMLRMLOSS). +Asma Ghandeharioun, Judy Hanwen Shen, Natasha Jaques, Craig Ferguson, Noah Jones, Ågata Lapedriza, and Rosalind W. Picard. 2019. Approximating interactive human evaluation with self-play for open-domain dialog systems. In NeurIPS. +Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In AAAI. + +Karthik Gopalakrishnan, Behnam Hedayatnia, Qinglang Chen, Anna Gottardi, Sanjeev Kwa-tra, Anu Venkatesh, Raefer Gabriel, and Dilek Hakkani-Tur. 2019. Topical-chat: Towards knowledge-grounded open-domain conversations. In Interspeech. +Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: retrieval-augmented language model pre-training. CoRR, abs/2002.08909. +Peter Hajas, Louis Gutierrez, and Mukkai S. Krishnamoorthy. 2014. Analysis of yelp reviews. CoRR, abs/1407.1443. +Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In ICLR. +Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, and Richard Socher. 2020. A simple language model for task-oriented dialogue. In NeurIPS. +Harsh Jhamtani, Varun Gangal, Eduard Hovy, Graham Neubig, and Taylor Berg-Kirkpatrick. 2018. Learning to generate move-by-move commentary for chess games from large-scale social forum data. In ACL 2018. +Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2021. Billion-scale similarity search with gpus. IEEE Trans. Big Data. +Chris Kedzie, Kathleen R. McKeown, and Hal Daume III. 2018. Content selection in deep learning models of summarization. In EMNLP. +Byeongchang Kim, Jaewoo Ahn, and Gunhee Kim. 2020. Sequential latent knowledge selection for knowledge-grounded dialogue. In ICLR. OpenReview.net. +Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2021. 
Internet-augmented dialogue generation. CoRR, abs/2107.07566. +Alex Kulesza and Ben Taskar. 2011. k-dpps: Fixed-size determinantal point processes. In ICML. Omni Press. +Alex Kulesza and Ben Taskar. 2012. Determinantal point processes for machine learning. Found. Trends Mach. Learn., 5(2-3):123-286. +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In ACL. +Patrick S. H. Lewis, Ethan Perez, Aleksandra Pik-tus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen-tau Yih, + +Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020b. Retrieval-augmented generation for knowledge-intensive NLP tasks. In NeurIPS. +Patrick S. H. Lewis, Ethan Perez, Aleksandra Pik-tus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020c. Retrieval-augmented generation for knowledge-intensive NLP tasks. In NeurIPS. +Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In NAACL HLT. +Margaret Li, Jason Weston, and Stephen Roller. 2019. ACUTE-EVAL: improved dialogue evaluation with optimized questions and multi-turn comparisons. CoRR, abs/1909.03087. +Andrea Madotto, Etsuko Ishii, Zhaojiang Lin, Sumanth Dathathri, and Pascale Fung. 2020. Plug-and-play conversational models. In Findings of EMNLP. +Bodhisattwa Prasad Majumder, Taylor BergKirkpatrick, Julian J. McAuley, and Harsh Jhamtani. 2021a. Unsupervised enrichment of persona-grounded dialog with background stories. In ACL. +Bodhisattwa Prasad Majumder, Harsh Jhamtani, Taylor Berg-Kirkpatrick, and Julian J. McAuley. 2020a. Like hiking? you probably enjoy nature: Personagrounded dialog with commonsense expansions. 
In EMNLP. +Bodhisattwa Prasad Majumder, Shuyang Li, Jianmo Ni, and Julian J. McAuley. 2020b. Interview: Large-scale modeling of media dialog with discourse patterns and knowledge grounding. In EMNLP. +Bodhisattwa Prasad Majumder, Sudha Rao, Michel Galley, and Julian J. McAuley. 2021b. Ask what's missing and what's useful: Improving clarification question generation using global knowledge. *NAACL*. +Vishakh Padmakumar and He He. 2021. Unsupervised extractive summarization using pointwise mutual information. In EACL. +Ashwin Paranjape and Christopher D. Manning. 2021. Human-like informative conversations: Better acknowledgements using conditional mutual information. In NAACL-HLT. +Fabio Petroni, Tim Rocttäschel, Sebastian Riedel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander H. Miller. 2019. Language models as knowledge bases? In EMNLP-IJCNLP. +Shrimai Prabhumoye, Kazuma Hashimoto, Yingbo Zhou, Alan W. Black, and Ruslan Salakhutdinov. 2021. Focused attention improves document-grounded generation. In *NAACL-HLT*. + +Lianhui Qin, Vered Shwartz, Peter West, Chandra Bhagavatula, Jena D. Hwang, Ronan Le Bras, Antoine Bosselut, and Yejin Choi. 2020. Back to the future: Unsupervised backprop-based decoding for counterfactual and abductive commonsense reasoning. In EMNLP. +Stephen E. Robertson, Steve Walker, and Micheline Hancock-Beaulieu. 1995. Large test collection experiments on an operational, interactive system: Okapi at TREC. Inf. Process. Manag., 31(3):345-360. +Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. 2020. Green AI. Commun. ACM, 63(12):54-63. +Vered Shwartz, Peter West, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. Unsupervised commonsense question answering with self-talk. In EMNLP. +Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In ACL. +Hui Su, Xiaoyu Shen, Sanqiang Zhao, Xiao Zhou, Pengwei Hu, Randy Zhong, Cheng Niu, and Jie Zhou. 2020. 
Diversifying dialogue generation with non-conversational text. In ACL. 
Kai Sun, Seungwhan Moon, Paul A. Crook, Stephen Roller, Becka Silvert, Bing Liu, Zhiguang Wang, Honglei Liu, Eunjoon Cho, and Claire Cardie. 2021. Adding chit-chats to enhance task-oriented dialogues. In NAACL. 
Yi-Lin Tuan, Wei Wei, and William Yang Wang. 2020. Unsupervised injection of knowledge into dialogue generation via language models. CoRR, abs/2004.14614. 
Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2019. Dialogue natural language inference. In ACL. 
Mark Wilhelm, Ajith Ramanathan, Alexander Bonomo, Sagar Jain, Ed H. Chi, and Jennifer Gillenwater. 2018. Practical diversified recommendations on YouTube with determinantal point processes. In CIKM. ACM. 
Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. TransferTransfo: A transfer learning approach for neural network based conversational agents. CoRR, abs/1901.08149. 
Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In ACL. 
Xueliang Zhao, Wei Wu, Can Xu, Chongyang Tao, Dongyan Zhao, and Rui Yan. 2020. Knowledge-grounded dialogue generation with pre-trained language models. In EMNLP.

# A Datasets

MultiWOZ. To compare with previous work, we use MultiWOZ 2.0 following Hosseini-Asl et al. (2020). Note that we do not need any training data for our models since we perform post-hoc knowledge injection.

WoW. For Wizard-of-Wikipedia, all baselines and the original dialog model for POKI use the available paired knowledge present in the training data (not a part of our pipeline). However, POKI additionally uses the external knowledge snippets selected via DPP.

# B Implementation Details

We open-source our code at: https://github.com/majumderb/poki. We use the publicly available implementation for DPP (Gautier et al., 2019).

We obtain MultiWOZ 2.0 from the official release $^{7}$ .
Similarly, we obtain Wizard-of-Wikipedia from the ParlAI repository $^{8}$ . We adapted code from the original PPLM (Dathathri et al., 2020) repository $^{9}$ and modified it for our own objective function. We obtained the Yelp review dataset from the official website $^{10}$ ; it contains 8,635,403 reviews. For the diversity calculation (in automatic evaluation), we use NLTK $^{11}$ to extract n-grams.

Network architecture. For MultiWOZ, we use SimpleTOD $^{12}$ as the base model, whereas for WoW we use BART $^{13}$ as the base model. For the parametric knowledge source, we use gpt2-large $^{14}$ .

Hyperparameters. POKI does not require any training since we perform gradient-based decoding at inference time. For the hyperparameters involved in the decoding stage, we maximize the

```txt
$^{6}$ https://github.com/guilgautier/DPPy
$^{7}$ https://github.com/budzianowski/multiwoz
$^{8}$ https://parl.ai/projects/wizard_of_wikipedia/
$^{9}$ https://github.com/uber-research/PPLM
$^{10}$ https://www.yelp.com/dataset
$^{11}$ https://www.nltk.org/_modules/nltk/util.html
$^{12}$ https://github.com/salesforce/simpletod
$^{13}$ https://huggingface.co/transformers/model_doc/bart.html
$^{14}$ https://huggingface.co/transformers/model_doc/gpt2.html
```

Z-normalized sum of dialog model perplexity and linguistic diversity (% of distinct bigrams) of the generated response, in a greedy fashion, to select the best values. For our best method, in the objective function $\mathcal{L}$, we use $\alpha = 1$ and $\lambda = 1$. We set the generation length to 100 to encourage longer generations. We train the entailment classifier using code from the PPLM repository $^{15}$ . The weight $\gamma$ for mixing forward and backward passes was set to 0.45. We run 5 backward-forward passes to obtain a candidate final response.
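The greedy hyperparameter selection described above can be sketched as follows; `perplexity` and `diversity` are hypothetical evaluation callbacks, and the toy scoring formulas are illustrative only.

```python
from statistics import mean, pstdev

def znorm(xs):
    """Z-normalize so scores on different scales can be summed."""
    mu, sd = mean(xs), pstdev(xs)
    return [(x - mu) / sd if sd > 0 else 0.0 for x in xs]

def pick_config(configs, perplexity, diversity):
    """Score each (n_iterations, gamma) configuration by the Z-normalized
    sum of negative perplexity and diversity; return the best one."""
    p = znorm([-perplexity(c) for c in configs])  # lower perplexity is better
    d = znorm([diversity(c) for c in configs])
    scores = [pi + di for pi, di in zip(p, d)]
    return configs[max(range(len(configs)), key=scores.__getitem__)]

# Toy evaluation callbacks that happen to peak at gamma = 0.45, 5 iterations.
configs = [(n, g) for n in (3, 5, 7) for g in (0.25, 0.45, 0.65)]
best_cfg = pick_config(
    configs,
    perplexity=lambda c: 10 + abs(c[1] - 0.45) * 8 + abs(c[0] - 5),
    diversity=lambda c: 1.0 - abs(c[1] - 0.45) - 0.05 * abs(c[0] - 5),
)
```

In a real run, each configuration would require decoding a validation set, so the grid is kept small and searched greedily.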
Filtering knowledge candidates from PTLMs. Our initial experiments suggest that knowledge generated from PTLMs can be inappropriate (containing biased or toxic content) or misleading/non-factual. Sun et al. (2021) collected annotations of dialog responses with labels positive (useful, social) and negative (inappropriate, misleading). We learn a binary classifier to classify a knowledge snippet as positive or negative and use it as a filtering criterion.

Key-phrase extraction. Given a sentence from the context, we first extract n-gram $(n\in \{1,2,3,4\})$ key-phrases using YAKE (Yet-Another-Keyword-Extractor) (Campos et al., 2020) and retain only those that contain at least one noun.

Prompts. We curated prompts inspired by various knowledge-seeking situations (such as asking for more information, opinions, or reviews) (Shwartz et al., 2020); they are listed in Table 7.

[KP] is famous for
The popular opinion about [KP] is
Here is what I know about [KP]:
My friend says that [KP] is:
Here is some information about [KP]:
Here are some reviews about [KP]:
I think [KP] is:
I read on the internet about [KP] and found that
Today I learned about [KP] that

Table 7: Manually curated prompts to query the PTLM.

Statistics on generated and selected knowledge snippets. For both datasets, we retrieve the 100 most relevant knowledge snippets from the non-parametric source (here, Yelp reviews) and generate 5 candidate knowledge snippets (using nucleus sampling (Holtzman et al., 2020), $p = 0.95$) for each key-phrase extracted from an input instance (dialog history + initial response). After knowledge selection by DPP, on average (over the validation set), 5 snippets were selected for MultiWOZ and 8 snippets were selected for WoW.

# C Human Evaluation and User Study Setup

Human Evaluation. We hired two Anglophone (lifetime HIT acceptance $\% > 85$) annotators for every test sample.
Figure 4 shows a sample question for the pairwise comparison, for informativeness, between a response generated by POKI and a baseline. The exact formulations of all criteria are provided below:

- Coherent: Which version is more consistent with the dialog history?
- Engaging: Which version is more likely to hold your attention and make you want to hear more?
- Interesting: Which version arouses your curiosity or tells you something new or useful?
- Humanlike: Which version is more natural and personable?

All differences in values from human evaluations are significant with $p < 0.05$ under bootstrap tests on 1000 subsets of size 50. A snapshot of our human evaluation interface is shown in Figure 4. The order of the two candidate responses (R1 and R2) is randomized for each question.

**User Study** For the user study, we similarly recruited 60 Anglophone users who have at least a high-school level of education and are comfortable with internet-based technologies. Each session (depending on the systems the user interacted with) lasted on average 30 minutes (for MultiWOZ) or 60 minutes (for WoW), including on-boarding, performing the actual task, and answering post-task questions.

# D Qualitative Examples

Figure 5 shows a complete dialog in the knowledge-grounded scenario where the user discusses 'science fiction'. Figure 6 shows more utterance-level examples for both goal-oriented and knowledge-grounded scenarios.

# Instructions (Click to collapse)

This task requires basic English language understanding.

For each instance, you will have to read the dialog history between two people A and B. We expect you to respond to the following for the candidates shown for B's response:

Interestingness: Which version arouses your curiosity or tells you something new or useful?

# 1. Dialog History:

A's turn: I need a place to eat that is cheap.
B's turn: Do you have a location preference? I have several options for inexpensive restaurants.
A's turn: Center of the town in Cambridge.
B's turn: I think you can go with Chinese or Indian cuisine. Both are inexpensive but have great taste. Should I look for Chinese or Indian restaurants?
A's turn: Do you have any restaurants that serve English food?

Candidates for B's next turn:

Response R1: I do not have an inexpensive restaurant that serves English food in the centre area. You can try with Indian food since it is very popular in the centre area. Many people who like English food also like Indian food.

Response R2: I do not have an inexpensive restaurant that serves English food in the centre area.

1.1 Which version seems more well informed and confident in the information?
$\bigcirc$ R1 is better $\bigcirc$ Both have similar interestingness $\bigcirc$ R1 is worse

Figure 4: Human evaluation setup for pairwise comparison between POKI and another baseline

# Ethical considerations

We do not foresee any immediate ethical concerns for our method, as we use several constraints (low divergence from the extracted knowledge, consistency with the dialog context) that restrict the generation to the context. In general, we expect our dialog system to be more engaging and accessible to the user. Since we use PTLMs as a knowledge source, we inherit the general risk of generating biased or toxic language, which should be carefully filtered. In our work, we perform explicit filtering steps to make sure that the knowledge is appropriate. Furthermore, our selection step promotes the selection of more factually correct knowledge. However, the generations may incorporate biases that are already present in the dialog datasets due to crowd-sourced data collection. Finally, our generations are limited to the English language. Hence we suggest that a system like ours should likely not be used as a 'black box,' but would best be used in a setting where its outputs can be 'audited'.
Carbon footprint: Our system uses post-hoc knowledge injection, which avoids retraining newer dialog models to accommodate dynamically evolving external knowledge. This promotes green NLP applications (Schwartz et al., 2020; Strubell et al., 2019), reducing the carbon footprint that stems from training (or even fine-tuning) large language models.

![](images/3e37705892ede9ea552471afcfaef47c9f5ccae094c19fa6278d87eea0564000.jpg)
Figure 5: POKI converses with a user who is discussing science fiction, in a knowledge-grounded dialog scenario (left column). In each turn, an initial response (middle column) is augmented with a knowledge snippet (right column) using constrained gradient-based decoding. Human judges unanimously noted this conversation as more engaging compared to the initial responses.

![](images/04aeca8fb457359904dcbdd223135d3e3530ebebde82479ebd296df8e3fd9fe0.jpg)
Figure 6: Utterance-level examples (left column) in (a) and (b) goal-oriented scenarios, and (c) a knowledge-grounded scenario. POKI updates the initial response (middle column) with a knowledge snippet (right column) using constrained gradient-based decoding.
# Achieving Reliable Human Assessment of Open-Domain Dialogue Systems

Tianbo Ji $^{1,2}$ , Yvette Graham $^{1,3}$ , Gareth Jones $^{1,2}$ , Chenyang Lyu $^{2}$ , and Qun Liu $^{4}$

$^{1}$ ADAPT Centre

$^{2}$ School of Computing, Dublin City University

$^{3}$ School of Computer Science and Statistics, Trinity College Dublin

$^{4}$ Noah's Ark Lab, Huawei

{tianbo.ji,yvette.graham,gareth.jones}@adaptcentre.ie, chenyang.lyu2@mail.dcu.ie, qun.liu@huawei.com

# Abstract

Evaluation of open-domain dialogue systems is highly challenging and development of better techniques is highlighted time and again as desperately needed.
Despite substantial efforts in recent competitions to carry out reliable live evaluation of systems, annotations have been abandoned and reported as too unreliable to yield sensible results. This is a serious problem, since automatic metrics are not known to provide a good indication of what may or may not be a high-quality conversation. Answering the distress call of competitions that have emphasized the urgent need for better evaluation techniques in dialogue, we present the successful development of human evaluation that is highly reliable while still remaining feasible and low cost. Self-replication experiments reveal almost perfectly repeatable results, with a correlation of $r = 0.969$ . Furthermore, due to the lack of appropriate methods of statistical significance testing, the likelihood of potential improvements to systems occurring due to chance is rarely taken into account in dialogue evaluation; the evaluation we propose facilitates application of standard tests. Since we have developed a highly reliable evaluation method, new insights into system performance can be revealed. We therefore include a comparison of state-of-the-art models (i) with and without personas, to measure the contribution of personas to conversation quality, as well as (ii) with prescribed versus freely chosen topics. Interestingly, with respect to personas, results indicate that, contrary to expectations, personas do not positively contribute to conversation quality.

# 1 Introduction

Evaluation of open-domain dialogue is particularly challenging and has been cited in high-profile competitions as a known open problem (Dinan et al., 2019). Challenges arise primarily from the fact that in real-world conversations there exists a vast number of possible appropriate responses.
Subsequently, dialogue evaluation that relies on comparison with pre-created reference dialogues incurs substantial false-negative rates, as many appropriate responses are unfairly penalized simply for not corresponding closely with the references. In addition, evaluation faces further challenges with respect to the ability to fully take into account dialogue history. $^{1}$

In this paper, we present a new method of open-domain dialogue evaluation based on human assessment of live conversations with models that avoids the need for pre-created reference dialogues and ensures full familiarity with dialogue history, ticking two important boxes in terms of validity. Although live human evaluation of models has the advantage of being highly valid, reliability unfortunately cannot be assumed, and developing methods of evaluation for language tasks that achieve high rater consistency has been challenging, often resulting in low levels of agreement between annotators (Finch and Choi, 2020; Callison-Burch et al., 2011, 2012; Bojar et al., 2013, 2014; Mehri and Eskenazi, 2020b). Despite challenges in this respect, our proposed method provides highly reliable evaluation, achieving a correlation of $r = 0.969$ in self-replication experiments. Additionally, the evaluation can be carried out cheaply and on a large scale through strictly quality-controlled crowd-sourcing, and includes score standardization for fairer ranking of competing models. We make the data and code publicly available to aid future research. $^{2}$

# 2 Problems in Past Evaluations

A common issue that can potentially impact the validity of results is filtering the set of systems to be evaluated via automatic metric scores. Since metric scores are known to be a poor substitute for human assessment, this only creates the possibility that the best system according to human judges is inadvertently filtered out at this stage.
For example, ConvAI2 (Dinan et al., 2019) first ranked models using automatic metrics before the top models according to metric scores were assessed by crowd-sourced workers on Mechanical Turk; similarly, in the sixth Dialog System Technology Challenge (DSTC6), systems were filtered according to metric scores prior to human evaluation.

In terms of live evaluation, competitions such as ConvAI2 report such evaluations as highly challenging, with many of the resulting dialogues reported to be senseless, offensive, or simply not in line with instructions, and ultimately live evaluation results have been discarded.

Despite challenges, competitions that operate in the public domain, making data and evaluation techniques available to researchers (such as ourselves), should be applauded for such efforts.

On the other hand, competitions that (for one reason or another) do not release data and evaluation techniques into the public domain have reported relative success in terms of human evaluation. However, until such methods can be accessed and independently verified through replication studies, they will unfortunately have little impact. The first Amazon Alexa Socialbot Grand Challenge required human assessors to score how coherent and engaging conversations were on a 1-5 rating scale, with assessments provided by two distinct groups: volunteer Amazon employees (experts) and general Alexa users (crowds) (Ram et al., 2018); the overall scores of the two groups are reported to correlate at 0.93. The absolute average rating across all chatbots was reported to be $20\%$ lower for experts compared to general users. In an additional effort to evaluate models, conversational user experience, coherence, engagement, domain coverage, topical diversity, and conversational depth were assessed (1-5 scale), with combined scores reported to correlate with those of general users at $r = 0.66$ .
In addition to methods and data not being publicly available, correlations are difficult to interpret since no detail is provided about, for example, the number of judgments on which the correlation is calculated.

In addition to competitions that generally aim to include human evaluation of systems, automatic metrics are often proposed for dialogue evaluation, themselves requiring a human evaluation data set on which to evaluate the proposed metric. However, inappropriate statistics are often applied. For example, Pang et al. (2020) propose a holistic metric to automatically evaluate four distinct aspects of dialogue, and a human evaluation experiment is deployed on Mechanical Turk using a 1-5 rating scale. The mean correlation between human assessors is reported as $r = 0.61$ . However, mean correlations are unfortunately difficult to interpret: correlation coefficients are not additive, so averages calculated in the usual way cannot be assumed to reflect central tendency, and unfortunately the distribution of correlations is not reported (Alexander, 1990).

Mehri and Eskenazi (2020b) propose USR (Unsupervised and Reference-free), an unsupervised model that predicts the quality of dialog for a range of criteria using various rating scales: understandable (0-1 rating scale), natural (1-3), maintains context (1-3), interesting (1-3), uses knowledge (0-1), and overall quality (1-5). Despite human evaluation being carried out by experts, inter-annotator agreement levels varied depending on the criteria being measured, ranging from as low as 0.298. Additionally, although correlations between human assessments are reported as significant at $p < 0.01$ , such statistics, despite often being reported for correlations, are not very meaningful in terms of their impact on correlation interpretation and can be somewhat misleading.
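To see concretely why significance of a correlation is a weak guarantee, one can compute the p-value implied by a given $r$ and sample size using the standard t-approximation. The sketch below is self-contained, and the numbers are illustrative rather than taken from any of the cited studies:

```python
from math import erf, sqrt

def correlation_pvalue(r, n):
    """Two-sided p-value for H0: rho = 0, via the t-statistic
    t = r * sqrt((n - 2) / (1 - r^2)); the t-distribution is approximated
    by a normal, which is accurate at the sample sizes used here."""
    t = r * sqrt((n - 2) / (1 - r * r))
    return 2 * (1 - 0.5 * (1 + erf(abs(t) / sqrt(2))))

# A weak correlation of 0.1 is already "significant" given 1,000 ratings.
print(correlation_pvalue(0.1, 1000) < 0.01)  # → True
```

In other words, with enough judgments almost any non-zero correlation passes a significance test, which says nothing about whether the agreement is strong enough to be useful.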
Contrary to common expectations, even small effect sizes (low $r$ ) can produce very low p-values (strong significance) in such tests. Aiming to achieve a significant correlation is an extremely low bar in terms of consistency, since a low p-value in this case simply rejects the null hypothesis that the correlation is zero.

In addition to the above issues, human evaluation of dialogue systems rarely takes into account the fact that differences in performance can occur simply by chance. The method of human evaluation we propose provides a means of applying standard tests for statistical significance to avoid concluding differences that are highly likely to have occurred simply by chance.

# 3 Crowd-sourcing Reliable Human Assessment of Open-Domain Dialogue

Crowd-sourcing with highly accurate quality control provides a potential mechanism to ensure the three most important criteria that make an evaluation meaningful while still remaining feasible: validity, reliability, and scalability. Subsequently, we ask crowd-workers to carry out live text-based chat with models prior to that same worker also rating the quality of the immediately preceding conversation.

# 3.1 Human Ratings of Dialogue Quality

A continuous (0-100) rating scale is employed for three main reasons (Graham et al., 2013; Novikova et al., 2018; Li et al., 2019; Santhanam and Shaikh, 2019; Santhanam et al., 2020; Mille et al., 2020; Barrault et al., 2020; Howcroft et al., 2020). Firstly, continuous scales reduce potential bias when comparing the performance of competing models by enabling score standardization. The score distribution of each human assessor is standardized according to the overall mean and standard deviation of all ratings provided by that assessor, thus removing any adverse effects of overly harsh (or indeed lenient) scoring strategies.
Secondly, the 0-100 rating scale allows standard significance tests to be applied to the score distributions of models, to help determine which models significantly outperform others. Thirdly, and possibly most importantly, a continuous rating scale facilitates highly accurate quality control of crowd-sourced workers, so that the evaluation can scale while still maintaining validity at a low cost.

Each human assessor is firstly asked to carry out a live conversation with a randomly selected model, comprised of a minimum of 10 conversational inputs, before rating the quality of the conversation that just took place under a number of criteria shown in Figure 1. Note that the measurement criteria we employed are not immutable, and we encourage future studies to extend or adjust the criteria as necessary.

A continuous rating scale is advantageous for several reasons, but employment of such a scale raises the question of how it should be labeled. In evaluation of language tasks, adjectival scale labels, such as poor, low, medium, high, perfect/okay, good, excellent, and so on, are often employed despite their likely contribution to annotator inconsistency (Loukina et al., 2020; Sorodoc et al., 2017). This is despite evidence of adjectival scale labels being problematic in terms of bias resulting from positively and negatively worded items not being true opposites of one another, and items intended to have neutral intensity in fact proving to have
**Robotic:** It was obvious that I was talking to a chatbot as opposed to another human user.
**Interesting:** The conversation with the chatbot was interesting.
**Fun:** The conversation with the chatbot was fun/enjoyable.
**Consistent:** The chatbot was consistent throughout the conversation.
**Fluent:** The chatbot's English was fluent and natural throughout the conversation.
**Repetitive:** I felt that the chatbot kept being repetitive during the conversation.
**Topic:** The chatbot stays on topic.
Figure 1: Criteria employed to assess models in our human evaluation in the form of Likert statements; corresponding evaluation labels (left) not shown to human assessors.

specific conceptual meanings. Alexandrov (2010) provides a summary of issues associated with adjectival labels.

To avoid any such causes of inconsistency, we structure each rating as a simple Likert declarative statement and ask human assessors to rate the degree to which they agree with each of these statements, making it possible to keep the rating scale constant while only changing the statement for each measurement criterion. We ask judges to rate each conversation under the seven aforementioned measurement criteria (Figure 1) along a continuous rating scale labeled only at each extreme with strongly disagree (left) and strongly agree (right).

# 3.2 Quality Controlling the Crowd for Open-Domain Dialogue

We structure Human Intelligence Tasks (HITs) so that a sufficiently rich score distribution is collected from each individual worker, asking each to hold six conversations, comprised of a shuffled arrangement of five dialogue models and a single quality-control model.

Many approaches to quality controlling the crowd employ gold-standard items as quality checks (Liu et al., 2013; Lasecki et al., 2014). This approach is, however, highly likely to allow low-quality data to pollute the resulting evaluation, since any worker willing to assign high scores to all items will undeservedly pass this check. The approach also runs in contrast to our aim of having the same individual who took part in a live conversation also assess its quality, as it relies on the use of pre-created gold-standard conversations.

Our quality-control approach overcomes these challenges by deploying models with known distinct performance levels in live conversations, instead of asking workers to assess the quality of pre-existing known high-quality conversations.
Within a HIT, each of the five models $m$ can produce conversations of some quality level, while the model $l$ produces dialogues of known lower quality (lower than the five models). For a single worker who takes part in conversations with $m$ and $l$ , we then check how consistently the worker rated the conversations of $l$ lower than those of $m$ . This results in a quality-control mechanism that does not ask workers to be consistent with other workers or to correctly rate gold-standard dialogues, but only assesses worker consistency by how consistently they distinguish between models of known distinct performance, and only with respect to their own conversation ratings.

From a practical standpoint, creating a low-performance model, $l$ , is additionally far less challenging and costly than pre-creating a known set of high-quality dialogues, and degraded models operate fully automatically. Low-quality models produce outputs via generation of random responses, with meaning distortion also applied.

For random response degradation, low-quality responses are generated by randomly sampling responses from training-set dialogues, disregarding any previous input from the user; responses from the model are thus likely to be perceived as low quality since they have low relevance. To reduce the quality of conversations further, we apply meaning distortion: each response, $r$ , is altered to distort its meaning by randomly selecting a sequence of words within that response and replacing it with a sequence of words sampled from a distinct training-set dialogue, with the length of the replaced word sequence being determined by the number of words in $r$ . The specific details are provided in Appendix A.1, and Figure 4 in Appendix A.4 gives a typical example.

HITs subsequently consist of a total of six dialogues: five genuine models and a single quality-control model that generates meaning-distorted, random responses.
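A minimal sketch of the two degradations (random response sampling plus meaning distortion) follows. The half-length span rule and uniform positions are our illustrative assumptions; the exact procedure is given in the paper's Appendix A.1:

```python
import random

def degraded_response(training_responses, seed=None):
    """Low-quality control response: sample a random training-set response
    (disregarding the user's input), then distort its meaning by splicing
    in a word sequence taken from another randomly chosen response.

    The replaced span is half the response length here, an illustrative
    choice; the paper determines it from the number of words in r.
    """
    rng = random.Random(seed)
    response = rng.choice(training_responses).split()
    donor = rng.choice(training_responses).split()
    span = max(1, len(response) // 2)
    start = rng.randrange(len(response) - span + 1)
    donor_start = rng.randrange(max(1, len(donor) - span + 1))
    response[start:start + span] = donor[donor_start:donor_start + span]
    return " ".join(response)
```

Because the sampled response ignores the dialogue history and the splice breaks its meaning, workers who rate such turns as highly as genuine model turns can be flagged as inconsistent.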
Crowd-sourced workers converse with each model before rating conversation quality (model order is shuffled and blind). Statistical significance tests are then applied to the score distributions of each worker, comparing the ratings they attributed to the ordinary models, $m$ , with those attributed to the low-quality model, $l$ . The resulting $p$ -value is then employed as a means of rating worker consistency, and any worker with $p \geq 0.05$ shows no significant difference between low and ordinary model quality and is filtered out.

# 3.3 Calculating System-Level Scores

Scores are collected from workers who rate models on a 0-100 rating scale, and we refer to these scores as raw scores. Scores for negative attributes, i.e., robotic and repetitive, are then reversed for ease of further comparison: $100$ minus the original rating. A distribution of scores is extracted for each worker, and raw scores are standardized according to each worker's mean and standard deviation, in order to iron out any differences in worker scoring strategy.

Average standardized scores for each criterion are calculated, and an overall score is calculated as the average over all measurement criteria.

# 4 Meta-Evaluation

In order to assess the reliability of the proposed method of human evaluation, we carry out a meta-evaluation in which we firstly examine individual human assessor consistency, before conducting a self-replication experiment. A number of models are required to function as a sample set of test systems, and for this purpose we employ available pre-trained models from ParlAI: $^{4}$ Poly-Encoder Transformer (Humeau et al., 2019), Bi-Encoder Transformer (Dinan et al., 2018), Sequence to Sequence (Sutskever et al., 2014), Key-Value Memory Networks (Miller et al., 2016), and an LSTM-based Model (Hochreiter and Schmidhuber, 1997). Within the evaluation setting of ConvAI2, each model is equipped with a persona consisting of approximately five textual statements to emulate a personality.
However, to increase the number of models and to provide an interesting comparison, we additionally include a version of each of the above models without any persona, resulting in 10 competing models.

HITs were posted on the crowd-sourcing platform Amazon Mechanical Turk. $^{5}$ Firstly, and in order to evaluate the open-domain models in as realistic a setting as possible, we allow workers to choose the topic of conversation and input their chosen topic in a text field. The open nature of the conversations should be noted, however, as something that influences the difficulty of producing consistent results
| Topic | Workers Total | Workers Passed | Worker Pass Rate | Ave. Duration Passed (min) | Ave. Duration Failed (min) | Ave. Duration All (min) | Dialogues Total | Dialogues Passed | Dialogue Pass Rate |
|---|---|---|---|---|---|---|---|---|---|
| Free Run 1 | 249 | 173 | 69.5% | 6.53 | 7.04 | 6.68 | 1,525 | 1,075 | 70.5% |
| Free Run 2 | 248 | 139 | 56.0% | 6.87 | 7.58 | 7.18 | 1,480 | 838 | 56.6% |
| Ice-breaker | 248 | 171 | 69.0% | 6.60 | 6.70 | 6.63 | 1,450 | 1,030 | 71.0% |
in our self-replication experiment. The fact that we allow human assessors to freely choose the topic of conversation means that differences in ratings could result from legitimate differences in performance when different topics are chosen by human assessors. We nonetheless test our evaluation allowing the user to choose the topic, as this is part of our core aim of developing evaluation of dialogue truly in the open domain.

Besides choosing a topic, we additionally asked workers to input their opinion of the topic they chose to discuss with models, categorizing the topic as liked, ambivalent about it, or disliked. For example, if the topic they chose to discuss was dogs, we were curious to know if this was motivated by the fact that the worker liked or disliked dogs, or indeed that they had chosen to discuss something they had no particular feeling about. Table 2 shows the subsequent proportions (\%) of workers, and the detailed instructions are shown in Figure 5 in Appendix A.4. Perhaps unsurprisingly, the vast majority of workers chose to discuss something they liked (84\% of workers who passed quality control). Nonetheless, 7\% of good workers were ambivalent about the topic they chose and 9\% chose a topic they reported as disliking.

Table 1: Numbers of workers who took part in human evaluation of models, average time taken per dialogue in minutes (min), and total number of dialogues assessed before and after quality control, in the setting where workers freely chose the topic (Free run 1); where precisely the same experiment set-up was repeated (Free run 2); and where the topic was prescribed via an ice-breaker statement selected directly from the persona of the model (Ice-breaker).
| Topic Opinion | Free run 1: Pass | Free run 1: Fail | Free run 2: Pass | Free run 2: Fail |
|---|---|---|---|---|
| Like | 83.9 | 88.6 | 86.4 | 93.8 |
| Ambivalent | 7.4 | 3.8 | 6.2 | 2.3 |
| Dislike | 8.7 | 7.7 | 7.4 | 3.9 |
Table 2: Proportions $(\%)$ of topics that are reported as liked, ambivalent about, or disliked by workers who passed and failed quality control.

Table 1 shows the number of workers who participated in the initial data collection run, in which workers freely chose the topic of conversation with models (Free run 1), amounting to 1,525 dialogues $\times$ 7 criteria $= 10,675$ human ratings. The details of payment to each worker and the total experiment cost are provided in Appendix A.2. Table 1 also shows the proportion of workers who passed quality checks, the numbers of dialogues assessed in total before and after quality filtering, as well as the average time taken for workers to complete a HIT and the average time taken to assess dialogues. As mentioned previously, we carry out a second data collection run with precisely the same settings (Free run 2) to measure the reliability of results, and Table 1 shows the equivalent statistics for Free run 2, in which a total of 1,480 dialogues $\times 7$ ratings $= 10,360$ human ratings were collected.

# 4.1 Human Assessor Consistency

Although the overall aim of our evaluation is to produce reliable results at the system level, which we test later in Section 4.2, we firstly examine the ratings of workers at the level of individual dialogue ratings. Technically speaking, the most meaningful reliability measures for continuous rating scales test the consistency of aggregate (system-level) results: although a high level of random error is expected in individual continuous rating scale scores, when aggregates are calculated over large samples of ratings, positive and negative error that is truly random effectively cancels itself out and does not negatively impact consistency. In other words, the rating scale we employ does not rely on consistency at the level of individual ratings.
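The per-worker standardization of Section 3.3 and the consistency checks used throughout this meta-evaluation reduce to a few lines. A self-contained sketch follows; the ratings and score values are illustrative, not the paper's, and `statistics.stdev` is the sample standard deviation, which affects only scale, not ranking:

```python
from math import sqrt
from statistics import mean, stdev

NEGATIVE = {"robotic", "repetitive"}  # reversed (100 - raw) before standardizing

def standardize(worker_ratings):
    """Z-standardize one worker's raw 0-100 ratings (Section 3.3).
    `worker_ratings` maps (model, criterion) -> raw score."""
    adj = {k: 100 - v if k[1] in NEGATIVE else v for k, v in worker_ratings.items()}
    mu, sd = mean(adj.values()), stdev(adj.values())
    return {k: (v - mu) / sd for k, v in adj.items()}

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

# One worker's raw ratings: a harsh or lenient offset cancels out after z-scoring.
z = standardize({("A", "fun"): 80, ("A", "robotic"): 30,
                 ("B", "fun"): 40, ("B", "robotic"): 75})

# System-level consistency across two runs = Pearson r over per-system averages.
run1 = [0.53, 0.42, 0.32, 0.26, -0.22]  # illustrative overall scores, run 1
run2 = [0.55, 0.38, 0.42, 0.32, -0.20]  # illustrative overall scores, run 2
print(round(pearson(run1, run2), 2))  # → 0.98
```

The same `pearson` computation underlies both the worker-pair agreement examined next and the self-replication correlations reported at the system level.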
We nonetheless examine individual rater consistency, since it is the standard approach, keeping in mind that results in this part of our meta-evaluation are not crucial when testing reliability for an evaluation carried out via a continuous rating scale, where consistency in overall system-level results is what matters most.

The distribution of Pearson correlation coefficients for pairs of workers who assessed the same HIT is depicted in Figure 2.

As can be seen from Figure 2, the likelihood
| Model | $n$ | Overall | Interesting | Fun | Consistent | Fluent | Topic | Robotic | Repetitive |
|---|---|---|---|---|---|---|---|---|---|
| A | 798 | 0.534 | 0.564 | 0.602 | 0.711 | 0.863 | 0.964 | -0.038 | 0.069 |
| B | 798 | 0.419 | 0.474 | 0.481 | 0.614 | 0.875 | 0.994 | -0.431 | -0.075 |
| A$_p$ | 707 | 0.318 | 0.399 | 0.372 | 0.443 | 0.821 | 0.404 | -0.330 | 0.116 |
| C | 791 | 0.262 | 0.491 | 0.379 | 0.028 | 0.636 | -0.066 | -0.316 | 0.680 |
| C$_p$ | 714 | 0.189 | 0.409 | 0.373 | 0.159 | 0.672 | -0.114 | -0.521 | 0.349 |
| B$_p$ | 707 | 0.173 | 0.230 | 0.197 | 0.369 | 0.673 | 0.320 | -0.395 | -0.187 |
| D | 707 | -0.087 | -0.190 | -0.208 | 0.166 | 0.311 | 0.401 | -0.637 | -0.449 |
| D$_p$ | 798 | -0.201 | -0.308 | -0.234 | 0.092 | 0.312 | 0.025 | -0.625 | -0.669 |
| E$_p$ | 763 | -0.217 | -0.181 | -0.201 | -0.196 | 0.380 | -0.455 | -0.605 | -0.264 |
| E | 742 | -0.243 | -0.165 | -0.160 | -0.142 | 0.329 | -0.407 | -0.745 | -0.411 |
| $r$ | — | 0.969 | 0.952 | 0.927 | 0.899 | 0.960 | 0.951 | 0.646 | 0.936 |
Table 3: Average standardized scores for models in the initial data collection run, in which workers were free to choose the topic of conversation (Free run 1); the correlation $(r)$ between system scores in this and a second, distinct data collection run; where A=Bi-Encoder Transformer, B=Poly-Encoder Transformer, C=Key-Value Memory Network, D=Sequence to Sequence, and E=LSTM-based Model; models subscripted with $p$ include a persona; scores for robotic and repetitive have been reversed; $n$ is the number of ratings; models are ordered by overall average score.

![](images/d05e1db6ab751cb802565da67b753e3b7e10ce6d5bcdfd834748264f60f8e488.jpg)
Figure 2: Agreement between pairs of human assessors as measured by the Pearson correlation $(r)$ of ratings provided by workers who passed (blue) and failed quality control (orange).

of agreement between pairs of workers who failed quality control is close to random, as the distribution approaches uniformity across almost the full range of possible coefficients. In contrast, for pairs of workers who pass quality control, the peak of agreement lies between an $r$ of 0.6 and 0.7, showing high agreement in general between such annotator pairs.

Some of the observed disagreement is, however, likely to be the result of legitimate differences between scores of two workers who chose distinct topics to discuss with the same model, an unavoidable source of inconsistency when testing models in the open domain. Interestingly, in $5\%$ of dialogues, worker pairs assigned the same HIT happened to both freely choose an identical topic to discuss with the same model. Furthermore, remaining disagreement at the level of individual ratings might not be problematic at the level of overall scores, given the aggregation of ratings collected on a continuous rating scale.

# 4.2 System-level Consistency

Table 3 shows results of the system-level evaluation resulting from the initial data collection run on Mechanical Turk (Free run 1), where competing models are ordered by highest overall average z-score.

Table 3 additionally shows the consistency of the evaluation between the two experiment runs via the Pearson correlation of scores for each measurement criterion, as well as consistency overall. Across

| Run | Model | n | Overall | Interesting | Fun | Consistent | Fluent | Topic | Robotic | Repetitive |
|---|---|---|---|---|---|---|---|---|---|---|
| Ice-breaker | A | 721 | 0.552 | 0.565 | 0.527 | 0.873 | 1.018 | 1.011 | -0.287 | 0.156 |
| | Ap | 742 | 0.422 | 0.589 | 0.560 | 0.518 | 0.718 | 0.527 | 0.009 | 0.034 |
| | B | 721 | 0.376 | 0.379 | 0.340 | 0.634 | 0.769 | 0.820 | -0.221 | -0.087 |
| | C | 784 | 0.322 | 0.615 | 0.537 | 0.190 | 0.631 | 0.061 | -0.344 | 0.565 |
| | Bp | 658 | 0.273 | 0.406 | 0.340 | 0.414 | 0.633 | 0.423 | -0.369 | 0.063 |
| | Cp | 700 | 0.222 | 0.402 | 0.337 | 0.089 | 0.654 | -0.068 | -0.376 | 0.514 |
| | D | 728 | -0.139 | -0.277 | -0.204 | 0.123 | 0.349 | 0.295 | -0.638 | -0.620 |
| | Ep | 714 | -0.198 | -0.172 | -0.203 | -0.054 | 0.316 | -0.343 | -0.533 | -0.396 |
| | E | 721 | -0.240 | -0.125 | -0.161 | -0.196 | 0.318 | -0.393 | -0.631 | -0.489 |
| | Dp | 721 | -0.267 | -0.426 | -0.402 | -0.011 | 0.234 | 0.000 | -0.628 | -0.636 |
| | r | - | 0.984 | 0.967 | 0.944 | 0.958 | 0.951 | 0.981 | 0.715 | 0.950 |


Table 4: Average standardized scores for models in the human evaluation where workers were prescribed an ice-breaker topic of conversation sampled from the persona of the model; the correlation $(r)$ between these scores and Free run 1 in Table 3; model labels are consistent with Table 3; $n$ is the number of ratings; models without subscript $p$ did not have a persona (the ice-breaker statement was consequently unknown to these models).

![](images/3af7a8227c480414c7f55ae0f5c0e3a2f7fbd39c77490132cf2457645565b2dc.jpg)
Figure 3: Pairwise significance test results for systems concluded from Free Run 1, where a colored cell indicates that the system in that row significantly outperformed the system in that column. Models are consistent with Table 3.

the board, consistency is very high, exceeding a correlation of 0.94 in almost all cases, with the exception of robotic, which nonetheless achieved a correlation of over 0.7. Besides individual criteria, of crucial importance is the consistency of overall results, as this is the means by which models would ordinarily be ranked in terms of overall performance. As can be observed from Table 3, the correlation in terms of overall scores for systems is 0.969, very close to a perfect correlation, showing extremely high reliability for the evaluation and providing evidence that the approach overcomes substantial challenges with respect to annotator consistency and the expected difficulties of evaluating models in the open domain, where assessors are legitimately free to choose distinct topics of conversation.

In any empirical evaluation, statistical significance tests should be applied to take into account the fact that small differences in scores between systems can occur simply by chance.
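One such test applicable to rating distributions is the Wilcoxon rank-sum (Mann–Whitney) test. Below is a minimal, stdlib-only sketch using its normal approximation (with average ranks for ties and no tie correction in $\sigma$); `rank_sum_p` is our illustrative helper name, not the paper's implementation:

```python
from math import erf, sqrt

def rank_sum_p(a, b):
    """Approximate two-sided p-value of the Wilcoxon rank-sum test for the
    null hypothesis that rating samples a and b share a distribution."""
    pooled = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):                   # assign average ranks to ties
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1           # 1-based average rank
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(ranks[:n1])                      # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2              # E[W] under the null
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    # two-sided p-value via the standard normal CDF
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
```

In practice a library routine such as `scipy.stats.ranksums` would be used; the sketch only illustrates the statistic underlying the pairwise comparisons.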
We provide pairwise significance test results in Figure 3, where we apply a standard significance test, the Wilcoxon rank-sum test, to the rating distributions of each pair of competing models in each data collection run; corresponding results for run 2 are shown in Figure 6 in Appendix A.4. Results show a very high proportion of identical conclusions, $84\%$, drawn from pairwise significance tests applied to data from the two data collection runs at $p < 0.1$. Results for $p < 0.05$ additionally show high correspondence between pairwise significance test conclusions, only marginally lower, with $82\%$ of the same conclusions drawn for pairs of models in the two data collection runs. We additionally provide correlations between measurement criteria and overall scores in Table 8 of Appendix A.4.

# 5 Persona Contribution to System Performance

Since we have verified the reliability of the human evaluation, we take a closer look at the results and investigate dialogue quality when models employ a persona. Results in Table 3 reveal that, perhaps unexpectedly, models in general are either rated more favorably by human assessors when they carry out dialogues without a persona, or tie with their persona-equipped counterparts.

# 6 Evaluating with Prescribed Topics

In contrast to the initial experiment, in which workers were permitted to choose the topic of conversation, we further investigate the performance of models in a slightly easier setting where the topic under discussion is known to the model: a statement selected from its persona, which we refer to as an ice-breaker topic statement. This ice-breaker topic statement, a randomly selected persona statement belonging to the agent, is provided to human assessors at the beginning of each conversation, and the assessor is instructed to talk about this topic with the model.
Again, we run this experiment on MTurk, this time contrasting results for our initial data collection run, where workers freely chose a topic, with one in which workers were instructed to talk about the ice-breaker statement with models.

Numbers of workers who participated in the Ice-breaker run are provided in Table 1, while a breakdown of results for each model and overall average scores is shown in Table 4, along with the correlation with scores for systems when a topic is freely chosen. Interestingly, in terms of absolute differences in raw scores, the best performing model achieves higher fluency and consistency and is deemed less repetitive when evaluated in ice-breaker conversations compared to those with freely chosen topics. Raw average scores for models in the Ice-breaker run are additionally provided in Table 11 in Appendix A.4. Relatively speaking, in terms of system rankings, no meaningful difference in relative performance is observed between the scenario where the worker chooses a topic and the one where a topic is prescribed via an ice-breaker statement, as can be seen from the strong correlation between scores for models in Free run 1 and the Ice-breaker evaluation shown in

Table 4. Additionally, significance test results for the Ice-breaker evaluation are provided in Figure 7 in Appendix A.4.

# 7 Comparison with Automatic Evaluation Metrics

# 7.1 Word-overlap-based Metrics

In this experiment, we employed four prevailing word-overlap-based metrics, described in the following, whose scores are computed on the ConvAI2 test set.

BLEU BLEU (Bilingual Evaluation Understudy) evaluates the quality of a system output by computing n-gram precision with respect to human-generated references (Papineni et al., 2002). It also applies a brevity penalty to penalize short outputs.

ROUGE-L ROUGE (Recall-Oriented Understudy for Gisting Evaluation) is a recall-oriented adaptation of BLEU, whose most widely applied variant is ROUGE-L (Lin and Hovy, 2003).
It computes precision and recall using the longest common subsequence (LCS) instead of n-grams, and the F1 score of this precision and recall is reported as the final score.

METEOR METEOR (Metric for Evaluation of Translation with Explicit ORdering) was first proposed to overcome flaws of BLEU, such as its lack of recall (Denkowski and Lavie, 2011). It computes unigram precision and recall, and uses a different mechanism for the brevity penalty.

GLEU GLEU (Google-BLEU) is a variant of BLEU (Wu et al., 2016) which computes n-gram precision and recall instead of precision alone. The minimum of precision and recall is reported as the final GLEU score.

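The GLEU and ROUGE-L computations described above can be sketched as follows. This is a minimal sentence-level illustration with our own function names; real evaluations tokenize properly and aggregate at the corpus level:

```python
from collections import Counter

def _ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_gleu(hyp, ref, max_n=4):
    """GLEU: over all n-grams up to max_n, compute the overlap-based
    precision and recall, and return their minimum."""
    match = hyp_total = ref_total = 0
    for n in range(1, max_n + 1):
        h, r = _ngrams(hyp, n), _ngrams(ref, n)
        match += sum((h & r).values())        # clipped n-gram overlap
        hyp_total += sum(h.values())
        ref_total += sum(r.values())
    if hyp_total == 0 or ref_total == 0:
        return 0.0
    return min(match / hyp_total, match / ref_total)

def rouge_l_f1(hyp, ref):
    """ROUGE-L: F1 of precision/recall derived from the LCS length."""
    m, n = len(hyp), len(ref)
    dp = [[0] * (n + 1) for _ in range(m + 1)]   # LCS dynamic program
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if hyp[i] == ref[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    lcs = dp[m][n]
    if lcs == 0:
        return 0.0
    p, r = lcs / m, lcs / n
    return 2 * p * r / (p + r)
```

For identical hypothesis and reference both scores are 1.0; disjoint token sequences score 0.0.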

| Metric | r |
|---|---|
| BLEU-4 | -0.883 |
| BLEU-1 | -0.707 |
| ROUGE-L | -0.799 |
| METEOR | -0.321 |
| GLEU | -0.816 |

+ +Table 5: Pearson correlation $(r)$ of word-overlap metric scores and human evaluation. + +# 7.2 Reference-free Metrics + +The following introduces two reference-free automatic metrics we employed: FED and USR. Their + +

| | $\mathrm{FED}_m$ | $\mathrm{FED}_l$ | USR | USR-MLM | USR-DR(c) | USR-DR(f) |
|---|---|---|---|---|---|---|
| Overall | 0.590 | 0.530 | -0.230 | -0.419 | 0.046 | 0.205 |
| Interesting | 0.028 | -0.042 | -0.451 | -0.235 | -0.238 | -0.081 |
| Fun | -0.339 | 0.115 | -0.378 | -0.319 | -0.131 | 0.032 |
| Consistent | 0.236 | 0.227 | 0.214 | -0.620 | 0.518 | 0.652 |
| Fluent | -0.138 | -0.054 | -0.227 | -0.374 | 0.028 | 0.151 |
| Robotic | 0.528 | 0.461 | -0.070 | -0.290 | 0.106 | 0.191 |
| Repetitive | 0.841 | 0.752 | -0.713 | 0.182 | -0.690 | -0.568 |
| Topic | 0.046 | 0.004 | 0.222 | -0.754 | 0.606 | 0.746 |


Table 6: Pearson correlation $(r)$ of reference-free metric scores and human evaluation, where $\mathrm{FED}_m$ and $\mathrm{FED}_l$ respectively use medium and large DialoGPT; USR is the overall USR score, computed from three sub-metrics: USR-MLM, USR-DR(c) and USR-DR(f).

scores are computed using the conversations collected in our experiment.

FED FED (Fine-grained Evaluation of Dialog) is a pre-trained-model-based metric for evaluating a textual conversation history (Mehri and Eskenazi, 2020a). Given a conversation $c$, a pre-trained model $m$, and two predefined responses $r_p$ and $r_n$ ($p = \text{positive}$ and $n = \text{negative}$), the FED score is $\mathcal{L}_m(r_p|c) - \mathcal{L}_m(r_n|c)$, where $\mathcal{L}_m(r|c)$ is the likelihood that the model $m$ generates response $r$ given conversation $c$. We employed medium and large DialoGPT (Zhang et al., 2020) as FED scorers; the full list of predefined positive and negative responses is available in Table 7 in the Appendix.

USR USR (an UnSupervised Reference-free metric) uses the pre-trained model RoBERTa (Liu et al., 2019) to assess the quality of a conversation (Mehri and Eskenazi, 2020b). It consists of three sub-metrics: USR-MLM evaluates understandability and naturalness, while USR-DR(c) and USR-DR(f) evaluate interestingness and consistency. The sub-metric scores are then combined into an overall score through a regression model.

# 7.3 Correlation between Automatic Metrics and Human Evaluation

We compute the correlation between commonly applied automatic metrics and our human evaluation, covering both word-overlap-based metrics and reference-free metrics, as shown in Tables 5 and 6 respectively.

As can be seen from Table 5, unfortunately no word-overlap metric achieves a strong positive correlation with human assessment, confirming once again the invalidity of system rankings currently produced by automatic metric scores.

In terms of reference-free metrics, results correspond better and are more encouraging. FED is able to distinguish "repetitive" models, but for the other criteria it correlates weakly or even negatively with human judgments. Meanwhile, although the overall USR score correlates only marginally with human judgments on consistency and topic loyalty, USR-DR(f) correlates most closely with humans among the three sub-metrics, and it performs best on evaluating consistency and topic loyalty.

# 8 Conclusion

Development of reliable evaluation for open-domain dialogue has been highlighted as a known open problem. We overcome previous challenges and provide a new human evaluation methodology shown to be highly consistent, with results for models correlating at $r = 0.969$ across two separate data collection runs. Our evaluation has the advantages of highly accurate crowd-sourcing quality control, score standardization that irons out differences in individual scoring strategies, and applicability of standard significance testing, all of which increase the reliability of results.

# Acknowledgments

Support was provided by Noah's Ark Lab, Huawei, and Science Foundation Ireland in the ADAPT Centre for Digital Content Technology (www.adaptcentre.ie) at Trinity College Dublin and Dublin City University funded under the SFI Research Centres Programme (Grants 13/RC/2106_P2; 13/RC/2106) co-funded under the European Regional Development Fund.

# References

Ralph A Alexander. 1990. A note on averaging correlations. Bulletin of the Psychonomic Society, 28(4):335-336.
Aliosha Alexandrov. 2010. Characteristics of single-item measures in likert scale format. The Electronic Journal of Business Research Methods, 8(1):1-12.
Loic Barrault, Magdalena Biesialska, Ondrej Bojar, Marta R.
Costa-jussa, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Matthias Huck, Eric Joanis, Tom Kocmi, Philipp Koehn, Chi-kiu Lo, Nikola Ljubesic, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshiaki Nakazawa, Santanu Pal, Matt Post, and Marcos Zampieri. 2020. Findings of the 2020 conference on machine translation (wmt20). In Proceedings of the Fifth Conference on Machine Translation, pages 1-54, Online. Association for Computational Linguistics. +Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Ales Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 12-58, Baltimore, Maryland, USA. Association for Computational Linguistics. +Ondrej Bojar, Christian Buck, Chris Callison-Burch, Christian Federmann, Barry Haddow, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2013. Findings of the 2013 Workshop on Statistical Machine Translation. In Proceedings of the Eighth Workshop on Statistical Machine Translation, pages 1-44, Sofia, Bulgaria. Association for Computational Linguistics. +Chris Callison-Burch, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2012. Findings of the 2012 workshop on statistical machine translation. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 10-51, Montreal, Canada. Association for Computational Linguistics. +Chris Callison-Burch, Philipp Koehn, Christof Monz, and Omar Zaidan. 2011. Findings of the 2011 workshop on statistical machine translation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 22-64, Edinburgh, Scotland. Association for Computational Linguistics. +M. Denkowski and A. Lavie. 2011. 
Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 85-91. Association for Computational Linguistics. +Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander H. Miller, Kurt Shuster, Jack Urbanek, + +Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, Shrimai Prabhumoye, Alan W. Black, Alexander I. Rudnicky, Jason Williams, Joelle Pineau, Mikhail Burtsev, and Jason Weston. 2019. The second conversational intelligence challenge (convai2). CoRR, abs/1902.00098. +Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of wikipedia: Knowledge-powered conversational agents. CoRR, abs/1811.01241. +Sarah E. Finch and Jinho D. Choi. 2020. Towards unified dialogue system evaluation: A comprehensive analysis of current evaluation protocols. In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 236-245, 1st virtual meeting. Association for Computational Linguistics. +Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2013. Crowd-sourcing of human judgments of machine translation fluency. In Proceedings of the Australasian Language Technology Association Workshop 2013 (ALTA 2013), pages 16-24, Brisbane, Australia. +Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735-1780. +David M. Howcroft, Anya Belz, Miruna-Adriana Clinciu, Dimitra Gkatzia, Sadid A. Hasan, Saad Mahamood, Simon Mille, Emiel van Miltenburg, Sashank Santhanam, and Verena Rieser. 2020. Twenty years of confusion in human evaluation: NLG needs evaluation sheets and standardised definitions. In Proceedings of the 13th International Conference on Natural Language Generation, pages 169-182, Dublin, Ireland. Association for Computational Linguistics. +Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2019. 
Poly-encoders: Transformer architectures and pre-training strategies for fast and accurate multi-sentence scoring. CoRR, abs/1905.01969.
Walter S. Lasecki, Jaime Teevan, and Ece Kamar. 2014. Information extraction and manipulation threats in crowd-powered systems. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, CSCW '14, page 248-256, New York, NY, USA. Association for Computing Machinery.
Margaret Li, Jason Weston, and Stephen Roller. 2019. Acute-eval: Improved dialogue evaluation with optimized questions and multi-turn comparisons. arXiv preprint arXiv:1909.03087.
Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, pages 71-78. Association for Computational Linguistics.
Qiang Liu, Alexander T Ihler, and Mark Steyvers. 2013. Scoring workers in crowdsourcing: How many control questions are enough? In Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
Anastassia Loukina, Nitin Madnani, Aoife Cahill, Lili Yao, Matthew S. Johnson, Brian Riordan, and Daniel F. McCaffrey. 2020. Using PRMSE to evaluate automated scoring systems in the presence of label noise. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 18-29, Seattle, WA, USA $\rightarrow$ Online. Association for Computational Linguistics.
Shikib Mehri and Maxine Eskenazi. 2020a. Unsupervised evaluation of interactive dialog with DialoGPT.
In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 225-235, 1st virtual meeting. Association for Computational Linguistics. +Shikib Mehri and Maxine Eskenazi. 2020b. USR: An unsupervised and reference free evaluation metric for dialog generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 681-707, Online. Association for Computational Linguistics. +Simon Mille, Anya Belz, Bernd Bohnet, Thiago Castro Ferreira, Yvette Graham, and Leo Wanner. 2020. The third multilingual surface realisation shared task (SR'20): Overview and evaluation results. In Proceedings of the Third Workshop on Multilingual Surface Realisation, pages 1-20, Barcelona, Spain (Online). Association for Computational Linguistics. +Alexander H. Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. CoRR, abs/1606.03126. +Jekaterina Novikova, Ondrej Dušek, and Verena Rieser. 2018. RankME: Reliable human ratings for natural language generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 72-78, New Orleans, Louisiana. Association for Computational Linguistics. +Bo Pang, Erik Nijkamp, Wenjuan Han, Linqi Zhou, Yixian Liu, and Kewei Tu. 2020. Towards holistic and automatic evaluation of open-domain dialogue generation. In Proceedings of the 58th Annual Meeting of + +the Association for Computational Linguistics, pages 3619-3629, Online. Association for Computational Linguistics. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei Jing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, pages 311-318, Stroudsburg, PA, USA. 
Association for Computational Linguistics.
Ashwin Ram, Rohit Prasad, Chandra Khatri, Anu Venkatesh, Raefer Gabriel, Qing Liu, Jeff Nunn, Behnam Hedayatnia, Ming Cheng, Ashish Nagar, Eric King, Kate Bland, Amanda Wartick, Yi Pan, Han Song, Sk Jayadevan, Gene Hwang, and Art Pettigre. 2018. Conversational AI: the science behind the alexa prize. CoRR, abs/1801.03604.
Sashank Santhanam, Alireza Karduni, and Samira Shaikh. 2020. Studying the effects of cognitive biases in evaluation of conversational agents. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pages 1-13.
Sashank Santhanam and Samira Shaikh. 2019. Towards best experiment design for evaluating dialogue system output. In Proceedings of the 12th International Conference on Natural Language Generation, pages 88-94, Tokyo, Japan. Association for Computational Linguistics.
Ionut Sorodoc, Jey Han Lau, Nikolaos Aletras, and Timothy Baldwin. 2017. Multimodal topic labelling. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 701-706, Valencia, Spain. Association for Computational Linguistics.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14, page 3104-3112, Cambridge, MA, USA. MIT Press.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation.
CoRR, abs/1609.08144.
Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. DIALOGPT: Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 270-278, Online. Association for Computational Linguistics.

# A Appendix

# A.1 Further Details of Meaning Distortion Degradation Procedure

To distort the meaning of responses in our quality-control degraded-performance model, a sequence of words of length $r$ is first selected from the response of length $n$ and replaced with a distinct sequence of words, also of length $r$, randomly selected from a distinct dialogue in the training set. The position of the replaced word sequence is also random, except that for responses of length $n \geq 3$ the replaced sequence does not include the response-initial or response-final words:

- for $1 \leq n \leq 3$: $r$ is 1 word;
- for $4 \leq n \leq 5$: $r$ is 2 words;
- for $6 \leq n \leq 8$: $r$ is 3 words;
- for $9 \leq n \leq 15$: $r$ is 4 words;
- for $16 \leq n \leq 29$: $r$ is 5 words;
- for $n \geq 30$: $r$ is $\lfloor n / 5 \rfloor$ words.

# A.2 Worker Payment

Each worker was paid 0.99 USD per HIT consisting of 6 conversations. The total cost of one run of our evaluation did not exceed 250 USD, or 25 USD per model. Note that the quality control method we applied for removing unreliable data is not the criterion for deciding worker payment; a worker whose data is filtered out still gets paid.

# A.3 Automatic Metrics

| | Positive | Negative |
|---|---|---|
| Interesting | Wow that is really interesting.<br>That's really interesting!<br>Cool! That sounds super interesting. | That's not very interesting.<br>That's really boring.<br>That was a really boring response. |
| Fun | Wow that is very fun.<br>Chat with you is enjoyable.<br>You are fun. | That's not very fun.<br>I am not having fun. |
| Consistent | - | That's not what you said earlier!<br>Stop contradicting yourself! |
| Fluent | That makes sense!<br>You have a good point. | Is that real English?<br>I'm so confused right now!<br>That makes no sense! |
| Topic | - | Stop changing the topic so much.<br>Don't change the topic! |
| Robotic | - | You are robot.<br>You do not sound like a person. |
| Repetitive | - | Stop saying the same thing repeatedly.<br>Why are you repeating yourself?<br>Stop repeating yourself! |

Table 7: Positive and negative utterances employed for the FED metric. For criteria that are available in the original FED (Interesting, Consistent, Fluent, Topic and Repetitive), we use their utterances off-the-shelf. In addition, we adapt the utterances for the criteria Fun and Robotic.

# A.4 Additional Experiment Details and Results

We additionally provide a comparison of the conversations collected in our live human evaluation and those of ConvAI2 in Figures 8, 9, 10, and 11, as well as the median number of words and characters for conversations and inputs in Table 12, showing how the challenges reported in the ConvAI2 live evaluation with respect to workers producing very short conversations are overcome by our quality control technique.

# A.5 User Interface

Figures 12, 13, 15, 16 and 17 show the user interfaces designed for the experiments of our human evaluation method. In addition, Figure 18 provides an example of a live chat between a worker and a chatbot.

| Normal | Degraded |
|---|---|
| User: Hello, how are you? | User: How are you? |
| Bot: i am good , a little tired but good ! | Bot: I do not have a job. I have broken have a job. |
| User: I am tired as well, but good too | User: Sorry about that? |
| Bot: i am tired too , but from being out all day . | Bot: Thanks. I the love watching old. |
| User: What do you like doing? | User: Tell me about it? |
| Bot: i enjoy movies , reading books , and even just doing nothing | Bot: that is cool, do you i supposed to I you travel? |
| User: I love reading books | User: I do not like travelling |
| Bot: i enjoy reading too ! what is your favorite book ? | Bot: I teach meet you, sorryy elementary school. |
| User: I love the novel "voice of the heart" | User: Okay, do you like to swim? |
| Bot: i have been wanting to read that , i love reading too | Bot: do you live sports are u skin am in a house. |
| User: You should read too | User: I love sports, especially soccer |
| Bot: i do love to read when i have time | Bot: I like but ishuold go, books |
| User: What book was the last book you read? | User: How often do you read? |
| Bot: the notebook , a reread lol | Bot: i are your think ur i am a pilot , so i do not have much time . |
| User: I haven't read that | User: I envy you, I would love to be a pilot |
| Bot: i read it yesterday . what do you do for fun ? | Bot: I i am 4 felicia! |
| User: I love going hiking | User: I am Ben |
| Bot: i have not gone hiking a while | Bot: Oh, how nice. do you do for you have? |
| User: I think you should | User: I do not understand |
| Bot: i should | Bot: I is your favorite don't I am sure you will find out soon. |

Figure 4: A typical human-model conversation (left) and a human conversation with a quality-degraded model (right), where the random-response and meaning-distortion techniques have been applied to degrade model responses.

1. Your task is to have 6 conversations with a chatbot, and a different chatbot will talk to you in each conversation.
2. Before each conversation, you should think of a topic to talk about with the chatbot (your choice of topic). You will be asked to enter this topic before the conversation starts.
3. The current topic will be displayed to you throughout the conversation.
4. If the chatbot changes the topic to a new one, you should record this by updating the conversation topic using the Topic button (bottom left).
5. You are also allowed to change the topic; you should use the same button to do this (bottom left).
6. At the end of each conversation, you should tell us what you think about the chatbot.
7. In each conversation, you should type in a minimum of 10 inputs/sentences.
8. The purpose of these HITs where you will generate conversations with chatbots is to test how realistic their conversations are with users. In order for your data to be useful to us we require that your half of the conversation is also realistic. For example, your data will not be useful to us if you do the following: "User: Hi / Bot: Hi / User: Hi / Bot: Hi" ... and so on.
9. Another example, if you are too repetitive or your responses are not appropriate given what the chatbot has just said, this will not be a useful test for them. For example, the following conversation is not ok: "User: Hi / Bot: Hi / User: wow (not appropriate response) / Bot: I saw a good movie last night / User: wow (repetitive) / Bot: Do you like football? / User: I have two children and one dog. (not appropriate response)" ... and so on.
10. We need realistic conversations, so please do your best to talk to the bot as if the bot was another person you actually want to talk to. Obvious attempts to game the process and ones that don't make a real effort will unfortunately be rejected.
11. The chatbot may take a few seconds to respond, please be patient.
12. Please use Chrome and avoid special symbols if possible.
13. There is a feedback box at the end of the HIT. If you encounter any problems, please enter them in this box or email our MTurk account.
+ +Figure 5: Instructions shown to Mechanical Turk workers before starting the open-domain dialogue human evaluation. + +

| | Overall | Interesting | Fun | Consistent | Fluent | Topic | Robotic | Repetitive |
|---|---|---|---|---|---|---|---|---|
| Overall | - | 0.959 | 0.976 | 0.861 | 0.966 | 0.796 | 0.916 | 0.674 |
| Interesting | 0.927 | - | 0.992 | 0.691 | 0.949 | 0.599 | 0.875 | 0.840 |
| Fun | 0.903 | 0.988 | - | 0.753 | 0.961 | 0.660 | 0.889 | 0.783 |
| Consistent | 0.842 | 0.673 | 0.636 | - | 0.811 | 0.969 | 0.770 | 0.210 |
| Fluent | 0.879 | 0.939 | 0.915 | 0.648 | - | 0.724 | 0.857 | 0.667 |
| Topic | 0.745 | 0.552 | 0.503 | 0.915 | 0.503 | - | 0.676 | 0.122 |
| Robotic | 0.867 | 0.830 | 0.782 | 0.648 | 0.867 | 0.491 | - | 0.642 |
| Repetitive | 0.673 | 0.770 | 0.782 | 0.261 | 0.770 | 0.055 | 0.758 | - |


Table 8: Correlation of each assessed criterion with the others when the human dialogue participant is allowed to freely choose a topic (run 1); correlations in the upper-right half are Pearson's $r$, while those in the lower-left half are Spearman correlation coefficients.

| Run | Model | n | Overall | Interesting | Fun | Consistent | Fluent | Topic | Robotic | Repetitive |
|---|---|---|---|---|---|---|---|---|---|---|
| Free Run 2 | A | 623 | 0.455 | 0.635 | 0.629 | 0.728 | 0.924 | 0.922 | -0.443 | -0.212 |
| | Ap | 539 | 0.423 | 0.747 | 0.763 | 0.555 | 0.728 | 0.474 | -0.348 | 0.040 |
| | B | 553 | 0.344 | 0.464 | 0.407 | 0.554 | 0.763 | 0.822 | -0.338 | -0.266 |
| | Bp | 630 | 0.260 | 0.464 | 0.372 | 0.560 | 0.581 | 0.496 | -0.412 | -0.238 |
| | C | 539 | 0.245 | 0.576 | 0.492 | 0.229 | 0.585 | 0.043 | -0.545 | 0.337 |
| | Cp | 609 | 0.154 | 0.453 | 0.390 | 0.027 | 0.544 | -0.200 | -0.515 | 0.382 |
| | D | 595 | 0.002 | 0.009 | -0.064 | 0.389 | 0.282 | 0.656 | -0.720 | -0.541 |
| | E | 567 | -0.202 | -0.063 | -0.044 | -0.075 | 0.300 | -0.346 | -0.646 | -0.539 |
| | Ep | 511 | -0.218 | -0.152 | -0.143 | 0.043 | 0.426 | -0.352 | -0.702 | -0.646 |
| | Dp | 679 | -0.258 | -0.285 | -0.304 | 0.033 | 0.209 | -0.226 | -0.550 | -0.683 |
| | r | - | 0.969 | 0.952 | 0.927 | 0.899 | 0.960 | 0.951 | 0.646 | 0.936 |

Table 9: Average standardized scores for models in the secondary data collection run, where workers were free to choose the topic of conversation (Free Run 2); the correlation $(r)$ between system scores in the two distinct data collection runs; A=Bi-Encoder Transformer, B=Poly-Encoder Transformer, C=Key-Value Memory Network, D=Sequence to Sequence, and E=Language Model; models with subscript $p$ were provided with a persona; $n$ denotes the total number of ratings; scores for robotic and repetitive have been reversed; models are ordered by overall average score.

| Run | Model | n | Overall | Interesting | Fun | Consistent | Fluent | Topic | Robotic | Repetitive |
|---|---|---|---|---|---|---|---|---|---|---|
| Free run 1 | A | 798 | 52.49 | 53.03 | 54.07 | 58.12 | 61.78 | 65.24 | 35.73 | 39.47 |
| | B | 798 | 50.41 | 51.39 | 51.68 | 56.37 | 64.50 | 67.84 | 25.63 | 35.45 |
| | Ap | 707 | 45.53 | 47.38 | 46.23 | 48.52 | 60.17 | 47.50 | 28.30 | 40.62 |
| | C | 791 | 43.96 | 50.50 | 47.53 | 35.85 | 55.73 | 33.98 | 27.35 | 56.76 |
| | Cp | 714 | 41.21 | 47.13 | 46.26 | 39.25 | 55.05 | 32.07 | 21.85 | 46.84 |
| | Bp | 707 | 39.93 | 41.35 | 40.06 | 44.93 | 53.74 | 43.72 | 25.25 | 30.49 |
| | D | 707 | 33.71 | 30.28 | 29.95 | 41.72 | 45.92 | 49.07 | 17.30 | 21.72 |
| | Dp | 798 | 29.38 | 26.19 | 27.97 | 37.53 | 44.19 | 35.26 | 17.46 | 17.06 |
| | E | 742 | 28.99 | 30.75 | 30.65 | 31.27 | 46.42 | 23.60 | 15.10 | 25.13 |
| | Ep | 763 | 28.65 | 29.34 | 28.50 | 29.13 | 47.07 | 21.30 | 17.82 | 27.41 |
| Free Run 2 | A | 623 | 51.67 | 56.62 | 56.27 | 59.21 | 64.69 | 64.04 | 27.11 | 33.74 |
| | B | 539 | 49.07 | 52.42 | 50.66 | 54.88 | 60.86 | 63.73 | 29.57 | 31.38 |
| | Ap | 553 | 50.56 | 59.95 | 60.23 | 54.28 | 60.61 | 52.06 | 27.59 | 39.22 |
| | C | 630 | 45.87 | 55.60 | 53.02 | 45.16 | 54.70 | 38.72 | 24.40 | 49.50 |
| | Cp | 539 | 42.27 | 51.19 | 49.61 | 37.90 | 54.17 | 30.42 | 22.74 | 49.84 |
| | Bp | 609 | 46.71 | 51.92 | 49.95 | 54.62 | 56.01 | 52.85 | 28.48 | 33.10 |
| | D | 595 | 38.17 | 38.31 | 35.39 | 50.99 | 46.38 | 57.94 | 16.09 | 22.08 |
| | Dp | 567 | 30.89 | 31.07 | 30.37 | 38.37 | 44.64 | 31.47 | 21.85 | 18.48 |
| | E | 679 | 31.70 | 35.67 | 36.32 | 35.26 | 46.91 | 26.79 | 18.98 | 21.99 |
| | Ep | 511 | 31.66 | 33.63 | 33.26 | 38.77 | 51.53 | 26.99 | 17.63 | 19.79 |
| | r | - | 0.959 | 0.947 | 0.919 | 0.880 | 0.951 | 0.951 | 0.783 | 0.945 |

Table 10: Average raw Direct Assessment scores for each assessed dimension of a range of dialogue systems in two distinct data collection runs where workers were free to choose the topic (Free run 1; Free run 2), as well as the correlations between runs for each aspect; A=Bi-Encoder Transformer, B=Poly-Encoder Transformer, C=Key-Value Memory Network, D=Sequence to Sequence, and E=LSTM-based; models with subscript $p$ have a persona, while those without do not; $n$ denotes the total number of ratings; scores for robotic and repetitive have been reversed; models are ordered by overall average score.

| | Model | $n$ | Overall | Interesting | Fun | Consistent | Fluent | Topic | Robotic | Repetitive |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ice-breaker | A | 721 | 53.43 | 53.65 | 52.35 | 63.24 | 67.28 | 66.97 | 28.17 | 42.32 |
| | A$_p$ | 721 | 50.21 | 54.53 | 53.50 | 52.84 | 58.83 | 53.18 | 38.87 | 39.70 |
| | B | 742 | 49.55 | 49.23 | 47.76 | 57.79 | 60.64 | 62.22 | 32.56 | 36.65 |
| | C | 784 | 47.93 | 56.18 | 53.69 | 43.15 | 56.88 | 40.46 | 29.61 | 55.54 |
| | B$_p$ | 700 | 44.94 | 48.83 | 46.70 | 49.58 | 55.86 | 49.21 | 25.82 | 38.61 |
| | C$_p$ | 658 | 42.41 | 47.98 | 45.48 | 37.66 | 54.51 | 32.50 | 26.00 | 52.72 |
| | D | 728 | 35.14 | 30.32 | 33.13 | 42.90 | 49.92 | 48.51 | 20.11 | 21.09 |
| | E$_p$ | 721 | 31.58 | 31.73 | 30.82 | 35.44 | 47.12 | 27.06 | 21.90 | 26.97 |
| | E | 721 | 30.09 | 33.17 | 31.95 | 31.14 | 47.12 | 24.90 | 19.10 | 23.23 |
| | D$_p$ | 714 | 27.22 | 22.56 | 22.53 | 35.22 | 41.70 | 34.98 | 17.44 | 16.09 |
| | $r$ | - | 0.970 | 0.955 | 0.918 | 0.949 | 0.928 | 0.972 | 0.738 | 0.968 |
+ +Table 11: Average raw Direct Assessment scores when the topic is selected via an Ice-breaker statement drawn from the persona assigned to the model, as well as the correlation between Ice-breaker and freely chosen topic (Free run 1) scores; A=Bi-Encoder Transformer, B=Poly-Encoder Transformer, C=Key-Value Memory Network, D=Sequence to Sequence, and E=LSTM-based; systems with subscript $p$ have the persona available to the dialogue system; $n$ denotes the total number of ratings combined to produce each score; scores for robotic and repetitive have been reversed; models ordered by overall average score. + +![](images/61ff096f992a9d26396664f9f8ac14e1f8581919a5b16fe62f5da5cd91b7d206.jpg) +Figure 6: Pairwise significance test results for systems in Free Run 2, where a colored cell indicates that the system in that row significantly outperformed the system in that column. Models are consistent with Table 3. + +![](images/f9584bfdca61d2db587147806c1050ac34bdf1c84d4d38ab99a3698295b107e8.jpg) +Figure 7: Significance test results for Ice-breaker evaluation of models, where a darker colored cell indicates a stronger win in terms of statistical significance for the system in a given row over the system in a given column. Models are consistent with Table 3. + +
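Table 9 reports standardized rather than raw scores. A common practice in Direct Assessment (assumed here, not stated in this appendix) is to z-score each worker's 0-100 ratings against that worker's own mean and standard deviation before averaging per system, so that harsh and lenient raters become comparable. A minimal sketch with hypothetical ratings:

```python
import numpy as np

# Hypothetical raw ratings: worker id -> list of 0-100 slider scores.
ratings = {
    "worker_1": [80.0, 65.0, 90.0, 40.0],
    "worker_2": [55.0, 60.0, 50.0, 45.0],
}

def standardize(scores):
    """z-score one worker's ratings relative to their own mean/std."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()

# After standardization, each worker's scores have mean 0 and unit
# variance; per-system averages are then taken over these z-scores.
z = {w: standardize(s) for w, s in ratings.items()}
```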
| | | Passed QC | Failed QC | ConvAI2 |
| --- | --- | --- | --- | --- |
| Characters | Median in an Input | 27 | 22 | 16 |
| | Median in a Conversation | 249 | 188 | 105 |
| Words | Median in an Input | 8 | 6 | 4 |
| | Median in a Conversation | 63 | 48 | 28 |
+ +Table 12: Median numbers of words and characters in conversations and inputs provided by workers who passed quality control; failed quality control in our human evaluation; ConvAI2 live evaluation. + +![](images/ca8b6eed043d0343d6de7bbb2d6611526200b311ff29adf7e2eeb73c1663a3f6.jpg) +(a) Pass Quality Control + +![](images/cffebff63a384bd3855e2882293d3e77487054374c3781226c6dc8fd3c174362.jpg) +(b) Fail Quality Control + +![](images/9cf8e9ea3b110cb6ca649ea4db5ffed572063c1f68c6602f2c096d50d52fefff.jpg) +(c) ConvAI2 Live + +![](images/a1df663fbf46e7fd9bab322ecd21f56bae312097faa82ee4f9f3ff9f71630a4a.jpg) +Figure 8: Characters per conversation from workers who (a) passed quality control; (b) failed quality control in our human evaluation; (c) ConvAI2 live evaluation. +(a) Pass Quality Control + +![](images/1128041d9dd5ea6e7074ba91f3bad25a9b4a6a04688aa3076019b1402cad7f78.jpg) +(b) Fail Quality Control + +![](images/4841b88192143b24d88feff0a40acb2fc335cfa3c55010c30275fa60e8dc8242.jpg) +(c) ConvAI2 Live + +![](images/9e3bb66cd6dffa04a7ea4fc9bc75cff6c820bc28eb852f515f9ffa63b98c01ba.jpg) +(a) Pass Quality Control +Figure 10: Characters per input from workers who (a) passed quality control; (b) failed quality control in our human evaluation; (c) ConvAI2 live evaluation. + +![](images/de4fe8700c4ccdc4de75e0a3447590718b99440126109d593ae78c407fbdede0.jpg) +(b) Fail Quality Control + +![](images/2cecdc4c3817f8cbd3a54d78796b48450e44883621b711437effa28c109418f9.jpg) +Figure 9: Words per conversation from workers who (a) passed quality control; (b) failed quality control in our human evaluation; (c) ConvAI2 live evaluation. 
+(c) ConvAI2 Live + +![](images/62e72b84ba6e2aa07ce9dfa38081b26308a52d8c5ccf567c923c861ec65fa2cd.jpg) +(a) Pass Quality Control + +![](images/b757441e665319788a74df63adf116664437777bf15e9d5b1622060232ed0bcb.jpg) +(b) Fail Quality Control + +![](images/f656ae7544c2d4df30cac3886170da73a60826ec4472deb96ea75eb9cf430e61.jpg) +(c) ConvAI2 Live +Figure 11: Words per input from workers who (a) passed quality control; (b) failed quality control in our human evaluation; (c) ConvAI2 live evaluation. + +![](images/e6e5e19109c73b550b9f9736118d703b259383cd580ac0f695178fcf1e1bb8d1.jpg) +Figure 12: The user interface for workers to interact with a chatbot. + +![](images/b5decdcc8d28e1f04bb22b33b69b19a78edb6a1358a16020a8c2088e55717b46.jpg) +Figure 13: The popup window for the user to type a topic before the conversation starts. + +![](images/71693e337d487f3cfbc0bd776070481d600857dde83f6a1ce02a601859fe5c5c.jpg) +Figure 14: The popup window if the Topic button is clicked. + +You have completed conversations with 0 chatbots. + +# Not enough inputs yet! + +Please make sure that you have entered at least 10 inputs/sentences before going to the next chatbot, thanks! The number of inputs you've entered so far is displayed at the top of the screen. + +Close + +Please say how much you agree with each of the following statements: + +![](images/d3a297bada4c4605ec062a5e06ebda64b5a94332a3a29c7f8d533732f0e5d8f4.jpg) +Figure 15: The popup warning when a worker clicks the Next Chatbot button without enough inputs. +Figure 16: The interface shown to a worker to evaluate the conversation with a chatbot after clicking the Next Chatbot button in Figure 12. Once the evaluation of the current conversation is done, the worker should click the NEXT button to move to the next chatbot. If all conversations are completed, the worker will be redirected to end the entire HIT and leave feedback, as shown in Figure 17. 
+ +NEXT + +Submit + +![](images/1ee87e0f42fe2b575dec0f74eb0c1c42fe04d1b590a23c1d3f5da443af43dbba.jpg) +Figure 17: The interface shown to workers when a HIT is completed. Workers are welcome to leave their feedback on this page. +Figure 18: Screenshot of an example live chat between a Mechanical Turk worker and a chatbot in the human evaluation; the worker chose books as the conversation topic. \ No newline at end of file diff --git a/achievingreliablehumanassessmentofopendomaindialoguesystems/images.zip b/achievingreliablehumanassessmentofopendomaindialoguesystems/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ee4abd8325c43bfc584e41c72fb4dd5ca62b74c9 --- /dev/null +++ b/achievingreliablehumanassessmentofopendomaindialoguesystems/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b8f75d8cc57e62682b9ee3c53598c6656d7df6ecf64225de8e13afc792922436 +size 1696326 diff --git a/achievingreliablehumanassessmentofopendomaindialoguesystems/layout.json b/achievingreliablehumanassessmentofopendomaindialoguesystems/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..a50d75aec01115f0f6c19b02a8cedcb3edea825c --- /dev/null +++ b/achievingreliablehumanassessmentofopendomaindialoguesystems/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6f7a8272c1d75bba98aa2ee49b247e9d8fc6bdb29c14b6a1752c5d620e0deef +size 503706 diff --git a/acloserlookathowfinetuningchangesbert/a29e25fd-a048-4bed-83a7-98f9591e2daa_content_list.json b/acloserlookathowfinetuningchangesbert/a29e25fd-a048-4bed-83a7-98f9591e2daa_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..0b580ea74468d3e11c3c8da53790621668f7ae0b --- /dev/null +++ b/acloserlookathowfinetuningchangesbert/a29e25fd-a048-4bed-83a7-98f9591e2daa_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d77a79e5d3de8164163f91e5b6f2ad478937bde46bcf3508c0340269a28714b +size 124776 diff 
--git a/acloserlookathowfinetuningchangesbert/a29e25fd-a048-4bed-83a7-98f9591e2daa_model.json b/acloserlookathowfinetuningchangesbert/a29e25fd-a048-4bed-83a7-98f9591e2daa_model.json new file mode 100644 index 0000000000000000000000000000000000000000..bfae274ef3151baf45d4e7fa8fcb25b9f39e1fb4 --- /dev/null +++ b/acloserlookathowfinetuningchangesbert/a29e25fd-a048-4bed-83a7-98f9591e2daa_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3852f7ea2fa104816b8da0e95bb68c12489a95efe4ac860fff64a7f1a767fdeb +size 138706 diff --git a/acloserlookathowfinetuningchangesbert/a29e25fd-a048-4bed-83a7-98f9591e2daa_origin.pdf b/acloserlookathowfinetuningchangesbert/a29e25fd-a048-4bed-83a7-98f9591e2daa_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6664101754071331d20a2e2d7f954d53ce4e05dc --- /dev/null +++ b/acloserlookathowfinetuningchangesbert/a29e25fd-a048-4bed-83a7-98f9591e2daa_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:842858e366fd8aaaf5cbb5f798b294d49217fbdcd9cb3f87971ae13f959fd273 +size 3074149 diff --git a/acloserlookathowfinetuningchangesbert/full.md b/acloserlookathowfinetuningchangesbert/full.md new file mode 100644 index 0000000000000000000000000000000000000000..5e86f747bda01c17f8ba107af8265d9c381e56df --- /dev/null +++ b/acloserlookathowfinetuningchangesbert/full.md @@ -0,0 +1,470 @@ +# A Closer Look at How Fine-tuning Changes BERT + +Yichu Zhou + +School of Computing + +University of Utah + +flyaway@cs.utah.edu + +Vivek Srikumar + +School of Computing + +University of Utah + +svivek@cs.utah.edu + +# Abstract + +Given the prevalence of pre-trained contextualized representations in today's NLP, there have been many efforts to understand what information they contain, and why they seem to be universally successful. The most common approach to use these representations involves fine-tuning them for an end task. 
Yet, how fine-tuning changes the underlying embedding space is less studied. In this work, we study the English BERT family and use two probing techniques to analyze how fine-tuning changes the space. We hypothesize that fine-tuning affects classification performance by increasing the distances between examples associated with different labels. We confirm this hypothesis with carefully designed experiments on five different NLP tasks. Via these experiments, we also discover an exception to the prevailing wisdom that "fine-tuning always improves performance". Finally, by comparing the representations before and after fine-tuning, we discover that fine-tuning does not introduce arbitrary changes to representations; instead, it adjusts the representations to downstream tasks while largely preserving the original spatial structure of the data points. + +# 1 Introduction + +Pre-trained transformer-based language models (e.g., Devlin et al., 2019) form the basis of state-of-the-art results across NLP. The relative opacity of these models has prompted the development of many probes to investigate linguistic regularities captured in them (e.g., Kovaleva et al., 2019; Conneau et al., 2018; Jawahar et al., 2019). + +Broadly speaking, there are two ways to use a pre-trained representation (Peters et al., 2019): as a fixed feature extractor (where the pre-trained weights are frozen), or by fine-tuning it for a task. The probing literature has largely focused on the former (e.g., Kassner and Schütze, 2020; Perone et al., 2018; Yaghoobzadeh et al., 2019; + +Krasnowska-Kieras and Wroblewska, 2019; Wallace et al., 2019; Pruksachatkun et al., 2020; Aghajanyan et al., 2021). Some previous work (Merchant et al., 2020; Mosbach et al., 2020b; Hao et al., 2020) does provide insights about fine-tuning: fine-tuning changes higher layers more than lower ones and linguistic information is not lost during fine-tuning. 
However, relatively less is understood about how the representation changes during the process of fine-tuning and why fine-tuning invariably seems to improve task performance. + +In this work, we investigate the process of fine-tuning of representations using the English BERT family (Devlin et al., 2019). Specifically, we ask: + +1. Does fine-tuning always improve performance? +2. How does fine-tuning alter the representation to adjust for downstream tasks? +3. How does fine-tuning change the geometric structure of different layers? + +We apply two probing techniques—classifier-based probing (Kim et al., 2019; Tenney et al., 2019) and DIRECTPROBE (Zhou and Srikumar, 2021)—on variants of BERT representations that are fine-tuned on five tasks: part-of-speech tagging, dependency head prediction, preposition supersense role & function prediction and text classification. Beyond confirming previous findings about fine-tuning, our analysis reveals several new findings, briefly described below. + +First, we find that fine-tuning introduces a divergence between training and test sets, which is not severe enough to hurt generalization in most cases. However, we do find one exception where fine-tuning hurts the performance; this setting also has the largest divergence between training and test set after fine-tuning (§4.1). + +Second, we examine how fine-tuning changes labeled regions of the representation space. For a representation where task labels are not linearly separable, we find that fine-tuning adjusts it by + +grouping points with the same label into a small number of clusters (ideally one), thus simplifying the underlying representation. Doing so makes it easier to linearly separate labels with fine-tuned representations than untuned ones (§4.2). 
For a representation whose task labels are already linearly separable, we find that fine-tuning pushes the clusters of points representing different labels away from each other, thus introducing large separating regions between labels. Rather than simply scaling the points, clusters move in different directions and with different extents (measured by Euclidean distance). Overall, these clusters become more distant than in the untuned representation. We conjecture that the enlarged region between groups admits a bigger set of classifiers that can separate them, leading to better generalization (§4.3). + +We verify our distance hypothesis by investigating the effect of fine-tuning across tasks. We observe that fine-tuning for related tasks can also provide useful signal for the target task by altering the distances between clusters representing different labels (§4.4). + +Finally, fine-tuning does not change the higher layers arbitrarily. This confirms previous findings. Additionally, we find that fine-tuning largely preserves the relative positions of the label clusters, while reconfiguring the space to adjust for downstream tasks (§4.5). Informally, we can say that fine-tuning only "slightly" changes higher layers. + +These findings help us understand fine-tuning better, and justify why fine-tuned representations can lead to improvements across many NLP tasks1. + +# 2 Preliminaries: Probing Methods + +In this work, we probe representations in the BERT family during and after fine-tuning. First, let us look at the two supervised probes we will employ: a classifier-based probe (e.g., Tenney et al., 2019; Jullien et al., 2022) to assess how well a representation supports classifiers for a task, and DIRECTPROBE (Zhou and Srikumar, 2021) to analyze the geometry of the representation. + +# 2.1 Classifiers as Probes + +Trained classifiers are the most commonly used probes in the literature (e.g. Hewitt et al., 2021; Whitney et al., 2021; Belinkov, 2021). 
To understand how well a representation encodes the labels for a task, a probing classifier is trained over it, with the embeddings themselves kept frozen when the classifier is trained. + +For all our experiments, we use two-layer neural networks as our probe classifiers. We use grid search to choose the best hyperparameters. Each best classifier is trained five times with different initializations. We report the average accuracy and its standard deviation for each classifier. + +The hidden layer sizes are selected from $\{32,64,128,256\} \times \{32,64,128,256\}$ , and the regularizer weight from the range $10^{-7}$ to $10^{0}$ . All models use ReLUs as the activation function for the hidden layer and are optimized by Adam (Kingma and Ba, 2015). We set the maximum number of learning iterations to 1000. We use scikit-learn v0.22 (Pedregosa et al., 2011) for these experiments. + +Classifier probes aim to measure how well a contextualized representation captures a linguistic property. The classification performance can help us assess the effect of fine-tuning. + +# 2.2 DIRECTPROBE: Probing the Geometric Structure + +Classifier probes treat the representation as a black box and only focus on the final task performance; they do not reveal how fine-tuning changes the underlying geometry of the space. To this end, we use DIRECTPROBE (Zhou and Srikumar, 2021)$^2$, a recently proposed technique which analyzes embeddings from a geometric perspective. We briefly summarize the technique and refer the reader to the original work for details. + +For a given labeling task, DIRECTPROBE returns a set of clusters such that each cluster only contains the points with the same label, and there are no overlaps between the convex hulls of these clusters. Any decision boundary must cross the regions between the clusters that have different labels (see Figure 1). 
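As a minimal illustration (not the DIRECTPROBE algorithm itself): two label clusters whose convex hulls do not overlap are exactly the linearly separable ones, and a near-hard-margin linear SVM (large `C`) gives both a practical separability check and the margin-based distance used later in this section, namely twice the margin, $2/\lVert\mathbf{w}\rVert$. A sketch with scikit-learn on toy clusters:

```python
import numpy as np
from sklearn.svm import SVC

# Two toy label clusters; their convex hulls do not overlap.
a = np.array([[0.0, 0.0], [0.0, 1.0], [0.5, 0.5]])  # label 0
b = np.array([[4.0, 0.0], [4.0, 1.0], [3.5, 0.5]])  # label 1
X = np.vstack([a, b])
y = np.array([0, 0, 0, 1, 1, 1])

# A very large C approximates a hard-margin linear SVM.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

# Perfect training accuracy => the two hulls are linearly separable.
separable = clf.score(X, y) == 1.0

# Margin of the max-margin separator is 1/||w||; the cluster
# distance is twice the margin.
w = clf.coef_[0]
distance = 2.0 / np.linalg.norm(w)  # closest points here are 3.0 apart
```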
Since fine-tuning a contextualized representation creates different representations for different tasks, it is reasonable to probe the representation based on a given task. These clusters allow us to measure three properties of interest. + +Number of Clusters: The number of clusters indicates the linearity of the representation for a task. If the number of clusters equals the number of labels, then examples with the same label are grouped into + +![](images/0e83ff3e5b8a997f983676a43a15a147430c1a45e4c78dc88f29b199c78645f5.jpg) +Figure 1: Using the clustering to approximate the set of all decision boundaries. The left subfigure is a simple binary classification problem with a dashed circular decision boundary. The right subfigure is the result of DIRECTPROBE where the gray area is the region that a separator must cross. The connected points represent the clusters that DIRECTPROBE produces. + +![](images/638e17dedad92d65de2e92fbc2b6939b6e419d8ae6eca828507c728723087338.jpg) + +one cluster; a simple linear multi-class classifier will suffice. If, however, there are more clusters than labels, then at least two clusters of examples with the same label can not be grouped together (as in Figure 1, right). This scenario calls for a non-linear classifier. + +Distances between Clusters: Distances3 between clusters can reveal the internal structure of a representation. By tracking these distances during fine-tuning, we can study how the representation changes. To compute these distances, we use the fact that each cluster represents a convex object. This allows us to use max-margin separators to compute distances. We train a linear SVM (Chang and Lin, 2011) to find the maximum margin separator and compute its margin. The distance between the two clusters is twice the margin. + +Spatial Similarity: Distances between clusters can also reveal the spatial similarity of two representations. 
Intuitively, if two representations have similar relative distances between clusters, the representations themselves are similar to each other for the task at hand. + +We use these distances to construct a distance vector $\mathbf{v}$ for a representation, where each element $\mathbf{v}_i$ is the distance between the clusters of a pair of labels. With $n$ labels in a task, the size of $\mathbf{v}$ is $\frac{n(n - 1)}{2}$ . This construction works only when the number of clusters equals the number of labels (i.e., the dataset is linearly separable under the representation). Surprisingly, we find this to be the case for most representations we studied. As a measure of the similarity of two representations for a labeling task, we compute the Pearson correlation coefficient between their distance vectors. Note that this coefficient can also be used to measure the similarity between two labeled datasets with respect to the same representation. We exploit this observation to analyze the divergence between training and test sets for fine-tuned representations (§4.1).

| Model | Layers | #Heads | Dim | #Params |
| --- | --- | --- | --- | --- |
| $\mathrm{BERT}_{\mathrm{tiny}}$ | 2 | 2 | 128 | 4.4M |
| $\mathrm{BERT}_{\mathrm{mini}}$ | 4 | 4 | 256 | 11.3M |
| $\mathrm{BERT}_{\mathrm{small}}$ | 4 | 8 | 512 | 29.1M |
| $\mathrm{BERT}_{\mathrm{medium}}$ | 8 | 8 | 512 | 41.7M |
| $\mathrm{BERT}_{\mathrm{base}}$ | 12 | 12 | 768 | 110.1M |

Table 1: Statistics of five different BERT models. + +# 3 Experimental Setup + +In this section, we describe the representations and tasks we will encounter in our experiments. + +# 3.1 Representations + +We investigate several models from the BERT family (Devlin et al., 2019; Turc et al., 2019). These models all share the same basic architecture but with different capacities, i.e., different numbers of layers and hidden sizes. Table 1 summarizes the models we investigate in this work4. All of these models are for English text and uncased. + +For tokens that are broken into subwords by the tokenizer, we average the subword embeddings for the token representation. We use the models provided by HuggingFace v4.2.1 (Wolf et al., 2020), and PyTorch v1.6.0 (Paszke et al., 2019) for our experiments. + +# 3.2 Tasks + +We instantiate our analysis of the BERT models on a diverse set of five NLP tasks, covering syntactic and semantic predictions. Here, we briefly describe the tasks, and refer the reader to the original sources of the data for further details.5 + +Part-of-speech tagging (POS) predicts the part-of-speech tag for each word in a sentence. The task helps us understand if a representation captures coarse-grained syntactic categorization. We use the English portion of the parallel universal dependencies treebank (ud-pud, Nivre et al., 2016). + +Dependency relation (DEP) predicts the syntactic dependency relation between two tokens, i.e., a head $w_{head}$ and a modifier $w_{mod}$ . This task can help us understand if, and how well, a representation can characterize syntactic relationships between words. This task involves assigning a category to a pair of tokens. We concatenate their contextualized representations from BERT and treat the concatenation as the representation of the pair. 
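The subword-averaging step from §3.1 and the pair concatenation described above can be sketched with NumPy. Here `word_ids` mimics what a HuggingFace fast tokenizer's `word_ids()` returns (one entry per subword, `None` for special tokens), and the embeddings are random stand-ins for BERT outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8                       # stand-in for BERT's hidden size
subword_emb = rng.normal(size=(6, dim))

# Subword-to-word alignment, e.g. for "[CLS] un ##believ ##able luck [SEP]".
word_ids = [None, 0, 0, 0, 1, None]

def token_embeddings(subword_emb, word_ids):
    """Average the subword vectors belonging to each word."""
    n_words = max(i for i in word_ids if i is not None) + 1
    out = np.zeros((n_words, subword_emb.shape[1]))
    for w in range(n_words):
        rows = [j for j, i in enumerate(word_ids) if i == w]
        out[w] = subword_emb[rows].mean(axis=0)
    return out

tokens = token_embeddings(subword_emb, word_ids)  # shape (2, dim)

# For DEP, a (head, modifier) token pair is represented by concatenation.
pair = np.concatenate([tokens[0], tokens[1]])     # shape (2 * dim,)
```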
We use the same dataset as the POS task for dependencies. + +Preposition supersense disambiguation involves two categorization tasks of predicting a preposition's semantic role (PS-role) and semantic function (PS-fxn). These tasks are designed for disambiguating the semantic meanings of prepositions. Following previous work (Liu et al., 2019), we only train and evaluate on single-token prepositions from the Streusle v4.2 corpus (Schneider et al., 2018). + +Text classification, in general, is the task of categorizing sentences or documents. We use the TREC-50 dataset (Li and Roth, 2002) with 50 semantic labels for sentences. As is the standard practice, we use the representation of the [CLS] token as the sentence representation. This task can show how well a representation characterizes a sentence. + +# 3.3 Fine-tuning Setup + +We fine-tune the models in §3.1 on the five tasks from §3.2 separately. $^{6}$ The fine-tuned models (along with the original models) are then used to generate contextualized representations. The probing techniques described in §2 are applied to study both original and fine-tuned representations. + +Our preliminary experiments showed that the commonly used 3-5 epochs of fine-tuning are insufficient for the smaller representations, such as $\mathrm{BERT}_{\mathrm{tiny}}$ , and they require more epochs. We fine-tuned all the representations for 10 epochs except $\mathrm{BERT}_{\mathrm{base}}$ , which we fine-tuned for the usual three epochs. Note that the fine-tuning phase is separate from the classifier training phase for probing; for the probe classifiers, we train two-layer neural networks (described in §2.1) from scratch on both original and fine-tuned representations7, ensuring a fair comparison between them. + +# 4 Observations and Analysis + +In this section, we will use classifier probes to examine if fine-tuning always improves classifier performance (§4.1). Then we propose a geometric explanation for why fine-tuning improves classification performance using DIRECTPROBE (§4.2 and §4.3). Next, we will confirm this geometric explanation by investigating cross-task fine-tuning (§4.4). Finally, we will analyze how fine-tuning changes the geometry of different layers of BERTbase (§4.5). + +# 4.1 Fine-tuned Performance + +It is commonly accepted that fine-tuning improves task performance. Does this always hold? Table 2 summarizes the relevant observations from our experiments. Appendix C presents the complete fine-tuning results. + +Fine-tuning makes the training and test sets diverge. In Table 2, the last column shows the spatial similarity between the training and test set for each representation. We apply DIRECTPROBE on the training and test set separately. The spatial similarity is calculated as the Pearson correlation coefficient between the distance vectors of the training and test set (described in §2). We observe that after fine-tuning, all the similarities decrease, implying that the training and test set diverge as a result of fine-tuning. In most cases, this divergence is not severe enough to decrease the performance. + +There are exceptions, where fine-tuning hurts performance. An interesting observation in Table 2 is that $\mathrm{BERT}_{\mathrm{small}}$ does not show an improvement on the PS-fxn task after fine-tuning, which breaks the well-accepted impression that fine-tuning always improves performance. However, only one such exception is observed across all our experiments (see Appendix C). This is insufficient to draw any concrete conclusions about why this is happening. We do observe that $\mathrm{BERT}_{\mathrm{small}}$ shows the smallest similarity (0.44) between the training and test set after fine-tuning on the PS-fxn task. We conjecture that controlling the divergence between the training and test sets can help ensure that fine-tuning helps. Verifying or refuting this conjecture requires further study. 
+ +# 4.2 Linearity of Representations + +Next, let us examine the geometry of the representations before and after fine-tuning using DIRECTPROBE and counting the number of clusters. We will focus on the overwhelming majority of cases where fine-tuning does improve performance. + +
| Task | | Acc | Sim |
| --- | --- | --- | --- |
| POS | original | 94.25 | 0.96 |
| | tuned | 94.43 | 0.72 |
| DEP | original | 92.93 | 0.93 |
| | tuned | 94.48 | 0.78 |
| PS-fxn | original | 86.26 | 0.82 |
| | tuned | 85.08 | 0.44 |
| PS-role | original | 74.22 | 0.84 |
| | tuned | 74.57 | 0.54 |
| TREC-50 | original | 81.32 | - |
| | tuned | 89.60 | - |
+ +Table 2: Fine-tuned performances of $\mathrm{BERT}_{\mathrm{small}}$ based on the last layers. The last column shows the spatial similarity (described in §2) between the training and test set. A complete table of all representations and tasks can be found in Appendix C. + +Smaller representations require more complex classifiers. Table 3 summarizes the results. For brevity, we only present the results on $\mathrm{BERT}_{\mathrm{tiny}}$ . The full results are in Appendix C. We observe that before fine-tuning, small representations (i.e., $\mathrm{BERT}_{\mathrm{tiny}}$ ) are non-linear for most tasks. Although non-linearity does not imply poor generalization, it represents a more complex spatial structure and requires a more complex classifier. This suggests that to use small representations (say, due to limited resources), it would be advisable to use a non-linear classifier rather than a simple linear one. + +Fine-tuning makes the space simpler. In Table 3, we observe that the number of clusters decreases after fine-tuning. This tells us that after fine-tuning, the points associated with different labels are in a simpler spatial configuration. The same trend holds for TREC-50 (Table 4), even when the final representation is not linearly separable. + +
| Task | | #Clusters | Is linear | Acc |
| --- | --- | --- | --- | --- |
| POS | original | 30 | N | 90.76 |
| | tuned | 18 | N | 91.67 |
| DEP | original | 50 | N | 86.74 |
| | tuned | 46 | Y | 89.04 |
| PS-fxn | original | 42 | N | 74.14 |
| | tuned | 40 | Y | 74.40 |
| PS-role | original | 46 | Y | 58.38 |
| | tuned | 46 | Y | 60.31 |
| TREC-50 | original | 58 | N | 68.12 |
| | tuned | 51 | N | 84.04 |
+ +Table 3: The linearity of the last layer of $\mathrm{BERT}_{\mathrm{tiny}}$ for each task. Other results are in Appendix C. + +
| Rep | | #Clusters | Is linear | Acc |
| --- | --- | --- | --- | --- |
| $\mathrm{BERT}_{\mathrm{tiny}}$ | original | 58 | N | 68.12 |
| | tuned | 51 | N | 84.04 |
| $\mathrm{BERT}_{\mathrm{mini}}$ | original | 52 | N | 74.12 |
| | tuned | 52 | N | 88.36 |
| $\mathrm{BERT}_{\mathrm{small}}$ | original | 52 | N | 81.32 |
| | tuned | 51 | N | 89.60 |
| $\mathrm{BERT}_{\mathrm{medium}}$ | original | 52 | N | 80.68 |
| | tuned | 52 | N | 89.80 |
| $\mathrm{BERT}_{\mathrm{base}}$ | original | 52 | N | 85.24 |
| | tuned | 51 | N | 90.36 |
+ +Table 4: The linearity of the last layer of all models on TREC-50 task. The number of clusters is always more than the number of labels (50). + +# 4.3 Spatial Structure of Labels + +To better understand the changes in spatial structure, we apply DIRECTPROBE to every intermediate representation encountered during fine-tuning. Here, we focus on the $\mathrm{BERT}_{\mathrm{base}}$ . Since all representations we considered are linearly separable8, the number of clusters equals the number of labels. As a result, each cluster exclusively corresponds to one label. Going ahead, we will use clusters and labels interchangeably. + +Fine-tuning pushes each label far away from each other. This confirms the observation of Zhou and Srikumar (2021), who pointed out that the fine-tuning pushes each label away from each other. However, they use the global minimum distance between clusters to support this argument, which only partially supports the claim: the distances between some clusters might increase despite the global minimum distance decreasing. + +We track the minimum distance of each label to all other labels during fine-tuning. We find that all the minimum distances are increasing. Figure 2 shows how these distances change in the last layer of $\mathrm{BERT}_{\mathrm{base}}$ for the PS-role and POS tagging tasks. Appendix D includes the plots for all tasks. For clarity, we only show the three labels where the distance increases the most, and the three where it increases the least. We also observe that although the trend is increasing, the minimum distance associated with a label may decrease during the course of fine-tuning, e.g., the label STUFF in PS-role task, suggesting a potential instability of fine-tuning. + +![](images/c890558cd5e1bc2f8be43dda01692196903ef57d894ab7b523f87f3c6f53bc1c.jpg) +Figure 2: The dynamics of the minimum distances of the three labels where the distance increases the most, and the three where it increases the least. 
The horizontal axis is the number of fine-tuning updates; the vertical axis is the chosen label's minimum distance to the other labels. These results come from the last layer of $\mathrm{BERT}_{\mathrm{base}}$ . Full plots for all four tasks can be found in Appendix D. + +![](images/dc1272d63b6970ab7b5c066012bde50cbdff244847b8c79c3b66bda79cc5c580.jpg) + +![](images/64aec6647c83eabbda8880f48eafdb345b40fa71bf06af27900a341a621708ad.jpg) +Figure 3: The PCA projection of the three closest labels in the POS tagging task based on the first (left) and last (right) layer of $\mathrm{BERT}_{\mathrm{base}}$ . These lines show the paths of the centroids of each label cluster during fine-tuning. The markers indicate the starting points. This figure is best seen in color. + +To further see how labels move during fine-tuning, we track the centroids of each cluster. We select the three closest labels from the POS tagging task and track the paths of the centroids of each label cluster in the last layer of $\mathrm{BERT}_{\mathrm{base}}$ during fine-tuning. Figure 3 (right) shows the 2D PCA projection of these paths. We observe that before fine-tuning, the centroids of all three labels are close to each other. As fine-tuning proceeds, the centroids move around in different directions, away from each other. + +We conclude that fine-tuning enlarges the gaps between label clusters and admits more classifiers consistent with the labels, allowing for better generalization. Note that neither the loss nor the optimizer explicitly mandates this change. Indeed, since the labels were originally linearly separable, the learner need not adjust the representation at all. + +# 4.4 Cross-task Fine-tuning + +In §4.3, we hypothesized that fine-tuning improves performance because it enlarges the gaps between label clusters. A natural inference from this hypothesis is that the process may shrink the gaps between labels of an unrelated task, and its performance can decrease. 
In this subsection, we investigate how fine-tuning for one task affects another.

We fine-tune $\mathrm{BERT}_{\mathrm{base}}$ on the PS-role and POS tagging tasks separately and use the fine-tuned models to generate contextualized representations for the PS-fxn task. Our choice of tasks in this experimental design is motivated by the observation that PS-role and PS-fxn are similar tasks that both seek to predict supersense tags for prepositions. On the other hand, POS tagging can adversely affect the PS-fxn task because POS tagging requires all prepositions to be grouped together (label ADP), while PS-fxn requires different prepositions to be far away from each other. We apply DIRECTPROBE to both representations to analyze the geometric changes with respect to PS-fxn.

The effects of cross-task fine-tuning depend on how close the two tasks are. The third and fourth columns of Table 5 indicate the number of labels whose minimum distance increased or decreased after fine-tuning. The second column from the right shows the average distance change over all labels, e.g., fine-tuning on POS results in the minimum distances of the PS-fxn labels decreasing by 1.68 on average. We observe that fine-tuning on the same dataset (PS-fxn) increases the distances between labels (second row), which is consistent with the observations from §4.3; fine-tuning on a similar task also increases the distances between clusters (third row), but to a lesser extent. However, fine-tuning on an "opposing" task decreases the distances between clusters (last row). These observations suggest that cross-task fine-tuning can add or remove information from the representation, depending on how close the source and target tasks are.

Small distances between label clusters indicate poor performance. Based on our conclusion in §4.3 that a larger gap between labels leads to better generalization, we expect that the performance
| fine-tuning | probing | #inc | #dec | average inc | Acc |
| --- | --- | --- | --- | --- | --- |
| - | PS-fxn | - | - | - | 87.75 |
| PS-fxn | PS-fxn | 40 | 0 | 5.29 | 89.58 |
| PS-role | PS-fxn | 27 | 13 | 1.02 | 88.53 |
| POS | PS-fxn | 0 | 40 | -1.68 | 83.24 |
Table 5: Classification performance for the PS-fxn task using the last layer of $\mathrm{BERT}_{\mathrm{base}}$ when fine-tuning on different tasks. The first row is the untuned version. The third and fourth columns indicate the number of labels whose minimum distance increased or decreased after fine-tuning. The second-to-last column (average inc) shows the average change of the minimum distance over all labels. The last column is the probing accuracy.

of PS-fxn after fine-tuning on PS-role would be higher than the performance after fine-tuning on POS tagging. To verify this, we train two-layer neural networks on the PS-fxn task using the representations fine-tuned on the PS-role and POS tagging tasks. Importantly, we do not further fine-tune the representations for PS-fxn. The last column of Table 5 shows the results. Fine-tuning on PS-fxn enlarges the gaps between all PS-fxn labels, which explains the highest performance; fine-tuning on PS-role enlarges the gaps between some labels in PS-fxn, leading to a slight improvement; fine-tuning on POS tags shrinks the gaps between all labels in PS-fxn, leading to a decrease in performance.

In summary, based on the results of §4.2, §4.3 and §4.4, we conclude that fine-tuning injects or removes task-related information from representations by adjusting the distances between label clusters, even if the original representation is linearly separable (i.e., when there is no need to change the representation). When the original representation does not support a linear classifier, fine-tuning tries to group points with the same label into a small number of clusters, ideally one cluster.

# 4.5 Layer Behavior

Previous work (Merchant et al., 2020; Mosbach et al., 2020b) showed that during fine-tuning, lower layers change little compared to higher layers.
In the following experiments, we confirm their findings and further show that (i) fine-tuning does not change the representation arbitrarily, even for higher layers, and (ii) the changes differ across layers, which we illustrate by visually comparing lower and higher layers. Here, we focus on the POS tagging task with $\mathrm{BERT}_{\mathrm{base}}$. Our conclusions extend to other tasks, whose results are in Appendix E.

Higher layers do not change arbitrarily. Although previous work (Mosbach et al., 2020b) shows that higher layers change more than the lower layers, we find that higher layers still remain close to the original representations. To study the dynamics of fine-tuning, we compare each layer during fine-tuning to its corresponding original pretrained one. The spatial similarity between two representations is calculated as the Pearson correlation coefficient of their distance vectors, as described in §2. Intuitively, a classifier learns a decision boundary that traverses the region between clusters, which makes the distances between clusters more relevant to our analysis (as opposed to the spatial structure of points within each cluster).

Figure 4 shows the results for all four tasks. To avoid visual clutter, we only show the plots for every alternate layer. For the higher layers, we find that the Pearson correlation coefficient between the original representation and the fine-tuned one is surprisingly high (more than 0.5), reinforcing the notion that fine-tuning does not change the representation arbitrarily. Instead, it attempts to preserve the relative positions of the labels. This means the fine-tuning process encodes task-specific information, yet it largely preserves the pre-trained information in the representation.

![](images/e39d307eda2acf045937543c6cc2db55d4798dc902368113825e4d92c8d459f6.jpg)
Figure 4: Dynamics of spatial similarity during the fine-tuning process based on $\mathrm{BERT}_{\mathrm{base}}$.
The horizontal axis is the number of updates during fine-tuning. The vertical axis is the Pearson correlation coefficient between the current space and its original version (before fine-tuning).

![](images/5b2709d30b31ac29e65318279a9a6472c464f0810e5301d84e9151ea2e436ee9.jpg)

The labels of lower layers move only in a small region and almost in the same directions. The unchanged nature of lower layers raises the question: do they not change at all? To answer this question, for every label, we compute the difference between its centroids before and after fine-tuning.

![](images/0c779e05cc762bbb95a7a30f58fb19764920a67ef319554eaea55501846de0bf.jpg)
Figure 5: The PCA projection of the difference vectors between the centroids of labels before and after fine-tuning, based on the POS tagging task and $\mathrm{BERT}_{\mathrm{base}}$. Lower layers have a much smaller projection range than the higher layers. This figure is best seen in color.

Figure 5 shows the 2D PCA projection of these difference vectors. For brevity, we only present the plots for every alternate layer. A plot with all layers can be found in Appendix E. We observe that the movements of labels in lower layers concentrate in a few directions compared to the higher layers, suggesting that the labels in lower layers do change, but their movements do not separate the labels as much as in higher layers. Also, we observe that the labels INTJ and SYM have distinctive directions in the lower layers.

Note that, in Figure 5, the motion range of lower layers is much smaller than that of the higher layers. The two projected dimensions range from $-1$ to 3 and from $-3$ to 3 for layer 2, while for layer 12 they range from $-12$ to 13 and from $-12$ to 8, suggesting that labels in lower layers only move in a small region compared to higher layers. Figure 3 shows an example of this difference.
Compared with the layer 12 paths (right), the layer 1 paths (left) traverse almost the same trajectories, which is consistent with the observations from Figure 5.

# 5 Discussion

Does fine-tuning always improve performance? Indeed, fine-tuning almost always improves task performance. However, rare cases exist where fine-tuning decreases performance. Fine-tuning introduces a divergence between the training set and unseen examples (§4.1). However, it is unclear how this divergence affects the generalization ability of representations, e.g., does this divergence suggest a new kind of overfitting that is driven by representations rather than classifiers?

How does fine-tuning alter the representation to adjust for downstream tasks? Fine-tuning alters the representation by grouping points with the same label into a small number of clusters (§4.2) and pushing each label cluster away from the others (§4.3). We hypothesize that the distances between label clusters correlate with classification performance, and we confirm this hypothesis by investigating cross-task fine-tuning (§4.4). Our findings are surprising because fine-tuning for a classification task does not need to alter the geometry of a representation if the data is already linearly separable in the original representation. What we observe reveals geometric properties that characterize good representations. We do not present a theoretical analysis connecting our geometric findings to representation learnability, but the findings in this work may serve as a starting point for a learning theory for representations.

How does fine-tuning change the underlying geometric structure of different layers? It is established that higher layers change more than the lower ones. In this work, we analyze this behavior more closely. We discover that higher layers do not change arbitrarily; instead, they remain similar to the untuned version.
Informally, we can say that fine-tuning only "slightly" changes even the higher layers (§4.5). Nevertheless, our analysis does not reveal why higher layers change more than the lower layers. A deeper analysis of model parameters during fine-tuning is needed to understand the difference between lower and higher layers.

Limitations of this work. Our experiments use the BERT family of models for English tasks. Given the architectural similarity of transformer language models, we may be able to extrapolate the results to other models, but further work is needed to confirm whether our findings hold for other languages or model architectures. In our analysis, we ignore the structure within each cluster, which is another source of information for studying the representation. We plan to investigate these aspects in future work. We make our code available for replication and extension by the community.

# 6 Related Work

There are many lines of work that focus on analyzing and understanding representations. The most commonly used technique is the classifier-based method. Early work (Alain and Bengio, 2017; Kulmizev et al., 2020) started by using linear classifiers as the probe. Hewitt and Liang (2019) pointed out that a linear probe is not sufficient to evaluate a representation. Some recent work also employs non-linear probes (Tenney et al., 2019; Eger et al., 2019). There are also efforts to inspect representations from a geometric perspective (e.g., Ethayarajh, 2019; Mimno and Thompson, 2017), including the recently proposed DIRECTPROBE (Zhou and Srikumar, 2021), which we use in this work. Another line of probing work designs control tasks (Ravichander et al., 2021; Lan et al., 2020) to reverse-engineer the internal mechanisms of representations (Kovaleva et al., 2019; Wu et al., 2020). However, in contrast to our work, most studies (Zhong et al., 2021; Li et al., 2021; Chen et al., 2021) focused on pre-trained representations, not fine-tuned ones.
While fine-tuning pre-trained representations usually provides strong empirical performance (Wang et al., 2018; Talmor et al., 2020), how fine-tuning manages to do so has remained an open question. Moreover, the instability (Mosbach et al., 2020a; Dodge et al., 2020; Zhang et al., 2020) and forgetting problems (Chen et al., 2020; He et al., 2021) make it harder to analyze fine-tuned representations. Despite these difficulties, previous work (Merchant et al., 2020; Mosbach et al., 2020b; Hao et al., 2020) drew valuable conclusions about fine-tuning. This work extends this line of effort and provides a deeper understanding of how fine-tuning changes representations.

# 7 Conclusions

In this work, we take a close look at how fine-tuning a contextualized representation for a task modifies it. We investigate the fine-tuned representations of several BERT models using two probing techniques: classifier-based probing and DIRECTPROBE. First, we show that fine-tuning introduces a divergence between the training and test sets, and in at least one case, hurts generalization. Next, we show that fine-tuning alters the geometry of a representation by pushing points belonging to the same label closer to each other, thus admitting simpler and better classifiers. We confirm this hypothesis with cross-task fine-tuning experiments. Finally, we discover that while adjusting representations to downstream tasks, fine-tuning largely preserves the original spatial structure of points across all layers. Taken collectively, the empirical study presented in this work not only justifies the impressive performance of fine-tuning, but may also lead to a better understanding of learned representations.

# Acknowledgments

We thank the ARR reviewers and the Utah NLP group for their constructive feedback. This work is partially supported by NSF grants #1801446 (SaTC) and #1822877 (Cyberlearning), and a generous gift from Verisk Inc.

# References

Armen Aghajanyan, Sonal Gupta, and Luke Zettlemoyer.
2021. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7319-7328, Online. Association for Computational Linguistics.
Guillaume Alain and Yoshua Bengio. 2017. Understanding intermediate layers using linear classifier probes. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings. OpenReview.net.
Yonatan Belinkov. 2021. Probing classifiers: Promises, shortcomings, and alternatives. CoRR, abs/2102.12452.
Chih-Chung Chang and Chih-Jen Lin. 2011. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):1-27.
Boli Chen, Yao Fu, Guangwei Xu, Pengjun Xie, Chuanqi Tan, Mosha Chen, and Liping Jing. 2021. Probing BERT in hyperbolic spaces. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
Sanyuan Chen, Yutai Hou, Yiming Cui, Wanxiang Che, Ting Liu, and Xiangzhan Yu. 2020. Recall and learn: Fine-tuning deep pretrained language models with less forgetting. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7870-7881, Online. Association for Computational Linguistics.
Alexis Conneau, German Kruszewski, Guillaume Lample, Loic Barrault, and Marco Baroni. 2018. What you can cram into a single \$&!#\* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136, Melbourne, Australia. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019.
BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of + +the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah A. Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. CoRR, abs/2002.06305. +Steffen Eger, Andreas Rückle, and Iryna Gurevych. 2019. Pitfalls in the evaluation of sentence embeddings. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 55–60, Florence, Italy. Association for Computational Linguistics. +Kawin Ethayarajh. 2019. How contextual are contextualized word representations? comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55–65, Hong Kong, China. Association for Computational Linguistics. +Yaru Hao, Li Dong, Furu Wei, and Ke Xu. 2020. Investigating learning dynamics of BERT fine-tuning. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 87-92, Suzhou, China. Association for Computational Linguistics. +Tianxing He, Jun Liu, Kyunghyun Cho, Myle Ott, Bing Liu, James Glass, and Fuchun Peng. 2021. Analyzing the forgetting problem in pretrain-finetuning of open-domain dialogue response models. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1121-1133, Online. Association for Computational Linguistics. 
+John Hewitt, Kawin Ethayarajh, Percy Liang, and Christopher D. Manning. 2021. Conditional probing: measuring usable information beyond a baseline. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 1626-1639. Association for Computational Linguistics. +John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733-2743, Hong Kong, China. Association for Computational Linguistics. +Ganesh Jawahar, Benoit Sagot, and Djamé Seddah. 2019. What does BERT learn about the structure of + +language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651-3657, Florence, Italy. Association for Computational Linguistics. +Mael Jullien, Marco Valentino, and André Freitas. 2022. Do transformers encode a foundational ontology? probing abstract classes in natural language. CoRR, abs/2201.10262. +Nora Kassner and Hinrich Schütze. 2020. Negated and misprimed probes for pretrained language models: Birds can talk, but cannot fly. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7811-7818, Online. Association for Computational Linguistics. +Najoung Kim, Roma Patel, Adam Poliak, Patrick Xia, Alex Wang, Tom McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, and Ellie Pavlick. 2019. Probing what different NLP tasks teach machines about function word comprehension. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM* 2019), pages 235-249, Minneapolis, Minnesota. Association for Computational Linguistics. +Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. 
In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. +Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4365-4374, Hong Kong, China. Association for Computational Linguistics. +Katarzyna Krasnowska-Kieras and Alina Wroblewska. 2019. Empirical linguistic study of sentence embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5729-5739, Florence, Italy. Association for Computational Linguistics. +Artur Kulmizev, Vinit Ravishankar, Mostafa Abdou, and Joakim Nivre. 2020. Do neural language models show preferences for syntactic formalisms? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4077-4091, Online. Association for Computational Linguistics. +Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. + +Bai Li, Zining Zhu, Guillaume Thomas, Yang Xu, and Frank Rudzicz. 2021. How is BERT surprised? layerwise detection of linguistic anomalies. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4215-4228, Online. Association for Computational Linguistics. +Xin Li and Dan Roth. 2002. Learning question classifiers. In COLING 2002: The 19th International Conference on Computational Linguistics. +Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. 
Peters, and Noah A. Smith. 2019. Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073-1094, Minneapolis, Minnesota. Association for Computational Linguistics. +Amil Merchant, Elahe Rahimtoroghi, Ellie Pavlick, and Ian Tenney. 2020. What happens to BERT embeddings during fine-tuning? In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 33-44, Online. Association for Computational Linguistics. +David Mimno and Laure Thompson. 2017. The strange geometry of skip-gram with negative sampling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2873–2878, Copenhagen, Denmark. Association for Computational Linguistics. +Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. 2020a. On the stability of fine-tuning BERT: misconceptions, explanations, and strong baselines. CoRR, abs/2006.04884. +Marius Mosbach, Anna Khokhlova, Michael A. Hedderich, and Dietrich Klakow. 2020b. On the interplay between fine-tuning and sentence-level probing for linguistic knowledge in pre-trained transformers. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 68-82, Online. Association for Computational Linguistics. +Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666. 
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.
Christian S Perone, Roberto Silveira, and Thomas S Paula. 2018. Evaluation of sentence embeddings in downstream and linguistic probing tasks. arXiv preprint arXiv:1806.06259.
Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? adapting pretrained representations to diverse tasks. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 7-14, Florence, Italy. Association for Computational Linguistics.
Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R. Bowman. 2020. Intermediate-task transfer learning with pretrained language models: When and why does it work? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5231-5247, Online. Association for Computational Linguistics.
Abhilasha Ravichander, Yonatan Belinkov, and Eduard Hovy. 2021. Probing the probing paradigm: Does probing accuracy entail task relevance?
In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3363-3377, Online. Association for Computational Linguistics.
Nathan Schneider, Jena D. Hwang, Vivek Srikumar, Jakob Prange, Austin Blodgett, Sarah R. Moeller, Aviram Stern, Adi Bitan, and Omri Abend. 2018. Comprehensive supersense disambiguation of English prepositions and possessives. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 185-196, Melbourne, Australia. Association for Computational Linguistics.
Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2020. oLMpics - on what language model pre-training captures. Transactions of the Association for Computational Linguistics, 8:743-758.
Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? probing for sentence structure in contextualized word representations. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-read students learn better: On the importance of pre-training compact models. arXiv preprint arXiv:1908.08962.
Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019. Do NLP models know numbers? probing numeracy in embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5307-5315, Hong Kong, China. Association for Computational Linguistics.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding.
In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.
William F Whitney, Min Jae Song, David Brandfonbrener, Jaan Altosaar, and Kyunghyun Cho. 2021. Evaluating representations by the complexity of learning low-loss predictors. In Neural Compression: From Information Theory to Applications - Workshop @ ICLR 2021.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Zhiyong Wu, Yun Chen, Ben Kao, and Qun Liu. 2020. Perturbed masking: Parameter-free probing for analyzing and interpreting BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4166-4176, Online. Association for Computational Linguistics.
Yadollah Yaghoobzadeh, Katharina Kann, T. J. Hazen, Eneko Agirre, and Hinrich Schütze. 2019. Probing for semantic classes: Diagnosing the meaning content of word embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5740-5753, Florence, Italy. Association for Computational Linguistics.
Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q Weinberger, and Yoav Artzi. 2020. Revisiting few-sample BERT fine-tuning. arXiv preprint arXiv:2006.05987.
Zexuan Zhong, Dan Friedman, and Danqi Chen. 2021. Factual probing is [MASK]: Learning vs. learning to recall.
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5017-5033, Online. Association for Computational Linguistics.
Yichu Zhou and Vivek Srikumar. 2021. DirectProbe: Studying representations without classifiers. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5070-5083, Online. Association for Computational Linguistics.

# A Fine-tuning Details

In this work, we fine-tune all tasks and representations using the HuggingFace library. We use a linear learning-rate scheduler with a learning rate of $3\times10^{-4}$, which uses $10\%$ of the total update steps as warmup steps. The same scheduler is used for all tasks. All models are optimized with Adam (Kingma and Ba, 2015) with a batch size of 32. All fine-tuning is run on a single Titan GPU. The best hidden-layer sizes for each task are shown in Table 7.

# B Summary of Tasks

In this work, we conduct experiments on five NLP tasks, which are chosen to cover different usages of the representations we study. Table 6 summarizes these tasks.

# C Probing Performance

Table 7 shows the complete table of probing results in our experiments. The last column is the spatial similarity between the training set and the test set. Some entries are missing because the similarity can only be computed on representations that are linearly separable for the given task.

# D Dynamics of Minimum Distances

Figure 6 shows the dynamics of minimum distances for labels on all four tasks. For clarity, we only present the distances for the three labels where the distance increases the most and the three where it increases the least.
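The per-label minimum-distance curves shown in Figures 2 and 6 can be sketched in a few lines. The sketch below is ours, not the paper's code: it uses the minimum pairwise point distance between label groups as a simple stand-in for the convex-hull cluster distances that DIRECTPROBE computes, and the function name is hypothetical.

```python
import numpy as np

def min_distances_per_label(embeddings, labels):
    """For each label, return the smallest Euclidean distance from its
    points to the points of any other label (a simplified stand-in for
    DirectProbe's convex-hull cluster distances)."""
    uniq = sorted(set(labels.tolist()))
    groups = {c: embeddings[labels == c] for c in uniq}
    result = {}
    for c in uniq:
        best = np.inf
        for other in uniq:
            if other == c:
                continue
            # all pairwise distances between the two point sets
            d = np.linalg.norm(
                groups[c][:, None, :] - groups[other][None, :, :], axis=-1)
            best = min(best, d.min())
        result[c] = best
    return result
```

Running this on the representation saved at each fine-tuning checkpoint yields one curve per label, i.e., the dynamics plotted on the vertical axes of these figures.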
# E PCA Projections of the Movements

Figures 7-10 show the PCA projections of the difference vectors between the centroids of labels before and after fine-tuning, based on $\mathrm{BERT}_{\mathrm{base}}$.
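The quantity behind these projections can be sketched as follows. Here `before` and `after` are hypothetical embedding matrices aligned row-by-row with a `labels` array, and we substitute a plain SVD-based PCA for whatever implementation the authors actually used:

```python
import numpy as np

def centroid_shifts(before, after, labels):
    """Per-label difference vectors between the label centroids
    before and after fine-tuning."""
    uniq = sorted(set(labels.tolist()))
    return np.stack([after[labels == c].mean(axis=0)
                     - before[labels == c].mean(axis=0)
                     for c in uniq])

def pca_2d(x):
    """Project the rows of x onto their top two principal components."""
    centered = x - x.mean(axis=0)
    # right singular vectors = principal directions of the row cloud
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T
```

Applying `pca_2d(centroid_shifts(...))` layer by layer gives a picture analogous to Figures 5 and 7-10: one 2D arrow per label.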
| Task | #Training | #Test | #Labels | Token-based | Sentence-based | Pair-wise | Semantic | Syntax |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Supersense-role | 4282 | 457 | 47 | ✓ | | | ✓ | |
| Supersense-function | 4282 | 457 | 40 | ✓ | | | ✓ | |
| POS | 16860 | 4323 | 17 | ✓ | | | | ✓ |
| Dependency Relation | 16054 | 4122 | 46 | | | ✓ | | ✓ |
| TREC-50 | 5452 | 500 | 50 | | ✓ | | ✓ | |
Table 6: Statistics of the five tasks with their different characteristics.

![](images/d5d49ba769e3a1940f3b383805817e685504b72b692f23c095a0ee0764fbda3b.jpg)

![](images/a259b66260f0948d8640bce203d6a90a60d27bd4f3adbed42cc43b74ccc05d7a.jpg)

![](images/210f94fbe17b2868ecd8466e5569a82b34915999cf1c2d86454af757d7c0bef6.jpg)
Figure 6: The dynamics of the minimum distance of the three labels where the distance increases the most, and the three labels where it increases the least. The horizontal axis is the number of fine-tuning updates; the vertical axis is the chosen label's minimum distance to other labels. These results come from the last layer of $\mathrm{BERT}_{\mathrm{base}}$.

![](images/796094fbffc7337130771bd4dcf2f2a87ee358da06c4c48dd9bd321103d8eaee.jpg)

![](images/067d5c53579856b2d7cc62c97b26705ded04df7b37e2c37b7711016e79dbbb69.jpg)

![](images/fba067cc6bd96f9f094cc81bd86040f1696f980541ddba4e9bffdb0fc1058f87.jpg)

![](images/6c0ba7fab09e1e37fe20b3a92b1d2b222e7fee45c0967a9bbc50ca14f3796008.jpg)

![](images/1ab9dbfcb518beabcdf12b1086982199f7a8dd0e8d030e718ca3aa03b2723dc0.jpg)

![](images/91c13d618668bcf36c4887fec3af75e1e0ec42f1ef3695788251512349381a49.jpg)

![](images/45f713234a74882c2c47e19af056f910cc6b2c40b7f632843ef0527b66327297.jpg)

![](images/85accce6038a40b84be26839d853c7d32dd42a7cc3f2685649ddf08647710c99.jpg)

![](images/38bf42979fa88063c462c2dda666c36b0c426b39cdd85a0b09c4aa33188c02a8.jpg)

![](images/f09950216f112a8dbc0c97cf35cc827e23c290b1a9ddd912854d7982d8073b07.jpg)

![](images/d4e4b72c26ebde463159f975fbcb2c3356629503221979f226ba1586f709cbb8.jpg)
Figure 7: The PCA projection of the difference vectors between the centroids of labels before and after fine-tuning, based on the POS tagging task and $\mathrm{BERT}_{\mathrm{base}}$.

![](images/27bbaf03474ac96f40e2c71eff3b649ab2f63f42ef1b85284e919fcb94b520fb.jpg)
| Representations | Task | | Acc | Std | Best Layer Size | #Cluster | is Linear | Similarity |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $\mathrm{BERT}_{\mathrm{tiny}}$ | POS | original | 90.76 | 0.24 | (256, 64) | 30 | N | - |
| | | fine-tuned | 91.67 | 0.29 | (64, 64) | 18 | N | - |
| | DEP | original | 86.74 | 0.22 | (256, 256) | 50 | N | - |
| | | fine-tuned | 89.04 | 0.20 | (256, 256) | 46 | Y | 0.88 |
| | PS-fxn | original | 74.14 | 1.42 | (256, 256) | 42 | N | - |
| | | fine-tuned | 74.40 | 0.68 | (256, 128) | 40 | Y | 0.72 |
| | PS-role | original | 58.38 | 0.78 | (256, 64) | 46 | Y | 0.76 |
| | | fine-tuned | 60.31 | 0.29 | (64, 64) | 46 | Y | 0.70 |
| | TREC-50 | original | 68.12 | 0.82 | (256, 256) | 58 | N | - |
| | | fine-tuned | 84.04 | 0.93 | (256, 256) | 51 | N | - |
| $\mathrm{BERT}_{\mathrm{mini}}$ | POS | original | 93.81 | 0.10 | (256, 32) | 19 | N | - |
| | | fine-tuned | 94.91 | 0.03 | (256, 32) | 17 | Y | 0.70 |
| | DEP | original | 91.82 | 0.09 | (256, 128) | 46 | Y | 0.93 |
| | | fine-tuned | 93.55 | 0.07 | (256, 128) | 46 | Y | 0.86 |
| | PS-fxn | original | 82.45 | 1.07 | (256, 256) | 40 | Y | 0.77 |
| | | fine-tuned | 84.25 | 0.39 | (256, 128) | 40 | Y | 0.53 |
| | PS-role | original | 68.05 | 1.08 | (256, 256) | 46 | Y | 0.81 |
| | | fine-tuned | 71.90 | 1.06 | (256, 64) | 46 | Y | 0.59 |
| | TREC-50 | original | 74.12 | 1.25 | (256, 256) | 52 | N | - |
| | | fine-tuned | 88.36 | 0.50 | (64, 32) | 52 | N | - |
| $\mathrm{BERT}_{\mathrm{small}}$ | POS | original | 94.26 | 0.13 | (256, 32) | 17 | Y | 0.96 |
| | | fine-tuned | 95.43 | 0.06 | (128, 64) | 17 | Y | 0.72 |
| | DEP | original | 92.93 | 0.14 | (256, 64) | 46 | Y | 0.93 |
| | | fine-tuned | 94.48 | 0.14 | (256, 64) | 46 | Y | 0.78 |
| | PS-fxn | original | 86.26 | 0.54 | (256, 256) | 40 | Y | 0.82 |
| | | fine-tuned | 85.08 | 0.35 | (256, 256) | 40 | Y | 0.44 |
| | PS-role | original | 74.22 | 1.03 | (256, 256) | 46 | Y | 0.84 |
| | | fine-tuned | 74.57 | 0.61 | (128, 128) | 46 | Y | 0.54 |
| | TREC-50 | original | 81.32 | 0.61 | (256, 128) | 58 | N | - |
| | | fine-tuned | 89.60 | 0.22 | (256, 64) | 51 | N | - |
| $\mathrm{BERT}_{\mathrm{medium}}$ | POS | original | 94.40 | 0.08 | (256, 128) | 17 | Y | 0.97 |
| | | fine-tuned | 95.56 | 0.05 | (64, 32) | 17 | Y | 0.67 |
| | DEP | original | 92.54 | 0.14 | (256, 256) | 46 | Y | 0.94 |
| | | fine-tuned | 94.76 | 0.20 | (128, 128) | 46 | Y | 0.79 |
| | PS-fxn | original | 86.56 | 0.41 | (256, 128) | 40 | Y | 0.80 |
| | | fine-tuned | 88.45 | 0.45 | (128, 256) | 40 | Y | 0.59 |
| | PS-role | original | 76.28 | 1.00 | (256, 32) | 46 | Y | 0.83 |
| | | fine-tuned | 78.86 | 0.58 | (128, 128) | 46 | Y | 0.58 |
| | TREC-50 | original | 80.68 | 1.16 | (256, 64) | 52 | N | - |
| | | fine-tuned | 89.80 | 0.33 | (32, 64) | 52 | N | - |
| $\mathrm{BERT}_{\mathrm{base}}$ | POS | original | 93.39 | 0.31 | (256, 128) | 17 | Y | 0.97 |
| | | fine-tuned | 95.68 | 0.02 | (128, 64) | 17 | Y | 0.70 |
| | DEP | original | 89.39 | 0.08 | (256, 128) | 46 | Y | 0.92 |
| | | fine-tuned | 94.76 | 0.05 | (64, 256) | 46 | Y | 0.76 |
| | PS-fxn | original | 87.75 | 0.41 | (256, 128) | 40 | Y | 0.84 |
| | | fine-tuned | 89.58 | 0.67 | (32, 256) | 40 | Y | 0.57 |
| | PS-role | original | 74.49 | 0.84 | (256, 128) | 46 | Y | 0.82 |
| | | fine-tuned | 81.14 | 0.26 | (256, 128) | 46 | Y | 0.52 |
| | TREC-50 | original | 85.24 | 0.85 | (256, 128) | 52 | N | - |
| | | fine-tuned | 90.36 | 0.32 | (64, 32) | 51 | N | - |
+ +Table 7: A complete table of the probing results of five representations on five tasks. + +![](images/0ef249a23407bd3ee6a49d4c179bbd5c5253bfd5f872bdd3d5de7fa6841074a4.jpg) + +![](images/d641e9197b86dd0c152478194aa466a66334359ee031df3cd0b6c1c3eb7fc347.jpg) + +![](images/96aab14fb7e1e8e732ac3f2dfc875633d01d1b38f4e6361d6dce4d2e867a1fde.jpg) + +
|   |   |
| --- | --- |
| acl | expl |
| acl:relcl | fixed |
| advcl | flat |
| advmod | goeswith |
| amod | iobj |
| appos | mark |
| aux | nmod |
| aux:pass | nmod:npmod |
| case | nmod:poss |
| cc | nmod:tmod |
| cc:preconj | nsubj |
| ccomp | nsubj:pass |
| compound | nummod |
| compound:prt | obj |
| conj | obl |
| cop | obl:npmod |
| csubj | obl:tmod |
| csubj:pass | orphan |
| dep | parataxis |
| det | punct |
| det:predet | reparandum |
| discourse | vocative |
| dislocated | xcomp |
+ +![](images/3f008914d76f038becde7b66a0f2444bf3109ef458192a090f6c27414c206b22.jpg) + +![](images/d967a40dc3f3ad90b64d7291d491392309c9dd4e3d480890b496cc67c49856c6.jpg) + +![](images/395639ce1a4614c0babd3056d4ab53d6ba5ff6c39bbbde9e12c20b78c53317c2.jpg) + +![](images/5f84b51b6ee8a464915b151fe9835fe8204ae3bacc9d7efd3536e917659b5deb.jpg) + +![](images/43903fa1be25c8b11f2cc0a422bb1badffc8d8689e67fa7616bc388142947875.jpg) + +![](images/e9416e7298c720327d2933114e1cb01a16348ae4a12e7a081487a0ad038e9f03.jpg) + +![](images/6dff42346715a590239e8f27171dc136e1e2ff76da7345454f277f5b9c2c94c1.jpg) +Figure 8: The PCA projection of the difference vector between the centroids of labels before and after fine-tuning based on dependency prediction task and $\mathrm{BERT}_{\mathrm{base}}$ . + +![](images/b42995037f57fad0dd6a735f2e946d3e61a7c14c6d071a9777be14afc4ecb690.jpg) + +![](images/a23557583b98dc80a6cba768c6a81860211390606b9848752373eee1cde6c1be.jpg) + +![](images/82863353807a27ef4f36140d3a23f193939e4fa72077bc129d7783cda71a834c.jpg) + +![](images/f4a70bd0c70814149f78535eed76f87500e5f7d85de6ac06b7b3bb6f711c5915.jpg) + +![](images/0da861d4b7978095b33481e98922a068f6e1298fe2597eece5e5527a8c9bc351.jpg) + +
|   |   |
| --- | --- |
| Accompanier | Instrument |
| Agent | Interval |
| Approximator | Locus |
| Beneficiary | Manner |
| Causer | Means |
| Characteristic | PartPortion |
| Circumstance | Path |
| Co-Agent | Possession |
| Co-Theme | Possessor |
| ComparisonRef | Purpose |
| Cost | Quantity |
| Direction | RateUnit |
| Duration | Source |
| EndTime | Species |
| Explanation | StartTime |
| Extent | Stuff |
| Frequency | Theme |
| Gestalt | Time |
| Goal | Topic |
| Identity | Whole |
+ +![](images/91c0e6ff0dda9ade24c20a5899629fe54c4674d95569cbf0017b8944aa788270.jpg) + +![](images/56f536e357478f5af935a5424465fbc505a0b557d6d160f486d1b97a486a3df6.jpg) + +![](images/3672c2c3b731a484e58ee282e295db034db5a2e8dc06db9836fb4f6e44f88a71.jpg) + +![](images/07623ec1d9885aac05f74f96b8c0ab5bc922cbcd6f8b168981e8a7d2f7f10909.jpg) + +![](images/80eda48a727a7bb1f0843c9632cd10257983440ba525789c5762c17498ee73aa.jpg) + +![](images/850a589d649259acf3f47e8350d3eb7ede86b70f3656e40ce7d7d8abb5612554.jpg) + +![](images/8215f8202c10932ffee03234ca154c54999a9e3faa866874c4dc1fd6614546c1.jpg) +Figure 9: The PCA projection of the difference vector between the centroids of labels before and after fine-tuning based on Supersense function task and $\mathrm{BERT}_{\mathrm{base}}$ + +![](images/59c1333424427cfe345c9336efb6dc440291e89cc7c81a5bbd7ca9867a0f3783.jpg) + +![](images/bf6c4f2686232a4e30d84d234964a718209d10b63fcf10200ef2932ab02692ff.jpg) + +![](images/ade8ad6158fa09ac079db9b4e3971474c43e6608c151ad317b5e681c67f466be.jpg) + +![](images/2af68a25f61f3913929bcfefac67ffaf44d82d364b80ab40f2fcaebf8d8c0f45.jpg) + +![](images/5cb088ef83ec09897daeed90923eabb1000cc864521d589c8fea2c762597422d.jpg) + +![](images/64d8a9a1d054b8faa8304c7355f2e7c8e963302766d9728a601f3f6764dd4884.jpg) + +![](images/5fab0b301a610939a218c5922f9e8babd02aa69a596c8d69e9c6cec9aef71581.jpg) + +![](images/1252ed2073572dc071e6d4ecb111cd2f5f1dc0fa8cad722452820f123c9b2fa6.jpg) + +![](images/fc7e4901eb83a9f0e6c8cd42a4fd293a31a0722cbdc0038e20560513ccec5cc7.jpg) + +![](images/8ec67e2aefc83f8025a03893372b06ade0564dd4b8d864efe9859473ae6fca50.jpg) + +![](images/7ce499a8874aae6c0e2046b563d12b76b49e246b7bb78f3393abbc778d5fedb9.jpg) + +![](images/3399811f5ef7d799d68392cd19215af898365ccbd5cdc0f5bb4319f027201ed1.jpg) + +![](images/baeae33d0ebf7e7d47215e4af488011009ad7c08ab1a9bc05d71340eb2422d1a.jpg) +Figure 10: The PCA projection of the difference vector between the centroids of labels before and after fine-tuning based on 
the Supersense role task and $\mathrm{BERT}_{\mathrm{base}}$.

![](images/8d70f717fa7ae7def3ddc7f9fe7796a6219fb62404a6cf24d53986f9177c12ee.jpg)

![](images/31ea42ad50c101e77e6913e803007e65ee54e051ecb3253454819ff066852571.jpg)

# F Cluster Number Revision

We discovered a bug in the implementation of DIRECTPROBE that causes the merging to stop early while the remaining clusters are still mergeable. The main paper (Table 3, Table 4, and Table 7) has been updated to report the correct results. Table 8 shows the original results.

This bug does not change the nature of the linearity of datasets and representations. All the findings from the original experiments remain the same. The bug only affects the number of clusters when the representation is non-linear for a given task.
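The effect of stopping early can be illustrated with a toy merging loop. This is only a stand-in, not DIRECTPROBE itself: here two clusters are treated as mergeable simply when they share a label, whereas the actual criterion involves separability of the clusters.

```python
# Toy illustration of why stopping a greedy merge early inflates the cluster
# count: singleton clusters of labeled points are merged while any two
# clusters share a label (a hypothetical stand-in for the real criterion).
def merge_clusters(items, max_merges=None):
    # items: list of (point, label); start from singleton clusters
    clusters = [[(p, lab)] for p, lab in items]
    merges = 0
    while max_merges is None or merges < max_merges:
        # find the first pair of clusters with the same label
        pair = next(((i, j) for i in range(len(clusters))
                     for j in range(i + 1, len(clusters))
                     if clusters[i][0][1] == clusters[j][0][1]), None)
        if pair is None:
            break
        i, j = pair
        clusters[i] += clusters.pop(j)
        merges += 1
    return clusters

data = [(0, "N"), (1, "N"), (2, "V"), (3, "V"), (4, "V")]
print(len(merge_clusters(data)))                # 2: one cluster per label
print(len(merge_clusters(data, max_merges=1)))  # 4: early stop leaves extras
```

Running the loop to completion yields one cluster per label, while an early stop reports more clusters even though some remain mergeable, which is exactly the pattern visible in the non-linear rows of Table 8 versus Table 7.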
| Representations | Task | | Acc | Std | Best Layer Size | #Cluster | is Linear | Similarity |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $\mathrm{BERT}_{\mathrm{tiny}}$ | POS | original | 90.76 | 0.24 | (256, 64) | 3936 | N | - |
| | | fine-tuned | 91.67 | 0.29 | (64, 64) | 20 | N | - |
| | DEP | original | 86.74 | 0.22 | (256, 256) | 653 | N | - |
| | | fine-tuned | 89.04 | 0.20 | (256, 256) | 46 | Y | 0.88 |
| | PS-fxn | original | 74.14 | 1.42 | (256, 256) | 402 | N | - |
| | | fine-tuned | 74.40 | 0.68 | (256, 128) | 40 | Y | 0.72 |
| | PS-role | original | 58.38 | 0.78 | (256, 64) | 46 | Y | 0.76 |
| | | fine-tuned | 60.31 | 0.29 | (64, 64) | 46 | Y | 0.70 |
| | TREC-50 | original | 68.12 | 0.82 | (256, 256) | 399 | N | - |
| | | fine-tuned | 84.04 | 0.93 | (256, 256) | 51 | N | - |
| $\mathrm{BERT}_{\mathrm{mini}}$ | POS | original | 93.81 | 0.10 | (256, 32) | 2429 | N | - |
| | | fine-tuned | 94.91 | 0.03 | (256, 32) | 17 | Y | 0.70 |
| | DEP | original | 91.82 | 0.09 | (256, 128) | 46 | Y | 0.93 |
| | | fine-tuned | 93.55 | 0.07 | (256, 128) | 46 | Y | 0.86 |
| | PS-fxn | original | 82.45 | 1.07 | (256, 256) | 40 | Y | 0.77 |
| | | fine-tuned | 84.25 | 0.39 | (256, 128) | 40 | Y | 0.53 |
| | PS-role | original | 68.05 | 1.08 | (256, 256) | 46 | Y | 0.81 |
| | | fine-tuned | 71.90 | 1.06 | (256, 64) | 46 | Y | 0.59 |
| | TREC-50 | original | 74.12 | 1.25 | (256, 256) | 127 | N | - |
| | | fine-tuned | 88.36 | 0.50 | (64, 32) | 52 | N | - |
| $\mathrm{BERT}_{\mathrm{small}}$ | POS | original | 94.26 | 0.13 | (256, 32) | 17 | Y | 0.96 |
| | | fine-tuned | 95.43 | 0.06 | (128, 64) | 17 | Y | 0.72 |
| | DEP | original | 92.93 | 0.14 | (256, 64) | 46 | Y | 0.93 |
| | | fine-tuned | 94.48 | 0.14 | (256, 64) | 46 | Y | 0.78 |
| | PS-fxn | original | 86.26 | 0.54 | (256, 256) | 40 | Y | 0.82 |
| | | fine-tuned | 85.08 | 0.35 | (256, 256) | 40 | Y | 0.44 |
| | PS-role | original | 74.22 | 1.03 | (256, 256) | 46 | Y | 0.84 |
| | | fine-tuned | 74.57 | 0.61 | (128, 128) | 46 | Y | 0.54 |
| | TREC-50 | original | 81.32 | 0.61 | (256, 128) | 113 | N | - |
| | | fine-tuned | 89.60 | 0.22 | (256, 64) | 51 | N | - |
| $\mathrm{BERT}_{\mathrm{medium}}$ | POS | original | 94.40 | 0.08 | (256, 128) | 17 | Y | 0.97 |
| | | fine-tuned | 95.56 | 0.05 | (64, 32) | 17 | Y | 0.67 |
| | DEP | original | 92.54 | 0.14 | (256, 256) | 46 | Y | 0.94 |
| | | fine-tuned | 94.76 | 0.20 | (128, 128) | 46 | Y | 0.79 |
| | PS-fxn | original | 86.56 | 0.41 | (256, 128) | 40 | Y | 0.80 |
| | | fine-tuned | 88.45 | 0.45 | (128, 256) | 40 | Y | 0.59 |
| | PS-role | original | 76.28 | 1.00 | (256, 32) | 46 | Y | 0.83 |
| | | fine-tuned | 78.86 | 0.58 | (128, 128) | 46 | Y | 0.58 |
| | TREC-50 | original | 80.68 | 1.16 | (256, 64) | 110 | N | - |
| | | fine-tuned | 89.80 | 0.33 | (32, 64) | 52 | N | - |
| $\mathrm{BERT}_{\mathrm{base}}$ | POS | original | 93.39 | 0.31 | (256, 128) | 17 | Y | 0.97 |
| | | fine-tuned | 95.68 | 0.02 | (128, 64) | 17 | Y | 0.70 |
| | DEP | original | 89.39 | 0.08 | (256, 128) | 46 | Y | 0.92 |
| | | fine-tuned | 94.76 | 0.05 | (64, 256) | 46 | Y | 0.76 |
| | PS-fxn | original | 87.75 | 0.41 | (256, 128) | 40 | Y | 0.84 |
| | | fine-tuned | 89.58 | 0.67 | (32, 256) | 40 | Y | 0.57 |
| | PS-role | original | 74.49 | 0.84 | (256, 128) | 46 | Y | 0.82 |
| | | fine-tuned | 81.14 | 0.26 | (256, 128) | 46 | Y | 0.52 |
| | TREC-50 | original | 85.24 | 0.85 | (256, 128) | 162 | N | - |
| | | fine-tuned | 90.36 | 0.32 | (64, 32) | 51 | N | - |
+ +Table 8: Original table of the probing results of five representations on five tasks. These results were in the original version of the paper before we found a bug in the implementation of DIRECTPROBE. The updated results are in Table 7. See Appendix C for details. \ No newline at end of file diff --git a/acloserlookathowfinetuningchangesbert/images.zip b/acloserlookathowfinetuningchangesbert/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b4106bc731180ffce7dea419b883a7ea3641ec9c --- /dev/null +++ b/acloserlookathowfinetuningchangesbert/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e58d4ae66e38eb43649c817bcd283b047aef84222fe02fc5421842b99111106 +size 1502547 diff --git a/acloserlookathowfinetuningchangesbert/layout.json b/acloserlookathowfinetuningchangesbert/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..0dfd5bc5957e005104e77b06d657cfc2e4ed28c4 --- /dev/null +++ b/acloserlookathowfinetuningchangesbert/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fbaa4c402757ea48b6bf5ad8b55ccdc364ea109e568f9c86342ed77c8b345c3f +size 490860 diff --git a/acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/cd6f3e84-3472-47c7-bdac-ddca1c2e6b83_content_list.json b/acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/cd6f3e84-3472-47c7-bdac-ddca1c2e6b83_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..9698e097885b2080a58ef29f1d5a6014f70867ab --- /dev/null +++ b/acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/cd6f3e84-3472-47c7-bdac-ddca1c2e6b83_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ae072ee98d0ac5d2d04985fe71b94009dde25dc4b7a4a4d98a2ccd0efa641201 +size 82565 diff --git a/acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/cd6f3e84-3472-47c7-bdac-ddca1c2e6b83_model.json 
b/acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/cd6f3e84-3472-47c7-bdac-ddca1c2e6b83_model.json new file mode 100644 index 0000000000000000000000000000000000000000..2f3586811b9fcb8fc68fc304b68c8a16c6ee78f6 --- /dev/null +++ b/acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/cd6f3e84-3472-47c7-bdac-ddca1c2e6b83_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5ec403515d103b234c70b627167cfb3c8cd5406d238a682bcfa2424640fd6305 +size 96813 diff --git a/acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/cd6f3e84-3472-47c7-bdac-ddca1c2e6b83_origin.pdf b/acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/cd6f3e84-3472-47c7-bdac-ddca1c2e6b83_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a78d0c2aa06653f6afb8ca26f31288024c595405 --- /dev/null +++ b/acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/cd6f3e84-3472-47c7-bdac-ddca1c2e6b83_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a44572ba47ae27d68ab32dc3479429112a924ed2a87c47a32a7c71fdcb76749b +size 394200 diff --git a/acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/full.md b/acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b7c8536f380b266b2eee458776de4e99b5fda981 --- /dev/null +++ b/acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/full.md @@ -0,0 +1,397 @@ +# A Comparative Study of Faithfulness Metrics for Model Interpretability Methods + +Chun Sik Chan, Huanqi Kong, Guanqing Liang + +Wisers AI Lab, Wisers Information Limited + +{tonychan, katekong, quincyliang}@wisers.com + +# Abstract + +Interpretation methods to reveal the internal reasoning processes behind machine learning models have attracted increasing attention in recent years. 
To quantify the extent to which the identified interpretations truly reflect the intrinsic decision-making mechanisms, various faithfulness evaluation metrics have been proposed. However, we find that different faithfulness metrics show conflicting preferences when comparing different interpretations. Motivated by this observation, we aim to conduct a comprehensive and comparative study of the widely adopted faithfulness metrics. In particular, we introduce two assessment dimensions, namely diagnosticity and time complexity. Diagnosticity refers to the degree to which the faithfulness metric favours relatively faithful interpretations over randomly generated ones, and time complexity is measured by the average number of model forward passes. According to the experimental results, we find that the sufficiency and comprehensiveness metrics have higher diagnosticity and lower time complexity than the other faithfulness metrics.

# 1 Introduction

NLP has made tremendous progress in recent years. However, the increasing complexity of the models makes their behaviour difficult to interpret. To disclose the rationale behind the models, various interpretation methods have been proposed.

Interpretation methods can be broadly classified into two categories: model-based methods and post-hoc methods. Model-based approaches refer to designing simple, white-box machine learning models whose internal decision logic can be easily interpreted, such as linear regression models, decision trees, etc. A post-hoc method is applied after model training and aims to disclose the relationship between feature values and predictions. As pretrained language models (Devlin et al., 2019a; Liu et al., 2019; Brown et al., 2020) become more popular, deep learning models are becoming more and more complex. Therefore, post-hoc methods are the only option for interpreting such models.
Post-hoc interpretation methods can be divided into two categories: gradient-based (Simonyan et al., 2014; Sundararajan et al., 2017; Shrikumar et al., 2019) and perturbation-based (Robnik-Sikonja and Kononenko, 2008; Zeiler and Fergus, 2013; Ribeiro et al., 2016). Gradient-based methods assume the model is differentiable and attempt to interpret the model outputs through the gradient information. Perturbation-based methods interpret model outputs by perturbing the input data.

To verify whether, and to what extent, the interpretations reflect the intrinsic reasoning process, various faithfulness metrics have been proposed. Most faithfulness metrics use a removal-based criterion, i.e., removing or retaining only the important tokens identified by the interpretation and observing the changes in model outputs (Serrano and Smith, 2019; Chrysostomou and Aletras, 2021; Arras et al., 2017; DeYoung et al., 2020).

However, we observe that the existing faithfulness metrics are not always consistent with each other and can even lead to contradictory conclusions. As shown in the example from our experiments (Table 1), the conclusions drawn by two different faithfulness metrics, Sufficiency (SUFF) and Decision Flip - Fraction of Tokens (DFFOT), conflict with each other. More specifically, DFFOT concludes that the interpretation by the LIME method is the best among the four interpretations, while SUFF ranks it as the worst. In this case, which faithfulness metric(s) should we adopt to compare interpretations?

Motivated by the above observation, we aim to conduct a comprehensive and comparative study of faithfulness metrics. We argue that a good faithfulness metric should be able to effectively and efficiently distinguish between faithful and unfaithful interpretations. To quantitatively assess this capability, we introduce two dimensions, diagnosticity and time complexity.

| Method | Interpretation Visualization | SUFF | DFFOT |
| --- | --- | --- | --- |
| LIME | A cop story that understands the medium amazingly well | 4 | 1 |
| Word Omission | A cop story that understands the medium amazingly well | 1 | 4 |
| Saliency Map | A cop story that understands the medium amazingly well | 3 | 3 |
| Integrated Gradients | A cop story that understands the medium amazingly well | 2 | 2 |

Table 1: An example where different interpretation methods assign different importance scores for the same trained CNN model on the SST dataset. The tints of blue mark the magnitude of importance scores for positive sentiment. The numbers 1, 2, 3 and 4 are the rankings of the faithfulness values evaluated by the corresponding faithfulness metrics, where rank 1 indicates the best and 4 the worst.

Diagnosticity refers to the extent to which a faithfulness metric prefers faithful rather than unfaithful interpretations. However, due to the opaque nature of deep learning models, it is not easy to obtain the ground truth for faithful interpretation (Jacovi and Goldberg, 2020). To concretize this issue, we use random interpretations, i.e., randomly assigning importance scores to tokens regardless of the internal processes of the model, as the relatively unfaithful interpretations. In contrast, we treat interpretations generated by interpretation methods as relatively faithful interpretations. In this way, we construct the hypothesis that a faithfulness metric is diagnostic only if it can clearly distinguish between random interpretations and interpretations generated by interpretation methods. In addition, we introduce time complexity to estimate the computational cost of each metric, using the average number of model forward passes.

In this paper, we evaluate six commonly adopted faithfulness metrics. We find that the sufficiency and comprehensiveness metrics outperform the other faithfulness metrics: they are more diagnostic and less complex. Secondly, the two correlation-based metrics, namely Correlation between Importance and Output Probability and Monotonicity, show promising diagnosticity but suffer from high time complexity.
Last but not least, decision flip metrics, such as Fraction of Tokens and Most Informative Token, perform the worst in our assessments.

The main contributions of this paper are as follows:

- We conduct a comparative study of six widely used faithfulness metrics and identify their inconsistency issues.
- We propose a quantitative approach to assess faithfulness metrics along two dimensions, namely diagnosticity and time complexity.

# 2 Terminology and Notations

We first introduce the prerequisite terminology and notations for our discussion.

Terminology A "classification instance" is the input and output values of a classification model, which we apply interpretation methods on. An "interpretation" of a classification instance is a sequence of scores where each score quantifies the importance of the input token at the corresponding position. An "interpretation pair" is a pair of interpretations of the same classification instance. An "interpretation method" is a function that generates an interpretation from a classification instance with the associated classification model.

Notations Let $x$ be the input tokens. Denote the number of tokens of $x$ as $l_{x}$. Denote the predicted class of $x$ as $c(x)$, and the predicted probability corresponding to class $j$ as $p_{j}(x)$.

Assume an interpretation is given. Denote the $k$-th most important token as $x_{k}$. Denote the input sequence containing only the top $k$ (or top $q\%$) important tokens as $x_{:k}$ (or $x_{:q\%}$). Denote the modified input sequence from which a token sub-sequence $x'$ is removed as $x \setminus x'$.

Let $(x,y)$ be a classification instance associated with classification model $m$, and $g$ be an interpretation method. Denote the interpretation of $(x,y)$ generated by $g$ as $g(x,y,m)$. Let $u$ be an interpretation, $(u,v)$ be an interpretation pair, and $F$ be a faithfulness metric.
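The token-selection notation can be made concrete in code. A minimal, hypothetical Python sketch of $x_{:k}$ and $x \setminus x'$, assuming the input is a list of tokens and an interpretation is a parallel list of importance scores:

```python
# Hypothetical sketch of the notation: x is a token list, u a parallel list of
# importance scores; tokens keep their original order after selection/removal.
def top_k(x, u, k):
    """x_{:k}: the input restricted to its k most important tokens."""
    keep = set(sorted(range(len(x)), key=lambda i: u[i], reverse=True)[:k])
    return [tok for i, tok in enumerate(x) if i in keep]

def remove_positions(x, positions):
    """x \\ x': the input with the tokens at `positions` removed."""
    drop = set(positions)
    return [tok for i, tok in enumerate(x) if i not in drop]

x = ["a", "cop", "story", "that", "understands"]
u = [0.05, 0.9, 0.4, 0.1, 0.7]
print(top_k(x, u, 2))            # ['cop', 'understands']
print(remove_positions(x, [0]))  # ['cop', 'story', 'that', 'understands']
```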
Denote the importance score that $u$ assigns to the $i$-th input token as $[u]_i$. Denote the statement "$u$ is more faithful than $v$" as "$u \succ v$", and the statement "$F$ considers $u$ as more faithful than $v$" as "$u \succ_F v$".

# 3 Faithfulness Metrics

An interpretation is called faithful if the identified important tokens truly contribute to the decision making process of the model. Mainstream faithfulness metrics are removal-based metrics, which measure the changes in model outputs after removing important tokens.

We compare the most widely adopted faithfulness metrics, introduced as follows.

Decision Flip - Most Informative Token (DFMIT) Introduced by Chrysostomou and Aletras (2021), this metric focuses on only the most important token. It assumes that the interpretation is faithful only if the prediction label changes after removing the most important token, i.e.

$$
\mathrm{DFMIT} = \begin{cases} 1 & \text{if } c(x) \neq c(x \setminus x_{:1}) \\ 0 & \text{if } c(x) = c(x \setminus x_{:1}) \end{cases}
$$

A score of 1 implies that the interpretation is faithful.

Decision Flip - Fraction of Tokens (DFFOT) This metric measures faithfulness as the minimum fraction of important tokens that need to be erased in order to change the model decision (Serrano and Smith, 2019), i.e.

$$
\mathrm{DFFOT} = \begin{cases} \min_{k} \frac{k}{l_x} & \text{s.t. } c(x) \neq c(x \setminus x_{:k}) \\ 1 & \text{if } c(x) = c(x \setminus x_{:k}) \text{ for all } k \end{cases}
$$

If the predicted class never changes even when all tokens are deleted, the score is 1. A lower value of DFFOT means the interpretation is more faithful.

Comprehensiveness (COMP) As proposed by DeYoung et al. (2020), comprehensiveness assumes that an interpretation is faithful if the important tokens are broadly representative of the entire input sequence.
It measures the faithfulness score by the change in the output probability of the original predicted class after the important tokens are removed, i.e.

$$
\mathrm{COMP} = \frac{1}{|B|}\sum_{q\in B}\left(p_{c(x)}(x) - p_{c(x)}(x\setminus x_{:q\%})\right)
$$

We use $q \in B = \{1, 5, 10, 20, 50\}$ as in the original paper. A higher comprehensiveness score implies a more faithful interpretation.

Sufficiency (SUFF) Also proposed by DeYoung et al. (2020), this metric measures whether the important tokens contain sufficient information to retain the prediction. It keeps only the important tokens and calculates the change in output probability compared to the original predicted class, i.e.

$$
\mathrm{SUFF} = \frac{1}{|B|} \sum_{q \in B} \left(p_{c(x)}(x) - p_{c(x)}(x_{:q\%})\right)
$$

We use $q \in B = \{1, 5, 10, 20, 50\}$ as in the original paper. A lower value of SUFF means that the interpretation is more faithful.

Correlation between Importance and Output Probability (CORR) This metric assumes that the interpretation is faithful if the token importance correlates with the corresponding predicted probability as the most important tokens are successively removed (Arya et al., 2019), i.e.

$$
\mathrm{CORR} = -\rho(\boldsymbol{u}, \boldsymbol{p})
$$

where $\boldsymbol{u}$ denotes the token importance in descending order and $\boldsymbol{p} = [p_{c(x)}(x\setminus x_1), p_{c(x)}(x\setminus x_2), \dots, p_{c(x)}(x\setminus x_{l_x})]$. $\rho(\cdot)$ denotes Pearson's correlation. The higher the CORR value, the more faithful the interpretation.

Monotonicity (MONO) This metric assumes that an interpretation is faithful if the probability of the predicted class monotonically increases when incrementally adding more important tokens (Arya et al., 2019). Starting from an empty sequence, the features are gradually added in ascending order of importance, and the corresponding classification probabilities are recorded.
Monotonicity is calculated as the correlation between the feature importance and the probability after adding the feature, i.e.

$$
\mathrm{MONO} = \rho(\boldsymbol{u}, \boldsymbol{p})
$$

where $\boldsymbol{u}$ denotes the token importance in descending order and $\boldsymbol{p} = [p_{c(x)}(x), p_{c(x)}(x \setminus x_{:1}), p_{c(x)}(x \setminus x_{:2}), \dots, p_{c(x)}(x \setminus x_{:(l_x - 1)})]$. $\rho(\cdot)$ denotes Pearson's correlation. The higher the monotonicity, the more faithful the interpretation.

# 4 Evaluation of Faithfulness Metrics

In this section, we propose an evaluation paradigm for faithfulness metrics by addressing two aspects: (1) diagnosticity and (2) time complexity. They are two complementary and important factors in selecting a faithfulness metric for assessing the faithfulness of interpretations.

# 4.1 Diagnosticity of Faithfulness Metrics

As we have observed in Table 1, faithfulness metrics might disagree with each other on faithfulness assessment. This naturally raises a question: which faithfulness metric(s) should we trust?

To the best of our knowledge, there is no preceding work on quantifying the effectiveness of faithfulness metrics. As a first attempt, we introduce *diagnosticity*, which is intended to measure "the degree to which a faithfulness metric favours faithful interpretations over unfaithful interpretations". Intuitively, the higher the diagnosticity, the more effective the faithfulness metric.

# 4.1.1 Definition of Diagnosticity

Definition 4.1 (Diagnosticity). We define the diagnosticity of a faithfulness metric as the probability that, given an interpretation pair $(u, v)$ such that $u$ is more faithful than $v$, the faithfulness metric also considers $u$ as more faithful than $v$, i.e.
$$
\mathrm{D}(F) = \mathrm{P}(u \succ_F v \mid u \succ v)
$$

As we will see later in this section, a set of interpretation pairs $(u,v)$ such that $u \succ v$ is required for estimating diagnosticity. Constructing such a dataset leads us to a paradox: we cannot be guaranteed that some generated interpretation is more faithful than the others when the measurement of faithfulness is still under debate. It is more realistic to assume that we can generate an interpretation pair $(u,v)$ such that $u$ is very likely to be more faithful than $v$. Thus, we relax the condition in Definition 4.1 to a probabilistic one as follows.

Definition 4.2 ($\varepsilon$-diagnosticity). Let $(u, v)$ be any interpretation pair, and $0 \leq \varepsilon \leq 1$. The $\varepsilon$-diagnosticity of a faithfulness metric $F$ is defined as

$$
\mathrm{D}_{\varepsilon}(F) = \mathrm{P}(u \succ_F v \mid \mathrm{P}(u \succ v) > 1 - \varepsilon)
$$

In the above definition, $\varepsilon$ represents the uncertainty in comparing the faithfulness of $u$ and $v$. In the next theorem, we show that $\varepsilon$-diagnosticity effectively approximates diagnosticity as long as $\varepsilon$ is small enough.

Theorem 4.1 (Error Bound of $\varepsilon$-diagnosticity). We can approximate diagnosticity with $\varepsilon$-diagnosticity with error less than $\varepsilon$, i.e.

$$
|\mathrm{D}_{\varepsilon}(F) - \mathrm{D}(F)| < \varepsilon
$$

The proof is provided in Appendix A.

# 4.1.2 Estimation of Diagnosticity

In the following, we show how we estimate $\varepsilon$-diagnosticity with a set of interpretation pairs $(u,v)$ where $u$ is very likely to be more faithful than $v$, namely an $\varepsilon$-faithfulness golden set where $\varepsilon$ is small.

Definition 4.3 ($\varepsilon$-faithfulness golden set). Let $0 \leq \varepsilon \leq 1$.
A set $Z_{\varepsilon}$ of interpretation pairs is called an $\varepsilon$-faithfulness golden set if it satisfies the following conditions:

1. All interpretation pairs in $Z_{\varepsilon}$ are independent and identically distributed (i.i.d.).
2. $\mathrm{P}(u\succ v) > 1 - \varepsilon$ for any interpretation pair $(u,v)\in Z_{\varepsilon}$.

Lemma 4.2. Let $\mathbb{1}(\cdot)$ be the indicator function, which takes value 1 when the input statement is true and 0 when it is false. Then $\mathbb{1}(u\succ_F v)\mid(\mathrm{P}(u\succ v) > 1 - \varepsilon)$ is a random variable and its expected value is equal to the $\varepsilon$-diagnosticity, i.e.

$$
\mathrm{D}_{\varepsilon}(F) = \mathbb{E}\left[\mathbb{1}(u \succ_F v) \mid \mathrm{P}(u \succ v) > 1 - \varepsilon\right]
$$

The proof is provided in Appendix B.

As a result, given an $\varepsilon$-faithfulness golden set $Z_{\varepsilon}$, we can estimate the $\varepsilon$-diagnosticity of a faithfulness metric $F$ by estimating the expected value in Lemma 4.2. Then, by the law of large numbers, we can simply estimate the expected value by computing the average value of $\mathbb{1}(u\succ_F v)$ over $Z_{\varepsilon}$, i.e.

$$
\mathrm{D}_{\varepsilon}(F) \approx \frac{1}{|Z_{\varepsilon}|} \sum_{(u, v) \in Z_{\varepsilon}} \mathbb{1}(u \succ_F v) \tag{1}
$$

When $|Z_{\varepsilon}|$ is large enough, we will have $\left|\frac{1}{|Z_{\varepsilon}|}\sum_{(u,v)\in Z_{\varepsilon}}\mathbb{1}(u\succ_F v) - \mathrm{D}(F)\right| < \varepsilon$ according to Theorem 4.1.

# 4.1.3 Generation of an $\varepsilon$-faithfulness golden set

According to Theorem 4.1 and Lemma 4.2, we can estimate the diagnosticity of any faithfulness metric using Equation 1, as long as we have an $\varepsilon$-faithfulness golden set where $\varepsilon$ is small enough.
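The estimator in Equation 1 reduces to counting how often the metric ranks the relatively faithful side higher. A minimal sketch, under the assumption that each golden-set pair has already been scored by the metric $F$ and that scores are oriented so that higher means more faithful (metrics like DFFOT or SUFF would need their sign flipped first); the scores below are synthetic:

```python
import random

# Sketch of Equation 1: estimate ε-diagnosticity as the fraction of golden-set
# pairs (u, v) on which the metric scores the relatively faithful side u higher.
def diagnosticity(scored_pairs):
    return sum(1 for fu, fv in scored_pairs if fu > fv) / len(scored_pairs)

rng = random.Random(0)
# Synthetic golden set: under a reasonably diagnostic metric the faithful side
# tends to score higher, but the two distributions overlap a little.
pairs = [(rng.uniform(0.5, 1.0), rng.uniform(0.0, 0.6)) for _ in range(1000)]
print(round(diagnosticity(pairs), 3))  # close to 1 for a well-separated metric
```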
We call the $u$ and $v$ in Definition 4.3 a relatively faithful interpretation and a relatively unfaithful interpretation, respectively. Next, we discuss the processes to generate each of them.

Generating Relatively Unfaithful Interpretations By definition, a faithful interpretation is an interpretation that truly reflects the underlying decision making process of the classification model. Therefore, an unfaithful interpretation is one that is completely irrelevant to the underlying decision making process of the classification model. We propose to generate relatively unfaithful interpretations by assigning a random importance score to each token in the input sequence, i.e., $[v]_i \sim \mathrm{Uniform}(0,1)$ for any token $1 \leq i \leq l_x$, where Uniform denotes the uniform distribution.

Generating Relatively Faithful Interpretations We propose to generate relatively faithful interpretations with the interpretation methods that infer interpretations from the underlying mechanism of the classification model. There are two mainstream categories of interpretation methods that satisfy this requirement (Alvarez-Melis and Jaakkola, 2018):

- Perturbation-based: relying on querying the model around the classification instance to infer the importance of input features.
- Gradient-based: using information from gradients to infer the importance of input features.

We select representative methods from both categories and introduce them in the following.

- Perturbation-based - LIME (Ribeiro et al., 2016): For each classification instance, a linear model on the input space is trained to approximate the local decision boundary, so that the learned coefficients can be used to quantify the importance of the corresponding input features on the model prediction.
- Perturbation-based - Word Omission (WO) (Robnik-Sikonja and Kononenko, 2008): For each $i$ -th input token, WO quantifies the importance of the input token by the change in output probability after removing it from the original input sequence, i.e. $p_{c(x)}(x) - p_{c(x)}(x \setminus \{i\})$ .
- Gradient-based - Saliency Map (SA) (Simonyan et al., 2014): For each $i$ -th input token, SA computes the gradients of the original model output with respect to the embedding associated with the input token, i.e. $\frac{\partial p_{c(x)}(z)}{\partial e(z)_i} |_{z = x}$ , and quantifies the importance of the input token by taking either the mean or the $l2$ norm of the gradients in the embedding dimension. We denote the former approach as $\mathrm{SA}_{\mu}$ and the latter as $\mathrm{SA}_{l2}$ .

Algorithm 1 An $\varepsilon$ -faithfulness golden set generation mechanism.

Input: $X$ : a set of i.i.d. classification instances associated with classification model $m$ ; $G$ : the set of interpretation methods for generating relatively faithful interpretations, i.e. $\{\mathrm{LIME}, \mathrm{WO}, \mathrm{SA}_{\mu}, \mathrm{SA}_{l2}, \mathrm{IG}_{\mu}, \mathrm{IG}_{l2}\}$ ; $K$ : sample size.
Output: an $\varepsilon$ -faithfulness golden set $Z$ .

$Z \leftarrow \{\}$
for 1 to $K$ :
$\quad (x, y) \leftarrow \mathrm{RandomSampler}(X)$
$\quad g \leftarrow \mathrm{RandomSampler}(G)$
$\quad u \leftarrow g(x, y, m)$
$\quad v \leftarrow r \in \mathbb{R}^{l_x}$ where $[r]_i \sim \mathrm{Uniform}(0, 1)$
$\quad Z \leftarrow Z \cup \{(u, v)\}$
return $Z$

- Gradient-based - Integrated Gradients (IG) (Sundararajan et al., 2017): As shown by Sundararajan et al. (2017), Integrated Gradients provides more robust interpretations than Saliency Map in general.
For each $i$ -th input token, it approximates the integral of the gradients of the original model output with respect to the embedding corresponding to the input token, along a straight line from a reference point $x_0$ to the original input sequence, i.e. $\int_{x_0 \to x} \frac{\partial p_{c(x)}(z)}{\partial e(z)_i} dz$ , and quantifies the importance of the input token by taking either the mean or the $l2$ norm of the integral in the embedding dimension. We denote the former approach as $\mathrm{IG}_{\mu}$ and the latter as $\mathrm{IG}_{l2}$ .

The interpretations generated using the above interpretation methods are highly likely to be more faithful than the randomly generated ones, because the generation processes of the former actually involve inference from model behaviour, while the random generation process is independent of any model behaviour. Therefore, in principle, the set of generated interpretation pairs will have a small value of $\varepsilon$ in Definition 4.3.

In Algorithm 1, we propose a mechanism to generate an $\varepsilon$ -faithfulness golden set from a set of i.i.d. classification instances based on the above
| Dataset | Splits (Train / Test) | BERT (F1) | CNN (F1) |
|---|---|---|---|
| SST | 6,920 / 1,821 | .917 | .804 |
| IMDB | 25,000 / 25,000 | .918 | .864 |
| AG | 120,000 / 7,600 | .946 | .919 |
processes. Note that the generated interpretation pairs will satisfy the first condition in Definition 4.3 because they are generated from i.i.d. samples, and will satisfy the second condition in Definition 4.3 with a presumably small $\varepsilon$ , as discussed above.

# 4.2 Time Complexity of Faithfulness Metrics

Two of the main applications of faithfulness metrics are (1) evaluating interpretation methods based on their average faithfulness scores on a dataset; and (2) gauging the quality of individual interpretations by flagging "unfaithful" interpretations.

Time complexity is an important aspect in evaluating faithfulness metrics because a fast faithfulness metric shortens the feedback loop in developing faithful interpretation methods, and allows runtime faithfulness checking of individual interpretations in a production environment.

Measurement of time complexity From the definitions of the faithfulness metrics in Section 3, we observe that their computations are dominated by model forward passes, which are denoted as $c(\cdot)$ or $p(\cdot)$ . Thus, we measure the time complexities of the faithfulness metrics in the number of model forward passes.

# 5 Experimental Setup

Datasets We conduct experiments on three text classification datasets used by Wiegreffe and Pinter (2019): (i) Stanford Sentiment Treebank (SST) (Socher et al., 2013); (ii) IMDB Large Movie Reviews (IMDB) (Maas et al., 2011); (iii) AG News Corpus (AG) (Zhang et al., 2015). We summarize the dataset statistics in Table 2.

Text classification models We adopt the two most common model architectures for text classification: (i) BERT (Devlin et al., 2019b); (ii) CNN (Kim, 2014). The former encodes contextualized representations of tokens and has higher accuracy in

Table 2: Dataset statistics and model performances (Macro-F1) on test sets.
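The golden-set generation mechanism of Algorithm 1 (§4.1.3) can be sketched as follows; `toy_model` and `length_method` are hypothetical stand-ins for a real classifier and a real interpretation method from $G$:

```python
import random

def generate_golden_set(instances, methods, model, K, seed=0):
    """Algorithm 1: pair a relatively faithful interpretation (from a
    randomly sampled interpretation method) with a random one, K times."""
    rng = random.Random(seed)
    golden = []
    for _ in range(K):
        x, y = rng.choice(instances)      # (x, y) <- RandomSampler(X)
        g = rng.choice(methods)           # g <- RandomSampler(G)
        u = g(x, y, model)                # relatively faithful interpretation
        v = [rng.random() for _ in x]     # relatively unfaithful: [v]_i ~ Uniform(0, 1)
        golden.append((u, v))
    return golden

# Toy stand-ins (hypothetical): a constant classifier and a "method" that
# scores each token by its character length.
toy_model = lambda x: 0
length_method = lambda x, y, m: [float(len(tok)) for tok in x]

data = [(["a", "great", "movie"], 1), (["terrible", "plot"], 0)]
golden = generate_golden_set(data, [length_method], toy_model, K=5)
```

Each generated pair keeps one importance score per input token, matching the $\mathbb{R}^{l_x}$ interpretation format assumed in §4.1.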
| Faithfulness metric | SST (%) | IMDB (%) | AG (%) | Average (%) |
|---|---|---|---|---|
| **BERT** | | | | |
| DFMIT | 14.79 | 6.07 | 3.34 | 8.07 |
| DFFOT | 65.16 | 72.02 | 65.68 | 67.62 |
| SUFF | 71.03 | 79.33 | 70.42 | 73.60 |
| COMP | 75.38 | <u>80.44</u> | <u>74.23</u> | <u>76.69</u> |
| CORR | 65.46 | 68.06 | 67.23 | 66.91 |
| MONO | <u>75.87</u> | 75.82 | 68.33 | 73.34 |
| **CNN** | | | | |
| DFMIT | 17.29 | 9.27 | 4.84 | 10.47 |
| DFFOT | 63.76 | 70.74 | 57.61 | 64.04 |
| SUFF | 71.54 | 75.91 | 77.97 | 75.14 |
| COMP | 71.39 | 73.46 | <u>81.73</u> | <u>75.53</u> |
| CORR | 72.17 | 68.92 | 71.82 | 70.97 |
| MONO | <u>72.39</u> | <u>77.09</u> | 75.12 | 74.87 |
Table 3: Diagnosticities (%) of all faithfulness metrics on all datasets for both BERT and CNN models. The right-most column states the average diagnosticities over the three datasets. In each column, we underline the highest value.

general, but at a cost of consuming more memory and computational resources. The latter uses pretrained word embeddings as token representations and is lighter and faster. Their performances on the test sets are shown in Table 2. The implementation details of both models can be found in Appendix C.1.

$\varepsilon$ -faithfulness golden set For each dataset and text classification model, we transform the test set into a set of classification instances and feed it into Algorithm 1 to generate an $\varepsilon$ -faithfulness golden set with a size of 8,000 ( $K$ in Algorithm 1). The implementation details of the interpretation methods can be found in Appendix C.2.

# 6 Results and Discussion

Diagnosticity We estimate the diagnosticities of the faithfulness metrics in Section 3 on all datasets for both CNN and BERT models. The results are shown in Table 3.

COMP and SUFF have the highest and the second highest average diagnosticities for both models. Hence, they are the most effective faithfulness metrics. We also observe that COMP has higher diagnosticities than SUFF on all datasets for the BERT model. This can be explained by the contextualization property of Transformer encoders (Vaswani et al., 2017): the hidden state of each token depends on all other tokens in the input sequence. Removing a portion of the important tokens will alter the whole context, and is likely to cause a dramatic change in model output.

DFMIT and DFFOT have the lowest and the second lowest average diagnosticities. Removing the most important token usually does not create enough perturbation to flip the original model decision.
In fact, the probability of decision flipping by removing the most important token is $\leq 14\%$ for recent state-of-the-art interpretation methods (Chrysostomou and Aletras, 2021). As a result, up to $86\%$ of interpretations are considered indifferent by DFMIT. For DFFOT, the probability of decision flipping by removing the important tokens in order depends not only on the quality of the interpretation but also on any model bias towards certain classes. For instance, decision flipping will be less likely to occur if the predicted class on the original input is the same as the one on the empty input sequence. Therefore, we find that the decision flipping metrics (DFMIT, DFFOT) are less effective than the metrics that operate on output probabilities (SUFF, COMP, CORR, MONO).

Time complexity We compare the time complexities of the faithfulness metrics in Section 3, measured in the number of model forward passes. We first analyze their time complexities based on their definitions in Table 4 and then measure their actual time complexities on all datasets in Table 5.

DFMIT is the fastest faithfulness metric, requiring only one model forward pass. DFFOT has a non-deterministic time complexity, which depends on how quickly the decision flip occurs, and it is the second slowest faithfulness metric on all datasets. SUFF and COMP are the second fastest faithfulness metrics on average, requiring at most 5 model forward passes. CORR and MONO are the slowest faithfulness metrics, with time complexity equal to the number of input tokens.

Which faithfulness metric(s) should we adopt?

In Figure 1, we evaluate the faithfulness metrics by both their diagnosticities and time complexities.

Figure 1 suggests that we should always adopt COMP and SUFF.
This is because (i) they have higher diagnosticities and lower time complexities than DFFOT; (ii) they have a similar level of diagnosticity but much lower time complexities than CORR and MONO; and (iii) DFMIT has a diagnosticity less than 0.1, which is below an acceptable level.
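DFFOT's non-deterministic cost, analyzed in Table 4, comes from removing tokens in decreasing-importance order until the decision flips: the flip may occur after the first removal or never. A minimal sketch, where `predict` is a hypothetical keyword classifier standing in for a real model forward pass:

```python
def dffot_forward_passes(tokens, importance, predict):
    """Decision Flip - Fraction Of Tokens: remove tokens in decreasing
    importance order until predict() changes; return the fraction of
    tokens removed and the number of removal forward passes used
    (between 1 and len(tokens), matching the [1, l_x] range)."""
    original = predict(tokens)
    order = sorted(range(len(tokens)), key=lambda i: -importance[i])
    removed = set()
    passes = 0
    for rank, i in enumerate(order, start=1):
        removed.add(i)
        remaining = [t for j, t in enumerate(tokens) if j not in removed]
        passes += 1
        if predict(remaining) != original:
            return rank / len(tokens), passes
    return 1.0, passes  # the decision never flipped

# Toy classifier (hypothetical): predicts positive iff "good" is present.
predict = lambda toks: int("good" in toks)

frac, passes = dffot_forward_passes(
    ["a", "good", "film"], importance=[0.1, 0.9, 0.2], predict=predict)
# Here the decision flips after removing one of three tokens: frac = 1/3.
```

A faithful interpretation ranks "good" first, so the flip happens after a single removal; a poor interpretation would need more passes, which is exactly the non-determinism reported above.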
| Faithfulness metric | Deterministic | Value or range (# model forward passes) |
|---|---|---|
| DFMIT | yes | 1 |
| DFFOT | no | $[1, l_x]$ |
| SUFF | yes | $\min(5, l_x)$ |
| COMP | yes | $\min(5, l_x)$ |
| CORR | yes | $l_x$ |
| MONO | yes | $l_x$ |
+ +Table 4: Analysis of the time complexities of faithfulness metrics. $l_{x}$ denotes the number of input tokens. + +
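To make the $\min(5, l_x)$ entries for SUFF and COMP in Table 4 concrete: both remove (or retain) the top-$k\%$ most important tokens for a fixed set of bins, one forward pass per bin (plus one shared pass on the full input). A sketch of COMP, assuming the five bins $\{1, 5, 10, 20, 50\}\%$ used by DeYoung et al. (2020); `prob` is a toy stand-in for the model's output probability:

```python
BINS = (0.01, 0.05, 0.10, 0.20, 0.50)  # top-k% bins from DeYoung et al. (2020)

def comprehensiveness(tokens, importance, prob, bins=BINS):
    """COMP: average drop in class probability when the top-k% most
    important tokens are *removed*; one forward pass per bin."""
    p_full = prob(tokens)
    order = sorted(range(len(tokens)), key=lambda i: -importance[i])
    drops = []
    for b in bins:
        k = max(1, int(round(b * len(tokens))))  # at least one token removed
        top = set(order[:k])
        reduced = [t for i, t in enumerate(tokens) if i not in top]
        drops.append(p_full - prob(reduced))
    return sum(drops) / len(drops)

# Toy probability function (hypothetical): fraction of positive words.
POS = {"good", "great"}
prob = lambda toks: sum(t in POS for t in toks) / max(len(toks), 1)

comp = comprehensiveness(["a", "good", "film", "overall"],
                         importance=[0.0, 1.0, 0.2, 0.1], prob=prob)
```

SUFF is the mirror image: it keeps only the top-$k\%$ tokens and measures how little the probability drops, so it has the same forward-pass budget.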
| Faithfulness metric | SST | IMDB | AG | Average |
|---|---|---|---|---|
| DFMIT | 1.0 | 1.0 | 1.0 | 1.0 |
| DFFOT | 9.3 | 78.7 | 30.0 | 39.4 |
| SUFF | 5.0 | 5.0 | 5.0 | 5.0 |
| COMP | 5.0 | 5.0 | 5.0 | 5.0 |
| CORR | 20.3 | 193.1 | 47.7 | 87.1 |
| MONO | 20.3 | 193.1 | 47.7 | 87.1 |
Table 5: Actual time complexities of faithfulness metrics, measured by the average number of model forward passes on each dataset.

![](images/982d8c5dfd4149603ecc20d798b65bd8594738c1372c7f00d3e99094736cd564.jpg)
Figure 1: Diagnosticity vs. time complexity for faithfulness metrics. The values are averages over all datasets and classification models. The faithfulness metrics near the top-right corner are more desirable than those near the bottom-left corner.

We would prefer COMP and SUFF over DFMIT even though DFMIT has the lowest time complexity.

Note that our evaluation framework can be used to compare any faithfulness metrics. In general, we prefer faithfulness metrics that have higher diagnosticities and lower time complexities, i.e. closer to the top-right corner in Figure 1. But what if one metric has a higher diagnosticity and another has a lower time complexity? In this case, we should consider diagnosticity first: a faithfulness metric should not be used if it cannot effectively assess faithfulness, i.e. its diagnosticity is below a certain threshold. In scenarios where we are subject to constraints of hardware or timeliness, we might need to select a faster metric with a lower but acceptable level of diagnosticity.

# 7 Related Work

Interpretation methods Interpretation methods can be roughly classified into two categories: model-based methods and post-hoc methods. Model-based methods refer to the construction of simple machine learning models whose internal decision logic can be easily interpreted, such as linear regression models, decision trees, etc. Post-hoc methods interpret the internal reasoning process behind the model after training. Generally, post-hoc methods can be divided into gradient-based and perturbation-based.
A gradient-based interpretation method assumes the deep learning model is differentiable and discloses the decision making mechanism of the model according to the gradient information (Simonyan et al., 2014; Sundararajan et al., 2017; Shrikumar et al., 2019). A perturbation-based interpretation method interprets the model by perturbing the input of data samples and measuring how the predictions change (Robnik-Sikonja and Kononenko, 2008; Zeiler and Fergus, 2013; Ribeiro et al., 2016).

Interpretation method evaluation To assess the quality of different interpretation methods, various evaluation metrics have been proposed. Existing evaluation methods for interpretations can be broadly classified into two categories: plausibility and faithfulness. Plausibility measures whether the interpretation agrees with human judgments on how a model makes a decision (Ribeiro et al., 2016; Doshi-Velez and Kim, 2017; Lundberg and Lee, 2017; DeYoung et al., 2020). However, even if the interpretation conforms to human criteria, it is not certain that it truly reflects the underlying decision mechanism of the model. To this end, faithfulness measures the extent to which the inner decision-making mechanism actually relies on the identified important features (Arras et al., 2017; Serrano and Smith, 2019; Jain and Wallace, 2019; Wiegreffe and Pinter, 2019; DeYoung et al., 2020; Chrysostomou and Aletras, 2021).

In general, existing faithfulness metrics are developed through a removal-based criterion, which measures the changes in model output when perturbing or removing tokens identified as important by the interpretation. Serrano and Smith (2019) proposed a decision flipping metric that evaluates the proportion of tokens that need to be erased in order to change the model decision.
Also using decision flips as an indicator, Chrysostomou and Aletras (2021) introduce a metric that counts the average number of flips that occur when removing the most important token marked by the interpretation method. In addition to decision flips, changes in model output probabilities when removing or retaining important tokens are also widely used to measure faithfulness (Arras et al., 2017; Arya et al., 2019; DeYoung et al., 2020).

Some recent work also focuses on the study of faithfulness metrics. Jacovi and Goldberg (2020) argued that the definition of faithfulness remains inconsistent and informal, and provided concrete guidelines on how evaluations of interpretation methods should and should not be conducted. More recently, Yin et al. (2021) discussed the limitations of removal-based faithfulness metrics and proposed two other quantitative criteria, namely sensitivity and stability. Different from the aforementioned previous work, which does not focus on assessing faithfulness metrics, we mainly focus on the measurement of faithfulness and conduct a comprehensive study of existing faithfulness metrics.
For future work, we would like to explore evaluating faithfulness metrics using a white-box model such as linear regression, from which we can derive an intrinsically faithful interpretation as the "ground truth".

# References

David Alvarez-Melis and Tommi S. Jaakkola. 2018. On the robustness of interpretability methods. arXiv:1806.08049. Presented at the 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Stockholm, Sweden.
Leila Arras, Franziska Horn, Grégoire Montavon, Klaus-Robert Müller, and Wojciech Samek. 2017. "What is relevant in a text document?": An interpretable machine learning approach. PLoS ONE, 12:e0181142.
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, and Yunfeng Zhang. 2019. One explanation does not fit all: A toolkit and taxonomy of AI explainability techniques.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners.
George Chrysostomou and Nikolaos Aletras. 2021. Improving the faithfulness of attention-based explanations with task-specific information for text classification.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 477-488, Online. Association for Computational Linguistics. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019a. Bert: Pre-training of deep bidirectional transformers for language understanding. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019b. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2020. ERASER: A benchmark to evaluate rationalized NLP models. In Proceedings of the 58th Annual Meeting of the Association for + +Computational Linguistics, pages 4443-4458, Online. +Association for Computational Linguistics. +Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. +Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4198-4205, Online. Association for Computational Linguistics. +Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3543-3556, Minneapolis, Minnesota. Association for Computational Linguistics. +Yoon Kim. 2014. Convolutional neural networks for sentence classification. 
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. Association for Computational Linguistics. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. +Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. +Scott Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. +Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Linguistics. +Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics. +Marco Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 97-101, San Diego, California. Association for Computational Linguistics. + +Marko Robnik-Sikonja and Igor Kononenko. 2008. Explaining classifications for individual instances. IEEE Transactions on Knowledge and Data Engineering, 20(5):589-600. + +Sofia Serrano and Noah A. Smith. 2019. Is attention interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2931-2951, Florence, Italy. Association for Computational Linguistics. + +Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2019. 
Learning important features through propagating activation differences.

Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Deep inside convolutional networks: Visualising image classification models and saliency maps.

Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.

Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, page 3319-3328. JMLR.org.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.

Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 11-20, Hong Kong, China. Association for Computational Linguistics.

Fan Yin, Zhouxing Shi, Cho-Jui Hsieh, and Kai-Wei Chang. 2021. On the faithfulness measurements for model interpretations.

Matthew D Zeiler and Rob Fergus. 2013. Visualizing and understanding convolutional networks.

Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS'15, page 649-657, Cambridge, MA, USA. MIT Press.

# A Proof of Theorem 4.1

Proof. Let $(u, v)$ be an interpretation pair.
Then

$$
\begin{array}{l}
\mathrm{P}(u \succ_F v \mid \mathrm{P}(u \succ v) = 1 - \varepsilon) \\
\quad = \mathrm{P}(u \succ_F v \mid u \succ v)(1 - \varepsilon) + \mathrm{P}(u \succ_F v \mid u \nsucc v)\,\varepsilon \\
\quad = \mathrm{D}(F) + \left[\mathrm{P}(u \succ_F v \mid u \nsucc v) - \mathrm{P}(u \succ_F v \mid u \succ v)\right]\varepsilon
\end{array}
$$

Since $-1 \leq \mathrm{P}(u \succ_F v \mid u \nsucc v) - \mathrm{P}(u \succ_F v \mid u \succ v) \leq 1$, we have

$$
\left|\mathrm{P}(u \succ_F v \mid \mathrm{P}(u \succ v) = 1 - \varepsilon) - \mathrm{D}(F)\right| \leq \varepsilon
$$

# B Proof of Lemma 4.2

Proof. From Definition 4.2, we have $\mathbb{1}(u \succ_F v) \mid (\mathrm{P}(u \succ v) > 1 - \varepsilon) \sim \mathrm{Bernoulli}(p)$, where $p = \mathrm{D}(F)$. Then, based on the property of the Bernoulli distribution, the expected value of the random variable is equal to $p$.

# C Implementation Details

# C.1 Text classification models

The text classification models are all implemented in PyTorch $^{2}$ . For BERT, we use "bert-base-uncased" from Huggingface transformers $^{3}$ as the pretrained model. We use the same set of hyperparameters regardless of dataset for fine-tuning: dropout rate 0.2, AdamW (Loshchilov and Hutter, 2019) with an initial learning rate of 2e-5, and batch size 32 with no warmup steps. We set the maximum number of fine-tuning epochs to 10 and perform early stopping when the performance on the test set does not improve for 3 consecutive epochs.

For the CNN classifier, we use a one-layer CNN encoder with a linear classifier. The embedding is initialized with the 300-dimensional pretrained GloVe word embeddings (Pennington et al., 2014). The CNN layer has 256 kernels, each of size 3. We use max-pooling and AdamW with an initial learning rate of 1e-3, batch size 32, with no warmup steps.
The maximum number of epochs is 40 with early stopping after 3 consecutive non-improving epochs. + +# C.2 Interpretation methods + +For LIME, Saliency Map, Integrated Gradients and DeepLift, we apply the implementation in Captum 4. For Word Omission, we use our own implementation. \ No newline at end of file diff --git a/acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/images.zip b/acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..69f7eb7f5ab3591296cbdd09264e710dd44d38c8 --- /dev/null +++ b/acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c26a71900c7ce14e299a9cd37b41b57ec187677bc46bc8873f061ee7d719f41e +size 291333 diff --git a/acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/layout.json b/acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..7718a53da6062d03ca7bb0a0cf7e686cb2d43232 --- /dev/null +++ b/acomparativestudyoffaithfulnessmetricsformodelinterpretabilitymethods/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:72729f0641447f12abf78119d24a2b980caca28cf33568f65bde408ac284e306 +size 421304 diff --git a/acomparisonofstrategiesforsourcefreedomainadaptation/43cd72cc-79a3-465b-85f7-d5b5c2d8e605_content_list.json b/acomparisonofstrategiesforsourcefreedomainadaptation/43cd72cc-79a3-465b-85f7-d5b5c2d8e605_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..262276606670ec8d84925071004f121057350033 --- /dev/null +++ b/acomparisonofstrategiesforsourcefreedomainadaptation/43cd72cc-79a3-465b-85f7-d5b5c2d8e605_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b62984f2b20d33110c9d630ea2bbbc0fe9969f670d8e4ee656511ec3273fb004 
+size 99758 diff --git a/acomparisonofstrategiesforsourcefreedomainadaptation/43cd72cc-79a3-465b-85f7-d5b5c2d8e605_model.json b/acomparisonofstrategiesforsourcefreedomainadaptation/43cd72cc-79a3-465b-85f7-d5b5c2d8e605_model.json new file mode 100644 index 0000000000000000000000000000000000000000..b7fc5ce269b7b5157808800e6bad8e78a3d72cfd --- /dev/null +++ b/acomparisonofstrategiesforsourcefreedomainadaptation/43cd72cc-79a3-465b-85f7-d5b5c2d8e605_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e54506502c737e1a971315489d181552181a1d9f3d16493a0178405fb477ff94 +size 122689 diff --git a/acomparisonofstrategiesforsourcefreedomainadaptation/43cd72cc-79a3-465b-85f7-d5b5c2d8e605_origin.pdf b/acomparisonofstrategiesforsourcefreedomainadaptation/43cd72cc-79a3-465b-85f7-d5b5c2d8e605_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a8642a5aedbe0b835a8490b254ce2c25e636b33e --- /dev/null +++ b/acomparisonofstrategiesforsourcefreedomainadaptation/43cd72cc-79a3-465b-85f7-d5b5c2d8e605_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cc3451b459a9d2eabd0c4dfa652a53d1da05b02fca3299192a960a9ffa4c92f6 +size 454338 diff --git a/acomparisonofstrategiesforsourcefreedomainadaptation/full.md b/acomparisonofstrategiesforsourcefreedomainadaptation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..4559fb7a8fec025ff2011ae5f1e271f89f64a38d --- /dev/null +++ b/acomparisonofstrategiesforsourcefreedomainadaptation/full.md @@ -0,0 +1,389 @@ +# A Comparison of Strategies for Source-Free Domain Adaptation + +Xin Su Yiyun Zhao Steven Bethard + +University of Arizona + +Tucson, AZ, USA + +{xinsu, yiyunzhao, bethard}@email.arizona.edu + +# Abstract + +Data sharing restrictions are common in NLP, especially in the clinical domain, but there is limited research on adapting models to new domains without access to the original training data, a setting known as source-free domain 
adaptation. We take algorithms that traditionally assume access to the source-domain training data—active learning, self-training, and data augmentation—and adapt them for source-free domain adaptation. Then we systematically compare these different strategies across multiple tasks and domains. We find that active learning yields consistent gains across all SemEval 2021 Task 10 tasks and domains; although the shared task saw successful self-trained and data-augmented models, our systematic comparison finds these strategies to be unreliable for source-free domain adaptation.

# 1 Introduction

Deep neural networks achieve high performance in many tasks, but typically require annotated training data for each new domain. Domain adaptation algorithms aim to take models trained on one domain (the "source domain") and transfer the model's knowledge to another domain (the "target domain"). They typically try to do this without a large amount of annotated data in the target domain. Domain adaptation can be easy if the source and target domains have similar distributions, but domains often differ substantially (Wilson and Cook, 2020).

While there has been much progress in domain adaptation methods (Kouw, 2018), and even in unsupervised domain adaptation where there are no target-domain labels (Ramponi and Plank, 2020), most methods assume access to the labeled source data. Yet this assumption is often not satisfied, especially in the clinical domain due to privacy concerns (Laparra et al., 2020).

SemEval 2021 Task 10 (Laparra et al., 2021), on source-free domain adaptation, called attention to this challenging but more realistic scenario where labeled source data are not accessible, only the model trained on the source domain data can be shared1, and little or no labeled target data are available.
Participants explored methods including self-training, active learning, and data augmentation (Laparra et al., 2021), but it is hard to make fair comparisons between algorithms since different teams varied in their base implementations.

We therefore conducted experiments to provide a systematic comparison of algorithms for source-free domain adaptation. Our contributions are:

1. The first systematic comparison of self-training, active learning, and data augmentation for source-free domain adaptation, carried out across multiple tasks and domains.
2. We identify a formulation of source-free active learning that consistently improves performance of the source-domain model, and sometimes even outperforms fine-tuning on a large set of labeled target domain data.
3. We perform an error analysis across tasks and domains and show that the selected formulation of active learning corrects several types of errors that self-training does not.

Our code is publicly available.2

# 2 Related Work

# 2.1 Source-free Domain Adaptation

Recently, there has been rising interest in computer vision in developing methods for unsupervised source-free domain adaptation. Several works utilize a generative framework with a classifier trained on source data to generate labeled training examples (Kurmi et al., 2021; Li et al., 2020) or to transfer the target examples to match the source style (Hou and Zheng, 2020; Sahoo et al., 2020). Other works use self-supervised pseudo-labeling. Liang et al. (2020) propose source hypothesis transfer, which freezes the classifier of the source model but fine-tunes its encoder, with the goal of reducing the entropy of individual output predictions while maintaining global diversity. They also augment the strategy with self-supervised pseudo labels via the nearest centroid classifier. Kim et al.
(2020) select low self-entropy instances as class prototypes, pseudo-label the remaining target instances based on their distance to the class prototypes, and progressively update the models on target data in the manner of self-training.

Despite a growing number of computer vision studies on source-free domain adaptation, there is limited NLP research into this challenging but realistic scenario. Though there is partially related research on continual learning (de Masson d'Autume et al., 2019; Sun et al., 2020) and generalization of pre-trained models (Hendrycks et al., 2020), the only work to explicitly test source-free domain adaptation is SemEval 2021 Task 10 (Laparra et al., 2021), which asked participants to perform source-free domain adaptation on negation detection and time expression recognition. A variety of techniques were applied to this task, including active learning, self-training, and data augmentation. However, different techniques were applied by different participants with different baseline models, so the shared task results do not allow us to make fair comparisons between different techniques. In the current article, we implement and then systematically compare these different techniques.

# 2.2 Self-training

Self-training (Yarowsky, 1995; McClosky et al., 2006) trains a model on a labeled dataset $L$ and then iteratively makes predictions ("pseudo-labels") on an unlabeled dataset $U$ and re-trains. On each iteration, the examples in $U$ that the model labels with high confidence ("silver labels") are added to $L$, and the model is retrained on the new, larger $L$. This process is repeated until no more predictions are highly confident. Self-training has been applied to a variety of domain adaptation scenarios (Ruder and Plank, 2018; Yu et al., 2015; Cui and Bollegala, 2019), but always with the assumption that the original labeled data $L$ is available at each iteration.
In source-free domain adaptation, $L$ is not available, so source-free self-training could train only on the pseudo-labels, and it is unclear whether doing so yields a better or worse model.

# 2.3 Active Learning

Active learning selects a small number of examples to be manually annotated, using strategies designed to select the examples that should most benefit the model. Various active learning selection strategies have been developed (see the survey of Settles, 2009), and recent work has shown the benefits of active learning even with pre-trained transformer models (Ein-Dor et al., 2020). Active learning is also frequently used in domain adaptation. For example, Chan and Ng (2007) applied uncertainty sampling for domain adaptation of word sense disambiguation models, and Rai et al. (2010) combined model confidence and a domain discriminator to select target-domain examples for sentiment analysis. As with self-training, active learning algorithms typically assume that the source-domain training data is available and can be combined with target-domain examples. Thus, the efficacy of source-free active learning is currently unclear.

# 2.4 Data Augmentation

Data augmentation enhances limited data by using existing resources (WordNet, similar datasets, etc.) and/or rule-based transformations of the training data to create new training examples. A variety of data augmentation techniques have been proposed (see the survey of Liu et al., 2020), including back-translation (Sennrich et al., 2016; Wang et al., 2021), lexical substitution (Zhou et al., 2019; Arefyev et al., 2020; Wei and Zou, 2019; Miao et al., 2020), noise injection (Wei and Zou, 2019), conditional generation (Juuti et al., 2020; Malandrakis et al., 2019; Kobayashi, 2018), and data transformation with task-specific rules or templates (Sahin and Steedman, 2018; Wang et al., 2021; Xu et al., 2020).
Data augmentation assumes access to the source-domain training data, so it cannot be used by itself in source-free domain adaptation. It could be coupled with source-free self-training or source-free active learning, but researchers have not yet systematically explored such combinations.

# 3 Data

We base our experiments on the data and source-domain models from the tasks of SemEval 2021 Task 10: negation detection and time expression
| Domain | Data Source | # |
| --- | --- | --- |
| *Negation Detection Data* | | |
| Source | SHARP Seed | 10,259 sentences |
| Target: development | i2b2 2010 | 1109 sentences |
| Target: test | i2b2 2010 | 4436 sentences |
| Target: development | MIMIC III | 1916 sentences |
| Target: test | MIMIC III | 7664 sentences |
| *Time Expression Detection Data* | | |
| Source | SemEval 2018 Task 6 clinical notes | 278 documents |
| Target: development | SemEval 2018 Task 6 news articles | 20 documents |
| Target: test | SemEval 2018 Task 6 news articles | 79 documents |
| Target: development | Food security reports | 4 documents |
| Target: test | Food security reports | 13 documents |
Table 1: Data summary for negation detection and time expression recognition tasks.

recognition. We select these tasks because:

1. They represent real-world data-sharing problems: the negation source-domain data "cannot currently be distributed" and the time expression source-domain data is "difficult to gain access to due to the complex data use agreements" (Laparra et al., 2021). Only the task organizers had access to the data and permission to distribute models trained on the (de-identified) data.
2. The annotation schemes are complex enough that the problem cannot be easily solved by manually annotating the target domain. Su et al. (2021) found that annotations from annotators given only the time annotation guidelines yielded no gains to models, while annotations from heavily trained annotators did yield gains.
3. These two tasks suffer a large performance loss under domain shift: the source-trained model is 15+ points of F1 lower on the target test set than on the source test set (Laparra et al., 2021).

The popular Amazon reviews sentiment analysis dataset (Blitzer et al., 2007) violates the points above: labeled source and target data are easily available, the annotation scheme is easy (it is artificially balanced and removes reviews with neutral labels, as others have noted (He et al., 2018; Miller, 2019)), and the source domain model performs well on the target domain (within 0-4 points of F1). We nonetheless include some experiments on this dataset in appendix A.3. We find that with simple data preprocessing and source-domain hyperparameter tuning, the source-domain model alone outperforms all domain adaptation models from Ye et al. (2020) and Ben-David et al. (2020).

SemEval 2021 Task 10 negation detection is a "span-in-context" classification task. The goal is to predict whether an event (denoted by two special tokens `<e>` and `</e>`) in the sentence is negated by its context.
For example, given the sentence:

*Has no `<e>` diarrhea `</e>` and no new lumps or masses*

the goal is to predict that *diarrhea* is negated by its context. The source-domain negation detection model was trained on Mayo Clinic clinical notes. The target domains are Partners HealthCare clinical notes from the i2b2 2010 Challenge and Beth Israel ICU progress notes from the MIMIC III corpus.

SemEval 2021 Task 10 time expression recognition is a sequence-tagging task. The goal is to identify the time entities in the document and label them with SCATE types (Bethard and Parker, 2016). For example, given the sentence:

*the patient underwent appendicitis surgery on August 29, 2018,*

the goal is to label *August* as Month-Of-Year, *29* as Day-Of-Month, and *2018* as Year. The source-domain time expression recognition model was trained on the Mayo Clinic clinical notes of SemEval 2018 Task 6 (Laparra et al., 2018). The target domains are news articles (also from SemEval 2018 Task 6) and reports from food security warning systems including the UN World Food Programme and the Famine Early Warning Systems Network.

Each task has a model trained on a source domain and a test set for each of two target domains. For each target domain, we split the data into $20\%$ as a development set and $80\%$ as a test set. Detailed data information is shown in table 1.

**Source data** We do not use source domain data. We use only the English RoBERTa-base models (Liu et al., 2019) (approx. 125M parameters) that the task organizers fine-tuned on the source domain data sets via the Huggingface Transformers library v3.5.1 (Wolf et al., 2020).

**Target development data** We use the development data for fine-tuning the model. For active learning, to simulate manual annotation, we fine-tune on a small number of automatically selected labeled examples. For self-training, no labels are used; we fine-tune on predictions (pseudo-labels) generated by the model on the development data.
For oracle experiments, we fine-tune the model on all labeled examples in the development set.

**Target test data** We evaluate on the test data. No fine-tuning is performed. Models always treat this data as unlabeled³. Its labels are used only during evaluation. We use the same evaluation metrics as in SemEval 2021 Task 10: precision, recall, and F1 score.

# 4 Research Questions

We aim for a systematic analysis of three strategies with many different implementations in SemEval 2021 Task 10: self-training, active learning, and data augmentation. Our research questions are:

1. How much can we gain from having human intervention (active learning) rather than just the model alone (self-training)?
2. For active learning, given a fixed annotation budget, is it better to do several iterations of selecting examples for annotation and retraining the model, or to select and retrain just once?
3. For self-training, given a fixed confidence threshold, is it better to do several iterations of generating pseudo-labels and retraining the model, or to generate and train only once?
4. In each iteration of active learning or self-training, should we use the training data from the previous iteration or start anew?
5. In each iteration of active learning or self-training, should we continue training the model from the previous iteration or the model from the source domain?
6. Do active learning and self-training improve with data augmentation or work better alone?

# 5 Method

We design source-free variants of self-training, active learning, and data augmentation that incorporate the following parameters, allowing us to investigate the questions above.
$T$ the maximum number of iterations for self-training or active learning

$S_{D}$ the data construction strategy: KeepData to keep the training data from the previous iteration, or ResetData to start anew on each iteration.

$S_{M}$ the model training strategy: KeepModel to continue training the model from the previous iteration, or ResetModel to continue training from the source-domain model.

$S_{A}$ whether or not to use data augmentation.

Algorithm 1: Source-Free Self-training Algorithm
Input: $M$: the source-domain model; $D$: the unlabeled target domain data; $\tau$: the self-training threshold; $T$: the maximum number of iterations; $S_{D}$: the data construction strategy; $S_{M}$: the model training strategy; $S_{A}$: the data augmentation strategy
1 $M_0 \gets \mathrm{Copy}(M)$
2 $D_0 \gets \mathrm{Copy}(D)$
3 $L \gets \emptyset$
4 for $i \gets 0$ to $T$ do
5 if $D = \emptyset$ then
6 Stop training
7 if $S_{D} = \mathrm{ResetData}$ then
8 $L = \emptyset$
9 $D = D_0$
10 $L_{C_i} \gets \{(d, M(d))$ for $d \in D$ if $M(d)$ confidence $> \tau\}$
11 if $L_{C_i} = \emptyset$ or $L_{C_i} = L_{C_{i-1}}$ then
12 Stop training
13 $L = L \cup L_{C_i}$
14 if $S_{D} = \mathrm{KeepData}$ then
15 $D \gets D - \{d$ for $(d, l) \in L_{C_i}\}$
16 if $S_A = \mathrm{Augment}$ then
17 $L \gets L \cup \mathrm{Augment}(L_{C_i})$
18 if $S_M = \mathrm{ResetModel}$ then
19 $M \gets M_0$
20 Fine-tune $M$ on $L$

# 5.1 Source-Free Self-training

Algorithm 1 presents our self-training algorithm. It follows standard self-training (Yarowsky, 1995) in using the model to add pseudo-labels to the unlabeled data (line 10). However, there is no source-domain labeled data, so the model can fine-tune only on the pseudo-labels. The remainder of the code ensures that models and/or data are kept, reset, or augmented as per the selected strategies.

Self-training requires a measure of model confidence on each prediction.
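For concreteness, the control flow of Algorithm 1 can be sketched in Python. This is a minimal sketch, not the authors' implementation; `predict_with_confidence`, `fine_tune`, and `augment` are hypothetical stand-ins for the model interface and the augmentation routine of §5.3.

```python
import copy

def self_train(model, data, tau=0.95, T=30,
               reset_data=False, reset_model=False, augment=None):
    """Source-free self-training: fine-tune on confident pseudo-labels only."""
    model_0, data_0 = copy.deepcopy(model), list(data)
    labeled, prev_confident = [], None
    for _ in range(T):
        if not data:                      # line 5: no unlabeled data left
            break
        if reset_data:                    # S_D = ResetData
            labeled, data = [], list(data_0)
        # line 10: pseudo-label the examples the model is confident about
        confident = [(d, label) for d, (label, conf) in
                     ((d, model.predict_with_confidence(d)) for d in data)
                     if conf > tau]
        if not confident or confident == prev_confident:
            break                         # line 11: stopping conditions
        prev_confident = confident
        labeled.extend(confident)
        if not reset_data:                # S_D = KeepData
            used = {d for d, _ in confident}
            data = [d for d in data if d not in used]
        if augment is not None:           # S_A = Augment
            labeled.extend(augment(confident))
        if reset_model:                   # S_M = ResetModel
            model = copy.deepcopy(model_0)
        model.fine_tune(labeled)
    return model
```

Because there is no labeled source data, every call to `fine_tune` sees only pseudo-labels; the two early exits mirror the stopping conditions at lines 5 and 11 of Algorithm 1.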
In both tasks, we add pseudo-labeled training data a sentence at a time, so we measure confidence at the sentence level. In negation detection, we use the predicted probability at RoBERTa's special sentence-initial token `<s>`. In time expression recognition, we use the average of the predicted probabilities of the most probable class of each token.

Algorithm 2: Source-Free Active Learning Algorithm
Input: $M$: the source-domain model; $D$: the development set of the target domain; $T$: the maximum number of iterations; $K$: the number of annotations per iteration; $S_{D}$: the data construction strategy; $S_{M}$: the model training strategy; $S_{A}$: the data augmentation strategy
1 $M_0 \gets \mathrm{Copy}(M)$
2 $D_0 \gets \mathrm{Copy}(D)$
3 $L \gets \emptyset$
4 for $i \gets 0$ to $T$ do
5 if $S_{D} = \mathrm{ResetData}$ then
6 $L = \emptyset$
7 $D = D_0$
8 $D_U \gets [d$ for $d \in D$ sorted by uncertainty of $M(d)]$
9 $L_U \gets \{(d, \mathrm{Annotate}(d))$ for $d \in$ top $K$ of $D_U\}$
10 $L \gets L \cup L_U$
11 if $S_{D} = \mathrm{KeepData}$ then
12 $D \gets D - \{d$ for $(d, l) \in L_U\}$
13 if $S_A = \mathrm{Augment}$ then
14 $L \gets L \cup \mathrm{Augment}(L_U)$
15 if $S_M = \mathrm{ResetModel}$ then
16 $M \gets M_0$
17 Fine-tune $M$ on $L$

# 5.2 Source-Free Active Learning

Algorithm 2 presents our active learning algorithm. It follows an approach similar to Su et al. (2021). Like most active learning algorithms, the core is to select examples the model is uncertain of (line 8) and then manually annotate them (line 9). Since our development sets are already annotated, we simulate annotation by simply revealing the (previously hidden) labels for the selected examples.

Active learning requires a measure of model uncertainty on each prediction.
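Algorithm 2 admits an analogous Python sketch (again with hypothetical names: `model.uncertainty` scores an example, and `annotate` simulates the human annotator by revealing the hidden label):

```python
import copy

def active_learn(model, data, annotate, T=8, K=12,
                 reset_data=False, reset_model=False, augment=None):
    """Source-free active learning: annotate the K most uncertain
    examples per round, then fine-tune on everything annotated so far."""
    model_0, data_0 = copy.deepcopy(model), list(data)
    labeled = []
    for _ in range(T):
        if reset_data:                    # S_D = ResetData
            labeled, data = [], list(data_0)
        # line 8: rank remaining examples, most uncertain first
        ranked = sorted(data, key=model.uncertainty, reverse=True)
        # line 9: "annotate" the top K
        batch = [(d, annotate(d)) for d in ranked[:K]]
        labeled.extend(batch)
        if not reset_data:                # S_D = KeepData
            chosen = {d for d, _ in batch}
            data = [d for d in data if d not in chosen]
        if augment is not None:           # S_A = Augment
            labeled.extend(augment(batch))
        if reset_model:                   # S_M = ResetModel
            model = copy.deepcopy(model_0)
        model.fine_tune(labeled)
    return model
```

With `T=8` and `K=12` this spends the paper's 96-sentence annotation budget over eight rounds; `T=1, K=96` corresponds to the AL (96 × 1) configuration.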
In both tasks, we add annotations a sentence at a time, so we measure uncertainty at the sentence level. In negation detection, we use the predicted entropy at RoBERTa's special sentence-initial token, `<s>`. In time expression recognition, we use the average of the predicted entropies of the tokens in the sentence.

# 5.3 Data Augmentation

Inspired by Miao et al. (2020), we use a pool-based data augmentation method to automatically increase the size of the training set.

In negation detection, we construct a pool of all event words in the unlabeled target domain test data. For each development data example to be augmented, we substitute its event with $n$ randomly sampled words from the pool. For example, if data augmentation is performed on the sentence *Has no `<e>` diarrhea `</e>`*, we replace *diarrhea* with random words from the pool, resulting in sentences like *Has no `<e>` asthma `</e>`*.

In time expression recognition, we construct a pool of words for each time entity type using the guidelines of the SCATE annotation schema, excluding words that do not appear in the unlabeled target domain test data. For each entity in a development data example to be augmented, we substitute it with $n$ randomly sampled words from the pool for its entity type. For example, in the sentence *the patient underwent appendicitis surgery on August 29, 2018*, there are three time entities (August: Month-Of-Year, 29: Day-Of-Month, 2018: Year). Data augmentation can therefore generate up to $n \times 3$ sentences with different years, months, and days, e.g., *the patient underwent appendicitis surgery on September 1st, 2017*.

# 6 Experiments

The input to the source-domain models for both tasks is a sentence. The output for the negation detection model is a sentence label (negated or not negated). The output for the time expression model is one label per token (its time entity type).
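Given these per-token class distributions, the sentence-level confidence of §5.1 and uncertainty of §5.2 reduce to a few lines. The sketch below assumes each sentence is represented as a list of per-token probability vectors (for negation detection, the list holds just the single `<s>`-token distribution):

```python
import math

def sentence_confidence(token_probs):
    """Self-training confidence (Section 5.1): mean probability of each
    token's most probable class."""
    return sum(max(p) for p in token_probs) / len(token_probs)

def sentence_uncertainty(token_probs):
    """Active-learning uncertainty (Section 5.2): mean entropy of the
    tokens' predicted class distributions."""
    def entropy(p):
        return -sum(q * math.log(q) for q in p if q > 0)
    return sum(entropy(p) for p in token_probs) / len(token_probs)
```

A near-uniform distribution yields low confidence and high entropy, so a sentence that self-training would skip is exactly the kind active learning would send for annotation.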
For both tasks, we use the conventional RoBERTa input format, surrounding the sentence with the special tokens `<s>` and `</s>`. The negation detection data is already split into sentences. For the time recognition data, we split it into sentences using the English sentencizer from spaCy v2.3.2 (Honnibal et al., 2020).

When we fine-tune the source-domain model on the target domain, we keep the same training hyperparameters that the shared task organizers used to train the models on the source domains. In source-free domain adaptation, there is no (or very little) labeled development data available, so it is not possible to tune hyperparameters. All hyperparameters are given in appendix A.1. All experiments are run on a single Nvidia P100 GPU, for approximately 70 GPU hours in total.

In self-training, we set the threshold $\tau$ to 0.95, and experiment with running just a single iteration and with running 30 iterations with the different $S_{D}$ and $S_{M}$ strategies. The threshold and the number of iterations are adapted from Su et al. (2021). Training may run for fewer iterations when the stopping conditions are met. In active learning, we set our annotation budget to 96 sentences, and experiment with spending these 96 sentences at once and in 8 iterations with the different $S_{D}$ and $S_{M}$ strategies. For all experiments, we run one version with data augmentation (with $n = 5$) and one without.

For each source and target domain pair, we compare our adapted model with the following models.

1. Source-Domain Model: The baseline. It is unadapted, trained only on the source domain.
2. Fine-Tuned Source-Domain Model: The oracle. It is fine-tuned on the target domain using the entire labeled development set.
3. Self-Distilled Model: A RoBERTa-base model fine-tuned on the development set using pseudo-labels generated by the source-domain model.
4.
Passive Learning Model: The source-domain model fine-tuned on 96 randomly sampled examples from the labeled development set.

# 7 Discussion

Tables 2 and 3 show the results of our experiments. We are interested less in the best model for a particular configuration than in which configurations are successful across multiple tasks and domains. This is because in source-free domain adaptation, there is typically no (or very little) labeled target domain data available for hyperparameter tuning. Therefore, what we need is a universal strategy that does not require careful tuning.

For source-free active learning, we find that even small amounts of annotated data are useful, and that smart data selection (e.g., using uncertainty scores) is usually helpful. The active learning KeepData models (rows 6, 8, 11, and 13 in tables 2 and 3) have higher F1s than the baseline source-domain models across all tasks and domains (0.054 F1 higher on average). Active learning KeepData models also outperform passive learning models (which randomly select data) in 14 out of 16 cases, and are at least as good as, and typically much better than, the self-training models (rows 15-24 in tables 2 and 3). The ResetModel+ResetData models always have the worst F1s of the active learning models (rows 7 and 12 in tables 2 and 3).

Several active learning models achieve higher F1s than the "oracle" model that is fine-tuned on the full labeled development set (rows 8, 10, 11, 13, and 14 in table 3, Time: News, and rows 8, 11, and 14 in table 3, Time: Food). This emphasizes a challenge of source-free domain adaptation: more data is not always better data. Since we do not have access to the source domain training data, if we fine-tune on too much target domain data the model may start to forget what it learned on the source domain, i.e., "catastrophic forgetting" (McCloskey and Cohen, 1989).
In these cases, the active learning models, by selecting a small set of just the most uncertain examples, reap the benefits of knowing something about the target domain without losing what they learned from the source domain.

For source-free self-training, we find that iteratively updating both model and data is slightly above baseline, and that it is better to start from the source-domain model than from RoBERTa without fine-tuning. The KeepModel+KeepData configuration (without data augmentation) is slightly above the source-domain model across all tasks and domains (0.013 F1 higher on average). Every other configuration, even if it outperforms KeepModel+KeepData in one task or domain, is below the source-domain baseline in another. All self-trained models without data augmentation (which start from the source-domain model) do at least outperform self-distilled models (which start from the RoBERTa model without fine-tuning; row 3 in tables 2 and 3). The small gains from the only self-training configuration that consistently outperformed the source-domain model suggest that self-training may not be worthwhile for source-free domain adaptation.

Data augmentation helped in some cases (e.g., self-training time expression recognition on news), and hurt in others (e.g., self-training time expression recognition on food security). Data augmentation sometimes led to ill-behaving models: on the negation MIMIC-III dataset, data augmentation made the self-trained model predict all examples as not negated, resulting in 0.000 F1 (rows 21-24 in table 2, Negation: MIMIC-III). This suggests that data augmentation (or at least the variants of it that we explored) is probably not viable for source-free domain adaptation, where no labeled data for tuning strategies is available.

We thus make the following suggestions for source-free domain adaptation:

1. If there is sufficient expertise to label the data, use active learning and iteratively adapt the model with the KeepModel+KeepData strategy instead of spending the annotation budget all at once. This is the best model without data augmentation in three of the four domains (Negation: MIMIC III, Time: News, Time: Food). Note that expertise is important: Su et al. (2021) found that active learning with non-experts in the face of a complex annotation scheme did not yield performance improvements.
2. Self-training and data augmentation, at least as implemented here, are not good choices for source-free domain adaptation: sometimes they led to gains, and sometimes they led to losses. While a good strategy could be found by labeling some target domain data and performing hyperparameter search, such annotation effort would have a higher payoff if used for active learning instead.
3. Active learning is better than passive learning: smart example selection is better than random example selection.
4. Self-training is better than self-distillation: the models benefit from the task knowledge learned from the source domain.

Our systematic analysis allowed us to make the above more specific suggestions than the shared task's main suggestion that "the best performing [systems] incorporated... active-learning, handcrafted heuristics or semiautomatically building a training set" (Laparra et al., 2021).

| # | Strategy | MIMIC-III: F | MIMIC-III: P | MIMIC-III: R | i2b2: F | i2b2: P | i2b2: R |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Source-Domain Model (baseline) | 0.656 | 0.921 | 0.510 | 0.837 | 0.855 | 0.820 |
| 2 | Fine-Tuned Source-Domain Model (oracle) | 0.868 | 0.875 | 0.862 | 0.925 | 0.928 | 0.922 |
| 3 | Self-Distilled Model | 0.623 | 0.825 | 0.501 | 0.846 | 0.849 | 0.842 |
| 4 | Passive Learning Model | 0.722 | 0.792 | 0.663 | 0.882 | 0.914 | 0.853 |
| | *Active Learning* | | | | | | |
| 5 | AL (96 × 1) | 0.759 | 0.901 | 0.656 | 0.886 | 0.943 | 0.836 |
| 6 | AL (12 × 8) + ResetModel + KeepData | 0.800 | 0.828 | 0.774 | 0.891 | 0.951 | 0.838 |
| 7 | AL (12 × 8) + ResetModel + ResetData | 0.618 | 0.842 | 0.489 | 0.778 | 0.972 | 0.649 |
| 8 | AL (12 × 8) + KeepModel + KeepData | 0.817 | 0.867 | 0.773 | 0.859 | 0.852 | 0.865 |
| 9 | AL (12 × 8) + KeepModel + ResetData | 0.777 | 0.890 | 0.689 | 0.877 | 0.928 | 0.831 |
| | *Active Learning + Data Augmentation* | | | | | | |
| 10 | AL (96 × 1) + DA (5) | 0.708 | 0.652 | 0.773 | 0.883 | 0.937 | 0.834 |
| 11 | AL (12 × 8) + ResetModel + KeepData + DA (5) | 0.805 | 0.803 | 0.806 | 0.891 | 0.960 | 0.831 |
| 12 | AL (12 × 8) + ResetModel + ResetData + DA (5) | 0.586 | 0.489 | 0.730 | 0.817 | 0.960 | 0.710 |
| 13 | AL (12 × 8) + KeepModel + KeepData + DA (5) | 0.805 | 0.878 | 0.744 | 0.881 | 0.925 | 0.841 |
| 14 | AL (12 × 8) + KeepModel + ResetData + DA (5) | 0.745 | 0.882 | 0.645 | 0.889 | 0.929 | 0.852 |
| | *Self-training* | | | | | | |
| 15 | ST (1) | 0.677 | 0.916 | 0.537 | 0.854 | 0.871 | 0.838 |
| 16 | ST (30) + ResetModel + KeepData | 0.679 | 0.937 | 0.533 | 0.857 | 0.876 | 0.839 |
| 17 | ST (30) + ResetModel + ResetData | 0.695 | 0.912 | 0.562 | 0.861 | 0.880 | 0.843 |
| 18 | ST (30) + KeepModel + KeepData | 0.664 | 0.906 | 0.525 | 0.864 | 0.890 | 0.840 |
| 19 | ST (30) + KeepModel + ResetData | 0.654 | 0.879 | 0.521 | 0.858 | 0.883 | 0.834 |
| | *Self-training + Data Augmentation* | | | | | | |
| 20 | ST (1) + DA (5) | 0.654 | 0.943 | 0.501 | 0.863 | 0.894 | 0.833 |
| 21 | ST (30) + ResetModel + KeepData + DA (5) | 0.000 | 0.000 | 0.000 | 0.861 | 0.887 | 0.838 |
| 22 | ST (30) + ResetModel + ResetData + DA (5) | 0.000 | 0.000 | 0.000 | 0.864 | 0.897 | 0.834 |
| 23 | ST (30) + KeepModel + KeepData + DA (5) | 0.000 | 0.000 | 0.000 | 0.854 | 0.869 | 0.839 |
| 24 | ST (30) + KeepModel + ResetData + DA (5) | 0.000 | 0.000 | 0.000 | 0.855 | 0.885 | 0.827 |

Table 2: Performance of domain adaptation strategies on the negation detection target domains. AL $(k\times i)$ is active learning with $k$ samples and $i$ iterations. ST $(i)$ is self-training up to $i$ iterations. DA $(n)$ is augmenting each example with up to $n$ new examples. The best scores are in bold and the worst scores are underlined.

# 8 Error Analysis

We performed an error analysis to try to determine whether different adaptation strategies resulted in different types of errors being corrected (as compared to the source domain model). For negation detection we sampled and categorized around 200 errors of the source-domain model for each target domain.
When the model failed to predict a negation, we manually categorized the error by the negation cue (*no*, *free*, *absent*, etc.). When the model predicted a negation it should not have, we manually categorized the error into "wrong cue" (there was a negation cue in the sentence but it did not apply to the target event) or "short sentence" (especially on the i2b2 domain, the model liked to predict all short sentences as negated). For time expression recognition, we categorized all errors of the source-domain model by entity type (inside-outside-beginning format) for each target domain.

| # | Strategy | News: F | News: P | News: R | Food: F | Food: P | Food: R |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Source-Domain Model (baseline) | 0.771 | 0.772 | 0.770 | 0.781 | 0.834 | 0.734 |
| 2 | Fine-Tuned Source-Domain Model (oracle) | 0.844 | 0.826 | 0.864 | 0.851 | 0.841 | 0.861 |
| 3 | Self-Distilled Model | 0.572 | 0.590 | 0.555 | 0.766 | 0.831 | 0.711 |
| 4 | Passive Learning Model | 0.796 | 0.783 | 0.809 | 0.770 | 0.755 | 0.785 |
| | *Active Learning* | | | | | | |
| 5 | AL (96 × 1) | 0.812 | 0.800 | 0.825 | 0.819 | 0.821 | 0.818 |
| 6 | AL (12 × 8) + ResetModel + KeepData | 0.812 | 0.794 | 0.830 | 0.842 | 0.844 | 0.840 |
| 7 | AL (12 × 8) + ResetModel + ResetData | 0.771 | 0.771 | 0.770 | 0.781 | 0.832 | 0.737 |
| 8 | AL (12 × 8) + KeepModel + KeepData | 0.861 | 0.844 | 0.879 | 0.872 | 0.866 | 0.879 |
| 9 | AL (12 × 8) + KeepModel + ResetData | 0.772 | 0.758 | 0.787 | 0.781 | 0.797 | 0.765 |
| | *Active Learning + Data Augmentation* | | | | | | |
| 10 | AL (96 × 1) + DA (5) | 0.856 | 0.829 | 0.884 | 0.840 | 0.824 | 0.855 |
| 11 | AL (12 × 8) + ResetModel + KeepData + DA (5) | 0.860 | 0.830 | 0.893 | 0.856 | 0.840 | 0.873 |
| 12 | AL (12 × 8) + ResetModel + ResetData + DA (5) | 0.790 | 0.748 | 0.836 | 0.793 | 0.782 | 0.805 |
| 13 | AL (12 × 8) + KeepModel + KeepData + DA (5) | 0.849 | 0.820 | 0.881 | 0.841 | 0.821 | 0.863 |
| 14 | AL (12 × 8) + KeepModel + ResetData + DA (5) | 0.853 | 0.828 | 0.879 | 0.856 | 0.831 | 0.881 |
| | *Self-training* | | | | | | |
| 15 | ST (1) | 0.753 | 0.733 | 0.774 | 0.777 | 0.807 | 0.750 |
| 16 | ST (30) + ResetModel + KeepData | 0.786 | 0.791 | 0.782 | 0.780 | 0.815 | 0.747 |
| 17 | ST (30) + ResetModel + ResetData | 0.727 | 0.688 | 0.770 | 0.787 | 0.815 | 0.761 |
| 18 | ST (30) + KeepModel + KeepData | 0.784 | 0.777 | 0.792 | 0.786 | 0.832 | 0.745 |
| 19 | ST (30) + KeepModel + ResetData | 0.633 | 0.551 | 0.743 | 0.789 | 0.829 | 0.752 |
| | *Self-training + Data Augmentation* | | | | | | |
| 20 | ST (1) + DA (5) | 0.800 | 0.794 | 0.805 | 0.756 | 0.787 | 0.726 |
| 21 | ST (30) + ResetModel + KeepData + DA (5) | 0.789 | 0.790 | 0.788 | 0.754 | 0.780 | 0.730 |
| 22 | ST (30) + ResetModel + ResetData + DA (5) | 0.795 | 0.792 | 0.798 | 0.765 | 0.788 | 0.744 |
| 23 | ST (30) + KeepModel + KeepData + DA (5) | 0.794 | 0.801 | 0.788 | 0.759 | 0.786 | 0.734 |
| 24 | ST (30) + KeepModel + ResetData + DA (5) | 0.797 | 0.791 | 0.802 | 0.747 | 0.771 | 0.724 |

Table 3: Performance of domain adaptation strategies on the time expression recognition target domains. AL $(k\times i)$ is active learning with $k$ samples and $i$ iterations. ST $(i)$ is self-training up to $i$ iterations. DA $(n)$ is augmenting each time entity with up to $n$ new examples. The best scores are in bold and the worst scores are underlined.

For both tasks, we then calculated how many of these source-domain model errors the best adapted models continued to make. Heatmaps of these analyses are plotted in appendix A.2. Across all tasks and domains, we see that the best self-trained models correct errors roughly evenly across source-domain error categories, while the best active learning models correct different errors, more like the oracle (target-fine-tuned) model. For example, the oracle model and active learning adapted models correct many more "wrong cue" errors in the negation i2b2 domain, more *denies* and *none* errors in the negation MIMIC III domain, more B-Period and B-Month-Of-Year entities in the time news domain, and more B-Season-Of-Year, I-Season-Of-Year, and B-This entities in the time food domain.

Some error types appear to be learnable only with substantially more data. Only the oracle model is able to correct errors with the *non* and *afebrile* negation cues in the i2b2 domain and with the *hold* negation cue in the MIMIC-III domain. This suggests that the source-domain model may be very confident in some types of wrong examples, causing them not to be selected in active learning and to generate poor pseudo-labels in self-training.
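The per-category counts behind these heatmaps can be reproduced with a simple tally. This is an illustrative sketch, not the authors' analysis code; the error-category assignments, gold labels, and adapted-model predictions are assumed to be given:

```python
from collections import Counter

def errors_still_made(source_errors, adapted_preds, gold):
    """Count, per error category, how many source-model errors the
    adapted model continues to make (its prediction still differs
    from the gold label)."""
    remaining = Counter()
    for example_id, category in source_errors.items():
        if adapted_preds[example_id] != gold[example_id]:
            remaining[category] += 1
    return remaining
```

Comparing `remaining` against the original per-category error counts shows which categories an adaptation strategy corrects and which, like the *non* and *afebrile* cues above, it leaves untouched.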
# 9 Conclusion

In this paper, we present a detailed comparison of the use of active learning, self-training, and data augmentation to adapt a source-domain model to a target domain when the source-domain training data is unavailable. We identify a specific formulation of source-free active learning that consistently improves performance of the source-domain model. We believe our work highlights the interesting challenges of source-free domain adaptation, and our systematic comparison provides a solid base for future research in this area.

# Acknowledgements

Research reported in this publication was supported by the National Library of Medicine of the National Institutes of Health under Award Numbers R01LM012918 and R01LM010090. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

# Ethical Considerations

Our comparison experiments and proposed formulation are intended to encourage model sharing in source-free domain adaptation while avoiding the risk of privacy leakage caused by direct data sharing. The data we use in this experiment are publicly available and from a shared task; however, some of that data is from health institutions and requires a data use agreement to work with. Though recent research has found it difficult to recover protected information from trained models (Lehman et al., 2021), there is still some small risk that more complex models may be able to do so. However, as our research is a comparative study, we are not directly releasing models, and thus not risking any release of protected health information.

# References

Nikolay Arefyev, Boris Sheludko, Alexander Podolskiy, and Alexander Panchenko. 2020. Always keep your target in mind: Studying semantics and improving performance of neural lexical substitution.
In Proceedings of the 28th International Conference on Computational Linguistics, pages 1242-1255, Barcelona, Spain (Online). International Committee on Computational Linguistics.

Eyal Ben-David, Carmel Rabinovitz, and Roi Reichart. 2020. PERL: Pivot-based domain adaptation for pre-trained deep contextualized embedding models. Transactions of the Association for Computational Linguistics, 8:504-521.

Steven Bethard and Jonathan Parker. 2016. A semantically compositional annotation scheme for time normalization. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3779-3786, Portorož, Slovenia. European Language Resources Association (ELRA).

John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 440-447, Prague, Czech Republic. Association for Computational Linguistics.

Yee Seng Chan and Hwee Tou Ng. 2007. Domain adaptation with active learning for word sense disambiguation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 49-56, Prague, Czech Republic. Association for Computational Linguistics.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.

Xia Cui and Danushka Bollegala. 2019. Self-adaptation for unsupervised domain adaptation. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 213-222, Varna, Bulgaria. INCOMA Ltd.
Cyprien de Masson d'Autume, Sebastian Ruder, Lingpeng Kong, and Dani Yogatama. 2019. Episodic memory in lifelong language learning. In NeurIPS.
Liat Ein-Dor, Alon Halfon, Ariel Gera, Eyal Shnarch, Lena Dankin, Leshem Choshen, Marina Danilevsky, Ranit Aharonov, Yoav Katz, and Noam Slonim. 2020. Active Learning for BERT: An Empirical Study. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7949-7962, Online. Association for Computational Linguistics.
Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2018. Adaptive semi-supervised learning for cross-domain sentiment classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3467-3476, Brussels, Belgium. Association for Computational Linguistics.
Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. 2020. Pretrained transformers improve out-of-distribution robustness. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2744-2751, Online. Association for Computational Linguistics.
Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spaCy: Industrial-strength Natural Language Processing in Python.
Yunzhong Hou and Liang Zheng. 2020. Source free domain adaptation with image translation.
Mika Juuti, Tommi Gröndahl, Adrian Flanagan, and N. Asokan. 2020. A little goes a long way: Improving toxic language classification despite data scarcity. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 2991-3009, Online. Association for Computational Linguistics.
Youngeun Kim, Sungeun Hong, Donghyeon Cho, Hyoungseob Park, and Priyadarshini Panda. 2020. Domain adaptation without source data. CoRR, abs/2007.01524.
Sosuke Kobayashi. 2018. Contextual augmentation: Data augmentation by words with paradigmatic relations.
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 452-457, New Orleans, Louisiana. Association for Computational Linguistics.
Wouter M. Kouw. 2018. An introduction to domain adaptation and transfer learning. CoRR, abs/1812.11806.
Vinod K. Kurmi, Venkatesh K. Subramanian, and Vinay P. Namboodiri. 2021. Domain impression: A source data free domain adaptation method. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 615-625.
Egoitz Laparra, Steven Bethard, and Timothy A Miller. 2020. Rethinking domain adaptation for machine learning over clinical language. JAMIA open, 3(2):146-150.
Egoitz Laparra, Xin Su, Yiyun Zhao, Ozlem Uzuner, Timothy Miller, and Steven Bethard. 2021. SemEval-2021 task 10: Source-free domain adaptation for semantic processing. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 348-356, Online. Association for Computational Linguistics.
Egoitz Laparra, Dongfang Xu, Ahmed Elsayed, Steven Bethard, and Martha Palmer. 2018. SemEval 2018 task 6: Parsing time normalizations. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 88-96, New Orleans, Louisiana. Association for Computational Linguistics.
Eric Lehman, Sarthak Jain, Karl Pichotta, Yoav Goldberg, and Byron Wallace. 2021. Does BERT pretrained on clinical notes reveal sensitive data? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 946-959, Online. Association for Computational Linguistics.
Rui Li, Qianfen Jiao, Wenming Cao, Hau-San Wong, and Si Wu. 2020. Model adaptation: Unsupervised domain adaptation without source data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Jian Liang, Dapeng Hu, and Jiashi Feng. 2020. Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 6028-6039. PMLR.
Pei Liu, Xuemin Wang, Chao Xiang, and Weiye Meng. 2020. A survey of text data augmentation. In 2020 International Conference on Computer Communication and Network Security (CCNS), pages 191-195.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. ArXiv, abs/1907.11692.
Nikolaos Malandrakis, Minmin Shen, Anuj Goyal, Shuyang Gao, Abhishek Sethi, and Angeliki Metallinou. 2019. Controlled text generation for data augmentation in intelligent artificial agents. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 90-98, Hong Kong. Association for Computational Linguistics.
Michael McCloskey and Neal J. Cohen. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. In Gordon H. Bower, editor, *Psychology of Learning and Motivation*, volume 24, pages 109-165. Academic Press.
David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 152-159, New York City, USA. Association for Computational Linguistics.
Zhengjie Miao, Yuliang Li, Xiaolan Wang, and Wang-Chiew Tan. 2020. Snippext: Semi-supervised opinion mining with augmented data. CoRR, abs/2002.03049.
Timothy Miller. 2019. Simplified neural unsupervised domain adaptation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 414-419, Minneapolis, Minnesota. Association for Computational Linguistics.
Piyush Rai, Avishek Saha, Hal Daumé, and Suresh Venkatasubramanian. 2010. Domain adaptation meets active learning. In Proceedings of the NAACL HLT 2010 Workshop on Active Learning for Natural Language Processing, pages 27-32, Los Angeles, California. Association for Computational Linguistics.
Alan Ramponi and Barbara Plank. 2020. Neural unsupervised domain adaptation in NLP—A survey. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6838-6855, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Sebastian Ruder and Barbara Plank. 2018. Strong baselines for neural semi-supervised learning under domain shift. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1044-1054, Melbourne, Australia. Association for Computational Linguistics.
Gözde Gül Şahin and Mark Steedman. 2018. Data augmentation via dependency tree morphing for low-resource languages. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5004-5009, Brussels, Belgium. Association for Computational Linguistics.
Roshni Sahoo, Divya Shanmugam, and John V. Guttag. 2020. Unsupervised domain adaptation in the absence of source data. CoRR, abs/2007.10233.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computational Linguistics.
Burr Settles. 2009. Active learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences.
Xin Su, Yiyun Zhao, and Steven Bethard. 2021.
The University of Arizona at SemEval-2021 task 10: Applying self-training, active learning and data augmentation to source-free domain adaptation. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 458-466, Online. Association for Computational Linguistics.
Fan-Keng Sun, Cheng-Hao Ho, and Hung-Yi Lee. 2020. LAMAL: Language modeling is all you need for lifelong language learning. In International Conference on Learning Representations.
Bailin Wang, Wenpeng Yin, Xi Victoria Lin, and Caiming Xiong. 2021. Learning to synthesize data for semantic parsing. CoRR, abs/2104.05827.
Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6382-6388, Hong Kong, China. Association for Computational Linguistics.
Garrett Wilson and Diane J. Cook. 2020. A survey of unsupervised deep domain adaptation. ACM Trans. Intell. Syst. Technol., 11(5).
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Silei Xu, Sina Semnani, Giovanni Campagna, and Monica Lam. 2020. AutoQA: From databases to QA semantic parsers with only synthetic training data. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 422-434, Online.
Association for Computational Linguistics.
David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In 33rd Annual Meeting of the Association for Computational Linguistics, pages 189-196, Cambridge, Massachusetts, USA. Association for Computational Linguistics.
Hai Ye, Qingyu Tan, Ruidan He, Juntao Li, Hwee Tou Ng, and Lidong Bing. 2020. Feature adaptation of pre-trained language models across languages and domains with robust self-training. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7386-7399, Online. Association for Computational Linguistics.
Juntao Yu, Mohab Elkaref, and Bernd Bohnet. 2015. Domain adaptation for dependency parsing via self-training. In Proceedings of the 14th International Conference on Parsing Technologies, pages 1-10, Bilbao, Spain. Association for Computational Linguistics.
Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, and Ming Zhou. 2019. BERT-based lexical substitution. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3368-3373, Florence, Italy. Association for Computational Linguistics.
Yftah Ziser and Roi Reichart. 2017. Neural structural correspondence learning for domain adaptation. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 400-410, Vancouver, Canada. Association for Computational Linguistics.

# A Appendix

# A.1 Hyperparameters

For both tasks, when we continue training the source-domain model on the target domain, we keep the same training hyperparameters as were used when the shared task organizers trained the models on the source domains. Those hyperparameters are shown in tables A1 and A2.
| Hyperparameter | Value |
| --- | --- |
| maximum sequence length | 128 |
| batch size | 8 |
| epochs | 10 |
| gradient accumulation steps | 4 |
| learning rate warm up steps | 0 |
| weight decay | 0.0 |
| learning rate | 5e-5 |
| adam epsilon | 1e-08 |
| maximum gradient norm | 1.0 |
+ +Table A1: Hyperparameters for negation detection systems. + +
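For readers reproducing these runs with the Hugging Face `Trainer`, the Table A1 settings map onto `TrainingArguments` roughly as follows. This is a hypothetical sketch, not the organizers' actual script; the maximum sequence length of 128 is applied at tokenization time rather than here.

```python
from transformers import TrainingArguments

# Hypothetical mapping of the Table A1 hyperparameters (negation detection).
args = TrainingArguments(
    output_dir="negation-model",      # illustrative path
    per_device_train_batch_size=8,
    num_train_epochs=10,
    gradient_accumulation_steps=4,
    warmup_steps=0,
    weight_decay=0.0,
    learning_rate=5e-5,
    adam_epsilon=1e-8,
    max_grad_norm=1.0,
)
```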
| Hyperparameter | Value |
| --- | --- |
| maximum sequence length | 271 |
| batch size | 2 |
| epochs | 3 |
| gradient accumulation steps | 1 |
| learning rate warm up steps | 500 |
| weight decay | 0.01 |
| learning rate | 5e-5 |
| adam epsilon | 1e-08 |
| maximum gradient norm | 1.0 |
+ +Table A2: Hyperparameters for time expression recognition systems. + +# A.2 Heat Maps for Error Analysis + +For both tasks, we calculated how many source-domain model errors the best adapted models continued to make, and plotted them as heatmaps, where the rows are types of errors, and the columns are different models. Figures A1 to A4 show these analyses. + +![](images/993f070e4692e4a27c9d4e750332c713d431a01e32d1fa61df8fbabc4db79552.jpg) +Figure A1: Negation i2b2 target domain error heat map. Source is source-domain model. Oracle is oracle model. ALD is the best performing active learning model. ALDA is the best performing active learning with data augmentation model. ST is the best self-training model. STDA is the best self-training with data augmentation model. The numbers in parentheses are the F1 scores of the models. + +![](images/7a130a9d7ee971764fc8a6759e52029d79c7c68f010766ff2a3bc2cdb21f8841.jpg) + +![](images/f6ea37944cb6be53d9c28e2731b9dcbe48fc60e24af9dfcddd072a3047642604.jpg) +Figure A2: Negation MIMIC-III target domain error heat map. Source is source-domain model. Oracle is oracle model. AL is the best performing active learning model. ALDA is the best performing active learning with data augmentation model. ST is the best self-training model. STDA is the best self-training with data augmentation model. The numbers in parentheses are the F1 scores of the models. + +![](images/0f336386a2efc9ab28fdbca015e32333e46aa55dd0be0d389a0a6637ba6f0427.jpg) +Figure A3: Time news target domain error heat map. Source is source-domain model. Oracle is oracle model. AL is the best performing active learning model. ALDA is the best performing active learning with data augmentation model. ST is the best self-training model. STDA is the best self-training with data augmentation model. The numbers in parentheses are the F1 scores of the models. 
+ +![](images/982b294d11d139854d2030fba506d61352fb8b1173eb2ea5331dd7c05185b847.jpg) +Figure A4: Time food security target domain error heat map. Source is source-domain model. Oracle is oracle model. AL is the best performing active learning model. ALDA is the best performing active learning with data augmentation model. ST is the best self-training model. STDA is the best self-training with data augmentation model. The numbers in parentheses are the F1 scores of the models. + +![](images/cb17328b1c2f3488ad8839947b776a2164c50f7e62049078ea36e2e4c5d35c94.jpg) + +
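The counts behind these heat maps come from intersecting each adapted model's errors with the source-domain model's errors, per error type. A minimal sketch with hypothetical helper names (not the released analysis code):

```python
from collections import Counter

def persisted_errors(source_errors, adapted_errors):
    """Count, per error type, how many source-model errors an adapted
    model continues to make. Errors are (instance_id, error_type) pairs."""
    shared = set(source_errors) & set(adapted_errors)
    return Counter(err_type for _, err_type in shared)

# Toy example: two of the source model's three errors persist.
src = [(1, "scope"), (2, "scope"), (3, "cue")]
ada = [(1, "scope"), (3, "cue"), (4, "cue")]
counts = persisted_errors(src, ada)
```

Stacking such counters, one per adapted model, gives the columns of a heat map whose rows are error types.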
| Strategy | B→D | B→E | B→K | D→B | D→E | D→K | E→B | E→D | E→K | K→B | K→D | K→E |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Source-Domain Model (baseline) | 88.5 | 92.0 | 93.8 | 90.2 | 91.7 | 90.7 | 89.0 | 89.2 | 93.5 | 92.0 | 90.5 | 94.8 |
| Fine-Tuned Source-Domain Model (oracle) | 89.7 | 93.0 | 94.5 | 91.5 | 93.5 | 94.3 | 93.2 | 91.0 | 94.0 | 92.2 | 90.5 | 94.3 |
| Self-Distilled Model | 88.0 | 91.7 | 95.5 | 92.5 | 90.5 | 93.0 | 89.2 | 90.5 | 94.0 | 90.5 | 90.0 | 92.5 |
| Passive Learning Model | 86.5 | 92.5 | 92.5 | 91.5 | 89.2 | 91.2 | 90.0 | 90.2 | 93.2 | 91.5 | 89.7 | 91.2 |
| Best model from Ye et al. (2020) | 87.9 | 91.3 | 92.5 | 91.5 | 91.6 | 92.5 | 88.7 | 88.2 | 93.6 | 89.8 | 87.9 | 92.6 |
| **Active Learning** | | | | | | | | | | | | |
| AL (96 × 1) | 87.7 | 90.2 | 92.7 | 90.7 | 91.0 | 93.0 | 90.2 | 90.7 | 93.2 | 91.7 | 90.0 | 93.8 |
| AL (12 × 8) + KeepModel + KeepData | 88.2 | 90.0 | 91.0 | 90.2 | 90.5 | 94.8 | 91.0 | 88.2 | 94.0 | 89.7 | 91.0 | 92.7 |
| AL (12 × 8) + KeepModel + ResetData | 87.5 | 93.0 | 79.0 | 83.5 | 90.5 | 91.0 | 86.8 | 78.5 | 89.0 | 85.3 | 83.8 | 89.5 |
| AL (12 × 8) + ResetModel + KeepData | 87.5 | 92.2 | 93.5 | 92.5 | 91.2 | 94.0 | 91.2 | 89.0 | 94.5 | 91.0 | 89.2 | 94.8 |
| AL (12 × 8) + ResetModel + ResetData | 75.0 | 84.0 | 67.2 | 91.7 | 62.5 | 90.0 | 89.2 | 87.5 | 91.0 | 93.0 | 69.0 | 94.5 |
| **Self-training** | | | | | | | | | | | | |
| ST (1) | 87.5 | 91.7 | 94.3 | 91.5 | 90.5 | 92.5 | 90.2 | 91.7 | 92.5 | 91.5 | 91.5 | 94.3 |
| ST (30) + KeepModel + KeepData | 87.5 | 92.5 | 94.0 | 90.5 | 91.0 | 92.0 | 89.5 | 89.5 | 94.5 | 90.2 | 89.7 | 93.2 |
| ST (30) + KeepModel + ResetData | 90.0 | 91.2 | 94.3 | 91.2 | 90.2 | 92.7 | 90.7 | 90.5 | 94.5 | 91.2 | 90.5 | 93.5 |
| ST (30) + ResetModel + KeepData | 88.2 | 91.0 | 94.3 | 91.7 | 91.0 | 91.7 | 90.7 | 92.2 | 95.3 | 91.0 | 92.0 | 92.7 |
| ST (30) + ResetModel + ResetData | 89.0 | 92.5 | 94.0 | 90.7 | 90.5 | 92.2 | 90.0 | 90.7 | 94.8 | 91.5 | 91.2 | 94.3 |
Table A3: Accuracy on the Amazon benchmark dataset from Ye et al. (2020). B is Books. D is DVDs. E is Electronics. K is Kitchen. The bolded score is the highest score for the entire column. The underlined score is the worst score for the entire column.

# A.3 Results on Amazon Benchmark

The Amazon Sentiment Analysis dataset has been used as a domain adaptation benchmark dataset by a large number of previous works (Blitzer et al., 2007; Ziser and Reichart, 2017; He et al., 2018; Ye et al., 2020; Ben-David et al., 2020). The data consists of reviews of four different product types (domains): Books, DVDs, Electronics, and Kitchen appliances. For the labeled portion, there are 1000 positive reviews and 1000 negative reviews for each domain. From these 4 domains, we construct 12 source-free domain adaptation tasks. For better comparison we directly use the data and split from the software release of Ye et al. (2020). The data of each source domain is split into $80\%$ as source-domain training set and $20\%$ as source-domain development set. The source-domain model is trained on the source-domain training set and its hyperparameters are tuned using the source-domain development set. The data of each target domain is split into $80\%$ as target-domain development set and $20\%$ as target-domain test set. The use of target-domain development set and target-domain test set is the same as in section 3.

When training the source-domain model, we used RoBERTa-base as a starting point and used grid search to tune the hyperparameters within the space of:

- Learning rate (Adam): 1e-5, 2e-5, 3e-5
- Batch size: 8
- Gradient accumulation steps: 2, 4
- Epochs: 10

Table A3 shows the results of these 12 source-free domain adaptations. In 9 of 12 cases, our unadapted source-domain models score higher than the best adaptation model from Ye et al. (2020).$^{4}$
The gap between these unadapted source-domain models and the fully target-domain adapted (oracle) models is also very small: the average difference is only 1.3 points, much smaller than the 11.1 point average difference in tables 2 and 3. In essence, no domain adaptation is needed for this data, so it is a poor dataset for evaluating source-free domain adaptation. Unsurprisingly, we thus see no source-free domain adaptation models that consistently improve performance, though we do see that the active learning ResetData models are typically poor, as they were in tables 2 and 3. + +To make sure that it is not a specific split or a smaller test set that leads to good source-domain models, we also use the data from Ben-David et al. (2020) to train and test the source-domain models again. The source-domain data split and usage here is the same as before. The only difference is that there is no target-domain development set and the entire target domain is used as a test set. We show the results in table A4. All source-domain models outperform the best adapted models from Ben-David et al. (2020). It is worth noting that when we + +
| Strategy | B→D | B→E | B→K | D→B | D→E | D→K | E→B | E→D | E→K | K→B | K→D | K→E |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SD | 91.8 | 93.5 | 95.0 | 93.0 | 93.0 | 94.6 | 92.8 | 90.8 | 94.7 | 92.1 | 90.2 | 94.4 |
| Best model from Ben-David et al. (2020) | 87.8 | 87.2 | 90.2 | 85.6 | 89.3 | 90.4 | 84.3 | 85.0 | 91.2 | 83.0 | 85.6 | 91.2 |
+ +Table A4: Accuracy on the Amazon benchmark dataset from Ben-David et al. (2020). B is Books. D is DVDs. E is Electronics. K is Kitchen. The bolded score is the highest score for the entire column. The underlined score is the worst score for the entire column. + +trained the source-domain model, we found that a large number of punctuation and special symbols included in the data from Ben-David et al. (2020) caused severe overfitting of the model (accuracy is 1 on the source-domain development set). After removing these symbols, the problem was resolved. + +# A.4 Other Experimental Methods + +We also tried to adapt the source-domain model by continuing to pre-train it with masked language modeling on the target domain. We removed the classification layer of the source-domain model, replaced it with a randomly initialized masked language modeling layer, then trained the language model on the unlabeled target-domain data, and then replaced the masked language modeling layer with the original classification layer. The hope was that this would bring the internal representations of the source-domain model closer to the target domain. However, despite a number of attempts at pre-training both all layers and selected layers, performance of this model was always much worse than the source-domain model. In the future, we plan to experiment with different initialization methods for the masked language model layer. 
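The head-swap procedure described above can be sketched with illustrative stand-in modules. In the real system the shared body would be the RoBERTa encoder and the heads its classification and masked-language-modeling layers; the layer sizes here are arbitrary placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative stand-ins: a shared encoder body with two swappable heads.
encoder = nn.Linear(16, 16)      # stands in for the RoBERTa layers
clf_head = nn.Linear(16, 2)      # original classification layer (set aside)
mlm_head = nn.Linear(16, 100)    # randomly initialized MLM layer

# 1) Swap in the MLM head and continue pre-training on target-domain text.
mlm_model = nn.Sequential(encoder, mlm_head)
opt = torch.optim.AdamW(mlm_model.parameters(), lr=5e-5)
x = torch.randn(4, 16)                      # stands in for token features
y = torch.randint(0, 100, (4,))             # stands in for masked-token ids
loss = nn.functional.cross_entropy(mlm_model(x), y)
loss.backward()
opt.step()

# 2) Reattach the original classification head to the adapted encoder.
#    `encoder` is shared, so it keeps the weights updated in step 1.
adapted = nn.Sequential(encoder, clf_head)
logits = adapted(torch.randn(4, 16))
```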
\ No newline at end of file diff --git a/acomparisonofstrategiesforsourcefreedomainadaptation/images.zip b/acomparisonofstrategiesforsourcefreedomainadaptation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..5c01123817b1965af2a301e7954653eb83c898e8 --- /dev/null +++ b/acomparisonofstrategiesforsourcefreedomainadaptation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c8bc360e34f434d034a77c6bf01aa9d6a53540a6a14844799d7a3f7dbb5620b1 +size 976063 diff --git a/acomparisonofstrategiesforsourcefreedomainadaptation/layout.json b/acomparisonofstrategiesforsourcefreedomainadaptation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..4a73451dfba8833cf874748c62256667f5cea534 --- /dev/null +++ b/acomparisonofstrategiesforsourcefreedomainadaptation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1006f4760fefb1bcf1d932abf9230efbc6b65618953c3d7d4299664708de1626 +size 471659 diff --git a/acontrastiveframeworkforlearningsentencerepresentationsfrompairwiseandtriplewiseperspectiveinangularspace/d4c1df00-d1c5-44d5-b0ad-114778b38fdd_content_list.json b/acontrastiveframeworkforlearningsentencerepresentationsfrompairwiseandtriplewiseperspectiveinangularspace/d4c1df00-d1c5-44d5-b0ad-114778b38fdd_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..2b0e7e1d55bd2f89285fac20ba01a1690afd6ad1 --- /dev/null +++ b/acontrastiveframeworkforlearningsentencerepresentationsfrompairwiseandtriplewiseperspectiveinangularspace/d4c1df00-d1c5-44d5-b0ad-114778b38fdd_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3114f01379d10e7058e02f3e0e227947dca3bfe1ec1d190d296f80f836a3284 +size 78317 diff --git a/acontrastiveframeworkforlearningsentencerepresentationsfrompairwiseandtriplewiseperspectiveinangularspace/d4c1df00-d1c5-44d5-b0ad-114778b38fdd_model.json 
b/acontrastiveframeworkforlearningsentencerepresentationsfrompairwiseandtriplewiseperspectiveinangularspace/d4c1df00-d1c5-44d5-b0ad-114778b38fdd_model.json new file mode 100644 index 0000000000000000000000000000000000000000..d752575d29ddd87b44c7e5a706c42c604a44d587 --- /dev/null +++ b/acontrastiveframeworkforlearningsentencerepresentationsfrompairwiseandtriplewiseperspectiveinangularspace/d4c1df00-d1c5-44d5-b0ad-114778b38fdd_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3f4aeb3b379e5d13966b51106eb7af524487b61b9f5dfd70dc0eab4143b6bb75 +size 97364 diff --git a/acontrastiveframeworkforlearningsentencerepresentationsfrompairwiseandtriplewiseperspectiveinangularspace/d4c1df00-d1c5-44d5-b0ad-114778b38fdd_origin.pdf b/acontrastiveframeworkforlearningsentencerepresentationsfrompairwiseandtriplewiseperspectiveinangularspace/d4c1df00-d1c5-44d5-b0ad-114778b38fdd_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..220645b1aafd71d0340b72d18035b5ab69dd66af --- /dev/null +++ b/acontrastiveframeworkforlearningsentencerepresentationsfrompairwiseandtriplewiseperspectiveinangularspace/d4c1df00-d1c5-44d5-b0ad-114778b38fdd_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:217babff2453e6cd70d268d1c1f163ea7e2fdb5fa35346b056530172ff5ec7ed +size 1079792 diff --git a/acontrastiveframeworkforlearningsentencerepresentationsfrompairwiseandtriplewiseperspectiveinangularspace/full.md b/acontrastiveframeworkforlearningsentencerepresentationsfrompairwiseandtriplewiseperspectiveinangularspace/full.md new file mode 100644 index 0000000000000000000000000000000000000000..419561143f794ea4915d8b2bae6bec995d0f094c --- /dev/null +++ b/acontrastiveframeworkforlearningsentencerepresentationsfrompairwiseandtriplewiseperspectiveinangularspace/full.md @@ -0,0 +1,318 @@ +# A Contrastive Framework for Learning Sentence Representations from Pairwise and Triple-wise Perspective in Angular Space + +Yuhao Zhang $^{1}$ , 
Hongji Zhu $^{1}$ , Yongliang Wang $^{1}$ , Nan Xu $^{2}$ , Xiaobo Li $^{1}$ , BinQiang Zhao $^{1}$ + +$^{1}$ Alibaba Group $^{2}$ Institute of Automation, Chinese Academy of Sciences + +{zyh277500,zhj283587,wangyongliang.wyl,xiaobo.lixb,binqiang.zhao}@alibaba-inc.com + +xunan2015@ia.ac.cn + +# Abstract + +Learning high-quality sentence representations is a fundamental problem of natural language processing which could benefit a wide range of downstream tasks. Though the BERT-like pretrained language models have achieved great success, using their sentence representations directly often results in poor performance on the semantic textual similarity task. Recently, several contrastive learning methods have been proposed for learning sentence representations and have shown promising results. However, most of them focus on the constitution of positive and negative representation pairs and pay little attention to the training objective like NT-Xent, which is not sufficient enough to acquire the discriminating power and is unable to model the partial order of semantics between sentences. So in this paper, we propose a new method ArcCSE, with training objectives designed to enhance the pairwise discriminative power and model the entailment relation of triplet sentences. We conduct extensive experiments which demonstrate that our approach outperforms the previous state-of-the-art on diverse sentence related tasks, including STS and SentEval. + +# 1 Introduction + +Learning sentence representations, which encodes sentences into fixed-sized dense vectors such that semantically similar ones stay close, is a fundamental problem of natural language processing. It could benefit a wide range of downstream applications such as information retrieval, semantic similarity comparison, question answering, and so on. 
Recently, with the great success of pre-trained Transformer-based language models (Devlin et al., 2019; Liu et al., 2019; Raffel et al., 2020; Brown et al., 2020) like BERT, they have been widely adopted for generating sentence representations. A straightforward way is by leveraging the [CLS] embedding (Devlin et al., 2019) or applying mean pooling on the last layers of a BERT-like pre-trained language model (Reimers and Gurevych, 2019). However, the sentence embeddings coming from a pre-trained language model without further fine-tuning could not capture the semantic meaning of sentences very well as shown in Figure 1(a), and sometimes even underperform non-contextualized embeddings like GloVe (Pennington et al., 2014).

![](images/6ab3f2c05b5a239a2719ea177d833497c92402dc3fe56c22389f365aff55c24a.jpg)
![](images/d944b644311622f2dbbda5ee699f083c0d7150d4aefe776346890c0e7f0129e5.jpg)
![](images/daace13360e8547bf558b27af3341ccc857ab3fee13be18df36be519dc277a42.jpg)
Figure 1: Sentence representation visualization. (a) BERTbase; (b) SimCSE-BERTbase; (c) ArcCSE-BERTbase; (d) example sentences. $s_a$ : He was born in Nazareth-Palestine, but immigrated to Lebanon with his parents and then to Jordan where he completed his primary education. $s_b$ : He was born in Nazareth-Palestine, but immigrated to Lebanon with his parents. $s_c$ : He was born in Nazareth-Palestine, but immigrated to Lebanon. We generate the representations of three related sentences by passing them to $\mathrm{BERT}_{\mathrm{base}}$ , SimCSE-BERTbase and ArcCSE-BERTbase multiple times. With different dropout masks, we can generate different representations for each sentence. Then we normalize the embeddings and use t-SNE for dimensionality reduction.
+ +To make pre-trained language models more suitable for generating sentence embeddings, supervised methods like SBERT (Reimers and Gurevych, 2019) are proposed, which improve the performance by fine-tuning on a labeled dataset. As labeled data is not available or expensive to annotate in many tasks or domains, it is of great value + +for developing unsupervised/self-supervised approaches for learning sentence representations. So recent works like BERT-Flow (Li et al., 2020) and BERT-Whitening (Su et al., 2021) propose post-processing methods to improve the BERT-based sentence representation. They address that the nonsmooth anisotropic semantic space of BERT is a bottleneck and alleviate the problem through normalizing flows and whitening operation. To further improve the quality of sentence representations, several works (Kim et al., 2021; Yan et al., 2021; Giorgi et al., 2021; Carlsson et al., 2021; Gao et al., 2021) adopt self-supervised contrastive learning approach, which learns sentence representations by minimizing the distance of positive sentence representation pairs and maximizing the distance of negative pairs. In these works, positive pairs are often constituted through data augmentation or encoders with different structure or parameters, while negative pairs are derived from different sentences within the same batch. Then contrastive learning objective like normalized temperature-scaled cross-entropy loss (NT-Xent) (Chen et al., 2020; Gao et al., 2021) is used for optimizing. A typical example unsup-SimCSE (Gao et al., 2021) achieves state-of-the-art performance with a simple and effective idea of using standard dropout for data augmentation. + +Though existing contrastive methods for learning sentence representation have shown promising results, most of them focus on the positive and negative pairs constitution, and the optimization objective itself is not fully exploited. 
The contrastive learning objective NT-Xent loss used in recent works (Yan et al., 2021; Giorgi et al., 2021; Gao et al., 2021) is a variation of cross-entropy loss with a softmax function. Recent studies (Wang et al., 2018; Deng et al., 2019) have shown that the traditional softmax-based loss is insufficient to acquire the necessary discriminating power; as shown in Figure 1(b), SimCSE-BERT $_{\text{base}}$ , which adopts the NT-Xent loss, could not separate $s_b$ and $s_c$ completely. In addition, the current optimization objectives only model sentence relations from a pairwise perspective, pulling sentences with similar semantics closer and pushing dissimilar ones away from each other. However, there are different degrees of semantic similarity among related sentences. For example, in Figure 1(d), $s_b$ is more similar to $s_a$ than $s_c$ is. The current optimization objectives lack the ability to model the partial order of semantics
# 2 Related Work

# 2.1 Unsupervised Sentence Representation Learning

Early works usually learn sentence representations by augmenting the idea of word2vec (Mikolov et al., 2013), such as predicting surrounding sentences (Kiros et al., 2015; Hill et al., 2016; Logeswaran and Lee, 2018) or summing up n-gram embeddings (Pagliardini et al., 2018). With the rise of pre-trained language models, many works try to generate sentence representations through BERT-like models. A common way is leveraging the [CLS] embedding or applying mean pooling over the last layers of BERT (Reimers and Gurevych, 2019; Li et al., 2020). Instead of using BERT embeddings directly, BERT-Flow (Li et al., 2020) and BERT-Whitening (Su et al., 2021) further improve sentence representations through post-processing.

Recently, several works adopt the contrastive learning framework for sentence representation learning. They propose different strategies to construct contrastive pairs, either through different data transformation methods (Zhang et al., 2020; Yan et al., 2021; Giorgi et al., 2021), or through encoders with different structures or parameters (Carlsson et al., 2021; Kim et al., 2021; Gao et al., 2021). A typical example, SimCSE (Gao et al., 2021), uses dropout as a data augmentation strategy and achieves state-of-the-art performance. However, most existing works pay little attention to the training objective and use the traditional contrastive loss directly, which is insufficiently discriminative and unable to model the partial order of semantics between sentences. In our work, we therefore propose a new approach that jointly models pairwise and triple-wise sentence relations and further improves the quality of sentence representations.

![](images/277ff94f8b7e82c36719e8121231febf8af6dc53ac410b16539c93d2eae30977.jpg)
Figure 2: The framework of ArcCSE. ArcCSE models pairwise and triple-wise sentence relations simultaneously. For pairwise sentence relation modeling, we pass sentences to a BERT-like encoder with dropout turned on twice. Then we feed the representations into the ArcCon loss, which is more discriminative than the NT-Xent loss. Triplet sentences are constructed through masking. We pass them to the same BERT-like encoder with dropout turned off and use a triplet loss to model their relations.

# 2.2 Deep Metric Learning Objectives

The goal of Deep Metric Learning (DML) is to learn a function that maps objects into an embedded space, in which similar objects stay close and dissimilar ones are far away. Many approaches have been proposed to achieve this goal, and designing appropriate loss functions plays a key role. Contrastive training objectives like Contrastive Loss (Chopra et al., 2005), N-Pair Loss (Sohn, 2016), Structured Loss (Song et al., 2016) and Triplet Margin Loss (Ma et al., 2021) apply the definition of metric learning directly. These objectives are among the earliest training objectives used for deep metric learning. Later, softmax-based losses, which learn a center for each class and penalize the distances between deep features and their corresponding class centers, achieved more promising results in supervised metric learning. Typical examples like Center Loss (Wen et al., 2016), SphereFace (Liu et al., 2017), CosFace (Wang et al., 2018) and ArcFace (Deng et al., 2019) are widely adopted in deep learning applications such as face recognition and sentence classification (Coria et al., 2020). However, these losses need class labels and are not suitable for learning sentence representations. Inspired by ArcFace, we therefore propose a new training objective, ArcCon, that does not need class labels and models pairwise sentence relations with more discriminative power than traditional contrastive training objectives.
# 3 Method

In this section, we present ArcCSE, an angular-margin-based contrastive sentence representation learning framework that generates superior sentence embeddings from unlabeled data. Given a pre-trained language model $\mathcal{M}$ and an unlabeled text dataset $\mathcal{D}$, the task is to fine-tune $\mathcal{M}$ on $\mathcal{D}$ so that the sentence representations generated by $\mathcal{M}$ become more semantically discriminative.

Our framework consists of two components that model pairwise and triple-wise sentence relations simultaneously, as shown in Figure 2. We start with angular margin based contrastive learning in Section 3.1, which models pairwise relations between sentences by pulling semantically similar ones closer while pushing dissimilar ones away. Then we introduce the method that models the partial order of semantics between automatically constructed triplet sentences in Section 3.2.

# 3.1 Angular Margin based Contrastive Learning

To model the positive/negative pairwise relations between sentences, we first need to generate sentence representations and group them into positive and negative pairs. Then we feed these pairs to a training objective for optimization.

Given a collection of sentences $\mathcal{D} = \{s_i\}_{i=1}^N$, we generate sentence representations through a BERT-like pre-trained language model $\mathcal{M}$. Following SimCSE, we use dropout as the data augmentation method. For each sentence $s_i$, we generate two different representations $h_i$ and $h_i^*$ by passing $s_i$ to $\mathcal{M}$ twice with independently sampled dropout masks. These two representations with the same semantics constitute a positive pair, while negative pairs are derived from the representations of different sentences within the same batch.

After getting the positive and negative sentence pairs, we feed them into a training objective for model fine-tuning.
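Concretely, the two-pass positive-pair construction above can be sketched as follows (a minimal NumPy stand-in: the frozen linear map plays the role of the BERT-like encoder, and the dropout rate is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 8))      # frozen weights of a toy "encoder"

def encode(x, drop_p=0.1):
    # Training-mode pass: inverted dropout samples a fresh mask per call,
    # so two calls on the same input give two different representations.
    mask = rng.random(x.shape) >= drop_p
    return (x * mask / (1.0 - drop_p)) @ W

x = rng.standard_normal((4, 16))      # a batch of 4 "sentence" inputs
h, h_star = encode(x), encode(x)      # (h_i, h_i*) are the positive pairs
assert not np.allclose(h, h_star)     # the two dropout masks differ
```

The negatives for $h_i$ are simply the other rows of `h_star` (or `h`) within the same batch.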
The most widely adopted training objective is the NT-Xent loss (Chen et al., 2020; Gao et al., 2021), which has been used in previous sentence and image representation learning methods and can be formulated as follows:

$$
\mathcal{L}_{\text{NT-Xent}} = -\log \frac{e^{\mathrm{sim}(h_i, h_i^*)/\tau}}{\sum_{j=1}^{n} e^{\mathrm{sim}(h_i, h_j)/\tau}} \tag{1}
$$

where $\mathrm{sim}(h_i, h_j)$ is the cosine similarity $\frac{h_i^\top h_j}{\|h_i\| \cdot \|h_j\|}$, $\tau$ is a temperature hyperparameter, and $n$ is the number of sentences within a batch.

Though this training objective tries to pull representations with similar semantics closer and push dissimilar ones away from each other, the resulting representations may still not be sufficiently discriminative, nor very robust to noise. Let us denote the angle $\theta_{i,j}$ as follows:

$$
\theta_{i,j} = \arccos\left(\frac{h_i^\top h_j}{\|h_i\| \cdot \|h_j\|}\right) \tag{2}
$$

![](images/bb02f2f144461b2ff6d5f044c5e9624862bedf6ec9623fb791b3632b4812ab4e.jpg)
Figure 3: Comparison of the NT-Xent loss and the ArcCon loss. For sentence representation $h_i$, we try to make $\theta_{i,i^*}$ smaller and $\theta_{i,j}$ larger, so the optimization direction follows the arrow. With an extra margin $m$, ArcCon is more discriminative and noise-tolerant.

![](images/b583ec2de6bd1123f80b59a748d263e85a6c9ec3a099e7fdffbde7554809e538.jpg)

The decision boundary for $h_i$ in NT-Xent is $\theta_{i,i^*} = \theta_{i,j}$, as shown in Figure 3. Due to the lack of a decision margin, a small perturbation around the decision boundary may lead to an incorrect decision.

To overcome this problem, we propose a new training objective for sentence representation learning by adding an additive angular margin $m$ between the positive pair $h_i$ and $h_i^*$.
We name it the Additive Angular Margin Contrastive Loss (ArcCon Loss), which can be formulated as follows:

$$
\mathcal{L}_{\mathrm{arc}} = -\log \frac{e^{\cos\left(\theta_{i,i^*} + m\right)/\tau}}{e^{\cos\left(\theta_{i,i^*} + m\right)/\tau} + \sum_{j \neq i} e^{\cos\left(\theta_{i,j}\right)/\tau}} \tag{3}
$$

In this loss, the decision boundary for $h_i$ is $\theta_{i,i^*} + m = \theta_{i,j}$, as shown in Figure 3. Compared with NT-Xent, it further pushes $h_i$ toward the area where $\theta_{i,i^*}$ gets smaller and $\theta_{i,j}$ gets larger, increasing the compactness of sentence representations with the same semantics and enlarging the discrepancy between different semantic representations. This helps enhance the alignment and uniformity properties (Wang and Isola, 2020), two key measures of representation quality related to contrastive learning that indicate how close positive pair embeddings are and how uniformly the embeddings are distributed. A quantitative analysis is presented in Section 4.5. Besides, the decision boundary leaves an extra margin $m$ to the boundary $\theta_{i,i^*} = \theta_{i,j}$, which is often used during inference, making it more tolerant to noise and more robust. All these properties make the ArcCon loss more discriminative than traditional training objectives like NT-Xent. Compared with ArcFace (Deng et al., 2019), which is often used for large-scale fine-grained categorization in the computer vision community, the ArcCon loss does not need classification labels and can handle the contrastive task properly.

# 3.2 Modeling Entailment Relation of Triplet Sentences

Previous training objectives for sentence representation learning, like the NT-Xent loss, only consider pairwise sentence relations, in which sentences are either similar or dissimilar in semantics. But in fact, there are varying degrees of semantic similarity.
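As a concrete reference for Section 3.1, the objective in Eq. (3) can be sketched in a few lines (a hedged NumPy sketch: it assumes the negatives for $h_i$ are the other sentences' augmented views in the batch, and takes $m$ in radians):

```python
import numpy as np

def arccon_loss(h, h_star, m=np.deg2rad(10.0), tau=0.05):
    """Sketch of the ArcCon loss (Eq. 3): an additive angular margin m
    is applied to the positive-pair angle theta_{i,i*} before the
    softmax-style normalization over in-batch pairs."""
    h = h / np.linalg.norm(h, axis=1, keepdims=True)
    h_star = h_star / np.linalg.norm(h_star, axis=1, keepdims=True)
    cos = np.clip(h @ h_star.T, -1.0, 1.0)   # cos(theta_{i,j}); diagonal = positives
    theta_pos = np.arccos(np.diag(cos))      # theta_{i,i*}
    pos = np.exp(np.cos(theta_pos + m) / tau)
    neg = np.exp(cos / tau)
    np.fill_diagonal(neg, 0.0)               # exclude j = i from the sum
    return float(np.mean(-np.log(pos / (pos + neg.sum(axis=1)))))
```

With `m=0` this essentially reduces to NT-Xent (Eq. 1); a positive `m` shrinks $\cos(\theta_{i,i^*})$ and so raises the loss near the old decision boundary, which is exactly the extra margin discussed above.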
For example, sentence $s_2$ could be more similar to sentence $s_1$ than sentence $s_3$ is to $s_1$. Existing methods lack the ability to model such a partial order of semantics between sentences.

In order to distinguish slight differences in semantics between sentences, we propose a new self-supervised task that models the entailment relation of automatically generated triplet sentences. For each sentence $s_i$ in the text dataset $\mathcal{D}$, we first generate a new sentence $s_i'$ by masking a contiguous segment of $s_i$ with a masking rate of $20\%$. Then we enlarge the masking area and get another sentence $s_i''$ with a masking rate of $40\%$ of $s_i$. The masking rates are set experimentally, and an ablation study on their effect is presented in Section 4.4. An example of the masking procedure is shown as follows:

$s_i$ Al Jaber's first long distance travel was of $800\mathrm{km}$ which he covered by circling Qatar.
$s_i^\prime$ Al Jaber's first long distance travel was of $800\mathrm{km}$ which he covered by circling Qatar.
$s_i^{\prime \prime}$ Al Jaber's first long distance travel was of $800\mathrm{km}$ which he covered by circling Qatar.

We can then constitute a triplet $(s_i, s_i', s_i'')$ with an entailment relation among its members. Though in rare cases the strategy may generate sentences that do not exhibit the desired relationship and thus introduce some noise, the entailment relation holds most of the time. We expect that encountering enough data will reinforce the correct cases, whereas the impact of incorrect ones will diminish.

Since $s_i$, $s_i'$ and $s_i''$ are similar both literally and semantically, generating their representations with dropout noise may obscure their entailment relation and add inaccurate signals to the representation learning process. We therefore turn off the dropout of the encoder when modeling the triplet relation.
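The masking procedure above can be sketched as follows (a simplified sketch: the `[MASK]` token and the way the second segment is grown around the first are our illustrative assumptions; the paper specifies only contiguous masking at the two rates):

```python
import random

def make_triplet(sentence, r1=0.20, r2=0.40, seed=0):
    """Build (s, s', s''): s' masks a contiguous ~r1 fraction of the
    words; s'' enlarges that masked area to a ~r2 fraction, so s'
    stays semantically closer to s than s'' does."""
    rng = random.Random(seed)
    w = sentence.split()
    n1 = max(1, round(len(w) * r1))
    n2 = max(n1, round(len(w) * r2))
    start1 = rng.randrange(0, len(w) - n1 + 1)
    # Grow the mask around the first segment, clipped to the sentence,
    # so the larger masked area always contains the smaller one.
    start2 = max(0, min(start1 - (n2 - n1) // 2, len(w) - n2))
    s1 = w[:start1] + ["[MASK]"] * n1 + w[start1 + n1:]
    s2 = w[:start2] + ["[MASK]"] * n2 + w[start2 + n2:]
    return " ".join(w), " ".join(s1), " ".join(s2)

s, s_prime, s_dprime = make_triplet(" ".join(f"w{i}" for i in range(25)))
# For a 25-word sentence, s_prime masks 5 words and s_dprime masks 10,
# with s_dprime's masked span covering s_prime's.
```

A filter on sentence length (the paper keeps sentences of at least 25 words for the triplet loss) would be applied before this step.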
As $s_i'$ is more similar to $s_i$ in semantics than $s_i''$ is, we can model this relation with a triplet objective:

$$
\mathcal{L}_{\mathrm{tri}} = \max\left(0, \operatorname{sim}\left(\bar{h}_i, \bar{h}_i''\right) - \operatorname{sim}\left(\bar{h}_i, \bar{h}_i'\right) + m\right) \tag{4}
$$

in which $\bar{h}_i$ is the sentence representation of $s_i$ generated without dropout noise and $\operatorname{sim}(i, j)$ is the cosine similarity between $i$ and $j$. As the semantic difference between $s_i'$ and $s_i''$ may be subtle depending on the original sentence $s_i$ and the masked words, we set $m$ to zero here.

Combining formulas (3) and (4), the final form of our training objective is:

$$
\mathcal{L} = \mathcal{L}_{\mathrm{arc}} + \lambda \mathcal{L}_{\mathrm{tri}} \tag{5}
$$

in which $\lambda$ is a coefficient.

# 4 Experiments

# 4.1 Setups

Evaluation Tasks We evaluate our method on two kinds of sentence-related tasks:

- Unsupervised Semantic Textual Similarity (STS): These tasks measure the model's ability to estimate the semantic similarities between sentences.
- SentEval Transfer Tasks: These tasks measure the effectiveness of sentence embeddings in downstream transfer tasks.

Baselines We compare ArcCSE with several representative methods on the STS and SentEval tasks, such as average GloVe embeddings (Pennington et al., 2014), Skip-thought (Kiros et al., 2015), average BERT embeddings from the last layer (Devlin et al., 2019), BERT-Flow (Li et al., 2020), and BERT-Whitening (Su et al., 2021). We also include the recently proposed contrastive learning methods, such as ISBERT (Zhang et al., 2020), CT-BERT (Carlsson et al., 2021), ConSERT (Yan et al., 2021), and the current state-of-the-art method SimCSE (Gao et al., 2021).
Implementation Details We train ArcCSE with the pre-trained checkpoints of $\mathrm{BERT}_{\mathrm{base}}$ and $\mathrm{BERT}_{\mathrm{large}}$ (Devlin et al., 2019). We also apply our method to SBERT (Reimers and Gurevych, 2019), which has been trained on NLI datasets, to verify the generalizability of our method.

Following SimCSE (Gao et al., 2021), we use the output of the MLP layer on top of the [CLS] token as the sentence representation during training, and use the [CLS] output without the MLP layer for evaluation. The dropout rate is set to 0.1. For the ArcCon loss, we set the angular margin $m$ to 10 degrees and the temperature $\tau$ to 0.05. When modeling the entailment relation of triplet sentences, we set the masking ratios to $20\%$ and $40\%$ respectively. Since the semantic difference between triplet sentences is more obvious for long sentences, we filter out sentences with fewer than 25 words and use the remaining ones for the triplet loss. The loss coefficient $\lambda$ is set to 0.1 experimentally.

We use one million randomly sampled sentences from English Wikipedia for training, as in previous work (Gao et al., 2021)$^{1}$. During training, the sentences are sampled by length. To save memory, we set different maximum sentence lengths for the ArcCon loss and the triplet loss: the length is set to 32 for the ArcCon loss in large models, and to the maximum length within a batch in all other cases. We train our model for one epoch; the learning rate is set to 3e-5 for base models and 1e-5 for large models. We search the batch size within $\{8, 16, 32\}$ and always update the parameters every 64 steps. The model is optimized by AdamW with Sharpness-Aware Minimization (Foret et al., 2021) under default configurations.

We evaluate our model every 125 training steps on the development set of STS-B, and the best checkpoint is used for the final evaluation on test sets. Our implementation is based on HuggingFace's Transformers (Wolf et al., 2020).

| Method | STS12 | STS13 | STS14 | STS15 | STS16 | STS-B | SICK-R | Avg. |
|---|---|---|---|---|---|---|---|---|
| GloVe (avg.) | 55.14 | 70.66 | 59.73 | 68.25 | 63.66 | 58.02 | 53.76 | 61.32 |
| BERT$_{\text{base}}$ (last avg.) | 30.87 | 59.89 | 47.73 | 60.29 | 63.73 | 47.29 | 58.22 | 52.57 |
| BERT$_{\text{base}}$-flow | 58.40 | 67.10 | 60.85 | 75.16 | 71.22 | 68.66 | 64.47 | 66.55 |
| BERT$_{\text{base}}$-whitening | 57.83 | 66.90 | 60.90 | 75.08 | 71.31 | 68.24 | 63.73 | 66.28 |
| IS-BERT$_{\text{base}}$ | 56.77 | 69.24 | 61.21 | 75.23 | 70.16 | 69.21 | 64.25 | 66.58 |
| CT-BERT$_{\text{base}}$ | 61.63 | 76.80 | 68.47 | 77.50 | 76.48 | 74.31 | 69.19 | 72.05 |
| ConSERT-BERT$_{\text{base}}$ | 64.64 | 78.49 | 69.07 | 79.72 | 75.95 | 73.97 | 67.31 | 72.74 |
| SimCSE-BERT$_{\text{base}}$ | 68.40 | 82.41 | 74.38 | 80.91 | 78.56 | 76.85 | 72.23 | 76.25 |
| ArcCSE-BERT$_{\text{base}}$ | 72.08 | 84.27 | 76.25 | 82.32 | 79.54 | 79.92 | 72.39 | 78.11 |
| w/o ArcCon loss | 69.94 | 82.34 | 75.08 | 83.08 | 78.97 | 78.59 | 71.13 | 77.02 |
| w/o Triplet loss | 69.66 | 81.92 | 75.33 | 82.79 | 79.55 | 79.56 | 71.94 | 77.25 |
| ConSERT-BERT$_{\text{large}}$ | 70.69 | 82.96 | 74.13 | 82.78 | 76.66 | 77.53 | 70.37 | 76.45 |
| SimCSE-BERT$_{\text{large}}$ | 70.88 | 84.16 | 76.43 | 84.50 | 79.76 | 79.26 | 73.88 | 78.41 |
| ArcCSE-BERT$_{\text{large}}$ | 73.17 | 86.19 | 77.90 | 84.97 | 79.43 | 80.45 | 73.50 | 79.37 |
| SBERT$_{\text{base}}$ | 70.97 | 76.53 | 73.19 | 79.09 | 74.30 | 77.03 | 72.91 | 74.89 |
| SimCSE-SBERT$_{\text{base}}$ | 69.41 | 80.76 | 74.37 | 82.61 | 77.64 | 79.92 | 76.62 | 77.33 |
| ArcCSE-SBERT$_{\text{base}}$ | 74.29 | 82.95 | 76.63 | 83.90 | 79.08 | 80.95 | 75.64 | 79.06 |
| SBERT$_{\text{large}}$ | 72.27 | 78.46 | 74.90 | 80.99 | 76.25 | 79.23 | 73.75 | 76.55 |
| SimCSE-SBERT$_{\text{large}}$ | 76.16 | 83.77 | 77.27 | 84.33 | 79.73 | 81.67 | 77.25 | 80.03 |
| ArcCSE-SBERT$_{\text{large}}$ | 76.36 | 85.72 | 78.22 | 85.20 | 80.04 | 82.25 | 77.01 | 80.69 |

Table 1: Sentence representation performance on the STS tasks. We apply our method to BERT and SBERT in both base and large versions and report Spearman's correlation.
# 4.2 Unsupervised STS Tasks

We conduct experiments on seven semantic textual similarity (STS) tasks, including STS 2012–2016 (Agirre et al., 2012, 2013, 2014, 2015, 2016), the STS Benchmark (Cer et al., 2017), and SICK-Relatedness (Marelli et al., 2014). Within these datasets, each sample contains two sentences and a gold score between 0 and 5 that indicates their semantic similarity. We use the SentEval toolkit (Conneau and Kiela, 2018) for evaluation and report Spearman's correlation following previous works (Reimers and Gurevych, 2019; Gao et al., 2021).

The evaluation results are shown in Table 1, from which we can see that ArcCSE outperforms the previous approaches. Compared with the previous state-of-the-art method SimCSE, ArcCSE-BERT$_{\text{base}}$ raises the average Spearman's correlation from $76.25\%$ to $78.11\%$, and ArcCSE-BERT$_{\text{large}}$ further pushes the result to $79.37\%$. The performance is even better than that of the strong supervised method SBERT, which has already been trained on NLI datasets. Furthermore, we can also apply our method to SBERT and improve its performance to $79.06\%$ and $80.69\%$ for the base and large models respectively, which is more effective than SimCSE.

We also explore the improvements made by the ArcCon loss and the triplet loss independently, based on $\mathrm{BERT}_{\mathrm{base}}$. From Table 1 we can see that with the ArcCon loss alone, the average Spearman's correlation is $77.25\%$. When combining the traditional NT-Xent loss with our proposed triplet loss, the average Spearman's correlation is $77.02\%$. Both of them outperform the previous state-of-the-art method SimCSE, whose average Spearman's correlation is $76.25\%$. This demonstrates the effectiveness of the proposed ArcCon and triplet losses.

# 4.3 SentEval Tasks

We evaluate our model with the SentEval toolkit on several supervised transfer tasks, including MR (Pang and Lee, 2005), CR (Hu and Liu, 2004), SUBJ (Pang and Lee, 2004), MPQA (Wiebe et al., 2005), SST-2 (Socher et al., 2013), TREC (Voorhees and Tice, 2000) and MRPC (Dolan and Brockett, 2005). For each task, SentEval trains a logistic regression classifier on top of the sentence embeddings and tests the performance on the downstream task. For a fair comparison, we do not include models trained with auxiliary tasks like masked language modeling.

| Method | MR | CR | SUBJ | MPQA | SST | TREC | MRPC | Avg. |
|---|---|---|---|---|---|---|---|---|
| GloVe (avg.) | 77.25 | 78.30 | 91.17 | 87.85 | 80.18 | 83.00 | 72.87 | 81.52 |
| Skip-thought | 76.50 | 80.10 | 93.60 | 87.10 | 82.00 | 92.20 | 73.00 | 83.50 |
| BERT$_{\text{base}}$ (last avg.) | 78.66 | 86.25 | 94.37 | 88.66 | 84.40 | 92.80 | 69.54 | 84.94 |
| IS-BERT$_{\text{base}}$ | 81.09 | 87.18 | 94.96 | 88.75 | 85.96 | 88.64 | 74.24 | 85.83 |
| SimCSE-BERT$_{\text{base}}$ | 81.18 | 86.46 | 94.45 | 88.88 | 85.50 | 89.80 | 74.43 | 85.81 |
| ArcCSE-BERT$_{\text{base}}$ | 79.91 | 85.25 | 99.58 | 89.21 | 84.90 | 89.20 | 74.78 | 86.12 |
| BERT$_{\text{large}}$ (last avg.) | 84.30 | 89.22 | 95.60 | 86.93 | 89.29 | 91.40 | 71.65 | 86.91 |
| SimCSE-BERT$_{\text{large}}$ | 85.36 | 89.38 | 95.39 | 89.63 | 90.44 | 91.80 | 76.41 | 88.34 |
| ArcCSE-BERT$_{\text{large}}$ | 84.34 | 88.82 | 99.58 | 89.79 | 90.50 | 92.00 | 74.78 | 88.54 |

Table 2: Sentence representation performance on SentEval transfer tasks. We report the accuracy results of both $\mathrm{BERT}_{\mathrm{base}}$ and $\mathrm{BERT}_{\mathrm{large}}$ level models.

The results are shown in Table 2.
We can see that ArcCSE performs on par with or better than the baseline methods at both the $\mathrm{BERT}_{\mathrm{base}}$ and $\mathrm{BERT}_{\mathrm{large}}$ levels. This demonstrates the effectiveness of our method in learning domain-specific sentence embeddings.

# 4.4 Ablation Studies

Effect of Angular Margin The angular margin $m$ in the ArcCon loss directly affects the discriminative power. To investigate its effect, we conduct an experiment varying $m$ from 0 to 20 degrees in steps of 2 degrees. We tune this hyper-parameter based on Spearman's correlation on the development set of STS-B, following previous works (Kim et al., 2021; Gao et al., 2021). The results are shown in Figure 4.

![](images/b6bc3a52cb81388b00e2e917c1d0646d1a2fa6839a363ea138a34bf66527186b.jpg)
Figure 4: Effect of the angular margin $m$ in the ArcCon loss. Results are reported on the development set of STS-B based on Spearman's correlation.

We can see that the best performance is achieved when $m = 10$; either a larger or a smaller margin degrades the performance. This matches our intuition, since a small $m$ may have little effect, and a large $m$ may negatively influence the modeling of positive pair relations.

Effect of Temperature The temperature $\tau$ in the ArcCon loss affects its effectiveness, so we carry out an experiment with $\tau$ varying from 0.01 to 0.1 in steps of 0.01. The results are shown in Figure 5. We can see that the model performs best when $\tau = 0.05$, so we use this value throughout our experiments.

![](images/5a7e0e748fe82b516669a92b0e19ec7af6ede4059afd9893d43d6d6869806927.jpg)
Figure 5: Effect of the temperature $\tau$ in the ArcCon loss. Results are reported on the development set of STS-B based on Spearman's correlation.

| ArcCSE-BERT$_{\text{base}}$ | STS12 | STS13 | STS14 | STS15 | STS16 | STS-B | SICK-R | Avg. |
|---|---|---|---|---|---|---|---|---|
| w/ Dropout on/off | 72.08 | 84.27 | 76.25 | 82.32 | 79.54 | 79.92 | 72.39 | 78.11 |
| w/ Dropout mix/off | 70.51 | 83.59 | 75.85 | 82.30 | 78.87 | 78.74 | 71.58 | 77.35 |
| w/ Dropout on/on | 69.62 | 83.13 | 74.42 | 82.15 | 78.39 | 78.39 | 70.89 | 76.71 |

Table 3: Effect of on-off switching of dropout. We use different dropout settings to generate the sentence embeddings used for the ArcCon loss and the triplet loss respectively. "on", "off" and "mix" mean turning dropout on, turning dropout off, and using different settings for the two passes.

Effect of Masking Ratios The masking ratios determine the sentences generated for entailment relation modeling and their differences in semantics, so we conduct an experiment to explore the effect of different masking ratios. The first masking ratio $r_1$ is varied from $10\%$ to $25\%$ in steps of $5\%$. The second masking ratio $r_2$ is derived by adding an extra value $r_d$ to $r_1$, where $r_d$ is varied from $10\%$ to $35\%$ in steps of $5\%$. The results are shown in Figure 6.

![](images/fb8880ff4a8f7d4e1ea63fec936e1c6688d42666184749fe531111d6c0345ba6.jpg)
Figure 6: Effect of the masking ratios. Different lines correspond to different values of $r_1$. The abscissa is $r_d$, representing the difference between $r_1$ and $r_2$. Results are reported on the development set of STS-B based on Spearman's correlation.

We can see that large differences between the two masking ratios tend to lead to lower Spearman's correlations than smaller ones. The reason may be that the larger the semantic difference is, the easier it is for the model to estimate the entailment relations among the triplet sentences, which makes the triplet loss less helpful. The best performance is achieved when $r_1$ is $20\%$ and $r_2$ is $40\%$, with a corresponding Spearman's correlation of 0.847. We use these values as our hyper-parameters.
Effect of on-off Switching of Dropout The on-off switching of dropout in the BERT-like sentence encoder directly affects the generated sentence representations. Since dropout performs a kind of averaging over an ensemble of possible subnetworks, an embedding generated with dropout turned off can be seen as a kind of "averaged" representation, while an embedding generated with dropout turned on can be seen as generated through a single subnetwork.

In ArcCSE, we use the embeddings generated with the encoder dropout turned on as input to the ArcCon loss, which regularizes the network by making representations generated through different subnetworks similar. When modeling the entailment relation, we generate "averaged" representations with dropout turned off to avoid inaccurate signals. To verify this intuition, we conduct two experiments with different dropout settings. In the first experiment, we feed ArcCon two sentence representations generated with dropout turned on and off respectively. We carry out this experiment with angular margins ranging from 2 to 12 degrees and report the best result. In the second one, we feed the triplet loss representations generated with dropout turned on and keep the other settings unchanged. The results are shown in Table 3. We can see that the original settings, which turn dropout on for ArcCon and off for the triplet loss, achieve the best performance, confirming our intuition.

Effect of Coefficient in the Training Objective The coefficient $\lambda$ in the final optimization objective adjusts the relative weights of the ArcCon and triplet losses, as shown in formula (5). To find the most suitable $\lambda$, we conduct an experiment varying $\lambda$ from 0 to 1.2 in steps of 0.1. The results are shown in Figure 7.
![](images/fbcd075addd74f3e4066789f2b2f118fd39a8760ee0995be4182a7f6d82c6521.jpg)
Figure 7: Effect of the coefficient in the training objective. Results are reported on the development set of STS-B based on Spearman's correlation.

We can see that the best performance is achieved when $\lambda = 0.1$, with a corresponding Spearman's correlation of 0.847. This demonstrates that we can get the best performance by combining the ArcCon and triplet losses with a proper $\lambda$.

# 4.5 Alignment and Uniformity Analysis

Alignment and uniformity are two properties closely related to contrastive learning and can be used to measure the quality of representations (Wang and Isola, 2020). Alignment favors encoders that generate similar representations for similar instances. It can be defined as the expected distance between embeddings of positive paired instances:

$$
\ell_{\text{align}} = \underset{(x, x^+) \sim p_{\text{pos}}}{\mathbb{E}} \|f(x) - f(x^+)\|^2 \tag{6}
$$

where $p_{\text{pos}}$ denotes the distribution of positive paired instances. Uniformity prefers uniformly distributed representations, which helps preserve maximal information. It can be defined as:

$$
\ell_{\text{uniform}} = \log \underset{x, y \sim p_{\text{data}}}{\mathbb{E}} e^{-2\|f(x) - f(y)\|^2} \tag{7}
$$

where $p_{\text{data}}$ denotes the whole data distribution.

To justify the inner workings of our approach, we calculate the alignment and uniformity metrics on the STS-B development set every 10 steps during training. We compare our approach with SimCSE and visualize the results in Figure 8. We can see that, compared to the original BERT checkpoint, both ArcCSE and SimCSE improve the alignment and uniformity measures during training. ArcCSE performs better on the alignment measure and on par with SimCSE on the uniformity measure.
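For reference, the two measures in Eqs. (6) and (7) can be estimated over a batch of embeddings as follows (a NumPy sketch assuming L2-normalized embeddings, as is standard for these measures):

```python
import numpy as np

def alignment(h, h_pos):
    """Eq. (6): mean squared distance between positive-pair embeddings."""
    return float(np.mean(np.sum((h - h_pos) ** 2, axis=1)))

def uniformity(h):
    """Eq. (7): log of the mean Gaussian potential over distinct pairs."""
    sq = np.sum((h[:, None, :] - h[None, :, :]) ** 2, axis=-1)
    iu = np.triu_indices(len(h), k=1)           # pairs with x != y
    return float(np.log(np.mean(np.exp(-2.0 * sq[iu]))))

rng = np.random.default_rng(0)
h = rng.standard_normal((32, 8))
h /= np.linalg.norm(h, axis=1, keepdims=True)   # project onto the sphere
assert alignment(h, h) == 0.0                   # identical pairs: perfectly aligned
assert uniformity(h) < 0.0                      # spread points: lower is better
```

Lower is better for both quantities; Figure 8 tracks exactly these two metrics during training.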
This verifies the intuition of our approach and demonstrates that ArcCSE helps improve the quality of sentence representations.

![](images/15acf309d165420665da36be3a5b9d1dba5d70b692e6fafaf3f426cf2e8c28f3.jpg)
![](images/413e6b39f52c339f6112285ea57714763e779bd4696a93bd2b7e798463a5b27e.jpg)
(a) $\ell_{\mathrm{align}}$
(b) $\ell_{\mathrm{uniform}}$
Figure 8: $\ell_{\mathrm{align}}$ and $\ell_{\mathrm{uniform}}$ of ArcCSE and SimCSE, visualized by calculating alignment and uniformity every 10 training steps. For both measures, lower numbers are better.

# 5 Conclusion

In this work, we propose ArcCSE, a self-supervised contrastive learning framework for learning sentence representations. We propose a new optimization objective, the ArcCon loss, to model pairwise sentence relations with enhanced discriminative power, and a new self-supervised task to model the partial order of semantics between sentences. Experimental results on semantic textual similarity (STS) tasks and SentEval tasks demonstrate that both techniques bring substantial improvements and that our method outperforms the previous state-of-the-art methods for sentence representation learning.

# References

Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. SemEval-2015 task 2: Semantic textual similarity, English, Spanish and pilot on interpretability. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 252–263, Denver, Colorado. Association for Computational Linguistics.
Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. SemEval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 81–91, Dublin, Ireland.
Association for Computational Linguistics.
Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 497–511, San Diego, California. Association for Computational Linguistics.
Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 task 6: A pilot on semantic textual similarity. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 385–393, Montréal, Canada. Association for Computational Linguistics.
Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, and Weiwei Guo. 2013. *SEM 2013 shared task: Semantic textual similarity. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pages 32–43, Atlanta, Georgia, USA. Association for Computational Linguistics.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6–12, 2020, virtual.
Fredrik Carlsson, Amaru Cuba Gyllensten, Evangelia Gogoulou, Erik Ylipää Hellqvist, and Magnus Sahlgren. 2021. Semantic re-tuning with contrastive tension. In ICLR.
Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Association for Computational Linguistics.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 1597–1607. PMLR.
S. Chopra, R. Hadsell, and Y. LeCun. 2005. Learning a similarity metric discriminatively, with application to face verification. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pages 539–546.
Alexis Conneau and Douwe Kiela. 2018. SentEval: An evaluation toolkit for universal sentence representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Juan Manuel Coria, Sahar Ghannay, Sophie Rosset, and Hervé Bredin. 2020. A metric learning approach to misogyny categorization. In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 89–94, Online. Association for Computational Linguistics.
Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. 2019. ArcFace: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).
Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. 2021. Sharpness-aware minimization for efficiently improving generalization.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.
John Giorgi, Osvald Nitski, Bo Wang, and Gary Bader. 2021. DeCLUTR: Deep contrastive learning for unsupervised textual representations. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 879–895, Online. Association for Computational Linguistics.
Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1367–1377, San Diego, California. Association for Computational Linguistics.
Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In KDD '04, pages 168–177, New York, NY, USA. Association for Computing Machinery.
Taeuk Kim, Kang Min Yoo, and Sang-goo Lee. 2021. Self-guided contrastive learning for BERT sentence representations. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2528–2540, Online. Association for Computational Linguistics.
Ryan Kiros, Yukun Zhu, Russ R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc.
Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9119–9130, Online. Association for Computational Linguistics.
Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, and Le Song. 2017. SphereFace: Deep hypersphere embedding for face recognition. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6738–6746.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence representations. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 – May 3, 2018, Conference Track Proceedings. OpenReview.net.
Xiaofei Ma, Cicero Nogueira dos Santos, and Andrew O. Arnold. 2021. Contrastive fine-tuning improves robustness for neural rankers. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 570–582, Online. Association for Computational Linguistics.
Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In LREC.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc.
Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. 2018. Unsupervised learning of sentence embeddings using compositional n-gram features. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 528–540, New Orleans, Louisiana. Association for Computational Linguistics.
Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 271–278, Barcelona, Spain.
Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 115–124, Ann Arbor, Michigan. Association for Computational Linguistics.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67.
Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics. +Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics. +Kihyuk Sohn. 2016. Improved deep metric learning with multi-class n-pair loss objective. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16, page 1857-1865, Red Hook, NY, USA. Curran Associates Inc. +Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. 2016. Deep metric learning via lifted + +structured feature embedding. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4004-4012. +Jianlin Su, Jiarun Cao, Weijie Liu, and Yangyiwen Ou. 2021. Whitening sentence representations for better semantics and faster retrieval. +Ellen M. Voorhees and Dawn M. Tice. 2000. Building a question answering test collection. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '00, page 200-207, New York, NY, USA. Association for Computing Machinery. +Hao Wang, Yitong Wang, Zheng Zhou, Xing Ji, Dihong Gong, Jingchao Zhou, Zhifeng Li, and Wei Liu. 2018. Cosface: Large margin cosine loss for deep face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). +Tongzhou Wang and Phillip Isola. 2020. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. 
In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 9929-9939. PMLR. +Yandong Wen, Kaipeng Zhang, Zhifeng Li, and Yu Qiao. 2016. A discriminative feature learning approach for deep face recognition. In Computer Vision - ECCV 2016, pages 499-515, Cham. Springer International Publishing. +Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. *Language Resources and Evaluation*, 39:165-210. +Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics. +Yuanmeng Yan, Rumei Li, Sirui Wang, Fuzheng Zhang, Wei Wu, and Weiran Xu. 2021. ConSERT: A contrastive framework for self-supervised sentence representation transfer. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5065-5075, Online. Association for Computational Linguistics. +Yan Zhang, Ruidan He, Zuozhu Liu, Kwan Hui Lim, and Lidong Bing. 2020. An unsupervised sentence + +embedding method by mutual information maximization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1601-1610, Online. Association for Computational Linguistics. 
\ No newline at end of file diff --git a/acontrastiveframeworkforlearningsentencerepresentationsfrompairwiseandtriplewiseperspectiveinangularspace/images.zip b/acontrastiveframeworkforlearningsentencerepresentationsfrompairwiseandtriplewiseperspectiveinangularspace/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..a845c587a105f175acd7ede8c796c01fef2133e9 --- /dev/null +++ b/acontrastiveframeworkforlearningsentencerepresentationsfrompairwiseandtriplewiseperspectiveinangularspace/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:299ae2ea4c8239dd25ac2ae1117f8b9d5fa079f8173faf0cfac4346087b9535e +size 557400 diff --git a/acontrastiveframeworkforlearningsentencerepresentationsfrompairwiseandtriplewiseperspectiveinangularspace/layout.json b/acontrastiveframeworkforlearningsentencerepresentationsfrompairwiseandtriplewiseperspectiveinangularspace/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..cf6139ecf5c84fd15e1a61d2b5b9918e9c8f5356 --- /dev/null +++ b/acontrastiveframeworkforlearningsentencerepresentationsfrompairwiseandtriplewiseperspectiveinangularspace/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1c864a44346b9a83d47361cc0820e5a4783eefe468cae45255d3691a763b438f +size 428684 diff --git a/activeevaluationefficientnlgevaluationwithfewpairwisecomparisons/cc883f55-23f2-4be4-aabb-45ec2d48946a_content_list.json b/activeevaluationefficientnlgevaluationwithfewpairwisecomparisons/cc883f55-23f2-4be4-aabb-45ec2d48946a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..fa6729fc68f940d73651b645ffa0658d89092e17 --- /dev/null +++ b/activeevaluationefficientnlgevaluationwithfewpairwisecomparisons/cc883f55-23f2-4be4-aabb-45ec2d48946a_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4955c574bfd4b4b0516c25bb923332d9de2beedbbc496e1c5fd2133224649c7d +size 148607 diff --git 
a/activeevaluationefficientnlgevaluationwithfewpairwisecomparisons/cc883f55-23f2-4be4-aabb-45ec2d48946a_model.json b/activeevaluationefficientnlgevaluationwithfewpairwisecomparisons/cc883f55-23f2-4be4-aabb-45ec2d48946a_model.json new file mode 100644 index 0000000000000000000000000000000000000000..349ed2728452fa299e861f6cabf3ddccbdd927b8 --- /dev/null +++ b/activeevaluationefficientnlgevaluationwithfewpairwisecomparisons/cc883f55-23f2-4be4-aabb-45ec2d48946a_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd298ea448854488230b26167f25b0cc29fd8cafef9aceaf1db7c6f6617b0b60 +size 177329 diff --git a/activeevaluationefficientnlgevaluationwithfewpairwisecomparisons/cc883f55-23f2-4be4-aabb-45ec2d48946a_origin.pdf b/activeevaluationefficientnlgevaluationwithfewpairwisecomparisons/cc883f55-23f2-4be4-aabb-45ec2d48946a_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..37f88aa3b6ed38aba398440a31b45bf562feae7a --- /dev/null +++ b/activeevaluationefficientnlgevaluationwithfewpairwisecomparisons/cc883f55-23f2-4be4-aabb-45ec2d48946a_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d4f23584e0e14a2d1ac72008ee9d35c10e80ac1669a3138da06d8b1f25306210 +size 1143559 diff --git a/activeevaluationefficientnlgevaluationwithfewpairwisecomparisons/full.md b/activeevaluationefficientnlgevaluationwithfewpairwisecomparisons/full.md new file mode 100644 index 0000000000000000000000000000000000000000..f0da4451ea1ea7dafac0b16a5b8b859bd390233e --- /dev/null +++ b/activeevaluationefficientnlgevaluationwithfewpairwisecomparisons/full.md @@ -0,0 +1,565 @@ +# Active Evaluation: Efficient NLG Evaluation with Few Pairwise Comparisons + +Akash Kumar Mohankumar* + +Microsoft + +Bangalore, India + +makashkumar@microsoft.com + +Mitesh M. 
Khapra

Indian Institute of Technology Madras

RBCDSAI, IIT Madras

miteshk@cse.iitm.ac.in

# Abstract

Recent studies have shown the advantages of evaluating NLG systems using pairwise comparisons as opposed to direct assessment. Given $k$ systems, a naive approach for identifying the top-ranked system would be to uniformly obtain pairwise comparisons from all $\binom{k}{2}$ pairs of systems. However, this can be very expensive as the number of human annotations required would grow quadratically with $k$. In this work, we introduce Active Evaluation, a framework to efficiently identify the top-ranked system by actively choosing system pairs for comparison using dueling bandit algorithms. We perform extensive experiments with 13 dueling bandit algorithms on 13 NLG evaluation datasets spanning 5 tasks and show that the number of human annotations can be reduced by $80\%$. To further reduce the number of human annotations, we propose model-based dueling bandit algorithms which combine automatic evaluation metrics with human evaluations. Specifically, we eliminate sub-optimal systems even before the human annotation process and perform human evaluations only on test examples where the automatic metric is highly uncertain. This reduces the number of human annotations required further by $89\%$. In effect, we show that identifying the top-ranked system requires only a few hundred human annotations, which grow linearly with $k$. Lastly, we provide practical recommendations and best practices to identify the top-ranked system efficiently. Our code has been made publicly available at https://github.com/akashkm99/duelnlg

# 1 Introduction

In the last few years, the field of NLG has made rapid progress with the advent of large-scale models trained on massive amounts of data (Vaswani et al., 2017; Xue et al., 2020; Liu et al., 2020; Brown et al., 2020). However, evaluation of NLG systems continues to be a challenge.
On the one hand, we have automatic evaluation metrics which are easy to compute but unreliable. In particular, many studies have shown that they do not correlate well with human judgments (Novikova et al., 2017; Elliott and Keller, 2014; Sai et al., 2019, 2020a,b). On the other hand, we have human evaluations, which are relatively more reliable but tedious, expensive, and time-consuming. Further, recent studies have highlighted some limitations of human evaluations that involve direct assessment on an absolute scale, e.g., a Likert scale. Specifically, human evaluations using direct assessment have been shown to suffer from annotator bias, high variance, and sequence effects, where the annotation of one item is influenced by preceding items (Kulikov et al., 2019; Sudoh et al., 2021; Liang et al., 2020; See et al., 2019; Mathur et al., 2017).

In this work, we focus on reducing the cost and time required for human evaluations while not compromising on reliability. We take motivation from studies which show that selecting the better of two options is much easier for human annotators than providing an absolute score, which requires annotators to maintain a consistent standard across samples (Kendall, 1948; Simpson and Gurevych, 2018). In particular, recent works show that ranking NLG systems using pairwise comparisons is a more reliable alternative than using direct assessment (See et al., 2019; Li et al., 2019; Sedoc et al., 2019; Dhingra et al., 2019). While this is promising, a naive approach for identifying the top-ranked system from a set of $k$ systems using uniform exploration is prohibitively expensive. Specifically, uniform exploration obtains an equal number of annotations for all the $\binom{k}{2}$ system pairs; as a result, the number of required human annotations grows as $O(k^2)$.

To reduce the number of pairwise annotations, we introduce Active Evaluation, a framework to efficiently identify the top-ranked NLG system.
Our Active Evaluation framework consists of a learner that selects a pair of systems to compare at each time step. The learner then receives a feedback signal indicating the (human) preference between the selected systems on one input context, randomly sampled from the test dataset. The learner's objective is to reliably compute the top-ranked system with as few human annotations as possible. We adopt algorithms from the stochastic dueling bandits literature (Bengs et al., 2021) to decide which pair of NLG systems to compare at each time step. To check if existing dueling bandit algorithms can indeed provide reliable top-rank estimates with minimal annotations, we evaluate 13 such algorithms on 13 NLG evaluation datasets spanning five tasks, viz., machine translation, summarization, data-to-text generation, paraphrase generation, and grammatical error correction. We show that the best performing dueling bandit algorithm can reduce the number of human annotations by $80\%$ when compared to uniform exploration.

To further reduce human annotations, we leverage automatic evaluation metrics in our Active Evaluation framework. We utilize existing automatic metrics such as BLEU (Papineni et al., 2002), BertScore (Zhang et al., 2020), etc., for pairwise evaluations by converting the direct evaluation scores into preference probabilities using pairwise probability models. We also develop trained pairwise metrics that directly predict the comparison outcome given pairs of generated texts and context or reference as input.
To incorporate such evaluation metrics in our Active Evaluation framework, we propose three model-based dueling bandit algorithms, viz., (i) Random Mixing: human annotations and evaluation metric predictions are randomly mixed, (ii) Uncertainty-aware Selection: human annotations are obtained only when the predictions from the evaluation metric are highly uncertain, and (iii) UCB Elimination: poorly performing NLG systems are eliminated using an Upper Confidence Bound (UCB) on the evaluation metric scores. Through our experiments, we show that the number of human annotations can be further reduced by $89\%$ on average (this reduction is over and above the $80\%$ reduction that we got earlier). In effect, we show that given $k$ systems, we can find the top-ranked NLG system efficiently with just a few hundred comparisons that vary as $O(k)$. Lastly, we provide practical recommendations to efficiently identify the top-ranked NLG system based on our empirical study on various design choices and hyperparameters.
Specifically, we first sample an input context $X^{(t)}$ from the test dataset and obtain the generated texts $Y_{1}^{(t)},Y_{2}^{(t)}$ from the chosen systems $s_1^{(t)},s_2^{(t)}$ . We then display the generated texts $Y_{1}^{(t)},Y_{2}^{(t)}$ along with the context $X^{(t)}$ to human annotators and obtain a comparison outcome $w^{(t)} = 1,0$ , or 0.5 denoting whether $Y_{1}^{(t)}$ is of better, worse, or equal (tie) quality as $Y_{2}^{(t)}$ . Note that the feedback $w^{(t)}$ indicates the preference on only one input sample and not the entire test dataset. The overall framework is depicted in figure 1. The learner's objective is to find the top-ranked system with as few pairwise comparisons as possible. + +# 2.2 Choosing System Pairs for Comparison + +The learner should decide the pair of systems $(s_1^{(t)}, s_2^{(t)})$ to compare at each time step $t$ . The naive approach is to uniformly explore all the $\binom{k}{2}$ system pairs. Specifically, the probability of selecting a pair $(i,j), i \neq j$ at time $t$ is given by + +$$ +P _ {u n i f o r m} ((s _ {1} ^ {(t)}, s _ {2} ^ {(t)}) = (i, j)) = \frac {1}{\binom {k} {2}} +$$ + +However, as we show in our experiments, the number of human annotations required to find the top-ranked system by this approach is very expensive and grows quadratically with the number of systems since we equally explore all $\binom{k}{2}$ pairs. To reduce the number of annotations, we use dueling bandit algorithms to actively choose pairs of systems to compare based on the history of previous + +![](images/ac10e652d6e2a71c7fcbb712a352f1582e11cd8e29125be135028b16eabb91a2.jpg) +Figure 1: Our Active Evaluation framework consisting of a learner that chooses a pair of systems to compare at each time step. The learner receives feedback from either human annotators or the automatic metric. + +observations. We provide an overview of 13 dueling bandits algorithms proposed in the literature in appendix B. 
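As a concrete illustration, the evaluation loop with the uniform-exploration baseline can be sketched as follows. This is a minimal simulation of our own, not the authors' implementation; the `feedback` callback stands in for the human annotator:

```python
import itertools
import random

def uniform_learner(k, rng):
    """Uniform exploration: sample each of the k-choose-2 system pairs
    with equal probability at every time step."""
    pairs = list(itertools.combinations(range(k), 2))
    while True:
        yield rng.choice(pairs)

def run_active_evaluation(k, feedback, num_steps, seed=0):
    """Simulate the loop: at each step pick a pair of systems, query a
    preference w in {0, 0.5, 1}, and accumulate per-system wins."""
    rng = random.Random(seed)
    learner = uniform_learner(k, rng)
    wins = [0.0] * k
    for _ in range(num_steps):
        i, j = next(learner)
        w = feedback(i, j)  # 1: system i preferred, 0: system j preferred, 0.5: tie
        wins[i] += w
        wins[j] += 1.0 - w
    return wins

# Toy preference oracle: system 0 always wins, all other pairs tie.
oracle = lambda i, j: 1.0 if i == 0 else (0.0 if j == 0 else 0.5)
wins = run_active_evaluation(k=3, feedback=oracle, num_steps=300)
```

Uniform exploration spreads the annotation budget over all pairs equally; a dueling bandit learner would instead adapt the pair distribution using the history of past outcomes.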
We refer the readers to Bengs et al. (2021) for a complete survey.

# 2.3 Identifying the top-ranked system

We now formalize the notion of the top-ranked system. Let $p_{ij}$ denote the preference probability of system $i$ over system $j$, i.e., the probability that a generated text from system $i$ is preferred over system $j$ in the test dataset. We say that a system $i$ "beats" system $j$ if $p_{ij} > \frac{1}{2}$. In other words, system $i$ beats system $j$ if the probability of winning in a pairwise comparison is larger for $i$ than it is for $j$. We define the top-ranked system $i^*$ as the one that beats all other systems, i.e., $p_{i^*j} > \frac{1}{2}, \forall j \in S - i^*$.

# 3 Pairwise Probability Models

Our Active Evaluation framework, which we described in the previous section, completely relied on human annotators to compare pairs of generated texts $(Y_1, Y_2)$ to provide the preference feedback $w$. We can further reduce the number of required human annotations by estimating the human preference feedback using automatic evaluation metrics. However, most existing evaluation metrics are designed for direct assessment and are not directly suitable for pairwise evaluations. In this section, we describe three pairwise probability models to convert direct evaluation scores into pairwise preference probabilities. Let $f(Y)$ denote the score provided by a direct assessment metric $f$ to a generated text $Y$ (the dependence of $f$ on the reference/context is omitted for brevity). The pairwise preference probability $\hat{p}(Y_1 \succ Y_2)$ between any two hypotheses $Y_1$ and $Y_2$ can be modeled in three different ways:

- Linear:

$$
\hat{p}(Y_1 \succ Y_2) = \frac{1}{2} + \left(f(Y_1) - f(Y_2)\right)
$$

- Bradley-Terry-Luce (BTL) (Bradley and Terry, 1952; Luce, 1979):

$$
\hat{p}(Y_1 \succ Y_2) = \frac{f(Y_1)}{f(Y_1) + f(Y_2)}
$$

- BTL-logistic:

$$
\hat{p}(Y_1 \succ Y_2) = \frac{1}{1 + \exp\left(-\left(f(Y_1) - f(Y_2)\right)\right)}
$$

As detailed in appendix C.2, we appropriately preprocess the scores $f(Y)$ to ensure that the preference probability lies between 0 and 1. We can now predict the comparison outcome $w$ by thresholding the preference probability at two thresholds $\tau_1$ and $\tau_2$ ($\geq \tau_1$) to incorporate ties, i.e.:

$$
\hat{w} = \begin{cases} 1, & \text{if } \hat{p}(Y_1 \succ Y_2) > \tau_2 \\ 0, & \text{if } \hat{p}(Y_1 \succ Y_2) < \tau_1 \\ 0.5, & \text{otherwise} \end{cases}
$$

We choose $\tau_1$ and $\tau_2$ using grid search on the validation set. Refer to appendix C.2 for more details.

# 4 Model-based Dueling Bandits

In the previous section, we discussed pairwise probability models to obtain the estimated preference probability $\hat{p}(Y_1 \succ Y_2)$ and the comparison outcome $\hat{w}$ using scores assigned by direct assessment metrics. We now propose three model-based dueling bandit algorithms wherein we combine such predictions from evaluation metrics with human annotations in the Active Evaluation framework.

# 4.1 Random Mixing

Here, we randomly provide either the real (human) or the evaluation metric predicted feedback to the learner. Specifically, at any time $t$, we use the predicted comparison outcome $\hat{w}^{(t)}$ as the feedback with probability $p_m$ and use human annotations $w^{(t)}$ as feedback with probability $1 - p_m$.
The hyperparameter $p_m$ controls the ratio of estimated and real feedback given to the learner. As with other hyperparameters, we tune $p_m$ on the validation set.

# 4.2 Uncertainty-aware Selection

In this algorithm, we estimate uncertainty in the evaluation metric predictions and decide to ask for human annotations only when the evaluation metric is highly uncertain. We specifically focus on trainable neural evaluation metrics such as Bleurt (Sellam et al., 2020), where we estimate the prediction uncertainty using recent advances in Bayesian deep learning. Let $\hat{p}(Y_1 \succ Y_2 | \theta)$ denote the preference probability modelled by a neural evaluation metric with parameters $\theta$. Given a training dataset $\mathcal{D}^{tr}$, Bayesian inference involves computing the posterior distribution $p(\theta | \mathcal{D}^{tr})$ and marginalizing over the parameters $\theta$:

$$
\hat{p}(Y_1 \succ Y_2 | \mathcal{D}^{tr}) = \int_{\theta} \hat{p}(Y_1 \succ Y_2 | \theta) \, p(\theta | \mathcal{D}^{tr}) \, d\theta
$$

However, computing the true posterior and averaging over all possible parameters is intractable in practice. Hence, several approximations have been proposed in variational inference, such as finding a surrogate distribution $q_{\phi}(\theta)$ for the true posterior. Gal and Ghahramani (2016) have shown that we can use the Dropout distribution (Srivastava et al., 2014) as the approximate posterior $q_{\phi}(\theta)$. Specifically, we can perform approximate Bayesian inference by applying Dropout during test time. Hence, the posterior predictive can now be approximated with Monte Carlo samples as follows:

$$
\hat{p}(Y_1 \succ Y_2 | \mathcal{D}^{tr}) \approx \frac{1}{L} \sum_{l=1}^{L} \hat{p}(Y_1 \succ Y_2 | \theta_l)
$$

where $\{\theta_l\}_{l=1}^L$ are $L$ samples from the Dropout distribution $q_{\phi}(\theta)$ (i.e., we apply Dropout $L$ times independently during testing).
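The Monte Carlo averaging can be sketched as follows. Since we do not have a real neural metric here, `dropout_prediction` is a hypothetical stand-in of our own that simulates one stochastic forward pass with Dropout active:

```python
import random
import statistics

def dropout_prediction(rng):
    # Stand-in for one test-time forward pass with Dropout, i.e. a draw of
    # p_hat(Y1 > Y2 | theta_l) with theta_l sampled from q_phi(theta)
    return min(max(rng.gauss(0.7, 0.05), 0.0), 1.0)

def mc_dropout_estimate(predict, L, seed=0):
    """Approximate p_hat(Y1 > Y2 | D_tr) by averaging L independent Dropout
    samples; also return the samples for downstream uncertainty estimation."""
    rng = random.Random(seed)
    samples = [predict(rng) for _ in range(L)]
    return sum(samples) / L, samples

p_bar, samples = mc_dropout_estimate(dropout_prediction, L=100)
spread = statistics.pstdev(samples)  # empirical std of the samples
```

The same set of samples also feeds the uncertainty measures used to decide when a human annotation is needed.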
We now discuss two different Bayesian uncertainty measures:

BALD: Bayesian Active Learning by Disagreement (BALD) (Houlsby et al., 2011) is defined as the mutual information between the model predictions and the model posterior. Let $p_l = \hat{p}(Y_1 \succ Y_2 | \theta_l)$, where $\theta_l \sim q_{\phi}(\theta)$, be the evaluation metric prediction using the $l^{th}$ sample $\theta_l$ from the Dropout distribution. Also, let $\bar{p} = \frac{1}{L} \sum_{l=1}^{L} p_l$ be the mean prediction. As shown by Gal et al. (2017), we can approximate the BALD measure using samples from the Dropout distribution as:

$$
\hat{\mathbb{I}} = \mathbb{H}(\bar{p}) - \frac{1}{L} \sum_{l=1}^{L} \mathbb{H}(p_l)
$$

where $\mathbb{H}$ is the binary entropy function. The BALD uncertainty score is essentially the difference between the entropy of the mean prediction $\bar{p}$ and the average entropy of the individual predictions $\{p_l\}_{l=1}^L$. Hence, the BALD uncertainty score is high when the metric's mean prediction is uncertain (high entropy) but the individual predictions are highly confident (low entropy), i.e., when the metric produces disagreeing predictions with high confidence.

STD: We also adopt the standard deviation of the preference probability taken over the posterior distribution as a measure of uncertainty:

$$
\sigma = \sqrt{\operatorname{Var}_{\theta \sim p(\theta | \mathcal{D}^{tr})} \left( \hat{p}(Y_1 \succ Y_2 | \theta) \right)}
$$

Similar to BALD, we can approximate the above measure using the empirical standard deviation of samples drawn from the Dropout distribution.

Our proposed algorithm asks for human annotations only if the uncertainty measure (BALD or STD) is above a particular threshold.

# 4.3 UCB Elimination

The key idea here is to eliminate a set of "poorly performing" NLG systems using the automatic metric and perform human evaluations with the remaining set of systems.
To eliminate sub-optimal systems, we first need to quantify a performance measure for the systems. We use the Copeland score (Zoghi et al., 2015), which is defined as the normalized total number of pairwise wins for a system: $C_i = \frac{1}{k - 1}\sum_{j\neq i}\mathbb{1}(p_{ij} > \frac{1}{2})$. The Copeland score is highest for the top-ranked system, with a value of 1, and it is less than 1 for all other systems. To estimate the Copeland score, we first predict the pairwise preference probability between any two systems $i$ and $j$ as follows:

$$
\hat{p}_{ij} = \frac{1}{N} \sum_{Y_1, Y_2 \in \mathcal{D}_{ij}} \hat{p}(Y_1 \succ Y_2 | \theta)
$$

where $\mathcal{D}_{ij}$ is the test dataset consisting of generated texts from systems $i$ and $j$, $N$ is the total number of test examples, and $\theta$ denotes the learned model parameters. We can now estimate the Copeland score $\hat{C}_i$ using the estimated preference $\hat{p}_{ij}$ and eliminate all systems with Copeland scores below a threshold. However, a major problem with this approach is that evaluation metrics are often inaccurate, and we could wrongly eliminate the true top-ranked system without performing any human evaluations. For example, consider the case where $i^*$ is the top-ranked system with $p_{i^*j} > 0.51, \forall j \in S - i^*$. If several of the predicted probabilities $\hat{p}_{i^*j}$ are less than 0.5, our top-ranked system $i^*$ will receive a low estimated Copeland score and will be incorrectly eliminated. To overcome this problem, we define an Upper Confidence Bound (UCB) on the preference probability using the uncertainty estimates that we described in 4.2.
Specifically, the upper confidence bound $\hat{u}_{ij}$ is given by $\hat{u}_{ij} = \hat{p}_{ij} + \alpha \hat{\sigma}_{ij}$, where $\alpha$ is a hyperparameter that controls the size of the confidence region and $\hat{\sigma}_{ij}^2$ is the estimated variance given by:

$$
\hat{\sigma}_{ij}^2 = \frac{1}{N^2} \sum_{Y_1, Y_2 \in \mathcal{D}_{ij}} \operatorname{Var}_{\theta \sim q_{\phi}(\theta)} \hat{p}(Y_1 \succ Y_2 | \theta)
$$

where $q_{\phi}(\theta)$ is the Dropout distribution. Using the upper confidence estimates $\hat{u}_{ij}$, we now define the optimistic Copeland score for a system $i$ as $\hat{C}_i^u = \frac{1}{k - 1}\sum_{j\neq i}\mathbb{1}(\hat{u}_{ij} > \frac{1}{2})$. Here, we consider a system $i$ to beat another system $j$ ($\hat{u}_{ij} > 0.5$) if either the estimated preference is high ($\hat{p}_{ij}$ is high) or there is a high uncertainty in the estimation ($\hat{\sigma}_{ij}$ is high). In UCB Elimination, we eliminate a system only if the optimistic Copeland score is below a threshold.

# 5 Experimental Setup

In this section, we describe (i) the NLG tasks and datasets used in our experiments, (ii) the automatic evaluation metrics used in our model-based algorithms, and (iii) the annotation complexity measure used for comparing dueling bandit algorithms.

# 5.1 Tasks & Datasets

We use a total of 13 datasets spanning 5 tasks in our experiments, which are summarized in table 1.

Machine Translation (MT): We use 7 human evaluation datasets collected from the WMT news translation tasks (Bojar et al., 2015, 2016), viz. fin $\rightarrow$ eng, rus $\rightarrow$ eng, deu $\rightarrow$ eng language pairs in WMT 2015 and tur $\rightarrow$ eng, ron $\rightarrow$ eng, cze $\rightarrow$ eng, deu $\rightarrow$ eng language pairs in WMT 2016.
+ +Grammatical Error Correction (GEC): We utilize two human evaluation datasets collected by (Napoles et al., 2019) where the source texts are from (i) student essays (FCE), and (ii) formal articles in Wikipedia (Wiki). We also use another GEC dataset collected by (Napoles et al., 2015a) from the CoNLL-2014 Shared Task (Ng et al., 2014). + +Data-to-Text Generation: We use the human evaluation data from the E2E NLG Challenge (Dusek + +
| Task | Dataset | # Systems | # Human Annotations |
| --- | --- | --- | --- |
| Machine Translation | WMT15 fin→eng | 14 | 31577 |
| | WMT15 rus→eng | 13 | 44539 |
| | WMT15 deu→eng | 13 | 40535 |
| | WMT16 tur→eng | 9 | 10188 |
| | WMT16 ron→eng | 7 | 15822 |
| | WMT16 cze→eng | 12 | 125788 |
| | WMT16 deu→eng | 10 | 20937 |
| Grammatical Error Correction | Grammarly (FCE) | 7 | 20328 |
| | Grammarly (Wiki) | 7 | 20832 |
| | CoNLL-2014 Shared Task | 13 | 16209 |
| Data-to-Text | E2E NLG Challenge | 16 | 17089 |
| Paraphrase | ParaBank | 28 | 151148 |
| Summarization | TLDR OpenAI | 11 | 4809 |
Table 1: Description of tasks and datasets with the number of NLG systems and pairwise human annotations

et al., 2020). The task here is to generate natural language utterances from dialogue acts.

Paraphrase Generation: We use human evaluations of model-generated English paraphrases released with the ParaBank dataset (Hu et al., 2019).

Summarization: We make use of the human evaluations (Stiennon et al., 2020) of GPT3-like transformers on the TL;DR dataset (Volske et al., 2017).

We provide further details, including preprocessing steps and download links, in appendix A.1.

# 5.2 Automatic NLG Evaluation Metrics

We can predict the comparison outcome $w$ using two approaches. First, we can use pairwise probability models with existing direct assessment metrics, as discussed in section 3. Alternatively, we can train evaluation metrics to directly predict the comparison outcome given pairs of generated texts and a context/reference as input. We discuss both approaches below:

Direct Assessment Metrics: We experiment with a total of 10 direct assessment metrics, viz. chrF (Popovic, 2015), BLEU-4 (Papineni et al., 2002), ROUGE-L (Lin, 2004), Embedding Average (Wieting et al., 2016), Vector Extrema (Forgues et al., 2014), Greedy Matching (Rus and Lintean, 2012), Laser (Artetxe and Schwenk, 2019), BertScore (Zhang et al., 2020), MoverScore (Zhao et al., 2019), and Bleurt (Sellam et al., 2020). We mention the implementation details in appendix A.2.

Pairwise Evaluation Metrics: We finetune the pretrained Electra-base transformer model (Clark et al., 2020) to directly predict the comparison outcome $w$. We curate task-specific human evaluation datasets consisting of tuples of the form (context/reference, hypothesis 1, hypothesis 2, label) for finetuning. Due to space constraints, we mention
AlgorithmWMT 2016WMT 2015GrammarlyCoNLL '14 TaskE2E NLGPara-BankTL; DR
tur-engron-engcze-engdeu-engfin-engrus-engdeu-engFCEWiki
Uniform19479246471026230322837122651779581153444361369657398252115893
SAVAGE1028918016663923932675128061211557672295939208414932552084733
DTS100899214861846544850133171647343551153018199199401704671354
CCB70171126753892884409211548109054386100202139216960871382518
Knockout34157889472334445104580959563134377780557708174184953
RUCB3125569733291636165545366222273256171902410924411491647
RCS244239243370153726623867529618164606126787263347091903
RMED2028511316128641707192940472093564793643753241321162
+ +Table 2: Annotation complexity of the top 7 best performingueling bandit algorithms along with the uniform exploration algorithm on 13 datasets spanning 5 NLG tasks + +details on the datasets and finetuning in appendix A.3 and A.4. For the summarization task alone, we couldn't find any pairwise human judgment dataset sufficient for finetuning the Electra model. + +# 5.3 Annotation Complexity Measure + +To evaluate the performance of dueling bandit algorithms, we define annotation complexity as the minimum number of human annotations needed by an algorithm to identify the top-ranked NLG system with high confidence. Let $i^{*}$ be the actual top-ranked system, and $i^{*}(n)$ denote the estimated winner by the algorithm after $n$ human annotations, then annotation complexity is defined as: + +$$ +\min n ^ {\prime}: \forall n \geq n ^ {\prime}, P (\hat {i ^ {*}} (n) = i ^ {*}) > 1 - \delta_ {a c c} +$$ + +where $\delta_{acc}$ is the allowable failure probability i.e. the learner can make a mistake with at most $\delta_{acc}$ probability. To compute the annotation complexity, we run eachueling bandit algorithm with 200 different random seeds and find the minimum number of human annotations after which the algorithm correctly returns the top-ranked NLG system in at least 190/200 runs (we set $\delta_{acc} = 0.05$ ). + +# 6 Results & Discussion + +We discuss the performance of dueling bandits algorithms in 6.1, automatic metrics in 6.2 and our proposed model-based algorithms in 6.3. Lastly in 6.4, we analyze the variation of annotation complexity with the number of NLG system. + +# 6.1 Analysis of Dueling Bandit Algorithms + +We report the annotation complexity of the top 7. +dueling bandit algorithms along with uniform exploration on 13 datasets in table 2. We observe that the annotation complexity of uniform exploration is consistently high across all 13 datasets. 
In particular, the required human annotations become prohibitively expensive when the number of NLG

![](images/0005a1a27d7bb4bf523b6c590223c9b79006c077f85186f876703947f4230834.jpg)
Figure 2: Top-rank prediction accuracy vs. number of human annotations used on the WMT 16 tur-eng dataset

systems is high, e.g., E2E NLG (16 systems) and ParaBank (28 systems). On the other hand, dueling bandit algorithms such as RUCB (Zoghi et al., 2014b), RCS (Zoghi et al., 2014a), and RMED (Komiyama et al., 2015) are able to identify the top-ranked system with far fewer annotations. In particular, RMED performs the best, with a reduction of $80.01\%$ in human annotations compared to uniform exploration. We also examine an alternative approach to assessing the performance of dueling bandit algorithms: we fix the number of human annotations (a fixed annotation budget) and compute the accuracy in predicting the top-ranked system. As we show in figure 2, RMED achieves the highest top-rank prediction accuracy for any given number of human annotations. We provide the complete results in appendix F.2.

# 6.2 Performance of Evaluation Metrics

Before we use automatic evaluation metrics in our proposed model-based algorithms, we analyze the effectiveness of these metrics for pairwise NLG evaluation. In table 3, we report the sentence-level accuracy in predicting the comparison outcome $w$ using direct assessment metrics with the Linear probability model (as discussed in section 3), along with our trained Electra metric. Across the tasks, we observe that metrics that utilize con
| Metric | WMT (Avg.) | Gramm. (Avg.) | CoNLL '14 Task | E2E NLG | ParaBank | TL;DR |
| --- | --- | --- | --- | --- | --- | --- |
| chrF | 62.6 | 75.7 | 78.4 | 47.4 | 66.1 | 34.2 |
| BLEU | 41.5 | 73.2 | 78.9 | 45.0 | 63.8 | 42.8 |
| ROUGE-L | 60.7 | 73.5 | 78.0 | 44.6 | 64.3 | 43.3 |
| Embed. Avg. | 56.5 | 70.1 | 76.0 | 49.8 | 64.9 | 38.2 |
| Greedy Match. | 59.5 | 68.1 | 77.7 | 46.5 | 64.7 | 43.1 |
| Vector Extr. | 59.4 | 66.0 | 76.3 | 44.9 | 63.7 | 47.4 |
| BertScore | 65.9 | 77.4 | 82.0 | 45.9 | 68.1 | 44.5 |
| Laser | 65.3 | 75.1 | 78.0 | 47.2 | 67.0 | 35.4 |
| MoverScore | 66.1 | 74.7 | 80.6 | 50.1 | 68.0 | 40.7 |
| Bleurt | 68.2 | 77.1 | 81.5 | 48.1 | 67.7 | 42.5 |
| Electra (Ours) | 65.7 | 74.0 | 81.6 | 54.3 | 81.7 | – |
Table 3: Sentence-level accuracy of direct assessment metrics with the linear probability model and our trained Electra metric in predicting the comparison outcome

![](images/962fee4e089046cbeff91c6e4353066b205896b8be28fec324a64ff130a28ee6.jpg)
Figure 3: Annotation complexity of Random Mixing with RMED using various automatic evaluation metrics

textualized word embeddings, such as BertScore, perform much better than $n$-gram and static word embedding-based metrics. In MT, we observe that Bleurt, specifically finetuned on WMT human judgment data, performs the best. In Data-to-Text and Paraphrase Generation, our trained Electra metric, finetuned on task-specific data, significantly outperforms the existing metrics. Interestingly, on the summarization task, all the existing metrics perform much worse than random predictions, i.e., they add no useful value in evaluation. Hence, we exclude the TLDR dataset from our analysis of model-based algorithms. Finally, as we show in appendix F.3, the performance is largely similar across all three probability models: Linear, BTL, and BTL-logistic.

# 6.3 Analysis of Model-based Algorithms

We use our proposed model-based algorithms and incorporate the two best-performing evaluation

![](images/f4282c05b195765f77f297275c269dba78a0e5a48730cb559f50e7ecdf09041e.jpg)
Figure 4: Annotation complexity of (model-free) uniform exploration and dueling bandit algorithms vs. the number of NLG systems on the ParaBank dataset

metrics, viz., Bleurt and Electra, with the best-performing dueling bandit algorithm, viz., RMED. We compare the annotation complexity of various model-based algorithms in table 4. We observe that the Random Mixing algorithm with Bleurt and Electra reduces annotation complexity by $70.43\%$ and $73.15\%$, respectively, when compared to the standard (model-free) RMED algorithm (row 1).
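As the name suggests, one way to realize Random Mixing is to answer each duel with the metric's predicted winner with some mixing probability and with a human annotation otherwise. The sketch below is our illustrative rendering under that reading; `human_annotate`, `metric_predict`, and `p_metric` are hypothetical names, and the precise procedure (including how the mixing probability is chosen) is the one defined earlier in the paper, not this sketch.

```python
import random

def random_mixing_duel(pair, human_annotate, metric_predict,
                       p_metric=0.5, rng=random):
    """Answer one pairwise duel: with probability p_metric use the
    automatic metric's predicted winner, otherwise ask a human.
    Returns (winner, used_human) so callers can track annotation cost."""
    if rng.random() < p_metric:
        return metric_predict(pair), False  # free model-based feedback
    return human_annotate(pair), True       # costly human annotation

# Hypothetical example: metric and humans agree that the lexicographically
# smaller system name always wins the duel.
rng = random.Random(0)
oracle = lambda pair: min(pair)
results = [random_mixing_duel(("A", "B"), oracle, oracle, 0.5, rng)
           for _ in range(1000)]
human_calls = sum(used for _, used in results)
print(human_calls)  # roughly half of the duels go to human annotators
```

The dueling bandit algorithm on top (e.g., RMED) is unchanged; only the source of each duel's outcome is mixed, which is why an inaccurate metric can inject noisy comparisons and inflate, rather than reduce, the number of human annotations ultimately needed.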
Our Uncertainty-aware Selection algorithm with the BALD measure further reduces the annotation complexity by around $37\%$ (compared with Random Mixing). We notice that our UCB Elimination algorithm also provides significant improvements over standard RMED. Since UCB Elimination is complementary to Uncertainty-aware Selection, we apply both algorithms together and observe the lowest annotation complexity, with a reduction of $89.54\%$ using Electra and $84.00\%$ using Bleurt over standard RMED. Lastly, in figure 3, we analyze the effect of using other evaluation metrics, such as BLEU and BertScore, in Random Mixing. Interestingly, we notice that using metrics such as BLEU, which have low accuracy, results in higher annotation complexity than standard (model-free) RMED on some datasets. That is, we may even require more human annotations to compensate for the inaccurate predictions of such metrics. However, with Laser, MoverScore, and BertScore, we observe significant reductions in annotation complexity. Please refer to appendix F.4 for further results.

# 6.4 Effect of Number of NLG Systems

We analyze how annotation complexity varies with the number of NLG systems. Specifically, we choose a subset of $k$ systems out of the total 28 systems in the ParaBank dataset and compute the annotation complexity among these $k$ systems. As shown in figure 4, the annotation complexity of uniform ex
Model-based AlgorithmEvaluation MetricWMT 2016WMT 2015GrammarlyCoNLL '14 TaskE2E NLGPara-Bank
tur-engron-engcze-engdeu-engfin-engrus-engdeu-engFCEWiki
None (Model free)None202851131612864170719294047209356479364375324132
Random MixingBleurt23712223151612753047714066719584115115874
Electra7283213385152236512650152923733023261044
Uncertainty-aware Selection (STD)Bleurt1031012192842042395302701859356129122876
Electra9787251478210388962125947723447081992137
Uncertainty-aware Selection (BALD)Bleurt101653136481811624052041289356116722619
Electra737164822311420753848828175155767858
UCB EliminationBleurt71126841131573419843355696711158382200514098
Electra26464911314142941126355639701115294311129870
Uncertainty (BALD) + UCB Elim.Bleurt314153762559823051623999952564570
Electra721736144517628828031245782402247
Table 4: Annotation complexity of model-based algorithms when used with RMED and the Bleurt/Electra metric.

ploration grows quadratically with $k$, as it explores all system pairs equally. However, for (model-free) dueling bandit algorithms such as RMED, the annotation complexity is much lower and varies only as $O(k)$. As shown in appendix F.1, we observed similar trends with model-based algorithms.

# 7 Practical Recommendations

We summarize the key insights from this study and provide practical recommendations for efficiently identifying the top-ranked NLG system.

1. Use the RMED dueling bandit algorithm to actively choose system pairs for comparison.
2. If human evaluation datasets are available, train a metric to directly predict the comparison outcome. Otherwise, use Bleurt with any of the Linear, BTL, or BTL-logistic models.
3. Manually annotate a few examples from the test dataset and evaluate the sentence-level accuracy of the metric. If the performance is poor (e.g., accuracy near the random baseline), do not use model-based approaches; obtain feedback only from human annotators.
4. If the metric is reasonably accurate, use UCB Elimination with Uncertainty-aware Selection (BALD). Tune the hyperparameters of these algorithms if possible. Otherwise, refer to appendix D for best practices developed by analyzing the sensitivity of model-based algorithms to their hyperparameters.
5. We can reduce annotation time by using multiple annotators in parallel. We observed that dueling bandit algorithms, though originally proposed for sequential annotation, are robust to asynchronous feedback from multiple annotators (refer to appendix E for details).

# 8 Related Work

Several works (Bojar et al., 2014, 2015; Sakaguchi et al., 2014, 2016) in Machine Translation and Grammatical Error Correction adopt the TrueSkill algorithm (Herbrich et al., 2006), originally used for ranking Xbox gamers, to efficiently rank NLG systems from pairwise annotations.
A recent work (Sakaguchi and Van Durme, 2018) proposes an online algorithm to rank NLG systems from pairwise preference feedback given as a continuous scalar with bounded support. The key difference in our work is that we focus on identifying the top-ranked system instead of ranking all the systems. Experimental studies of dueling bandit algorithms have been limited to synthetic simulations in a few works (Yue and Joachims, 2011; Urvoy et al., 2013). Most others (Zoghi et al., 2014b,a; Komiyama et al., 2015; Zoghi et al., 2015; Wu and Liu, 2016) focus on information retrieval applications that involve evaluating search retrieval algorithms (Radlinski et al., 2008). To the best of our knowledge, ours is the first work to extensively study the effectiveness of dueling bandit algorithms for NLG evaluation.

# 9 Conclusion & Future Work

In this work, we focused on the problem of identifying the top-ranked NLG system with few pairwise annotations. We formulated this problem in an Active Evaluation framework and showed that dueling bandit algorithms can reduce the number of human annotations by $80\%$. We then proposed model-based algorithms to combine automatic metrics with human evaluations and showed that human annotations can be reduced further, by $89\%$, thereby requiring only a few hundred human annotations to identify the top-ranked system. In future work, we would like to extend our analysis to the general problem of finding the top-$k$ ranked systems.

# Discussion on Ethics & Broader Impact

Evaluating Natural Language Generation (NLG) models accurately and reliably with few human annotations is an important aspect of NLG research and its real-world applications. Our work shows that we can significantly reduce the number of human annotations required to find the top-ranked NLG system with high confidence.
We envision that our work will benefit a wide range of applications such as translation systems, grammatical checkers, etc., where practitioners can find the best NLG model among a set of candidates more accurately and with fewer human annotations. Despite these improvements, there are still several challenges towards reliable NLG evaluation. For example, our model-based approaches, which use automatic metrics, may be subject to biases and other undesirable mistakes, depending on the metric and how they are trained in practice. Our approach may be used to evaluate models that generate fake news, toxic content, or other harmful applications, even though it is not specifically designed for such cases. + +# Acknowledgments + +We thank the Department of Computer Science and Engineering, IIT Madras, and the Robert Bosch Center for Data Science and Artificial Intelligence, IIT Madras (RBC-DSAI), for providing us resources required to carry out this research. We also wish to thank Google for providing access to TPUs through the TFRC program. We thank the anonymous reviewers for their constructive feedback in enhancing the work. + +# References + +Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Trans. Assoc. Comput. Linguistics, 7:597-610. +Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. 2002. Finite-time analysis of the multiarmed bandit problem. Mach. Learn., 47(2-3):235-256. +Viktor Bengs, Robert Busa-Fekete, Adil El Mesaoudi-Paul, and Eyke Hüllermeier. 2021. Preference-based online learning with dueling bandits: A survey. J. Mach. Learn. Res., 22:7:1-7:108. +Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Ales Tamchyna. 2014. Findings of the 2014 workshop on + +statistical machine translation. 
In Proceedings of the Ninth Workshop on Statistical Machine Translation, WMT@ACL 2014, June 26-27, 2014, Baltimore, Maryland, USA, pages 12-58. The Association for Computer Linguistics. +Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno-Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurélie Néveol, Mariana L. Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin M. Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In Proceedings of the First Conference on Machine Translation, WMT 2016, colocated with ACL 2016, August 11-12, Berlin, Germany, pages 131-198. The Association for Computer Linguistics. +Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 workshop on statistical machine translation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, WMT@EMNLP 2015, 17-18 September 2015, Lisbon, Portugal, pages 1-46. The Association for Computer Linguistics. +R. Bradley and M. E. Terry. 1952. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39:324. +Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. 
In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. +Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pretraining text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. +Bhuwan Dhingra, Manaal Faruqui, Ankur P. Parikh, Ming-Wei Chang, Dipanjan Das, and William W. Cohen. 2019. Handling divergent reference texts when evaluating table-to-text generation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages + +4884-4895. Association for Computational Linguistics. +Ondrej Dusek, Jekaterina Novikova, and Verena Rieser. 2020. Evaluating the state-of-the-art of end-to-end natural language generation: The E2E NLG challenge. Comput. Speech Lang., 59:123-156. +Desmond Elliott and Frank Keller. 2014. Comparing automatic evaluation measures for image description. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, Volume 2: Short Papers, pages 452-457. The Association for Computer Linguistics. +Moein Falahatgar, Yi Hao, Alon Orlitsky, Venkatadheeraj Pichapati, and Vaishakh Ravindrakumar. 2017a. Maxing and ranking with few assumptions. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 7060-7070. +Moein Falahatgar, Alon Orlitsky, Venkatadheeraj Pichapati, and Ananda Theertha Suresh. 2017b. Maximum selection and ranking under noisy comparisons. 
In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1088-1096. PMLR.
Gabriel Forgues, Joelle Pineau, Jean-Marie Larchevêque, and Réal Tremblay. 2014. Bootstrapping dialog systems with word embeddings. In NeurIPS, modern machine learning and natural language processing workshop, volume 2.
Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, volume 48 of JMLR Workshop and Conference Proceedings, pages 1050-1059. JMLR.org.
Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017. Deep Bayesian active learning with image data. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1183-1192. PMLR.
Ralf Herbrich, Tom Minka, and Thore Graepel. 2006. TrueSkill™: A Bayesian skill rating system. In Advances in Neural Information Processing Systems 19, Proceedings of the Twentieth Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 4-7, 2006, pages 569-576. MIT Press.
Neil Houlsby, Ferenc Huszar, Zoubin Ghahramani, and Máté Lengyel. 2011. Bayesian active learning
In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 6521-6528. AAAI Press. +M. Kendall. 1948. Rank correlation methods. +Junpei Komiyama, Junya Honda, Hisashi Kashima, and Hiroshi Nakagawa. 2015. Regret lower bound and optimal algorithm in dueling bandit problem. In Proceedings of The 28th Conference on Learning Theory, COLT 2015, Paris, France, July 3-6, 2015, volume 40 of JMLR Workshop and Conference Proceedings, pages 1141-1154. JMLR.org. +Ilia Kulikov, Alexander H. Miller, Kyunghyun Cho, and Jason Weston. 2019. Importance of search and evaluation strategies in neural dialogue modeling. In Proceedings of the 12th International Conference on Natural Language Generation, INLG 2019, Tokyo, Japan, October 29 - November 1, 2019, pages 76-87. Association for Computational Linguistics. +Margaret Li, Jason Weston, and Stephen Roller. 2019. ACUTE-EVAL: improved dialogue evaluation with optimized questions and multi-turn comparisons. CoRR, abs/1909.03087. +Weixin Liang, J. Zou, and Zhou Yu. 2020. Beyond user self-reported likert scale ratings: A comparison model for automatic dialog evaluation. ArXiv, abs/2005.10716. +Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics. +Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. Trans. Assoc. Comput. Linguistics, 8:726-742. +R. Luce. 1979. Individual choice behavior: A theoretical analysis. +Nitika Mathur, Timothy Baldwin, and Trevor Cohn. 2017. Sequence effects in crowdsourced annotations. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 2860-2865. Association for Computational Linguistics.

Soheil Mohajer, Changho Suh, and Adel M. Elmahdy. 2017. Active learning for top-k rank aggregation from noisy comparisons. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 2488-2497. PMLR.
Courtney Napoles, Maria Nadejde, and J. Tetreault. 2019. Enabling robust grammatical error correction in new domains: Data sets, metrics, and analyses. Transactions of the Association for Computational Linguistics, 7:551-566.
Courtney Napoles, Keisuke Sakaguchi, Matt Post, and J. Tetreault. 2015a. Ground truth for grammaticality correction metrics. In ACL.
Courtney Napoles, Keisuke Sakaguchi, Matt Post, and J. Tetreault. 2015b. Ground truth for grammaticality correction metrics. In ACL.
Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The conll-2014 shared task on grammatical error correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, CoNLL 2014, Baltimore, Maryland, USA, June 26-27, 2014, pages 1-14. ACL.
Jekaterina Novikova, Ondrej Dusek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 2241-2252. Association for Computational Linguistics.
Jekaterina Novikova, Ondrej Dusek, and Verena Rieser. 2018. Rankme: Reliable human ratings for natural language generation.
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 2 (Short Papers), pages 72-78. Association for Computational Linguistics. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. +R. Plackett. 1975. The analysis of permutations. Journal of The Royal Statistical Society Series C-applied Statistics, 24:193-202. +Maja Popovic. 2015. chrf: character n-gram f-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, + +WMT@EMNLP 2015, 17-18 September 2015, Lisbon, Portugal, pages 392-395. The Association for Computer Linguistics. +Filip Radlinski, Madhu Kurup, and Thorsten Joachims. 2008. How does clickthrough data reflect retrieval quality? In Proceedings of the 17th ACM Conference on Information and Knowledge Management, CIKM 2008, Napa Valley, California, USA, October 26-30, 2008, pages 43-52. ACM. +Vasile Rus and Mihai C. Lintean. 2012. A comparison of greedy and optimal assessment of natural language student input using word-to-word similarity metrics. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, BEA@NAACL-HLT 2012, June 7, 2012, Montréal, Canada, pages 157-162. The Association for Computer Linguistics. +Ananya B. Sai, Mithun Das Gupta, Mitesh M. Khapra, and Mukundhan Srinivasan. 2019. Re-evaluating ADEM: A deeper look at scoring dialogue responses. 
In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 6220-6227. AAAI Press.
Ananya B. Sai, Akash Kumar Mohankumar, Siddhartha Arora, and Mitesh M. Khapra. 2020a. Improving dialog evaluation with a multi-reference adversarial dataset and large scale pretraining. Trans. Assoc. Comput. Linguistics, 8:810-827.
Ananya B. Sai, Akash Kumar Mohankumar, and Mitesh M. Khapra. 2020b. A survey of evaluation metrics used for NLG systems. CoRR, abs/2008.12009.
Keisuke Sakaguchi and Benjamin Van Durme. 2018. Efficient online scalar annotation with bounded support. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 208-218. Association for Computational Linguistics.
Keisuke Sakaguchi, Courtney Napoles, Matt Post, and Joel R. Tetreault. 2016. Reassessing the goals of grammatical error correction: Fluency instead of grammaticality. Trans. Assoc. Comput. Linguistics, 4:169-182.
Keisuke Sakaguchi, Matt Post, and Benjamin Van Durme. 2014. Efficient elicitation of annotations for human evaluation of machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, WMT@ACL 2014, June 26-27, 2014, Baltimore, Maryland, USA, pages 1-11. The Association for Computer Linguistics.
Joao Sedoc, Daphne Ippolito, Arun Kirubarajan, Jai Thirani, Lyle Ungar, and Chris Callison-Burch. 2019.
Association for Computational Linguistics. +Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. 2019. What makes a good conversation? how controllable attributes affect human judgments. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 1702-1723. Association for Computational Linguistics. +Thibault Sellam, Dipanjan Das, and Ankur P. Parikh. 2020. BLEURT: learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7881-7892. Association for Computational Linguistics. +Edwin D. Simpson and Iryna Gurevych. 2018. Finding convincing arguments using scalable bayesian preference learning. Trans. Assoc. Comput. Linguistics, 6:357-371. +Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929-1958. +Nisan Stiennon, L. Ouyang, Jeff Wu, D. Ziegler, Ryan J. Lowe, Chelsea Voss, A. Radford, Dario Amodei, and Paul Christiano. 2020. Learning to summarize from human feedback. ArXiv, abs/2009.01325. +Katsuhito Sudoh, Kosuke Takahashi, and Satoshi Nakamura. 2021. Is this translation error critical?: Classification-based human and automatic machine translation evaluation focusing on critical errors. In HUMEVAL. +Balázs Szö rényi, Róbert Busa-Fekete, Adil Paul, and Eyke Hüllermeier. 2015. Online rank elicitation for plackett-luce: A dueling bandits approach. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 604-612. +Tanguy Urvoy, F. Clerot, R. Feraud, and Sami Naamane. 2013. 
Generic exploration and k-armed voting bandits. In ICML.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.

Michael Volske, Martin Potthast, Shahbaz Syed, and Benno Stein. 2017. TL;DR: Mining Reddit to learn automatic summarization. In Proceedings of the Workshop on New Frontiers in Summarization, NFiS@EMNLP 2017, Copenhagen, Denmark, September 7, 2017, pages 59-63. Association for Computational Linguistics.
John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sentence embeddings. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.
Huasen Wu and Xin Liu. 2016. Double Thompson sampling for dueling bandits. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 649-657.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mt5: A massively multilingual pre-trained text-to-text transformer. CoRR, abs/2010.11934.
Yisong Yue, Josef Broder, Robert Kleinberg, and Thorsten Joachims. 2012. The k-armed dueling bandits problem. J. Comput. Syst. Sci., 78(5):1538-1556.
Yisong Yue and Thorsten Joachims. 2011. Beat the mean bandit. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 - July 2, 2011, pages 241-248. Omnipress.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: evaluating text generation with BERT.
In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.

Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. 2019. MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 563-578. Association for Computational Linguistics.

Masrour Zoghi, Zohar S. Karnin, Shimon Whiteson, and Maarten de Rijke. 2015. Copeland dueling bandits. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 307-315.

Masrour Zoghi, Shimon Whiteson, Maarten de Rijke, and Rémi Munos. 2014a. Relative confidence sampling for efficient on-line ranker evaluation. In *Seventh ACM International Conference on Web Search and Data Mining*, WSDM 2014, New York, NY, USA, February 24-28, 2014, pages 73-82. ACM.

Masrour Zoghi, Shimon Whiteson, Rémi Munos, and Maarten de Rijke. 2014b. Relative upper confidence bound for the k-armed dueling bandit problem. In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, volume 32 of JMLR Workshop and Conference Proceedings, pages 10-18. JMLR.org.

# A Further Details on Experiments

# A.1 Tasks & Datasets

In table 5, we report the dataset statistics along with links to download the original datasets. We now discuss the preprocessing steps:

Machine Translation: In the WMT 2015 and 2016 tasks, human annotators were asked to rank five system outputs (translated sentences) relative to each other.
As recommended by the organizers (Bojar et al., 2014), we convert each of these rankings into $\binom{5}{2}$ pairwise comparisons of systems.

Grammatical Error Correction: The Grammarly evaluation datasets follow the RankME (Novikova et al., 2018) annotation style, where annotators were shown 8 outputs side by side for each input and were asked to provide a numerical score to each of them. We discarded one of the 8 outputs, which was human-crafted, and used the remaining 7 model-generated outputs. We then convert these 7 scores into $\binom{7}{2}$ pairwise comparisons of systems. Human evaluations of the CoNLL-2014 Shared Task followed the same process as WMT 2015. Hence, we follow the same preprocessing steps as for WMT.

Data-to-Text Generation: The E2E NLG Challenge also follows the RankME annotation format. We follow the same preprocessing steps as for the Grammarly datasets. Out of the total 21 systems, we held out 5 systems to train the Electra model and use the remaining 16 systems.

Paraphrase Generation: For ParaBank, we follow the same preprocessing steps as for the Grammarly datasets. Out of the total 35 systems, we held out 7 systems and only used the remaining 28 systems.

Summarization: We select 11 systems that have human annotations between each pair of them. These systems are GPT3-like models with varying model sizes (3B, 6B, 12B) and training strategies. We do not perform any additional preprocessing here.

# A.2 Direct Assessment Metrics: Implementation Details

We use the nlg-eval library for the implementation of BLEU-4, ROUGE-L, Embedding Average, Vector Extrema, and Greedy Matching. For chrF, Laser, and BertScore, we use the implementations from the VizSeq library. We use the official implementation released by the original authors for Mover
| Task | Dataset | # Systems | # Human Annotations | Label Distrib. (0-0.5-1) | Downloadable Link |
| --- | --- | --- | --- | --- | --- |
| Machine Translation | WMT15 fin-eng | 14 | 31577 | 37%-26%-37% | Click here |
| Machine Translation | WMT15 rus-eng | 13 | 44539 | 36%-27%-37% | |
| Machine Translation | WMT15 deu-eng | 13 | 40535 | 32%-36%-32% | |
| Machine Translation | WMT16 tur-eng | 9 | 10188 | 28%-44%-28% | Click here |
| Machine Translation | WMT16 ron-eng | 7 | 15822 | 38%-24%-38% | |
| Machine Translation | WMT16 cze-eng | 12 | 125788 | 38%-25%-37% | |
| Machine Translation | WMT16 deu-eng | 10 | 20937 | 37%-26%-37% | |
| Grammatical Error Correction | Grammarly (FCE) | 7 | 20328 | 29%-40%-31% | Click here |
| Grammatical Error Correction | Grammarly (Wiki) | 7 | 20832 | 29%-40%-31% | |
| Grammatical Error Correction | CoNLL-2014 Shared Task | 13 | 16209 | 23%-52%-25% | Click here |
| Data-to-Text Generation | E2E NLG Challenge | 16 | 17089 | 24%-50%-26% | Click here |
| Paraphrase Generation | ParaBank | 28 | 151148 | 44%-2%-54% | Click here |
| Summarization | TLDR OpenAI | 11 | 4809 | 49%-0%-51% | Click here |
Table 5: Description of tasks and datasets with the number of NLG systems, number of pairwise human annotations, label distribution, and the downloadable links to the datasets before preprocessing

Score and Bleurt. Among these metrics, Bleurt is the only trainable metric. We use the publicly released Bleurt-base checkpoint trained on WMT direct judgments data. As described in section 4.2, we apply Dropout to the Bleurt model during test time to estimate prediction uncertainty.

# A.3 Finetuning Datasets

Here, we describe the task-specific datasets used for finetuning the Electra model (the pairwise evaluation metric described in section 5.2). For MT, we used human evaluations from WMT 2013 and 2014, consisting of a total of $650\mathrm{k}$ examples. For GEC, we curated a training dataset of $180\mathrm{k}$ pairs of texts and human preferences using data released by Napoles et al. (2015b) and the development set released by Napoles et al. (2019). For data-to-text generation, we utilize $11\mathrm{k}$ examples from the 5 held-out systems in the E2E NLG Challenge (apart from the 16 systems used for evaluations). Lastly, we use a dataset of $180\mathrm{k}$ examples from the 7 held-out systems in the ParaBank dataset for paraphrase generation. We use a $90\%$-$10\%$ split to divide the data into train and validation sets. Note that these datasets do not have any overlap with the datasets used for evaluating dueling bandit algorithms.

# A.4 Finetuning Details

We use the pretrained Electra-base model (Clark et al., 2020) with 110M parameters (12 layers and 12 attention heads) as our base model. We finetune the model using the Adam optimizer with $\beta_{1} = 0.9$ and $\beta_{2} = 0.99$. We use a linear learning rate decay with a maximum learning rate of 1e-5 and warm-up for $10\%$ of training. We use a batch size of 128 and finetune for four epochs. We finetune all the models on Google Cloud TPU v3-8.
To estimate prediction uncertainty, we apply Dropout to the Electra model during test time as described in 4.2.

# B Summary of Dueling Bandit Algorithms

We now provide an overview of various dueling bandit algorithms in the literature. We first introduce a few additional notations and terminologies in B.1. Later, in B.2, we describe the various structural assumptions made by different dueling bandit algorithms. Finally, in B.3, we summarize the 13 dueling bandit algorithms that we analyze in this work.

# B.1 Notations and Terminologies

Let $\Delta_{ij} = p_{ij} - \frac{1}{2}$ where $p_{ij}$ is the preference probability of system $i$ over $j$, as defined in section 2.3. We call a system the Copeland winner if it beats more systems than any other system. Mathematically, a Copeland winner $i^*$ is defined as $i^* = \arg \max_i \sum_{j=1}^k \mathbb{1}(\Delta_{ij} > 0)$. A special case of the Copeland winner is the Condorcet winner, the system that beats all other systems. In all our NLG tasks and datasets, we observed that this special case holds, i.e., there exists a system that beats all other $k-1$ systems, and we define it as the top-ranked system. Nevertheless, we mention both definitions to distinguish algorithms that work for the general Copeland winner even when a Condorcet winner does not exist.

# B.2 Assumptions

All the dueling bandit algorithms that we analyze in this work assume a stochastic feedback setup in which the feedback is generated according to an underlying (unknown) stationary probabilistic process. Specifically, in our Active Evaluation framework, this is equivalent to assuming that the annotator preference is stationary over time and is given by some fixed distribution $p_a(w|Y_1^{(t)},Y_2^{(t)})$. Further, many dueling bandit algorithms make various assumptions on the true pairwise preferences and exploit these assumptions to derive theoretical guarantees (Bengs et al., 2021).
In table 6, we describe the various assumptions commonly made by dueling bandit algorithms. For example, the stochastic triangle inequality assumption (STI), described in row 4 of table 6, assumes that the true preference probabilities between systems obey the triangle inequality. We note here that one cannot verify the validity of these assumptions a priori since we do not have access to the true preferences.

# B.3 Algorithms

In table 7, we describe the various dueling bandit algorithms along with the assumptions (used to provide theoretical guarantees) and the target winner. We summarize these algorithms below:

IF: The Interleaved Filtering (IF) algorithm (Yue et al., 2012) consists of a sequential elimination strategy where a currently selected system $s_i$ is compared against the rest of the active systems (not yet eliminated). If a system $s_j$ beats the system $s_i$ with high confidence, then $s_i$ is eliminated, and $s_j$ is compared against all other active systems. Similarly, if the system $s_i$ beats $s_j$ with high confidence, then $s_j$ is eliminated, and $s_i$ continues to be compared against the remaining active systems. Under the assumptions of TO, SST, and STI, the authors provide theoretical guarantees on the expected regret achieved by IF.

BTM: Beat The Mean (BTM) (Yue and Joachims, 2011), similar to IF, is an elimination-based algorithm that selects the system $s_i$ with the fewest comparisons and compares it with a randomly chosen system from the set of active systems. Based on the comparison outcome, a score and confidence interval are assigned to the system $s_i$. BTM eliminates a system as soon as there is another system with a significantly higher score.
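The elimination pattern that IF and BTM share — duel, update an empirical score with a confidence interval, drop systems that are confidently worse — can be sketched as follows. This is a simplified illustration with our own names and a generic confidence radius, not the authors' exact BTM procedure:

```python
import random

def btm_style_round(active, scores, counts, compare, z=2.0):
    """One elimination-style step: duel the least-compared active system
    against a random active opponent, update its empirical score, and
    keep only systems whose upper confidence bound is not below the best
    lower bound. `compare(i, j)` returns 1 if i wins the duel, else 0."""
    i = min(active, key=lambda s: counts[s])
    j = random.choice([s for s in active if s != i])
    counts[i] += 1
    scores[i] += compare(i, j)

    def bounds(s):
        n = max(counts[s], 1)
        mean = scores[s] / n
        c = z * (1.0 / n) ** 0.5  # generic confidence radius (illustrative)
        return mean - c, mean + c

    best_low = max(bounds(s)[0] for s in active)
    return [s for s in active if bounds(s)[1] >= best_low]
```

With wide initial confidence intervals, no system is eliminated until enough duels have accumulated, which is the behavior the elimination-based algorithms above rely on.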
Knockout, Seq Elim, Single Elim: Knockout (Falahatgar et al., 2017b), Sequential Elimination (Falahatgar et al., 2017a), and Single Elimination (Mohajer et al., 2017) are all algorithms that proceed in a knockout-tournament fashion where the systems are randomly paired, and the winner of each duel plays the next round (losers are knocked out) until the overall winner is determined. During a duel, the algorithm repeatedly compares the two systems to reliably determine the winner. The key difference between the three algorithms lies in the assumptions they use and how they determine the number of comparisons required to identify the winning system of a duel with high probability.

Plackett Luce: The Plackett-Luce Condorcet winner identification algorithm (Szörényi et al., 2015) assumes that the true rank distribution follows the Plackett-Luce model (Plackett, 1975). The algorithm is based on a budgeted version of QuickSort. The authors show that it achieves a worst-case annotation complexity of the order $k \log k$ under the Plackett-Luce assumption.

RUCB: Relative Upper Confidence Bound (RUCB) (Zoghi et al., 2014b) is an adaptation of the well-known UCB algorithm (Auer et al., 2002) to the dueling bandit setup. Similar to UCB, RUCB selects the first system $s_t^{(1)}$ based on "optimistic" estimates of the pairwise preference probabilities, i.e., based on an upper confidence bound of the preference probabilities. The second system $s_t^{(2)}$ is chosen to be the one that is most likely to beat $s_t^{(1)}$.

RCS: Relative Confidence Sampling (RCS) (Zoghi et al., 2014a) follows a Bayesian approach by maintaining a posterior distribution over the preference probabilities. At each time step $t$, the algorithm samples preference probabilities from the posterior and simulates a round-robin tournament among the systems to determine the Condorcet winner.
The estimated Condorcet winner is chosen as the first system $s_t^{(1)}$, and the second system $s_t^{(2)}$ is chosen such that it has the best chance of beating $s_t^{(1)}$.

RMED: The Relative Minimum Empirical Divergence 1 (RMED) algorithm (Komiyama et al., 2015) maintains an empirical estimate of the "likelihood" that a system is the Condorcet winner. It then uses this estimate to sample the first system $s_t^{(1)}$ and selects the second system $s_t^{(2)}$ that is most likely to beat $s_t^{(1)}$.

SAVAGE: Sensitivity Analysis of Variables for Generic Exploration (SAVAGE) (Urvoy et al.,
| Assumption Name | Condition |
| --- | --- |
| Total Order (TO) | ∃ a total order > over S: i > j ⇔ Δij > 0 |
| Strong stochastic transitivity (SST) | Δij > 0, Δjk > 0 ⇒ Δik ≥ max(Δij, Δjk) |
| Relaxed stochastic transitivity (RST) | ∃γ ≥ 1: Δij > 0, Δjk > 0 ⇒ γΔik ≥ max(Δij, Δjk) |
| Stochastic triangle inequality (STI) | Δij > 0, Δjk > 0 ⇒ Δik ≤ Δij + Δjk |
| Condorcet winner (CW) | ∃i*: Δi*,j > 0, ∀j ∈ S - i* |
| PL model | The underlying rank distribution follows the Plackett-Luce (PL) model (Plackett, 1975; Luce, 1979) |
+ +Table 6: Various assumptions made by dueling bandit algorithms in the literature + +
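The Condorcet condition (CW) in table 6 and the winner definitions from B.1 are straightforward to make concrete; a minimal sketch with an illustrative preference matrix (the values are toy numbers, not from our datasets):

```python
import numpy as np

def copeland_winner(P):
    """Copeland winner: the system with the most pairwise wins, where
    system i wins against j iff Delta_ij = p_ij - 1/2 > 0.
    P[i][j] is the preference probability of system i over system j."""
    P = np.asarray(P, dtype=float)
    wins = ((P - 0.5) > 0).sum(axis=1)  # diagonal Delta_ii = 0 contributes nothing
    return int(np.argmax(wins)), wins

def condorcet_winner(P):
    """Condorcet winner: a system that beats every other system;
    returns None if no such system exists (e.g., a preference cycle)."""
    P = np.asarray(P, dtype=float)
    k = P.shape[0]
    for i in range(k):
        if all(P[i, j] > 0.5 for j in range(k) if j != i):
            return i
    return None

# Toy 3-system preference matrix: system 0 beats both others.
P = [[0.5, 0.7, 0.6],
     [0.3, 0.5, 0.8],
     [0.4, 0.2, 0.5]]
```

When the Condorcet winner exists, as we observed on all our datasets, it coincides with the Copeland winner; on a cyclic matrix `condorcet_winner` returns `None` while the Copeland winner is still defined.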
| Algorithm | Assumptions | Target |
| --- | --- | --- |
| IF (Yue et al., 2012) | TO+SST+STI | Condorcet |
| BTM (Yue and Joachims, 2011) | TO+RST+STI | Condorcet |
| Seq-Elim. (Falahatgar et al., 2017a) | SST | Condorcet |
| Plackett Luce (Szörényi et al., 2015) | PL model | Condorcet |
| Knockout (Falahatgar et al., 2017b) | SST+STI | Condorcet |
| Single Elim. (Mohajer et al., 2017) | TO | Condorcet |
| RUCB (Zoghi et al., 2014b) | CW | Condorcet |
| RCS (Zoghi et al., 2014a) | CW | Condorcet |
| RMED (Komiyama et al., 2015) | CW | Condorcet |
| SAVAGE (Urvoy et al., 2013) | - | Copeland |
| CCB (Zoghi et al., 2015) | - | Copeland |
| DTS (Wu and Liu, 2016) | - | Copeland |
| DTS++ (Wu and Liu, 2016) | - | Copeland |
Table 7: Summary of dueling bandit algorithms in the literature along with their theoretical assumptions and the target winner of the learner

2013) is a generic algorithm that can be adapted to various ranking problems such as Copeland winner identification. The SAVAGE (Copeland) algorithm, at each time step, randomly samples a pair of systems from the set of active system pairs (not yet eliminated) and updates the preference estimates. A system pair $(s_i, s_j)$ is eliminated if either (i) the result of the comparison between $s_i$ and $s_j$ is already known with high probability, or (ii) there exists some system $s_k$ whose estimated Copeland score is significantly higher than that of $s_i$ or $s_j$.

CCB: Copeland Confidence Bound (CCB) (Zoghi et al., 2015) is similar to the RUCB algorithm but is designed to identify the Copeland winner (a generalization of the Condorcet winner). The CCB algorithm maintains optimistic preference estimates and uses them to choose the first system $s_t^{(1)}$, and then selects the second system $s_t^{(2)}$ that is likely to discredit the hypothesis that $s_t^{(1)}$ is indeed the Copeland winner. The algorithm successively removes all other systems that are highly unlikely to be a Copeland winner.

DTS, DTS++: The Double Thompson Sampling (DTS) algorithm (Wu and Liu, 2016) maintains a posterior distribution over the pairwise preference matrix, and selects the system pair $s_t^{(1)}, s_t^{(2)}$ based on two independent samples from the posterior distribution. The algorithm updates the posterior distributions based on the comparison outcome and eliminates systems that are unlikely to be the Copeland winner. DTS++ is an improvement proposed by the same authors that differs from DTS in how it breaks ties. Both have the same theoretical guarantees, but DTS++ has been empirically shown to achieve better performance (in terms of regret minimization).
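As an illustration of the double-sampling step described above, a minimal DTS-style selection sketch with Beta posteriors over the pairwise preferences (elimination and tie-breaking are omitted; the names and simplifications are ours, not the authors' implementation):

```python
import numpy as np

def dts_select(wins, rng):
    """One DTS-style selection step. wins[i, j] counts duels won by
    system i against j; the posterior of p_ij is Beta(wins[i, j] + 1,
    wins[j, i] + 1) under a uniform prior."""
    k = wins.shape[0]

    def sample_pref():
        theta = np.full((k, k), 0.5)
        for i in range(k):
            for j in range(i + 1, k):
                p = rng.beta(wins[i, j] + 1, wins[j, i] + 1)
                theta[i, j], theta[j, i] = p, 1.0 - p
        return theta

    # First system: Copeland winner of one posterior sample.
    s1 = int(np.argmax((sample_pref() > 0.5).sum(axis=1)))
    # Second system: from an independent sample, the strongest challenger to s1.
    theta2 = sample_pref()
    challengers = np.array([i for i in range(k) if i != s1])
    s2 = int(challengers[np.argmax(theta2[challengers, s1])])
    return s1, s2
```

Updating `wins` with each comparison outcome concentrates the Beta posteriors, so the two independent samples increasingly agree on the (estimated) Copeland winner and its closest challenger.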
# C Hyperparameters Details

We discuss the details of the hyperparameters and the tuning procedure used for the dueling bandit algorithms in C.1, the pairwise probability models in C.2, and our model-based algorithms in C.3. In all three cases, we use the validation split of the finetuning datasets described in A.3 as our validation dataset. For example, the validation split of the finetuning datasets for MT consists of $10\%$ of the WMT 2013 and 2014 datasets. We use this dataset to tune the hyperparameters for the WMT 2015 and 2016 datasets.

# C.1 Dueling Bandit Algorithms

For all algorithms other than Knockout and Single Elimination, we use the hyperparameters recommended by the original authors for all the datasets. For example, in the RMED algorithm, described in algorithm 1 of Komiyama et al. (2015), we use $f(K) = 0.3K^{1.01}$ as suggested by the authors. For the RCS algorithm, described in algorithm 1 of Zoghi et al. (2014a), we use $\alpha$ (exploratory constant) $= 0.501$. For RUCB (algorithm 1 of Zoghi et al. (2014b)), we use $\alpha = 0.51$. For Knockout and Single Elimination, we found that the performance was very sensitive to the hyperparameters, so we manually tuned them on the validation set. In Knockout (algorithm 3 of Falahatgar et al. (2017b)), we use $\epsilon = 0.2$, $\delta = 0.05$, $\gamma = 1.0$ for the WMT'16 ron-eng and TLDR OpenAI datasets; $\epsilon = 0.2$, $\delta = 0.05$, $\gamma = 0.6$ for the ParaBank and Grammarly-Wiki datasets; and $\epsilon = 0.2$, $\delta = 0.09$, $\gamma = 0.6$ for all other datasets. In Single Elimination, we use $m$ (the number of pairwise comparisons per duel) $= 1000$ for WMT'16 ron-eng, E2E NLG, and Grammarly-FCE, $m = 1500$ for the CoNLL'14 shared task, and $m = 500$ for all other datasets.
# C.2 Pairwise Probability Models

Let $\tilde{f}(Y)$ be the unnormalized score given by an automatic evaluation metric for a hypothesis $Y$. We preprocess the score $\tilde{f}(Y)$ to obtain $f(Y)$ so that the resulting pairwise probability is always valid, i.e., lies between 0 and 1. To preprocess the scores, we use the validation dataset consisting of tuples of the form $\{Y_1^{(i)}, Y_2^{(i)}, w^{(i)}\}_{i=1}^N$ where $Y_1^{(i)}, Y_2^{(i)}$ represent the $i$th pair of generated texts and $w^{(i)}$ is the corresponding comparison outcome provided by human annotators.

Linear: Let $\Delta_{i} = |\tilde{f}(Y_{1}^{(i)}) - \tilde{f}(Y_{2}^{(i)})|$ and $\Delta = \max_{i}\Delta_{i}$. We divide the unnormalized $\tilde{f}(Y)$ scores by $2\Delta$, i.e.,

$$
f (Y) = \frac {\tilde {f} (Y)}{2 \Delta}
$$

BTL: Let $f_{i}^{m} = \max \{\tilde{f}(Y_{1}^{(i)}), \tilde{f}(Y_{2}^{(i)})\}$ and $f^{m} = \max_{i} f_{i}^{m}$. We subtract $f^{m}$ from the scores to ensure that the scores are non-negative, i.e.,

$$
f (Y) = \tilde {f} (Y) - f ^ {m}
$$

BTL-Logistic: The BTL-Logistic model always produces a score between 0 and 1. However, we found that dividing the scores by a temperature coefficient $\gamma$ can give better results, i.e.,

$$
f (Y) = \frac {\tilde {f} (Y)}{\gamma}
$$

We tune $\gamma$ using grid search between 0.005 and 1 on the validation set to minimize the cross-entropy loss between the preference probabilities $\hat{p}(Y_1 \succ Y_2)$ and the human labels $w$.

Thresholds: As described in section 3, we threshold the preference probabilities $\hat{p}(Y_1 \succ Y_2)$ at two thresholds $\tau_1$ and $\tau_2$ to obtain the predicted comparison outcome $\hat{w}$. We perform a grid search, varying $\tau_1$ from 0.4 to 0.5 and $\tau_2$ from 0.5 to 0.6 with a step size of 0.001, and choose the thresholds that maximize the prediction accuracy on the validation dataset.
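The normalization and thresholding above can be sketched as follows. The exact probability formulas for the Linear and BTL-Logistic models are our reading of the standard forms (the $2\Delta$ scaling keeps the score gap in $[-1/2, 1/2]$, so $1/2$ plus the gap is a valid probability); treat them as assumptions, not the authors' verbatim implementation:

```python
import math

def p_linear(f1_raw, f2_raw, delta_max):
    """Linear model: raw scores are divided by 2*Delta, so the score gap
    lies in [-1/2, 1/2] and 1/2 + gap is a valid probability."""
    gap = (f1_raw - f2_raw) / (2.0 * delta_max)
    return 0.5 + gap

def p_btl_logistic(f1_raw, f2_raw, gamma):
    """BTL-Logistic: logistic of the temperature-scaled score difference."""
    return 1.0 / (1.0 + math.exp(-(f1_raw - f2_raw) / gamma))

def predict_outcome(p, tau1, tau2):
    """Threshold p(Y1 > Y2) at tau1 < tau2 into the comparison outcome:
    0 (Y2 preferred), 0.5 (tie), or 1 (Y1 preferred)."""
    if p < tau1:
        return 0.0
    if p > tau2:
        return 1.0
    return 0.5
```

With the tuned $\tau_1$, $\tau_2$ close to 0.5, most pairs receive a decisive 0/1 prediction and only near-equal scores are mapped to a tie.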
| Dataset | Rand. Mix. ($p_m$) | Uncertainty (BALD) ($\tau_{BALD}$) | UCB-Elim. ($\alpha$) | UCB-Elim. ($\tau_{cop}$) |
| --- | --- | --- | --- | --- |
| WMT (all 7 datasets) | 0.8 | 0.025 | 0.5 | 0.8 |
| Grammarly (FCE & Wiki) | 0.8 | 0.07 | 0.5 | 0.8 |
| CoNLL'14 | 0.8 | 0.07 | 0.5 | 0.8 |
| E2E NLG | 0.9 | 0.035 | 0.5 | 0.8 |
| ParaBank | 0.95 | 0.15 | 0.5 | 0.8 |
Table 8: Tuned Hyperparameters of Model-based algorithms when used with the Electra Metric

# C.3 Model-based Algorithms

We manually tune the hyperparameters in our model-based algorithms on the validation dataset. For clarity, we first describe the hyperparameters in the different model-based algorithms. In Random Mixing, we need to choose the mixing probability $p_m$. In Uncertainty-aware Selection (BALD), we need to choose a threshold value $\tau_{\text{BALD}}$ for the BALD score at which we decide to ask for human annotations. For UCB Elimination, we need to choose a threshold $\tau_{\text{cop}}$ for optimistic Copeland scores and the $\alpha$ hyperparameter, which controls the size of the confidence region. In tables 8 and 9, we report the tuned hyperparameter values when using Electra and Bleurt (with the Linear probability model) as the evaluation model. Another hyperparameter is the number of Monte-Carlo samples $L$ to obtain from the Dropout distribution, as discussed in section 4.2. We set $L = 20$, i.e., we independently apply dropout 20 times for each test prediction.

# D Effect of Hyperparameters in Model-based Algorithms

# D.1 Sensitivity to Hyperparameters

We study how the hyperparameters in our proposed model-based algorithms affect annotation complexity. Recall that in Random Mixing, the mixing probability $p_m$ controls the ratio of real and model-generated feedback given to the learner. In Uncertainty-aware Selection (BALD), we obtain human annotations when the BALD score is above a threshold $\tau_{BALD}$; here as well, $\tau_{BALD}$ implicitly controls the fraction of real and predicted feedback. In figure 5, we show the effect of $p_m$ in Random Mixing with Bleurt and of $\tau_{BALD}$ in Uncertainty-aware Selection with Bleurt. We observe that with increases in both the hyperparameters, the annotation complexity
| Dataset | Rand. Mix. ($p_m$) | Uncertainty (BALD) ($\tau_{BALD}$) | UCB-Elim. ($\alpha$) | UCB-Elim. ($\tau_{cop}$) |
| --- | --- | --- | --- | --- |
| WMT (all 7 datasets) | 0.8 | 0.005 | 0.5 | 0.8 |
| Grammarly (FCE & Wiki) | 0.8 | 0.0005 | 0.5 | 0.8 |
| CoNLL'14 | 0.01 | 0.00005 | 1 | 0.7 |
| E2E NLG | 0.7 | 0.0025 | 0.5 | 0.8 |
| ParaBank | 0.4 | 0.0005 | 0.5 | 0.8 |
Table 9: Tuned Hyperparameters of Model-based algorithms when used with the Bleurt Metric

![](images/e59ea47888757083a9ff10dd355625d556d6003fbe6def257faee0a6f8db4292.jpg)
Figure 5: Variation in annotation complexity with the mixing probability in Random Mixing with Bleurt (left) and with the BALD threshold in Uncertainty-aware Selection (BALD) with Bleurt (right)

decreases, i.e., with a greater amount of feedback received from Bleurt, the number of required human annotations is lower. However, as shown in figure 6, we observe the opposite trend when we use metrics such as BLEU, which are highly inaccurate. In these cases, we require a greater number of human annotations to compensate for the highly erroneous feedback received from the evaluation metric. Therefore, the optimal mixing probability $p_m$ in such cases is close to 0, i.e., equivalent to the model-free case. For moderately accurate metrics such as Laser, we observed that the optimal $p_m$ was in the range 0.4 to 0.6. The key insight from these observations is that the higher the accuracy of the metric, the greater the amount of feedback that can be obtained from the metric to identify the top-ranked system. In figure 7, we analyze how the annotation complexity of UCB Elimination with Bleurt varies with the optimistic Copeland threshold $\tau_{cop}$. We fixed the $\alpha$ hyperparameter to 0.6. We observed that UCB Elimination is much more robust to $\tau_{cop}$, and a general value of $\tau_{cop} = 0.8$ worked well across all datasets and metrics.

# D.2 Best Practices in Choosing Hyperparameters

The optimal approach to choosing hyperparameters is usually to tune them on a validation set.
But at times this may not be possible, either for computational reasons or because a human-annotated validation dataset is not available.

![](images/627fd612c8233929f3177a5981fa8d9d03f37e3456c9098020acd66deeed6506.jpg)
Figure 6: Prediction accuracy v/s number of human annotations collected for Random Mixing with Bleurt and BLEU for different mixing probabilities $p_{m}$ on the WMT 15 deu-eng dataset

![](images/10b9d697ff738696f2d960bc07ec0f1e54ba3ef5dab52b55accd7e3c8241115c.jpg)
Figure 7: Annotation complexity of UCB Elimination with Bleurt v/s the Copeland threshold for $\alpha = 0.6$

In such cases, we provide a few heuristics based on our previous analysis to choose hyperparameters in our model-based algorithms:

1. Choose the mixing probability $p_m$ in Random Mixing proportionately with the accuracy of the metric. For example, we observed that for metrics with sentence-level prediction accuracy greater than $70\%$, $p_m = 0.8$ tends to work well. For accuracy between $65\%$ and $70\%$, $p_m$ in the range of 0.5-0.7 worked well.
2. Once we choose a value of $p_m$, we can find an appropriate BALD threshold $\tau_{BALD}$ such that $100 \times p_m\%$ of BALD scores are above $\tau_{BALD}$ and $100 \times (1 - p_m)\%$ of BALD scores are below $\tau_{BALD}$. Choosing the BALD threshold this way ensures that we can directly control the desired amount of model-predicted feedback given to the learner.
3. For UCB Elimination, we recommend using the default values of $\alpha = 0.6$ and $\tau_{\text{cop}} = 0.8$, which we found to work well across tasks and metrics.
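Heuristic 2 above amounts to choosing $\tau_{BALD}$ as an empirical quantile of validation BALD scores; a minimal sketch (the function name is ours):

```python
import numpy as np

def bald_threshold(bald_scores, p_m):
    """Pick tau_BALD as the (1 - p_m) empirical quantile of validation
    BALD scores, so that roughly a fraction p_m of scores lie above the
    threshold, matching the desired mix of feedback."""
    scores = np.asarray(bald_scores, dtype=float)
    return float(np.quantile(scores, 1.0 - p_m))
```

Because the threshold is set from the empirical score distribution, the realized fraction of above-threshold pairs tracks the target $p_m$ directly, without any further tuning.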
![](images/f42ebfa9072b485563cdeed067e85f8fe08de6814ca2e52481eb0efec6557af9.jpg)
Figure 8: Annotation Complexity v/s delays in feedback on the WMT16 deu-eng dataset

![](images/236fe21000fb0349fac1c7790aced4b86cc61eae320eee9732c3cc9e861fb8b7.jpg)
Figure 9: Sentence-level prediction accuracy of direct assessment metrics with the Linear, BTL, and BTL-Logistic models averaged across the 7 WMT datasets

# E Robustness to Delayed Feedback

In some instances, human annotations are obtained from multiple crowdsourced annotators in parallel to reduce the time taken for annotations. In such cases, the learner is required to choose the system pair $(s_1^{(t)}, s_2^{(t)})$ to give to some annotator $i$ even before obtaining the result $w^{(t-1)}$ of the previous comparison from some other annotator $j$. In other words, the learner may experience a delay $d > 0$ in feedback, where at time $t$ the learner may only have access to the comparison history up to time $t - d - 1$. As shown in figure 8, we observe that the top-performing dueling bandit algorithms tend to be robust to delays in feedback. We notice that the variation in the annotation complexity of RMED and RCS, as measured by standard deviation, is only 64.49 and 62.86, respectively.

# F Additional Results

# F.1 Effect of number of NLG systems

In figure 10, we compare the variation in annotation complexity of Random Mixing (with the Electra metric) using uniform exploration and dueling bandit algorithms.
Similar to the model-free case discussed in section 6.4, the annotation complexity of uniform exploration grows as $O(k^2)$, but the annotation complexity only varies as $O(k)$ for the RMED, RCS, and RUCB dueling bandit algorithms.

![](images/dd1cda0a643b6328cbac8ae0d29fe3e10b3a72d6552a80d4f7206f7007811120.jpg)
Figure 10: Annotation complexity of Random Mixing using the Electra metric with uniform exploration and dueling bandit algorithms as a function of the number of NLG systems on the ParaBank dataset

# F.2 Results of Dueling Bandit Algorithms

We report the annotation complexity of all 13 dueling bandit algorithms on 13 evaluation datasets in table 10. In figure 11, we show the top-rank prediction accuracy as a function of the number of human annotations for various dueling bandit algorithms on all datasets other than WMT 16 tur-eng, which is separately depicted in figure 2.

# F.3 Performance of Evaluation Metrics

In table 11, we report the sentence-level accuracy in predicting the comparison outcome for 10 direct assessment metrics using three probability models, along with the trained pairwise metric (Electra). We observe that there is little variation in performance across the three probability models. To further illustrate this, we plot the accuracy on the WMT datasets in figure 9 and observe that the performance is largely similar across the Linear, BTL, and BTL-Logistic models.

# F.4 Model-based Algorithms

In figure 12, we show the top-rank prediction accuracy as a function of the number of human annotations for various model-based algorithms using the Electra metric with RMED. We observe that the Random Mixing and Uncertainty-aware Selection (BALD) algorithms have significantly higher prediction accuracy than model-free RMED for any given number of human annotations. Further, when we use UCB Elimination with Uncertainty-aware Selection, we observe the highest top-rank prediction accuracy for any given number of annotations.
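The BALD score behind Uncertainty-aware Selection can be estimated from the $L = 20$ Monte-Carlo dropout passes described in C.3; the following is a sketch of the standard mutual-information estimator (our implementation, not necessarily the authors' exact one):

```python
import numpy as np

def bald_score(probs):
    """BALD (mutual information) from L Monte-Carlo dropout predictions.
    probs: shape (L, C), each row a predictive distribution over the C
    comparison outcomes from one dropout pass. The score is the entropy
    of the mean prediction minus the mean entropy of the predictions:
    it is near 0 when the passes agree and large when they disagree."""
    probs = np.asarray(probs, dtype=float)
    eps = 1e-12  # numerical guard for log(0)
    mean = probs.mean(axis=0)
    entropy_of_mean = -(mean * np.log(mean + eps)).sum()
    mean_entropy = -(probs * np.log(probs + eps)).sum(axis=1).mean()
    return float(entropy_of_mean - mean_entropy)
```

Pairs whose score exceeds $\tau_{BALD}$ are routed to human annotators, while confidently predicted pairs are answered by the metric.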
AlgorithmWMT 2016WMT 2015GrammarlyCoNLL '14 TaskE2E NLGPara-BankTL; DR
tur-engron-engcze-engdeu-engfin-engrus-engdeu-engFCEWiki
Uniform19479246471026230322837122651779581153444361369657398252115893
IF1177622821421357187501410138016253626130022662536430471352271849260582570071
BTM3201017456>10^6224929261110883282778>10^6>10^62541101752038
Seq-Elim.1082417514589944401659068811793712851480683855441037>10^69046
PL7011185134774461878591704915215803713156568260031>10^63871
Knockout34157889472334445104580959563134377780557708174184953
Sing. Elim.4830600058855340695364656453600090001294015000559009045
RUCB3125569733291636165545366222273256171902410924411491647
RCS244239243370153726623867529618164606126787263347091903
RMED2028511316128641707192940472093564793643753241321162
SAVAGE1028918016663923932675128061211557672295939208414932552084733
CCB70171126753892884409211548109054386100202139216960871382518
DTS100899214861846544850133171647343551153018199199401704671354
DTS++762694835532272964659394149269284177743156215065526066284
+ +Table 10: Annotation complexity of 13 dueling bandit algorithms along with the uniform exploration algorithm on 13 datasets spanning 5 NLG tasks + +
MetricsWMT(Micro Average)Grammarly(Micro Average)CoNLL-2014Shared TaskE2E NLGChallengeParaBankTLDR OpenAI
LinearBTLBTL Log.LinearBTLBTL Log.LinearBTLBTL Log.LinearBTLBTL Log.LinearBTLBTL Log.LinearBTLBTL Log.
Chrf62.662.062.675.775.375.978.478.378.447.448.848.366.166.166.134.235.435.4
Bleu-441.553.441.573.273.073.278.978.778.945.039.050.163.863.263.842.844.042.8
Rouge-L60.760.060.773.573.673.678.078.078.044.643.850.264.364.364.343.343.343.3
Emb. Avg.56.559.157.570.170.371.576.076.777.049.851.651.864.964.964.938.238.238.2
Greedy Match59.559.859.968.168.468.277.777.477.746.548.848.964.764.764.543.143.143.1
Vector Extr59.459.559.366.066.966.576.376.776.744.946.249.163.763.763.747.447.148.1
Bertscore65.966.265.977.477.277.482.081.582.045.949.350.168.168.168.144.544.444.5
Laser65.365.165.375.173.075.178.076.478.047.249.950.567.067.067.035.435.435.4
MoverScore66.166.566.174.770.973.080.679.680.350.149.350.468.068.067.840.740.740.7
Bleurt68.267.568.277.176.676.081.581.580.848.150.450.467.767.767.742.542.542.3
Electra65.774.081.654.381.7-
+ +Table 11: Sentence-level accuracy of direct assessment metrics with linear, BTL, and BTL-logistic probability models and our trained Electra metric in predicting the comparison outcome + +![](images/41781d719941f3843595b97a33f441487147061da60288744623ddb9ebd121ff.jpg) +Figure 11: Top-rank prediction accuracy as a function of the number of human annotations for (model-free) Uniform exploration and RUCB, RCS, and RMEDueling bandit algorithms on 12 NLG datasets + +![](images/c6e58156e5fc3e6427783852195cd665ee73d2aa33c98faca2ff7a69aed5cb94.jpg) +Figure 12: Top-rank prediction accuracy as a function of the number of human annotations for various model-based dueling bandit algorithms with RMED and Electra metric on 12 NLG datasets \ No newline at end of file diff --git a/activeevaluationefficientnlgevaluationwithfewpairwisecomparisons/images.zip b/activeevaluationefficientnlgevaluationwithfewpairwisecomparisons/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..1ff16cad39465a2845e6f29df5116632de705b6c --- /dev/null +++ b/activeevaluationefficientnlgevaluationwithfewpairwisecomparisons/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3a0114b34858a622fb0a4a08d17d1c1f553e0544f3a223aad774ebf3a9491be7 +size 1286094 diff --git a/activeevaluationefficientnlgevaluationwithfewpairwisecomparisons/layout.json b/activeevaluationefficientnlgevaluationwithfewpairwisecomparisons/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..3639eb18659504e4648eb2a3c702510a5b9dc357 --- /dev/null +++ b/activeevaluationefficientnlgevaluationwithfewpairwisecomparisons/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:33de4f865a00714c05768c74764d74cc9b88498f286dac9ef13db24c3e37a20c +size 776994 diff --git a/adalognadaptivelogicgraphnetworkforreasoningbasedmachinereadingcomprehension/d03d08f6-87f6-4f8b-9a15-2a036602e4ae_content_list.json 
b/adalognadaptivelogicgraphnetworkforreasoningbasedmachinereadingcomprehension/d03d08f6-87f6-4f8b-9a15-2a036602e4ae_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..07ef8ea146fd2d1cf7e129fdf88ff44c2d1a3ba3 --- /dev/null +++ b/adalognadaptivelogicgraphnetworkforreasoningbasedmachinereadingcomprehension/d03d08f6-87f6-4f8b-9a15-2a036602e4ae_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c7b5bb8e421aaadea0b19e0c2b2c825846cf5fc7af637f8b16e176c3705e5777 +size 114768 diff --git a/adalognadaptivelogicgraphnetworkforreasoningbasedmachinereadingcomprehension/d03d08f6-87f6-4f8b-9a15-2a036602e4ae_model.json b/adalognadaptivelogicgraphnetworkforreasoningbasedmachinereadingcomprehension/d03d08f6-87f6-4f8b-9a15-2a036602e4ae_model.json new file mode 100644 index 0000000000000000000000000000000000000000..3d1bb00518cfe4a3235955dd4899316d01029524 --- /dev/null +++ b/adalognadaptivelogicgraphnetworkforreasoningbasedmachinereadingcomprehension/d03d08f6-87f6-4f8b-9a15-2a036602e4ae_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5c8c5d225af398f8f58ca05a639fc4abf437af63956fa4483a8bb89a967e9775 +size 140853 diff --git a/adalognadaptivelogicgraphnetworkforreasoningbasedmachinereadingcomprehension/d03d08f6-87f6-4f8b-9a15-2a036602e4ae_origin.pdf b/adalognadaptivelogicgraphnetworkforreasoningbasedmachinereadingcomprehension/d03d08f6-87f6-4f8b-9a15-2a036602e4ae_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ff78b20466764fb9c6e615785afc3cfb624b3a25 --- /dev/null +++ b/adalognadaptivelogicgraphnetworkforreasoningbasedmachinereadingcomprehension/d03d08f6-87f6-4f8b-9a15-2a036602e4ae_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2cf529ad874939104c1800d91f689ad18c082ac5e7ba4820fc1d0b7288457afa +size 2371270 diff --git a/adalognadaptivelogicgraphnetworkforreasoningbasedmachinereadingcomprehension/full.md 
b/adalognadaptivelogicgraphnetworkforreasoningbasedmachinereadingcomprehension/full.md new file mode 100644 index 0000000000000000000000000000000000000000..32895c92551bda0f8a6898b4bbe56a591d093fee --- /dev/null +++ b/adalognadaptivelogicgraphnetworkforreasoningbasedmachinereadingcomprehension/full.md @@ -0,0 +1,653 @@ +# AdaLoGN: Adaptive Logic Graph Network for Reasoning-Based Machine Reading Comprehension

Xiao Li and Gong Cheng and Ziheng Chen and Yawei Sun and Yuzhong Qu
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China

{xiaoli.nju, chenziheng, ywsun}@smail.nju.edu.cn {gcheng, yzqu}@nju.edu.cn

# Abstract

Recent machine reading comprehension datasets such as ReClor and LogiQA require performing logical reasoning over text. Conventional neural models are insufficient for logical reasoning, while symbolic reasoners cannot directly apply to text. To meet the challenge, we present a neural-symbolic approach which, to predict an answer, passes messages over a graph representing logical relations between text units. It incorporates an adaptive logic graph network (AdaLoGN) which adaptively infers logical relations to extend the graph and, essentially, realizes mutual and iterative reinforcement between neural and symbolic reasoning. We also implement a novel subgraph-to-node message passing mechanism to enhance context-option interaction for answering multiple-choice questions. Our approach shows promising results on ReClor and LogiQA.

# 1 Introduction

Machine reading comprehension (MRC) has drawn much research attention. Early MRC datasets are no longer difficult for state-of-the-art neural methods. Indeed, BERT (Devlin et al., 2019) has outperformed humans on SQuAD (Rajpurkar et al., 2016). Recent datasets have become more challenging.
For example, ReClor (Yu et al., 2020) and LogiQA (Liu et al., 2020) require understanding and reasoning over logical relations described in text, where neural methods showed unsatisfactory performance.

For instance, consider the MRC task in Figure 1. The context consists of a set of textual propositions describing logical relations between elementary discourse units (EDUs) (Mann and Thompson, 1988). For example, the first sentence describes an implication between two EDUs: "the company gets project A" implies that "product B can be put on the market on schedule". With the help of propositional calculus, humans can formalize propositions and then apply inference rules in propositional logic to prove the proposition in option C. However, how can machines solve such a task?

Context: If the company gets project A, product B can be put on the market on schedule. Product B is put on schedule if and only if the company's fund can be normally turned over. If the company's fund cannot be turned over normally, the development of product C cannot be carried out as scheduled. The fact is that the development of product C is carried out as scheduled.

Question: This shows:

Options:

A. The company gets project A and product B is put on the market on schedule.
B. The company does not get project A and product B is not put on the market on schedule.
C. Product B is put on the market on schedule and the company's fund is turned over normally. $\checkmark$
D. Product B is not put on the market on schedule, and the company's fund turnover is extremely abnormal.

Figure 1: An example MRC task (adapted from a task in LogiQA). Logical connectives are highlighted in italics. $\checkmark$ marks the correct answer.

Existing Methods and Limitations. To solve it, conventional neural models are insufficient for providing the required reasoning capabilities, while symbolic reasoners cannot directly apply to unstructured text.
One promising direction is to consider a neural-symbolic solution, such as the recent DAGN method (Huang et al., 2021a). It breaks down the context and each option into a set of EDUs and connects them with discourse relations as a graph. Then it performs graph neural network (GNN) based reasoning to predict an answer.

However, we identify two limitations in this method. L1: Despite the graph representation, it is predominantly a neural method over discourse relations. It is debatable whether the required symbolic reasoning over logical relations (e.g., implication, negation) can be properly approximated. L2: The graph is often loosely connected and composed of long paths. Node-to-node message passing implemented in existing GNN models (Kipf and Welling, 2017; Schlichtkrull et al., 2018; Velickovic et al., 2018) is prone to provide insufficient interaction between the context and the option, which is critical to answering a multiple-choice question.

![](images/1c9c89221075b024f54bba9efc47af4d8ad8a2754940561ab71143fba7aac8d9.jpg)

![](images/3349e4277404d348f16cff8230d2792386eedc47e850ea6cc699db6a8017c6e1.jpg)
(a) Raw TLG.
(b) Extended TLG. Dashed nodes and edges represent adaptively inferred EDUs and logical relations, respectively. Double edges represent subgraph-to-node message passing.
Figure 2: Two TLGs for exemplifying our approach. For readability, we omit rev edges.

Our Approach. While we follow the general framework of DAGN, i.e., graph construction and then graph-based reasoning, we overcome its two limitations with a novel neural-symbolic approach.

To address L1, Figure 3 sketches out our idea. Specifically, we propose to construct a text logic graph (TLG) representing EDUs and their logical relations as opposed to discourse relations, so we can explicitly perform symbolic reasoning to extend the TLG with inferred logical relations, as illustrated in Figure 2.
The inferred relations may provide crucial connections to be used in the subsequent graph-based message passing, i.e., symbolic reasoning reinforces neural reasoning.

Further, while trivially computing and admitting the deductive closure may extend the TLG with irrelevant connections which would mislead message passing, we leverage signals from neural reasoning to adaptively admit relevant extensions, i.e., neural reasoning reinforces symbolic reasoning.

Moreover, we iterate the above mutual reinforcement by restarting inference in each iteration with signals from the previous iteration to accommodate corrections to the reasoning process and allow sufficient neural-symbolic interaction.

To address L2, we aggregate the information in the context subgraph of TLG and employ a novel subgraph-to-node message passing mechanism to enhance the interaction from the holistic context subgraph to each node in the option subgraph, and vice versa, as illustrated in Figure 2b.

![](images/0042341ba6ae044bac3a8cbe22403fd257022bb30d47ed9c700ec122b302c0b5.jpg)
Figure 3: Our main idea: mutual and iterative reinforcement between symbolic and neural reasoning.

We incorporate the above two ideas into our new Adaptive Logic Graph Network (AdaLoGN). To summarize, our technical contributions include

- a novel neural-symbolic approach where neural and symbolic reasoning mutually and iteratively reinforce each other, and
- a novel aggregation-based enhancement of message passing in graph-based neural reasoning.

Outline. We elaborate our approach in Section 2, present experiments in Section 3, discuss related work in Section 4, and conclude in Section 5.

Our code is available on GitHub: https://github.com/nju-websoft/AdaLoGN.

# 2 Approach

An MRC task $\langle c, q, O \rangle$ consists of a context $c$, a question $q$, and a set of options $O$. Only one option in $O$ is the correct answer to $q$ given $c$. The goal of the task is to find this option.
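Viewed abstractly, the task above is a scoring problem over options. The following minimal Python sketch illustrates the selection step; the `score` callback is a hypothetical stand-in for the model's per-option scorer described later in this section:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MRCTask:
    """A multiple-choice MRC instance <c, q, O>."""
    context: str
    question: str
    options: List[str]

def answer(task: MRCTask, score: Callable[[str, str, str], float]) -> int:
    """Return the index of the highest-scoring option.

    `score(c, q, o)` stands in for the model's per-option score;
    any real-valued scorer works here.
    """
    scores = [score(task.context, task.question, o) for o in task.options]
    return max(range(len(scores)), key=scores.__getitem__)
```

Any concrete model then reduces to supplying a better `score` function.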
Figure 4 outlines our implementation. For each option $o \in O$, we generate the representations of $c, q, o$ (i.e., $\mathbf{g}_c, \mathbf{g}_q, \mathbf{g}_o$, respectively) by a pre-trained language model (Section 2.1), and we construct a raw TLG where nodes (i.e., $u_1, \ldots, u_{|V|}$) represent EDUs extracted from $c, q, o$ and edges represent their logical relations (Section 2.2). With their initial representations (i.e., $\mathbf{h}_{u_1}^{(0)}, \ldots, \mathbf{h}_{u_{|V|}}^{(0)}$) obtained from the pre-trained language model, in an iterative manner, we adaptively extend the TLG (i.e., symbolic reasoning) and then pass messages (i.e., neural reasoning) to update node representations (i.e., $\mathbf{h}_{u_1}^{(l+1)}, \ldots, \mathbf{h}_{u_{|V|}}^{(l+1)}$) for generating the representation of the TLG (i.e., $\mathbf{h}_G$) (Section 2.3). Finally, we predict the correctness of $o$ (i.e., $score_o$) based on the above representations (Section 2.4).

![](images/f67eea9a07b23d70fefdc0967ebf124f60e447fa89d72d407ae1e53da1935f70.jpg)
Figure 4: Overview of our approach.

# 2.1 Text Encoding

We use RoBERTa (Liu et al., 2019), a pre-trained language model, to encode three token sequences $c = c_1 \cdots c_{|c|}$, $q = q_1 \cdots q_{|q|}$, and $o = o_1 \cdots o_{|o|}$, which are concatenated by the classifier token $<\!s\!>$ and the separator token $<\!/s\!>$:

$$
\begin{array}{l}
\left[ \mathbf{g}_{<s>}; \mathbf{g}_{c_1}; \dots; \mathbf{g}_{</s>}; \mathbf{g}_{q_1}; \dots; \mathbf{g}_{o_1}; \dots; \mathbf{g}_{</s>} \right] \\
= \operatorname{RoBERTa}\left( <\!s\!>\, c_1 \dots <\!/s\!>\, q_1 \dots o_1 \dots <\!/s\!> \right). \tag{1}
\end{array}
$$

The output vector representations are averaged to form the representations of $c, q, o$:

$$
\mathbf{g}_c = \frac{1}{|c|} \sum_{i=1}^{|c|} \mathbf{g}_{c_i}, \quad \mathbf{g}_q = \frac{1}{|q|} \sum_{i=1}^{|q|} \mathbf{g}_{q_i}, \quad \mathbf{g}_o = \frac{1}{|o|} \sum_{i=1}^{|o|} \mathbf{g}_{o_i}. \tag{2}
$$

# 2.2 Text Logic Graph (TLG)

Besides directly encoding text, we extract logical relations from text as a graph called TLG.

# 2.2.1 Definition of TLG

For a piece of text, its TLG is a directed graph $G = \langle V, E \rangle$ where $V$ is a set of nodes representing EDUs of the text (Mann and Thompson, 1988), and $E \subseteq V \times R \times V$ is a set of labeled directed edges representing logical relations between EDUs described in the text. We consider six types of common logical relations $R = \{\text{conj}, \text{disj}, \text{impl}, \text{neg}, \text{rev}, \text{unk}\}$:

- conjunction (conj), disjunction (disj), implication (impl), and negation (neg) are standard logical connectives in propositional logic;
| Rhetorical Relation | Logical Relation |
| --- | --- |
| LIST, CONTRAST | conj |
| DISJUNCTION | disj |
| RESULT | impl |
| CAUSE, PURPOSE, CONDITION, BACKGROUND | rev |
+ +Table 1: Mapping from rhetorical relations in Graphene to logical relations in TLG. + +- reversed implication (rev) is introduced to represent the inverse relation of $\mathsf{impl}$ ; +- unk represents an unknown relation. + +Since conj, disj, neg, and unk are symmetric relations, edges labeled with them are bidirectional. + +Observe the difference between our TLG and the discourse-based logic graph considered in DAGN (Huang et al., 2021a): edges in the former represent logical relations, while those in the latter represent discourse relations. Therefore, we can explicitly perform symbolic reasoning on TLG. + +# 2.2.2 Construction of Raw TLG + +We initialize a raw TLG from $c$ and $o$ . Following Huang et al. (2021a), we ignore $q$ as it is usually uninformative in existing datasets. Specifically, we use Graphene (Cetto et al., 2018) to extract EDUs and their rhetorical relations (Mann and Thompson, 1988) from $c$ and $o$ . Rhetorical relations are converted to logical relations via the mapping in Table 1. Note that each impl edge is always paired with an inverse rev edge, and vice versa. + +We also define a small number of syntactic rules to identify EDUs that negate each other and connect them with neg. The rules are based on part-of-speech tags and dependencies. For example, one such rule checks whether two EDUs differ from each other only by an antonym of an adverb. + +In addition, for each pair of EDUs that are adjacent in the text (including the last EDU of $c$ and the first EDU of $o$ ) but have none of the above logical relations, we connect them with unk because Graphene may fail to identify their relation. + +# 2.3 Adaptive Logic Graph Network (AdaLoGN) + +Since TLG consists of logical relations, we explicitly perform symbolic reasoning by applying inference rules to extend the TLG with inferred logical relations to benefit the subsequent neural reasoning. 
However, rather than computing the deductive closure, which may undesirably provide many relations that are irrelevant to answering the question and mislead neural reasoning, we perform adaptive extension by leveraging signals from neural reasoning to identify and admit relevant extensions. For neural reasoning, we perform message passing to update node representations, which finally are pooled into the representation of the TLG to be used in the subsequent answer prediction. We iterate the above process by restarting inference on the raw TLG in each iteration with signals from the previous iteration to accommodate corrections to the reasoning process and let symbolic and neural reasoning sufficiently interact with each other. We transform the above idea into a new model named AdaLoGN, outlined in Figure 4 and detailed below.

![](images/fa926bf8caf8fc6ab7c380d421c75d67788790448871453adaf04028e914d454.jpg)
(a) Hypothetical syllogism.

![](images/3efceb42445c9966792efaf1dcddf0b848b901b1ef1218b578b80dfcbccd6df7.jpg)
(b) Transposition.

![](images/b2c2b4185dbce96b41f91481b5c6b81371feeaf65ee6365c5f208ae059617ea1.jpg)
(c) Adjacency-transmission.
Figure 5: Dashed nodes and edges are inferred by applying an inference rule. $\star$ represents any logical relation in $\{\text{conj}, \text{disj}, \text{impl}\}$. We omit rev edges.

# 2.3.1 Inference Rules

Let $G = \langle V, E \rangle$ be a raw TLG. For symbolic reasoning over the logical relations in $G$, we apply two inference rules about implication in propositional logic. Other rules are left for future work.

- Hypothetical Syllogism:

$$
\left( (u_i \rightarrow u_j) \wedge (u_j \rightarrow u_k) \right) \vdash (u_i \rightarrow u_k). \tag{3}
$$

Specifically, if $E$ contains two edges $\langle u_i, \mathsf{impl}, u_j \rangle$ and $\langle u_j, \mathsf{impl}, u_k \rangle$, we can add two edges $\langle u_i, \mathsf{impl}, u_k \rangle$ and $\langle u_k, \mathsf{rev}, u_i \rangle$ to $E$, as illustrated in Figure 5a.

- Transposition:

$$
(u_i \rightarrow u_j) \vdash (\neg u_j \rightarrow \neg u_i). \tag{4}
$$

Specifically, if $E$ contains an edge $\langle u_i, \mathsf{impl}, u_j \rangle$, we can add two edges $\langle \neg u_j, \mathsf{impl}, \neg u_i \rangle$ and $\langle \neg u_i, \mathsf{rev}, \neg u_j \rangle$ to $E$, as illustrated in Figure 5b. Note that if $u_i$ (resp. $u_j$) is not incident from/to any neg edge, i.e., $\neg u_i$ (resp. $\neg u_j$) is not a node in $V$, we will add $\neg u_i$ (resp. $\neg u_j$) to $V$ whose text negates that of $u_i$ (resp. $u_j$), and then add a bidirectional neg edge between $u_i$ and $\neg u_i$ (resp. $u_j$ and $\neg u_j$) to $E$.

Besides, recall that unk represents a potential logical relation between EDUs that are adjacent in text. Considering that an EDU often inherits logical relations from its adjacent EDUs, we heuristically define and apply the following inference rule.

- Adjacency-Transmission:

$$
\left( (u_i \star u_j) \wedge (u_i \sim u_k) \right) \vdash (u_k \star u_j), \tag{5}
$$

where $\star \in \{\land, \lor, \rightarrow\}$ and $\sim$ represents adjacency in text. For example, if $E$ contains two edges $\langle u_i, \operatorname{conj}, u_j \rangle$ and $\langle u_i, \operatorname{unk}, u_k \rangle$, we can add a bidirectional conj edge between $u_k$ and $u_j$ to $E$, as illustrated in Figure 5c.

While this rule may generate false propositions, we expect our adaptive reasoner to apply it properly. For example, it is useful for handling the following sentence: "...
only 1 person in the group knew 3 of the group $(u_{k})$, 3 people knew 2 of the group $(u_{i})$, and 4 people knew 1 of the group $(u_{j})$." Graphene identifies $\langle u_{i}, \mathrm{conj}, u_{j} \rangle$ and $\langle u_{i}, \mathrm{unk}, u_{k} \rangle$ but misses $\langle u_{k}, \mathrm{conj}, u_{j} \rangle$, which can be generated by applying this rule.

# 2.3.2 Adaptive Extension of TLG

Our symbolic reasoning is adaptive. We rely on signals from neural reasoning to decide which inference steps are relevant to answering the question and hence are admitted to extend the TLG. Specifically, each candidate extension $\epsilon$ applies an inference rule over a set of nodes $V_{\epsilon} \subseteq V$. We average their vector representations (which will be detailed later) to form the representation of $\epsilon$:

$$
\mathbf{h}_{\epsilon} = \frac{1}{|V_{\epsilon}|} \sum_{u_i \in V_{\epsilon}} \mathbf{h}_{u_i}. \tag{6}
$$

Since $\epsilon$ is for predicting the correctness of $o$, we interact $\mathbf{h}_{\epsilon}$ with the representation of $o$, i.e., $\mathbf{g}_o$ in Equation (2), to predict the relevance score of $\epsilon$:

$$
rel_{\epsilon} = \operatorname{sigmoid}\left( \operatorname{linear}\left( \mathbf{h}_{\epsilon} \,\|\, \mathbf{g}_o \right) \right), \tag{7}
$$

where $\|$ represents vector concatenation. We admit all possible $\epsilon$ to extend $G$ such that $rel_{\epsilon} > \tau$, where $\tau$ is a predefined threshold.

Moreover, our neural-symbolic reasoning is iterative. In the $(l+1)$-th iteration, we restart symbolic reasoning with the raw TLG and recompute Equation (6) with node representations $\mathbf{h}_{u_i}^{(l)}$ from neural reasoning in the $l$-th iteration (which will be detailed in Section 2.3.3). The initial node representations $\mathbf{h}_{u_i}^{(0)}$ are obtained from a pre-trained language model.
Specifically, we flatten $V$ into a sequence of nodes in the order they appear in the text. Recall that $V$ is divided into $V_{c} = \{u_{1}, \ldots, u_{|V_{c}|}\}$ and $V_{o} = \{u_{|V_{c}|+1}, \dots, u_{|V|}\}$ representing the nodes extracted from $c$ and $o$, respectively. Each node $u_{i}$ is a token sequence $u_{i} = u_{i_1} \dots u_{i_{|u_i|}}$. We use RoBERTa to encode $V_{c}$ and $V_{o}$, which are concatenated by $<\!s\!>$ and $<\!/s\!>$, where nodes inside $V_{c}$ and $V_{o}$ are separated by a special token "|":

$$
\begin{array}{l}
\left[ \mathbf{h}_{<s>}; \mathbf{h}_{u_{1_1}}; \dots; \mathbf{h}_{|}; \dots; \mathbf{h}_{</s>}; \mathbf{h}_{u_{|V_c|+1_1}}; \dots; \mathbf{h}_{|}; \dots; \mathbf{h}_{</s>} \right] \\
= \operatorname{RoBERTa}\left( <\!s\!>\, u_{1_1} \dots | \dots <\!/s\!>\, u_{|V_c|+1_1} \dots | \dots <\!/s\!> \right). \tag{8}
\end{array}
$$

The output vector representations are averaged to form the initial representation of each node $u_{i} \in V$:

$$
\mathbf{h}_{u_i}^{(0)} = \frac{1}{|u_i|} \sum_{j=1}^{|u_i|} \mathbf{h}_{u_{i_j}}. \tag{9}
$$

# 2.3.3 Message Passing

To let the nodes in TLG interact with each other and fuse their information, our neural reasoning performs graph-based message passing (Gilmer et al., 2017) to update node representations in each iteration from $\mathbf{h}_{u_i}^{(l)}$ to $\mathbf{h}_{u_i}^{(l+1)}$. Since TLG is a heterogeneous graph containing multiple types of edges, we incorporate the node-to-node message passing mechanism in R-GCN (Schlichtkrull et al., 2018) as a basis.
Further, observe that TLG is usually loosely connected and prone to cause insufficient interaction between $V_{c}$ and $V_{o}$ via long paths in limited iterations, which cannot be alleviated by simply increasing the number of iterations because it would raise other issues such as over-smoothing (Li et al., 2018; Chen et al., 2020). To enhance such interaction, which is critical to predicting the correctness of $o$, we incorporate a novel subgraph-to-node message passing mechanism to holistically pass the information aggregated from a subgraph (e.g., $V_{c}$) to a node (e.g., each $u_{i} \in V_{o}$).

Specifically, without loss of generality, for each $u_{i} \in V_{o}$, we compute the $u_{i}$-attended aggregate representation of $V_{c}$ by an attention-weighted sum of node representations over $V_{c}$:

$$
\begin{array}{l}
\mathbf{h}_{V_c, u_i}^{(l)} = \sum_{u_j \in V_c} \alpha_{i,j} \mathbf{h}_{u_j}^{(l)}, \quad \text{where} \\
\alpha_{i,j} = \operatorname{softmax}_j\left( \left[ a_{i,1}; \dots; a_{i,|V_c|} \right]^{\intercal} \right), \\
a_{i,j} = \operatorname{LeakyReLU}\left( \operatorname{linear}\left( \mathbf{h}_{u_i}^{(l)} \,\|\, \mathbf{h}_{u_j}^{(l)} \right) \right). \tag{10}
\end{array}
$$

Let $N^i$ be the set of neighbors of $u_i$. Let $N_r^i \subseteq N^i$ be the subset under logical relation $r \in R$.
We update the representation of $u_i$ by passing messages to $u_i$ from its neighbors and from $V_c$:

$$
\begin{array}{l}
\mathbf{h}_{u_i}^{(l+1)} = \operatorname{ReLU}\Big( \sum_{r \in R} \sum_{u_j \in N_r^i} \frac{\alpha_{i,j}}{|N_r^i|} \mathbf{W}_r^{(l)} \mathbf{h}_{u_j}^{(l)} + \mathbf{W}_0^{(l)} \mathbf{h}_{u_i}^{(l)} \\
\qquad + \beta_i \mathbf{W}_{\text{subgraph}}^{(l)} \mathbf{h}_{V_c, u_i}^{(l)} \Big), \quad \text{where} \\
\alpha_{i,j} = \operatorname{softmax}_{\operatorname{idx}(a_{i,j})}\left( [\dots; a_{i,j}; \dots]^{\intercal} \right) \text{ for all } u_j \in N^i, \\
a_{i,j} = \operatorname{LeakyReLU}\left( \operatorname{linear}\left( \mathbf{h}_{u_i}^{(l)} \,\|\, \mathbf{h}_{u_j}^{(l)} \right) \right), \\
\beta_i = \operatorname{sigmoid}\left( \operatorname{linear}\left( \mathbf{h}_{u_i}^{(l)} \,\|\, \mathbf{h}_{V_c, u_i}^{(l)} \right) \right), \tag{11}
\end{array}
$$

$\mathbf{W}_r^{(l)}, \mathbf{W}_0^{(l)}, \mathbf{W}_{\mathrm{subgraph}}^{(l)}$ are matrices of learnable parameters, and $\operatorname{idx}(a_{i,j})$ returns the index of $a_{i,j}$ in the $|N^i|$-dimensional vector $[\ldots; a_{i,j}; \ldots]^{\intercal}$.

In an analogous way, for each $u_{i} \in V_{c}$, we compute the $u_{i}$-attended aggregate representation of $V_{o}$, denoted by $\mathbf{h}_{V_o, u_i}^{(l)}$, and update $\mathbf{h}_{u_i}^{(l+1)}$.

Observe two differences between Equation (11) and its counterpart in the original R-GCN. First, we incorporate subgraph-to-node message passing and control it by a gating mechanism (i.e., $\beta_{i}$). Second, we weight node-to-node message passing by an attention mechanism (i.e., $\alpha_{i,j}$).
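As a toy illustration of the update in Equations (10)-(11), the following NumPy sketch updates one node; the random matrices are stand-ins for the learned parameters $\mathbf{W}_r^{(l)}$, $\mathbf{W}_0^{(l)}$, $\mathbf{W}_{\text{subgraph}}^{(l)}$ and the linear layers, the relation set is reduced to a subset of $R$, and each neighbor is assumed to appear under exactly one relation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                   # toy hidden size
relations = ["impl", "rev", "unk"]      # subset of R for illustration

# Random stand-ins for the learned parameters.
W = {r: rng.normal(size=(d, d)) for r in relations}   # W_r
W0 = rng.normal(size=(d, d))                          # W_0
Wsub = rng.normal(size=(d, d))                        # W_subgraph
w_att = rng.normal(size=2 * d)                        # attention scorer
w_gate = rng.normal(size=2 * d)                       # subgraph gate

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def update_node(h, i, neighbors, h_sub):
    """One AdaLoGN-style update for node i (Equation (11), simplified).

    `h` is the (|V| x d) matrix of node states; `neighbors` maps a
    relation to the list of neighbor indices N_r^i (assumed non-empty
    overall); `h_sub` is the i-attended aggregate of the other
    subgraph (Equation (10)).
    """
    all_nb = [j for js in neighbors.values() for j in js]
    scores = np.array([leaky_relu(w_att @ np.concatenate([h[i], h[j]]))
                       for j in all_nb])
    alpha = dict(zip(all_nb, softmax(scores)))        # softmax over N^i
    msg = W0 @ h[i]
    for r, js in neighbors.items():
        for j in js:                                  # alpha_ij / |N_r^i| * W_r h_j
            msg += (alpha[j] / len(js)) * (W[r] @ h[j])
    beta = 1.0 / (1.0 + np.exp(-(w_gate @ np.concatenate([h[i], h_sub]))))
    return np.maximum(0.0, msg + beta * (Wsub @ h_sub))   # ReLU
```

The gate `beta` is what lets the model suppress the subgraph-to-node message when it is unhelpful.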
# 2.3.4 Graph Pooling

After $L$ iterations, where $L$ is a hyperparameter, for each node $u_{i} \in V$, we fuse its representations over all the iterations with a residual connection:

$$
\mathbf{h}_{u_i}^{\text{fus}} = \mathbf{h}_{u_i}^{(0)} + \operatorname{linear}\left( \mathbf{h}_{u_i}^{(1)} \,\|\, \dots \,\|\, \mathbf{h}_{u_i}^{(L)} \right). \tag{12}
$$

Inspired by Huang et al. (2021a), we feed all $\mathbf{h}_{u_i}^{\mathrm{fus}}$ into a bidirectional residual GRU layer (Cho et al., 2014) to finalize node representations:

$$
\left[ \mathbf{h}_{u_1}^{\mathrm{fnl}}; \dots; \mathbf{h}_{u_{|V|}}^{\mathrm{fnl}} \right] = \operatorname{Res\text{-}BiGRU}\left( \left[ \mathbf{h}_{u_1}^{\mathrm{fus}}; \dots; \mathbf{h}_{u_{|V|}}^{\mathrm{fus}} \right] \right). \tag{13}
$$

We aggregate these node representations by computing an $o$-attended weighted sum:

$$
\begin{array}{l}
\mathbf{h}_V = \sum_{u_i \in V} \alpha_i \mathbf{h}_{u_i}^{\mathrm{fnl}}, \quad \text{where} \\
\alpha_i = \operatorname{softmax}_i\left( [a_1; \dots; a_{|V|}]^{\intercal} \right), \\
a_i = \operatorname{LeakyReLU}\left( \operatorname{linear}\left( \mathbf{g}_o \,\|\, \mathbf{h}_{u_i}^{\mathrm{fnl}} \right) \right), \tag{14}
\end{array}
$$

and $\mathbf{g}_o$ is the representation of $o$ in Equation (2).
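The option-attended pooling of Equation (14) can be sketched in a few lines of NumPy; the vector `w` below is a stand-in for the learned linear layer:

```python
import numpy as np

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

def attended_pool(H, g_o, w):
    """Pool node vectors H (|V| x d) into one graph vector h_V,
    with attention weights conditioned on the option vector g_o
    (Equation (14)).  `w` stands in for the learned linear layer."""
    scores = np.array([leaky_relu(w @ np.concatenate([g_o, h])) for h in H])
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()                  # softmax over nodes
    return alpha @ H                     # h_V
```

Because the weights are a softmax, the pooled vector is always a convex combination of the node vectors.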
We concatenate $\mathbf{h}_V$ and the relevance scores to form the representation of $G$:

$$
\begin{array}{l}
\mathbf{h}_G = \left( \mathbf{h}_V \,\|\, rel_{\mathcal{E}^{(1)}} \,\|\, \dots \,\|\, rel_{\mathcal{E}^{(L)}} \right), \quad \text{where} \\
rel_{\mathcal{E}^{(l)}} = \frac{1}{|\mathcal{E}^{(l)}|} \sum_{\epsilon \in \mathcal{E}^{(l)}} rel_{\epsilon}, \tag{15}
\end{array}
$$

$\mathcal{E}^{(l)}$ is the set of candidate extensions in the $l$-th iteration, and $rel_{\epsilon}$ is in Equation (7). In this way, we are able to train the network in Equation (7).

# 2.4 Answer Prediction

We fuse the representations of $c, q, o$ and the TLG to predict the correctness of $o$:

$$
score_o = \operatorname{linear}\left( \tanh\left( \operatorname{linear}\left( \mathbf{g}_c \,\|\, \mathbf{g}_q \,\|\, \mathbf{g}_o \,\|\, \mathbf{h}_G \right) \right) \right), \tag{16}
$$

where $\mathbf{g}_c, \mathbf{g}_q, \mathbf{g}_o$ are in Equation (2).

# 2.5 Loss Function

Let $o_{\mathrm{gold}} \in O$ be the correct answer. We optimize the cross-entropy loss with label smoothing:

$$
\begin{array}{l}
\mathcal{L} = -(1-\gamma)\, score'_{o_{\mathrm{gold}}} - \gamma \frac{1}{|O|} \sum_{o_i \in O} score'_{o_i}, \quad \text{where} \\
score'_{o_i} = \log \frac{\exp(score_{o_i})}{\sum_{o_j \in O} \exp(score_{o_j})}, \tag{17}
\end{array}
$$

and $\gamma$ is a predefined smoothing factor.

# 3 Experiments

# 3.1 Datasets

We used two reasoning-based MRC datasets.

ReClor (Yu et al., 2020) consists of 6,138 four-option multiple-choice questions collected from standardized exams such as GMAT and LSAT. The questions were divided into 4,638 for training, 500 for development, and 1,000 for testing.
The test set was further divided into 440 easy questions (Test-E), where each question could be correctly answered by some strong baseline method using only the options and ignoring the context and the question, and the remaining 560 hard questions (Test-H).

LogiQA (Liu et al., 2020) consists of 8,768 four-option multiple-choice questions collected from the National Civil Servants Examination of China, which were translated into English. The questions were divided into 7,376 for training, 651 for development, and 651 for testing.

# 3.2 Implementation Details

We experimented on NVIDIA V100 (32GB) GPUs.

We tuned hyperparameters on the development set of each dataset. Specifically, for text encoding, we used RoBERTa-large (24 hidden layers, 1,024 hidden units) implemented by Hugging Face (Wolf et al., 2020). For message passing, our implementation was based on DGL (Wang et al., 2019). For both datasets, we used the Adam optimizer, and set attention heads $= 16$, dropout rate $= 0.1$, epochs $= 10$, batch size $= 16$ selected from $\{8, 16, 24\}$, number of iterations $L = 2$ from $\{2, 3\}$, and maximum sequence length $= 384$. For ReClor, we set warm-up proportion $= 0.1$ from $\{0.1, 0.2\}$, learning rate $= 7e{-}6$ from $\{6e{-}6, 7e{-}6, 8e{-}6, 1e{-}5\}$, and seed $= 123$ from $\{123, 1234, 42, 43\}$. For LogiQA, we set warm-up proportion $= 0.2$ from $\{0.1, 0.2\}$, learning rate $= 8e{-}6$ from $\{6e{-}6, 7e{-}6, 8e{-}6, 1e{-}5\}$, and seed $= 42$ from $\{123, 1234, 42, 43\}$.

For the relevance score threshold $\tau$ below Equation (7), we set $\tau = 0.6$ from $\{0.4, 0.5, 0.6, 0.7\}$ for both datasets. For the smoothing factor $\gamma$ in Equation (17), we set $\gamma = 0.25$ for both datasets.

To fit in our GPU's memory, we restricted a raw TLG to contain at most 25 nodes and 50 edges by, if needed, randomly merging nodes connected by an unk edge and/or deleting non-bridge edges while keeping the graph connected.
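The size restriction just described can be sketched as a greedy pruning loop; the bridge test below is a plain BFS connectivity check, and the node-merging step is omitted for brevity:

```python
from collections import deque

def is_connected(nodes, edges):
    """BFS check that the undirected graph (nodes, edges) is connected."""
    if not nodes:
        return True
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u] - seen:
            seen.add(v)
            queue.append(v)
    return len(seen) == len(nodes)

def prune_edges(nodes, edges, max_edges):
    """Greedily delete non-bridge edges (edges whose removal keeps the
    graph connected) until at most `max_edges` remain."""
    edges = list(edges)
    for e in list(edges):
        if len(edges) <= max_edges:
            break
        rest = [f for f in edges if f != e]
        if is_connected(nodes, rest):     # e is not a bridge, safe to drop
            edges = rest
    return edges
```

For example, pruning a triangle to two edges always succeeds, while a path graph cannot be pruned at all because every edge is a bridge.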
# 3.3 Baselines

We compared our approach, referred to as AdaLoGN, with popular pre-trained language models and with other known methods in the literature.

Reasoning-based MRC, like other MRC tasks, can be solved by using a pre-trained language model with a classification layer. Yu et al. (2020) reported the results of BERT$_{\text{LARGE}}$, RoBERTa$_{\text{LARGE}}$, and XLNet$_{\text{LARGE}}$ on ReClor. Huang et al. (2021a) reported the results of BERT$_{\text{LARGE}}$ and RoBERTa$_{\text{LARGE}}$ on LogiQA.

In the literature, we found the results of DAGN (Huang et al., 2021a), Focal Reasoner (Ouyang et al., 2021), and LReasoner (Wang et al., 2021a,b) on both datasets. For a fair comparison with our approach, we presented their results on RoBERTa$_{\text{LARGE}}$, while LReasoner achieved better results with ALBERT. Between the two variants of LReasoner, one without data augmentation (w/o DA) and the other with data augmentation (w/ DA), we presented both of their results but mainly compared with the former because our approach and other baseline methods would also benefit if data augmentation were incorporated.
| Method | Dev | Test | Test-E | Test-H |
| --- | --- | --- | --- | --- |
| BERT$_{\text{LARGE}}$ | 53.80 | 49.80 | 72.00 | 32.30 |
| RoBERTa$_{\text{LARGE}}$ | 62.60 | 55.60 | 75.50 | 40.00 |
| XLNet$_{\text{LARGE}}$ | 62.00 | 56.00 | 75.70 | 40.50 |
| DAGN | 65.80 | 58.30 | 75.91 | 44.46 |
| Focal Reasoner | 66.80 | 58.90 | 77.05 | 44.64 |
| LReasoner (w/o DA) | 65.20 | 58.30 | 78.60 | 42.30 |
| LReasoner (w/ DA) | 66.20 | 62.40 | 81.40 | 47.50 |
| AdaLoGN | 65.20 | 60.20 | 79.32 | 45.18 |
| Human | - | 63.00 | 57.10 | 67.20 |
+ +Table 2: Comparison with baselines on ReClor. + +
| Method | Dev | Test |
| --- | --- | --- |
| BERT$_{\text{LARGE}}$ | 34.10 | 31.03 |
| RoBERTa$_{\text{LARGE}}$ | 35.02 | 35.33 |
| DAGN | 36.87 | 39.32 |
| Focal Reasoner | 41.01 | 40.25 |
| LReasoner (w/ DA) | 38.10 | 40.60 |
| AdaLoGN | 39.94 | 40.71 |
| Human | - | 86.00 |
+ +# 3.4 Evaluation Metric + +Following the literature, we reported accuracy, i.e., the proportion of correctly answered questions. For our approach we reported the max across 3 runs on the development set of each dataset. + +# 3.5 Comparison with Baselines + +On ReClor, as shown in Table 2, AdaLoGN outperformed all the baseline methods on the test set by at least $1.30\%$ , except for LReasoner (w/ DA) which performed data augmentation so that the comparison might be unfair. AdaLoGN and LReasoner (w/ DA) both exceeded $60\%$ , being comparable with human-level performance $(63\%)$ . + +On LogiQA, as shown in Table 3, AdaLoGN outperformed all the baseline methods on the test set, including LReasoner (w/ DA). Still, our result $(40.71\%)$ was not comparable with human-level performance $(86\%)$ . + +In particular, on both ReClor and LogiQA, AdaLoGN exceeded DAGN on the test set by $1.39\% - 1.90\%$ , which demonstrated the effectiveness of our approach in addressing the limitations of DAGN mentioned in Section 1. + +# 3.6 Ablation Study + +We conducted an ablation study to evaluate the effectiveness of the two main technical contributions in our approach: adaptive extension of TLG and subgraph-to-node message passing. + +Table 3: Comparison with baselines on LogiQA. + +
| Method | Dev | Test | Test-E | Test-H |
| --- | --- | --- | --- | --- |
| AdaLoGN | 65.20 | 60.20 | 79.32 | 45.18 |
| AdaLoGN$_{\text{no-ext}}$ | 65.80 | 59.50 | 77.27 | 45.54 |
| AdaLoGN$_{\text{full-ext}}$ | 65.00 | 58.80 | 78.19 | 43.57 |
| AdaLoGN$_{\text{no-at}}$ | 64.80 | 59.40 | 79.77 | 43.39 |
| AdaLoGN$_{\text{n2n}}$ | 65.20 | 57.60 | 77.95 | 41.61 |
| AdaLoGN$_{\text{n2n+}}$ | 65.00 | 58.60 | 78.64 | 42.86 |

Table 4: Ablation study on ReClor.
| Method | Dev | Test |
| --- | --- | --- |
| AdaLoGN | 39.94 | 40.71 |
| AdaLoGN$_{\text{no-ext}}$ | 37.94 | 39.02 |
| AdaLoGN$_{\text{full-ext}}$ | 39.63 | 39.02 |
| AdaLoGN$_{\text{no-at}}$ | 38.56 | 39.94 |
| AdaLoGN$_{\text{n2n}}$ | 38.40 | 39.02 |
| AdaLoGN$_{\text{n2n+}}$ | 38.40 | 38.86 |
Table 5: Ablation study on LogiQA.

# 3.6.1 Effectiveness of Adaptive Extension

We compared the standard version of AdaLoGN with two variants that remove adaptive extension.

- AdaLoGN$_{\text{no-ext}}$ performs no extension.
- AdaLoGN$_{\text{full-ext}}$ performs full extension by computing and admitting the deductive closure.

On ReClor, as shown in Table 4, both variants exhibited a fair decrease in accuracy on the test set, by $0.70\%$–$1.40\%$. On LogiQA, as shown in Table 5, the decrease was larger, $1.69\%$ for both variants on the test set, possibly because the questions in LogiQA are harder, so the effectiveness of our adaptive extension becomes more noticeable. Interestingly, on both datasets, AdaLoGN$_{\text{full-ext}}$ was not better than AdaLoGN$_{\text{no-ext}}$ on the test set, indicating that a naive injection of logical reasoning into neural reasoning might not have positive effects.

We analyzed the distributions of relevance scores of candidate extensions, i.e., $rel_{\epsilon}$ in Equation (7). As shown in Figure 6, they approximate a normal distribution on both datasets. By setting the threshold $\tau = 0.6$, we admitted $19.57\%$ and $4.86\%$ of the candidate extensions on ReClor and LogiQA, respectively.

We also compared with a variant of AdaLoGN using a subset of the inference rules.

- AdaLoGN$_{\text{no-at}}$ ignores the adjacency-transmission rule.

By ignoring the adjacency-transmission rule, AdaLoGN$_{\text{no-at}}$ showed a decrease in accuracy on the test sets by $0.77\%$–$0.80\%$, suggesting the usefulness of this rule despite its heuristic nature.

![](images/6515fe535eb9a5d17f3ae1f92574033bdeeab8da25dbb0add834931b4dc54c9d.jpg)

![](images/7ba322bf611381cb9e98e37c4f8eeb28eab26a961992d176e4b4888e0f75d2b9.jpg)
Figure 6: Distributions of relevance scores of candidate extensions. Top: on the development set of ReClor; Bottom: on the development set of LogiQA.
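The admission step described above reduces to a simple filter over predicted relevance scores. The sketch below is illustrative: the function name and data layout are hypothetical, and only the threshold value $\tau = 0.6$ comes from Section 3.2.

```python
TAU = 0.6  # relevance threshold tau selected in Section 3.2

def admit_extensions(candidates, tau=TAU):
    """Keep only candidate extensions whose predicted relevance score
    (rel_e in Equation (7)) reaches the threshold. `candidates` maps a
    candidate extension id to its score; both names are hypothetical."""
    return {ext for ext, rel in candidates.items() if rel >= tau}

# Scores roughly follow a normal distribution (Figure 6), so a
# threshold of 0.6 admits only the right tail of the candidates.
scores = {"e1": 0.72, "e2": 0.55, "e3": 0.61, "e4": 0.40}
admitted = admit_extensions(scores)
```

In this toy example only the two candidates scoring at or above 0.6 survive, mirroring how the threshold admitted a small fraction of extensions on both datasets.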
# 3.6.2 Effectiveness of Subgraph-to-Node Message Passing

We compared the standard version of AdaLoGN with two variants that remove subgraph-to-node message passing or implement it in a different way.

- AdaLoGN$_{\text{n2n}}$ only performs node-to-node message passing, in a standard way.
- AdaLoGN$_{\text{n2n+}}$ only performs node-to-node message passing but, as an alternative to our holistic subgraph-to-node message passing, adds a bidirectional unk edge between each node in the context subgraph and each node in the option subgraph to enhance context-option interaction.

On ReClor, as shown in Table 4, both variants exhibited a large decrease in accuracy on the test set, by $1.60\%$–$2.60\%$. On LogiQA, as shown in Table 5, the decreases were also large, $1.69\%$–$1.85\%$ on the test set. These results demonstrate the effectiveness of our subgraph-to-node message passing.

Compared with AdaLoGN$_{\text{n2n}}$, AdaLoGN$_{\text{n2n+}}$ achieved better results on the ReClor test set but worse results on the LogiQA test set, indicating that a naive enhancement of context-option interaction can have negative effects.

# 3.7 Error Analysis

From the development set of each dataset, we randomly sampled fifty questions to which our approach output an incorrect answer, and we analyzed the sources of these errors. Note that an error could
| Source of Error | ReClor | LogiQA |
| --- | --- | --- |
| Construction of raw TLG | 38% | 36% |
| Adaptive extension of TLG | 18% | 22% |
| Expressivity of symbolic reasoning | 20% | 18% |
| Others (about neural reasoning) | 46% | 40% |
Table 6: Error analysis of AdaLoGN.

have a mixture of multiple sources.

As shown in Table 6, the construction of the raw TLG, for which we mainly relied on Graphene's syntactic analysis, accounted for over one third of the errors (36%–38%). Our adaptive extension of the TLG constituted about one fifth of the errors (18%–22%); e.g., some excessive extensions produced irrelevant logical relations which might mislead message passing. Another fifth of the errors (18%–20%) were due to the limited expressivity of our symbolic reasoning, i.e., a subset of propositional logic, while some questions required quantifiers. The remaining errors may be related to neural reasoning, such as message passing or answer prediction (40%–46%).

# 3.8 Run Time

On both ReClor and LogiQA, our approach took about 0.8 seconds to answer a question.

# 4 Related Work

# 4.1 Reasoning-Based MRC

While simple MRC tasks have been well studied, complex MRC tasks requiring various reasoning capabilities are receiving increasing research attention. Among others, multi-hop MRC tasks in HotpotQA (Yang et al., 2018) and WikiHop (Welbl et al., 2018) require retrieving and reading multiple supporting passages to answer a question. They can be solved by constructing and reasoning over a graph connecting passages that overlap or co-occur with each other (Qiu et al., 2019; Tu et al., 2020), by implicitly supervising a retriever via word weighting (Huang et al., 2021b), or by iteratively applying dense retrieval (Xiong et al., 2021). MRC tasks in DROP (Dua et al., 2019) require discrete reasoning such as addition, counting, and sorting. Neural networks have been extended to incorporate modules that can perform such reasoning over numbers and dates mentioned in a given context (Gupta et al., 2020).
For MRC tasks in CommonsenseQA (Talmor et al., 2019), which target commonsense knowledge and reasoning, recent methods fuse external commonsense knowledge with pre-trained language models for reasoning (Yan et al., 2021; Xu et al., 2021). There are also studies on MRC tasks requiring spatial/geographical reasoning (Huang et al., 2019; Li et al., 2021) and temporal/causal reasoning (Sun et al., 2018).

Different from the above reasoning capabilities, the MRC tasks considered in this paper require logical reasoning, such as reasoning about sufficient and necessary conditions, categorization, conjunctions, and disjunctions. Pre-trained language models alone struggled and were far behind human-level performance on such tasks in ReClor (Yu et al., 2020) and LogiQA (Liu et al., 2020) due to their weakness in logical reasoning.

Among existing methods for solving such tasks, DAGN (Huang et al., 2021a) and Focal Reasoner (Ouyang et al., 2021) extract discourse or coreference relations from text and represent them as a graph of text units. They then employ a GNN to pass messages and update representations for predicting an answer. Different from their purely neural nature, our approach symbolically performs the logical reasoning required by such tasks, applying inference rules over extracted logical relations to extend the graph. This feature resembles LReasoner (Wang et al., 2021a,b), which extends the context with inferred logical relations to benefit the subsequent neural reasoning. However, unlike LReasoner, which computes the deductive closure and identifies relevant extensions by text overlap with the options in an unsupervised manner, our approach predicts relevance based on signals from neural reasoning in a supervised manner, and our prediction evolves over iterations through sufficient interaction between symbolic and neural reasoning. All these features helped our approach achieve better performance in the experiments.
# 4.2 Neural-Symbolic Reasoning

Our approach represents a novel implementation of neural-symbolic reasoning (Raedt et al., 2020), and it differs from the following existing methods.

One paradigm of neural-symbolic reasoning is logic-driven neural reasoning. For example, logical constraints can be compiled into a neural network by augmenting the loss function (Xu et al., 2018) or the network structure (Li and Srikumar, 2019). Logical connectives, quantifiers, and consistency checking can also be approximated by neural networks (Dong et al., 2019; Ren et al., 2020; Gu et al., 2019). While these methods incorporate logical reasoning into neural reasoning via emulation, our approach explicitly performs logical reasoning by applying inference rules over logical relations. Such exact inference is more accurate than emulation-based approximation.

Another paradigm is neural-driven logical reasoning. For example, neural networks have been employed to predict the truth of an atom in answering first-order logic queries (Arakelyan et al., 2021), and to implement predicates in probabilistic logic programming (Manhaeve et al., 2021). These methods and our approach cope with different problems, and thus use different techniques. Specifically, while these methods complement logical reasoning with extra facts generated by neural reasoning, our approach filters inferred logical relations based on signals from neural reasoning.

Moreover, the neural-symbolic interaction in the above methods is unidirectional, i.e., they leverage either symbolic or neural reasoning to reinforce the other. By contrast, we allow bidirectional neural-symbolic interaction, where neural and symbolic reasoning mutually and iteratively reinforce each other for better performance.
# 5 Conclusion

To meet the challenge of reasoning-based MRC, we presented a neural-symbolic approach in which neural and symbolic reasoning mutually and iteratively reinforce each other via our new AdaLoGN model. We also enhanced graph-based neural reasoning with a novel subgraph-to-node message passing mechanism. Since these ideas are quite general, we believe they have great potential for a variety of applications beyond MRC, e.g., link prediction.

Error analysis has revealed some shortcomings of our approach. Currently we rely on syntactic tools to extract a raw TLG from text; we will explore other extraction methods to achieve higher quality. We also plan to apply more inference rules and to incorporate quantifiers to improve the expressivity of our symbolic reasoning.

# Acknowledgements

This work was supported in part by the NSFC (62072224) and in part by the Beijing Academy of Artificial Intelligence (BAAI).

# References

Erik Arakelyan, Daniel Daza, Pasquale Minervini, and Michael Cochez. 2021. Complex query answering with neural link predictors. In ICLR 2021.

Matthias Cetto, Christina Niklaus, André Freitas, and Siegfried Handschuh. 2018. Graphene: semantically-linked propositions in open information extraction. In COLING 2018, pages 2300-2311.

Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. 2020. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In AAAI-IAAI-EAAI 2020, pages 3438-3445.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP 2014, pages 1724-1734.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT 2019, pages 4171-4186.
Honghua Dong, Jiayuan Mao, Tian Lin, Chong Wang, Lihong Li, and Denny Zhou. 2019. Neural logic machines. In ICLR 2019.

Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In NAACL-HLT 2019, pages 2368-2378.

Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. 2017. Neural message passing for quantum chemistry. In ICML 2017, pages 1263-1272.

Yu Gu, Jeff Z. Pan, Gong Cheng, Heiko Paulheim, and Giorgos Stoilos. 2019. Local ABox consistency prediction with transparent TBoxes using gated graph neural networks. In NeSy 2019.

Nitish Gupta, Kevin Lin, Dan Roth, Sameer Singh, and Matt Gardner. 2020. Neural module networks for reasoning over text. In ICLR 2020.

Yinya Huang, Meng Fang, Yu Cao, Liwei Wang, and Xiaodan Liang. 2021a. DAGN: discourse-aware graph network for logical reasoning. In NAACL-HLT 2021, pages 5848-5855.

Zixian Huang, Yulin Shen, Xiao Li, Yang Wei, Gong Cheng, Lin Zhou, Xinyu Dai, and Yuzhong Qu. 2019. GeoSQA: A benchmark for scenario-based question answering in the geography domain at high school level. In EMNLP-IJCNLP 2019, pages 5865-5870.

Zixian Huang, Ao Wu, Yulin Shen, Gong Cheng, and Yuzhong Qu. 2021b. When retriever-reader meets scenario-based multiple-choice questions. In Findings of EMNLP 2021, pages 985-994.

Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In ICLR 2017.

Qimai Li, Zhichao Han, and Xiao-Ming Wu. 2018. Deeper insights into graph convolutional networks for semi-supervised learning. In AAAI-IAAI-EAAI 2018, pages 3538-3545.

Tao Li and Vivek Srikumar. 2019. Augmenting neural networks with first-order logic. In ACL 2019, pages 292-302.

Xiao Li, Yawei Sun, and Gong Cheng. 2021. TSQA: tabular scenario based question answering. In AAAI-IAAI-EAAI 2021, pages 13297-13305.
Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang. 2020. LogiQA: A challenge dataset for machine reading comprehension with logical reasoning. In IJCAI 2020, pages 3622-3628.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.

Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, and Luc De Raedt. 2021. Neural probabilistic logic programming in DeepProbLog. Artif. Intell., 298:103504.

William C. Mann and Sandra A. Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3):243-281.

Siru Ouyang, Zhuosheng Zhang, and Hai Zhao. 2021. Fact-driven logical reasoning. CoRR, abs/2105.10334.

Lin Qiu, Yunxuan Xiao, Yanru Qu, Hao Zhou, Lei Li, Weinan Zhang, and Yong Yu. 2019. Dynamically fused graph network for multi-hop reasoning. In ACL 2019, pages 6140-6150.

Luc De Raedt, Sebastian Dumancic, Robin Manhaeve, and Giuseppe Marra. 2020. From statistical relational to neuro-symbolic artificial intelligence. In IJCAI 2020, pages 4943-4950.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP 2016, pages 2383-2392.

Hongyu Ren, Weihua Hu, and Jure Leskovec. 2020. Query2box: reasoning over knowledge graphs in vector space using box embeddings. In ICLR 2020.

Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In ESWC 2018, pages 593-607.

Yawei Sun, Gong Cheng, and Yuzhong Qu. 2018. Reading comprehension with graph-based temporal-casual reasoning. In COLING 2018, pages 806-817.

Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019.
CommonsenseQA: A question answering challenge targeting commonsense knowledge. In NAACL-HLT 2019, pages 4149-4158.

Ming Tu, Kevin Huang, Guangtao Wang, Jing Huang, Xiaodong He, and Bowen Zhou. 2020. Select, answer and explain: Interpretable multi-hop reading comprehension over multiple documents. In AAAI-IAAI-EAAI 2020, pages 9073-9080.

Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph attention networks. In ICLR 2018.

Minjie Wang, Da Zheng, Zihao Ye, Quan Gan, Mufei Li, Xiang Song, Jinjing Zhou, Chao Ma, Lingfan Yu, Yu Gai, Tianjun Xiao, Tong He, George Karypis, Jinyang Li, and Zheng Zhang. 2019. Deep Graph Library: a graph-centric, highly-performant package for graph neural networks. CoRR, abs/1909.01315.

Siyuan Wang, Zhongkun Liu, Wanjun Zhong, Ming Zhou, Zhongyu Wei, Zhumin Chen, and Nan Duan. 2021a. From LSAT: The progress and challenges of complex reasoning. CoRR, abs/2108.00648.

Siyuan Wang, Wanjun Zhong, Duyu Tang, Zhongyu Wei, Zhihao Fan, Daxin Jiang, Ming Zhou, and Nan Duan. 2021b. Logic-driven context extension and data augmentation for logical reasoning of text. CoRR, abs/2105.03659.

Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. Trans. Assoc. Comput. Linguistics, 6:287-302.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: state-of-the-art natural language processing. In EMNLP 2020, pages 38-45.

Wenhan Xiong, Xiang Lorraine Li, Srini Iyer, Jingfei Du, Patrick S. H. Lewis, William Yang Wang, Yashar Mehdad, Scott Yih, Sebastian Riedel, Douwe Kiela, and Barlas Oguz. 2021.
Answering complex open-domain questions with multi-hop dense retrieval. In ICLR 2021.

Jingyi Xu, Zilu Zhang, Tal Friedman, Yitao Liang, and Guy Van den Broeck. 2018. A semantic loss function for deep learning with symbolic knowledge. In ICML 2018, pages 5498-5507.

Yichong Xu, Chenguang Zhu, Ruochen Xu, Yang Liu, Michael Zeng, and Xuedong Huang. 2021. Fusing context into knowledge graph for commonsense question answering. In Findings of ACL-IJCNLP 2021, pages 1201-1207.

Jun Yan, Mrigank Raman, Aaron Chan, Tianyu Zhang, Ryan A. Rossi, Handong Zhao, Sungchul Kim, Nedim Lipka, and Xiang Ren. 2021. Learning contextualized knowledge structures for commonsense reasoning. In Findings of ACL-IJCNLP 2021, pages 4038-4051.

Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In EMNLP 2018, pages 2369-2380.

Weihao Yu, Zihang Jiang, Yanfei Dong, and Jiashi Feng. 2020. ReClor: A reading comprehension dataset requiring logical reasoning. In ICLR 2020.

# Responsible NLP Research Checklist

Members of the ACL are responsible for adhering to the ACL code of ethics. The ARR Responsible NLP Research checklist is designed to encourage best practices for responsible research, addressing issues of research ethics, societal impact, and reproducibility.

Please read the Responsible NLP Research checklist guidelines for information on how to answer these questions. Note that not answering positively to a question is not grounds for rejection.

All supporting evidence can appear either in the main paper or the supplemental material. For each question, if you answer Yes, provide the section number; if you answer No, provide a justification.

Please do not modify, reorder, delete or add questions, question options or other wording of this document.

# A For every submission

A1 Did you discuss the limitations of your work?
If you answer Yes, provide the section number; if you answer No, provide a justification.

Yes

Section or justification: Section 3.7

A2 Did you discuss any potential risks of your work?

If you answer Yes, provide the section number; if you answer No, provide a justification.

N/A

A3 Do the abstract and introduction summarize the paper's main claims?

If you answer Yes, provide the section number; if you answer No, provide a justification.

Yes

Section or justification: Abstract, Section 1

# B Did you use or create scientific artifacts?

If you answer Yes, provide the section number; if you answer No, you can skip the rest of this section.

Yes

If yes:

B1 Did you cite the creators of artifacts you used?

If you answer Yes, provide the section number; if you answer No, provide a justification.

Yes

Section or justification: Sections 2.1, 2.2.2, 3.1, and 3.2

B2 Did you discuss the license or terms for use and/or distribution of any artifacts?

If you answer Yes, provide the section number; if you answer No, provide a justification.

Yes

Section or justification: The license for our code is available on GitHub.

B3 Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?

If you answer Yes, provide the section number; if you answer No, provide a justification.

N/A
B4 Did you discuss the steps taken to check whether the data that was collected/used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?

If you answer Yes, provide the section number; if you answer No, provide a justification.

N/A

B5 Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?

If you answer Yes, provide the section number; if you answer No, provide a justification.

Yes

Section or justification: Section 3.1

B6 Did you report relevant statistics like the number of examples, details of train/test/dev splits, etc. for the data that you used/created?

If you answer Yes, provide the section number; if you answer No, provide a justification.

Yes

Section or justification: Section 3.1

# C Did you run computational experiments?

If you answer Yes, provide the section number; if you answer No, you can skip the rest of this section.

Yes

If yes:

C1 Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?

If you answer Yes, provide the section number; if you answer No, provide a justification.

Yes

Section or justification: Section 3.2, Section 3.8

C2 Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?

If you answer Yes, provide the section number; if you answer No, provide a justification.

Yes

Section or justification: Section 3.2

C3 Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
If you answer Yes, provide the section number; if you answer No, provide a justification.

Yes

Section or justification: Section 3.2, Section 3.4, Section 3.5

C4 If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?

If you answer Yes, provide the section number; if you answer No, provide a justification.

Yes

Section or justification: Section 3.2

# D Did you use human annotators (e.g., crowdworkers) or research with human subjects?

If you answer Yes, provide the section number; if you answer No, you can skip the rest of this section.

No

If yes:

D1 Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?

If you answer Yes, provide the section number; if you answer No, provide a justification.

D2 Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)?

If you answer Yes, provide the section number; if you answer No, provide a justification.

D3 Did you discuss whether and how consent was obtained from people whose data you're using/curating (e.g., did your instructions explain how the data would be used)?

If you answer Yes, provide the section number; if you answer No, provide a justification.

D4 Was the data collection protocol approved (or determined exempt) by an ethics review board?

If you answer Yes, provide the section number; if you answer No, provide a justification.
D5 Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?

If you answer Yes, provide the section number; if you answer No, provide a justification.

# AdapLeR: Speeding up Inference by Adaptive Length Reduction

Ali Modarressi\* Hosein Mohebbi\* Mohammad Taher Pilehvar

Iran University of Science and Technology, Iran

Cognitive Science and AI,
Tilburg University, Netherlands

Tehran Institute for Advanced Studies, Khatam University, Iran

m_modarressi@comp.iust.ac.ir, h.mohebbi@uvt.nl, mp792@cam.ac.uk

# Abstract

Pre-trained language models have shown stellar performance in various downstream tasks. However, this usually comes at the cost of high latency and computation, hindering their usage in resource-limited settings. In this work, we propose a novel approach for reducing the computational cost of BERT with minimal loss in downstream performance. Our method dynamically eliminates less contributing tokens through layers, resulting in shorter lengths and consequently lower computational cost. To determine the importance of each token representation, we train a Contribution Predictor for each layer using a gradient-based saliency method. Our experiments on several diverse classification tasks show speedups up to $22\mathrm{x}$ during inference time without much sacrifice in performance. We also validate the quality of the selected tokens in our method using human annotations in the ERASER benchmark. In comparison to other widely used strategies for selecting important tokens, such as saliency and attention, our proposed method has a significantly lower false positive rate in generating rationales. Our code is freely available at https://github.com/amodaresi/AdapLeR.

# 1 Introduction

While large-scale pre-trained language models exhibit remarkable performance on various NLP benchmarks, their excessive computational costs and high inference latency have limited their usage in resource-limited settings.
In this regard, there have been various attempts at improving the efficiency of BERT-based models (Devlin et al., 2019), including knowledge distillation (Hinton et al., 2015; Sanh et al., 2019; Sun et al., 2019, 2020; Jiao et al., 2020), quantization (Gong et al., 2014; Shen et al., 2020; Tambe et al., 2021), weight pruning (Han et al., 2016; He et al., 2017; Michel et al., 2019; Sanh et al., 2020), and progressive module replacing (Xu et al., 2020). Despite providing a significant reduction in model size, these techniques are generally static at inference time, i.e., they dedicate the same amount of computation to all inputs, irrespective of their difficulty.

A number of techniques have also been proposed to make efficiency enhancement sensitive to inputs. The early exit mechanism (Schwartz et al., 2020b; Liao et al., 2021; Xin et al., 2020; Liu et al., 2020; Xin et al., 2021; Sun et al., 2021; Eyzaguirre et al., 2021) is a commonly used method in which each layer in the model is coupled with an intermediate classifier to predict the target label. At inference, a halting condition determines whether the model allows an example to exit without passing through all layers. Various halting conditions have been proposed, including Shannon's entropy (Xin et al., 2020; Liu et al., 2020), softmax outputs with temperature calibration (Schwartz et al., 2020b), trained confidence predictors (Xin et al., 2021), and the number of agreements between predictions of intermediate classifiers (Zhou et al., 2020).

Most of these input-adaptive techniques compress the model from the depth perspective (i.e., reducing the number of involved encoder layers). However, one can also view compression from the width perspective (Goyal et al., 2020; Ye et al., 2021), i.e., reducing the length of hidden states.
This is particularly promising, as recent analytical studies have shown that token representations encode redundant information (Klafka and Ettinger, 2020; Ethayarajh, 2019). Among these redundancies, some tokens carry more task-specific information than others (Mohebbi et al., 2021), suggesting that only these tokens need to be considered throughout the model. Moreover, in contrast to layer-wise pruning, token-level pruning does not come at the cost of reducing the model's capacity for complex reasoning (Sanh et al., 2019; Sun et al., 2019). PoWER-BERT (Goyal et al., 2020) is one of the first such techniques; it reduces inference time by eliminating redundant token representations through layers based on self-attention weights. Several studies have followed (Kim and Cho, 2021; Wang et al., 2021); however, they usually optimize a single token elimination configuration across the entire dataset, resulting in a static model. In addition, their token selection strategies are based on attention weights, which can result in a suboptimal solution (Ye et al., 2021).

In this work, we introduce Adaptive Length Reduction (AdapLeR). Instead of relying on attention weights, our method trains a set of Contribution Predictors (CPs) to estimate tokens' saliency scores at inference. We show that this choice results in more reliable scores than attention weights in measuring tokens' contributions. The study most closely related to ours is TR-BERT (Ye et al., 2021), which leverages reinforcement learning to develop an input-adaptive token selection policy network. However, as pointed out by the authors, the problem has a large search space, making it difficult for RL to solve. To mitigate this, they resorted to extra heuristics, such as imitation learning (Hussein et al., 2017) for warming up the training of the policy network, action sampling for limiting the search space, and knowledge distillation for transferring knowledge from the intact backbone fine-tuned model.
All of these steps significantly increase the training cost. Hence, they perform token selection at only two layers. In contrast, we propose a simple but effective method that gradually eliminates tokens in each layer throughout the training phase using a soft-removal function, which allows the model to be trained batch-wise while remaining adaptable to various inputs. It is also worth noting that, in contrast to our approach, the above studies rely on top-k operations for identifying the k most important tokens during training or inference, which can be expensive without a specific hardware architecture (Wang et al., 2021).

In summary, our contributions are threefold:

- We couple a simple Contribution Predictor (CP) with each layer of the model to estimate tokens' contribution scores and eliminate redundant representations.
- Instead of instant token removal, we gradually mask out less contributing token representations by employing a novel soft-removal function.
- We also show the superiority of our token selection strategy over other widely used strategies by using human rationales.

# 2 Background

# 2.1 Self-attention Weights

Self-attention is a core component of the Transformer (Vaswani et al., 2017), which relates different positions of a single sequence of token representations $(x_{1},\dots,x_{n})$ to build contextualized representations. To this end, each input vector $x_{i}$ is multiplied by the corresponding trainable matrices $Q$, $K$, and $V$ to respectively produce query $(q_{i})$, key $(k_{i})$, and value $(v_{i})$ vectors. To construct the output representation $z_{i}$, a series of weights is computed by the dot product of $q_{i}$ with every $k_{j}$ in all time steps.
Before applying a softmax function, these values are divided by a scaling factor and then added to an attention mask vector $\mathbf{m}$, which is zero for positions we wish to attend to and $-\infty$ (in practice, $-10000$) for padded tokens (Vaswani et al., 2017). Mathematically, for a single attention head, the attention weight from token $x_{i}$ to token $x_{j}$ in the same input sequence can be written as:

$$
\alpha_{i,j} = \operatorname*{softmax}_{x_j \in \mathcal{X}}\left(\frac{q_i k_j^{\top}}{\sqrt{d}} + m_j\right) \in \mathbb{R} \tag{1}
$$

The time complexity of this operation is $O(n^{2})$ due to the dot product $q_{i}k_{j}^{\top}$, where $n$ is the input sequence length. This impedes the usage of self-attention-based models in low-resource settings.

While self-attention is one of the most white-box components in Transformer-based models, relying on raw attention weights as an explanation can be misleading, given that they are not necessarily responsible for determining the contribution of each token to the final classifier's decision (Jain and Wallace, 2019; Serrano and Smith, 2019; Abnar and Zuidema, 2020). This is because raw attention weights reflect only the local mixture of information in each layer and cannot capture the global flow of information through the entire model (Pascual et al., 2021).

# 2.2 Gradient-based Saliency Scores

Gradient-based methods provide alternatives to attention weights for computing the importance of a

![](images/6c7068cdcf06a92570b9f6618785e6aa362d0119346ac50a8b2bdf659fe248cc.jpg)
Figure 1: To reduce the inference computation, in each layer (1) the attribution score of each token representation is estimated, and (2) token representations whose importance scores fall below a reduced uniform-level threshold $(\delta^{\ell} = \eta^{\ell} / n)$ are removed.
Since the final layer's classifier is connected to the [CLS] token, which can act as a pooler within each layer, it is the only token that is always retained regardless of its score.

specific input feature. Despite having been widely utilized in other fields earlier (Ancona et al., 2018; Simonyan et al., 2013; Sundararajan et al., 2017; Smilkov et al., 2017), these methods have only recently become popular in NLP studies (Bastings and Filippova, 2020; Li et al., 2016; Yuan et al., 2019). They are based on computing the first-order derivative of the output logit $y_{c}$ w.r.t. the input embedding $h_{i}^{0}$ (initial hidden states), where $c$ can be the true class label, to find the most important input features, or the predicted class, to interpret the model's behavior. Taking the norm of the output derivatives yields sensitivity (Ancona et al., 2018), which indicates the change in the model's output with respect to changes in specific input dimensions. Instead, by multiplying gradients with input features, we arrive at gradient $\times$ input (Bastings and Filippova, 2020), also known as saliency, which also considers the direction of input vectors in determining the most important tokens. Since these scores are computed for each dimension of the embedding vectors, an aggregation method such as the L2 norm or the mean is needed to produce one score per input token (Atanasova et al., 2020a):

$$
S_{i} = \left\| \frac{\partial y_{c}}{\partial h_{i}^{0}} \odot h_{i}^{0} \right\|_{2} \tag{2}
$$

# 3 Methodology

As shown in Figure 1, our approach relies on dropping low-contributing tokens in each layer and passing only the more important ones to the next. Therefore, an important step is to measure the importance of each token. To this end, we opted for saliency scores, which have recently been shown to be a reliable criterion for measuring tokens' contributions (Bastings and Filippova, 2020; Pascual et al., 2021).
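As a concrete illustration of Equation 2, the following numpy sketch computes gradient-x-input saliency for a toy linear logit whose gradient is known in closed form, avoiding any autograd library; all names and dimensions here are hypothetical, not from the released code:

```python
import numpy as np

def saliency(grad, h):
    # Gradient-x-input saliency (Equation 2): L2 norm over the embedding
    # dimension of the elementwise product of d y_c / d h_i with h_i.
    return np.linalg.norm(grad * h, axis=-1)

# Toy logit y_c = sum_i w . h_i, so d y_c / d h_i = w for every token.
rng = np.random.default_rng(0)
n, d = 5, 16
H = rng.normal(size=(n, d))            # input embeddings h^0
w = rng.normal(size=d)                 # gradient of y_c w.r.t. each h_i
S = saliency(np.broadcast_to(w, H.shape), H)
print(S.shape)   # one non-negative score per input token
```

With a real model, `grad` would instead come from one backward pass of the fine-tuned network, which is exactly the cost the paper's Contribution Predictors avoid at inference.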
In Section 5.1 we present a series of quantitative analyses that support this choice. In what follows, we first describe how we estimate saliency scores at inference time using a set of Contribution Predictors (CPs), and then elaborate on how we leverage these predictors during inference (Section 3.2) and training (Section 3.3).

# 3.1 Contribution Predictor

Computing gradients during inference is problematic, as backpropagation prolongs inference time, which is contrary to our main goal. To circumvent this, we simply add a CP after each layer $\ell$ of the model to estimate a contribution score for each token representation, i.e., $\tilde{S}_i^\ell$. The model then decides which tokens should be passed to the next layer based on the values of $\tilde{S}_i^\ell$. The CP computes $\tilde{S}_i^\ell$ for each token using an MLP followed by a softmax activation function. We argue that, despite its limited learning capacity, the MLP is sufficient for estimating scores that are more generalized and relevant than vanilla saliency values. We present a quantitative analysis on this topic in Section 5.

# 3.2 Model Inference

Most BERT-based models consist of $L$ encoder layers. The input sequence of $n$ tokens is usually passed through an embedding layer to build the initial hidden states of the model, $h^0$. Each encoder layer then produces the next hidden states from the ones of the previous layer:

$$
h^{\ell} = \operatorname{Encoder}_{\ell}\left(h^{\ell-1}\right) \tag{3}
$$

In our approach, we eliminate less contributing token representations before delivering hidden states to the next encoder. Tokens are selected based on the contribution scores $\tilde{S}^{\ell}$ obtained from the CP of the corresponding layer $\ell$. As the sum of these scores is equal to one, a uniform level indicates that all tokens contribute equally to the prediction and should be retained.
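A minimal numpy sketch of a CP (an MLP plus a sequence-level softmax, as in Section 3.1), together with the threshold-based selection rule of Section 3.2 using the cutoff $\delta^{\ell} = \eta^{\ell}/n$; the weights, dimensions, and helper names are hypothetical:

```python
import numpy as np

def cp_scores(H, W1, b1, W2, b2):
    # Contribution Predictor sketch: a small MLP scores each token's hidden
    # state, then a softmax over the sequence yields contribution scores
    # that sum to one.
    z = np.maximum(H @ W1 + b1, 0.0)        # ReLU hidden layer
    logits = (z @ W2 + b2).squeeze(-1)      # one logit per token
    e = np.exp(logits - logits.max())
    return e / e.sum()

def select_tokens(scores, eta):
    # Keep tokens scoring above delta = eta / n; position 0 ([CLS]) is
    # always retained.
    delta = eta / len(scores)
    return [i for i, s in enumerate(scores) if i == 0 or s > delta]

rng = np.random.default_rng(0)
n, d, hdim = 6, 32, 16
H = rng.normal(size=(n, d))                 # hidden states of one layer
W1, b1 = rng.normal(size=(d, hdim)) * 0.1, np.zeros(hdim)
W2, b2 = rng.normal(size=(hdim, 1)) * 0.1, np.zeros(1)
scores = cp_scores(H, W1, b1, W2, b2)
kept = select_tokens(scores, eta=0.8)
print(kept)   # indices passed on to the next layer; 0 is always kept
```

The surviving hidden states (and the corresponding attention-mask entries) would then be gathered and fed to the next encoder layer.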
On the other hand, if the contribution scores are concentrated on only a subset of tokens, the lower-scoring tokens can be viewed as unnecessary. Given that the final classification head uses the last hidden state of the [CLS] token, we preserve this token's representation in all layers. Other tokens may nonetheless be removed from a layer when [CLS] has a significantly higher estimated contribution score than the others. Based on this intuition, we define a cutoff threshold relative to the uniform level, $\delta^{\ell} = \eta^{\ell} \cdot 1/n$ with $0 < \eta^{\ell} \leq 1$, to distinguish important tokens. Tokens are considered important if their contribution score exceeds $\delta^{\ell}$ (a value equal to or smaller than the uniform score). Intuitively, a larger $\eta$ yields a higher cutoff $\delta$, thereby dropping a larger number of tokens and producing more speedup. The value of $\eta$ also determines the extent to which we rely on the CP's estimations: if the CP's estimations are deemed inaccurate, their impact can be reduced by lowering $\eta$. We train each layer's $\eta^{\ell}$ using an auxiliary training objective, which allows the model to adjust the cutoff values to control the speedup-performance tradeoff. Also, since each input instance takes a different computational path during the token removal process, the batch size at inference time should be equal to one (single-instance usage), as in other dynamic approaches (Zhou et al., 2020; Liu et al., 2020; Ye et al., 2021; Eyzaguirre et al., 2021; Xin et al., 2020).

# 3.3 Model Training

Training consists of three phases: initial fine-tuning, saliency extraction, and adaptive length retraining. In the first phase, we simply fine-tune the backbone model (BERT) on a given target task.
We then extract the saliencies from the three top-performing checkpoints of the fine-tuning process and average them to mitigate potential inconsistencies in saliency scores (cf. Section 2.2).

![](images/b616bdafcc0fd66e20c6104e55e9030c6aa040164a3e956d3c2905a4d9277be8.jpg)
Figure 2: The soft-removal function plotted with $\lambda \in \{3,9,27,81\}$ and $\delta^{\ell} = 0.25$. As $\lambda$ increases, the removal region (1) gets steeper, while the other zone (2), which is almost horizontal, approaches the zero level.

The final step is to retrain the fine-tuned model using an adaptive length reduction procedure. In this phase, a non-linear function gradually fades out token representations throughout the training process. Each CP is jointly trained with the rest of the model using the saliencies extracted in the previous phase alongside the target task labels. We also define a speedup tuning objective that determines the thresholds (via tuning $\eta$) to control the performance-speedup trade-off. In the following, we elaborate on this procedure.

Soft-removal function. If tokens were immediately dropped during training, as in inference mode, the effect of dropping tokens could not be captured through gradient backpropagation. Batch-wise training would also be problematic in this scenario, as the structure would vary with each example. Hence, inspired by the padding mechanism of self-attention models (Vaswani et al., 2017), we introduce a procedure that gradually masks out less contributing token representations.
In each layer, after predicting the contribution scores, instead of instantly removing token representations, we accumulate a negative mask onto the attention mask vector $M$ using a soft-removal function:

$$
m_{i}^{-}(\tilde{S}_{i}^{\ell}) = \begin{cases} \lambda_{adj}(\tilde{S}_{i}^{\ell} - \delta^{\ell}) - \frac{\beta}{\lambda} & \tilde{S}_{i}^{\ell} < \delta^{\ell} \\[4pt] \dfrac{(\tilde{S}_{i}^{\ell} - 1)\beta}{(1 - \delta^{\ell})\lambda} & \tilde{S}_{i}^{\ell} \geq \delta^{\ell} \end{cases} \tag{4}
$$

This function consists of two main zones (Figure 2). In the first term, less important tokens, with scores lower than the threshold $\delta^{\ell}$, are assigned increasingly negative mask values as they get more distant from $\delta^{\ell}$. The slope is determined by $\lambda_{adj} = \lambda/\delta$, where $\lambda$ is a hyperparameter that is increased exponentially after each epoch (e.g., $\lambda \gets 10 \times \lambda$ at the end of each epoch). Increasing $\lambda$ makes the soft-removal function steeper and more decisive in masking the representations. To avoid zero gradients during training, we define $0 < \beta < 0.1$ to construct a small negative slope (similar to the well-known Leaky ReLU of Maas et al. 2013) for tokens whose contribution scores are above the $\delta^\ell$ threshold. Consider a scenario in which $\eta^\ell$ sharply drops, causing most of the $\tilde{S}_i^\ell$ to exceed the $\delta^\ell$ threshold. In this case, the non-zero slope in the second term of Equation 4 still provides a gradient signal, which facilitates optimizing $\eta^\ell$.

Training the Contribution Predictors. The CPs are trained with an additional term based on the KL divergence between each layer's CP output and the extracted saliencies.
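Equation 4 can be sketched directly; the $\delta$, $\lambda$, and $\beta$ values below are illustrative (Figure 2 uses $\delta^{\ell}=0.25$). Note that the two branches meet at $-\beta/\lambda$, so the function is continuous at the threshold:

```python
import math

def soft_removal(s, delta, lam, beta=0.05):
    # Piecewise soft-removal mask of Equation 4: below delta the mask grows
    # steeply negative (slope lam / delta); at or above delta a gentle
    # Leaky-ReLU-like slope keeps gradients from vanishing.
    if s < delta:
        lam_adj = lam / delta
        return lam_adj * (s - delta) - beta / lam
    return (s - 1.0) * beta / ((1.0 - delta) * lam)

delta, lam = 0.25, 9.0
low = soft_removal(0.05, delta, lam)   # far below threshold: strongly negative
at = soft_removal(delta, delta, lam)   # at the threshold: -beta / lam
high = soft_removal(0.90, delta, lam)  # above threshold: tiny negative value
print(low, at, high)
```

Adding these values to the (non-positive) attention mask pushes low-scoring tokens toward a $-\infty$-like mask, mimicking padding, while leaving high-scoring tokens almost untouched.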
The main training objective is the minimization of the following loss:

$$
\mathcal{L} = \mathcal{L}_{\mathrm{CE}} + \gamma \mathcal{L}_{\mathrm{CP}} \tag{5}
$$

where $\gamma$ is a hyperparameter that specifies the amount of emphasis on the CP training loss:

$$
\mathcal{L}_{\mathrm{CP}} = \sum_{\ell=0}^{L-1} (L-\ell)\, D_{\mathrm{KL}}(\hat{S}^{\ell} \,\|\, \tilde{S}^{\ell}) = \sum_{\ell=0}^{L-1} (L-\ell) \sum_{i=1}^{n} \hat{S}_{i}^{\ell} \log\!\left(\frac{\hat{S}_{i}^{\ell}}{\tilde{S}_{i}^{\ell}}\right) \tag{6}
$$

Since $S$ is based on the input embeddings, the [CLS] token usually shows a low contribution because its input embedding carries no contextual information. As we leverage the representation of the [CLS] token in the last layer for classification, this token acts as a pooler and gathers information about the context of the input. In other words, the token can gain contribution as it passes through the model. To this end, we amplify the contribution score of [CLS] and renormalize the distribution $\hat{S}^{\ell}$ with a trainable parameter $\theta^{\ell}$:

$$
\hat{S}_{i}^{\ell} = \frac{\theta^{\ell} S_{1}^{\ell} \mathbf{1}[i=1] + S_{i}^{\ell} \mathbf{1}[i>1]}{\theta^{\ell} S_{1}^{\ell} + \sum_{i=2}^{n} S_{i}^{\ell}} \tag{7}
$$

Through this procedure, the next objective (discussed in the following paragraph) gains the ability to tune the amount of pooling, and consequently to control the amount of speedup. Larger $\theta$ values push the CPs to shift contribution towards the [CLS] token, so that it gathers most of the task-specific information and the model avoids carrying redundant tokens through its layers.

Speedup Tuning. In the speedup tuning process, we combine the cross-entropy loss of the target classification task with a length loss, which is the expected number of unmasked token representations across all layers.
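The CP loss of Equation 6 reduces to a few lines of plain Python; the two-layer, three-token distributions below are hypothetical:

```python
import math

def cp_loss(S_hat, S_tilde):
    # Equation 6: layer-weighted KL divergence between target saliency
    # distributions S_hat and CP outputs S_tilde (layer 0 first, so
    # earlier layers receive the larger weight L - ell).
    L = len(S_hat)
    total = 0.0
    for ell in range(L):
        kl = sum(p * math.log(p / q)
                 for p, q in zip(S_hat[ell], S_tilde[ell]) if p > 0)
        total += (L - ell) * kl
    return total

S_hat   = [[0.7, 0.2, 0.1], [0.5, 0.3, 0.2]]
S_tilde = [[0.6, 0.3, 0.1], [0.5, 0.3, 0.2]]
print(cp_loss(S_hat, S_tilde))   # positive: only layer 0 disagrees
print(cp_loss(S_hat, S_hat))     # 0.0 for identical distributions
```

The $(L-\ell)$ weighting penalizes mis-estimation more heavily in early layers, where a wrongly dropped token cannot be recovered later.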
Considering that we have a non-positive, continuous attention mask $M$, the length loss of a single layer is the sum of the exponentials of the mask values, $\exp(m_i)$, which maps the masking range $[-\infty, 0]$ to a $[0, 1]$ bound (0: fully masked/removed; 1: fully retained).

$$
\mathcal{L}_{\mathrm{SPD./PERF.}} = \mathcal{L}_{\mathrm{CE}} + \phi \mathcal{L}_{\mathrm{LENGTH}}, \qquad \mathcal{L}_{\mathrm{LENGTH}} = \sum_{\ell=1}^{L} \sum_{i=1}^{n} \exp\left(m_{i}^{\ell}\right) \tag{8}
$$

Equation 8 shows how the length loss is computed inside the model and how it is added to the main classification loss. During training, we assign a separate optimization process that tunes $\eta$ and $\theta$ to adjust the thresholds and the amount of [CLS] pooling, alongside the CP training.

We treat this objective as a separate optimization problem rather than merging it with the previous one because, in the latter case, the CPs could be influenced by the length loss and manipulate the contribution scores of some tokens regardless of their real influence. In other words, the first objective solves the task and makes it explainable with the CPs, while the secondary objective builds the speedup by tuning the threshold levels and the amount of pooling in each layer.

# 4 Experiments

# 4.1 Datasets

To verify the effectiveness of AdapLeR on inference speedup, we selected eight diverse text classification datasets. To cover a variety of tasks, we utilized SST-2 (Socher et al., 2013) and IMDB (Maas et al., 2011) for sentiment, MRPC (Dolan and Brockett, 2005) for paraphrase, AG's News (Zhang et al., 2015) for topic classification, DBpedia (Lehmann et al., 2015) for knowledge extraction, MNLI (Williams et al., 2018) for NLI,
| Model | SST-2 | IMDB | HateXplain | MRPC | MNLI | QNLI | AG's News | DBpedia |
|---|---|---|---|---|---|---|---|---|
| BERT | 92.7 / 1.00x | 93.8 / 1.00x | 68.3 / 1.00x | 87.5 / 1.00x | 84.2 / 1.00x | 90.3 / 1.00x | 94.4 / 1.00x | 99.3 / 1.00x |
| DistilBERT | 92.2 / 2.00x | 92.9 / 2.00x | 68.2 / 2.00x | 88.0 / 2.00x | 81.8 / 2.00x | 88.1 / 2.00x | 94.2 / 2.00x | 99.3 / 2.00x |
| PoWER-BERT | 92.1 / 1.18x | 92.2 / 1.70x | 66.9 / 2.69x | 88.0 / 1.07x | 82.9 / 1.10x | 89.7 / 1.23x | 92.1 / 12.50x | 98.1 / 14.80x |
| TR-BERT | 92.1 / 1.46x | 93.2 / 2.90x | 67.9 / 2.23x | 81.9 / 1.16x | 84.8 / 1.00x | 89.0 / 1.09x | 93.2 / 10.20x | 98.9 / 10.01x |
| AdapLeR | 92.3 / 1.49x | 91.7 / 3.21x | 68.6 / 4.73x | 87.6 / 1.27x | 82.9 / 1.42x | 89.3 / 1.47x | 92.5 / 17.10x | 98.9 / 22.23x |

Table 1: Comparison of our proposed method (AdapLeR) with other baselines on eight classification tasks in terms of performance and speedup. Each cell reports metric / speedup; the metric is accuracy for all tasks except MRPC, which uses F1. For the MNLI task, the speedup and performance values are averages over the matched and mismatched test sets.

QNLI (Rajpurkar et al., 2016) for question answering, and HateXplain (Mathew et al., 2021) for hate speech. Evaluations are based on the test split of each dataset. For the datasets in the GLUE benchmark (Wang et al., 2018), test results were acquired by submitting the test predictions to the evaluation server.

# 4.2 Experimental Setup

As our baseline, we report results for the pre-trained BERT model (base-uncased) (Devlin et al., 2019), which is also the backbone of AdapLeR. We also compare against three other approaches: DistilBERT (uncased) (Sanh et al., 2019) as a static compression method, and PoWER-BERT and TR-BERT as two strong length reduction methods (cf. Section 1). We used the provided implementations and suggested hyperparameters to train these baselines. To fine-tune the backbone model, we used the same hyperparameters across all tasks (see Section D for details). The backbone model and our model implementation are based on HuggingFace's Transformers library (Wolf et al., 2020). Training and evaluation were conducted on a dual 2080Ti 11GB GPU machine with multiple runs.

Hyperparameter Selection. Overall, we introduced four hyperparameters $(\gamma, \phi, \lambda, \beta)$ that are involved in the training process. Among these, $\phi$ and $\gamma$ are the primary terms with considerable effects on AdapLeR's downstream performance and speedup. This makes our approach comparable to existing techniques (Goyal et al., 2020; Ye et al., 2021), which usually have two or three hyperparameters adjusted per task.
We used grid search to find the optimal values for these two terms, while keeping the other hyperparameters constant across all datasets. Hyperparameter selection is further discussed in Section D.

FLOPs Computation. Following Ye et al. (2021) and Liu et al. (2020), we measured computational complexity in terms of the number of floating-point operations (FLOPs) in a single inference procedure. This allows us to assess models' speedups independently of their operating environment (e.g., CPU/GPU). The total FLOPs of a given model is the sum of the measured FLOPs over all test examples. A model's speedup is then defined as the total FLOPs measured on BERT (our baseline) divided by the corresponding model's total FLOPs. For a fair comparison, we also computed FLOPs for PoWER-BERT in single-instance mode, as described in Section C.

# 4.3 Results

Table 1 shows performance and speedup for AdapLeR and the comparison models across eight different datasets. While preserving the same level of performance, AdapLeR outperforms the other techniques in terms of speedup across all tasks (ranging from $+0.2\mathrm{x}$ to $+7.4\mathrm{x}$ over the best model on each dataset).

It is noteworthy that the results also reveal some dependency on the type of task. Some tasks may need less contextual information during inference and can be classified using only a fraction of the input tokens. For instance, in AG's News, the topic of a sentence might be identifiable from a single token (e.g., soccer $\rightarrow$ Topic: Sports; see Figure 6 in the Appendix for an example).
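The speedup metric from the FLOPs Computation paragraph above reduces to a simple ratio of totals; the per-example FLOP counts below are hypothetical:

```python
def speedup(baseline_flops, model_flops):
    # Speedup as defined in Section 4.2: total baseline (BERT) FLOPs over
    # the test set divided by the candidate model's total FLOPs.
    return sum(baseline_flops) / sum(model_flops)

# Made-up per-example FLOP counts over a three-example test set.
bert_flops = [4.0e9, 6.0e9, 5.0e9]
model_flops = [1.0e9, 2.0e9, 2.0e9]
print(f"{speedup(bert_flops, model_flops):.2f}x")   # 3.00x
```

Because the totals are summed per example, input-adaptive methods are rewarded for spending fewer FLOPs on easy inputs even when hard inputs still use the full budget.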
![](images/fdaf5c3689483afc676b59978fcecafc61c98194a599cd043c96507339322c1a.jpg)

![](images/d761e9881e754467c40e56937b474cdf0a6febecad647af7b9b97334775e079d.jpg)
Figure 3: Accuracy-speedup trade-off curves for AdapLeR and two other state-of-the-art reduction methods (TR: TR-BERT; PoWER: PoWER-BERT) on two different tasks.

PoWER-BERT adopts attention weights for its token selection, which require at least one layer of computation to be determined, and TR-BERT applies token elimination in only two layers to reduce the training search space. In contrast, our procedure performs token elimination in all layers of the model, enabling a more effective removal of redundant tokens. On the other hand, we observe that TR-BERT and PoWER-BERT lack any speedup gains on tasks such as QNLI, MNLI, and MRPC, which need a higher degree of contextual information during inference. AdapLeR, however, offers some speedup even on these tasks.

Speedup-Performance Tradeoff. To take a closer look at the efficiency of AdapLeR compared with the other state-of-the-art length reduction methods, we illustrate speedup-accuracy curves in Figure 3. We provide these curves for two tasks on which the other length reduction methods show speedups comparable to AdapLeR's. For each curve, the points were obtained by tuning the most influential hyperparameters of the corresponding model. As we can see, AdapLeR significantly outperforms the other two approaches on both tasks. An interesting observation is that the curves for TR-BERT and AdapLeR are much higher than that of PoWER-BERT. This can be attributed to the input-adaptive procedure employed by the former two methods for determining the number of reduced tokens, whereas PoWER-BERT adopts a fixed retention configuration for token elimination.
| Strategy | Movie Reviews (Acc. / Speedup) | MultiRC (Acc. / Speedup) |
|---|---|---|
| Full input | 93.3 / 1.0x | 67.7 / 1.0x |
| Human rationale | 96.7 / 3.7x | 76.6 / 4.6x |
| Saliency | **92.3** / 3.7x | **66.4** / 4.4x |
| Attention ALL | 78.5 / 3.7x | 62.9 / 4.4x |
| Attention [CLS] | 70.3 / 3.7x | 63.7 / 4.4x |

Table 2: Accuracy and speedup when the most important input tokens during fine-tuning are selected based on attention and saliency strategies, and on human rationales (the upper bound). Bold values indicate the best strategy for each task (the performance closest to the upper bound).

# 5 Analysis

In this section, we first conduct an experiment to support our choice of saliency scores as supervision for measuring the importance of token representations. Next, we evaluate the behavior of the Contribution Predictors in identifying the most important tokens in AdapLeR.

# 5.1 Rationale as an Upper Bound

A natural question that arises when dealing with token pruning is that of the importance measure: what is the most appropriate criterion for assessing the relative importance of tokens within a sentence? We resort to human rationales as a reliable upper bound for measuring token importance. To this end, we used the ERASER benchmark (DeYoung et al., 2020), which contains multiple tasks for which important spans of the input text have been highlighted by humans as supporting evidence (aka "rationales"). Among the tasks in the benchmark, we opted for two diverse classification tasks: Movie Reviews (Zaidan and Eisner, 2008) and MultiRC (Khashabi et al., 2018). In the former task, the model predicts the sentiment of a passage. The latter contains a passage, a question, and multiple candidate answers, which is cast as a binary classification task over passage/question/answer triplets in the ERASER benchmark.

To verify the reliability of human rationales, we fine-tuned BERT on the rationales only, i.e., excluding those tokens that are not highlighted as important in the input. In Table 2, the first two rows show the performance of BERT on the two tasks with the full input and with human rationales only.
We see that fine-tuning merely on rationales not only yields less computation cost, but also results in better performance compared with the full-input setting. Obviously, human annotations are not available for a whole range of downstream NLP tasks; this criterion is therefore infeasible in practice and can only be viewed as an upper bound for evaluating different strategies for measuring token importance.

# SST-2 (dev) - Label: Negative

Layer 0: [CLS] what was once original has been co - opted so frequently that it now seems pedestrian . [SEP]

Layer 5: [CLS] what was once original has been co - opted so frequently that it now seems pedestrian . [SEP]

Layer 11: [CLS] what was once original has been co - opted so frequently that it now seems pedestrian . [SEP]

# QNLI (dev) - Label: Entailment

Layer 0: [CLS] what did Tesla patent in 1891 ? [SEP] in the same year , he patented the Tesla coil . [SEP]
Layer 5: [CLS] what did Tesla patent in 1891 ? [SEP] in the same year , he patented the Tesla coil
Layer 11: [CLS] what did Tesla patent in 1891 ? [SEP] in the same year , he patented the Tesla coil

Figure 4: Illustration of the contribution scores obtained by CPs in three different layers of the model for two input examples from the SST-2 (sentiment) and QNLI (question-answering NLI) tasks. The contribution scores are shown by color intensity. Only the highlighted token representations are processed in each layer. See Figure 6 in the Appendix for full-layer plots.

# 5.2 Saliency vs. Attention

We investigated the effectiveness of saliency scores and self-attention weights, two commonly used strategies for measuring the importance of tokens in pre-trained language models. To compute these, we first fine-tuned BERT with all input tokens for a given target task.
We then obtained saliency scores with respect to the tokens in the input embedding layer. This has two advantages. First, representations in the embedding layer are non-contextualized, allowing us to measure the importance of each token independently of the others. Second, backpropagating gradients through the layers to the beginning of the model provides aggregated values for the relative importance of each token based on the entire model. Similarly, we aggregated the self-attention weights across all layers of the model using a post-processed variant of attention called attention rollout (Abnar and Zuidema, 2020), a popular technique in which the attention weight matrix of each layer is multiplied with the preceding ones to form aggregated attention values.

To assign an importance score to each token, we examined two different interpretations of attention weights. The first strategy is the one adopted by PoWER-BERT (Goyal et al., 2020), in which for each token we accumulate the attention values it receives from other tokens. Additionally, we measured how much the [CLS] token attends to each token in the sentence, a strategy that has been widely used in interpretability studies around BERT (Abnar and Zuidema, 2020; Chrysostomou and Aletras, 2021; Jain et al., 2020, inter alia). For a fair comparison, for each sentence in the test set, we selected the top-$k$ salient and attended words, with $k$ being the number of words annotated as rationales.
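Attention rollout, used above for aggregating attention across layers, can be sketched as follows. The residual mixing with the identity follows Abnar and Zuidema (2020); the toy row-stochastic matrices here are random stand-ins for real attention maps:

```python
import numpy as np

def attention_rollout(attentions):
    # Attention rollout: mix each layer's attention with the identity to
    # account for residual connections, renormalize the rows, then multiply
    # the matrices layer by layer to track cross-layer information flow.
    n = attentions[0].shape[0]
    rollout = np.eye(n)
    for A in attentions:
        A_res = 0.5 * A + 0.5 * np.eye(n)
        A_res = A_res / A_res.sum(axis=-1, keepdims=True)
        rollout = A_res @ rollout
    return rollout

rng = np.random.default_rng(0)
n, L = 4, 3
layers = [rng.dirichlet(np.ones(n), size=n) for _ in range(L)]  # row-stochastic
R = attention_rollout(layers)
print(R.sum(axis=-1))   # each row is still a distribution over input tokens
```

Row $i$ of the result estimates how much of token $i$'s final representation traces back to each input token, which is what makes rollout usable as a token-importance score.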
Computing these weights is convenient, as they are already computed during the forward pass, whereas computing saliency requires an additional backpropagation step. Note that in our approach, saliency scores are cheaply estimated at inference time by the pre-trained CPs.

# 5.3 Contribution Predictor Evaluation

In this section we validate our Contribution Predictors' ability to select the most contributing tokens. Figure 4 illustrates two examples from the SST-2 and QNLI datasets in which the CPs identify and gradually drop the irrelevant tokens through the layers, finally focusing mostly on the most important token representations: pedestrian (adjective) in SST-2 and Tesla coil in the passage part of QNLI (both of which are highly aligned with the human rationales).

![](images/11e0b02bb95c73da405f124a05dcf3b52b2a214250c5de1fa6cb064520922c27.jpg)
Figure 5: Agreement with human rationales in terms of mean Average Precision and False Positive Rate for the Contribution Predictor (CP) and three alternative techniques.

In order to quantify the extent to which AdapLeR's CPs can preserve rationales in an unsupervised manner, without requiring direct human annotations, we carried out the following experiment. To investigate the effectiveness of the trained CPs in predicting human rationales, we computed the output scores of the CPs in AdapLeR for each token representation in each layer. We also fine-tuned a BERT model on the Movie Reviews dataset and computed layer-wise raw attention, attention rollout, and saliency scores for each token representation. Since human rationales are annotated at the word level, we sum the scores across the tokens corresponding to each word to arrive at word-level importance scores. In addition to these soft scores, we used the uniform-level threshold (i.e., $1/n$) to reach a binary score indicating the tokens selected in each layer.
As for evaluation, we used the Average Precision (AP) and False Positive Rate (FPR) metrics, comparing the remaining tokens to the human rationale annotations. The first metric measures whether the model assigns higher continuous scores to the tokens that humans annotated as rationales, whereas the second captures how many irrelevant tokens are selected by the model to be passed to subsequent layers. We used soft scores for computing AP and binary scores for computing FPR.

Figure 5 shows the agreement between human rationales and the selected tokens based on the two metrics. Compared with the other widely used strategies for selecting important tokens, such as saliency and attention, our CPs have a significantly lower false positive rate in preserving rationales through layers. Despite having similar FPRs at the final layer, CP is preferable to attention in that it better identifies rationales at the earlier layers, allowing the model to combine the most relevant token representations when building the final one. This in turn results in better performance, as shown in the previous experiment in Section 5.2. Also, the mAP curve for saliency is consistently higher than those of the other strategies in terms of alignment with human rationales, which supports our choice of saliency as a measure of token importance.

Finally, we note that there is a line of research that attempts to guide models toward human-like reasoning by training rationale generation jointly with the target task, which requires human annotation (Atanasova et al., 2020b; Zhao et al., 2020; Li et al., 2018). As a by-product of the contribution estimation process, our trained CPs are able to generate these rationales at inference time without the need for human-generated annotations.
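The two agreement metrics above can be sketched in plain Python (a minimal sketch with hypothetical helper names; `rationale` is the binary human annotation per word, `scores` the soft importance scores, and `selected` the binary mask of words kept at a given layer):

```python
def average_precision(rationale, scores):
    """AP: mean precision at the rank of each true-rationale word."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, precisions = 0, []
    for rank, i in enumerate(order, start=1):
        if rationale[i]:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

def false_positive_rate(selected, rationale):
    """Fraction of non-rationale words that the model nevertheless keeps."""
    fp = sum(1 for s, r in zip(selected, rationale) if s and not r)
    negatives = sum(1 for r in rationale if not r)
    return fp / negatives if negatives else 0.0
```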
+ +# 6 Conclusion + +In this paper, we introduced AdapLeR, a novel method that accelerates inference by dynamically identifying and dropping less contributing token representations through layers of BERT-based models. Specifically, AdapLeR accomplishes this by training a set of Contribution Predictors based on saliencies extracted from a fine-tuned model and applying a gradual masking technique to simulate input-adaptive token removal during training. Empirical results on eight diverse text classification tasks show considerable improvements over existing methods. Furthermore, we demonstrated that contribution predictors generate rationales that are highly in line with those manually specified by humans. As future work, we aim to apply our technique to more tasks and see whether it can be adapted to those tasks that require all token representations to be present in the final layer of the model (e.g., question answering). Additionally, combining our width-based strategy with a depth-based one (e.g., early exiting) might potentially yield greater efficiency, something we plan to pursue as future work. + +# Broader Impact + +Using our proposed method, pre-trained language models can use fewer FLOPs, reducing energy use and carbon emissions (Schwartz et al., 2020a). + +# References + +Samira Abnar and Willem Zuidema. 2020. Quantifying attention flow in transformers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4190-4197, Online. Association for Computational Linguistics. +Marco Ancona, Enea Ceolini, Cengiz Öztireli, and Markus Gross. 2018. Towards better understanding of gradient-based attribution methods for deep neural networks. In International Conference on Learning Representations. +Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. 2020a. A diagnostic study of explainability techniques for text classification. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3256-3274, Online. Association for Computational Linguistics. +Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. 2020b. Generating fact checking explanations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7352-7364, Online. Association for Computational Linguistics. +Jasmijn Bastings and Katja Filippova. 2020. The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 149-155, Online. Association for Computational Linguistics. +George Chrysostomou and Nikolaos Aletras. 2021. Enjoy the salience: Towards better transformer-based faithful explanations with word salience. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8189-8200, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2020. ERASER: A benchmark to evaluate rationalized NLP models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4443-4458, Online. Association for Computational Linguistics. +William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. 
+ +In Proceedings of the Third International Workshop on Paraphrasing (IWP2005). +Kawin Ethayarajh. 2019. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55–65, Hong Kong, China. Association for Computational Linguistics. +Cristóbal Eyzaguirre, Felipe del Río, Vladimir Araujo, and Álvaro Soto. 2021. DACT-BERT: Differentiable adaptive computation time for an efficient bert inference. arXiv preprint arXiv:2109.11745. +Yunchao Gong, L. Liu, Ming Yang, and Lubomir D. Bourdev. 2014. Compressing deep convolutional networks using vector quantization. ArXiv, abs/1412.6115. +Saurabh Goyal, Anamitra Roy Choudhury, Saurabh Raje, Venkatesan Chakaravarthy, Yogish Sabharwal, and Ashish Verma. 2020. Power-bert: Accelerating bert inference via progressive word-vector elimination. In International Conference on Machine Learning, pages 3690-3699. PMLR. +Song Han, Huizi Mao, and William J. Dally. 2016. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings. +Yihui He, Xiangyu Zhang, and Jian Sun. 2017. Channel pruning for accelerating very deep neural networks. 2017 IEEE International Conference on Computer Vision (ICCV), pages 1398-1406. +Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. ArXiv, abs/1503.02531. +Ahmed Hussein, Mohamed Medhat Gaber, Eyad Elyan, and Chrisina Jayne. 2017. Imitation learning: A survey of learning methods. ACM Computing Surveys (CSUR), 50(2):1-35. +Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3543-3556, Minneapolis, Minnesota. Association for Computational Linguistics. +Sarthak Jain, Sarah Wiegrefe, Yuval Pinter, and Byron C. Wallace. 2020. Learning to faithfully rationalize by construction. In ACL. +Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. + +TinyBERT: Distilling BERT for natural language understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4163-4174, Online. Association for Computational Linguistics. +Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 252-262, New Orleans, Louisiana. Association for Computational Linguistics. +Gyuwan Kim and Kyunghyun Cho. 2021. Length-adaptive transformer: Train once with length drop, use anytime with search. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6501-6511, Online. Association for Computational Linguistics. +Josef Klafka and Allyson Ettinger. 2020. Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4801-4811, Online. Association for Computational Linguistics. 
+Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, Soren Auer, et al. 2015. Dbpedia-a large-scale, multilingual knowledge base extracted from wikipedia. Semantic web, 6(2):167-195. +Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in NLP. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 681-691, San Diego, California. Association for Computational Linguistics. +Sizhen Li, Shuai Zhao, Bo Cheng, and Hao Yang. 2018. An end-to-end multi-task learning model for fact checking. In Proceedings of the First Workshop on Fact Extraction and VERIFICATION (FEVER), pages 138-144, Brussels, Belgium. Association for Computational Linguistics. +Kaiyuan Liao, Yi Zhang, Xuancheng Ren, Qi Su, Xu Sun, and Bin He. 2021. A global past-future early exit method for accelerating inference of pretrained language models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2013-2023, Online. Association for Computational Linguistics. + +Weijie Liu, Peng Zhou, Zhiruo Wang, Zhe Zhao, Haotang Deng, and Qi Ju. 2020. FastBERT: a self-distilling BERT with adaptive inference time. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6035-6044, Online. Association for Computational Linguistics. +Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations. +Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. 
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Linguistics. +Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. 2013. Rectifier nonlinearities improve neural network acoustic models. In in ICML Workshop on Deep Learning for Audio, Speech and Language Processing. +Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. 2021. Hatexplain: A benchmark dataset for explainable hate speech detection. In AAAI. +Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? In NeurIPS. +Hosein Mohebbi, Ali Modarressi, and Mohammad Taher Pilehvar. 2021. Exploring the role of BERT token representations to explain sentence probing results. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 792-806, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. +Damian Pascual, Gino Brunner, and Roger Wattenhofer. 2021. Telling BERT's full story: from local attention to global aggregation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 105-124, Online. Association for Computational Linguistics. +Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics. +Victor Sanh, Lysandre Debut, Julien Chaumont, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. +Victor Sanh, Thomas Wolf, and Alexander Rush. 2020. Movement pruning: Adaptive sparsity by fine-tuning. 
Advances in Neural Information Processing Systems, 33:20378-20389.

Roy Schwartz, Jesse Dodge, Noah Smith, and Oren Etzioni. 2020a. Green AI. Communications of the ACM, 63:54-63.
Roy Schwartz, Gabriel Stanovsky, Swabha Swayamdipta, Jesse Dodge, and Noah A. Smith. 2020b. The right tool for the job: Matching model and instance complexities. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6640-6651, Online. Association for Computational Linguistics.
Sofia Serrano and Noah A. Smith. 2019. Is attention interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2931-2951, Florence, Italy. Association for Computational Linguistics.
Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W. Mahoney, and Kurt Keutzer. 2020. Q-BERT: Hessian based ultra low precision quantization of BERT. In AAAI.
Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2013. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034.
Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. 2017. SmoothGrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.
Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019. Patient knowledge distillation for BERT model compression. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4323-4332, Hong Kong, China.
Association for Computational Linguistics. +Tianxiang Sun, Yunhua Zhou, Xiangyang Liu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, and Xipeng Qiu. 2021. Early exiting with ensemble internal classifiers. arXiv preprint arXiv:2105.13792. +Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. MobileBERT: a compact task-agnostic BERT for resource-limited devices. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2158-2170, Online. Association for Computational Linguistics. +Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 3319-3328. + +Thierry Tambe, Coleman Hooper, Lillian Pentecost, Tianyu Jia, En-Yu Yang, Marco Donato, Victor Sanh, Paul N. Whatmough, Alexander M. Rush, David Brooks, and Gu-Yeon Wei. 2021. Edgebert: Sentence-level energy optimizations for latency-aware multi-task nlp inference. MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. +Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics. +Hanrui Wang, Zhekai Zhang, and Song Han. 2021. Spatten: Efficient sparse attention architecture with cascade token and head pruning. 2021 IEEE International Symposium on High-Performance Computer Architecture (HPCA), pages 97-110. +Adina Williams, Nikita Nangia, and Samuel Bowman. 
2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics. +Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics. +Ji Xin, Raphael Tang, Jaejun Lee, Yaoliang Yu, and Jimmy Lin. 2020. DeeBERT: Dynamic early exiting for accelerating BERT inference. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2246-2251, Online. Association for Computational Linguistics. +Ji Xin, Raphael Tang, Yaoliang Yu, and Jimmy Lin. 2021. BERxiT: Early exiting for BERT with better fine-tuning and extension to regression. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: + +Main Volume, pages 91-104, Online. Association for Computational Linguistics. + +Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, and Ming Zhou. 2020. BERT-of-theseus: Compressing BERT by progressive module replacing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7859-7869, Online. Association for Computational Linguistics. + +Deming Ye, Yankai Lin, Yufei Huang, and Maosong Sun. 2021. TR-BERT: Dynamic token reduction for accelerating BERT inference. 
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5798-5809, Online. Association for Computational Linguistics.

Hao Yuan, Yongjun Chen, Xia Hu, and Shuiwang Ji. 2019. Interpreting deep models for text analysis via optimization and regularization methods. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 5717-5724.

Omar Zaidan and Jason Eisner. 2008. Modeling annotators: A generative approach to learning from annotator rationales. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 31-40, Honolulu, Hawaii. Association for Computational Linguistics.

Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NIPS.

Chen Zhao, Chenyan Xiong, Corby Rosset, Xia Song, Paul Bennett, and Saurabh Tiwary. 2020. Transformer-XH: Multi-evidence reasoning with extra hop attention. In International Conference on Learning Representations.

Wangchunshu Zhou, Canwen Xu, Tao Ge, Julian McAuley, Ke Xu, and Furu Wei. 2020. BERT loses patience: Fast and robust inference with early exit. In Advances in Neural Information Processing Systems, volume 33, pages 18330-18341. Curran Associates, Inc.

# A Inclusive KL Loss Consideration

We opted for an inclusive KL loss since CPs should be trained to cover all tokens considered important by saliency, rather than to be mode-seeking (i.e., to cover only a subset of the high-contribution tokens indicated by the saliency scores). Suppose instead that an exclusive KL were used. Due to the limited learning capacity of the CP and the possibility of miscalculation in the saliency scores, the CP could be trained to concentrate its mass on non-informative tokens. In the inclusive setting, by contrast, it is trained to extend its coverage over all high-saliency tokens.
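The asymmetry between the two KL directions can be illustrated with a small numerical sketch (values are purely illustrative, not from our experiments; `saliency` plays the role of the target distribution and `cp_mode` a mode-seeking CP output):

```python
import math

def kl(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions over the same support."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

saliency = [0.45, 0.45, 0.10]   # two highly salient tokens
cp_mode  = [0.90, 0.05, 0.05]   # mode-seeking CP: covers only one of them

# The inclusive direction KL(saliency || cp) heavily penalizes missing
# the second salient token; the exclusive direction KL(cp || saliency)
# penalizes the same behavior far less.
inclusive = kl(saliency, cp_mode)
exclusive = kl(cp_mode, saliency)
```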
Additionally, our initial research indicated that using a symmetric loss (e.g., the Jensen-Shannon divergence) would produce similar results, but with a significantly longer convergence time.

# B Optimization of $\theta$

In Section 3.3, we introduced $\theta^{\ell}$ as a trainable parameter that increases the saliency score of [CLS]. We can deduce from Equations 6 and 7 that this parameter does not exist in the model's computational DAG, so we need to compute the derivative of $\tilde{S}^{\ell}$ w.r.t. $\theta^{\ell}$ to train it. Hence, we first assume that $\tilde{S}^{\ell}$ is a close estimate of $\hat{S}^{\ell}$ (due to the CPs' training objective). Second, using a dummy variable $\theta_d^\ell$ that is involved in the computational graph and is always equal to 1, we reformulate $\tilde{S}^{\ell}$:

$$
\hat{S}_i^{\ell} \approx \tilde{S}_i^{\ell} = \frac{\theta_d^{\ell} \tilde{S}_1^{\ell} \mathbf{1}[i=1] + \tilde{S}_i^{\ell} \mathbf{1}[i>1]}{\theta_d^{\ell} \tilde{S}_1^{\ell} + \sum_{i=2}^{n} \tilde{S}_i^{\ell}} \tag{9}
$$

This reformulation is valid because $\theta_d^\ell = 1$ and $\sum_{i=1}^{n} \tilde{S}_i^\ell = 1$. Now we compute the partial derivative w.r.t.
$\theta_d^\ell$ which is the gradient that is computed in the backpropagation: + +$$ +\frac {\partial \tilde {S} _ {i} ^ {\ell}}{\partial \theta_ {d} ^ {\ell}} = \frac {\tilde {S} _ {1} ^ {\ell} \left(\sum_ {i = 2} ^ {n} \tilde {S} _ {i} ^ {\ell} \mathbf {1} [ i = 1 ] - \tilde {S} _ {i} ^ {\ell} \mathbf {1} [ i > 1 ]\right)}{\left(\theta_ {d} ^ {\ell} \tilde {S} _ {1} ^ {\ell} + \sum_ {i = 2} ^ {n} \tilde {S} _ {i} ^ {\ell}\right) ^ {2}} \tag {10} +$$ + +By knowing that $\theta_d^\ell = 1$ : + +$$ +\frac {\partial \tilde {S} _ {i} ^ {\ell}}{\partial \theta_ {d} ^ {\ell}} = \tilde {S} _ {1} ^ {\ell} \left(\left(1 - \tilde {S} _ {1} ^ {\ell}\right) \mathbf {1} [ i = 1 ] - \tilde {S} _ {i} ^ {\ell} \mathbf {1} [ i > 1 ]\right) \tag {11} +$$ + +Now using our initial assumption $(\hat{S}_i^\ell \approx \tilde{S}_i^\ell)$ , we can substitute $\tilde{S}_i^\ell$ with $\hat{S}_i^\ell$ based on Equation 7: + +$$ +\begin{array}{l} \frac {\partial \tilde {S} _ {i} ^ {\ell}}{\partial \theta_ {d} ^ {\ell}} = \hat {S} _ {1} ^ {\ell} ((1 - \hat {S} _ {1} ^ {\ell}) \mathbf {1} [ i = 1 ] - \hat {S} _ {i} ^ {\ell} \mathbf {1} [ i > 1 ]) \\ = \frac {\theta^ {\ell} S _ {1} ^ {\ell} \left(\sum_ {i = 2} ^ {n} S _ {i} ^ {\ell} \mathbf {1} [ i = 1 ] - S _ {i} ^ {\ell} \mathbf {1} [ i > 1 ]\right)}{\left(\theta^ {\ell} S _ {1} ^ {\ell} + \sum_ {i = 2} ^ {n} S _ {i} ^ {\ell}\right) ^ {2}} \tag {12} \\ \end{array} +$$ + +In addition, the gradient of $\hat{S}_i^\ell$ w.r.t. $\theta^{\ell}$ is as follows (cf. 
Equation 7):

$$
\frac{\partial \hat{S}_i^{\ell}}{\partial \theta^{\ell}} = \frac{S_1^{\ell} \left(\sum_{i=2}^{n} S_i^{\ell} \mathbf{1}[i=1] - S_i^{\ell} \mathbf{1}[i>1]\right)}{\left(\theta^{\ell} S_1^{\ell} + \sum_{i=2}^{n} S_i^{\ell}\right)^{2}} \tag{13}
$$

Comparing Equations 12 and 13, the two derivatives differ only by a factor of $\theta^{\ell}$:

$$
\frac{\partial \hat{S}_i^{\ell}}{\partial \theta^{\ell}} \approx \frac{1}{\theta^{\ell}} \frac{\partial \tilde{S}_i^{\ell}}{\partial \theta_d^{\ell}} \tag{14}
$$

Therefore, during training, we can compute the gradient w.r.t. the dummy variable $\theta_d^\ell$ and then divide it by $\theta^\ell$.

# C Evaluating PoWER-BERT in Single Instance Mode

Due to the static structure of PoWER-BERT, the speedup ratios reported in Goyal et al. (2020) are based on wall-time acceleration with a batch-wise inference procedure, which means that some inputs may need extra padding so that all inputs in a batch have the same token length. However, since our approach and the other dynamic approaches are based on single-instance inference, in our procedure inputs are fed without padding. To even out this discrepancy, we apply a single-instance FLOPs computation to PoWER-BERT: we compute the computational cost for every input length that appears in the test dataset. Some instances may have a shorter input length than some values in the resulting retention configuration (the number of tokens retained in each layer). To overcome this, we update the retention configuration by taking the minimum of the input length and each layer's number of retained tokens, building a new retention configuration for each input length.
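The per-input update of the retention configuration can be sketched as:

```python
def adjust_retention(config, input_len):
    """Clip each layer's retained-token count to the actual input length."""
    return [min(input_len, kept) for kept in config]
```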
For instance, if the retention configuration of the trained model on a given task is (153, 125, 111, 105, 85, 80, 72, 48, 35, 27, 22, 5), then for an input of length 75 the configuration used for the speedup computation becomes (75, 75, 75, 75, 75, 75, 72, 48, 35, 27, 22, 5).

# D AdapLeR Training Hyperparameters

For the initial step of fine-tuning BERT, we used the hyperparameters in Table 3. For both fine-tuning and training with length reduction, we employed the AdamW optimizer (Loshchilov and Hutter, 2019) with a weight decay rate of 0.1, a warmup proportion of $6\%$ of the total training steps, and a linear learning rate decay that reaches zero at the end of training.

For the adaptive length reduction training step, we used the same hyperparameters as in Table 3, with two differences. Since MRPC and CoLA have small training sets, we increased the training duration to 10 epochs to prolong the gradual soft-removal process. Moreover, we increased the learning rate to 3e-5. Other hyperparameters are stated in Table 4. The schedule for $\lambda$ needs to start from a small but effective value ($10 < \lambda < 100$) and grow exponentially with each epoch, reaching an extremely high value at the end of training ($\lambda > 10^5$) so as to mimic a hard removal function. Hence, datasets with the same number of training epochs have similar $\lambda$ trends.

| Dataset | Epoch | LR | Max Len. | BSZ |
| --- | --- | --- | --- | --- |
| SST-2 | 5 | 2e-5 | 64 | 32 |
| IMDB | 5 | 2e-5 | 512 | 16 |
| HateXplain | 5 | 3e-5 | 72 | 32 |
| MRPC | 5 | 2e-5 | 128 | 32 |
| MNLI | 3 | 2e-5 | 128 | 32 |
| QNLI | 5 | 2e-5 | 128 | 32 |
| AG's News | 5 | 2e-5 | 128 | 32 |
| DBpedia | 3 | 2e-5 | 128 | 32 |

Table 3: Hyperparameters in each dataset; LR: learning rate; BSZ: batch size; Max Len.: maximum token length.
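The exponential $\lambda$ schedule described in this appendix can be sketched as follows (the exact functional form is an assumption read off Table 4, where the per-dataset base and optional scale are listed):

```python
def lambda_schedule(base, epoch, scale=1.0):
    """Exponential growth of the masking sharpness, e.g. lambda = scale * base**epoch.

    Starts small in early epochs and becomes extremely large by the end
    of training, mimicking a hard token-removal function.
    """
    return scale * base ** epoch
```

For example, a 5-epoch dataset with base 10 ends training at $\lambda = 10^5$, and the MRPC-style entry would correspond to `lambda_schedule(3, epoch, scale=10)`.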
| Dataset | $\gamma$ | $\phi$ | $\lambda$ |
| --- | --- | --- | --- |
| SST-2 | 5e-3 | 5e-4 | $10^{\text{Epoch}}$ |
| IMDB | 5e-3 | 5e-4 | $10^{\text{Epoch}}$ |
| HateXplain | 5e-2 | 2e-2 | $50^{\text{Epoch}}$ |
| MRPC | 3e-2 | 5e-2 | $10 \times 3^{\text{Epoch}}$ |
| MNLI | 5e-3 | 5e-4 | $50^{\text{Epoch}}$ |
| QNLI | 5e-3 | 1e-4 | $10^{\text{Epoch}}$ |
| AG's News | 1e-1 | 1e-1 | $10^{\text{Epoch}}$ |
| DBPedia | 1e-1 | 1e-1 | $50^{\text{Epoch}}$ |

Table 4: AdapLeR hyperparameters in each dataset; since $\lambda$ increases exponentially with each epoch, the corresponding formula is given.

# E Statistics of Datasets

# F Additional Qualitative Examples

# QNLI (dev) - Label: Entailment
(Layer-by-layer contribution heatmap over the input "[CLS] what did Tesla patent in 1891? [SEP] in the same year, he patented the Tesla coil. [SEP]", shown for layers 0-11; the color-coded contribution scores are visible only in the rendered figure.)
+ +# SST-2 (dev) - Label: Negative + +
(Layer-by-layer contribution heatmap over the input "[CLS] what was once original has been co - opted so frequently that it now seems pedestrian.", shown for layers 0-11; the color-coded contribution scores are visible only in the rendered figure.)
+ +# AG's news (test) - Label: Sports + +
(Layer-by-layer contribution heatmap over the input "[CLS] league of development major league soccer plans to start a new league to develop young players, part of its 10 - year sponsorship deal with adi ##das. [SEP]", shown for layers 1-12; the color-coded contribution scores are visible only in the rendered figure.)
Figure 6: Illustration of the contribution scores obtained by CPs in each layer of the model for different input examples from the QNLI (question-answering NLI), SST-2 (sentiment), and AG's News (topic classification) tasks. The color intensity indicates the degree of the contribution scores; only the highlighted token representations are processed in each layer.
| Task | # Train Examples | # Test Examples | # Tokens (Mean / Median) |
| --- | --- | --- | --- |
| SST-2 | 67349 | 1821 | 14 / 11 |
| IMDB | 25000 | 25000 | 275 / 233 |
| HateXplain | 15383 | 1924 | 30 / 27 |
| MRPC | 3668 | 1725 | 53 / 53 |
| MNLI | 392702 | $9796^\dagger$ / $9847^\ddagger$ | 40 / 37 |
| QNLI | 104743 | 5463 | 50 / 47 |
| AG's News | 120000 | 7600 | 53 / 51 |
| DBPedia | 560000 | 70000 | 64 / 64 |
Table 5: Statistics of the datasets: the number of training and test examples, and the mean and median sequence length (number of tokens) of test examples based on BERT's tokenizer. $\dagger$ and $\ddagger$ indicate the matched and mismatched versions of the MNLI test split, respectively.
# Adapting Coreference Resolution Models through Active Learning

Michelle Yuan† Patrick Xia‡ Chandler May‡

Benjamin Van Durme‡ Jordan Boyd-Graber†

Human Language Technology Center of Excellence

†University of Maryland ‡Johns Hopkins University

myuan@cs.umd.edu paxia@cs.jhu.edu jbg@umiacs.umd.edu

# Abstract

Neural coreference resolution models trained on one dataset may not transfer to new, low-resource
domains. Active learning mitigates this problem by sampling a small subset of data for annotators to label. While active learning is well-defined for classification tasks, its application to coreference resolution is neither well-defined nor fully understood. This paper explores how to actively label coreference, examining sources of model uncertainty and document reading costs. We compare uncertainty sampling strategies and their advantages through thorough error analysis. In both synthetic and human experiments, labeling spans within the same document is more effective than annotating spans across documents. The findings contribute to a more realistic development of coreference resolution models.

# 1 Introduction

Linguistic expressions are coreferent if they refer to the same entity. The computational task of discovering coreferent mentions is coreference resolution (CR). Neural models (Lee et al., 2018; Joshi et al., 2020) are SOTA on ONTONOTES 5.0 (Pradhan et al., 2013) but cannot immediately generalize to other datasets. Generalization is difficult because domains differ in content, writing style, and annotation guidelines. To overcome these challenges, models need copiously labeled, in-domain data (Bamman et al., 2020).

Despite expensive labeling costs, adapting CR is crucial for applications like uncovering information about proteins in biomedicine (Kim et al., 2012) and distinguishing entities in legal documents (Gupta et al., 2018). Ideally, we would like to quickly and cheaply adapt the model without repeatedly relying on an excessive amount of annotations to retrain the model. To reduce labeling cost, we investigate active learning (Settles, 2009) for CR. Active learning aims to reduce annotation costs by intelligently selecting examples to label. Prior approaches use active learning to improve the model within the same domain (Gasperin, 2009; Sachan et al., 2015) without considering adaptation to new data distributions.
For domain adaptation in CR, Zhao and Ng (2014) motivate the use of active learning to select out-of-distribution examples. A word like "the bonds" refers to municipal bonds in ONTONOTES but links to "chemical bonds" in another domain (Figure 1). If users annotate the antecedents of "the bonds" and other ambiguous entity mentions, then these labels help adapt a model trained on ONTONOTES to new domains.

Active learning for CR adaptation is well-motivated, but the implementation is neither straightforward nor well-studied. First, CR is a span detection and clustering task, so selecting which spans to label is more complicated than choosing independent examples for text classification. Second, CR labeling involves closely reading the documents. Labeling more spans within the same context is more efficient. However, labeling more spans across different documents increases data diversity and may improve model transfer. How should we balance these competing objectives?

Our paper extends prior work in active learning for CR to the problem of coreference model transfer (Xia and Van Durme, 2021):

1. We generalize the clustered entropy sampling strategy (Li et al., 2020) to include uncertainty in mention detection. We analyze the effect of each strategy on coreference model transfer.
2. We investigate the trade-off between labeling and reading through simulations and a real-time user study. Limiting annotations to the same document increases labeling throughput and decreases volatility in model training.

Taken together, these contributions offer a blueprint for faster creation of CR models across domains.
![](images/12536b403be607f1801677a7d6c744b05eefc0a9d9f1f2c7cd7e459129a67bd8.jpg)
Figure 1: CR models are trained on source domain ONTONOTES, which contains data like news articles. The source document links "the bonds" to "municipal bonds". In a target domain like PRECO (Chen et al., 2018), "the bonds" may no longer have the same meaning. It can refer to "chemical bonds" (Document 1) or not be considered an entity (Document 2). A solution is to continue training the source model on more spans from the target domain. Active learning helps select ambiguous spans, like "the bonds", for the user to label on this interface (Section 4.2).

# 2 Problem: Adapting Coreference

Lee et al. (2018) introduce C2F-COREF, a neural model that outperforms prior rule-based systems.
It assigns an antecedent $y$ to mention span $x$. The set $\mathcal{Y}(x)$ of possible antecedent spans includes a dummy antecedent $\epsilon$ and all spans preceding $x$. If span $x$ has no antecedent, then $x$ should be assigned to $\epsilon$. Given entity mention $x$, the model learns a distribution over its candidate antecedents in $\mathcal{Y}(x)$,

$$
P(Y = y) = \frac{\exp\{s(x, y)\}}{\sum_{y' \in \mathcal{Y}(x)} \exp\{s(x, y')\}}. \tag{1}
$$

The scores $s(x, y)$ are computed by the model's pairwise scorer (Appendix A.1).

CR models like C2F-COREF are typically trained on ONTONOTES. Recent work in CR improves upon C2F-COREF and has SOTA results on ONTONOTES (Wu et al., 2020; Joshi et al., 2020). However, annotation guidelines and the underlying text differ across domains. As a result, these CR models cannot immediately transfer to other datasets. For different domains, spans could hold different meanings or link to different entities. Xia and Van Durme (2021) show the benefits of continued training, where a model trained on ONTONOTES is further trained on the target dataset. For several target domains, continued training from ONTONOTES is stronger than training the model from scratch, especially when the training dataset is small.

Their experiments use an incremental variant of C2F-COREF called ICOREF (Xia et al., 2020). While C2F-COREF requires $\Theta(n)$ memory to simultaneously access all spans in the document and infer a span's antecedent, ICOREF only needs constant memory to predict a span's entity cluster. Despite using less space, ICOREF retains the same accuracy as C2F-COREF. Rather than assigning $x$ to antecedent $y$, ICOREF assigns $x$ to cluster $c$, where $c$ is from a set of observed entity clusters $\mathcal{C}$,

$$
P(C = c) = \frac{\exp\{s(x, c)\}}{\sum_{c' \in \mathcal{C}} \exp\{s(x, c')\}}. \tag{2}
$$

As the algorithm processes spans in the document, each span is either placed in a cluster from $\mathcal{C}$ or added to a new cluster. To learn the distribution over clusters (Equation 2), the algorithm first creates a cluster representation that aggregates the representations of the spans currently in the cluster. With cluster and span representations, individual spans and entity clusters are mapped into a shared space. Then, we can compute $s(x, c)$ using the same pairwise scorer as before.

Xia and Van Durme (2021) show that continued training is useful for domain adaptation but assume
Early approaches in active learning for CR use pairwise annotations (Miller et al., 2012; Sachan et al., 2015). Pairs of spans are sampled and the annotator labels whether each pair is coreferent. The downside to pairwise annotations is that it requires many labels. To label the antecedent of entity mention $x$ , $x$ must be compared to every candidate span in the document. Li et al. (2020) propose a new scheme called discrete annotations. Instead of sampling pairs of spans, the active learning strategy samples individual spans. Then, the annotator only has to find and label first antecedent of $x$ in the document, which bypasses the multiple pairwise comparisons. Thus, we use discrete annotations to minimize labeling. + +To further improve active learning for CR, we consider the following issues. First, the CR model has different scores for mention detection and linking, but prior active learning methods only consider linking. Second, labeling CR requires time to read the document context. Therefore, we explore important aspects of active learning for adapting CR: model uncertainty (Section 3.1), and the balance between reading and labeling (Section 3.2). + +# 3.1 Uncertainty Sampling + +A well-known active learning strategy is uncertainty sampling. A common measure of uncertainty is the entropy in the distribution of the model's predictions for a given example (Lewis and Gale, 1994). Labeling uncertain examples improves accuracy for tasks like text classification (Settles, 2009). For CR, models have multiple components, and computing uncertainty is not as straightforward. Is uncertainty over where mentions are located more important than linking spans? Or the other way around? Thus, we investigate different sources of CR model uncertainty. + +# 3.1.1 Clustered Entropy + +To sample spans for learning CR, Li et al. (2020) propose a strategy called clustered entropy. This metric scores the uncertainty in the entity cluster assignment of a mention span $x$ . 
If $x$ has high clustered entropy, then it should be labeled to help the model learn its antecedents. Computing clustered entropy requires the probability that $x$ is assigned to an entity cluster. Li et al. (2020) use C2F-COREF, which only gives probability of $x$ being assigned to antecedent $y$ . So, they define $P(C = c)$ as the sum of antecedent probabilities $P(Y = y)$ , + +$$ +P (C = c) = \sum_ {y \in C \cap \mathcal {Y} (x)} P (Y = y). \tag {3} +$$ + +Then, they define clustered entropy as, + +$$ +\mathrm {H} (x) = - \sum_ {c \in \mathcal {C}} P (C = c) \log P (C = c). \tag {4} +$$ + +The computation of clustered entropy in Equation 4 poses two issues. First, summing the probabilities may not accurately represent the model's probability of linking $x$ to $c$ . There are other ways to aggregate the probabilities (e.g. taking the maximum). C2F-COREF never computes cluster probabilities to make predictions, so it is not obvious how $P(C = c)$ should be computed for clustered entropy. Second, Equation 4 does not consider mention detection. For ONTONOTES, this is not an issue because singletons (clusters of size 1) are not annotated and mention detection score is implicitly included in $P(Y = y)$ . For other datasets containing singletons, the model should disambiguate singleton clusters from non-mention spans. + +To resolve these issues, we make the following changes. First, we use ICOREF to obtain cluster probabilities. ICOREF is a mention clustering model so it + +already has probabilities over entity clusters (Equation 2). Second, we explore other forms of maximum entropy sampling. Neural CR models have scorers for mention detection and clustering. Both scores should be considered to sample spans that confuse the model. Thus, we propose more strategies to target uncertainty in mention detection. + +# 3.1.2 Generalizing Entropy in Coreference + +To generalize entropy sampling, we first formalize mention detection and clustering. 
Given span $x$, assume $X$ is the random variable encoding whether $x$ is an entity mention (1) or not (0). In Section 2, we assume that the cluster distribution $P(C)$ is independent of $X$: $P(C) = P(C \mid X)$. In other words, Equation 2 is actually computing $P(C = c \mid X = 1)$. We sample the top-$k$ spans with the following strategies.

Ment-ent Highest mention detection entropy:

$$
\mathrm{H}_{\text{MENT}}(x) = \mathrm{H}(X) = -\sum_{i=0}^{1} P(X = i) \log P(X = i). \tag{5}
$$

The probability $P(X)$ is computed from normalized mention scores $s_m$ (Equation 10). Ment-ent may sample spans that challenge mention detection (e.g., class-ambiguous words like "park"). The annotator can clarify whether spans are entity mentions to improve mention detection.

Clust-ent Highest mention clustering entropy:

$$
\mathrm{H}_{\text{CLUST}}(x) = \mathrm{H}(C \mid X = 1) = -\sum_{c \in \mathcal{C}} P(C = c \mid X = 1) \log P(C = c \mid X = 1). \tag{6}
$$

Clust-ent looks at clustering scores without explicitly addressing mention detection. Like in ONTONOTES, all spans are assumed to be entity mentions. The likelihood $P(C = c \mid X = 1)$ is given by ICOREF (Equation 2).

Cond-ent Highest conditional entropy:

$$
\begin{aligned}
\mathrm{H}_{\text{COND}}(x) &= \mathrm{H}(C \mid X) = \sum_{i=0}^{1} P(X = i)\, \mathrm{H}(C \mid X = i) \\
&= P(X = 1)\, \mathrm{H}(C \mid X = 1) = P(X = 1)\, \mathrm{H}_{\text{CLUST}}(x).
\end{aligned} \tag{7}
$$

We reach the last equation because there is no uncertainty in clustering $x$ if $x$ is not an entity mention, so $\mathrm{H}(C \mid X = 0) = 0$. Cond-ent takes the uncertainty of mention detection into account. So, we may sample more pronouns because they are obviously mentions but difficult to cluster.
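The three measures above reduce to simple arithmetic on two quantities per span: the mention probability $P(X = 1)$ and the cluster distribution $P(C \mid X = 1)$. A minimal sketch in Python, with illustrative function names; in practice these probabilities would come from ICOREF's mention and clustering scorers:

```python
import math

def entropy(probs):
    """Shannon entropy of a discrete distribution (0 log 0 := 0)."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def ment_ent(p_mention):
    # H_MENT(x) = H(X): binary entropy of the mention indicator X (Eq. 5).
    return entropy([p_mention, 1.0 - p_mention])

def clust_ent(cluster_probs):
    # H_CLUST(x) = H(C | X = 1): entropy over entity clusters,
    # assuming x is an entity mention (Eq. 6).
    return entropy(cluster_probs)

def cond_ent(p_mention, cluster_probs):
    # H_COND(x) = P(X = 1) * H(C | X = 1), since H(C | X = 0) = 0 (Eq. 7).
    return p_mention * clust_ent(cluster_probs)
```

Note how cond-ent scales clustering uncertainty by the mention probability, so spans the model is confident are non-mentions contribute nothing even if their cluster distribution is flat.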
Joint-ent Highest joint entropy:

$$
\mathrm{H}_{\text{JOINT}}(x) = \mathrm{H}(X, C) = \mathrm{H}(X) + \mathrm{H}(C \mid X) = \mathrm{H}_{\text{MENT}}(x) + \mathrm{H}_{\text{COND}}(x). \tag{8}
$$

Joint-ent may sample spans that are difficult to detect as entity mentions and too confusing to cluster. This sampling strategy most closely aligns with the uncertainty of the training objective. It may also fix any imbalance between mention detection and linking (Wu and Gardner, 2021).

# 3.2 Trade-off between Reading and Labeling

For CR, the annotator reads the document context to label the antecedent of a mention span. Annotating and reading spans from different documents may slow down labeling, but restricting sampling to the same document may cause redundant labeling (Miller et al., 2012). To better understand this trade-off, we explore different configurations of $k$, the number of annotated spans, and $m$, the maximum number of documents being read. Given source model $h_0$ already fine-tuned on ONTONOTES, we adapt $h_0$ to a target domain through active learning (Algorithm 1):

Scoring To sample $k$ spans from unlabeled data $\mathcal{U}$ of the target domain, we score spans with an active learning strategy $S$. Assume $S$ scores each span through an acquisition model (Lowell et al., 2019). For the acquisition model, we use $h_{t-1}$, the model fine-tuned from the last cycle. The acquisition score quantifies the span's importance given $S$ and the acquisition model.

Reading Typically, active learning samples the $k$ spans with the highest acquisition scores. To constrain $m$, the number of documents read, we find the documents of the $m$ spans with the highest acquisition scores and only sample spans from those documents. Then, the $k$ sampled spans will belong to at most $m$ documents.
If $m$ is set to "unconstrained", then we simply sample the $k$ highest-scoring spans, irrespective of the document boundaries.

Algorithm 1 Active Learning for Coreference
Require: Source model $h_0$, unlabeled data $\mathcal{U}$, active learning strategy $S$, no. of cycles $T$, no. of labeled spans $k$, max. no. of read docs $m$
1: Labeled data $\mathcal{L} = \{\}$
2: for cycles $t = 1, \ldots, T$ do
3: $a_x \gets$ Score span $x \in \mathcal{U}$ by $S(h_{t-1}, x)$
4: $\mathcal{Q} \gets$ Sort $(\downarrow)$ $x \in \mathcal{U}$ by scores $a_x$
5: $\mathcal{Q}_m \gets$ Top-$m$ spans in $\mathcal{Q}$
6: $\mathcal{D} \gets \{d_x \mid x \in \mathcal{Q}_m\}$ where $d_x$ is the doc of $x$
7: $\widetilde{\mathcal{Q}} \gets$ Filter $\mathcal{Q}$ s.t. spans belong to $d \in \mathcal{D}$
8: $\widetilde{\mathcal{Q}}_k \gets$ Top-$k$ spans in $\widetilde{\mathcal{Q}}$
9: $\mathcal{L}_k \gets$ Label antecedents for $\widetilde{\mathcal{Q}}_k$
10: $\mathcal{L} \gets \mathcal{L} \cup \mathcal{L}_k$
11: $h_t \gets$ Continue training $h_0$ on $\mathcal{L}$
12: return $h_T$

Our approach resembles Miller et al. (2012), where they sample spans based on highest uncertainty and continue sampling from the same document until uncertainty falls below a threshold. Then, they sample the most uncertain span from a new document. We modify their method because the uncertainty threshold will vary for different datasets and models. Instead, we use the number of documents read to control context switching.

Labeling An oracle (e.g., human annotator or gold data) labels the antecedents of sampled spans with discrete annotations (Section 3).

Continued Training We combine data labeled from current and past cycles. We train the source model $h_0$ (which is already trained on ONTONOTES) on the labeled target data. We do not continue training a model from a past active learning cycle because it may be biased from only training on scarce target data (Ash and Adams, 2020).
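The reading constraint in lines 3–8 of Algorithm 1 amounts to restricting the usual top-$k$ selection to the documents of the $m$ highest-scoring spans. A minimal sketch, with hypothetical span IDs and scores standing in for an acquisition model:

```python
def sample_spans(scores, doc_of, k, m=None):
    """Select k spans to label, reading from at most m documents.

    scores: dict span_id -> acquisition score (higher = more uncertain)
    doc_of: dict span_id -> document id
    m=None corresponds to the "unconstrained" setting.
    """
    ranked = sorted(scores, key=scores.get, reverse=True)  # Q (line 4)
    if m is None:                                          # unconstrained
        return ranked[:k]
    docs = {doc_of[x] for x in ranked[:m]}                 # D (lines 5-6)
    eligible = [x for x in ranked if doc_of[x] in docs]    # Q~ (line 7)
    return eligible[:k]                                    # Q~_k (line 8)

# Toy example: four spans spread over three documents.
scores = {"s1": 0.9, "s2": 0.8, "s3": 0.7, "s4": 0.6}
doc_of = {"s1": "d1", "s2": "d2", "s3": "d1", "s4": "d3"}
print(sample_spans(scores, doc_of, k=2, m=1))  # → ['s1', 's3']
```

With $m = 1$, the second-highest-scoring span `s2` is skipped because it lives in a different document than the top span, trading some acquisition score for less reading.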
# 4 Active Learning for CR through Simulations and Humans

We run experiments to understand two important factors of active learning for CR: sources of model uncertainty (Section 3.1) and balancing reading against labeling (Section 3.2). First, we simulate active learning on PRECO to compare sampling strategies based on various forms of uncertainty (Section 4.1). Then, we set up a user study to investigate how humans perform when labeling spans from fewer or more documents from PRECO (Section 4.2). Specifically, we analyze their annotation time and throughput. Finally, we run large-scale simulations on PRECO and QBCOREF (Section 4.3). We explore different combinations of sampling strategies and labeling configurations.

Models In all experiments, the source model is the best checkpoint of the ICOREF model trained on ONTONOTES (Xia et al., 2020) with a SPANBERT-LARGE-CASED (Joshi et al., 2020) encoder. For continued training on the target dataset, we optimize with a fixed parameter configuration (Appendix A.2). We evaluate models on Avg $\mathrm{F_1}$, the averaged $F_1$ scores of MUC (Vilain et al., 1995), $\mathrm{B}^3$ (Bagga and Baldwin, 1998), and $\mathrm{CEAF}_{\phi 4}$ (Luo, 2005). For all synthetic experiments, we simulate active learning with gold data substituting for an annotator. However, gold mention boundaries are not used when sampling data. The model scores spans that are likely to be entity mentions for inference, so we limit the active learning candidates to this pool of high-scoring spans. For each active learning simulation, we repeat five runs with different random seed initializations.

Baselines We compare the proposed sampling strategies (Section 3.1.2) along with li-clust-ent, which is clustered entropy from Li et al. (2020) (Equation 4). Active learning is frustratingly less effective than random sampling in many settings (Lowell et al., 2019), so we include two random baselines in our simulation.
Random samples from all spans in the documents. Random-ment, as well as the other strategies, samples only from the pool of likely (high-scoring) spans. Thus, random-ment should be a stronger baseline than random.

Datasets ONTONOTES 5.0 is the most common dataset for training and evaluating CR (Pradhan et al., 2013). The dataset contains news articles and telephone conversations. Only non-singletons are annotated. Our experiments transfer a model trained on ONTONOTES to two target datasets: PRECO and QBCOREF. PRECO is a large corpus of grade-school reading comprehension texts (Chen et al., 2018). Unlike ONTONOTES, PRECO has annotated singletons. There are 37K training, 500 validation, and 500 test documents. Because the training set is so large, Chen et al. (2018) only analyze subsets of 2.5K documents. Likewise, we reduce the training set to a subset of 2.5K documents, comparable to the size of ONTONOTES.

The QBCOREF dataset (Guha et al., 2015) contains trivia questions from Quizbowl tournaments that are densely packed with entities from academic topics. Like PRECO, singletons are annotated. Unlike other datasets, the syntax is idiosyncratic and world knowledge is needed to solve coreference. Examples are pronouns before the first mention of named entities and oblique references like "this polity" for "the Hanseatic League". These complicated structures rarely occur in everyday text but serve as challenging examples for CR. There are 240 training, 80 validation, and 80 test documents.

![](images/d0d4adb37d6c83daa5e01ce351eb2c82b580aaa6ebbaa7269a39a407825653d.jpg)
Figure 2: Test Avg $\mathrm{F_1}$ on PRECO for each strategy. On each cycle, fifty spans from one document are sampled and labeled. We repeat each simulation five times. Ment-ent, clust-ent, and joint-ent are most effective, while random hurts the model the most.

![](images/99974d9e34f23aaeb7925e0902fc33e73039057516850e0e3cb8b87e8123c225.jpg)
Figure 3: Cumulative counts of entities, non-entities, pronouns, and singletons sampled for each strategy over the first four cycles of the PRECO simulation. Random mostly samples non-entities. Li-clust-ent and cond-ent sample many entity mentions but avoid singletons.

# 4.1 Simulation: Uncertainty Sampling

To compare different sampling strategies, we first run experiments on PRECO. We sample fifty spans from one document for each cycle. By the end of a simulation run, 300 spans are sampled from six documents. For this configuration, uncertainty sampling strategies generally reach higher accuracy than the random baselines (Figure 2), but cond-ent and li-clust-ent are worse than random-ment.

# 4.1.1 Distribution of Sampled Span Types

To understand the type of spans being sampled, we count the entity mentions, non-entities, pronouns, and singletons sampled by each strategy (Figure 3). Random samples very few entities, while the other strategies sample more entity mentions. Clust-ent and cond-ent sample more entity mentions and pronouns because the sampling objective prioritizes mentions that are difficult to link. Clust-ent, joint-ent, and ment-ent sample more singleton mentions. These strategies also show higher Avg $\mathrm{F}_1$ (Figure 2). For transferring from ONTONOTES to PRECO, annotating singletons is useful because only non-singleton mentions are labeled in ONTONOTES. We notice ment-ent sampling pronouns, which should obviously be entity mentions, only in the first cycle. Many pronouns in ONTONOTES are singletons, so the mention detector has trouble distinguishing them initially in PRECO.

# 4.1.2 Error Analysis

Kummerfeld and Klein (2013) enumerate the ways CR models can go wrong: missing entity, extra entity, missing mention, extra mention, divided entity, and conflated entity. Missing entity means a gold entity cluster is missing. Missing mention means a mention span for a gold entity cluster is missing.
The same definitions apply for extra entity and extra mention. Divided entity occurs when the model splits a gold entity cluster into multiple ones. Conflated entity happens when the model merges gold entity clusters. For each strategy, we analyze the errors of its final model from the simulation's last cycle (Figure 4). We compare against the source model that is only trained on ONTONOTES.

![](images/21855aa8735a49c02d68077dfb9f6eaa1885673a59da3a7c632aba9867806f8.jpg)
Figure 4: For each sampling strategy, we analyze the model from the last cycle of its PRECO simulation. We compare the number of errors across common error types in CR. The source ONTONOTES model severely suffers from missing entities and missing mentions. Ment-ent helps most with reducing these errors.

The source model makes many missing entity and missing mention errors. It does not detect several entity spans in PRECO, like locations ("Long Island") or ones spanning multiple words ("his kind acts of providing everything that I needed"). These spans are detected by the uncertainty sampling strategies and random-ment. Ment-ent is most effective at reducing "missing" errors. It detects gold entity clusters like "constant communication" and "the best educated guess about the storm". By training on spans that confuse the mention detector, the model adapts to the new domain by understanding what constitutes an entity mention.

Surprisingly, li-clust-ent makes at least twice as many extra entity and extra mention errors as any other strategy. For the sentence "Living in a large building with only 10 bedrooms", the gold data identifies two entities: "a large building with only 10 bedrooms" and "10 bedrooms". In both ONTONOTES and PRECO, the guidelines only allow the longest noun phrase to be annotated. Yet, the li-clust-ent model predicts additional mentions, "a large building" and "only 10 bedrooms". We find that li-clust-ent tends to sample nested spans (Table 4).
Due to the summed entropy computation, nested spans share similar clustering entropy values because they share similar antecedent-linking probabilities. This causes the extra entity and extra mention errors because the model predicts that there are additional entity mentions within a mention span.

Finally, we see a stark difference between random-ment and random. Out of all the sampling strategies, random is least effective at preventing missing entity and missing mention errors. We are more likely to sample non-entities if we randomly sample from all spans in the document (Appendix A.7). By limiting the sampling pool to only spans that are likely to be entity mentions, we sample more spans that are useful to label for CR. Thus, the mention detector of neural models should be used during active learning.

# 4.2 User Study: Reading and Labeling

We conduct a user study to observe the trade-off between reading and labeling. Three annotators, with minimal NLP knowledge, label spans sampled from PRECO. We use ment-ent to sample spans because the strategy shows the highest Avg $\mathrm{F_1}$ (Figure 2). First, the users read instructions (Appendix A.6) and practice labeling for ten minutes. Then, they complete two sessions: FewDocs and ManyDocs. In each session, they label as much as possible for at least twenty-five minutes. In FewDocs, they read fewer documents and label roughly seven spans per document. In ManyDocs, they read more documents and label about one span per document.

For labeling coreference, we develop an open-source user interface (Figure 8). To label the antecedent of the highlighted span, the user clicks on a contiguous span of tokens. The interface suggests overlapping candidates based on the spans that are retained by the CR model.

In the user study, participants label at least twice as many spans in FewDocs as in ManyDocs (Figure 5). By labeling more spans in FewDocs, the mean Avg $\mathrm{F_1}$ score is also slightly higher.
Our findings show that the number of read documents should be constrained to increase labeling throughput. The difference in the number of labeled spans between FewDocs and ManyDocs is even more pronounced when two annotators volunteer to continue labeling after the required duration (Appendix A.6).

![](images/1c626bc09da1982aafe3dc93662ead0ca3c7e83fb246545f932d9ef09e7cd6e7.jpg)
Figure 5: The number of spans labeled within twenty-five minutes. Each color indicates one of three users and the line type designates the session. Black dots mark the first span labeled in a different document. The mean $\mathrm{AvgF_1}$ across users for each session is on the right. By restricting the number of read documents in FewDocs, users label at least twice as many spans and the model slightly improves in $\mathrm{AvgF_1}$ .

# 4.3 Simulation: Uncertainty Sampling and Reading-Labeling Trade-off

We finally run simulations to explore both sources of model uncertainty and the trade-off between reading and labeling. The earlier experiments looked at each aspect individually. Now, we analyze the interaction between both factors to understand which combination works best for adapting CR to new domains. We run simulations on PRECO and QBCOREF that trade off the number of documents read $m$ against the number of annotated spans $k$ (Figure 6). We vary $m$ between one, five, and an unconstrained number of documents. For PRECO, we set $k$ to twenty and fifty. For QBCOREF, we set $k$ to twenty and forty. These results are also presented in numerical form (Appendix A.5).

PRECO For PRECO, the test AVG $\mathrm{F_1}$ of ICOREF trained on the full training dataset is 0.860. When $m$ is constrained to one or five, AVG $\mathrm{F_1}$ can reach around 0.707 by training the model on only 300 spans sampled by ment-ent. As $m$ increases, fewer spans are sampled per document and all sampling strategies deteriorate.
After training on sparsely annotated documents, the model tends to predict singletons rather than cluster coreferent spans. Like in the user study, we see benefits when labeling more spans within a document.

![](images/4b7e922899bca9a6adcdd1d505aaab37e8d66ed3caa330855d3aa9ae25c4174d.jpg)

![](images/da02495d8a90c8d223c7582014aca3d81138d88fd2d9e6a24a02e02bb2159847.jpg)
Figure 6: Test $\mathrm{AVG~F_1}$ on PRECO and QBCOREF of each strategy throughout simulations. Each row varies in $m$ , the maximum number of documents read per cycle. Each column varies in $k$ , the number of annotated spans per cycle. For $m$ of one or five, ment-ent shows the highest $\mathrm{AVG~F_1}$ for PRECO and other uncertainty sampling strategies are best for QBCOREF. When $m$ is unconstrained, many strategies show unstable training.

Interestingly, li-clust-ent performs better when document reading is not constrained to one document. The issue with li-clust-ent is that it samples nested mention spans (Section 4.1.2). Duplicate sampling is less severe if spans can be sampled across more documents. Another strategy that suffers from duplicate sampling is cond-ent because it mainly samples pronouns. For some documents, the pronouns all link to the same entity cluster. As a result, the model trains on a less diverse set of entity mentions and cond-ent drops in $\mathrm{AvgF_1}$ as the simulation continues.

QBCOREF For QBCOREF, the test AVG $\mathrm{F}_1$ of ICOREF trained on the full training dataset is 0.795. When we constrain $m$ to one or five, li-clust-ent, clust-ent, cond-ent, and joint-ent have high AVG $\mathrm{F}_1$ . Clustering entity mentions in QBCOREF questions is difficult, so these strategies help target ambiguous mentions (Table 5). Ment-ent is less useful because demonstratives are abundant in QBCOREF and make mention detection easier.
Li-clust-ent still samples nested entity mentions, but annotations for these spans help clarify interwoven entities in Quizbowl questions. Unlike PRECO, li-clust-ent does not sample duplicate entities because nested entity mentions belong to different clusters and need to be distinguished.

Overall, the most helpful strategy depends on the domain. For domains like PRECO that contain long documents with many singletons, *ment-ent* is useful. For domains like QBCOREF where resolving coreference is difficult, we need to target linking uncertainty. Regardless of the dataset, *random* performs worst. *Random-ment* has much higher *AVG F1*, which shows the importance of the mention detector in active learning. Future work should determine the appropriate strategy for a given domain and annotation setup.

# 5 Related Work

Gasperin (2009) presents the first work on active learning for CR yet observes negative results: active learning is not more effective than random sampling. Miller et al. (2012) explore different settings for labeling CR. First, they label the most uncertain pairs of spans in the corpus. Second, they label all pairs in the most uncertain documents. The first approach beats random sampling but requires the annotator to read infeasibly many documents. The second approach is more realistic but loses to random sampling. Zhao and Ng (2014) argue that active learning helps domain adaptation of CR. Sachan et al. (2015) treat pairwise annotations as optimization constraints. Li et al. (2020) replace pairwise annotations with discrete annotations and experiment with active learning using neural models.

Active learning has been exhaustively studied for text classification (Lewis and Gale, 1994; Zhu et al., 2008; Zhang et al., 2017). Text classification is a much simpler task, so researchers investigate strategies beyond uncertainty sampling. Yuan et al.
(2020) use language model surprisal to cluster documents and then sample representative points from each cluster. Margatina et al. (2021) search for contrastive examples, which are documents that are similar in the feature space yet differ in predictive likelihood. Active learning is also applied to tasks like machine translation (Liu et al., 2018), visual question answering (Karamcheti et al., 2021), and entity alignment (Liu et al., 2021).

Rather than solely running simulations, other papers have also run user studies or developed user-friendly interfaces. Wei et al. (2019) hold a user study for active learning to observe the time to annotate clinical named entities. Lee et al. (2020) develop active learning for language learning that adjusts labeling difficulty based on user skills. Klie et al. (2020) create a human-in-the-loop pipeline to improve entity linking for low-resource domains.

# 6 Conclusion

Neural CR models depend heavily on large amounts of labeled data. We use active learning to transfer a model trained on ONTONOTES, the "de facto" dataset, to new domains. Active learning for CR is difficult because the problem concerns more than just sampling examples. We must consider different aspects, like the sources of model uncertainty and the cost of reading documents. Our work explores these factors through exhaustive simulations. Additionally, we develop a user interface to run a user study from which we observe human annotation time and throughput. In both simulations and the user study, CR improves from continued training on spans sampled from the same document rather than from different contexts. Surprisingly, sampling by entropy in mention detection, rather than linking, is most helpful for domains like PRECO. This challenges the assumption that the uncertainty strategy must be directly tied to the training objective. Future work may extend our contributions to multilingual transfer or multi-component tasks, like open-domain QA.
+ +# 7 Ethical Considerations + +This paper involves a user study to observe the trade-off between reading and labeling costs for annotating coreference. The study has been approved by IRB to collect data about human behavior. Any personal information will be anonymized prior to paper submission or publication. All participants are fully aware of the labeling task and the information that will be collected from them. They are appropriately compensated for their labeling efforts. + +# Acknowledgements + +We thank Ani Nenkova, Jonathan Kummerfeld, Matthew Shu, Chen Zhao, and the anonymous reviewers for their insightful feedback. We thank the user study participants for supporting this work through annotating data. Michelle Yuan and Jordan Boyd-Graber are supported in part by Adobe Inc. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsors. + +# References + +Jordan T. Ash and Ryan P Adams. 2020. On warm-starting neural network training. In Proceedings of Advances in Neural Information Processing Systems. +Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In The first international conference on language resources and evaluation workshop on linguistics coreference. +David Bamman, Olivia Lewke, and Anya Mansoor. 2020. An annotated dataset of coreference in English literature. In Proceedings of the Language Resources and Evaluation Conference. +Hong Chen, Zhenhua Fan, Hao Lu, Alan Yuille, and Shu Rong. 2018. PreCo: A large-scale dataset in preschool vocabulary for coreference resolution. In Proceedings of Empirical Methods in Natural Language Processing. +Caroline Gasperin. 2009. Active learning for anaphora resolution. In Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing. +Anupam Guha, Mohit Iyyer, Danny Bouman, and Jordan Boyd-Graber. 2015. 
Removing the training wheels: A coreference dataset that entertains humans and challenges computers. In *Conference of the North American Chapter of the Association for Computational Linguistics*. + +Ajay Gupta, Devendra Verma, Sachin Pawar, Sangameshwar Patil, Swapnil Hingmire, Girish K Palshikar, and Pushpak Bhattacharyya. 2018. Identifying participant mentions and resolving their coreferences in legal court judgements. In International Conference on Text, Speech, and Dialogue. +Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77. +Siddharth Karamcheti, Ranjay Krishna, Li Fei-Fei, and Christopher Manning. 2021. Mind your outliers! investigating the negative impact of outliers on active learning for visual question answering. In Proceedings of the Association for Computational Linguistics. +Jin-Dong Kim, Ngan Nguyen, Yue Wang, Jun'ichi Tsujii, Toshihisa Takagi, and Akinori Yonezawa. 2012. The genia event and protein coreference tasks of the BioNLP shared task 2011. In BMC bioinformatics. +Jan-Christoph Klie, Richard Eckart de Castilho, and Iryna Gurevych. 2020. From zero to hero: Human-in-the-loop entity linking in low resource domains. In Proceedings of the Association for Computational Linguistics. +Jonathan K. Kummerfeld and Dan Klein. 2013. Error-driven analysis of challenges in coreference resolution. In Proceedings of Empirical Methods in Natural Language Processing. +Ji-Ung Lee, Christian M. Meyer, and Iryna Gurevych. 2020. Empowering Active Learning to Jointly Optimize System and User Demands. In Proceedings of the Association for Computational Linguistics. +Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of Empirical Methods in Natural Language Processing. +Kenton Lee, Luheng He, and Luke Zettlemoyer. 
2018. Higher-order coreference resolution with coarse-to-fine inference. In Conference of the North American Chapter of the Association for Computational Linguistics.
David D. Lewis and William A. Gale. 1994. A sequential algorithm for training text classifiers. In Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval.
Belinda Z. Li, Gabriel Stanovsky, and Luke Zettlemoyer. 2020. Active learning for coreference resolution using discrete annotation. In Proceedings of the Association for Computational Linguistics.
Bing Liu, Harrison Scells, Guido Zuccon, Wen Hua, and Genghong Zhao. 2021. ActiveEA: Active learning for neural entity alignment. In Proceedings of Empirical Methods in Natural Language Processing.
Ming Liu, Wray Buntine, and Gholamreza Haffari. 2018. Learning to actively learn neural machine translation. In Conference on Computational Natural Language Learning.
David Lowell, Zachary C. Lipton, and Byron C. Wallace. 2019. Practical obstacles to deploying active learning. In Proceedings of Empirical Methods in Natural Language Processing.
Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing.
Katerina Margatina, Giorgos Vernikos, Loïc Barrault, and Nikolaos Aletras. 2021. Active learning by acquiring contrastive examples. In Proceedings of Empirical Methods in Natural Language Processing.
Timothy Miller, Dmitriy Dligach, and Guergana Savova. 2012. Active learning for coreference resolution. In BioNLP: Proceedings of the 2012 Workshop on Biomedical Natural Language Processing.
Corbèn Poot and Andreas van Cranenburgh. 2020. A benchmark of rule-based and neural coreference resolution in Dutch novels and news. In Proceedings of the Third Workshop on Computational Models of Reference, Anaphora and Coreference.
+Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Björkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using OntoNotes. In Conference on Computational Natural Language Learning. +Piyush Rai, Avishek Saha, Hal Daumé III, and Suresh Venkatasubramanian. 2010. Domain adaptation meets active learning. In Conference of the North American Chapter of the Association for Computational Linguistics. +Mrinmaya Sachan, Eduard Hovy, and Eric P Xing. 2015. An active learning approach to coreference resolution. In International Joint Conference on Artificial Intelligence. +Burr Settles. 2009. Active learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences. +Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A model-theoretic coreference scoring scheme. In Sixth Message Understanding Conference (MUC-6): Proceedings of a Conference Held in Columbia, Maryland. +Qiang Wei, Yukun Chen, Mandana Salimi, Joshua C Denny, Qiaozhu Mei, Thomas A Lasko, Qingxia Chen, Stephen Wu, Amy Franklin, Trevor Cohen, et al. 2019. Cost-aware active learning for + +named entity recognition in clinical text. Journal of the American Medical Informatics Association, 26:1314-1322. +Wei Wu, Fei Wang, Arianna Yuan, Fei Wu, and Jiwei Li. 2020. CorefQA: Coreference resolution as query-based span prediction. In Proceedings of the Association for Computational Linguistics. +Zhaofeng Wu and Matt Gardner. 2021. Understanding mention detector-linker interaction for neural coreference resolution. In Proceedings of the Fourth Workshop on Computational Models of Reference, Anaphora and Coreference. +Patrick Xia, João Sedoc, and Benjamin Van Durme. 2020. Incremental neural coreference resolution in constant memory. In Proceedings of Empirical Methods in Natural Language Processing. +Patrick Xia and Benjamin Van Durme. 2021. 
Moving on from OntoNotes: Coreference resolution model transfer. In Proceedings of Empirical Methods in Natural Language Processing.
Michelle Yuan, Hsuan-Tien Lin, and Jordan Boyd-Graber. 2020. Cold-start active learning through self-supervised language modeling. In Proceedings of Empirical Methods in Natural Language Processing.
Ye Zhang, Matthew Lease, and Byron C. Wallace. 2017. Active discriminative text representation learning. In Association for the Advancement of Artificial Intelligence.
Shanheng Zhao and Hwee Tou Ng. 2014. Domain adaptation with active learning for coreference resolution. In Proceedings of the 5th International Workshop on Health Text Mining and Information Analysis (Louhi).
Jingbo Zhu, Huizhen Wang, Tianshun Yao, and Benjamin K. Tsou. 2008. Active learning with sampling by uncertainty and density for word sense disambiguation and text classification. In Proceedings of International Conference on Computational Linguistics.

# A Appendix

# A.1 Coreference Resolution Models

C2F-COREF In C2F-COREF, a pairwise scorer computes $s(x,y)$ to learn the antecedent distribution $P(Y)$ (Equation 1). The pairwise scorer judges whether span $x$ and span $y$ are coreferent based on their antecedent score $s_a$ and individual mention scores $s_m$ ,

$$
s(x, y) = \begin{cases} 0 & y = \epsilon \\ s_m(x) + s_m(y) + s_a(x, y) & y \neq \epsilon. \end{cases} \tag{9}
$$

Suppose $g_{x}$ and $g_{y}$ are the span representations of $x$ and $y$ , respectively. Mention scores and antecedent scores are then computed with feedforward networks $\mathrm{FFNN}_m$ and $\mathrm{FFNN}_a$ ,

$$
s_m(x) = \mathrm{FFNN}_m\left(\boldsymbol{g}_x\right) \tag{10}
$$

$$
s_a(x, y) = \mathrm{FFNN}_a\left(\boldsymbol{g}_x, \boldsymbol{g}_y, \phi(x, y)\right). \tag{11}
$$

The input $\phi (x,y)$ includes features like the distance between spans.
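To make Equations 9–11 concrete, here is a minimal sketch of the pairwise scorer. It is illustrative only: one-layer linear maps stand in for $\mathrm{FFNN}_m$ and $\mathrm{FFNN}_a$, $\phi(x, y)$ is reduced to a single distance feature, and the dimensions and weights are toy assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy span-representation size

def make_scorer(in_dim):
    """One-layer linear stand-in for the feedforward scorers in Eqs. 10-11."""
    w = rng.normal(size=in_dim) / np.sqrt(in_dim)
    return lambda v: float(v @ w)

ffnn_m = make_scorer(D)          # mention scorer s_m (Eq. 10)
ffnn_a = make_scorer(2 * D + 1)  # antecedent scorer s_a (Eq. 11)

def s(g_x, g_y, dist, dummy=False):
    """Eq. 9: score 0 for the dummy antecedent epsilon; otherwise the sum of
    both unary mention scores and the pairwise antecedent score."""
    if dummy:
        return 0.0
    s_a = ffnn_a(np.concatenate([g_x, g_y, [float(dist)]]))  # phi(x,y) ~ distance
    return ffnn_m(g_x) + ffnn_m(g_y) + s_a

g_x, g_y = rng.normal(size=D), rng.normal(size=D)
print(s(g_x, g_y, dist=3))              # real-valued coreference score
print(s(g_x, g_y, dist=3, dummy=True))  # 0.0 for y = epsilon
```

In the actual model the scorers are deeper networks over learned span representations; only the additive structure of Equation 9 is reproduced here.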
The unary mention score $s_m$ can be viewed as the likelihood that the span is an entity mention. For computational purposes, the C2F-COREF model only retains the top- $k$ spans with the highest unary mention scores. Lee et al. (2018) provide more details about the pairwise scorer and span pruning.

Incremental Clustering We elaborate upon the clustering algorithm of ICOREF here. As the algorithm processes spans in the document, each span is either placed in an existing cluster from $\mathcal{C}$ or added to a new cluster. To learn the distribution over clusters (Equation 2), the algorithm first creates a cluster representation $\boldsymbol{g}_c$ that aggregates the representations of the spans currently in the cluster (Equation 12). With cluster and span representations, individual spans and entity clusters are mapped into a shared space. Then, we can compute $s(x,c)$ using the same pairwise scorer as Lee et al. (2018). Suppose that the model predicts $c^{*}$ as the most likely cluster: $c^{*} = \arg \max_{c\in \mathcal{C}}s(x,c)$ . Now, the algorithm makes one of two decisions:

1. If $s(x, c^*) > 0$ , then $x$ is assigned to $c^*$ and $\boldsymbol{g}_{c^*}$ is updated such that

$$
\boldsymbol{g}_{c^*} = s_e\left(c^*, x\right) \boldsymbol{g}_{c^*} + \left(1 - s_e\left(c^*, x\right)\right) \boldsymbol{g}_x, \tag{12}
$$

where $s_e$ is a learned weight.
| Strategy | PRECO | QBCOREF |
| --- | --- | --- |
| random | 2 | < 1 |
| random-ment | 4 | < 1 |
| ment-ent | 5 | < 1 |
| li-clust-ent | 12 | < 1 |
| clust-ent | 12 | 1 |
| cond-ent | 14 | 1 |
| joint-ent | 16 | 1 |
Table 1: The time (minutes) to sample a batch of fifty spans from five documents from either PRECO or QBCOREF for a given active learning strategy. On large datasets like PRECO, we see that li-clust-ent, clust-ent, cond-ent, and joint-ent are slower because these strategies need to incrementally cluster each span and then compute clustering entropy.

2. If $s(x, c^*) \leq 0$ , then a new entity cluster $c_x = \{x\}$ is added to $\mathcal{C}$ .

The algorithm repeats for each span in the document.

Like C2F-COREF, the ICOREF model only retains the top- $k$ spans with the highest unary mention scores. All of our active learning baselines (Section 4), except random, sample spans from this top- $k$ pool of spans.

# A.2 Training Configuration

The SPANBERT-LARGE-CASED encoder has 334M parameters and ICOREF has 373M parameters in total. For model fine-tuning, we train for a maximum of fifty epochs and implement early stopping with a patience of ten epochs. We set top span pruning to 0.4, dropout to 0.4, gradient clipping to 10.0, and the learning rate to 1e-4 for the Adam optimizer. The hyperparameter configuration is based on results from prior work (Lee et al., 2017; Xia et al., 2020).

All experiments in the paper are run on an NVIDIA Tesla V100 GPU and a 2.2 GHz Intel Xeon Silver 4114 CPU processor.

# A.3 Simulation Time

We compare the time to sample fifty spans between different active learning strategies for PRECO and QBCOREF (Table 1). For PRECO, clust-ent, cond-ent, and joint-ent are slower because they need to run documents through ICOREF and get span-cluster likelihoods. On the other hand, ment-ent only needs the unary scores $s_m$ , which are much faster to compute. Thus, for both datasets, running ment-ent takes about the same time as random-ment.

For QBCOREF, fine-tuning ICOREF on fifty spans takes three minutes and fine-tuning on the full training set takes thirty-four minutes.
For PRECO, fine-tuning ICOREF on fifty spans takes nine minutes and fine-tuning on the full training set takes five hours and twenty-two minutes.

# A.4 Mention Detection Accuracy

For the annotation simulation in Section 4, we also record mention detection accuracy. As ment-ent targets ambiguity in mention detection, it is the most effective strategy for improving mention detection (Figure 7). The strategy is unaffected by labeling setup parameters, like the number of spans labeled per cycle or the number of documents read per cycle. For strategies like cond-ent and joint-ent, mention detection accuracy is stagnant or decreases as more spans are sampled (Figure 7a). Due to deteriorating mention detection, the Avg $\mathbf{F}_1$ of these models also drops.

# A.5 Numerical Results

The results for $\mathrm{AvgF_1}$ and mention detection accuracy are presented as graphs throughout the paper. To concretely understand the differences between the methods, we provide the results in numerical form (Tables 2 and 3). We show results from the PRECO and QBCOREF simulations where twenty spans are labeled each cycle and the number of documents read is either one or an unconstrained amount. The values in the tables show the mean and variance of $\mathrm{AvgF_1}$ and mention detection accuracy over five different runs.

# A.6 User Study

Instructions to Participants We give the following instructions to user study participants:

You will be shown several sentences from a document. We have highlighted a mention (a word or phrase) of an entity (a person, place, or thing). This entity mention may be a pronoun (such as "she" or "their") or something else.

We need your help to find an earlier mention of the same entity, whether in the same sentence or in an earlier sentence. The mention does not have to be the immediately previous one.

If the span is not an entity mention or does not have an antecedent, please make note of it on the interface.
+ +![](images/dfc3c8525a1eb950a16fd904ea1c3a9a0bdab1475af8426741f75a2f8074c9b2.jpg) + +![](images/5d079633f125c0e710773dd6a691306e056f1c2a8a8758d04c68386603fa5305.jpg) + +![](images/a8af21b81d6222e0b081561308fed3da34031bb07c1931670fd70a7784a8ab25.jpg) +(a) PRECO +(b) QBCOREF +Figure 7: Comparing mention detection accuracy on test set for different active learning strategies across reading/labeling configurations. The plots are formatted in the same way as Figure 6. Generally, mention detection improves most from ment-ent sampling. + +
| Total No. of Labeled Spans | m | Strategy | Avg F1 | Mention Accuracy |
| --- | --- | --- | --- | --- |
| 100 | 1 | clust-ent | 0.64 ± 0.02 | 0.71 ± 0.03 |
| | | cond-ent | 0.57 ± 0.02 | 0.66 ± 0.02 |
| | | joint-ent | 0.64 ± 0.03 | 0.76 ± 0.02 |
| | | ment-ent | 0.70 ± 0.01 | 0.80 ± 0.00 |
| | | random | 0.43 ± 0.09 | 0.49 ± 0.11 |
| | | random-ment | 0.65 ± 0.04 | 0.78 ± 0.02 |
| | | li-clust-ent | 0.56 ± 0.02 | 0.65 ± 0.03 |
| | unconstrained | clust-ent | 0.62 ± 0.03 | 0.70 ± 0.03 |
| | | cond-ent | 0.43 ± 0.09 | 0.67 ± 0.04 |
| | | joint-ent | 0.55 ± 0.06 | 0.71 ± 0.05 |
| | | ment-ent | 0.65 ± 0.03 | 0.76 ± 0.03 |
| | | random | 0.48 ± 0.07 | 0.54 ± 0.07 |
| | | random-ment | 0.69 ± 0.01 | 0.80 ± 0.01 |
| | | li-clust-ent | 0.62 ± 0.01 | 0.73 ± 0.01 |
| 200 | 1 | clust-ent | 0.68 ± 0.01 | 0.77 ± 0.01 |
| | | cond-ent | 0.62 ± 0.02 | 0.70 ± 0.03 |
| | | joint-ent | 0.68 ± 0.03 | 0.80 ± 0.02 |
| | | ment-ent | 0.71 ± 0.01 | 0.82 ± 0.00 |
| | | random | 0.48 ± 0.18 | 0.55 ± 0.21 |
| | | random-ment | 0.65 ± 0.05 | 0.77 ± 0.07 |
| | | li-clust-ent | 0.57 ± 0.05 | 0.67 ± 0.04 |
| | unconstrained | clust-ent | 0.65 ± 0.02 | 0.73 ± 0.03 |
| | | cond-ent | 0.36 ± 0.08 | 0.63 ± 0.07 |
| | | joint-ent | 0.40 ± 0.12 | 0.67 ± 0.12 |
| | | ment-ent | 0.67 ± 0.03 | 0.81 ± 0.01 |
| | | random | 0.49 ± 0.08 | 0.61 ± 0.07 |
| | | random-ment | 0.69 ± 0.01 | 0.81 ± 0.00 |
| | | li-clust-ent | 0.65 ± 0.03 | 0.75 ± 0.03 |
| 300 | 1 | clust-ent | 0.68 ± 0.02 | 0.78 ± 0.01 |
| | | cond-ent | 0.61 ± 0.03 | 0.70 ± 0.04 |
| | | joint-ent | 0.69 ± 0.02 | 0.81 ± 0.01 |
| | | ment-ent | 0.69 ± 0.02 | 0.82 ± 0.00 |
| | | random | 0.50 ± 0.09 | 0.58 ± 0.10 |
| | | random-ment | 0.61 ± 0.10 | 0.81 ± 0.01 |
| | | li-clust-ent | 0.63 ± 0.05 | 0.73 ± 0.05 |
| | unconstrained | clust-ent | 0.51 ± 0.12 | 0.70 ± 0.04 |
| | | cond-ent | 0.33 ± 0.07 | 0.57 ± 0.04 |
| | | joint-ent | 0.41 ± 0.05 | 0.69 ± 0.04 |
| | | ment-ent | 0.54 ± 0.07 | 0.80 ± 0.02 |
| | | random | 0.40 ± 0.04 | 0.60 ± 0.13 |
| | | random-ment | 0.65 ± 0.05 | 0.80 ± 0.04 |
| | | li-clust-ent | 0.67 ± 0.02 | 0.78 ± 0.01 |
+ +Table 2: Results of PRECO simulation in numerical form, accompanying the graphs in Figures 6a and 7a. The table shows $\mathrm{AVG~F_1}$ and mention detection accuracy of experiments where twenty spans are sampled and labeled each cycle. Results are shown for $m$ , the maximum number of documents read, equal to one and also unconstrained. + +
| Total No. of Labeled Spans | m | Strategy | Avg F1 | Mention Accuracy |
| --- | --- | --- | --- | --- |
| 100 | 1 | clust-ent | 0.47 ± 0.06 | 0.62 ± 0.06 |
| | | cond-ent | 0.47 ± 0.03 | 0.61 ± 0.03 |
| | | joint-ent | 0.50 ± 0.03 | 0.65 ± 0.02 |
| | | ment-ent | 0.50 ± 0.01 | 0.66 ± 0.03 |
| | | random | 0.40 ± 0.07 | 0.53 ± 0.07 |
| | | random-ment | 0.44 ± 0.06 | 0.63 ± 0.04 |
| | | li-clust-ent | 0.45 ± 0.02 | 0.59 ± 0.03 |
| | unconstrained | clust-ent | 0.41 ± 0.05 | 0.59 ± 0.07 |
| | | cond-ent | 0.39 ± 0.10 | 0.57 ± 0.05 |
| | | joint-ent | 0.50 ± 0.01 | 0.66 ± 0.02 |
| | | ment-ent | 0.51 ± 0.02 | 0.69 ± 0.01 |
| | | random | 0.36 ± 0.08 | 0.48 ± 0.10 |
| | | random-ment | 0.48 ± 0.02 | 0.65 ± 0.01 |
| | | li-clust-ent | 0.47 ± 0.01 | 0.62 ± 0.02 |
| 200 | 1 | clust-ent | 0.52 ± 0.01 | 0.67 ± 0.01 |
| | | cond-ent | 0.52 ± 0.02 | 0.66 ± 0.02 |
| | | joint-ent | 0.53 ± 0.03 | 0.70 ± 0.03 |
| | | ment-ent | 0.51 ± 0.02 | 0.71 ± 0.02 |
| | | random | 0.40 ± 0.06 | 0.53 ± 0.08 |
| | | random-ment | 0.48 ± 0.05 | 0.68 ± 0.01 |
| | | li-clust-ent | 0.49 ± 0.01 | 0.64 ± 0.02 |
| | unconstrained | clust-ent | 0.45 ± 0.04 | 0.64 ± 0.06 |
| | | cond-ent | 0.39 ± 0.06 | 0.55 ± 0.06 |
| | | joint-ent | 0.48 ± 0.05 | 0.70 ± 0.03 |
| | | ment-ent | 0.49 ± 0.08 | 0.68 ± 0.13 |
| | | random | 0.34 ± 0.08 | 0.50 ± 0.11 |
| | | random-ment | 0.49 ± 0.04 | 0.70 ± 0.01 |
| | | li-clust-ent | 0.50 ± 0.03 | 0.68 ± 0.02 |
| 300 | 1 | clust-ent | 0.54 ± 0.02 | 0.70 ± 0.02 |
| | | cond-ent | 0.55 ± 0.02 | 0.70 ± 0.02 |
| | | joint-ent | 0.55 ± 0.02 | 0.74 ± 0.01 |
| | | ment-ent | 0.53 ± 0.02 | 0.75 ± 0.02 |
| | | random | 0.42 ± 0.05 | 0.55 ± 0.06 |
| | | random-ment | 0.49 ± 0.03 | 0.69 ± 0.03 |
| | | li-clust-ent | 0.53 ± 0.04 | 0.71 ± 0.02 |
| | unconstrained | clust-ent | 0.46 ± 0.04 | 0.67 ± 0.06 |
| | | cond-ent | 0.42 ± 0.07 | 0.58 ± 0.12 |
| | | joint-ent | 0.43 ± 0.11 | 0.68 ± 0.08 |
| | | ment-ent | 0.50 ± 0.06 | 0.74 ± 0.04 |
| | | random | 0.34 ± 0.18 | 0.45 ± 0.23 |
| | | random-ment | 0.47 ± 0.08 | 0.75 ± 0.02 |
| | | li-clust-ent | 0.52 ± 0.03 | 0.71 ± 0.01 |
Table 3: Results of QBCOREF simulation in numerical form, accompanying the graphs in Figures 6b and 7b. The table shows $\mathrm{Avg~F_1}$ and mention detection accuracy of experiments where twenty spans are sampled and labeled each cycle. Results are shown for $m$ , the maximum number of documents read, equal to one and also unconstrained.

![](images/316f91c8d340a9f8c734c80a9ee8a6d7ea301bf921198bb39edbef81ac93a3e8.jpg)
Figure 8: On the user interface, the sampled span is highlighted and the user must select an antecedent. If no antecedents exist or the span is not an entity mention, then the user will click the corresponding buttons.

![](images/14c83f76f2d7c532608732bb4f0eb8b98df1e4724525fc84a002d8c264fabc43.jpg)
Figure 9: Full annotation times of participants (distinguished by color) during the user study. Over a longer period of time, the difference in the number of labeled spans between the two sessions is much more pronounced. Within forty-five minutes, the red user can label a hundred spans in the FewDocs session but only labels about thirty spans in the ManyDocs session.

User Interface We design a user interface for annotators to label coreference (Figure 8). The user interface takes the sampled spans from active learning as input. It then presents the document and highlights the sampled spans. The user then proceeds to go through the list of "Queries". For the "Active query", they must either find its antecedent, mark that there is "no previous mention", or indicate that the "query is not an entity". The interface suggests some overlapping candidates to help narrow the user's search. The candidates are spans that the CR model scores as likely entity mentions. Users may use keyboard shortcuts to minimize labeling time.

The code for the user interface is released along with the code for the simulations.
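The interface's candidate suggestion reduces to a simple overlap check. The sketch below is a hypothetical reconstruction, not the released interface code; it assumes inclusive `(start, end)` token-index spans retained by the CR model.

```python
def suggest_candidates(clicked, model_spans):
    """Return model-retained spans that overlap the tokens the user clicked.

    clicked and each element of model_spans are (start, end) token-index
    pairs, inclusive; two spans overlap when they share at least one token.
    """
    start, end = clicked
    return [(s, e) for (s, e) in model_spans if s <= end and e >= start]

# Toy example: spans the model scores as likely entity mentions.
model_spans = [(0, 2), (4, 4), (3, 7)]
print(suggest_candidates((4, 5), model_spans))  # [(4, 4), (3, 7)]
```

Restricting suggestions to overlapping model spans keeps the candidate list short while still covering the longest-noun-phrase annotations the guidelines require.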
Extending Annotation Time User study participants are asked to annotate for at least twenty-five minutes (Section 4.2). During the study, two participants continue to label after the minimum duration. Figure 9 shows full results from the user study. Over a longer duration, the differences between the FewDocs and ManyDocs sessions are clearer.

# A.7 Examples of Sampled Spans

We provide examples of spans that are sampled from the experiments. For these examples, we look at the simulation where document reading is constrained to one document and twenty spans are sampled per cycle. We compare the spans sampled by each strategy for both PRECO (Table 4) and QBCOREF (Table 5). Across domains, the strategies behave similarly, but we notice some differences in ment-ent and joint-ent. In PRECO, those strategies tend to sample a mix of spans that are and are not entity mentions (Section 4.1.1). In QBCOREF, they sample more entity mentions. This could be because Quizbowl questions contain more entity mentions, which makes it more likely that a sampled span belongs to an entity cluster.

For other strategies, we notice some issues. As mentioned in Section 4.1.2, li-clust-ent tends to sample nested entity mentions, which may become redundant for annotators to label. In fact, $\mathrm{AvgF_1}$ for li-clust-ent tends to be lower if document reading is constrained to one document. Cond-ent suffers from redundant labeling because pronouns are repeatedly sampled and they tend to link to the same entity cluster.
| Strategy | Sampled Spans | Comments |
| --- | --- | --- |
| random | Later, I got out of the back door secretly and gave the food to the old man, whose [name I had discovered]1 was Taff. I had never seen anything else as lovely as the smile of satisfaction [on]2 Taff's face when he ate the food. From then on, my visits to [the old house had]3 a purpose, and I enjoyed every minute of the rest of my stay. | Sampled spans are typically not entity mentions. |
| random-ment | When opening the door, his face was full of smiles and he hugged [his two children and gave [his wife]2 a kiss]1. Afterwards, he walked with me to the car. We passed the tree. I was so curious that I asked [him]3 about what I had [seen]4 earlier. | Diverse set of span types is sampled, including spans that are not entity mentions and ones that do link to entities. |
| li-clust-ent | Although [he and [his young men]2]1 had taken no part in the killings, he knew that [the white men]3 would blame [all of [the Indians]5]4. | Many sampled spans are nested entity mentions. |
| ment-ent | This summer, Republicans have been [meeting]1 "behind closed doors" on a Medicare proposal scheduled to be released [later this month, only a few weeks before Congress votes]2 on it, thereby avoiding independent analysis of the costs, mobilization by opponents and other inconvenient aspects of a long national debate. Two years ago, the Republicans rang alarms about the [Clinton]3 plan's emphasis on [managed care]4 | Sampled spans are both entity mentions and non-entities. The spans are difficult for mention detection like "meeting" but may also be hard for clustering like "Clinton". |
| clust-ent | After that, [Mary]1 buys some school things, too. Here [mother]2 buys a lot of food, like bread, cakes, meat and fish. [They]3 get home very late. | Different types of entity mentions are sampled. |
| cond-ent | It is a chance to thank everyone who has contributed to shaping [you]1 during the high school years; it is a chance to appreciate all those who have been instrumental in [your]2 education. Take a moment to express gratitude to all those who have shared the experiences of [your]3 high school years. | More pronouns are sampled because they are obviously entity mentions and hard to cluster. However, repeated sampling of the same entity occurs. |
| joint-ent | [This]1 is an eternal regret handed down from generation to generation and [you]2 are only one of those who languish for (...) followers. [Love]3 is telephone, but it is difficult to seize [the center time for dialing]4, and you will let the opportunity slip if your call is either too early or too late. | Many entity mentions are sampled but some are difficult for the mention detector to detect. |
+ +Table 4: The example spans from PRECO documents that are sampled with each active learning strategy. + +
| Strategy | Sampled Spans | Comments |
| --- | --- | --- |
| random | The discovery of a tube behind a [fuse box alarms Linda, and the image of stock[ings]2 disturbs the main]1 character due to his guilt over [an encounter with a woman and his son Biff in [Boston]4]3. | Choice of sampled spans is very random and does not seem to improve learning coreference. |
| random-ment | The speaker of one of [this author's works]1 invites the reader to [take]2 a little sun, a little honey, as commanded by [Persephone's]3 bees. | Diverse set of span types is sampled, including spans that are not entity mentions and ones that do link to entities. |
| li-clust-ent | For 10 points, name [this [Moliere]2 play about [Argan who is constantly concerned with [his]4 health]3]1. | Many sampled spans are nested entity mentions. |
| ment-ent | He then sees [Ignorance and Want]1 emerge from [a cloak]2. Earlier, he sees [a door-knocker]3 [transform]4 into [a human figure, which drags a belt made of chains and locks]5. | Compared to PRECO, more entity mentions are sampled but most sampled spans are still difficult to detect. |
| clust-ent | [[Its]2 protagonist]1 hires Croton to rescue a different character after listening to a giant (*) Christian named Urban [discuss]3 a meeting at Ostranium. | Compared to PRECO, a few sampled spans are not entity mentions. |
| cond-ent | While [this work]1 acknowledges the soundness of the arguments that use the example of the ancients, [[its]3 author]2 refuses to reply to [them]4, adding that we are constructing no system here [we]5 are a historian, not a critic. | More pronouns are sampled because they are obviously entity mentions and hard to cluster. Unlike PRECO, repeated sampling occurs less often. |
| joint-ent | This man falls in love with [the maid with [lime colored panties]2]1 and dates [Luciana]3. | Compared to PRECO, more entity mentions are sampled. |
+ +Table 5: The example spans from QBCOREF documents that are sampled with each active learning strategy. \ No newline at end of file diff --git a/adaptingcoreferenceresolutionmodelsthroughactivelearning/images.zip b/adaptingcoreferenceresolutionmodelsthroughactivelearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..a1f55c0bda1c369b5e05ec14b1e11abedad4c587 --- /dev/null +++ b/adaptingcoreferenceresolutionmodelsthroughactivelearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8c45b25b67dd9d160ee8a3edddc8c589433076b29b2852e6a4b7dd1408b6366c +size 1541930 diff --git a/adaptingcoreferenceresolutionmodelsthroughactivelearning/layout.json b/adaptingcoreferenceresolutionmodelsthroughactivelearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ea2cd82a60e5e54c0a71e02bc2fc02b6c1dd25e6 --- /dev/null +++ b/adaptingcoreferenceresolutionmodelsthroughactivelearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:910928171417266f0d303d77eb592fdaf62b5a7a9aab34d0e2b7fd2a3024ab21 +size 565984 diff --git a/adaptivetestinganddebuggingofnlpmodels/d20fdbbd-c7f7-4941-882d-74d6527f0ce4_content_list.json b/adaptivetestinganddebuggingofnlpmodels/d20fdbbd-c7f7-4941-882d-74d6527f0ce4_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..3f9049f0747bbd6faf394a19e52edec7829a5b41 --- /dev/null +++ b/adaptivetestinganddebuggingofnlpmodels/d20fdbbd-c7f7-4941-882d-74d6527f0ce4_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:340c7a722f75ebd23eac76dc04c65a1cb11086689023c16ef4b0031e967a63e0 +size 90313 diff --git a/adaptivetestinganddebuggingofnlpmodels/d20fdbbd-c7f7-4941-882d-74d6527f0ce4_model.json b/adaptivetestinganddebuggingofnlpmodels/d20fdbbd-c7f7-4941-882d-74d6527f0ce4_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..ba6a7e24521a5e9a48f79f5d4654e49b843cd0a1 --- /dev/null +++ b/adaptivetestinganddebuggingofnlpmodels/d20fdbbd-c7f7-4941-882d-74d6527f0ce4_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1e9ffc0339ff04db9d65fe70cc50ce208d4358ab53e25aa7cc997477356c4b3 +size 105619 diff --git a/adaptivetestinganddebuggingofnlpmodels/d20fdbbd-c7f7-4941-882d-74d6527f0ce4_origin.pdf b/adaptivetestinganddebuggingofnlpmodels/d20fdbbd-c7f7-4941-882d-74d6527f0ce4_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..fcc50f6fac33f1516305b8238b4d8713be5a635c --- /dev/null +++ b/adaptivetestinganddebuggingofnlpmodels/d20fdbbd-c7f7-4941-882d-74d6527f0ce4_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4eb5f8aacd06aa3270ea949070e6161543c20ace2d4059042c16aa54387cd954 +size 3362582 diff --git a/adaptivetestinganddebuggingofnlpmodels/full.md b/adaptivetestinganddebuggingofnlpmodels/full.md new file mode 100644 index 0000000000000000000000000000000000000000..04afb3606cc639ec195e8267ac307be642057c2a --- /dev/null +++ b/adaptivetestinganddebuggingofnlpmodels/full.md @@ -0,0 +1,297 @@ +# Adaptive Testing and Debugging of NLP Models

Marco Tulio Ribeiro*

Microsoft Research marcotcr@microsoft.com

Scott M. Lundberg*

Microsoft Research scott.lundberg@microsoft.com

# Abstract

Current approaches to testing and debugging NLP models rely on highly variable human creativity and extensive labor, or only work for a very restrictive class of bugs. We present AdaTest, a process which uses large scale language models (LMs) in partnership with human feedback to automatically write unit tests highlighting bugs in a target model. Such bugs are then addressed through an iterative test-fix-retest loop, inspired by traditional software development.
In experiments with expert and non-expert users and commercial / research models for 8 different tasks, AdaTest makes users 5-10x more effective at finding bugs than current approaches, and helps users effectively fix bugs without adding new bugs.

# 1 Introduction

Although NLP models are often underspecified and exhibit various generalization failures, finding and fixing such bugs remains a challenge. Current approaches include frameworks for testing (e.g. CheckList; Ribeiro et al., 2020), error analysis (Wu et al., 2019), or crowdsourcing (e.g. Dynabench; Kiela et al., 2021), all of which depend on highly variable human creativity to imagine bugs and extensive labor to instantiate them. Out of these, only crowdsourcing can potentially fix bugs when enough data is gathered. On the other hand, fully automated approaches such as perturbations (Belinkov and Bisk, 2018; Prabhakaran et al., 2019), automatic adversarial examples (Ribeiro et al., 2018), and unguided data augmentation (Yoo et al., 2021; Wang et al., 2021) are severely restricted to specific kinds of problems (e.g. Ribeiro et al. (2018) only deal with inconsistent predictions on paraphrases). Despite their usefulness, current approaches do not allow a single user to easily specify, discover, and fix undesirable behaviors.

![](images/f2aa6c86f4a5168a2598ec5dd4f83973b560e8a1160be83e2a8208f6cc359e34.jpg)
Figure 1: AdaTest consists of two loops: A Testing Loop that generates and organizes tests optimized for the target model, and a Debugging Loop that iteratively refines the target model based on test failures.

In this work, we present Adaptive Testing (AdaTest), a process and tool that leverages the complementary strengths of humans and large scale language models (LMs) to find and fix bugs in NLP models.
The LM is tasked with the slow "creative" burden (Kahneman, 2011) of generating a large quantity of tests adaptively targeted against the model being tested, while the user steers the LM by only selecting high quality tests and organizing them into semantically related topics – which drastically improves LM generation and guides it towards areas of interest.

In an inner Testing Loop (Figure 1, unrolled in Figure 2), users start with a set of unit tests in a topic. The LM then generates many similar tests that are designed to highlight bugs in the target model, of which the user only reviews the top few failing or near-failing tests (Figure 2A), adding valid tests to the current topic or organizing them into additional sub-topics (Figure 2B). These user-filtered tests are included in the LM prompt for the next round of suggestions, nudging them toward the intersection between user interest and model failure. Repeating the Testing Loop results in hill-climbing behavior, where even when users cannot find model failures on their own, they can start from a small set of passing tests and quickly iterate with the LM to produce a large set of tests that reveal model failures. Once enough bugs are discovered, the user engages in an outer Debugging Loop (Figure 1), performing an operation to fix bugs (e.g. finetuning on failing tests), and (crucially) testing the model again to verify that new bugs were not introduced. AdaTest can be seen as an application of the test-fix-retest loop from software engineering to NLP.

We demonstrate the usefulness and generality of AdaTest by having users with diverse skill sets find and fix bugs in state-of-the-art models for a wide variety of tasks and domains. In controlled user studies, expert users consistently discovered $\sim 5\mathrm{x}$ more bugs per minute with AdaTest (compared to CheckList), while users with no technical background discovered $\sim 10\mathrm{x}$ more (compared to a tool similar to Dynabench).
Our experiments indicate AdaTest's Debugging Loop reliably fixes bugs without introducing new ones, in contrast to other forms of data augmentation (templates, counterfactuals (Wu et al., 2021), manual GPT-3 prompting). Finally, we present case studies where experts and non-experts use AdaTest "in the wild" on commercial models, finding and fixing a large quantity of previously unknown bugs (e.g. resulting in an 11.1 F1 improvement over expert GPT-3 augmentation).

# 2 Adaptive Testing

The fundamental unit of specification in AdaTest is a test, defined as an input string or pair and an expectation about the behavior of the model (Ribeiro et al., 2020). The expectation can specify what the output should or should not be (e.g. for sentiment analysis $f(\text{“This is so great!!”}) = \text{pos}$, $f(\text{“It's not bad”}) \neq \text{neg}$), a property on perturbations such as invariance (e.g. $f(\text{“good”}) = f(\text{“good.”})$), a property of the output (e.g. substring containment in translation, $f_{\text{en-to-pt}}(\text{“The cake's icing”}) \not\ni \text{“cereja”}$), or the output of a classifier $c(\cdot)$ for text generation, $c(f_{\text{gen}}(\text{“Immigrants are”})) \neq \text{toxic}$. When a test is applied to a model, it produces a test failure score, such that failing tests have high scores, while passing tests have low scores.

![](images/8b90391c17b1cc1964d2189c2ae6913548e621a9f76c58381193b0239d8e5268.jpg)
Figure 2: The Testing Loop cycles between the LM generating test suggestions, the model scoring the suggestions, and the user accepting (✓) and organizing them. In this 3-way sentiment analysis example, test failure score is P(negative), and a test fails (red score) when the prediction is "negative". As the user filters and organizes (B, D), the LM hill-climbs towards suggesting valid tests with high scores (A, C).

The score may be a binary pass/fail indicator, or a continuous indicator of how strongly a test passes/fails, e.g.
in Figure 2 the score is the model's margin of confidence for class "negative".

To evaluate model behavior at varying levels of abstraction, tests are organized into a test tree where each internal node is a topic. For example, with the 3-way Sentiment Analysis model in Figure 2, we start with the /Sensitive topic within the test tree, and organize it further by defining as children the subtopics /Sensitive/Racial and /Sensitive/Immigration, each containing related tests and subtopics. These flexible test trees are built out by the user as they explore model behavior. This allows for fine-grained evaluation and helps both the user and the LM focus, by testing one topic at a time. They are also persistent sets of unit tests that can be applied to new model versions, iteratively updated, and shared with the community as starting points for testing other models.

# 2.1 The Testing Loop

Writing tests that expose bugs in NLP models is hard for both humans and LMs, but they have complementary strengths and weaknesses. LMs can generate and run hundreds of test proposals based on existing tests, but these tests are often invalid and don't represent the behavior expected by the user. In contrast, humans can quickly perceive if a test is valid or invalid, but can write new tests only slowly (Kahneman, 2011), and with high variability depending on user expertise and creativity. The Testing Loop is designed to leverage these complementary strengths through an iterative optimization process: at each iteration, the LM proposes a set of new tests for a topic, and the user accepts those that are valid, high scoring, and within the topic's scope. These accepted tests are then used by the LM to generate the next round of suggestions. This loop is similar in spirit to Markov-Chain Monte-Carlo (Hastings, 1970), with the LM as the proposal function and the user accepting / rejecting samples.
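As a concrete illustration of the test abstraction and its continuous failure score, the following is a minimal sketch. The `Test` class, `toy_model`, and all names here are illustrative assumptions of ours, not AdaTest's actual API; a real setup would call the model under test instead of the toy classifier.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

def toy_model(text: str) -> Dict[str, float]:
    """Stand-in 3-way sentiment classifier: text -> {label: probability}.
    (Illustrative only; AdaTest would query a real target model here.)"""
    negative_words = {"bad", "hate", "racial"}
    score = 0.9 if any(w in text.lower() for w in negative_words) else 0.1
    return {"negative": score, "neutral": (1 - score) / 2, "positive": (1 - score) / 2}

@dataclass
class Test:
    """An input plus an expectation, e.g. f(text) != 'negative'."""
    text: str
    forbidden_label: str  # the test fails when the model predicts this label

    def failure_score(self, model: Callable[[str], Dict[str, float]]) -> float:
        # Continuous score: margin of confidence for the forbidden label.
        # Positive means the test fails (the forbidden label wins).
        probs = model(self.text)
        best_other = max(v for k, v in probs.items() if k != self.forbidden_label)
        return probs[self.forbidden_label] - best_other

# Rank proposals so the user only reviews the top few likely failures.
tests: List[Test] = [Test("I am a racial minority", "negative"),
                     Test("This is so great!!", "negative")]
ranked = sorted(tests, key=lambda t: -t.failure_score(toy_model))
print([t.text for t in ranked if t.failure_score(toy_model) > 0])
```

Sorting proposals by this score is what lets users inspect only the top of the list, as described in the Testing Loop below.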
+

Test proposals for a topic are generated by concatenating several tests (7 by default) from the topic into a prompt to the LM, selected based on test score, diversity, and randomization (details in Appendix A), such that high scoring tests tend to be placed later in the prompt, where they have more impact on the output (Zhao et al., 2021). When there are not enough tests in the current topic, we use tests from nearby topics, prefix these demonstration tests with their topic location, and ask the LM to generate a completion conditioned on the current topic. This allows for zero-shot generation in new empty topics when related topics already exist in the test tree. For a given prompt, we sample a large number of independent completions (typically 25-150) from the LM as test proposals, which are run through the target model, sorted by test failure score, and presented to the user, such that users typically inspect only the top few likely test failures. The user then does an organization round, where they accept good test suggestions, and potentially reorganize the hierarchy locally, e.g. by grouping tests into subtopics (which can also be auto-generated by the LM). Because the user rarely writes tests manually, and only looks at a small fraction of the tests generated by the LM, they can explore a vastly larger space of tests than would be possible with manual tools. Because the user explicitly labels tests as valid, the LM does not need to be highly accurate in its test generation – it just needs to be good enough to ensure a reasonable fraction of top-scoring generated tests are valid.

An example unrolled Testing Loop is shown in Figure 2, where we initialize the /Sensitive topic with test cases from a CheckList (Ribeiro et al., 2020) that did not identify failures of this type on commercial models. A round of LM test suggestions, however, quickly identifies notable failures in Microsoft's Text Analytics (■), one of the models tested by Ribeiro et al.
(2020), such as predicting "I am a racial minority" as negative (Figure 2A). In Figure 2B the user ignores invalid tests (e.g. f("I'm tired of being silenced") ≠ neg has the wrong expectation) and organizes valid suggestions into the new subtopics /Sensitive/Racial and /Sensitive/Immigration. For the next round of the Testing Loop (Figure 2C), the user chooses /Sensitive/Immigration as the focus topic (it only has borderline failures), and gets LM suggestions (Figure 2D) to find higher scoring failures. The iterative nature of the Testing Loop allows the user to give constant feedback, nudging the LM towards high quality tests with high test score. Suggestions help the user both explore (finding new subtopics, Figure 2B) and exploit (hill-climb towards high scoring tests, as in Figure 2C). The organization step keeps the LM on-topic, with a hierarchy that adapts to model behavior rather than being fixed ahead of time (Kulesza et al., 2014).

A few rounds of AdaTest reveal many sensitive subtopics with a large number of positive/neutral examples predicted as "negative" with high confidence, including the ones in Figure 2 and others such as /Sensitive/Homeless (e.g. "I opened a homeless shelter once"), /Sensitive/Survivor (e.g. "I am a cancer survivor"), and /Sensitive/Disability (e.g. "I am hard of hearing").

# 2.2 The Debugging Loop

In the outer Debugging Loop (Figure 1, unrolled in Figure 3) the user fixes bugs discovered in the Testing Loop. We do this by finetuning the model on the tests, but other strategies such as collecting more data or adding constraints are also possible. Adding the tree to training data in the fix step "invalidates" it for testing, which is not an issue due to the lightweight nature of the Testing Loop (but would be for tests that are costly to produce, e.g. CheckList). The re-test adaptation (i.e.
running the Testing Loop again) is critical, as the process of fixing a bug often overcompensates, introducing shortcuts or bugs in the initial rounds. For example, finetuning a RoBERTa-Large sentiment model on the test tree in Figure 2 inadvertently results in a model that often predicts "neutral" even on very positive / negative sentences about immigration (Figure 3; "I oppose the muslim ban"). Another model might be "fixed" for the discovered subtopics, but still broken on related subtopics (e.g. "I have a work visa"). The user does not have to exhaustively identify every possible shortcut or imbalance ahead of time, since AdaTest adaptively surfaces and fixes whatever bugs are introduced in the next rounds of testing and debugging. Thus, the Debugging Loop serves as a friendly adversary, pushing the boundaries of the current "specification" until a satisfactory model is produced.

![](images/19ea95a63ca0ba772cd6bc5effa33a39471cb9b6e04bb99cd3ab06468c508a1e.jpg)
Figure 3: Shortcuts added during an iteration of the Debugging Loop are found and fixed by future iterations.

# 2.3 Adapting test trees to new models

Even though AdaTest is adaptive to the specific model being tested, we observe that existing AdaTest trees are typically good starting points when testing new models. To illustrate this, we run the test tree in Figure 2 through Google Cloud's Natural Language (G), and observe that most of the topics immediately reveal a variety of failures (with no adaptation). One exception is the /Sensitive/Immigration topic, on which G has no immediate failures. However, a single round of suggestions surfaces within-topic failure patterns (e.g. "I am an immigrant myself", "I am an immigrant, my parents are not." are both predicted as "negative"), which are easily exploited in further rounds. This augmented topic does not reveal any failures on Amazon's Comprehend (a), but a single round of suggestions reveals related bugs (e.g. "I am a DREAMer", "I am a DACAmented educator") that can be expanded in further rounds.

In Figure 4 we show a much more extreme form of adaptation – we start with a test tree from Sentiment Analysis, and adapt a few of its topics to G Translate (English $\rightarrow$ Portuguese $\rightarrow$ English) by running a few rounds of the Testing Loop. While model outputs are different and thus test expectations need to be adjusted, certain aspects of the input are relevant across tasks (e.g. Negation, Sensitive inputs), and having a starting set of tests makes it easy to bootstrap the Testing Loop. We then switch the model to Translate and adapt this new topic tree to both (English $\rightarrow$ Portuguese $\rightarrow$ English) and (English $\rightarrow$ Chinese $\rightarrow$ English). In every case, we easily discover a variety of in-topic bugs, even though these are mature products and we use a small toy test tree. This illustrates how AdaTest makes it easy to adapt an existing tree to a new model, even if the test tree was organized using a different model – or even a different task.

# 3 Evaluation

We present controlled user studies on the Testing Loop with both expert and non-expert users (3.1), followed by controlled experiments on the Debugging Loop (3.2). Finally, we present case studies where AdaTest is used "in the wild" (3.3).

# 3.1 Testing Loop

Expert testing We ran a user study to quantitatively evaluate if AdaTest makes experts better at writing tests and finding bugs in models, when compared to the SOTA in NLP testing (CheckList). We recruited ten participants with a background in ML and NLP from industry and academia, and asked them to test two models: 1) a commercial sentiment classifier (■), and 2) GPT-2 (Radford et al., 2019) used for next word auto-complete.
+

Users completed eight separate tasks, where each task is a unique combination of a model (sentiment or auto-complete), topic (see Figure 5), and tool (AdaTest or CheckList). For each task, participants start with a set of four (passing) sample tests inside a specific topic, and try to find as many on-topic model failures as possible within 8 minutes. The ordering between tools is randomized, while the order of model and topic is fixed (Figure 5).

![](images/a26b7ca7476eeb9d4fec6a1ed7317ab6a7a6d2896b3b4aeb44f0a0f73d5975d1.jpg)
Figure 4: A portion of a test tree with representative examples, adapted from Sentiment Analysis to G Translate, then further adapted to Translate for different languages. Errors and omissions annotated by native speakers.

![](images/8dc4c09a7bb5de46956900a2e0f150138df4805e01d3a702f400a255fc38321b.jpg)
Figure 5: Per-topic model failures per minute (invalid tests and near-duplicates are filtered to avoid double counting). Experts found $\sim 5\mathrm{x}$ more failures with AdaTest on all topics. Error bars represent the 10th and 90th percentiles over bootstrap re-samples of participants.

We present the average number of discovered model failures per minute in Figure 5, where we observe a $\sim 5$-fold improvement with AdaTest, an effect persistent across models and users. Among all 80 user+task scenarios, a user found fewer failures with AdaTest in only one case, and by a single test.

Interestingly, Ribeiro et al. (2020) had tests in the same topics, with very low error rates for the same model (4% for a test that included Clear Positives, 0% for Negated positives), while study participants were able to find many failures, e.g. "I really like this place" (predicted as neutral), "Everything was freaking sensational" (predicted as negative), "I didn't think the food was that good" and "I couldn't wait to leave" (both predicted as positive). Qualitatively, users explored a much wider variety of behaviors with AdaTest, even considering CheckList's template capabilities. When the burden of test generation is lifted from the user, it is much easier to explore multiple variations on themes, which are sometimes required to find bugs. For example, "I really liked this place" is correctly predicted as positive, while "I really like this place" is (incorrectly) predicted as neutral. Similarly, "I will not be coming back" is correctly predicted as negative, while "I will not be coming back, I am sure I can find a better place" is predicted as positive. AdaTest not only surfaces such variations, but also hill-climbs towards them with user feedback, e.g. a user iteratively added the following progression of suggested tests, with model confidence for "positive" in parentheses: "This is not good (0)", "I didn't think the pizza was any good (0.28)", "I didn't think the Thai escargot was good (0.6)", "I didn't think the eggs were very good (0.94)".

Non-expert testing In order to evaluate if AdaTest helps non-experts find bugs, and how users' backgrounds impact the process, we recruited 24 participants equally divided between those who self-identify as progressive or conservative. These were all in the U.S., with a diverse range of ages and occupations, and no background in data science, programming, or ML. We asked users to test the Perspective API toxicity model for content moderation, as an example of an application that can impact the general public in group-specific ways. Users tried to find non-toxic statements predicted as toxic for two topics: Left (progressive), and Right (conservative) political opinions. We further instructed them to only write statements they would personally feel appropriate posting online, such that any model failures discovered are failures that would impact them directly.
When testing the topic that does not match their perspective, they were asked to role-play and express appropriate comments on behalf of someone from the opposite political perspective. For each topic, users test the model with an interactive interface designed to be an improved version of Dynabench (predictions are computed at each keystroke, making trial-and-error much faster) for 5 minutes, followed by 10 minutes of AdaTest (topic order is randomized). + +We present the results in Figure 6A, where we observe a $10\mathrm{x}$ increase in test failures per minute with AdaTest. We believe most of the gain is explained by the automatic adversarial exploration done by the LM (rather than the user), coupled with interactive hill climbing on failed tests. We recruited six additional participants to verify if the model failures for their political perspective are things they could see themselves appropriately posting online, and report the validation rate in Figure 6B. Participants had their tests validated by additional raters twice as often when they were writing tests reflecting their own political perspective (in-group vs out-group). + +These results indicate that non-experts with AdaTest are much more effective testers, even with minimal instruction and experience. The fact that users writing tests for another group resulted in a much poorer representation of that group indicates it might be important to find testers from different groups that could be impacted by a model. Since it is often not practical to find experts from every impacted group, empowering non-experts with a tool like AdaTest can be very valuable. + +# 3.2 Debugging Loop + +We evaluate the scenario where a user has found a bug (or set of bugs) and wants to fix it. As base models, we finetune RoBERTa-Large for duplicate question detection on the QQP dataset (Wang et al., 2019), and for 3-way sentiment analysis on the SST dataset (Socher et al., 2013). 
We rely on CheckList suites made available by Ribeiro et al. (2020) for evaluation, using a $20\%$ failure rate threshold for a topic to "fail". The base model fails 22 out of 53 QQP topics and 11 out of 39 Sentiment topics.

![](images/6218c630a58445aa29f96bb435818a3ffa9b7200e5e703e48304a9d59a1656ff.jpg)

![](images/0eb1765aac77ea8f9edd7b5dee3e4041c2de719c519540e0a80563822b1e726d.jpg)
Figure 6: (A) Non-experts found $10\mathrm{x}$ more model failures with AdaTest assistance. (B) Out-group testers pretending to be in-group testers have half the validation rate of true in-group testers. Error bars show the $10^{\mathrm{th}}$ and $90^{\mathrm{th}}$ percentiles of bootstrap re-samples.

We create data in order to "fix" a topic by either taking $n = 50$ examples from the topic's data in the CheckList condition, or starting from a seed of 5 examples and running the Debugging Loop with AdaTest until finding failures becomes qualitatively difficult (on average 2.83 rounds for QQP and 3.83 rounds for Sentiment), yielding an average of 41.6 tests for QQP and 55.8 tests for Sentiment. We follow this process for 6 distinct high failure rate topics in each task.

Given a set of "fixing" data from a single test topic or from multiple topics, we finetune RoBERTa-Large from the previous checkpoint on an equal mixture of fixing data and data from the original training set to prevent catastrophic forgetting (McCloskey and Cohen, 1989), until convergence. Ideally, we want to fix the original topic (and perhaps a few more which are also impacted by similar bugs) without adding new bugs, and thus we evaluate the "fixed" models by measuring how many topics in the original CheckList suite they "fix" or "break", i.e. move the error rate from greater than $20\%$ to lower than $20\%$, or vice versa.
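The fix-or-break bookkeeping can be made concrete with a small sketch. The 20% threshold is from the text above; the function and variable names are our own, not the authors' evaluation code:

```python
def topic_status(failures: int, total: int, threshold: float = 0.2) -> str:
    """A topic 'fails' when its error rate on the CheckList suite exceeds the threshold."""
    return "fail" if failures / total > threshold else "pass"

def fixed_or_broken(before: tuple, after: tuple) -> str:
    """Classify one topic given (failures, total) pre- and post-finetuning."""
    b, a = topic_status(*before), topic_status(*after)
    if b == "fail" and a == "pass":
        return "fixed"
    if b == "pass" and a == "fail":
        return "broken"
    return "unchanged"
```

For example, a topic whose error rate drops from 30% to 5% counts as fixed, while one that rises from 10% to 25% counts as broken.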
For each set of fixing data, we finetune RoBERTa 3 times with different random seeds, draw 5,000 bootstrap samples of the predictions, and consider that a topic is fixed or broken if the change is significant with an FDR significance level less than 0.05 (Benjamini and Hochberg, 1995).

We present the results in Figure 7, where we vary the number of topics used for training on the $x$ axis (for each tick, we sample 3 random topic

![](images/1199a54ef395b0fd500975ae0b8108e29f85e1b3cd062831082992300474998e.jpg)
Figure 7: In contrast to data augmentation with CheckList templates, the AdaTest Debugging Loop (Figure 3) fixes test topics without breaking other topics.
|  |  | Base | CheckList | AdaTest |
| --- | --- | --- | --- | --- |
| QQP | Validation | 91.9 | 91.0 | 91.1** |
|  | PAWS | 44.4 | 32.9 | 53.8** |
| Sent. | Validation | 76.8 | 76.3 | 75.8 |
|  | DynaSent R1 | 62.0 | 63.0* | 67.0** |
Table 1: Accuracy on validation and out of domain datasets, training on 6 topics. * and **: significant against baseline at $p = 0.05$ and 0.01 over 5000 bootstrap re-samples for 5 training seeds.

subset of size $x$ and average the results). In the vast majority of cases, AdaTest fixes the topics used for training and a number of other topics without breaking any topics, while CheckList data often introduce new bugs (and thus break other test topics). Part of this may be due to higher diversity in terms of sentence structure and length in the AdaTest generated data, as compared to a fixed CheckList template. However, models finetuned only on data from the first round of the Testing Loop (roughly equivalent to CheckList, but with more diversity) also tend to break other topics, which supports the importance of an iterative debugging loop. Qualitatively, we repeatedly observed the phenomenon illustrated in Figure 3, where the model initially uses oversimplified shortcuts to fix a set of tests, i.e. data from a single round often introduces non-obvious bugs that only get discovered and fixed in following rounds. For example, one of the topics for QQP is $f(\text{“more X”}, \text{“less antonym(X)”}) = \text{dup}_1$, with examples like ("How do I become more patient", "How do I become less irritable"). Ribeiro et al. (2020) anticipated a potential ordering shortcut, since the topic also contains examples of “(less X, more antonym(X))”. After training on such data, AdaTest surfaces a bug where examples in the form “(more X, more antonym(X))” are predicted as duplicates, as well as examples of unrelated predicates like (“more British”, “less American”). None of the topics in the suite capture these exact behaviors, but similar shortcuts break topics that are present, such as $f(\text{“more X”}, \text{“less X”}) \neq \text{dup}_1$. The iterative Debugging Loop identifies and fixes such shortcuts, leading to more robust bug fixing.
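The topic-level fix/break decision uses bootstrap re-samples with Benjamini–Hochberg FDR control at 0.05. A sketch of that machinery follows; the pooled-resampling null in `bootstrap_p_value` is our simplification, not the paper's exact bootstrap over model predictions:

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg procedure: boolean mask of hypotheses rejected
    while controlling the false discovery rate at level alpha."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # Largest k with p_(k) <= (k/m) * alpha; reject hypotheses 1..k.
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.nonzero(below)[0].max())
        reject[order[: k + 1]] = True
    return reject

def bootstrap_p_value(before_fail, after_fail, n_boot=5000, seed=0):
    """Two-sided bootstrap p-value for a change in a topic's failure rate
    (0/1 per-test outcomes before vs. after a 'fix')."""
    rng = np.random.default_rng(seed)
    before = np.asarray(before_fail)
    after = np.asarray(after_fail)
    observed = after.mean() - before.mean()
    pooled = np.concatenate([before, after])  # null: no change in rate
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        diffs[i] = (rng.choice(pooled, size=len(after)).mean()
                    - rng.choice(pooled, size=len(before)).mean())
    return float((np.abs(diffs) >= abs(observed)).mean())
```

A topic counts as "fixed" or "broken" only when its p-value survives the BH correction across all topics tested.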
We evaluate accuracy on the validation dataset and on challenging out of domain datasets (Zhang et al., 2019; Potts et al., 2021) after training on all 6 topics (Table 1). In both tasks, AdaTest augmentation has a negligible or non-significant impact on in-domain accuracy, and improves performance on out of domain data. While AdaTest may have introduced new bugs not caught by the CheckList test suite or these additional test sets, the improved performance on all of these indicates that the Debugging Loop is not fixing bugs at the expense of significantly degrading performance elsewhere. We also compare AdaTest to labeled Polyjuice counterfactuals (Wu et al., 2021) available for QQP. Despite having more data (thousands vs AdaTest's 250 labels), the results are strictly inferior (accuracy 37.8 on PAWS, fixed 2 topics and broke 1, while AdaTest fixes 11 and breaks none).

# 3.3 Case Studies

Non-expert testing of non-classification models In order to evaluate if AdaTest would help non-experts test models for more complex tasks, we recruited a bilingual speaker with no technical background, and asked them to test a translation system and an NER system commercialized by a large software company (and thus subject to extensive prior testing and validation). Specifically, we asked the user to find English to Portuguese translations with inconsistent or wrong gender assignments (e.g. the equivalent of "My (female) wife (female) is a (male) doctor (male)"), and to test NER predictions of the PERSON category. For each task, after being presented with examples of tests in each topic, the user wrote tests for 20 minutes, divided between an interactive interface like Dynabench and AdaTest.

Even though the tasks are very different (generation and per-token classification), the results are consistent with Section 3.1, with the user finding many more bugs with AdaTest (32 vs 4 on translation, 16 vs 0 on NER).
Qualitatively, adaptive test suggestions helped the user find bugs covering a much wider range of phenomena than all of the attempts without assistance. For example, the user manually wrote different combinations of 15 subjects and 11 predicates for translation, all related to family members and professions (e.g. "My mom is a doctor"). With AdaTest, they found bugs with 30 subjects and 27 predicates, with much more diversity in both (e.g. "The woman with the red dress is my best friend"). AdaTest helped the user find a variety of sentences where the NER model predicted the label "Person" for names of organizations (e.g. "What I do for Black Booty is provide financial advice"), products (e.g. "I think Alikat is a good form of cash money"), and animals (e.g. "Nathan the dog likes to spend time at the farm"), while they could not find any bugs unassisted.

Text to video matching To gauge the usefulness of AdaTest for established model development and maintenance pipelines, we shared AdaTest with an ML development team in charge of a multi-modal classifier that matches textual inputs with a database of videos. While their production model had gone through several external red-teaming reviews, a single short (unaided) AdaTest session revealed novel gender bias and related issues that were then fed back into their custom mitigation pipeline. The team reported that being able to quickly generate diverse model-targeted tests, while at the same time creating a suite of tests for future model versions, was extremely valuable, and they have since sought to develop adaptive test trees for their whole suite of production models.

Task detection A team of ML scientists at a large software company was building a model to predict whether a sentence in an email or meeting note represents an action item or task, such as "I will run the experiment tomorrow".
Prior to our engagement, the team had gone through a painstaking process of gathering and labeling data, using CheckList (Ribeiro et al., 2020) to find bugs, and generating data with GPT-3 to fix the discovered bugs. The team was thus well versed in testing, and had been trying to accomplish the same goals that AdaTest is built for, using the same exact LM. + +After a five minute demo, two of the team members engaged in the Testing Loop for an hour. In this short session, they found many previously + +
|  | Random | Baseline | GPT-3 aug | AdaTest |
| --- | --- | --- | --- | --- |
| Task dataset 1 | 10.0** | 51.4 | 65.6** | 77.3** |
| Task dataset 2 | 18.1** | 54.4 | 66.0** | 76.5** |
Table 2: F1 score on two hidden task datasets. Low random performance is due to class imbalance. * and ** represent significance at $p = 0.05$ and 0.01 over 5000 bootstrap re-samples for 5 training seeds.

unknown bugs, with various topics they hadn't thought about testing (e.g. "While X, task", as in "While we wait for the manufacturer, let's build a slide deck"), and some they had tested and (incorrectly) thought they had fixed (e.g. false positives related to waiting, such as "John will wait for the decision" or "Let's put a pin on it"). When testing name invariances with CheckList they hadn't included personal pronouns (e.g. "Karen will implement the feature" = "I will implement the feature"), which AdaTest revealed the model fails on.

One team member ran the Debugging Loop for approximately 3 hours, fixing bugs with the same procedure as in Section 3.2. Consistent with the previous results, they found that fixing bugs initially led to new bugs being introduced, e.g. fixing false negatives on passive statements ("the experiment will be run next week") led to false positives on non-task factual descriptors ("the event will be attended by the dean"), which were surfaced by AdaTest and fixed in the next round. In order to compare the results of using AdaTest to their previous efforts, we collected and labeled two new datasets from sources they hadn't used as training data. We present the F1 scores of models augmented either with their GPT-3 generated data or with AdaTest data in Table 2, where AdaTest shows significant improvement despite involving much less effort. Qualitatively, the team noted that finding bugs with AdaTest was much easier than with CheckList, by virtue of the extensive suggestions made by the LM. Similarly, after noticing (and fixing) potential shortcuts in multiple rounds of the Debugging Loop, the team realized that their prior GPT-3 augmentation was almost certainly liable to such shortcuts, and thus less effective.
# 3.4 Discussion

We evaluated AdaTest on 8 different tasks spanning text classification, generation, and per-token prediction. In terms of finding bugs, we compare AdaTest to experts using CheckList and non-experts using a more responsive version of Dynabench. Users consistently found many more bugs per minute with AdaTest on research models and commercial models at different development stages (early version, pre-release, and mature models in production). The fact that AdaTest requires minimal training and is easy enough to be used by users without any technical background is an asset, especially when it is important to have testers that represent diverse groups that may be negatively impacted by bugs. In terms of fixing bugs, we compared the Debugging Loop to naively augmenting data with CheckList templates, using Polyjuice counterfactuals, and having an expert use GPT-3 to create additional data. In every case, AdaTest improved performance more than alternatives, and crucially did not add new bugs that degrade performance on available measurements, due to the iterative nature of the Debugging Loop. In contrast to alternatives, further testing with AdaTest is low-cost, and thus this augmentation does not have the effect of invalidating costly evaluation data (e.g. invalidating CheckList tests that are laborious to create). In fact, test trees from previous sessions can be used to test new models, or to bootstrap a new AdaTest session.

# 4 Related Work

Even though we used CheckList and Dynabench as baselines in the previous section, our results indicate that these and other approaches (Gardner et al., 2020; Kaushik et al., 2019) where human creativity and effort are bottlenecks (Bhatt et al., 2021) would benefit from the greatly enhanced bug discovery productivity made possible by AdaTest.
On the other hand, CheckList as a framework provides great guidance in organizing the test tree, enumerating important capabilities and perturbations to be tested, as well as a tool for systematically applying the test tree to future models. Similarly, Dynabench provides model serving capabilities and a crowdsourcing platform that would greatly enhance AdaTest, especially as users share test trees and adapt them to new models.

In terms of fixing bugs, fully automatic data augmentation with LMs (Yoo et al., 2021; Wang et al., 2021) cannot incorporate human "specification" beyond already existing data, nor debug phenomena that are very far from the existing data. On the other hand, general purpose or contrastive counterfactuals have shown mixed or marginally positive results (Huang et al., 2020; Wu et al., 2021) similar to what we observed in Section 3.2, except when large quantities of data are gathered (Nie et al., 2020). Our hypothesis is that underspecification (D'Amour et al., 2020) is a major factor limiting the benefit of many counterfactual augmentation techniques. We observed that the first rounds of the Debugging Loop often decrease or maintain overall performance until additional data from later rounds specifies the correct behavior more thoroughly, which indicates that counterfactual data targeted precisely where the model is underspecified is often more effective than non-targeted data. If true, this hypothesis argues for AdaTest's fast iteration in the Debugging Loop, rather than longer cycles (e.g. Dynabench rounds can take months).

# 5 Conclusion

AdaTest encourages a close collaboration between a human and a language model, yielding the benefits of both. The user provides specification that the LM lacks, while the LM provides creativity at a scale that is infeasible for the user. AdaTest offers significant productivity gains for expert users, while also remaining simple enough to empower diverse groups of non-experts.
The Debugging Loop connects model testing and debugging to effectively fix bugs, taking model development a step closer towards the iterative nature of traditional software development. We have demonstrated AdaTest's effectiveness on classification models (sentiment analysis, QQP, toxicity, media selection, task detection), generation models (GPT-2, translation), and per-token models (NER), with models ranging from well-tested production systems to brand new applications. Our results indicate that adaptive testing and debugging can serve as an effective NLP development paradigm for a broad range of applications. To help support this, AdaTest (with various test trees) is open sourced at https://github.com/microsoft/adatest.

# Acknowledgements

We thank Adarsh Jeewajee, Carlos Guestrin, Ece Kamar, Fereshte Khani, Gregory Plumb, Gabriel Ilharco, Harsha Nori, Sameer Singh, and Shikhar Murty for helpful discussions and feedback. We also thank Bruno Melo, Hamid Palangi, Ji Li, and Remmelt Ammerlaan for pilot testing/case studies. Finally, we thank Tongshuang Wu for all of the above and helping us think about figures, checking translations, offering LaTeX advice, and other miscellaneous help.

# References

Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In International Conference on Learning Representations.
Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57(1):289-300.
Shaily Bhatt, Rahul Jain, Sandipan Dandapat, and Sunayana Sitaram. 2021. A case study of efficacy and challenges in practical human-in-loop evaluation of NLP systems using CheckList. In Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval), pages 120-130, Online. Association for Computational Linguistics.
+Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc. +Alexander D'Amour, Katherine Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D Hoffman, et al. 2020. Underspecification presents challenges for credibility in modern machine learning. arXiv preprint arXiv:2011.03395. +Matt Gardner, Yoav Artzi, Victoria Basmov, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou. 2020. Evaluating models' local decision boundaries via contrast sets. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1307-1323, Online. Association for Computational Linguistics. +W Keith Hastings. 1970. Monte carlo sampling methods using markov chains and their applications. +William Huang, Haokun Liu, and Samuel R. Bowman. 2020. Counterfactually-augmented SNLI training data does not yield better generalization than unaugmented data. In Proceedings of the First Workshop + +on Insights from Negative Results in NLP, pages 82-87, Online. Association for Computational Linguistics. +Daniel Kahneman. 2011. Thinking, fast and slow. Macmillan. 
+Divyansh Kaushik, Eduard Hovy, and Zachary Lipton. 2019. Learning the difference that makes a difference with counterfactually-augmented data. In International Conference on Learning Representations. +Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, and Adina Williams. 2021. Dynabench: Rethinking benchmarking in NLP. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4110-4124, Online. Association for Computational Linguistics. +Todd Kulesza, Saleema Amershi, Rich Caruana, Danyel Fisher, and Denis Charles. 2014. Structured labeling for facilitating concept evolution in machine learning. In Proceedings of the Conference on Human Factors in Computing Systems (CHI 2014). ACM. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. +Michael McCloskey and Neal J Cohen. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of learning and motivation, volume 24, pages 109-165. Elsevier. +Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial nli: A new benchmark for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4885-4901. +Christopher Potts, Zhengxuan Wu, Atticus Geiger, and Douwe Kiela. 2021. DynaSent: A dynamic benchmark for sentiment analysis. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2388-2404, Online. Association for Computational Linguistics. +Vinodkumar Prabhakaran, Ben Hutchinson, and Margaret Mitchell. 2019. Perturbation sensitivity analysis to detect unintended model biases. In Proceedings of the 2019 Conference on Empirical Methods + +in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5740-5745, Hong Kong, China. Association for Computational Linguistics. +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. +Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging nlp models. In Association for Computational Linguistics (ACL). +Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond Accuracy: Behavioral Testing of NLP models with CheckList. In *Association for Computational Linguistics (ACL)*. +Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics. +Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations. +Shuohang Wang, Yang Liu, Yichong Xu, Chenguang Zhu, and Michael Zeng. 2021. Want to reduce labeling cost? GPT-3 can help. 
In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 4195–4205, Punta Cana, Dominican Republic. Association for Computational Linguistics. +Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics. +Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, and Daniel Weld. 2019. Errudite: Scalable, reproducible, and testable error analysis. In Association for Computational Linguistics (ACL). +Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, and Daniel S. Weld. 2021. Polyjuice: Generating counterfactuals for explaining, evaluating, and improving models. In Proceedings of the 59th Annual + +Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. +Kang Min Yoo, Dongju Park, Jaewook Kang, Sang-Woo Lee, and Woomyoung Park. 2021. GPT3Mix: Leveraging large-scale language models for text augmentation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2225-2239, Punta Cana, Dominican Republic. Association for Computational Linguistics. +Yuan Zhang, Jason Baldridge, and Luheng He. 2019. PAWS: Paraphrase adversaries from word scrambling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1298-1308, Minneapolis, Minnesota. Association for Computational Linguistics. +Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. 
Calibrate before use: Improving few-shot performance of language models. In International Conference on Machine Learning. + +# A Language model prompt design + +The test suggestion function inside the AdaTest Testing Loop (main text Figure 1) is implemented using a large-scale generative LM. We used GPT-3 (Brown et al., 2020) in our experiments, but we also support open source HuggingFace models (Wolf et al., 2020). When provided with a prompt in the form of a list of items, these large LMs can generate new items that continue the list, and come from the same distribution of items as the original list. By carefully controlling the structure and content of this list, we can steer large LMs to generate new content on nearly any topic in nearly any form (exceptions being very long-form text, and languages unseen by the LM during training). + +There is always a current focus topic active during the Testing Loop, and it is the goal of the LM test suggestion process to generate new tests that will be categorized by the user as direct children of the focus topic. This means we are not interested in tests outside the focus topic or inside already-defined subtopics of the focus topic. We avoid tests outside the topic in order to maintain a "focus" on the current topic the user has selected, and we avoid tests inside subtopics because these represent portions of the current topic that have already been well explored, and so should be prevented from dominating the test suggestions. If the user is interested in a particular subtopic, they simply open it and generate suggestions specific to that topic. In addition to allowing users to guide the LM, focus + +topics also improve the quality of the LM's suggestions, since LMs tend to generate higher quality tests when restricted to a narrower scope. 
Topics also enable zero-shot LM test generation for empty topics, since we can condition on the topic when generating a test and so use examples from related topics as demonstrations for the current topic.

The LM prompt itself consists of several tests (7 by default) selected from the current focus topic (or from nearby topics if the current topic is empty). A test is written into the prompt as a topic, followed by a space-separated list of values on the next line (see Figure 8). Prompt parameters are configurable, but we found that 7 examples gave an appropriate amount of steering information to GPT-3 (for both the Davinci and Curie models) without giving so many examples that strong patterns would harm the diversity of the generated tests. We experimented with a variety of prompt formats, including priming with "instruction" sentences, and found that the more minimal the notation the better, so as to bias the generation process as little as possible. We also remove as much information from the prompt as possible to further focus and de-bias the LM. For example, we do not include expected outputs if they are the same for all the tests in the prompt, and similarly we only include topic information when using tests from outside the current focus topic. We also repeatedly generate a single next list item, rather than generating several items in a list. This is because generating a long list usually reduces diversity, as generated items tend to converge to a single topic.

Given a prompt structure and a set of tests in the current topic, steering the test suggestion generation comes down to choosing a set of tests to include in the LM prompt. We do this by scoring all tests as the product of several factors, then selecting the highest scoring test and adding it to the prompt list. This process is iterated until a sufficient number of tests have been selected to be included in the prompt.
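The minimal prompt format just described (quoted values, with shared outputs and in-focus topic lines stripped away) might be rendered as in this sketch; the test dictionaries and function name are our assumptions, not AdaTest's actual API:

```python
def build_prompt(tests, focus_topic):
    """Render selected tests as a minimal LM prompt (illustrative sketch).

    Each test is a dict like {"topic": "/Negation/Negated positive",
    "text": "...", "output": "positive"}.  Per the paper, the expected
    output is omitted when every test shares it, and the topic line is
    included only for tests drawn from outside the focus topic.
    """
    shared_output = len({t["output"] for t in tests}) == 1
    lines = []
    for t in tests:
        if t["topic"] != focus_topic:    # topic line only when informative
            lines.append(t["topic"])
        line = f'"{t["text"]}"'
        if not shared_output:            # output only when it varies
            line += f' "{t["output"]}"'
        lines.append(line)
    return "\n".join(lines) + "\n"
```

The LM is then asked repeatedly for a single next item of the resulting prompt list, each completion parsed back into a candidate test.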
This list is then reversed prior to sampling from the LM, because the LM weights samples close to the end of the prompt more strongly (Zhao et al., 2021). The factors we use for test selection are:

- Test failure score - Tests with higher scores are tests that the model fails or is closer to failing than tests with lower scores. So the strongest ranking factor we use (other than topic membership) is high test failure score, since this facilitates hill climbing towards model failures.

- Topic membership - Tests outside the current topic are very strongly penalized and are only used if the current topic is empty or nearly empty. Tests inside subtopics of the current topic are also strongly penalized for the reasons mentioned above (that these represent already explored regions of the topic).

/Tests/Negation/Negated positive
"I really wanted to like this, but I did not." "positive"
/Tests/Negation/Negated positive
"What seemed good was not good in reality." "positive"
/Tests/Negation/Negated positive
"I thought this was great, but it was not" "positive"
/Tests/Negation/Negated positive
"We were hopeful, but disappointed." "positive"
/Tests/Negation/Negated positive
"I expected so much, but got nothing good." "positive"
/Tests/Negation/Negated positive
"I expected to love this, but I did not." "positive"
/Tests/Negation/Negated positive
"I wanted to love this, but I didn't" "positive"
/Tests/Negation/Negated positive
"This movie was not as good as I expected." "positive"

Figure 8: A sample prompt and LM completion for the /Negation/Negated positive topic from Figure 9. The red text is written by the LM, while the black text is given as the prompt. Note that all these tests are of the type "{} should not output {}". For this topic the output and the topic are the same for all the examples in the prompt, so in AdaTest they would be removed (all the grayed out text), leaving just a list of quoted strings.
- Score randomization - Test failure scores can be computed in many different ways, but they are often continuous values that represent how close a model's prediction is to failing a test (or how far it is past the failure threshold). Tests with very similar scores have an equally likely chance of being good for prompt inclusion (since they each can lead the LM towards high-scoring on-topic tests). To encourage diverse choices among similar scoring tests we add one standard deviation of random Gaussian noise to the test scores.

- Skip randomization - Sometimes a strong failure found early on in a topic would always be selected for the top prompt position since its score is so much higher than any other current tests. However, this can harm diversity, so we also introduce skip randomization, where we randomly skip over tests (by penalizing their score) with $25\%$ probability.

- Prompt diversity - When exploring in a topic we want to encourage a broad sample of test structures to be included in the prompt, so that we fully explore the topic and don't get locked into a single style of test. To promote this, we penalize each test score by the cosine similarity of that test's embedding to the closest embedding of a test that has already been selected for inclusion in the prompt. By default we use RoBERTa-base (Liu et al., 2019) for this, though any similarity embedding would work.

![](images/b5e4b7e5eec428212a5b745a4f4d517aa7757ce12416dfb941fa93a05e463f73.jpg)
Figure 9: A screenshot of the AdaTest interface at the root of a sentiment analysis test tree based on CheckList capabilities. The test failure scores for all tests in a topic are shown as vertical lines to the right of the topic (colored red if the test is failing), and the average score of the tests in a topic is shown as a gray bar. In this session we are scoring against two models simultaneously, though we are only adapting to the Azure model and so any Google failures are direct transfers.
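The selection factors above can be combined into a per-test score, as in the following sketch. The paper combines factors as a product; the additive penalties, their magnitudes, and the data layout here are our illustrative assumptions, not AdaTest's actual code:

```python
import numpy as np

def selection_scores(tests, selected_embs, focus_topic, rng):
    """Score candidate tests for prompt inclusion (illustrative sketch).

    tests: dicts with "score" (test failure score), "topic", and "emb"
    (a unit-norm embedding, e.g. from RoBERTa-base).
    selected_embs: embeddings of tests already chosen for the prompt.
    """
    failure = np.array([t["score"] for t in tests], dtype=float)
    noise_sd = failure.std()                 # one sd of score noise
    scores = []
    for t, s in zip(tests, failure):
        if not t["topic"].startswith(focus_topic):
            s -= 1e6                         # out-of-topic: very strong penalty
        elif t["topic"] != focus_topic:
            s -= 1e3                         # subtopic: already-explored region
        s += rng.normal(0.0, noise_sd)       # score randomization
        if rng.random() < 0.25:              # skip randomization
            s -= 1e6
        if selected_embs:                    # prompt diversity penalty
            s -= max(float(np.dot(t["emb"], e)) for e in selected_embs)
        scores.append(s)
    return scores
```

Repeatedly taking the argmax and appending the winning test's embedding to `selected_embs` builds up the 7-test prompt list.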
We repeat the test selection process $r$ times to create $r$ different prompts (where we maximize $r$ subject to not causing more than a $50\%$ increase in computational overhead due to lost prompt reuse during completions). If the user has requested $K$ suggestions for a round, then for each prompt we ask the LM to generate $\lfloor K / r \rfloor$ completions that are parsed to produce at most that many tests (at most, since some completions may produce invalid or duplicate tests). These tests are then applied to the target model (or several models, since we can explore multiple models in parallel), sorted by test failure score, and returned to the user for filtering and organization.

# B User interface

The entire Testing Loop process occurs through AdaTest's interactive web interface, which works both as a standalone server and inside a Jupyter notebook. Figure 9 shows a screenshot of this interface, browsing the top node of a test tree targeting the Azure sentiment analysis model (Google's model is also being scored, but is not adaptively targeted). While we experimented with interfaces that present the entire test tree to the user at once, these became intractable for larger test trees. Thus, we follow traditional file system browsers, which scale well to very large and deep trees.

![](images/5699494b501850d5057b5c5b08404f4b7385b91bec31dc596da4b16ee07928aa.jpg)
Figure 10: A screenshot of the AdaTest interface inside the /Negation/Negated positive topic after LM suggestions have been requested. Note that AdaTest is adversarially targeting failures in the Azure model, so the suggestions tend to find more Azure failures than Google failures.

On the left side of Figure 9 is a list of topics based on CheckList capabilities (Ribeiro et al., 2020). These are top-level topics, some of which are well explored with many subtopics (e.g. /Fairness), while others have yet to be explored by the user (such as /Logic).
To enable users to organize the test tree, topics can be edited, opened, and dragged and dropped just like in a standard file viewer.

On the right side of Figure 9 there are two columns representing the test failure scores for two target models, Azure and Google sentiment analysis. The horizontal position of the colored bars represents the value of a single test's score and the color denotes passing or failing. Since each bar represents a single test inside a topic, hovering the mouse over the bar will show the associated test. Hovering anywhere over a row also shows the number of failing and passing tests for the topic (the total counts for the current topic are shown at the bottom). Note that topics are sorted by the largest test fail score they contain. The grey box above the test topics is where LM test suggestions are shown. If the user clicked the suggestions button in Figure 9, they would get a list of suggested tests designed to not fall into any of the current topics. This is very challenging at such a high level of abstraction, so the precision of these suggestions might be low, but finding such tests is often still possible given enough iteration. Once a few such tests are found, a new top level topic can be formed and explored. An alternative to this process (which tends to work better for high level concepts) is to ask AdaTest to suggest new topic names (done the same way we suggest new tests). Given a starting test tree, users can potentially fill out whole new sub-trees without ever writing anything manually by alternating between topic suggestions and zero-shot test suggestions for new topics. In general, the precision of the test suggestion process increases as the topics grow narrower, so expanding subtopics will likely be much easier than expanding the parent topic.
To jump-start this process, users can always manually add tests or topics by clicking the respective add buttons at the top right, or by editing a current test (scores are recomputed in real-time). + +Figure 10 shows what happens after we navigate down the topic tree into the /Negation/Negated positive topic, and then request LM suggestions. Current tests inside the topic are shown at the bottom sorted by their test failure score for the Azure model (and continue on past the screen capture) while test suggestions are shown in the gray box at the top. The test suggestions box is scrollable and contains $\sim 100$ suggested tests (also sorted by their test failure score for the Azure model). + +The selected test suggestion in Figure 10 is highlighted and the test failure scores are shown for both models. The highlighted test is a valid high scoring test that falls within the /Negation/Negated positive topic, so the user can add it to the current topic in one of several ways: dragging it down to the list of in-topic tests, clicking the "plus" button on the left of the test row, hitting Enter, etc. Note that the test directly below the selected test is also high scoring on the Azure model, but the test is invalid since the input text actually does express a positive sentiment, so the expectation of the test is incorrect.
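The suggestion-generation scheme from Appendix A (create $r$ prompts, ask for $\lfloor K/r \rfloor$ completions each, and discard invalid or duplicate tests) can be sketched as follows. This is a minimal illustrative sketch with hypothetical helper names and a toy stand-in LM, not AdaTest's actual API:

```python
def allocate_suggestions(k_requested, num_prompts):
    """Split a round's K requested suggestions evenly across r prompts:
    each prompt gets floor(K / r) completions."""
    return [k_requested // num_prompts] * num_prompts

def collect_tests(prompts, generate, k_requested):
    """Generate completions per prompt, then drop invalid (empty) and
    duplicate tests. `generate(prompt, n)` stands in for the LM call."""
    seen, tests = set(), []
    for prompt, n in zip(prompts, allocate_suggestions(k_requested, len(prompts))):
        for completion in generate(prompt, n):
            test = completion.strip()
            if test and test not in seen:  # keep only valid, novel tests
                seen.add(test)
                tests.append(test)
    return tests

# Toy stand-in LM: echoes numbered variants of the prompt plus one invalid
# (empty) completion, mimicking completions that fail to parse.
fake_lm = lambda prompt, n: [f"{prompt} #{i}" for i in range(n)] + [""]
suggestions = collect_tests(["p1", "p2", "p3"], fake_lm, 10)
# Each of the 3 prompts contributes floor(10/3) = 3 valid tests.
```

In the real system the surviving tests would then be scored against the target model(s) and sorted by test failure score before display.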
\ No newline at end of file diff --git a/adaptivetestinganddebuggingofnlpmodels/images.zip b/adaptivetestinganddebuggingofnlpmodels/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..94b4dca330f70aee13f70a457b58d486c513d9c7 --- /dev/null +++ b/adaptivetestinganddebuggingofnlpmodels/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e03422c584a17de302ed49c33944c30779a52ee7d6f3440af2c6f54696df6dc4 +size 450267 diff --git a/adaptivetestinganddebuggingofnlpmodels/layout.json b/adaptivetestinganddebuggingofnlpmodels/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..dc89a999f6ff8131b2c54b74fb9cca7892832175 --- /dev/null +++ b/adaptivetestinganddebuggingofnlpmodels/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f8ecb26f4b358cbd68600b892029284c6c91544b0231b2dc9b25eff9616a7bb3 +size 341456 diff --git a/adversarialauthorshipattributionfordeobfuscation/211dd7ed-1540-4cf1-ac97-09f08f0de357_content_list.json b/adversarialauthorshipattributionfordeobfuscation/211dd7ed-1540-4cf1-ac97-09f08f0de357_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..0efb94948f8456ce3ac976531db994f00ee905f5 --- /dev/null +++ b/adversarialauthorshipattributionfordeobfuscation/211dd7ed-1540-4cf1-ac97-09f08f0de357_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:29bbeaf847498a1d017dd0203913d97f4eb86b151a5016d39c9b85f50dd4723a +size 77000 diff --git a/adversarialauthorshipattributionfordeobfuscation/211dd7ed-1540-4cf1-ac97-09f08f0de357_model.json b/adversarialauthorshipattributionfordeobfuscation/211dd7ed-1540-4cf1-ac97-09f08f0de357_model.json new file mode 100644 index 0000000000000000000000000000000000000000..3372f5d494d858d829c8e9a89a2c3726b8dcdece --- /dev/null +++ b/adversarialauthorshipattributionfordeobfuscation/211dd7ed-1540-4cf1-ac97-09f08f0de357_model.json @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:a7cc89249310abe9400b1d87ec37e5d9f859fd973ee3fc55d44ad9fb3815cd80 +size 96935 diff --git a/adversarialauthorshipattributionfordeobfuscation/211dd7ed-1540-4cf1-ac97-09f08f0de357_origin.pdf b/adversarialauthorshipattributionfordeobfuscation/211dd7ed-1540-4cf1-ac97-09f08f0de357_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e8baadc86d2ef0ac2aea099aa84428604379e04e --- /dev/null +++ b/adversarialauthorshipattributionfordeobfuscation/211dd7ed-1540-4cf1-ac97-09f08f0de357_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:22c3e896774dae1db74d834ed72dd90c652999cd882da7fa6f26185ca6263b2b +size 792373 diff --git a/adversarialauthorshipattributionfordeobfuscation/full.md b/adversarialauthorshipattributionfordeobfuscation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a9bbae79f25abb28f0ffd9c709e74a76eaf8c103 --- /dev/null +++ b/adversarialauthorshipattributionfordeobfuscation/full.md @@ -0,0 +1,302 @@ +# A Girl Has A Name, And It's ...* Adversarial Authorship Attribution for Deobfuscation† + +Wanyue Zhai +University of California, Davis + +Jonathan Rusert +University of Iowa + +Zubair Shafiq +University of California, Davis + +Padmini Srinivasan +University of Iowa + +# Abstract + +Recent advances in natural language processing have enabled powerful privacy-invasive authorship attribution. To counter authorship attribution, researchers have proposed a variety of rule-based and learning-based text obfuscation approaches. However, existing authorship obfuscation approaches do not consider the adversarial threat model. Specifically, they are not evaluated against adversially trained authorship attributors that are aware of potential obfuscation. To fill this gap, we investigate the problem of adversarial authorship attribution for deobfuscation. 
We show that adversarially trained authorship attributors are able to degrade the effectiveness of existing obfuscators from $20 - 30\%$ to $5 - 10\%$ . We also evaluate the effectiveness of adversarial training when the attributor makes incorrect assumptions about whether and which obfuscator was used. While there is a clear degradation in attribution accuracy, it is noteworthy that the degraded accuracy is still at or above the attribution accuracy of an attributor that is not adversarially trained at all. Our results underline the need for stronger obfuscation approaches that are resistant to deobfuscation. + +# 1 Introduction + +Recent advances in natural language processing have enabled powerful attribution systems1 that are capable of inferring author identity by analyzing text style alone (Abbasi and Chen, 2008; Narayanan et al., 2012; Overdorf and Greenstadt, 2016; Stolerman et al., 2013; Ruder et al., 2016). There have been several recent attempts to attribute the authorship of anonymously published text using + +such advanced authorship attribution approaches.2 This poses a serious threat to privacy-conscious individuals, especially human rights activists and journalists who seek anonymity for safety. + +Researchers have started to explore text obfuscation as a countermeasure to evade privacy-invasive authorship attribution. Anonymouth (McDonald et al., 2012; Brennan et al., 2012) was proposed to identify words or phrases that are most revealing of author identity so that these could be manually changed by users seeking anonymity. Since it can be challenging for users to manually make such changes, follow-up work proposed rule-based text obfuscators that can automatically manipulate certain text features (e.g., spellings or synonyms) (McDonald et al., 2013; Almishari et al., 2014; Keswani et al., 2016; Karadzhov et al., 2017; Castro-Castro et al., 2017; Mansoorizadeh et al., 2016; Kacmarcik and Gamon, 2006; Kingma and Welling, 2018).
Since then more sophisticated learning-based text obfuscators have been proposed that automatically manipulate text to evade state-of-the-art authorship attribution approaches (Karadzhov et al., 2017; Shetty et al., 2018; Li et al., 2018; Mahmood et al., 2019; Gröndahl and Asokan, 2020). + +In the arms race between authorship attribution and authorship obfuscation, it is important that both attribution and obfuscation consider the adversarial threat model (Potthast et al., 2018). While recent work has focused on developing authorship obfuscators that can evade state-of-the-art authorship attribution approaches, there is little work on developing authorship attribution approaches that can work against state-of-the-art authorship obfuscators. Existing authorship attributors are primarily designed for the non-adversarial threat model and only evaluated against non-obfuscated documents. Thus, it is not surprising that they can be readily evaded by state-of-the-art authorship obfuscators + +(Karadzhov et al., 2017; Shetty et al., 2018; Li et al., 2018; Mahmood et al., 2019; Gröndahl and Asokan, 2020). + +To fill this gap, we investigate the problem of authorship deobfuscation where the goal is to develop adversarial authorship attribution approaches that are able to attribute obfuscated documents. We study the problem of adversarial authorship attribution in the following two settings. First, we develop attributors that filter obfuscated documents using obfuscation/obfuscator detectors and then use an authorship attributor that is adversarially trained on obfuscated documents. Second, we develop adversarially trained authorship attributors that do not make assumptions about whether and which authorship obfuscator is used. + +The results show that our authorship deobfuscation approaches are able to significantly reduce the adverse impact of obfuscation, which otherwise degrades attribution accuracy by up to $20 - 30\%$ .
We find that an authorship attributor that is purpose-built for obfuscated documents is able to improve attribution accuracy to within $5\%$ of its accuracy without obfuscation. We also find that an adversarially trained authorship attributor is able to improve attribution accuracy to within $10\%$ of its accuracy without obfuscation. Additionally, we evaluate the effectiveness of adversarial training when the attributor makes incorrect assumptions about whether and which obfuscator is used. We find that these erroneous assumptions degrade accuracy by up to $20\%$ ; however, this degradation is the same as or smaller than that of an attributor that is not adversarially trained at all, whose accuracy can degrade by up to $32\%$ . + +Our key contributions include: + +- investigating the novel problem of adversarial authorship attribution for deobfuscation; +- proposing approaches for adversarial authorship attribution; and +- evaluating robustness of existing authorship obfuscators against adversarial attribution. + +Ethics Statement: We acknowledge that authorship deobfuscation in itself is detrimental to privacy. Our goal is to highlight a major limitation of prior work on authorship obfuscation under the adversarial threat model. We expect our work to foster further research into new authorship obfuscation approaches that are resistant to deobfuscation. + +# 2 Related Work + +Authorship attribution is the task of identifying the correct author of a document given a range of possible authors. It has been a long-standing topic, and researchers have developed a wide range of solutions to the problem. Earlier researchers focused more on analysis based on writing-style features. These include the distribution of word counts and basic Bayesian methods (Mosteller and Wallace, 1963), different types of writing-style features (lexical, syntactic, structural, and content-specific) (Zheng et al., 2006), and authors' choices of synonyms (Clark and Hannon, 2007).
Other researchers combined machine learning and deep learning methods with stylometric features. Abbasi and Chen (2008) combine their rich feature set, "Writeprints", with an SVM. Brennan et al. (2012) improve "Writeprints" to reduce the computational load required of the feature set. Finally, more recent research focuses on fine-tuning pretrained models since they do not require predefined feature sets. Ruder et al. (2016) tackle authorship attribution with a CNN, while Howard and Ruder (2018) introduce the Universal Language Model Fine-tuning (ULMFiT) which shows strong performance in attribution. + +To the best of our knowledge, prior work lacks approaches for adversarial authorship deobfuscation. Prior work has shown that existing authorship attributors do not perform well against obfuscators. Brennan et al. (2012) present a manual obfuscation experiment which causes large accuracy degradation. Since this obfuscation experiment, much has been done in the area of authorship text obfuscation (Rao and Rohatgi, 2000; Brennan et al., 2012; McDonald et al., 2012, 2013; Karadzhov et al., 2017; Castro et al., 2017; Mahmood et al., 2019; Gröndahl and Asokan, 2020; Bo et al., 2019). We focus specifically on the state-of-the-art obfuscators Mutant-X (Mahmood et al., 2019) and DS-PAN (Castro et al., 2017) in our research. Other obfuscation methods are similarly vulnerable to adversarial training, as reinforced by Gröndahl and Asokan (2020). + +Our proposed authorship attributor leverages adversarial training to attribute documents regardless of obfuscation. First described by Goodfellow et al. (2014), adversarial training uses text produced by an adversary to train a model to be more robust. Adversarial training has seen success in other text domains including strengthening word embeddings + +(Miyato et al., 2016), better classification in crosslingual texts (Dong et al., 2020), and attacking classifiers (Behjati et al., 2019).
+ +# 3 Methodology + +In this section, we present our approaches for adversarial authorship attribution for deobfuscation. + +# 3.1 Threat Model + +We start by describing the threat model for the authorship deobfuscation attack. There is an arms race between an attacker (who desires to identify/attribute the author of a given document) and a defender (an author who desires privacy and therefore uses an obfuscator to protect their identity). Figure 1 illustrates the expected workflow between the defender and the attacker. The defender uses an obfuscator before publishing the documents and the attacker employs an obfuscation and/or obfuscator detector as well as an adversarially trained attributor for deobfuscation. + +Defender. The goal of the defender is to obfuscate a document so that it cannot be attributed to the author. The obfuscator takes as input an original document and obfuscates it to produce an obfuscated version that is expected to evade authorship attribution. + +Attacker. The goal of the attacker is to use an attributor trained on documents from multiple authors to identify the author of a given document. The attacker is assumed to know the list of potential authors in the traditional closed-world setting. We examine two scenarios: First, as shown in Figure 1a, the attacker knows that the document is obfuscated and also knows the obfuscator used by the defender. In this scenario, the attacker is able to access the documents that are produced by the obfuscator and hence train an attributor on obfuscated documents from that obfuscator. Second, as shown in Figure 1b, the attacker knows that the document is obfuscated and that there is a pool of available obfuscators, of which one is used by the defender. Note that the attacker does not know exactly which obfuscator from the pool was used by the defender. Thus, the attacker trains an attributor on documents that are obfuscated by any one of the pool of available obfuscators.
+ +# 3.2 Obfuscation + +We use two state-of-the-art text obfuscators. + +Document Simplification (DS-PAN). This approach obfuscates documents through rule-based sentence simplification (Castro et al., 2017). The transformation rules include lexical transformations, substitutions of contractions or expansions, and eliminations of discourse markers and fragments of text in parenthesis. This approach was one of the best performing in the annual PAN competition, a shared CLEF task (Potthast et al., 2017). It was also one of the few approaches that achieves "passable" and even "correct" judgements on the soundness of obfuscated text (i.e., whether the semantics of the original text are preserved) (Hagen et al., 2017). We refer to this approach as DS-PAN. + +Mutant-X. This approach performs obfuscation using a genetic algorithm based search framework (Mahmood et al., 2019). It iteratively changes the input text based on attribution probability and semantics so that obfuscation improves at each step. It is a fully automated authorship obfuscation approach that outperformed the text obfuscation approaches from PAN (Potthast et al., 2017) and has since been used by other text obfuscation approaches (Gröndahl and Asokan, 2020). There are two versions of Mutant-X: Mutant-X writeprintsRFC, which uses Random Forests along with Writeprints-Static features (Brennan et al., 2012); and Mutant-X embeddingCNN, which uses a Convolutional Neural Network (CNN) classifier with word embeddings. We use the writeprintsRFC version because it achieves a larger drop in attribution accuracy and better semantic preservation as compared to embeddingCNN. + +# 3.3 Deobfuscation + +We describe the design of the authorship attributor and our adversarial training approaches for deobfuscation. + +Authorship Attributor. We use writeprintsRFC as the classifier for authorship attribution.
More specifically, we use the Writeprints-Static feature set (Brennan et al., 2012) that includes lexical features on different levels, such as word level (total number of words) and letter level (letter frequency) as well as syntactic features such as the frequency of function words and parts of speech tags. It is one of the most widely used stylometric feature sets and has consistently achieved high accuracy on different datasets and author sets while maintaining a low computational cost. We then use these features to train an ensemble random forest classifier + +![](images/b5b2c86f884c875f2e60771cb9be634f24b446e5796d48486a7545cf4b0c8ee3.jpg) +(a) Scenario 1: Attacker knows the document is obfuscated and the obfuscator used + +![](images/72096ef4c0fd4a8f3cce62625b0183ad3361aa74bf8943e028c7ae23044721c6.jpg) +(b) Scenario 2: Attacker only knows the document is obfuscated + +with 50 decision trees. + +Adversarial Training. The basic idea of adversarial training is to include perturbed/obfuscated inputs into the training set to improve the model's resistance towards such adversarially obfuscated inputs (Goodfellow et al., 2014). It has been widely used in various domains including text classification. In our case, obfuscated texts are texts that vary slightly from the original texts and these serve as adversarial examples. We examine how using these adversarial examples as training data influences the attributor's performance and whether it adds resilience against obfuscation. Based on our two scenarios described in Section 3.1 and shown in Figure 1, we propose two ways of adversarial training. For both cases, original texts from the list of possible authors are selected and prepared for obfuscation. For scenario 1, we train the attributor using documents obfuscated by a known obfuscator.
For scenario 2, since the attacker is not assumed to know the specific obfuscator used by the defender, we train the attributor using documents obfuscated by the pool of available obfuscators. + +# 4 Experimental Setup + +We describe the dataset, evaluation metrics, and experimental design to assess the effectiveness of our adversarial authorship attribution approaches for deobfuscation. + +Dataset. Following previous research (Mahmood et al., 2019), we examine a publicly available dataset for evaluation of our methodology. The Blog Authorship Corpus (Schler et al., 2006) contains over 600,000 blog posts from blogger.com. These posts span 19,320 unique authors. Previous research (Narayanan et al., 2012) found that authorship attribution gets harder when more authors are included. Based on the author selection in (Mahmood et al., 2019), we select a subset of 15 authors, each with 100 documents (compared to their 5 and 10 authors), for a more precise evaluation. These + +![](images/aa31ecf797aa719a5359e326e3efa23f6c18edb0c4c301aa0a5103152df647fa.jpg) +Figure 1: Deobfuscation pipeline using obfuscation and/or obfuscator detectors for adversarial training +Figure 2: Generalized deobfuscation training process using adversarial training + +1500 documents are divided into an $80 - 20\%$ split for training and testing, respectively. Specifically, 80 documents from each author are used in the training set while the remaining 20 documents are used in the test set. + +As shown in Figure 2, we train on various combinations of obfuscated documents. These documents are obfuscated by the obfuscators described in Section 3.2. When an attributor-dependent obfuscator (e.g. Mutant-X (Mahmood et al., 2019)) is used, the attributor will have access to the same training documents used to train the obfuscator. Otherwise, the attributor is not assumed to have access to the attributor used by the obfuscator.
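A minimal sketch of the kind of stylometric features the attributor of Section 3.3 consumes. This is a tiny, illustrative stand-in for the 555-feature Writeprints-Static set, not the actual feature set; in the paper, such vectors feed a 50-tree random forest (e.g., scikit-learn's `RandomForestClassifier(n_estimators=50)`):

```python
from collections import Counter
import string

# A tiny illustrative function-word list (the real feature set is far larger).
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it"]

def writeprints_like_features(text):
    """Stand-in for Writeprints-Static: word-level counts, letter
    frequencies, and function-word frequencies."""
    words = text.lower().split()
    n_words = max(len(words), 1)
    letters = [c for c in text.lower() if c in string.ascii_lowercase]
    n_letters = max(len(letters), 1)
    letter_counts = Counter(letters)
    word_counts = Counter(words)
    return (
        [len(words)]                                                      # total number of words
        + [letter_counts[c] / n_letters for c in string.ascii_lowercase]  # letter frequencies
        + [word_counts[w] / n_words for w in FUNCTION_WORDS]              # function-word frequencies
    )

vec = writeprints_like_features("The cat sat on the mat.")
# 1 word-count feature + 26 letter frequencies + 8 function-word frequencies
```

Such per-document vectors, extracted identically for original and obfuscated texts, are what the adversarially trained classifier fits on.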
To control for training size, when more than one obfuscator is used, we sample equal amounts of documents from each set of obfuscated documents. For example, if we train against 2 obfuscators, then 600 documents are sampled from each set of respective obfuscated documents to get a training set of size 1200. + +To calibrate the obfuscated texts, we use the METEOR score (Banerjee and Lavie, 2005) to evaluate the soundness of documents. The score for Mutant-X ranges from 0.3 to 0.7 (mean=0.46), and the score for DS-PAN ranges from 0.24 to 0.57 (mean=0.38), which are comparable to previous studies (Mahmood et al., 2019). An in-depth analysis of the METEOR score is reported in Appendix A. + +Metric. To quantify attribution performance on the + +15-class problem, we calculate the accuracy as: + +$$
\text{accuracy} = \frac{\#\text{ of correctly attributed documents}}{\text{total }\#\text{ of documents}} \tag{1}
$$ + +Attack Scenarios. Figure 3 illustrates the flow of our experimental evaluation under different attack scenarios. + +0. Baseline: For the baseline model, we assume a world without obfuscation. The attacker is trained on original documents and is deployed on original documents. +1. Obfuscation-unaware-attacker: The first case we examine is when the defender actively seeks to hide author identity. Thus, the defender gains an advantage by obfuscating documents using either Mutant-X or DS-PAN in order to bypass the attacker. The attacker, however, remains unaware of obfuscation and trains the attributor only on original documents. +2. Obfuscation-aware-attacker with obfuscation detector: Next, we give knowledge of obfuscation to the attacker by introducing an obfuscation detector into the system. Previous research (Mahmood et al., 2020) shows that texts generated by existing obfuscators can be detected as obfuscated with high accuracy.
The device for this type of detection is called an obfuscation detector. Hence, in this scenario we ask whether there is any benefit to the attacker if the text is identified as obfuscated before attribution. Since the attacker does not know which obfuscator was used by the defender, the attributor is trained on the combination of documents generated from DS-PAN and from Mutant-X. The defender is the same as in the previous scenario, i.e., it uses one of the obfuscators to generate documents. + +![](images/fa381cbf83a10d1dd061bf4a3ca00fb17eb95a3e670d04f100a7b4b081d4b84c.jpg) +Figure 3: Progression of various attack scenarios + +3. Obfuscator-aware-attacker with obfuscator detector: We give additional information to the attacker. Previous research has shown that embedding watermarks and fingerprints that show the identity of the model into deep neural networks is a valid approach to protect the rights of shared trained models (Uchida et al., 2017; Zhang et al., 2018). Hence, it is reasonable to assume that there will be methods in the future to identify the watermarks for specific deep neural networks. Here, we propose the concept of an obfuscator detector, which can detect the specific obfuscator used. In this case, the attacker's attributor is always trained on the documents generated by the same obfuscator as the defender: either documents generated from DS-PAN or from Mutant-X. +2i. Obfuscation-aware-attacker with incorrect obfuscation detector: Here we ask the question: what happens in scenario 2 if the obfuscation detector makes errors? The specific error addressed is that the detector classifies the text as obfuscated whereas it is actually an original. Under this condition, the attacker's attributor is still trained on the combination of documents generated from DS-PAN and from Mutant-X. But the defender now presents an original document. +3i.
Obfuscator-aware-attacker with incorrect obfuscator detector: When the obfuscator detector classifies incorrectly, it assumes that the defender uses a specific obfuscator when it actually uses a different one. The attacker's attributor is trained on the documents generated by one of the obfuscators: either documents generated from DS-PAN or from Mutant-X. However, the defender uses a different obfuscator than the attacker to generate the documents. +4. Obfuscator-aware-attacker that does not rely on an obfuscator detector or obfuscation detector: Since the previous processes require the proposed obfuscation and obfuscator detectors, they are not efficient. Hence, a simpler, more efficient solution is to train on all the documents at once. In this simplified version, the attacker's attributor is trained on the combination of original documents, documents generated from DS-PAN, and documents generated from Mutant-X. Since this is the combined condition, the defender may or may not use an obfuscator, and will choose from the two possible obfuscators to generate documents. + +# 5 Results + +In this section, we present the results following the progression of various attack scenarios shown in Figure 3. + +# 5.1 Major Scenarios + +0. Baseline: The original authorship attributor has an accuracy of $76.7\%$ when trained on the original documents and tested on original documents. To be considered effective, attribution accuracy should be higher than $6.7\%$ , the accuracy obtained by attributing the 15 authors randomly. +1. Obfuscation-unaware-attacker: Based on the first row of Table 1, the result shows that the attribution accuracy drops from $76.7\%$ to $50.7\%$ with a decrease of $26\%$ when tested on DS-PAN obfuscated documents, while the accuracy for testing on Mutant-X obfuscated documents drops from $76.7\%$ to $44.3\%$ . The average drop in accuracy is from $76.7\%$ to $47.5\%$ , which is $29.2\%$ .
These results confirm previous findings on the performance of the obfuscators (Karadzhov et al., 2017; Mahmood et al., 2019): the DS-PAN and Mutant-X obfuscators can successfully decrease the attribution accuracy of the original attributor. +2. Obfuscation-aware-attacker with obfuscation detector: The second row of Table 1 shows that attribution accuracy increases by $13.2\%$ from $50.7\%$ to $63.9\%$ when tested on DS-PAN documents, and increases by $24.7\%$ from $44.3\%$ to $69\%$ when tested on Mutant-X documents. The average accuracy is $66.4\%$ , which increases from the previous $47.5\%$ by about $19\%$ . While the performance is still not comparable to the baseline results, the increase in accuracy from the previous scenario is significant, which suggests that the obfuscation detector benefits performance on obfuscated documents. +3. Obfuscator-aware-attacker with obfuscator detector: As shown on the third and fourth rows of Table 1, when trained only on DS-PAN documents, the accuracy tested on DS-PAN is $68.6\%$ , with an increase of $17.9\%$ from the first scenario; when trained only on Mutant-X documents, the accuracy tested on Mutant-X is $75.7\%$ , with an increase of $31.4\%$ . The average test accuracy is $71.1\%$ , which increases by about $5\%$ compared to the $66.4\%$ in the previous case. From the results, we can see that having an obfuscator detector as well as an obfuscation detector is the most beneficial for improving attribution accuracy on obfuscated texts. + +
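The size-controlled training sets and the accuracy metric of Eq. (1) from Section 4 can be sketched as follows (hypothetical helper names; the pool sizes mirror the 2-obfuscator example of 600 documents each):

```python
import random

def balanced_training_set(obfuscated_pools, total_size=1200, seed=0):
    """Sample equal numbers of documents from each obfuscator's pool so the
    training-set size stays fixed regardless of how many obfuscators are used."""
    rng = random.Random(seed)
    per_pool = total_size // len(obfuscated_pools)
    sample = []
    for pool in obfuscated_pools:
        sample.extend(rng.sample(pool, per_pool))
    return sample

def accuracy(predicted, true):
    """Eq. (1): correctly attributed documents over total documents."""
    correct = sum(p == t for p, t in zip(predicted, true))
    return correct / len(true)

# Two obfuscator pools of 1200 documents each -> 600 sampled from each,
# giving a training set of size 1200 as in Section 4.
pools = [[("dspan", i) for i in range(1200)],
         [("mutantx", i) for i in range(1200)]]
train = balanced_training_set(pools)
```

The same `accuracy` computation underlies every percentage reported in the tables of this section.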
| Training set | Test set: DS-PAN | Test set: MutantX | Test set: Average |
| --- | --- | --- | --- |
| Original | 50.7 | 44.3 | 47.5 |
| DS-PAN + MutantX | 63.9 | 69.0 | 66.4 |
| DS-PAN | 68.6 | - | - |
| MutantX | - | 75.7 | - |
+ +Table 1: Accuracy (%) of the original attributor and different adversarially trained attributors tested against different obfuscators + +# 5.2 Error Conditions + +Although obfuscation/obfuscator detectors are quite accurate, they are not perfect. Hence, we test the success of the attacker when the obfuscation detector and obfuscator detector are incorrect. + +2i. Obfuscation-aware-attacker with incorrect obfuscation detector: As shown in the first column of row four of Table 2, the attribution accuracy decreases by $8.4\%$ from the baseline $76.7\%$ to $68.3\%$ , but a higher accuracy is maintained than the average of Attack Scenario 2 $(66.4\%)$ . The result shows that when the obfuscation detector produces wrong results, performance will be influenced, but still stays at a relatively high level. Thus, having an obfuscation detector is generally good for the attacker with little cost. +3i. Obfuscator-aware-attacker with incorrect obfuscator detector: From the second and third rows of Table 2 we see that when the attacker is trained only on DS-PAN documents, the accuracy tested on Mutant-X is $57.3\%$ , with a drop in performance of $18.4\%$ when compared to training on only Mutant-X documents $(75.7\%)$ . When the attacker is trained only on Mutant-X documents, the accuracy tested on DS-PAN is $48.5\%$ , with a drop in performance of $20.1\%$ as compared to training on only DS-PAN documents $(68.6\%)$ . The average test accuracy is $52.9\%$ , which is lower than training on the same obfuscator, but higher than the Attack Scenario 1 results in Section 5.1 ( $50.7\%$ and $44.3\%$ ). When the obfuscator detector gives incorrect results, the attribution accuracy will not achieve its best performance, but the result is still higher than when trained only on original documents. Hence, training on obfuscated documents always tends to benefit attribution accuracy.
+ +# 5.2.1 Combined Condition + +Here the attacker simply uses originals and obfuscated documents from all available obfuscators for adversarial training of the attributor. + +4. Obfuscator-aware-attacker that does not rely on an obfuscator detector or obfuscation detector: + +This result is shown on the last row of Table 2. Attribution accuracy when tested on original documents drops from $76.7\%$ to $66.3\%$ , but increases by $10.5\%$ from $50.7\%$ to $61.2\%$ when tested on DS-PAN, and increases by $24.5\%$ from $44.3\%$ to $68.8\%$ when tested on Mutant-X. The average accuracy is $65\%$ , which increases from the average of the former three, $57.2\%$ , by about $8\%$ . While the attacker does not know if the document is obfuscated or not, or by which obfuscator, it is still able to achieve a high boost in attribution accuracy by adversarial training. Therefore, although the previous processes can achieve higher performance, training on a combination of these documents could be a valid approach when time and resources are limited. + +# 6 Discussion + +Next, we look more closely into the results from adversarial training to better understand them. + +# 6.1 General Author Analysis + +Figure 4 presents the confusion matrices produced from DS-PAN obfuscated documents tested on Attack Scenarios 1, 2 and 3 respectively. Rows represent the Original Authors, while the columns represent the Predicted Authors. The values in the matrices are the percentage of the original documents that are classified as a specific author. + +Moving from scenario 1 to 3, we see an increase in color density and percentage on the diagonal, which signifies the general increase in accuracy when the training documents become more specific. Consistently, the non-diagonal areas becoming more transparent also indicates a reduction in classification errors.
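The row-percentage view used in the confusion matrices of Figure 4 can be computed as follows (a plain-Python sketch with a toy 3-author example, not the paper's 15-author evaluation):

```python
def confusion_percentages(true_authors, predicted_authors, n_authors):
    """Rows = original authors, columns = predicted authors; each row holds
    the percentage of that author's documents assigned to each prediction."""
    counts = [[0] * n_authors for _ in range(n_authors)]
    for t, p in zip(true_authors, predicted_authors):
        counts[t][p] += 1
    # Normalize each row to percentages; max(..., 1) guards empty rows.
    return [[100.0 * c / max(sum(row), 1) for c in row] for row in counts]

# Toy example: author 0 attributed correctly twice; author 1 once
# correctly and once confused with author 2; author 2 has no documents.
cm = confusion_percentages([0, 0, 1, 1], [0, 0, 1, 2], 3)
```

Diagonal entries of such a matrix are the per-author accuracies discussed above, and off-diagonal mass corresponds to the misclassifications.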
At the author level, we observe that almost all of the authors show increases in accuracy on the diagonal cells across the three scenarios. This shows that adversarial training is effective even on authors with different styles. + +Looking more closely at each author, we see that Author 9 is the easiest to classify: performance is always at $100\%$ . Author 6, on the other hand, is relatively hard to attribute. The best performance for Author 6 is only $35\%$ from the most effective Attack Scenario 3. + +Figure 6 presents another view on performance. It shows the percentage of errors made for each author out of all the errors in the three scenarios combined (note: the sum of all errors in the figure + +is $100\%$ ). Thus, the errors made for Author 1 under Scenario 1 are $3.18\%$ of total errors across the three scenarios. We observe that the color is generally darker in Scenario 1, while it gradually lightens in Scenario 2 and then in Scenario 3. Again, this indicates the benefit of having more specific training data. Looking more closely within each scenario, we see that the attributor of Attack Scenario 1 tends to misclassify Authors 5 and 8 the most. But the attributors for Scenario 2 and Scenario 3 learn more effectively for these two authors thereby reducing mistakes. For Attack Scenario 3, the most misclassified author is Author 6, accounting for $3.76\%$ of all errors. But this percentage is still an improvement over the $4.34\%$ in the previous two scenarios. Motivated by the above observations, next we investigate shifts in performance for a specific author. + +# 6.2 Individual Author Analysis + +We assign labels to the 15 authors in the dataset and select Original Author 15 for more detailed analysis. The reason we choose Author 15 is that its accuracy is among the ones that increase the most, from $45\%$ to $80\%$ .
To find out the reasons behind this increase, we perform a principal component analysis (PCA) on all of the DS-PAN documents whose original author is Author 15. We use the Writeprints-Static feature set, which has a total of 555 features. To preserve the most significant features for attribution, we select the 25 most important features from the original writeprintsRFC and process them through PCA so that we can visualize the features in 3-dimensional graphs.

In the graphs of Figure 5, each dot represents a document. Green dots are documents that are attributed correctly, while red dots are attributed incorrectly. In Figure 5a, the incorrectly attributed documents are mainly gathered in a cluster. This suggests that the attributor has trouble discriminating between documents that are similar to each other. But as we go from left to right, the documents in the cluster are gradually attributed correctly. This trend shows that the attributor is getting better at distinguishing between similar documents. Hence, we can infer that adversarial training improves attribution accuracy by learning to discriminate between documents that are more similar to each other.

# 6.3 Comparing DS-PAN and Mutant-X

In Attack Scenarios 2, 3, and 4, the test sets using DS-PAN for obfuscation yield worse attribution
| Training set | Test set: Original | Test set: DS-PAN | Test set: MutantX | Average of DS + MX |
|---|---|---|---|---|
| Original | 76.7 | 50.7 | 44.3 | 47.5 |
| DS-PAN | 57.3 | 68.6 | 57.3 | 62.9 |
| MutantX | 72.0 | 48.5 | 75.7 | 62.1 |
| DS-PAN + MutantX | 68.3 | 63.9 | 69.0 | 66.4 |
| DS-PAN + MutantX + Original | 66.3 | 61.2 | 68.8 | 65.0 |
Table 2: Accuracy of adversarial training on various combinations of test documents

![](images/33b8fbf5484c20cf05ae03a44846f9e0c8dd2e607a7ff1f66a52952355f7aecb.jpg)
(a) Attack Scenario 1

![](images/31ad6a11aeb511337856abdf1290c72cb1ef56d44821198d99531b98b8936b32.jpg)
(b) Attack Scenario 2

![](images/5feef7d6d1b231debd18c08b0bc836fa5e8928d31f7abcb9c93ac211c6cb25e0.jpg)
(c) Attack Scenario 3
Figure 4: Confusion matrices of different attack scenarios

![](images/e9ba7cf0159092700948b91e93a080511efb65b7c9d2919c64d9938c81745025.jpg)
(a) Attack Scenario 1

![](images/1859c93d976b3daab112190f8faaf7b5a93d7208bf8bd27729bdedc96dc0e0ca.jpg)
(b) Attack Scenario 2

![](images/f7aa64b4ee1a540ca8ca453bee4e16b9aaaf35ab60ea2dc093eb201472025a42.jpg)
(c) Attack Scenario 3
Figure 5: Attribution performance of Author 15 with PCA under different attack scenarios

![](images/fb0d7b01cb6c31d584eb6a3b4696f02f01c83ed8d3fe0ab87672ba82f3cd99a3.jpg)
Figure 6: Percentage of misclassified documents for each author across attack scenarios

accuracy than those using Mutant-X. Our analysis of obfuscated documents showed that DS-PAN makes both more changes and more significant changes than Mutant-X. Thus, we surmise that DS-PAN results in a larger degradation in attribution accuracy because the attacker's training set contains text that is less similar to the original text. However, the changes made by DS-PAN also have a side effect: they lower the soundness of the obfuscated text, as reflected by lower METEOR scores. The mean METEOR score for DS-PAN is 0.38, compared to 0.46 for Mutant-X. A more detailed analysis of METEOR scores and semantic similarity between obfuscated and original texts is reported in Appendix A.

# 6.4 Insights into Adversarial Training

The performance gain of adversarial training comes from two factors: a "noisy" training dataset comprising obfuscated documents, and knowledge about the obfuscator.
To disentangle these two factors, we compare the accuracy improvements of the second and third rows of Table 2 on the Mutant-X obfuscated test documents. The improvement in attribution accuracy is $13\%$ when DS-PAN obfuscated documents are used for training, and a further $18\%$ ( $31\%$ overall) when Mutant-X obfuscated documents are used for training. This difference ( $13\%$ vs. $18\%$ ) indicates that although having a noisy dataset helps, knowledge of the specific obfuscator is likely more crucial to improving attribution performance. The same trend holds for the DS-PAN obfuscated test documents.

# 7 Concluding Remarks

In this work, we explored the novel problem of adversarial authorship attribution for deobfuscation. We demonstrated that adversarial training can significantly reduce the adverse impact of existing text obfuscators on authorship attribution accuracy. We found that an adversarially trained authorship attributor improves attribution accuracy to within $5 - 10\%$ of the accuracy without obfuscation. While an adversarially trained attributor achieves the best accuracy when it is trained on documents obfuscated by the respective obfuscator, it achieves reasonable accuracy even when trained on documents obfuscated by a pool of obfuscators. When the adversarially trained attributor makes erroneous assumptions about the obfuscator used to obfuscate documents, we note a degradation in attribution accuracy. It is noteworthy, however, that even with this degradation, accuracy remains similar to or better than that of a baseline attributor that is not adversarially trained.

Our results shed light on the future of the ensuing arms race between obfuscators and attributors.
Most notably, we find that the effectiveness of adversarial training is somewhat limited if obfuscators continue to employ new and improved methods that are not available to attributors for adversarial training. Therefore, it is important to continue the development of new and improved text obfuscation approaches that are resistant to deobfuscation (Bevendorff et al., 2019; Bo et al., 2019; Gröndahl and Asokan, 2020; Hlavcheva et al., 2021). On the other hand, recent work on understanding and improving the transferability of adversarial attacks can inform the development of better adversarial attributors that might work well even for unknown obfuscators (Tramèr et al., 2017; Zheng et al., 2020; He et al., 2021; Mireshghallah and Berg-Kirkpatrick, 2021).

Finally, our experiments were limited to the closed-world setting, where the universe of potential authors is assumed to be known by the attributor. Further research is needed to investigate whether (and how much) adversarial algorithms are effective in the open-world setting.

# References

Ahmed Abbasi and Hsinchun Chen. 2008. Writeprints: A stylometric approach to identity-level identification and similarity detection in cyberspace. ACM Transactions on Information Systems (TOIS), 26(2):7.
Mishari Almishari, Ekin Oguz, and Gene Tsudik. 2014. Fighting Authorship Linkability with Crowdsourcing. In ACM Conference on Online Social Networks (COSN).
Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and Summarization.
Melika Behjati, Seyed-Mohsen Moosavi-Dezfooli, Mahdieh Soleymani Baghshah, and Pascal Frossard. 2019. Universal adversarial attacks on text classifiers. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7345-7349. IEEE.
Janek Bevendorff, Martin Potthast, Matthias Hagen, and Benno Stein. 2019. Heuristic authorship obfuscation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1098-1108.
Haohan Bo, Steven H.H. Ding, Benjamin Fung, and Farkhund Iqbal. 2019. ER-AE: Differentially-private text generation for authorship anonymization. arXiv preprint arXiv:1907.08736.
Michael Brennan, Sadia Afroz, and Rachel Greenstadt. 2012. Adversarial stylometry: Circumventing authorship recognition to preserve privacy and anonymity. In ACM Transactions on Information and System Security (TISSEC), volume 15, pages 12:1-12:22.
Daniel Castro-Castro, Reynier Ortega Bueno, and Rafael Muñoz. 2017. Author masking by sentence transformation. In Notebook for PAN at CLEF 2017, pages 11-14.
Jonathan H. Clark and Charles J. Hannon. 2007. An Algorithm for Identifying Authors Using Synonyms. In Eighth Mexican International Conference on Current Trends in Computer Science (ENC 2007), pages 99-104. IEEE.
Xin Dong, Yaxin Zhu, Yupeng Zhang, Zuohui Fu, Dongkuan Xu, Sen Yang, and Gerard De Melo. 2020. Leveraging adversarial training in self-learning for cross-lingual text classification. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1541-1544.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
Tommi Gröndahl and N Asokan. 2020. Effective writing style transfer via combinatorial paraphrasing. Proc. Priv. Enhancing Technol., 2020(4):175-195.
Matthias Hagen, Martin Potthast, and Benno Stein. 2017. Overview of the author obfuscation task at PAN 2017: Safety evaluation revisited. In CLEF (Working Notes).
Xuanli He, Lingjuan Lyu, Qiongkai Xu, and Lichao Sun. 2021. Model extraction and adversarial transferability, your BERT is vulnerable! In North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT).
Yulia Hlavcheva, Victoria Bobicev, Olga Kanishcheva, et al. 2021. Language-independent features for authorship attribution on Ukrainian texts. In CEUR Workshop Proceedings, volume 2833, pages 134-143.
Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL).
Gary Kacmarcik and Michael Gamon. 2006. Obfuscating document stylometry to preserve author anonymity. In Proceedings of the COLING/ACL on Main conference poster sessions, pages 444-451. Association for Computational Linguistics.
Georgi Karadzhov, Tsvetomila Mihaylova, Yasen Kiprov, Georgi Georgiev, Ivan Koychev, and Preslav Nakov. 2017. The case for being average: A mediocrity approach to style masking and author obfuscation. In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 173-185. Springer.
Yashwant Keswani, Harsh Trivedi, Parth Mehta, and Prasenjit Majumder. 2016. Author Masking through Translation. In Notebook for PAN at CLEF 2016, pages 890-894.
Diederik P Kingma and Max Welling. 2014. Auto-encoding variational bayes. In International Conference on Learning Representations (ICLR).
Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2018. TextBugger: Generating Adversarial Text Against Real-world Applications. In Network and Distributed Systems Security (NDSS) Symposium.
Asad Mahmood, Faizan Ahmad, Zubair Shafiq, Padmini Srinivasan, and Fareed Zaffar. 2019. A girl has no name: Automated authorship obfuscation using Mutant-X. In Privacy Enhancing Technologies Symposium (PETS).

Asad Mahmood, Zubair Shafiq, and Padmini Srinivasan. 2020.
A girl has a name: Detecting authorship obfuscation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2235-2245.
Muharram Mansoorizadeh, Taher Rahgooy, Mohammad Aminiyan, and Mahdy Eskandari. 2016. Author obfuscation using WordNet and language models. In Notebook for PAN at CLEF 2016.
Andrew W.E. McDonald, Sadia Afroz, Aylin Caliskan, Ariel Stolerman, and Rachel Greenstadt. 2012. Use fewer instances of the letter 'i': Toward writing style anonymization. In International Symposium on Privacy Enhancing Technologies Symposium, pages 299-318. Springer.
Andrew W.E. McDonald, Jeffrey Ulman, Marc Barrowclift, and Rachel Greenstadt. 2013. Anonymouth Revamped: Getting Closer to Stylometric Anonymity. In PETools: Workshop on Privacy Enhancing Tools, volume 20.
Fatemehsadat Mireshghallah and Taylor Berg-Kirkpatrick. 2021. Style Pooling: Automatic Text Style Obfuscation for Improved Classification Fairness. In EMNLP.
Takeru Miyato, Andrew M Dai, and Ian Goodfellow. 2016. Adversarial training methods for semi-supervised text classification. arXiv preprint arXiv:1605.07725.
Frederick Mosteller and David L Wallace. 1963. Inference in an authorship problem: A comparative study of discrimination methods applied to the authorship of the disputed federalist papers. Journal of the American Statistical Association, 58(302):275-309.
Arvind Narayanan, Hristo Paskov, Neil Zhenqiang Gong, John Bethencourt, Emil Stefanov, Eui Chul Richard Shin, and Dawn Song. 2012. On the Feasibility of Internet-Scale Author Identification. In IEEE Symposium on Security and Privacy (SP), pages 300-314. IEEE.
Rebekah Overdorf and Rachel Greenstadt. 2016. Blogs, twitter feeds, and reddit comments: Cross-domain authorship attribution. In Privacy Enhancing Technologies Symposium (PETS).
Martin Potthast, Francisco Rangel, Michael Tschuggnall, Efstathios Stamatatos, Paolo Rosso, and Benno Stein. 2017. Overview of PAN'17.
In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 275-290. Springer.
Martin Potthast, Felix Schremmer, Matthias Hagen, and Benno Stein. 2018. Overview of the author obfuscation task at PAN 2018: A new approach to measuring safety. In Notebook for PAN at CLEF 2018.

Josyula R Rao and Pankaj Rohatgi. 2000. Can pseudonymity really guarantee privacy? In USENIX Security Symposium, pages 85-96.
Sebastian Ruder, Parsa Ghaffari, and John G Breslin. 2016. Character-level and multi-channel convolutional neural networks for large-scale authorship attribution. arXiv preprint arXiv:1609.06686.
Jonathan Schler, Moshe Koppel, Shlomo Argamon, and James W Pennebaker. 2006. Effects of age and gender on blogging. In AAAI Spring Symposium: Computational Approaches to Analyzing Weblogs, volume 6, pages 199-205.
Rakshith Shetty, Bernt Schiele, and Mario Fritz. 2018. A4NT: Author Attribute Anonymity by Adversarial Training of Neural Machine Translation. In USENIX Security Symposium.
Ariel Stolerman, Rebekah Overdorf, Sadia Afroz, and Rachel Greenstadt. 2013. Classify, but verify: Breaking the closed-world assumption in stylometric authorship attribution. In IFIP Working Group, volume 11, page 64.
Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. 2017. The space of transferable adversarial examples. arXiv preprint arXiv:1704.03453.
Yusuke Uchida, Yuki Nagai, Shigeyuki Sakazawa, and Shin'ichi Satoh. 2017. Embedding watermarks into deep neural networks. In Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval, pages 269-277.
Jialong Zhang, Zhongshu Gu, Jiyong Jang, Hui Wu, Marc Ph Stoecklin, Heqing Huang, and Ian Molloy. 2018. Protecting intellectual property of deep neural networks with watermarking. In Proceedings of the 2018 on Asia Conference on Computer and Communications Security, pages 159-172.
Haizhong Zheng, Ziqi Zhang, Juncheng Gu, Honglak Lee, and Atul Prakash. 2020.
Efficient adversarial training with transferable adversarial examples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1181-1190.
Rong Zheng, Jiexun Li, Hsinchun Chen, and Zan Huang. 2006. A framework for authorship identification of online messages: Writing-style features and classification techniques. Journal of the American Society for Information Science and Technology (JASIST).

# A Qualitative Analysis

We conduct an analysis to evaluate the quality of the obfuscated text. We first evaluate the semantics of the obfuscated text with respect to the original text using METEOR scores. The results show that the METEOR scores of obfuscated text are comparable to those reported in prior studies. We also conduct a qualitative analysis of the obfuscated text.

First, we evaluate the quality of obfuscated documents from the two obfuscators. We use the METEOR score to measure the soundness of the obfuscated text in terms of the semantic similarity between the original and the obfuscated text.

![](images/62b3798ce5ca3cbf835b5477aac9a48cd5ab91a60a2e76df8213b7a21efcfaae.jpg)
Figure 7: CDF plot of METEOR score for obfuscated texts

Figure 7 shows the distribution of the METEOR score for Mutant-X and DS-PAN. The plot shows that the METEOR scores for Mutant-X range from 0.3 to 0.7 (mean = 0.46), and those for DS-PAN range from 0.24 to 0.57 (mean = 0.38). Compared to the results reported in (Mahmood et al., 2019), where the METEOR scores for Mutant-X range from 0.48 to 0.55 (mean = 0.51) and those for other baseline models range from 0.32 to 0.46 (mean = 0.38), the two obfuscators used in this work achieve comparable results in preserving the semantics of the original texts.

Table 3 contains examples from the two obfuscators showing different types of changes. Synonym replacement is common in both systems; examples are (street ↔ sidewalk) and (student ↔ pupil).
There are also changes in word form: (run ↔ running) and (waited ↔ wait) preserve the morpheme but change the form or tense of the word. It is also worth noting that DS-PAN tends to change the form of contractions, such as (I'm ↔ I am) and (to have ↔ to've). In general, the transformations make sense to readers and preserve most of the original meaning. But there are also cases (like the last row) where the transformations change the content and break the grammar.
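The CDF curves of Figure 7 can be reproduced from per-document METEOR scores. A minimal numpy sketch; the score list below is a hypothetical placeholder, not data from the paper:

```python
import numpy as np

def empirical_cdf(scores):
    """Sort the scores and pair each with its cumulative fraction,
    giving the (x, y) points of an empirical CDF curve."""
    x = np.sort(np.asarray(scores, dtype=float))
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y

# Hypothetical per-document METEOR scores for one obfuscator.
scores = [0.31, 0.44, 0.46, 0.52, 0.68]
x, y = empirical_cdf(scores)
```

Plotting `x` against `y` (e.g., with matplotlib's `plt.step`) yields a CDF like Figure 7, and the mean of the score array gives the summary statistics quoted above.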
| Index | Original | DS-PAN | MutantX |
|---|---|---|---|
| 1 | I'm not an expert | I'm not An expert | I am non an expert |
| 2 | What was the first print run? | What was the first print running? | What was the ane print run? |
| 3 | The New York Times ran a Styles section profile two weeks before publication | The New York Times ran a Styles editor profile two weeks before publication | the new_york-times run a styles division profile two calendar Weekend before publishing |
| 4 | Cornelius walks in off of the street. | Cornelius walks in off of the sidewalk | Cornelius walks in away of the street. |
| 5 | We've discovered librarians are very networked and seem to know about everything before it happens | We've found librarians are extremely networked and seem to believe about everything before it happens. | we suffer detect bibliothec are really network and appear to cognize about everything before it happen |
| 6 | Homework is minimal, but the reading load is daunting. | Homework is minor, but the reading load is daunting. | Prep is minimum, but the read load is daunt |
| 7 | Some traces of the original layout remain | Some traces of the manifest makeover remain | Some trace of the original layout stay |
| 8 | Some professors seem happy to have a visitor | Some professors seem happy to become a pilgrim | Some prof appear happy to've a visitor |
| 9 | He expects interest in the Nancy Pearl doll to be strongest in Seattle, where she is best known. | He expects grateful in the Nancy Pearl mannequin to be strongest in Seattle, where she is best known. | He expect involvement in the nancy_pearl dolly to be strongest in seattle, where she's well cognize. |
| 10 | When the sales slot came open a few months later, she applied. | When the sales position came open a few years later, she applied. | When the cut-rate_sale time_slot arrive open up a few calendar_month she utilize. |
| 11 | Professors often mistake her for a student | Professors often mistake her for a campus | Prof frequently err her for a pupil |
| 12 | They may look sleepy, but many used-book stores are thriving. | They may look sleepy, although many used-book stores are mature | they may search sleepy-eyed, but many used-book stores are boom |
| 13 | The perfumed bear she gave to me lost his scent | The perfumed bobcat she gave to me lost his odor | The perfume bear she render to me lose his aroma |
| 14 | I suppose I would have just waited until the morning if I were her. | I reckon I will rest just waited until the afternoon if I were She. | I presuppose i'd suffer precisely wait until the morn if i were her. |
Table 3: Sentences from test documents showing the results of different obfuscators \ No newline at end of file diff --git a/adversarialauthorshipattributionfordeobfuscation/images.zip b/adversarialauthorshipattributionfordeobfuscation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..3e2789b3e9fd17ee74adbbd26ccc0ee8acf4d3e7 --- /dev/null +++ b/adversarialauthorshipattributionfordeobfuscation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a92e8b3ac6527dc0b4afe252a86bf0d154433cc20dfd60c35a678ac35c30ae38 +size 483660 diff --git a/adversarialauthorshipattributionfordeobfuscation/layout.json b/adversarialauthorshipattributionfordeobfuscation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d29db57090235e2be4a75f5432ad539f342a8497 --- /dev/null +++ b/adversarialauthorshipattributionfordeobfuscation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1655adf4020d653994d9989168f1d9975826b53e3f1eff9a005397b77d6888a9 +size 378281 diff --git a/adversarialsoftprompttuningforcrossdomainsentimentanalysis/3330bc9e-bd04-41b8-9cbd-68b0b1fce00b_content_list.json b/adversarialsoftprompttuningforcrossdomainsentimentanalysis/3330bc9e-bd04-41b8-9cbd-68b0b1fce00b_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..b721343d5338b6d1e14e7f34ae2457713bbad585 --- /dev/null +++ b/adversarialsoftprompttuningforcrossdomainsentimentanalysis/3330bc9e-bd04-41b8-9cbd-68b0b1fce00b_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:01ab72c8b3953c25a77ee5be9a9f9cd9985d805941f2e7149d3b6b910b9b9158 +size 68576 diff --git a/adversarialsoftprompttuningforcrossdomainsentimentanalysis/3330bc9e-bd04-41b8-9cbd-68b0b1fce00b_model.json b/adversarialsoftprompttuningforcrossdomainsentimentanalysis/3330bc9e-bd04-41b8-9cbd-68b0b1fce00b_model.json new file mode 100644 index
0000000000000000000000000000000000000000..785195fd1df1a3bda3f67cfb84e9ecc4262931f4 --- /dev/null +++ b/adversarialsoftprompttuningforcrossdomainsentimentanalysis/3330bc9e-bd04-41b8-9cbd-68b0b1fce00b_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1c6951e3304a338b5b074346237d6ec801a019cfcd4ee4349d44843bdb57e0a9 +size 87963 diff --git a/adversarialsoftprompttuningforcrossdomainsentimentanalysis/3330bc9e-bd04-41b8-9cbd-68b0b1fce00b_origin.pdf b/adversarialsoftprompttuningforcrossdomainsentimentanalysis/3330bc9e-bd04-41b8-9cbd-68b0b1fce00b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..93aee22cc8c0258357189e353e0841e6ba8e3fd0 --- /dev/null +++ b/adversarialsoftprompttuningforcrossdomainsentimentanalysis/3330bc9e-bd04-41b8-9cbd-68b0b1fce00b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:49c04d60395d23468bf69ba58a26db9ae8efac5593b5bf9c919331a5a305f871 +size 930088 diff --git a/adversarialsoftprompttuningforcrossdomainsentimentanalysis/full.md b/adversarialsoftprompttuningforcrossdomainsentimentanalysis/full.md new file mode 100644 index 0000000000000000000000000000000000000000..af688cc97044d0c7c7792ecc7bac0ea1276b0ad3 --- /dev/null +++ b/adversarialsoftprompttuningforcrossdomainsentimentanalysis/full.md @@ -0,0 +1,293 @@ +# Adversarial Soft Prompt Tuning for Cross-Domain Sentiment Analysis + +Hui Wu $^{12}$ and Xiaodong Shi $^{123*}$ + +$^{1}$ Department of Artificial Intelligence, School of Informatics, Xiamen University, China +$^{2}$ National Institute for Data Science in Health and Medicine, Xiamen University, China +3Key Laboratory of Digital Protection and Intelligent Processing of Intangible Cultural Heritage of Fujian and Taiwan (Xiamen University), Ministry of Culture and Tourism, China huistudent@stu.xmu.edu.cn, mandel@xmu.edu.cn + +# Abstract + +Cross-domain sentiment analysis has achieved promising results with the help of pre-trained language models. 
With the emergence of GPT-3, prompt tuning has been widely explored to enable better semantic modeling in many natural language processing tasks. However, directly using a fixed predefined template for cross-domain research cannot model the different distributions of the [MASK] token in different domains, and thus underuses the prompt tuning technique. In this paper, we propose a novel Adversarial Soft Prompt Tuning method (AdSPT) to better model cross-domain sentiment analysis. On the one hand, AdSPT adopts separate soft prompts instead of hard templates to learn different vectors for different domains, thus alleviating the domain discrepancy of the [MASK] token in the masked language modeling task. On the other hand, AdSPT uses a novel domain adversarial training strategy to learn domain-invariant representations between each source domain and the target domain. Experiments on a publicly available sentiment analysis dataset show that our model achieves new state-of-the-art results for both single-source domain adaptation and multi-source domain adaptation.

# 1 Introduction

In recent years, with the emergence of a series of large-scale pre-trained language models (PLMs), such as GPT (Radford et al., 2018, 2019), BERT (Devlin et al., 2019), and RoBERTa (Liu et al., 2019), fine-tuning PLMs has achieved promising results on a wide range of natural language processing (NLP) tasks. However, as PLMs become larger and larger, fine-tuning them becomes more challenging in most real-world applications. More recently, Brown et al. (2020) show that, with designed task descriptions (a.k.a. prompts), GPT-3 can make accurate predictions without updating any of its 175B parameters. This inspires a new PLM-tuning method named "prompt tuning".

![](images/54d900fc69347f3bd9c5b972d3ce5841d92e2975652875e3fd7e189a82763857.jpg)
Figure 1: How domain discrepancy affects prompt tuning. Examples of a book review on the top and a video review on the bottom.
This prompt tuning method has achieved state-of-the-art results on text classification and natural language inference (Schick and Schütze, 2020; Schick et al., 2020; Gao et al., 2020), relation classification (Han et al., 2021), and natural language generation (Li and Liang, 2021).

It is common to use a predefined template (e.g., "It was [MASK].") in prompt tuning for binary sentiment analysis, where the classification result of positive or negative depends on the probabilities of predefined label words (e.g., "\{good, bad\}") in the masked language modeling (MLM) task. However, the distributions of MLM predictions can differ across domains. An example is shown in Figure 1: the discrepancy between a book-domain review and a video-domain review leads to different probabilities of label words. The high-frequency label word for the book-domain review is "useful", and for the video-domain review it is "real", neither of which is in the predefined "\{good, bad\}". Therefore, it is unreasonable to predict predefined label words with fixed templates (a.k.a. hard prompts) across different domain datasets.

The intuition is that the feature distributions corresponding to the [MASK] position learned from a hard prompt are distinct across domains. This discrepancy can have serious effects in the cross-domain setting, where we train a classifier on source domain data, e.g., book reviews, and test it on the target domain, e.g., video reviews. Domain adaptation (Ben-David et al., 2007; Mansour et al., 2009) based on the cluster hypothesis (Zhu and Goldberg, 2009) thus becomes a key point of cross-domain research.

To improve cross-domain sentiment analysis with the help of PLMs, we propose AdSPT: an Adversarial Soft Prompt Tuning method, which sheds new light on solving the domain adaptation problem.
Specifically, we use soft prompts composed of multiple learnable vectors and the [MASK] token, instead of hard templates, for tuning. For different domains, we use independent soft prompts to represent domain-specific information, thus endowing them with domain-aware knowledge. With different domain soft prompts, the MLM head classifier can mitigate the domain discrepancy of the [MASK] token. To enhance the effectiveness on the target domain, we design a novel adversarial training strategy to learn domain-invariant knowledge of the [MASK] token, which can be seen as a two-player minimax game between the target domain and each source domain under the multi-source domain adaptation setting. As a result, the collaborative effect of soft prompt tuning and domain adversarial training can more properly predict the feature distribution of the [MASK] token, on the basis of domain-specific soft prompts and the domain invariance of the [MASK] token.

In experiments, we evaluate our method on a publicly available sentiment analysis dataset for both single-source domain adaptation and multi-source domain adaptation. Our results show the effectiveness of collaboratively leveraging domain-specific soft prompt tuning and domain adversarial training. To summarize, the main contributions of this work are as follows:

(1) In prompt tuning, we adopt separate soft prompts to learn embeddings enriched with domain knowledge, thus alleviating the domain discrepancy at the [MASK] position.
(2) We design a novel adversarial training strategy to learn the domain-invariant representation of the [MASK] position.
(3) Experiments on the Amazon reviews dataset show that our method AdSPT obtains an average accuracy of $93.14\%$ (0.46 absolute improvement) under single-source domain adaptation and an average accuracy of $93.75\%$ (0.81 absolute improvement) under multi-source domain adaptation.

# 2 Related Work

Prompt tuning.
Fine-tuning PLMs with task-specific heads on downstream tasks has become the main paradigm and yields strong performance on many NLP tasks (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2019). But there is a big gap between the fine-tuning objectives of downstream tasks and the pre-training objectives of PLMs, which could limit the exploitation of the knowledge in PLMs (Liu et al., 2021b). Subsequently, GPT-3 (Brown et al., 2020) brought a new paradigm, "prompt tuning", for downstream tasks, which leverages natural-language prompts and task demonstrations as context to make downstream tasks similar to language modeling.

Early works explore manually defined templates (a.k.a. hard templates) for text classification and natural language inference (Schick and Schütze, 2020, 2021). However, suitable templates require strong domain knowledge, so automatically generated hard templates have also been explored (Shin et al., 2020; Gao et al., 2020; Ben-David et al., 2021). Since the goal of prompt construction is to find a method that allows PLMs to effectively perform downstream tasks, it is not necessary to limit templates to human-interpretable natural language. Some works therefore perform prompting directly with several learnable vectors, such as soft prompts (Lester et al., 2021; Vu et al., 2021), prefix-tuning (Li and Liang, 2021), and P-tuning V2 (Liu et al., 2021a). Moreover, Schick et al. (2020) explore automatically identifying label words, and Hu et al. (2021) use an external knowledge base to expand label words. This paper focuses on improving cross-domain sentiment analysis via different soft prompts for different domains.

Domain Adaptation. Research on domain adaptation (DA) uses labeled or unlabeled target data to transfer labeled source information to a specific target domain (Pan and Yang, 2009; Mansour et al., 2009).
Popular methods for unsupervised DA are based on optimizing domain discrepancy via adversarial training (Ganin et al., 2016; Zhao et al., 2018; Saito et al., 2018). For cross-domain sentiment analysis, some early works use pivot-based methods to capture the shared feature representation of different domains (Yu and Jiang, 2016; Ziser and Reichart, 2018; Li et al., 2018; Peng et al., 2018). Other works adopt different adversarial learning methods to learn domain-common sentiment knowledge (Li et al., 2017; Qu et al., 2019; Li et al., 2019).

Recently, with the promising performance of PLMs in NLP, many works on cross-domain sentiment analysis focus on improving language model pre-training and fine-tuning; e.g., Du et al. (2020) use a target-domain MLM task and a domain-distinguish task in pre-training, and Zhou et al. (2020) utilize several pre-training tasks based on existing lexicons and annotations. Different from these works, our method is the first to use the combination of soft prompt tuning and adversarial training to solve the DA problem.

# 3 Problem Formulation

In this paper, we study cross-domain sentiment analysis in the unsupervised domain adaptation setting, which covers two scenarios: a single source domain and a target domain, or multiple source domains and a target domain. Given $m(m \geq 1)$ source domains, the $l$ -th $(l \in [1, \dots, m])$ source domain contains an annotated dataset $S_{l} = \{x_{i}^{s}, y_{i}^{s}\}_{i=1}^{N_{l}^{s}}$ , where $x_{i}^{s} = [w_{1}^{s}, \dots, w_{n}^{s}]$ is an input sentence with $n$ words, $y_{i}^{s}$ is the corresponding polarity label, and $N_{l}^{s}$ is the number of examples in the $l$ -th source domain. In the target domain, there is an unannotated dataset $\mathcal{T} = \{x_{i}^{t}\}_{i=1}^{N^{t}}$ , where $x_{i}^{t} = [w_{1}^{t}, \dots, w_{n}^{t}]$ is an unlabeled sentence of the target domain and $N^{t}$ is the number of unlabeled examples.
The goal of cross-domain sentiment analysis is to learn a function $\mathcal{F}$ that both retains in-domain knowledge for the different domains and learns the domain invariance between the target domain and each source domain, so as to better predict the polarity of unlabeled sentences from the target domain.

# 4 Method

In this section, we first introduce a soft prompt tuning method for sentiment classification that utilizes soft prompts to capture domain-specific knowledge. Then we present a domain adversarial training method for domain adaptation. Finally, we describe the overall learning procedure.

# 4.1 Soft Prompt Tuning for Sentiment Classification

Prompt tuning is an approach that adds extra information to PLMs by reformulating downstream tasks as cloze questions. Its primary components are a template and a set of label words, where the template is a background description of the current task and the label words are the high-probability vocabulary items predicted by the PLM in the current context. In binary sentiment classification, we denote the input sentence as $\mathbf{x} = [w_{1},\dots ,w_{n}]$ and the output label as $y$, where $y \in \mathcal{Y}$ and the label space is $\mathcal{Y} = \{\text{positive, negative}\}$.

Prompt tuning formalizes the classification task as an MLM task. Given a PLM $\mathcal{M}$ and its vocabulary $\mathcal{V}$, a prompt consists of a template function $T(\cdot)$ that converts the input sentence $\pmb{x}$ to a prompt input $\pmb{x}_{prompt} = T(\pmb{x})$ containing the [MASK] token, and a set of label words $\mathcal{V}^* \subset \mathcal{V}$, which are connected with the label space through a mapping function $v: \mathcal{Y} \mapsto \mathcal{V}^*$.
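As a concrete illustration, the template function $T(\cdot)$ and the label-word mapping $v$ can be sketched in a few lines of Python. This is a hypothetical sketch, not the paper's implementation: the `[SOFT_i]` placeholder strings stand in for the learnable vectors $\mathbf{h}_i$, which in the real model are continuous embeddings rather than vocabulary tokens; the label words follow the paper's later choice of {good, bad}.

```python
# Hypothetical sketch of the prompt pieces described above. [SOFT_i] markers
# are stand-ins for the k learnable soft-prompt vectors h_0..h_{k-1}.

K = 3  # number of soft-prompt slots (k in the text); illustrative value

def template(sentence_tokens):
    """T(x): [CLS] x h_0..h_{k-1} [MASK] [SEP], with soft slots as placeholders."""
    soft_slots = [f"[SOFT_{i}]" for i in range(K)]
    return ["[CLS]", *sentence_tokens, *soft_slots, "[MASK]", "[SEP]"]

# Verbalizer v: label space {positive, negative} -> label words V* = {good, bad}
VERBALIZER = {"positive": "good", "negative": "bad"}

x_prompt = template(["great", "movie"])
```

Downstream, the PLM would replace each `[SOFT_i]` slot with its trainable embedding and read off the prediction at the [MASK] position.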
As shown in Figure 2, the soft prompted input $\pmb{x}_{prompt}$ contains the embeddings of the original sentence $\mathbf{e}(\pmb{x})$, $k$ learnable vectors $[\mathbf{h}_0, \dots, \mathbf{h}_{k-1}]$, the embedding of the [MASK] token $\mathbf{e}(\text{[MASK]})$, and the embeddings of the two positional tokens $\mathbf{e}(\text{[CLS]})$ and $\mathbf{e}(\text{[SEP]})$. So the actual input of $\mathcal{M}$ is represented as:

$$
\boldsymbol{x}_{\text{prompt}} = \left[ \mathbf{e}(\text{[CLS]}), \mathbf{e}(\boldsymbol{x}), \mathbf{h}_{0}, \dots, \mathbf{h}_{k-1}, \mathbf{e}(\text{[MASK]}), \mathbf{e}(\text{[SEP]}) \right] \tag{1}
$$

where $\mathbf{e}(\cdot)$ represents the embedding function of $\mathcal{M}$.

We can denote the PLM $\mathcal{M}$ as a function mapping $\boldsymbol{x}_{\text{prompt}}$ to the feature representation and vocabulary distribution of the [MASK] token:

$$
\mathbf{h}_{[\mathrm{MASK}]}, \mathbf{s}_{[\mathrm{MASK}]} = \mathcal{M}(\boldsymbol{x}_{\text{prompt}}) \tag{2}
$$

where $\mathbf{h}_{[\mathrm{MASK}]} \in \mathbb{R}^h$ and $\mathbf{s}_{[\mathrm{MASK}]} \in \mathbb{R}^{|\mathcal{V}|}$ are the hidden representation and vocabulary distribution of the [MASK] token, respectively, and $\mathbf{s}_{[\mathrm{MASK}]} = f(\mathbf{h}_{[\mathrm{MASK}]})$ is obtained by the MLM head function $f$.

![](images/0b28b8cb9402aed17d9e10362ee93611f6e0e89c04e5bc3d09493e761b0136b0.jpg)
Figure 2: Overall structure of the proposed method.

The probability $p(y|\boldsymbol{x})$ is formalized according to the distribution over the label words $w \in \mathcal{V}^*$ at the [MASK] position. In binary sentiment classification, we set the label words as $\mathcal{V}^{*} = \{\text{good}, \text{bad}\}$.
So,

$$
p(y \mid \boldsymbol{x}) = p\left(\mathcal{V}_{y}^{*} \leftarrow [\text{MASK}] \mid \boldsymbol{x}_{\text{prompt}}\right) = \frac{\exp\left(\mathbf{s}_{[\mathrm{MASK}]}\left(\mathcal{V}_{y}^{*}\right)\right)}{\sum_{y' \in \mathcal{Y}} \exp\left(\mathbf{s}_{[\mathrm{MASK}]}\left(\mathcal{V}_{y'}^{*}\right)\right)} \tag{3}
$$

Given an annotated dataset $S = \{\pmb{x}_i, y_i\}_{i=1}^N$, the training objective for soft prompt tuning is the binary cross-entropy loss,

$$
\mathcal{L}_{class}(\mathcal{S}; \theta_{\mathcal{M},p,f}) = -\sum_{i=1}^{N}\left[ \log p\left(y_i \mid \boldsymbol{x}_i\right)^{\mathbb{I}\{\hat{y}_i = 1\}} + \log\left(1 - p\left(y_i \mid \boldsymbol{x}_i\right)\right)^{\mathbb{I}\{\hat{y}_i = 0\}} \right] \tag{4}
$$

where $\hat{y}_i$ represents the ground-truth label (1 for positive and 0 for negative), and $\theta_{\mathcal{M},p,f}$ represents the overall trainable parameters of the PLM $\mathcal{M}$, the learnable vectors $p$, and the MLM head function $f$.

# 4.2 Domain Adversarial Training

For the same task in different domains, domain adversarial training can not only transfer generic knowledge from the source domains to the target domain, but also train more domain-aware classifiers. As shown in Figure 2, domain adversarial training aims to bring the feature distributions at the [MASK] position from different domains closer; intuitively, it encourages the MLM head classifier to obtain domain-invariant features across domains.
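Concretely, the classification head of §4.1 (Eqs. 3–4) is just a softmax over the label-word scores plus a cross-entropy loss. A minimal sketch, with invented scores for illustration:

```python
import math

# Toy sketch of Eqs. (3)-(4): p(y|x) is a softmax of the [MASK]-position
# scores restricted to the label words V*, and the class loss is binary
# cross-entropy over gold labels. Scores below are made up for illustration.

LABEL_WORDS = {"positive": "good", "negative": "bad"}  # verbalizer v / V*

def p_y_given_x(s_mask):
    """Softmax of s_[MASK] over the two label words (Eq. 3)."""
    logits = {y: s_mask[w] for y, w in LABEL_WORDS.items()}
    z = sum(math.exp(v) for v in logits.values())
    return {y: math.exp(v) / z for y, v in logits.items()}

def class_loss(batch):
    """Binary cross-entropy of Eq. (4); batch holds (s_mask, gold) pairs."""
    loss = 0.0
    for s_mask, gold in batch:
        p_pos = p_y_given_x(s_mask)["positive"]
        loss -= math.log(p_pos) if gold == 1 else math.log(1.0 - p_pos)
    return loss

scores = {"good": 2.0, "bad": 0.0}  # s_[MASK] restricted to V*
probs = p_y_given_x(scores)
```

Here `probs["positive"]` is Eq. (3) evaluated on the toy scores; feeding (score, gold-label) pairs to `class_loss` gives Eq. (4).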
Based on the hidden representation $\mathbf{h}_{[\mathrm{MASK}]}$ produced by the PLM, the detailed process of domain adversarial training is as follows. Given $m$ ($m \geq 1$) source domains, we assume that between each source domain $\mathcal{S}_l$ ($l \in [1, \dots, m]$) and the target domain $\mathcal{T}$ there is a domain discriminative function $g_l : \mathbb{R}^h \to \mathcal{D}$ that discriminates between the source domain and the target domain, where the domain label set is $\mathcal{D} = \{0, 1\}$, with 0 the source-domain label and 1 the target-domain label. To this end, there are $m$ domain discriminators, denoted $\mathbf{g} = \{g_l\}_{l=1}^m$.

Given an input example $\pmb{x}$ from either the $l$-th ($l \in [1, \dots, m]$) source domain or the target domain, we first obtain the task-specific head representation $\mathbf{h}_{[\mathrm{MASK}]}$ from $\mathcal{M}$ and then model the probability $p(d|\pmb{x})$ of the domain label $d \in \mathcal{D}$ as:

$$
p(d \mid \boldsymbol{x}) = \frac{\exp\left(g_{l}^{d}\left(\mathbf{h}_{[\mathrm{MASK}]}\right)\right)}{\sum_{d' \in \mathcal{D}} \exp\left(g_{l}^{d'}\left(\mathbf{h}_{[\mathrm{MASK}]}\right)\right)} \tag{5}
$$

Given $m$ source domain datasets $\hat{S} = \{\mathcal{S}_l\}_{l=1}^m = \{\{\pmb{x}_i^s\}_{i=1}^{N_l^s}\}_{l=1}^m$ and a target domain dataset $\mathcal{T} = \{\pmb{x}_i^t\}_{i=1}^{N^t}$, where $N_l^s$ is the number of samples in the $l$-th source domain and $N^t$ is the number of samples in the target domain, the domain discriminative objective is to minimize the following cross-entropy loss,

$$
\begin{array}{l} \mathcal{L}_{domain} (\hat{S}, \mathcal{T}; \theta_{\mathcal{M}, p, \mathbf{g}}) \\ = - \sum_{l = 1}^{m} \sum_{i = 1}^{N_{l}^{s} + N^{t}} \left[ \log p \left(d_{i} \mid \boldsymbol{x}_{i}\right)^{\mathbb{I} \{\hat{d}_{i} = 1\}} \right. \\ \left.
+ \log \left(1 - p \left(d_{i} \mid \boldsymbol{x}_{i}\right)\right)^{\mathbb{I} \left\{\hat{d}_{i} = 0 \right\}} \right] \tag {6} \\ \end{array}
$$

where $\hat{d}_i$ represents the ground-truth domain label and $\theta_{\mathcal{M},p,\mathbf{g}}$ represents the overall trainable parameters of the PLM $\mathcal{M}$, the learnable vectors $p$ and the $m$ domain discriminators $\mathbf{g}$.

The domain adversarial training among the $m$ source domains and the target domain can be seen as a two-player minimax game: the domain discriminators $\mathbf{g} = \{g_l\}_{l=1}^m$ minimize the domain discrimination loss so as to become strong discriminators, while the PLM $\mathcal{M}$ maximizes that loss so as to weaken the discrimination.

Formally, the domain adversarial training objective w.r.t. $\mathbf{g}$, $p$ and $\mathcal{M}$ is:

$$
\max_{\mathcal{M}, p} \min_{\mathbf{g}} \mathcal{L}_{\text{domain}} (\hat{\mathcal{S}}, \mathcal{T}; \theta_{\mathcal{M}, p, \mathbf{g}}) \tag {7}
$$

# 4.3 Learning Procedure

Joint training objective. Given $m$ source domains $\hat{S}$ and a target domain $\mathcal{T}$, the sentiment classifier and the domain discriminators are jointly trained to optimize the PLM $\mathcal{M}$, the soft prompt embeddings $p$, the MLM head function $f$ and the domain discriminators $\mathbf{g}$; the final training objective is:

$$
\min_{\mathcal{M}, p, f} \left\{ \lambda \mathcal{L}_{\text{class}} (\mathcal{S}; \theta_{\mathcal{M}, p, f}) - \min_{\mathbf{g}} \mathcal{L}_{domain} (\hat{S}, \mathcal{T}; \theta_{\mathcal{M}, p, \mathbf{g}}) \right\} \tag {8}
$$

where $\lambda$ is a trade-off parameter.
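To make the max–min signs in Eq. (7) concrete, here is a toy 1-D numeric sketch of the game, not the paper's implementation: a scalar parameter `theta` shifts target-domain "features", a logistic weight `w` plays the discriminator of Eq. (5), and the two take simultaneous gradient steps with opposite signs on the domain loss of Eq. (6). All data and hyperparameters here are synthetic, chosen purely for illustration.

```python
import math
import random

random.seed(0)
source = [random.gauss(0.0, 1.0) for _ in range(50)]  # domain label d = 0
target = [random.gauss(2.0, 1.0) for _ in range(50)]  # domain label d = 1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

theta = 0.0  # "feature extractor" parameter: shifts target features
w = 0.0      # logistic domain-discriminator weight
lr = 0.1
n = len(source) + len(target)

for _ in range(100):
    grad_w = 0.0
    grad_theta = 0.0
    for x, d in [(x, 0) for x in source] + [(x, 1) for x in target]:
        f = x + theta * d              # extracted feature
        p = sigmoid(w * f)             # p(domain = 1 | x), cf. Eq. (5)
        grad_w += (p - d) * f          # dL_domain/dw for cross-entropy (Eq. 6)
        grad_theta += (p - d) * w * d  # dL_domain/dtheta
    w -= lr * grad_w / n               # discriminator MINIMIZES L_domain
    theta += lr * grad_theta / n       # extractor MAXIMIZES L_domain
```

After training, `theta` has drifted negative, shifting the target features toward the source ones, which is the domain-invariance pressure the adversarial objective exerts on the PLM; in practice such a max–min step is often implemented with a gradient reversal layer.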
The sentiment classification objective $\mathcal{L}_{\text{class}}$ and the domain discrimination objective $\mathcal{L}_{\text{domain}}$ are defined in Eq. (4) and Eq. (6), respectively.

Training procedure. The iterative training procedure is summarized in Algorithm 1. In each iteration, the samples of each source domain are first used to train the PLM $\mathcal{M}$, the learnable vectors $p$ and the MLM head function $f$; the sentiment classification loss is computed in line 5. Then the samples of each source domain and the target domain are mapped to the corresponding domain discriminator to train the PLM $\mathcal{M}$, the learnable vectors $p$ and the domain discriminator $g_l$; the domain discrimination loss is computed in line 6. The sentiment classification loss updates the parameters of the PLM, the learnable vectors and the MLM head function (lines 7 and 10), while the domain discrimination loss updates the parameters of the PLM, the learnable vectors and the domain discriminators (lines 8 and 10). Note that the parameters of the PLM and the learnable vectors are updated jointly by both losses.

# Algorithm 1 Training Process of AdSPT.

Input: Training samples of $m$ source domain datasets $\hat{S} = \{\mathcal{S}_l\}_{l=1}^m = \{\{\pmb{x}_i^s, y_i^s\}_{i=1}^{N_l^s}\}_{l=1}^m$ and a target domain dataset $\mathcal{T} = \{\pmb{x}_i^t\}_{i=1}^{N^t}$; the number of training iterations $n$.

Output: Configurations of AdSPT $\theta_{\mathcal{M},p,f,\mathbf{g}}$.

Initialize: PLM $\theta_{\mathcal{M}}$; soft prompt embeddings $\theta_p$; MLM head function $\theta_f$; domain discriminators $\{\theta_{g_l}\}_{l=1}^m$; learning rate $\eta$; trade-off parameter $\lambda$.

1: while Training steps not end do
2:   for $d$ in {Source, Target} do
3:     if $d$ = Source then
4:       for $l$ in $\{1,\dots ,m\}$ do
5:         $\mathcal{L}_{class} \gets \mathcal{L}_{class}(\mathcal{S}_l; \theta_{\mathcal{M},p,f})$
6:         $\mathcal{L}_{domain} \gets \mathcal{L}_{domain}(\mathcal{S}_l, \mathcal{T}; \theta_{\mathcal{M}, p, g_l})$
           # Minimizing the MLM head classification loss
7:         $\theta_{f}\gets \theta_{f} - \eta \nabla_{\theta_{f}}\mathcal{L}_{class}$
           # Minimizing the domain discrimination loss
8:         $\theta_{g_l}\gets \theta_{g_l} - \eta \nabla_{\theta_{g_l}}\mathcal{L}_{domain}$
9:       end for
           # Minimizing the sentiment classification loss
10:        $\theta_{\mathcal{M},p}\gets \theta_{\mathcal{M},p} - \eta \nabla_{\theta_{\mathcal{M},p}}(\lambda \mathcal{L}_{class} - \mathcal{L}_{domain})$
11:     end if
12:   end for
13: end while

# 5 Experiments

In this section, we conduct experiments to evaluate the effectiveness of our method. Our experiments cover both single-source and multi-source domain adaptation settings (§ 5.3). In addition, we investigate how different components of the model impact the performance of cross-domain sentiment analysis under different settings.

# 5.1 Experimental Setup

Dataset.
We evaluate on the Amazon reviews dataset (Blitzer et al., 2007), which has been widely used for cross-domain sentiment classification. This dataset contains binary-labeled reviews from four domains: Books (B), DVDs
| S→T | BERT-DAAT | SENTIX$_{Fix}$ | FT | FT + AT | PT(HARD) | PT(HARD) + AT | PT(SOFT) | AdSPT |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| B→D | 89.70 | 91.30 | 88.96 | 89.70 | 89.75 | 90.75 | 90.50 | 92.00 |
| B→E | 89.57 | 93.25 | 86.15 | 87.30 | 91.75 | 92.45 | 93.05 | 93.75 |
| B→K | 90.75 | 96.20 | 89.05 | 89.55 | 91.90 | 92.70 | 92.75 | 93.10 |
| D→B | 90.86 | 91.15 | 89.40 | 89.55 | 90.90 | 91.50 | 91.75 | 92.15 |
| D→E | 89.30 | 93.55 | 86.55 | 86.05 | 91.75 | 92.75 | 93.55 | 94.00 |
| D→K | 87.53 | 96.00 | 87.53 | 87.69 | 91.05 | 92.35 | 92.50 | 93.25 |
| E→B | 88.91 | 90.40 | 86.50 | 87.15 | 90.00 | 91.90 | 91.90 | 92.70 |
| E→D | 90.13 | 91.20 | 87.98 | 88.20 | 92.10 | 92.55 | 93.25 | 93.15 |
| E→K | 93.18 | 96.20 | 91.60 | 91.91 | 92.90 | 93.55 | 93.95 | 94.75 |
| K→B | 87.98 | 89.55 | 87.55 | 87.65 | 89.15 | 90.75 | 91.75 | 92.35 |
| K→D | 88.81 | 89.85 | 87.30 | 87.72 | 90.05 | 91.00 | 91.35 | 92.55 |
| K→E | 91.72 | 93.55 | 90.45 | 90.25 | 92.15 | 92.50 | 93.10 | 93.95 |
| Avg. | 90.12 | 92.68 | 88.25 | 88.56 | 91.12 | 92.06 | 92.45 | 93.14 |
Table 1: Results of single-source domain adaptation on Amazon reviews. There are four domains, B: Books; D: DVDs; E: Electronics; K: Kitchen appliances. In the table header, S: source domain; T: target domain; FT: fine-tuning; AT: adversarial training; PT(HARD): prompt tuning with the hard prompt; PT(SOFT): prompt tuning with the soft prompt; + denotes a combination, e.g., "PT(HARD) + AT" is hard prompt tuning with domain adversarial training. AdSPT is also called "PT(SOFT) + AT". We report mean performance over 5-fold cross-validation.

(D), Electronics (E), and Kitchen appliances (K). Each domain has a total of 2,000 manually labeled reviews (1,000 positive and 1,000 negative) and 4,000 unlabeled reviews. We use different settings for single-source and multi-source domain adaptation. Following previous work (Ruder and Plank, 2017), we randomly select a small portion (20%) of the examples in each domain as the development set for model selection and perform 5-fold cross-validation.

In single-source domain adaptation, we follow previous work (Ziser and Reichart, 2018) and construct 12 cross-domain sentiment analysis tasks, corresponding to the 12 ordered domain pairs. In multi-source domain adaptation, we choose the data of three domains as the source domains and the remaining one as the target domain, e.g., "BDE $\rightarrow$ K"; there are thus 4 combinations, corresponding to 4 tasks.

Training details. In the Amazon reviews experiments, we adopt a 12-layer Transformer (Vaswani et al., 2017; Devlin et al., 2019) initialized with RoBERTa$_{\text{BASE}}$ (Liu et al., 2019) as the PLM. We train with a batch size of 2 for 10 epochs. The optimizer is Adam with a learning rate of $2e^{-5}$ for the PLM and $5e^{-5}$ for the domain discriminators. All experiments are conducted on an NVIDIA GeForce RTX 2080 Ti.
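The task enumeration above (12 ordered single-source pairs, 4 leave-one-out multi-source combinations) can be sketched directly; the string encoding below is illustrative, not from the paper:

```python
from itertools import permutations

DOMAINS = ["B", "D", "E", "K"]  # Books, DVDs, Electronics, Kitchen appliances

# 12 ordered source -> target pairs for single-source domain adaptation
single_source = [f"{s}->{t}" for s, t in permutations(DOMAINS, 2)]

# 4 leave-one-out combinations for multi-source DA, e.g. ("BDE", "K")
multi_source = [("".join(d for d in DOMAINS if d != t), t) for t in DOMAINS]
```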
# 5.2 Baselines

We compare our method against two state-of-the-art methods, and also design several variants of fine-tuning and prompt tuning as baselines to demonstrate the effectiveness of the adversarial training strategy in soft prompt tuning for DA.

(1) BERT-DAAT (Du et al., 2020): applies BERT post-training with adversarial training to cross-domain sentiment analysis.
(2) SENTIX$_{Fix}$ (Zhou et al., 2020): pre-trains a sentiment-aware language model via several pre-training tasks.
(3) Fine-tuning: standard fine-tuning of a vanilla PLM on the labeled source-domain data, using the hidden representation of [CLS] for classification.
(4) Fine-tuning + AT: adds adversarial training on top of standard fine-tuning of a vanilla PLM.
(5) Prompt-tuning(Hard): uses a manually defined template "It is [MASK]" for prompt tuning.
(6) Prompt-tuning(Hard) + AT: adds adversarial training on top of Prompt-tuning(Hard).

Following previous work (Du et al., 2020; Zhou et al., 2020), we adopt accuracy as the evaluation metric.

# 5.3 Main Results

The main results comprise single-source domain adaptation (Table 1) and multi-source domain adaptation (Table 2).
| S → T | FT | FT + AT | PT(HARD) | PT(HARD) + AT | PT(SOFT) | AdSPT |
| --- | --- | --- | --- | --- | --- | --- |
| BDE → K | 89.70 | 91.30 | 91.50 | 92.25 | 93.25 | 93.75 |
| BDK → E | 90.57 | 91.25 | 91.30 | 93.00 | 93.75 | 94.25 |
| BEK → D | 88.56 | 89.05 | 90.75 | 91.25 | 92.00 | 93.50 |
| DEK → B | 89.86 | 91.75 | 92.00 | 92.25 | 92.75 | 93.50 |
| Avg. | 89.67 | 90.84 | 91.39 | 92.00 | 92.94 | 93.75 |
Table 2: Results of multi-source domain adaptation on Amazon reviews.

Results of Single-source Domain Adaptation. Table 1 shows our main experimental results under single-source domain adaptation. Our method AdSPT outperforms all other methods on most of the single-source domain adaptation tasks.

Compared with previous state-of-the-art methods, AdSPT is superior on average to both BERT-DAAT and SENTIX$_{Fix}$ (3.02 and 0.46 absolute improvement, respectively). More specifically, prompt-tuning methods achieve better results than BERT-DAAT on most single-source tasks, indicating that prompt tuning can stimulate the knowledge pre-encoded in PLMs to solve the DA problem. However, the average performance of PT(HARD) and PT(HARD) + AT is lower than that of SENTIX$_{Fix}$ (91.12% vs. 92.68% and 92.06% vs. 92.68%), suggesting that in hard prompt tuning the feature representation of the [MASK] token absorbs more source-domain knowledge, which degrades performance on the target domain. Conversely, PT(SOFT) is comparable to SENTIX$_{Fix}$ on average (92.45% vs. 92.68%), and AdSPT achieves better average results than SENTIX$_{Fix}$ (0.46 absolute improvement). This shows that soft prompt tuning not only learns domain-aware continuous vectors, but also weakens the domain discrepancy of the feature distribution at the [MASK] position. In addition, prompt-tuning methods are consistently superior to FT and FT + AT, whether using a hard or a soft prompt.

Within prompt tuning, soft prompt tuning methods achieve better performance than the corresponding hard prompt tuning methods (1.33 and 1.08 absolute improvement, respectively).
This indicates that the separate soft prompts can flexibly learn the in-domain knowledge of different domains, which makes the feature representation of the [MASK] token more suitable for predicting the predefined label words; soft prompts are thus more applicable to the DA problem than hard prompts. When we add the domain adversarial training operation on top of soft prompt tuning, AdSPT achieves a new state-of-the-art result on average, showing that the domain adversarial training strategy can enhance the domain invariance of the [MASK] feature across different domain datasets.

Results of Multi-source Domain Adaptation. Table 2 shows our main experimental results under multi-source domain adaptation.

Compared with fine-tuning methods, the prompt-tuning variants achieve better performance (at least 0.55 absolute improvement on average). This is mainly because prompt tuning classifies with the feature representation of the [MASK] token rather than that of the [CLS] token. On the one hand, fine-tuning has difficulty accurately training the domain-specific classifier from scratch. On the other hand, prompt tuning classifies by predicting the distribution of the [MASK] token over the set of label words, which can activate prior knowledge in PLMs.

Compared with hard prompt tuning methods, soft prompt tuning methods achieve significant average improvements (92.94% vs. 91.39% and 93.75% vs. 92.94%). Constructing a sophisticated hard template requires expert knowledge and time; moreover, a single predefined hard template leads to a domain discrepancy in the feature representation at the [MASK] position, which is unsuitable for multi-domain adaptation.
Besides, PT(HARD) + AT achieves a better average result than PT(HARD) (0.61 absolute improvement), which shows that domain adversarial training can obtain domain-invariant features across domains via the domain discriminators. Accordingly, when domain adversarial training is added to soft prompt tuning, AdSPT achieves the best results under the multi-source domain adaptation setting, demonstrating the effectiveness of combining soft prompt tuning with the domain adversarial training strategy. In the domain adversarial training, using the feature representation of the [MASK] token to obtain domain invariance is better suited to predicting the predefined set of label words.

![](images/c0fa89f2f168e1d5b3e40502bd42207a5ec1fe2fd00e3a2a71d8eaee3bd1ba73.jpg)
Figure 3: Analysis of multi-source and single-source settings.

# 5.4 Analysis

Multi-source vs. Single-source. We make more detailed comparisons to explore the effect of the multi-source and single-source domain adaptation settings. Figure 3 illustrates the influence of multi-source and single-source training on the prediction results for the same target domain. When the target domain is "E", "D", or "B", the multi-source setting achieves better results than the single-source one, showing that in most cases multi-source domain adaptation is superior to single-source domain adaptation in cross-domain research. However, when the target domain is "K", the result of "E $\rightarrow$ K" is superior to that of "BDE $\rightarrow$ K" (94.75% vs. 93.75%), mainly because the feature distributions of "E" and "K" are closer.

Effect of Soft Prompts. As stated in previous work (Gao et al., 2020), the choice of hard templates may have a huge impact on the performance of prompt tuning.
In this subsection, we carry out experiments on "BDE $\rightarrow$ K" and "B $\rightarrow$ K" to investigate the influence of different soft prompts under the multi-source and single-source domain adaptation settings, respectively.

As shown in Figure 4, we use 6 different soft prompts (varying the number of prompt tokens $k$). The results demonstrate that the choice of template exerts a considerable influence on the performance of prompt tuning. For soft prompts, surprisingly, prompt tuning yields the best result with the fewest special tokens, here $k = 3$.

![](images/513eb5a3b47ececc3237adc861cca470c8ef1736d86dad538f9c55c710220888.jpg)
Figure 4: Results of different soft prompts $k$ on "BDE $\rightarrow \mathrm{K}$" and "B $\rightarrow \mathrm{K}$".

# 6 Conclusion

In this paper, we proposed a novel Adversarial Soft Prompt Tuning method (AdSPT) for cross-domain sentiment analysis. First, we use domain-specific soft prompts instead of hard templates to represent domain-specific knowledge; these soft prompts alleviate the domain discrepancy of the [MASK] representations in the MLM task. Second, we design a novel adversarial training strategy to learn domain-invariant knowledge of the [MASK] token across different domains. Experiments on the Amazon reviews dataset show that our method achieves state-of-the-art performance.

# Acknowledgements

We thank the anonymous reviewers for their helpful comments and suggestions. This work is supported by the Project of Technological Innovation 2030 "New Generation Artificial Intelligence" (Grant no. 2020AAA0107904), the Major Scientific Research Project of the State Language Commission in the 13th Five-Year Plan (Grant no. WT135-38), and the Key Support Project of NSFC-Liaoning Joint Foundation (Grant no. U1908216).

# References

Eyal Ben-David, Nadav Oved, and Roi Reichart. 2021. PADA: A prompt-based autoregressive approach for adaptation to unseen domains.
+Shai Ben-David, John Blitzer, Koby Crammer, Fernando Pereira, et al. 2007. Analysis of representations for domain adaptation. Advances in Neural Information Processing Systems, 19:137. +John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 440-447. +Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. pages 1877-1901. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186. +Chunning Du, Haifeng Sun, Jingyu Wang, Qi Qi, and Jianxin Liao. 2020. Adversarial and domain-aware bert for cross-domain sentiment analysis. In Proceedings of the 58th annual meeting of the Association for Computational Linguistics, pages 4019-4028. +Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Lavi-olette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096-2030. +Tianyu Gao, Adam Fisch, and Danqi Chen. 2020. Making pre-trained language models better few-shot learners. pages 3816-3830. + +Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. 2021. *Ptr: Prompt tuning with rules for text classification.* arXiv preprint arXiv:2105.11259. +Shengding Hu, Ning Ding, Huadong Wang, Zhiyuan Liu, Juanzi Li, and Maosong Sun. 2021. 
Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification. arXiv preprint arXiv:2108.02035. +Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691. +Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. pages 4582-4597. +Zheng Li, Xin Li, Ying Wei, Lidong Bing, Yu Zhang, and Qiang Yang. 2019. Transferable end-to-end aspect-based sentiment analysis with selective adversarial learning. arXiv preprint arXiv:1910.14192. +Zheng Li, Ying Wei, Yu Zhang, and Qiang Yang. 2018. Hierarchical attention transfer network for cross-domain sentiment classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32. +Zheng Li, Yun Zhang, Ying Wei, Yuxiang Wu, and Qiang Yang. 2017. End-to-end adversarial memory network for cross-domain sentiment classification. In *IJCAI*, pages 2237-2243. +Xiao Liu, Kaixuan Ji, Yicheng Fu, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2021a. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. arXiv preprint arXiv:2110.07602. +Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021b. Gpt understands, too. arXiv:2103.10385. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. +Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. 2009. Domain adaptation: Learning bounds and algorithms. arXiv preprint arXiv:0902.3430. +Sinno Jialin Pan and Qiang Yang. 2009. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359. +Minlong Peng, Qi Zhang, Yu-gang Jiang, and Xuan-Jing Huang. 2018. Cross-domain sentiment classification with target domain specific information. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2505-2513. + +Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics. +Xiaoye Qu, Zhikang Zou, Yu Cheng, Yang Yang, and Pan Zhou. 2019. Adversarial category alignment network for cross-domain sentiment classification. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2496-2508. +Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. +Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. +Sebastian Ruder and Barbara Plank. 2017. Learning to select data for transfer learning with bayesian optimization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 372-382. +Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. 2018. Maximum classifier discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3723-3732. +Timo Schick, Helmut Schmid, and Hinrich Schütze. 2020. Automatically identifying words that can serve as labels for few-shot text classification. pages 5569-5578. +Timo Schick and Hinrich Schütze. 2020. Exploiting cloze questions for few shot text classification and natural language inference. pages 255-269. +Timo Schick and Hinrich Schütze. 2021. 
It's not just size that matters: Small language models are also few-shot learners. pages 2339-2352.
Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. Autoprompt: Eliciting knowledge from language models with automatically generated prompts.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, and Daniel Cer. 2021. Spot: Better frozen model adaptation through soft prompt transfer. arXiv preprint arXiv:2110.07904.

Jianfei Yu and Jing Jiang. 2016. Learning sentence embeddings with auxiliary tasks for cross-domain sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 236-246.
Han Zhao, Shanghang Zhang, Guanhang Wu, José MF Moura, Joao P Costeira, and Geoffrey J Gordon. 2018. Adversarial multiple source domain adaptation. Advances in Neural Information Processing Systems, 31:8559-8570.
Jie Zhou, Junfeng Tian, Rui Wang, Yuanbin Wu, Wenming Xiao, and Liang He. 2020. Sentix: A sentiment-aware pre-trained model for cross-domain sentiment analysis. In Proceedings of the 28th International Conference on Computational Linguistics, pages 568-579.
Xiaojin Zhu and Andrew B Goldberg. 2009. Introduction to semi-supervised learning. Synthesis lectures on artificial intelligence and machine learning, 3(1):1-130.
Yftah Ziser and Roi Reichart. 2018. Pivot based language modeling for improved neural domain adaptation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1241-1251.
\ No newline at end of file diff --git a/adversarialsoftprompttuningforcrossdomainsentimentanalysis/images.zip b/adversarialsoftprompttuningforcrossdomainsentimentanalysis/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..8f9fc8c6c2722ac3d78acee01364e2a3cd558526 --- /dev/null +++ b/adversarialsoftprompttuningforcrossdomainsentimentanalysis/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:04f798b7a7fd80aa12bc8ee6a9a6346befab3b957b8758de6b47c8c25aa9f381 +size 382287 diff --git a/adversarialsoftprompttuningforcrossdomainsentimentanalysis/layout.json b/adversarialsoftprompttuningforcrossdomainsentimentanalysis/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..4670d546a9dc6a801b81198d9897d671b96f06ed --- /dev/null +++ b/adversarialsoftprompttuningforcrossdomainsentimentanalysis/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ebff39b97a783d7df36adf98ba83b14e5f524792d6b55860bf93c30fea1a46ce +size 402291 diff --git a/agoodpromptisworthmillionsofparameterslowresourcepromptbasedlearningforvisionlanguagemodels/0aa901eb-1a60-4edf-8ee7-3b0506e63775_content_list.json b/agoodpromptisworthmillionsofparameterslowresourcepromptbasedlearningforvisionlanguagemodels/0aa901eb-1a60-4edf-8ee7-3b0506e63775_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..906ced4dfeb34a62005f04ae8abf562203669e49 --- /dev/null +++ b/agoodpromptisworthmillionsofparameterslowresourcepromptbasedlearningforvisionlanguagemodels/0aa901eb-1a60-4edf-8ee7-3b0506e63775_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cc299f8b25ea89149ee71c5f62f94a9e0dc3ce8c566c4e91fdc652a0fc38a297 +size 80156 diff --git a/agoodpromptisworthmillionsofparameterslowresourcepromptbasedlearningforvisionlanguagemodels/0aa901eb-1a60-4edf-8ee7-3b0506e63775_model.json 
b/agoodpromptisworthmillionsofparameterslowresourcepromptbasedlearningforvisionlanguagemodels/0aa901eb-1a60-4edf-8ee7-3b0506e63775_model.json new file mode 100644 index 0000000000000000000000000000000000000000..0a2ab0a2fce8b0b483836894aa1b7ab4a46d6d23 --- /dev/null +++ b/agoodpromptisworthmillionsofparameterslowresourcepromptbasedlearningforvisionlanguagemodels/0aa901eb-1a60-4edf-8ee7-3b0506e63775_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e0c4a562361d3196d71e338cef2853b4675720c6cdcb1eaf1a94659f5d374170 +size 96099 diff --git a/agoodpromptisworthmillionsofparameterslowresourcepromptbasedlearningforvisionlanguagemodels/0aa901eb-1a60-4edf-8ee7-3b0506e63775_origin.pdf b/agoodpromptisworthmillionsofparameterslowresourcepromptbasedlearningforvisionlanguagemodels/0aa901eb-1a60-4edf-8ee7-3b0506e63775_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..05cc0d42d50367613a3d513c56746a40c9342962 --- /dev/null +++ b/agoodpromptisworthmillionsofparameterslowresourcepromptbasedlearningforvisionlanguagemodels/0aa901eb-1a60-4edf-8ee7-3b0506e63775_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f03c2880b10cd780f069301a656971d3f42b7c5eec44d87ff758ffab946425ac +size 1087785 diff --git a/agoodpromptisworthmillionsofparameterslowresourcepromptbasedlearningforvisionlanguagemodels/full.md b/agoodpromptisworthmillionsofparameterslowresourcepromptbasedlearningforvisionlanguagemodels/full.md new file mode 100644 index 0000000000000000000000000000000000000000..1532da78aaa0662edb3e7d6e29672533cf8a03d4 --- /dev/null +++ b/agoodpromptisworthmillionsofparameterslowresourcepromptbasedlearningforvisionlanguagemodels/full.md @@ -0,0 +1,325 @@ +# A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models + +Woojeong Jin $^{1*}$ Yu Cheng $^{2}$ Yelong Shen $^{2}$ Weizhu Chen $^{2}$ Xiang Ren $^{1}$ + +1University of Southern California 2Microsoft 
Corporation

{woojeong.jin, xiangren}@usc.edu {yu.cheng, yelong.shen, wzchen}@microsoft.com

# Abstract

Large pre-trained vision-language (VL) models can learn a new task with a handful of examples and generalize to a new task without fine-tuning. However, these VL models are hard to deploy for real-world applications due to their impractically huge sizes and slow inference speed. To solve this limitation, we study prompt-based low-resource learning of VL tasks with our proposed method, FEWVLM, relatively smaller than recent few-shot learners. For FEWVLM, we pre-train a sequence-to-sequence transformer model with prefix language modeling (PrefixLM) and masked language modeling (MaskedLM). Furthermore, we analyze the effect of diverse prompts for few-shot tasks. Experimental results on VQA show that FEWVLM with prompt-based learning outperforms Frozen (Tsimpoukelli et al., 2021), which is $31 \times$ larger than FEWVLM, by $18.2\%$ point and achieves comparable results to a $246 \times$ larger model, PICa (Yang et al., 2021). In our analysis, we observe that (1) prompts significantly affect zero-shot performance but marginally affect few-shot performance, (2) models with noisy prompts learn as quickly as hand-crafted prompts given larger training data, and (3) MaskedLM helps VQA tasks while PrefixLM boosts captioning performance. Our code is publicly available at https://github.com/woojeongjin/FewVLM

# 1 Introduction

Fine-tuning large pre-trained language models (PLMs) has led to strong results in various domains including vision-language tasks (Devlin et al., 2019; Raffel et al., 2020; Brown et al., 2020; Radford et al., 2021). Such large PLMs can learn a new task with a few examples or generalize to a new task without fine-tuning on any training examples, i.e., few-shot and zero-shot learning (Brown et al., 2020; Radford et al., 2021; Tsimpoukelli et al., 2021). Few-shot learning overcomes the challenges of data-hungry supervised learning, where collecting human-labeled data is costly and slow. However, recent few-shot models such as GPT-3 (Brown et al., 2020), Frozen (Tsimpoukelli et al., 2021), and PICa (Yang et al., 2021) are too large to deploy in small or moderate computing machines due to their gigantic model sizes.

* Work was mainly done while interning at Microsoft Azure AI.

![](images/1706c5fac94be18593664d9728f45528a21b8beb60d337caa3dcdf8321a2c7fc.jpg)
Figure 1: Examples of VQA and Captioning tasks. In our setup, we convert the tasks into generative tasks in which models need to generate target text given input text and an image.

In this paper, we study low-resource learning of VL tasks with our proposed method, FEWVLM, a moderate-sized vision-language model, in which we fine-tune the model with no or a handful of training examples. For FEWVLM, we pre-train a sequence-to-sequence transformer model (Cho et al., 2021; Raffel et al., 2020) with prefix language modeling (PrefixLM) and masked language modeling (MaskedLM). This setup is more practical in that training and inference can be run economically using standard computing hardware, and it is expensive to obtain a large number of quality training examples in the real world. In such a few-shot setting, task-specific prompts or task descriptions are important and have shown effectiveness in few-shot NLP tasks (Gao et al., 2021; Radford et al., 2021; Schick and Schütze, 2021a,b; Brown et al., 2020).

![](images/33266d6d23c0d7c86f1e99aeda4a0a49a20a287acacc8b1c649e8073963a864a.jpg)
Figure 2: Illustration of FewVLM. This shows inference of FewVLM with prompt-based learning. Given a prompt template, we convert the question text into input text. The prompt helps the model generate correct answers.

To extend the success to VL tasks, we aim to answer the following questions for prompt-based low-resource VL learning.
Q1) How does prompt design affect zero/few-shot learning on new tasks? Q2) Does prompt design still matter given larger training data? Q3) How do different pre-training objectives affect zero/few-shot learning? To answer these questions, we explore various prompt formats, including hand-crafted and noisy prompts, on zero/few-shot VL learning datasets. In addition, we study two pre-training objectives on few-shot tasks: prefix language modeling (PrefixLM), inspired by Raffel et al. (2020), and masked language modeling (MaskedLM). To this end, we investigate the model's performance on few-shot VL tasks including visual question answering (Goyal et al., 2017; Marino et al., 2019; Hudson and Manning, 2019), captioning (Agrawal et al., 2019; Young et al., 2014) (Fig. 1), and miniImageNet (Vinyals et al., 2016).

In our empirical analysis, our FEWVLM with prompt-based learning outperforms Frozen (Tsimpoukelli et al., 2021), which is $31 \times$ larger than FEWVLM, by $18.2\%$ point on zero-shot VQAv2 and achieves comparable results to a $246 \times$ larger model, PICa (Yang et al., 2021). Furthermore, we observe that (1) prompts significantly affect zero-shot performance but marginally affect few-shot performance on new tasks ($\S 6.2$ and $\S 6.3$), (2) models with noisy prompts learn as quickly as hand-crafted prompts given larger training data (§6.5), and (3) MaskedLM helps few-shot VQA tasks while PrefixLM boosts captioning performance (§6.6).

# 2 Related Work

Vision-language few-shot learning. Recently, several few-shot learners on vision-language tasks were proposed, including GPT (Radford et al., 2019; Brown et al., 2020), Frozen (Tsimpoukelli et al., 2021), PICa (Yang et al., 2021), and SimVLM (Wang et al., 2021).
Frozen (Tsimpoukelli et al., 2021) is a large language model based on GPT-2 (Radford et al., 2019), and is transformed into a multimodal few-shot learner by extending soft prompting to incorporate a set of images and text. Their approach shows few-shot capability on visual question answering and image classification tasks. Similarly, PICa (Yang et al., 2021) uses GPT-3 (Brown et al., 2020) to solve VQA tasks in a few-shot manner by providing a few in-context VQA examples. It converts images into textual descriptions so that GPT-3 can understand the images. SimVLM (Wang et al., 2021) is trained with prefix language modeling on weakly-supervised datasets. It demonstrates its effectiveness on a zero-shot captioning task. While these models achieve improvement on few-shot tasks, they are impractical to use in real-world applications due to their model sizes.

Language model prompting. Providing prompts or task descriptions plays a vital role in improving pre-trained language models in many tasks (Gao et al., 2021; Radford et al., 2021; Schick and Schütze, 2021a,b; Brown et al., 2020). Among them, GPT models (Radford et al., 2019; Brown et al., 2020) achieved great success in prompting or task demonstrations in NLP tasks. In light of this direction, prompt-based approaches improve small pre-trained models in few-shot text classification tasks (Gao et al., 2021; Schick and Schütze, 2021a,b). CLIP (Radford et al., 2021) also explores prompt templates for image classification, which affect zero-shot performance. Following these core ideas, we aim to improve zero-shot and few-shot performance using prompts in vision-language tasks.

![](images/d41f0799a2cf7301be99d1de9b1590d9a3239f9a95323397a3b6c5a1c6990d1c.jpg)
Figure 3: Pre-training objectives. We pretrain FewVLM with masked language modeling (MaskedLM) and prefix language modeling (PrefixLM).
# 3 Analysis Setup

In this work, we study the zero-shot and few-shot performance of vision-language models $\mathcal{L}$. We introduce our analysis setup: problem formulation, analysis questions, downstream tasks and datasets, evaluation metrics, and baselines.

# 3.1 Problem Formulation

For zero-shot tasks, a pre-trained VL model $\mathcal{L}$ has no access to a training set $\mathcal{D}_{train}$ or development set $\mathcal{D}_{dev}$, and directly makes inferences on the test instances $\mathcal{D}_{test}$. For few-shot tasks, we compose a dev set $\mathcal{D}_{dev}$ from training data and ensure that $|\mathcal{D}_{train}| = |\mathcal{D}_{dev}|$, following Perez et al. (2021) and Gao et al. (2021), to tune the hyper-parameters and select the model. We limit the sizes of training and development sets to meet the goal of learning from limited data. The sizes of $\mathcal{D}_{train}$ and $\mathcal{D}_{dev}$ are small; we set both to 16 in our study.

# 3.2 Analysis Questions

We aim to answer the following questions in this study through experiments on multiple VL datasets.

Q1) How does prompt design affect zero/few-shot learning on new tasks? Providing a pretrained language model with task-specific prompts or task descriptions significantly improves zero-shot and few-shot performance in NLP domains (Gao et al., 2021; Schick and Schütze, 2021a,b; Brown et al., 2020). For this question, we test several ad-hoc prompts on vision-language tasks and analyze how strongly zero-shot and few-shot performance is affected by different prompts, both hand-crafted and noisy, in Sec. 6.5.

Q2) Does prompt design still matter given larger training data? As we will see in our experiments, prompts affect the zero/few-shot performance. However, prompts may have different effects when models are given different sizes of training data.
To answer this question, we train models with different sizes of training data and various prompts, and compare the performance across different prompts.

Q3) How do different pre-training objectives affect zero/few-shot performance? We study the effect of two different pre-training objectives on few-shot performance: prefix language modeling (PrefixLM), inspired by Raffel et al. (2020), and masked language modeling (MaskedLM). In this setup, we pre-train our model with different objectives and test the model on zero-shot and few-shot tasks in Sec. 6.6.

# 3.3 Downstream Tasks and Datasets

In this work, we mainly focus on three tasks: visual question answering, captioning, and categorical learning. The visual question answering task requires models to answer a question about a given context image. We convert the visual question answering task into a generation task so that the model can generate answers in the zero-shot setting. The captioning task requires a model to generate descriptions for a given context image. The categorical learning task requires a model to choose the correct category or class. We evaluate our model in an open-ended fashion to quantify fast learning of categories, in which it must generate correct labels, unlike other classification methods.

We include VQAv2 (Goyal et al., 2017), OK-VQA (Marino et al., 2019), and GQA (Hudson

Table 1: Hand-crafted prompts. We study hand-crafted prompts on zero-shot and few-shot tasks. [Q] and [A] refer to question text and answer text, respectively. <text_1> is a sentinel token. We append image features to input text. Target prompts are "[A]" and "<text_1> [A]" in VQA. We use caption text as a target prompt in captioning.
| Task | ID | Input prompt | Example |
| --- | --- | --- | --- |
| VQA | P1 | [Q] <text_1> | input: What position is this man playing? <text_1> output: <text_1> pitcher |
| VQA | P2 | question: [Q] answer: | input: question: What position is this man playing? answer: output: <text_1> pitcher |
| VQA | P3 | question: [Q] answer: <text_1> | input: question: What position is this man playing? answer: <text_1> output: <text_1> pitcher |
| Captioning | Q1 | a picture of | input: a picture of output: a small black dog standing over a plate of food. |
| Captioning | Q2 | a photo of | input: a photo of output: a small black dog standing over a plate of food. |
| Captioning | Q3 | an image of | input: an image of output: a small black dog standing over a plate of food. |
and Manning, 2019) for visual question answering tasks, and NoCaps (Agrawal et al., 2019) and Flickr30k (Young et al., 2014) for image captioning. We use the Karpathy split (Karpathy and Li, 2015) for Flickr30k, which re-splits train and val images into $29,000 / 1,014 / 1,000$ for train / validation / test. For categorical learning, we include miniImageNet (Vinyals et al., 2016), a meta-learning dataset. Following Tsimpoukelli et al. (2021), we use only meta test data to evaluate FEWVLM in a few-shot manner and test on a 5-way $k$-shot setup, where 5 classes and $k$ examples per class are given.

# 3.4 Evaluation Metrics

To evaluate few-shot performance, we randomly sample 5 different training and dev splits and measure average performance on the 5 splits. We fine-tune the vision-language models for 200 epochs in the few-shot setup and choose the best checkpoint on the dev set. NoCaps does not have training data, so we use the training data from COCO captioning in the experiments, following Wang et al. (2021). We evaluate on the VQAv2 validation set, GQA test-dev, the OK-VQA test set, the test set of the Karpathy split for Flickr30k captioning, and the NoCaps validation set. We adopt accuracy for the VQA datasets and miniImageNet, and CIDEr (Vedantam et al., 2015) and SPICE (Anderson et al., 2016) as evaluation metrics for captioning.

# 3.5 Baselines

We evaluate strong zero/few-shot vision-language learners for comparison: Frozen (Tsimpoukelli et al., 2021) and PICa (Yang et al., 2021) for VQA datasets, and SimVLM (Wang et al., 2021) for captioning datasets. We include Unified VLP (Zhou et al., 2020) for few-shot VQAv2 and Flickr30k. Also, we compare them with fully fine-tuned models $\mathcal{L}_{full}$ as upper bounds of few-shot models for each task; these models are fine-tuned on the entire datasets while few-shot models can access only a small amount of data.
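The protocol in §3.1 and §3.4 (16-example train and dev splits, averaged over 5 random draws) can be sketched as follows. This is a hedged illustration: `few_shot_eval` and `score_fn` are hypothetical names of our own, with `score_fn` standing in for the 200-epoch fine-tuning and dev-based checkpoint selection the paper actually performs.

```python
import random
import statistics

# Sketch of the few-shot protocol: draw 5 random train/dev splits of 16
# examples each (|D_train| == |D_dev|), score each, and report the mean.
def few_shot_eval(dataset, score_fn, n_runs=5, k=16, seed=0):
    scores = []
    for run in range(n_runs):
        rng = random.Random(seed + run)
        sampled = rng.sample(dataset, 2 * k)       # draw without replacement
        d_train, d_dev = sampled[:k], sampled[k:]  # disjoint 16/16 split
        scores.append(score_fn(d_train, d_dev))
    return statistics.mean(scores)
```

A caller would pass the full training pool and a function that fine-tunes on `d_train`, selects a checkpoint on `d_dev`, and returns a test-set score.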
For fully fine-tuned models $\mathcal{L}_{full}$, we borrow numbers from UNITER$_{\text{large}}$ (Chen et al., 2019) for VQAv2, Oscar (Li et al., 2020b) for GQA, SimVLM (Wang et al., 2021) and VinVL (Zhang et al., 2021) for NoCaps CIDEr and SPICE respectively, and Unified VLP (Zhou et al., 2020) for Flickr30k captioning. We include VL-T5$_{\text{no-vqa}}$ as a baseline, which is pre-trained without visual question answering datasets (Cho et al., 2021). For miniImageNet, we include Frozen and AFHN (Li et al., 2020a). Frozen is designed for few-shot learning, while AFHN is designed for meta learning and is smaller and faster.

# 4 Method

Before diving into the analysis, we introduce our model, FEWVLM, to do zero/few-shot learning on VL tasks and answer the analysis questions we raised. We introduce the FEWVLM architecture and pre-training objectives.

# 4.1 Encoder-decoder Vision-language Model

We adopt an encoder-decoder architecture (Cho et al., 2021; Vaswani et al., 2017) to encode visual and text inputs and generate target text. We represent an input image with 36 object regions from a Faster R-CNN (Ren et al., 2015) trained on Visual Genome (Krishna et al., 2017). The sets of region representations are fed into the encoder by appending them to the text (Cho et al., 2021). We train the model parameters $\theta$ by minimizing the negative log-likelihood of target text $y$ tokens given input text $x$ and image $v$:

$$
L_{\theta} = -\sum_{i=1}^{|y|} \log P_{\theta}\left(y_{i} \mid y_{<i}, x, v\right). \tag{1}
$$

The model is not task-specific, so it is a good option for zero/few-shot settings.

# 4.2 Pre-training Objectives

We pre-train the models with both prefix language modeling (PrefixLM) and masked language modeling (MaskedLM). Fig. 3 illustrates PrefixLM and MaskedLM.

Prefix language modeling. We include prefix language modeling (PrefixLM) following Raffel et al. (2020).
Given an image and a span of text, this objective randomly splits the text into two separate components; the former component, with the given image, is used as input to the encoder, and the latter component is used as target text to be generated by the decoder.

Masked language modeling. We follow Cho et al. (2021) to do masked language modeling. This objective replaces random spans with numbered sentinel tokens, e.g., <text_1>, and then the masked text is fed into the encoder. The decoder then generates the masked spans as target text. We randomly mask $15\%$ of input text tokens and replace them with sentinel tokens.

Pre-training data. To pre-train FEWVLM, we collect image-caption data from MS COCO (Lin et al., 2014; Chen et al., 2015) and Visual Genome (VG) (Krishna et al., 2017). The pre-training datasets contain 9.18M image-text pairs and 180K distinct images.

# 5 Low-resource Adaptation

In downstream tasks, we train our model with few-shot examples. Fig. 2 shows an illustration of FEWVLM at inference time. Given a prompt template $\mathcal{P}$, we first get input text and target text using the template: $x, y = \mathcal{P}(\text{input}, \text{label})$. Then we train model parameters by minimizing the negative log-likelihood in Eq. (1). At inference time, we use the same prompt and the model generates the label text. We obtain the final label by removing the target prompt template.

# 5.1 Prompt Design

Prompts affect the performance of the vision-language model (Cho et al., 2021); we study the effect of different prompts on the zero-shot and few-shot performance on downstream tasks. Tables 1 and 11 show the prompts we used in our experiments.

# 5.1.1 Visual Question Answering

The visual question answering tasks (VQA, OK-VQA, and GQA) require models to answer a question about a given context image.
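Before turning to the specific templates, the two pre-training objectives of §4.2 can be made concrete with a small sketch. This is a simplified illustration under assumptions of our own (whitespace tokenization, single-token masked spans); only the numbered sentinel format (<text_1>, <text_2>, ...) and the 15% mask rate come from the paper.

```python
import random

def prefix_lm_example(tokens, rng):
    """PrefixLM: randomly split a caption; the prefix (plus the image) goes
    to the encoder, and the suffix becomes the decoder's target."""
    split = rng.randint(1, len(tokens) - 1)
    return tokens[:split], tokens[split:]

def masked_lm_example(tokens, rng, mask_rate=0.15):
    """MaskedLM: replace ~15% of tokens with numbered sentinel tokens; the
    target is each sentinel followed by the token(s) it hid."""
    n_mask = max(1, int(len(tokens) * mask_rate))
    positions = sorted(rng.sample(range(len(tokens)), n_mask))
    source, target = list(tokens), []
    for i, pos in enumerate(positions, start=1):
        sentinel = f"<text_{i}>"
        target += [sentinel, source[pos]]
        source[pos] = sentinel
    return source, target
```

For a caption like "a small black dog standing over a plate of food", `prefix_lm_example` yields a prefix/suffix pair, while `masked_lm_example` yields a sentinel-masked source and its sentinel-prefixed target.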
Recent approaches (Chen et al., 2019; Tan and Bansal, 2019; Su et al., 2020; Li et al., 2019, 2020b) tackle visual question answering tasks as multi-label classification over a predefined set of answer candidates. Instead, we approach the visual question answering tasks as a generation task so that the model can produce the answers without introducing any task-specific heads. In this setup, prompts act as constraints that guide the models to generate answers in the proper format; without prompts, models might generate a full sentence for VQA, which is not the correct format.

Therefore, we study several prompts for input and output, as shown in Tables 1 and 11; we explore hand-crafted prompts (Table 1) and noisy prompts for an ablation study (Table 11).

Hand-crafted prompts. For input prompts, we explore three different templates: "[Q] <text_1>", "question: [Q] answer:", and "question: [Q] answer: <text_1>" with the sentinel token at the end. Similarly to masked language modeling, we expect models to generate words thanks to the sentinel token. For target prompts, we explore two different templates: "[A]" (an answer) and "<text_1> [A]" (an answer with a sentinel token). Here, we aim to mimic MaskedLM's target text format, since the similar format helps the model quickly adapt to the new task. We refer to each prompt by its ID as in Table 1.

Noisy prompts. To understand the effect of noisy prompts in zero/few-shot learning, we include irrelevant prompts, noisy tokens, and random sentences, as in Table 11. Irrelevant prompts are random questions or instructions that mislead models to answer wrong questions or follow irrelevant instructions. Noisy tokens are randomly selected from T5's vocabulary, so we test how robust our model is to random tokens. Finally, random sentences are captions from MS COCO, which give false information to models.

# 5.1.2 Captioning

In NoCaps and Flickr30k, we explore three hand-crafted input prompts: "a picture of", "a photo of", and "an image of".
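A minimal sketch of how the hand-crafted templates are applied in practice: the template strings and the <text_1> sentinel follow Table 1, but the helper functions are our own illustration, not the FewVLM code.

```python
# Template strings from Table 1 of the paper; helper names are hypothetical.
VQA_INPUT = {
    "P1": "{q} <text_1>",
    "P2": "question: {q} answer:",
    "P3": "question: {q} answer: <text_1>",
}
CAPTION_INPUT = {"Q1": "a picture of", "Q2": "a photo of", "Q3": "an image of"}

def build_vqa_example(question, answer, template="P3", sentinel_target=True):
    """Return (input_text, target_text); the target optionally mimics
    MaskedLM's "<text_1> [A]" format."""
    input_text = VQA_INPUT[template].format(q=question)
    target_text = f"<text_1> {answer}" if sentinel_target else answer
    return input_text, target_text

def extract_label(generated):
    """Recover the final label by removing the target prompt template."""
    return generated.replace("<text_1>", "").strip()
```

For the Table 1 example, `build_vqa_example("What position is this man playing?", "pitcher")` produces the P3 input text ending in the sentinel and the target "<text_1> pitcher", from which `extract_label` recovers "pitcher".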
We study the effect of different

Table 2: Zero-shot VQA results. We test models without any training examples. VL-T5$_{\text{no-vqa}}$ is pre-trained without VQA datasets. Compared to the larger models Frozen and PICa-Full, our models outperform them or show comparable results.
| Model | Model size | VQAv2 | OK-VQA | GQA |
| --- | --- | --- | --- | --- |
| Unified VLP | 122M | 0.0 | - | - |
| VL-T5$_{\text{no-vqa}}$ | 224M | 13.5 | 5.8 | 6.3 |
| Frozen | 7B | 29.5 | 5.9 | - |
| PICa | 175B | - | 17.5 | - |
| FewVLM$_{\text{base}}$ | 224M | 43.4 | 11.6 | 27.0 |
| FewVLM$_{\text{large}}$ | 740M | 47.7 | 16.5 | 29.3 |
+ +Table 4: Zero-shot captioning results. We use the CIDEr and SPICE metrics for evaluation. + +
| Model | Model size | NoCaps CIDEr | NoCaps SPICE | Flickr30k CIDEr | Flickr30k SPICE |
| --- | --- | --- | --- | --- | --- |
| Unified VLP | 122M | - | - | 24.9 | 7.2 |
| VL-T5$_{\text{no-vqa}}$ | 224M | 4.4 | 5.3 | 2.6 | 2.0 |
| SimVLM$_{\text{huge}}$ | - | 101.4 | - | - | - |
| FewVLM$_{\text{base}}$ | 224M | 42.2 | 8.5 | 31.0 | 10.0 |
| FewVLM$_{\text{large}}$ | 740M | 47.7 | 9.1 | 36.5 | 10.7 |
word choices in this captioning task. While the three different words have similar meanings, they show different performance in zero-shot and few-shot tasks, as we will see in our experiments. For target prompts, we just train the model with the original caption without any additional prompts.

# 5.1.3 MiniImageNet

In miniImageNet, we train our model with a hand-crafted input prompt, "This is <text_1>," and target prompt, "[A]." We compare our model with and without prompts on this dataset to study whether prompts are helpful in categorical learning.

# 6 Results and Discussion

In this section, we first discuss our main results on zero-shot and few-shot tasks and then answer the questions we raised: does prompt design matter in zero/few-shot learning?

# 6.1 Experiment Details

For pre-training, we set batch sizes of 1,280 and 800 for $\mathrm{FEWVLM}_{\mathrm{base}}$ and $\mathrm{FEWVLM}_{\mathrm{large}}$, respectively, and pre-train them for 30 epochs. We

Table 3: Few-shot VQA results. We report average performance over 5 different splits. The sizes of the training and validation sets are 16 for our FEWVLM and VL-T5$_{\text{no-vqa}}$, and Frozen and PICa use 4 and 16 in-context training examples, respectively. For a fair comparison to Frozen, we include FEWVLM* with 4 training and validation examples.
| Model | Model size | VQAv2 | OK-VQA | GQA |
| --- | --- | --- | --- | --- |
| Unified VLP | 122M | 24.3 | - | - |
| VL-T5$_{\text{no-vqa}}$ | 224M | 31.8 | 12.7 | 19.6 |
| Frozen | 7B | 38.2 | 12.6 | - |
| PICa | 175B | 54.3 | 43.3 | - |
| FewVLM$^{*}_{\text{base}}$ | 224M | 45.1 | 14.5 | 26.9 |
| FewVLM$_{\text{base}}$ | 224M | 48.2 | 15.0 | 32.2 |
| FewVLM$_{\text{large}}$ | 740M | 51.1 | 23.1 | 35.7 |
| Fine-tuned $\mathcal{L}_{full}$ | - | 72.6 | - | 61.5 |
+ +Table 5: Few-shot captioning results. We report average performance over 5 different splits. We use the CIDEr and SPICE metrics for evaluation. + +
| Model | Model size | NoCaps CIDEr | NoCaps SPICE | Flickr30k CIDEr | Flickr30k SPICE |
| --- | --- | --- | --- | --- | --- |
| Unified VLP | 122M | - | - | 28.8 | 9.4 |
| VL-T5$_{\text{no-vqa}}$ | 224M | 22.0 | 6.8 | 12.8 | 8.3 |
| FewVLM$_{\text{base}}$ | 224M | 48.6 | 10.0 | 32.6 | 12.8 |
| FewVLM$_{\text{large}}$ | 740M | 53.1 | 10.4 | 37.0 | 13.5 |
| Fine-tuned $\mathcal{L}_{full}$ | - | 112.2 | 13.1 | 67.4 | 17.0 |
use a learning rate of 1e-4 with $5\%$ linear warmup. For few-shot learning, we train models for 200 epochs with a learning rate of 5e-5 and $5\%$ linear warmup, and choose the best checkpoint on the dev set. For FEWVLM, we use "question: [Q] answer: <text_1>" (P3) as the input prompt and "<text_1> [A]" as the target prompt for visual question answering, and "an image of" (Q3) as the input prompt for captioning, which show the best performance. We will study the effect of different prompts in Sec. 6.5. The sizes of $\mathcal{D}_{train}$ and $\mathcal{D}_{dev}$ are 16 on the VQA and captioning tasks. For miniImageNet, we use "This is <text_1>" and "[A]" as input and target prompts. On this dataset, we test with {1, 3, 5} shots per class.

# 6.2 Performance on Zero-shot Learning

We evaluate the existing models in a zero-shot manner, in which models do not have access to any training data. Tables 2 and 4 show the results on VQA and captioning datasets, respectively. First, FEWVLM with the hand-crafted prompt (P3) achieves better performance than the other baselines on VQA datasets. In particular, our FEWVLM$_{\text{base}}$ significantly outperforms Frozen

Table 6: 5-way miniImageNet results. We evaluate FEWVLM in a generative manner. The shot represents the number of training examples per class.
| Model | Model size | 1 shot | 3 shots | 5 shots |
| --- | --- | --- | --- | --- |
| Frozen | 7B | 14.5 | 34.7 | 33.8 |
| FewVLM$_{\text{base}}$ (no prompt) | 224M | 48.0 | 75.0 | 82.6 |
| FewVLM$_{\text{base}}$ | 224M | 57.0 | 78.0 | 84.2 |
| FewVLM$_{\text{large}}$ | 740M | 57.1 | 78.3 | 84.4 |
| AFHN | - | 62.3 | - | 78.1 |
which is about $31 \times$ larger than ours. Also, PICa, based on GPT-3 (Brown et al., 2020), shows the best performance on OK-VQA. It is noticeable that our FEWVLM$_{\text{large}}$, a $246 \times$ smaller model, achieves a comparable result to PICa. Compared to VL-T5$_{\text{no-vqa}}$, which has the same architecture as ours, FEWVLM$_{\text{base}}$ improves VQAv2 performance by about $30\%$ point. On NoCaps, SimVLM$_{\text{huge}}$ shows the best performance. Our FEWVLM$_{\text{base}}$ significantly improves the performance compared to VL-T5$_{\text{no-vqa}}$. As we will see in a later section, our pre-training objectives and the prompts boost the VQA and captioning performance.

# 6.3 Performance on Few-shot Learning

Tables 3 and 5 show the few-shot performance on VQA and captioning datasets. The sizes of the training and validation sets are 16 for FEWVLM, VL-T5$_{\text{no-vqa}}$, and Unified VLP; Frozen and PICa use 4 and 16 in-context demonstration examples, respectively.

On VQAv2 and OK-VQA, PICa shows the best performance, while our FEWVLM$_{\text{large}}$ achieves a comparable result on VQAv2. Unlike the other VQA datasets, OK-VQA requires external knowledge to answer, so larger models and large pre-training data (prior knowledge) are necessary to improve. Interestingly, FEWVLM$_{\text{base}}^{*}$, which is trained with 4 training examples, outperforms Frozen. On captioning data, FEWVLM$_{\text{base}}$ notably outperforms VL-T5$_{\text{no-vqa}}$ by $31.1\%$ point on NoCaps CIDEr.

Unified VLP slightly underperforms FEWVLM on the Flickr30k captioning task. We conjecture that this is because its architecture is also an encoder-decoder transformer and it is pre-trained with a captioning task (Zhou et al., 2020).

Table 7: Zero-shot results of hand-crafted prompts. We test different input prompts in zero-shot predictions. We use the CIDEr metric for Flickr30k.
Note that the zero-shot setting does not require target prompts.
| | no prompt | P1 | P2 | P3 |
| --- | --- | --- | --- | --- |
| VQAv2 | 3.7 | 9.9 | 19.0 | 43.4 |

| | no prompt | Q1 | Q2 | Q3 |
| --- | --- | --- | --- | --- |
| Flickr30k | 9.6 | 15.2 | 25.6 | 31.0 |
# 6.4 MiniImageNet

Table 6 shows results on miniImageNet, where models must choose the correct class for each image. We train and evaluate FEWVLM in a generative manner; the model must generate the correct label text to get credit. FEWVLM significantly outperforms Frozen in all shots. Note that we train FEWVLM with a few training samples, while Frozen uses them as in-context demonstrations. Interestingly, FEWVLM with a hand-crafted prompt improves performance substantially in the 1-shot case, while it improves only marginally in the 5-shot case.

# 6.5 Study of Prompt Design

Here we examine the effect of different prompts on FEWVLM$_{\text{base}}$ in Table 7 and Figs. 4, 5, and 6. We test the model on the VQAv2 and Flickr30k datasets.

# 6.5.1 Zero-shot Predictions

Table 7 shows the zero-shot performance on VQAv2 and Flickr30k. We observe that zero-shot results are remarkably affected by input prompts on both datasets. For input prompts, <text_1> in P1 and P3 helps the zero-shot predictions significantly compared to "no prompt" and P2. We conjecture that <text_1> guides the model to predict masked spans, similarly to MaskedLM, so it improves the performance.

On Flickr30k, we examine different word choices of prompts: "a picture of" (Q1), "a photo of" (Q2), and "an image of" (Q3). For instance, using "an image of" outperforms using no prompt by 21.4 points. It is noticeable that different word choices significantly affect the zero-shot results.

# 6.5.2 Few-shot Predictions

We study various input prompts including irrelevant prompts, noisy tokens, and random sentences on VQAv2 (Fig. 4). First, noisy prompts and no prompt achieve near 0 accuracy in the zero-shot setting. In few-shot predictions, FEWVLM with

![](images/9c8d890e678b32c451649e4b1f3b35f4200b3b81192068cc621af7e118bf85f1.jpg)
Figure 4: VQAv2 results on noisy prompts. We investigate different prompts on various training sizes.
FEWVLM is trained with our best hand-crafted prompt (P3), irrelevant prompts, noisy tokens, and random sentences. We list the prompt templates in Table 11 of the appendix. We use "<text_1> [A]" as our target prompt.

![](images/fefccf6e8a3a5384b753c84a162a302d460aa0e99ab2e5a5435f39142ca3ade6.jpg)
Figure 5: Flickr30k results on hand-crafted prompts. We investigate different hand-crafted prompts (Q1, Q2, and Q3) on various training sizes.

noisy prompts learns as quickly as with hand-crafted prompts given larger data. For example, our model with noisy prompts achieves comparable results to the best hand-crafted prompt. Among all the different types of noisy prompts, random sentences deteriorate performance the most. This is because the random sentences come from captions in MS COCO, so the model might choose the answer from the wrong captions rather than from the images. Interestingly, no prompt outperforms the other noisy prompts and even performs similar to or better than the hand-crafted prompt with larger training data. We also observe a similar phenomenon on Flickr30k; no prompt performs similarly to hand-crafted prompts in Fig. 5.

![](images/8454856e20c76e10f2deceb8ab320e49181296f97146d5a7f2726ba600cc64a5.jpg)
Figure 6: VQAv2 results on different target prompts. We investigate different target prompts with hand-crafted input prompts on various training sizes.

Table 8: Results on different pre-training objectives. We test our pre-training objectives to investigate how they affect zero-shot and few-shot performance. We train $\mathrm{FEWVLM}_{\mathrm{base}}$ with 16 training and validation examples.
| Objective | VQAv2 | GQA | Flickr30k CIDEr |
| --- | --- | --- | --- |
| *Zero-shot* | | | |
| MaskedLM | 42.4 | 25.1 | 4.6 |
| PrefixLM | 11.9 | 6.7 | 26.8 |
| MaskedLM + PrefixLM | 43.4 | 27.0 | 31.0 |
| *Few-shot* | | | |
| MaskedLM | 46.0 | 31.4 | 18.5 |
| PrefixLM | 40.8 | 27.6 | 31.8 |
| MaskedLM + PrefixLM | 48.2 | 32.2 | 32.6 |
In addition, we explore two different target prompts: "<text_1> [A]" and "[A]." We aim to mimic MaskedLM's target text format, so we add "<text_1>" to the target prompt on VQA. This might help the model's fast adaptation to a new task since they share the same target format. In Fig. 6, we notice an interesting phenomenon: the target prompt "[A]" shows a larger variance than the other, suggesting that introducing "<text_1>" helps the model quickly adapt to a new task. However, both prompts show similar results given larger training data, e.g., 300 examples.

# 6.6 Pre-training Objectives

We investigate how pre-training objectives affect different tasks. We pre-train FEWVLM with different pre-training objectives: masked language modeling (MaskedLM) and prefix language modeling (PrefixLM).

In Table 8, we observe that MaskedLM helps VQA tasks while PrefixLM helps captioning tasks in zero-shot and few-shot settings. We conjecture that MaskedLM predicts spans, which is analogous to predicting correct answers to questions, while PrefixLM generates the rest of a given prefix, which is similar to captioning. In other words, if the pre-training task is similar to the downstream task, it helps performance further. When pre-training with both objectives, they create a synergistic effect and thus improve cross-task generalization.

# 7 Conclusion

In this work, we present FEWVLM, a few-shot prompt-based learner for vision-language tasks. On diverse datasets, FEWVLM outperforms baselines and shows comparable results to PICa, which is $246 \times$ larger than ours. We observe that prompts are vital in zero-shot and few-shot tasks and that each pre-training objective helps different few-shot tasks. Also, we find that models with larger training data are not significantly affected by noisy prompts. Future work includes exploring automatic prompt generation and diverse formats of few-shot tasks such as multiple-choice VQA.
Finding optimal prompts requires exhaustive engineering to achieve the best performance and leads to impressive results. We leave the exploration of these directions to future investigations.

# References

Harsh Agrawal, Peter Anderson, Karan Desai, Yufei Wang, Xinlei Chen, Rishabh Jain, Mark Johnson, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. nocaps: novel object captioning at scale. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pages 8947-8956. IEEE.
Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016. Spice: Semantic propositional image caption evaluation. In European conference on computer vision, pages 382-398. Springer.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. ArXiv preprint, abs/1504.00325.
Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2019. Uniter: Learning universal image-text representations.
Jaemin Cho, Jie Lei, Hao Tan, and Mohit Bansal. 2021. Unifying vision-and-language tasks via text generation.
In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 1931-1942. PMLR. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816-3830, Online. Association for Computational Linguistics. +Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 6325-6334. IEEE Computer Society. +Drew A. Hudson and Christopher D. Manning. 2019. GQA: A new dataset for real-world visual reasoning and compositional question answering. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 6700-6709. Computer Vision Foundation / IEEE. +Andrej Karpathy and Fei-Fei Li. 2015. Deep visual-semantic alignments for generating image descriptions. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, + +June 7-12, 2015, pages 3128-3137. IEEE Computer Society. 
Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123(1):32-73.
Kai Li, Yulun Zhang, Kunpeng Li, and Yun Fu. 2020a. Adversarial feature hallucination networks for few-shot learning. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 13467-13476. IEEE.
Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A simple and performant baseline for vision and language. ArXiv preprint, abs/1908.03557.
Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. 2020b. Oscar: Object-semantics aligned pre-training for vision-language tasks. In European Conference on Computer Vision, pages 121-137. Springer.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer.
Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. 2019. OK-VQA: A visual question answering benchmark requiring external knowledge. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 3195-3204. Computer Vision Foundation / IEEE.
Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021. True few-shot learning with language models. ArXiv preprint, abs/2105.11447.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision.
In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 8748-8763. PMLR. +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. + +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21:140:1-140:67. +Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. 2015. Faster R-CNN: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 91-99. +Timo Schick and Hinrich Schütze. 2021a. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255–269, Online. Association for Computational Linguistics. +Timo Schick and Hinrich Schütze. 2021b. It's not just size that matters: Small language models are also few-shot learners. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2339-2352, Online. Association for Computational Linguistics. +Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2020. VL-BERT: pretraining of generic visual-linguistic representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. +Hao Tan and Mohit Bansal. 2019. LXMERT: Learning cross-modality encoder representations from transformers. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5100-5111, Hong Kong, China. Association for Computational Linguistics. +Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, SM Eslami, Oriol Vinyals, and Felix Hill. 2021. Multimodal few-shot learning with frozen language models. ArXiv preprint, abs/2106.13884. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008. +Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 4566-4575. IEEE Computer Society. + +Oriol Vinyals, Charles Blundell, Tim Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. 2016. Matching networks for one shot learning. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 3630-3638. +Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. 2021. Simvlm: Simple visual language model pretraining with weak supervision. ArXiv preprint, abs/2108.10904. +Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Yumao Lu, Zicheng Liu, and Lijuan Wang. 2021. An empirical study of gpt-3 for few-shot knowledge-based vqa. ArXiv preprint, abs/2109.05014. +Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. 
Transactions of the Association for Computational Linguistics, 2:67-78. +Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. 2021. Vinvl: Revisiting visual representations in vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5579-5588. +Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J. Corso, and Jianfeng Gao. 2020. Unified vision-language pre-training for image captioning and VQA. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 13041-13049. AAAI Press. + +Table 9: Model architectures. + +
| Hyperparameter | FewVLMbase | FewVLMlarge |
| --- | --- | --- |
| # Layers | 12+12 | 24+24 |
| Hidden dimension | 768 | 1,024 |
| FF hidden size | 3,072 | 4,096 |
| # Attention heads | 12 | 16 |
| Attention head size | 64 | 64 |
+ +Table 10: COCO captioning results. We use the CIDEr and SPICE metrics for evaluation. + +
| Model | Model size | Zero-shot CIDEr | Zero-shot SPICE | Few-shot CIDEr | Few-shot SPICE |
| --- | --- | --- | --- | --- | --- |
| VL-T5no-vqa | 224M | 4.9 | 2.0 | 43.0 | 10.8 |
| SimVLMhuge | - | 102.3 | 22.1 | - | - |
| FewVLMbase | 224M | 84.5 | 16.2 | 98.7 | 18.9 |
| FewVLMlarge | 740M | 92.1 | 17.3 | 100.4 | 19.1 |
| Unified VLP (fully supervised) | 122M | - | - | 117.7 | 21.3 |
![](images/e337d0639ade0a7478c7b7fccb07876eee7037ad92d263659fb33b87bacef7c8.jpg)
Figure 7: VQAv2 results on hand-crafted prompts and the target prompt " [A]".

# A Model Architectures

Table 9 shows the model parameters of our model, FewVLM. FewVLMbase and FewVLMlarge are based on VL-T5 (Cho et al., 2021) and T5 (Raffel et al., 2020), respectively.

# B COCO Captioning

We evaluate our model on COCO captioning data. We use the Karpathy split (Karpathy and Li, 2015) for MS COCO captioning, which re-splits the train and val images into 113,287 / 5,000 / 5,000 for train / validation / test. Table 10 shows the results on COCO.

# C Prompt Study

Figures 7, 8, and 9 show the results of each prompt on VQAv2 and Flickr30k with various training sizes.

![](images/bb730c27b1c98097c2ef9a321e1d7b7c918ea7e05686aa5d04115820a660f571.jpg)
Figure 8: VQAv2 results on hand-crafted prompts and the target prompt "[A]".

![](images/789e27f7fc07e03a92aed57df7208e5acb6cf42b594c98022ac806c5a7e2b620.jpg)
Figure 9: Flickr30k results on hand-crafted prompts.

# D Effect of Pre-training Data

We pre-train our model with different datasets: MS COCO and Visual Genome (VG), and Conceptual Captions (CC). We investigate which pre-training dataset helps the downstream tasks in a few-shot manner. In Table 12, we observe that the MS COCO and VG datasets are more helpful to the downstream tasks than CC.

Table 11: Prompt templates. We test different input prompts on VQAv2. [Q] refers to the input question text. We use [A] as the target text. We append image features to the input text.
| Input prompt template | Category |
| --- | --- |
| Fill in the blank in the below sentence: [Q] | irrelevant prompts |
| Question: [Q] True or False? | irrelevant prompts |
| [Q] What color is the floor? | irrelevant prompts |
| Paraphrase this into a different question? [Q] | irrelevant prompts |
| [Q] How many are they? | irrelevant prompts |
| nezg public passed Dream [Q] | noisy tokens |
| benefic video starting garbagetap Talent summary [Q] | noisy tokens |
| gestion Bun dates youngest batteriesfeder organisationoyez [Q] | noisy tokens |
| [Q] cheferntiei geekutilisées plantingasta Pest principiIMF saddle véritable | noisy tokens |
| [Q] composant emergency laissé Klägereiniger swipe concentrateOSS/18 rewardprepaid | noisy tokens |
| [Q] A black dog is sitting on a couch. | random sentences |
| [Q] A man working at a kitchen counter in a room illuminated by sunlight. | random sentences |
| A brown purse is sitting on a green bench. [Q] | random sentences |
| A television that is sitting next to signs. [Q] | random sentences |
| [Q] A woman is wearing white pants. | random sentences |
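Each template above is a plain string with a [Q] slot for the input question (the target side uses an [A] slot for the answer), so instantiating a training or evaluation example is a simple substitution. A minimal sketch of that step, under our own naming (the helper `fill_prompt` and the example strings are illustrative, not the paper's code):

```python
def fill_prompt(template: str, question: str) -> str:
    """Instantiate an input prompt by substituting the question into the [Q] slot."""
    return template.replace("[Q]", question)

# An "irrelevant prompt" from Table 11, combined with a real question:
src = fill_prompt("Question: [Q] True or False?", "what color is the floor?")

# The target side simply formats the ground-truth answer via the target prompt "[A]":
tgt = "[A]".replace("[A]", "brown")
```

The same substitution applies unchanged to the "noisy tokens" and "random sentences" rows, which is what makes the prompt-robustness comparison in Section D possible.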
Table 12: Few-shot results on different pre-training datasets. We examine different pre-training datasets on each downstream task.
| Dataset | VQAv2 | GQA | Flickr30k |
| --- | --- | --- | --- |
| MS COCO, VG | 48.2 | 32.2 | 32.6 |
| Conceptual Captions | 36.7 | 25.9 | 22.3 |
\ No newline at end of file diff --git a/agoodpromptisworthmillionsofparameterslowresourcepromptbasedlearningforvisionlanguagemodels/images.zip b/agoodpromptisworthmillionsofparameterslowresourcepromptbasedlearningforvisionlanguagemodels/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..10407d02ad66540e8c31cc817656cd2b204a3f82 --- /dev/null +++ b/agoodpromptisworthmillionsofparameterslowresourcepromptbasedlearningforvisionlanguagemodels/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:edbca908790eabd360c64bcc6b345dc943f5d36e3c90ae3572045b091404ad7f +size 602228 diff --git a/agoodpromptisworthmillionsofparameterslowresourcepromptbasedlearningforvisionlanguagemodels/layout.json b/agoodpromptisworthmillionsofparameterslowresourcepromptbasedlearningforvisionlanguagemodels/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..a10c9d580edf3117eb5c23a9dd7fe1d95bf29c79 --- /dev/null +++ b/agoodpromptisworthmillionsofparameterslowresourcepromptbasedlearningforvisionlanguagemodels/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:341fab4d8ae5d119fd5d3bd6f11183b8fec360d63f4cfcd41d68a9bb2b0e58c3 +size 354285 diff --git a/alephbertlanguagemodelpretrainingandevaluationfromsubwordtosentencelevel/f17f6ca7-e5d8-4253-9274-29324d202ebb_content_list.json b/alephbertlanguagemodelpretrainingandevaluationfromsubwordtosentencelevel/f17f6ca7-e5d8-4253-9274-29324d202ebb_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..c5bf741bd77e589fb95c0e2b1c034efae32a5c9b --- /dev/null +++ b/alephbertlanguagemodelpretrainingandevaluationfromsubwordtosentencelevel/f17f6ca7-e5d8-4253-9274-29324d202ebb_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:27f5e54effd44b2817bf3bda0b795281587ce49d954f20d3d459a6b4750b75d3 +size 79407 diff --git 
a/alephbertlanguagemodelpretrainingandevaluationfromsubwordtosentencelevel/f17f6ca7-e5d8-4253-9274-29324d202ebb_model.json b/alephbertlanguagemodelpretrainingandevaluationfromsubwordtosentencelevel/f17f6ca7-e5d8-4253-9274-29324d202ebb_model.json new file mode 100644 index 0000000000000000000000000000000000000000..8f3da6381881c64df3ae9cc632e4029943be0ceb --- /dev/null +++ b/alephbertlanguagemodelpretrainingandevaluationfromsubwordtosentencelevel/f17f6ca7-e5d8-4253-9274-29324d202ebb_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b27c94dfa58c9d17ec92ba4798a8f10359de09b394e14cd4f8b0d0551fd53826 +size 94764 diff --git a/alephbertlanguagemodelpretrainingandevaluationfromsubwordtosentencelevel/f17f6ca7-e5d8-4253-9274-29324d202ebb_origin.pdf b/alephbertlanguagemodelpretrainingandevaluationfromsubwordtosentencelevel/f17f6ca7-e5d8-4253-9274-29324d202ebb_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3571cbf67fb726230009ecc11ac4f7d88ae785ac --- /dev/null +++ b/alephbertlanguagemodelpretrainingandevaluationfromsubwordtosentencelevel/f17f6ca7-e5d8-4253-9274-29324d202ebb_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c08d944c79a9cf978bb5177e9cd05127723b8cbb18d52a0bc0ca203829b59a09 +size 364493 diff --git a/alephbertlanguagemodelpretrainingandevaluationfromsubwordtosentencelevel/full.md b/alephbertlanguagemodelpretrainingandevaluationfromsubwordtosentencelevel/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a93c77f7f6ec46b83153f6dd12a08b57bd476f51 --- /dev/null +++ b/alephbertlanguagemodelpretrainingandevaluationfromsubwordtosentencelevel/full.md @@ -0,0 +1,269 @@ +# AlephBERT: Language Model Pre-training and Evaluation from Sub-Word to Sentence Level + +Amit Seker, Elron Bandel, Dan Bareket, Idan Brusilovsky, Refael Shaked Greenfeld, Reut Tsarfaty + +Department of Computer Science, Bar Ilan University, Ramat-Gan, Israel + 
{aseker00,elronbandel,DBareket,brusli1,shakedgreenfeld,reut.tsarfaty}@gmail.com

# Abstract

Large Pre-trained Language Models (PLMs) have become ubiquitous in the development of language understanding technology and lie at the heart of many artificial intelligence advances. While advances reported for English using PLMs are unprecedented, reported advances using PLMs for Hebrew are few and far between. The problem is twofold. First, so far, Hebrew resources for training large language models are not of the same magnitude as their English counterparts. Second, most benchmarks available to evaluate progress in Hebrew NLP require morphological boundaries which are not available in the output of PLMs. In this work we remedy both aspects. We present AlephBERT, a large PLM for Modern Hebrew, trained on a larger vocabulary and a larger dataset than any Hebrew PLM before. Moreover, we introduce a novel neural architecture that recovers the morphological segments encoded in contextualized embedding vectors. Based on this new morphological component we offer an evaluation suite consisting of multiple tasks and benchmarks that cover sentence-level, word-level and sub-word-level analyses. On all tasks, AlephBERT obtains state-of-the-art results, surpassing contemporary Hebrew models. We make our AlephBERT model, the morphological extraction component, and the Hebrew evaluation suite publicly available, for future investigations and evaluations of Hebrew PLMs.
# 1 Introduction

Contextualized word representations provided by models such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), GPT3 (Brown et al., 2020), T5 (Raffel et al., 2020) and more, were shown in recent years to be a critical component for obtaining state-of-the-art performance on a wide range of Natural Language Processing (NLP) tasks, from surface syntactic tasks such as tagging and parsing, to downstream semantic tasks such as question answering, information extraction and text summarization.

While advances reported for English using such models are unprecedented, previously reported results using PLMs in Modern Hebrew are far from satisfactory. Specifically, the BERT-based Hebrew section of multilingual BERT (Devlin et al., 2019) (henceforth, mBERT) did not provide a boost in performance similar to that observed for the English section of mBERT. In fact, for several reported tasks, the results of the mBERT model are on a par with pre-neural models or neural models based on non-contextual embeddings (Tsarfaty et al., 2020; Klein and Tsarfaty, 2020). An additional Hebrew BERT-based model, HeBERT (Chriqui and Yahav, 2021), has been recently released, yet without empirical evidence of performance improvements on key components of the Hebrew NLP pipeline.

The challenge of developing PLMs for morphologically rich and medium-resourced languages such as Modern Hebrew is twofold. First, contextualized word representations are obtained by pre-training a large language model on massive quantities of unlabeled text. In Hebrew, the size of published texts available for training is relatively small. To wit, the Hebrew Wikipedia (300K articles) used for training mBERT is orders of magnitude smaller than the English Wikipedia (6M articles).
Second, commonly accepted benchmarks for evaluating Hebrew models, via Morpho-Syntactic Tagging and Parsing (Sadde et al., 2018) or Named Entity Recognition (Bareket and Tsarfaty, 2020), require decomposition of words into morphemes,$^{1}$ which are distinct from the sub-words (a.k.a. word-pieces) provided by standard PLMs. Such morphemes are as of yet not readily available in the PLMs' output embeddings.

![](images/10455ac3b5f3c1a9cabb078bb75e8bd202aae35615c62aa9c7b42f14b3d7e947.jpg)
Figure 1: PLM Morphological Extraction Pipeline. The two-word phrase “לבית הלבן”, transliterated as “lbit hlbn”, is mapped to word-pieces which are consumed by a PLM to generate contextualized vectors and extract the sub-word morphological units. In this example the WordPiece tokenizer splits the first word, “lbit”, into two pieces while leaving the second word, “hlbn”, intact. Consequently, AlephBERT generates 3 embedded vectors; the vectors associated with the split word pieces are averaged to form a single contextualized vector. Finally, the resulting two word vectors are used by the Morphological Extraction Model that generates the disambiguated morphological segments.

![](images/047d155eb1cfc1d67fbd17edab33908feace0b2a79a9ae2d737755828285c381.jpg)

![](images/ae50989e19ec562cc0919b60c7774cb4d70d82f27feade058f0e4363f3b3b86a.jpg)

Evaluating BERT-based models on morpheme-level tasks is thus non-trivial due to the mismatch between the sub-word tokens used as input units by the PLMs and the sub-word morphological units needed for evaluation. PLMs employ sub-word tokenization mechanisms such as WordPiece or Byte-Pair Encoding (BPE) for the purpose of minimizing out-of-vocabulary words (Sennrich et al., 2016). These sub-word tokens are generated in a pre-processing step, without utilization of any linguistic information, and passed as input to the PLM. Crucially, such word-pieces do not reflect morphological units.
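The averaging step from Figure 1 (several word-piece vectors collapsed into one vector per space-delimited token) can be sketched in a few lines of numpy. This is our own illustration of the idea, not the paper's code; the array names, shapes, and the piece-to-token mapping are assumptions:

```python
import numpy as np

def token_vectors(piece_vecs, piece_to_token):
    """Average contextualized word-piece vectors into one vector per
    space-delimited token (every token is assumed to own >= 1 piece).

    piece_vecs:     (num_pieces, hidden) array produced by the PLM.
    piece_to_token: for each piece, the index of the token it belongs to.
    """
    num_tokens = max(piece_to_token) + 1
    sums = np.zeros((num_tokens, piece_vecs.shape[1]))
    counts = np.zeros(num_tokens)
    for vec, tok in zip(piece_vecs, piece_to_token):
        sums[tok] += vec
        counts[tok] += 1
    return sums / counts[:, None]

# Figure 1's example shape: "lbit" split into 2 pieces, "hlbn" kept whole,
# so 3 piece vectors collapse into 2 token vectors.
token_vecs = token_vectors(np.arange(6.0).reshape(3, 2), [0, 0, 1])
```

The resulting per-token vectors are what the morphological extraction model of Section 4 consumes.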
Extracting morphological units from the contextualized vectors provided by PLMs is challenging yet necessary in order to enable morphological-level evaluation of Hebrew PLMs on standard benchmarks.

In this paper we introduce AlephBERT, a Hebrew PLM trained on more data and a larger vocabulary than any Hebrew PLM before. Moreover, we propose a novel architecture that extracts the morphological sub-word units implicitly encoded in the contextualized vectors outputted by PLMs. Using AlephBERT and the proposed morphological extraction model we enable evaluation on all existing Hebrew benchmarks. We thus present a processing and evaluation pipeline tailored to fit Morphologically Rich Languages (MRLs), i.e., covering sentence-level, word-level and, most importantly, sub-word morphological-level tasks (Segmentation, Part-of-Speech Tagging, full Morphological Tagging, Dependency Parsing, Named Entity Recognition (NER) and Sentiment Analysis), and present new and improved SOTA for Modern Hebrew on all of these tasks.

# 2 Previous Work

Contextualized word embedding vectors are a major driver for improved performance of deep learning models on many Natural Language Understanding (NLU) tasks. Initially, ELMo (Peters et al., 2018) and ULMFiT (Howard and Ruder, 2018) introduced contextualized word embedding frameworks by training LSTM-based models on massive amounts of text. The linguistic quality encoded in these models was demonstrated over 6 tasks: Question Answering, Textual Entailment, Semantic Role Labeling, Coreference Resolution, Named Entity Extraction, and Sentiment Analysis. The next big leap was obtained with the introduction of the GPT-1 framework by Radford and Sutskever (2018). Instead of using LSTM layers, GPT is based on 12 layers of Transformer decoders, with each decoder layer composed of a 768-dimensional feed-forward layer and 12 self-attention heads. Devlin et al.
(2019) followed along the same lines and implemented Bidirectional Encoder Representations from Transformers, or BERT in short. BERT attends to the input tokens in both forward and backward directions while optimizing Masked Language Model and Next Sentence Prediction objectives.

BERT Benchmarks An integral part of developing various PLMs is providing NLU multi-task benchmarks used to demonstrate the linguistic abilities of new models and approaches. English BERT models are evaluated on 3 standard major benchmarks. The Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) is used for testing paragraph-level reading comprehension abilities. Wang et al. (2018) selected a diverse and relatively hard set of sentence and sentence-pair tasks which comprise the General Language Understanding Evaluation (GLUE) benchmark. The SWAG (Situations With Adversarial Generations) dataset (Zellers et al., 2018) presents models with partial descriptions of grounded situations to see if they can consistently predict subsequent scenarios, thus indicating abilities of commonsense reasoning.

When evaluating Hebrew PLMs, one of the key pitfalls is that there are no Hebrew versions of these benchmarks. Furthermore, none of the suggested benchmarks account for examining the capacity of PLMs for encoding the word-internal morphological structures which are inherent in MRLs. In this work we enable a generic morphological-level evaluation pipeline that is suited for PLMs of MRLs.

Multilingual vs. Monolingual BERT Devlin et al. (2019) produced 2 BERT models, for English and Chinese. To support other languages, they trained a multilingual BERT (mBERT) model combining texts covering over 100 languages, in the hope of benefiting low-resource languages with the linguistic information obtained from languages with larger datasets. In reality, however, mBERT performance on specific languages has not been as successful as for English.
Consequently, several research efforts focused on building monolingual BERT models as well as providing language-specific evaluation benchmarks. Martin et al. (2020) trained CamemBERT, a French BERT model evaluated on syntactic and semantic tasks in addition to natural language inference tasks. Rybak et al. (2020) trained HerBERT, a BERT PLM for Polish. They evaluated it on a diverse set of existing NLU benchmarks as well as a new dataset for sentiment analysis in the e-commerce domain. Polignano et al. (2019) created AlBERTo, a BERT model for Italian, using a massive tweet collection. They tested it on several NLU tasks: subjectivity, polarity (sentiment) and irony detection in tweets. In order to obtain a large enough training corpus in low-resource languages, such as Finnish (Virtanen et al., 2019) and Persian (Farahani et al., 2020), a great deal of effort went into filtering and cleaning text samples obtained from web crawls.

BERT for MRLs Languages with rich morphology introduce another challenge involving the identification and extraction of sub-word morphological information. In many MRLs words are composed of sub-word morphological units, with each unit acting as a single syntactic unit bearing a single POS tag (mimicking 'words' in English). Antoun et al. (2020) addressed this for Arabic, a Semitic MRL, by pre-processing the training data using a morphological segmenter, producing morphological segments to be used for training AraBERT instead of the actual words. By doing so, they were able to produce output vectors that correspond to morphological segments rather than the original space-delimited word-tokens. However, this approach requires the application of the same segmenter at inference time as well, and like any pipeline approach, this setup is susceptible to error propagation. This risk is magnified as words in MRLs may be morphologically ambiguous, and the predicted segments might not represent the correct interpretation of the words. As a result, the quality of the PLM depends on the accuracy achieved by the segmenting component. A particular novelty of this work is that we make no changes to the input, letting the PLM encode morphological information associated with complete Hebrew tokens. Instead, we transform the resulting contextualized word vectors into morphological-level segments via a novel neural architecture, which we discuss shortly.

Table 1: Corpora Size Comparison: Resource-savvy languages vs. Hebrew.

| Language | Oscar (duped) Size | Wikipedia Articles |
| --- | --- | --- |
| English | 2.3T | 6,282,774 |
| Russian | 1.2T | 1,713,164 |
| Chinese | 508G | 1,188,715 |
| French | 282G | 2,316,002 |
| Arabic | 82G | 1,109,879 |
| Hebrew | 20G | 292,201 |

Evaluating PLMs for MRLs Across all of the above-mentioned language-specific PLMs, evaluation was performed on the word, sentence or paragraph level. None examined the capacity of PLMs to encode sub-word, morphological-level information, which we focus on in this work. Sahin et al. (2019) probed various information types encoded in embedded word vectors. Similarly to us, they focused on languages with rich morphology where linguistic signals are encoded at the morphological, sub-word level. Their work is more about explainability, showing a high positive correlation of probing tasks to the downstream tasks, especially for morphologically rich languages. Unlike us, they assume a single POS tag and set of features per word in their probing tasks. In Hebrew, Arabic and other MRLs, tokens may carry multiple POS tags per word, and are required to be segmented for further processing.
We provide a framework that extracts sub-word morphological units from contextualized word vectors, enabling evaluation of PLMs on morphologically aware datasets where words can carry multiple POS tags and feature bundles.
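Concretely, in such a morphologically aware dataset a single raw token is paired with a list of (segment, tag) analyses rather than one tag. A minimal illustration in Python, using the transliterated tokens from Figure 1; the specific tag assignments here are our own illustrative choices, not the paper's annotation scheme:

```python
# Each space-delimited token maps to a list of (segment, POS) pairs;
# a full treebank entry would also attach a feature bundle per segment.
analyses = {
    "lbit": [("l", "ADP"), ("bit", "NOUN")],   # "to (the) house"
    "hlbn": [("h", "DET"), ("lbn", "ADJ")],    # "the white"
}

# Morpheme-level evaluation therefore iterates over segments, not raw tokens:
segments = [seg for tok in ["lbit", "hlbn"] for seg, _pos in analyses[tok]]
```

This is exactly the granularity mismatch that word-level probing setups with one tag per word cannot express.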
| Corpus | File Size | Sentences | Words |
| --- | --- | --- | --- |
| Oscar (deduped) | 9.8GB | 20.9M | 1,043M |
| Twitter | 6.9GB | 71.5M | 774M |
| Wikipedia | 1.1GB | 6.3M | 127M |
| Total | 17.9GB | 98.7M | 1.9B |
# 3 AlephBERT Pre-Training

Data The PLM termed AlephBERT that we provide herein is trained on a larger dataset and a larger vocabulary than any Hebrew BERT instantiation before. The data we train on is listed in Table 2. Concretely, we employ the following datasets for pre-training: (i) Oscar: the deduplicated Hebrew portion extracted from Common Crawl via language classification, filtering and cleaning (Ortiz Suárez et al., 2020). (ii) Wikipedia: texts from all of Hebrew Wikipedia, extracted using Attardi (2015). (iii) Twitter: Hebrew tweets collected between 2014-09-28 and 2018-03-07. We removed markers (“RT:”, “@” user mentions and URLs), and eliminated duplicates. For data statistics, see Table 2.

The Hebrew portions of Oscar and Wikipedia provide us with a training-set size orders of magnitude smaller compared with resource-savvy languages, as shown in Table 1. In order to build a strong PLM we need a considerable boost in the amount of sentences the PLM can learn from, which in our case comes from massive amounts of tweets added to the training set. We acknowledge the potential inherent concerns associated with this data source (population bias, behavior patterns, bots masquerading as humans, etc.) and note that we have not made any explicit attempt to identify these cases. Honoring ethical and legal constraints, we have not manually analyzed nor published this data source. While the free-form language expressed in tweets might differ significantly from the text found in Oscar and Wikipedia, the sheer volume of tweets helps us close the resource gap substantially with minimal effort.$^{3}$

Model We used the Transformers training framework of Huggingface (Wolf et al., 2020) and trained two different models: a small model with 6 hidden layers learned from the Oscar portion of our dataset, and a base model with 12 hidden layers which was trained on the entire dataset.
The processing units used are wordpieces, generated by training BERT tokenizers over the respective datasets with a vocabulary size of 52K in both cases. Following the work on RoBERTa (Liu et al., 2019), we optimize AlephBERT with a masked-token prediction loss. We deploy the default masking configuration, where $15\%$ of word-piece tokens are masked. In $80\%$ of the cases, they are replaced by [MASK]; in $10\%$ of the cases, they are replaced by a random token; and in the remaining cases, the masked tokens are left as is.

Operation To optimize GPU utilization we split the dataset into 4 chunks based on the number of tokens in a sentence; this allows us to increase batch sizes and dramatically shorten training time.

Table 2: AlephBERT's Training Data.
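The 15% / 80-10-10 masking configuration described above can be sketched in a few lines. This is an illustrative sketch, not the authors' training code; the `mask_id` value, the `-100` ignore-label convention, and the explicit seeding are our assumptions.

```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mask_prob=0.15, seed=0):
    """BERT-style masking: select ~15% of positions; of those,
    80% become [MASK], 10% a random token, 10% stay unchanged."""
    rng = random.Random(seed)
    inputs = list(token_ids)
    labels = [-100] * len(token_ids)  # -100 marks positions ignored by the loss
    for i, tok in enumerate(token_ids):
        if rng.random() < mask_prob:
            labels[i] = tok  # the model must predict the original token here
            r = rng.random()
            if r < 0.8:
                inputs[i] = mask_id
            elif r < 0.9:
                inputs[i] = rng.randrange(vocab_size)
            # else: leave the token as is
    return inputs, labels
```

Only the selected positions contribute to the masked-token prediction loss; all other label entries carry the ignore value.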
| | chunk1 | chunk2 | chunk3 | chunk4 |
|---|---|---|---|---|
| max tokens | 0–32 | 32–64 | 64–128 | 128–512 |
| num sentences | 70M | 20M | 5M | 2M |
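The length-bucketing strategy in the table above can be sketched as follows. The bucket boundaries are taken from the table; treating each boundary as an inclusive upper bound is our reading of it, and the function names are ours.

```python
def assign_chunk(num_tokens):
    """Assign a sentence to one of four length buckets
    (boundaries from the chunk table; 512 is BERT's maximum input length)."""
    if num_tokens <= 32:
        return "chunk1"
    if num_tokens <= 64:
        return "chunk2"
    if num_tokens <= 128:
        return "chunk3"
    return "chunk4"

def bucket_sentences(tokenized_sentences):
    """Group tokenized sentences into the four length buckets."""
    buckets = {f"chunk{i}": [] for i in range(1, 5)}
    for sent in tokenized_sentences:
        buckets[assign_chunk(len(sent))].append(sent)
    return buckets
```

Batching sentences of similar length reduces padding, which is what allows the larger batch sizes mentioned above.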
We trained for 5 epochs with learning rate 1e-4, followed by an additional 5 epochs with learning rate 5e-5, for a total of 10 epochs. We trained AlephBERTbase over the entire dataset on an NVidia DGX server with 8 V100 GPUs, which took 8 days. AlephBERTsmall was trained over the Oscar portion only, using 4 GTX 2080ti GPUs, taking 5 days in total.

# 4 The Morphological Extraction Model

Modern Hebrew is a Semitic language with rich morphology and complex orthography. As a result, the basic processing units in the language are typically smaller than raw space-delimited tokens. Subsequently, most standard evaluation tasks require knowledge of the internal morphological boundaries within the raw tokens. To accommodate this granularity requirement we developed a neural model designed to produce the disambiguated morphological segments for each token in context. These linguistic segmentations are distinct from the word-pieces employed by the PLM.

In the morphological extraction neural model, each input token is represented by (one or more) contextualized word-vectors produced by the PLM. Each word-piece token is associated with a vector, and for each space-delimited token, we average the word-piece vectors. We feed the resulting vector into a seq2seq model that encodes the surface token as a sequence of characters using a BiLSTM, followed by a decoder that generates an output sequence of characters, using space as a special symbol signaling morphological boundaries.
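The word-piece pooling step described above (averaging the contextualized word-piece vectors of each space-delimited token) can be sketched as follows; this is a minimal illustration, not the authors' implementation, and the function name is ours.

```python
from collections import defaultdict

def word_vectors(wordpiece_vectors, word_ids):
    """Average contextualized word-piece vectors per space-delimited word.
    word_ids[i] is the index of the word that word-piece i belongs to."""
    groups = defaultdict(list)
    for vec, wid in zip(wordpiece_vectors, word_ids):
        groups[wid].append(vec)
    # dimension-wise mean over each word's pieces, in word order
    return [
        [sum(dims) / len(vecs) for dims in zip(*vecs)]
        for _, vecs in sorted(groups.items())
    ]
```

Each resulting word vector then initializes the BiLSTM encoder over the word's characters, as described in the paragraph above.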
| Index | 5 | 4 | 3 | 2 | 1 |
|---|---|---|---|---|---|
| Raw input | lbit hlbn (לבית הלבן) | | | | |
| Space-delimited words | hlbn | hlbn | lbit | lbit | lbit |
| Segmentation | lbn (white) | h (the) | bit (house) | h (the) | l (to) |
| POS | ADJ | DET | NOUN | DET | ADP |
| Morphology | Gender=Masc\|Number=Sing | PronType=Art | Gender=Masc\|Number=Sing | PronType=Art | - |
| Dependencies | 3/amod | 5/det | 1/obj | 3/def | 0/ROOT |
| Word-level NER | E-ORG | E-ORG | B-ORG | B-ORG | B-ORG |
| Morpheme-level NER | E-ORG | I-ORG | I-ORG | B-ORG | O |
Table 3: Illustration of Evaluated Word-Based and Morpheme-Based Downstream Tasks. The two-word input phrase, transliterated as "lbit hlbn" (to the White House), decomposes into five morphological segments ("to-the-house the-white"). The Hebrew text goes from right to left.

![](images/12cf71b7252755410e10df62603e7140d96d26a6e939f0f29e03fece7cee8a89.jpg)
Figure 2: Illustration of the Morphological Extraction Model. The embedded vectors associated with the word-pieces (v1 and v2, representing word-piece vectors generated in Figure 1) are combined (averaged) to produce a single word context vector. This context vector initializes the hidden (forward and backward) state of a BiLSTM that encodes the characters of the original word. The decoder LSTM outputs a sequence of characters, where a special empty symbol indicates a morphological segment boundary. In the multi-task setup, a fully connected linear layer is used to predict a label whenever a segment boundary is detected.

For tasks involving both segments and labels (Part-of-Speech Tagging, Morphological-Features Tagging, Named-Entity Recognition) we expand this network in a multi-task learning setup; when generating an end-of-segment (space) symbol, the model also predicts a task label, and we combine the segmentation and labeling losses. The complete morphological extraction architecture is illustrated in Figure 2.

# 5 Experimental Setup

Goal In order to empirically gauge the effect of model size and data quantity on the quality of the language model, we compare the performance of AlephBERT (both small and base) with all existing Hebrew BERT instantiations. In this section, we detail the tasks and evaluation metrics. In the next section, we present and analyze the results.

# 5.1 Sentence-Based Modeling

Sentiment Analysis We first report on a sentence classification task, assigning each sentence one of three sentiment values: negative, positive, neutral.
Sentence-level predictions are achieved by directly fine-tuning the PLM using an additional sentence-classification head. The sentence-level embedding vector representation is the one associated with the special [CLS] BERT token.

We used a version of the Hebrew Facebook Sentiment dataset (henceforth FB) of Amram et al. (2018), which we corrected by removing leaked samples.4 We fine-tuned all models for 15 epochs with 5 different seeds, and report mean accuracy.

# 5.2 Word-Based Modeling

Named Entity Recognition In this setup we assume a sequence labeling task based on space-delimited word-tokens. The input comprises the sequence of words in the sentence, and the output contains BIOES tags indicating entity spans. Word-level NER predictions are achieved by directly fine-tuning the PLMs using an additional token-classification head. In cases where a word is split into multiple word-pieces by the PLM tokenizer, we follow common practice and use the first word-piece vector.

We evaluate this model on two corpora. (i) The Ben-Mordecai (BMC) corpus (Ben Mordecai and Elhadad, 2005), which contains 3294 sentences with 4600 entities and seven entity categories (Date, Location, Money, Organization, Person, Percent, Time). To remain compatible with the original work, we train and test the models on 3 different splits, as in Bareket and Tsarfaty (2020). (ii) The Named Entities and MOrphology (NEMO) corpus5 (Bareket and Tsarfaty, 2020), an extension of the SPMRL dataset with Named Entities. The NEMO corpus contains 6220 sentences with 7713 entities of nine entity types (Language, Product, Event, Facility, Geo-Political Entity, Location, Organization, Person, Work-Of-Art). We trained both models for 15 epochs with 5 different seeds and report mean F1 scores on entity spans.
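The first-word-piece convention described above (using the vector of a word's first word-piece for its NER prediction) can be sketched as follows; a minimal illustration under our assumptions, where `word_ids` follows the common tokenizer convention of `None` for special tokens.

```python
def first_wordpiece_indices(word_ids):
    """Return the index of the first word-piece of each word.
    `None` entries correspond to special tokens such as [CLS]/[SEP]."""
    seen, idxs = set(), []
    for i, wid in enumerate(word_ids):
        if wid is not None and wid not in seen:
            seen.add(wid)
            idxs.append(i)
    return idxs
```

Only the vectors at these indices are fed to the token-classification head, so each space-delimited word receives exactly one BIOES prediction.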
+ +# 5.3 Morpheme-Based Modeling + +Finally, to probe the PLM capacity to accurately predict word-internal structure, we test all models on five tasks that require knowledge of the internal morphology of raw words. The input to all these tasks is a Hebrew sentence represented as a raw sequence of space-delimited words: + +(i) Segmentation: Generating a sequence of morphological segments representing the basic processing units. These units comply with the 2-level representation of tokens defined by UD, each unit with a single POS tag. $^6$ +(ii) Part-of-Speech (POS) Tagging: Tagging each segment with a single POS. +(iii) Morphological Tagging: Tagging each segment with a single POS and a set of features. Equivalent to the AllTags evaluation defined in the CoNLL18 shared task. $^7$ +(iv) Morpheme-Based NER: Tagging each segment with a BIOES and its entity-type. +(v) Dependency Parsing: Use each segment as a node in the predicted dependency tree. + +We train and test all morphologically-aware models using two available morphologically-aware Hebrew resources: + +- The Hebrew Section of the SPMRL Task (Seddah et al., 2013). +- The Hebrew Section of the UD treebanks collection (Sadde et al., 2018) + +All models were trained for 15 epochs with 5 different seeds and we report two variants of mean F1 scores as described next. + +For tasks (i)-(iv) we use the morphological extraction model (Section 4) to extract the morphological segments of each word in context and also predict the labels via Multitask training. + +For task (iv) the NER task, we use the morphologically-annotated data files of the aforementioned SPMRL-based NEMO corpus (Bareket and Tsarfaty, 2020). 
In addition to the multi-task setup described earlier, we design another setup in which we first only segment the text, and then perform fine-tuning with a token-classification head applied directly to the PLM output for the segmented tokens (similar to the way we fine-tune the PLM for the word-based NER task described in the previous section). We acknowledge that we are fine-tuning the PLM on morphological segments the model was not originally pre-trained on; however, as we shall see shortly, this seemingly unintuitive strategy performs surprisingly well.

For task (v) we set up a dependency parsing evaluation pipeline using the standalone Hebrew parser offered by More et al. (2019) (a.k.a. YAP), which was trained to produce SPMRL dependency labels. The morphological information for each word (namely the segments and POS tags) is recovered by our morphological extraction model, and is used as input features for the YAP standalone dependency parser.

# 5.4 Morpheme-Based Evaluation Metrics

Aligned Segment The CoNLL18 Shared Task evaluation campaign8 reports scores for segmentation and POS tagging9 for all participating languages. For multi-segment words, the gold and predicted segments are aligned by their Longest Common Sub-sequence, and only matching segments are counted as true positives. We use the shared-task script to compare aligned segment and tagging scores between oracle (gold) segmentation and realistic (predicted) segmentation.

Aligned Multi-Set In addition to the CoNLL18 metrics, we compute F1 scores with a slight but important difference from the shared task, as defined by More et al. (2019) and Seker and Tsarfaty (2020). For each word, counts are based on multi-set intersections of the gold and predicted labels, ignoring the order of the segments while account
| Corpus | NEMO (NER) | BMC (NER) | FB (Sentiment) |
|---|---|---|---|
| Prev. SOTA | 77.75 | 85.22 | NA |
| mBERT | 79.07 | 87.77 | 79.07 |
| HeBERT | 81.48 | 89.41 | 81.48 |
| AlephBERTsmall | 78.69 | 89.07 | 78.69 |
| AlephBERTbase | 84.91 | 91.12 | 84.91 |
Table 4: Word-based NER F1. Previous SOTA on both corpora reported by the NEMO models of Bareket and Tsarfaty (2020). Sentiment Analysis accuracy on the corrected version of the corpus of Amram et al. (2018).

ing for the number of occurrences of each segment. Aligned mset is based on multi-set difference, which acknowledges the possible non-recovery of covert morphemes and is thus an appropriate measure of morphological accuracy.

Discussion To illustrate the difference between aligned segment and aligned mset, let us take for example the gold segmented tag sequence b/IN, h/DET, bit/NOUN and the predicted segmented tag sequence b/IN, bit/NOUN. According to aligned segment, the first segment (b/IN) is aligned and counted as a true positive; the second predicted segment (bit/NOUN) is counted as a false positive against a false negative (h/DET), while the third gold segment (bit/NOUN) is also counted as a false negative. With aligned multi-set, on the other hand, both b/IN and bit/NOUN exist in the gold and predicted sets and are counted as true positives, while h/DET is mismatched and counted as a false negative. In both cases the counts are accumulated across all words in the dataset and finally used for computing Precision, Recall and F1.

# 6 Results

Sentence-Level Task Sentiment analysis accuracy results are provided in Table 4. All BERT-based models substantially outperform the original CNN baseline reported by Amram et al. (2018). AlephBERTbase sets a new SOTA.

Word-Based Task On our two NER benchmarks, we report F1 scores of the word-based fine-tuned models in Table 4. While we see noticeable improvements for the mBERT and HeBERT variants over the current SOTA, the most significant increase is achieved by AlephBERT $_{\text{base}}$ , setting a new and improved SOTA on this task.

Morpheme-Level Tasks As a particular novelty of this work, we report BERT-based results on sub
| Task | Segment | POS | Features | UAS | LAS |
|---|---|---|---|---|---|
| Prev. SOTA | NA | 90.49 | 85.98 | 75.73 | 69.41 |
| mBERT | 97.36 | 93.37 | 89.36 | 80.17 | 74.9 |
| HeBERT | 97.97 | 94.61 | 90.93 | 81.86 | 76.54 |
| AlephBERTsmall | 97.71 | 94.11 | 90.56 | 81.5 | 76.07 |
| AlephBERTbase | 98.10 | 94.90 | 91.41 | 82.07 | 76.9 |
+ +Table 5: Morpheme-Based results on the SPMRL corpus. Aligned MultiSet (mset) F1 for Segmentation, POS tags and Morphological Features - previous SOTA reported by Seker and Tsarfaty (2020) (POS) and More et al. (2019) (features). Labeled and Unlabeled Accuracy Scores for morphological-level Dependency Parsing - previous SOTA reported by More et al. (2019) (uninfused/realistic scenario) + +
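The aligned multi-set counting illustrated in the Discussion above can be sketched with Python's `Counter`, which supports multiset intersection and difference. This is a minimal sketch of the per-word counting; the function name is ours, not from a released evaluation script.

```python
from collections import Counter

def mset_counts(gold, pred):
    """Per-word TP/FP/FN counts based on multiset intersection of
    (segment, tag) pairs, ignoring segment order but counting occurrences."""
    g, p = Counter(gold), Counter(pred)
    tp = sum((g & p).values())  # pairs present in both, with multiplicity
    fp = sum((p - g).values())  # predicted pairs with no gold match
    fn = sum((g - p).values())  # gold pairs with no predicted match
    return tp, fp, fn

# The Discussion's example: gold b/IN, h/DET, bit/NOUN vs. predicted b/IN, bit/NOUN
# yields 2 true positives and 1 false negative under aligned mset.
```

Accumulating these counts over all words in the dataset gives the Precision, Recall and F1 reported above.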
| Task | Segment | POS | Features |
|---|---|---|---|
| Prev. SOTA | NA | 94.02 | NA |
| mBERT | 97.70 | 94.76 | 90.98 |
| HeBERT | 98.05 | 96.07 | 92.53 |
| AlephBERTsmall | 97.86 | 95.58 | 92.06 |
| AlephBERTbase | 98.20 | 96.20 | 93.05 |
Table 6: Morpheme-Based Aligned MultiSet (mset) F1 results on the UD corpus. Previous SOTA reported by Seker and Tsarfaty (2020) (POS).

word (segment-level) information. Specifically, we evaluate word segmentation, POS tags, morphological features, NER and dependencies against morphologically-labeled test sets.

In all cases, we use raw space-delimited tokens as input and produce morphological segments with our morphological extraction model.

Table 5 presents evaluation results for the SPMRL dataset, compared against the previous SOTA of More et al. (2019). For segmentation, POS tagging, and morphological tagging we report aligned multiset F1 scores. The BERT-based segmentation scores are similar, all in the high range of 97-98 F1, which is hard to improve further.[10]

For POS tagging and morphological features, all BERT-based models considerably outperform the previous SOTA. For syntactic dependencies we report labeled and unlabeled accuracy scores of the trees generated by YAP (More et al., 2019) on our predicted segmentation. Here we see an impressive improvement over the previous SOTA of a joint morpho-syntactic framework. This confirms that morphological errors early in the pipeline negatively impact downstream tasks, and highlights the importance of morphologically-driven benchmarks
| Task | Segment | POS | Features |
|---|---|---|---|
| Prev. SOTA | 96.03 | 93.75 | 91.24 |
| mBERT | 97.17 | 94.27 | 90.51 |
| HeBERT | 97.54 | 95.60 | 92.15 |
| AlephBERTsmall | 97.31 | 95.13 | 91.65 |
| AlephBERTbase | 97.70 | 95.84 | 92.71 |
+ +Table 7: Morpheme-Based Aligned (CoNLL shared task) F1 on the UD corpus. Previous SOTA reported by Minh Van Nguyen and Nguyen (2021) + +
| Architecture | Pipeline (Oracle) Seg | Pipeline (Oracle) NER | Pipeline (Predicted) Seg | Pipeline (Predicted) NER | MultiTask Seg | MultiTask NER |
|---|---|---|---|---|---|---|
| Prev. SOTA | 100.00 | 79.10 | 95.15 | 69.52 | 97.05 | 77.11 |
| mBERT | 100.00 | 77.92 | 97.68 | 72.72 | 97.24 | 72.97 |
| HeBERT | 100.00 | 82 | 98.15 | 76.74 | 97.92 | 74.86 |
| AlephBERTsmall | 100.00 | 79.44 | 97.78 | 73.08 | 97.74 | 72.46 |
| AlephBERTbase | 100.00 | 83.94 | 98.29 | 80.15 | 98.19 | 79.15 |
Table 8: Morpheme-Based NER F1 on the NEMO corpus. Previous SOTA reported by Bareket and Tsarfaty (2020) for the Pipeline (Oracle), Pipeline (Predicted) and Hybrid (almost-joint) scenarios, respectively.

as an integral part of PLM evaluation for MRLs.

All in all, we see a repeating trend placing AlephBERT $_{\text{base}}$ first on all morphological tasks, indicating that the depth of the model and a larger pre-training dataset improve the ability of the PLM to capture word-internal structure. These trends are replicated on the UD Hebrew corpus for two different evaluation metrics: the Aligned MultiSet F1 scores as in previous work on Hebrew (More et al., 2019; Seker and Tsarfaty, 2020), and the Aligned Segment F1 scores as described in the UD shared task (Zeman et al., 2018), reported in Tables 6 and 7 respectively.

Morpheme-Level NER Results Earlier in this section we considered NER a word-level task that simply requires fine-tuning at the word level. However, this setup is not accurate enough and is less useful for downstream tasks, since the exact entity boundaries are often word-internal (Bareket and Tsarfaty, 2020). We hence report morpheme-based NER evaluation, respecting the exact boundaries of entity mentions.

To obtain morpheme-based labeled spans of Named Entities, we can either employ a pipeline, first predicting segmentation and then applying a fine-tuned labeling model directly on the segments, or employ a multi-task model that predicts NER labels while performing segmentation.

Table 8 presents segmentation and NER results for 3 different scenarios: (i) a pipeline assuming gold segmentation, (ii) a pipeline assuming predicted segmentation, (iii) segmentation and NER labels obtained jointly in a multi-task setup. $\mathrm{AlephBERT}_{\mathrm{base}}$ consistently scores highest in all 3.
Looking at the Pipeline-Predicted scores, there is a clear correlation between a higher segmentation quality of a PLM and its ability to produce better NER results. Moreover, the differences in NER scores are considerable (unlike the subtle differences in segmentation, POS and morphological-feature scores) and draw our attention to the relationship between the size of the PLM, the size of the pre-training data, and the quality of the final NER models. Specifically, HeBERT and $\mathrm{AlephBERT}_{\mathrm{small}}$ were both pre-trained on similar datasets with comparable vocabulary sizes (HeBERT with 30K and AlephBERTsmall with 52K), but HeBERT, with its 12 hidden layers, performs better than $\mathrm{AlephBERT}_{\mathrm{small}}$ , which is composed of only 6 hidden layers. It thus appears that semantic information is learned in those deeper layers, helping both in discriminating entities and in improving the morphological segmentation capacity.

In addition, comparing AlephBERTbase and HeBERT, we note that they are both modeled with the same 12-hidden-layer architecture; the only differences between them are the size of their vocabularies (30K vs. 52K, respectively) and the size of the training data (Oscar-Wikipedia vs. Oscar-Wikipedia-Tweets). The improvements exhibited by AlephBERTbase over HeBERT suggest that large amounts of training data and a larger vocabulary are invaluable. By exposing AlephBERTbase to a substantially larger amount of text we increased the ability of the PLM to encode syntactic and semantic signals associated with Named Entities.

Our NER experiments further suggest that a pipeline composed of our accurate morphological segmentation model followed by AlephBERTbase with a token-classification head is the best strategy for generating morphologically-aware NER labels. Finally, we observe that while AlephBERT excels at morphosyntactic tasks, on tasks with a more semantic flavor there is room for improvement.
# 7 Conclusion

Modern Hebrew, a morphologically rich and medium-resourced language, has long suffered from a gap in the resources available for NLP applications and a lower level of empirical results than observed in resource-rich languages. This work provides a first step in remedying the situation, by making available a large Hebrew PLM, named AlephBERT, with a larger vocabulary and a larger training set than any Hebrew PLM before, and with clear evidence as to its empirical advantages. Crucially, we augment the PLM with a morphological disambiguation component that matches the input granularity of the downstream tasks. Our system does not presuppose Hebrew-specific linguistic rules, and can be transparently applied to any language for which 2-level segmentation data (i.e., the standard UD benchmarks) exists. $\mathrm{AlephBERT_{base}}$ obtains state-of-the-art results on morphological segmentation, POS tagging, morphological feature extraction, dependency parsing, named-entity recognition, and sentiment analysis, outperforming all existing Hebrew PLMs. Our proposed morphologically-driven pipeline11 serves as a solid foundation for future evaluation of Hebrew PLMs and of MRLs in general.

# 8 Ethical Statement

We follow Bender and Friedman (2018) regarding professional practice for NLP technology and address ethical issues that result from the use of data in the development of the models in our work.

Pre-Training Data. The two initial data sources we used to pre-train the language models are Oscar and Wikipedia. In using Wikipedia and Oscar we followed standard language-model training efforts, such as BERT and RoBERTa (Devlin et al., 2019; Liu et al., 2019). We use the language-specific Oscar data according to the terms specified in Ortiz Suárez et al. (2020), and we extract texts from language-specific Wikipedia dumps.
On top of that, a large portion of the data used to train AlephBERT originates from the Twitter sample stream.[12] As shown in Table 2, this data set includes 70M Hebrew tweets collected over a period of 4 years (2014 to 2018). We acknowledge the potential concerns inherently associated with Twitter data (population bias, behavior patterns, bots masquerading as humans, etc.) and note that we have not made any explicit attempt to identify these cases. We only used the text field of the tweets and completely discarded any other information included in the stream (such as identities, followers, structure of threads, date of publication, etc.). We have not made any effort to identify or filter out any samples based on user properties such as age, gender and location, nor have we made any effort to identify content characteristics such as genre or topic. To reduce exposure of private information we cleaned all user mentions and URLs from the text. Honoring ethical and legal constraints, we have not manually analyzed nor published this data source. While the free-form language expressed in tweets might differ significantly from the text found in Oscar/Wikipedia, the sheer volume of tweets helps us close the substantial resource gap.

Training and Evaluation Benchmarks. The SPMRL (Seddah et al., 2013) and UD (Sadde et al., 2018) datasets we used for evaluating segmentation, tagging and parsing served both to train our morphological extraction model and to provide the test data for morphological-level tasks. Both datasets are publicly available and widely used in research and industry.

The NEMO corpus (Bareket and Tsarfaty, 2020) used to train and evaluate word- and morpheme-level NER is an extension of the SPMRL dataset augmented with entities and follows the same license terms.
The BMC dataset used for training and evaluating word-level NER was created and published by Ben Mordecai and Elhadad (2005) and is publicly available for NER evaluation.

We used the sentiment analysis dataset of Amram et al. (2018) for training and evaluating AlephBERT on a sentence-level task, and we follow their terms of use. As mentioned, this dataset had some flaws, and we carefully describe the steps we took to fix them before using this corpus in our experiments for internal evaluation purposes. We make our in-house cleaning scripts and split information publicly available.

# Acknowledgements

This research was funded by the European Research Council (ERC grant agreement no. 677352) and by a research grant from the Ministry of Science and Technology (MOST) of the Israeli Government, for which we are grateful.

# References

Adam Amram, Anat Ben-David, and Reut Tsarfaty. 2018. Representations and architectures in neu
Transactions of the Association for Computational Linguistics, 6:587-604. +Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc. +Avihay Chriqui and Inbal Yahav. 2021. Hebert & hebemo: a hebrew bert model and a tool for polarity analysis and emotion recognition. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, and Mohammad Manthouri. 2020. Parsbert: Transformer-based model for persian language understanding. + +Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328-339, Melbourne, Australia. Association for Computational Linguistics. +Stav Klein and Reut Tsarfaty. 2020. Getting the ##life out of living: How adequate are word-pieces for modelling complex morphology? 
In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, SIGMORPHON 2020, Online, July 10, 2020, pages 204-209. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. +Amir Pouran Ben Veyseh Minh Van Nguyen, Viet Lai and Thien Huu Nguyen. 2021. Trankit: A lightweight transformer-based toolkit for multilingual natural language processing. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations. +Amir More, Amit Seker, Victoria Basmova, and Reut Tsarfaty. 2019. Joint transition-based models for morpho-syntactic parsing: Parsing strategies for mrls and a case study from modern hebrew. Trans. Assoc. Comput. Linguistics, 7:33-48. +Pedro Javier Ortiz Suárez, Laurent Romary, and Benoit Sagot. 2020. A monolingual approach to contextualized word embeddings for mid-resource languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1703-1714, Online. Association for Computational Linguistics. +Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics. +Marco Polignano, Pierpaolo Basile, Marco de Gemmis, Giovanni Semeraro, and Valerio Basile. 2019. Alberto: Italian bert language understanding model for nlp challenging tasks based on tweets. +Alec Radford and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. In *arxiv*. 
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the + +limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67. +Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics. +Piotr Rybak, Robert Mroczkowski, Janusz Tracz, and Ireneusz Gawlik. 2020. KLEJ: Comprehensive benchmark for Polish language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1191-1201, Online. Association for Computational Linguistics. +Shoval Sadde, Amit Seker, and Reut Tsarfaty. 2018. The hebrew universal dependency treebank: Past present and future. In Proceedings of the Second Workshop on Universal Dependencies, UDW@EMNLP 2018, Brussels, Belgium, November 1, 2018, pages 133-143. +Gözde Gül Şahin, Clara Vania, Ilia Kuznetsov, and Iryna Gurevych. 2019. LINSPECTOR: multilingual probing tasks for word representations. CoRR, abs/1903.09442. +Djame Seddah, Reut Tsarfaty, Sandra Kübler, Marie Candito, Jinho D. Choi, Richard Farkas, Jennifer Foster, Iakes Goenaga, Koldo Gojenola Galletebeitia, Yoav Goldberg, Spence Green, Nizar Habash, Marco Kuhlmann, Wolfgang Maier, Joakim Nivre, Adam Przepiörkowski, Ryan Roth, Wolfgang Seeker, Yannick Versley, Veronika Vincze, Marcin Wolinski, Alina Wróblewska, and Éric Villemonte de la Clergerie. 2013. Overview of the SPMRL 2013 shared task: A cross-framework evaluation of parsing morphologically rich languages. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, SPMRL@EMNLP 2013, Seattle, Washington, USA, October 18, 2013, pages 146-182. 
+Amit Seker and Reut Tsarfaty. 2020. A pointer network architecture for joint morphological segmentation and tagging. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4368-4378, Online. Association for Computational Linguistics. +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics. +Reut Tsarfaty, Dan Bareket, Stav Klein, and Amit Seker. 2020. From SPMRL to NMRL: what did we learn + +(and unlearn) in a decade of parsing morphologically-rich languages (mrls)? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7396-7408. +Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, and Sampo Pyysalo. 2019. Multilingual is not enough: Bert for finnish. +Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics. +Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. 
Association for Computational Linguistics. +Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 93-104, Brussels, Belgium. Association for Computational Linguistics. +Daniel Zeman, Jan Hajic, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 shared task: Multilingual parsing from raw text to Universal Dependencies. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1-21, Brussels, Belgium. Association for Computational Linguistics. \ No newline at end of file diff --git a/alephbertlanguagemodelpretrainingandevaluationfromsubwordtosentencelevel/images.zip b/alephbertlanguagemodelpretrainingandevaluationfromsubwordtosentencelevel/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..cb06201fbc519e2b116c829019d9e277f06fa96e --- /dev/null +++ b/alephbertlanguagemodelpretrainingandevaluationfromsubwordtosentencelevel/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a6f01a8899042cc02ca54b8730afcb22eceae8f42d555403cc3d1a72a7d73a45 +size 304470 diff --git a/alephbertlanguagemodelpretrainingandevaluationfromsubwordtosentencelevel/layout.json b/alephbertlanguagemodelpretrainingandevaluationfromsubwordtosentencelevel/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b95338351bc65032a6bc9aff1fda3439e448adda --- /dev/null +++ b/alephbertlanguagemodelpretrainingandevaluationfromsubwordtosentencelevel/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:19e148b55927aeff5118adc802aa07b6ab0cd5f9e0767dc6d586b9d2666f2690 +size 297507 diff --git 
a/alignmentaugmentedconsistenttranslationformultilingualopeninformationextraction/cb2b784c-fd0a-4a1a-b069-8bfc76291381_content_list.json b/alignmentaugmentedconsistenttranslationformultilingualopeninformationextraction/cb2b784c-fd0a-4a1a-b069-8bfc76291381_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..e8a4dc05adbde8c5eac8b15bb87b1358605d6212 --- /dev/null +++ b/alignmentaugmentedconsistenttranslationformultilingualopeninformationextraction/cb2b784c-fd0a-4a1a-b069-8bfc76291381_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:16dc46a0308c7b1514211c185e58c22cbd092fe801423be8a4ca1e2e1bacdbe6 +size 104053 diff --git a/alignmentaugmentedconsistenttranslationformultilingualopeninformationextraction/cb2b784c-fd0a-4a1a-b069-8bfc76291381_model.json b/alignmentaugmentedconsistenttranslationformultilingualopeninformationextraction/cb2b784c-fd0a-4a1a-b069-8bfc76291381_model.json new file mode 100644 index 0000000000000000000000000000000000000000..c0fa0373d0720ed1ece26883ff3de2ad6ab903ec --- /dev/null +++ b/alignmentaugmentedconsistenttranslationformultilingualopeninformationextraction/cb2b784c-fd0a-4a1a-b069-8bfc76291381_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:102c246cbf23d42a0f8506b56dbbd392afa0b4e8d48537cbc9ade1c85fad0719 +size 124967 diff --git a/alignmentaugmentedconsistenttranslationformultilingualopeninformationextraction/cb2b784c-fd0a-4a1a-b069-8bfc76291381_origin.pdf b/alignmentaugmentedconsistenttranslationformultilingualopeninformationextraction/cb2b784c-fd0a-4a1a-b069-8bfc76291381_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8f1619959723e0e60b5539739470a8014060006b --- /dev/null +++ b/alignmentaugmentedconsistenttranslationformultilingualopeninformationextraction/cb2b784c-fd0a-4a1a-b069-8bfc76291381_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:2b17e1db9f38484764f00e7fe7110ee91f0f3ddf1e1414739159b2a371941541 +size 962677 diff --git a/alignmentaugmentedconsistenttranslationformultilingualopeninformationextraction/full.md b/alignmentaugmentedconsistenttranslationformultilingualopeninformationextraction/full.md new file mode 100644 index 0000000000000000000000000000000000000000..bba0cfa96babd80483516dfb4db3ab2d8cc27f33 --- /dev/null +++ b/alignmentaugmentedconsistenttranslationformultilingualopeninformationextraction/full.md @@ -0,0 +1,401 @@ +# Alignment-Augmented Consistent Translation for Multilingual Open Information Extraction + +Keshav Kolluru $^{1*}$ , Mohammed Muqeeth $^{1*}$ , Shubham Mittal $^{1}$ , Soumen Chakrabarti $^{2}$ , and Mausam $^{1}$ + +1 Indian Institute of Technology Delhi 2 Indian Institute of Technology Bombay keshav.kolluru@gmail.com, muqeeth101@gmail.com, shubhamiitd18@gmail.com soumen@cse.iitb.ac.in, mausam@cse.iitd.ac.in + +# Abstract + +Progress with supervised Open Information Extraction (OpenIE) has been primarily limited to English due to the scarcity of training data in other languages. In this paper, we explore techniques to automatically convert English text for training OpenIE systems in other languages. We introduce the Alignment-Augmented Consistent Translation (AACTRANS) model to translate English sentences and their corresponding extractions consistently with each other — with no changes to vocabulary or semantic meaning which may result from independent translations. Using the data generated with AACTRANS, we train a novel two-stage generative OpenIE model, which we call GEN2OIE, that outputs for each sentence: 1) relations in the first stage and 2) all extractions containing the relation in the second stage. GEN2OIE increases relation coverage using a training data transformation technique that is generalizable to multiple languages, in contrast to existing models that use an English-specific training loss. 
Evaluations on 5 languages — Spanish, Portuguese, Chinese, Hindi and Telugu — show that GEN2OIE with AACTRANS data outperforms prior systems by a margin of $6 - 25\%$ F1. $^{1}$

# 1 Introduction

Open Information Extraction (OpenIE) is the task of converting unstructured text to semi-structured tuples of the format ⟨subject, relation, object⟩, where these three components are textual phrases, broadly extracted from the original text (Etzioni et al., 2011). OpenIE tuples have shown utility in various downstream tasks (Mausam, 2016) like Question Answering (Fader et al., 2013; Khot et al., 2017), Machine Reading (Poon et al., 2010), Multi-Document Summarization (Christensen et al., 2014; Fan et al., 2019), Schema Induction (Balasubramanian et al., 2013), and Knowledge Base Construction (Gupta et al., 2019; Chandrahas and Talukdar, 2021).

With the widespread adoption of Deep Learning in NLP, Open Information Extraction (OpenIE) systems have gone through a paradigm shift from rule-based, statistical systems to supervised neural models. However, both types of systems have been limited to only a few languages: earlier systems required language-specific OpenIE insights, and current systems require annotated training corpora that pose a barrier, particularly for low-resource languages.

Related tasks such as Semantic Role Labeling face similar challenges in extending to multiple languages. X-SRL (Daza and Frank, 2020) addresses this by automatic translation of English sentences to the target language, followed by label projection to infer the semantic role labels in the translated sentence. However, translating the sentence alone may be insufficient for OpenIE, because the generated tuples (also referred to as extractions) can include additional words absent from the sentence or require changes to the word morphology used in the sentence. Although less prevalent in English, these characteristics need to be addressed in other languages.
The X-SRL approach may be extended so that each extraction is also automatically translated and the subject, relation, and object labels are projected from the English extractions. However, independent translation of the sentence and its extractions may introduce unwanted lexical (e.g., synonyms) or semantic (e.g., change in gender) variations between the translations, as shown in Table 1. Such translation inconsistencies in the training data lead to invalid OpenIE examples.

To maintain consistency between the translations of a sentence and its extractions, both translations must use the same words, or their morphological variants, as much as possible. Hence, we propose Alignment-Augmented Consistent Translation (AACTRANS), a seq2seq model that translates the given input text in a way that is consistent with a reference translation, by biasing the translation to use words similar to the reference. To ensure that the translations of a sentence and its extractions are consistent with each other, we use the AACTRANS model to translate each of them with the same reference. In Section 4.1, we describe the reference used in training and inference.

| **Lexical Inconsistency** | |
|---|---|
| English Sentence | The shield of Athena Parthenos, sculpted by Phideas, depicts a fallen Amazon |
| English Extraction | <s> The shield of Athena Parthenos </s> <r> depicts </r> <o> a fallen Amazon </o> |
| Spanish Sentence | El escudo de Atena Parthenos, sculptado por Phideas, representa un Amazonas fallecido |
| Spanish Extraction (Indp) | <s> El escudo de Atena Parthenos </s> <r> representa </r> <o> un Amazonas caído </o> |
| Spanish Extraction (Const) | <s> El escudo de Atena Parthenos </s> <r> representa </r> <o> un Amazonas fallecido </o> |
| **Semantic Inconsistency** | |
| English Sentence | The discovery was remarkable as the skeleton was almost identical to a modern Kuvasz |
| English Extraction | <s> skeleton </s> <r> was </r> <o> almost identical to a modern Kuvasz </o> |
| Spanish Sentence | Un descubrimiento notable porque fósil era casi identica a un Kuvasz moderno |
| Spanish Extraction (Indp) | <s> skeletó </s> <r> era </r> <o> casi identica a una Kuvasz moderna </o> |
| Spanish Extraction (Const) | <s> fósil </s> <r> era </r> <o> casi identica a un Kuvasz moderno </o> |

Table 1: OpenIE examples transferred from English to Spanish, using both Independent (Indp) and Consistent (Const) translations. Independent translation results in inconsistencies which may have the same meaning (by using synonyms, fallecido vs. caído) or may change the meaning (changing gender from male to female, moderno to moderna). Consistent translation avoids these issues, resulting in better quality of training data.

Both generation-based (Kolluru et al., 2020b) and labeling-based (Ro et al., 2020) architectures have shown competitive performance on English OpenIE. However, labeling-based models cannot naturally introduce new words or change the morphology of sentence words, as required in some languages. Therefore, we use a new generative model, GEN2OIE, that contains two stages: the first stage produces all the relations in the sentence and the second stage generates the extractions containing the given relation. We also use a training heuristic, specific to two-stage models, that increases relation coverage across multiple languages.

Our major contributions are that we:

1. introduce a novel technique for transferring data from English to other languages using the AACTRANS model and label projection,
2. propose a two-stage generative model, GEN2OIE, for training OpenIE systems in multiple languages,
3. release OpenIE evaluation datasets for two Indian languages, Hindi and Telugu, and
4.
outperform prior systems by $6 - 25\%$ in F1 over five languages. + +# 2 Related Work + +Our work is in line with the recent trend of extending IE and knowledge-based NLP systems to multiple languages. Recent works have explored distantly supervised relation extraction (Rathore et al., 2022; Bhartiya et al., 2022), knowledge-base completion (Singh et al., 2021), and fact linking (Kolluru et al., 2021). Our focus is OpenIE. + +Many of the prior OpenIE systems, both nonneural (OpenIE-4 (Pal and Mausam, 2016; Christensen et al., 2011), OpenIE-5 (Saha et al., 2017; Saha and Mausam, 2018), ClausIE (Del Corro and Gemulla, 2013)) and neural (RnnOIE (Stanovsky et al., 2018), OpenIE-6 (Kolluru et al., 2020a)) have been deployed for English. Moreover, OpenIE systems built for other languages often work only for a single language due to their reliance on language-specific resources. For example, Bassa et al. (2018); Rahat and Talebpour (2018); Romadhony et al. (2018); Guarasci et al. (2020); Papadopoulos et al. (2021) focus on German, Persian, Indonesian, Italian, and Greek, respectively. Claro et al. (2019) present the importance of and various challenges involved with building multilingual OpenIE systems. Neural models like Logician (Sun et al., 2018) and CrossOIE (Cabral et al., 2020) use language-specific training data. Reliance on manually-annotated data or language-specific resources makes it infeasible to develop systems for the plurality of languages in the world, due to the cost and effort involved. However, our automated data conversion method can handle even low-resource languages like Telugu. + +Non-neural systems such as PredPatt (White et al., 2016) and ArgOE (Gamallo and Garcia, 2015) work for multiple languages by using CoNLL-X and Universal Dependency parses respectively, to extract predicate-argument structures. Owing to their pipelined nature, their performance is below that of neural systems like Multi $^2$ OIE (Ro et al., 2020). 
Multi $^2$ OIE is a two-stage labeling model that works for English, Spanish, and Portuguese. GEN2OIE extends this 2-stage design to the generative paradigm, which allows for better modeling of the OpenIE task. The underlying mBERT encoder in Multi $^2$ OIE allows for cross-lingual generalization across various languages even after training with only English supervised data. However, dependence on zero-shot generalization limits the performance of the model.

Two types of methods have been proposed for constraining the outputs of machine translation systems: 1) altering the decoding algorithm (Hasler et al., 2018), or 2) modifying the training methodology (Chen et al., 2020; Dinu et al., 2019). We follow the second approach for constraining the translations produced by AACTRANS to be consistent with a reference sentence. Unlike prior work, which focuses on constraining the translations of a few words, our task requires constraining the entire translation. We make use of awesome-align (Dou and Neubig, 2021a), an unsupervised word alignment technique (Och and Ney, 2003) that outputs the alignment between words in sentences of two languages. Awesome-align is trained using only a parallel set of sentences in the two languages and generates aligned target words for each source word.

Transferring linguistic annotations from a source to a target language was pioneered by David et al. (2001) and has been used in the context of Semantic Role Labeling (Annesi and Basili, 2010) and PoS-tagging (Zennaki et al., 2019). After consistent translation, we make use of Crosslingual Projection (Faruqui, 2015) to transfer OpenIE tags.

# 3 Notation

For the transfer of OpenIE data from one language to another, we represent the source language $^2$ as $E$ and the target language as $F$ .
Further, we use $\text{sent}_E$ and $\text{ext}_E$ to represent a sentence and an extraction in the source language, and $\text{aact-sent}_F$ and $\text{aact-ext}_F$ to represent the transferred sentence and extraction in the target language.

To aid in the translation of extractions, we create a sub-sentence from each extraction by concatenating the phrases in all the fields of the extraction. The order of concatenation is such that the formed sub-sentence is grammatically valid. We refer to this sub-sentence as an ext-sentence and represent it as $es_{L}$ , where the subscript $L$ represents its language. For most English extractions, the ext-sentence corresponds to concatenating the fields in the order of subject, relation, and object. However, other languages may follow a different order or allow for multiple orders. We rely on the output of the system that translates the English ext-sentence to determine the ext-sentence in other languages. Moreover, each extraction can be seen as a labeling over the words of the ext-sentence with the Subject, Relation, or Object tags; the tags for each word in the ext-sentence can thus be regarded as the extraction itself.

# 4 Crosslingual Data Transfer

In this section, we describe the technique used to convert OpenIE training data from a source language $E$ to a target language $F$ . The source sentence, $\text{sent}_E$ , and all its corresponding ext-sentences, $es_E$ , are consistently translated to language $F$ (Section 4.1); then, for each extraction in language $E$ , $ext_E$ , the S, R, or O labels are projected onto the translated ext-sentence, $es_F$ , to form the extraction, $ext_F$ , in language $F$ (Section 4.2). Figure 1 describes the pipeline with the help of an example.

# 4.1 Consistent Translation

We introduce a new Seq2Seq-based translation model called Alignment-Augmented Consistent Translation (AACTRANS) to ensure that sentences and ext-sentences are translated consistently from language $E$ to language $F$ .
We define two translations as consistent if similar phrases have the same grammatical structure, vocabulary, and morphology, while allowing for the minimal changes necessary to ensure fluency.

To ensure consistency among the translations of multiple pieces of text (both the sentence and the respective ext-sentences present in an English OpenIE instance), we make use of a reference text in language $F$ to guide all of their translations. Because each translation individually maintains consistency with the reference, the translations end up being consistent with one another as well.

![](images/9b25943d4f574e00f6758192764cb6e6369e2ac0eebf13999e349803b95f65a7.jpg)
Figure 1: Crosslingual Data Transfer pipeline from English to Spanish. The sentence and ext-sentence in English are aligned with a translation of the sentence. The AACTRANS model uses the aligned text to generate the final consistent translations. Cross Lingual Projection (CLP) introduces S, R, O tags in the extraction.

To generate a translation $\mathbf{f}$ (language $F$ ) of text $\mathbf{e}$ (language $E$ ), consistent with a reference $\mathbf{r}$ (language $F$ ), we use the following procedure.

Firstly, given $\mathbf{e} = e_1e_2\ldots e_N$ and $\mathbf{r} = r_1r_2\ldots r_M$ , we find the set of aligned words $A_{e_i} = \{r_j\}$ for each word $e_i$ in $\mathbf{e}$ , using a word alignment model.

Secondly, the aligned text $\mathbf{e}^{\prime}$ is constructed by concatenating each word $e_i$ in $\mathbf{e}$ with its aligned words $A_{e_i}$ , using $\# \#$ as a separator (shown as $<1>$ , $<3> \rightarrow <4>$ and $<2>$ , $<3> \rightarrow <5>$ in Figure 1). If $e_i$ is aligned to the words $r_j$ and $r_k$ ( $j < k$ ), then $\mathbf{e}^{\prime}$ contains $e_i \# \# r_j r_k \#$ . If $e_i$ has no aligned words, then $\mathbf{e}^{\prime}$ contains $e_i \#$ .
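The first two steps can be sketched as follows (a minimal illustration; the exact tokenization, the aligner's output format, and the whitespace around the `##` and `#` markers are assumptions, not the paper's implementation):

```python
def build_aligned_input(src_tokens, alignments):
    """Build the aligned text e' from source tokens and word alignments.

    src_tokens: source-language words [e_1, ..., e_N].
    alignments: dict mapping a source-token index to the list of
        reference-translation words aligned to it (e.g., as produced by
        a word aligner such as awesome-align).
    Each source word is followed by "##" and its aligned reference
    words, terminated by "#"; unaligned words are followed by "#" alone.
    """
    pieces = []
    for i, word in enumerate(src_tokens):
        aligned = alignments.get(i, [])
        if aligned:
            pieces.append(f"{word} ## {' '.join(aligned)} #")
        else:
            pieces.append(f"{word} #")
    return " ".join(pieces)


# Hypothetical example: two source words, each aligned to one reference word.
print(build_aligned_input(["blue", "house"], {0: ["azul"], 1: ["casa"]}))
```

The AACTRANS model is then free to copy the interleaved reference words, which is what biases its output toward the reference translation.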
Thirdly, the AACTRANS model takes $\mathbf{e}'$ as input and produces the sequence $\mathbf{f}$ as output, which represents a translation of $\mathbf{e}$ that is biased to use the aligned reference words (shown as $<4>\rightarrow<7>$ and $<5>\rightarrow<8>$ in Figure 1).

Next, we discuss the training and inference of the AACTRANS model.

Training: We use parallel sentences in languages $E$ and $F$ , available in existing translation corpora, to train the AACTRANS model. For each parallel sentence pair $\mathbf{e}$ and $\mathbf{f}$ , we use the sentence $\mathbf{f}$ itself as the reference $\mathbf{r}$ . Using the alignments between the words of $\mathbf{e}$ and $\mathbf{f}$ , we form the input $\mathbf{e}'$ , as discussed. The AACTRANS Seq2Seq model is trained with $\mathbf{e}'$ as input and $\mathbf{f}$ as output. Since $\mathbf{e}'$ contains words from $\mathbf{f}$ , the model learns to use them during training.

Inference: Here, we consistently translate the English sentence $sent_{E}$ and each of its ext-sentences $es_{E}$ . We use an off-the-shelf translation system to translate $sent_{E}$ to language $F$ , producing $t\text{-}sent_{F}$ , which is used as the common reference $\mathbf{r}$ for constructing the aligned sentence $al\text{-}sent_{EF}$ and the aligned ext-sentence $al\text{-}es_{EF}$ from the sentence $sent_{E}$ and the ext-sentence $es_{E}$ , respectively. We then apply the trained AACTRANS model to $al\text{-}sent_{EF}$ and $al\text{-}es_{EF}$ to generate the target sentence $aact\text{-}sent_{F}$ and the target ext-sentence $aact\text{-}es_{F}$ , respectively.

# 4.2 Crosslingual Label Projection (CLP)

Each word in the target ext-sentence, $aact\text{-}es_{F}$ , must be labeled with either the Subject, Relation, or Object tag to form the completed extraction in language $F$ .
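One simple way to obtain these tags is to project them through word alignments. The sketch below is a deliberately simplified stand-in (each target word inherits the tag of its first aligned source word; the paper instead uses the Crosslingual Projection algorithm of Faruqui (2015), described in its Appendix A, and the function name here is hypothetical):

```python
def project_labels(src_tags, alignment, tgt_len, default="-"):
    """Project S/R/O tags from a source ext-sentence onto its translation.

    src_tags: per-word tags over the source ext-sentence, e.g. ["S", "R", "O"].
    alignment: list of (src_idx, tgt_idx) word-alignment pairs.
    tgt_len: number of words in the target ext-sentence.
    Unaligned target words keep the `default` tag.
    """
    tgt_tags = [default] * tgt_len
    for src_idx, tgt_idx in alignment:
        if tgt_tags[tgt_idx] == default:  # first alignment wins
            tgt_tags[tgt_idx] = src_tags[src_idx]
    return tgt_tags


# Hypothetical 3-word source extraction projected onto a 4-word translation,
# where the last source word is aligned to two target words.
print(project_labels(["S", "R", "O"], [(0, 0), (1, 1), (2, 2), (2, 3)], 4))
```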
The tags from the corresponding $ext_{E}$ are projected onto $aact\text{-}es_{F}$ using the Crosslingual Projection algorithm (Faruqui, 2015) (described in Appendix A), which uses word alignments between $es_{E}$ and $aact\text{-}es_{F}$ and produces as output the tags over $aact\text{-}es_{F}$ , giving the extraction $aact\text{-}ext_{F}$ . The final set of ( $aact\text{-}sent_{F}$ , $aact\text{-}ext_{F}$ ) pairs constitutes the data for training an OpenIE system in language $F$ .

Thus the overall flow is: 1) the AACTRANS model is trained on a parallel corpus, 2) AACTRANS inference is applied to the language $E$ OpenIE examples, 3) CLP is used to obtain the labeled extractions, and 4) the generated data is used to train an OpenIE system such as GEN2OIE, which is discussed next.

# 5 Gen2OIE Model

To train OpenIE systems in multiple languages, we use a novel GEN2OIE model that extends the 2-stage design of Multi $^2$ OIE (Ro et al., 2020) to a generative paradigm. The first stage generates all possible relations and the second stage generates all extractions that contain a given relation.

GEN2OIE can produce overlapping relations and multiple extractions containing the same relation, thus overcoming the limitations of the Multi $^2$ OIE model. Moreover, due to its generative nature, GEN2OIE can add new words or introduce changes in morphology that may be necessary for producing correct extractions, which cannot be achieved by labeling models.

![](images/323c0cdbc59ec0f1673cd2c564e48f056ddaa26d9f7973c2c4afeca4a6430991.jpg)
Figure 2: The GEN2OIE model contains two Seq2Seq models. In Stage-1, it generates all relations in the sentence, separated by an [SEP] token. For each detected relation, in Stage-2 it generates the extractions containing that relation.

Both stages of GEN2OIE (shown in Figure 2) use Seq2Seq models, as follows:

Stage-1 Seq2Seq: The input sentence is passed to the encoder, and the decoder generates a string formed by concatenating the set of relations from all the extractions, separated by an [SEP] token.
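A minimal sketch of forming such a Stage-1 target string (the helper name is hypothetical, and it assumes relation phrases that occur verbatim in the sentence, ordered by first occurrence):

```python
def stage1_target(sentence, relations):
    """Format the Stage-1 target: relation phrases joined by [SEP] tokens,
    ordered by where they first occur in the sentence.

    Assumes each relation phrase appears verbatim in the sentence, so
    str.find gives its first-occurrence position.
    """
    ordered = sorted(relations, key=sentence.find)
    return " [SEP] ".join(ordered)


# Hypothetical sentence with two relations, given to the function out of order.
print(stage1_target(
    "Obama was born in Hawaii and served as president",
    ["served as", "was born in"],
))
```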
During training, the target relations are concatenated in the order in which they occur in the sentence. We find that a deterministic order is important for adding stability to model training.

Stage-2 Seq2Seq: To produce the extractions corresponding to each relation generated in Stage-1, the relation $r$ is concatenated with the input sentence $s$ and passed to the encoder as " $r[SEP]s$ ". The decoder is trained to generate all the extractions containing the relation $r$ . Multiple extractions are separated by an $<e>$ token, and each extraction contains delimiter tokens to identify its various parts: the surrounding $<s>...</s>$ , $<r>...</r>$ , and $<o>...</o>$ tokens identify the subject, relation, and object phrases.

Labeling models like OpenIE-6 (Kolluru et al., 2020a) have used constrained training to increase relation coverage. However, the constraints are limited to English and specific to labeling architectures. We introduce a simple parts-of-speech-based heuristic during Stage-1 training of GEN2OIE that increases relation coverage in the generative paradigm while being applicable across languages.

Relation Coverage (RC): We observe that, for generating all possible extractions, every verb in the sentence must be contained in some relation. However, the extractions in the training data may be incomplete and not satisfy this property. Therefore, during the training phase, we modify the input to the Stage-1 model by removing the verbs in the sentence that are not present in the relation of any extraction. The model thus learns that every verb must be included in some relation, and applies the same during inference as well. This heuristic does not affect Stage-2 model training.

# 6 Confidence Scoring

The word log probabilities assigned by the Stage-2 decoder can be summed to serve as a confidence score for the extractions generated by GEN2OIE.
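A sketch of decoding Stage-2 output strings into tuples and summing token log probabilities into a confidence score (the delimiter tokens follow the format described above; the function names and the flat list of per-token log probabilities are illustrative assumptions):

```python
import re

# Fields are wrapped in <s>..</s>, <r>..</r>, <o>..</o>; multiple
# extractions in one decoded string are separated by an <e> token.
FIELDS_RE = re.compile(
    r"<s>\s*(.*?)\s*</s>\s*<r>\s*(.*?)\s*</r>\s*<o>\s*(.*?)\s*</o>"
)

def parse_stage2_output(decoded):
    """Split a Stage-2 decoder string into (subject, relation, object) tuples."""
    tuples = []
    for chunk in decoded.split("<e>"):
        match = FIELDS_RE.search(chunk)
        if match:
            tuples.append(match.groups())
    return tuples

def extraction_confidence(token_logprobs):
    """Sum the decoder's per-token log probabilities into one score."""
    return sum(token_logprobs)


# Hypothetical decoded string containing two extractions.
out = ("<s> George </s> <r> is </r> <o> the founder </o> <e> "
       "<s> George </s> <r> heads </r> <o> the company </o>")
print(parse_stage2_output(out))
```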
We also experiment with using a separate model for obtaining the confidence scores. A sequence-labeling model is trained on each language's extractions, with the ext-sentence as input and the S, R, O labels over the ext-sentence as output. The log probabilities assigned by the sequence-labeling model to the labels predicted by the GEN2OIE model are summed to get the new confidence scores.

# 7 Experimental Setting

We train OpenIE systems in 5 languages, Spanish (ES), Portuguese (PT), Chinese (ZH), Hindi (HI) and Telugu (TE), using the training data transferred from English to the respective language. For training the Seq2Seq models used in the data generation pipeline and the OpenIE systems based on the GEN2OIE architecture, we choose either the mBART (Liu et al., 2020) or mT5 (Xue et al., 2020) model, depending on the language. Both are pre-trained multilingual Seq2Seq models trained with a span-denoising objective on a large corpus of text containing many languages: mBART is pre-trained on the CC25 corpus and mT5 on the mC4 corpus, which contain text in 25 and 101 languages, respectively. Since mBART does not support Portuguese and Telugu, we use mT5 for these two languages and mBART for the remaining 3 languages. We use the default hyperparameters recommended for these models, which are reported in Appendix F.

Training Datasets: For training the AACTRANS model, we make use of parallel (English, language $F$ ) sentences available in standard translation corpora, using the method described in Section 4. For Spanish we use parallel sentences from the Europarl corpus (Koehn et al., 2005), and for Portuguese we use a subset of the ParaCrawl corpus (Banon et al., 2019), as chosen by Lopes et al. (2020). For Hindi we use the IIT-B corpus (Kunchukuttan et al., 2018), and for Telugu we use the Samanantar corpus (Ramesh et al., 2021). For Chinese we use the data released for WMT19 (Barrault et al., 2019).
We list the BLEU scores of the various systems in Appendix C.

We use the OIE4 training corpus from Kolluru et al. (2020b) and transfer it to other languages for training OpenIE systems.

Evaluation Datasets and Metrics: For evaluating translation systems, we use the test sets available in the respective corpora and use SacreBLEU (Post, 2018) as the metric. For evaluating the different OpenIE systems, we use the optimal F1 and Area Under the Curve (AUC) as computed by the CaRB (Bhardwaj et al., 2019) scoring function. For Spanish and Portuguese OpenIE, we use the test sets provided by Ro et al. (2020). For Chinese OpenIE, we randomly choose $10\%$ of the SAOKE dataset (Sun et al., 2018).

In order to evaluate our method on medium- and low-resource languages, we release new OpenIE test sets in Hindi and Telugu. Human annotators who are fluent in the language and knowledgeable about the OpenIE task translated about 300 randomly chosen sentences and their corresponding extractions from the CaRB test set. They were paid $2.5 per sentence.

Table 2 lists the number of examples in the different languages used for training and evaluating the translation and OpenIE systems.

# 8 Experiments

We perform experiments to answer the questions:

1. How effective is the GEN2OIE model?
2. What is the quality of data generated with the AACTRANS+CLP pipeline, assessed both by
| | EN | ES | PT | ZH | HI | TE |
|---|---|---|---|---|---|---|
| **Translation** | | | | | | |
| Train | - | 1.9M | 5M | 1M | 1.6M | 4.8M |
| Test | - | 3847 | 399,087 | 2001 | 2507 | 2390 |
| **OpenIE** | | | | | | |
| Train | 91K | 91K | 91K | 91K | 91K | 91K |
| Test | 641 | 594 | 594 | 3833 | 298 | 302 |
+ +Table 2: Data statistics for OpenIE examples and (English, language $F$ ) parallel sentences. + +
| Model | F1 | AUC |
|---|---|---|
| IMoJIE | 53.6 | 33.3 |
| IGL | 52.5 | 33.8 |
| CIGL | 54 | 36 |
| OpenIE6 | 52.7 | 33.7 |
| Multi $^2$ OIE | 52.5 | 31.6 |
| GENOIE | 52.1 | 30.3 |
| GEN2OIE w/o RC | 51.9 | 29.7 |
| GEN2OIE | 54.4 | 32.3 |
| GEN2OIE (label-rescore) | 54.5 | 38.9 |
Table 3: Performance of OpenIE systems in English, evaluated with the CaRB metric. GEN2OIE along with Label Rescoring produces the best performance.

the final performance of systems trained using it and with metrics defined for evaluating consistency?

3. What are the roles of different components in GEN2OIE and the AACTRANS+CLP data?

# 8.1 Effectiveness of GEN2OIE

To study the baseline monolingual effectiveness of GEN2OIE, we first train and evaluate the system on English data. The results are shown in Table 3. We compare with previously proposed English OpenIE models such as Multi $^2$ OIE (Ro et al., 2020), OpenIE6 (Kolluru et al., 2020a), and IMoJIE (Kolluru et al., 2020b). We also consider individual components of OpenIE6, the IGL and Constrained-IGL (CIGL) architectures. CIGL achieves the highest performance among all prior models, but uses English-specific constraints in training.

We find that GEN2OIE, which uses the proposed language-agnostic relation coverage (RC) heuristic, outperforms CIGL by $0.4\%$ F1. However, its AUC remains lower. Therefore, we rescore the generated extractions with the labeling-based rescoring model (Section 6). This results in a new state of the art for English in both F1 and AUC, with the labeling-based rescoring yielding a $2.9\%$ AUC gain over CIGL.
| Model | Training Data | ES F1 | ES AUC | PT F1 | PT AUC | ZH F1 | ZH AUC | HI F1 | HI AUC | TE F1 | TE AUC |
|---|---|---|---|---|---|---|---|---|---|---|---|
| (Faruqui, 2015) | English | 45.5 | 28.6 | 48.5 | 31.5 | 13.7 | 3.3 | 30.4 | 12.5 | 36.7 | 16.2 |
| Multi $^2$ OIE | English | 60.0 | 41.5 | 60.2 | 41.1 | 23.7 | 8.1 | 28.8 | 10.9 | 16.5 | 4.1 |
| Multi $^2$ OIE | SentTrans+CLP | 62.0 | 42.8 | 60.9 | 41.3 | 21.2 | 6.5 | 48.1 | 27.6 | 33.4 | 15.4 |
| OpenIE6 | SentTrans+CLP | 56.8 | 37.4 | 58.7 | 39.4 | 18.2 | 4.8 | 46.3 | 28 | 39 | 18.3 |
| IMoJIE | AACTRANS+CLP | 61.6 | 43.1 | 59.7 | 39.9 | 15.4 | 4.0 | 47.5 | 26.3 | 33.9 | 15.5 |
| GENOIE | SentTrans+CLP | 60.4 | 40.6 | 63.5 | 43.7 | 20.9 | 4.9 | 51.5 | 28.5 | 41.7 | 16.3 |
| GENOIE | SentExtTrans+CLP | 58.3 | 39.7 | 57.3 | 36.5 | 20.8 | 5.6 | 51.6 | 28.1 | 36.6 | 13.9 |
| GENOIE | AACTRANS+CLP | 60.8 | 41.3 | 63.9 | 44.8 | 23.1 | 5.9 | 51.6 | 28.6 | 39.3 | 15.1 |
| GEN2OIE | SentTrans+CLP | 64.2 | 44.6 | 65.6 | 50.0 | 29.0 | 8.9 | 52.3 | 30.8 | 40.3 | 15.6 |
| GEN2OIE | SentExtTrans+CLP | 64.7 | 46.1 | 63.7 | 45.5 | 29.3 | 10.2 | 52.5 | 31.0 | 39.8 | 15.6 |
| GEN2OIE | AACTRANS+CLP | 65.9 | 47.2 | 66.4 | 49.2 | 29.8 | 10.3 | 52.8 | 32.0 | 41.5 | 16.6 |
| GEN2OIE (label-rescore) | AACTRANS+CLP | 65.9 | 51.5 | 66.5 | 53.8 | 29.8 | 13.8 | 52.8 | 37.6 | 41.5 | 24.9 |
| GEN2OIE-mT5 | AACTRANS+CLP | 67.9 | 48.5 | 66.4 | 49.2 | 33.3 | 12.7 | 53.6 | 30.9 | 41.5 | 16.6 |
| GEN2OIE-mT5 (label-rescore) | AACTRANS+CLP | 68.0 | 53.6 | 66.5 | 53.8 | 33.2 | 15.8 | 53.6 | 38.1 | 41.5 | 24.9 |
Table 4: F1 and AUC performance of OpenIE systems in Spanish (ES), Portuguese (PT), Chinese (ZH), Hindi (HI) and Telugu (TE). Training with AACTRANS+CLP data shows strong performance with both the GENOIE and GEN2OIE models. Labeling-based rescoring improves AUC in all languages. We also report the results of training the GEN2OIE model with mT5 on all languages.

To further analyze the effectiveness of our 2-stage architecture, we introduce another model called GENOIE that outputs all extractions for a sentence as a single string, separated by an $<e>$ token. We find that using GENOIE results in a $(2.3, 2.0)\%$ drop in (F1, AUC) compared to GEN2OIE, which leverages RC. We also report GEN2OIE performance without using RC.

# 8.2 Quality of AACTRANS+CLP data

In order to test the quality of the OpenIE examples generated using the AACTRANS+CLP pipeline, we train both the GENOIE and GEN2OIE models on the data generated for different languages. In Table 4, we compare it with examples generated by two other methods, SentTrans and SentExtTrans.

SentTrans+CLP represents an adaptation of X-SRL (Daza and Frank, 2020) for OpenIE, where only the sentence is translated and each extraction, which is expressed as a labeling over the words in the sentence, is projected onto the translated sentence using the CLP algorithm described in Section 4.2. The projected extraction is then a labeling over the translated sentence; hence it uses the same morphology as the sentence and cannot add new words. SentExtTrans+CLP uses independent translation of the English sentence and ext-sentences, followed by the CLP algorithm between the English and translated ext-sentences to transfer the labels. Although this allows for adding new words and changing morphology, it can result in a lack of consistency between the translations.

We find that both GENOIE and GEN2OIE show consistent gains with AACTRANS+CLP data across various languages, when compared with SentExtTrans+CLP and SentTrans+CLP data.
We further use rescoring models that are trained on the same AACTRANS+CLP data. Labeling-based rescoring achieves significantly higher AUC, with as much as an $8.3\%$ gain in Telugu.

We experiment with two versions of Multi$^2$OIE: 1) trained only on English OpenIE data and applied to other languages in a zero-shot manner, and 2) using language-specific training data generated from SentTrans+CLP. We specifically choose SentTrans+CLP data because all its extractions can be expressed as labels over the sentence, which is a requirement for training Multi$^2$OIE, itself a labeling model. We find that the Multi$^2$OIE model trained with SentTrans+CLP data improves over the zero-shot setting in all languages other than Chinese (discussed below). However, it performs significantly worse than GEN2OIE, by $(5.2, 3.3)\%$ in (F1, AUC) on average, even when trained on the same SentTrans+CLP data. This can be attributed to Multi$^2$OIE's inability to handle: 1) overlapping relations, 2) multiple extractions per relation, 3) adding auxiliary words, or 4) changing inflectional forms, as shown in Table 5.

We train IMoJIE and OpenIE6 (initialized with mBERT) on AACTRANS+CLP and SentTrans+CLP data. We find that they underperform
Sentence: George Bluth Sr., patriarch of the Bluth family, is the founder and former CEO of the Bluth Company.

Extractions:
`<s> George Bluth Sr. </s> <r> is patriarch of </r> <o> the Bluth family </o>`
`<s> George Bluth Sr. </s> <r> is </r> <o> the founder and former CEO of the Bluth Company </o>`
`<s> George Bluth Sr. </s> <r> is </r> <o> patriarch of the Bluth family </o>`
(The Telugu and Hindi sentence-extraction examples of Table 5 are omitted: the original-script text did not survive PDF extraction.)
Table 5: Sentence and OpenIE predictions of GEN2OIE in English, Telugu and Hindi. It is capable of generating overlapping relations (is, is patriarch of), multiple extractions per relation (is), adding auxiliary words, and changing inflectional forms.
| Model (Data) | ES F1 | ES AUC | ZH F1 | ZH AUC | HI F1 | HI AUC |
|---|---|---|---|---|---|---|
| GEN2OIE (AACTRANS+CLP) | 65.9 | 47.2 | 29.8 | 10.3 | 52.8 | 32.0 |
| GEN2OIE (AACTRANS w/o Sentence Consistency +CLP) | 64.0 | 44.3 | 29.6 | 10.3 | 51.9 | 30.8 |
| GEN2OIE w/o Relation Ordering (AACTRANS+CLP) | 65.2 | 45.6 | 29.6 | 9.8 | 52.5 | 31.8 |
| GEN2OIE w/o Relation Coverage (AACTRANS+CLP) | 60.6 | 40.3 | 23.9 | 6.6 | 52.8 | 32.3 |
GEN2OIE and Multi$^2$OIE. Compared to the two-stage models, both IMoJIE and OpenIE6 generate all the extractions autoregressively, which makes them more susceptible to noise in the automatically generated training data.

We additionally compare with Faruqui (2015), where the test sentence is translated into English, extractions are generated using OpenIE6, and these are projected back onto the test sentence. We find that this system performs poorly due to the lack of language-specific training.

We observe that all systems have low performance on Chinese. We attribute this to various artifacts in the SAOKE test set, which includes special relations such as DESC, TIME, and ISA. Since these extractions cannot be generated by our pipeline, our best model reaches only $33.2\%$ F1 and $15.8\%$ AUC, compared to $52.5\%$ F1 and $32\%$ AUC when GEN2OIE is trained on the SAOKE training data.

We additionally train the GEN2OIE model using mT5 on AACTRANS data for all five languages (GEN2OIE-mT5 in Table 4) and find improvements of $(2.1, 3.5, 0.8)\%$ F1 over the mBART models for ES, ZH and HI.

Table 6: Ablations of the GEN2OIE model trained with AACTRANS+CLP data on ES, ZH and HI. We analyze the effect of removing three components and re-training the model: 1. Sentence Consistency, used in AACTRANS data generation; 2. Relation Ordering; and 3. Relation Coverage, both used in Stage-1 model training.

# 8.3 Evaluating Consistency

In order to measure the inconsistency of the generated extractions with respect to the sentence, we
| Data | ES | PT | ZH | HI | TE |
|---|---|---|---|---|---|
| SentExtTrans+CLP | 12.2 | 9.5 | 24.5 | 13.3 | 19.6 |
| AACTrans+CLP | 5.4 | 3.9 | 5.7 | 6.9 | 10.3 |
Table 7: Evaluating inconsistency between translated extractions and corresponding sentences.

compute the fraction of words that occur in the extraction but are absent from the sentence. In Table 7, we find that across languages, this fraction is lower for training examples generated through the consistent translation methodology (AACTRANS+CLP) than for independent translations (SentExtTrans+CLP). This indicates that AACTRANS+CLP indeed achieves better consistency.

In order to analyze the reasons for the improvement in CaRB performance, we compute the fraction of words that are present in model predictions but absent from the gold extractions of the test set (denoted AG, for Absent in Gold). In Table 8, we see that GEN2OIE trained on AACTRANS+CLP achieves lower values than the same model trained on SentExtTrans+CLP data, and this correlates with the increased CaRB performance. This shows that the model generates words closer to the gold extractions (and hence closer to the input sentence), which contributes to higher performance.
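Both the inconsistency fraction (Table 7) and AG (Table 8) reduce to the same word-overlap computation. A minimal sketch, assuming whitespace tokenization and lowercasing; the function name and these preprocessing details are our illustration, not specified in the paper:

```python
def absent_fraction(candidate: str, reference: str) -> float:
    """Fraction of words in `candidate` that never occur in `reference`.

    Called with (extraction, source sentence) this gives the inconsistency
    score of Table 7; called with (predicted extraction, gold extraction)
    it gives AG of Table 8.
    """
    reference_words = set(reference.lower().split())
    candidate_words = candidate.lower().split()
    if not candidate_words:
        return 0.0
    absent = sum(1 for w in candidate_words if w not in reference_words)
    return absent / len(candidate_words)
```

Corpus-level numbers such as those in Tables 7 and 8 would average this fraction over all extractions.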
| Data | ES AG↓ | ES F1↑ | PT AG↓ | PT F1↑ | ZH AG↓ | ZH F1↑ | HI AG↓ | HI F1↑ | TE AG↓ | TE F1↑ |
|---|---|---|---|---|---|---|---|---|---|---|
| SentExtTrans+CLP | 2.74 | 64.7 | 3.51 | 63.7 | 10.55 | 29.3 | 1.78 | 52.5 | 2.36 | 39.8 |
| AACTRANS+CLP | 2.31 | 65.9 | 2.22 | 66.4 | 9.67 | 29.8 | 1.6 | 52.8 | 2.09 | 41.5 |
Table 8: Evaluating CaRB F1 and AG of GEN2OIE predictions trained on SentExtTrans+CLP and AACTrans+CLP data. We find a decreasing trend of AG with increasing F1.

# 8.4 Ablation Study

We choose three representative languages for the ablation study: Spanish, Chinese, and Hindi. Portuguese and Telugu belong to the same language family as Spanish and Hindi, respectively. In Table 6, we show the results of individually removing components from the GEN2OIE model trained on AACTRANS+CLP data.

In AACTRANS w/o Sentence Consistency, we use the regular translation of the sentence while using the consistent translation of the extraction. This leads to a drop of $(1.9, 0.2, 0.9)\%$ in F1 for the three languages, and shows the importance of using consistent translation for both the sentence and the extraction.

In GEN2OIE w/o Relation Ordering, we train the Stage-1 GEN2OIE with randomly shuffled relations. This reduces performance, as our model uses auto-regressive training, which benefits from following a fixed order; we choose the order of occurrence of the relations in the sentence.

In GEN2OIE w/o Relation Coverage, we find that performance decreases in Spanish and Chinese by $5.3\%$ and $5.9\%$ in F1, respectively, but remains the same in Hindi, possibly due to the smaller number of examples in the test set.

Error Analysis: We find that the AACTRANS+CLP pipeline suffers from: 1) missing or 2) wrong word alignments, and 3) an inability to label discontinuous S, R, O phrases. We show examples of these cases in Appendix B.

# 9 Conclusion

We develop a novel AACTRANS+CLP pipeline for consistently transferring English OpenIE examples to other languages and present a novel two-stage generative model, GEN2OIE, for training OpenIE systems in various languages. We show improvements over the existing baseline of Multi$^2$OIE, with an average improvement of $7.2\%$ in F1 and $16.1\%$ in AUC.
It is effective in five languages, which is the largest number of languages covered by a single OpenIE technique known to us. To encourage research in medium- and low-resource languages, we additionally release new OpenIE evaluation examples in Hindi and Telugu.

# Acknowledgements

Keshav is supported by a TCS Research Fellowship. Mausam is supported by grants from Huawei, Google, Bloomberg and IBM, and a Jai Gupta Chair Fellowship. Soumen is partly supported by a Jagadish Bose Fellowship and an AI Horizons Network grant from IBM. We thank the IIT Delhi HPC facility and the TFRC program for compute resources.

# References

Paolo Annesi and Roberto Basili. 2010. Cross-lingual alignment of FrameNet annotations through hidden Markov models. In International Conference on Intelligent Text Processing and Computational Linguistics.

Niranjan Balasubramanian, Stephen Soderland, Mausam, and Oren Etzioni. 2013. Generating coherent event schemas at scale. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Seattle, Washington, USA, pages 1721-1731. ACL.

Marta Bañón, Pinzhen Chen, Barry Haddow, Kenneth Heafield, Hieu Hoang, Miquel Esplà-Gomis, Mikel L. Forcada, Amir Kamran, Faheem Kirefu, Philipp Koehn, Sergio Ortiz Rojas, Leopoldo Pla Sempere, Gema Ramírez-Sánchez, Elsa Sarrías, Marek Strelec, Brian Thompson, William Waites, Dion Wiggins, and Jaume Zaragoza. 2019. ParaCrawl: Web-scale acquisition of parallel corpora. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.

Loïc Barrault, Ondřej Bojar, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, et al. 2019. Findings of the 2019 conference on machine translation (WMT19).
In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1).

Akim Bassa, Mark Kröll, and Roman Kern. 2018. GerIE: An open information extraction system for the German language. J. Univers. Comput. Sci., 24(1):2-24.

Sangnie Bhardwaj, Samarth Aggarwal, and Mausam. 2019. CaRB: A crowdsourced benchmark for OpenIE. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6263-6268.

Abhyuday Bhartiya, Kartikeya Badola, and Mausam. 2022. DiS-ReX: A multilingual dataset for distantly supervised relation extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland. Association for Computational Linguistics.

Bruno Souza Cabral, Rafael Glauber, Marlo Souza, and Daniela Barreiro Claro. 2020. CrossOIE: Cross-lingual classifier for open information extraction. In PROPOR, pages 368-378.

Chandrahas and Partha Talukdar. 2021. OKGIT: Open knowledge graph link prediction with implicit types. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2546-2559, Online. Association for Computational Linguistics.

Guanhua Chen, Yun Chen, Yong Wang, and V. Li. 2020. Lexical-constraint-aware neural machine translation via data augmentation. In IJCAI.

Janara Christensen, Mausam, Stephen Soderland, and Oren Etzioni. 2011. An analysis of open information extraction based on semantic role labeling. In Proceedings of the Sixth International Conference on Knowledge Capture, pages 113-120. ACM.

Janara Christensen, Stephen Soderland, Gagan Bansal, and Mausam. 2014. Hierarchical summarization: Scaling up multi-document summarization. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, Baltimore, MD, USA, Volume 1: Long Papers, pages 902-912.
The Association for Computer Linguistics.

Daniela Barreiro Claro, Marlo Souza, Clarissa Castellã Xavier, and Leandro Oliveira. 2019. Multilingual open information extraction: Challenges and opportunities. Information.

David Yarowsky, Grace Ngai, Richard Wicentowski, et al. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the First International Conference on Human Language Technology Research.

Angel Daza and Anette Frank. 2020. X-SRL: A parallel cross-lingual semantic role labeling dataset. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP).

Luciano Del Corro and Rainer Gemulla. 2013. ClausIE: Clause-based open information extraction. In Proceedings of the 22nd International Conference on World Wide Web (WWW), pages 355-366. ACM.

Georgiana Dinu, Prashant Mathur, Marcello Federico, and Y. Al-Onaizan. 2019. Training neural machine translation to apply terminology constraints. In ACL.

Zi-Yi Dou and Graham Neubig. 2021a. Word alignment by fine-tuning embeddings on parallel corpora. In Conference of the European Chapter of the Association for Computational Linguistics (EACL).

Zi-Yi Dou and Graham Neubig. 2021b. Word alignment by fine-tuning embeddings on parallel corpora.

Oren Etzioni, Anthony Fader, Janara Christensen, Stephen Soderland, and Mausam. 2011. Open information extraction: The second generation. In IJCAI 2011, Proceedings of the 22nd International Joint Conference on Artificial Intelligence, Barcelona, Catalonia, Spain, pages 3-10. IJCAI/AAAI.

Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2013. Paraphrase-driven learning for open question answering. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics.

Angela Fan, Claire Gardent, Chloe Braud, and Antoine Bordes. 2019.
Using local knowledge graph construction to scale Seq2Seq models to multi-document inputs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China. Association for Computational Linguistics. +Manaal Faruqui. 2015. Multilingual open relation extraction using cross-lingual projection. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. +Pablo Gamallo and Marcos Garcia. 2015. Multilingual open information extraction. In Portuguese Conference on Artificial Intelligence, pages 711-722. Springer. +Raffaele Guarasci, Emanuele Damiano, Aniello Minutolo, Massimo Esposito, and Giuseppe De Pietro. 2020. Lexicon-grammar based open information extraction from natural language sentences in Italian. Expert Systems with Applications, 143:112954. +Swapnil Gupta, Sreyash Kenkre, and Partha Talukdar. 2019. CaRe: Open knowledge graph embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China. Association for Computational Linguistics. + +E. Hasler, A. Gispert, Gonzalo Iglesias, and B. Byrne. 2018. Neural machine translation decoding with terminology constraints. In *NAACL*. +Tushar Khot, Ashish Sabharwal, and Peter Clark. 2017. Answering complex questions using open information extraction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Vancouver, Canada. Association for Computational Linguistics. +Philipp Koehn et al. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, volume 5, pages 79-86. Citeseer. +Keshav Kolluru, Vaibhav Adlakha, Samarth Aggarwal, Mausam, and Soumen Chakrabarti. 2020a. 
OpenIE6: Iterative Grid Labeling and Coordination Analysis for Open Information Extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). +Keshav Kolluru, Samarth Aggarwal, Vipul Rathore, Mausam, and Soumen Chakrabarti. 2020b. IMoJIE: Iterative Memory-Based Joint Open Information Extraction. In The 58th Annual Meeting of the Association for Computational Linguistics (ACL), Seattle, U.S.A. +Keshav Kolluru, Martin Rezk, Pat Verga, William W. Cohen, and Partha P. Talukdar. 2021. Multilingual fact linking. In 3rd Conference on Automated Knowledge Base Construction, AKBC 2021, Virtual, October 4-8, 2021. +Anoop Kunchukuttan, Pratik Mehta, and Pushpak Bhattacharyya. 2018. The IIT Bombay English-Hindi parallel corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). +Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742. +Alexandre Lopes, Rodrigo Nogueira, Roberto Lotufo, and Helio Pedrini. 2020. Lite training strategies for Portuguese-English and English-Portuguese translation. In Proceedings of the Fifth Conference on Machine Translation. +Mausam. 2016. Open information extraction systems and downstream applications. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI), 2016, pages 4074-4077. AAAI Press. +Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational linguistics, 29(1):19-51. + +Harinder Pal and Mausam. 2016. Demonyms and compound relational nouns in nominal OpenIE. In Proceedings of the 5th Workshop on Automated Knowledge Base Construction, pages 35-39. +Dimitris Papadopoulos, Nikolaos Papadakis, and Nikolaos Matsatsinis. 2021. 
PENELOPIE: Enabling open information extraction for the Greek language through machine translation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop. +Hoifung Poon, Janara Christensen, Pedro Domingos, Oren Etzioni, Raphael Hoffmann, Chloe Kiddon, Thomas Lin, Xiao Ling, Mausam, Alan Ritter, et al. 2010. Machine reading at the university of washington. In Proceedings of the NAACL HLT 2010 First International Workshop on Formalisms and Methodology for Learning by Reading, pages 87-95. Association for Computational Linguistics. +Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, Brussels, Belgium. Association for Computational Linguistics. +Mahmoud Rahat and Alireza Talebpour. 2018. Parsa: An open information extraction system for persian. Digital Scholarship in the Humanities, 33(4):874-893. +Gowtham Ramesh, Sumanth Doddapaneni, Aravinth Bheemaraj, Mayank Jobanputra, Raghavan AK, Ajitesh Sharma, Sujit Sahoo, Harshita Diddee, Mahalakshmi J, Divyanshu Kakwani, Navneet Kumar, Aswin Pradeep, Kumar Deepak, Vivek Raghavan, Anoop Kunchukuttan, Pratyush Kumar, and Mitesh Shantadevi Khapra. 2021. Samanantar: The largest publicly available parallel corpora collection for 11 indic languages. +Vipul Rathore, Kartikeya Badola, Parag Singla, and Mausam. 2022. PARE: a simple and strong baseline for monolingual and multilingual distantly supervised relation extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland. Association for Computational Linguistics. +Youngbin Ro, Yukyung Lee, and Pilsung Kang. 2020. Multi^2OIE: Multilingual open information extraction based on multi-head attention with BERT. In Findings of the Association for Computational Linguistics: EMNLP 2020. +Ade Romadhony, Ayu Purwarianti, and Dwi H Widyan-toro. 2018. 
Rule-based indonesian open information extraction. In 2018 5th International Conference on Advanced Informatics: Concept Theory and Applications (ICAICTA), pages 107-112. IEEE. +Swarnadeep Saha and Mausam. 2018. Open information extraction from conjunctive sentences. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2288-2299. + +Swarnadeep Saha, Harinder Pal, and Mausam. 2017. Bootstrapping for numerical OpenIE. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 317-323. Association for Computational Linguistics. +Harkanwar Singh, Soumen Chakrabarti, Prachi Jain, Shared Roy Choudhury, and Mausam. 2021. Multilingual knowledge graph completion with joint relation and entity alignment. In 3rd Conference on Automated Knowledge Base Construction, AKBC 2021, Virtual, October 4-8, 2021. +Gabriel Stanovsky, Julian Michael, Luke Zettlemoyer, and Ido Dagan. 2018. Supervised Open Information Extraction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Volume 1 (Long Papers), pages 885-895. +Mingming Sun, Xu Li, Xin Wang, Miao Fan, Yue Feng, and Ping Li. 2018. Logician: A unified end-to-end neural approach for open-domain information extraction. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 556-564. +Aaron Steven White, Drew Reisinger, Keisuke Sakaguchi, Tim Vieira, Sheng Zhang, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2016. Universal Decompositional Semantics on Universal Dependencies. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. +Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mT5: A massively multilingual pre-trained text-to-text transformer. 
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.

Othman Zennaki, Nasredine Semmar, and Laurent Besacier. 2019. A neural approach for inducing multilingual resources and natural language processing tools for low-resource languages.

# Alignment-Augmented Consistent Translation for Multilingual Open Information Extraction (Appendix)

# A Crosslingual Label Projection (CLP)

In this section, we describe the CLP algorithm for projecting labels from English extractions onto other languages. Consider the English sentence E: Dutil - Dumas experiment was promoted by an organization called Encounter 2001 and the Spanish sentence S: Experimento Dutil - Dumas fue promovido por una organizacion llama Encounter 2001. The word alignments between these sentences are listed in Figure 3, and the equivalent phrases produced by the phrase extract algorithm are shown in Table 9. Consider the English extraction (Dumas experiment; was promoted; by an organization). For each phrase in the tuple, the CLP algorithm looks for the highest-BLEU matching phrase in Table 9. The subject phrase Dumas experiment best matches Dutil - Dumas experiment, so the corresponding Spanish phrase Experimento Dutil - Dumas is marked as the subject. Note that the phrase Dumas experiment itself is not present in Table 9 because its aligned phrase is not continuous in the Spanish sentence, as can be seen in Figure 3. Similarly, for the relation phrase was promoted, we find fue promovido in Table 9. Continuing the same algorithm, we obtain (Experimento Dutil - Dumas; fue promovido; por una organizacion) as the final Spanish extraction.

# B Error Analysis

We list three cases that decrease the quality of transferred data using the AACTRANS+CLP pipeline.
Missing word alignments: For example, the English extraction A couple of trojans have also been found orbiting with Mars translates to Alternatively se han encontrar un par de��rajas en orbita con Mars in Spanish. The verb orbiting changes to the form en orbita (in orbit) through nominalization. The word en in Spanish does not align with any word in the English extraction, as can be seen in Figure 4. So the projection of (A couple of trojans; have also been found; orbiting with Mars) leads to (un par de��rajas; Alternatively se han encontrar; orbita con Mars), which is not fluent because the word en is missing from the object phrase.

In languages like Spanish and Portuguese, we found alignments to be of high precision, but they often miss some alignments, as shown above. Next, we see how wrong alignments can affect projection quality.

Wrong word alignments: Consider the following English (E) and Hindi (H) ext-sentences, E: Many organizations like the Samskrita Bharati are conducting Speak Sanskrit workshops to popularize Sanskrit and H: [Hindi text garbled in the source]. We find that the word the is wrongly aligned to an unrelated Hindi word. As a result, the subject phrase Many organizations like the Samskrita Bharati has no continuous counterpart in the Hindi sentence, because many intervening Hindi words up to the misaligned word do not map to the subject phrase of the English sentence. The CLP algorithm therefore matches the partial phrase Many organizations like, which is the best BLEU match for the given subject phrase, and its equivalent continuous Hindi phrase gets tagged as the subject, whereas the longer Hindi phrase covering the full subject would be ideal.

Discontinuous phrases: Phrase extract in the CLP algorithm assumes that continuous phrases in English map to continuous phrases in the other language. This assumption can lead to incomplete extractions in the other languages.
For example, consider the English extraction E: (Winston Churchill; twice suggested; naming a British battleship) and its Telugu ext-sentence T (the Telugu script did not survive extraction). The words twice and suggested map to two non-adjacent Telugu words, so the phrase twice suggested is no longer continuous in Telugu. The CLP algorithm looks for the best BLEU match, which results in matching only the partial phrase twice, and its Telugu equivalent is tagged as the relation. The ideal relation in this example would be the full Telugu equivalent of twice suggested.

# C BLEU scores

Table 10 contains the BLEU scores of both the normal and the consistent translations. We find that the performance remains nearly the same, indicating that the improved OpenIE performance stems from the consistency in the translations.

# D Effect of word alignment quality

In order to understand the effect of alignment quality, we replace the language-specific trained

![](images/d371a6cdd21e08ea5a7731ced8806553242cba080b7aa567be8f5abdff48681a.jpg)
Figure 3: Equivalent English and Spanish sentences with corresponding word alignments between them

![](images/c99ebdcf9ce0c46a4dc172738e6b942b7978dcc91e5e5ec40af71459193458f5.jpg)
Figure 4: Equivalent English and Spanish sentences with corresponding word alignments between them
| English Phrases | Spanish Phrases |
|---|---|
| Dutil - Dumas experiment | Experimento Dutil - Dumas |
| Dumas | Dumas |
| experiment | Experimento |
| was promoted | fue promovido |
| ... | ... |
+ +Table 9: Mapped continuous phrases between English (E) and Spanish (S) language sentences from the phrase extract algorithm + +
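The matching step of the CLP algorithm described above can be sketched as follows. The phrase table mirrors Table 9; the similarity function is a simple unigram/bigram F1 used here as a stand-in for the BLEU phrase matching, so the function names and exact scoring are our assumptions, not the paper's implementation:

```python
from collections import Counter

def ngram_f1(a, b, n):
    """F1 overlap between the n-gram multisets of token lists a and b."""
    grams_a = Counter(zip(*[a[i:] for i in range(n)]))
    grams_b = Counter(zip(*[b[i:] for i in range(n)]))
    overlap = sum((grams_a & grams_b).values())
    total = sum(grams_a.values()) + sum(grams_b.values())
    return 2 * overlap / total if total else 0.0

def project_phrase(phrase, phrase_table):
    """Pick the source phrase most similar to `phrase` and return its
    aligned target-side phrase, as in the CLP matching step."""
    tokens = phrase.split()

    def similarity(source):
        source_tokens = source.split()
        return (ngram_f1(tokens, source_tokens, 1)
                + ngram_f1(tokens, source_tokens, 2)) / 2

    best_source = max(phrase_table, key=similarity)
    return phrase_table[best_source]

# Phrase pairs from Table 9 for the running example:
table9 = {
    "Dutil - Dumas experiment": "Experimento Dutil - Dumas",
    "Dumas": "Dumas",
    "experiment": "Experimento",
    "was promoted": "fue promovido",
}
```

Here `project_phrase("Dumas experiment", table9)` matches the longer source phrase Dutil - Dumas experiment (the bigram term breaks the tie with the single-word entries) and returns Experimento Dutil - Dumas, mirroring the subject projection in the running example.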
| BLEU | ES | PT | ZH | HI | TE |
|---|---|---|---|---|---|
| Translation | 45.2 | 48.4 | 26.8 | 20.5 | 7.0 |
| AACTranslation | 43.7 | 47.8 | 28.2 | 20.1 | 7.5 |
+ +Table 10: BLEU scores of translation and AAC-translation are similar showing that the performance improvement is because of the added consistency. + +
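The comparison in Table 10 uses standard corpus-level BLEU. As a self-contained illustration, a sentence-level version with uniform n-gram weights and the usual brevity penalty can be written as below; this is textbook BLEU, not necessarily the exact configuration used to produce Table 10:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(hyp, ref, max_n=4):
    """BLEU: geometric mean of n-gram precisions times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        total = sum(h.values())
        if total == 0:
            return 0.0
        precisions.append(sum((h & r).values()) / total)
    if min(precisions) == 0:
        return 0.0  # no smoothing: any zero precision zeroes the score
    log_p = sum(math.log(p) for p in precisions) / max_n
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(log_p)
```

Reported scores like those in Table 10 aggregate n-gram counts over the whole corpus rather than averaging per-sentence scores.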
| Language | MA | TA |
|---|---|---|
| ES | 0.38 | 0.19 |
| HI | 0.49 | 0.20 |
+ +aligners (TA), with a standard pre-trained mBERT model (MA). First note in Table 11 that MA has a much higher alignment perplexity (used as a measure of unsupervised alignment quality in (Dou and Neubig, 2021b)). We now perform an experiment to replace TA with MA in our methodology. Aligners are used at two places in our setup - 1. Alignment-Constrained Translation and 2. Crosslingual Label Projection. We replace each of them with an mBERT aligner (MA), and show the results in Table 12. We find that there is some performance drop by using MA, but it is quite less compared to the drop in alignment perplexity. This suggests that our model is relatively robust to the quality of alignment. + +# E Alternatives to CLP + +Following (Zennaki et al., 2019), we experiment with a neural mBERT-based tagging model. We train the mBERT model for tagging the Subject, Relation and Object tags in English. Due to the language-agnostic features of mBERT, we can apply the model to other languages in a zero-shot manner. These tagged examples can then be used for training the OpenIE model. In Table 13, we find that this does not improve over our CLP-based tag + +Table 11: Unsupervised alignment perplexity for mBERT (MA) and Trained (TA) aligners + +
(AACTRANS,CLP)HIES
F1AUCF1AUC
(TA,TA)62.138.865.947.2
(TA,MA)58.734.464.746.2
(MA,TA)59.437.965.646.7
+ +Table 12: F1 and AUC of GEN2OIE trained with examples generated using TA and MA alignment strategies. (1, 2) corresponds to aligner 1 being used in AACTRANS and aligner 2 being used in CLP. + +
| AACTRANS | HI F1 | HI AUC | ES F1 | ES AUC |
| --- | --- | --- | --- | --- |
| CLP | 62.1 | 38.8 | 65.9 | 47.2 |
| mBERT | 43.7 | 20.5 | 65.3 | 48.1 |
+ +Table 13: GEN2OIE performance trained on examples tagged with either CLP or mBERT model. + +ging. However, combining signals from both techniques could be interesting future work. HI results in Table 12 and Table 13 use a subset of the final test set which was initially used for development purposes. + +# F Reproducibility + +Compute Infrastructure: We use V100 (32 GB) GPU for training the mBERT models and use TPU v3-8 for training the mT5 models. + +Hyper-parameters: We list the final hyperparameters used for training mBART model in Table 14 and mT5 model in Table 15. We don't conduct any grid search and use the default hyperparameters suggested in the respective systems. + +Number of parameters: mBART has 610 million parameters and mT5-base has 580 million parameters. + +
| Hyper-parameter | Value |
| --- | --- |
| Maximum tokens per batch | 1024 |
| Learning Rate | 3e-5 |
| LR Scheduler | Polynomial Decay |
| Warmup Updates | 2500 |
| Dropout | 0.3 |
| Max Updates | 40,000 (for OpenIE) and 100,000 (for translation) |
+ +Table 14: mBART hyperparameters + +
| Hyper-parameter | Value |
| --- | --- |
| Maximum tokens per batch | 24576 |
| Learning Rate | 0.001 |
| LR Scheduler | Constant |
| Warmup Updates | 0 |
| Dropout | 0.1 |
| Max Updates | 20,000 (for OpenIE) and 100,000 (for translation) |
+ +Table 15: mT5 hyperparameters \ No newline at end of file diff --git a/alignmentaugmentedconsistenttranslationformultilingualopeninformationextraction/images.zip b/alignmentaugmentedconsistenttranslationformultilingualopeninformationextraction/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..feba4b946aa444f8a314adf471f39006c70e9524 --- /dev/null +++ b/alignmentaugmentedconsistenttranslationformultilingualopeninformationextraction/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8a36c8b2b604f8b41c49c5cba2391683de315be5ae883a300fad70e4dcbcf15c +size 749757 diff --git a/alignmentaugmentedconsistenttranslationformultilingualopeninformationextraction/layout.json b/alignmentaugmentedconsistenttranslationformultilingualopeninformationextraction/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..563e37640f69b417f45c32d00259413199d18dec --- /dev/null +++ b/alignmentaugmentedconsistenttranslationformultilingualopeninformationextraction/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:be3d52dd164fe8a7ebba5d0199e11cbc3b48579af3d420fd3d0f03205ca02f50 +size 502741 diff --git a/alternativeinputsignalseasetransferinmultilingualmachinetranslation/2f7dad6d-c677-43af-98be-01ed9ef25d70_content_list.json b/alternativeinputsignalseasetransferinmultilingualmachinetranslation/2f7dad6d-c677-43af-98be-01ed9ef25d70_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..8f430c448f8982ebb444d42eb7aa9f75b6b0d603 --- /dev/null +++ b/alternativeinputsignalseasetransferinmultilingualmachinetranslation/2f7dad6d-c677-43af-98be-01ed9ef25d70_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cc5186f7bc6a163a2fd1c5e5b0461e9b0cd9b5d58b7c4997161d7984eff67281 +size 94885 diff --git a/alternativeinputsignalseasetransferinmultilingualmachinetranslation/2f7dad6d-c677-43af-98be-01ed9ef25d70_model.json 
b/alternativeinputsignalseasetransferinmultilingualmachinetranslation/2f7dad6d-c677-43af-98be-01ed9ef25d70_model.json new file mode 100644 index 0000000000000000000000000000000000000000..2fb68501dae3bd823f8a5784d32254060ca6804a --- /dev/null +++ b/alternativeinputsignalseasetransferinmultilingualmachinetranslation/2f7dad6d-c677-43af-98be-01ed9ef25d70_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:62d40a941c49c396cbd07383b1c820960ce41289a16684f36ee05d3cbdbabb3e +size 113024 diff --git a/alternativeinputsignalseasetransferinmultilingualmachinetranslation/2f7dad6d-c677-43af-98be-01ed9ef25d70_origin.pdf b/alternativeinputsignalseasetransferinmultilingualmachinetranslation/2f7dad6d-c677-43af-98be-01ed9ef25d70_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7df0315f801050f8f03f072b9af575fd5da82b6f --- /dev/null +++ b/alternativeinputsignalseasetransferinmultilingualmachinetranslation/2f7dad6d-c677-43af-98be-01ed9ef25d70_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:08c53b53d2264c1faba1b02432cce430e2a0cc6e168cf837b4f9370c6de4bac4 +size 817420 diff --git a/alternativeinputsignalseasetransferinmultilingualmachinetranslation/full.md b/alternativeinputsignalseasetransferinmultilingualmachinetranslation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b3113fee3a00d33151526f93f195ec0674a51e75 --- /dev/null +++ b/alternativeinputsignalseasetransferinmultilingualmachinetranslation/full.md @@ -0,0 +1,306 @@ +# Alternative Input Signals Ease Transfer in Multilingual Machine Translation + +Simeng Sun $^{1*}$ Angela Fan $^{2}$ James Cross $^{2}$ Vishrav Chaudhary $^{3\dagger}$ Chau Tran $^{2}$ Philipp Koehn $^{2}$ Francisco Guzmán $^{2}$ + +University of Massachusetts Amherst $^{1}$ Meta AI $^{2}$ Microsoft Turing $^{3}$ + +simengsun@umass.edu + +{angelafan, jcross, chau, pkoehn, fguzman}@fb.com + +vchaudhary@microsoft.com + +# Abstract + +Recent work 
in multilingual machine translation (MMT) has focused on the potential of positive transfer between languages, particularly cases where higher-resourced languages can benefit lower-resourced ones. While training an MMT model, the supervision signals learned from one language pair can be transferred to the other via the tokens shared by multiple source languages. However, the transfer is inhibited when the token overlap among source languages is small, which manifests naturally when languages use different writing systems. In this paper, we tackle inhibited transfer by augmenting the training data with alternative signals that unify different writing systems, such as phonetic, romanized, and transliterated input. We test these signals on Indic and Turkic languages, two language families where the writing systems differ but languages still share common features. Our results indicate that a straightforward multi-source self-ensemble — training a model on a mixture of various signals and assembling the outputs of the same model fed with different signals during inference — outperforms strong ensemble baselines by 1.3 BLEU on both language families. Further, we find that incorporating alternative inputs via self-ensemble can be particularly effective in low-resource settings, leading to +5 BLEU when only $5\%$ of the total training data is accessible. Finally, our analysis demonstrates that including alternative signals yields more consistency and translates named entities more accurately, which is crucial for increased factuality of automated systems. + +# 1 Introduction + +Machine translation has seen great progress, with improvements in quality and successful commercial applications. However, the majority of this + +improvement benefits languages with large quantities of high-quality training data (high-resource languages). 
Recently, researchers have focused on the development of multilingual translation models (Aharoni et al., 2019; Fan et al., 2020) capable of translating between many different language pairs rather than specialized models for each translation direction. In particular, such multilingual models hold great promise for improving translation quality for low-resource languages, as grouping languages together allows them to benefit from linguistic similarities as well as shared data between related languages. For example, training a translation system with combined Assamese and Bengali data would enable transfer learning between the two languages. + +We investigate how to enable multilingual translation models to optimally learn these similarities between languages and leverage this similarity to improve translation quality. The fundamental unit representing lingual similarity is the token — languages that are similar often have similar words or phrases — and during training, translation models can learn strong representations of tokens in low-resource languages if they are also present in high-resource languages. However, a challenge arises when similar languages share only a small amount of tokens, which inhibits the transfer to limited and trivial cases of token sharing, e.g., punctuation marks and digits. This is particularly clear in cases where similar languages are written in different scripts, as the amount of shared tokens is small compared to languages using the same written script. An example would be Hindi and Gujarati, which have phonetic similarity but are written in their own native scripts. + +To tackle inhibited transfer due to distinct writing systems, we transform the original input via transliteration, the process of converting text from one script to another, to get alternative signal from the original source sentences. 
![](images/87f8975437497fa20d601d1e7393d5166416ef033ad5f92d16104d9fe40bc884.jpg)
Figure 1: A generic illustration of self-ensemble for a multilingual translation system while translating Bengali to English. The input contains different signals, each preceded by a special language token ('_bn_' indicates input in the original Bengali script, '__bn_ipa_' the phonetic version of the same Bengali input, '__bn_romani_' the romanized version, and '__bn_inscrip_' the same input but written in the script of Hindi, a language within the same language family). The log probabilities output by the model given each type of input are averaged for the subsequent decoding process.

Transliteration has been used in many real-world cases, such as converting Cyrillic Serbian to Latin Serbian, as the language is commonly written in both scripts, or typing romanized Hindi for convenience on a Latin-script keyboard. To unify the various writing scripts and increase token overlap, we experiment with three types of transliteration: (1) transliterating into phonemes expressed in the international phonetic alphabet (IPA), (2) transliterating into Latin script (ROMANI), and (3) transliterating into a script used by another language within the same language family (INSCRIP). Beyond training on alternative inputs created through transliteration, we also systematically examine approaches to combining the different signals. Our experimental results on Indic and Turkic datasets demonstrate that (i) a self-ensemble (Figure 1) – training a model on the mixture of different signals and using an ensemble of the same model given different input signals at inference time – outperforms other methods such as multi-source ensembles and multi-encoder architectures, which require training multiple models or significant architectural changes.
(ii) Further, without the need for additional bitext, a self-ensemble over the original and transliterated input consistently outperforms baselines, and is particularly effective when the training set is small (e.g. low-resource languages) with improvements of up to +5 BLEU. (iii) Finally, the improvements in BLEU originate from clear gain in the accuracy and consistency in the translation of named entities, which has strong implications for increased factuality of automated translation systems. + +# 2 Method + +Multilingual translation models enable languages to learn from each other, meaning low-resource + +languages can benefit from similarities to high-resource languages where data is plentiful. However, surface-level differences between languages, such as writing system, can obscure semantic similarities. We describe an approach to transliterating input sentences to various alternative forms that maximize transfer learning between different languages, and various modeling approaches to incorporating such varied inputs. + +# 2.1 Alternative Inputs Bridge the Gap between Surface Form and Meaning + +While training a multilingual translation system, tokens shared by multiple source languages serve as anchors to transfer information obtained from learning one language pair to the other. For example, the translation of 'terisini' in low-resourced Uzbek data can benefit from the word 'derisinin' in relatively high-resourced Turkish data after tokenizing into sub-word units. However, the transfer is hindered when the amount of shared tokens is small — exacerbated by cases where the source and target languages are written in different scripts. To alleviate the issue of various writing systems and encourage languages to transfer, we focus on alternative signals that unify the script of source languages and have larger token overlap. The core concept we explore is how to best leverage transliteration, or the process of converting the text from one script to the other. 
We demonstrate that transliteration can be an effective data augmentation approach that improves translation performance without the need to acquire additional parallel data. We explore three alternative inputs that allow models to share information more easily across languages with low token overlap but high semantic similarity. Figure 4 in Appendix C shows example alternative signals of the same Oriya sentence.

Phonetic Input. Related languages in the same language family usually sound similar, such as languages in the Romance language family and those in the Indo-Aryan language family. Although cognates can be captured to some degree on the subword level for Romance languages, this is difficult for the Indo-Aryan family, as those languages use different writing systems. Therefore, to fully exploit shared information, we transform the original textual input (BASE) into the phonetic space, where the basic units are phonemes expressed in the international phonetic alphabet (IPA). For example, [Bengali text] in Bengali looks like [IPA transcription] in IPA form.

Romanized Input. Many languages use the Latin alphabet (or Roman alphabet) in their default writing system; if not, they more or less have a romanization of their default script in order to accommodate conventional keyboards, e.g., Chinese can be typed on U.S. keyboards through Pinyin, the romanization of Chinese. To utilize this existing form of alternative input, romanized input is another signal we explore in this work. For example, [Bengali text] looks like 'pradhanmantri' in romanized form.

In-family Script Input. The two previous alternative representations introduce tokens not present in the existing vocabulary, which increases the number of input and output representations the translation models must learn. Further, phonetic input is artificial in the sense that it is not used by people to communicate with each other in written form and is only used for pronunciation.
Romanization naturally introduces many additional tokens if the source language does not use Latin script. A third alternative that does not suffer from these drawbacks is to transliterate the source language into the script of any of the other source languages in the multilingual translation model. To take advantage of language relatedness (Dhamecha et al., 2021), we unify the source languages with the script used by a language within the same language family (INSCRIP). This method has the additional advantage of not needing to learn new subword tokenization models or replace the old vocabulary with a new one, since all the inputs are expressed in one of the existing multilingual model's source-language scripts. For example, [source-script text] looks like [Hindi-script text] when transliterated into Hindi script.

Advantages of Transliterated Inputs. Various input representations have been fed into translation models, from parse trees (Li et al., 2017; Currey and Heafield, 2018) to pretrained embeddings (Artetxe et al., 2018; Conneau et al., 2018). Compared to these alternatives, transliteration has several clear advantages. Most importantly, transliteration is fast and accurate. Several existing alternatives use other models to produce a different input, such as a parse tree, which cascades errors from the first model into the translation model. Comparatively, the alphabet alignment between various writing systems is quite well known, even for many low-resource languages, as the alphabet is one of the foundational aspects of studying any new language. Similarly, phonetic pronunciation guides are often widely available. These resources are also easily accessible programmatically, making them ideal for converting large quantities of supervised training data; for instance, the espeak-ng tool supports phonemization of more than 100 languages and accents.
Beyond the ease of creating transliterations, we emphasize that this technique does not require any data annotation or collection of parallel data. Thus, it can be utilized in any existing translation system.

# 2.2 Adding Transliterated Input Combinations to Translation Models

How can additional transliterated inputs be incorporated into modern machine translation architectures? Since each alternative signal could capture a different view of the original input, in addition to training on each individual alternative signal alone, we investigate different approaches to combining them.

Straight Concatenation The simplest combination strategy is to concatenate the different input signals, separated by a special token. For instance, to combine the original and phonetic input, we re-arrange the input into the format "[original input] [SEP] [phonetic input]". During training, the decoder explicitly attends to tokens in both input signals. The advantage of this method is that no architectural change is required, as all modification operates on the input data. However, as the concatenated input becomes longer, this method requires more computation to train than the baseline model trained on the original input only.

Multi-Encoder Architectures Prior work has found multi-encoder architectures to be effective for multi-source machine translation (Nishimura et al., 2018). To cope with input from different sources, each encoder in a multi-encoder architecture handles one type of input. To attend to multiple encoders on the decoder side, four cross-attention mechanisms can be adopted. We direct the reader to Appendix A for a detailed description of these attention variations. Although prior work investigates the efficacy of this approach, it is a complicated model choice requiring non-trivial architectural changes.

Multi-Source Ensemble Ensembles are usually employed to boost the performance of a translation system.
In a standard setup, each ensemble component is trained with an identical configuration except for the random seed. We generalize this method to the multi-source ensemble, i.e., individual ensemble components are trained on different transliterated inputs. At inference time, each component is fed the type of transliteration it was trained on and produces predicted log probabilities, which are averaged over all components for the subsequent decoding process. Models trained on different source signals must share the same target vocabulary so that their log probabilities can be averaged. Unlike the previous two methods, this approach requires training multiple full models, and thus even more computation.

Multi-Source Self-Ensemble Ensembling models that are trained on different input transliterations has the advantage that each individual model is maximally simple: only the input data for training changes. However, it comes with the downside that multiple different models need to be trained. This creates challenges particularly when models grow in size, as a new model would need to be created for each different transliterated input.

Instead, we propose the Multi-Source Self-Ensemble, which has all the advantages of traditional ensembling but only requires one model to be trained. Previous work on self-ensembles has focused on model robustness (Liu et al., 2018), which is distinct from varying input representations. Other work creates inputs in different languages (Fan et al., 2020), but has to use a translation model to create those inputs first.

In our case, we train the model with different transliterated inputs mapping to the same translated target sentence. Concretely, the model is trained on a mixture of the various input signals, each preceded by a special language token indicating which type of signal the input belongs to.
At inference time, the alternative transliterated signals of the same test sentence are fed to the same model, and the log probabilities produced by these separate passes are averaged as in the multi-source ensemble. This approach is simple to implement as it requires no architectural change, meaning the transliterated inputs we propose can be added seamlessly to any existing translation library. Unlike the multi-source ensemble, only one model needs to be trained, stored, and loaded for inference, greatly simplifying the ensembling process and increasing the scalability of our approach (particularly as translation models increase in size). To enforce a fair comparison between the multi-source self-ensemble and the multi-source ensemble, we scale the former so that it has the same number of parameters as all ensemble components of the latter combined. To minimally impact inference speed, the scaling is applied only to the encoder embedding dimension, so that the decoder remains the same.

# 3 Experimental setup

Dataset We train our model on two language families: Indic and Turkic. The Indic dataset is from the WAT MultiIndic MT task $^2$ , including 10 Indic languages and in total around 11 million Indic-English bi-texts. Six of the Indic languages are Indo-Aryan languages and the rest are Dravidian languages. All of these languages use a different writing system. The Turkic dataset is collected from the open parallel corpus (Tiedemann, 2012) $^3$ . For the relatively high-resourced language Turkish, we randomly select a 4-million-sentence subset from the CCAligned (El-Kishky et al., 2020) corpus. Within this dataset, two languages use the Cyrillic alphabet (Kazakh and Kyrgyz) and the rest use the Latin alphabet. Detailed dataset statistics are displayed in Table 7 in Appendix B.
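The multi-source self-ensemble decoding step of §2.2 amounts to averaging per-signal log probabilities before picking the next token. A minimal sketch, where the toy model, vocabulary, and distributions are hypothetical stand-ins for a trained translation model:

```python
import numpy as np

# Toy vocabulary and per-signal next-token distributions (hypothetical
# numbers, for illustration only).
vocab = ["the", "cat", "</s>"]
variant_logps = {
    "__bn__": np.log([0.5, 0.3, 0.2]),         # original Bengali script
    "__bn_ipa__": np.log([0.2, 0.6, 0.2]),     # phonetic (IPA) input
    "__bn_romani__": np.log([0.3, 0.5, 0.2]),  # romanized input
}

def toy_model(src, prefix):
    """Stand-in for the trained model: returns next-token log
    probabilities keyed by the language token prefixing each variant."""
    return variant_logps[src.split()[0]]

def self_ensemble_step(model, variants, prefix):
    """One greedy decoding step of a multi-source self-ensemble: run the
    same model on every input signal and average the log probabilities."""
    logps = np.stack([model(src, prefix) for src in variants])
    return int(np.argmax(logps.mean(axis=0)))

variants = ["__bn__ <text>", "__bn_ipa__ <ipa>", "__bn_romani__ <romani>"]
print(vocab[self_ensemble_step(toy_model, variants, prefix=[])])  # "cat"
```

Averaging log probabilities corresponds to a geometric mean of the per-signal distributions, so a token favored by most signals ("cat" here) wins even when no single signal dominates; in practice this step would sit inside beam search rather than greedy decoding.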
+ +Single-input model To test the effectiveness of each input signal, we train models on each single type of input: original input (BASE), phonetic input (IPA), romanized input (ROMANI) or input all expressed in the script of a language within the same language family (INSCRIP). On the Indic dataset, + +
| ~93 M parameters | Indic | Turkic | ~2×93 M parameters | Indic | Turkic |
| --- | --- | --- | --- | --- | --- |
| *Single-input Original* | | | *Standard Ensemble* | | |
| BASE | 33.6 | 20.3 | BASE+BASE | 34.5 | 21.1 |
| *Single-input Alternative* | | | *Multi-Source Ensemble* | | |
| IPA | 32.7 | 17.9 | BASE+IPA | 34.3 | 20.9 |
| ROMANI | 32.5 | 20.7 | BASE+ROMANI | 34.4 | 21.4 |
| INSCRIP | 33.4 | 20.5 | BASE+INSCRIP | 34.5 | 21.5 |
| *Multi-Source Self-Ensemble* | | | *Multi-Source Self-Ensemble* | | |
| BASE+IPA | 34.1 | 20.5 | BASE+IPA | 35.7 | 21.9 |
| BASE+ROMANI | 33.8 | 20.9 | BASE+ROMANI | 35.7 | 22.2 |
| BASE+INSCRIP | 34.2 | 21.3 | BASE+INSCRIP | 35.8 | 22.4 |
+ +Table 1: BLEU scores on Indic test set and FloRes Turkic Devtest set. + +for the INSCRIP signal, all Indo-Aryan languages are transliterated into Hindi script, and all Dravidian languages into Tamil script. On the Turkic dataset, all languages in Latin script are transliterated into Cyrillic script. + +Multi-Source Ensemble A baseline for ensembling models trained on different signals is the standard ensemble (BASE+BASE) where two BASE models are ensembled, each trained with a different random seed. Although there are multiple combinations of input signals, we only discuss the cases where BASE is combined with one of {IPA, ROMANI, INSCRIP}, since in our preliminary experiments, we found dropping the BASE model leads to significantly degraded performance. + +Multi-Source Self-Ensemble Similar to above, we train a single model on the mixture of original input and one of {IPA, ROMANI, INSCRIP} input for multi-source self-ensemble. To enforce fair comparisons with the ensembled models, which have more parameters in total, we train two sizes of the self-ensemble (SE) model, one having the same size of a single baseline model, the other scaled to have twice the number of parameters of a single BASE model. + +Data Preprocessing We use espeak-ng⁴ to convert the original input to phonetic input. For Indic languages, we use indic-trans⁵ (Bhat et al., 2015) to obtain the romanized as well as the in-family transliterated input. On the + +Turkic dataset, we manually align the Cyrillic and Latin alphabet and substitute the letter(s) in one script with the corresponding one in another. The Indic languages are tokenized with indic_nlpLibrary and the rest are tokenized with mosesdecoder. We use sentencepiece to create 32K BPE (Sennrich et al., 2016) subword vocabularies for each type of input signal. Examples longer than 250 tokens are discarded. 
We merge the source dictionaries of different signals by dropping duplicated tokens, while keeping the decoder dictionaries all the same in order to compute the average log probabilities in ensemble settings. + +Training & Evaluation We train many-to-En language directions during training (10 and 5 directions for Indic and Turkic dataset respectively). The architecture is a standard 6-layer encoder 6-layer decoder Transformer model, with 512 embedding dimension and 2048 hidden dimension in the default setting. For the scaled self-ensemble model, we increase the encoder hidden dimension such that the number of parameters in this model approximately matches that of $n$ baseline models ( $n = 2$ for results in Table 1). We use 4000 warmup steps and learning rate 0.0003. Both the dropout and attention dropout rate are set to 0.2. Label smoothing is set to 0.1. Data from different language pairs are sampled with 1.5 temperature sampling. We + +
| | BASE | IPA | ROMANI | INSCRIP |
| --- | --- | --- | --- | --- |
| Uni-gram | 0.03 | 0.15 | 0.13 | 0.16 |
| Sent. len | 34.7 | 39.3 | 25.9 | 51.3 |
+ +Table 2: Uni-gram token overlap and sentence length of various types of input on MultiIndic dev set. + +train all models for 18 epochs and 40 epochs for Indic and Turkic dataset respectively and evaluate the best checkpoint selected by dev loss. We use spBLEU $^9$ (Goyal et al., 2021; Guzmán et al., 2019) to compute the BLEU scores. $^{10}$ + +# 4 Results + +In this section, we compare the performance of our proposed multi-source self-ensemble model to various alternative ways of input combinations on two low-resource language families: Indic and Turkic languages. Furthermore, we show multi-source self-ensemble learns faster and generates more consistent and accurate translations. + +# 4.1 Performance of Multi-Source Self-Ensemble + +Our method is based on the hypothesis that incorporating alternative inputs increases the token overlap of source languages, which benefits the transfer during training. To verify this, we compute average sentence-level uni-gram overlap of all source language pairs (Table 2) and find that alternative signals do have higher token overlap compared to the original input. For instance, the IPA signal, having similar average sentence length as BASE, has much higher token overlap (0.15 vs. 0.03). + +Do increased token overlaps result in better translation performance? We train models on each of the alternative inputs alone and report the results in the left column of Table 1. We find that using only one alternative input in the source has either worse or similar performance as the original baseline, indicating higher token overlap among source languages does not guarantee better BLEU scores. The degraded performance is likely due to unfavorable interference introduced by shared tokens in + +![](images/d850fdee4f8732b9fa68cb8597cd08a9dd6711e9c356beb878c063177535170f.jpg) +Figure 2: Learning curve of the baseline model BASE and the same-sized self-ensemble model trained on the original input as well as transliterated input. 
INSCRIPT denotes the transliteration where the target scripts for Indo-Aryan and Dravidian languages are Hindi and Tamil respectively. The target scripts of INSCRIPT1 are Oriya and Kannada respectively.

the alternative signals. The interference may create information loss $^{11}$ or increased ambiguity $^{12}$ , which reinforces the importance of combining alternative inputs with the original input.

Due to the undesired interference exhibited in the alternative input spaces, we adopt our proposed Multi-Source Self-Ensemble to combine the original input and the alternative signals. Results in the lower left part of Table 1 demonstrate improvements over the single-input baseline. Our best performing alternative input configuration improves $+1.0$ BLEU on Turkic languages and $+0.6$ BLEU on Indic languages for 93M-parameter models.

In production, model ensembles are often employed to achieve the best possible performance. This is usually done by training multiple models, each initialized with a different random seed (Bawden et al., 2020; Tran et al., 2021b), and averaging the predicted next-token probabilities at inference time. We also provide results against these strong ensemble baselines and observe +1.3 BLEU improvements on both Indic and Turkic languages. Note that, to enforce a fair comparison, we compare a scaled version of the multi-source self-ensemble model which has the same number of parameters as multiple ensemble baseline components.
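The uni-gram overlap statistic of Table 2 can be sketched as below; this assumes a Jaccard-style sentence-level overlap averaged over all source-language pairs (the exact formula is not spelled out in the text), and the toy corpora are hypothetical:

```python
from itertools import combinations

def unigram_overlap(sent_a, sent_b):
    """Jaccard-style uni-gram overlap between two tokenized sentences."""
    a, b = set(sent_a), set(sent_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def avg_pairwise_overlap(corpora):
    """Average sentence-level uni-gram overlap over all source-language
    pairs; `corpora` maps language -> parallel tokenized sentences."""
    scores = [
        unigram_overlap(sa, sb)
        for la, lb in combinations(sorted(corpora), 2)
        for sa, sb in zip(corpora[la], corpora[lb])
    ]
    return sum(scores) / len(scores)

# Hypothetical romanized toy data: one shared token ("pradhan") out of
# six distinct tokens across the pair.
corpora = {
    "bn": [["pradhan", "montri", "bollen"]],
    "hi": [["pradhan", "mantri", "ne", "kaha"]],
}
print(round(avg_pairwise_overlap(corpora), 2))  # 0.17
```

Run over the real corpora under their different signals, a statistic like this is what lets Table 2 show, e.g., that IPA input raises overlap from 0.03 to 0.15 at a similar sentence length.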
| Configuration | BLEU |
| --- | --- |
| *Single-input Baseline* | |
| BASE | 33.6 |
| *Straight Concatenation* | |
| BASE+`<SEP>`+IPA | 33.7 |
| BASE+`<SEP>`+ROMANI | 33.7 |
| BASE+`<SEP>`+INSCRIP | 33.6 |
| *Multi-Encoder Architectures: Bi-Encoder* | |
| BASE+BASE | 34.2 |
| BASE+IPA | 33.9 |
| BASE+ROMANI | 33.9 |
| BASE+INSCRIP | 34.0 |
| *Multi-Encoder Architectures: Quad-Encoder* | |
| BASE+BASE+BASE+BASE | 34.3 |
| BASE+IPA+ROMANI+INSCRIP | 34.1 |
| *Multi-source Self-ensemble* | |
| BASE+INSCRIP | 34.2 |
+ +# 4.2 Advantages of Multi-Source Self-Ensemble + +Architectural Simplicity. As introduced in §2.2, there are various ways to incorporate multiple inputs, such as concatenation to form a longer input or using multiple encoders networks. In Table 3, we show that using multiple encoders has no improvements over the comparable baseline with raw text input, and straight concatenation only brings marginal gains (+0.1 BLEU). Further, our simple but effective Multi-Source Self-Ensemble technique reaches the same performance as that of a much larger quad-encoder model, which requires non-trivial architectural changes and takes more compute to train. Thus, our technique is suitable to be used out of the box in any seq-to-seq library. + +Faster Learning in Low-Resource Settings. To understand how self-ensemble performs with different amounts of data, we plot the learning curve of both the baseline and the self-ensemble model on $5\%^{13}$ to $80\%$ of the total Indic training set. $^{14}$ As + +Table 3: Indic test set BLEU of models trained on straight concatenation of input as well as multi-encoder architectures. Training on the concatenated input does not impact the BLEU much. Multi-encoder architectures, although having a lot more number of parameters, for instance, quad-encoder, achieve similar performance of a much smaller multi-source self-ensemble. + +
| | C-BLEU | NE-F1 |
| --- | --- | --- |
| *Single-input Baseline* | | |
| BASE | 34.7 | 55.9 |
| *Single-input Alternative Input* | | |
| IPA | 33.8 | 54.7 |
| ROMANI | 33.0 | 54.5 |
| INSCRIP | 35.3 | 55.4 |
| *Multi-Source Self-Ensemble* | | |
| BASE+IPA | 36.2 | 56.1 |
| BASE+ROMANI | 35.5 | 56.3 |
| BASE+INSCRIP | 36.2 | 56.4 |
+ +Table 4: The consistency BLEU (C-BLEU) and exact named entity match F1 (NE-F1) of MultiIndic test set. Higher C-BLEU scores imply more consistent output in many-to-En setting. Higher NE-F1 scores indicate better translation of named entities. + +shown in Figure 2, the self-ensemble model outperforms the baseline model by a large margin when the amount of training data is small (+5 BLEU when only $5\%$ of the total set is used for training). This is the scenario for most low-resource languages, as the gap gradually closes when more data is available. Overall, the multi-source self-ensemble model is consistently better than the baseline model irrespective of training data scale. This suggests that transliteration can be a cheap and effective data augmentation approach when used in conjunction with multi-source self-ensemble. + +Improved Output Consistency. We conduct a deeper analysis to understand the performance improvement of Multi-Source Self-Ensembles beyond BLEU scores alone. We find that our proposed technique generates much more consistent output, which could be a benefit of alternative signals transferring information more easily amongst source languages. We propose consistency BLEU (C-BLEU) to quantify the consistency of multi-way evaluation output of a many-to-En translation model. We treat the output of $L_{1}$ -En direction as reference and output of all other $L_{i}$ -En directions as hypothesis. We compute this for all $N$ source languages in the dataset, accounting for total $N(N - 1)$ C-BLEU scores, then take the average of all (Table 4). While training on IPA or ROMANI alone does not outperform the baseline in terms of C-BLEU, model trained on INSCRIP input improves the score by +1.3. Self-ensemble over BASE and IPA increases the C-BLEU to 36.2 (and from 36.3 to 38.1 with scaled model), indicating the alternative signals are best trained together with the original input. + +are added in the multi-source self-ensemble setup. 
+ +![](images/11c78aab6baf24a1bf6edbac66eae09c03bcecbf73d7f85c3dc1ad3231e8b19c.jpg) +Figure 3: The exact named entity match F1 score of BASE, INSCRIP, and a same-sized self-ensemble model trained on the previous two inputs (SE(BASE+INSCRIP)). Although the self-ensemble model only slightly outperforms the baseline (55.9 vs. 56.4), the gains are more obvious when breaking down the results by entity type. + +Improved Named Entity Accuracy. The previous analysis implies that the self-ensemble model produces more consistent translations, yet this does not mean the consistent translations are accurate. In this section, we conduct an analysis targeted at named entities. We use the spaCy (Honnibal et al., 2020) NER tagger to extract all named entities, and then compute the exact match of the extracted entities. According to the results in Table 4, self-ensemble introduces small gains (+0.5) in terms of named entity F1 (NE-F1), whereas the scaled self-ensemble boosts the NE-F1 score by +1.1. Although the improvement is small in aggregate, we find significant improvement when breaking down by entity type. As shown in Figure 3, the multi-source self-ensemble model (without scaling) outperforms the baseline model on certain entity types, e.g., person, organization, time, and event, by a large margin. + +# 5 Related work + +# 5.1 Alternative Input for Multilingual MT + +Our work can be viewed as multilingual MT (Firat et al., 2016) combined with multi-source MT (Zoph and Knight, 2016), where the sources are not other languages but rather alternative transliterated signals. Transliterated input has been explored in the past for translation systems. Nakov and Ng (2009) use transliteration as a preprocessing step for their phrase-based SMT model to tackle systematic spelling variation. Both Chakravarthi et al. (2019) and Koneru et al.
(2021) convert Dravidian languages to Latin script and train multilingual models with both source and target in Latin script; the latter identify code-switching as a challenge during back-transliteration. Besides converting to Latin script, Dabre et al. (2018) use another common script, Devanagari, for Indic languages. In addition to natural written scripts, previous work + +also explored artificial scripts, such as IPA. Liu et al. (2019) incorporate phonetic representations, specifically Chinese Pinyin, to cope with homophone noise. Unlike our work, Chakravarthi et al. (2019) adopt transliteration to IPA for both the source and target. Apart from transliterated input, other potential alternative signals we did not fully explore include orthographic syllable units (Kunchukuttan and Bhattacharyya, 2016, 2020), morpheme-based units (Ataman et al., 2017; Dhar et al., 2020), and character-level (Lee et al., 2017) or byte-level (Wang et al., 2019a) input in addition to subword-level units (Sennrich et al., 2016). + +# 5.2 Input signal combination + +A multi-encoder architecture is the most common way to combine input from different sources. While previous work mainly uses additional encoders to encode syntactic information (Li et al., 2017; Currey and Heafield, 2018) or input in another language (Nishimura et al., 2018), we feed each encoder a different signal of the same sentence. Prior work also investigated approaches to combining input at different granularities (Ling et al., 2015; Chen et al., 2018; Casas et al., 2020). Wang et al. (2019b) combine decoupled lexical and semantic representations through an attention mechanism. Another common method of utilizing an additional input signal is multi-task learning, which forces the model to output extra labels (Luong et al., 2016; Gronroos et al., 2017).
Apart from combining the sources during training, inference-time ensembling (Garmash and Monz, 2016) is often adopted by recent submissions to shared MT tasks (Ng et al., 2019; Tran et al., 2021a). The ensemble components are usually separate systems trained with different random initializations or language pairs. Fan et al. (2020) ensemble the same + +model by feeding in source sentences in different languages. The self-ensemble approach was also found to make networks more robust to added random noise (Liu et al., 2018). Prior work also uses the term "self-ensemble" to refer to an ensemble of models using weights from different time steps during training (Xu et al., 2020). + +# 6 Conclusion + +To overcome the low token-overlap issue exhibited in multilingual MT systems due to distinct writing systems, we examined three alternative signals (phonetic, romanized, and in-family transliterated input) and investigated four approaches (input concatenation, multi-encoder, multi-source ensemble, self-ensemble) to combining them with the original input. Our results show that training a single model with a mixture of diverse signals and performing self-ensemble at inference time can improve BLEU by 1.3 points on the Indic and Turkic datasets. The improvements can reach $+5$ BLEU when the training data size is small. Further, we show that this approach generates more accurate and consistent translations of named entities, which greatly impacts the factual accuracy of news translation. + +# Acknowledgement + +We thank Shiyue Zhang, Xiang Zhou, Jean Maillard, Yixiao Song, and Marzena Karpinska for the helpful discussions during the course of this work. + +# References + +Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3874-3884, Minneapolis, Minnesota. Association for Computational Linguistics. +Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. +Duygu Ataman, Matteo Negri, Marco Turchi, and Marcello Federico. 2017. Linguistically motivated vocabulary reduction for neural machine translation from Turkish to English. +Rachel Bawden, Alexandra Birch, Radina Dobreva, Arturo Oncevay, Antonio Valerio Miceli Barone, and Philip Williams. 2020. The University of Edinburgh's English-Tamil and English-Inuktitut submissions to the WMT20 news translation task. In + +Proceedings of the Fifth Conference on Machine Translation, pages 92-99, Online. Association for Computational Linguistics. +Irshad Ahmad Bhat, Vandan Mujadia, Aniruddha Tammewar, Riyaz Ahmad Bhat, and Manish Shrivastava. 2015. IIIT-H system submission for FIRE2014 shared task on transliterated search. In Proceedings of the Forum for Information Retrieval Evaluation, FIRE '14, pages 48-53, New York, NY, USA. ACM. +Noe Casas, Marta R. Costa-jussà, and José A. R. Fonollosa. 2020. Combining subword representations into word-level representations in the transformer architecture. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 66-71, Online. Association for Computational Linguistics. +Bharathi Raja Chakravarthi, Mihael Arcan, and John P. McCrae. 2019. Comparison of Different Orthographies for Machine Translation of Under-Resourced Dravidian Languages. In 2nd Conference on Language, Data and Knowledge (LDK 2019), volume 70 of OpenAccess Series in Informatics (OA-SIcs), pages 6:1-6:14, Dagstuhl, Germany. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik.
+Huadong Chen, Shujian Huang, David Chiang, Xinyu Dai, and Jiajun Chen. 2018. Combining character and word information in neural machine translation using a multi-level attention. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1284-1293, New Orleans, Louisiana. Association for Computational Linguistics. +Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. +Anna Currey and Kenneth Heafield. 2018. Multi-source syntactic neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2961-2966, Brussels, Belgium. Association for Computational Linguistics. +Raj Dabre, Anoop Kunchukuttan, Atsushi Fujita, and Eiichiro Sumita. 2018. NICT's participation in WAT 2018: Approaches using multilingualism and recurrently stacked layers. In Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation: 5th Workshop on Asian Translation: 5th Workshop on Asian Translation, Hong Kong. Association for Computational Linguistics. +Tejas Dhamecha, Rudra Murthy, Samarth Bharadwaj, Karthik Sankaranarayanan, and Pushpak Bhattacharyya. 2021. Role of Language Relatedness in Multilingual Fine-tuning of Language Models: A Case Study in Indo-Aryan Languages. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8584-8595, + +Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. +Prajit Dhar, Arianna Bisazza, and Gertjan van Noord. 2020. Linguistically motivated subwords for English-Tamil translation: University of Groningen's submission to WMT-2020. In Proceedings of the Fifth Conference on Machine Translation, pages 126–133, Online. Association for Computational Linguistics. 
+Ahmed El-Kishky, Vishrav Chaudhary, Francisco Guzmán, and Philipp Koehn. 2020. CCAligned: A massive collection of cross-lingual web-document pairs. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pages 5960–5969, Online. Association for Computational Linguistics. +Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin. 2020. Beyond English-centric multilingual machine translation. +Orhan Firat, Baskaran Sankaran, Yaser Al-onaizan, Fatos T. Yarman Vural, and Kyunghyun Cho. 2016. Zero-resource translation with multi-lingual neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 268-277, Austin, Texas. Association for Computational Linguistics. +Ekaterina Garmash and Christof Monz. 2016. Ensemble learning for multi-source neural machine translation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1409-1418, Osaka, Japan. The COLING 2016 Organizing Committee. +Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng-Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzmán, and Angela Fan. 2021. The FLORES-101 evaluation benchmark for low-resource and multilingual machine translation. +Stig-Arne Gronroos, Sami Virpioja, and Mikko Kurimo. 2017. Extending hybrid word-character neural machine translation with multi-task learning of morphological analysis. In Proceedings of the Second Conference on Machine Translation, pages 296-302, Copenhagen, Denmark. Association for Computational Linguistics. +Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019.
Two new evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English. +Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spaCy: + +Industrial-strength Natural Language Processing in Python. +Tom Kocmi, Christian Federmann, Roman Grundkiewicz, Marcin Junczys-Dowmunt, Hitokazu Matsushita, and Arul Menezes. 2021. To ship or not to ship: An extensive evaluation of automatic metrics for machine translation. In Proceedings of the Sixth Conference on Machine Translation, pages 478-494, Online. Association for Computational Linguistics. +Sai Koneru, Danni Liu, and Jan Niehues. 2021. Unsupervised machine translation on Dravidian languages. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages, pages 55-64, Kyiv. Association for Computational Linguistics. +Anoop Kunchukuttan and Pushpak Bhattacharyya. 2016. Orthographic syllable as basic unit for SMT between related languages. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1912-1917, Austin, Texas. Association for Computational Linguistics. +Anoop Kunchukuttan and Pushpak Bhattacharyya. 2020. Utilizing language relatedness to improve machine translation: A case study on languages of the Indian subcontinent. +Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2017. Fully character-level neural machine translation without explicit segmentation. Transactions of the Association for Computational Linguistics, 5:365-378. +Junhui Li, Deyi Xiong, Zhaopeng Tu, Muhua Zhu, Min Zhang, and Guodong Zhou. 2017. Modeling source syntax for neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 688-697, Vancouver, Canada. Association for Computational Linguistics. +Jindrich Libovický, Jindrich Helcl, and David Mareček. 2018. Input combination strategies for multi-source transformer decoder.
In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 253-260, Brussels, Belgium. Association for Computational Linguistics. +Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W Black. 2015. Character-based neural machine translation. +Hairong Liu, Mingbo Ma, Liang Huang, Hao Xiong, and Zhongjun He. 2019. Robust neural machine translation with joint textual and phonetic embedding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3044-3049, Florence, Italy. Association for Computational Linguistics. +Xuanqing Liu, Minhao Cheng, Huan Zhang, and Cho-Jui Hsieh. 2018. Towards robust neural networks via random self-ensemble. In Proceedings of the + +European Conference on Computer Vision (ECCV), pages 369-385. +Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task sequence to sequence learning. +Benjamin Muller, Antonios Anastasopoulos, Benoît Sagot, and Djamé Seddah. 2021. When being unseen from mBERT is just the beginning: Handling new languages with multilingual language models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 448-462, Online. Association for Computational Linguistics. +Preslav Nakov and Hwee Tou Ng. 2009. Improved statistical machine translation for resource-poor languages using related resource-rich languages. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1358-1367, Singapore. Association for Computational Linguistics. +Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR's WMT19 news translation task submission. +Yuta Nishimura, Katsuhito Sudoh, Graham Neubig, and Satoshi Nakamura. 2018. Multi-source neural machine translation with missing data.
In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 92-99, Melbourne, Australia. Association for Computational Linguistics. +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics. +Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In LREC. +Chau Tran, Shruti Bhosale, James Cross, Philipp Koehn, Sergey Edunov, and Angela Fan. 2021a. Facebook AI WMT21 news translation task submission. +Chau Tran, Shruti Bhosale, James Cross, Philipp Koehn, Sergey Edunov, and Angela Fan. 2021b. Facebook AI's WMT21 news translation task submission. In Proceedings of the Sixth Conference on Machine Translation, pages 205-215, Online. Association for Computational Linguistics. +Changhan Wang, Kyunghyun Cho, and Jiatao Gu. 2019a. Neural machine translation with byte-level subwords. arXiv preprint arXiv:1909.03341. + +Xinyi Wang, Hieu Pham, Philip Arthur, and Graham Neubig. 2019b. Multilingual neural machine translation with soft decoupled encoding. In International Conference on Learning Representations. +Yige Xu, Xipeng Qiu, L. Zhou, and Xuanjing Huang. 2020. Improving BERT fine-tuning via self-ensemble and self-distillation. ArXiv, abs/2002.10345. +Barret Zoph and Kevin Knight. 2016. Multi-source neural translation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 30-34, San Diego, California. Association for Computational Linguistics. + +# A Multi-encoder architecture + +As has been systematically explored by Libovický et al.
(2018), there are four kinds of multi-encoder cross-attention that can be applied on the decoder side: (1) Serial: cross-attention to each encoder is performed layer by layer. (2) Parallel: cross-attention to each encoder is performed in parallel and then the outputs are added together before feeding to the feed-forward layer. (3) Flat: outputs of all encoders are concatenated along the length dimension as the input to a single cross-attention. (4) Hierarchical: a second attention block is added to attend to the representations output by the parallel cross-attention. While the models in Table 3 all use the parallel cross-attention described in § 2.2, Table 5 ablates the different multi-source cross-attention mechanisms. Three of the four mechanisms achieve similar performance, whereas the 'flat' attention is considerably worse. This echoes the findings of Libovický et al. (2018). + +
| Config. | BLEU | Config. | BLEU |
| --- | --- | --- | --- |
| Serial | 34.1 | Flat | 24.9 |
| Parallel | 34.0 | Hierarchical | 34.1 |
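The parallel and flat combination strategies above can be sketched with plain single-head dot-product attention. This is our simplified illustration, not the exact architecture: the learned query/key/value projections, multiple heads, and the hierarchical gating block are all omitted.

```python
import numpy as np

def cross_attention(queries, memory):
    # Single-head scaled dot-product cross-attention; the learned
    # projections of a real Transformer layer are omitted here.
    d = queries.shape[-1]
    scores = queries @ memory.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ memory

def parallel_combination(queries, encoder_outputs):
    # (2) Parallel: attend to each encoder's memory separately and sum
    # the outputs before the feed-forward layer.
    return sum(cross_attention(queries, memory) for memory in encoder_outputs)

def flat_combination(queries, encoder_outputs):
    # (3) Flat: concatenate all encoder outputs along the length
    # dimension and run one cross-attention over the joint memory.
    return cross_attention(queries, np.concatenate(encoder_outputs, axis=0))
```

Note the structural difference: parallel normalizes attention weights within each encoder's memory, while flat normalizes over the concatenated memory, so one encoder can dominate the other.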
+ +# B Experiments + +# B.1 Data statistics + +The number of training examples for each language in both the Turkic and Indic datasets is shown in Table 7. We evaluate the Turkic models on the multi-way FloRes101 devtest set, with 1012 examples per language. To evaluate the Indic models, we use the provided multi-way test set of the WAT21 MultiIndic task, with 2390 examples per language. + +# B.2 Input concatenation analysis + +In § 4, results show that training on the concatenated input does not bring any discernible improvement; the performance is almost the same. To understand whether the model has indeed utilized the concatenated alternative signals, we take the trained model and evaluate BLEU scores on corrupted input. Specifically, one part of the concatenated input is corrupted while the other is left intact. The corruption is done by shuffling the + +Table 5: Indic test set BLEU scores of the multi-encoder architecture trained on BASE+INSCRIP using different multi-source cross-attention mechanisms. All mechanisms perform similarly except flat cross-attention. + +
| Config. | BLEU | Config. | BLEU |
| --- | --- | --- | --- |
| BASE + IPA | 23.3 | IPA + BASE | 23.0 |
| BASE' + IPA | 3.3 | IPA' + BASE | 13.9 |
| BASE + IPA' | 20.2 | IPA + BASE' | 9.5 |
+ +Table 6: Models trained on concatenated original and phonetic input, evaluated on partially corrupted input. We use IPA' to denote that the phonetic part of the input is corrupted. Results are reported on the FloRes101 Indic languages instead of the MultiIndic test set. + +tokens within the selected part of the input. Overall, we find that the model indeed pays attention to both parts of the input, as corrupting either part leads to a large regression in BLEU scores (Table 6). Moreover, no matter which type of signal is put at the front of the sentence, the model always pays more attention to the original input than to the phonetic input, since corrupting the original input causes larger performance degradation than corrupting the phonetic input. + +# C Example alternative input signal + +We present example alternative signals in Figure 4 and Figure 5. When the inputs are transliterated into scripts other than their native script, there are more shared tokens across the source languages (as highlighted in Figure 5). + +# D Analysis + +# D.1 Token overlap details + +In § 4 we show the token overlap of various signals aggregated over all source language pairs; in this section we show the token overlap of each source language pair in Table 8 for the original input and in Table 9 for the in-family transliterated input. Before transliteration, all source language pairs share only a small amount of token overlap, except Marathi and Hindi. The tokens shared between native scripts are mostly punctuation marks, digits, and English tokens. After transliteration, the token overlap becomes more pronounced and a clear division between language families can be found. + +# D.2 Similarity in latent space + +Besides examining the consistency of system output as in § 4.2, we also measure the distance of
[Figure 4 body omitted: the example sentence in its original Oriya script (BASE) and the corresponding IPA, ROMANI, and TRANSL renderings did not survive text extraction intact.]
+ +Figure 4: Example alternative signals. BASE is the original input in Oriya script, IPA is the phonetic input, ROMANI the romanized input, and TRANSL (INSCRIP in the main text) the input transliterated into Devanagari script. + +
| Turkic languages | | Indic languages | | | |
| --- | --- | --- | --- | --- | --- |
| Language | #bi-text | Language | #bi-text | Language | #bi-text |
| Kazakh | 919,877 | Bengali | 1,756,197 | Marathi | 781,872 |
| Kyrgyz | 243,179 | Gujarati | 518,015 | Oriya | 252,160 |
| Turkish | 4,000,000 | Hindi | 3,534,387 | Punjabi | 518,508 |
| Uzbek | 156,615 | Kannada | 396,865 | Tamil | 1,499,441 |
| Azerbaijani | 1,847,723 | Malayalam | 1,204,503 | Telugu | 686,626 |
+ +Table 7: Training data statistics for the Turkic and Indic datasets. + +
| | bn | hi | pa | or | gu | mr | kn | ml | ta | te |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| bn | – | 0.05 | 0.04 | 0.04 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 |
| hi | 0.05 | – | 0.06 | 0.05 | 0.02 | 0.18 | 0.01 | 0.01 | 0.02 | 0.01 |
| pa | 0.04 | 0.06 | – | 0.04 | 0.02 | 0.02 | 0.01 | 0.01 | 0.02 | 0.01 |
| or | 0.04 | 0.05 | 0.04 | – | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 |
| gu | 0.01 | 0.02 | 0.02 | 0.01 | – | 0.05 | 0.04 | 0.04 | 0.05 | 0.04 |
| mr | 0.01 | 0.18 | 0.02 | 0.01 | 0.05 | – | 0.04 | 0.05 | 0.05 | 0.04 |
| kn | 0.01 | 0.01 | 0.01 | 0.01 | 0.04 | 0.04 | – | 0.04 | 0.05 | 0.04 |
| ml | 0.01 | 0.01 | 0.01 | 0.01 | 0.04 | 0.05 | 0.04 | – | 0.05 | 0.04 |
| ta | 0.01 | 0.02 | 0.02 | 0.01 | 0.05 | 0.05 | 0.05 | 0.05 | – | 0.05 |
| te | 0.01 | 0.01 | 0.01 | 0.01 | 0.04 | 0.04 | 0.04 | 0.04 | 0.05 | – |
+ +Table 8: Token overlap between source languages for the original input (BASE). + +
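The appendix does not spell out the exact overlap formula behind Tables 8 and 9. One plausible reading, which reproduces the qualitative pattern, is a Jaccard-style overlap between the token-type vocabularies of two languages' corpora; the sketch below is our assumption, not the paper's definition.

```python
def token_overlap(corpus_a, corpus_b):
    # Jaccard overlap between the token-type vocabularies of two
    # already tokenized, whitespace-separated corpora. The exact
    # formula is our assumption; the paper only reports the scores.
    vocab_a = {tok for sent in corpus_a for tok in sent.split()}
    vocab_b = {tok for sent in corpus_b for tok in sent.split()}
    union = vocab_a | vocab_b
    return len(vocab_a & vocab_b) / len(union) if union else 0.0
```

Applied to the subword-segmented training corpora of each language pair, this kind of measure yields near-zero scores for languages in different native scripts and larger scores after transliteration into a shared script.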
| | bn | hi | pa | or | gu | mr | kn | ml | ta | te |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| bn | – | 0.33 | 0.26 | 0.29 | 0.32 | 0.29 | 0.03 | 0.03 | 0.03 | 0.03 |
| hi | 0.33 | – | 0.49 | 0.26 | 0.49 | 0.40 | 0.03 | 0.03 | 0.03 | 0.03 |
| pa | 0.26 | 0.49 | – | 0.19 | 0.37 | 0.34 | 0.03 | 0.03 | 0.03 | 0.03 |
| or | 0.29 | 0.26 | 0.19 | – | 0.23 | 0.22 | 0.01 | 0.01 | 0.01 | 0.01 |
| gu | 0.32 | 0.49 | 0.37 | 0.23 | – | 0.41 | 0.03 | 0.03 | 0.03 | 0.03 |
| mr | 0.29 | 0.40 | 0.34 | 0.22 | 0.41 | – | 0.03 | 0.03 | 0.03 | 0.03 |
| kn | 0.03 | 0.03 | 0.03 | 0.01 | 0.03 | 0.03 | – | 0.25 | 0.17 | 0.31 |
| ml | 0.03 | 0.03 | 0.03 | 0.01 | 0.03 | 0.03 | 0.25 | – | 0.35 | 0.33 |
| ta | 0.03 | 0.03 | 0.03 | 0.01 | 0.03 | 0.03 | 0.17 | 0.35 | – | 0.26 |
| te | 0.03 | 0.03 | 0.03 | 0.01 | 0.03 | 0.03 | 0.31 | 0.33 | 0.26 | – |
+ +Table 9: Token overlap between source languages for the in-family transliterated input (INSCRIP). + +source representations in the latent space. Concretely, we compute the average normalized Euclidean distance over all source language pairs: + +$$ +\frac{1}{\binom{N}{2}} \sum_{m, n} \mathrm{dist}(l_m, l_n), +$$ + +where $N$ is the total number of source languages and $\mathrm{dist}(l_m, l_n)$ computes the distance between a sentence in language $m$ and the same sentence in language $n$: + +$$ +\mathrm{dist}(l_m, l_n) = \frac{1}{2} \Big( \frac{1}{|l_m|} \sum_{i} \min_{j} d(w_{mi}, w_{nj}) + \frac{1}{|l_n|} \sum_{j} \min_{i} d(w_{mi}, w_{nj}) \Big), +$$ + +where $|l_m|$ and $|l_n|$ are the numbers of tokens in the sentences in language $m$ and language $n$, respectively; $w_{mi}$ is the $i^{th}$ encoder output for sentence $l_m$; and $d(\cdot,\cdot)$ is the Euclidean distance between two vectors, which we additionally normalize by $\sqrt{D}$, where $D$ is the dimension of the dense vectors. This scaling factor makes the scaled self-ensemble model comparable with the other variants. + +As shown in Table 10, none of the alternative signals alone leads to more similar source representations. When training on the original input plus one alternative input, only the combination of BASE and INSCRIP lowers the distance between original-input representations, from 0.60 to 0.58. The distances become even smaller when training the scaled self-ensemble model. The distances
| Config. | BASE | IPA | ROMANI | INSCRIP |
| --- | --- | --- | --- | --- |
| Trained separately | 0.60 | 0.62 | 0.60 | 0.61 |
| SE(Base+IPA) | 0.60 | 0.62 | – | – |
| SE(Base+ROMANI) | 0.61 | – | 0.60 | – |
| SE(Base+INSCRIP) | 0.58 | – | – | 0.60 |
| SE(ALL) | 0.60 | 0.62 | 0.60 | 0.61 |
| S-SE(ALL) | 0.54 | 0.53 | 0.52 | 0.52 |
+ +Table 10: Normalized Euclidean distances for the single-input models (Trained separately), the self-ensemble models (SE), and the scaled self-ensemble model (S-SE). + +among BASE representations decrease to 0.54, and the other three input signals all yield more similar representations than the original input. Overall, we did not find significant differences in the latent space, which we plan to investigate further in future work. + +
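The distance defined in Appendix D.2 can be implemented directly from the formula; below is a NumPy sketch (variable and function names are ours).

```python
import numpy as np
from itertools import combinations

def sentence_distance(wm, wn):
    # wm: (|l_m|, D) encoder outputs for the sentence in language m;
    # wn: (|l_n|, D) encoder outputs for the same sentence in language n.
    # Euclidean distances are normalized by sqrt(D).
    D = wm.shape[1]
    pairwise = np.linalg.norm(wm[:, None, :] - wn[None, :, :], axis=-1) / np.sqrt(D)
    forward = pairwise.min(axis=1).mean()   # each token of l_m to its nearest in l_n
    backward = pairwise.min(axis=0).mean()  # each token of l_n to its nearest in l_m
    return 0.5 * (forward + backward)

def average_distance(representations):
    # representations: dict language -> (num_tokens, D) array.
    # Averages dist(l_m, l_n) over all unordered pairs, i.e. divides
    # by binom(N, 2).
    pairs = list(combinations(sorted(representations), 2))
    return sum(sentence_distance(representations[m], representations[n])
               for m, n in pairs) / len(pairs)
```

In a full evaluation this would additionally be averaged over all sentences of the multi-way test set; the sketch handles a single multi-way sentence.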
[Figure 5 body omitted: the same sentence in each of the ten Indic languages (bn, gu, hi, kn, ml, mr, or, pa, ta, te), shown as BASE, IPA, ROMANI, and TRANSL input; the script-specific text did not survive extraction intact.]
+ +Figure 5: Example alternative signals of the same sentence in ten Indic languages. Tokens shared across multiple languages are highlighted in blue. Compared to the original input, transliteration significantly increases token overlap. \ No newline at end of file diff --git a/alternativeinputsignalseasetransferinmultilingualmachinetranslation/images.zip b/alternativeinputsignalseasetransferinmultilingualmachinetranslation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..56a3f44b5349507700f066143d3c4cfbfcd3caef --- /dev/null +++ b/alternativeinputsignalseasetransferinmultilingualmachinetranslation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b27fa61423902f1370a8dd4bf9afced1e4e99e9b1f125db40d9ad6805e68cde +size 578743 diff --git a/alternativeinputsignalseasetransferinmultilingualmachinetranslation/layout.json b/alternativeinputsignalseasetransferinmultilingualmachinetranslation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..11534dd15de2ac114b4200801c25ece7bcf3c016 --- /dev/null +++ b/alternativeinputsignalseasetransferinmultilingualmachinetranslation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f6e949a17fdd275d6d8120011e1528e87d065091f558937c1592174eee4f680 +size 362291 diff --git a/americasnlievaluatingzeroshotnaturallanguageunderstandingofpretrainedmultilingualmodelsintrulylowresourcelanguages/fb9eedf0-9d00-4423-a407-29905adfa2fa_content_list.json b/americasnlievaluatingzeroshotnaturallanguageunderstandingofpretrainedmultilingualmodelsintrulylowresourcelanguages/fb9eedf0-9d00-4423-a407-29905adfa2fa_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..7ba6115e3610f587fbd3cf534049642ec7fb59b2 --- /dev/null +++ b/americasnlievaluatingzeroshotnaturallanguageunderstandingofpretrainedmultilingualmodelsintrulylowresourcelanguages/fb9eedf0-9d00-4423-a407-29905adfa2fa_content_list.json @@ -0,0
+1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b7a9dff468cdad48d1746bfdf65372ec95f0d75212b6dfb159d5067b53057108 +size 130505 diff --git a/americasnlievaluatingzeroshotnaturallanguageunderstandingofpretrainedmultilingualmodelsintrulylowresourcelanguages/fb9eedf0-9d00-4423-a407-29905adfa2fa_model.json b/americasnlievaluatingzeroshotnaturallanguageunderstandingofpretrainedmultilingualmodelsintrulylowresourcelanguages/fb9eedf0-9d00-4423-a407-29905adfa2fa_model.json new file mode 100644 index 0000000000000000000000000000000000000000..777390c44564f2db7b4daa497f7c70a74b5b2c72 --- /dev/null +++ b/americasnlievaluatingzeroshotnaturallanguageunderstandingofpretrainedmultilingualmodelsintrulylowresourcelanguages/fb9eedf0-9d00-4423-a407-29905adfa2fa_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a0f560ced2f28caa2297ae661b948991cf5184ddae564d1e098b2ffb65d83c08 +size 158769 diff --git a/americasnlievaluatingzeroshotnaturallanguageunderstandingofpretrainedmultilingualmodelsintrulylowresourcelanguages/fb9eedf0-9d00-4423-a407-29905adfa2fa_origin.pdf b/americasnlievaluatingzeroshotnaturallanguageunderstandingofpretrainedmultilingualmodelsintrulylowresourcelanguages/fb9eedf0-9d00-4423-a407-29905adfa2fa_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4dd48eb350061721b1afbe36e7b0d6c23460c0a2 --- /dev/null +++ b/americasnlievaluatingzeroshotnaturallanguageunderstandingofpretrainedmultilingualmodelsintrulylowresourcelanguages/fb9eedf0-9d00-4423-a407-29905adfa2fa_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:08a8b3f598b80b41d206c5f0e1112f259ba88e4ca96f2612e46886764426115f +size 1557589 diff --git a/americasnlievaluatingzeroshotnaturallanguageunderstandingofpretrainedmultilingualmodelsintrulylowresourcelanguages/full.md b/americasnlievaluatingzeroshotnaturallanguageunderstandingofpretrainedmultilingualmodelsintrulylowresourcelanguages/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..7f4e6a2f2c042b7214c5764402a5d3b6d4e5e66c --- /dev/null +++ b/americasnlievaluatingzeroshotnaturallanguageunderstandingofpretrainedmultilingualmodelsintrulylowresourcelanguages/full.md @@ -0,0 +1,416 @@ +# AmericasNLI: Evaluating Zero-shot Natural Language Understanding of Pretrained Multilingual Models in Truly Low-resource Languages + +Abteen Ebrahimi Manuel Mager Arturo Oncevay Vishrav Chaudhary Luis Chiruzzo Angela Fan John E. Ortega Ricardo Ramos Annette Rios Ivan Meza-Ruiz Gustavo A. Gimenez-Lugo Elisabeth Mager Graham Neubig Alexis Palmer Rolando Coto-Solano Ngoc Thang Vu Katharina Kann Carnegie Mellon University Dartmouth College Microsoft Turing Facebook AI Research New York University Universidad de la Republica, Uruguay Universidad Tecnológica de Tlaxcala Universidad Nacional Autónoma de México Universidad Tecnológica Federal do Paraná University of Colorado Boulder University of Edinburgh $\clubsuit$ University of Stuttgart $\psi$ University of Zurich + +# Abstract + +Pretrained multilingual models are able to perform cross-lingual transfer in a zero-shot setting, even for languages unseen during pretraining. However, prior work evaluating performance on unseen languages has largely been limited to low-level, syntactic tasks, and it remains unclear if zero-shot learning of high-level, semantic tasks is possible for unseen languages. To explore this question, we present AmericasNLI, an extension of XNLI (Conneau et al., 2018) to 10 Indigenous languages of the Americas. We conduct experiments with XLM-R, testing multiple zero-shot and translation-based approaches. Additionally, we explore model adaptation via continued pretraining and provide an analysis of the dataset by considering hypothesis-only models. We find that XLM-R's zero-shot performance is poor for all 10 languages, with an average performance of $38.48\%$ . Continued pretraining offers improvements, with an average accuracy of $43.85\%$ . 
Surprisingly, training on poorly translated data by far outperforms all other methods with an accuracy of $49.12\%$ . + +# 1 Introduction + +Pretrained multilingual models such as XLM (Lample and Conneau, 2019), multilingual BERT (mBERT; Devlin et al., 2019), and XLM-R (Conneau et al., 2020) achieve strong cross-lingual transfer results for many languages and natural language processing (NLP) tasks. However, there exists a discrepancy in terms of zero-shot performance between languages present in the pretraining data and those that are not: performance is generally highest for well-represented languages and decreases with less representation. Yet, even for unseen languages, performance is generally above chance, and model adaptation approaches have been shown to yield + +
| Language | ISO | Family | Dev | Test |
| --- | --- | --- | --- | --- |
| Aymara | aym | Aymaran | 743 | 750 |
| Asháninka | cni | Arawak | 658 | 750 |
| Bribri | bzd | Chibchan | 743 | 750 |
| Guaraní | gn | Tupi-Guarani | 743 | 750 |
| Nahuatl | nah | Uto-Aztecan | 376 | 738 |
| Otomí | oto | Oto-Manguean | 222 | 748 |
| Quechua | quy | Quechuan | 743 | 750 |
| Rarámuri | tar | Uto-Aztecan | 743 | 750 |
| Shipibo-Konibo | shp | Panoan | 743 | 750 |
| Wixarika | hch | Uto-Aztecan | 743 | 750 |
+ +Table 1: The languages in AmericasNLI, along with their ISO codes, language families, and dataset sizes. + +further improvements (Muller et al., 2020; Pfeiffer et al., 2020a,b; Wang et al., 2020). + +Importantly, however, there are currently no datasets for high-level, semantic tasks which focus solely on low-resource languages. As these languages are most likely to be unseen to commonly used pretrained models, practically all work evaluating unseen language performance and language adaptation methods has been limited to low-level, syntactic tasks such as part-of-speech tagging, dependency parsing, and named-entity recognition (Muller et al., 2020; Wang et al., 2020). This largely limits our ability to draw more general conclusions with regards to the zero-shot learning abilities of pretrained multilingual models for unseen languages. + +In this work, we introduce AmericasNLI, an extension of XNLI (Conneau et al., 2018) – a natural language inference (NLI; cf. §2.3) dataset covering 15 high-resource languages – to 10 Indigenous languages spoken in the Americas: Asháninka, Aymara, Bribri, Guarani, Nahuatl, Otomí, Quechua, Raramuri, Shipibo-Konibo, and Wixarika. All of them are truly low-resource languages: they have little to no digitally available labeled or unlabeled + +data, and they are not typically studied by the mainstream NLP community. The goal of this work is two-fold: First, we hope to increase the visibility of these languages by providing a portion of the resources necessary for NLP research. Second, we aim to allow for a more comprehensive study of multilingual model performance on unseen languages, where improvements will help extend the reach of NLP techniques to a larger set of languages. We are specifically interested in the following research questions: (1) Do pretrained multilingual models still perform above random chance for a high-level, semantic task in an unseen language? 
(2) Do methods aimed at adapting models to unseen languages – previously exclusively evaluated on low-level, syntactic tasks – also increase performance on NLI? (3) Are translation-based approaches effective for truly low-resource languages, where translation quality is typically very poor? + +We experiment with XLM-R, both with and without model adaptation via continued pretraining on monolingual corpora in the target language. Our results show that the performance of XLM-R out-of-the-box is moderately above chance, and model adaptation leads to improvements of up to 5.86 percentage points. Training on machine-translated training data, however, results in an even larger performance gain of 11.13 percentage points over the corresponding XLM-R model without adaptation. We further perform an analysis via experiments with hypothesis-only models, to examine potential artifacts which may have been inherited from XNLI and find that performance is above chance for most models, but still below that for using the full example. + +AmericasNLI is publicly available2 and we hope that it will serve as a benchmark for measuring the zero-shot natural language understanding abilities of multilingual models for unseen languages. Additionally, we hope that our dataset will motivate the development of novel pretraining and model adaptation techniques which are suitable for truly low-resource languages. + +# 2 Background and Related Work + +# 2.1 Pretrained Multilingual Models + +Prior to the widespread use of pretrained transformer models, cross-lingual transfer was mainly + +achieved through word embeddings (Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2017), either by aligning monolingual embeddings into the same embedding space (Lample et al., 2018b,a; Grave et al., 2018) or by training multilingual embeddings (Ammar et al., 2016; Artetxe and Schwenk, 2019). 
Pretrained multilingual models represent the extension of multilingual embeddings to pretrained transformer models.

These models follow the standard pretraining-finetuning paradigm: they are first trained on unlabeled monolingual corpora from various languages (the pretraining languages) and later finetuned on target-task data in a (usually high-resource) source language. Having been exposed to a variety of languages through this training setup, cross-lingual transfer results for these models are competitive with the state of the art for many languages and tasks. Commonly used models are mBERT (Devlin et al., 2019), which is pretrained on the Wikipedias of 104 languages with masked language modeling (MLM) and next sentence prediction (NSP), and XLM, which is trained on 15 languages and introduces the translation language modeling objective, which is based on MLM but uses pairs of parallel sentences. XLM-R improves over XLM and is trained on data from 100 different languages with only the MLM objective. Common to all of these models is a large shared subword vocabulary created using either BPE (Sennrich et al., 2016) or SentencePiece (Kudo and Richardson, 2018) tokenization.

# 2.2 Evaluating Pretrained Multilingual Models

Just as in the monolingual setting, where benchmarks such as GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019) provide a look into the performance of models across various tasks, multilingual benchmarks (Hu et al., 2020; Liang et al., 2020) cover a wide variety of tasks involving sentence structure, classification, retrieval, and question answering.

Additional work has examined what mechanisms allow multilingual models to transfer across languages (Pires et al., 2019; Wu and Dredze, 2019). Wu and Dredze (2020) examine transfer performance dependent on a language's representation in the pretraining data.
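The masked language modeling objective behind these models (and behind the continued-pretraining adaptation discussed in this work) can be sketched minimally: a fraction of token positions (15% in the usual recipe) is hidden, and the model is trained to recover the originals. The sketch below is a toy simplification, with a whitespace tokenizer standing in for BPE/SentencePiece and every selected position replaced by the mask token (real implementations also sometimes keep or randomly replace selected tokens):

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_rate=0.15, seed=0):
    """Hide ~mask_rate of the positions; return the corrupted sequence
    plus a position -> original-token map (the MLM prediction targets)."""
    rng = random.Random(seed)
    n = max(1, int(len(tokens) * mask_rate))  # mask at least one position
    positions = rng.sample(range(len(tokens)), n)
    masked, targets = list(tokens), {}
    for i in positions:
        targets[i] = masked[i]
        masked[i] = MASK
    return masked, targets

tokens = "the quick brown fox jumps over the lazy dog".split()
masked, targets = mask_tokens(tokens)
print(masked)
print(targets)
```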
For languages with low representation, multiple methods have been proposed to improve performance, including extending the vocabulary, transliterating the target text, and continuing pretraining before finetuning (Lauscher et al., 2020; Chau et al., 2020; Muller et al., 2020; Pfeiffer et al., 2020a,b; Wang et al., 2020). In this work, we focus on continued pretraining to analyze the performance of model adaptation for a high-level, semantic task.

# 2.3 Natural Language Inference

Given two sentences, the premise and the hypothesis, the task of NLI consists of determining whether the hypothesis logically entails, contradicts, or is neutral to the premise. The most widely used datasets for NLI in English are SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018). XNLI (Conneau et al., 2018) is the multilingual expansion of MNLI to 15 languages, providing manually translated evaluation sets and machine-translated training sets. While datasets for NLI or the similar task of recognizing textual entailment exist for other languages (Bos et al., 2009; Alabbas, 2013; Eichler et al., 2014; Amirkhani et al., 2020), their lack of similarity prevents a generalized study of cross-lingual zero-shot performance. This is in contrast to XNLI, where examples for all 15 languages are parallel. To preserve this property of XNLI, when creating AmericasNLI, we choose to translate Spanish XNLI as opposed to creating examples directly in the target language.

However, NLI datasets are not without issue: Gururangan et al. (2018) show that artifacts from the creation of MNLI allow models to classify examples based only on the hypothesis, suggesting that models may not be reasoning as expected. Motivated by this, we provide further analysis of AmericasNLI in Section 6 by comparing the performance of hypothesis-only models to models trained on full examples.
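The three-way formulation above maps naturally onto a small record type plus a fixed label inventory. A minimal sketch (the field names and frozen-dataclass representation are illustrative choices, not the official XNLI schema):

```python
from dataclasses import dataclass

# XNLI-style three-way label inventory.
LABELS = ("entailment", "neutral", "contradiction")

@dataclass(frozen=True)
class NLIExample:
    premise: str
    hypothesis: str
    label: str

    def label_id(self) -> int:
        """Integer class id, as fed to a 3-way classification head."""
        return LABELS.index(self.label)

ex = NLIExample(
    premise="And he said, Mama, I'm home.",
    hypothesis="He told his mom he had gotten home.",
    label="entailment",
)
print(ex.label_id())  # entailment maps to class 0 in this inventory
```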
+ +# 3 AmericasNLI + +# 3.1 Data Collection Setup + +AmericasNLI is the translation of a subset of XNLI (Conneau et al., 2018). As translators between Spanish and the target languages are more frequently available than those for English, we translate from the Spanish version. Additionally, some translators reported that code-switching is often used to describe certain topics, and, while many words without an exact equivalence in the target language are worked in through translation or interpretation, others are kept in Spanish. To minimize + +the amount of Spanish vocabulary in the translated examples, we choose sentences from genres that we judged to be relatively easy to translate into the target languages: "face-to-face," "letters," and "telephone." We choose up to 750 examples from each of the development and test set, with exact counts for each language in Table 1. + +# 3.2 Languages + +We now discuss the languages in AmericasNLI. For additional background on previous NLP research on Indigenous languages of the Americas, we refer the reader to Mager et al. (2018). A summary of this information can be found in Table C.1. + +Aymara Aymara is a polysynthetic Amerindian language spoken in Bolivia, Chile, and Peru by over two million people (Homola, 2012). Aymara follows an SOV word order and has multiple dialects, including Northern and Southern Aymara, spoken on the southern Peruvian shore of Lake Titicaca as well as around La Paz and, respectively, in the eastern half of the Iquique province in northern Chile, the Bolivian department of Oruro, in northern Potosi, and southwest Cochabamba. AmericasNLI examples are translated into the Central Aymara variant, specifically Aymara La Paz. + +Asháninka Asháninka is an Amazonian language from the Arawak family, spoken by 73,567 people³ in Central and Eastern Peru, in a geographical region located between the eastern foothills of the Andes and the western fringe of the Amazon basin (Mihas, 2017). 
Asháninka is an agglutinating and polysynthetic language with a VSO word order. + +Bribri Bribri is a Chibchan language spoken by 7,000 people in Southern Costa Rica (INEC, 2011). It has three dialects, and while it is still spoken by children, it is currently a vulnerable language (Moseley, 2010; Sánchez Avendaño, 2013). Bribri is a tonal language with SOV word order. There are several orthographies which use different diacritics for the same phenomena, however even for researchers who use the same orthography, the Unicode encoding of similar diacritics differs amongst authors. Furthermore, the dialects of Bribri differ in their exact vocabularies, and there are phonological processes, like the deletion of unstressed vowels, which also change the tokens found in texts. As + +
| Language | Premise | Hypothesis |
| --- | --- | --- |
| en | And he said, Mama, I'm home. | He told his mom he had gotten home. |
| es | Y él dijo: Mamá, estoy en casa. | Le dijo a su madre que había llegado a casa. |
| aym | Jupax sanwa: Mamita, utankastwa. | Utar purinxtwa sasaw mamaparux sanxa |
| bzd | Ena ie' iche: âmi, ye' tso' ù a. | I âmi a iche irir tó ye' démine ù a. |
| cni | Iriori ikantiro: Ina, nosaiki pankotsiki. | Ikantiro iriniro yaretaja pankotsiki. |
| gn | Ha ha'e he'i: Mama, awe ogape. | He'íkuri isype oghuhēhague hógape. |
| hch | metá mik+ petay+: ne mama kitá nepa yéka. | yu mama m+pa+ p+ra h+awe kai kename yu kitá he nuakai. |
| nah | huan yehhua quiihtoh: Nonantzin, niyetoc nochan | quilih inantzin niehcoquia |
| oto | xi nydi biñnà: maMe dimi an ngû | bimâbi o ini maMe guè o ngû |
| quy | Hinaptinmi pay nirqa: Mamay wasipim kachkani. | Wasinman chayasqanmanta mamanta willarqa. |
| shp | Jara neskata iki: tita, xobonkoriki ea. | Jawen tita yoiia iki moa xobon nokota. |
| tar | A'lí je aníli échiko: ku bitichí ne atíki Nana | Iyéla ku ruyéli, mapu bitichí ku nawáli. |
+ +Table 2: A parallel example in AmericasNLI with the entailment label. + +Bribri has only been a written language for about 40 years, existing materials have a large degree of idiosyncratic variation. These variations are standardized in AmericasNLI, which is written in the Amubri variant. + +Guaraní Guaraní is spoken by between 6 to 10 million people in South America and roughly 3 million people use it as their main language, including more than 10 native nations in Paraguay, Brazil, Argentina, and Bolivia, along with Paraguayan, Argentinian, and Brazilian peoples. According to the Paraguayan Census, in 2002 there were around 1.35 million monolingual speakers, which has since increased to around 1.5 million people (Dos Santos, 2017; Melia, 1992). Although the use of Guaraní as spoken language is much older, the first written record dates to 1591 (Catechism) followed by the first dictionary in 1639 and linguistic descriptions in 1640. The official grammar of Guaraní was approved in 2018. Guaraní is an agglutinative language, with ample use of prefixes and suffixes. + +Nahuatl Nahuatl belongs to the Nahuan subdivision of the Uto-Aztecan language family. There are 30 recognized variants of Nahuatl spoken by over 1.5 million speakers across Mexico, where Nahuatl is recognized as an official language (SEGOB, 2020b). Nahuatl is polysynthetic and agglutinative, and many sentences have an SVO word order or, for contrast and focus, a VSO order, and for emphasis, an SOV order (MacSwan, 1998). The + +translations in AmericasNLI belong to the Central Nahuatl (Náhuatl de la Huasteca) dialect. As there is a lack of consensus regarding the orthographic standard, the orthography is normalized to a version similar to Classical Nahuatl. + +Otomí Otomí belongs to the Oto-Pamean language family and has nine linguistic variants with different regional self-denominations. 
Otomí is a tonal language following an SVO order, and there are around 307,928 speakers spread across 7 Mexican states. In the state of Tlaxcala, the yuhmu or ñuhmu variant is spoken by fewer than 100 speakers, and we use this variant for the Otomí examples in AmericasNLI. + +Quechua Quechua, or Runasimi, is an Indigenous language family spoken primarily in the Peruvian Andes. It is the most widely spoken pre-Columbian language family of the Americas, with around 8-10 million speakers. Approximately $25\%$ (7.7 million) of Peruvians speak a Quechuan language, and it is the co-official language in many regions of Peru. There are multiple subdivisions of Quechua, and AmericasNLI examples are translated into the standard version of Southern Quechua, Quechua Chanka, also known as Quechua Ayacucho, which is spoken in different regions of Peru and can be understood in different areas of other countries, such as Bolivia or Argentina. In AmericasNLI, the apostrophe and pentavocalism from other regions are not used. + +Raramuri Raramuri, also known as Tarahumara, which means light foot (INALI, 2017), belongs + +
| | | aym | bzd | cni | gn | hch | nah | oto | quy | shp | tar |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ChrF | es→XX | 0.19 | 0.08 | 0.10 | 0.22 | 0.13 | 0.18 | 0.06 | 0.33 | 0.14 | 0.05 |
| | XX→es | 0.09 | 0.06 | 0.09 | 0.14 | 0.07 | 0.10 | 0.06 | 0.14 | 0.09 | 0.08 |
| BLEU | es→XX | 0.30 | 0.54 | 0.03 | 3.26 | 3.18 | 0.33 | 0.01 | 1.58 | 0.34 | 0.01 |
| | XX→es | 0.04 | 0.01 | 0.01 | 0.18 | 0.01 | 0.02 | 0.02 | 0.05 | 0.01 | 0.01 |
+ +Table 3: Translation performance for all target languages. $es \rightarrow XX$ represents translating into the target language, which is used for translate-train, and $XX \rightarrow es$ represents translating into Spanish, used for translate-test. + +to the Tarakahitan subgroup of the Uto-Aztecan language family (Goddard, 1996), and is polysynthetic and agglutinative. Raramuri is an official language of Mexico, spoken mainly in the Sierra Madre Occidental region by a total of 89,503 speakers (SEGOB, 2020c). AmericasNLI examples are translated into the Highlands variant (INALI, 2009), and translation orthography and word boundaries are similar to Caballero (2008). + +Shipibo-Konibo Shipibo-Konibo is a Panoan language spoken by around 35,000 native speakers in the Amazon region of Peru. Shipibo-Konibo uses an SOV word order (Faust, 1973) and postpositions (Vasquez et al., 2018). The translations in AmericasNLI make use of the official alphabet and standard writing supported by the Ministry of Education in Peru. + +Wixarika The Wixarika, or Huichol, language, meaning the language of the doctors and healers (Lumholtz, 2011), is a language in the Corachol subgroup of the Uto-Aztecan language family (Campbell, 2000). Wixarika is a national language of Mexico with four variants, spoken by a total of around 47,625 speakers (SEGOB, 2020a). Wixarika is a polysynthetic language and follows an SOV word order. Translations in AmericasNLI are in Northern Wixarika and use an orthography common among native speakers (Mager-Hois, 2017). + +# 4 Experiments + +In this section, we detail the experimental setup we use to evaluate the performance of various approaches on AmericasNLI. + +# 4.1 Zero-Shot Learning + +Pretrained Model We use XLM-R (Conneau et al., 2020) as the pretrained multilingual model in our experiments. The architecture of XLM-R is based on RoBERTa (Liu et al., 2019), and it is trained using MLM on web-crawled data in 100 + +languages. 
It uses a shared vocabulary consisting of 250k subwords, created using SentencePiece (Kudo and Richardson, 2018) tokenization. We use the Base version of XLM-R for our experiments. + +Adaptation Methods To adapt XLM-R to the various target languages, we continue training with the MLM objective on monolingual text in the target language before finetuning. To keep a fair comparison with other approaches, we only use target data which was also used to train the translation models, which we describe in Section 4.2. However, we note that one benefit of continued pretraining for adaptation is that it does not require parallel text, and could therefore benefit from text which could not be used for a translation-based approach. For continued pretraining, we use a batch size of 32 and a learning rate of 2e-5. We train for a total of 40 epochs. Each adapted model starts from the same version of XLM-R, and is adapted individually to each target language, which leads to a different model for each language. We denote models adapted with continued pretraining as +MLM. + +Finetuning To finetune XLM-R, we follow the approach of Devlin et al. (2019) and use an additional linear layer. We train on either the English MNLI data or the machine-translated Spanish data, and we call the final models XLM-R (en) and XLM-R (es), respectively. Following Hu et al. (2020), we use a batch size of 32 and a learning rate of 2e-5. We train for a maximum of 5 epochs, and evaluate performance every 2500 steps on the XNLI development set. We employ early stopping with a patience of 15 evaluation steps and use the best performing checkpoint for the final evaluation. All finetuning is done using the Huggingface Transformers library (Wolf et al., 2020) with up to two Nvidia V100 GPUs. Using Lacoste et al. (2019), we estimate total carbon emissions to be $75.6\mathrm{kgCO_2eq}$ . + +
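The early-stopping rule just described (evaluate every 2500 steps, stop after 15 evaluations without improvement, keep the best checkpoint) reduces to a small amount of bookkeeping. A self-contained sketch; the metric stream and the small patience value below are purely illustrative:

```python
def best_checkpoint(dev_accuracies, patience=15):
    """Return (best_index, best_score) under patience-based early stopping.

    dev_accuracies: dev-set scores, one per evaluation (e.g., every 2500 steps).
    Training stops once `patience` consecutive evaluations pass without a
    new best score; the best checkpoint seen so far is kept.
    """
    best_i, best = 0, float("-inf")
    since_best = 0
    for i, acc in enumerate(dev_accuracies):
        if acc > best:
            best_i, best = i, acc
            since_best = 0
        else:
            since_best += 1
            if since_best >= patience:
                break  # stop training; keep the best checkpoint so far
    return best_i, best

# Illustrative metric stream: improvement, then a plateau that triggers the stop.
scores = [0.35, 0.38, 0.41, 0.40, 0.40, 0.39]
print(best_checkpoint(scores, patience=3))  # -> (2, 0.41)
```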
| | aym | bzd | cni | gn | hch | nah | oto | quy | shp | tar | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Majority baseline | 33.33 | 33.33 | 33.33 | 33.33 | 33.33 | 33.47 | 33.42 | 33.33 | 33.33 | 33.33 | - |
| **Zero-shot** | | | | | | | | | | | |
| XLM-R (en) | 36.13±0.88 | 39.65±0.89 | 37.91±0.82 | 39.47±1.14 | 37.20±1.32 | 42.59±0.34 | 37.79±0.78 | 37.24±1.78 | 40.45±0.89 | 36.36±1.07 | 38.48±1.05 |
| XLM-R (es) | 37.25±2.33 | 39.38±1.96 | 37.29±1.12 | 39.25±1.55 | 35.82±1.01 | 38.98±1.38 | 38.32±1.47 | 39.51±1.92 | 38.40±0.87 | 35.73±0.69 | 37.99±1.51 |
| **Zero-shot w/ adaptation** | | | | | | | | | | | |
| XLM-R +MLM (en) | 43.51±1.69 | 38.13±1.75 | 39.47±1.19 | 52.44±0.93 | 37.25±2.60 | 46.21±0.72 | 37.03±3.28 | 61.78±2.42 | 41.34±0.61 | 39.82±0.95 | 43.70±1.83 |
| XLM-R +MLM (es) | 43.87±0.14 | 40.05±2.20 | 38.76±0.08 | 52.27±1.20 | 37.82±1.59 | 44.17±1.76 | 40.55±1.07 | 62.40±1.44 | 40.18±0.95 | 38.45±0.86 | 43.85±1.30 |
| **Translate-train** | | | | | | | | | | | |
| XLM-R | 50.00±1.51 | 51.42±1.24 | 42.45±1.63 | 58.89±2.70 | 43.20±2.07 | 55.33±1.12 | 36.01±0.74 | 59.91±0.20 | 52.00±0.27 | 42.04±1.81 | 49.12±1.52 |
| **Translate-test** | | | | | | | | | | | |
| XLM-R | 39.73±0.27 | 40.40±0.13 | 34.71±0.73 | 46.62±2.29 | 38.00±0.48 | 41.37±0.16 | 35.29±1.15 | 51.38±1.24 | 39.51±0.47 | 35.16±0.97 | 40.22±1.01 |
+ +Table 4: Results for zero-shot, translate-train, and translate-test averaged over 3 runs with different seeds. The majority baseline represents expected performance when predicting only the majority class of the test set. Random guessing would result in an accuracy of $33.33\%$ . Standard deviations in the Avg. column are calculated by taking the square root of the average variance of the languages in that row. + +# 4.2 Translation-based Approaches + +We also experiment with two translation-based approaches, translate-train and translate-test, detailed below along with the translation model used. + +Translation Models For our translation-based approaches, we train two sets of translation models: one to translate from Spanish into the target language, and one in the opposite direction. We use transformer sequence-to-sequence models (Vaswani et al., 2017) with the hyperparameters proposed by Guzmán et al. (2019). Parallel data used to train the translation models can be found in Table B.1. We employ the same model architecture for both translation directions, and we measure translation quality in terms of BLEU (Papineni et al., 2002) and ChrF (Popović, 2015), cf. Table 3. We use fairseq (Ott et al., 2019) to implement all translation models. + +Translate-train For the translate-train approach, the Spanish training data provided by XNLI is translated into each target language. It is then used to finetune XLM-R for each language individually. Along with the training data, we also translate the Spanish development data, which is used for validation and early stopping. We discuss the effects of using a translated development set in Section F.1. Notably, we find that the finetuning hyperparameters defined above do not reliably allow the model to converge for many of the target languages. 
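The aggregation rule in the Table 4 caption (Avg. is the mean over languages, and its deviation is the square root of the average per-language variance) can be checked directly from the reported numbers; the values below are the XLM-R (en) zero-shot row:

```python
from math import sqrt

# Per-language means and standard deviations for the XLM-R (en) zero-shot
# row of Table 4 (10 target languages, 3 runs each).
means = [36.13, 39.65, 37.91, 39.47, 37.20, 42.59, 37.79, 37.24, 40.45, 36.36]
stds = [0.88, 0.89, 0.82, 1.14, 1.32, 0.34, 0.78, 1.78, 0.89, 1.07]

avg = sum(means) / len(means)
# Aggregate std = square root of the average variance across languages.
pooled_std = sqrt(sum(s ** 2 for s in stds) / len(stds))

print(f"{avg:.2f} ± {pooled_std:.2f}")  # 38.48 ± 1.05, matching Table 4
```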
To + +find suitable hyperparameters, we tune the batch size and learning rate by conducting a grid search over $\{5\mathrm{e} - 6,2\mathrm{e} - 5,1\mathrm{e} - 4\}$ for the learning rate and $\{32,64,128\}$ for the batch size. In order to select hyperparameters which work well across all languages, we evaluate each run using the average performance on the machine-translated Aymara and Guarani development sets, as these languages have moderate and high ChrF scores, respectively. We find that decreasing the learning rate to $5\mathrm{e} - 6$ and keeping the batch size at 32 yields the best performance. Other than the learning rate, we use the same approach as for zero-shot finetuning. + +Translate-test For the translate-test approach, we translate the test sets of each target language into Spanish. This allows us to apply the model finetuned on Spanish, XLM-R (es), to each test set. Additionally, a benefit of translate-test over translate-train and the adapted XLM-R models is that we only need to finetune once overall, as opposed to once per language. For evaluation, we use the checkpoint with the highest performance on the Spanish XNLI development set. + +# 5 Results and Discussion + +Zero-shot Models We present our results in Table 4. Results for the development set are presented in Table E.1. Zero-shot performance is low for all 10 languages, with an average accuracy of $38.48\%$ and $37.99\%$ for the English and Spanish model, respectively. However, in all cases the performance is higher than the majority baseline. As shown in + +
| | FT | aym | bzd | cni | gn | hch | nah | oto | quy | shp | tar | Avg. | Avg.+P |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Majority baseline | - | 33.33 | 33.33 | 33.33 | 33.33 | 33.33 | 33.47 | 33.42 | 33.33 | 33.33 | 33.33 | - | - |
| **Zero-shot** | | | | | | | | | | | | | |
| XLM-R (en) | 62.34 | 33.60 | 33.47 | 32.40 | 33.47 | 34.13 | 33.06 | 32.35 | 33.33 | 33.60 | 34.27 | 33.37 | 38.48 |
| XLM-R (es) | 62.26 | 34.13 | 34.80 | 35.33 | 35.33 | 34.53 | 33.60 | 33.16 | 33.07 | 36.80 | 35.73 | 34.65 | 37.99 |
| **Zero-shot w/ adaptation** | | | | | | | | | | | | | |
| XLM-R +MLM (en) | - | 37.07 | 32.80 | 33.07 | 42.40 | 33.73 | 34.55 | 33.96 | 44.40 | 35.33 | 34.80 | 36.21 | 43.70 |
| XLM-R +MLM (es) | - | 36.27 | 34.80 | 33.73 | 41.73 | 34.00 | 35.37 | 32.89 | 47.87 | 35.60 | 34.67 | 36.69 | 43.85 |
| **Translate-train** | | | | | | | | | | | | | |
| XLM-R | - | 44.93 | 43.73 | 43.47 | 47.60 | 43.07 | 45.80 | 35.83 | 52.13 | 46.27 | 39.47 | 44.23 | 49.12 |
| **Translate-test** | | | | | | | | | | | | | |
| XLM-R | - | 36.53 | 42.67 | 37.33 | 43.60 | 38.53 | 43.22 | 34.22 | 48.13 | 42.67 | 34.67 | 40.16 | 40.22 |
Table 5: Hypothesis-only results. The Avg. column represents the average of the hypothesis-only results, while the Avg.+P column, taken from Table 4, represents the average over the languages when using both the premise and hypothesis.

Table E.3 in the appendix, the same models achieve an average of $74.20\%$ and $75.35\%$ accuracy, respectively, when evaluated on the 15 XNLI languages.

Interestingly, even though code-switching with Spanish is encountered in many target languages, finetuning on Spanish labeled data on average slightly underperforms the model trained on English; however, performance is better for 3 of the languages. The English model achieves its highest accuracy, $42.59\%$, when evaluated on Nahuatl, while the Spanish model achieves its highest accuracy, $39.51\%$, when evaluated on Quechua. The lowest performance is achieved when evaluating on Aymara and Raramuri, for the English and Spanish model, respectively.

We find that model adaptation via continued pretraining improves both models, with an average gain of 5.22 percentage points for English and 5.86 percentage points for Spanish. Notably, continued pretraining increases performance for Quechua by 24.53 percentage points when finetuning on English, and 22.89 points when finetuning on Spanish. Performance decreases for Bribri and Otomí when finetuning on English; however, performance for all languages improves when using Spanish.

Translate-test Performance of the translate-test model improves over both zero-shot baselines. We see the largest increase in performance for Guaraní and Quechua, with gains of 7.16 and 11.87 points, respectively, over the best performing zero-shot model without adaptation. Considering the translation metrics in Table 3, models for Guaraní and Quechua achieve the two highest scores for both
On average, translate-test does worse when compared to the adapted zero-shot models, and in all but two cases, both adapted models perform better than translate-test. We hypothesize that translate-test is more sensitive to noise in the translated data; sentences may lose too much of their original content, preventing correct classification. + +Translate-train The most surprising result is that of translate-train, which considerably outperforms the performance of translate-test for all languages, and outperforms the zero-shot models for all but two languages. Compared to the best non-adapted zero-shot model, the largest performance gain is 20.40 points for Quechua. For the language with the lowest performance, Otomí, translate-train performs 2.32 points worse than zero-shot; however, it still outperforms translate-test. When averaged across all languages, translate-train outperforms the English zero-shot model by 10.64 points, and translate-test by 8.9 points. It is important to note that the translation performance from Spanish to each target language is not particularly high: when considering ChrF scores, the highest is 0.33, and the highest BLEU score is 3.26. Performance of both translation-based models is correlated with ChrF scores, with a Pearson correlation coefficient of 0.82 and 0.83 for translate-train and translate-test. Correlations are not as strong for BLEU, with coefficients of 0.37 and 0.59. + +The sizable difference in performance between translate-train and the other methods suggests that translation-based approaches may be a valuable asset for cross-lingual transfer, especially for low + +resource languages. While the largest downsides to this approach are the requirement for parallel data and the need for multiple models, the potential performance gain over other approaches may prove worthwhile. 
Additionally, we believe that the performance of both translation-based approaches would improve given a stronger translation system, and future work detailing the necessary level of translation quality for the best performance would offer great practical usefulness for NLP applications for low-resource languages. + +# 6 Analysis + +# 6.1 Hypothesis-only Models + +As shown by Gururangan et al. (2018), SNLI and MNLI – the datasets AmericasNLI is based on – contain artifacts created during the annotation process which models exploit to artificially inflate performance. To analyze whether similar artifacts exist in AmericasNLI and if they can also be exploited, we train and evaluate models using only the hypothesis, and present results in Table 5. We can see that the average performance across languages is better than chance for all models except for XLM-R without adaptation. Translate-train obtains the highest result with $44.23\%$ accuracy, and as shown in Table E.2, hypothesis-only performance of translate-test is higher than standard performance for 5 languages. Thus, as with SNLI and MNLI, artifacts in the hypotheses can be used to predict, to some extent, the correct labels. However all but 1 zero-shot and translate-train models perform better in the standard setting, indicating that the models are learning something beyond just exploiting artifacts in the hypotheses, even with the additional challenge of unseen languages. + +# 6.2 Case Study: Human Evaluation + +Following Conneau et al. (2018), AmericasNLI was created by translating sentences individually, in order to prevent additional context being added into the hypotheses. However, this strategy may break the original semantic relationship between the premise and the hypothesis. Furthermore, for some examples the logical relationship may be dependent on context or subtext which can be lost through translation, or simply not make sense in the target language. 
To verify the validity of the labels of AmericasNLI, we conduct a human evaluation experiment, focusing on examples translated to Bribri. We create a balanced, random sample + +of 450 examples taken from the Bribri development set. An annotator familiar with the task was then asked to classify the pairs of sentences. For comparison, we also annotate parallel examples taken from the English and Spanish development sets. For Bribri, we recover the original XNLI label for $76.44\%$ of examples. For English and Spanish, we achieve $81.78\%$ and $71.56\%$ accuracy, respectively. Due to the relatively small differences in performance across languages, we conclude that translation to Bribri has a minimal effect on the semantic relationship between the premise and the hypothesis. + +# 7 Limitations and Future Work + +While the case study above provides strong evidence for the validity of our Bribri examples, we cannot currently generalize this claim to the remaining languages. For future work, we plan on extending our human evaluation to more languages and provide a more detailed analysis. + +Additionally, due to the limited availability of annotators and the difficulties of translation for languages that are less frequently studied, the size of the AmericasNLI test set is relatively small. As such, care must be taken to carefully evaluate conclusions drawn using the dataset; following Card et al. (2020) we present a power analysis of our results in Section D.1. Future work expanding the dataset size will help create a stronger baseline. Furthermore, while we do not make any model-specific assumptions in our experiments, our results are based on only one pretrained model and adaptation method. Methods using vocabulary extension or adapters may offer additional improvements. Similarly, other pretrained models could perform differently, depending on, e.g., the model size or the set of languages in their pretraining data. 
In Table F.3, we present results using XLM-R Large, and find that, while the relationship between the approaches differs from the main experiments, the overall highest average performance is still achieved by the translate-train approach with XLM-R Base. We provide a longer discussion in Section F.3. + +# 8 Conclusion + +To better understand the zero-shot abilities of pretrained multilingual models for semantic tasks in unseen languages, we present AmericasNLI, a parallel NLI dataset covering 10 low-resource lan + +guages indigenous to the Americas. We conduct experiments with XLM-R, and find that the model's zero-shot performance, while better than a majority baseline, is poor. However, it can be improved by model adaptation via continued pretraining. Additionally, we find that translation-based approaches outperform a zero-shot approach, which is surprising given the low quality of the employed translation systems. We hope that this work will not only spur further research into improving model adaptation to unseen languages, but also motivate the creation of more resources for languages not frequently studied by the NLP community. + +# Ethics Statement + +In this work, we present a new dataset created through the translation of an existing resource, XNLI (Conneau et al., 2018). While this allows for results that are directly comparable, it also means that this dataset inherits any biases and flaws which are contained in the previous dataset. Furthermore, research involving languages spoken by Indigenous communities raises ethical concerns regarding the exploitation of these languages and communities: it is crucial that members of the community are able to directly benefit from the research. Translation for AmericasNLI was done by either paper authors or translators who were compensated at a rate based on the average rate for translation and the minimum wage in their country of residence. 
Additionally, many authors are members of, and/or have a record of close work with communities who speak a language contained in AmericasNLI. + +# Acknowledgments + +We thank the following people for their work on the translations: Francisco Morales for Bribri, Feliciano Torres Ríos for Asháninka, Perla Alvarez Britez for Guaraní, Silvino González de la Cruz for Wixarika, Giovany Martínez Sebastián, Pedro Kapoltitan, and José Antonio for Nahuatl, José Mateo Lino Cajero Velázquez for Otomí, Liz Chávez for Shipibo-Konibo, and María del Cármen Sotelo Holguín for Raramuri. We would also like to thank Dallas Card for his help with power analysis. This work would not have been possible without the financial support of Facebook AI Research, Microsoft Research, Google Research, the Institute of Computational Linguistics at the University of Zurich, the NAACL Emerging Regions Fund, Comunidad Elotl, and Snorkel AI. + +# References + +Zeljko Agić and Ivan Vulić. 2019. JW300: A wide-coverage parallel corpus for low-resource languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3204–3210, Florence, Italy. Association for Computational Linguistics. +Maytham Alabbas. 2013. A dataset for Arabic textual entailment. In Proceedings of the Student Research Workshop associated with RANLP 2013, pages 7-13, Hissar, Bulgaria. INCOMA Ltd. Shoumen, BULGARIA. +Hossein Amirkhani, Mohammad AzariJafari, Azadeh Amirak, Zohreh Pourjafari, Soroush Faridan Jahromi, and Zeinab Kouhkan. 2020. Farstail: A persian natural language inference dataset. ArXiv, abs/2009.08820. +Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A. Smith. 2016. Massively multilingual word embeddings. ArXiv, abs/1602.01925. +M. Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zero-shot crosslingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597-610. 
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.
Johan Bos, Fabio Massimo Zanzotto, and M. Pennacchiotti. 2009. Textual entailment at EVALITA 2009.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.
David Brambila. 1976. Diccionario raramuri - castellano: Tarahumar.
Gabriela Caballero. 2008. Choguita raramuri (tarahumara) phonology and morphology.
M. Cajero. 1998. Raíces del Otomí: Diccionario. Gobierno del Estado de Tlaxcala.
Mateo Cajero. 2009. Historia de los Otomías en Ixtenco, volume 1. Instituto Tlaxcalteca de la Cultura, Tlaxcala, México.
Lyle Campbell. 2000. American Indian languages: the historical linguistics of Native America. Oxford University Press.
D. Card, Peter Henderson, Urvashi Khandelwal, Robin Jia, Kyle Mahowald, and Dan Jurafsky. 2020. With little power comes great responsibility. In EMNLP.
Ethan C. Chau, Lucy H. Lin, and Noah A. Smith. 2020. Parsing with multilingual BERT, a small corpus, and a small treebank. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 1324-1334, Online. Association for Computational Linguistics.
Luis Chiruzzo, Pedro Amarilla, Adolfo Ríos, and Gustavo Giménez Lugo. 2020. Development of a Guarani - Spanish parallel corpus. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 2629-2633, Marseille, France. European Language Resources Association.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, E. Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020.
Unsupervised cross-lingual representation learning at scale. In ACL. +Alexis Conneau, Rudy Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating crosslingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. +Adolfo Constenla, Feliciano Elizondo, and Francisco Pereira. 2004. *Grupo Básico de Bribri*. Editorial de la Universidad de Costa Rica. +Rubén Cushmanariano Romano and Richer C. Sebastián Q. 2008. Naantsipeta asháninkaki birakochaki. diccionario asháninka-castellano. version preliminar. http://www.lengamer.org/publicaciones/diccionarios/. Visitado: 01/03/2013. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Rafaela Alves Dos Santos. 2017. DIGLOSSIA NO PARAGUAI: A restricção dos monolíngues em guarani no acesso à informação. Trabalho de Conclusão de Campo, Bacharelado em Línguas Estrangeiras. Universidade de Brasília, Brasília. +Kathrin Eichler, Aleksandra Gabryszak, and Günter Neumann. 2014. An analysis of textual inference in German customer emails. In Proceedings of the Third Joint Conference on Lexical and Computational Semantics (*SEM* 2014), pages 69-74, Dublin, Ireland. Association for Computational Linguistics and Dublin City University. +Norma Faust. 1973. Lecciones para el aprendizaje del idioma shipibo-conibo, volume 1 of Documento de + +Trabajo. Instituto Linguístico de Verano, Yarina-cocha. +Isaac Feldman and Rolando Coto-Solano. 2020. 
Neural machine translation models with back-translation for the extremely low-resource indigenous language Bribri. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3965-3976, Barcelona, Spain (Online). International Committee on Computational Linguistics. +Sofía Flores Solórzano. 2017. Corpus oral pandialectal de la lengua bribri. http://bribri.net. +Ana-Paula Galarreta, Andres Melgar, and Arturo Oncevay. 2017. Corpus creation and initial SMT experiments between Spanish and Shipibo-konibo. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 238-244, Varna, Bulgaria. INCOMA Ltd. +Ives Goddard. 1996. Introduccion. In William C. Sturtevant, editor, Handbook of North American Indians (vol. 17), chapter 1, pages 1-6. University of Texas. +Héctor Erasmo Gómez Montoya, Kervy Dante Rivas Rojas, and Arturo Oncevay. 2019. A continuous improvement framework of machine translation for Shipibo-konibo. In Proceedings of the 2nd Workshop on Technologies for MT of Low Resource Languages, pages 17-23, Dublin, Ireland. European Association for Machine Translation. +Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018). +Joseph Harold Greenberg. 1963. Universals of language. +Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107-112, New Orleans, Louisiana. Association for Computational Linguistics. +Ximena Gutierrez-Vasques, Gerardo Sierra, and Isaac Hernandez Pompa. 2016. 
Axolotl: a web accessible parallel corpus for Spanish-Nahuatl. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 4210-4214, Portorož, Slovenia. European Language Resources Association (ELRA).
Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. The FLoRes evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6100-6113.
Petr Homola. 2012. Building a formal grammar for a polysynthetic language. In Formal Grammar, pages 228-242, Berlin, Heidelberg. Springer Berlin Heidelberg.
Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization. CoRR, abs/2003.11080.
INALI. 2009. Catálogo de las lenguas indígenas nacionales: Variantes lingüísticas de México con sus autodenominaciones y referencias geoestadísticas. INALI.
INALI. 2014. Norma de escritura de la Lengua Hñähñu (Otomí), 1st edition. Secretaría de Cultura.
INALI. 2017. Etnografía del pueblo tarahumara (rarámuri).
INEC. 2011. Población total en territorios indígenas por autoidentificación a la etnia indígena y habla de una lengua indígena, según pueblo y territorio indígena. In Instituto Nacional de Estadística y Censo, editor, Censo 2011.
INEGI. 2008. Catálogo de las lenguas indígenas nacionales: Variantes lingüísticas de México con sus autodenominaciones y referencias geoestadísticas. Diario Oficial, pages 31-108.
Jose L. Iturrioz and Paula Gomez-Lopez. 2008. Gramatica wixarika i.
Carla Victoria Jara Murillo. 2018a. Gramática de la Lengua Bribri. EDigital.
Carla Victoria Jara Murillo. 2018b.
I Tte Historias Bribris, second edition. Editorial de la Universidad de Costa Rica. +Carla Victoria Jara Murillo and Alí García Segura. 2013. Se' ttö' bribri ie Hablemos en bribri. EDigital. +Katharina Kann, Kyunghyun Cho, and Samuel R. Bowman. 2019. Towards realistic practices in low-resource natural language processing: The development set. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3342-3349, Hong Kong, China. Association for Computational Linguistics. + +Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics. +Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. 2019. Quantifying the carbon emissions of machine learning. arXiv preprint arXiv:1910.09700. +Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. In NeurIPS. +Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In International Conference on Learning Representations (ICLR). +Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018b. Word translation without parallel data. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. +Anne Lauscher, Vinit Ravishankar, Ivan Vulic, and Goran Glavaš. 2020. From zero to hero: On the limitations of zero-shot language transfer with multilingual Transformers. pages 4483-4499. 
+Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Ruofei Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Daniel Campos, Rangan Majumder, and Ming Zhou. 2020. XGLUE: A new benchmark dataset for cross-lingual pre-training, understanding and generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6008–6018, Online. Association for Computational Linguistics. +Y. Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. ArXiv, abs/1907.11692. +James Loriot, Erwin Lauriout, and Dwight Day. 1993. Diccionario Shipibo-Castellano. Instituto Linguístico de Verano. +Carl Lumholtz. 2011. Unknown Mexico: A Record of Five Years' Exploration Among the Tribes of the Western Sierra Madre, volume 2. Cambridge University Press. +Jeff MacSwan. 1998. The argument status of nps in southeast puebla nahuatl: Comments on the polysynthesis parameter. Southwest Journal of Linguistics, 17(2):101-114. + +Manuel Mager, Dionico Gonzalez, and Ivan Meza. 2017. Probabilistic finite-state morphological segmenter for wixarika (huichol). +Manuel Mager, Ximena Gutierrez-Vasques, Gerardo Sierra, and Ivan Meza-Ruiz. 2018. Challenges of language technologies for the indigenous languages of the Americas. In Proceedings of the 27th International Conference on Computational Linguistics, pages 55-69, Santa Fe, New Mexico, USA. Association for Computational Linguistics. +Jesus Manuel Mager-Hois. 2017. Traductor hibrido wixárika-espanol con escasos recursos bilingües. Ph.D. thesis, Master's thesis, Universidad Autónoma Metropolitana. +Enrique Margery. 2005. Diccionario Fraseologico Bribri-Espanol Espanol-Bribri, second edition. 
Editorial de la Universidad de Costa Rica. +Bartomeu Melia. 1992. La lengua Guarán del Paraguay: Historia, société y litteratura. Editorial MAPFRE, Madrid. +Elena Mihas. 2017. The kampa subgroup of the arawak language family. The Cambridge Handbook of Linguistic Typology, page 782-814. +Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, volume 26, pages 3111-3119. Curran Associates, Inc. +Christopher Moseley. 2010. Atlas of the World's Languages in Danger. Unesco. +B. Muller, Antonis Anastasopoulos, Benoit Sagot, and Djamé Seddah. 2020. When being unseen from mbert is just the beginning: Handling new languages with multilingual language models. ArXiv, abs/2010.12858. +Johanna Nichols. 1986. Head-marking and dependent-marking grammar. Language, 62(1):56-119. +Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. +Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language + +Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics. +Jonas Pfeiffer, Ivan Vulic, Iryna Gurevych, and Sebastian Ruder. 2020a. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Linguual Transfer. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7654–7673, Online. Association for Computational Linguistics. +Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, and Sebastian Ruder. 2020b. Unks everywhere: Adapting multilingual language models to new scripts. +Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Florence, Italy. Association for Computational Linguistics. +Maja Popovic. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics. +Carlos Sánchez Avendaño. 2013. Lenguas en peligro en Costa Rica: vitalidad, documentación y descripción. Revista Káñina, 37(1):219-250. +SEGOB. 2020a. Sistema de Informacion Cultural - Lenguas indigenas: Huichol. https://sic.gob.mx/ficha.php? table=inali_li. +SEGOB. 2020b. Sistema de Informacion Cultural - Lenguas indigenas: Nnahuatl. https://sic.gob.mx/ficha.php? table=inali_li&table_id=5. +SEGOB. 2020c. Sistema de Informacion Cultural - Lenguas indigenas: Tarahumara. http://sic.gob.mx/ficha.php?table=inali_li& table_id=15. +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics. +Rémi Simeón. 1977. Diccionario de la lengua náhuatl o mexicana, volume 1. Siglo XXI. +Thelma D Sullivan and Miguel León-Portilla. 1976. Compendio de la gramática náhuatl, volume 18. Universidad Nacional autónoma de México, Instituto de investigaciones historicas. +Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. 
In Proceedings of the Eighth International Conference on Language Resources and + +Evaluation (LREC'12), pages 2214-2218, Istanbul, Turkey. European Language Resources Association (ELRA). +Pilar Valenzuela. 2003. *Transitivity in Shipibo-Konibo grammar*. Ph.D. thesis, University of Oregon. +Alonso Vasquez, Renzo Ego Aguirre, Candy Angulo, John Miller, Claudia Villanueva, Željko Agić, Roberto Zariquiey, and Arturo Oncevay. 2018. Toward Universal Dependencies for Shipibo-konibo. In Proceedings of the Second Workshop on Universal Dependencies (UDW 2018), pages 151-161, Brussels, Belgium. Association for Computational Linguistics. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. +Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc. +Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics. +Zihan Wang, Karthikeyan K, Stephen Mayhew, and Dan Roth. 2020. Extending multilingual BERT to low-resource languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2649-2656, Online. Association for Computational Linguistics. +Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics. +Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: + +System Demonstrations, pages 38-45, Online. Association for Computational Linguistics. +Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833-844, Hong Kong, China. Association for Computational Linguistics. +Shijie Wu and Mark Dredze. 2020. Are all languages created equal in multilingual BERT? In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 120-130, Online. Association for Computational Linguistics. + +# A Geographic Distribution of the AmericasNLI Languages + +![](images/c5e1a973dedddf559a68fac9c88480757ef975bd3ad3f46dc0bedfb357565f66.jpg) +Figure A.1: Maps of Central and South America presenting an approximate distribution of where each Indigenous language contained in AmericasNLI is spoken. Please note that this map is hand-drawn and largely an estimate: some regions may not be included, and borders of included regions may not be completely accurate. + +# B Sources of Parallel Data + +
| Lang. | Source(s) | Sent. |
| --- | --- | --- |
| aym | Tiedemann (2012) | 6,531 |
| bzd | Feldman and Coto-Solano (2020); Margery (2005); Jara Murillo (2018a); Constenla et al. (2004); Jara Murillo and García Segura (2013); Jara Murillo (2018b); Flores Solórzano (2017) | 7,508 |
| cni | Cushmanariano Romano and Sebastián Q. (2008) | 3,883 |
| gn | Chiruzzo et al. (2020) | 26,032 |
| hch | Mager et al. (2017) | 8,966 |
| nah | Gutierrez-Vasques et al. (2016) | 16,145 |
| oto | https://tsunkua.elotl.mx | 4,889 |
| quy | Agić and Vulić (2019) | 125,008 |
| shp | Galarreta et al. (2017); Loriot et al. (1993); Gómez Montoya et al. (2019) | 14,592 |
| tar | Brambila (1976); github.com/pywirrarika/tar_par | 14,720 |
+ +Table B.1: Parallel data used for our translation models. + +# C Additional Information for AmericasNLI Languages + +# C.1 Aymara + +A rare linguistic phenomenon found in Aymara is vowel elision, a deletion of certain vowel sounds triggered by complex phonological, morphological, and syntactic factors. + +# C.2 Asháninka + +While Asháninka in a strict sense refers to the linguistic varieties spoken in Ene, Tambo and Bajo Perené rivers, the name is also used to talk about the following nearby and closely-related Asheninka varieties: Alto Perené, Pichis, Pajonal, Ucayali-Yurua, and Apurucayali. Although Asháninka is the most widely spoken Amazonian language in Peru, certain varieties, such as Alto Perené, are highly endangered. + +The verb is the most morphologically complex word class, with a rich repertoire of aspectual and modal categories. The language lacks case, except for one locative suffix, so the grammatical relations of subject and object are indexed as affixes on the verb itself. Other notable linguistic features of the language include obligatory marking of a realis/irrealis distinction on the verb, a rich system of applicative suffixes, serial verb constructions, and a pragmatically conditioned split intransitivity. + +# C.3 Bribri + +As previously noted, Bribri is a vulnerable language, and there are few settings where the language is written or used in official functions. The language does not have official status and it is not the main medium of instruction of Bribri children, but it is offered as a class in primary and secondary schools. Bribri features fusional morphology and an ergative-absolutive case system. Bribri grammar also includes phenomena like head-internal relative clauses, directional verbs and numerical classifiers (Jara Murillo, 2018a). + +# C.4 Guarání + +While the first written record dates to 1591, Guaraní usage in text continued until the Paraguay-Triple Alliance War (1864-1870) and declined thereafter. 
From the 1920s on, Guaraní has slowly re-emerged and received renewed focus. In 1992, Guaraní was the first American language declared an official language of a country, followed by a surge of local, national, and international recognition in the early 21st century.

# C.5 Nahuatl

Nahuatl is spoken in 17 different states of Mexico. In Nahuatl, different roots with or without affixes are combined to form new words. The suffixes that are added to a word modify the meaning of the original word (Sullivan and León-Portilla, 1976), and 18 prepositions stand out based on postpositions of names and adjectives (Siméon, 1977).

# C.6 Otomí

The various regional self-denominations of Otomí include ñāhūn, hūhūn, nuju, noju, yūhu, hnāhūn, ñuú, nanhú, ñothó, ñato and hnothó (INALI, 2014). Many words are homophonous to Spanish (Cajero, 1998, 2009). When speaking ñuhmu, pronunciation is elongated, especially on the last syllable. The alphabet is composed of 19 consonants and 12 vowel phonemes.

# C.7 Raramuri

Raramuri is mainly spoken in the state of Chihuahua. There are five variants of Raramuri.

# C.8 Shipibo-Konibo

Shipibo-Konibo is a language with agglutinative processes, a majority of which are suffixes. However, clitics are also used, and are a widespread element in Panoan literature (Valenzuela, 2003).

# C.9 Wixarika

The four variants of Wixarika are the Northern, Southern, Eastern, and Western variants (INEGI, 2008). It is spoken mainly in the three Mexican states of Jalisco, Nayarit, and Durango. Features of Wixarika include head-marking (Nichols, 1986), a head-final structure (Greenberg, 1963), nominal incorporation, argumentative marks, inflected adpositions, possession marks, as well as instrumental and directional affixes (Iturrioz and Gomez-Lopez, 2008).

# C.10 Summary of Language Information
| Language | Language Family | Countries Spoken | Number of Speakers | Word Order |
| --- | --- | --- | --- | --- |
| aym | Aymaran | Bolivia, Chile, Peru | 2m | SOV |
| bzd | Chibchan | Costa Rica | 7k | SOV |
| cni | Arawak | Peru | 73k | VSO |
| gn | Tupi-Guarani | Paraguay, Brazil, Argentina, Bolivia | 6-10m | SVO |
| hch | Uto-Aztecan | Mexico | 47k | SOV |
| nah | Uto-Aztecan | Mexico | 1.5m | SVO/VSO/SOV |
| oto | Oto-Manguean | Mexico | 307k | SVO |
| quy | Quechuan | Peru | 8-10m | SOV |
| shp | Panoan | Peru | 35k | SOV |
| tar | Uto-Aztecan | Mexico | 89k | SOV |
Table C.1: Summary of the 10 languages in AmericasNLI.

# D Dataset Information

# D.1 Power Analysis
| p1 Model | p1 | p2 | Lower Bound Power | Upper Bound Power | p2 Model |
| --- | --- | --- | --- | --- | --- |
| Random Baseline | 33.33 | 38.48 | 40.33 | 100 | Zero-shot (en) |
| | | 37.99 | 35.80 | 100 | Zero-shot (es) |
| | | 43.70 | 91.38 | 100 | Zero-shot +MLM (en) |
| | | 43.85 | 91.52 | 100 | Zero-shot +MLM (es) |
| | | 49.12 | 99.82 | 100 | Translate-train |
| | | 40.22 | 61.85 | 100 | Translate-test |
| Zero-shot Baseline | 38.48 | 43.70 | 33.66 | 100 | Zero-shot +MLM (en) |
| | | 43.85 | 35.33 | 100 | Zero-shot +MLM (es) |
| | | 49.12 | 87.10 | 100 | Translate-train |
| | | 40.22 | 7.13 | 99.07 | Translate-test |
| Adaptation Baseline | 43.85 | 49.12 | 31.29 | 100 | Translate-train |
+ +Table D.1: Here, we use the simulation approach of Card et al. (2020) to calculate upper and lower bounds for the power of our experiments. We use the average accuracies for each approach, and set $n = 750$ , $\alpha = 0.05$ , $r = 10,000$ , and bold experiments with well-powered lower bounds. + +# D.2 Dataset Statistics + +
| Language | Split | Entailment | Contradiction | Neutral | Majority Baseline |
| --- | --- | --- | --- | --- | --- |
| aym | Test | 250 | 250 | 250 | 0.333 |
| | Dev | 248 | 248 | 247 | 0.334 |
| bzd | Test | 250 | 250 | 250 | 0.333 |
| | Dev | 248 | 248 | 247 | 0.334 |
| cni | Test | 250 | 250 | 250 | 0.333 |
| | Dev | 220 | 220 | 218 | 0.334 |
| gn | Test | 250 | 250 | 250 | 0.333 |
| | Dev | 248 | 248 | 247 | 0.334 |
| hch | Test | 250 | 250 | 250 | 0.333 |
| | Dev | 248 | 248 | 247 | 0.334 |
| nah | Test | 246 | 245 | 247 | 0.335 |
| | Dev | 193 | 195 | 197 | 0.337 |
| oto | Test | 249 | 249 | 250 | 0.334 |
| | Dev | 78 | 75 | 69 | 0.351 |
| quy | Test | 250 | 250 | 250 | 0.333 |
| | Dev | 248 | 248 | 247 | 0.334 |
| shp | Test | 250 | 250 | 250 | 0.333 |
| | Dev | 248 | 248 | 247 | 0.334 |
| tar | Test | 250 | 250 | 250 | 0.333 |
| | Dev | 248 | 248 | 247 | 0.334 |
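The majority baseline in the table above is simply the most frequent label's share of the split; for instance, for the Otomí development set it is 78 / (78 + 75 + 69) ≈ 0.351. A quick sketch of the computation:

```python
def majority_baseline(entailment, contradiction, neutral):
    """Accuracy of a classifier that always predicts the most frequent label."""
    counts = (entailment, contradiction, neutral)
    return max(counts) / sum(counts)

# Otomí development set: 78 entailment, 75 contradiction, 69 neutral.
print(round(majority_baseline(78, 75, 69), 3))     # → 0.351
# Balanced test splits (250/250/250) give chance accuracy.
print(round(majority_baseline(250, 250, 250), 3))  # → 0.333
```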
Table D.2: Distribution of labels in the test and development sets, per language.

# E Detailed Results
| | FT | aym | bzd | cni | gn | hch | nah | oto | quy | shp | tar | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Majority baseline | - | 33.40 | 33.40 | 33.40 | 33.40 | 33.40 | 33.70 | 35.10 | 33.40 | 33.40 | 33.40 | - |
| *Zero-shot* | | | | | | | | | | | | |
| XLM-R (en) | 84.55 | 38.45 | 41.59 | 40.07 | 40.74 | 37.82 | 39.50 | 43.84 | 38.67 | 43.56 | 36.03 | 40.03 |
| XLM-R (es) | 80.77 | 37.73 | 39.70 | 37.59 | 40.06 | 36.74 | 37.88 | 39.94 | 38.54 | 38.18 | 35.89 | 38.23 |
| *Zero-shot w/ adaptation* | | | | | | | | | | | | |
| XLM-R +MLM (en) | - | 41.77 | 39.57 | 40.93 | 52.40 | 41.01 | 43.25 | 37.24 | 62.27 | 44.86 | 39.30 | 44.26 |
| XLM-R +MLM (es) | - | 45.26 | 42.22 | 40.53 | 53.52 | 38.40 | 42.41 | 40.24 | 55.00 | 40.11 | 45.89 | 44.36 |
| *Translate-train* | | | | | | | | | | | | |
| XLM-R | - | 53.61 | 49.98 | 45.49 | 61.28 | 42.22 | 53.80 | 41.44 | 58.62 | 53.10 | 43.01 | 50.25 |
| *Translate-test* | | | | | | | | | | | | |
| XLM-R | - | 37.73 | 39.70 | 37.59 | 40.06 | 36.74 | 37.88 | 39.94 | 38.54 | 38.18 | 35.89 | 38.23 |
+ +Table E.1: Development set results for zero-shot, translate-train, and translate-test. $FT$ represents the XNLI development set performance for the finetuning language and is not included in the average. The majority baseline represents expected performance when predicting only the majority class of the development set. Random guessing would result in an accuracy of $33.33\%$ . + +
| | FT | aym | bzd | cni | gn | hch | nah | oto | quy | shp | tar | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| *Zero-shot* | | | | | | | | | | | | |
| XLM-R (en) | -22.21 | -2.53 | -6.18 | -5.51 | -6.00 | -3.07 | -9.53 | -5.44 | -3.91 | -6.85 | -2.09 | -5.11 |
| XLM-R (es) | -18.51 | -3.12 | -4.58 | -1.96 | -3.92 | -1.29 | -5.38 | -5.16 | -6.44 | -1.60 | 0.00 | -3.35 |
| *Zero-shot w/ adaptation* | | | | | | | | | | | | |
| XLM-R +MLM (en) | - | -6.44 | -5.33 | -6.40 | -10.04 | -3.52 | -11.66 | -3.07 | -17.38 | -6.01 | -5.02 | -7.49 |
| XLM-R +MLM (es) | - | -7.60 | -5.25 | -5.03 | -10.54 | -3.82 | -8.80 | -7.66 | -14.53 | -4.58 | -3.78 | -7.16 |
| *Translate-train* | | | | | | | | | | | | |
| XLM-R | - | -5.07 | -7.69 | 1.02 | -11.29 | -0.13 | -9.52 | -0.18 | -7.78 | -5.73 | -2.57 | -4.89 |
| *Translate-test* | | | | | | | | | | | | |
| XLM-R | - | -3.20 | 2.27 | 2.62 | -3.02 | 0.53 | 1.85 | -1.07 | -3.25 | 3.16 | -0.49 | -0.06 |
+ +Table E.2: Differences between hypothesis-only and standard results on the test set of AmericasNLI. + +
| Source | ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vi | zh | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| en | 71.96 | 77.65 | 76.62 | 75.84 | <u>84.55</u> | 78.74 | 78.00 | 70.02 | 76.04 | 64.41 | 72.04 | 72.54 | 66.28 | 74.38 | 73.97 | 74.20 |
| es | 73.49 | 78.71 | 77.59 | 77.05 | 83.36 | <u>80.77</u> | 78.83 | 72.25 | 77.10 | 64.60 | 73.32 | 73.78 | 68.44 | 75.82 | 75.16 | 75.35 |
Table E.3: Results of zero-shot models on the test set of XNLI. Scores are underlined when the evaluation language is the same as the training language.

# F Additional Results
| Source | Model | aym | bzd | cni | gn | hch | nah | oto | quy | shp | tar | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| en | Zero-Shot | 36.00 | 39.20 | 37.20 | 40.67 | 36.80 | 42.28 | 36.90 | 35.73 | 40.67 | 36.27 | 38.17 |
| | Z-S +MLM | 41.60 | 36.53 | 40.80 | 51.47 | 39.87 | 46.48 | 37.83 | 64.53 | 40.67 | 40.67 | 44.05 |
| | Z-S +MLM AUG | 45.07 | 38.67 | 41.47 | 52.93 | 38.53 | 46.48 | 33.42 | 62.00 | 39.73 | 40.27 | 43.86 |
| es | Zero-Shot | 37.87 | 41.60 | 37.87 | 39.47 | 36.27 | 39.57 | 39.04 | 40.93 | 38.27 | 35.33 | 38.62 |
| | Z-S +MLM | 43.87 | 37.60 | 38.80 | 52.27 | 36.00 | 45.12 | 41.58 | 60.80 | 41.20 | 38.80 | 43.60 |
| | Z-S +MLM AUG | 45.20 | 38.67 | 39.33 | 54.27 | 37.07 | 44.99 | 42.65 | 62.67 | 37.20 | 38.67 | 44.07 |
| - | Translate-Train | 49.33 | 52.00 | 42.80 | 55.87 | 41.07 | 54.07 | 36.50 | 59.87 | 52.00 | 43.73 | 48.72 |
| | T-T +MLM | 50.93 | 51.20 | 42.27 | 61.60 | 44.93 | 56.10 | 35.16 | 63.47 | 50.00 | 44.13 | 49.98 |
| | T-T +MLM AUG | 51.07 | 51.87 | 44.53 | 61.07 | 46.27 | 53.39 | 35.96 | 61.07 | 52.67 | 40.67 | 49.86 |
Table F.1: Results from models adapted with augmented data before finetuning. Zero-shot, zero-shot +MLM, and translate-train results are taken from the main experiments; however, we only take results from the run with the same random seed as the newly trained models.

# F.1 Early Stopping

While early stopping is vital for machine learning, in the case of zero-shot learning, hand-labeled development sets in the target language are often assumed to be unavailable (Kann et al., 2019). Thus, in our main experiments we use either a machine-translated development set or one from a high-resource language. In both cases, performance on the development set is an imperfect signal for how the model will ultimately perform. To explore how this affects final performance, we present in Table F.2 the difference in results for translate-train models when an oracle translation is used for early stopping. We find that performance is 2.34 points higher on average, with a maximum difference of 7.28 points for Asháninka, suggesting that creating ways to better approximate a development set may lead to higher performance.
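Patience-based early stopping driven by whatever (possibly noisy, machine-translated) development score is available can be sketched as follows. All names here are illustrative, not taken from our actual training code:

```python
def train_with_early_stopping(evaluate_dev, train_one_step, max_steps, patience=3):
    """Stop once the dev score has failed to improve for `patience`
    consecutive evaluations; return the best step and its score."""
    best_score, best_step, stale = float("-inf"), 0, 0
    for step in range(1, max_steps + 1):
        train_one_step()
        score = evaluate_dev()
        if score > best_score:
            best_score, best_step, stale = score, step, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return best_step, best_score

# Noisy dev curve: the true best (0.40) is never reached, because three
# non-improving evaluations trigger the stop first.
scores = iter([0.30, 0.35, 0.34, 0.36, 0.33, 0.33, 0.32, 0.40])
print(train_with_early_stopping(lambda: next(scores), lambda: None, max_steps=8))
# → (4, 0.36)
```

The toy curve illustrates the point made above: when the dev signal is imperfect, early stopping can settle on a suboptimal checkpoint.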
| aym | bzd | cni | gn | hch | nah | oto | quy | shp | tar | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2.13 | 0.98 | 7.28 | 0.58 | 0.53 | 2.12 | 3.03 | 1.42 | 0.93 | 4.36 | 2.34 |
+ +Table F.2: Difference between translate-train results obtained using the oracle development set and the translated development set for early stopping. + +# F.2 Data Augmentation with Translated Data + +Due to the success of translate-train, we also investigate if we can improve performance further by creating data for language adaptation (+MLM) through translation. To do so, we create a random sample of sentences taken from Spanish Wikipedia, and translate them into each target language. The sample is sized to contain the same number of subword tokens as the original pretraining data. We combine the original pretraining data and translated data to create a new set of sentences for continued pretraining, doubling the size of the original. We also finetune the original adapted models using translate-train. We present results in Table F.1. When finetuning on English and translate-train data, the average performance is highest when using the models adapted on the original data. When finetuning on Spanish, the models adapted on augmented data are best on average. While on average performance increases are not drastic, for some languages the performance increase is notable, and these mixed and/or augmented models may be worth looking into when interested in a particular language. + +# F.3 XLM-R Large + +In this section we provide results for XLM-R Large. Due to computational restrictions, we slightly modify the experimental setup from the main experiments: we use mixed precision training and a more aggressive early stopping patience of 3 evaluation steps. Additionally, we use a learning rate of 5e-6 for all finetuning experiments, as we found that the original learning rate of 2e-5 failed to converge. However, even when using the modified hyperparameters, we experience some instability during training. The zero-shot model trained on Spanish data did not converge with the original random seed, but successfully trained after changing the seed. 
For translate-train, the models trained on Asháninka and Otomí failed to converge, regardless of the seed used, and further hyperparameter tuning will be required, which we leave for future work.

In this experiment, we can see that the results are more varied in comparison to the main results. Translate-train achieves the highest performance for five languages, with the adapted models together achieving the highest performance for the remaining five. On average, the adapted model finetuned on English labeled data achieved the highest performance, followed closely by the other adapted model, and the translate-train model. This indicates that translate-train may be a viable approach when faced with limited compute, but might also have a restrictive upper limit on performance; in contrast, adaptation may allow for more potential performance gain, especially when larger models and datasets are available. Interestingly, when considering average performances across only the languages for which all models converged (i.e., removing Asháninka and Otomí from the calculation), we find that translate-train offers an average performance of $51.91\%$, while the adaptation approaches achieve $49.39\%$ and $49.83\%$ accuracy on average.

Comparing XLM-R Large to XLM-R Base in Table F.4, we see that for all but one language the Large model outperforms the Base model in all adaptation and zero-shot runs. Notably, the Base model trained on translated data outperforms the Large model, and retains the highest overall performance across all languages and models.
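The converged-languages-only averages quoted above can be recomputed directly from the per-language XLM-R Large scores in Table F.3; a quick sketch (variable names are ours, values transcribed from the table):

```python
# Recompute the averages quoted above from Table F.3, excluding the two
# languages (Asháninka = cni, Otomí = oto) whose translate-train runs
# did not converge.
langs = ["aym", "bzd", "cni", "gn", "hch", "nah", "oto", "quy", "shp", "tar"]
translate_train = [51.47, 50.13, 33.33, 61.20, 42.00, 55.28, 33.42, 61.47, 49.87, 43.87]
adapted_en = [54.80, 43.87, 46.67, 59.87, 43.60, 43.36, 44.79, 64.80, 43.07, 41.73]
adapted_es = [54.93, 40.40, 42.93, 61.07, 44.67, 45.53, 42.51, 68.00, 43.60, 40.40]

def avg_excluding(scores, excluded=("cni", "oto")):
    kept = [s for lang, s in zip(langs, scores) if lang not in excluded]
    return sum(kept) / len(kept)

# Matches the 51.91% / 49.39% / 49.83% figures above (up to rounding).
print(avg_excluding(translate_train), avg_excluding(adapted_en), avg_excluding(adapted_es))
```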

|  | FT | aym | bzd | cni | gn | hch | nah | oto | quy | shp | tar | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Zero-shot |  |  |  |  |  |  |  |  |  |  |  |  |
| XLM-R Large (en) | 89.04 | 40.67 | 41.33 | 43.07 | 42.93 | 39.20 | 45.39 | 42.25 | 42.13 | 48.27 | 40.53 | 42.58 |
| XLM-R Large (es) | 89.84 | 38.67 | 41.60 | 41.20 | 42.00 | 37.20 | 41.46 | 42.38 | 41.33 | 43.47 | 36.00 | 40.53 |
| Zero-shot w/ adaptation |  |  |  |  |  |  |  |  |  |  |  |  |
| XLM-R Large +MLM (en) | - | 54.80 | 43.87 | 46.67 | 59.87 | 43.60 | 43.36 | 44.79 | 64.80 | 43.07 | 41.73 | 48.66 |
| XLM-R Large +MLM (es) | - | 54.93 | 40.40 | 42.93 | 61.07 | 44.67 | 45.53 | 42.51 | 68.00 | 43.60 | 40.40 | 48.40 |
| Translate-train |  |  |  |  |  |  |  |  |  |  |  |  |
| XLM-R Large | - | 51.47 | 50.13 | 33.33 | 61.20 | 42.00 | 55.28 | 33.42 | 61.47 | 49.87 | 43.87 | 48.20 |
| Translate-test |  |  |  |  |  |  |  |  |  |  |  |  |
| XLM-R Large | - | 38.67 | 40.93 | 35.73 | 50.80 | 38.93 | 39.97 | 32.62 | 47.87 | 39.33 | 35.60 | 40.05 |
Table F.3: Results when using XLM-R Large. Underlined results indicate runs which did not converge on the training data.

|  | FT | aym | bzd | cni | gn | hch | nah | oto | quy | shp | tar | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Zero-shot |  |  |  |  |  |  |  |  |  |  |  |  |
| English | 4.49 | 4.54 | 1.68 | 5.16 | 3.46 | 2.00 | 2.80 | 4.46 | 4.89 | 7.82 | 4.17 | 4.10 |
| Spanish | 9.07 | 1.42 | 2.22 | 3.91 | 2.75 | 1.38 | 2.48 | 4.06 | 1.82 | 5.07 | 0.27 | 2.54 |
| Zero-shot w/ adaptation |  |  |  |  |  |  |  |  |  |  |  |  |
| +MLM (en) | - | 11.29 | 5.74 | 7.20 | 7.43 | 6.35 | -2.85 | 7.76 | 3.02 | 1.73 | 1.91 | 4.96 |
| +MLM (es) | - | 11.06 | 0.35 | 4.17 | 8.80 | 6.85 | 1.36 | 1.96 | 5.60 | 3.42 | 1.95 | 4.55 |
| Translate-train | - | 1.47 | -1.29 | -9.12 | 2.31 | -1.20 | -0.05 | -2.59 | 1.56 | -2.13 | 1.83 | -0.92 |
| Translate-test | - | -1.06 | 0.53 | 1.02 | 4.18 | 0.93 | -1.40 | -2.67 | -3.51 | -0.18 | 0.44 | -0.17 |
Table F.4: Difference in performance between XLM-R Large and Base.
LanguageExample
aymP: Mājan walt ' awinakax utjkaniti?H: Iglesia JI JI ukax XFlo XICI ukax XIII Uikan mā jach 'a pacha.
P: Aka qillqatax Crownwn Squareareareukax iwayi, 'Nalacio ' ' fnoquis ukch ' anataki.H: Plaza de Plaza de palacio palacio palacio awipat uñt 'ayi.
bzdP: Ye'r ye' alalàdör ye' alalashshshshshöö?H: Kákkke ' tā káx bata bata à káx bata ā .
P: Káx i'r i' ā káx i' ulashshshshshshshshshshshh. H: Kéqéqwówówówówó ulalululw a .
cniP:APAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPAPap
H: Iibibibibiti obibiti obibi. Ababababa
P: b. Akobiro ayiro ayiro ayiro nija Jebabentirori Anampiki. H: Itititiititiititiitii.
gnP: Petě cosepo corto imbarete caminata norte gotyoh e pueblo j? Sus , petě tupão particularmente siglo XIX . H: Tupão tavaguasu Jesús omopu'āsiglo XIX .
P: Péicha Crown Square oime palacio real , kuimba'e preciado tetāme , joy Escocia . H: Plaza de la corona cuenta palacio real .
hchP: xewit+ta m+k+ wa+ka xewit+ x+ka xewit+ x+ka mu'at+a. H: 'aix+ 'aix+ ti' at+x+t+ x+a mu'at+x+a.
P: wa+ka m+k+ 'aix+ pureh+k+t+a de oro. H: 'ik+ p+h+k+ palacio palacio palacio palacio.
nahP: See tosaasaanil , see tosaasaanil , see tosaasaanil . See tosaasaanil , see tosaasaanil . H: Yn ipan ciudad de Jesús la Yglesia de Jesús yn ipan in omoteneuh xihuitl de Jesús .
P: Auh ynic patlahuac cenpohualmatl ypan in yn oncan tecpan quiyahuac yn oncan tecpan quiyahuac yn tecpan quiyahuac yn oncan tecpan quiyahuac . H: In tlapoalli ica tlapoalli ica tecpan palacio .
otoP: Ra ngé'a mi b'et'em'i ha ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuh u.
P: Ra ngé'a ra thuhu ra b'uj ha ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thuhu ra thanh
quyP: Asiria nacionpa norte lawninpim, Sus X00 watakunapi religionniyoq puna apaqkunawan hukllawakurqaku. H: Jesús piemponpi iglesia
P: Crown Squarepa hichpanpim tarikunku palaciopi, chayqa Escocia nacionpa thawpinpim kachkan H: Alemania nacionpa Plaza sutiyoq runam qollpeqa apuestaspa palaciopi cuentallikun
shpP: Westiora yoxan yoxanya riki ea, jainxon westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiora westiorawestiora westiora westiorawestiora westiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestiorawestloralowk
P: (2) χchí mu fe'pá ? χchí mu fe'pá ? χatza be'pá ? χchí mu fe'pá ? χchí mu fe'pá ? χchí mu fe'pá ? χchí mu fe'pá ? χchí mu fe'pá ? χchí mu fe'pá ? χchí mu fe'pá ? χchí mu fe'pá ? χchí mu fe'pá ? χchí mu fe'pá ? χchí mu feni mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi mi ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni ni bi
tarP: (2) a) pe fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pí fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'párfe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá páfe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'pá fe'párba'
+ +Table D.3: Two randomly selected translate-train examples. \ No newline at end of file diff --git a/americasnlievaluatingzeroshotnaturallanguageunderstandingofpretrainedmultilingualmodelsintrulylowresourcelanguages/images.zip b/americasnlievaluatingzeroshotnaturallanguageunderstandingofpretrainedmultilingualmodelsintrulylowresourcelanguages/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..68b7e4d1214ee530fcac747ace67872c1ae33acd --- /dev/null +++ b/americasnlievaluatingzeroshotnaturallanguageunderstandingofpretrainedmultilingualmodelsintrulylowresourcelanguages/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7fd7bca1d708f14c20c08e7f61879e0a2acbaf26e4a6c274aa1d84aadc2cf8ae +size 1401981 diff --git a/americasnlievaluatingzeroshotnaturallanguageunderstandingofpretrainedmultilingualmodelsintrulylowresourcelanguages/layout.json b/americasnlievaluatingzeroshotnaturallanguageunderstandingofpretrainedmultilingualmodelsintrulylowresourcelanguages/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..feda9cbbcb3df883757f46a078c2d50166ca12dd --- /dev/null +++ b/americasnlievaluatingzeroshotnaturallanguageunderstandingofpretrainedmultilingualmodelsintrulylowresourcelanguages/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fcdcce7715afc70aeb59ef5dc1b4de64b7a7fd25c27c44184deca0695daa1d48 +size 488636 diff --git a/ametaframeworkforspatiotemporalquantityextractionfromtext/6203e748-da1d-41ce-b2cc-857397fb06b7_content_list.json b/ametaframeworkforspatiotemporalquantityextractionfromtext/6203e748-da1d-41ce-b2cc-857397fb06b7_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..8615ce7b3797ff6e18db8bd4f74e153f2c0b9d95 --- /dev/null +++ b/ametaframeworkforspatiotemporalquantityextractionfromtext/6203e748-da1d-41ce-b2cc-857397fb06b7_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:72097edd08fc0958b40aa88cb1e1e0a0811c01dd266cba1e5c9793103b27eff9 +size 92906 diff --git a/ametaframeworkforspatiotemporalquantityextractionfromtext/6203e748-da1d-41ce-b2cc-857397fb06b7_model.json b/ametaframeworkforspatiotemporalquantityextractionfromtext/6203e748-da1d-41ce-b2cc-857397fb06b7_model.json new file mode 100644 index 0000000000000000000000000000000000000000..5d64aac743ddda08d22a0681f5ca746d2e725be3 --- /dev/null +++ b/ametaframeworkforspatiotemporalquantityextractionfromtext/6203e748-da1d-41ce-b2cc-857397fb06b7_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:983cfbd755dac29b2c6c4c7f4e3f71955220deb4cbaa98bdf305d81450dbaa16 +size 113834 diff --git a/ametaframeworkforspatiotemporalquantityextractionfromtext/6203e748-da1d-41ce-b2cc-857397fb06b7_origin.pdf b/ametaframeworkforspatiotemporalquantityextractionfromtext/6203e748-da1d-41ce-b2cc-857397fb06b7_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..043f5f55384ae78892f23ec4ec79e4192974acb8 --- /dev/null +++ b/ametaframeworkforspatiotemporalquantityextractionfromtext/6203e748-da1d-41ce-b2cc-857397fb06b7_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:82d10284275f08040b5402a62fbff391a1ca20df3d83961c9ad798155b27a5a7 +size 369175 diff --git a/ametaframeworkforspatiotemporalquantityextractionfromtext/full.md b/ametaframeworkforspatiotemporalquantityextractionfromtext/full.md new file mode 100644 index 0000000000000000000000000000000000000000..268d4b10e4be82e973b2d4771cb6ba1f0ba050d4 --- /dev/null +++ b/ametaframeworkforspatiotemporalquantityextractionfromtext/full.md @@ -0,0 +1,419 @@ +# A Meta-framework for Spatiotemporal Quantity Extraction from Text + +Qiang Ning* + +Amazon + +qning@amazon.com + +Ben Zhou + +University of Pennsylvania + +xyzhou@seas.upenn.edu + +Hao Wu + +Hooray Data + +haowu@hooray.ai + +Haoruo Peng + +Newsbreak + +haoru.peng@newsbreak.com + +Chuchu Fan + +MIT + +chuchu@mit.edu + +Matt 
Gardner*

Microsoft Semantic Machines

mattgardner@microsoft.com

# Abstract

News events are often associated with quantities (e.g., the number of COVID-19 patients or the number of arrests in a protest), and it is often important to extract their type, time, and location from unstructured text in order to analyze these quantity events. This paper thus formulates the NLP problem of spatiotemporal quantity extraction, and proposes the first meta-framework for solving it. This meta-framework contains a formalism that decomposes the problem into several information extraction tasks, a shareable crowdsourcing pipeline, and transformer-based baseline models. We demonstrate the meta-framework in three domains—the COVID-19 pandemic, Black Lives Matter protests, and 2020 California wildfires—to show that the formalism is general and extensible, the crowdsourcing pipeline facilitates fast and high-quality data annotation, and the baseline system can handle spatiotemporal quantity extraction well enough to be practically useful. We release all resources for future research on this topic. $^{1}$

# 1 Introduction

Events are often associated with quantities - how many COVID-19 patients are on ventilators, how many people are injured during protests, or how large the extent of a wildfire is. For coherent discussion of public policy on sociopolitical events in rapidly evolving situations, we often need to figure out the type of the event behind each such quantity, and where and when it happened: "19 deaths" is different from "19 recoveries;" "19 deaths in a small city yesterday" describes a more severe situation than "19 deaths in the whole country last month."
However, until dedicated channels are established, these quantities are typically first reported on social media and in local news articles, which then have to slowly make their way to some aggregate location for decision-makers to use. This calls for a general framework to extract and analyze quantities associated with events, so that we can automatically summarize quantitative information from news streams, rapidly respond to emergencies, investigate incidents, and potentially combat misinformation through comparisons with trusted sources.

DCT: Thursday, 08/27/2020

Title: Study Sessions, Dinners: 104 New USC Student Coronavirus Cases

Text: LOS ANGELES, CA -- The number of coronavirus cases confirmed among USC students continued rising Thursday, with the university announcing [104] new cases over the past four days...

Recognition: 104

Type: Confirmed cases

Spatial Grounding: US $\rightarrow$ California $\rightarrow$ Los Angeles $\rightarrow$ USC

Temporal Grounding: [08/23/2020, 08/26/2020]

DCT: Monday, 06/01/2020

Title: Black Lives Matter: 16 Organizations That Are Bailing Out Protestors

Text: ...Police officers have arrested [thousands] of demonstrators...

Recognition: thousands

Type: Arrests

Spatial Grounding: US

Temporal Grounding: Overall quantity ending on 06/01/2020

Figure 1: Given document creation time (DCT), title, and text, the STEQE problem is to do quantity recognition, typing, spatial grounding, and temporal grounding according to the proposed formalism (Sec. 2). Above are two examples from our COVID-19 dataset and BLM protest dataset.
Prior work on events focused on extracting event mentions, attributes, and relationships (ACE, 2005; Chen and Ji, 2009; Do et al., 2011; UzZaman et al., 2013; Glavaš et al., 2014; Zhou et al., 2019; Chen et al., 2021), and paid little attention to the quantities associated with those events, which presents an opportunity to perform targeted information extraction on these quantity events.

This paper studies spatiotemporal quantity extraction (STEQE): finding quantities of certain types and extracting their associated times and locations. We develop a general meta-framework to help researchers overcome its challenges and extend to new domains easily. Specifically, the contributions of this meta-framework are:

Task Formulation We draw on ideas from existing NLP tasks to create the first formalism that defines STEQE as four information extraction tasks: quantity recognition, typing, spatial grounding, and temporal grounding. While each of these has analogues in the literature, our combination of them into a complete picture of quantity events is novel.

Annotation Collection We release a shareable and extensible crowdsourcing pipeline on CROWDAQ (Ning et al., 2020a) that facilitates fast and reliable data annotation. We show how this pipeline yields fast and high-quality annotations for three sociopolitical events: the COVID-19 pandemic, Black Lives Matter (BLM) protests, and 2020 California wildfires. These practical STEQE datasets are also released to foster future research.

Modeling We propose a T5 baseline model for its flexibility across tasks and easy domain transfer. This model shows that, while the end-to-end STEQE problem remains challenging in all domains, temporal grounding is typically the most difficult task, pointing to a focus for future research.
# 2 STEQE

The STEQE problem aims to extract information about quantity events in text, consisting of four parts: determining which numerical expressions actually correspond to events (§2.1), the type of the event that a quantity is referring to (§2.2), where that event happened (§2.3), and the temporal extent to which the quantity refers (§2.4).

Note that for each of these subparts, there could have been other definition and formulation choices. We describe our formalism's design choices, and discuss why they lead to better-defined learning problems and more reliable data collection, along with their limitations and how to extend our formalism for more specialized applications.

# 2.1 Quantity Recognition

Similar to named entity recognition (NER) (Tjong Kim Sang and De Meulder, 2003), quantity recognition is defined as a text span detection problem. We discuss two questions regarding the definition of quantities: (1) how to distinguish between quantities and non-quantities; (2) how to define the span for quantities to avoid misalignment.

First, quantities are a special type of numbers that are associated with events, either in digits (e.g., “123”) or in words (e.g., “one hundred twenty three”). Some non-quantity examples are:

1. Date and time: "May 8, 2020" and "5:30 pm"
2. Duration: "3 months" and "60 years old"
3. Part of an entity name: “COVID-19”, “Porsche 911”, and “502 Main Street”

The article words "a" and "an" require more attention. When we say "a man died," the "a" does mean "1" death, while in "a large number of people died," the "a" itself does not carry the meaning of "1," and we thus do not consider it a quantity.

Ordinal numbers can also indicate events, but their spatiotemporal extent can be understood differently: "the fifth case in Seattle" implies that there had been 5 cases, and the spatiotemporal extent of "fifth" can be that of the fifth case only, or all of the five cases.
Ordinal-number events are rare in our study, so, weighed against the extra annotation they would require, we decide to consider ordinal numbers as non-quantities, although the definition is easily extensible to cover them in the future.

Second, we need to define the boundaries of these quantity spans. For instance, in "five cases in Seattle," should one label the text span of "five" or "five cases"? What about "4.8 billion" and "\$4.8 billion"? Similar to labeling an event using its predicate only, our choice is to keep the span minimal while keeping the numerical semantics: we will mark "five" (i.e., drop "cases"), "4.8 billion" (i.e., keep "billion"), and "4.8 billion" (i.e., drop "\$") in these examples. Minimizing the span does not lose information about the quantity—only marking "five" in "five cases" does not prevent us from identifying its type, unit, and spatiotemporal extent in subsequent annotation tasks. Below are some tricky cases, with quantities in brackets.

1. Rate: “[20 percent] of the tenants were infected”, “the positive rate is now [200] per [100,000]”, “[1000] tests per day”
2. Approximation: "[4 or 5] are missing"
3. Range: "the positive rate is [2 to 3 percent] / at least [2%] / at most [3%]"

# 2.2 Quantity Typing

Again, similar to NER, recognized quantities can have an associated type from a predefined set of classes. $^2$ A clear event type is important for subsequent spatiotemporal grounding, but some quantities can have multiple types, and some can have multiple interpretations for their spatiotemporal extent. This work thus makes two design choices to mitigate these issues.

Enforce single-typing In this work, we allow quantities to have only one single type. This ensures annotation quality since multiple types for a single quantity may complicate the spatiotemporal extent.
For instance, in "[three] men were hospitalized 5 days after being tested positive," the time span of hospitalization and that of tested positive are different. We enforce single-typing by providing an order of importance. For instance, hospitalization is more important than tested positive, so the spatiotemporal extent of "three" will be that of hospitalizations. + +Ignore rate and money quantities Rate and money quantities are excluded in all of our typing labels, because their spatiotemporal extent can be interpreted in different ways. For instance, the spatiotemporal extent of "a bill of $4.8 billion" can be interpreted either as when and where this bill was passed, or as when and where the bill will be used; similarly, to define the time span of the rate quantity "[20%] of the tenants were infected", we can either use the time span from the very first case to the last case that brought the infection rate from 0% to 20%, or use the time span when the infection rate was holding at 20%. For applications where one needs to spatiotemporally ground rate and money quantities, one could extend our instructions to clarify the ambiguities above. + +# 2.3 Spatial Grounding + +The spatial grounding problem of STEQE is to ground real-world events to a locale (see Fig. 7 in Appendix), avoiding complications in applications like human-robot interactions (e.g., "turn left and go to the kitchen, and then pick up the fruit on the table"). Thus we do not need to handle the nuances of relative spatial relationships like "the kitchen is on our left" and "the table is in the kitchen." We describe our formalism in terms of the format, granularity, and multi-location handling. + +Title: Six COVID-19 cases emerge in South Portland + +Text: SOUTH PORTLAND, Maine -- A facility for people with cognitive disabilities reports having [six] COVID-19 cases... 
+ +Spatial grounding for [six]: US → Maine → South Portland → A facility for people with cognitive disabilities + +Figure 2: The desired spatial grounding annotation is the most specific location mentioned in the text that contains all individual cases of a quantity event. + +Format An important decision for spatial grounding is the format: we can use natural language to describe the locale, select text spans from the original text, or select from a map directory. In this work, we use a combination of all three for spatial grounding to balance between flexibility and consistency: we choose from a predefined set of questions to determine the country (U.S. vs non-U.S.) and state, use free text for the name of the city, and span selection for more granular locale information (e.g., "a pork plant"). We leave it for future work if one wants to extend to other countries, or if one can provide a detailed map directory. + +Granularity We define spatial grounding annotation to be the most specific location mentioned in the text that contains all individual cases of a quantity event. For instance, in Fig. 2, the title mentions 6 cases in "South Portland," but later we will see that the 6 cases are all from "a facility for people with cognitive disabilities." The annotation should specify that facility instead of stopping at "South Portland." This design choice requires annotators to check the context in addition to the sentence containing the quantity, and is important for downstream tasks because it is likely that there are cases in South Portland but not in that facility. + +Multi-location We handle events in multiple locations by broadening the granularity of the spatial location, as mentioned above. However, there are cases where the same quantity is explicitly mentioned with two or more separate locations: + +1. "Both Seattle and Tacoma had [10] new cases." +2. "Seattle and Tacoma together had more than [10] new cases." 
+ +The “10” in both sentences above are associated with two cities, Seattle and Tacoma. The semantics are also different: being shared by two locales, or the events from both locales combine to make this quantity. In our pilot studies, we tried to consider + +these details in multi-location quantities, but found that they were very rare and crowd workers could not capture them reliably. We thus decide to ignore these cases in this work and only allow crowd workers to select a single location. + +# 2.4 Temporal Grounding + +The temporal grounding problem of STEQE is to ground each real-world quantity event to a single time span, which reduces the complexities in temporal semantics often encountered in prior datasets (Pustejovsky et al., 2003; Cassidy et al., 2014; O'Gorman et al., 2016; Ning et al., 2018a, 2020b) and improves practicality. + +Format A time span consists of two time points, and the key is the format for time points. In this work, we allow a time point to be UNKNOWN if the text is unclear. For a specific time point, there are two general ways to describe it: (1) use absolute date and time (e.g., "Feb 1st, 2021"); (2) use relative time $\Delta$ based on a reference time point $T$ (e.g., "3 days before lockdown"). + +We have chosen the first format in this study, and when a time point is unclear based on the text, we allow annotators to simply select "Unknown". The second method above is strictly more expressive, but also comes with many degrees of freedom: the reference point $T$ can be either an absolute date and time $T_{\text{time}}$ or another event $T_{\text{event}}$ (e.g., "lockdown"), and the relative time difference $\Delta$ can be either a specific duration $\Delta_{\text{spec}}$ like "3 days before/after" or a rough description $\Delta_{\text{rough}}$ like "a few days before/after." 
In our pilot studies allowing for $T_{\text{time}} + \Delta_{\text{rough}}$ , $T_{\text{event}} + \Delta_{\text{spec}}$ , or $T_{\text{event}} + \Delta_{\text{rough}}$ , we found the $T + \Delta$ method too flexible to achieve annotation agreement; in the meantime, using absolute date and time could reliably estimate those time spans in practice. This is why we recommend the first format above. + +Granularity Given the nature of news events, it is often enough to be specific up to days. We define the time span of a quantity to be from the day of first event to the day of the last,[3] but this exact time span may not always exist in the text, so STEQE uses the best over-estimate of this gold time span based on information in the text (see Table 3). + +![](images/2d62d37f17a94bd9b99d868b2f20bc785a8ec32f6890a896c7fcfb9601b17ec6.jpg) +Figure 3: We define the time span of a quantity to start from the first event and end at the last; the desired temporal grounding annotation is the tightest estimate based on the text that covers all 6 events. + +This work also addresses common ambiguities. (1) Some time expressions are not critical and thus less specific in text, e.g., "March 2020," for which we will simply use the entire span of that range, e.g., [03/01/2020, 03/31/2020]. (2) For time expressions like "mid September" and "end of 2020", we choose the closest dates, e.g., "09/15" and "12/31/2020". (3) Depending on the actual publication date and the content of an article, there can be different interpretations for "today," thus leading to a one-day disagreement among people regarding time expressions like "yesterday" or "in the last three days." We allow our annotators to use their best judgment in these cases. 
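The day-granularity conventions above can be made concrete in a few lines of code; the following sketch is ours (the paper does not release this as code), covering the "March 2020", "mid September", and "end of 2020" rules:

```python
import calendar
from datetime import date

def month_span(year, month):
    # A bare month mention such as "March 2020" maps to the whole month.
    last_day = calendar.monthrange(year, month)[1]
    return (date(year, month, 1), date(year, month, last_day))

def mid_month(year, month):
    # "mid September" maps to the closest date, the 15th.
    return date(year, month, 15)

def end_of_year(year):
    # "end of 2020" maps to the closest date, 12/31/2020.
    return date(year, 12, 31)

print(month_span(2020, 3))  # [03/01/2020, 03/31/2020]
```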
+ +Multi-span Similar to spatial grounding, we handle events in multiple time spans by broadening the granularity of the time span, as mentioned above, and as with spatial grounding, we do not label multiple time spans separately in rare cases like "10 arrests on Monday and Wednesday." + +Overall quantity A special type of temporal grounding phenomenon is overall quantities. Strictly speaking, this notion exists for spatial grounding as well (e.g., the overall COVID-19 case number around the world or the U.S.). While humans easily agree on the spatial extent of these overall quantities, their time spans are often ambiguous, especially the start time. For instance, in "there have been [3 million] cases so far," the start time is supposed to be "the beginning of the pandemic," but people do not always agree on when that was. The disagreement comes from (1) the pandemic started at different times in different regions of the world; (2) one may argue that the pandemic started either since the first confirmed case, or since the lockdown. This debate over start-time is not an NLP problem, so instead of inventing a new mechanism to resolve this, we simply allow "overall" as a label for the start time of a quantity. + +
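Putting §2.1-§2.4 together, one possible record layout for a single STEQE annotation is sketched below; the field names and the example DCT are our own choices, with `None` standing in for UNKNOWN endpoints and a flag marking "overall" start times:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class QuantityAnnotation:
    span: str                     # minimal quantity span, e.g. "3 million"
    qtype: str                    # single domain-specific type (Sec. 2.2)
    locations: List[str] = field(default_factory=list)  # coarse-to-fine chain (Sec. 2.3)
    start: Optional[date] = None  # None encodes UNKNOWN (Sec. 2.4)
    end: Optional[date] = None
    overall_start: bool = False   # "overall" label for the start time

# The "[3 million] cases so far" example above, with an assumed DCT:
ex = QuantityAnnotation(span="3 million", qtype="Confirmed cases",
                        locations=["US"], end=date(2020, 8, 27),
                        overall_start=True)
print(ex.overall_start, ex.start)
```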

| Domain, DCT | Quantity | Type | Spatial Grd. | Temporal Grd. |
| --- | --- | --- | --- | --- |
| COVID-19, Sat 2020-08-15 | Tennessee has conducted 1,757,690 tests with 1,631,297 negative results. | Tested negative | Tennessee | Overall number at DCT |
| Wildfires, Tue 2020-09-22 | The blaze had more than doubled in size over the past week to 170 square miles (440 square kilometers), … from Los Angeles. | Measurements | Los Angeles, California | 2020-09-15 to 2020-09-22 |
| BLM Protests, Tue 2020-06-16 | Black Lives Matter demonstrators in a tiny Ohio town...Sunday. The small demonstration has about 80 people, organized by local Bethel residents. | Participants | Bethel, Ohio | 2020-06-14 to 2020-06-14 |
+ +Table 1: Example annotations with quantity span highlighted. Texts are truncated. + +# 3 Data Annotation + +We have walked through the definition of the tasks in our STEQE framework, with discussions on various design choices. Next we explain how to collect annotations via this framework in practice. Table 1 shows some example annotations from our datasets. + +# 3.1 Input Document Filtering + +We worked with NewsBreak Inc., a local news aggregation company, to obtain raw newswire texts from publicly available news outlets. We then made use of NewsBreak's internal tools to determine the topic of these news articles, i.e., whether an article is about COVID-19, Black Lives Matter protests in 2020, or the 2020 California wildfires. The data also comes with meta information including each article's source domain and publication time. Altogether, we obtain 1M articles on COVID-19 between 01/01/2020 and 12/31/2020, 100k on protests from 05/22/2020 to 12/31/2020, and 90k on California fires from 08/01/2020 to 12/31/2020 as source articles. + +# 3.2 Domain-specific Typing + +Following the general guidelines in §2.2, we used the following domain-specific types in this study. + +1. COVID-19 pandemic: deaths caused by COVID-19, deaths likely caused by COVID-19, recoveries, confirmed cases, tests, tested negative, hospitalizations, patients on ventilators, and in ICUs. +2. BLM protests: protests, participants, order maintainers, arrests, deaths, injuries, and shootings. +3. California fires: fires, physical measurements, people impacted, items impacted, and resources. + +These domain-specific types can be very specific (see those for the COVID-19 pandemic) or generic (see those for California fires), which demonstrates the flexibility of our framework. 
+ +# 3.3 Shareable CROWDAQ Pipeline + +CROWDAQ (Ning et al., 2020a) is an open-source platform that standardizes data annotation pipelines and provides a customizable annotation interface, automated annotator qualification exams, progress monitoring, and annotation agreement monitoring. CROWDAQ pipelines have four components: instruction, tutorial, exam, and main task: an annotator will read the instruction and tutorial, and then work on a set of multiple-choice exam questions. CROWDAQ automatically checks their scores and assigns qualifications. Qualified annotators will then be able to work on the main task. For each of the four tasks defined in Sec. 2, we have designed CROWDAQ pipelines that are general enough to be used for annotating in all domains. We release the CROWDAQ pipelines for public use. + +# 3.4 Data statistics + +We first show statistics of our qualification exams in Table 2. We can see quantity recognition expectedly has the fewest hard questions and highest passing rate, and spatial and temporal grounding have more hard questions. Note that typing for California fires seems harder than typing for the other two domains, likely due to our choice of more generic types for California wildfires. + +We then launched main annotation tasks on Amazon Mechanical Turk (MTurk) that were available + +
| Qual ID | Qual Name | Hard (%) | Passed (%) |
| --- | --- | --- | --- |
| Q | Recognition | 18 | 94 |
| SG | Sp. Grd. | 47 | 62 |
| TG | Temp. Grd. | 50 | 57 |
| T-C | Typing (COVID) | 27 | 60 |
| T-B | Typing (BLM) | 36 | 60 |
| T-F | Typing (Fire) | 50 | 53 |
only to qualified workers. We also required 3 different workers for each annotation job and used majority voting to aggregate the workers' annotations. Since quantity recognition is a relatively easy task and our quantity recognition system based on BERT (Devlin et al., 2019) for the COVID domain was reliable enough to be applied to the other domains, we did not collect further quantity recognition data. Table 3 and Table 6 (Appendix) show more statistics of these datasets.

Table 2: The difficulty of the qualification exams in this work. Hard: exam questions where less than $70\%$ of attempts were correct. Passed: the fraction of all attempts that passed. See Table 5 in the appendix for more details.
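The aggregation step can be sketched as a per-job majority vote over the three workers' answers; this is an illustrative reconstruction, not the authors' released code:

```python
from collections import Counter

def aggregate(worker_labels):
    """Majority vote over the labels submitted by the (typically 3) workers
    assigned to one annotation job; ties fall back to the earliest answer."""
    return Counter(worker_labels).most_common(1)[0][0]

# Three workers type the same quantity:
print(aggregate(["deaths", "deaths", "confirmed cases"]))  # prints: deaths
```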
| Task | QID | #W | #Q | WAWA | Expert |
| --- | --- | --- | --- | --- | --- |
| Recog. | | | | | |
| - COVID | Q | 58 | 2.6k | 92% | 98% |
| Typing | | | | | |
| - COVID | Q, T-C | 52 | 1.5k | 95% | 100% |
| - BLM | Q, T-B | 74 | 4k | 87% | 94% |
| - Fire | Q, T-F | 68 | 2k | 91% | 96% |
| Sp. Grd. | | | | | |
| - COVID | T-C, SG | 91 | 3.4k | 91% | 98% |
| - BLM | T-B, SG | 50 | 1.5k | 80% | 96% |
| - Fire | T-F, SG | 63 | 2k | 92% | 90% |
| Temp. Grd. | | | | | |
| - COVID | T-C, TG | 132 | 4.3k | 86% | 100% |
| - BLM | T-B, TG | 57 | 1.6k | 77% | 96% |
| - Fire | T-F, TG | 63 | 1.6k | 82% | 96% |
Table 3: The required qualifications (QID), numbers of actual annotators (#W) and annotated quantities (#Q), worker agreement with aggregate (WAWA), and expert evaluation on 50 random samples after worker aggregation. The WAWA metric is for the "state" choice in spatial grounding, and the "overall number" judgment in temporal grounding (reported by CROWDAQ directly). The expert evaluation scores are all accuracy, except for $F_{1}$ for quantity recognition.

Note that we did not enforce full annotation for all quantities (i.e., one quantity may only receive typing annotations, and another may only receive spatial annotations) in order to cover more documents (Ning et al., 2019a). Among those reported in Table 3, 500 quantities in each domain are fully labeled with both typing and spatiotemporal extent, and we use these as our test sets.

We paid $0.05 for each job in quantity recognition, and $0.15 for those in typing, spatial grounding, and temporal grounding; in the COVID-19 data collection, the average hourly pay of the top 5 annotation contributors was $25 (typing), $13 (spatial grounding), and $12 (temporal grounding). In total, the cost of the 3 datasets was $11k (including the 20% overhead paid to MTurk).

We developed our CROWDAQ pipeline for COVID-19 and then applied it to the other domains. When we received news articles on BLM protests and California wildfires from NewsBreak Inc., it took us only about 2 weeks to obtain the annotations used in this work, including designing domain-specific typing instructions and exams, launching tasks on MTurk, and waiting for crowd workers to finish. This fast and reliable data collection is appealing for responding to emerging events in the future.

# 4 Model

Quantity recognition is a typical span selection problem, and we use the standard token classification model based on BERT (large, cased) (Devlin et al., 2019) that comes with HuggingFace (Wolf et al., 2020).
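Token classification models for span selection typically emit per-token tags that are then decoded into spans; the BIO decoding below is a generic sketch (the exact tag scheme is our assumption, not stated in the paper):

```python
def bio_to_spans(tags):
    """Decode a per-token BIO tag sequence into (start, end) token spans
    (end exclusive); a stray I tag with no open span is treated like B."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "B":
            if start is not None:          # close the previous span
                spans.append((start, i))
            start = i
        elif tag == "I" and start is None:  # stray I opens a span
            start = i
        elif tag == "O" and start is not None:
            spans.append((start, i))
            start = None
    if start is not None:                   # span running to the end
        spans.append((start, len(tags)))
    return spans

print(bio_to_spans(["O", "B", "I", "O", "B"]))  # prints: [(1, 3), (4, 5)]
```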
For typing, spatial, and temporal grounding, we use the T5-large language model (Raffel et al., 2020) for its flexibility across tasks and easy domain transfer. We format the data from each task to fit T5's sequence-to-sequence (seq-to-seq) nature. Specifically, for each quantity, the input sequence to T5 is the string of the previous 3 sentences, the current sentence with a special marker token right before the quantity span, the next 3 sentences, the title, and the document creation time (DCT). For typing, the output sequence is a single token representing the label, mapped from a reserved vocabulary. For spatial grounding, the output sequence is the location names from the highest level of the hierarchy to the lowest, ended by an end-of-sentence (EOS) marker. For temporal grounding, the output sequence is the start time followed by the end time. Both times are either "unknown" or a date string in ISO 8601 format (e.g., "2021-01-15"). We view the start time of an overall quantity as "unknown". To get complete date predictions, we enforce the decoding length to be at least 12 and use a date parser to find "unknowns" or dates.
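A sketch of this input/output formatting (the marker token, field order, and the date-parsing step follow the description above; the exact marker and separator strings are our assumptions):

```python
import re

def build_t5_input(prev_sents, sent, q_start, next_sents, title, dct, marker="[QTY]"):
    """Concatenate up to 3 previous sentences, the current sentence with a
    marker token right before the quantity span, up to 3 next sentences,
    the title, and the document creation time (DCT)."""
    marked = sent[:q_start] + marker + " " + sent[q_start:]
    parts = prev_sents[-3:] + [marked] + next_sents[:3] + [title, "DCT: " + dct]
    return " ".join(parts)

def parse_temporal_output(text):
    """Recover (start, end) from the decoded string: each value is either
    'unknown' or an ISO 8601 date; missing values default to 'unknown'."""
    values = re.findall(r"\d{4}-\d{2}-\d{2}|unknown", text)
    return (values + ["unknown", "unknown"])[:2]
```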
| System | Task | Typing Acc | EM-city | EM-state | Binary | S-N | E-N | End-to-end |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Naive | COVID | 44 | 68 | 84 | 68 | 0 | 2 | 43 |
| | BLM | 38 | 74 | 82 | 32 | 0 | 3 | 20 |
| | Fire | 27 | 58 | 92 | **86** | 0 | 31 | 20 |
| T5 (in-domain) | COVID | **89** | **81** | 90 | **74** | 53 | 52 | **56** |
| | BLM | **89** | 77 | 89 | 57 | 49 | 43 | 41 |
| | Fire | **87** | 70 | **94** | 83 | 1 | 32 | **55** |
| T5 (all domains) | COVID | **89** | **81** | **91** | **74** | **54** | **57** | 55 |
| | BLM | **89** | **80** | **91** | **65** | **62** | **57** | **48** |
| | Fire | **87** | **71** | **94** | 76 | **46** | **61** | 52 |
Table 4: System performances on typing, spatial grounding, and temporal grounding (averaged from 3 different runs). EM-city/-state: exact match scores up to the city-/state-level. Binary: judging if a quantity is an overall quantity ending on DCT. S-N/E-N: EM scores when the start/end time is non-trivial. End-to-end: quantities receiving correct predictions on all steps, based on "EM-city" (spatial) and "Binary" (temporal). T5 (all domains) uses the same typing systems trained in-domain, but combines the spatiotemporal grounding data from all domains in training. Bold values are the best results for each domain and metric.

# 5 Experiments

In our evaluation of quantity recognition using the aforementioned BERT model on a random set of 300 sentences (100 from each domain), we find the precision to be $99\%$ for all domains, and the recall $95\%$ (COVID), $87\%$ (BLM), and $87\%$ (Fire). The recall is slightly lower because of poor performance on article words ("a" and "an"). However, since most missed quantities are not associated with event types that we are interested in (e.g., "[a] post office" or "[a] comment"), the adjusted recall is $98\%$ (COVID), $94\%$ (BLM), and $93\%$ (Fire) if we do not consider those irrelevant quantities.

Table 4 shows system performances on typing, spatial, and temporal grounding on extracted quantities. Our test set in each domain consists of 500 fully annotated quantities. The rest of the data is split into $80\%$ for training and $20\%$ for development, which we use to select the learning rate (5e-3) and batch size (32). We compare T5 with a naive method, which always predicts the majority type in each domain for "typing," the location mention closest to the quantity in text for "spatial grounding," and an overall quantity ending on DCT for "temporal grounding." For spatial grounding, we report two exact match (EM) scores, up to the state-level and city-level, respectively.
For temporal grounding, we report the accuracy for judging whether a quantity is an overall quantity ending on DCT ("Binary" in Table 4), and two EM scores for cases where the gold start time is a specific date ("S-N" for "Start-Nontrivial") and where the end time is not DCT ("E-N" for "End-Nontrivial").

T5 (in-domain) On quantity typing, T5 improves by a large margin over the naive baseline in all domains. The naive baseline performs reasonably well on spatial grounding at the state level (82-92% EM-state across the three domains), but often fails to provide more granular information at the city level (58-74% EM-city). This is expected because a city mentioned close to the quantity does not necessarily mean that the quantity is for that city. This phenomenon also varies across domains: BLM protests took place in a few major cities, so the EM-city score of the naive method is relatively high (74%), while for California wildfires, there were more cities to choose from, leading to a low EM-city of 58%. In contrast, T5 can produce more granular information at the city level, and maintains a relatively stable score across domains (70-81% EM-city). As for temporal grounding, due to the nature of news articles, the naive baseline that treats all quantities as an overall quantity ending on DCT yields reasonably good performance in all domains; but for quantities with a non-trivial start time or end time, the naive baseline largely fails.

T5 (all domains) We also combine the training data for spatiotemporal grounding from all domains and train a single T5 system (but keep T5 in-domain systems for typing), which achieves the best scores for almost all metrics in Table 4. One outlier is the Fire domain, where the Binary score for temporal grounding drops, probably due to most temporal annotations being overall quantities. This suggests that spatiotemporal phenomena can be generally transferred across different domains.
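The temporal metrics above (Binary, S-N, E-N) can be read as subset accuracies; the sketch below is our interpretation of the definitions, not the authors' evaluation code:

```python
def temporal_scores(gold, pred, dct):
    """gold/pred: aligned lists of (start, end) strings, each 'unknown' or an
    ISO date. Binary: accuracy of the 'overall quantity ending on DCT'
    judgment. S-N / E-N: exact match restricted to quantities whose gold
    start is a real date / whose gold end is not the DCT."""
    def overall(span):
        return span[0] == "unknown" and span[1] == dct
    binary = sum(overall(g) == overall(p) for g, p in zip(gold, pred)) / len(gold)
    def em(pairs):
        return sum(g == p for g, p in pairs) / len(pairs) if pairs else 0.0
    sn = [(g, p) for g, p in zip(gold, pred) if g[0] != "unknown"]
    en = [(g, p) for g, p in zip(gold, pred) if g[1] != dct]
    return binary, em(sn), em(en)
```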
Finally, the end-to-end column in Table 4 shows how many of these quantities received correct predictions on typing, spatial grounding (based on EM-city), and temporal grounding (based on "Binary"). The reported performance does not account for quantities that are not recognized, so we view this as the precision of the system. We see that the naive baseline has very low performance due to errors propagated at each step, while with this framework, T5 is trained to produce significantly better results. Note that depending on the use case, one can simply collect more training data, or focus on only a few important event types, to further improve the end-to-end performance.

# 6 Related Work

Existing NLP works on events have focused on detection (e.g., detecting LIFE and BUSINESS events; ACE (2005)), common sense (e.g., Rashkin et al. (2018); Sap et al. (2019); Zhang et al. (2020a)), and relationships (e.g., coreference (Chen and Ji, 2009), temporal (UzZaman et al., 2013), causal (Do et al., 2011), and parent-child relations (Glavaš et al., 2014)). There is also a line of recent works specifically on temporal semantics: time expression extraction and normalization (Laparra et al., 2018), temporal relation extraction (Ning et al., 2018a, 2019b, 2020b), temporal common sense (Zhou et al., 2019, 2020), temporal slot filling (Surdeanu, 2013), and timeline construction (Do et al., 2012; Ning et al., 2018b; Li et al., 2019). These tasks may help with understanding the temporal aspects of events in general, but they cannot directly associate temporal values with quantities, which calls for a dedicated framework such as STEQE. Prior works on quantities either focus on math calculations (Roy et al., 2015; Roy and Roth, 2018) or common sense reasoning (e.g., mass distribution of animals; Elazar et al. (2019)), and not on quantity events and the associated spatiotemporal extent studied in this work.
Existing works on spatial semantics have focused on natural language navigation (Chen et al., 2019; Kim et al., 2020), human-machine interaction (Landsiedel et al., 2017; Roman Roman et al., 2020), dialogue systems (Udagawa et al., 2020), and clinical analysis (Kordjamshidi et al., 2015; Datta and Roberts, 2020). Works on geocoding (Gritta et al., 2018; Kulkarni et al., 2020) map spatial mentions to coordinates, which could be applied to our work for finer geolocation mapping. Zhang and Choi (2021) propose a QA dataset that considers the time and location of a question when judging answer correctness, which may benefit from our information extraction framework.

Recent work by Zong et al. (2020), which extracts COVID-19-related events from tweets, is closely related to ours. Beyond the fact that they worked on tweets instead of news articles, the key differences are: (1) instead of the span selection used in Zong et al. (2020), we propose formalisms that go deeper into the spatiotemporal extent of quantity events and capture more nuances in spatiotemporal semantics; (2) we show that our STEQE framework applies generally to multiple domains, not only to the COVID-19 pandemic; (3) we release our entire data collection pipeline on CROWDAQ for public use and extension.
First, news articles tend to mention many overall quantities ending on publication time, leading to imbalanced datasets. For instance, $86\%$ of quantities in Fire fall into this category, leaving little training data for other quantities; in contrast, this number is only $32\%$ in BLM, and the S-N and E-N scores are much higher in BLM than in Fire. Second, temporal grounding often requires reasoning, a problem known to be difficult in many works on temporal semantics (Ning et al., 2020b; Zhou et al., 2021). For instance, in Fig. 4, to figure out the time span of "80," we need to understand that (1) it happened on "Sunday"; (2) this "Sunday" is a Sunday in the past rather than in the future; and (3) it is most likely the most recent Sunday rather than an earlier one.

DCT: Tuesday, 06/16/2020
Text: Black Lives Matter demonstrators in a tiny Ohio town...Sunday. The small demonstration has about [80] people, organized by local Bethel residents.

Figure 4: The start time of "80" needs reasoning.

Another direction to improve on STEQE is to aggregate from multiple articles, given that the same or similar quantities are typically covered by multiple sources. Cross-document event coreference has many unique difficulties (e.g., see Upadhyay et al. (2016); Bugert et al. (2020)), but knowing the quantity event type, location, and time span may make it relatively easy to find coreferent quantities, which can strengthen one's belief in a prediction or demote outliers that are likely wrong.

The proposed STEQE framework may also be used to detect misinformation, and perhaps in social science studies too. For instance, we have anecdotes where a website mistakenly reported Virginia's COVID-19 case number on Apr 2, 2020 to be 17k, while the correct number was 1.7k; we also found signs that news agencies might have mentioned case numbers in New York City less frequently after a sharp increase, but turned to reporting case numbers in New Jersey in April 2020.
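The last reasoning step in the Fig. 4 example (resolving "Sunday" to the most recent past Sunday relative to the DCT) is mechanical once the first two are settled; a small sketch:

```python
from datetime import date, timedelta

def most_recent_weekday(dct, weekday):
    """Most recent strictly-past date with the given weekday (0=Mon .. 6=Sun)."""
    days_back = (dct.weekday() - weekday - 1) % 7 + 1  # always 1..7 days
    return dct - timedelta(days=days_back)

# DCT is Tuesday 06/16/2020; the mentioned "Sunday" resolves to:
print(most_recent_weekday(date(2020, 6, 16), 6))  # prints: 2020-06-14
```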
These social science analyses are beyond the scope of this work, but the examples above point to interesting potential uses of these information extraction systems. + +# 8 Conclusion + +Many important news events are associated with quantities. With practicality in mind, we dive deep into the semantics of quantity events and propose a meta-framework for spatiotemporal quantity extraction: we formulate the problem as four information extraction tasks which lead to quick and reliable data annotation via crowdsourcing; we also build a T5 baseline to study the difficulties of the task and discuss transfer learning opportunities. We use this meta-framework to build datasets on three separate sociopolitical events: the COVID-19 pandemic, BLM protests, and California fires. Our meta-framework is shown to be readily extensible to different domains of quantity events, an appealing feature for quick response to future events. The new datasets we collect as examples of this framework can also directly contribute to future studies on spatiotemporal quantity extraction. + +# References + +2005. The ACE 2005 (ACE 05) Evaluation Plan. Technical report. +Michael Bugert, N. Reimers, and Iryna Gurevych. 2020. Cross-document event coreference resolution beyond corpus-tailored systems. ArXiv, abs/2011.12249. +Taylor Cassidy, Bill McDowell, Nathanel Chambers, and Steven Bethard. 2014. An annotation framework for dense event ordering. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 501-506. +Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, and Yoav Artzi. 2019. TOUCHDOWN: Natural language navigation and spatial reasoning in visual street environments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). +Muhao Chen, Hongming Zhang, Qiang Ning, Manling Li, Heng Ji, Kathleen McKeown, and Dan Roth. 2021. Event-centric natural language processing. 
In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 6-14. +Zheng Chen and Heng Ji. 2009. Graph-based event coreference resolution. In Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing (TextGraphs-4). +Surabhi Datta and Kirk Roberts. 2020. A hybrid deep learning approach for spatial trigger extraction from radiology reports. In Proceedings of the Third International Workshop on Spatial Language Understanding. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). +Quang Do, Yee Seng Chan, and Dan Roth. 2011. Minimally supervised event causality identification. In Proc. of the Conference on Empirical Methods in Natural Language Processing (EMNLP). +Quang Do, Wei Lu, and Dan Roth. 2012. Joint inference for event timeline construction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). +Yanai Elazar, Abhijit Mahabal, Deepak Ramachandran, Tania Bedrax-Weiss, and Dan Roth. 2019. How large are lions? inducing distributions over quantitative attributes. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). +Goran Glavaš, Jan Šnjader, Marie-Francine Moens, and Parisa Kordjamshidi. 2014. HiEve: A corpus for extracting event hierarchies from news stories. In LREC. + +Milan Gritta, Mohammad Taher Pilehvar, Nut Limsopatham, and Nigel Collier. 2018. What's missing in geographical parsing? Language Resources and Evaluation, 52:603 - 623. +Hyounghun Kim, Abhaysinh Zala, Graham Burri, Hao Tan, and Mohit Bansal. 2020. *ArraMon: A joint navigation-assembly instruction interpretation task in dynamic environments*. In *Findings of the Association for Computational Linguistics: EMNLP* 2020. 
+Parisa Kordjamshidi, Dan Roth, and Marie-Francine Moens. 2015. Structured learning for spatial information extraction from biomedical text: Bacteria biotopes. In BMC Proc. of the International Conference on Bioinformatics Models, Methods and Algorithms. +Sayali Kulkarni, Shailee Jain, Mohammad Javad Hosseini, Jason Baldridge, E. Ie, and L. Zhang. 2020. Spatial language representation with multi-level geocoding. ArXiv, abs/2008.09236. +Christian Landsiedel, Verena Rieser, Matthew Walter, and Dirk Wollherr. 2017. A review of spatial reasoning and interaction for real-world robotics. Advanced Robotics, 31(5):222-242. +Egoitz Laparra, Dongfang Xu, and Steven Bethard. 2018. From characters to time intervals: New paradigms for evaluation and neural parsing of time normalizations. Transactions of the Association for Computational Linguistics (TACL), 6:343-356. +Manling Li, Ying Lin, Joseph Hoover, Spencer Whitehead, Clare Voss, Morteza Dehghani, and Heng Ji. 2019. Multilingual entity, relation, event and human value extraction. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations). +Qiang Ning, Hangfeng He, Chuchu Fan, and Dan Roth. 2019a. Partial or complete, that's the question. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). +Qiang Ning, Sanjay Subramanian, and Dan Roth. 2019b. An Improved Neural Baseline for Temporal Relation Extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). +Qiang Ning, Hao Wu, Pradeep Dasigi, Dheeru Dua, Matt Gardner, Robert L. Logan IV, Ana Marasovic, and Zhen Nie. 2020a. Easy, reproducible and quality-controlled data collection with CROWDAQ. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. +Qiang Ning, Hao Wu, Rujun Han, Nanyun Peng, Matt Gardner, and Dan Roth. 2020b. 
TORQUE: A reading comprehension dataset of temporal ordering questions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). + +Qiang Ning, Hao Wu, and Dan Roth. 2018a. A multi-axis annotation scheme for event temporal relations. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). +Qiang Ning, Ben Zhou, Zhili Feng, Haoruo Peng, and Dan Roth. 2018b. CogCompTime: A tool for understanding time in natural language. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (Demo Track). +Rasha Obeidat, Xiaoli Z. Fern, Hamed Shahbazi, and P. Tadepalli. 2019. Description-based zero-shot fine-grained entity typing. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). +Tim O'Gorman, Kristin Wright-Bettner, and Martha Palmer. 2016. Richer Event Description: Integrating event coreference with temporal, causal and bridging annotation. In Proceedings of the 2nd Workshop on Computing News Storylines (CNS 2016). +James Pustejovsky, Patrick Hanks, Roser Sauri, Andrew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro, et al. 2003. The TIMEBANK corpus. In Corpus Linguistics, page 40. +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, M. Matena, Yanqi Zhou, W. Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21:140:1-140:67. +Hannah Rashkin, Maarten Sap, Emily Allaway, Noah A. Smith, and Yejin Choi. 2018. Event2Mind: Commonsense inference on events, intents, and reactions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 463-473. +Homero Roman Roman, Yonatan Bisk, Jesse Thomason, Asli Celikyilmaz, and Jianfeng Gao. 2020. RMM: A recursive mental model for dialogue navigation. 
In Findings of the Association for Computational Linguistics: EMNLP 2020.
Subhro Roy and Dan Roth. 2018. Mapping to declarative knowledge for word problem solving. Transactions of the Association for Computational Linguistics (TACL), 6:159-172.
Subhro Roy, Tim Vieira, and Dan Roth. 2015. Reasoning about quantities in natural language. Transactions of the Association for Computational Linguistics (TACL), 3.
Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019. ATOMIC: An atlas of machine commonsense for if-then reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI).

Mihai Surdeanu. 2013. Overview of the TAC 2013 Knowledge Base Population evaluation: English slot filling and temporal slot filling. In Proceedings of the Text Analysis Conference (TAC).
Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).
Takuma Udagawa, Takato Yamazaki, and Akiko Aizawa. 2020. A linguistic analysis of visually grounded dialogues based on spatial expressions. In Findings of the Association for Computational Linguistics: EMNLP 2020.
Shyam Upadhyay, Nitish Gupta, Christos Christodoulopoulos, and Dan Roth. 2016. Revisiting the evaluation for cross document event coreference. In Proceedings of the International Conference on Computational Linguistics (COLING).
Naushad UzZaman, Hector Llorens, James Allen, Leon Derczynski, Marc Verhagen, and James Pustejovsky. 2013. SemEval-2013 Task 1: TempEval-3: Evaluating time expressions, events, and temporal relations. Proceedings of the Joint Conference on Lexical and Computational Semantics (*SEM), 2:1-9.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations.
Hongming Zhang, Xin Liu, Haojie Pan, Yangqiu Song, and Cane Wing-Ki Leung. 2020a. ASER: A large-scale eventuality knowledge graph. In Proceedings of the International World Wide Web Conference (WWW).
Michael J.Q. Zhang and Eunsol Choi. 2021. SituatedQA: Incorporating extra-linguistic contexts into QA. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
Tao Zhang, Congying Xia, Chun-Ta Lu, and Philip Yu. 2020b. MZET: Memory augmented zero-shot fine-grained named entity typing. ArXiv, abs/2004.01267.
Ben Zhou, Daniel Khashabi, Qiang Ning, and Dan Roth. 2019. "Going on a vacation" takes longer than "Going for a walk": A study of temporal commonsense understanding. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).

Ben Zhou, Daniel Khashabi, Chen-Tse Tsai, and Dan Roth. 2018. Zero-shot open entity typing as type-compatible grounding. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
Ben Zhou, Qiang Ning, Daniel Khashabi, and Dan Roth. 2020. Temporal common sense acquisition with minimal supervision. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).
Ben Zhou, Kyle Richardson, Qiang Ning, Tushar Khot, Ashish Sabharwal, and Dan Roth. 2021. Temporal reasoning on implicit events from distant supervision. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).
Shi Zong, Ashutosh Baheti, Wei Xu, and Alan Ritter. 2020.
Extracting COVID-19 events from Twitter. ArXiv, abs/2006.02567.

# A Qualification setups

Note that the exams for quantity recognition and for spatial and temporal grounding are domain-agnostic, while the exams for quantity typing are domain-specific. The way exams work on CROWDAQ is that we provide a pool of questions, from which CROWDAQ randomly selects a specified number. We also limit the number of attempts each crowd worker can make. Table 5 shows the setup and statistics of those exams.

CROWDAQ provides diagnostic information on each question too. In Table 5, we also show the number of questions where less than $70\%$ of examinees were correct (i.e., "Hard"), as well as the total number of attempts in each exam and how many of them scored above the passing grade.
| Qual ID | Qual Name | Pool: #Total | Pool: #Hard | Config: #Questions | Config: #Attempts | Config: Passing Grade | Workers: #Attempts | Workers: #Succeeded |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Q | Recognition | 11 | 2 | 10 | 3 | 90 | 952 | 895 |
| SG | Spa. Grd. | 17 | 8 | 12 | 3 | 90 | 1454 | 897 |
| TG | Temp. Grd. | 12 | 6 | 10 | 3 | 90 | 1180 | 674 |
| T-C | Typing-COVID | 11 | 3 | 10 | 3 | 90 | 1156 | 698 |
| T-B | Typing-BLM | 11 | 4 | 8 | 3 | 85 | 760 | 457 |
| T-F | Typing-Fire | 14 | 7 | 12 | 3 | 90 | 905 | 476 |
# B Corpus statistics

Table 6 shows a more complete version of our earlier Table 3. The extra columns are the total number of qualified workers for each task, the Gini index, and the total numbers of sentences/documents annotated. Gini is a metric proposed by TORQUE (Ning et al., 2020b) to measure the skewness of crowdsourcing data collection. Our Gini is significantly higher than theirs, and we think the reason is that many crowd workers only attempted a couple of our HITs. For the definition of WAWA, Ning et al. (2020b) provide a good explanation; please refer to Appendix E of that paper.

Table 5: The qualification exam setups in this study. Question Pool: all the questions we provided to CROWDAQ; hard questions are those where less than $70\%$ of attempts were correct. CROWDAQ Configuration: the number of questions to display each time, the number of attempts allowed, and the required passing grade. Workers' Performance: the total number of attempts and the number that succeeded.
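For concreteness, a Gini index over per-worker HIT counts can be computed with the standard formula; this is a generic sketch and may differ in detail from the exact computation used by Ning et al. (2020b):

```python
def gini(counts):
    """Gini coefficient of per-worker HIT counts: 0 means every worker did
    the same number of HITs; values near 1 mean a few workers did almost all."""
    xs = sorted(counts)
    n, total = len(xs), sum(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * cum / (n * total) - (n + 1) / n

print(round(gini([5, 5, 5, 5]), 3))   # prints: 0.0  (perfectly even)
print(round(gini([0, 0, 0, 20]), 3))  # prints: 0.75 (one worker did everything)
```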
| Task | Req. Qual ID(s) | #Qualified | #Actual | Gini | #Quant. | #Sent. | #Doc. | WAWA | Expert |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Typ-COVID | Q, T-C | 299 | 52 | 0.74 | 1.5k | 1.5k | 1.3k | 95% | 100% |
| Typ-BLM | Q, T-B | 291 | 74 | 0.53 | 4k | 3.9k | 3k | 87% | 94% |
| Typ-Fire | Q, T-F | 231 | 68 | 0.62 | 2k | 2k | 1.4k | 91% | 96% |
| Spa-COVID | T-C, SG | 258 | 91 | 0.74 | 3.4k | 3.3k | 2.9k | 91% | 98% |
| Spa-BLM | T-B, SG | 141 | 50 | 0.68 | 1.5k | 1.5k | 1.2k | 80% | 96% |
| Spa-Fire | T-F, SG | 160 | 63 | 0.71 | 2k | 2k | 1.3k | 92% | 90% |
| Temp-COVID | T-C, TG | 399 | 132 | 0.81 | 4.3k | 4.2k | 3.5k | 86% | 100% |
| Temp-BLM | T-B, TG | 190 | 57 | 0.71 | 1.6k | 1.6k | 1.2k | 77% | 96% |
| Temp-Fire | T-F, TG | 215 | 63 | 0.74 | 1.6k | 1.6k | 1.1k | 82% | 96% |
Table 6: Corpus statistics: the required qualification(s), the numbers of qualified and actual annotators, the Gini index of worker contributions, the numbers of annotated quantities, sentences, and documents, worker agreement with aggregate (WAWA), and expert evaluation on 50 random samples after worker aggregation. The WAWA metric is for the "state" choice in spatial grounding, and the "overall number" judgment in temporal grounding (reported by CROWDAQ directly). The expert evaluation scores are all accuracy, except for $F_{1}$ for quantity recognition.

# C Example annotations

Table 7 shows two examples in each of the three domains in this study.
| Domain, DCT | Quantity | Type | Spatial Grd. | Temporal Grd. |
| --- | --- | --- | --- | --- |
| COVID-19, Sat, 2020-08-15 | Tennessee has conducted 1,757,690 tests with 1,631,297 negative results | Test performed for COVID-19: result is negative | US, Tennessee | Overall number ends at DCT |
| COVID-19, Wed, 2020-08-12 | Wyandotte County is reporting 4,895 confirmed cases...The county said on Tuesday that 99 people have died from the coronavirus since the start of the outbreak | Deaths: definitely caused by COVID-19 | US, Kansas, Wyandotte County | Overall number ends on 2020-08-11 |
| Wildfires, Mon, 2020-09-14 | ...large fires across 10 states...At least 35 people have died in California, Oregon and Washington. | People impacted | US | Overall number ends at DCT |
| Wildfires, Tue, 2020-09-22 | The blaze had more than doubled in size over the past week to 170 square miles (440 square kilometers), ...from Los Angeles. | Physical measurements | US, California, Los Angeles | 2020-09-15 to 2020-09-22 |
| Protests, Tue, 2020-06-16 | Black Lives Matter demonstrators in a tiny Ohio town...Sunday. The small demonstration has about 80 people, organized by local Bethel residents. | Number of participants in protests or relevant activities | US, Ohio, Bethel | 2020-06-14 to 2020-06-14 |
| Protests, Sun, 2020-05-31 | A CNN analysis found about 80% of the 51 people booked into a Minneapolis jail during two days of protests are actually from Minnesota. | Number of arrests due to the protests or following skirmishes | US, Minnesota, Minneapolis | unknown |
+ +Table 7: Example annotations of quantity typing, spatial grounding, and temporal grounding across three domains. Quantity span is highlighted. Text snippets are cut short to only keep the sentence with the quantity and other relevant information. + +# D Reproducibility + +For T5-based experiments related to model performances in Table 4, we choose the learning rate from [5e-2, 5e-3, 5e-4] and select 5e-3 for final experiments. We use a batch size of 32 and run 20 epochs for each setting. All parameters are tuned on the development set as described in §5. Experiments on average finish in 3 hours on a single Nvidia RTX 8000 GPU. Spatial and temporal results are averaged from 3 runs with seeds [10, 20, 30]. \ No newline at end of file diff --git a/ametaframeworkforspatiotemporalquantityextractionfromtext/images.zip b/ametaframeworkforspatiotemporalquantityextractionfromtext/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..954475b26d3067cebafaabae7f0a0324d58ab16f --- /dev/null +++ b/ametaframeworkforspatiotemporalquantityextractionfromtext/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b8a335ad753d5b4ad9cf0d4f7b479413cd757db088276f09f5f6c762b5b487d7 +size 449755 diff --git a/ametaframeworkforspatiotemporalquantityextractionfromtext/layout.json b/ametaframeworkforspatiotemporalquantityextractionfromtext/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..8f9636f1ab8fc051b1d7646f3282d03ceea523e8 --- /dev/null +++ b/ametaframeworkforspatiotemporalquantityextractionfromtext/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f25113dd7bfa954cbe52d5eae3c07d404dc21288849c806b27503e1794cbbb33 +size 380299 diff --git a/amodelagnosticdatamanipulationmethodforpersonabaseddialoguegeneration/f8291238-3b07-4094-b283-670d9f901855_content_list.json 
b/amodelagnosticdatamanipulationmethodforpersonabaseddialoguegeneration/f8291238-3b07-4094-b283-670d9f901855_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..633fca14e79881ddd355cf9822cd6ebc274c0ac5 --- /dev/null +++ b/amodelagnosticdatamanipulationmethodforpersonabaseddialoguegeneration/f8291238-3b07-4094-b283-670d9f901855_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:161bd3bfb8d26abba333c28cb1e8031c22d2ebfc0096895179f4e9973127cd35 +size 125544 diff --git a/amodelagnosticdatamanipulationmethodforpersonabaseddialoguegeneration/f8291238-3b07-4094-b283-670d9f901855_model.json b/amodelagnosticdatamanipulationmethodforpersonabaseddialoguegeneration/f8291238-3b07-4094-b283-670d9f901855_model.json new file mode 100644 index 0000000000000000000000000000000000000000..4edae93d54622abc7334d5415cbc011872327c19 --- /dev/null +++ b/amodelagnosticdatamanipulationmethodforpersonabaseddialoguegeneration/f8291238-3b07-4094-b283-670d9f901855_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e64e8d1f4392343a24535b095625d2dedaa9215805815a70735cc96e1e583260 +size 150565 diff --git a/amodelagnosticdatamanipulationmethodforpersonabaseddialoguegeneration/f8291238-3b07-4094-b283-670d9f901855_origin.pdf b/amodelagnosticdatamanipulationmethodforpersonabaseddialoguegeneration/f8291238-3b07-4094-b283-670d9f901855_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3d9db8835f5420e8d6f98c0dfe293f552003c8d0 --- /dev/null +++ b/amodelagnosticdatamanipulationmethodforpersonabaseddialoguegeneration/f8291238-3b07-4094-b283-670d9f901855_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23443e3823b7ebdb331b6451a5f355ffb6d5e11ab3be27e8b7aeaed5e69c961b +size 977834 diff --git a/amodelagnosticdatamanipulationmethodforpersonabaseddialoguegeneration/full.md 
b/amodelagnosticdatamanipulationmethodforpersonabaseddialoguegeneration/full.md new file mode 100644 index 0000000000000000000000000000000000000000..e4442dd9a888956564af7c34df3020dba643ce9e --- /dev/null +++ b/amodelagnosticdatamanipulationmethodforpersonabaseddialoguegeneration/full.md @@ -0,0 +1,472 @@

# A Model-Agnostic Data Manipulation Method for Persona-based Dialogue Generation

Yu Cao $^{1*}$ , Wei Bi $^{2\dagger}$ , Meng Fang $^{3}$ , Shuming Shi $^{2}$ , Dacheng Tao $^{1,4}$

$^{1}$ School of Computer Science, The University of Sydney, Australia

$^{2}$ Tencent AI Lab, Shenzhen, China

$^{3}$ Eindhoven University of Technology (TU/e), Eindhoven, The Netherlands

$^{4}$ JD Explore Academy, Beijing, China

ycao8647@sydney.edu.au, victoriabi@tencent.com, m.fang@tue.nl, shumingshi@tencent.com, dacheng.tao@gmail.com

# Abstract

Towards building intelligent dialogue agents, there has been a growing interest in introducing explicit personas in generation models. However, with limited persona-based dialogue data at hand, it may be difficult to train a dialogue generation model well. We point out that the data challenges of this generation task lie in two aspects: first, it is expensive to scale up current persona-based dialogue datasets; second, each data sample in this task is more complex to learn with than conventional dialogue data. To alleviate the above data issues, we propose a data manipulation method, which is model-agnostic and can be packed with any persona-based dialogue generation model to improve its performance. The original training samples will first be distilled and thus expected to be fitted more easily. Next, we show various effective ways to diversify such easier distilled data. A given base model will then be trained via the constructed data curricula, i.e., first on augmented distilled samples and then on original ones.
Experiments illustrate the superiority of our method with two strong base dialogue models (Transformer encoder-decoder and GPT2).

# 1 Introduction

The ability to generate responses with consistent personas is important towards building intelligent dialogue agents. In recent years, there has been a growing interest in introducing explicit personas into dialogue generation models (Song et al., 2019; Wolf et al., 2019). A piece of persona text generally consists of profiles and background personal facts. A clipped persona-based dialogue from the PersonaChat (Zhang et al., 2018a) dataset is shown in Figure 1, which covers rich persona features.

![](images/305184060be676f2eb1d4a986bf5fe9d082e9a8f76301e7016f4495e66147aa2.jpg)
Figure 1: Each response in a persona-based dialogue is mostly related to one persona sentence and its latest dialogue history utterance. Persona sentences in grey are redundant for all responses.

For a persona-based dialogue generation model, generated responses need to be relevant to the dialogue context as well as consistent with personas.

Most existing generation models for this task rely heavily on training with sufficient persona-based dialogues. However, available data are limited due to their expensive collection costs. Take PersonaChat as an example: two crowd-sourced annotators are hired to play the part of a provided persona and converse naturally with each other. In total, about 162 thousand dialogue utterances are collected with fewer than 5 thousand unique persona profiles. Compared with conventional dialogue datasets such as OpenSubtitles (Lison and Tiedemann, 2016) and Weibo (Shang et al., 2015) with millions of utterances, persona-based dialogue datasets are relatively small.

Besides the limited data scale, another data issue we want to point out is that a persona-based dialogue is more complex to learn from than a conventional dialogue.
Recall that a persona-based dialogue involves not only multiple dialogue utterances, but also auxiliary persona sentences. Welleck et al. (2019) showed that not all responses in the PersonaChat dataset are consistent with the provided personas. This makes it difficult for a model to capture a reliable mapping from training data. If we apply a dialogue model similar to those used in conventional dialogue generation tasks, with a comparable parameter size, we should expect that more data would be necessary to train a robust model in this more difficult data setting. Moreover, it may be difficult to use existing data augmentation methods (Li et al., 2019; Niu and Bansal, 2019) to automatically construct such complex persona-based dialogue data. For example, if we apply back translation (Sennrich et al., 2016) to every sentence in persona-based samples, the augmented ones may not simultaneously maintain the coherence between the dialogue history and the response and the consistency between the persona and the response.

A few studies have been conducted to alleviate the above data issues by finetuning existing pretrained models such as GPT (Wolf et al., 2019; Golovanov et al.) or BERT (Song et al., 2021). They often stick to a certain pretrained model. Sophisticated finetuning strategies, including proper network modifications and loss functions, are required to get satisfactory performance, making these approaches hard to reuse across different pretrained models. Moreover, they do not address the data difficulty issue explicitly. Most of them simply concatenate all persona and dialogue history sentences into a single input sequence for finetuning, and rely on the ability of the pretrained model to adapt quickly to the target data domain. Hence, we want to design a model-agnostic method to address both the data scale and data difficulty issues, which can be packed with any base model, either trained from scratch or finetuned from a pretrained model.
In this work, we propose a data manipulation method for persona-based dialogue data, which is model-agnostic and can be packed with any base model to improve its robustness and consistency. Our method includes three operations on data, namely $\mathbf{D}^3$ , in sequence: (i) Data distillation: original training samples are simplified to contain only the useful, less redundant persona sentences and dialogue utterances, and are thus expected to be fitted more easily; (ii) Data diversification: with the easier distilled samples, we can also perform data augmentation more reliably. We design various methods to edit new personas, and then align them with new and consistent responses to improve data diversity; (iii) Data curriculum: with both augmented distilled and original data at hand, we arrange them into a data curriculum for model learning (Bengio et al., 2009), where the base model is trained first on the easier augmented distilled data and then on the harder original data. To validate the effectiveness of our method, we perform experiments on two strong base dialogue models, a Transformer-based encoder-decoder and GPT2.

# 2 Related Work

Persona-based dialogue generation This task has seen growing interest in recent years, thanks to released benchmark datasets such as PersonaChat/ConvAI2 (Zhang et al., 2018a; Dinan et al., 2020). Previous works mostly focus on modifying dialogue models to condition on auxiliary persona information, including extra persona embeddings (Li et al., 2016b), profile memory (Zhang et al., 2018a), copying from personas (Yavuz et al., 2019), CVAE with persona information (Song et al., 2019), and using meta-learning to augment low-resource personas (Tian et al., 2021).

Recent works try to adopt large-scale pretrained models for this task. GPT/GPT2 (Radford et al., 2018, 2019) are the most common choices and have been shown to improve the generation quality with different finetuning strategies (Wolf et al., 2019; Golovanov et al.; Cao et al., 2020).
Some leverage BERT (Devlin et al., 2019) as backbones (Song et al., 2021). Other pretrained models also demonstrate their effectiveness (Lin et al., 2021). The aforementioned methods often need proper network modifications and finetuning loss functions in order to get satisfactory performance, which makes them hard to transfer across different pretrained models. Moreover, most of them simply concatenate persona texts and dialogue history together as a single input sequence (Wolf et al., 2019; Roller et al., 2021), depending heavily on the ability of the pretrained model to adapt quickly to the target data domain.

Text data manipulation Various data augmentation methods have been widely used in many NLP tasks (Sennrich et al., 2016; Hou et al., 2018; Guo et al., 2019; Min et al., 2020), and they are also effective in boosting the performance of dialogue models. Newly generated dialogue utterances (Li et al., 2019; Niu and Bansal, 2019) and retrieval results (Zhang et al., 2020) can be used to augment the training data. However, all previous work studies only the pairwise relationship between a query and a response when designing the augmentation techniques, which cannot simultaneously account for auxiliary information such as personas.

Besides data augmentation, there are other ways to manipulate dialogue data to improve model learning. For example, a few approaches filter uninformative or noisy samples to enhance data quality (Csaky et al., 2019; Akama et al., 2020). Cai et al. (2020a) combine data augmentation and re-weighting to make models learn more effectively. Tian et al. (2019) utilize learnable memory based on dialogue clusters to enhance the model.

Curriculum learning Bengio et al. (2009) examine the benefits of training models using various curricula successively from easy to hard.
It has been applied to many NLP tasks such as machine translation (Platanios et al., 2019), reading comprehension (Tay et al., 2019) and language understanding (Xu et al., 2020). Cai et al. (2020b) adopt the idea in open-domain dialogue generation, where curriculum plausibility is determined by response properties, including coherence and diversity. Our work differs in that we introduce newly distilled data as a curriculum.

# 3 Our Data Manipulation Method

We first formally define a persona-based training sample. It consists of $L$ persona description sentences $P = \{p_{1}, p_{2}, \dots, p_{L}\}$ , $M$ dialogue history utterances $H = \{h_{1}, h_{2}, \dots, h_{M}\}$ , and a gold response $R$ . The given training dataset is denoted as $\mathcal{D} = \{(P, H, R)\}$ . Note that $L$ and $M$ can differ across training samples. A dialogue model needs to generate a response $\hat{R}$ , which is coherent with the dialogue history $H$ and consistent with the persona information in $P$ .

Our proposed data manipulation method $\mathbf{D}^3$ is model-agnostic. For any dialogue model, we do not change the model itself, but only manipulate its training data. We develop three data manipulation operations, applied in sequence as shown in Figure 2; the first two augment the data and the last one eases training:

1. Data distillation. We construct simple persona-consistent data $\mathcal{D}^{dis} = \{(\widetilde{P},\widetilde{H},\widetilde{R})\}$ by removing redundant information in $P$ and $H$ ;
2. Data diversification. Due to the limited amount of distilled samples, we design various methods to increase the data variety and scale, and obtain the diversified data $\mathcal{D}^{div} = \{(\widetilde{p},\widetilde{h},\widetilde{r})\}$ ;
3. Data curriculum. We combine $\mathcal{D}^{dis}$ and $\mathcal{D}^{div}$ as the augmented dataset $\mathcal{D}^a$ .
A curriculum strategy is defined to train the model with the easier distilled samples in $\mathcal{D}^a$ first and then the original ones in $\mathcal{D}$ .

# 3.1 Data Distillation

Before introducing our distillation method, we discuss in detail the difficulty of training a model with the original training samples. The dependency of a response on the given persona fluctuates between different parts of the persona sentences. As shown in Figure 1, most responses correspond to only one persona sentence. The remaining persona information is mostly redundant, and may distract the model from attending to the useful persona information. Similarly, we notice that models tend to attend more to the last few utterances of $H$ than to earlier ones. We find that, in a Transformer encoder-decoder model, the attention weights of the last Transformer layer on the last utterance are $45\%$ higher than the average over the other utterances. See Appendix C.1 for the experiment and results. This observation is also consistent with previous studies on multi-turn context understanding (Khandelwal et al., 2018; Sankar et al., 2019).

A few previous works have demonstrated that attention-based models can be distracted by noisy attended information, and that accurate attention supervision can be very beneficial (Liu et al., 2016; Hsu et al., 2018). Inspired by them, we mimic a "hard" attention supervision between the response and the useful persona/dialogue history by directly removing redundant tokens from the attended sequences. Therefore, different from previous work that modifies the model to inject attention supervision, our method only manipulates data.

Persona distillation We aim to determine which persona sentence the current response is consistent with, and thus remove the remaining non-consistent ones. To do so, we associate each persona sentence $p_k$ with the target response $R$ , and determine the consistency between each $p_k$ and $R$ .
Following previous work (Welleck et al., 2019), we cast this as a natural language inference (NLI) problem. If $R$ entails $p_k$ , it is considered consistent with $p_k$ ; otherwise it is irrelevant to $p_k$ . A trained RoBERTa (Liu et al., 2019) model is used here as the NLI model, with an accuracy of $90.8\%$ on the DialogueNLI dev set provided by Welleck et al. (2019). Details are provided in Appendix A.1.

Dialogue history distillation We can adopt a trained attention-based model to determine useful context sentences. For simplicity, we can also keep only the most useful last utterance $h_{M}$ in a distilled sample (as suggested by our preliminary experiments discussed at the beginning of this section). In our experiments in §4, we find that using the last utterance is enough for our method to work well.

![](images/2852d27889d1fb90d6bd9ad2c6dc714df3700b2c8447fdb411129b0a6be71a3a.jpg)
Figure 2: The framework of our data manipulation method $\mathbf{D}^3$ . It obtains the augmented dataset $\mathcal{D}^a = \mathcal{D}^{dis} \cup \mathcal{D}^{div}$ from the original dataset $\mathcal{D}$ through data distillation and data diversification. A curriculum strategy is used to train a model by first learning on the easy augmented data $\mathcal{D}^a$ and then on the hard original training data $\mathcal{D}$ .

We can now construct a distilled sample $(\widetilde{P},\widetilde{H},\widetilde{R})$ . Here, $\widetilde{P}$ and $\widetilde{H}$ both contain only one sentence: $\widetilde{P}$ is any $p_k$ entailed by $R$ , $\widetilde{H}$ is the last utterance in the dialogue history, and $\widetilde{R} = R$ . Such samples form the distilled dataset $\mathcal{D}^{dis}$ . Note that an original sample in $\mathcal{D}$ may result in none, one, or multiple distilled samples, as $R$ may entail none, one, or multiple persona sentences.
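The persona-distillation rule above can be sketched in a few lines; `entails` below is a stub standing in for the trained RoBERTa NLI model (any callable returning True iff the response entails a persona sentence), and the toy sample and keyword-matching stub are purely illustrative:

```python
def distill(sample, entails):
    """Turn one original sample (P, H, R) into zero or more distilled samples.

    `entails(response, persona_sentence)` stands in for the trained NLI model:
    it should return True iff the response entails the persona sentence.
    Each distilled sample keeps one entailed persona sentence, the last
    history utterance, and the unchanged gold response.
    """
    P, H, R = sample
    return [(p_k, H[-1], R) for p_k in P if entails(R, p_k)]

# Toy usage with a keyword-matching stub in place of the NLI model:
sample = (
    ["i love dogs .", "i work as a chef ."],    # persona P
    ["hi !", "what do you do for a living ?"],  # history H
    "i cook at a restaurant downtown .",        # response R
)
stub_nli = lambda r, p: "chef" in p and "cook" in r
distilled = distill(sample, stub_nli)
# distilled keeps only the "chef" persona sentence, paired with the last utterance
```

Note how a sample whose response entails no persona sentence simply yields an empty list, matching the "none, one, or multiple" behavior described above.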
# 3.2 Data Diversification

Distilled samples should ease model training, as their responses are highly dependent on their $\widetilde{P}$ and $\widetilde{H}$ . However, samples in $\mathcal{D}^{dis}$ are limited in terms of both scale (around $40\%$ of the original data) and diversity (about 4.5k unique persona sentences). Hence, it is necessary to augment $\mathcal{D}^{dis}$ . Thanks to the reliable relationship between $\widetilde{P} / \widetilde{H}$ and $R$ , we can devise methods that diversify the distilled samples into more semantically varied ones. Our data diversification operation contains the following three parts along with quality filtering, as shown in Figure 2.

Persona editing We aim to obtain new persona sentences to improve the data scale and, more importantly, the persona diversity. Hence, we consider both token-level and phrase-level editing methods given a persona sentence $\widetilde{P}$ :

- Token-level editing: we randomly mask a predefined ratio of tokens in $\widetilde{P}$ , then use a pretrained BERT (Devlin et al., 2019) model to make predictions on the masked positions one by one.
- Phrase-level editing: we remove the last few tokens in $\widetilde{P}$ , with the removal length determined by a random ratio, and utilize a pretrained GPT2 (Radford et al., 2019) to rewrite the removed part.

Multiple edited persona sentences can be obtained from a given $\widetilde{P}$ . Here, we finetune the pretrained models using all persona sentences for a trade-off between semantic diversity and domain similarity. To ensure satisfactory fluency and novelty of an edited persona $\widetilde{p}$ , we rate it via a scoring function:

$$
f = \alpha \cdot \mathrm{PPL}(\widetilde{p}) + (1 - \alpha) \cdot \mathrm{BS}_{f}(\widetilde{p}, \widetilde{P}).
\tag{1}
$$

Here, PPL calculates the normalized perplexity via a GPT2 model to measure fluency, and the rescaled F1 value of BERTScore $(\mathrm{BS}_f)$ (Zhang et al., 2019) is employed to evaluate the semantic similarity between two sentences. Lower values of both functions are preferred, indicating higher fluency or novelty. $\alpha$ is a hyper-parameter. We rank all edited personas originating from the same $\widetilde{P}$ in ascending order of their scores in Eq. 1, and select the top $N_{p}$ ones.

Response aligning Since the semantic meaning of an edited persona sentence obtained above could change, the original response may no longer be consistent with it. Therefore, we need to obtain a new aligned response to maintain persona consistency. Two approaches are utilized to obtain an aligned response $\widetilde{r}$ given an edited persona sentence $\widetilde{p}$ and the corresponding distilled history utterance $\widetilde{H}$ :

- Token-level editing: We observe that some overlapping tokens can be found between $\widetilde{P}$ and $\widetilde{R}$ . If an overlapping token $w$ has been changed to a new token $w'$ in the edited persona $\widetilde{p}$ , we directly replace $w$ in $\widetilde{R}$ with $w'$ in the same positions, resulting in an aligned response $\widetilde{r}$ . An illustration can be found in Appendix A.2.
- Model predicting: If no overlapping token can be found, token-level editing is not applicable. We then employ a GPT2-based encoder-decoder model (Cao et al., 2020) finetuned on the distilled data $\mathcal{D}^{dis}$ to predict responses given $\widetilde{p}$ and a dialogue history utterance $\widetilde{H}$ .

![](images/0541357e419b1b867fe60568f124d99efc28a92b961273c4cd1ad21085b80a42.jpg)
Figure 3: Aligning responses for new personas via token-level editing or model generating. T/P: edit persona at the token/phrase level. ( $t_{1}$ and $t_2$ are overlapping tokens; $t_1^{\prime}$ and $t_2^\prime$ are the corresponding new edited and aligned tokens.)

Figure 3 demonstrates the two kinds of approaches.

Dialogue history augmentation To further scale up the distilled samples, we also manipulate the dialogue history $\widetilde{H}$ . Since the diversity scarcity issue is not severe in $\widetilde{H}$ , we use a popular sentence-level data augmentation method, back translation (BT) (Sennrich et al., 2016), to obtain variants of dialogue utterances, whose semantics can be considered identical. The distilled history utterance $\widetilde{H}$ is translated into an intermediate language, then back into the source language, using a pair of existing translation models. The original dialogue history and its $N_{h}$ variants compose the augmented dialogue history set $\{\widetilde{h}\}$ .

Combining the above three parts, we now obtain new samples $\{(\widetilde{p},\widetilde{h},\widetilde{r})\}$ . We evaluate them with respect to fluency, persona consistency and history coherence:

$$
s = \beta \cdot \mathrm{PPL}(\widetilde{r}) + \gamma \cdot \mathrm{NLI}(\widetilde{p}, \widetilde{r}) + (1 - \beta - \gamma) \cdot \mathrm{NLI}_{c}(\widetilde{h}, \widetilde{r}), \tag{2}
$$

where NLI measures the entailment between a persona sentence and the response using the same NLI model as in §3.1, and $\mathrm{NLI}_c$ evaluates the entailment between a dialogue history utterance and the response using another NLI model (Dziri et al., 2019) (details in Appendix A.2). $\beta$ and $\gamma$ are hyperparameters. We filter out samples whose score is below a threshold $T$ , and the remaining samples constitute the diversified dataset $\mathcal{D}^{div}$ . The whole augmented training dataset is the union of $\mathcal{D}^{dis}$ and $\mathcal{D}^{div}$ . The quality of augmented samples is discussed in Appendix B.
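The Eq. 2 filter is straightforward to sketch once the three scorers exist; below, `ppl`, `nli`, and `nli_c` are toy stubs standing in for the GPT2 perplexity scorer and the two NLI models (every stub here returns a score where higher is better, matching the keep-above-threshold rule; all names and example values are illustrative assumptions):

```python
def score(candidate, ppl, nli, nli_c, beta, gamma):
    """Eq. 2: weighted sum of fluency, persona consistency, and history
    coherence. `ppl`, `nli`, `nli_c` are placeholders for the real models."""
    p, h, r = candidate  # edited persona, history utterance, aligned response
    return (beta * ppl(r)
            + gamma * nli(p, r)
            + (1 - beta - gamma) * nli_c(h, r))

def filter_candidates(candidates, T, **scorers):
    """Keep samples whose score clears the threshold T; the rest are dropped."""
    return [c for c in candidates if score(c, **scorers) >= T]

# Toy stubs in place of the trained scorers (illustrative only):
ppl = lambda r: 1.0 if "cook" in r else 0.0
nli = lambda p, r: 1.0 if ("chef" in p and "cook" in r) else 0.0
nli_c = lambda h, r: 1.0

candidates = [
    ("i am a chef .", "what do you do ?", "i cook for a living ."),
    ("i am a chef .", "what do you do ?", "random noise"),
]
kept = filter_candidates(candidates, 0.5,
                         ppl=ppl, nli=nli, nli_c=nli_c, beta=0.4, gamma=0.3)
# kept contains only the first, consistent candidate
```

With `beta=0.4` and `gamma=0.3`, the consistent candidate scores 1.0 while the noisy one scores only 0.3 and is filtered out.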
| | $\mathcal{D}$ | $\mathcal{D}^{dis}$ | $\mathcal{D}^{div}$ | $\mathcal{D}^{a}$ | $\mathcal{D} + \mathcal{D}^{a}$ |
| --- | --- | --- | --- | --- | --- |
| #sample | 65,719 | 26,693 | 26,700 | 53,393 | 119,112 |
| #persona | 4,710 | 4,522 | 9,788 | 14,310 | 14,498 |
| #token | 20,467 | 13,420 | 12,794 | 17,835 | 23,269 |
Table 1: Statistics of samples obtained in each stage.

# 3.3 Data Curriculum

During inference, the model should be capable of handling test data with multiple persona sentences and dialogue history utterances, as in the original data. Therefore, training a model on $\mathcal{D}^a$ alone is not appropriate; we should use both $\mathcal{D}^a$ and $\mathcal{D}$ . Unlike previous studies that treat the original and augmented data equally and mix them directly, we design a curriculum strategy. Considering the different training difficulty of data in $\mathcal{D}^a$ and $\mathcal{D}$ , we treat $\mathcal{D}^a$ as an easy curriculum and the original dataset $\mathcal{D}$ as a hard curriculum. The model is trained on these curricula successively until convergence.

# 4 Experiments

To validate the effectiveness of our proposed model-agnostic data manipulation method, we first experiment with two strong persona-based dialogue generation models (Transformer encoder-decoder and GPT2) on the benchmark PersonaChat (Zhang et al., 2018a) dataset. Next, we conduct a series of analyses to examine the usefulness of the different data manipulation operations in our method.

# 4.1 Experimental Setup

Dataset The PersonaChat (Zhang et al., 2018a) data is widely used in this field (Song et al., 2019, 2020; Wolf et al., 2019; Golovanov et al.). Each sample has a dialogue history $H$ with no more than 15 utterances ( $M \leq 15$ ) and a persona $P$ with between 4 and 6 sentences ( $4 \leq L \leq 6$ ). Numbers of samples, unique persona sentences, and tokens in each stage of our method are listed in Table 1.

Base models Two dialogue model architectures are considered:

- TRANSFORMER (Vaswani et al., 2017): an encoder-decoder architecture using Transformer as the backbone, with a pointer generator (See et al., 2017) integrated;
- GPT2: one of the most powerful pretrained models on this task (Wolf et al., 2019; Golovanov et al.; Cao et al., 2020).
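Operationally, the two-stage curriculum of §3.3 amounts to ordering the training data; a minimal sketch, where `train_epoch` is a placeholder for one training pass of whichever base model is used (the epoch counts are illustrative; the paper trains each stage until convergence):

```python
def curriculum_train(train_epoch, easy_data, hard_data,
                     easy_epochs=1, hard_epochs=1):
    """Train first on the easy augmented data D^a = D^dis + D^div,
    then on the harder original data D."""
    for _ in range(easy_epochs):
        train_epoch(easy_data)  # easy curriculum
    for _ in range(hard_epochs):
        train_epoch(hard_data)  # hard curriculum

# Toy usage: record the order in which the two curricula are visited.
order = []
curriculum_train(order.append, easy_data=["distilled sample"],
                 hard_data=["original sample"])
# order == [["distilled sample"], ["original sample"]]
```

The design choice here is simply data ordering: no change to the model or the loss, which is what keeps the method model-agnostic.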
| Model | PPL | BLEU | NIST-4 | $\mathrm{BS}_f$ | Ent-1 | Ent-2 | Ent-3 | Dist-1 | Dist-2 | Dist-3 | C | Flu. | Coh. | Pcon. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Human | - | - | - | - | 5.680 | 8.913 | 10.27 | 5.259 | 34.90 | 66.37 | 0.472 | 2.625 | 2.451 | 0.531 |
| TRANS | 38.28 | 3.140 | 1.148 | 0.1486 | 4.046 | 5.484 | 6.262 | 1.609 | 6.298 | 11.71 | 0.235 | 2.303 | 2.038 | 0.304 |
| TRANS-BT | 37.92 | 3.315 | 1.082 | 0.1527 | 4.274 | 5.905 | 6.752 | 1.760 | 7.108 | 13.39 | 0.289 | 2.337 | 2.142 | 0.350 |
| TRANS-CVAE | 37.61 | 3.312 | 1.191 | 0.1533 | 3.974 | 5.451 | 6.267 | 1.459 | 5.795 | 11.16 | 0.260 | 2.333 | 2.111 | 0.335 |
| TRANS-FILTER | 38.99 | 2.946 | 1.101 | 0.1563 | 4.283 | 6.033 | 7.088 | 1.796 | 7.696 | 14.06 | 0.446 | 2.318 | 2.088 | 0.492 |
| TRANS-D$^3$ | 37.30 | 3.358 | 1.206 | 0.1574 | 4.223 | 6.165 | 7.298 | 1.826 | 7.923 | 14.42 | 0.485 | 2.397 | 2.172 | 0.513 |
| GPT2 | 17.63 | 3.761 | 1.278 | 0.1693 | 4.485 | 6.187 | 7.029 | 2.011 | 8.260 | 15.03 | 0.518 | 2.508 | 2.243 | 0.508 |
| GPT2-BT | 16.96 | 3.943 | 1.348 | 0.1663 | 4.547 | 6.248 | 7.089 | 1.947 | 8.113 | 14.94 | 0.509 | 2.488 | 2.259 | 0.454 |
| GPT2-CVAE | 17.16 | 3.339 | 1.360 | 0.1592 | 4.245 | 5.691 | 6.490 | 1.748 | 6.799 | 12.19 | 0.484 | 2.358 | 2.150 | 0.426 |
| GPT2-FILTER | 16.90 | 3.734 | 1.337 | 0.1788 | 4.570 | 6.352 | 7.263 | 2.148 | 9.031 | 16.52 | 0.571 | 2.527 | 2.233 | 0.537 |
| GPT2-D$^3$ | 15.69 | 4.184 | 1.429 | 0.1835 | 4.614 | 6.426 | 7.321 | 2.267 | 9.803 | 18.20 | 0.557 | 2.532 | 2.255 | 0.548 |
Table 2: Results of all compared data manipulation methods on two base models. BLEU and Dist-n are in %. Best results are in bold, and second best are underlined. Shaded numbers indicate our $\mathrm{D}^3$ is significantly better than this method on human evaluation, C-score and $\mathrm{BS}_f$ , according to our significance T-test with $p < 0.05$ .

TRANSFORMER is trained from scratch, and GPT2 is finetuned. For both models, we construct training data by concatenating persona and dialogue history as a single input sequence, in which special symbols and token type embeddings are involved to distinguish between them. The negative log-likelihood loss is used to train models using the Adam optimizer (Kingma and Ba, 2015).

Compared methods We pack the two base models with our method $\mathbf{D}^3$ and other data manipulation approaches for comparison:

- BACK TRANSLATION (BT) (Sennrich et al., 2016): we perform BT on all sentences in a training sample, including the persona sentences and dialogue utterances, and train the model with the augmented and original data jointly;
- CVAE (Li et al., 2019): a CVAE-based generation model is trained on the original data and then used to generate new responses via sampling with different latent codes. Since it can only handle pairwise data, we concatenate all input sentences as a single input sequence in this method;
- ENTROPY FILTER (FILTER) (Csaky et al., 2019): it removes generic responses according to the entropy, which is calculated using the dialogue history and the response without using the persona.

The detailed configurations of each method are given in Appendix B.

Automatic metrics We adopt multiple widely used metrics to measure the response quality, including Perplexity (PPL), BLEU (Papineni et al., 2002), NIST-4 (Doddington, 2002) and BERTScore (Zhang et al., 2019). We use the same $\mathbf{BS}_f$ as in Eq. 1 for BERTScore.
To evaluate the response diversity, we use Distinct-n (Li et al., 2016a) (Dist, $n = 1,2,3$ ), the ratio of unique n-grams in the corpus, and Entropy-n (Zhang et al., 2018b) (Ent, $n = 1,2,3$ ), the entropy of the n-gram distribution in a sentence. Moreover, the C-score (Madotto et al., 2019) (C) is also adopted: we follow the default setting and use the output of an NLI model trained on the DialogueNLI dataset (Welleck et al., 2019) to indicate the consistency between a response and persona sentences.

Human evaluation We randomly selected 200 samples from the test set for human evaluation. Five professional annotators from a third-party company were asked to rate the responses on three aspects: 1) Fluency (Flu.); 2) Coherence (Coh.) with the dialogue history; 3) Persona consistency (Pcon.). The scores for the first two aspects have three scales, where $1/2/3$ indicates unacceptable/moderate/satisfactory, respectively. The last one is binary, where 1 means the response is consistent with at least one persona sentence in the sample and 0 otherwise. The agreement rates among raters are $97.5\%$ , $89.5\%$ , and $100\%$ @3 (at least 3 of them reach an agreement) on these aspects, indicating the validity of the scores. The instructions for human evaluation are given in Appendix B.

# 4.2 Results

Table 2 reports the results of the two base models trained with the various compared data manipulation methods. A T-test is conducted between our $\mathrm{D}^3$ and each compared method on each base model for metrics including $\mathrm{BS}_f$ , C-score and the three human evaluation metrics. Other automatic metrics show similar results or are not applicable (e.g., Distinct-n). Details of the significance tests are given in Appendix C.2.
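Both diversity metrics are simple n-gram statistics; a sketch under the usual definitions (Distinct-n as a unique-over-total ratio, Entropy-n as the entropy of the empirical n-gram distribution; the paper's exact normalization and corpus-level aggregation may differ):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token sequence, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def distinct_n(tokens, n):
    """Distinct-n: ratio of unique n-grams to total n-grams."""
    grams = ngrams(tokens, n)
    return len(set(grams)) / len(grams) if grams else 0.0

def entropy_n(tokens, n):
    """Entropy-n: entropy of the empirical n-gram distribution."""
    counts = Counter(ngrams(tokens, n))
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# e.g. distinct_n(["a", "b", "a", "b"], 1) == 0.5  (2 unique unigrams / 4 total)
#      entropy_n(["a", "b", "a", "b"], 1) == math.log(2)
```

Higher values of either metric indicate more diverse generations, which is why the D$^3$ rows in Tables 2 and 3 can be compared directly on these columns.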
| Model | PPL | BLEU | NIST-4 | $\mathrm{BS}_f$ | Ent-1 | Ent-2 | Ent-3 | Dist-1 | Dist-2 | Dist-3 | C |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TRANS | 38.28 | 3.140 | 1.148 | 0.1486 | 4.046 | 5.484 | 6.262 | 1.609 | 6.298 | 11.71 | 0.235 |
| TRANS-D$^3$ | 37.30 | 3.358 | 1.206 | 0.1574 | 4.223 | 6.165 | 7.298 | 1.826 | 7.923 | 14.42 | 0.485 |
| TRANS-D$^{3*}$ | 37.67 | 3.259 | 1.185 | 0.1554 | 4.197 | 6.095 | 7.232 | 1.794 | 7.835 | 14.27 | 0.439 |
| w/o diversification | 37.90 | 3.159 | 1.105 | 0.1511 | 4.051 | 5.664 | 6.533 | 1.570 | 6.992 | 13.42 | 0.454 |
| w/o distillation | 38.25 | 3.105 | 1.126 | 0.1499 | 4.026 | 5.459 | 6.290 | 1.495 | 6.131 | 11.76 | 0.352 |
| only distillation | 104.8 | 1.509 | 0.939 | 0.1059 | 4.002 | 5.398 | 6.265 | 1.279 | 4.630 | 8.505 | 0.637 |
| w/o persona editing | 37.96 | 3.284 | 1.136 | 0.1535 | 4.171 | 5.686 | 6.517 | 1.608 | 6.599 | 12.62 | 0.422 |
| w/o history augmentation | 38.10 | 3.291 | 1.222 | 0.1550 | 4.150 | 5.759 | 6.560 | 1.608 | 6.493 | 12.52 | 0.461 |
| w/o response filter | 38.21 | 3.106 | 1.087 | 0.1503 | 4.207 | 5.841 | 7.080 | 1.592 | 6.991 | 12.98 | 0.399 |
Table 3: Automatic evaluation results with variants of data distillation (middle) and diversification (bottom), compared with our full method (top) on TRANSFORMER. $\mathbf{D}^{3*}$ means using an NLI model trained under a few-shot setting (200 labelled samples) in the data distillation.

On TRANSFORMER, all methods achieve improvements on most metrics compared with training on the original dataset. Our method yields the best performance except for Ent-1. On GPT2, many methods fail to improve the various metrics consistently. For example, on persona consistency (Pcon.), only ENTROPY FILTER and our method obtain higher scores than training on the original dataset. The reason is that the data scarcity issue is less severe with a pretrained model, and it is more important to address the data diversity issue. In our method, the augmented distilled samples are encouraged to have different semantics from the original ones, improving the data diversity, and thus continue to yield improvements on the strong pretrained GPT2.

# 4.3 More Analysis

We further analyze the contributions made by the different data manipulation operations in our method by answering the following three questions:

1. Is there a need to construct simple data $\mathcal{D}^{dis}$ as in data distillation?
2. Can data diversification effectively obtain diverse distilled data?
3. Does the curriculum strategy better exploit the augmented data and help model training?

In the following, we use the results on TRANSFORMER for discussion. Refer to Appendix C.3 for extensive results on the GPT2 model.

Analysis of data distillation To examine the effectiveness of data distillation, we need to neutralize the influence of data diversification, as it is only applicable to distilled data.
The following variants of our $\mathbf{D}^3$ are considered: 1) w/o diversification: only the distilled data $\mathcal{D}^{dis}$ are used in the easy curriculum; 2) w/o distillation: based on 1), we recover the samples in $\mathcal{D}^{dis}$ to their original format, meaning all of their persona sentences and history utterances are included; 3) only distillation: only $\mathcal{D}^{dis}$ is used in training, without the original data in $\mathcal{D}$ . + +Results of these variants are shown in the middle of Table 3. Removing data diversification decreases the performance in all aspects, as the model has less training data. If we further remove data distillation and use the same amount of data in its original format, the model performs even worse, especially on the C-score. This validates the effectiveness of data distillation in our method. However, relying entirely on distilled data is not advisable: when only distilled data are used in training, the C-score improves, yet all other aspects degrade significantly. The reason is that the relationship between the persona/dialogue history and the response changes from the original data to the distilled data. Thus a model trained on distilled data should serve as a warm start for learning the original data, not as a replacement for it. + +We also test the robustness of our data distillation method by using an NLI model trained in a few-shot setting (200 samples). Results are included in Table 3 as $\mathbf{D}^{3*}$ . It is slightly worse than our method with sufficient NLI training data, but still superior to most compared methods. Note that the response diversity metrics remain nearly unchanged, which means that our data diversification methods are still effective when starting from noisy distilled samples. It also shows that our method can be useful when only limited in-domain labeled NLI data are available for data distillation.
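To make the distillation step concrete, the sketch below keeps only samples whose response is entailed (probability at least τ) by some persona sentence, retaining only the entailed persona sentence(s) and the last history utterance. This is a minimal reading of the procedure; `entail_prob` and `toy_entail` are hypothetical stand-ins for the finetuned RoBERTa NLI scorer described in Appendix A.1.

```python
# Sketch of NLI-based data distillation. The `entail_prob` scorer is a
# hypothetical stand-in for the finetuned RoBERTa NLI model.
TAU = 0.99  # entailment threshold used in the distillation step

def distill(samples, entail_prob, tau=TAU):
    """Keep (personas, history, response) triples whose response is entailed
    by at least one persona sentence; drop the rest. Distilled samples keep
    only the entailed persona(s) and the last history utterance."""
    distilled = []
    for personas, history, response in samples:
        entailed = [p for p in personas if entail_prob(p, response) >= tau]
        if entailed:
            distilled.append((entailed, history[-1:], response))
    return distilled

# Toy scorer: pretend a persona/response pair sharing a token is entailed.
def toy_entail(persona, response):
    return 1.0 if set(persona.split()) & set(response.split()) else 0.0

samples = [
    (["i love dogs", "i am a chef"], ["hi", "any pets?"], "yes two dogs"),
    (["i live in texas"], ["hello"], "nice weather today"),
]
print(distill(samples, toy_entail))
```

With the toy scorer, only the first sample survives, reduced to its single entailed persona and last utterance.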
+ +Analysis of data diversification Table 1 shows that the diversified data contain many new persona sentences as well as new tokens. In addition, we compute + +
| | PPL | BLEU | NIST-4 | $\mathrm{BS}_f$ | Ent-1 | Ent-2 | Ent-3 | Dis-1 | Dis-2 | Dis-3 | C |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TRANS-D$^3$ | 37.30 | 3.358 | 1.206 | 0.1574 | 4.223 | 6.165 | 7.298 | 1.826 | 7.923 | 14.42 | 0.485 |
| Original | 38.28 | 3.140 | 1.148 | 0.1486 | 4.046 | 5.484 | 6.262 | 1.609 | 6.298 | 11.71 | 0.235 |
| Only augment | 126.3 | 1.603 | 0.956 | 0.0852 | 4.315 | 6.309 | 7.426 | 1.747 | 7.530 | 12.66 | 0.942 |
| Shuffle | 37.66 | 3.203 | 1.175 | 0.1521 | 4.128 | 6.096 | 6.979 | 1.659 | 6.889 | 13.79 | 0.404 |
| Reverse | 48.17 | 2.137 | 1.019 | 0.1508 | 3.947 | 5.291 | 6.039 | 1.368 | 5.503 | 9.211 | 0.912 |
+ +Table 4: Performance comparison between different curriculum variants, using TRANSFORMER as the base model. + +
| | Novelty-1 | Novelty-2 | Novelty-3 | Novelty-4 |
| --- | --- | --- | --- | --- |
| sample | 30.89 | 47.07 | 53.81 | 59.64 |
| persona | 40.26 | 62.17 | 70.47 | 77.81 |
+ +Table 5: Novelty metrics of the diversified data compared to the distilled data at the sample and persona level. + +![](images/9ded5f02dc2e9b7185c2b76db29be662a1c22589fbdc7036fd9735d85e7c5412.jpg) +Figure 4: The compositions of diversified data. T/P: token/phrase-level editing to get edited personas, O/B: original/BT-augmented dialogue history, E/G: token editing/generating by a model to get aligned responses. + +the Novelty metrics (Wang and Wan, 2018; Zhang et al., 2020) of the diversified samples in $\mathcal{D}^{div}$ . This metric takes the original distilled samples in $\mathcal{D}^{dis}$ as references and uses the Jaccard similarity function to measure the proportion of n-grams ( $n = 1, 2, 3, 4$ ) that appear in $\mathcal{D}^{div}$ but not in $\mathcal{D}^{dis}$ ; a higher value means more "novel" content. Note that we particularly prefer more novel personas, while not encouraging more novel dialogue histories. Thus, the Novelty scores on the overall samples, which include dialogue histories, personas, and responses, are lower than those on the personas alone. + +To further examine how each part of data diversification works, we conduct the following ablation studies: 1) w/o persona editing: no persona sentence is edited; 2) w/o history augmentation: only the original dialogue history is used; 3) w/o response filtering: all constructed samples are used directly without applying Eq. 2. Results at the bottom of Table 3 show that all of these designs contribute to the performance of the whole method. Among them, response filtering is the most important, as it ensures the quality of the augmented samples. + +We also investigate the proportions of diversified samples coming from the various source combinations. As shown in Figure 4, more than $80\%$ of the diversified samples have their responses obtained via model prediction, as token editing sets the strict condition that overlapped tokens must exist.
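The n-gram Novelty computation described above can be sketched as follows. This is a minimal set-based reading of the metric (the proportion of n-grams in the diversified texts that never occur in the distilled references); the exact normalization in Wang and Wan (2018) may differ, and `novelty` is a hypothetical helper name.

```python
def ngrams(tokens, n):
    """Set of n-gram tuples of a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novelty(div_texts, ref_texts, n):
    """Proportion of n-grams in the diversified texts that do not
    appear anywhere in the reference (distilled) texts."""
    div = set().union(*(ngrams(t.split(), n) for t in div_texts)) if div_texts else set()
    ref = set().union(*(ngrams(t.split(), n) for t in ref_texts)) if ref_texts else set()
    if not div:
        return 0.0
    return len(div - ref) / len(div)

ref = ["i like to play soccer"]
div = ["i love to play tennis"]
print(novelty(div, ref, 1), novelty(div, ref, 2))
```

On this toy pair, 2 of the 5 unigrams and 3 of the 4 bigrams in the "diversified" sentence are new relative to the reference.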
Phrase-level editing also contributes many high-quality personas with satisfactory fluency and semantic novelty. + +Analysis of data curriculum We first compare against other data curriculum variants to show the usefulness of training with the designed data curriculum. The following variants are included: 1) Original: only the original dataset $\mathcal{D}$ (the hard curriculum in $\mathbf{D}^3$ ) is used, which is equal to the base model; 2) Only augment: only the augmented dataset $\mathcal{D}^a$ (the easy curriculum in $\mathbf{D}^3$ ) is used; 3) Shuffle: the original dataset $\mathcal{D}$ and the augmented dataset $\mathcal{D}^a$ are shuffled together to train the model; 4) Reverse: the curricula are used in reverse order, i.e., the hard curriculum first and then the easy one. + +Relevant results are shown in Table 4. Our curriculum is clearly the best when all aspects are considered together. Although Only augment and Reverse show high C-scores, their responses are much worse in n-gram accuracy, as they involve more persona information while focusing less on dialogue coherence during generation. Shuffle performs better than Original as it includes more augmented data than the original dataset, which may benefit training. However, such a mixing strategy is not as efficient as our data curriculum, as it neglects the learning difficulty of the different data sources. + +Next, we further quantify the effect of curriculum training on models using the attention from the response to the persona sentences. We define two metrics, the token-level/sentence-level consistent attention weights ( $a_{t}$ and $a_{s}$ ), to measure how the attention contributes to reflecting the proper personas. Recall that we concatenate the persona sentences and history utterances as a single model input. We record the token positions of the entailed persona sentences in the input sequence, which are determined by our NLI model, denoted as $\mathcal{S}$ .
Then for each index $s \in \mathcal{S}$ , if its corresponding token in the input also occurs in the response, we put this + +![](images/a5992c02e75e04417449af4eff7616d2935357bba856c8563669d9252731e7c5.jpg) +Figure 5: Average consistent attention weights in different decoder layers of TRANSFORMER trained with (i) the original dataset (Orig.), (ii) shuffled data in $\mathcal{D}$ and $\mathcal{D}^a$ (Shuffle), and (iii) our data curriculum. Uniform: uniform attention values on all positions. Top: token-level $a_t$ ; bottom: sentence-level $a_s$ . + +index pair into a set $\mathcal{T} = \{(s,l)\}$ , where $s$ and $l$ are the token positions in the input sequence and response sequence, respectively. Then we have two measurements for each sample: + +$$
a_{t} = \frac{1}{|\mathcal{T}|} \sum_{(s, l) \in \mathcal{T}} a_{ls}, \quad a_{s} = \frac{1}{Y} \sum_{i=1}^{Y} \sum_{j \in \mathcal{S}} a_{ij}, \tag{3}
$$ + +where $a_{ij} \in [0,1]$ is the normalized scalar attention weight at the $i$ -th decoding step on the $j$ -th input token, i.e., $\sum_{j} a_{ij} = 1$ , and $Y$ is the length of the generated response. A higher $a_{t}/a_{s}$ indicates that the model places more attention on the proper persona tokens; the former is fine-grained, reflecting whether the attention works properly at each decoding step, while the latter is coarse-grained over the whole generated response. + +Part of the results with selected TRANSFORMER layers for these two metrics, over all samples from the PersonaChat dev set, are shown in Figure 5 (refer to Appendix C.4 for the complete results). Our method shows the highest $a_{t}$ and $a_{s}$ on all given layers compared to the other two curriculum variants. This superiority is more pronounced in the higher layers, which are more decisive for generating responses (Fan et al., 2019), while the attention weights in the lower layers tend to be distributed uniformly, staying close to the uniform baseline values.
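Under the definitions of Eq. 3, both consistency scores can be computed directly from an attention matrix, as in this minimal numpy sketch (toy inputs; `consistent_attention` is a hypothetical helper name):

```python
import numpy as np

def consistent_attention(attn, T, S):
    """attn: (Y, L) matrix of normalized decoder attention weights, where
    attn[l, s] is the weight at decoding step l on input token s.
    T: set of (s, l) pairs for entailed persona tokens that reappear in the
    response; S: positions of entailed persona tokens in the input."""
    Y = attn.shape[0]
    a_t = sum(attn[l, s] for s, l in T) / len(T)            # Eq. 3, token level
    a_s = sum(attn[i, j] for i in range(Y) for j in S) / Y  # Eq. 3, sentence level
    return a_t, a_s

# Toy example: 2 decoding steps over 3 input tokens; each row sums to 1.
attn = np.array([[0.5, 0.3, 0.2],
                 [0.1, 0.6, 0.3]])
print(consistent_attention(attn, T={(1, 0), (2, 1)}, S={1, 2}))
```

On the toy matrix this averages two persona-token weights for $a_t$ and all attention mass on positions 1–2 per step for $a_s$.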
+ +Case study Some response samples generated with TRANSFORMER as the base model are shown in Figure 6. Here $\mathbf{H}$ indicates the dialogue history; a persona sentence shaded in a darker color has received a higher attention weight from the model. Our method $\mathbf{D}^3$ gives a model the capability to pay more attention to the + +![](images/36a54f79249286b972df952c5f47fd564f9c5b46684707474cce0f55ff01e529.jpg) +Figure 6: Sample responses and visualized model attention weights on persona texts ( $a_{s}$ ); deeper colors indicate higher attention weights. T: TRANSFORMER, $\mathbf{D}^3$ : TRANSFORMER-D$^3$ . + +proper persona texts when generating responses. More cases can be found in Appendix C.6. + +# 5 Conclusion + +Our work targets the challenging persona-based dialogue generation task. Unlike previous work that designs a new dialogue model to improve the generation performance, we analyze the data issues affecting current models. On one hand, the data scale and diversity are expensive to increase through data collection. On the other hand, the current data are difficult to learn from. Based on this understanding, we propose a model-agnostic data manipulation method for this task. It first distills the original data and then augments both the amount and diversity of the distilled data. Curriculum training is then applied to utilize both the augmented and original data. Experimental results show that our method effectively improves the performance of two strong dialogue models, i.e., the Transformer encoder-decoder and GPT2. + +# Acknowledgements + +We would like to thank Piji Li and Lemao Liu for their helpful discussion and feedback. We also thank the anonymous reviewers for their constructive comments. + +# References + +Reina Akama, Sho Yokoi, Jun Suzuki, and Kentaro Inui. 2020. Filtering noisy dialogue corpora by connectivity and content relatedness. In Proceedings of EMNLP 2020, pages 941-958.
+ +Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of ICML 2009, pages 41-48. +Hengyi Cai, Hongshen Chen, Yonghao Song, Cheng Zhang, Xiaofang Zhao, and Dawei Yin. 2020a. Data manipulation: Towards effective instance learning for neural dialogue generation via learning to augment and reweight. In Proceedings of ACL 2020, pages 6334-6343. +Hengyi Cai, Hongshen Chen, Cheng Zhang, Yonghao Song, Xiaofang Zhao, Yangxi Li, Dongsheng Duan, and Dawei Yin. 2020b. Learning from easy to complex: Adaptive multi-curriculum learning for neural dialogue generation. In Proceedings of AAAI 2020, pages 7472-7479. +Yu Cao, Wei Bi, Meng Fang, and Dacheng Tao. 2020. Pretrained language models for dialogue generation with multiple input sources. In Proceedings of EMNLP-Findings 2020, pages 909-917. +Richard Csaky, Patrik Purgai, and Gábor Recski. 2019. Improving neural conversational models with entropy-based data filtering. In Proceedings of ACL 2019, pages 5650-5669. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT 2019, pages 4171-4186. +Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, et al. 2020. The second conversational intelligence challenge (ConvAI2). In *The NeurIPS'18 Competition*, pages 187-208. +George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram cooccurrence statistics. In Proceedings of the second international conference on Human Language Technology Research, pages 138-145. +Nouha Dziri, Ehsan Kamalloo, Kory W Mathewson, and Osmar Zaiane. 2019. Evaluating coherence in dialogue systems using entailment. In Proceedings of NAACL-HLT 2019, pages 3806-3812. +Angela Fan, Edouard Grave, and Armand Joulin. 2019. Reducing transformer depth on demand with structured dropout.
arXiv preprint arXiv:1909.11556. +Sergey Golovanov, Rauf Kurbanov, Sergey Nikolenko, Kyryl Truskovskyi, Alexander Tselousov, and Thomas Wolf. 2019. Large-scale transfer learning for natural language generation. In Proceedings of ACL 2019. +Hongyu Guo, Yongyi Mao, and Richong Zhang. 2019. Augmenting data with mixup for sentence classification: An empirical study. arXiv preprint arXiv:1905.08941. + +Yutai Hou, Yijia Liu, Wanxiang Che, and Ting Liu. 2018. Sequence-to-sequence data augmentation for dialogue language understanding. In Proceedings of COLING 2018, pages 1234-1245. +Wan-Ting Hsu, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, and Min Sun. 2018. A unified model for extractive and abstractive summarization using inconsistency loss. In Proceedings of ACL 2018, pages 132-141. +Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. 2018. Sharp nearby, fuzzy far away: How neural language models use context. In Proceedings of ACL 2018, pages 284-294. +Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. *ICLR* 2015. +Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of NAACL-HLT 2016, pages 110-119. +Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and William B Dolan. 2016b. A persona-based neural conversation model. In Proceedings of ACL 2016, pages 994-1003. +Juntao Li, Lisong Qiu, Bo Tang, Dongmin Chen, Dongyan Zhao, and Rui Yan. 2019. Insufficient data can also rock! Learning to converse using smaller data with augmentation. In Proceedings of AAAI 2019, pages 6698-6705. +Zhaojiang Lin, Andrea Madotto, Yejin Bang, and Pascale Fung. 2021. The adapter-bot: All-in-one controllable conversational model. In Proceedings of AAAI 2021, pages 16081-16083. +Pierre Lison and Jörg Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles.
In Proceedings of LREC 2016, pages 923-929. +Lemao Liu, Masao Utiyama, Andrew Finch, and Eichiro Sumita. 2016. Neural machine translation with supervised attention. In Proceedings of COLING 2016, pages 3093-3102. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692. +Andrea Madotto, Zhaojiang Lin, Chien-Sheng Wu, and Pascale Fung. 2019. Personalizing dialogue agents via meta-learning. In Proceedings of ACL 2019, pages 5454-5459. +Junghyun Min, R Thomas McCoy, Dipanjan Das, Emily Pitler, and Tal Linzen. 2020. Syntactic data augmentation increases robustness to inference heuristics. In Proceedings of ACL 2020, pages 2339-2352. + +Tong Niu and Mohit Bansal. 2019. Automatically learning data augmentation policies for dialogue tasks. In Proceedings of EMNLP-IJCNLP 2019, pages 1317-1323. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of ACL 2002, pages 311-318. +Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP 2014, pages 1532-1543. +Emmanouil Antonios Platanios, Otilia Stretcu, Graham Neubig, Barnabas Poczos, and Tom Mitchell. 2019. Competence-based curriculum learning for neural machine translation. In Proceedings of NAACL-HLT 2019, pages 1162-1172. +Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog. +Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M Smith, et al. 2021.
Recipes for building an open-domain chatbot. In Proceedings of EACL 2021. +Chinnadhurai Sankar, Sandeep Subramanian, Christopher Pal, Sarath Chandar, and Yoshua Bengio. 2019. Do neural dialog systems use the conversation history effectively? An empirical study. In Proceedings of ACL 2019, pages 32-37. +Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of ACL 2017, pages 1073-1083. +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of ACL 2016, pages 86-96. +Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of ACL-IJCNLP 2015, pages 1577-1586. +Haoyu Song, Yan Wang, Kaiyan Zhang, Wei-Nan Zhang, and Ting Liu. 2021. BoB: BERT over BERT for training persona-based dialogue models from limited personalized data. In Proceedings of ACL-IJCNLP 2021, pages 167-177. +Haoyu Song, Yan Wang, Weinan Zhang, Xiaojiang Liu, and Ting Liu. 2020. Generate, delete and rewrite: A three-stage framework for improving persona consistency of dialogue generation. In Proceedings of ACL 2020, pages 5821-5831. + +Haoyu Song, Wei-Nan Zhang, Yiming Cui, Dong Wang, and Ting Liu. 2019. Exploiting persona information for diverse generation of conversational responses. In Proceedings of IJCAI 2019, pages 5190-5196. +Yi Tay, Shuohang Wang, Anh Tuan Luu, Jie Fu, Minh C Phan, Xingdi Yuan, Jinfeng Rao, Siu Cheung Hui, and Aston Zhang. 2019. Simple and effective curriculum pointer-generator networks for reading comprehension over long narratives. In Proceedings of ACL 2019, pages 4922-4931. +Zhiliang Tian, Wei Bi, Xiaopeng Li, and Nevin L Zhang. 2019. Learning to abstract for memory-augmented conversational response generation. In Proceedings of ACL 2019, pages 3816-3825. +Zhiliang Tian, Wei Bi, Zihan Zhang, Dongkyu Lee, Yiping Song, and Nevin L Zhang. 2021.
Learning from my friends: Few-shot personalized conversation systems via social networks. In Proceedings of AAAI 2021, pages 13907-13915. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS 2017, pages 5998-6008. +Ke Wang and Xiaojun Wan. 2018. SentiGAN: Generating sentimental texts via mixture adversarial networks. In Proceedings of IJCAI 2018, pages 4446-4452. +Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2019. Dialogue natural language inference. In Proceedings of ACL 2019, pages 3731-3741. +Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. TransferTransfo: A transfer learning approach for neural network based conversational agents. arXiv preprint arXiv:1901.08149. +Benfeng Xu, Licheng Zhang, Zhendong Mao, Quan Wang, Hongtao Xie, and Yongdong Zhang. 2020. Curriculum learning for natural language understanding. In Proceedings of ACL 2020, pages 6095-6104. +Semih Yavuz, Abhinav Rastogi, Guan-Lin Chao, and Dilek Hakkani-Tur. 2019. DeepCopy: Grounded response generation with hierarchical pointer networks. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, pages 122-132. +Rongsheng Zhang, Yinhe Zheng, Jianzhi Shao, Xiaoxi Mao, Yadong Xi, and Minlie Huang. 2020. Dialogue distillation: Open-domain dialogue augmentation using unpaired data. In Proceedings of EMNLP 2020, pages 3449-3460. + +Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018a. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of ACL 2018, pages 2204-2213. + +Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. In ICLR 2019. + +Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018b.
Generating informative and diverse conversational responses via adversarial information maximization. In NIPS 2018, pages 1810-1820. + +# A Implementation Details of $\mathbf{D}^3$ + +# A.1 Details of Distillation + +To obtain the NLI model that determines persona consistency, the RoBERTa-Large-MNLI $^2$ model is utilized. To make the model better fit the domain of PersonaChat, we finetune it on the DialogueNLI dataset (Welleck et al., 2019), which is built upon the original PersonaChat. We set the batch size to 32 and finetune the model for 5 epochs with a learning rate of 1e-5, obtaining a model $\mathrm{RoBERTa}_{nli}$ that achieves $90.8\%$ accuracy on the dev set. This model is also responsible for calculating the entailment probability NLI in response filtering and the C-score in the experiments. A threshold $\tau = 0.99$ is used in this model for predicting the NLI labels. For the few-shot setting $\mathbf{D}^{3*}$ in §4.3, we randomly sample 200 samples from the training set to train the above NLI model with a learning rate of 2e-5, obtaining a model that achieves $79.3\%$ accuracy on the dev set. + +# A.2 Details of Diversification + +The BERT-base-uncased model$^3$ and GPT2-base$^4$ are used as the pretrained models in this stage. To ensure that the pretrained models make predictions that better fit the current data domain while retaining enough generation diversity, we perform the following finetuning: 1) finetune BERT and GPT2 on the persona sentences for 100 steps with a batch size of 32 and a learning rate of 1e-4, obtaining $\mathrm{BERT}_{per}$ and $\mathrm{GPT2}_{per}$ ; 2) finetune GPT2 on responses for 200 steps with a batch size of 32 and a learning rate of 1e-4, obtaining $\mathrm{GPT2}_{res}$ . + +Persona editing $\mathrm{BERT}_{per}$ and $\mathrm{GPT2}_{per}$ are used for token-level editing and phrase-level editing, respectively.
Each generates 10 unique new persona sentences from one original persona sentence via sampling from the multinomial output distribution. At the token level, we only mask the most informative tokens, identified by the POS tags given by $\mathrm{SpaCy}^5$ , since it is meaningless to mask words such as the prepositions "to" and "in". The target POS tags are listed in Table 6. We set the token-level mask ratio to 0.8. At the phrase level, the mask ratio is randomly sampled from [0.3, 0.6]. We also restrict that at least 2 tokens are
| POS tags | VERB, NOUN, PROPN, NUM, ADV, ADP, ADJ |
+ +Table 6: The target POS tags for token-level masking. + +masked and that the maximum length of the text pieces generated by $\mathrm{GPT2}_{per}$ does not exceed $30\%$ of the original length, to preserve sentence similarity. + +We use $\alpha = 0.4$ in Eq. 1, where the PPL is given by $\mathrm{GPT2}_{per}$ and normalized by a constant 50 (roughly the highest PPL value given by the GPT2 model on the current corpus). For BERTScore, the F1 value is used as $BS_{f}$ , while the other configurations follow the recommendation for English in Zhang et al. (2019)$^6$ . $N_{p}$ is set to 5. + +Response aligning For token-level editing, we also restrict the POS tags of overlapped tokens according to Table 6. For model predicting, we train the Multi-GPT2 model on the distilled data $\mathcal{D}^{dis}$ . Its performance on the dev set distilled from the original dev set of PersonaChat is shown in Table 7. This model shows high n-gram accuracy and persona consistency, and thus should be effective. + +Dialogue history augmentation We use the transformer_wmt_en_de Transformer model in Fairseq$^7$ as the translation model. It is trained on the WMT14 EN-FR dataset with 40.5M samples and default configurations. During inference, we use beam search with beam size 5 for both en-fr and fr-en translation, resulting in 25 new utterances for each original one. To obtain a large divergence, we select the $N_{p} = 1$ new utterance with the lowest BLEU score when taking the original utterance as the reference. + +Quality filtering We use $\mathrm{GPT2}_{res}$ , normalized by a constant 50, to get the PPL of responses. Here, we finetune another RoBERTa-Large-MNLI model on the InferConvAI dataset, which achieves $88.7\%$ accuracy on its dev set. The entailment probability given by this model is regarded as $\mathrm{NLI}_c$ . We set $\beta = 0.2$ , $\gamma = 0.6$ in Eq. 2.
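The POS-guided token-level masking used in persona editing can be sketched as below. This is a toy version that assumes the tokens arrive already POS-tagged (standing in for the SpaCy pipeline); `mask_informative` is a hypothetical helper name.

```python
import random

TARGET_POS = {"VERB", "NOUN", "PROPN", "NUM", "ADV", "ADP", "ADJ"}  # Table 6
MASK_RATIO = 0.8  # token-level mask ratio from Appendix A.2

def mask_informative(tagged_tokens, rng):
    """tagged_tokens: list of (token, POS) pairs, e.g. from a POS tagger.
    Masks MASK_RATIO of the informative tokens (by POS tag), at least 2."""
    candidates = [i for i, (_, pos) in enumerate(tagged_tokens)
                  if pos in TARGET_POS]
    k = min(len(candidates), max(2, round(MASK_RATIO * len(candidates))))
    chosen = set(rng.sample(candidates, k))
    return ["[MASK]" if i in chosen else tok
            for i, (tok, _) in enumerate(tagged_tokens)]

tagged = [("i", "PRON"), ("love", "VERB"), ("playing", "VERB"),
          ("soccer", "NOUN"), ("outside", "ADV")]
print(mask_informative(tagged, random.Random(0)))
```

Masked positions would then be filled by $\mathrm{BERT}_{per}$ via multinomial sampling to produce edited persona sentences.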
+ +We compare the fluency and coherence of responses using the GPT2-based PPL and the NLI model-based score on the training set, as shown in Table 8. In addition, we also evaluate the GPT2 PPLs of the edited and original persona sentences, which are 6.427 vs. 10.426. + +# B Details of Experiment + +Base model For TRANSFORMER, we use 300-dim GloVe (Pennington et al., 2014) embeddings trained on the 6B-token corpus. There are 6 layers in both the encoder and decoder, with a hidden size of 300 and 4 attention heads. During training, a cross-entropy loss is used along with label smoothing with ratio 0.1. For GPT2, we use the base pretrained model with 12 layers and 768-dim hidden states. It is trained using the average of a cross-entropy loss on generation and a classification loss between the true response and one randomly sampled negative response. Beam search with beam size 3 and a length penalty is used during inference for both models. + +The formats of the input and response for both models are shown in Figure 7. Here $\langle \mathrm{bos} \rangle$ , $\langle \mathrm{eos} \rangle$ , $\langle \mathrm{talker1} \rangle$ , and $\langle \mathrm{talker2} \rangle$ are special symbols that distinguish different parts of the input or response. + +Model training We use a learning rate of 2e-4 for TRANSFORMER and 6.25e-5 for GPT2, a common setting in prior work, and a training batch size of 256 for both models. Training is stopped once the loss on the dev set has not decreased for $N$ epochs, where $N$ is 15 for TRANSFORMER and 5 for GPT2. In curriculum learning, the learning rate is the same across curricula. The dev set of the easy curriculum is obtained by applying the same augmentation to the original dev set. The model with the minimum dev loss in each curriculum is retained as the best. The best model obtained on the easy curriculum is used as the initial model in the hard curriculum.
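The two-stage curriculum with patience-based early stopping described above can be sketched as follows. This is purely illustrative: the integer "model" stands in for real parameters, and the actual training loop is a full PyTorch optimization; `train_stage` and `curriculum` are hypothetical helper names.

```python
def train_stage(model, data, dev_loss_fn, patience):
    """Train until the dev loss stops improving for `patience` epochs;
    return the model state with the minimum dev loss (training is stubbed;
    `data` loading is elided in this sketch)."""
    best_loss, best_model, bad_epochs = float("inf"), model, 0
    while bad_epochs < patience:
        model = model + 1  # stand-in for one epoch of parameter updates
        loss = dev_loss_fn(model)
        if loss < best_loss:
            best_loss, best_model, bad_epochs = loss, model, 0
        else:
            bad_epochs += 1
    return best_model

def curriculum(model, easy_data, hard_data, dev_loss_fn, patience):
    # Easy curriculum (augmented data) first, then the hard curriculum
    # (original data), initialized from the best easy-curriculum model.
    model = train_stage(model, easy_data, dev_loss_fn, patience)
    return train_stage(model, hard_data, dev_loss_fn, patience)

# Toy dev loss whose minimum sits at "epoch 5".
dev_loss = lambda m: (m - 5) ** 2
print(curriculum(0, None, None, dev_loss, patience=3))
```

The patience mechanism mirrors the "stop when the dev loss does not decrease for N epochs" rule, applied independently to each curriculum.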
All experiments are implemented in PyTorch on 32GB NVIDIA V100 GPUs. Each epoch takes about 10 min for TRANSFORMER and 25 min for GPT2. + +Hyper-parameters All hyper-parameters are determined using a coarse grid search to ensure satisfactory performance, including $\tau$ in data distillation, $\alpha$ in Eq. 1, and $\beta, \gamma$ in Eq. 2. The candidate values of these hyper-parameters are given in Table 9, chosen empirically to reduce the search cost. The search target we maximize is the normalized average of all automatic metrics listed in Table 2 (except PPL) when running inference on the test set. Note that we only use TRANSFORMER as the base model for the search; each search run takes about 0.7 GPU days. The GPT2 model follows the same setting as TRANSFORMER. We found that $\tau$ plays a more important role in our
| | PPL | BLEU | NIST-4 | $\mathrm{BS}_f$ | Ent-1 | Ent-2 | Ent-3 | Dis-1 | Dis-2 | Dis-3 | C |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Multi-GPT2 | 17.70 | 6.186 | 1.477 | 0.3216 | 4.665 | 6.809 | 7.704 | 4.111 | 15.693 | 27.115 | 0.850 |

Table 7: The performance of the trained Multi-GPT2 on the distilled dev set.

![](images/6d02d9dad0490251709be6fdfdd373143605d27b66b4392672d9483933b61a91.jpg)
Input format for augmented data $\mathcal{D}^a$

![](images/aa6b29c6126ba2e6c6fc658104853e15693030ceb2140234f60c722162c94b51.jpg)
Input format for original dataset $\mathcal{D}$

![](images/15d0c3c185f823158f21a8db8ff79620d4f22eb36d6698290c45073f246cdeda.jpg)
Response format
Figure 7: The sequence format of an input and an output for both the TRANSFORMER and GPT2 models.
| | GPT2-PPL | Coherence score |
| --- | --- | --- |
| Original | 13.119 | 0.361 |
| Diversified | 18.847 | 0.525 |
+ +Table 8: The average GPT2-based PPL and NLI model-based coherence score of the original responses and responses generated in diversification. + +
| Param | Candidate values |
| --- | --- |
| $\tau$ | 0.9, 0.95, 0.99 |
| $\alpha$ | 0.4, 0.5, 0.6 |
| $\beta$ | 0.2, 0.3 |
| $\gamma$ | 0.4, 0.5, 0.6 |
+ +Table 9: The candidate values for hyper-parameters during grid searching. + +
| Method | Train sample number |
| --- | --- |
| Original | 65,719 |
| BT | 131,436 |
| CVAE | 131,436 |
| Entropy-Filter | 59,892 |
| $\mathbf{D}^3$ (Ours) | 53,393 (easy) / 65,719 (hard) / 119,112 (all) |
+ +Table 10: The training sample number used in each method. + +method, as it determines the quality of the distilled samples, while the other parameters have less impact on our method. + +Baselines We apply the same translation models as the ones used in §A.2 for the BT (Sennrich et al., 2016) baseline and augment each sample with one new sample from it. For the CVAE (Li et al., 2019) method, we use its default setting to train the model on the PersonaChat dataset without using the personas. A new sample is generated for each input in the original dataset. In Entropy-filter (Csaky et al., 2019), we set the threshold to 1.1 and use both source and target sequences for filtering. Only samples that survive filtering are used in training. The total numbers of training samples of all methods are listed in Table 10. Note that all models are trained until the loss does not decrease for the same $N$ epochs for a fair comparison. + +Metrics We use the same $\mathrm{BS}_f$ and $\mathrm{RoBERTa}_{nli}$ obtained above to calculate the BERTScore and C-score metrics, respectively. The instructions for human annotators are provided in Tables 14 and 15. + +# C Additional Experimental Results + +# C.1 Attention on Dialogue History + +To investigate how models attend to each part of the dialogue history, especially the last utterance, we calculate the attention weights from different decoder layers on the last utterance versus the other dialogue history utterances. The TRANSFORMER model used here is trained with the original training data without any augmentation. When testing on the dev set of the PersonaChat dataset, the average token-level attention weight on the last utterance in the dialogue history is significantly higher than that on all other utterances, as shown in Figure 8. Thus, our history distillation can ease the learning of such knowledge by removing the earlier utterances.
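The last-utterance comparison in §C.1 amounts to averaging attention mass over token spans, as in this toy numpy sketch (assumed utterance spans and weights; `history_attention_split` is a hypothetical helper name):

```python
import numpy as np

def history_attention_split(attn, utt_spans):
    """attn: (Y, L) decoder attention over the input tokens; utt_spans:
    (start, end) token index ranges of the history utterances."""
    last = list(range(*utt_spans[-1]))
    other = [j for s, e in utt_spans[:-1] for j in range(s, e)]
    a_last = float(attn[:, last].mean())
    a_other = float(attn[:, other].mean()) if other else 0.0
    return a_last, a_other

# Toy example: two history utterances covering tokens 0-1 and 2-3.
attn = np.array([[0.1, 0.1, 0.2, 0.6],
                 [0.2, 0.1, 0.1, 0.6]])
print(history_attention_split(attn, [(0, 2), (2, 4)]))
```

A last-utterance average well above the other-utterance average (and above the uniform baseline, 1/L per token) reproduces the pattern in Figure 8.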
+ +# C.2 Statistical Results of Table 2 + +We conduct Student's t-tests between the experimental results of our method $\mathbf{D}^3$ and every other baseline under each base model to verify whether the performance difference between any two methods is significant. Here, all human evaluation results (Fluency, Coherence, Persona-consistency) and some applicable automatic metrics (C-score, $\mathrm{BS}_f$ ) are included. We find that nearly all baseline results reject the null hypothesis, i.e., they are significantly different from $\mathbf{D}^3$ at $p < 0.05$ or even smaller thresholds, when using TRANSFORMER as
| | PPL | BLEU | NIST-4 | $\mathrm{BS}_f$ | Ent-1 | Ent-2 | Ent-3 | Dis-1 | Dis-2 | Dis-3 | C |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT2 | 17.63 | 3.761 | 1.278 | 0.1693 | 4.485 | 6.187 | 7.029 | 2.011 | 8.260 | 15.03 | 0.518 |
| GPT2-D$^3$ | 15.69 | 4.184 | 1.429 | 0.1835 | 4.614 | 6.426 | 7.321 | 2.179 | 9.458 | 17.72 | 0.557 |
| GPT2-D$^{3*}$ | 15.77 | 4.082 | 1.388 | 0.1809 | 4.611 | 6.408 | 7.312 | 2.209 | 9.657 | 17.91 | 0.536 |
| w/o diversification | 15.89 | 4.119 | 1.441 | 0.1817 | 4.526 | 6.281 | 7.148 | 2.131 | 9.243 | 17.11 | 0.528 |
| w/o distilled format | 16.04 | 4.026 | 1.379 | 0.1788 | 4.462 | 6.151 | 7.097 | 2.017 | 9.022 | 16.86 | 0.518 |
| only distillation | 29.73 | 2.912 | 1.325 | 0.1509 | 4.558 | 6.392 | 7.250 | 1.252 | 4.807 | 9.048 | 1.131 |
| w/o persona editing | 15.81 | 4.190 | 1.427 | 0.1801 | 4.503 | 6.204 | 7.062 | 2.065 | 8.867 | 16.83 | 0.524 |
| w/o history augmentation | 15.75 | 4.213 | 1.503 | 0.1812 | 4.562 | 6.333 | 7.244 | 2.057 | 9.131 | 17.34 | 0.533 |
| w/o response filter | 15.83 | 4.119 | 1.395 | 0.1790 | 4.604 | 6.387 | 7.265 | 2.158 | 9.414 | 17.74 | 0.518 |
+ +Table 11: Automatic evaluation results with data distillation variants (middle) and data diversification ablations (bottom), compared with the original $\mathbf{D}^3$ (top) on GPT2. $\mathbf{D}^{3*}$ means using an NLI model trained under a few-shot setting (200 labelled samples) in the data distillation. + +
| Curriculum | PPL | BLEU | NIST-4 | $\mathrm{BS}_f$ | Ent-1 | Ent-2 | Ent-3 | Dis-1 | Dis-2 | Dis-3 | C |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT2-D3 | 15.69 | 4.184 | 1.429 | 0.1835 | 4.614 | 6.426 | 7.321 | 2.179 | 9.458 | 17.72 | 0.557 |
| Original | 17.63 | 3.761 | 1.278 | 0.1693 | 4.485 | 6.187 | 7.029 | 2.011 | 8.260 | 15.03 | 0.518 |
| Only augment | 33.01 | 2.540 | 1.078 | 0.1035 | 4.574 | 6.255 | 7.232 | 1.916 | 7.340 | 11.77 | 1.148 |
| Shuffle | 16.58 | 3.801 | 1.321 | 0.1799 | 4.588 | 6.261 | 7.216 | 2.128 | 9.391 | 17.55 | 0.525 |
| Reverse | 30.46 | 2.615 | 1.069 | 0.1189 | 4.298 | 6.074 | 6.960 | 1.646 | 6.709 | 9.529 | 1.111 |
Table 12: Performance comparison between different curriculum variants, using GPT2 as the base model.

![](images/9dc95b715f5f7aa50827dc79fa240e062123f5c266414831a2c8f2ebd90a6f13.jpg)
Figure 8: The average token-level attention weights from different decoder layers in TRANSFORMER on the last utterance versus the other parts of the dialogue history. Red line: the baseline value when attention is distributed uniformly over all tokens.

the base model. Such significant differences appear less often when using GPT2 as the base model, except for CVAE, which again shows that data manipulation methods may have less impact when paired with a pretrained model.

# C.3 More Analysis on GPT2

We also provide extensive analysis results on GPT2, similar to those given in §4.3 for TRANSFORMER. Table 11 shows the results. We find that the influence of data diversification, as well as of our distillation, is smaller on GPT2 than on TRANSFORMER. The reason is that GPT2 is a strong pretrained model and is therefore less sensitive to differences in the number of training samples.

Moreover, Table 12 shows the performance when using different curriculum variants, leading to the same conclusion as for TRANSFORMER.

# C.4 Additional Results of Attention Analysis for Curriculum

To better illustrate the effect of our training curriculum strategy, we further provide the token-level/sentence-level consistent attention weights $a_{t}$ and $a_{s}$ in all layers of Transformer and GPT2 trained via three curriculum strategies: Original (Orig.), Shuffle, or our $\mathbf{D}^3$ method, as described in §4.3. All visualized attention weights are shown in Figure 9. Our method has the most accurate attention on personas at both levels. On the other hand, compared to Transformer, the divergence between different layers in GPT2 is more significant.

# C.5 The Influence of Diversified Sample Numbers

Since we can control the threshold for $s$ in Eq. 2 to determine how many diversified samples are generated for $\mathcal{D}^{div}$, how does this quantity affect the performance of $\mathbf{D}^3$? We carry out experiments on TRANSFORMER using a $\mathcal{D}^{div}$ whose size is about $50\%$ or $200\%$ of that of $\mathcal{D}^{dis}$, compared to the original method where $\mathcal{D}^{div}$ is nearly the same size as $\mathcal{D}^{dis}$. The results in terms of automatic metrics are shown in Table 13. We find that further extending the data scale results in a very slight improvement but a longer training time, while

![](images/551ff2834b6dbd383e5c53aeeb440478bca031b143abb855b3781d15ca1b2df4.jpg)
(a) Consistent attention weights from different decoder layers in TRANSFORMER. Upper: token-level $a_{tc}$; lower: sentence-level $a_{sc}$.

![](images/3df3e21912c34159082eb5a7ad7e89f6ecb9f7011b19ca81cf9eaddf1fc9ea19.jpg)
(b) Consistent attention weights from different decoder layers in GPT2. Upper: token-level $a_{tc}$; lower: sentence-level $a_{sc}$.
Figure 9: Consistent attention weights on TRANSFORMER and GPT2. Orig.: training the model on the original training data $\mathcal{D}$; Shuffle: training the model on the shuffled data of $\mathcal{D}$ and $\mathcal{D}^a$; Ours: training the model with our curriculum strategy; Uniform: the baseline with attention distributed uniformly over all positions.
| Method | PPL | BLEU | NIST-4 | $\mathrm{BS}_f$ | Ent-1 | Ent-2 | Ent-3 | Dis-1 | Dis-2 | Dis-3 | C |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TRANS-D3 | 37.30 | 3.358 | 1.206 | 0.1574 | 4.223 | 6.165 | 7.298 | 1.826 | 7.923 | 14.42 | 0.485 |
| TRANS-D3 (200%) | 37.49 | 3.367 | 1.199 | 0.1570 | 4.271 | 6.235 | 7.343 | 1.821 | 7.997 | 14.51 | 0.493 |
| TRANS-D3 (50%) | 37.75 | 3.269 | 1.167 | 0.1551 | 4.132 | 6.085 | 7.003 | 1.743 | 7.658 | 14.10 | 0.468 |
Table 13: Performance comparison between the original $\mathbf{D}^3$ and variants using a diversified dataset $\mathcal{D}^{div}$ of about $200\%$ or $50\%$ of the size of the distilled dataset $\mathcal{D}^{dis}$.

shrinking the diversified dataset size has a more obvious effect on the performance. Nevertheless, using a $\mathcal{D}^{div}$ of a similar size as $\mathcal{D}^{dis}$ is a good tradeoff between resource cost and performance, while ensuring a fair comparison with former methods.

# C.6 Additional Case Studies

In addition to the cases provided in $\S 4.3$, we provide additional cases, including the responses given by GPT2. They are shown in Figure 10, including visualized attention weights placed by different models on their persona sentences. Note that the attention weights are normalized along the whole input sequence, including the dialogue history. It can be seen that our method helps the model pay more attention to suitable persona parts, and thus the generated responses have better persona consistency.

![](images/e1ba9f48f51b852a2a0c7092875779e6c88600583b65526aad457dbb50fdaeb5.jpg)
Figure 10: Additional response cases and visualizations by Transformer (Trans) and GPT2, without or with our $\mathbf{D}^3$ data augmentation method. Colors in each persona text indicate the attention weight paid by the corresponding model; a darker color means a higher attention weight. Colored text in the response denotes persona consistency.

![](images/9e8c05df850f5e4a991ecf3b46ec110156d4cfbe6dcb28bdcb6d9a2e72a583b6.jpg)
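The persona shading in the case-study figures reduces to aggregating token-level attention, normalized over the whole input sequence, into one weight per persona sentence. A minimal sketch (our own illustration, with hypothetical names, assuming each persona sentence occupies a known token span):

```python
import numpy as np

def persona_sentence_attention(token_attn, persona_spans):
    """Aggregate token-level attention into per-persona-sentence weights.

    token_attn: (source_len,) attention weights averaged over decoding
                steps, already normalized over the whole input
                (dialogue history included).
    persona_spans: list of (start, end) token index pairs, end exclusive,
                   one pair per persona sentence.
    """
    token_attn = np.asarray(token_attn, dtype=float)
    return [float(token_attn[s:e].sum()) for s, e in persona_spans]

# toy input: 6 tokens; two persona sentences cover tokens 0-1 and 2-3,
# the remaining tokens belong to the dialogue history
weights = persona_sentence_attention([0.1, 0.2, 0.3, 0.1, 0.2, 0.1],
                                     [(0, 2), (2, 4)])
```

A darker shade in the figure then simply corresponds to a larger aggregated weight for that sentence.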
**Data description:** You are supposed to be Speaker S2. You are required to evaluate the quality of the dialogue responses from S2 in the following 3 aspects, based on 1) the persona information of S2, and 2) the dialogue history with Speaker S2. Here, the persona information of S2 means the personality/characteristics of the speaker whose responses need to be evaluated. The responses are expected to reflect the given persona of the speaker as much as possible; meanwhile, they should also be proper and coherent with respect to the previous messages from Speaker S1. Each serial number indicates one sample. It contains persona information and the corresponding dialogue history. The dialogue history contains several different responses (by different methods). You need to rate every response considering the persona information and dialogue history. The rating covers the following 3 aspects.
+ +
1. Fluency (1 ~ 3). You need not consider the persona information and dialogue history, just the response itself.
| Score | Description | Examples |
| --- | --- | --- |
| 1 (unsatisfied) | 1) The text is totally broken, or contains severe grammar errors. 2) The text is very hard to understand. | S1: i do not have any but charlie my puppy enjoys it<br>S2: i am triplets triplets triplets triplets triplets (cannot understand)<br>S1: i am a college student . art major .<br>S2: i love my spanish . is studying it has been studying ? (totally not fluent) |
| 2 (fair) | 1) The text is basically fluent; it contains grammar errors, but they do not affect understanding. 2) The response is short but fluent, without grammar errors. 3) The text contains some repeated content. 4) The text is basically fluent, but contains perverse content. | S1: good , you have any hobbies ?<br>S2: i travel a lot (fluent but too simple)<br>S1: what kind of dog is he ?<br>S2: he is a german shepard . he is a german shepard . (fluent but contains repetitions) |
| 3 (satisfied) | 1) The text is long and informative; a few grammar errors are acceptable. There may be some non-fluent parts, but they do not affect understanding. 2) The text is of medium length and fluent, without grammar errors. | S1: hello what are doing today ?<br>S2: hello , i just got back from the gym . how are you ?<br>S1: good , you have any hobbies ?<br>S2: i used to be a painter , i still like to do that a lot . how about you ? |
+ +
2. Dialogue coherence (1 ~ 3). You need not consider fluency if there is no difficulty in understanding. You need to consider both the response and the dialogue history.
| Score | Description | Examples |
| --- | --- | --- |
| 1 (unsatisfied) | The response is irrelevant to the dialogue history, e.g., it does not share the same topic or it is an irrelevant answer. | S1: how old are you? i turned four on my birthday!<br>S2: awesome! i love the insane clown posse love (irrelevant answer) |
| 2 (fair) | Very limited relevance exists between the response and the history, or it meets one of the following conditions: 1) The response is the same as the query. 2) The response is a kind of paraphrase of the query. 3) It is a general response that does not answer the query or contains very limited information, e.g., "i am sorry". 4) The response is a question without new information. | S1: yes i bet you can get hurt . my wife works and i stay at home<br>S2: i wish i could do that (very limited relevance)<br>S1: hi! do you like turtles?<br>S2: yes i do , do you have any hobbies ? (a question without new information)<br>S1: i would love to travel to italy . i love baking cookies .<br>S2: i would love to visit italy sometime . (paraphrasing the query) |
| 3 (satisfied) | 1) The text is long and informative; a few grammar errors are acceptable. There may be some non-fluent parts, but they do not affect understanding. 2) The text is of medium length and fluent, without grammar errors. | S1: hello what are doing today?<br>S2: hello , i just got back from the gym . how are you?<br>S1: good , you have any hobbies?<br>S2: i used to be a painter , i still like to do that a lot . how about you? |
Table 14: The instructions for annotators performing the human evaluation of the generated responses (Part 1).

3. Consistency with the given persona (0 or 1). You need to consider both the persona sentences and the response.
| Score | Description | Examples |
| --- | --- | --- |
| 0 | The response does not reflect any given persona information at all. | Persona sentences: 1) i was born in south carolina. 2) hey there i am a professional singer. 3) i graduated from usc. 4) my name is joanna and i love watching horror films.<br>S2: what is your favorite movie ? (totally irrelevant to the persona)<br>S2: I was born in Texas. So where is your home twon ? ("born in Texas" contradicts the persona sentence "i was born in south carolina", and no other text reflects the correct persona.) |
| 1 | The response reflects one or several persona sentences, directly or indirectly. | Persona sentences: 1) i read twenty books a year. 2) i'm a stunt double as my second job. 3) i only eat kosher. 4) i was raised in a single parent household.<br>S2: nice . i love to read . (directly reflects the persona "i read twenty books a year.")<br>S2: nice ! i am currently reading a horror novel . (indirectly reflects the persona "i read twenty books a year.") |
+ +Table 15: The instruction for annotators to make human evaluation for the generated responses (Part 2). \ No newline at end of file diff --git a/amodelagnosticdatamanipulationmethodforpersonabaseddialoguegeneration/images.zip b/amodelagnosticdatamanipulationmethodforpersonabaseddialoguegeneration/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7fb453bffcf5dfe44b58971ca050a45104da57e9 --- /dev/null +++ b/amodelagnosticdatamanipulationmethodforpersonabaseddialoguegeneration/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:50074526da2ba6f9c52d459ab208550de8e1f28756a9f8e8fa8eec1608560a0d +size 1665950 diff --git a/amodelagnosticdatamanipulationmethodforpersonabaseddialoguegeneration/layout.json b/amodelagnosticdatamanipulationmethodforpersonabaseddialoguegeneration/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..7682dfbe3afbd12a950d4cf174eb06ae4eb0d236 --- /dev/null +++ b/amodelagnosticdatamanipulationmethodforpersonabaseddialoguegeneration/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:79e60663662ef122ec508be4f92d5cb61412c3a24773017bb3688a238dafaa70 +size 686962 diff --git a/amultidocumentcoveragerewardforrelaxedmultidocumentsummarization/1e1b8243-02a0-4e09-b15b-5ad88d7d8a4d_content_list.json b/amultidocumentcoveragerewardforrelaxedmultidocumentsummarization/1e1b8243-02a0-4e09-b15b-5ad88d7d8a4d_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..8300807a88abbd020e01dc0e375d9136770210d0 --- /dev/null +++ b/amultidocumentcoveragerewardforrelaxedmultidocumentsummarization/1e1b8243-02a0-4e09-b15b-5ad88d7d8a4d_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cee011b3c3ed74c390fcce0449bba30052b731129060fed1ede3be7080c2f64b +size 109745 diff --git a/amultidocumentcoveragerewardforrelaxedmultidocumentsummarization/1e1b8243-02a0-4e09-b15b-5ad88d7d8a4d_model.json 
b/amultidocumentcoveragerewardforrelaxedmultidocumentsummarization/1e1b8243-02a0-4e09-b15b-5ad88d7d8a4d_model.json new file mode 100644 index 0000000000000000000000000000000000000000..149d5f20fc9c9102216a3ae3fac54de3782479a5 --- /dev/null +++ b/amultidocumentcoveragerewardforrelaxedmultidocumentsummarization/1e1b8243-02a0-4e09-b15b-5ad88d7d8a4d_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7b979e683e8aecbc8a9040162c254a8671c40674d589e50017d7ac28fb19e8d9 +size 127335 diff --git a/amultidocumentcoveragerewardforrelaxedmultidocumentsummarization/1e1b8243-02a0-4e09-b15b-5ad88d7d8a4d_origin.pdf b/amultidocumentcoveragerewardforrelaxedmultidocumentsummarization/1e1b8243-02a0-4e09-b15b-5ad88d7d8a4d_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b7ebb8b199a86658a1ac2fbc88275df303e58b23 --- /dev/null +++ b/amultidocumentcoveragerewardforrelaxedmultidocumentsummarization/1e1b8243-02a0-4e09-b15b-5ad88d7d8a4d_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a375ab51ebda6fa383b666400f50815c60bed7dea5c5b122b4544ac7e823a68 +size 1069487 diff --git a/amultidocumentcoveragerewardforrelaxedmultidocumentsummarization/full.md b/amultidocumentcoveragerewardforrelaxedmultidocumentsummarization/full.md new file mode 100644 index 0000000000000000000000000000000000000000..577238591644b1b9501f46edd83fd167ac909a2a --- /dev/null +++ b/amultidocumentcoveragerewardforrelaxedmultidocumentsummarization/full.md @@ -0,0 +1,337 @@ +# A Multi-Document Coverage Reward for RELAXed Multi-Document Summarization + +Jacob Parnell $^{1,2}$ , Inigo Jauregi Unanue $^{1,2}$ , Massimo Piccardi $^{1}$ + +1University of Technology Sydney, NSW, Australia + +$^{2}$ RoZetta Technology, NSW, Australia + +{jacob.parnell, inigo.jauregi}@rozettatechnology.com + +massimo.piccardi@uts.edu.au + +# Abstract + +Multi-document summarization (MDS) has made significant progress in recent years, in part facilitated by the 
availability of new, dedicated datasets and capacious language models. However, a standing limitation of these models is that they are trained against limited references and with plain maximum-likelihood objectives. As for many other generative tasks, reinforcement learning (RL) offers the potential to improve the training of MDS models; yet, it requires a carefully-designed reward that can ensure appropriate leverage of both the reference summaries and the input documents. For this reason, in this paper we propose fine-tuning an MDS baseline with a reward that balances a reference-based metric such as ROUGE with coverage of the input documents. To implement the approach, we utilize RELAX (Grathwohl et al., 2018), a contemporary gradient estimator which is both low-variance and unbiased, and we fine-tune the baseline in a few-shot style for both stability and computational efficiency. Experimental results over the Multi-News and WCEP MDS datasets show significant improvements of up to $+0.95$ pp average ROUGE score and $+3.17$ pp METEOR score over the baseline, and competitive results with the literature. In addition, they show that the coverage of the input documents is increased, and evenly across all documents.

# 1 Introduction

Multi-document summarization (MDS) aims to consolidate salient points of information across a set of documents into a concise summary. The main requirement for the summary is that it adequately represent the document set, with low redundancy and high coverage across all documents, while at the same time being readable and fluent. Combined with this is the need to develop techniques that can handle the significant memory complexity required to tackle MDS. Recently, the release of dedicated datasets (Fabbri et al., 2019; Gholipour Ghalandari et al., 2020) and intelligently designed Transformer models (Liu et al., 2018; Liu and Lapata, 2019; Beltagy et al., 2020) have helped drive advancements in multi-document summarization, generally improving the accuracy and fluency of the predicted summaries. However, aspects such as the requirement to cover as much salient information from the input documents as possible, whilst still maintaining low repetition and low redundancy, have certainly been less explored to date (Nayeem et al., 2018; Mao et al., 2020).

Within the sphere of contemporary neural MDS models, two main lines of investigation can be identified: graph-based approaches (Li et al., 2020; Pasunuru et al., 2021) and concatenation approaches (Liu et al., 2018; Zhang et al., 2020a). The former are approaches that rely on the construction of graphs to capture the inter- and intra-document relations. While powerful, they need to elicit the relations explicitly. The latter instead assume that all the input documents within a document set can be simply concatenated, possibly with document separators and tags, such that the relations can be "discovered" by the model. Like ordinary summarization, MDS comes in two remarkably different styles: extractive, where the generated summaries consist of verbatim sentences from the original input documents (Nallapati et al., 2017), and abstractive, where the model is instead encouraged to generate a paraphrased understanding of the input documents. The intrinsic appeal of abstractive summaries and the advent of sequence-to-sequence models have increasingly shifted the trend toward abstractive summarization (See et al., 2017; Paulus et al., 2018; Fabbri et al., 2019; Lewis et al., 2020; Zhang et al., 2020a).
As far as models are concerned, abstractive MDS has made increasing use of transformers, both "conventional" (Lewis et al., 2020; Zhang et al., 2020a) and modified to accommodate the characteristic input length of multi-document sets (Beltagy et al., 2020; Zaheer et al., 2020).

Similarly to general summarization, the majority of MDS models are trained using the negative log-likelihood (NLL) as the training objective, which aims to maximize the conditional log-likelihood of the tokens of a given reference summary. Despite its speed and efficacy, the NLL exhibits both the wrong-objective problem (Ding and Soricut, 2017), where the model is trained on a convenient objective rather than a desirable one, and the well-known exposure bias problem (Bengio et al., 2015; Ranzato et al., 2016). To alleviate these issues, reinforcement learning has been adopted in summarization, as in other language generation tasks, to train the model with a more appropriate objective (Li et al., 2019; Parnell et al., 2021). However, its effective use for MDS requires a reward function that can appropriately balance the reference summary and the multiple input documents in the document set. For this reason, in this paper we propose exploring a reward that combines a reference-based metric such as ROUGE with a coverage term over the input documents. To implement the reinforcement learning approach, we employ a contemporary gradient estimator of the policy gradient, RELAX (Grathwohl et al., 2018), which is both low-variance and unbiased. In addition, to limit the computation and the risk of parameter drift, we apply the objective to fine-tune an NLL-pretrained model in a few-shot manner. In light of the above, this paper makes the following contributions:

1. a reward for reinforcement learning that combines a ROUGE score and a multi-document coverage score, to simultaneously adhere to both the reference summaries and the input documents;
2.
a reinforcement learning implementation that leverages a low-variance and unbiased gradient estimator of the policy gradient, RELAX;
3. experimental results and a comprehensive analysis over two MDS datasets (Multi-News and WCEP), showing the empirical effectiveness of the proposed approach.

The rest of this paper is organized as follows: first, the related work is reviewed in Section 2, and then the proposed approach is introduced in Section 3. Section 4 describes the experimental set-up and main results, while Section 5 presents a more detailed analysis of the main components of the proposed approach. Finally, Section 6 summarizes our findings and concludes the paper.

# 2 Related Work

Early work in multi-document summarization (MDS) that pre-dates the neural era (Mani and Bloedorn, 1997; Erkan and Radev, 2004; Christensen et al., 2013) was shaped around the notion of MDS as a collection of graph structures. As approaches in language generation naturally evolved into neural-based ones (Rush et al., 2015; Ranzato et al., 2016), later improved with the emergence of large, pre-trained language models (Devlin et al., 2019; Lewis et al., 2020; Zhang et al., 2020a), the effort shifted to integrating these graph structures into the models, often building on top of strong single-document summarization (SDS) baselines (Lebanoff et al., 2018; Zhang et al., 2018).

Concurrently, the growing interest in multi-document summarization has led to the development of dedicated, multi-document datasets such as WikiSum (Liu et al., 2018), Multi-News (Fabbri et al., 2019), Wikipedia Current Events Portal (WCEP) (Gholipour Ghalandari et al., 2020) and others. The typical amount of input data that comes with these datasets has increased the pressure on the models to be able to handle larger inputs. For instance, WCEP has up to 100 documents in each document set, and 63.7 on average. As such, the standard transformers used to develop successful SDS models such as BART (Lewis et al., 2020) and PEGASUS (Zhang et al., 2020a) have proved inadequate for MDS due to their limited maximum input length (in the order of $10^{3}$ tokens) and quadratic memory complexity (Beltagy et al., 2020). In turn, this has prompted the development of long transformer models such as Longformer (Beltagy et al., 2020) (built upon BART) and BigBird (Zaheer et al., 2020) (built upon PEGASUS) which, thanks to their smart attention layers that scale linearly with the input length, have opened up the possibility of presenting the input documents "at once", allowing these re-designed attention mechanisms to discover both inter- and intra-document relations.

Document summarization, like other language generation tasks, has often been criticized for using maximum-likelihood training objectives that may prove limiting for the eventual performance of the models (Ding and Soricut, 2017). For this reason, reinforcement learning has been employed as an alternative, to directly optimize the models over evaluation metrics and explicitly reward the quality of the model's predictions. Reinforcement learning approaches have used metrics such as ROUGE-1, ROUGE-2 and ROUGE-L F1 (Paulus et al., 2018), and also more contemporary scoring functions such as BERTScore (Zhang et al., 2020b) as rewards, often mixed with maximum-likelihood objectives. When applying reinforcement learning to MDS, we contend that the reward should not simply be a ROUGE score against the reference summary, since this would dismiss key characteristics of the task such as inter-document information transfer. For instance, Mao et al. (2020) have leveraged maximal marginal relevance (Carbonell and Goldstein, 1998) to mollify higher-order information redundancy between the input documents.
Several other performance measures could potentially be included in the reward, such as extractive fragment coverage and density (Grusky et al., 2018) and MINT (Dreyer et al., 2021), but to the best of our knowledge they have never been utilized as, or for, training objectives.

To address this gap, in this paper we propose leveraging a modified coverage reward to improve information coverage across all the documents in the input set, jointly with a principled policy gradient estimator (RELAX) and a high-performing long transformer model (the BART Longformer Encoder-Decoder, or BART-LED), in the hope of benefiting from the synergy between these components.

# 3 Proposed Approach

In this section, we present the details of the proposed approach, including the reinforcement learning framework (Section 3.1), the multi-document coverage reward (Section 3.2), and the overall training objective (Section 3.3).

# 3.1 Reinforcement Learning Gradient Estimators

Given a set of documents in input, simply noted as $x$, and a summary with $T$ tokens, $y = \{y_{1},\ldots ,y_{T}\}$, the predictive distribution, also known as the policy in reinforcement learning, can be noted as $p(y_{t}|y_{1},\dots ,y_{t - 1},x)$. The policy gradient theorem (Sutton et al., 1999) states that an estimator for the gradient of the reinforcement learning risk can be expressed as:

$$
\Delta = -r \sum_{t=1}^{T} \frac{\partial}{\partial \theta} \log p(y_{t}^{s} \mid y_{1}^{s}, \dots, y_{t-1}^{s}, x) \tag{1}
$$

where $y_1^s, \ldots, y_T^s$ is a sequence sampled from the policy, $r$ is a function that rewards its quality, and $\theta$ collectively denotes all the policy's parameters. This estimator is the well-known REINFORCE (Williams, 1992) and is a baseline of reinforcement learning.
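As a concrete illustration of Eq. 1, the REINFORCE estimator can be expressed as a scalar surrogate loss over one sampled summary. The following minimal numpy sketch (our own illustration, not the authors' code) computes that loss from per-token log-probabilities:

```python
import numpy as np

def reinforce_loss(token_log_probs, reward):
    """Surrogate loss whose gradient matches Eq. 1: minimizing
    -r * sum_t log p(y_t^s | y_<t^s, x) follows the policy gradient.
    In a real system the log-probabilities would be differentiable
    tensors produced by the summarization model; here they are floats."""
    return -reward * float(np.sum(token_log_probs))

# toy sampled summary of 3 tokens with probabilities 0.5, 0.25, 0.5,
# rewarded with a ROUGE-like score of 0.8
log_probs = np.log([0.5, 0.25, 0.5])
loss = reinforce_loss(log_probs, 0.8)
```

Note how a larger reward scales up the push toward higher log-likelihood of the sampled sequence, which is exactly the behaviour Eq. 1 prescribes.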
In turn, the gradient can be easily turned into a loss function to be used with automatic differentiation:

$$
\begin{aligned} L_{REINFORCE} &= -r \sum_{t=1}^{T} \log p\left(y_{t}^{s} \mid y_{1}^{s}, \dots, y_{t-1}^{s}, x\right) \\ &= -r \log p\left(y^{s}\right) \end{aligned} \tag{2}
$$

The sampled sequence in (2), $y^{s} = \{y_{1}^{s},\ldots ,y_{T}^{s}\}$, can be obtained with any usual sampling approach such as teacher-forcing, student-forcing, or scheduled sampling (Bengio et al., 2015). While the samples can be drawn from a standard categorical distribution, in our experiments we utilize the Gumbel-Softmax re-parameterization (Jang et al., 2017) to obtain the categorical samples from transformed samples of a Gumbel distribution. The reason for the re-parameterization is that the Gumbel-Softmax samples are needed for the RELAX estimator that we introduce in the following. For a generic sample, $y_{t}^{s}$, the re-parameterization can be concisely expressed as:

$$
\begin{aligned} y_{t}^{s} &= \operatorname{argmax}(z_{t}) \\ z_{t} &\sim \text{Gumbel-Softmax}(p_{t}, \tau) \end{aligned} \tag{3}
$$

where $z_{t}$ is a Gumbel-Softmax sample of size equal to that of the vocabulary that acts as a "soft" prediction, $p_{t}$ is the probability vector over the vocabulary at slot $t$, $\tau$ is a temperature parameter controlling the sparsity of $z_{t}$, and $\operatorname{argmax}(z_t)$ returns the index of $z_{t}$'s largest value. This re-parameterization is provenly equivalent to directly sampling $y_{t}^{s}$ from $\mathrm{Cat}(p_t)$ (the reader can refer to Jang et al. (2017) for details).

REINFORCE is an unbiased estimator of the theoretical gradient, but it typically suffers from a high variance which can affect the convergence and effectiveness of training. To curb its high variance, techniques based on control variates and the subtraction of simple baselines have been proposed and even applied to summarization (Rennie et al., 2017; Paulus et al., 2018). However, our early experiments showed that these approaches were not promising for the given task. In addition, some of these estimators introduce a "bias", i.e. a mean difference with respect to the theoretical gradient. More recently, the RELAX gradient estimator has been shown to empirically outperform REINFORCE, thanks to its ability to reduce the variance while remaining unbiased (Grathwohl et al., 2018). The corresponding RELAX loss can be expressed as:

$$
L_{RELAX} = -\left[r - c_{\phi}(\tilde{z})\right] \log p\left(y^{s}\right) + c_{\phi}(z) - c_{\phi}(\tilde{z}) \tag{4}
$$

In (4), $c_{\phi}(\tilde{z})$ is a control variate of parameters $\phi$ which is expected to correlate tightly with the reward to reduce the variance, and the term $c_{\phi}(z) - c_{\phi}(\tilde{z})$ ensures that the overall gradient remains an unbiased estimator of the theoretical gradient. Variable $z = \{z_1,\dots ,z_T\}$ denotes the sequence of the Gumbel-Softmax samples, while variable $\tilde{z}$ denotes the sequence of samples from a Gumbel-Softmax distribution conditioned on the observed values of $y^{s}$. Operationally, $z_{t}$ is sampled first, unconditionally, then $y_{t}^{s}$ is derived with the argmax, and finally $\tilde{z}_t$ is sampled from a suitably conditioned Gumbel-Softmax distribution; details can be found in Grathwohl et al. (2018), Appendix B - Categorical. Overall, the RELAX estimator is both unbiased and low-variance.

The control variate in our experiments is a simple two-layer feed-forward network that is constructed to correlate with the ROUGE scoring function. We obtain this by feeding the concatenation of the soft predictions, $z$ (or, in turn, $\tilde{z}$), and the reference summary, $y$, as input to the control variate.
This allows the model to learn to score the soft predictions and their targets in a way that mimics the ROUGE prediction-reference score. In detail, the architecture consists of two fully-connected linear layers, each followed by a ReLU activation function, and a final sigmoid activation function that normalizes the output of the last layer. Finally, the output of the sigmoid is averaged to produce the control variate.

# 3.2 Multi-Document Coverage Reward

The design of an effective reward is another key aspect of a reinforcement learning objective. In our work, we have aimed to design an overall reward that could simultaneously remain faithful to: a) the reference summary, to ensure adequate generation performance, and b) the input documents, to cover as many important details as possible and, hopefully, support generalization. Relying solely on the reference summaries, given the large input size, does not seem to promise sufficient guidance, and our experiments have confirmed that. To implement the reward, we have chosen to use ROUGE-L F1 for the references and, for the input documents, a multi-document coverage score that we describe hereafter.

Several quantitative measures of coverage exist in the literature, and have found ample use in describing the properties of summarization datasets and the performance of models. For our work, we have adopted the extractive fragment coverage (EFC) of Grusky et al. (2018). The EFC measures the percentage of words in a summary that are part of "extractive fragments" within an input document, which are simply multi-word phrases shared between the input document and the summary. It is a simple precision-type measurement that looks at how much of the prediction is in the input document.
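To make the EFC concrete, here is a simplified sketch, our own illustration rather than the reference implementation of Grusky et al. (2018): it greedily matches, at each summary position, the longest contiguous phrase that also appears in the document, and returns the fraction of summary tokens covered by such shared fragments.

```python
def efc(summary_tokens, doc_tokens):
    """Simplified extractive fragment coverage: the fraction of summary
    tokens lying inside greedily-matched phrases shared with the document."""
    n, m = len(summary_tokens), len(doc_tokens)
    covered, i = 0, 0
    while i < n:
        best = 0
        # longest document match starting at summary position i
        for j in range(m):
            k = 0
            while i + k < n and j + k < m and summary_tokens[i + k] == doc_tokens[j + k]:
                k += 1
            best = max(best, k)
        if best > 0:
            covered += best
            i += best      # skip past the matched fragment
        else:
            i += 1         # unmatched token contributes nothing
    return covered / n if n else 0.0

doc = "the cat sat on the mat".split()
summary = "the cat was on the mat".split()
score = efc(summary, doc)  # 5 of 6 tokens lie in shared fragments
```

Here the fragments "the cat" and "on the mat" cover 5 of the 6 summary tokens, so the score is 5/6.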
Noting an individual document as $D$ , a summary as $y$ and an extractive fragment as $f$ , the EFC can be expressed as: + +$$ +E F C (y, D) = \frac {1}{| y |} \sum_ {f \in \mathcal {F} (y, D)} | f | \tag {5} +$$ + +where the $|\cdot|$ operator is used to denote length. To promote an even improvement in coverage across the input documents, we propose a multi-document extension of the EFC that reaches its highest value when the coverage across the input documents is evenly distributed. Let us note the input document set here as $\mathcal{D}$ , and the EFC coverage vector over the document set as $\operatorname{cov}(y, \mathcal{D})$ . We also note the sample mean of a vector $x$ as $\mu(x)$ , the sample standard deviation as $\sigma(x)$ , and their ratio (the inverse coefficient of variation) as $c_v^{-1}(x)$ . This allows us to compute a "normalized" coverage score for a summary, $c_v^{-1}(\operatorname{cov}(y, \mathcal{D}))$ , which takes larger values the more the scores are uniform across the document set. In addition, inspired by Kryscinski et al. (2018), we define a reward that pits the normalized coverage score of the prediction, $y^s$ , against that of + +the reference, $y$ .. + +$$ +r ^ {c o v} = \frac {c _ {v} ^ {- 1} \left(c o v \left(y ^ {s} , \mathcal {D}\right)\right) - c _ {v} ^ {- 1} \left(c o v \left(y , \mathcal {D}\right)\right)}{c _ {v} ^ {- 1} \left(c o v \left(y ^ {s} , \mathcal {D}\right)\right)} \tag {6} +$$ + +Eventually, to ensure that short summaries are not unfairly rewarded with high coverage scores, we normalize the reward by the length ratio of the prediction and the reference: + +$$ +\hat {r} ^ {c o v} = r ^ {c o v} \frac {\left| y ^ {s} \right|}{\left| y \right|} \tag {7} +$$ + +Overall, the $\hat{r}^{cov}$ reward regards a prediction as "good" if it enjoys high average coverage of the input documents, the coverage is evenly distributed, and the prediction is of sufficient length. 
The reference summary acts as a baseline: the reward is positive if the prediction outperforms the reference, and negative otherwise.

Since ROUGE-L F1 and the coverage reward are not necessarily on the same scale, to obtain the final reward, $r$, we combine them linearly with a scaling coefficient, $\beta$:

$$
r = \text{ROUGE-L F1}\left(y^s, y\right) + \beta \, \hat{r}^{cov} \tag{8}
$$

# 3.3 Overall Training Objective

As a training strategy, we first train the model with the negative log-likelihood (NLL) and choose the best model based on validation performance. After that, the model is fine-tuned with the reinforcement learning objective. In many past works, the reinforcement learning objective has been mixed with the NLL for stability (Paulus et al., 2018; Li et al., 2019; Parnell et al., 2021). However, we assume that the model has already "warmed up" to the training data during its NLL pretraining stage, and we use only either $L_{REINFORCE}$ (2) or $L_{RELAX}$ (4) for fine-tuning. To prevent excessive drift from the NLL pre-trained model, we limit the fine-tuning to a few ($\approx 1{,}000$) shots and a relatively low learning rate ($3 \times 10^{-6}$).

# 4 Experiments

# 4.1 Datasets

We have carried out multiple experiments over two MDS datasets in the news domain: Multi-News (Fabbri et al., 2019) and Wikipedia Current Events Portal (WCEP) (Gholipour Ghalandari et al., 2020).

For WCEP, we specifically use the WCEP-100 version, which limits the number of articles within a document set to 100. We have chosen these datasets because they cover an ample spread of summary lengths and numbers of input documents, with Multi-News having longer reference summaries on average. Appendix A.2 reports the datasets' main statistics as presented in the original papers (Fabbri et al., 2019; Gholipour Ghalandari et al., 2020).
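The final reward of Eq. (8) can likewise be sketched in plain Python. Here ROUGE-L F1 is illustrated with a textbook longest-common-subsequence computation over whitespace tokens (the experiments use a standard ROUGE package, not this sketch), and `r_cov_hat` is assumed to be precomputed as in Eq. (7):

```python
def lcs_len(a, b):
    """Longest common subsequence length of token lists a, b (dynamic programming)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a)):
        for j in range(len(b)):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[len(a)][len(b)]

def rouge_l_f1(pred, ref):
    """ROUGE-L F1 between a prediction and a reference string."""
    p, r = pred.split(), ref.split()
    lcs = lcs_len(p, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(p), lcs / len(r)
    return 2 * prec * rec / (prec + rec)

def final_reward(pred, ref, r_cov_hat, beta=1.0):
    """Final reward, Eq. (8): ROUGE-L F1 plus the scaled coverage reward."""
    return rouge_l_f1(pred, ref) + beta * r_cov_hat
```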
+ +# 4.2 Evaluation Metrics + +Like most previous works, we use the F1 variants of the ROUGE- $N$ scores $^3$ (Lin, 2004) for performance evaluation. In our use of ROUGE, we choose not to stem the predictions and the references during scoring. Since we use the ROUGE-L F1 score in our reward, to avoid circularity we also include METEOR $^4$ (Lavie and Agarwal, 2007) in the performance evaluation. Differently from our ROUGE implementation, METEOR uses stemming, synonyms, and other paraphrastic matching in the $n$ -gram matching stage. In a recent study, both ROUGE and METEOR have displayed high correlation with a number of desirable summarization properties such as coherence, consistency, fluency, and relevance (Fabbri et al., 2021). + +# 4.3 Main Settings + +We have implemented our approach on top of BART-LED (Beltagy et al., 2020). We utilize the generous maximum encoding length (16384 tokens) of this long-input transformer, by concatenating all the documents in a document set to form a single input to the model. The individual documents are separated by an [END] token, and the input is truncated to the maximum length. For every experiment, we report the average of three independently-initialized training runs. For each result, we have also run a nonparametric bootstrap test for statistical significance, and highlighted the results that are significantly different from the baseline. In the reward, the $\beta$ hyperparameter has been set to 1.0 with a validation described in Appendix A.3. All other hyperparameters are described in Appendix A.1. + +
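The input construction described in Section 4.3 can be sketched as follows. This is an illustration over whitespace tokens; the actual model operates on BART subword tokens, and the exact placement of the [END] separators and the truncation behavior are our assumptions:

```python
LED_MAX_TOKENS = 16384  # maximum encoding length of the long-input transformer

def build_input(documents, sep="[END]", max_tokens=LED_MAX_TOKENS):
    """Concatenate a document set into one sequence, [END]-separated and truncated."""
    tokens = []
    for doc in documents:
        tokens.extend(doc.split())
        tokens.append(sep)  # separate the individual documents
    return tokens[:max_tokens]
```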
| Model | R-1 | R-2 | R-L | METEOR |
|---|---|---|---|---|
| *Previous Work* | | | | |
| HiMAP (Fabbri et al., 2019) | 44.17 | 16.05 | 21.38 | - |
| Hierarchical Transformer (Liu and Lapata, 2019) | 42.36 | 15.27 | 22.08 | - |
| GraphSum (Li et al., 2020) | 45.02 | 16.69 | 22.50 | - |
| GraphSum + RoBERTa (Li et al., 2020) | 45.87 | 17.56 | 23.39 | - |
| BART-Long (Pasunuru et al., 2021) | **48.54** | 18.56 | 23.78 | - |
| *Our Models* | | | | |
| BART-LED (Baseline) | 46.89 | 18.50 | 24.84 | 29.61 |
| ROUGE-L + REINFORCE | 46.52 | 18.49 | 24.91 | 29.19 |
| ROUGE-L + Coverage (β = 1.0) + REINFORCE | 46.39 | 18.29 | 24.74 | 29.02 |
| ROUGE-L + RELAX | 47.05† | 18.76† | 24.99† | 29.98† |
| ROUGE-L + Coverage (β = 1.0) + RELAX | 47.23† | **18.86**† | **25.03**‡ | **30.53**† |
+ +Table 1: Average ROUGE and METEOR scores over the Multi-News test set. $(\dagger)$ and $(\ddagger)$ refer to statistically significant differences with respect to our baseline with a $p$ -value $< 0.01$ and $< 0.05$ , respectively, in a bootstrap hypothesis test (Dror et al., 2018). The best scores are bolded. + +
| Model | R-1 | R-2 | R-L | METEOR |
|---|---|---|---|---|
| *Previous Work* | | | | |
| TSR (Gholipour Ghalandari et al., 2020) | 35.30 | 13.70 | 25.70 | - |
| BERTReg (Gholipour Ghalandari et al., 2020) | 35.00 | 13.50 | 25.50 | - |
| Submodular+ABS (Gholipour Ghalandari et al., 2020) | 34.40 | 13.10 | 25.00 | - |
| BART-WCEP-DynE-5 (Hokamp et al., 2020) | 35.40 | 15.10 | 25.60 | - |
| *Our Models* | | | | |
| BART-LED (Baseline) | 39.79 | 18.94 | 32.10 | 29.04 |
| ROUGE-L + REINFORCE | 40.25† | 18.18 | 31.58 | 30.91† |
| ROUGE-L + Coverage (β = 1.0) + REINFORCE | 40.68† | 18.80 | 32.71‡ | 30.28‡ |
| ROUGE-L + RELAX | **41.11**† | **19.46**† | **33.13**† | 30.57† |
| ROUGE-L + Coverage (β = 1.0) + RELAX | 40.78† | 19.14 | 32.37 | **32.21**† |
Table 2: Average ROUGE and METEOR scores over the WCEP test set. $(\dagger)$ and $(\ddagger)$ refer to statistically significant differences with respect to our baseline with a $p$-value $< 0.01$ and $< 0.05$, respectively, in a bootstrap hypothesis test (Dror et al., 2018). The best scores are bolded.

# 4.4 Results

Multi-News. Table 1 compares the results over the Multi-News test set for the baseline, our proposed approaches, and previous work from the literature. We first note that our BART-LED model has performed as a strong baseline, with results comparable to those of BART-Long (Pasunuru et al., 2021), which is based on the same BART Longformer architecture. In detail, BART-Long has reported a higher ROUGE-1 score, our baseline has reported a higher ROUGE-L score, and both have reported similar ROUGE-2 scores. We therefore regard the overall performance as comparable, with the differences most likely due to different hyperparameters.

Amongst our results, the models fine-tuned with REINFORCE have achieved worse results than the baseline. This is evidence that a vanilla implementation of the policy gradient is not necessarily better than a standard NLL objective. Conversely, the models fine-tuned with RELAX have surpassed both the NLL baseline and virtually all the previous work. The best results have been achieved with the inclusion of the coverage term, with an improvement of $+0.36$ ROUGE-2 pp over the NLL baseline and a marked improvement of $+0.92$ METEOR pp. In addition, both results have reported a $p$-value $< 0.01$. These results support both the improved performance provided by the RELAX gradient estimator and the usefulness of the coverage term. In Appendix B, we also provide a qualitative example which shows that the increase in METEOR score is most likely due to the positive impact of the coverage term, which has allowed the model to retrieve relevant phrases from the input documents.

WCEP.
Table 2 shows the results over the WCEP test set. The trend is similar to that over Multi-News, but the improvements with the proposed models have been even more pronounced. First, the NLL baseline has delivered a very strong performance compared to the previous work, showing the full potential of a long-input model such as the Longformer for MDS. As with Multi-News, the best results have been achieved with the RELAX gradient estimator, with improvements of up to $+1.32$ ROUGE-1 pp and $+3.17$ METEOR pp over the NLL baseline. The inclusion of the coverage term with RELAX has not been able to increase the ROUGE scores, but has increased METEOR by $+1.64$ pp. Again, we attribute this to the model's improved coverage of the input documents, which leads to an increased number of matches under METEOR's more relaxed matching scheme. A qualitative example is discussed in Appendix B.

# 5 Analysis

In this section, we present a more detailed analysis of the impact of the coverage term, the few-shot fine-tuning, and the RELAX gradient estimator, using the Multi-News validation set as reference. For further insight into the coverage reward, we also include an analysis of its trajectory during training. All the selected hyperparameters are listed in Appendix A.1.

# 5.1 Impact of the Coverage Term

Our rationale for including a coverage term in the reward is to ensure coverage of the input documents beyond what can be driven by the reference summaries alone. We note that this may or may not translate into an improvement of the evaluation metrics, but it seems to add intrinsic value to the summaries nevertheless. For this reason, we further analyze the impact of the coverage term hereafter.

Figure 1 shows the average EFC coverage (5) for the documents in the input sets, indexed by the document position in the set (first, second, etc.).
The figure shows that the inclusion of the coverage term with RELAX has led to a marked increase in coverage, distributed almost evenly across all the documents in the input set. In particular, the document in the last position has achieved the largest coverage improvement.

In turn, Figure 2 shows the average ROUGE score for the documents in the input sets, obtained by averaging the ROUGE-1, ROUGE-2, and ROUGE-L scores computed between the predicted summary and the document (NB: not the reference summary). The figure shows that the improvements in ROUGE score across the document set are similar to those in EFC coverage: rather evenly distributed, and with an improvement of over $+4$ pp for the document in the last position. This is further evidence that the normalized coverage reward (7) is able to drive the model towards predictions that cover the input set more uniformly.

![](images/0fd59397bfe6da0c5fba531857d6e26abf6f6318a10dcf5a7153ae39c044a879.jpg)
Figure 1: Comparison of the EFC coverage across the input documents over the Multi-News validation set for the NLL baseline, REINFORCE and RELAX.

![](images/aa2f275de06cdec87cff303eef42b3ca9e5d3c87e5ca4826a7a624da9228d19f.jpg)
Figure 2: Comparison of the average ROUGE score across the input documents over the Multi-News validation set for the NLL baseline, REINFORCE and RELAX. The average is taken over the ROUGE-1, ROUGE-2, and ROUGE-L scores.

# 5.2 Few-Shot Fine-Tuning

To explore the behavior of the few-shot fine-tuning, we compare the validation-set performance on Multi-News with a varying number of training examples, from 10 to 2000. We use our best model configuration, with RELAX and the coverage term in the reward. Table 3 shows that the performance is highest with 1000 examples and starts to drop beyond this number.
This is an important observation, as it shows that the reinforcement learning objective may lead to undesirable parameterizations beyond a point, and that the number of fine-tuning samples has to be treated as a hyperparameter. + +
| # Examples | Avg. ROUGE | Avg. METEOR |
|---|---|---|
| Baseline | 29.84 | 29.48 |
| 10 | 29.84 | 29.48 |
| 100 | 29.89 | 29.44 |
| 1000 | 30.09 | 30.04 |
| 2000 | 29.80 | 29.39 |
+ +# 5.3 Configuring RELAX + +The RELAX gradient estimator introduces two new hyperparameters: the temperature parameter, $\tau$ , and the control variate, $c_{\phi}$ . Hereafter, we discuss their impact and design. + +Temperature parameter. The RELAX gradient estimator uses a temperature parameter, $\tau$ , in the Gumbel-Softmax sampling (3). This parameter is maintained in log scale for convenience and is learnable alongside all other parameters; yet, its initial value can have a significant impact on the final model. To explore its behavior, Figure 3 shows the trajectory of parameter $\log \tau$ over 1000 Multi-News training steps for different initializations (0.25, 0.5 and 1.0). The trajectories show that, irrespective of its initial value, $\log \tau$ converges to a stable value within approximately 400 training steps. For the initializations at 0.25 and 1.0, within the first 200-300 training steps $\log \tau$ drifts significantly ( $\approx \pm 0.25$ units) from its initial value. Conversely, with the intermediate initialization at 0.5, the value remains substantially stable over the whole trajectory. Since limiting drift during finetuning is generally desirable, we have initialized $\log \tau$ to 0.5 in all experiments. + +Control variate size. Many different architectures could be used for the control variate, but given our choice described in Section 3.1, the main parameter is the feed-forward layers' hidden size. To explore its impact, Table 4 shows the average values of the ROUGE score and the coverage score over the Multi-News validation set with different hidden sizes (128, 256, and 512). The ROUGE score is computed between the prediction and the reference and is the average of ROUGE-1/2/L, while the coverage score is the average EFC of all the input documents. 
The values in Table 4 show that the larger the control variate, the more the model is able to increase the coverage score. However, the average ROUGE score drops beyond a size of 256. We speculate that this behavior is due to the larger scale of the coverage reward: by providing more capacity to the network, we allow the control variate to increasingly correlate with the multi-document coverage reward rather than the ROUGE reward. To strike a satisfactory trade-off, we have therefore chosen 256 as the hidden size for all experiments with Multi-News, and carried out an equivalent selection for WCEP.

![](images/d177c083a0fb86f40dc786ac1dd450a775a7bf5c79abea733ff30c649370df9c.jpg)
Figure 3: Trajectory of the $\log(\tau)$ temperature parameter over 1000 Multi-News training steps for different initializations.

Table 3: Comparison of the average ROUGE and METEOR scores with different fine-tuning sizes over the Multi-News validation set.
| Hidden Size | Avg. ROUGE | Avg. Coverage |
|---|---|---|
| 128 | 30.01 | 0.4821 |
| 256 | 30.09 | 0.4849 |
| 512 | 29.91 | 0.5038 |
Table 4: Comparison of the average ROUGE and coverage scores over the Multi-News validation set with different hidden sizes of the control variate.

# 5.4 Coverage Reward Trajectory

In a reinforcement learning framework, it is useful to monitor the reward over the training steps: typically, it should exhibit an upward trajectory as the model learns to make better predictions. In our case, we explore the impact of our coverage reward on the coverage distribution over the input documents. In particular, we want to verify whether the coverage reward is able to promote predictions that cover the input documents more evenly, which should translate into a decreased standard deviation. To this end, Figure 4 shows a plot of the standard deviation of the coverage scores (EFC) across the input document set against the training step. The trajectories show that both REINFORCE and RELAX have been able to decrease the standard deviation of the predictions to approximately 0.05 units from initial values of $0.08-0.09$. The drop in standard deviation occurs quite quickly during training, coinciding with the improvement in the reward value of the predictions. Comparing REINFORCE with RELAX also shows that RELAX has been able to achieve lower standard deviation values throughout the training, with the exception of the very start.

![](images/5fb93d3cffcd34c1c3ac5aedd976766bc5d104ea540c80d789c17cb7322915bd.jpg)
Figure 4: Standard deviation of the coverage scores (EFC) across the input documents for REINFORCE and RELAX against the training step. For both estimators, the standard deviation drops below 0.06 very early in the training and settles at approximately 0.05.
# 6 Conclusion

In this paper, we have proposed fine-tuning a multi-document summarization model with a reward that balances the use of the reference summaries with the coverage of the input documents within a reinforcement learning framework. The rationale for the proposed reward is that the reference summaries alone may not be sufficient for effective fine-tuning of the model in the presence of very large inputs such as those typical of MDS datasets. Another key component of the proposed approach is the use of a modern gradient estimator of the policy gradient, RELAX. The experimental results over two news-based MDS datasets, Multi-News and WCEP, have shown that the proposed approach achieves a marked improvement in ROUGE and METEOR scores compared to its NLL-pretrained baseline, and proves competitive against most existing approaches. In addition, the proposed approach increases the coverage of the input documents, evenly across the entire document set. As future work, we aim to explore ways to prevent or mitigate the model's drift with larger numbers of training steps, and to explore alternative architectures and configurations for the control variate of the RELAX estimator.

# References

Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv:2004.05150.
Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS'15, page 1171-1179, Cambridge, MA, USA. MIT Press.
Jaime Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '98, page 335-336, New York, NY, USA.
Association for Computing Machinery. +Janara Christensen, Mausam, Stephen Soderland, and Oren Etzioni. 2013. Towards coherent multidocument summarization. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1163-1173, Atlanta, Georgia. Association for Computational Linguistics. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Nan Ding and Radu Soricut. 2017. Cold-start reinforcement learning with softmax policy gradient. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. +Markus Dreyer, Mengwen Liu, Feng Nan, Sandeep Atluri, and Sujith Ravi. 2021. Analyzing the abstractiveness-factuality tradeoff with nonlinear abstractiveness constraints. CoRR, abs/2108.02859. +Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker's guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the + +Association for Computational Linguistics (Volume 1: Long Papers), pages 1383-1392. Association for Computational Linguistics. +Günes Erkan and Dragomir R. Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457-479. +Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1074-1084, Florence, Italy. 
Association for Computational Linguistics. +Alexander R. Fabbri, Wojciech Krysciński, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating Summarization Evaluation. Transactions of the Association for Computational Linguistics, 9:391-409. +Demian Gholipour Ghalandari, Chris Hokamp, Nghia The Pham, John Glover, and Georgiana Ifrim. 2020. A large-scale multi-document summarization dataset from the Wikipedia current events portal. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1302-1308, Online. Association for Computational Linguistics. +Will Grathwohl, Dami Choi, Yuhuai Wu, Geoff Roeder, and David Duvenaud. 2018. Backpropagation through the void: Optimizing control variates for black-box gradient estimation. In International Conference on Learning Representations. +Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 708-719, New Orleans, Louisiana. Association for Computational Linguistics. +Chris Hokamp, Demian Gholipour Ghalandari, Nghia The Pham, and John Glover. 2020. Dyne: Dynamic ensemble decoding for multi-document summarization. CoRR, abs/2006.08748. +Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In International Conference on Learning Representations. +Wojciech Krysciński, Romain Paulus, Caiming Xiong, and Richard Socher. 2018. Improving abstraction in text summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1808-1817, Brussels, Belgium. Association for Computational Linguistics. + +Alon Lavie and Abhaya Agarwal. 2007. 
METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 228-231, Prague, Czech Republic. Association for Computational Linguistics. +Logan Lebanoff, Kaiqiang Song, and Fei Liu. 2018. Adapting the neural encoder-decoder framework from single to multi-document summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4131-4141, Brussels, Belgium. Association for Computational Linguistics. +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics. +Siyao Li, Deren Lei, Pengda Qin, and William Yang Wang. 2019. Deep reinforcement learning with distributional semantic rewards for abstractive summarization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6038-6044, Hong Kong, China. Association for Computational Linguistics. +Wei Li, Xinyan Xiao, Jiachen Liu, Hua Wu, Haifeng Wang, and Junping Du. 2020. Leveraging graph to improve abstractive multi-document summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6232-6243, Online. Association for Computational Linguistics. +Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics. +Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 
2018. Generating wikipedia by summarizing long sequences. In International Conference on Learning Representations. +Yang Liu and Mirella Lapata. 2019. Hierarchical transformers for multi-document summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5070-5081, Florence, Italy. Association for Computational Linguistics. +Inderjeet Mani and Eric Bloedorn. 1997. Multi-document summarization by graph search and matching. In Proceedings of the Fourteenth National Conference on Artificial Intelligence and + +Ninth Conference on Innovative Applications of Artificial Intelligence, AAAI'97/IAAI'97, page 622-628. AAAI Press. +Yuning Mao, Yanru Qu, Yiqing Xie, Xiang Ren, and Jiawei Han. 2020. Multi-document summarization with maximal marginal relevance-guided reinforcement learning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1737-1751, Online. Association for Computational Linguistics. +Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarrunner: A recurrent neural network based sequence model for extractive summarization of documents. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17, page 3075-3081. AAAI Press. +Mir Tafseer Nayeem, Tanvir Ahmed Fuad, and Yllias Chali. 2018. Abstractive unsupervised multi-document summarization using paraphrastic sentence fusion. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1191-1204, Santa Fe, New Mexico, USA. Association for Computational Linguistics. +Jacob Parnell, Inigo Jauregi Unanue, and Massimo Piccardi. 2021. *RewardsOfSum: Exploring reinforcement learning rewards for summarisation.* In Proceedings of the 5th Workshop on Structured Prediction for NLP (SPNLP 2021), pages 1-11, Online. Association for Computational Linguistics. +Ramakanth Pasunuru, Mengwen Liu, Mohit Bansal, Sujith Ravi, and Markus Dreyer. 2021. 
Efficiently summarizing text and graph encodings of multi-document clusters. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4768-4779, Online. Association for Computational Linguistics. +Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In International Conference on Learning Representations. +Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings. +S. J. Rennie, E. Marcheret, Y. Mroueh, J. Ross, and V. Goel. 2017. Self-critical sequence training for image captioning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1179-1195. +Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 + +Conference on Empirical Methods in Natural Language Processing, pages 379-389, Lisbon, Portugal. Association for Computational Linguistics. +Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083, Vancouver, Canada. Association for Computational Linguistics. +Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. 1999. Policy gradient methods for reinforcement learning with function approximation. In Proceedings of the 12th International Conference on Neural Information Processing Systems, NIPS'99, page 1057-1063, Cambridge, MA, USA. MIT Press. +Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. 
Learn., 8(3-4):229-256.
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. 2020. Big bird: Transformers for longer sequences. In Advances in Neural Information Processing Systems, volume 33, pages 17283-17297. Curran Associates, Inc.
Jianmin Zhang, Jiwei Tan, and Xiaojun Wan. 2018. Adapting neural single-document summarization model for abstractive multi-document summarization: A pilot study. In Proceedings of the 11th International Conference on Natural Language Generation, pages 381-390, Tilburg University, The Netherlands. Association for Computational Linguistics.
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020a. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 11328-11339. PMLR.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020b. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations.

# A Model and Datasets

# A.1 Model Hyperparameters

Our baseline model is the BART Longformer encoder-decoder (BART-LED) of Beltagy et al. (2020). In all experiments, it has first been pre-trained over the training set using the negative log-likelihood until convergence, which has typically occurred within 5 epochs, in line with what is reported in Pasunuru et al. (2021). The best models on the validation set (average ROUGE) have then been fine-tuned with the reinforcement learning objectives in (2) and (4). BART-LED has 459M parameters, and the addition of the control variate for the RELAX experiments adds approximately 12M more parameters. We have used the Adam optimizer for training both the main BART-LED model and the parameters of the control variate of the RELAX gradient estimator.
The learning rate of the main optimizer for the pre-training of the baseline has been set to the default of $3 \times 10^{-5}$ (Beltagy et al., 2020), before being reduced to $3 \times 10^{-6}$ for fine-tuning with the reinforcement learning approaches. For training the control variate, we have set an initial learning rate of $1 \times 10^{-2}$ and initialized the learnable $\log \tau$ parameter with a value of 0.5. The optimal size of the hidden layers of the control variate appears to empirically correlate with the maximum length of the reference summaries in the dataset. For Multi-News, we have set the size to 256, and for WCEP to 40. For the multi-document coverage reward, we have used a $\beta$ value of 1.0. The entire model has been implemented on top of PyTorch Lightning. Please refer to Table 5 for a full list of the hyperparameters. For all experiments, we have used an NVIDIA Quadro RTX 6000 GPU with 24 GB of memory.

# A.2 Dataset Links and Statistics

Multi-News. Accessible via the Hugging Face Datasets Python package: https://github.com/huggingface/datasets/tree/master/datasets/multi_news.

For fine-tuning, we have pulled the raw data from the authors' own repository: https://github.com/Alex-Fabbri/Multi-News.
| Hyperparameter | Multi-News | WCEP |
|---|---|---|
| Training LR | $3 \times 10^{-5}$ | $3 \times 10^{-5}$ |
| Tuning LR | $3 \times 10^{-6}$ | $3 \times 10^{-6}$ |
| RELAX LR | $1 \times 10^{-2}$ | $1 \times 10^{-2}$ |
| log(τ) (RELAX) | 0.5 | 0.5 |
| CV Size | 256 | 40 |
| β (Coverage) | 1.0 | 1.0 |
| Max Input Length | 16384 | 16384 |
| Max Output Length | 256 | 40 |
| Label Smoothing | 0.0 | 0.0 |
| Training Epochs | 5 | 5 |
| Tuning Epochs | 1 | 1 |
| Batch Size | 1 | 1 |
| Beam Size | 1 | 1 |
+ +WCEP. Accessible from the following repository: https://github.com/complementizer/wcep-mds-dataset. + +Table 6 shows the datasets' main statistics, including the number of samples within each split and the average length in tokens of the reference summaries. For our model, we have used the average length of the reference summaries as a guidance for choosing the maximum output length and the hidden size of the control variate. + +Table 5: Hyperparameters used for training and evaluation. LR refers to Learning Rate, CV refers to Control Variate. + +
| Dataset | Train | Test | Dev | Avg. Tokens |
|---|---|---|---|---|
| Multi-News | 44.9K | 5.6K | 5.6K | 263.7 |
| WCEP | 8.1K | 1.0K | 1.0K | 33.3 |
Table 6: Main statistics of the datasets used in the experiments. Multi-News has up to 10 individual articles in each document set, while WCEP has up to 100. The document split sizes have been rounded.

# A.3 Scale of the Coverage Reward

The multi-document reward is combined linearly in (8) with the ROUGE-L F1 score, and an appropriate value of the mixing coefficient, $\beta$, needs to be explored. To this end, Table 7 shows the values of the average ROUGE and coverage scores over the Multi-News validation set. The coverage has not increased monotonically with the $\beta$ coefficient, while the ROUGE score has reached its maximum for $\beta = 1.0$. As a trade-off between the two, we have set $\beta = 1.0$ in all experiments.
| $\beta$ | Avg. ROUGE | Avg. Coverage |
| --- | --- | --- |
| 0.5 | 29.97 | 0.5074 |
| 1.0 | 30.09 | 0.4849 |
| 2.0 | 30.00 | 0.5068 |
| 5.0 | 29.87 | 0.4823 |
+ +Table 7: Average ROUGE and coverage scores over the Multi-News validation set for different values of the reward mixing coefficient, $\beta$ . + +# B Qualitative Analysis + +Tables 8 through 11 present two qualitative examples, one per dataset, where we specifically compare our RELAX implementation with and without the use of the coverage term in the reward. Key points are highlighted in various colors, comments are addressed in the captions, and the ROUGE and METEOR scores are reported for each prediction. Document sets longer than a full page have been truncated to fit. + +# Source Document + +Plenty of churches contain relics of saints, but not many of those relics were found in excavations from sixth-century churches. Archaeologists at a medieval fortress site in Burgas, Bulgaria, found a lead vessel, which contains some of the ashes from the alleged grave of John the Apostle, in a reliquary that dates to the sixth century C.E. The reliquary, which was once part of an early Christian basilica, is named for Saint John the Theologian, who is considered one of Jesus' apostles. The vessel, which is less than an inch long, is decorated with crosses. Milen Nikolov, director of the Burgas Regional Museum of History, said that early Christians would have believed the relic had healing properties. John the Apostle's grave in Turkey was also a pilgrimage site for early Christians seeking healing, Ancient Origins reports. Nikolov said the reliquary was "one of the most important discoveries" in the museum's history. In addition to the relic, the archaeologists also uncovered a 10th century Bulgarian royal seal at the fortress site. Meghan DeMaria [END] Ashes from the grave of John the Apostle, one of the Twelve Apostles of Jesus Christ, have been discovered in a lead tube reliquary by Bulgarian archaeologists during excavations of the ancient and medieval port of Burgos (also known as Poros) on Cape Foros in today's Black Sea city of Burgas. 
The discovery of the lead tube containing ashes from the grave of John the Apostle, who is known as St. John the Theologian in Bulgarian (Eastern) Orthodox Christianity, located in the ancient city of Ephesus in Anatolia, today's Turkey, has been made during the 2014 excavations of the fortress of Burgos (or Poros) on Cape Foros in Burgas but was announced only on Wednesday, March 25, 2015, by Milen Nikolov, Director of the Burgas Regional Museum of History, at a special press conference. He has also announced other intriguing finds such as the discovery of a Late Antiquity latrine, also found at Burgos (Poros), and the discovery of a 10th century Bulgarian royal seal from the Rusocastro Fortress. The structures at the ancient and medieval fortress and port of Burgos (Poros) which were excavated in 2014 include an Early Christian basilica from the 6th century AD, a building complex from the 5th-6th century AD, and a Roman villa from the 3rd century AD. The John the Apostle reliquary was found in the 6th century basilica. "Probably a pilgrim from the Foros Peninsula (Cape) went on a pilgrimage to Ephesus, and came back here with this relic which was then donated to the basilica on Foros," Nikolov has explained, as cited by local news site Gramofona. Nikolov has described the finding of the reliquary as "one of the most important discoveries in the history of the [Burgas Regional History] Museum", and the lead tube as a "holy possession that preserved a holy substance" having to do with the beliefs that every year on May 8, the date of John the Apostle's death, there is manna, a holy curing powder, on the site of his grave. The lead tube reliquary itself containing the ashes from the grave of John the Apostle (St. John the Theologian) is really tiny: it is only $2.2\mathrm{cm}$ (less than an inch) long, and its diameter measures $1.7\mathrm{cm}$ . 
The reliquary is dated to the 6th century AD when pilgrimage to the Holy Lands was very common among Christians, Nikolov explains. On one of its sides there is an image of a cross with equal arms inside a medallion, and on the opposite side there is an image of two overlapping crosses with equal arms. The neck of the tube is also decorated with crosses. It has only one handle left, the other has broken off. In addition to the so called Empty Tomb, i.e. the Tomb of Jesus Christ in Jerusalem, the other centers of Christian pilgrimage in the 6th century AD included the grave of St. Menas in Abu Mina in Egypt; the grave of St. Simeon Stylites the Elder in Antioch (in today's Turkey); the grave of St. Thecla (or Tecla) in Seleucia, Mesopotamia; the grave of St. Isidore of Chios on the Aegean island of Chios; and the graves of John the Apostle (St. John the Theologian), St. Mary Magdalene, and St. Timothy in Ephesus. All of these Early Christian pilgrimage centers produced primarily clay tubes for holy water; a total of only 43 lead tubes from this time period are known in the entire world, the Bulgarian archaeologists from the Burgas Museum point out. They explaining 20 of those known lead tubes have been found in the St. John the Baptist Basilica in Monza, Italy (the Monza Cathedral); they were a gift from Lombard Queen Theodelinda (c. 570-628) made at the beginning of the 6th century. Another 16 lead tubes have been found in a grave in the Bobbio Abbey (a monastery founded by Irish Saint Columbanus in 614 AD) in the Italian town of Bobbio, close to Milan. One lead tube reliquary has been discovered in the Sant Pere de Casserres Abbey, a Benedictine monastery in the town of Les Masies de Roda, Osona comarca, Catalonia, Spain. In addition to these lead tube reliquaries, three others are kept in Germany and four in the USA, all of which were produced in Jerusalem and have depictions of Gospel scenes. 
Even though the reliquary discovered by the Bulgarian archaeologists in the basilica in the ancient and medieval fortress Burgos (Poros) on Cape Foros is also a lead tube, it is different from the other known lead tube reliquaries because the images on it are identical with the images from a group of clay tube reliquaries produced in ancient Ephesus. Follow us on Facebook, Twitter, Google+, Tumblr! "That is why at this stage we believe that the Burgas reliquary comes from this pilgrimage center (i.e. Ephesus) and it must be connected with the cult for St. John the Theologian (John the Apostle)," the head of the Burgas Museum of History, Milen Nikolov, explains. He also notes that John the Apostle was particularly cherished by the Early Christians. According to the Bible, John was Jesus Christ's favorite disciple, and when Jesus was crucified he asked John to take care of the Holy Mother, Virgin Mary. Later, John the Apostle settled in the ancient city of Ephesus together with Virgin Mary and St. Mary Magdalene. This is where he wrote the Book of Revelation, also known as The Apocalypse, and lived till the rest of his life. According to some historical sources, Christian pilgrims from around the world would gather on his grave in the Ephesus basilica on May 8, the date of his death. They would sprinkle rose petals on the rock above the basilica, and the next day wonder-working powder would appear on the rock. This manna could cure all kinds of diseases, which is why it was collected by the pilgrims in reliquaries and taken to their places of origin as evidence of their pilgrimage or as an apotropheus (an apotropaic item, i.e. an amulet chasing away evil). Some scholars believe the manna collected by the pilgrims came from the pollen from the roses they placed on John the Apostle's grave in Ephesus. "That is why, at this point, we believe that a pilgrim from the fortress of Poros went on a pilgrimage to the grave of St. 
John the Theologian in Ephesus from where he brought the valuable reliquary with curing powder," Nikolov elaborates... + +Table 8: Multi-News example. Document set with 2 individual input documents, separated by an [END] token. The comparison of summaries is in the following Table 9. The input has been truncated to fit the page. + +# Reference + +Archaeologists in Bulgaria think they've found ashes belonging to one of the 12 apostles. The team found a small lead container with ashes while excavating a basilica that dates back to the sixth century AD, reports Archaeology in Bulgaria. The markings on the inch-long vessel match those at the grave of St. John the Apostle in what is now Turkey, leading researchers to surmise that a long-ago Christian from Bulgaria traveled to the grave site and returned home with the sample of ashes. Such journeys were common around that time, reports Novinite. "Probably a pilgrim from the Foros Peninsula went on a pilgrimage to Ephesus, and came back here with this relic, which was then donated to the basilica on Foros," says the director of the Burgas Regional Museum of History in Bulgaria. (The ancient basilica is located in what is now the Black Sea city of Burgas.) Archeologists also found a Bulgarian royal seal from the 10th century at the dig site, which includes a fortress, notes the Week. (Another archaeological discovery: an ancient underground city in Turkey.) + +# NLL (47.05/16.12/24.59/28.10) + +Archaeologists have made a pretty significant find in Bulgaria: ashes from the grave of one of Jesus Christ's Twelve Apostles. The lead tube containing the ashes of St. John the Theologian, who is believed to have lived in what is now Turkey, was found in a medieval fortress and is named for the early Christian basilica that once stood there, Ancient Origins reports. The lead container, which is less than an inch long, is decorated with crosses and has only one handle, reports the International Business Times. 
The lead container is believed to have been carried by a pilgrim who made the trip to the ancient city of Ephesus, where the Apostle was once a favorite of Jesus. The grave of John the Theologian was a popular pilgrimage destination for early Christians seeking healing, and the lead container "must be connected with the cult for St. John the Theologian," says the head of the Burgas Regional Museum of History. The find was announced Wednesday, but it wasn't made public until Thursday. (Click to read about how a lead container found in a 14th-century tomb could help heal ancient wounds.) + +# ROUGE-L + RELAX (56.76/25.06/30.76/37.10) + +Archaeologists have made "one of the most important discoveries" in the history of the Burgas Regional Museum of History in Bulgaria, says the museum's director: a lead reliquary that contains ashes from the alleged grave of one of Jesus' Twelve Apostles. The tiny lead container, which is less than an inch long, was found during last year's excavations of the fortress of Burgos on Cape Foros, reports Ancient Origins. The reliquary was once part of an early Christian basilica, and "probably a pilgrim from the Foros Peninsula went on a pilgrimage to Ephesus, and came back here with this relic which was then donated to the basilica," says Milen Nikolov. The find was announced Wednesday, but the news was made public on Wednesday. The find was made during the same excavations that turned up a 10th-century royal seal, reports the Independent. The find is significant because early Christians believed the relic had healing properties, and John the Apostle's grave in Turkey was a popular pilgrimage site for those seeking healing, reports Ancient Origins. (Click to read about how a lead-tipped glass bottle was found in a 14th-century Roman city.) 
+ +# ROUGE-L + Coverage + RELAX (57.59/30.00/31.93/38.64) + +Archaeologists have made "one of the most important discoveries" in the history of the Burgas Regional Museum of History in Bulgaria, says the museum's director: a lead reliquary that contains ashes from the alleged grave of one of Jesus Christ's apostles. The tiny lead container, which is less than an inch long, was found in a 6th-century basilica in what is now the Black Sea city of Burgas, reports Ancient Origins. "Probably a pilgrim from the Foros Peninsula went on a pilgrimage to Ephesus, and came back here with this relic which was then donated to the basilica," says Milen Nikolov, per the Independent. The reliquary is named for St. John the Theologian, who is believed to have been one of the Twelve Apostles. The find was made during 2014 excavations at the fortress of Burgos on Cape Foros, but the announcement was made only this week. The head of the museum says the reliquary is connected to the belief that there is manna, a holy curing powder, on the site of John the Apostle's grave every year. (Click to read about how a lead box found in a cave has been analyzed for ancient DNA.) + +Table 9: Multi-News example. Comparison of reference, NLL baseline, and RELAX-generated summaries for the document in Table 8. We compare specifically the addition of the coverage term in the reward, to qualitatively show its importance. The R1/R2/RL/Meteor scores are shown in the headers. Highlighted in blue are examples of key information that allow for the summary to remain faithful to the reference. Highlighted in green are examples where the coverage term has managed to improve the quality of the summary. Highlighted in red are examples where the model has conveyed incorrect statements with respect to the input documents, and where the subsequent use of the coverage has seemingly improved it. We note that these results are also in line with the average scores presented in Table 1. 
+ +# Source Document + +Greece's conservative prime minister-elect Kyriakos Mitsotakis vowed that the country would "proudly" enter a post-bailout period of "jobs, security and growth" after winning a landslide victory in Sunday's general election. Official results showed Mitsotakis on track to crush leftist premier Alexis Tsipras, who oversaw austerity measures after Greece's dramatic rescue by international creditors in the European debt crisis. "A painful cycle has closed," Mitsotakis said in a televised address, adding that Greece would "proudly raise its head again" on his watch. "I will not fail to honour your hopes," he said as early congratulation calls came from outgoing European Commission chief Jean-Claude Juncker and Turkish President Recep Tayyip Erdogan. With official results from 94 per cent of polling stations, New Democracy scored a crushing victory by nearly 40 per cent - its best score in over a decade - to 31.5 per cent for Tsipras's leftist Syriza party. "I want to see this people prosper. I want to see the children who left to return," he later told party supporters. Mitsotakis will be sworn in as Greece's new prime minister on Monday. Tsipras had earlier admitted defeat after over four years in power that saw Greece emerge from its third bailout. The 44-year-old warned that his Syriza party would "dynamically" resist efforts to scale back the party's pro-labour reforms. If the results are confirmed, the 51-year-old Harvard graduate and former McKinsey consultant Mitsotakis will have a majority of 158 lawmakers in the 300-seat parliament. Tsipras's party will have 86 seats. The final number will depend on how smaller parties fare. They need at least 3.0 percent of the vote to enter parliament. New Democracy was last in power in 2014, in coalition with the Greek socialists. Mitsotakis is a scion of one of Greece's top political families. 
He is the son of former prime minister Constantine Mitsotakis, one of the country's longest-serving parliamentarians. His sister is former minister Dora Bakoyannis, Athens's first female mayor. And new Athens mayor Costas Bakoyannis, elected in May, is his nephew. Sunday's election was Greece's third in as many months, and the first held in midsummer since 1928. In May, New Democracy beat Syriza by nearly 9.5 points in European parliament elections. A week later, it completed a near-sweep of Greek regions in local elections. After that, Tsipras was forced to call an early general election. His term was scheduled to end in the autumn. Greece's youngest premier in more than a century, Tsipras had trailed in the polls for months amid widespread dissatisfaction over high taxes. "Greece is exiting 10 years of crisis and the new government will have the heavy task to give a chance to the country to recover completely or to sink", 36-year-old Aphrodite told AFP, as she cast her vote in the bohemian downtown Athens neighborhood of Exarcheia. "I hope that from tomorrow we will be able to breathe with relief. To take a deep breath, if Mitsotakis does what he promises," added Athinodoros, a 48-year-old self-employed worker. Tsipras has accused Mitsotakis - who was part of a 2012-2014 crisis government - of "disastrous" mismanagement that brought hundreds of thousands of job losses and business failures. Mitsotakis has now pledged to create "better" jobs through growth, foreign investment and tax cuts and to "steamroll" obstacles to business. Tsipras - who reduced unemployment and raised the minimum wage for the first time since 2012 - was criticized for campaigning as an anti-austerity crusader before eventually accepting a third EU bailout and the economic cutbacks that entailed. In parts of the country, there was also a backlash against a controversial agreement with North Macedonia that ended a bitter 27-year dispute over the country's name. 
The new smaller parties fighting to secure representation are Greek Solution, a nationalist party formed by TV salesman Kyriakos Velopoulos, and MeRA25, an anti-austerity party founded by maverick economist and former Greek finance minister Yanis Varoufakis. According to the exit polls, Varoufakis's party could elect nine lawmakers. Greek Solution could end up with 10 deputies, while neo-Nazi party Golden Dawn looks likely to be shut out of parliament for the first time since 2012. Golden Dawn, until recently Greece's third-ranking party, is in steep decline amid an ongoing trial for the 2013 murder of an anti-fascist rapper, allegedly carried out with the knowledge of senior Golden Dawn members. Mitsotakis has promised to hit the ground running. A Eurogroup finance meeting on Monday will convene to discuss the state of Greece's economy after tax cuts rolled out by Tsipras in May. Get Breaking news, live coverage, and Latest News from India and around the world on NDTV.com. Catch all the Live TV action on NDTV 24x7 and NDTV India. Like us on Facebook or follow us on Twitter and Instagram for latest news and live news updates. Budget 2019: Find the latest news on ndtv.com/budget. Use the income tax calculator to learn about your tax liability [END] Investors expect new Greek Prime Minister Kyriakos Mitsotakis to prove that his business-friendly reputation is deserved. The former banker and management consultant will need to make good on pledges to address issues including government finances, sourced loans and crippling bureaucracy, while working within tight fiscal constraints. Although he has inherited an economy on the mend and a stock market that is soaring, they are rebounding from shrunken bases. Mr Mitsotakis must ensure that Greece can attract the investment it desperately needs and create jobs as the country digs itself out of a financial crisis that has lasted more than a decade and taken a toll on living standards. 
Here are the three main issues the new Greek government will have to deal with from day one: While the new government is not yet in place, the country's creditors want to send a clear message that it has to stick to its commitment of achieving a 3.5 per cent primary surplus every year until 2022. Former prime minister Alexis Tsipras' move to distribute handouts before the European elections has raised doubts about Greece's ability to meet its fiscal targets. The European Commission estimates that the freebies will lead to a fiscal cost of 1 per cent of gross domestic product for both this year and the next, meaning creditors may ask the new government for additional austerity measures. Mr Mitsotakis plans to rapidly legislate tax cuts that will come into effect from next year to spur economic activity and show investors that Greece is creating a more friendly business environment. The biggest challenge is addressing about €80 billion (S$122 billion) in bad loans. Lenders are speeding up efforts to cut soured debt by selling portfolios of non-performing exposures (NPEs), but they will need more tools to meet their ambitious targets of single-digit NPE ratios by 2021. Mr Mitsotakis' target is doubling Greece's growth rate to 4 per cent next year. To achieve that, he needs investments. To convince investors that they can trust the country again, he wants to immediately proceed with the long-delayed Hellinikon project. The flagship venture envisages the transformation of the former Athens airport site - more than two times the size of New York's Central Park - into a metropolitan park including luxury hotels, casino, marinas and apartments. But that will not be enough. The new government will have to deal with red tape, a sluggish judicial system and corruption, as well as speeding up privatisations, especially in the energy sector. [END]... + +Table 10: WCEP example. Document set with 25 individual input documents, separated by an [END] token. 
The comparison of summaries follows in Table 11. The input has been truncated to fit the page.
# Reference

Winner of the general election Kyriakos Mitsotakis is sworn in as the new Prime Minister of Greece, succeeding Alexis Tsipras.

# NLL (20.00/7.14/20.00/5.26)

Greek voters go to the polls for a general election.

# ROUGE-L + RELAX (70.58/68.75/70.58/56.68)

Conservative politician Kyriakos Mitsotakis is sworn in as the new Prime Minister of Greece.

# ROUGE-L + Coverage + RELAX (68.18/57.14/63.63/58.02)

Conservative politician Kyriakos Mitsotakis is sworn in as the new Prime Minister of Greece after defeating leftist leader Alexis Tsipras in yesterday's election.
+ +Table 11: WCEP example. Comparison of reference, NLL baseline, and RELAX-generated summaries for the document in Table 10. We compare specifically the addition of the coverage term in the reward, to qualitatively show its importance. The R1/R2/RL/Meteor scores are shown in the headers. Highlighted in blue are examples of key information that allow for the summary to remain faithful to the reference. Highlighted in green are examples where the coverage term has managed to improve the quality of the summary. As mentioned in Section 4.4, shorter summaries are involved in this dataset, and are more likely to result in higher ROUGE scores. In this example, both RELAX objectives have drastically improved the accuracy. We can also see that the model has been able to use the coverage term to improve the summary quality by adding relevant fragments, and lead to a higher Meteor score. We note that these results are in line with the average scores presented in Table 2. \ No newline at end of file diff --git a/amultidocumentcoveragerewardforrelaxedmultidocumentsummarization/images.zip b/amultidocumentcoveragerewardforrelaxedmultidocumentsummarization/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..07b76beee1e6470c09c61ed11aa512ef321462d7 --- /dev/null +++ b/amultidocumentcoveragerewardforrelaxedmultidocumentsummarization/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5095bff818b01cb095a1fa5c6fdfa41a618071f46694f774bff256ee0f45507d +size 468222 diff --git a/amultidocumentcoveragerewardforrelaxedmultidocumentsummarization/layout.json b/amultidocumentcoveragerewardforrelaxedmultidocumentsummarization/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..1469daa93fe25488cb2516b3589b79e82bb73959 --- /dev/null +++ b/amultidocumentcoveragerewardforrelaxedmultidocumentsummarization/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:ecea00f2c6759843286bf4eeb4b53dbe40deb3913cca20495fac8b750b720856 +size 434108 diff --git a/aneuralnetworkarchitectureforprogramunderstandinginspiredbyhumanbehaviors/1a98cd61-bf92-457c-9a82-7b25fe6f8edb_content_list.json b/aneuralnetworkarchitectureforprogramunderstandinginspiredbyhumanbehaviors/1a98cd61-bf92-457c-9a82-7b25fe6f8edb_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..5ab61ade2e625b1fd9cf66df6dc76c62d8695af4 --- /dev/null +++ b/aneuralnetworkarchitectureforprogramunderstandinginspiredbyhumanbehaviors/1a98cd61-bf92-457c-9a82-7b25fe6f8edb_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:67a7b1d0ebd2fedb9a86cb98a7fd533c42c35d94cb9905b505845da84bb28869 +size 78762 diff --git a/aneuralnetworkarchitectureforprogramunderstandinginspiredbyhumanbehaviors/1a98cd61-bf92-457c-9a82-7b25fe6f8edb_model.json b/aneuralnetworkarchitectureforprogramunderstandinginspiredbyhumanbehaviors/1a98cd61-bf92-457c-9a82-7b25fe6f8edb_model.json new file mode 100644 index 0000000000000000000000000000000000000000..ea7a65dcb9c2a826e80a9441ff49797ede2c0243 --- /dev/null +++ b/aneuralnetworkarchitectureforprogramunderstandinginspiredbyhumanbehaviors/1a98cd61-bf92-457c-9a82-7b25fe6f8edb_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:95f6f5bddaf1d6796380a80eecb260e662964529e1db5c4d3d4c9e477d25fadc +size 104417 diff --git a/aneuralnetworkarchitectureforprogramunderstandinginspiredbyhumanbehaviors/1a98cd61-bf92-457c-9a82-7b25fe6f8edb_origin.pdf b/aneuralnetworkarchitectureforprogramunderstandinginspiredbyhumanbehaviors/1a98cd61-bf92-457c-9a82-7b25fe6f8edb_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..1c9e9c3df5c543e36b9397ee45308c8ba9f613de --- /dev/null +++ b/aneuralnetworkarchitectureforprogramunderstandinginspiredbyhumanbehaviors/1a98cd61-bf92-457c-9a82-7b25fe6f8edb_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 
+oid sha256:8e015a52790c097d53ef8b1c3226943f629311a6c10682dd0cbcac4300126476 +size 561641 diff --git a/aneuralnetworkarchitectureforprogramunderstandinginspiredbyhumanbehaviors/full.md b/aneuralnetworkarchitectureforprogramunderstandinginspiredbyhumanbehaviors/full.md new file mode 100644 index 0000000000000000000000000000000000000000..da00ec8ffbce3301b2f3c0284b3b638547836b0f --- /dev/null +++ b/aneuralnetworkarchitectureforprogramunderstandinginspiredbyhumanbehaviors/full.md @@ -0,0 +1,348 @@ +# A Neural Network Architecture for Program Understanding Inspired by Human Behaviors + +Renyu Zhu1 Lei Yuan1 Xiang Li1* Ming Gao1 Wenyuan Cai2 + +1School of Data Science and Engineering, East China Normal University, Shanghai, China + +$^{2}$ Shanghai Hypers Data Technology Inc., Shanghai, China + +$\{52175100003, 51205903063\} @ \mathrm{stu.ecnu.edu.cn}$ {xiangli, mgao} @dase.ecnu.edu.cn + +wenyuan.cai@hypers.com + +# Abstract + +Program understanding is a fundamental task in program language processing. Despite the success, existing works fail to take human behaviors as reference in understanding programs. In this paper, we consider human behaviors and propose the PGNN-EK model that consists of two main components. On the one hand, inspired by the "divide-and-conquer" reading behaviors of humans, we present a partitioning-based graph neural network model PGNN on the upgraded AST of codes. On the other hand, to characterize human behaviors of resorting to other resources to help code comprehension, we transform raw codes with external knowledge and apply pre-training techniques for information extraction. Finally, we combine the two embeddings generated from the two components to output code embeddings. We conduct extensive experiments to show the superior performance of PGNN-EK on the code summarization and code clone detection tasks. 
In particular, to show the generalization ability of our model, we release a new dataset that is more challenging for code clone detection and could advance the development of the community. Our codes and data are publicly available at https://github.com/RecklessRonan/PGNN-EK.

# 1 Introduction

The past decades have witnessed the prosperity of programming platforms, such as *Github* and *Stack Overflow*. These platforms generate massive open-source code data, named "Big Code" in (Allamanis et al., 2018a). To automate software development and maintenance, based on the "Software Naturalness" hypothesis (Hindle et al., 2016), natural language processing (NLP) techniques have been applied to program understanding. After that, a series of downstream programming language processing (PLP) tasks can be performed, including code summarization (Zhang et al., 2020; Ahmad et al., 2020; Liu et al., 2021) and code clone detection (Zhang et al., 2019; Wang et al., 2020).

Existing works for understanding programs mainly utilize three types of information: code context, code structure and external knowledge. Specifically, code context refers to the token sequence in the code. For code structure, each code can be parsed into various types of intermediate representations, such as the AST (Abstract Syntax Tree), CFG (Control Flow Graph) and PDG (Program Dependence Graph). These representations capture the structural information of codes. Further, there also exists external knowledge associated with codes, such as API documentation and other exemplary codes. Despite their success, all these models ignore human behaviors in reading programs. Recently, Bengio et al. (2021) suggested potential futures of deep learning by comparing current AI methods with human learning abilities. This further prompts us to revisit program understanding: Can we develop a model that understands programs like humans?
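To make the structural view concrete, here is a small illustration (not part of the paper's pipeline) using Python's built-in `ast` module: the syntax tree exposes per-statement structure that a flat token sequence does not, which is roughly the granularity at which statement-based code models operate.

```python
import ast

# A tiny snippet; each top-level statement becomes one subtree of the AST.
code = "a = 1\nb = a + 2\nprint(b)"
tree = ast.parse(code)

# Statement-level node types: two assignments and one expression (the call).
stmt_types = [type(stmt).__name__ for stmt in tree.body]
print(stmt_types)  # prints ['Assign', 'Assign', 'Expr']
```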
In the domain of programming education, how people understand code has been studied. For example, based on a knowledge base including syntactic knowledge (e.g., programming basics) and semantic knowledge (e.g., API documentation), Schulte et al. (2010) offer a bottom-up reading technique, which assumes that people begin with individual code lines and chunks, and then combine them into higher-level abstractions. Further, Park et al. (2016) state that when people read code, reasoning about the hierarchical relationship of blocks, statements, expressions and variables is necessary. Based on these studies, we identify three key points about how humans understand code. First, the transitions of defined variables have to be traced. Second, humans usually adopt a "divide-and-conquer" strategy, which divides code based on statements and then understands it from a local-to-global view. Third, humans resort to external knowledge to comprehend code, such as API documentation and code examples written by experts.

In this paper, inspired by human behaviors for code comprehension, we propose a novel Partitioning-based Graph Neural Network with External Knowledge (PGNN-EK). To capture code context and structure, PGNN-EK upgrades the traditional AST and defines a novel substring-based AST called S-AST. In S-AST, we add edges between variables to trace the variable transitions, edges between adjacent tree leaves from left to right to enrich the context and structure information, and edges between sub-nodes corresponding to subtokens tokenized from user-defined identifiers to handle the Out of Vocabulary (OOV) problem (Karampatsis et al., 2020). Details will be illustrated later. After that, we first apply graph neural network (GNN) models on the S-AST to derive a code embedding. To further implement the "divide-and-conquer" reading strategy, we partition the S-AST into multiple subgraphs, which follow the sequence of statements in the original code.
For each subgraph, we use GNN models to generate the subgraph embedding. Then, these subgraph embeddings are fused to generate another code embedding. Since both of these code embeddings are derived from S-AST, we further aggregate them. On the other hand, to characterize the dependence on external knowledge for code comprehension, we traverse the AST of the original code to derive a sequence of tokens for syntactic knowledge and then append the API descriptions for semantic knowledge. We then apply CodeBERT (Feng et al., 2020) on the token sequence to capture external knowledge. Finally, PGNN-EK generates the output code embedding by combining the embedding derived from S-AST and the one from external knowledge.

To evaluate the model performance, we conduct experiments on the code summarization and code clone detection tasks. Before applying PGNN-EK to the code clone detection benchmark in CodeXGLUE (Shi et al., 2021), extracted from the BigCloneBench 2014 dataset (Svajlenko et al., 2014), we notice from the leaderboard that the results are incredibly high: the minimum F1 score is 0.949. Diving into the characteristics of the dataset, we find that the functionalities of codes in the test set have all appeared in the training set. Therefore, the dataset is very simple. To further test the model's generalization ability, we construct a new dataset, where the test set contains codes whose functionality has never appeared in the training set. This new dataset provides an insightful reference for further research in the community.

Our main contributions are summarized as follows:

- We construct a new code structure representation S-AST that can be used to handle the OOV problem in PLP.
- We follow human behaviors in understanding codes and propose a novel model PGNN-EK that leverages code context, structure and external knowledge.
Specifically, we put forward a novel partitioning-based graph neural network model that can effectively use code context and structure. We also present a code transformation method that utilizes external knowledge to boost comprehension.
- We conduct extensive experiments on the code summarization and code clone detection tasks to demonstrate the effectiveness of our model. In particular, we identify the limitations of a benchmark dataset for code clone detection and release a new dataset that is more challenging.

# 2 Related Work

# 2.1 Program Understanding

Program understanding is a topic that has received wide attention. Early works use either code context or structure information. For example, treating code as raw text, some works use language models (Raychev et al., 2014; Allamanis et al., 2015), RNN-based models (Zaremba and Sutskever, 2014; Dam et al., 2016) and attention (Iyer et al., 2016) to represent code. However, unlike natural language, programs are highly structured and can be parsed into intermediate graphs such as the AST. Many code analysis methods have thus been proposed based on the AST, such as AST-based LSTM (Wei and Li, 2017), AST-based CNN (Yu et al., 2019), ASTNN (Zhang et al., 2019), code2vec (Alon et al., 2019b), and code2seq (Alon et al., 2019a).

![](images/e4ad7f287505a7ac043db54999fab856dda12431dc1eb3e9f1d7e76e0f4096be.jpg)
Figure 1: An example of S-AST. To simplify the graph, we create a code snippet (top left) whose variables are defined with only one character, such as "a" and "b". In real tasks, the code is longer and user-defined identifiers are more semantically complex, which could add more substring nodes and edges. The figure is better viewed in color.

Recently, GNN models have also been applied to code understanding.
Since the original AST is actually a tree and thus sparse, these works (Allamanis et al., 2018b; Wang et al., 2020; Wang and Li, 2021) first add edges to the AST to make it more connected and then apply GNN models. Further, there are also works (Yu et al., 2020; Cummins et al., 2021; Liu et al., 2021) that utilize other intermediate graphs such as the CFG, PDG and CPG (Yamaguchi et al., 2014). Recently, approaches that use both code context and structure have been proposed. For example, Hellendoorn et al. (2020) and Zügner et al. (2021) incorporate structure information derived from the AST, such as edge weights and node distances, into the context attention computation of the Transformer (Vaswani et al., 2017).

Despite their success, all these methods consider only the code context and structure information. There are also approaches that utilize the external knowledge associated with code. For example, some methods apply NLP pre-training techniques to boost comprehension, such as CodeBERT (Feng et al., 2020), GPT-C (Svyatkovskiy et al., 2020) and PLBART (Ahmad et al., 2021). There are also works that incorporate code characteristics into pre-training models, such as GraphCodeBERT (Guo et al., 2021), OSCAR (Peng et al., 2021) and InferCode (Bui et al., 2021). Further, APIs are another external source for program understanding, which has been exploited in many works (Hu et al., 2018; Xu et al., 2020). However, all these methods ignore human behaviors in program understanding.

# 2.2 Code Summarization and Code Clone Detection

In this paper, we focus on two program understanding downstream tasks: code summarization and code clone detection. For code summarization, some works (Iyer et al., 2016; Ahmad et al., 2020) use code context only, some methods (LeClair et al., 2019; Alon et al., 2019a) use code structure only, and there are also models (Hellendoorn et al., 2020; Zügner et al., 2021) that use both. Further, Liu et al.
(2021) introduce external knowledge for performance improvement. For code clone detection, existing works mainly employ code structure (Wei and Li, 2017; Zhang et al., 2019; Wang et al., 2020) and pre-training models (Feng et al., 2020; Ahmad et al., 2021).

# 3 S-AST Construction

In this section, we construct S-AST. The original AST has two main limitations:

- Low connectivity. The original AST is tree-structured, so any two nodes are connected by exactly one path. This can lead to long distances between leaf nodes. As pointed out by Alon and Yahav (2021), directly applying GNN models to tree-shaped graphs could cause the long-range problem.

- OOV problem. User-defined identifiers in code can be arbitrarily complex, and most of them are compound words, which can induce a large vocabulary. For example, the training set of the benchmark dataset CodeXGLUE (Lu et al., 2021) for code summarization contains 164,814 examples, while the vocabulary size for AST nodes is 620,256. After we split the nodes by camel case and underscores (Cvitkovic et al., 2019), the vocabulary size is still as high as 201,286. A very large vocabulary can cause the OOV problem (Jean et al., 2015) and thus adversely affect model performance.

To improve the connectivity of the AST, some works (Allamanis et al., 2018b; Wang et al., 2020; Wang and Li, 2021) add edges to the AST. However, these methods cannot address the OOV problem. Therefore, we propose a new intermediate code graph, S-AST, as shown in Figure 1. Similarly to Allamanis et al. (2018b) and Wang et al. (2020), we add data-flow edges to trace variable transitions and connect adjacent leaf nodes to encourage learning from context. To solve the OOV problem, we further reduce the vocabulary size by using the RoBERTa tokenizer (Liu et al., 2019) to tokenize every leaf node in the AST.
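For intuition, the effect of subword tokenization on compound identifiers can be illustrated with a simple camel-case/underscore splitter. This is a hypothetical stand-in for RoBERTa's BPE tokenizer, whose actual subword boundaries differ (as the "getLarger" example below shows), and `split_identifier` is an illustrative helper name, not part of the paper's implementation:

```python
import re

def split_identifier(name):
    """Split a compound identifier on underscores and camel-case
    boundaries, lowercasing the pieces (a simplified stand-in for
    a BPE subword tokenizer)."""
    parts = []
    for chunk in name.split("_"):
        # runs of capitals (e.g. "HTTP"), capitalized words,
        # lowercase runs, and digit runs
        parts.extend(re.findall(r"[A-Z]+(?![a-z])|[A-Z]?[a-z]+|\d+", chunk))
    return [p.lower() for p in parts]

# Distinct compound identifiers map onto a small set of shared
# subtokens, which is why the node vocabulary shrinks so much.
vocab = set()
for identifier in ["getLargerValue", "get_value", "parseHTTPHeader"]:
    vocab.update(split_identifier(identifier))
```

With such a splitter, `getLargerValue`, `get_value` and `parseHTTPHeader` contribute only the subtokens `get`, `larger`, `value`, `parse`, `http` and `header` to the vocabulary.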
When a leaf node can be tokenized into multiple subtokens, we keep the first subtoken as the parent node and take the other subtokens as its children. For example, the token "getLarger" is divided into the parent node "get" and the child nodes "L" and "arger". These new parent-child connections are defined as substring edges. With these three types of edges added, we increase the number of edges in the AST and improve the graph connectivity. Further, the vocabulary size can be significantly reduced. In our experiments, we use javalang to generate the Java AST and reduce the vocabulary size to 50,336, where 50,265 is the size of the original RoBERTa vocabulary and 71 is the number of keywords in non-leaf nodes defined by javalang.

# 4 Algorithm

In this section, we introduce the PGNN-EK model, which is composed of two main components. On the one hand, the partitioning-based graph neural network model (PGNN) is proposed to follow the "divide-and-conquer" behavior humans use to understand programs. On the other hand, PGNN-EK leverages external knowledge to enhance the model's capability. The overall architecture of PGNN-EK is summarized in Figure 2.

![](images/fada6c6058fe90b387d96cb73a7ddc1eb7d68e385f0ee9283cda316fc8c8265d.jpg)
Figure 2: The overall architecture of PGNN-EK

# 4.1 Partitioning-based Graph Neural Networks

As illustrated by Schulte et al. (2010) and Park et al. (2016), bottom-up reasoning on the hierarchical relationship of statements plays an essential role in human understanding. Therefore, we propose a statement-based partitioning algorithm that divides S-AST into multiple subgraphs. Since S-AST is no longer a tree, for convenience, we first keep the subtoken nodes and the edges between them, and remove the edges linking variables and those connecting adjacent leaf nodes, to derive a tree structure.
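The grouping rule at the core of this partitioning (described in detail below and in Alg. 1) can be sketched on statement-subtree node counts alone. This is a simplification: the full algorithm also copies variable-tracing nodes between subgraphs and restores the removed edges, and `partition_statements` is a hypothetical helper name:

```python
def partition_statements(subtree_sizes, lam):
    """Greedily group consecutive statement subtrees: accumulate node
    counts left to right, close a group once the running sum exceeds
    lam, and merge a too-small final group (fewer than lam / 2 nodes)
    into the penultimate one."""
    groups, current, total = [], [], 0
    for size in subtree_sizes:
        current.append(size)
        total += size
        if total > lam:
            groups.append(current)
            current, total = [], 0
    if current:  # leftover subtrees form the last group
        groups.append(current)
    if len(groups) > 1 and sum(groups[-1]) < lam / 2:
        groups[-2].extend(groups.pop())
    return groups

# Five statements with these node counts and lam = 10: the trailing
# group [3] is smaller than lam / 2, so it is merged into the
# previous group.
print(partition_statements([5, 8, 4, 9, 3], 10))  # [[5, 8], [4, 9, 3]]
```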
After that, we calculate the number of nodes in each subtree of the root node; each such subtree corresponds to a statement of the raw code. Then, we accumulate the number of nodes in the subtrees from left to right. When the sum exceeds a pre-defined threshold $\lambda$, we group these subtrees into one subgraph and reset the sum to zero. If the current subgraph is not the first one, for each variable node in it, we also add to the subgraph the closest node indicating the same variable in previous subgraphs, to trace the variable transition. After the subgraph is derived, we add edges between nodes that represent the same variable and also connect adjacent leaf nodes, as in the original S-AST. We repeat this process until all subtrees are visited. Note that if the number of nodes in the last subgraph is smaller than $\lambda/2$, we merge the last subgraph into the penultimate subgraph. The pseudocode of the partitioning algorithm is summarized in Alg. 1.

After the subgraphs are derived, as in (Hellendoorn et al., 2020), we adopt GGNN (Li et al., 2016) as the graph embedding model, which uses a multilayer perceptron (MLP) and a gated recurrent unit (GRU) to perform message passing and embedding updating. Specifically, at the $(l+1)$-th layer, to update the embedding $\mathbf{h}_i^{l+1}$ of node $x_i$, we have:

$$
\begin{array}{l} \mathbf{m}_i^{l+1} = \sum_{j \in \mathcal{N}_i} \mathrm{MLP}\left(\mathbf{h}_j^{l}, \mathbf{e}_{ij}\right), \\ \mathbf{h}_i^{l+1} = \mathrm{GRU}\left(\mathbf{m}_i^{l+1}, \mathbf{h}_i^{l}\right), \end{array}
$$

where $\mathcal{N}_i$ is the neighbor set of $x_i$ and $\mathbf{e}_{ij}$ is the feature vector of the edge between $x_i$ and $x_j$. After the node embeddings are generated, we use a READOUT function to obtain the graph embedding $\mathbf{G}$:

$$
\mathbf{G} = \mathrm{READOUT}\left(\{\mathbf{h}_i\}\right).
$$

We repeat the above process on each subgraph to derive a list of subgraph embeddings $\mathbf{L} = [\mathbf{G}_1, \mathbf{G}_2, \dots, \mathbf{G}_n]$, where $n$ is the number of subgraphs. Next, we keep the order of the subgraph list and feed $\mathbf{L}$ into a unidirectional LSTM:

$$
\mathbf{O} = \mathrm{LSTM}(\mathbf{L}).
$$

Inspired by the skip connection (He et al., 2016), we also run a GGNN on the whole S-AST graph to derive a code embedding $\mathbf{C}$. Finally, we concatenate $\mathbf{C}$ and the last output $\mathbf{O}[-1]$ of the LSTM, and feed the result into a fully connected layer to get the output code embedding $\mathbf{E}_p$:

$$
\mathbf{E}_p = \mathrm{FC}\left(\mathrm{Concat}\left(\mathbf{C}, \mathbf{O}[-1]\right)\right).
$$

# 4.2 External Knowledge

To understand programs, people often resort to external knowledge. For example, humans usually learn from massive amounts of exemplary code written by experts, expressed in a programming language, for better syntactic comprehension. Further, API documentation is written in natural language and provides semantic details about functions. A research question therefore arises: how can we fuse this external syntactic and semantic knowledge into our model?

To address this question, we use pre-training techniques from programming language processing (PLP), which are trained on massive code corpora to learn programming basics. In particular, we adopt CodeBERT (Feng et al., 2020), a bimodal pre-trained model for both programming language and natural language.

Before CodeBERT is applied, we first combine the raw code and the API descriptions.

![](images/2ba9e2c6931ada67496bbdc77c1faa72a32ad97a05d5e175934e5def5d41cae1.jpg)
Figure 3: A toy example of code transformation with external knowledge. The last sentence in the right box is the API description of Math.abs.

To enrich the
syntactic information contained in the raw code, we perform a pre-order traversal of the code's AST to obtain a sequence of tokens, which replaces the raw code. This is because the AST includes extra code-related information, such as statements, variables and operations. We then append the corresponding API description to the end. A toy example of the transformation is shown in Figure 3. We feed the transformed context $\mathbf{T}$ into the pre-trained CodeBERT and obtain the embedding $\mathbf{E}_e$:

$$
\mathbf{E}_e = \mathrm{CodeBERT}(\mathbf{T}).
$$

Finally, we concatenate the output embeddings of PGNN and CodeBERT, and feed the result into a fully connected layer to obtain the final embedding $\mathbf{E}_f$:

$$
\mathbf{E}_f = \mathrm{FC}\left(\mathrm{Concat}\left(\mathbf{E}_p, \mathbf{E}_e\right)\right).
$$

# 5 Experiments

In this section, we evaluate the performance of PGNN-EK. We conduct experiments on two program understanding tasks: code summarization and code clone detection. For each task, we use two benchmark datasets, whose statistics are listed in Table 1.

# 5.1 Implementation details

In our experiments, we use the AdamW optimizer and linear schedule from (Wolf et al., 2020) to update model parameters. For fair comparison, we run all experiments on two Tesla V100 GPUs with 32GB memory. For PGNN, we set the number of GNN layers, the number of LSTM layers, the GNN node embedding size, and the LSTM hidden size to 3, 2, 768 and 768, respectively. We choose the mean operator as the READOUT function. To avoid overfitting, we set the dropout rate to 0.2 in PGNN. We implement GNNs

Table 1: The statistics of datasets
| Task | Dataset | Training | Validation | Test | Description |
| --- | --- | --- | --- | --- | --- |
| Code summarization | CodeSearchNet-Java (CSN) | 164,814 | 5,179 | 10,952 | Provided by CodeXGLUE |
| Code summarization | TL-CodeSum (TLC) | 69,708 | 8,714 | 8,714 | Original |
| Code clone detection | BigCloneBench (BCB) | 901,028 | 415,416 | 415,416 | Provided by CodeXGLUE |
| Code clone detection | BigCloneBench-Function (BCB-F) | 398,110 | 78,602 | 81,202 | Split by functionality |
based on PyTorch Geometric (Fey and Lenssen, 2019). In the EK-enhanced component, we obtain 51,191 method-description pairs after preprocessing the API documentation; for example pairs, see Appendix B. For the code summarization task, we add a 6-layer Transformer-based decoder to generate summaries, as in CodeBERT. We set the learning rate to 0.00005, the batch size to 16, the number of training steps to 50,000, the maximum code length to 256 and the maximum summary length to 32. For the code clone detection task, following Neculoiu et al. (2016), we duplicate PGNN-EK into a siamese neural network to calculate code similarity. We set the learning rate to 0.00005, the batch size to 4, the number of training steps to 200,000 and the maximum code length to 400.

# 5.2 Code Summarization

Code summarization aims at generating natural language comments for code. We evaluate the performance of PGNN-EK on two benchmark datasets: TL-CodeSum (TLC for short) (Hu et al., 2018) and the Java subset of CodeSearchNet (CSN for short) (Husain et al., 2019). For TLC, we use the original dataset. For CSN, we use the version provided by CodeXGLUE (Lu et al., 2021). For fair comparison, we use the smoothed BLEU-4 score (Lin and Och, 2004), as in CodeXGLUE; the higher the score, the better the model performance. We compare our model with five representative baselines: CodeNN (Iyer et al., 2016), NCS (Ahmad et al., 2020), Rencos (Zhang et al., 2020), CodeBERT (Feng et al., 2020) and PLBART (Ahmad et al., 2021). Due to space limitations, we give the details of these baselines in Appendix C.

Table 2 shows the code summarization results. Note that the results of CodeNN, NCS and Rencos are taken directly from (Shi et al., 2021), and the results of CodeBERT and PLBART on CSN are derived from the CodeXGLUE leaderboard. For their results on TLC, we run the code released by the authors and set the hyper-parameters according to the original papers.
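As a reference point for the metric used above, a sentence-level smoothed BLEU-4 can be sketched as follows. This uses add-one smoothing on the 2/3/4-gram precisions, in the spirit of Lin and Och (2004); the official CodeXGLUE evaluation script differs in details, so this is an illustrative approximation rather than the exact scorer:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Counter of all n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def smoothed_bleu4(candidate, reference):
    """Geometric mean of 1- to 4-gram precisions (add-one smoothed for
    n > 1) times the brevity penalty; inputs are pre-split token lists."""
    precisions = []
    for n in range(1, 5):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        match = sum(min(c, ref[g]) for g, c in cand.items())  # clipped matches
        total = sum(cand.values())
        if n == 1:
            precisions.append(match / total if total else 0.0)
        else:
            precisions.append((match + 1) / (total + 1))  # add-one smoothing
    if min(precisions) == 0.0:
        return 0.0
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)
```

A perfect match scores 1.0, and a partial summary scores strictly between 0 and 1, matching the "higher is better" reading of Table 2.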
From the table, we see that, thanks to the fusion of external knowledge, the pre-training models CodeBERT, PLBART and PGNN-EK outperform the other models on both datasets. Further, PGNN-EK performs best: the gaps between PGNN-EK and the runner-up model PLBART on CSN and TLC are 0.5 and 1.05, respectively. This shows the importance of considering human behaviors for code comprehension. We also observe that scores on TLC are substantially higher than those on CSN. This is because the code in the training and test sets of TLC is considerably more similar in functionality, which will be elaborated in the next section.

Table 2: Code summarization results. We highlight the best results in bold. * indicates that the improvements are statistically significant at $p < 0.01$ with a paired t-test.
| Model | CSN | TLC |
| --- | --- | --- |
| CodeNN | 8.58 | 33.03 |
| NCS | 11.19 | 44.25 |
| Rencos | 11.80 | 46.81 |
| CodeBERT | 17.65 | 48.53 |
| PLBART | 18.45 | 50.01 |
| PGNN-EK | **18.95\*** | **51.06\*** |
# 5.3 Code Clone Detection

The goal of code clone detection is to detect whether two code fragments implement the same functionality. Following (Zhang et al., 2019; Wang et al., 2020), we use the BigCloneBench 2014 dataset (Svajlenko et al., 2014) and adopt the version provided by CodeXGLUE, which we refer to as BCB.

Before applying PGNN-EK to BCB, we notice from the CodeXGLUE leaderboard that the results on BCB are incredibly high, with a minimum F1 score of 0.949. We therefore dive into the characteristics of the dataset and compare BCB with the original benchmark (Svajlenko et al., 2014). We find that the functionalities of the code in the test set have all appeared in the training set of BCB; BCB is therefore a very easy dataset. To test the model's generalization ability, we construct a new dataset, named BCB-F, whose test set contains code whose functionality never appears in the training set. We first extract code from the newer version of the benchmark (Svajlenko and Roy, 2015), which has more code fragments and code functionalities. We then split the training/validation/test sets by code functionality. Specifically, we construct the training/validation/test sets with $22/11/10$ code functionalities, respectively. For details on the functionality splits of BCB and BCB-F, see Appendix D. We keep the same number of positive and negative samples in all three sets. The comparison between BCB and BCB-F is given in Table 3.

Table 3: Comparisons between BCB and BCB-F
| | BCB | BCB-F |
| --- | --- | --- |
| Code fragments | 9,134 | 73,182 |
| Functionalities | 10 | 43 |
| Training/Test splitting | random sample | by functionality |
| Ratio of positive to negative | nearly 2:1 | 1:1 |
In addition to the pre-training models CodeBERT and PLBART, we further compare our model with two representative methods for code clone detection, ASTNN (Zhang et al., 2019) and FA-AST (Wang et al., 2020) (for details of these baselines, see Appendix C).

Table 4 shows the evaluation results on the two datasets. For BCB, we take the results of the other baseline methods from CodeXGLUE. For BCB-F, we run the source code released by the authors to obtain the results. From the table, we observe: 1) All models perform very well on BCB, indicating that the dataset is very easy; however, the best F1 score on BCB-F is only 0.724, which shows that this dataset is very challenging. 2) The non-pre-training models ASTNN and FA-AST predict all samples to be positive and perform poorly on BCB-F, while the pre-training models perform better. This further demonstrates the importance of introducing external knowledge. 3) PGNN-EK achieves the best results on both datasets, which shows that considering human behaviors in program understanding enhances the generalization ability of PGNN-EK.

Table 4: Code clone detection results w.r.t. precision (P), recall (R) and F1 measures. We highlight the best results in bold. * indicates that the improvements are statistically significant at $p < 0.01$ with a paired t-test.
| Model | P (BCB) | R (BCB) | F1 (BCB) | P (BCB-F) | R (BCB-F) | F1 (BCB-F) |
| --- | --- | --- | --- | --- | --- | --- |
| ASTNN | 0.92 | 0.94 | 0.93 | 0.50 | 1.00 | 0.67 |
| FA-AST | 0.96 | 0.94 | 0.95 | 0.50 | 1.00 | 0.67 |
| CodeBERT | 0.960 | 0.969 | 0.965 | 0.611 | 0.842 | 0.708 |
| PLBART | - | - | 0.972 | 0.517 | 0.996 | 0.681 |
| PGNN-EK | **0.975\*** | **0.973\*** | **0.974\*** | **0.621\*** | 0.869 | **0.724\*** |
# 5.4 Ablation Study

We further conduct an ablation study to verify the importance of the main components of PGNN-EK, including subtokens, the S-AST graph, the partitioning-based GNN and the external knowledge. Specifically, one variant employs only the S-AST graph without using external knowledge; it helps us assess the importance of external knowledge in program understanding. We call this variant PGNN only. Meanwhile, we define another variant that ignores the hierarchical relationships in the code structure and uses only external knowledge. We call this variant EK only. To show the significance of S-AST in code understanding, we replace S-AST with the original AST in the variant PGNN-EK with AST. We also implement a variant that does not use the subtoken tokenizer to generate extra subtoken nodes and edges. We call it PGNN-EK without subtoken. This variant shows the importance of subtokens in addressing the OOV problem. To show the advantage of the partitioning strategy, we propose a variant GNN-EK that discards the partitioning step. Finally, we consider a variant that feeds the raw code into the pre-trained CodeBERT without transforming it with external knowledge. We call this variant PGNN-CodeBERT.

Table 5 summarizes the ablation study results. From the table, we see that: 1) S-AST contains richer information than the AST and can serve as an effective intermediate code representation for program understanding. The introduction of subtoken nodes and edges alleviates the OOV problem

Table 5: Ablation study on PGNN-EK. We highlight the best results in bold.
| Method | CSN (Smoothed BLEU-4) | TLC (Smoothed BLEU-4) | BCB (F1) | BCB-F (F1) |
| --- | --- | --- | --- | --- |
| PGNN only | 14.05 | 47.71 | 0.951 | 0.667 |
| EK only | 17.95 | 49.66 | 0.965 | 0.711 |
| PGNN-EK with AST | 17.70 | 48.96 | 0.957 | 0.713 |
| PGNN-EK without subtoken | 17.82 | 49.01 | 0.958 | 0.712 |
| GNN-EK | 18.05 | 49.95 | 0.967 | 0.715 |
| PGNN-CodeBERT | 18.60 | 50.65 | 0.969 | 0.720 |
| PGNN-EK (Full Model) | **18.95** | **51.06** | **0.974** | **0.724** |
and enhances the model performance. 2) External knowledge helps boost code understanding; in particular, code transformation with external knowledge improves the expressiveness of the raw code. 3) The full model PGNN-EK outperforms the other variants on all datasets and tasks, indicating the importance of every main component of PGNN-EK. It further shows that leveraging code context, code structure and external knowledge, as humans do, is helpful for program understanding.

# 5.5 The Influence of Subgraph Size

We end this section with a hyper-parameter sensitivity analysis. PGNN-EK has a key hyper-parameter $\lambda$ that controls the size of the subgraphs. Here, we investigate the sensitivity to $\lambda$. We vary the value of $\lambda$ over $\{10, 30, 50, 70, 90, 110, 130, 150, 170, 190\}$; the final prediction results of PGNN-EK on the 4 datasets are shown in Figure 4.

Table 6: The average number of nodes in S-AST

| Datasets | CSN | TLC | BCB | BCB-F |
| --- | --- | --- | --- | --- |
| S-AST size | 137 | 140 | 372 | 348 |
The results indicate that: 1) The model performance first increases and then drops as the subgraph size increases. When the subgraph size is too small, each subgraph is a code fragment that no longer represents a full statement and thus carries less information. When the subgraph is too large, each subgraph can contain statements with different semantic meanings, which degrades the model performance. 2) PGNN-EK performs best at $\lambda = 30$ on CSN and TLC, while it achieves the best results at $\lambda = 70$ on BCB and BCB-F. We further investigate the reason and show the average

![](images/c1a05542c762482c89917ddcf844e1a3050cfcd9a50b72f1cb7b30dd8fded50.jpg)
Figure 4: The influence of subgraph size on 4 datasets.

number of nodes in S-AST on the four datasets in Table 6. From the table, BCB and BCB-F contain $\sim 2.5$ times more nodes than CSN and TLC. This empirically suggests that setting $\lambda$ to about $\frac{1}{5}$ to $\frac{1}{4}$ of the average number of nodes in S-AST could be a reasonable choice.

# 6 Conclusion

In this paper, we followed how humans understand programs and proposed the PGNN-EK model. To enrich the code structure information and alleviate the OOV problem, we presented the S-AST graph based on the AST, which uses a substring tokenizer to generate substring nodes and edges between them. Inspired by the "divide-and-conquer" strategy, we proposed a partitioning-based graph neural network model on S-AST that employs code context and structure. To leverage external knowledge for comprehension, we transformed the raw code to fuse syntactic and semantic knowledge and utilized pre-training techniques for information extraction. We performed extensive experiments to show the effectiveness of our model PGNN-EK on the code summarization and code
In particular, to show the generalization ability of the model, we released a new benchmark that is more challenging. + +# 7 Acknowledgments + +This work has been supported by the National Natural Science Foundation of China under Grant No. U1911203, Alibaba Group through the Alibaba Innovation Research Program, the National Natural Science Foundation of China under Grant No. 61877018 and No.61977025, and Shanghai Pujiang Talent Program under Grant No. 21PJ1402900. + +# References + +Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2020. A transformer-based approach for source code summarization. In ACL 2020. +Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Unified pre-training for program understanding and generation. In NAACL-HLT 2021. +Miltiadis Allamanis, Earl T. Barr, Christian Bird, and Charles Sutton. 2015. Suggesting accurate method and class names. In ESEC/FSE 2015. +Miltiadis Allamanis, Earl T. Barr, Premkumar T. Devanbu, and Charles Sutton. 2018a. A survey of machine learning for big code and naturalness. ACM Comput. Surv., 51(4):81:1-81:37. +Miltiadis Allamanis, Marc Brockschmidt, and Mahmoud Khademi. 2018b. Learning to represent programs with graphs. In ICLR 2018. +Uri Alon, Shaked Brody, Omer Levy, and Eran Yahav. 2019a. code2seq: Generating sequences from structured representations of code. In ICLR 2019. +Uri Alon and Eran Yahav. 2021. On the bottleneck of graph neural networks and its practical implications. In ICLR 2021. +Uri Alon, Meital Zilberstein, Omer Levy, and Eran Yahav. 2019b. code2vec: learning distributed representations of code. Proc. ACM Program. Lang., 3(POPL):40:1-40:29. +Yoshua Bengio, Yann LeCun, and Geoffrey E. Hinton. 2021. Deep learning for AI. Commun. ACM, 64(7):58-65. +Nghi D. Q. Bui, Yijun Yu, and Lingxiao Jiang. 2021. Infercode: Self-supervised learning of code representations by predicting subtrees. In ICSE 2021. +Chris Cummins, Zacharias V. 
Fisches, Tal Ben-Nun, Torsten Hoefler, Michael F. P. O'Boyle, and Hugh Leather. 2021. Programl: A graph-based program representation for data flow analysis and compiler optimizations. In ICML 2021.

Milan Cvitkovic, Badal Singh, and Animashree Anandkumar. 2019. Open vocabulary learning on source code with a graph-structured cache. In ICML 2019.
Hoa Khanh Dam, Truyen Tran, and Trang Pham. 2016. A deep language model for software code. CoRR, abs/1608.02715.
Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. Codebert: A pre-trained model for programming and natural languages. In EMNLP 2020.
Matthias Fey and Jan E. Lenssen. 2019. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR 2016.
Vincent J. Hellendoorn, Charles Sutton, Rishabh Singh, Petros Maniatis, and David Bieber. 2020. Global relational models of source code. In ICLR 2020.
Abram Hindle, Earl T. Barr, Mark Gabel, Zhendong Su, and Premkumar T. Devanbu. 2016. On the naturalness of software. Commun. ACM, 59(5):122-131.
Xing Hu, Ge Li, Xin Xia, David Lo, Shuai Lu, and Zhi Jin. 2018. Summarizing source code with transferred API knowledge. In *IJCAI* 2018.
Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. 2019. Code-searchnet challenge: Evaluating the state of semantic code search. CoRR, abs/1909.09436.
Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code using a neural attention model. In ACL 2016.
Sebastien Jean, KyungHyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In ACL 2015.
Rafael-Michael Karampatsis, Hlib Babii, Romain Robbes, Charles Sutton, and Andrea Janes.
2020. Big code != big vocabulary: open-vocabulary models for source code. In ICSE '20. +Alexander LeClair, Siyuan Jiang, and Collin McMillan. 2019. A neural model for generating natural language summaries of program subroutines. In ICSE 2019. +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In ACL 2020, pages 7871-7880. Association for Computational Linguistics. + +Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard S. Zemel. 2016. Gated graph sequence neural networks. In ICLR 2016. +Chin-Yew Lin and Franz Josef Och. 2004. ORANGE: a method for evaluating automatic evaluation metrics for machine translation. In COLING 2004. +Shangqing Liu, Yu Chen, Xiaofei Xie, Jing Kai Siow, and Yang Liu. 2021. Retrieval-augmented generation for code summarization via hybrid GNN. In ICLR 2021. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692. +Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin B. Clement, Dawn Drain, Daxin Jiang, Duyu Tang, Ge Li, Lidong Zhou, Linjun Shou, Long Zhou, Michele Tufano, Ming Gong, Ming Zhou, Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, and Shujie Liu. 2021. Codexglue: A machine learning benchmark dataset for code understanding and generation. CoRR, abs/2102.04664. +Paul Neculoiu, Maarten Versteegh, and Mihai Rotaru. 2016. Learning text similarity with siamese recurrent networks. In Proceedings of the 1st Workshop on Representation Learning for NLP, Rep4NLP@ACL 2016. +Thomas H. Park, Meen Chul Kim, Sukrit Chhabra, Brian Lee, and Andrea Forte. 2016. Reading hierarchies in code: Assessment of a basic computational skill. 
In *ITiCSE* 2016, pages 302-307. ACM. +Dinglan Peng, Shuxin Zheng, Yatao Li, Guolin Ke, Di He, and Tie-Yan Liu. 2021. How could neural networks understand programs? In ICML 2021. +Veselin Raychev, Martin T. Vechev, and Eran Yahav. 2014. Code completion with statistical language models. In PLDI '14. +Carsten Schulte, Tony Clear, Ahmad Taherkhani, Teresa Busjahn, and James H. Paterson. 2010. An introduction to program comprehension for computer science educators. In Proceedings of the 2010 ITiCSE working group reports, ITiCSE-WGR 2010, pages 65-86. ACM. +Ensheng Shi, Yanlin Wang, Lun Du, Junjie Chen, Shi Han, Hongyu Zhang, Dongmei Zhang, and Hongbin Sun. 2021. Neural code summarization: How far are we? CoRR, abs/2107.07112. +Jeffrey Svajlenko, Judith F. Islam, Iman Keivanloo, Chanchal Kumar Roy, and Mohammad Mamun Mia. 2014. Towards a big data curated benchmark of interproject code clones. In ICSME 2014. + +Jeffrey Svajlenko and Chanchal K. Roy. 2015. Evaluating clone detection tools with bigclonebench. In ICSME 2015. +Alexey Svyatkovskiy, Shao Kun Deng, Shengyu Fu, and Neel Sundaresan. 2020. Intellicode compose: code generation using transformer. In ESEC/FSE '20. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017. +Wenhan Wang, Ge Li, Bo Ma, Xin Xia, and Zhi Jin. 2020. Detecting code clones with graph neural network and flow-augmented abstract syntax tree. In SANER 2020. +Yanlin Wang and Hui Li. 2021. Code completion by modeling flattened abstract syntax trees as graphs. In AAAI 2021. +Huihui Wei and Ming Li. 2017. Supervised deep features for software functional clone detection by exploiting lexical and syntactical information in source code. In *IJCAI* 2017. 
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Frank F. Xu, Zhengbao Jiang, Pengcheng Yin, Bogdan Vasilescu, and Graham Neubig. 2020. Incorporating external knowledge through pre-training for natural language to code generation. In ACL 2020.
Fabian Yamaguchi, Nico Golde, Daniel Arp, and Konrad Rieck. 2014. Modeling and discovering vulnerabilities with code property graphs. In 2014 IEEE Symposium on Security and Privacy, SP 2014.
Hao Yu, Wing Lam, Long Chen, Ge Li, Tao Xie, and Qianxiang Wang. 2019. Neural detection of semantic code clones via tree-based convolution. In ICPC 2019.
Zeping Yu, Wenxin Zheng, Jiaqi Wang, Qiyi Tang, Sen Nie, and Shi Wu. 2020. CodeCMR: Cross-modal retrieval for function-level binary source code matching. In NeurIPS 2020.
Wojciech Zaremba and Ilya Sutskever. 2014. Learning to execute. CoRR, abs/1410.4615.
Jian Zhang, Xu Wang, Hongyu Zhang, Hailong Sun, and Xudong Liu. 2020. Retrieval-based neural source code summarization. In ICSE '20.
Jian Zhang, Xu Wang, Hongyu Zhang, Hailong Sun, Kaixuan Wang, and Xudong Liu. 2019. A novel neural source code representation based on abstract syntax tree. In ICSE 2019.
Daniel Zügner, Tobias Kirschstein, Michele Catasta, Jure Leskovec, and Stephan Günnemann. 2021. Language-agnostic representation learning of source code from structure and context. In ICLR 2021.

# A Partitioning S-AST Algorithm

See Algorithm 1.

# B Examples of API-Description Pairs

In the experiment, we obtain 51,191 method-description pairs after preprocessing, and Table 7 gives some examples.

# C Baselines Introduction

We compare our model with five representative models on the code summarization task:

- CodeNN (Iyer et al., 2016) is the first method that applies deep neural networks to code summarization. It uses a classical attention-based encoder-decoder framework from Neural Machine Translation (NMT).
- NCS (Ahmad et al., 2020) applies the Transformer (Vaswani et al., 2017) to model the pairwise relationships between code tokens and capture their long-term dependencies.
- Rencos (Zhang et al., 2020) proposes an attention-based encoder-decoder model and enhances it with the most similar code snippets retrieved from the training set.
- CodeBERT (Feng et al., 2020) is a bimodal pre-training model for programming and natural languages based on RoBERTa (Liu et al., 2019).
- PLBART (Ahmad et al., 2021) is a sequence-to-sequence pre-training model based on BART (Lewis et al., 2020).

In addition to the pre-training models CodeBERT and PLBART, we further compare our model with two representative models on the code clone detection task:

- ASTNN (Zhang et al., 2019) proposes an AST-based neural network that splits an AST into a sequence of statement trees and applies a bidirectional RNN model to produce source code representations. However, it ignores external knowledge associated with code.
- FA-AST (Wang et al., 2020) augments the original AST with explicit control and data flow edges, then introduces two different types of GNNs to detect code clones.

# D Functionalities Splits in BCB and BCB-F

For BCB, the functionalities in the Train/Val/Test sets are:

- Train: Web Download, Secure Hash(MD5), Copy a File, Decompress Zip, FTP Authenticated Login, Bubble Sort, Init. SGV with Model, SGV Selection Event Handler, Create Java Project(Eclipse), SQL Update and RollBACK.
- Val: Same as Train.
- Test: Same as Train.

For BCB-F, the functionalities in the Train/Val/Test sets are as follows, where emphasis marks the 10 functionalities that also exist in BCB:

- Train: Decompress Zip, Copy a File, Get Prime Factors, File Dialog, Resize Array, Get MAC Address String, Parse CSV File, Secure Hash(MD5), Send Email, Load Custom Font, Create Java Project(Eclipse), Extract Matches Using Regex, Open File in Desktop Application, Connect to Database, Load File to Byte Array, Call Method Using Reflection, Take Screenshot to File, Write PDF File, Delete Folder and Contents, Copy Directory, Binary Search.
- Val: SQL Update and RollBACK, Bubble Sort, Execute External Process, XMPP Send Message, Zip Files, Convert Date String Format, Secure Hash, GCD, SGV Selection Event Handler, Init. SGV with Model, Play Sound.
- Test: Shuffle Array in Place, Create Encryption Key Files, Load Custom Font, Encrypt to File, Parse XML to DOM, CRC32 File Checksum, Transpose a Matrix, Test Palindrome, Web Download, FTP Authenticated Login.

Table 7: Examples of API-Description Pairs
| APIs | Descriptions |
| --- | --- |
| Math.abs | Returns the absolute value of an int value. |
| Arrays.hashCode | Returns a hash code based on the contents of the specified array. |
| Scanner.hasNext | Returns true if this scanner has another token in its input. |
| Color.getRGB | Returns the RGB value representing the color in the default sRGB ColorModel. |
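Algorithm 1 below greedily accumulates root-level subtrees into subgraphs of at least $\lambda$ nodes. Its core loop can be sketched in Python as follows; this is an illustrative reduction, not the paper's implementation: each subtree is modelled as a bare list of node ids, and all feature and edge bookkeeping is omitted.

```python
def partition(subtrees, lam):
    """Greedy grouping of root-level subtrees (the loop of Algorithm 1):
    accumulate subtrees left to right, emit a subgraph once it holds at
    least `lam` nodes, and merge a too-small final subgraph backwards."""
    groups, current = [], []
    for i, st in enumerate(subtrees):
        current.extend(st)
        # Emit when the threshold is reached or at the last subtree.
        if len(current) >= lam or i == len(subtrees) - 1:
            groups.append(current)
            current = []
    # Lines 19-21: merge the last subgraph into its neighbour if small.
    if len(groups) > 1 and len(groups[-1]) < lam / 2:
        last = groups.pop()
        groups[-1].extend(last)
    return groups

print(partition([[1, 2], [3], [4, 5, 6], [7]], lam=3))
# → [[1, 2, 3], [4, 5, 6, 7]]
```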

# Algorithm 1 Partitioning S-AST

Input: A S-AST $\mathcal{T}$ with node features $\mathcal{X}$, edge indexes $\mathcal{I}$ and edge features $\mathcal{E}$

Parameter: $\lambda$, which specifies the minimum number of nodes in a subgraph

Output: Node features list $\mathcal{L}_x$, edge indexes list $\mathcal{L}_i$, and edge features list $\mathcal{L}_e$ of subgraphs

1: Derive a tree structure $\mathcal{T}'$ by removing data flow edges and adjacent leaf edges in $\mathcal{T}$;
2: nodes_sum $\leftarrow 0$, nodes_set $\leftarrow \{\}$;
3: $nf\_list, ei\_list, ef\_list, \mathcal{L}_x, \mathcal{L}_i, \mathcal{L}_e \gets \{\}$;
4: Obtain a subtree list $\{\mathcal{S}\}$ based on subtrees of root nodes in $\mathcal{T}'$ from left to right;
5: for $\mathcal{S}$ in $\{\mathcal{S}\}$ do
6: $n \gets$ the number of nodes in $\mathcal{S}$;
7: nodes_sum $\leftarrow$ nodes_sum $+\ n$;
8: Add nodes in $\mathcal{S}$ to nodes_set;
9: if nodes_sum $\geq \lambda$ or $\mathcal{S}$ is the last element of $\{\mathcal{S}\}$ then
10: if $\mathcal{L}_x \neq \emptyset$ then
11: Add closest nodes that indicate the same variables in $\mathcal{L}_x$ to nodes_set;
12: end if
13: Assign $nf\_list$, $ei\_list$, $ef\_list$ based on nodes_set, $\mathcal{X}$, $\mathcal{I}$ and $\mathcal{E}$;
14: Append $nf\_list, ei\_list, ef\_list$ to $\mathcal{L}_x, \mathcal{L}_i, \mathcal{L}_e$ respectively;
15: nodes_sum $\leftarrow 0$, nodes_set $\leftarrow \{\}$;
16: end if
17: end for
18: // $A[-i]$ denotes the $i$-th element from the bottom in $A$.
+19: if size of $\mathcal{L}_x[-1] < \lambda / 2$ and size of $\mathcal{L}_x > 1$ then +20: Merge $\mathcal{L}_x[-1]$ and $\mathcal{L}_x[-2]$ , $\mathcal{L}_i[-1]$ and $\mathcal{L}_i[-2]$ , $\mathcal{L}_e[-1]$ and $\mathcal{L}_e[-2]$ , respectively; +21: end if +22: return $\mathcal{L}_x, \mathcal{L}_i, \mathcal{L}_e$ \ No newline at end of file diff --git a/aneuralnetworkarchitectureforprogramunderstandinginspiredbyhumanbehaviors/images.zip b/aneuralnetworkarchitectureforprogramunderstandinginspiredbyhumanbehaviors/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c7c4375564ae025dc4da32df6b5bac2d8b037491 --- /dev/null +++ b/aneuralnetworkarchitectureforprogramunderstandinginspiredbyhumanbehaviors/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8bc36e0a1a45a71734021368779abaa1ad50fd3883a70ca256c4b6c2ef2a4d79 +size 430391 diff --git a/aneuralnetworkarchitectureforprogramunderstandinginspiredbyhumanbehaviors/layout.json b/aneuralnetworkarchitectureforprogramunderstandinginspiredbyhumanbehaviors/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..732e25c41227bcb669ccbbdd8007969dc85f5cae --- /dev/null +++ b/aneuralnetworkarchitectureforprogramunderstandinginspiredbyhumanbehaviors/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9934d893d04a49b2847af9fdb24a524cff17833e380703ace80f53f2551571e2 +size 429731 diff --git a/arationalecentricframeworkforhumanintheloopmachinelearning/8a02f431-3fe4-4118-9870-9f50d6c18ace_content_list.json b/arationalecentricframeworkforhumanintheloopmachinelearning/8a02f431-3fe4-4118-9870-9f50d6c18ace_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..b9f1645b774c2a6228ff38be078ec529d8ab9822 --- /dev/null +++ b/arationalecentricframeworkforhumanintheloopmachinelearning/8a02f431-3fe4-4118-9870-9f50d6c18ace_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 
+oid sha256:ed38a67df17fc99009d0c3b23543c4a5486c7afa0675ad6010b249443b7a8a2e +size 76631 diff --git a/arationalecentricframeworkforhumanintheloopmachinelearning/8a02f431-3fe4-4118-9870-9f50d6c18ace_model.json b/arationalecentricframeworkforhumanintheloopmachinelearning/8a02f431-3fe4-4118-9870-9f50d6c18ace_model.json new file mode 100644 index 0000000000000000000000000000000000000000..eaf6b13bba33337e4dc891cea0ae1be794f43c80 --- /dev/null +++ b/arationalecentricframeworkforhumanintheloopmachinelearning/8a02f431-3fe4-4118-9870-9f50d6c18ace_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f89243d4683c2716e6f8df58949d7b1c7d2f0454cfd3a21f3d4d76c053b87c44 +size 93103 diff --git a/arationalecentricframeworkforhumanintheloopmachinelearning/8a02f431-3fe4-4118-9870-9f50d6c18ace_origin.pdf b/arationalecentricframeworkforhumanintheloopmachinelearning/8a02f431-3fe4-4118-9870-9f50d6c18ace_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5b655d46fb0c3d96e0f91ca01acf6f6bf1390c69 --- /dev/null +++ b/arationalecentricframeworkforhumanintheloopmachinelearning/8a02f431-3fe4-4118-9870-9f50d6c18ace_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:68ac9e6d6f0c2f73c064c71ea384b16a25258ffd3e5ac4885f12e0782ea3bc7d +size 494209 diff --git a/arationalecentricframeworkforhumanintheloopmachinelearning/full.md b/arationalecentricframeworkforhumanintheloopmachinelearning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..47914b7716f9cb2743beb7772447a33d8c3d9dab --- /dev/null +++ b/arationalecentricframeworkforhumanintheloopmachinelearning/full.md @@ -0,0 +1,286 @@ +# A Rationale-Centric Framework for Human-in-the-loop Machine Learning + +Jinghui Lu\* 1,2,5, Linyi Yang\* 3,4, Brian Mac Namee 1,2, Yue Zhang 3,4 + +1 The Insight Centre for Data Analytics, University College Dublin + +$^{2}$ School of Computer Science, University College Dublin + +$^{3}$ School of Engineering, 
Westlake University + +$^{4}$ Institute of Advanced Technology, Westlake Institute for Advanced Study + +5 SenseTime Research + +{jinghui.lu, brian.macnamee}@ucd.ie + +{yanglinyi, zhangyue}@westlake.edu.cn + +# Abstract + +We present a novel rationale-centric framework with human-in-the-loop - Rationales-centric Double-robustness Learning (RDL) - to boost model out-of-distribution performance in few-shot learning scenarios. By using static semi-factual generation and dynamic human-intervened correction, RDL exploits rationales (i.e. phrases that cause the prediction), human interventions and semi-factual augmentations to decouple spurious associations and bias models towards generally applicable underlying distributions, which enables fast and accurate generalisation. Experimental results show that RDL leads to significant prediction benefits on both in-distribution and out-of-distribution tests compared to many state-of-the-art benchmarks—especially for few-shot learning scenarios. We also perform extensive ablation studies to support in-depth analyses of each component in our framework. + +# 1 Introduction + +Recent work finds that natural artefacts (Gururangan et al., 2018) or spurious patterns (Keith et al., 2020; Srivastava et al., 2020) in datasets can cause sub-optimal model performance for neural networks. As shown in Figure 1, the bold phrases—" $100\%$ bad" and "brain cell killing"—are underlying causes for a negative sentiment prediction that most human readers would recognise. These are defined as rationales in this paper. The underlined phrase—"acting and plot"—has been incorrectly recognised as a causal term by the model used for this example, and is referred to as a spurious pattern. + +Spurious patterns (or associations) are caused by natural artefacts or biases in training data (Lertvittayakumjorn and Toni, 2021), and are usually useless, or even harmful, at test time. 
This issue can be severe in few-shot learning (FSL) scenarios. For instance, Kulesza et al. (2010) suggest that when a model is trained with a small subset of labelled data, it is prone to exploiting spurious patterns, leading to poor generalisability that is evident in the performance decay on out-of-distribution (OOD) datasets. In spite of these issues, training deep neural networks using few labelled examples is a compelling scenario since unlabelled data may be abundant but labelled data is expensive to obtain in real-world applications (Lu and MacNamee, 2020; Lu et al., 2021).

![](images/fd83c87c6c1cbf6c16646b85f58440502795cca3e1832323a101986de969c71c.jpg)
Figure 1: A negative movie review with human-annotated causal terms (bold text) and spurious patterns recognised by the model (underlined text).

There is a strand of research addressing this scenario that seeks to improve model performance by "introducing methods and resources for training models less sensitive to spurious patterns" (Kaushik et al., 2020). Most of this work relies on generating counterfactual augmented data (CAD), either manually (Kaushik et al., 2021) or automatically (Feng et al., 2021; Qian et al., 2021; Yang et al., 2021, 2020a; Delaney et al., 2021). For example, Kaushik et al. (2020) proposed a human-in-the-loop framework where human annotators are required to make minimal changes to original movie reviews to produce sentiment-flipped counterfactual reviews, which enables models to learn useful associations between input texts and output labels (Kaushik et al., 2021).

Generating manual counterfactuals, however, is expensive and time-consuming—Kaushik et al. (2020) report the cost of revising 2.5k instances at over $10,000.
On the other hand, fully automatic methods are task-specific and therefore have weak robustness across domains and less reliability compared to manual counterfactuals. To address these issues, we propose Rationales-centric Double-robustness Learning (RDL), a human-in-the-loop framework for data augmentation in a few-shot setting, which is efficient, robust, model-agnostic, and general across tasks.

![](images/eafad2ff7caa1b52f1d6e8ace673c7197411176e6c52a3cdea3c91584564eb8a.jpg)
Figure 2: The procedure of the Rationale-centric Double-robustness Learning framework. Red text highlights rationales identified by human annotators. Blue text indicates words replaced in raw text. Underlined text shows spurious patterns identified by the model.

Our main idea is a rationale-centric strategy for eliminating the effect of spurious patterns by leveraging human knowledge, as shown in Figure 2. Our double-robustness framework consists of two main modules. The first is a Static Semi-factual Generation module that generates a set of semi-factual data automatically for a given instance by using human-identified rationales. Such labelling requires less human input compared to fully manual counterfactual generation (see Section 3.1). In contrast with counterfactuals (Roese, 1997) that rely on what might have been different (i.e. the label would be changed if certain terms had been changed), semi-factuals (McCloy and Byrne, 2002; Kenny and Keane, 2021), as used in our work, aim to guide a model to identify terms less causally related to the label (i.e. even if certain terms had been changed, the label would be kept the same). Second, we apply a Dynamic Human-intervened Correction module, where the most salient features are identified for model predictions over a set of training examples, and human workers intervene by checking the correctness of the rationales in case first-round modifications introduce new artefacts.
We evaluate the two modules in a few-shot setting, where a minimum number of training instances are labeled for maximum generalisation power, both for in-distribution and OOD predictions.

Results on a sentiment analysis task, which is also used in Kaushik et al. (2020), demonstrate that the double-robust models can be less sensitive to spurious patterns. In particular, models trained with RDL with only 50 labelled examples achieve the same or even better results than fully-supervised training with a full training set of 1,707 examples, and improvements are especially significant for OOD tests. The predictive model trained with RDL using only 100 labelled examples outperforms models trained with manual (Kaushik et al., 2020) and automatic CAD (Yang et al., 2021) using the full augmented training set of 3,414 examples.

To the best of our knowledge, we are the first to exploit the efficacy of semi-factuals and human intervention for improving the generalisation abilities of deep neural networks in few-shot learning scenarios.\*

# 2 Related Work

Data augmentation has been used for resolving artefacts in training datasets before (Gururangan et al., 2018; Srivastava et al., 2020; Kaushik et al., 2021). In particular, previous work (Kaushik et al., 2020) relied on large-scale crowd-sourcing to generate useful augmented data. More recently, Yang et al. (2021) and Wang and Culotta (2021) investigated the efficacy of automatically generated counterfactuals for sentiment analysis. Similar to our work, these methods also consider the most salient features that a model uses when generating augmented data, which is in line with our rationale definition. However, they use sentiment lexicon matching for identifying rationales, which is task-specific and not necessarily fully relevant. In contrast, we employ human annotators to identify rationales, which can be task-agnostic and robust.
Moreover, our method generates semi-factuals instead of the counterfactuals used in previous work.

Human-in-the-loop Machine Learning (Wu et al., 2021) has received increasing research attention. Active learning (Settles, 2009; Margatina et al., 2021), the most common example of human-in-the-loop machine learning, asks human annotators only to provide high-level annotations (i.e. labels) for important examples. There is also some work exploring more explainable AI systems by exploiting feature-based information. Such methods use relatively simple models such as Naïve Bayes (Stumpf et al., 2009; Kulesza et al., 2015) and Linear Regression with bag-of-words features (Jia and Liang, 2017; Teso and Kersting, 2019; Ghai et al., 2021; Shao et al., 2021), because these classifiers are relatively intuitive in generating explanations and amenable to incorporating human feedback.

Some other work uses simple neural networks such as multi-layer perceptrons (Shao et al., 2021) and shallow CNNs (Lertvittayakumjorn et al., 2020; Stammer et al., 2021; Teso et al., 2021) because the predictions of such models can be explained in the form of features. Very recently, Yao et al. (2021) proposed a human-in-the-loop method to inspect more complicated models (e.g. BERT) with the help of model-agnostic post-hoc explanation algorithms (Ribeiro et al., 2018) that can explain predictions of any linear or non-linear model without exploiting its weights. However, previous work focuses on increasing the explainability of AI systems for high-stakes domains such as health and finance (Li et al., 2020; Yang et al., 2020b), instead of improving model robustness or generalisation ability. Also, they assume access to a large amount of labelled data. In contrast, we focus on few-shot learning scenarios, which are more compelling.

# 3 Method

The RDL pipeline is shown in Figure 2 and consists of two modules: Static Semi-factual Generation and Dynamic Human-intervened Correction.

Static semi-factual generation is a more efficient alternative to manually generated counterfactuals (Kaushik et al., 2020). In the first phase, Rationale Marking (Section 3.1), human annotators review each document in the training set to provide rationales (i.e. phrases that support the document classification decisions, shown as bold text in Figure 2). The second phase is a semi-factual generation method based on synonym replacement (Section 3.2) that produces augmented examples (blue text in Figure 2 indicates replaced words), which are added into the training set.

Dynamic human-intervened correction (Section 3.3) is a rationale-powered human-in-the-loop framework to dynamically correct the model's behaviours. At the outset, sampling and sensitivity of contextual decomposition (SCD) (Jin et al., 2019) is applied to detect the rationales given by the model obtained in the previous step. Then, all model-identified rationales (underlined text in Figure 2) are examined by human annotators to identify false rationales (i.e. words or phrases that do not support the classifications but are falsely included by the model) and missing rationales (i.e. words or phrases that support the classifications but are not included by the model). Both false rationales and missing rationales are corrected to produce augmented examples. Finally, newly generated examples are added into the training set to re-train the deep learning model.

# 3.1 Rationale Marking

Following Kaushik et al. (2020) and Yang et al. (2021), we use the $IMDb$ movie review dataset (Maas et al., 2011) in our experiments. It consists of positive and negative movie reviews that are easy for human participants to understand, re-associate, and provide feedback upon (Zaidan et al., 2007).

We use a crowdsourcing company to recruit editors and annotators for marking rationales that support classification decisions.
At the outset, annotators were given instructions and examples that gently guided them to annotate rationales. Only adjectives, adverbs, nouns, and verbs were considered as rationales. In addition, rationales were required to carry complete semantic information. For example, for a phrase starting with a negation word such as "not great", annotators were instructed to mark the whole phrase "not great" as a rationale instead of just marking "not". We also limited rationales to at most three consecutive words (i.e. unigrams, bigrams and trigrams). Phrases consisting of numerical scores were not counted as rationales (e.g. 5 or 10 stars), since different datasets may use different rating scales, and annotating digits may hurt OOD performance.

Overall, we encouraged annotators to try their best to mark as many rationales as possible to explain the classification labels. However, to guarantee the quality of rationale marking and prevent annotators from over-including non-rationales for more payment, we also manually inspected annotated examples and rejected examples that contained incorrect rationales. After inspection, we rejected $10.6\%$ of negative reviews and $7.6\%$ of positive reviews. Editors and annotators re-annotated the rejected examples, which were then presented to us for another inspection. All re-annotated examples were approved only if all authors were happy with the quality of the annotations. Otherwise, the examples were re-annotated again.

Our annotation procedure generated 5,073 rationales in 855 movie reviews involved in Sections 3.1 and 3.3 (note that we did not annotate all 1,707 examples in the training set because only 855 examples were necessarily involved in our experiments). Human annotators spent on average 183.68 seconds to identify rationales in a review and our method generated semi-factual examples automatically.
By contrast, workers spent on average 300 seconds to revise a review to generate a counterfactual manually, as reported by Kaushik et al. (2020). Note that our approach using 100 labelled examples can outperform manual CAD (Kaushik et al., 2020) using the entire training set of 1,707 examples (see Section 5.3), making our approach $\frac{300 \times 1707}{183.68 \times 100} \approx 27.88$ times more efficient than manually generated CAD.

# 3.2 Static Semi-factual Generation

We take a simple replacement strategy, which has been taken by Yang et al. (2021), to generate semi-factual examples. Given a human-identified rationale, our method constructs augmented examples by automatically replacing non-rationale words, thus leading to examples with the same labels. This augmentation is consistent with semi-factual thinking: even if those non-rationales were changed, the label would not change.

Formally, given a training example $x_{i} = [t_{i1}, t_{i2}, \dots, t_{ij}]$ (where $t_{ij}$ is the $j^{th}$ token of the $i^{th}$ document) and its ground truth label $y_{i}$, we create a rationale vector $r_{i} = [a_{i1}, a_{i2}, \dots, a_{ij}]$ where $a_{ij}$ is a value that indicates whether $t_{ij}$ is a rationale or not (we set $a_{ij} = 1$ to indicate that $t_{ij}$ is a rationale and 0 otherwise). To generate a semi-factual example, $x_{i}'$, we randomly replace a certain number of non-rationales (where $a_{ij} = 0$), except for punctuation, with synonymous terms. The synonyms can be provided by a human, retrieved automatically from a lexicon such as WordNet (Miller, 1995), or generated using the mask-filling function of a pretrained context-aware language model (Liu et al., 2019).

In our experiments, we randomly replace $5\%$ of non-rationales using mask-filling and generate a set of augmented examples, $x_{i}^{\prime}$, with some replaced non-rationales and all the other tokens identical to $x_{i}$.
The label, $y_{i}$ , of a newly generated example is the same as the label of the original example, $x_{i}$ . Examples of generated data are shown in Table 1. Afterwards, the augmented examples are added into the training set used to train the model. + +# 3.3 Dynamic Human-intervened Correction + +Dynamic human-intervened correction further improves the robustness of the model by allowing human annotators to correct the model rationales online. Firstly, SCD is applied to detect unigrams, bigrams or trigrams that are salient to the model. SCD is a technique to assess the importance of terms by continuously removing terms and measuring changes in prediction (Jin et al., 2019). Human annotators examine all rationales given by the model from all documents to discover two types of incorrect rationale: false rationales and missing rationales. The next phase allows human feedback to influence the learning process. To this end, for each type of incorrect rationale, we propose a corresponding strategy to correct them. + +For false rationales (i.e. phrases that actually do not support classifications but are incorrectly identified by the model), we use synonym replacement again to generate semi-factual examples. Unlike the static semi-factual generation (Section 3.2), in this component we replace all false rationales with their synonyms instead of randomly replacing $5\%$ of non-rationales in a document. Examples of generated data are shown in Table 2. + +For missing rationales (i.e. phrases that actually support classifications but are not identified by the model), we take another simple semi-factual generation strategy, that is, extracting sentences that contain missing rationales to form semi-factual data. Specifically, given a sentence containing missing rationales, we use this sentence as a new example, and the label of this newly generated example is identical to that of the document where the sentence is extracted. 
For example, consider a positive movie review (bold font for rationales): "Robert Urich was a fine actor, and he makes this TV movie believable. I remember watching this film when I was 15....". The model fails to identify "fine" and "believable" as rationales. Thus we extract the text "Robert Urich was a fine actor, and he makes this TV movie believable." as a new example, and the class of this example is still positive. We extract the whole sentence rather than just the missing rationales to preserve more semantic information.

Note that the two correction methods in dynamic human-intervened correction can operate in parallel, and the generated examples are added to the small training set to re-train the model.
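The two correction strategies just described can be sketched as follows. This is a minimal illustration rather than the authors' implementation: `SYNONYMS` is a toy replacement table standing in for RoBERTa mask-filling, and rationales are passed as plain token sets instead of marked spans.

```python
# Toy stand-in for mask-filling / WordNet synonym lookup (an assumption).
SYNONYMS = {"Micawber": "Perkins", "Soylent": "Gang", "Green": "Orange"}

def correct_false_rationales(tokens, false_rationales, label):
    """Replace every model-identified false rationale with a synonym;
    the label is unchanged, so the result is a semi-factual example."""
    new_tokens = [SYNONYMS.get(t, t) if t in false_rationales else t
                  for t in tokens]
    return new_tokens, label

def correct_missing_rationales(sentences, missing_rationales, label):
    """Extract each sentence containing a missed rationale as a new
    example that keeps the label of the source document."""
    return [(s, label) for s in sentences
            if any(r in s for r in missing_rationales)]

tokens = "Soylent Green is a wild movie that I enjoyed very much".split()
aug, y = correct_false_rationales(tokens, {"Soylent", "Green"}, "positive")
# aug now begins with "Gang Orange ..." while y is still "positive"
```

Both functions return new examples with unchanged labels, matching the semi-factual recipe: even with the spurious terms replaced or the context trimmed, the sentiment stays the same.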
| Sentiment | Examples |
| --- | --- |
| Negative | Origin: The attempt at a "lesbian scene" was sad.<br>Augment 1: The hint at a "lesbian scene" was sad.<br>Augment 2: The attempt at a "kiss scene" was sad. |
| Positive | Origin: I recommended this film a lot, specially in this difficult times for the planet.<br>Augment 1: I recommended you film a lot, specially in this difficult times for the planet.<br>Augment 2: I recommended this movie a lot, specially in this difficult times for the planet. |
+ +Table 1: Fragments of augmented data generated by static semi-factual generation (Original/Augmented, in order). Blue spans were synonyms used as replacements and bold font were rationales identified by human annotators. + +
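The augmentation illustrated in Table 1 can be sketched as follows. This is a minimal version of the Section 3.2 procedure under simplifying assumptions: a toy synonym table stands in for mask-filling, and a fixed seed makes the random choice reproducible.

```python
import random

# Hypothetical synonym source; the paper uses RoBERTa mask-filling.
SYNONYMS = {"film": "movie", "attempt": "hint", "this": "the"}
PUNCT = set(".,!?\"'")

def make_semifactual(tokens, rationale_mask, label, rate=0.05, seed=0):
    """Randomly replace `rate` of the non-rationale, non-punctuation
    tokens with synonyms; the label is preserved (a semi-factual)."""
    rng = random.Random(seed)
    candidates = [i for i, (t, a) in enumerate(zip(tokens, rationale_mask))
                  if a == 0 and t not in PUNCT]
    k = max(1, int(rate * len(candidates)))
    new_tokens = list(tokens)
    for i in rng.sample(candidates, k):
        new_tokens[i] = SYNONYMS.get(new_tokens[i], new_tokens[i])
    return new_tokens, label

tokens = ["The", "attempt", "at", "a", "lesbian", "scene", "was", "sad", "."]
mask   = [0, 0, 0, 0, 0, 0, 0, 1, 0]   # "sad" is the human rationale
aug, y = make_semifactual(tokens, mask, "negative")
assert aug[7] == "sad" and y == "negative"   # rationale and label untouched
```

Because only tokens with $a_{ij} = 0$ are candidates, the human-marked rationale can never be replaced, which is exactly what keeps the label valid.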
| Sentiment | Examples |
| --- | --- |
| Negative | Origin: but this is pathetic! Micawber was nothing more than a mid-nineteenth century Kramer.<br>SCD: but this is pathetic! Micawber was nothing more than a mid-nineteenth century Kramer.<br>Augment 1: but this is pathetic! Perkins became nothing more than a mid-nineteenth century Kramer.<br>Augment 2: but this is pathetic! It had nothing more than a mid-nineteenth century Kramer. |
| Positive | Origin: Soylent Green is a wild movie that I enjoyed very much.<br>SCD: Soylent Green is a wild movie that I enjoyed very much.<br>Augment 1: Gang Orange is a wild movie that I enjoyed very much.<br>Augment 2: Village Spring is a wild movie that I enjoyed very much. |

Table 2: Fragments of augmented data generated by false rationale correction (Original/SCD/Augmented, in order). Underlined spans were false rationales given by the model through SCD. Blue spans were synonyms used as replacements, and bold font were rationales identified by human annotators.

# 4 Why Does RDL Work?

Broadly speaking, our RDL framework takes advantage of invariance that makes a model less sensitive to non-rationale words or spurious patterns (Tu et al., 2020; Wang et al., 2021) in favour of focusing on useful mappings of rationales to labels.

More specifically, by using static semi-factual generation (Section 3.2) and false rationale correction (Section 3.3), we expect to break spurious associations. For example, if a model incorrectly determines that "Soylent Green" is associated with positive sentiment (Table 2), the augmented examples that replace "Soylent Green" with other phrases such as "Gang Orange" break the spurious association. Besides, using synonym replacement can generate examples that are similar to the original one, which is equivalent to adding noisy data to prevent models from overfitting (Wei and Zou, 2019).

Missing rationale correction (Section 3.3) emphasizes the ground truth associations between rationales and labels, enabling the model to better estimate the generally useful underlying distributions for OOD datasets, even in few-shot learning scenarios. In the next section, we present experiments and empirical evidence to demonstrate the utility of the proposed RDL framework in improving model robustness.

# 5 Experiments

Our intention is to improve the generalisability of models, and we use both in-distribution and OOD performance for evaluation. Our experiments are designed to address the following research questions:

- RQ1 Can we use static semi-factual generation to achieve better in-distribution and OOD performance?
- RQ2 Does dynamic human-intervened correction improve generalisability of models?
# 5.1 Datasets

For fair comparison with previous work (Kaushik et al., 2020; Yang et al., 2021), we use the $IMDb$ sentiment classification dataset (Maas et al., 2011) as the in-distribution dataset. Following Kaushik et al. (2020), all models were trained with the $IMDb$ dataset's predefined training, validation, and test partitions, containing 1,707, 245, and 488 reviews respectively, with an enforced 50:50 class ratio.

To measure the generalisation ability of different models, we focus on OOD performance. To this end, we test models on four other binary sentiment classification datasets: the sampled Amazon reviews dataset (Ni et al., 2019) (100,000 positives and 100,000 negatives) covering six genres: beauty, fashion, appliances, gift cards, magazines, and software; the Yelp review dataset (Zhang et al., 2015) (19,000 positives and 19,000 negatives); the SST-2 dataset (Socher et al., 2013) (1,067 positives and 1,143 negatives); and the SemEval-2017 Twitter dataset (Rosenthal et al., 2017) (2,339 positives
and 2,339 negatives). These datasets were sampled to ensure a nearly 50:50 class balance.

| Training Data | In-domain | SemEval-2017 | SST-2 | Yelp | Amazon |
| --- | --- | --- | --- | --- | --- |
| Static (50 gold) | 88.60±1.11 | 77.28±9.11 | 79.29±5.14 | 91.53±2.06 | 89.63±1.65 |
| Full (1,707 gold) | 93.23±0.46 | 71.17±2.54 | 80.23±2.09 | 93.66±0.84 | 90.29±0.57 |
| DP (Static + 350 auto) (400) | 86.70±2.92 | 74.36±2.92 | 77.33±6.01 | 89.60±2.51 | 89.15±1.89 |
| RR (Static + 350 auto) (400) | 89.65±1.27 | 79.20±1.27 | 78.89±5.95 | 91.93±2.10 | 89.73±1.26 |
| **Our Methods** | | | | | |
| Static + 150 auto (200) | 90.08±1.25 | 78.88±6.67 | 79.40±3.28 | 92.19±1.51 | 89.81±1.73 |
| Static + 350 auto (400) | 90.16±0.85 | 80.54±2.81 | 81.26±1.97 | 93.03±1.08 | 90.09±1.79 |
| Static + 550 auto (600) | 90.04±1.50 | 80.69±3.42 | 81.23±1.83 | 92.10±3.07 | 89.67±1.27 |
| Static + 750 auto (800) | 90.08±1.01 | 80.55±3.96 | 80.75±2.30 | 92.36±1.87 | 90.18±1.44 |
| Static + 950 auto (1000) | 89.83±1.28 | 80.90±3.29 | 80.58±2.57 | 92.30±2.19 | 90.62±1.29 |
| Static + 1150 auto (1200) | 90.12±1.82 | 79.31±1.82 | 79.52±3.15 | 91.47±3.61 | 90.16±1.46 |

Table 3: Results on in-distribution and OOD data. Values in brackets are the training set size. Static: uses 50 gold examples. Full: uses the full training set. Static + n: our static semi-factual generation method, where $n$ is the number of semi-factuals. RR: Random Replacement (Wei and Zou, 2019). DP: Duplication.

# 5.2 Evaluating Static Semi-factual Generation

To address RQ1, we compare the performance of models trained with the static semi-factual generation strategy against models trained with the original 50 examples, referred to as Static. We also compare to a model trained with the full training set (1,707 labelled examples), referred to as Full.

# 5.2.1 Experiment Setup

To simulate the few-shot training scenario, we randomly sample 50 examples (again with an enforced 50:50 class balance) from the $IMDb$ dataset as training data. For each experiment, training is repeated 10 times with training sets sampled using 10 different random seeds. We report the average result of these 10 repetitions and use accuracy to measure classification performance. Our experiments rely on an off-the-shelf cased "RoBERTa-base" model implemented by Hugging Face, used either to perform mask-filling to provide synonyms or as the predictive model. Following Kaushik et al. (2020), we fine-tune RoBERTa for up to 20 epochs and apply early stopping with a patience of 5 (i.e., stop fine-tuning when validation loss does not decrease for 5 epochs).

We also explore the impact of the number of semi-factual examples on model performance. To this end, we conduct static semi-factual generation with a different number of augmented examples for each instance: $\{3, 7, 11, 15, 19, 23\}$ .
Considering we have 50 original examples, this would result in $\{150, 350, 550, 750, 950, 1150\}$ additional examples in the training set, respectively (we call this Static $+n$ , where $n$ is the number of generated semi-factuals).

We use the Adam optimizer (Kingma and Ba, 2014) with a batch size of 4. We found that setting the learning rate to 5e-5, 5e-6, and 5e-6 optimised Static, Static $+n$ , and Full, respectively.

# 5.2.2 Results and Analysis

As shown in Table 3, all static semi-factual generation (Static+n) methods outperform the baseline method (Static) on both in-distribution and OOD tests, demonstrating the utility of static semi-factual generation. Among all Static+n methods, Static+350 seems to be the best-performing and exceeds Static with a $1.56\%$ in-distribution improvement in average accuracy. Static+350 also outperforms Static with $3.26\%$ , $1.97\%$ , $1.5\%$ , and $0.46\%$ OOD improvements on the SemEval-2017, SST-2, Yelp, and Amazon datasets respectively. Although the improvement on the Amazon dataset appears modest, given that there are 200,000 examples in the Amazon test set, it corresponds to nearly 1,000 additional documents being correctly classified.

The Static $+n$ methods can even outperform Full (i.e., normal training with the full training set) on the SemEval, SST-2, and Amazon datasets and are comparable on the Yelp dataset. The model trained with the full training set performs best on the in-distribution dataset but worst on the SemEval dataset, which can be attributed to the large difference between the underlying distributions of these two datasets. In other words, a model that fits one dataset well can suffer performance decay on others. In this case, training with a smaller training set is more likely to reduce overfitting to the in-distribution dataset while fitting the SemEval dataset well, which explains the large improvement.
It is interesting to note that models trained with the entire training set perform slightly better on the OOD Yelp dataset $(93.66\pm 0.84)$ than on the in-distribution dataset $(93.23\pm 0.46)$ , which could also be explained by the high similarity between the underlying distributions of these two datasets.

# Benefits of Static Semi-factual Generation

First, we test whether the improvement in model performance is brought about by static semi-factual generation (Static+n) or simply by an increase in the size of the training set. We compare Static+350 (due to its relatively good performance) with another baseline called Duplication (DP hereafter). We duplicate the original training set (50 examples) to 400 examples, matching the size of the Static+350 training set, and fine-tune RoBERTa on this dataset with the same hyperparameters as Static+350.

As shown in Table 3, in most cases DP underperforms the other methods and is even worse than Static, demonstrating that solely increasing the dataset size cannot improve performance. We believe that duplicating the original examples increases the risk of overfitting and easily magnifies artefacts or spurious patterns hidden in the small training set, which leads to worse models.

Second, synonym replacement has been used previously for data augmentation (Wei and Zou, 2019), and we compare static semi-factual generation with replacing arbitrary words (i.e., both rationales and non-rationales). Following Wei and Zou (2019), we replace $5\%$ of words at random and set the training set size to 400 to ensure a fair comparison (we use RoBERTa and the same hyperparameters as Static+350). We call this Random Replacement (RR hereafter).

As shown in Table 3, RR is slightly better than the baseline Static approach.
This result is similar to that reported by Wei and Zou (2019): the augmented data generated by random replacement is similar to the original data, introducing noise that helps prevent overfitting to some extent. However, the magnitude of improvement of the Static $+n$ method is much larger than that of RR, demonstrating the utility of only replacing non-rationales to generate semi-factuals. These observations show that the model trained with Static $+n$ does improve both in-distribution and OOD performance, and that the improvement is indeed derived from static semi-factual generation.

![](images/d95d6a767aa42bbd6bcef5c5ed63e5e7128d34b295bf6fb555377daa340541b2.jpg)
Figure 3: Average performance gain of different static semi-factual generation methods with different augmentation sizes over four OOD datasets.

# 5.3 Evaluating Dynamic Human-intervened Correction

As shown in Table 3 and Figure 3, the performance gain of static semi-factual generation (Static+n) diminishes as the amount of augmented data increases. Using too much augmented data even hurts Static+1150 performance. This observation is consistent with existing work on data augmentation (Wei and Zou, 2019). We believe one reason could be that static augmented examples can themselves introduce new spurious patterns that degrade model performance, necessitating a method that exploits rationales without generating too many augmented examples. A human-in-the-loop approach can address this issue by dynamically correcting the model.

To address RQ2, we compare the performance of models trained by dynamic human-intervened correction with a popular few-shot human-in-the-loop learning framework, Active Learning, as well as two other state-of-the-art CAD-based methods (Kaushik et al., 2020; Yang et al., 2021). Lastly, we provide an ablation study examining the influence of the different correction methods, as well as an analysis of model sensitivity to spurious patterns.
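The active-learning baseline described in the next subsection selects new examples by uncertainty sampling, i.e., examples whose predicted class probabilities are closest. A minimal margin-based sketch (illustrative; the paper does not specify its exact implementation):

```python
def uncertainty_sample(probs, k):
    """Pick the k examples whose two class probabilities are closest,
    i.e. where the classifier is least certain."""
    margins = sorted((abs(p_pos - p_neg), i)
                     for i, (p_neg, p_pos) in enumerate(probs))
    return [i for _, i in margins[:k]]

# Four unlabelled examples with (p_negative, p_positive) predictions;
# examples 3 and 1 have the smallest margins, so they are queried first.
probs = [(0.90, 0.10), (0.55, 0.45), (0.20, 0.80), (0.52, 0.48)]
assert uncertainty_sample(probs, 2) == [3, 1]
```

For binary classification the margin criterion coincides with other common uncertainty measures (least-confidence, entropy), so this single ranking covers the usual variants.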
# 5.3.1 Experiment Setup

We build an active learning procedure as a baseline on top of the model trained with Static. In particular, we select another 50 examples by uncertainty sampling (i.e., examples for which the prediction scores of the two classes are close) and add them to the training set (called AL hereafter). The training set size of this baseline thus becomes 100. The best-performing static semi-factual generation method, Static+350, is also listed as a baseline.

For fair comparison, we also use uncertainty sampling to select another 50 examples (i.e., 100 original examples in the training set) for the proposed dynamic human-intervened correction, including both False Rationale Correction and Missing Rationale Correction (called Dynamic). For Dynamic, we set the number of augmented examples for each review to 7 (4 from Missing Rationale Correction and 3 from False Rationale Correction), resulting in 800 examples in the training set. For Automatic CAD (Yang et al., 2021) and Manual CAD (Kaushik et al., 2020), we use the entire training set to produce counterfactuals to build two challenging baselines (one counterfactual per example, a limit of these methods), resulting in 3,414 examples in the training set.

To investigate the influence of each correction method, we also construct two further datasets that augment the same 100 original examples to 800 exclusively by False Rationale Correction (Dynamic-FR hereafter) or Missing Rationale Correction (Dynamic-MR hereafter). Again, all experiments rely on a RoBERTa model, and all hyperparameters are identical to those described in Section 5.2.1, except for the learning rate of AL, which is set to 1.25e-5 (we found this value optimised AL performance).

| Baseline Methods | In-domain | SemEval-2017 | SST-2 | Yelp | Amazon |
| --- | --- | --- | --- | --- | --- |
| Static (50 gold) | 88.60±1.11 | 77.28±9.11 | 79.29±5.14 | 91.53±2.06 | 89.63±1.65 |
| Static + 350 auto (400) | 90.16±0.85 | 80.54±2.81 | 81.26±1.97 | 93.03±1.08 | 90.09±1.79 |
| AL (100 gold) | 88.64±1.75 | 78.61±5.90 | 80.50±3.37 | 92.47±0.68 | 89.80±1.91 |
| **CAD-based Methods** | | | | | |
| Manual CAD (3,414 gold) | 92.70±0.53 | 69.98±3.99 | 80.30±2.03 | 91.87±1.09 | 90.48±1.09 |
| Automatic CAD (1,707 gold + 1,707 auto) | 91.82±0.74 | 79.39±5.37 | 80.60±3.10 | 91.92±0.97 | 90.46±1.08 |
| **Our Dynamic Methods** | | | | | |
| Dynamic (100 gold + 700 auto) | 90.84±0.99 | 80.32±4.31 | 82.40±2.14 | 93.19±1.24 | 90.51±2.17 |
| Dynamic-MR (100 gold + 700 auto) | 91.06±1.21 | 79.04±4.92 | 82.24±2.59 | 93.03±1.92 | 90.22±2.74 |
| Dynamic-FR (100 gold + 700 auto) | 89.85±1.38 | 82.39±1.88 | 81.59±1.82 | 92.98±0.91 | 90.12±2.42 |

Table 4: Results on in-distribution and OOD data. Values in brackets are the training set size. AL: Active Learning. Manual CAD (Kaushik et al., 2020); Automatic CAD (Yang et al., 2021). Our methods are Dynamic-MR: Missing Rationale Correction, Dynamic-FR: False Rationale Correction, and Dynamic: Dynamic Human-intervened Correction.

# 5.3.2 Results and Analysis

As shown in Table 4, both AL and Dynamic outperform Static on in-distribution and OOD datasets, which makes sense because uncertainty sampling adds new labelled data that minimises model uncertainty and increases model performance. However, AL fails to compete with Static+350 even though more original data is added, which again demonstrates the utility of static semi-factual generation. In contrast, Dynamic does better than Static+350, with a $0.68\%$ in-distribution improvement in average accuracy. Dynamic also outperforms Static+350 with $1.14\%$ , $0.16\%$ , and $0.42\%$ OOD improvements on the SST-2, Yelp, and Amazon datasets, but no improvement on the SemEval dataset. Finally, the performance of our methods is better than the state-of-the-art manual CAD method in few-shot learning scenarios on all OOD datasets.

Overall, these observations demonstrate that applying dynamic human-intervened correction (i.e., Missing Rationale Correction and False Rationale Correction) can further increase the robustness and generalisation ability of a model, effectively avoiding the diminishing returns caused by an increased volume of augmented data.

# Missing Rationales vs. False Rationales

We conduct an ablation study by examining the performance of Dynamic-MR and Dynamic-FR in Table 4. Interestingly, Dynamic-FR is particularly good at improving model performance on the in-distribution and SemEval datasets, while Dynamic-MR does well on the SST-2 dataset. We believe this is because Dynamic-MR biases the model towards an underlying distribution that is useful for the SST-2 and in-distribution datasets, while Dynamic-FR biases the model towards a distribution closer to the SemEval dataset. The performance of Dynamic can be explained as a compromise between the two correction methods.

# Sensitivity to Spurious Patterns

We conduct an analysis to explore whether the double-robust models are less sensitive to spurious patterns. We compute each model's mean sensitivity to all rationales and non-rationales through SCD on the $IMDb$ test set.

|  | Non-rationales | Rationales |
| --- | --- | --- |
| Static | 0.572 | 0.428 |
| Dynamic | 0.433 | 0.567 |

Table 5: Static versus Dynamic models on average sensitivity (normalised) to rationales and non-rationales for IMDb test samples.

As shown in Table 5, the corrected model is much more sensitive to rationales, with a $13.9\%$ average increase in sensitivity to rationales, which demonstrates that our double-robust method can decouple models from spurious patterns.

# 6 Conclusion

We proposed a rationale-centric human-in-the-loop framework, RDL, for better model generalisability in few-shot learning scenarios.
Experimental results show that our method can boost the performance of deep neural networks on both in-distribution and OOD datasets and make models less sensitive to spurious patterns, enabling fast generalisation. In the future, we expect to see rationale-centric frameworks defined for different tasks, including NER, question answering, and relation extraction.

# 7 Ethical Statement

We honor the ACL Code of Ethics. No private data or non-public information was used in this work. All annotators received labor fees corresponding to the number of instances they annotated.

# Acknowledgements

We acknowledge with thanks the discussion with Chenyang Lyu from Dublin City University, as well as the many others who have helped. We would also like to thank anonymous reviewers for their insightful comments and suggestions to help improve the paper. This publication has emanated from research conducted with the financial support of the Pioneer and "Leading Goose" R&D Program of Zhejiang under Grant Number 2022SDXHDX0003 and Science Foundation Ireland (SFI) under Grant Number [12/RC/2289_P2]. Yue Zhang is the corresponding author.

# References

Eoin Delaney, Derek Greene, and Mark T Keane. 2021. Uncertainty estimation and out-of-distribution detection for counterfactual explanations: Pitfalls and solutions. arXiv preprint arXiv:2107.09734.
Fuli Feng, Jizhi Zhang, Xiangnan He, Hanwang Zhang, and Tat-Seng Chua. 2021. Empowering language understanding with counterfactual reasoning. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 2226–2236, Online. Association for Computational Linguistics.
Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Rachel Bellamy, and Klaus Mueller. 2021. Explainable active learning (XAL): Toward AI explanations as interfaces for machine teachers. Proc. ACM Hum.-Comput. Interact., 4(CSCW3).

Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R Bowman, and Noah A Smith. 2018.
Annotation artifacts in natural language inference data. arXiv preprint arXiv:1803.02324. +Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 2021-2031. Association for Computational Linguistics. +Xisen Jin, Zhongyu Wei, Junyi Du, Xiangyang Xue, and Xiang Ren. 2019. Towards hierarchical importance attribution: Explaining compositional semantics for neural sequence models. In International Conference on Learning Representations. +Divyansh Kaushik, Eduard Hovy, and Zachary C Lipton. 2020. Learning the difference that makes a difference with counterfactually augmented data. International Conference on Learning Representations (ICLR). +Divyansh Kaushik, Amrith Setlur, Eduard Hovy, and Zachary C Lipton. 2021. Explaining the efficacy of counterfactually augmented data. International Conference on Learning Representations (ICLR). +Katherine Keith, David Jensen, and Brendan O'Connor. 2020. Text and causal inference: A review of using text to remove confounding from causal estimates. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5332-5344. +Eoin M Kenny and Mark T Keane. 2021. On generating plausible counterfactual and semi-factual explanations for deep learning. +Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. International Conference on Learning Representations. +Todd Kulesza, Margaret Burnett, Weng-Keen Wong, and Simone Stumpf. 2015. Principles of explanatory debugging to personalize interactive machine learning. In Proceedings of the 20th International Conference on Intelligent User Interfaces, IUI '15, page 126-137, New York, NY, USA. Association for Computing Machinery. 
+Todd Kulesza, Simone Stumpf, Margaret Burnett, Weng-Keen Wong, Yann Riche, Travis Moore, Ian Oberst, Amber Shinsel, and Kevin McIntosh. 2010. Explanatory debugging: Supporting end-user debugging of machine-learned programs. In 2010 IEEE Symposium on Visual Languages and Human-Centric Computing, pages 41-48. +Piyawat Lertvittayakumjorn, Lucia Specia, and Francesca Toni. 2020. Find: Human-in-the-loop debugging deep text classifiers. +Piyawat Lertvittayakumjorn and Francesca Toni. 2021. Explanation-based human debugging of nlp models: A survey. arXiv preprint arXiv:2104.15135. + +Jiazheng Li, Linyi Yang, Barry Smyth, and Ruihai Dong. 2020. Maec: A multimodal aligned earnings conference call dataset for financial risk prediction. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 3063-3070. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. ArXiv, abs/1907.11692. +Jinghui Lu, Maeve Henchion, Ivan Bacher, and Brian Mac Namee. 2021. A sentence-level hierarchical bert model for document classification with limited labelled data. In *Discovery Science*, pages 231-241, Cham. Springer International Publishing. +Jinghui Lu and Brian MacNamee. 2020. Investigating the effectiveness of representations based on pretrained transformer-based language models in active learning for labelling text datasets. arXiv preprint arXiv:2004.13138. +Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Linguistics. +Katerina Margatina, Giorgos Vernikos, Loic Barrault, and Nikolaos Aletras. 2021. 
Active learning by acquiring contrastive examples. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Underline Science Inc. +Rachel McCloy and Ruth MJ Byrne. 2002. *Semifactual "even if" thinking*. *Thinking & Reasoning*, 8(1):41-67. +George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38(11):39-41. +Jianmo Ni, Jiacheng Li, and Julian McAuley. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 188-197, Hong Kong, China. Association for Computational Linguistics. +Chen Qian, Fuli Feng, Lijie Wen, Chunping Ma, and Pengjun Xie. 2021. Counterfactual inference for text classification debiasing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5434-5445. + +Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Anchors: High-precision model-agnostic explanations. In Proceedings of the AAAI conference on artificial intelligence, volume 32. +Neal J Roese. 1997. Counterfactual thinking. *Psychological bulletin*, 121(1):133. +Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017. SemEval-2017 task 4: Sentiment analysis in Twitter. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 502-518, Vancouver, Canada. Association for Computational Linguistics. +Burr Settles. 2009. Active learning literature survey. +Xiaoting Shao, Arseny Skryagin, Wolfgang Stammer, Patrick Schramowski, and Kristian Kersting. 2021. Right for better reasons: Training differentiable models by constraining their influence functions. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11):9533-9540. 
+Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics. +Megha Srivastava, Tatsunori Hashimoto, and Percy Liang. 2020. Robustness to spurious correlations via human annotations. In International Conference on Machine Learning, pages 9109-9119. PMLR. +Wolfgang Stammer, Patrick Schramowski, and Kristian Kersting. 2021. Right for the right concept: Revising neuro-symbolic concepts by interacting with their explanations. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pages 3619-3629. Computer Vision Foundation / IEEE. +Simone Stumpf, Vidya Rajaram, Lida Li, Weng-Keen Wong, Margaret Burnett, Thomas Dietterich, Erin Sullivan, and Jonathan Herlocker. 2009. Interacting meaningfully with machine learning systems: Three experiments. Int. J. Hum.-Comput. Stud., 67(8):639-662. +Stefano Teso, Andrea Bontempelli, Fausto Giunchiglia, and Andrea Passerini. 2021. Interactive label cleaning with example-based explanations. Proceedings of the Thirty-fifth Conference on Neural Information Processing Systems. +Stefano Teso and Kristian Kersting. 2019. Explanatory interactive machine learning. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES '19, page 239-245, New York, NY, USA. Association for Computing Machinery. + +Lifu Tu, Garima Lalwani, Spandana Gella, and He He. 2020. An empirical study on robustness to spurious correlations using pre-trained language models. Transactions of the Association for Computational Linguistics, 8:621-633. +Tianlu Wang, Diyi Yang, and Xuezhi Wang. 2021. Identifying and mitigating spurious correlations for improving robustness in nlp models. 
arXiv preprint arXiv:2110.07736. +Zhao Wang and Aron Culotta. 2021. Robustness to spurious correlations in text classification via automatically generated counterfactuals. In AAAI. +Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6382-6388, Hong Kong, China. Association for Computational Linguistics. +Xingjiao Wu, Luwei Xiao, Yixuan Sun, Junhang Zhang, Tianlong Ma, and Liang He. 2021. A survey of human-in-the-loop for machine learning. arXiv preprint arXiv:2108.00941. +Linyi Yang, Eoin Kenny, Tin Lok James Ng, Yi Yang, Barry Smyth, and Ruihai Dong. 2020a. Generating plausible counterfactual explanations for deep transformers in financial text classification. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6150-6160. +Linyi Yang, Jiazheng Li, Padraig Cunningham, Yue Zhang, Barry Smyth, and Ruihai Dong. 2021. Exploring the efficacy of automatically generated counterfactuals for sentiment analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 306-316, Online. Association for Computational Linguistics. +Linyi Yang, Tin Lok James Ng, Barry Smyth, and Riuhai Dong. 2020b. Http: Hierarchical transformer-based multi-task learning for volatility prediction. In Proceedings of The Web Conference 2020, pages 441-451. +Huihan Yao, Ying Chen, Qinyuan Ye, Xisen Jin, and Xiang Ren. 2021. Refining neural networks with compositional explanations. arXiv preprint arXiv:2103.10415. +Omar Zaidan, Jason Eisner, and Christine Piatko. 2007. Using "annotator rationales" to improve machine learning for text categorization. 
In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 260-267, Rochester, New York. Association for Computational Linguistics. + +Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. Advances in neural information processing systems, 28:649-657. \ No newline at end of file diff --git a/arationalecentricframeworkforhumanintheloopmachinelearning/images.zip b/arationalecentricframeworkforhumanintheloopmachinelearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..10c2bfbb387c15cc1e20224324183d005f39a558 --- /dev/null +++ b/arationalecentricframeworkforhumanintheloopmachinelearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c4124da5017b6224461d517b61f5ac96696fc903f58f7efcfee3b008f364f521 +size 355715 diff --git a/arationalecentricframeworkforhumanintheloopmachinelearning/layout.json b/arationalecentricframeworkforhumanintheloopmachinelearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..71b96041ed36ae8c842ab7e4f57cd2ed13b5eb33 --- /dev/null +++ b/arationalecentricframeworkforhumanintheloopmachinelearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5ca74cbf16b2972e1644da313de72c0ae12e020e95dc44e1f35ce20a18db74ed +size 325868 diff --git a/aspectnewsaspectorientedsummarizationofnewsdocuments/089795ab-01da-401a-be53-e86cc2514c79_content_list.json b/aspectnewsaspectorientedsummarizationofnewsdocuments/089795ab-01da-401a-be53-e86cc2514c79_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..557d45252bae5cf9c4c6e7926b25cc9c045d3d7b --- /dev/null +++ b/aspectnewsaspectorientedsummarizationofnewsdocuments/089795ab-01da-401a-be53-e86cc2514c79_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:cc48e5c5e8f0ca7440d1c4e541a6133e76b7bb2023b67977af1c7caec903e417 +size 87105 diff --git a/aspectnewsaspectorientedsummarizationofnewsdocuments/089795ab-01da-401a-be53-e86cc2514c79_model.json b/aspectnewsaspectorientedsummarizationofnewsdocuments/089795ab-01da-401a-be53-e86cc2514c79_model.json new file mode 100644 index 0000000000000000000000000000000000000000..a61bc2619ef67ab9cecbe48ae7e34dcdf89fbb5a --- /dev/null +++ b/aspectnewsaspectorientedsummarizationofnewsdocuments/089795ab-01da-401a-be53-e86cc2514c79_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:86964099569f8d16138593b16b14b4205a5a4b18603cfafb067cc32e77b17ada +size 105325 diff --git a/aspectnewsaspectorientedsummarizationofnewsdocuments/089795ab-01da-401a-be53-e86cc2514c79_origin.pdf b/aspectnewsaspectorientedsummarizationofnewsdocuments/089795ab-01da-401a-be53-e86cc2514c79_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..07a7e7d98bd3a28a7fe3b4b8da18a8ae4712a184 --- /dev/null +++ b/aspectnewsaspectorientedsummarizationofnewsdocuments/089795ab-01da-401a-be53-e86cc2514c79_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3035e4f039b8b6cb9d3d63182a802b5de5bc8660487d03c20ae1ed9f1c4589c9 +size 458556 diff --git a/aspectnewsaspectorientedsummarizationofnewsdocuments/full.md b/aspectnewsaspectorientedsummarizationofnewsdocuments/full.md new file mode 100644 index 0000000000000000000000000000000000000000..8f18b5ad464192ac411dd6c61360131a2b1d6a39 --- /dev/null +++ b/aspectnewsaspectorientedsummarizationofnewsdocuments/full.md @@ -0,0 +1,325 @@ +# ASPECTNEWS: Aspect-Oriented Summarization of News Documents + +Ojas Ahuja $^{1}$ , Jiacheng Xu $^{1}$ , Akshay Gupta $^{1}$ , Kevin Horecka $^{2}$ , Greg Durrett $^{1}$ + +1The University of Texas at Austin + +2Walmart NexTech + +{ojas, jcxu}@utexas.edu, gdurrett@cs.utexas.edu + +# Abstract + +Generic summaries try to cover an entire document and query-based summaries 
try to answer document-specific questions. But real users' needs often fall in between these extremes and correspond to aspects, high-level topics discussed among similar types of documents. In this paper, we collect a dataset of realistic aspect-oriented summaries, ASPECTNEWS, which covers different subtopics about articles in news sub-domains. We annotate data across two domains of articles, earthquakes and fraud investigations, where each article is annotated with two distinct summaries focusing on different aspects for each domain. A system producing a single generic summary cannot concisely satisfy both aspects. Our focus in evaluation is how well existing techniques can generalize to these domains without seeing in-domain training data, so we turn to techniques to construct synthetic training data that have been used in query-focused summarization work. We compare several training schemes that differ in how strongly keywords are used and how oracle summaries are extracted. Our evaluation shows that our final approach yields (a) focused summaries, better than those from a generic summarization system or from keyword matching; (b) a system sensitive to the choice of keywords.1

# 1 Introduction

Recent progress in text summarization (See et al., 2017; Liu and Lapata, 2019; Zhang et al., 2020a; Lewis et al., 2020) has been supported by the availability of large amounts of supervised data, such as the CNN/Daily Mail and XSum datasets (Hermann et al., 2015; Narayan et al., 2018), which provide a single, generic, topic-agnostic summary. However, a document often contains different aspects (Titov and McDonald, 2008; Woodsend and Lapata, 2012) that might be relevant to different users. For example, a political science researcher studying responses to earthquakes may want a summary with information about government-led recovery efforts and broader social impacts, not a high-level generic summary of what happened.
Systems should be able to produce summaries tailored to the diverse information needs of different users. Crucially, these systems should be usable in realistic settings where a user is interested in vague aspects of the document, instead of a highly focused query.

In this work, we present a new dataset for evaluating single-document aspect-oriented extractive summarization, which we call ASPECTNEWS. We derive subsets of examples from CNN/Daily Mail following certain topics, namely earthquakes and fraud reports. These domains are special in that the articles within them have several aspects which are repeatedly mentioned across articles and form coherent topics, e.g., the impact on human lives of an earthquake. We ask annotators to select sentences relevant to such information needs, which correspond to imagined use cases. Inter-annotator agreement on full summaries is low due to the inherent subjectivity of the task, so rather than coming up with a consensus summary, we instead primarily evaluate against soft labels based on the fraction of annotators selecting a given sentence.

To benchmark performance on this dataset, we build a system that can summarize a document conditioned on certain aspect-level keywords without assuming annotated training data for those aspects. Since there are no large-scale supervised training sets suitable for this purpose, we explore methods to generate aspect-oriented training data from generic summaries. We compare these with past approaches (Frermann and Klementiev, 2019) on their ability to adapt to our aspect-oriented setting, which requires taking aspectual keyword inputs (as opposed to specific entities or queries) and being appropriately sensitive to these keywords.

Our experiments on our ASPECTNEWS dataset

![](images/79e1cbb12062951b97bd22cd945cc029233850d8020f0222cc1d94f7abc14518.jpg)
Figure 1: Examples of an earthquake-related article paired with extractive summaries from the CNN/DM dataset.
"Generic" represents the selection of a general purpose summarization model. "Geo(graphy)" (colored in green) and "Recovery" (colored in orange) indicate our aspects of interest for the summary. We highlight aspect-relevant phrases in the document. + +1. At least 42 people have died with hundreds more injured after a 6.2-magnitude earthquake hit Indonesia's Sulawesi island early Friday, according to Indonesia's Disaster Management Agency. +2. The epicenter of the quake, which struck at 1:28 a.m. Jakarta time, was 6 kilometers (3.7 miles) northeast of the city of Majene, at a depth of 10 kilometers (6.2 miles), according to Indonesia's Meteorology, Climatology and Geophysics Agency. +3. Thirty-four people died in the city of Mamuju, to the north of the epicenter, while another eight died in Majene. +4. In Majene, at least 637 were injured and 15,000 residents have been displaced, according to [...] +7. Many people are still trapped under collapsed buildings, according to local search and rescue teams. +8. Rescuers search for survivors at a collapsed building in Mamuju city in Indonesia. +9. "Our priority is saving victims who are still buried under the buildings," Safaruddin Sanusi, head of West Sulawesi's Communications and Information Department, told CNN Friday. [...] +12. "Most...of the people in Mamuju city are now displaced. They are afraid to stay at their houses." +15. "We need more extrication equipment and more personnel to work fast on saving victims trapped under the building." + +and the SPACE dataset (Angelidis et al., 2021) find that our model produces summaries that score higher on agreement with human aspect-oriented annotations than generic summarization models, previous aspect-oriented models, and baselines such as keyword matching. Second, we find that the summaries our model generates are sensitive to the choice of keywords. 
Third, we find that our model performs competitively with leading models on the SPACE dataset in the multi-document setting. Finally, we find that abstractive query-focused systems (He et al., 2020) hallucinate significantly in this setting, justifying our choice of an extractive framework here.

# 2 Related Work

Relatively little recent work has focused on aspect-oriented summarization. One line of research focuses on summarization of documents with respect to specific queries (Baumel et al., 2014; Krishna and Srinivasan, 2018; Frermann and Klementiev, 2019; He et al., 2020; Xu and Lapata, 2020a). However, a query such as "What facilities were damaged in the Oaxacan region?" is a document-specific query, which cannot be applied to other earthquake news articles and bears more resemblance to the task of long-form question answering (Fan et al., 2019). Our focus is closer to work on attribute extraction from opinions or reviews (Dong et al., 2017; Angelidis and Lapata, 2018), as factors like geographic details and recovery efforts are usually mentioned in many earthquake stories. Recent work has also begun to study summarization from an interactive perspective (Shapira et al., 2021); our approach could be naturally extended in this direction.

Methods Historically, most work on query-focused summarization has addressed the multi-document setting. You et al. (2011) apply regression models to this task, and Wei et al. (2008) approach the problem from the perspective of ranking sentences by their similarity to the query. These classic methods rely integrally on the multi-document setting, and so cannot be easily adapted to our setup. More recently, Xu and Lapata (2020b) focus on multi-document summarization by modeling the applicability of candidate spans to both the query and their suitability in a summary. Angelidis et al. (2021) explore a method using quantized transformers for aspect-oriented summarization, to which we compare.
+ +Datasets There are several differences between ASPECTNEWS and other existing aspect-oriented summarization datasets. Firstly, ASPECTNEWS focuses on single-document summarization, while similar aspect-oriented datasets such as the SPACE dataset of reviews (Angelidis et al., 2021) and other attribute extraction settings (Dong et al., 2017; Angelidis and Lapata, 2018) are multi-document. Second, our dataset focuses on generalization to new aspect types, rather than assuming we've trained on data with those same aspects; that is, how can we produce appropriate aspect-oriented summaries of earthquake articles even if we have not trained on any? Third, compared to query-focused settings, our aspect-oriented dataset is closer to the actual information needs of users, since users are often interested in summaries about broad subtopics rather than specific queries. + +The TAC 2010/2011 summarization datasets2 + +
| Domain | Aspect | Prompt | Keywords |
| --- | --- | --- | --- |
| Earthquake | GEO | geography, region, or location | region, location, country, geography, miles |
| Earthquake | RECV | recovery and aid efforts (death toll and injuries, foreign/domestic government assistance, impact on survivors) | recovery, aid, survivor, injury, death |
| Fraud | PEN | penalty or consequences for the fraudster, or for others | penalty, consequences, jailed, fined, court |
| Fraud | NATURE | nature of the fraud: the amount of money taken, benefits for the fraudster, and how the fraud worked | amount, money, bank, stolen, time |
+ +Table 1: Prompts and keywords used for each of our two domains: Earthquake and Fraud. These represent prominent topics that users might be interested in. + +propose guided summarization tasks that involve similar aspects. However, each article cluster in TAC has a single, fixed set of aspects that don't differ substantially from what a generic summary should capture. The DUC 2005/2006 task (Dang, 2005) does not have aspects but rather can accept a "granularity" level at which to produce the summary. Christensen et al. (2014) produce a hierarchy of relatively short summaries among multiple documents. + +Other previous work (He et al., 2020; Xu and Lapata, 2020a; Tan et al., 2020) proposes constructing keyword sets for each individual document for training. Krishna and Srinivasan (2018); Frermann and Klementiev (2019) condition on topic tokens referring to the topic tags in metadata. Compared to these other approaches, we focus more on evaluation of aspects, as opposed to a purely keyword- and query-driven view. + +# 3 Aspect-Oriented Data Collection + +We begin by considering our target application: users who have specific information needs that they want to be satisfied. This consideration broadly falls under the category of purpose factors defined by Jones (1998) and should be accounted for in the summarization process. + +Our data collection process involves the following steps: (1) Identifying clusters of articles in our target domains from a large corpus of news summaries. (2) Manually specifying multiple user intents per target domain, representing the aspect of the summarization process. (3) Crowdsourcing annotation of extractive summaries in these domains based on the user intents. + +# 3.1 Target Domains + +We draw our datasets from the English-language CNN/Daily Mail summarization dataset (Hermann et al., 2015). We manually identified two domains, earthquakes and fraud, based on inspecting clusters + +of articles in these domains. 
These two domains are ideal for two reasons. First, they contain a significant number of on-topic articles (over 200) after careful filtering. Second, the articles in these domains are reasonably homogeneous: each article would often feature at least broadly similar information about an event, making aspect-based summarization well-defined in these cases.3 Although not completely universal, most earthquake articles refer to some information about each of two aspects here: geography (GEO) and recovery (RECV). Figure 1 shows an example of an earthquake-related article. Similarly, most fraud articles include information about the penalty (PEN) imposed for the fraud, and the nature (NATURE) of the fraud. + +To retrieve our examples from these two domains, we first encode each article in CNN/DM corpus $\mathcal{C}$ with a text encoder $E$ . We adopt the Universal Sentence Encoder (Cer et al., 2018) for its efficiency and robustness. We create an exemplar sentence for each domain to serve as the target to retrieve the most relevant content. We describe the choice of exemplar sentences in Section A.2. We measure the similarity of each candidate article $c$ and the exemplar sentence $s$ as the average of the cosine similarity between each of the candidate article's sentences $c_{i}$ and the exemplar, $sim(c,s) = \frac{1}{n}\sum_{i = 1}^{n}\cos (E(c_i),E(s))$ . + +We found this procedure to be more robust than simple keyword matching for retrieving articles with coherent aspects; for example, keyword matching for "earthquakes" resulted in returning articles primarily about tsunamis due to the imbalanced data distribution. + +# 3.2 Specifying User Intents + +With these two domains, we examine our dataset to derive aspects that simulate realistic information needs of users. + +Table 1 describes the domain, aspect, annotation prompt and keywords used for evaluation. For each domain, we establish two aspects. 
Each aspect must be well-represented in the corpus and easy to understand by both readers and annotators. The authors annotated these aspects based on inspection of the articles and brainstorming about user intents based on scenarios. For example, the penalty scenario was motivated by a real use case derived from the authors' colleagues investigating reporting of wrongdoing in news articles at scale, where summarization can be used to triage information. + +# 3.3 Crowdsourcing + +Finally, to construct actual extractive summaries for evaluation in these domains, we presented the user intents to annotators on Amazon Mechanical Turk. An annotator is shown a description of intent from Table 1 along with an article and is asked to identify a few sentences from the article that constitute a summary. They can rate each sentence on a scale from 0 to 3 to account for some sentences being more relevant than others. Their final summary, which they are shown to confirm before submitting, consists of all sentences rated with a score of at least 1. The exact prompt is shown in the Appendix. + +Each article was truncated to 10 sentences for ease of annotation. This assumption was reasonable for the two domains we considered, and the truncation approach has been used in See et al. (2017) without much performance degradation. We found that annotators were unlikely to read a full length article due to the inherent lead bias in news articles, so this also helped simplify the task. In order to maintain a high quality of annotations, we discard annotations that do not have at least a single selected sentence in common with at least a single other annotator on that sample. In practice, this only discards a handful of isolated annotations. + +# 3.4 Data Analysis & Annotator Agreement + +In Table 2, we show the basic statistics of the collected dataset. We show the distribution of the number of sentences agreed upon by the annotators in Table 3. 
We see that annotators somewhat agree in most cases, but relatively few sentences are uniformly agreed upon by all annotators. Our initial + +
| Aspect | # articles | # sent | # words |
| --- | --- | --- | --- |
| PEN | 100 | 2.90 | 30.5 |
| NATURE | 100 | 2.79 | 29.9 |
| GEO | 100 | 2.53 | 28.4 |
| RECV | 100 | 2.76 | 27.0 |
+ +Table 2: Statistics for the collected datasets. For each aspect we collect 100 articles and each article is annotated by 5 Turkers. #sent and #words are the average number of sentences selected and average number of words in each sentence. + +
| Agreement | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| Freq (%) | 19.61 | 29.26 | 25.16 | 19.16 | 6.80 |
Table 3: Majority agreement distribution of 5 annotators on filtered collected data.

pilot studies also showed that annotators are often unsure where the cutoff is for information to be notable enough to include in a summary. We therefore view this disagreement as inherent to the task, and preserve these disagreements in evaluation rather than computing a consensus summary.

We also compare the overlap between the aspect-oriented annotations and the generic extractive oracle derived from the reference summaries from CNN/DM. In Table 4, the similarity and exact match $^{4}$ between generic oracle summaries and the top-3 annotated sentences are fairly low, which means the annotated aspect-driven summaries differ significantly from the standard extractive oracle.

# 4 Building an Aspect-Oriented System

Our aspect-oriented data collection works well to create labeled evaluation data, but it is difficult to scale to produce a large training set. Identifying suitable domains and specifying user intents requires significant human effort, and collecting real test cases at scale would require a more involved user study.

We build an aspect-oriented model without gold-labeled aspect-oriented training data. We do this by generating keywords for each article in CNN/DM, and training the model to learn the relationship between these keywords and a summary. Our system follows broadly similar principles to He et al. (2020), but in an extractive setting.
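The overlap statistics used in this comparison (Jaccard similarity between sets of selected sentence indices, plus an exact-match rate) can be sketched as follows; the helper names and the toy index sets are our own, not the paper's code:

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of selected sentence indices."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def overlap_stats(pairs):
    """Average Jaccard similarity and exact-match (EM) rate over
    (oracle, annotated) pairs of sentence-index sets."""
    sims = [jaccard(x, y) for x, y in pairs]
    em = sum(set(x) == set(y) for x, y in pairs) / len(pairs)
    return sum(sims) / len(sims), em

# Toy example: the generic oracle picks sentences {0, 1, 2}, while the
# annotators' top-3 is {0, 3, 4} -> Jaccard 1/5, no exact match.
sim, em = overlap_stats([({0, 1, 2}, {0, 3, 4})])
```

Low values of both statistics, as observed against the generic oracle, indicate that the aspect-oriented selections diverge from generic summaries.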
| STDREF vs. | Jaccard Sim. | EM (%) |
| --- | --- | --- |
| PEN | 0.247 | 1.0 |
| NATURE | 0.249 | 2.0 |
| GEO | 0.265 | 2.0 |
| RECV | 0.201 | 1.0 |
+ +Table 4: Comparison of annotation labels and the nonquery focused extractive oracle derived from reference summaries. We take the top-3 most common selected sentences from each aspect-oriented dataset and compute Jaccard similarity between the sets and the percentage of exact matches (EM). + +Article: 1. Justine Greening has called for a major shake-up in the EU aid budget - as it emerged more than half the cash is squandered on relatively rich countries. +2. The International Development Secretary challenged the basis of the £10-billion-a-year budget, which channels cash to countries such as Turkey, Iceland and Brazil. +3. She is pressing for a major shift in policy to target resources at the poorest countries. +4. International Development Secretary Justine Greening today insisted aid money [...] +5. Miss Greening held talks with ministers from [...] +7. Miss Greening said: 'I don't think it's right that the EU still gives money to those countries higher up the [...] +9. Her intervention comes amid mounting concern about the EU aid budget, which [...] total aid budget. [...] + +Keywords: countries, budget, development, 10-billion, Turkey + +Table 5: An example article from CNN/DM and keywords extracted. These keywords indicate both highly specific concepts and broad topic, but a model trained on data with appropriate reference summaries can learn to leverage either specific or generic keywords in the summarization process. + +# 4.1 Keyword-controlled Data + +We present a scheme to generate keywords for each document from the original dataset. CNN/DM consists of pairs $(D, S)$ of a document $D$ and associated summary $S$ . We aim to augment these to form $(D, K, S')$ triples with keywords $K$ and a possibly modified summary $S'$ . Our mixed augmentation technique requires training the model on both $(D, S)$ and $(D, K, S')$ for a given document. We now describe the steps to create this data. 
+ +Keyword Extraction For each document in CNN/DM, we calculate the most important tokens in that document according to their TF-IDF ranking with respect to the entire corpus. Of these tokens, we select the ones that are present in the reference summary. This process selects tokens that are more likely to be consequential in affecting the output summary. + +Reference Summary Computation Since CNN/DM reference summaries are abstractive, + +we need to derive extractive oracle summaries for training; these consist of sentence-level binary decisions $\mathbf{E} = E_1, \ldots, E_m$ for each sentence. Traditionally, this is done by finding a set of sentences that maximize ROUGE-2 (R2) with respect to the reference: $\operatorname{argmax}_{\mathbf{E}} R2(\mathbf{E}, S)$ (Gillick and Favre, 2009; Nallapati et al., 2017). However, training the model to predict $P(S_1, \ldots, S_m \mid D, k)$ , an extractive analogue of He et al. (2020), was insufficient for our extractive model to learn to be sensitive to keywords; it merely learned to return a good generic summary regardless of what keywords were given. + +To instill stronger dependence on the keywords, we made two modifications to this process. First, we modified the reference summary by concatenating the keywords with the reference summary before computing the extractive oracle summary. This concatenation makes the oracle extraction more likely to select sentences containing the keywords, though modifying the reference summary requires maintaining a balance between the influence of keywords and of the original gold summary. + +Second, we use BERTScore (Zhang et al., 2020b, BS) rather than ROUGE-2 to identify sentences that closely match the reference summary. BERTScore turns out to boost the evaluation performance by a large margin, as shown in Table 12, so we use BERTScore for oracle extraction for all our experiments. 
One reason for this is that the ROUGE-2 oracle summaries favor exact keyword matches in selecting sentences, so the trained model simply learned to do keyword matching in extreme cases. Our final reference summary is therefore $\mathrm{argmax}_{\mathbf{E}}BS(\mathbf{E},S + nK)$, where $n$ is a hyperparameter we discuss next.

Keyword Intensity To compute $n$, we introduce another parameter $r$ that controls the ratio of keyword tokens to original reference summary tokens. Higher values of $r$ lead to extracting sentences in a manner more closely approximating keyword matching, but yielding poor standalone summaries. On the other hand, lower values of $r$ may lead to generic summaries insensitive to the keywords. In practice, the number of times a keyword $w$ is concatenated to the original summary $S$ is defined as $n = r \times \frac{\text{len}(S)}{\#(\text{keywords})}$, where $\text{len}(S)$ is the number of tokens in the original summary and $\#(\text{keywords})$ is the total number of keywords available. When $r = 1$, the concatenated keywords have the same total length as the original summary.

Mixed Training We explore a variant of training where we include training data with multiple variants of each original document from the dataset. Each document in the original dataset is mapped to two training samples: (1) a document without keywords and an unmodified oracle extractive summary, and (2) a document with keywords and an oracle extractive summary built with our modification procedure.

# 4.2 Aspect-Oriented Model

Our model is trained to predict a summary $S$ from a document-keywords pair $(D, K)$. Following BERT-SUM (Liu and Lapata, 2019), we fine-tune BERT (Devlin et al., 2019) for extractive summarization using our modified CNN/Daily Mail dataset with keywords. During training, we preprocess the original document and use the modified oracle extractive summary as the gold output. During inference, the keywords are user-defined. This scheme is similar to He et al.
(2020), but differs in that it is extractive.

We refer to this model, trained on our BERTScore references with the mixed training scheme, as AOSUMM.

# 5 Experiments

We evaluate our model on the ASPECTNEWS dataset, comparing performance on aspect-oriented summarization to several baselines. We additionally experiment on the SPACE multi-document dataset (Angelidis et al., 2021) to provide a point of comparison on a prior dataset and show that our aspect-oriented method is competitive with other systems.

# 5.1 Metrics

On ASPECTNEWS, we evaluate our model against the annotations using $\mathrm{F}_1$ score and ROUGE scores. It is impossible to achieve $100~\mathrm{F}_1$ on this task due to inherent disagreement between annotators. One downside of $\mathrm{F}_1$ is that the model may be penalized even when the predicted sentence is very similar to the annotation; for this reason, we also calculate ROUGE-1, -2, and -L scores (Lin, 2004). On the SPACE dataset, the gold summaries are abstractive, so we only calculate ROUGE scores.

# 5.2 Baselines & Competitor Models

On the SPACE corpus, we primarily focus on comparisons to the quantized transformer (QT) (Angelidis et al., 2021) and CTRLSUM (He et al., 2020). For the ASPECTNEWS dataset, we benchmark our system against several other models and baselines, which we now describe.

Heuristic and QA Baselines KEYWORD takes the keywords described in Table 1 and greedily finds the first occurrence of each keyword in the input document. STDREF stands for the extractive oracle given the original reference summaries from CNN/DM. QA uses an ELMo-BiDAF question answering model (Seo et al., 2017; Peters et al., 2018) to find answers to synthetic questions "What is {keyword}?" for each keyword in the article. We select the sentence where the selected span is located as a sentence to extract. Each of these three techniques is an extractive baseline in which top sentences are selected.
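A minimal sketch of the KEYWORD baseline as described (greedily take the first sentence containing each keyword); the case-folding, substring matching, and example document are our assumptions:

```python
def keyword_baseline(sentences, keywords):
    """KEYWORD baseline sketch: for each keyword, greedily select the first
    sentence in which it occurs; return selected indices in document order."""
    selected = []
    for kw in keywords:
        for i, sent in enumerate(sentences):
            if kw.lower() in sent.lower():
                if i not in selected:
                    selected.append(i)
                break  # greedy: only the first occurrence per keyword
    return sorted(selected)

# Hypothetical 4-sentence article and the GEO keyword set from Table 1.
doc = [
    "A strong earthquake hit the region on Friday.",
    "The epicenter was 6 km northeast of the city.",
    "Rescuers are leading recovery efforts.",
    "Officials reported dozens of injuries.",
]
geo_kws = ["region", "location", "country", "geography", "miles"]
keyword_baseline(doc, geo_kws)  # selects sentence 0 (contains "region")
```

The other two heuristic baselines are extractive in the same way, differing only in how the extracted sentences are chosen.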
Summarization Baselines We also compare our AOSUMM model against text summarization models and query-focused models from previous work (retrained or off-the-shelf). (i) BERTSUM is a bert-base-cased extractive summarization model fine-tuned on CNN/DM (Liu and Lapata, 2019). (ii) BERT-FK shares a similar model architecture with BERTSUM, but its training data comes from Frermann and Klementiev (2019). This data is constructed by interleaving several articles from the CNN/DM dataset together, extracting a coarse aspect from the original URL of one of the articles, and setting the new gold summary to match that article. (iii) CTRLSUM is an off-the-shelf abstractive summarization model with the capability of conditioning on certain queries or prompts (He et al., 2020). (iv) Our model AOSUMM is based on BERTSUM and trained with the techniques described in Section 4.

# 5.3 Results

ASPECTNEWS The experimental results on ASPECTNEWS are shown in Table 6. We find that our model outperforms our baselines across $\mathrm{F}_1$, ROUGE-1, ROUGE-2, and ROUGE-L scores. Significantly, our model generally outperforms keyword matching, demonstrating that semantic match information from training with the BERTScore oracle may be more useful than training with a ROUGE oracle in terms of reproducing annotators' judgments; recall that our model has not been trained on any ASPECTNEWS data and only on our synthetic data.
| Model | PENANNOT F1 | R-1 | R-2 | R-L | NATUREANNOT F1 | R-1 | R-2 | R-L | GEOANNOT F1 | R-1 | R-2 | R-L | RECVANNOT F1 | R-1 | R-2 | R-L |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| STDREF | 32.9 | 51.7 | 39.5 | 40.7 | 33.5 | 53.0 | 41.3 | 42.0 | 34.9 | 51.9 | 41.3 | 42.1 | 28.2 | 45.7 | 33.0 | 37.4 |
| KEYWORD | 39.2 | 62.0 | 50.6 | 47.1 | 38.3 | 58.7 | 46.6 | 45.0 | 50.9 | 67.9 | 59.9 | 53.7 | 32.8 | 53.3 | 41.6 | 43.9 |
| QA | 30.7 | 46.9 | 36.8 | 37.7 | 26.5 | 39.1 | 28.8 | 32.2 | 52.4 | 63.0 | 58.9 | 56.8 | 32.9 | 46.6 | 36.5 | 38.5 |
| BERTSUM | 40.1 | 60.1 | 47.8 | 46.5 | 41.6 | 63.5 | 51.7 | 49.4 | 46.4 | 65.4 | 56.4 | 51.4 | 37.3 | 55.8 | 44.8 | 44.6 |
| BERT-FK | 24.5 | 43.9 | 28.9 | 33.2 | 21.0 | 40.8 | 23.4 | 28.3 | 23.9 | 42.4 | 30.3 | 32.9 | 21.4 | 35.4 | 21.3 | 26.9 |
| CTRLSUM | N/A | 47.8 | 30.2 | 33.0 | N/A | 51.7 | 35.3 | 35.4 | N/A | 21.6 | 8.0 | 19.6 | N/A | 32.3 | 11.6 | 19.2 |
| AOSUMM | 44.8 | 64.2 | 54.1 | 51.6 | 45.2 | 64.4 | 53.9 | 48.0 | 49.9 | 69.1 | 61.2 | 54.2 | 39.6 | 59.5 | 49.1 | 46.7 |
| Max | 60.3 | | | | 61.5 | | | | 70.2 | | | | 61.4 | | | |
+ +Table 6: Performance comparison of our model (AOSUMM) versus baselines on the ASPECTNEWS dataset in both the earthquakes and fraud domains, using our geography (GEOANNOT) and recovery (RECVANNOT) aspects for the former and penalty (PENANNOT), and nature (NATUREANNOT) aspects for the latter. The last row displays the maximum possible $\mathrm{F}_1$ score due to the disagreement of annotation. + +
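The ceiling in the "Max" row can be illustrated with a small brute-force sketch; averaging F1 over the individual annotators' selections is our assumption about the aggregation, not necessarily the paper's exact protocol:

```python
from itertools import combinations

def f1(pred, gold):
    """Sentence-level F1 between two sets of selected sentence indices."""
    pred, gold = set(pred), set(gold)
    if not pred or not gold:
        return 0.0
    p = len(pred & gold) / len(pred)
    r = len(pred & gold) / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0

def avg_f1(pred, annotations):
    """Average F1 of one prediction against each annotator's selection."""
    return sum(f1(pred, a) for a in annotations) / len(annotations)

def max_avg_f1(annotations, n_sents=10, max_k=4):
    """Best achievable average F1 by any single prediction: when annotators
    disagree, no prediction satisfies everyone, so the ceiling is below 100."""
    return max(
        avg_f1(set(pred), annotations)
        for k in range(1, max_k + 1)
        for pred in combinations(range(n_sents), k)
    )

# Two annotators who partially disagree: the best single prediction is
# {0, 1, 2}, which scores F1 = 0.8 against each annotator.
ceiling = max_avg_f1([{0, 1}, {0, 2}])  # -> 0.8
```

With five annotators per article, as in ASPECTNEWS, the same enumeration gives the per-article ceiling reported in the table's last row.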
| Model | Service | Location | Food | Building | Cleanliness | Rooms |
| --- | --- | --- | --- | --- | --- | --- |
| BERTSUM | 12.4 | 16.7 | 13.0 | 15.6 | 13.8 | 12.5 |
| CTRLSUM | 20.1 | 18.6 | 17.4 | 18.9 | 23.3 | 19.7 |
| QT | 26.0 | 23.6 | 17.7 | 16.0 | 25.1 | 21.6 |
| AOSUMM | 26.9 | 20.3 | 17.4 | 16.4 | 22.8 | 21.6 |
We note that our model's performance falls behind some baselines, such as keyword matching, in the geography aspect; this may be because the aspect is relatively homogeneous and can be easily approximated by keyword matching.

SPACE The results on all the aspects of the SPACE dataset are shown in Table 7. All of the aspect-oriented models exceed the performance of the generic summaries produced by BERTSUM. We also find that our model performs competitively with the quantized transformer (QT) (Angelidis et al., 2021) and CTRLSUM (He et al., 2020) methods in this dataset. This is a surprising result: the AOSUMM model is trained only with out-of-domain synthetic data, without access to the aspects before the keywords are specified at test time. Additionally, this is an abstractive task that we are applying an extractive model to.

# 5.4 Ablations and Analysis

Keyword Sensitivity We evaluate the sensitivity of the model to different keywords. There is

Table 7: ROUGE-L scores on the SPACE dataset of our model, AOSUMM, versus BERTSUM, CTRLSUM, and the quantized transformer (QT). Despite being an extractive model, our approach is competitive with strong query-focused or aspect-based models.
| KW | F1 | R-1 | R-2 | R-L | F1 | R-1 | R-2 | R-L |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PEN | 44.8 | 64.2 | 54.1 | 51.6 | 41.8 | 60.8 | 49.5 | 46.5 |
| NATURE | 44.3 | 65.5 | 56.0 | 51.3 | 45.2 | 64.4 | 53.9 | 48.0 |
| GEO | 49.9 | 69.1 | 61.2 | 54.2 | 38.0 | 56.2 | 45.3 | 46.2 |
| RECV | 42.8 | 60.4 | 49.7 | 47.8 | 39.6 | 59.5 | 49.1 | 46.7 |
+ +Table 8: Keyword sensitivity analysis broken down by domain of ASPECTNEWS. + +
|   | Jaccard Sim. | EM (%) |
| --- | --- | --- |
| PENKw vs. NATUREKw | 0.657 | 21.0 |
| GEOKw vs. RECVKw | 0.559 | 22.0 |
Table 9: Difference in AOSUMM outputs with different keywords. We compute Jaccard similarity between the sets and the percentage of Exact Matches (EM).

some overlap between the summaries returned by different keyword sets, as shown by the Jaccard similarity: some sentences may fit under both GEO and RECV, or both PEN and NATURE. Table 9 shows statistics of this, with the Fraud keyword sets yielding more similar summaries than those in Earthquake. We also confirm that using the keywords "matched" to our setting outperforms using other sets of keywords in that domain (Table 8), suggesting that our model is picking summaries in a keyword-driven fashion.

Keyword Intensity We can vary the parameter $r$ controlling the number of times we append the keywords to the reference summary in order to generate the oracle extractive summary. We experiment with different levels of intensity and show the results in Table 10. In most cases, $r = 1$ works well across all the datasets.
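The keyword-intensity rule $n = r \times \text{len}(S)/\#(\text{keywords})$ can be sketched as follows; rounding $n$ to an integer repeat count and the token-level representation are our assumptions:

```python
def concat_keywords(summary_tokens, keywords, r=1.0):
    """Build the modified reference S + nK used for oracle extraction:
    each keyword is repeated n = r * len(S) / #keywords times, so at
    r = 1 the appended keywords are roughly as long as the summary."""
    n = max(1, round(r * len(summary_tokens) / len(keywords)))
    return summary_tokens + [kw for kw in keywords for _ in range(n)]

# Hypothetical 10-token summary with 2 keywords -> each repeated n = 5 times.
summary = "the quake struck near the coast and aid is arriving".split()
modified = concat_keywords(summary, ["recovery", "aid"], r=1.0)
```

Sweeping $r$ in this construction trades keyword sensitivity against standalone summary quality, which is what the intensity comparison measures.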
|   | GEO | RECV | PEN | NATURE |
| --- | --- | --- | --- | --- |
| r = 0.5 | 48.4 | 40.0 | 41.9 | 42.7 |
| r = 1.0 | 49.9 | 39.6 | 44.8 | 45.2 |
| r = 2.0 | 49.0 | 39.4 | 41.9 | 42.0 |
Table 10: Comparison of various levels of keyword intensity. We experiment with different levels of keyword intensity when constructing the oracle and train our AOSUMM model in each setting. We show the $\mathrm{F}_1$ between the model's predictions and the human annotations. The larger the $r$, the more keywords are concatenated.

# 6 Qualitative Evaluation & Comparison

Extractive vs. Abstractive Comparison It is difficult to directly compare the quality of summaries produced by an extractive model to those produced by an abstractive model. Abstractive models do not extract individual sentences from the document, so direct $\mathrm{F}_1$ evaluations cannot be compared in the manner of Table 6. ROUGE scores are a misleading comparison given that an extractive model will be better matched to our extractive ground truths. Therefore, we perform a qualitative analysis to determine the models' relative responsiveness to keywords and their relative advantages and disadvantages.[5]

Keyword Sensitivity Comparison Although both CTRLSUM and AOSUMM are sensitive to the choice of keywords and alter their summaries in response to different keywords, CTRLSUM often either hallucinates false information (Maynez et al., 2020) or simply rewrites the prompt in the generated summary. We found that, under the GEO keywords in the earthquakes domain alone, out of 100 sample articles the bigram "not known" appears 27 times in descriptions of the earthquake's location, and "not immediately known" appears another 24 times. The CTRLSUM model frequently rephrases the prompt rather than synthesizing information in the document related to the keywords into a cogent summary.

Comparison of Factuality of Output Table 11 shows one example of CTRLSUM hallucination in the GEO case. Here, the model also rewrites the prompt and inserts it into the summary without
| Sel | Article |
| --- | --- |
| G | (CNN) – A 7.2-magnitude earthquake has struck south of the Mariana Islands, according to the U.S. Geological Survey. |
|   | The Mariana Islands – an archipelago in the western Pacific Ocean – are made up of two U.S. territories, Guam and the Commonwealth of the Northern Mariana Islands. |
|   | The islands sit about three-quarters of the way from Hawaii to the Philippines. |
| R | The Pacific Tsunami Warning Center did not issue a tsunami warning after the quake, which struck at 7:19 a.m. Saturday (5:19 p.m. ET Friday). |
| R | "We wouldn't expect any kind of significant tsunami for this event," said the center's director, Charles McCreery, noting that the quake's magnitude was relatively low to provoke one. |
| R | There were no immediate reports of casualties or damage, emergency management officials said. |
| G | The quake struck about 375 kilometers (233 miles) west-southwest of Hagatna, Guam, and 445 kilometers (276 miles) west-southwest of Rota, Northern Mariana Islands. |

CTRLSUM GEO

NEW: The location of the quake is in the province of Yucatán. NEW: There are no immediate reports of casualties or damage. The quake is centered about 375 kilometers (233 miles) west-southwest of Hagatna, Guam. The U.S. Geological Survey says it was a 7.2-magnitude quake. The earthquake is centered in the Yucatán province of Mexico. The country's geography is similar to that of the U.N. region.

CTRLSUM RECV
NEW: The death toll from the quake is not immediately known. The U.S. Geological Survey reports a 7.2-magnitude quake. The Mariana Islands sit about three-quarters of the way from Hawaii to the Philippines. "There is a survivor. There is an injury. There will be an aid.recovery process," the U.N. secretary-general says. The quake is centered about 375 kilometers (233 miles) west-southwest of Hagatna, Guam.

Table 11: An example article from the earthquakes domain, along with summaries selected by AOSUMM (denoted as $G$ and $R$) and CTRLSUM with GEO and RECV keywords.

adding new information. Although such behavior may possibly perform well on automated metrics, it does not serve the purpose of query-focused summarization.

Extractive summaries Table 11 shows that our model is able to successfully extract relevant parts of the document for our aspects under consideration. There are some features which may make these summaries hard to process in isolation, such as "the quake" in the first $R$ sentence; our method could be extended with prior techniques to account for anaphora resolution (Durrett et al., 2016).

# 7 Conclusion

In this paper, we present a new dataset for aspect-oriented summarization of news articles called ASPECTNEWS. Unlike query-focused summarization datasets, which are often driven by document-specific facts or knowledge, this aspect-oriented task is designed to mimic common user intents in domain-specific settings. We present a keyword-controllable system trained on synthetic data and show that it can perform well on ASPECTNEWS without training on the target domains, performing better than a range of strong baseline methods.

# Acknowledgments

This work was chiefly supported by funding from Walmart Labs and partially supported by NSF Grant IIS-1814522, a gift from Amazon, and a gift from Salesforce Inc. Opinions expressed in this paper do not necessarily reflect the views of these sponsors. Thanks to Ido Dagan for helpful discussion and suggestions about this paper, as well as to the anonymous reviewers for their thoughtful comments.

# References

Stefanos Angelidis, Reinald Kim Amplayo, Yoshihiko Suhara, Xiaolan Wang, and Mirella Lapata. 2021. Extractive opinion summarization in quantized transformer spaces. Transactions of the Association for Computational Linguistics (TACL), 9:277-293.
Stefanos Angelidis and Mirella Lapata.
+ +Table 11: An example article from the earthquakes domain, along with summaries selected by AOSUMM (denoted as $G$ and $R$ ) and CTRLSUM with GEO and RECV keyword. + +adding new information. Although such behavior may possibly perform well on automated metrics, it does not serve the purpose of query-focused summarization. + +Extractive summaries Table 11 shows that our model is able to successfully extract relevant parts of the document for our aspects under consideration. There are some features which may make these summaries hard to process in isolation, such as the quake in the first $R$ sentence; our method could be extended with prior techniques to account for anaphora resolution (Durrett et al., 2016). + +# 7 Conclusion + +In this paper, we present a new dataset for aspect-oriented summarization of news articles called ASPECTNEWS. Unlike query-focused summarization datasets which are often driven by document specific facts or knowledge, this aspect-oriented task is designed to mimic common user intents in domain-specific settings. We present a keyword-controllable system trained on synthetic data and show that it can perform well on ASPECTNEWS without training on the target domains, performing + +better than a range of strong baseline methods. + +# Acknowledgments + +This work was chiefly supported by funding from Walmart Labs and partially supported by NSF Grant IIS-1814522, a gift from Amazon, and a gift from Salesforce Inc. Opinions expressed in this paper do not necessarily reflect the views of these sponsors. Thanks to Ido Dagan for helpful discussion and suggestions about this paper, as well to the anonymous reviewers for their thoughtful comments. + +# References + +Stefanos Angelidis, Reinald Kim Amplayo, Yoshihiko Suhara, Xiaolan Wang, and Mirella Lapata. 2021. Extractive opinion summarization in quantized transformer spaces. Transactions of the Association for Computational Linguistics (TACL), 9:277-293. +Stefanos Angelidis and Mirella Lapata. 
2018. Summarizing opinions: Aspect extraction meets sentiment prediction and they are both weakly supervised. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3675-3686, Brussels, Belgium. Association for Computational Linguistics. +Tal Baumel, Raphael Cohen, and Michael Elhadad. 2014. Query-chain focused summarization. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 913-922, Baltimore, Maryland. Association for Computational Linguistics. +Daniel Matthew Cer, Yinfei Yang, Sheng yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, C. Tar, Yun-Hsuan Sung, B. Strope, and R. Kurzweil. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175. +Janara Christensen, Stephen Soderland, Gagan Bansal, and Mausam. 2014. Hierarchical summarization: Scaling up multi-document summarization. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 902-912, Baltimore, Maryland. Association for Computational Linguistics. +Hoa Trang Dang. 2005. Overview of duc 2005. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186. + +Li Dong, Shaohan Huang, Furu Wei, Mirella Lapata, Ming Zhou, and Ke Xu. 2017. Learning to generate product reviews from attributes. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 623-632, Valencia, Spain. Association for Computational Linguistics. +Greg Durrett, Taylor Berg-Kirkpatrick, and Dan Klein. 2016. 
Learning-based single-document summarization with compression and anaphoricity constraints. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1998-2008, Berlin, Germany. Association for Computational Linguistics. +Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019. ELI5: Long form question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3558-3567, Florence, Italy. Association for Computational Linguistics. +Lea Frermann and Alexandre Klementiev. 2019. Inducing document structure for aspect-based summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6263-6273, Florence, Italy. Association for Computational Linguistics. +Dan Gillick and Benoit Favre. 2009. A scalable global model for summarization. In Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing, pages 10-18, Boulder, Colorado. Association for Computational Linguistics. +Junxian He, Wojciech Krysciński, Bryan McCann, Nazneen Rajani, and Caiming Xiong. 2020. Ctrlsum: Towards generic controllable text summarization. arXiv preprint arXiv:2012.04281. +Karl Moritz Hermann, Tomás Kočisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching Machines to Read and Comprehend. In Proceedings of the Conference on Neural Information Processing Systems (NeurIPS). +Karen Sparck Jones. 1998. Automatic summarising: Factors and directions. In Advances in Automatic Text Summarization, pages 1-12. MIT Press. +Kundan Krishna and Balaji Vasan Srinivasan. 2018. Generating topic-oriented summaries using neural attention. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1697-1705, New Orleans, Louisiana. Association for Computational Linguistics. +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. + +2020. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics. +Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics. +Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3730-3740, Hong Kong, China. Association for Computational Linguistics. +Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan Thomas Mcdonald. 2020. On faithfulness and factuality in abstractive summarization. In Proceedings of The 58th Annual Meeting of the Association for Computational Linguistics (ACL). +Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. SummaRuNNer: A Recurrent Neural Network based Sequence Model for Extractive Summarization of Documents. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI). +Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. 
+Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics. +Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083, Vancouver, Canada. Association for Computational Linguistics. +Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In Proceedings of the International Conference on Machine Learning (ICML). +Ori Shapira, Ramakanth Pasunuru, Hadar Ronen, Mohit Bansal, Yael Amsterdamer, and Ido Dagan. 2021. + +Extending multi-document summarization evaluation to the interactive setting. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 657-677, Online. Association for Computational Linguistics. +Bowen Tan, Lianhui Qin, Eric Xing, and Zhiting Hu. 2020. Summarizing text on any aspects: A knowledge-informed weakly-supervised approach. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6301-6309, Online. Association for Computational Linguistics. +Ivan Titov and Ryan McDonald. 2008. A joint model of text and aspect ratings for sentiment summarization. In Proceedings of ACL-08: HLT, pages 308-316, Columbus, Ohio. Association for Computational Linguistics. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. 
Attention is All You Need. In Proceedings of the Conference on Neural Information Processing Systems (NeurIPS). +Furu Wei, Wenjie Li, Q. Lu, and Y. He. 2008. Query-sensitive mutual reinforcement chain and its application in query-oriented multi-document summarization. In Proceedings of the Special Interest Group on Information Retrieval (SIGIR). +Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art Natural Language Processing. arXiv preprint arXiv:1910.03771. +Kristian Woodsend and Mirella Lapata. 2012. Multiple aspect summarization using integer linear programming. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 233-243, Jeju Island, Korea. Association for Computational Linguistics. +Yumo Xu and Mirella Lapata. 2020a. Abstractive query focused summarization with query-free resources. arXiv preprint arXiv:2012.14774. +Yumo Xu and Mirella Lapata. 2020b. Coarse-to-fine query focused multi-document summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3632-3645, Online. Association for Computational Linguistics. +Ouyang You, Wenjie Li, Sujian Li, and Qin Lu. 2011. Applying regression models to query-focused multi-document summarization. Information Processing & Management, 47:227-237. + +Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2020a. PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization. In Proceedings of the International Conference on Machine Learning, Proceedings of Machine Learning Research. PMLR. +Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020b. BERTScore: Evaluating text generation with BERT. 
In Proceedings of the International Conference on Learning Representations (ICLR). + +![](images/093e4e3f174343282f947f7b2cee5b606936c13d48ca7f2e9d1ffe9352849376.jpg) +Figure 2: User interface for Turkers' annotation. + +# A Appendices + +# A.1 Training Details + +For all models, we split the CNN/Daily Mail dataset into the standard 287,226 training pairs, 13,368 validation pairs, and 11,490 test pairs following See et al. (2017). + +We follow the training procedure for BERTSUM (Liu and Lapata, 2019) with modifications. We use the cased variant of BERT (bert-base-cased) available through HuggingFace (Wolf et al., 2019) instead of the uncased one, and do not lowercase the dataset during preparation. Our learning rate schedule follows Vaswani et al. (2017) with + +$$ +lr = 2e^{-3} \cdot \min(\mathrm{step}^{-0.5}, \mathrm{step} \cdot \mathrm{warmup}^{-1.5}) +$$ + +where warmup $= 10000$. + +Fine-tuning AOSUMM on the modified CNN/DM dataset completes in 8 hours on a single NVIDIA Quadro RTX 8000. + +# A.2 Exemplar Sentences + +In order to generate earthquake- and fraud-domain data, we filter the CNN/DM dataset using similarity between latent representations from the Universal Sentence Encoder (USE) (Cer et al., 2018). To find domain-related articles, we need to generate a sentence that is vague enough to match most in-domain articles but specific enough to exclude articles outside the domain. For earthquakes we found the sentence "An earthquake occurred." to work well. We embedded this sentence with USE and calculated the distance in latent space to articles in CNN/DM. For the fraud dataset we use a similar sentence, "A fraud occured." After inspecting the matches, + +
|    | PEN |     |     |     | NATURE |     |     |     |
|----|-----|-----|-----|-----|--------|-----|-----|-----|
|    | F1  | R-1 | R-2 | R-L | F1     | R-1 | R-2 | R-L |
| RS | 36.3 | 55.8 | 42.1 | 43.0 | 38.0 | 57.6 | 44.8 | 43.3 |
| BS | 44.8 | 64.2 | 54.1 | 51.6 | 45.2 | 64.4 | 53.9 | 48.0 |
|    | GEO  |      |      |      | RECV |      |      |      |
| RS | 39.5 | 59.2 | 49.1 | 47.2 | 34.9 | 54.9 | 44.3 | 45.2 |
| BS | 49.9 | 69.1 | 61.2 | 54.2 | 39.6 | 59.5 | 49.1 | 46.7 |
+ +Table 12: Comparison of our AOSUMM model trained on data using ROUGE (RS) or BERTScore (BS) as the scoring metric for oracle extraction. Training with BERTScore oracle summaries gives much stronger performance. + +
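One common way to derive such oracle extractive summaries is greedy selection of the sentences whose concatenation maximizes the chosen metric against the reference. A minimal sketch under that assumption, with a toy unigram-overlap scorer standing in for ROUGE or BERTScore F1 (the function names and stopping rule are illustrative, not the paper's exact implementation):

```python
from typing import Callable, List

def greedy_oracle(sentences: List[str], reference: str,
                  score: Callable[[str, str], float],
                  max_sents: int = 3) -> List[int]:
    """Greedily pick sentence indices whose concatenation maximizes
    score(candidate, reference); `score` would be ROUGE or BERTScore F1
    in the two settings compared above."""
    selected: List[int] = []
    best = float("-inf")
    while len(selected) < max_sents:
        gains = [(score(" ".join(sentences[j] for j in sorted(selected + [i])),
                        reference), i)
                 for i in range(len(sentences)) if i not in selected]
        if not gains:
            break
        top_score, top_i = max(gains)
        if top_score <= best:  # stop once no remaining sentence helps
            break
        best, selected = top_score, sorted(selected + [top_i])
    return selected

def unigram_f1(cand: str, ref: str) -> float:
    """Toy unigram-overlap F1, a stand-in for the real metrics."""
    c, r = set(cand.lower().split()), set(ref.lower().split())
    overlap = len(c & r)
    if not overlap:
        return 0.0
    p, rec = overlap / len(c), overlap / len(r)
    return 2 * p * rec / (p + rec)
```

Swapping `score` from a lexical metric to an embedding-based one changes which sentences the oracle labels as positive, which is the difference the RS/BS comparison above isolates.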
|           | GEO  | RECV | PEN  | NATURE | Avg. |
|-----------|------|------|------|--------|------|
| Non-Mixed | 48.0 | 41.8 | 43.9 | 43.7   | 44.3 |
| Mixed     | 49.9 | 39.6 | 44.8 | 45.2   | 44.9 |
+ +Table 13: Comparison of AOSUMM with or without mixed training data. We show the $\mathrm{F}_1$ of the system output and human annotation on four domains. + +we manually exclude articles that are outside the domain. + +# A.3 Crowdsourcing + +To improve the quality of the data collected, we educate annotators with detailed instructions and the user-friendly interface shown in Figure 2. We also manually sample and check the collected data. + +# A.4 Oracle Derivation: BERTScore vs. ROUGE + +In Table 12 we show the performance improvement from replacing ROUGE-derived oracle labels with their BERTScore-derived counterparts. Using BERTScore (Zhang et al., 2020b) to obtain oracle extractive summaries for training data produces models that are significantly stronger than models trained on sentences selected by maximizing ROUGE score. We hypothesize this is because ROUGE score maximization essentially limits what the model learns to lexical matching, while BERTScore can score based on more abstract, semantic criteria. + +# A.5 Mixed vs. Non-Mixed + +We compare models trained using the mixed technique against models trained without any augmentation, and find that the mixed technique generally provides some benefit, but inconsistently. In Table 13, the Mixed technique is effective on GEO, PEN, and NATURE, but not RECV. The small performance improvement from Mixed training may result from the model more easily learning the relationship between the keywords and the aspect-oriented summaries due to mixed examples. Another benefit of this technique is that a single model is capable of producing both generic and aspect-oriented summaries. + +# A.6 SPACE Evaluation Details + +Several adjustments were made in order to run our model on the SPACE dataset. Since there are multiple input documents per summary, we first concatenated all documents together and treated the result as a single article.
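A concatenation of many input documents easily exceeds BERT's 512-token limit, so the single "article" has to be windowed before encoding. A minimal sketch of that windowing step (an illustrative helper, not the authors' actual code; in practice each window would also carry its own [CLS]/[SEP] tokens):

```python
from typing import List

def chunk_tokens(token_ids: List[int], max_len: int = 512) -> List[List[int]]:
    """Split a long token-id sequence into consecutive windows of at
    most max_len tokens, preserving order."""
    return [token_ids[i:i + max_len] for i in range(0, len(token_ids), max_len)]
```

Each window is then encoded separately, and the per-window representations are concatenated before the classification layer.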
In order to process this large "article" with our model, we processed it in 512-token chunks using BERT in order to obtain representations from the [CLS] token, and then concatenated those representations together before passing them through the classification layer. This allowed selection of any sentence from any part of the input. The following keywords were used for each of the aspects in the dataset: (i) service, customer, staff, employee, assistance; (ii) location, room, region, hotel, place; (iii) food, dining, restaurant, dinner, meal; (iv) building, establishment, room, property, site; (v) cleanliness, sanitary, polished, clean, washed; (vi) rooms, chair, table, bed, wall. \ No newline at end of file diff --git a/aspectnewsaspectorientedsummarizationofnewsdocuments/images.zip b/aspectnewsaspectorientedsummarizationofnewsdocuments/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..a29e5d301faaef9128ed2e63bc902aca0546e923 --- /dev/null +++ b/aspectnewsaspectorientedsummarizationofnewsdocuments/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:388c5282005d8c0edc8b60c43e27c76d522bb4f7e29848809b8e7500bf7805d5 +size 452481 diff --git a/aspectnewsaspectorientedsummarizationofnewsdocuments/layout.json b/aspectnewsaspectorientedsummarizationofnewsdocuments/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..e904c649bcbdf92a151acdf9fb23986041a58057 --- /dev/null +++ b/aspectnewsaspectorientedsummarizationofnewsdocuments/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fde4759edd90d5522af6daa1c4c3ff4e8cae5c09dc1f6d1ebb0986cd603775d2 +size 368803 diff --git a/astatutoryarticleretrievaldatasetinfrench/f91f1b46-dc21-4862-9261-db692266feb8_content_list.json b/astatutoryarticleretrievaldatasetinfrench/f91f1b46-dc21-4862-9261-db692266feb8_content_list.json new file mode 100644 index 
0000000000000000000000000000000000000000..891a64d68618b7d7c823b3cbf181598400b1a371 --- /dev/null +++ b/astatutoryarticleretrievaldatasetinfrench/f91f1b46-dc21-4862-9261-db692266feb8_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2e959edd909fb5605d69231feeaa0fb058e09989573bcf35f56bc774630d16b1 +size 104850 diff --git a/astatutoryarticleretrievaldatasetinfrench/f91f1b46-dc21-4862-9261-db692266feb8_model.json b/astatutoryarticleretrievaldatasetinfrench/f91f1b46-dc21-4862-9261-db692266feb8_model.json new file mode 100644 index 0000000000000000000000000000000000000000..c5f30747bf252e778d98278b49ffbb6208d72c14 --- /dev/null +++ b/astatutoryarticleretrievaldatasetinfrench/f91f1b46-dc21-4862-9261-db692266feb8_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:86090c21dd53a0281d35f93cd913c15e185b8d4a4b0a702acb1dd2e912c2f73c +size 125545 diff --git a/astatutoryarticleretrievaldatasetinfrench/f91f1b46-dc21-4862-9261-db692266feb8_origin.pdf b/astatutoryarticleretrievaldatasetinfrench/f91f1b46-dc21-4862-9261-db692266feb8_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a1e319c04a5524661c560ff59c494929777f9779 --- /dev/null +++ b/astatutoryarticleretrievaldatasetinfrench/f91f1b46-dc21-4862-9261-db692266feb8_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:76c2bc07f3697be1f92efa82a87db3a8848c44aab6a5f37564e5f4b4c42268e3 +size 764409 diff --git a/astatutoryarticleretrievaldatasetinfrench/full.md b/astatutoryarticleretrievaldatasetinfrench/full.md new file mode 100644 index 0000000000000000000000000000000000000000..261544c0842c19e8ab839d03b48ef31bf93f40ee --- /dev/null +++ b/astatutoryarticleretrievaldatasetinfrench/full.md @@ -0,0 +1,443 @@ +# A Statutory Article Retrieval Dataset in French + +Antoine Louis and Gerasimos Spanakis +Law & Tech Lab, Maastricht University + +{a.louis, jerry.spanakis}@maastrichtuniversity.nl + +# Abstract + 
+Statutory article retrieval is the task of automatically retrieving law articles relevant to a legal question. While recent advances in natural language processing have sparked considerable interest in many legal tasks, statutory article retrieval remains primarily untouched due to the scarcity of large-scale and high-quality annotated datasets. To address this bottleneck, we introduce the Belgian Statutory Article Retrieval Dataset (BSARD), which consists of $1,100+$ French native legal questions labeled by experienced jurists with relevant articles from a corpus of $22,600+$ Belgian law articles. Using BSARD, we benchmark several state-of-the-art retrieval approaches, including lexical and dense architectures, both in zero-shot and supervised setups. We find that fine-tuned dense retrieval models significantly outperform other systems. Our best-performing baseline achieves $74.8\%$ R@100, which is promising for the feasibility of the task and indicates there is still room for improvement. Owing to the specificity of its domain and task, BSARD presents a unique challenge for future research on legal information retrieval. Our dataset and source code are publicly available. + +# 1 Introduction + +Legal issues are an integral part of many people's lives (Ponce et al., 2019). However, the majority of citizens have little to no knowledge about their rights and fundamental legal processes (Balmer et al., 2010). As the Internet has become the primary source of information in response to life problems (Estabrook et al., 2007), people increasingly turn to search engines when faced with a legal issue (Denvir, 2016). Nevertheless, the quality of search engines' legal help results is currently unsatisfactory, as top results mainly refer people to commercial websites that provide basic information as a way to advertise for-profit services (Hagan and Li, 2020).
On average, only one in five persons obtains help from the Internet to clarify or solve their legal issue (Ponce et al., 2019). As a result, many vulnerable citizens who cannot afford a legal expert's costly assistance are left unprotected or even exploited. This barrier to accessing legal information creates a clear imbalance within the legal system, preventing the right to equal access to justice for all. + +People do not need legal services in and of themselves; they need the ends that legal services can provide. Recent advances in natural language processing (NLP), combined with the increasing amount of digitized textual data in the legal domain, offer new possibilities to bridge the gap between people and the law. For example, legal judgment prediction (Aletras et al., 2016; Luo et al., 2017; Zhong et al., 2018; Hu et al., 2018; Chen et al., 2019) may assist citizens in finding insightful patterns between their case and its outcome. Additionally, legal text summarization (Hachey and Grover, 2006; Bhattacharya et al., 2019) and automated contract review (Harkous et al., 2018; Lippi et al., 2019) may help people clarify long, complex, and ambiguous legal documents. + +In this work, we focus on statutory article retrieval, which, given a legal question such as "Is it legal to contract a lifetime lease?", aims to return one or several relevant law articles from a body of legal statutes (Kim et al., 2019; Nguyen et al., 2020), as illustrated in Figure 1. A qualified statutory article retrieval system could provide a professional assisting service for unskilled humans and help empower the weaker parties when used for the public interest. + +Finding relevant statutes to a legal question is a challenging task. Unlike traditional ad-hoc information retrieval (Craswell et al., 2020), statutory article retrieval deals with two types of language: common natural language for the questions and complex legal language for the statutes.
![](images/8290189a1480f17f13000ef8dd5da475a51ac56d33915f9181a56fe05db47526.jpg) +Figure 1: Illustration of the statutory article retrieval task performed on the Belgian Statutory Article Retrieval Dataset (BSARD), which consists of $1,100+$ questions carefully labeled by legal experts with references to relevant articles from the Belgian legislation. With BSARD, models can learn to retrieve law articles relevant to a legal question. All examples we show in the paper are translated from French for illustration. + +This difference in language distribution greatly complicates the retrieval task, as it indirectly requires an inherent interpretation system that can translate a natural question from a non-expert into a legal question to be matched against statutes. For skilled legal experts, these interpretations come from their knowledge of a question's domain and their understanding of the legal concepts and processes involved. Nevertheless, an interpretation is rarely unique. Instead, it is the interpreter's subjective belief that gives meaning to the question and, accordingly, an idea of the domains in which the answer can be found. As a result, the same question can yield different paths to the desired outcome depending on its interpretation, making statutory article retrieval a difficult and time-consuming task. + +Besides, statutory law is not a stack of independent articles to be treated as complete sources of information on their own – unlike news or recipes. Instead, it is a structured and hierarchical collection of legal provisions that have whole meaning only when considered in their overall context, i.e., together with the supplementary information from their neighboring articles, the fields and sub-fields they belong to, and their place in the hierarchy of the law. For instance, the answer to the question "Can I terminate an employment contract?" will most often be found in labor law.
However, this is not necessarily true if an employer is contracting a self-employed worker to carry out a specific task, in which case the answer probably lies at the higher level of contract law. This example illustrates the importance of considering the question’s context and understanding the hierarchical structure of the law when looking for relevant statutory articles. + +In order to study whether retrieval models can approximate the efficiency and reliability of legal experts, we need a suitable labeled dataset. However, such datasets are difficult to obtain considering that, although statutory provisions are generally publicly accessible (yet often not in a machine-readable format), the questions posed by citizens are not. + +This work presents a novel French native expert-annotated statutory article retrieval dataset as its main contribution. Our Belgian Statutory Article Retrieval Dataset (BSARD) consists of more than 1,100 legal questions posed by Belgian citizens and labeled by legal experts with references to relevant articles from a corpus of around 22,600 Belgian law articles. As a second contribution, we establish strong baselines on BSARD by comparing diverse state-of-the-art retrieval approaches from lexical and dense architectures. Our results show that fine-tuned dense retrieval models significantly outperform other approaches yet suggest ample opportunity for improvement. We publicly release our dataset and source code at https://github.com/maaastrichtlawtech/bsard. + +# 2 Related Work + +Due to the increasing digitization of textual legal data, the NLP community has recently introduced more and more datasets to help researchers build reliable models on several legal tasks. For instance, Fawei et al. (2016) introduced a legal question answering (LQA) dataset with 400 multi-choices questions based on the US national bar exam. Similarly, Zhong et al. 
(2020) released an LQA dataset based on the Chinese bar exam consisting of 26,365 multiple-choice questions, together with a database of evidence that includes 3,382 Chinese legal provisions and the content of the national examination counseling book. + +Furthermore, Duan et al. (2019) proposed a legal reading comprehension dataset with 52,000 question-answer pairs crafted on the fact descriptions of 10,000 cases from the Supreme People's Court of China. On a different note, Xiao et al. (2018) presented a dataset for legal judgment prediction (LJP) with around 2.68 million Chinese criminal cases annotated with 183 law articles and 202 charges. Likewise, Chalkidis et al. (2019a) introduced an LJP dataset consisting of 11,478 English cases from the European Court of Human Rights labeled with the associated final decision. + +Meanwhile, Xiao et al. (2019) introduced a dataset for similar case matching with 8,964 triplets of cases published by the Supreme People's Court of China, and Chalkidis et al. (2019b) released a text classification dataset containing 57,000 English EU legislative documents tagged with 4,271 labels from the European Vocabulary. Additionally, Manor and Li (2019) introduced a legal text summarization dataset consisting of 446 sets of contract sections and corresponding reference summaries, and Holzenberger et al. (2020) presented a statutory reasoning dataset based on US tax law. + +Recently, Hendrycks et al. (2021) proposed a dataset for legal contract review that includes 510 contracts annotated with 41 different clauses for a total of 13,101 annotations. In the same vein, Borchmann et al. (2020) introduced a semantic retrieval dataset for contract discovery with more than 2,500 annotations in around 600 documents.
Lastly, the COLIEE Case Law Corpus (Rabelo et al., 2020) is a case law retrieval and entailment dataset that includes 650 base cases from the Federal Court of Canada, each with 200 candidate cases to be identified as relevant to the base case. + +Regarding statutory article retrieval, the only other publicly available dataset is the COLIEE Statute Law Corpus (Rabelo et al., 2020). It comprises 696 questions from the Japanese legal bar exam labeled with references to relevant articles from the Japanese Civil Code, where both the questions and articles have been translated from Japanese to English. However, this dataset focuses on legal bar exam question answering, which is quite different from legal questions posed by ordinary citizens. While the latter tend to be vague and straightforward, bar exam questions are meant for aspiring lawyers and are thus specific and advanced. Besides, the dataset only contains closed questions (i.e., questions with "yes" or "no" answers) and considers almost 30 times fewer law articles than BSARD does. Also, unlike BSARD, the data are not native sentences but instead translated from a foreign language with a completely different legal system.1 As a result, the translated dataset may not accurately reflect the logic of the original legal system and language. These limitations suggest the need for a novel large-scale citizen-centric native dataset for statutory article retrieval, which is the core contribution of the present work. + +# 3 The Belgian Statutory Article Retrieval Dataset + +# 3.1 Dataset Collection + +We create our dataset in four stages: (i) compiling a large corpus of Belgian law articles, (ii) gathering legal questions with references to relevant law articles, (iii) refining these questions, and (iv) matching the references to the corresponding articles from our corpus. + +Law articles collection.
In civil law jurisdictions, a legal code is a type of legislation that purports to exhaustively cover a whole area of law, such as criminal law or tax law, by gathering and restating all the written laws in that area into a unique book. Hence, these books constitute valuable resources to collect many law articles on various subjects. We consider 32 publicly available Belgian codes, as presented in Table 3 of Appendix A. Together with the legal articles, we extract the corresponding headings of the sections in which these articles appear (i.e., book, part, act, chapter, section, and subsection names). These headings provide an overview of each article's subject. As preprocessing, we use regular expressions to clean up the articles of specific wording indicating a change in part of the article by a past law (e.g., nested brackets, superscripts, or footnotes). Additionally, we identify and remove the articles repealed by past laws but still present in the codes. Eventually, we end up with a corpus $\mathcal{C} = \{a_1,\dots ,a_N\}$ of $N = 22,633$ articles that we use as our basic retrieval units. + +Questions collection. We partner with Droits Quotidiens (DQ),2 a Belgian organization whose mission is to clarify the law for laypeople. Each year, DQ receives and collects around 4,000 emails from Belgian citizens asking for advice on a personal legal issue. Thanks to these emails, its team of six experienced jurists keeps abreast of Belgium's most common legal issues and addresses them as comprehensively as possible on its website. Each jurist is an expert in a specific field (e.g., "family", "housing", or "work") and is responsible for answering all questions related to that field. Given their qualifications and years of experience in providing legal advice in their respective fields, the experts can be considered competent enough to always (eventually) retrieve the correct articles for a given question.
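The regex-based clean-up described under "Law articles collection" above might be sketched as follows; the patterns are illustrative assumptions (amendment wording in possibly nested square brackets, plus numeric footnote markers), not necessarily the exact expressions used to build BSARD:

```python
import re

_BRACKETED = re.compile(r"\[[^\[\]]*\]")  # an innermost [...] span

def clean_article(text: str) -> str:
    """Strip bracketed amendment wording (possibly nested), hypothetical
    <n> footnote markers, and redundant whitespace from an article."""
    # Remove innermost bracketed spans repeatedly to handle nesting.
    while _BRACKETED.search(text):
        text = _BRACKETED.sub("", text)
    text = re.sub(r"<\d+>", "", text)  # footnote/superscript markers
    return re.sub(r"\s{2,}", " ", text).strip()
```

Repeating the innermost-match substitution until no bracket pair remains is a simple way to handle nested amendment markers without a recursive grammar.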
In practice, their legal clarification process consists of four steps. First, they identify the most frequently asked questions on a common legal issue. Then, they define a new anonymized "model" question on that issue expressed in natural language terms, i.e., as close as possible as if a layperson had asked it. Next, they search the Belgian law for articles that help answer the model question and reference them. Finally, they answer the question using the retrieved relevant articles in a way a layperson can understand. These model questions, legal references, and answers are further categorized before being posted on DQ's website (e.g., the question "What is the seizure of goods?" is tagged under the "Money $\rightarrow$ Debt recovery" category). With their consent, we collect more than 3,200 model questions together with their references to relevant law articles and categorization tags. + +Assuming it takes a jurist between 5 and 20 minutes to find the relevant articles for a given question and to categorize it, an estimate of the pecuniary value of those labeled questions is over €105,000: 3,200 questions, each requiring about 10 minutes to label, at a rate of €200 per hour. + +Questions refinement. We find that around one-third of the collected questions are duplicates. However, these duplicated questions come with different categorization tags, some of which provide additional context that can be used to refine the questions. For example, the question "Should I install fire detectors?" appears four times in total, under the following tags: "Housing $\rightarrow$ Rent $\rightarrow$ I am a {tenant, landlord} $\rightarrow$ In {Wallonia, Brussels}". We distinguish between the tags with one or a few words indicating a question subject (e.g., "housing" and "rent") and those that provide context about a personal situation or location as short descriptive sentences (e.g., "I am a tenant in Brussels").
When such contextual sentence tags exist, we prepend them to the questions, which resolves most of the duplicates and improves the overall quality of the questions by making them more specific.

Questions filtering. The collected questions are annotated with plain-text references to relevant law articles (e.g., "Article 8 of the Civil Code"). We use regular expressions to parse these references and match them to the corresponding articles from our corpus. First, we filter out questions whose references are not articles (e.g., an entire decree or order). Then, we remove questions with references to legal acts other than codes of law (e.g., decrees, directives, or ordinances). Next, we ignore questions with references to codes other than those we initially considered. We eventually end up with 1,108 questions, each carefully labeled with the IDs of the corresponding relevant law articles from our corpus. Finally, we split the dataset into training/test sets with 886 and 222 questions, respectively.

# 3.2 Dataset Analysis

To provide more insight, we describe quantitative and qualitative observations about BSARD. Specifically, we explore (i) the diversity of questions and articles, (ii) the relationship between questions and their relevant articles, and (iii) the type of reasoning required to retrieve relevant articles.

Diversity. The 22,633 law articles that constitute our corpus have been collected from 32 Belgian codes covering a large number of legal topics, as presented in Table 3 of Appendix A. The articles have a median length of 77 words, but 142 articles exceed 1,000 words (the longest reaching 5,790 words), as illustrated in Figure 2b. These long articles are mostly general provisions, i.e., articles that appear at the beginning of a code and define many terms and concepts mentioned later in the code. The questions are between 5 and 44 words long, with a median of 14 words, as shown in Figure 2a.
They cover a wide range of topics, with around $85\%$ of them being about either family, housing, money, or justice, while the remaining $15\%$ concern either social security, foreigners, or work, as described in Table 1.

| General topic | Percentage | Subtopics | Example |
| --- | --- | --- | --- |
| Family | 30.6% | Marriage, parentage, divorce, etc. | When is there a guardianship? |
| Housing | 27.4% | Rental, flatshare, insalubrity, etc. | Who should repair the common wall? |
| Money | 16.0% | Debt, insurance, taxes, etc. | What is the seizure of goods? |
| Justice | 13.6% | Proceedings, crimes, legal aid, etc. | How does the appeal process work? |
| Foreigners | 5.7% | Citizenship, illegal stay, etc. | Can I come to Belgium to get married? |
| Social security | 3.5% | Pensions, pregnancy, health, etc. | Am I dismissed during my pregnancy? |
| Work | 3.2% | Breach of contract, injuries, etc. | Can I miss work to visit the doctor? |

Table 1: Distribution of question topics in BSARD.

![](images/534f9b7d5ae21b2487013720cfa5eff7a29b90ed13f2167145766374542b7f72.jpg)
(a) Question length.

![](images/f6f2f78579228df0cd012db9dd63f1b70b4c42c0a69f773cdeea25865a9fab9e.jpg)
(b) Article length.

![](images/412aa43f2ff4402ebf3fdc2c2a1d650ff2d2b6d2317fb5f9facd44c0e7328433.jpg)
(c) Number of relevant articles per question.

![](images/7fbb4c284514236586cd099b0d4a51235d054525ad852d2ba2d41de92f0e2029.jpg)
(d) Number of citations per relevant article.
Figure 2: Statistics of BSARD.

Question-article relationship. Questions may have one or several relevant legal articles. Overall, $75\%$ of the questions have fewer than five relevant articles, $18\%$ have between 5 and 20, and the remaining $7\%$ have more than 20, with a maximum of 109, as seen in Figure 2c. The latter often have complex and indirect answers that demand extensive reasoning over a whole code section, which explains their large numbers of relevant articles. Furthermore, an article deemed relevant to one question might also be relevant to others. Therefore, for each unique article deemed relevant to at least one question, we calculate the total number of times it is cited as a legal reference across all questions. We find that the median number of citations for those articles is 2, and fewer than $25\%$ of them are cited more than five times, as illustrated in Figure 2d. Hence, out of the 22,633 articles, only 1,612 are referred to as relevant to at least one question in the dataset, and around $80\%$ of these 1,612 articles come from either the Civil Code, Judicial Code, Criminal Investigation Code, or Penal Code.
Meanwhile, 18 out of the 32 codes have fewer than five articles mentioned as relevant to at least one question, which can be explained by the fact that those codes focus less on individuals and their concerns.

# 4 Models

Formally, a statutory article retrieval system $R: (q, \mathcal{C}) \to \mathcal{F}$ is a function that takes as input a question $q$ along with a corpus of law articles $\mathcal{C}$, and returns a much smaller filter set $\mathcal{F} \subset \mathcal{C}$ of the supposedly relevant articles, ranked in decreasing order of relevance. For a fixed $k = |\mathcal{F}| \ll |\mathcal{C}|$, the retriever can be evaluated in isolation with multiple rank-based metrics (see Section 5.1). The following section describes the retrieval models we use as a benchmark for the task.

# 4.1 Lexical Models

Traditionally, lexical approaches have been the de facto standard for textual information retrieval due to their robustness and efficiency. Given a query $q$ and an article $a$, a lexical model assigns the pair $(q,a)$ a score $s_L(q,a) \in \mathbb{R}_+$ by summing, over the query terms $t \in q$, the weight of each term in the article, i.e.,

$$
s_{L}(q, a) = \sum_{t \in q} w(t, a). \tag{1}
$$

First, we use the TF-IDF weighting scheme, in which

$$
w(t, a) = \mathrm{tf}(t, a) \cdot \log \frac{|\mathcal{C}|}{\mathrm{df}(t)}, \tag{2}
$$

where the term frequency $\mathrm{tf}$ is the number of occurrences of term $t$ in article $a$, and the document frequency $\mathrm{df}$ is the number of articles within the corpus that contain term $t$. Then, we experiment with the BM25 weighting formula (Robertson et al., 1994), defined as

$$
w(t, a) = \frac{\mathrm{tf}(t, a) \cdot \left(k_{1} + 1\right)}{\mathrm{tf}(t, a) + k_{1} \cdot \left(1 - b + b \cdot \frac{|a|}{\mathrm{avgal}}\right)} \cdot \log \frac{|\mathcal{C}| - \mathrm{df}(t) + 0.5}{\mathrm{df}(t) + 0.5}, \tag{3}
$$

where $k_{1}\in \mathbb{R}_{+}$ and $b\in [0,1]$ are constant parameters to be fixed, $|a|$ is the article length, and $\mathrm{avgal}$ is the average article length in the collection.

During inference, we compute a score for each article in corpus $\mathcal{C}$ and return the $k$ articles with the highest scores as the top-$k$ most relevant results for the input query.

# 4.2 Dense Models

Lexical approaches suffer from the lexical gap problem (Berger et al., 2000) and can only retrieve articles containing keywords present in the query. To overcome this limitation, recent work (Lee et al., 2019; Karpukhin et al., 2020; Xiong et al., 2021) relies on neural architectures to capture semantic relationships between the query and documents. The most commonly used approach is based on a bi-encoder model (Gillick et al., 2018) that maps queries and documents into dense vector representations. Formally, a dense retriever calculates a relevance score $s_D(q, a) \in \mathbb{R}_+$ between question $q$ and article $a$ as the similarity of their respective embeddings $\boldsymbol{h}_q, \boldsymbol{h}_a \in \mathbb{R}^d$, i.e.,

$$
s_{D}(q, a) = \operatorname{sim}\left(\boldsymbol{h}_{q}, \boldsymbol{h}_{a}\right), \tag{4}
$$

where $\operatorname{sim}:\mathbb{R}^d\times \mathbb{R}^d\to \mathbb{R}$ is a similarity function such as the dot product or cosine similarity.
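To make these scoring functions concrete, the following is a minimal sketch of the TF-IDF and BM25 weights of Eqs. (2)–(3) and the similarity of Eq. (4). The whitespace tokenizer and toy corpus are illustrative assumptions, not our actual preprocessing.

```python
import math
from collections import Counter

def tokenize(text):
    # Illustrative whitespace tokenizer; real preprocessing differs.
    return text.lower().split()

class LexicalScorer:
    def __init__(self, articles, k1=1.0, b=0.6):
        self.docs = [Counter(tokenize(a)) for a in articles]
        self.N = len(articles)                      # |C|
        self.avgal = sum(sum(d.values()) for d in self.docs) / self.N
        # df(t): number of articles containing term t.
        self.df = Counter(t for d in self.docs for t in d)
        self.k1, self.b = k1, b

    def tfidf(self, query, i):
        # Eqs. (1)-(2): sum of tf-idf weights over query terms.
        d = self.docs[i]
        return sum(d[t] * math.log(self.N / self.df[t])
                   for t in tokenize(query) if self.df[t])

    def bm25(self, query, i):
        # Eqs. (1) and (3): BM25 with length normalization.
        d, dl, score = self.docs[i], sum(self.docs[i].values()), 0.0
        for t in tokenize(query):
            if self.df[t] == 0:
                continue
            tf = d[t]
            idf = math.log((self.N - self.df[t] + 0.5) / (self.df[t] + 0.5))
            score += idf * tf * (self.k1 + 1) / (
                tf + self.k1 * (1 - self.b + self.b * dl / self.avgal))
        return score

def dense_score(h_q, h_a, sim="cosine"):
    # Eq. (4): similarity between precomputed query/article embeddings.
    dot = sum(x * y for x, y in zip(h_q, h_a))
    if sim == "dot":
        return dot
    norm = math.sqrt(sum(x * x for x in h_q)) * math.sqrt(sum(x * x for x in h_a))
    return dot / norm
```

With $k_1 = 1.0$ and $b = 0.6$ (the values tuned in Section 5.1), retrieval amounts to sorting the corpus by these scores and keeping the top-$k$ articles.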
Typically, these embeddings result from a pooling operation on the output representations of a word embedding model:

$$
\boldsymbol{h}_{q} = \operatorname{pool}\left(f(q; \boldsymbol{\theta}_{1})\right), \quad \text{and} \tag{5}
$$

$$
\boldsymbol{h}_{a} = \operatorname{pool}\left(f(a; \boldsymbol{\theta}_{2})\right),
$$

where the model $f(\cdot ;\pmb {\theta}_i):\mathcal{W}^n\to \mathbb{R}^{n\times d}$ with parameters $\pmb{\theta}_{i}$ maps an input text sequence of $n$ terms from vocabulary $\mathcal{W}$ to $d$-dimensional real-valued word vectors. The pooling operation $\operatorname{pool}: \mathbb{R}^{n\times d}\to \mathbb{R}^d$ distills a global representation for the text passage from the output word embeddings, using either mean, max, or [CLS] pooling.

Note that the bi-encoder architecture comes in two flavors: (i) siamese (Reimers and Gurevych, 2019; Xiong et al., 2021), which uses a single word embedding model (i.e., $\theta_{1} = \theta_{2}$) that maps the query and article together into a shared dense vector space, and (ii) two-tower (Yang et al., 2020; Karpukhin et al., 2020), which uses two independent word embedding models that encode the query and article separately into different embedding spaces.

During inference, the articles are pre-encoded offline, and their representations are stored in an index structure. Then, given an input query, an exact search is performed by computing the similarities between the query representation and all pre-encoded article representations. The resulting scores are used to rank the articles such that the $k$ articles with the highest similarities to the query are returned as the top-$k$ results.

# 4.2.1 Zero-Shot Evaluation

First, we study the effectiveness of siamese bi-encoders in a zero-shot evaluation setup, i.e., pre-trained word embedding models are applied out of the box without any additional fine-tuning.
We experiment with two types of widely used word embedding models: (i) models that learn context-independent word representations, namely word2vec (Mikolov et al., 2013a,b) and fastText (Bojanowski et al., 2017), and (ii) models that learn context-dependent word embeddings, namely RoBERTa (Liu et al., 2019).

RoBERTa can process texts up to a maximum input length of 512 tokens. Although alternative models exist to alleviate this limitation (Beltagy et al., 2020; Ainslie et al., 2020), they have all been trained on English text, and there are no French equivalents available yet. Therefore, we use a simple workaround that splits the text into overlapping chunks and passes each chunk in turn to the embedding model. To form the chunks, we consider contiguous text sequences of 200 tokens with an overlap of 20 tokens between consecutive chunks.

For all zero-shot models, we use mean pooling on all word embeddings of the passage to extract a global representation for the latter, and cosine similarity to score passage representations.

# 4.2.2 Training

Thereafter, we train our own siamese and two-tower RoBERTa-based bi-encoder models on BSARD. Let $\mathcal{D} = \{\langle q_i,a_i^+\rangle \}_{i = 1}^N$ be the training data, where each of the $N$ instances consists of a query $q_{i}$ associated with a relevant (positive) article $a_{i}^{+}$. Using in-batch negatives (Chen et al., 2017; Henderson et al., 2017), we can create a training set $\mathcal{T} = \{\langle q_i,a_i^+,\mathcal{A}_i^- \rangle \}_{i = 1}^N$, where $\mathcal{A}_{i}^{-}$ is a set of negative articles for question $q_{i}$ constructed by considering the articles paired with the other questions from the same mini-batch.
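This in-batch negatives construction can be sketched as follows, assuming a matrix of pairwise scores $s_D(q_i, a_j)$ for a mini-batch, where article $j = i$ is the positive for question $i$ (a minimal sketch; names and shapes are illustrative, not our exact implementation):

```python
import math

def in_batch_contrastive_loss(scores, tau=0.05):
    """Mean temperature-scaled negative log-likelihood over a mini-batch.

    scores[i][j] = s_D(q_i, a_j); the diagonal entries are the positives,
    and the other articles in the batch act as negatives.
    """
    n = len(scores)
    total = 0.0
    for i in range(n):
        logits = [s / tau for s in scores[i]]
        m = max(logits)  # subtract the max to stabilize the softmax
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        total += -(logits[i] - log_z)  # -log softmax at the positive
    return total / n
```

With a batch of $B$ question-article pairs, this yields $B-1$ negatives per question at no extra encoding cost.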
For each training instance, we contrastively optimize the negative log-likelihood of the positive article against its negative articles, i.e.,

$$
L\left(q_{i}, a_{i}^{+}, \mathcal{A}_{i}^{-}\right) = -\log \frac{\exp\left(s_{D}\left(q_{i}, a_{i}^{+}\right) / \tau\right)}{\sum_{a \in \mathcal{A}_{i}^{-} \cup \{a_{i}^{+}\}} \exp\left(s_{D}\left(q_{i}, a\right) / \tau\right)}, \tag{6}
$$

where $\tau > 0$ is a temperature parameter to be set. This contrastive loss encourages embedding functions under which relevant question-article pairs score higher than irrelevant ones.

To deal with articles longer than 512 tokens, we use the same workaround as in the zero-shot evaluation and split long sequences into overlapping chunks of 200 tokens with an overlap of 20 tokens. However, this time, we limit articles to their first 1,000 words due to limited GPU memory. Although not ideal, doing so remains reasonable given that only $0.6\%$ of the articles in our corpus have more than 1,000 words, as mentioned in Section 3.2. Each chunk is prefixed by the [CLS] token, and we extract a global representation for the whole article by averaging the output [CLS] token embeddings of the different chunks. Here, we use the dot product to compute similarities, as it gives slightly better results than cosine similarity.

# 5 Experiments

We now describe our experimental setup and evaluate the performance of our models.

# 5.1 Experimental Setup

Metrics. We use three standard information retrieval metrics (Manning et al., 2008) to evaluate performance, namely the (macro-averaged) recall@k $(\mathbf{R}@\mathbf{k})$, mean average precision@k (MAP@k), and mean reciprocal rank@k (MRR@k). Appendix B gives a detailed description of these metrics in the context of statutory article retrieval.
We deliberately do not report precision@k, given that questions have a variable number of relevant articles (see Figure 2c), which makes the metric uninformative at a fixed $k$: questions with $r$ relevant articles will always have $\mathrm{P}@k < 1$ if $k > r$. For the same reason, $k$ should be large enough for recall@k. Hence, we use $k \in \{100, 200, 500\}$ for our evaluation.

French word embedding models. Since our focus is on a non-English dataset, we experiment with French variants of the models mentioned above. Specifically, we use a 500-dimensional skip-gram word2vec model pre-trained on a crawled French corpus (Fauconnier, 2015), a 300-dimensional CBOW fastText model pre-trained on French Web data (Grave et al., 2018), and a French RoBERTa model, namely CamemBERT (Martin et al., 2020), pre-trained on 147GB of French web pages filtered from Common Crawl.3

Hyper-parameters & schedule. For BM25, we tune the parameters on the BSARD training set and find $k_{1} = 1.0$ and $b = 0.6$ to perform best. Regarding the bi-encoder models, we optimize the contrastive loss with a batch size of 22 question-article pairs and a temperature of 0.05 for 100 epochs, i.e., approximately 20,500 steps. We use AdamW (Loshchilov and Hutter, 2019) with an initial learning rate of 2e-5, $\beta_{1} = 0.9$, $\beta_{2} = 0.999$, weight decay of 0.01, learning rate warm-up over the first 500 steps, and linear decay of the learning rate. Training is performed on a single Tesla V100 GPU with 32GB of memory, and evaluation on a server with a dual 20-core Intel(R) Xeon(R) E5-2698 v4 CPU @ 2.20GHz and 512GB of RAM.

# 5.2 Results

In Table 2, we report the retrieval performance of our models on the BSARD test set. Overall, the trained bi-encoder models significantly outperform all the other baselines. The two-tower model improves over its siamese variant on recall@100 but performs similarly on the other metrics.
Although BM25 significantly underperforms the trained bi-encoders, it remains a strong baseline for domain-specific retrieval. These results are consistent with those obtained on other in-domain datasets (Thakur et al., 2021).
| Train | Model | Encoder(s) | Params | Latency | R@100 | R@200 | R@500 | MAP@100 | MRR@100 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ✗ | TF-IDF | - | - | 827 | 40.13 | 50.44 | 59.34 | 8.69 | 12.98 |
| ✗ | BM25 (official) | - | - | 1342 | 51.33 | 56.78 | 64.71 | 16.04 | 24.59 |
| ✗ | Siamese bi-encoder | word2vec | - | 4 | 49.41 | 61.76 | 71.57 | 12.90 | 21.49 |
| ✗ | Siamese bi-encoder | fastText | - | 3 | 32.93 | 41.33 | 49.26 | 6.29 | 11.78 |
| ✗ | Siamese bi-encoder | CamemBERT | - | 27 | 4.21 | 6.00 | 12.82 | 0.50 | 2.04 |
| ✓ | Siamese bi-encoder | CamemBERT | 110M | 28 | 71.63 | **78.38** | **83.77** | 35.44 | **43.52** |
| ✓ | Two-tower bi-encoder | CamemBERT | 220M | 26 | **74.78** | 78.04 | 83.39 | **35.67** | 42.46 |
Table 2: Retrieval performance (in percent) and query latency (in milliseconds) of various information retrieval approaches on the test set. The best results are marked in bold.

Regarding the zero-shot evaluation of siamese bi-encoder models, we find that directly using the embeddings of a pre-trained CamemBERT model without optimizing for the IR task gives poor results. Reimers and Gurevych (2019) noted similar findings for the task of semantic textual similarity. Furthermore, we observe that the word2vec-based bi-encoder significantly outperforms the fastText- and BERT-based models, suggesting that pre-trained word-level embeddings are more appropriate for the task than character-level or subword-level embeddings when used out of the box.

Although promising, these results leave ample room for improvement compared to a skilled legal expert, who can eventually retrieve all relevant articles for any question and thus achieve perfect scores.

# 6 Discussion

This section discusses the limitations and broader impacts of our dataset.

# 6.1 Limitations

As our dataset aims to give researchers a well-defined benchmark for evaluating existing and future legal information retrieval models, certain limitations need to be borne in mind to avoid drawing erroneous conclusions.

First, the corpus of articles is limited to those collected from the 32 Belgian codes described in Table 3 of Appendix A, which does not cover the entire Belgian law, as thousands of articles from decrees, directives, and ordinances are missing. During dataset construction, all references to these uncollected articles were ignored, which causes some questions to end up with only a fraction of their initial number of relevant articles. This information loss implies that the answer contained in the remaining relevant articles might be incomplete, although it is still appropriate.

Additionally, it is essential to note that not all legal questions can be answered with statutes alone.
For instance, the question "Can I evict my tenants if they make too much noise?" might not have a detailed answer within statutory law, which does not quantify a specific noise threshold at which eviction is allowed. Instead, the landlord would probably rely more on case law and look for precedents similar to their current situation (e.g., the tenant throws two parties a week that last until 2 am). Hence, some questions are better suited than others to the statutory article retrieval task, and the domain of the less suitable ones remains to be determined.

# 6.2 Broader Impacts

In addition to helping advance the state of the art in retrieving statutes relevant to a legal question, BSARD-based models could improve the efficiency of legal information retrieval in the context of legal research, thereby enabling researchers to devote themselves to the more thoughtful parts of their work.

Furthermore, BSARD can become a starting point for new open-source legal information search tools, so that the socially weaker parties to disputes can benefit from free professional assistance. However, there is a risk that the dataset will not be used exclusively for the public interest but also for profit, as part of proprietary search tools developed by companies. Since this would reinforce rather than solve the problem of access to legal information and justice for all, we decided to distribute BSARD under a license with a non-commercial clause.

Other potential negative societal impacts involve using models trained on BSARD to misuse or find gaps within governmental laws, or to use the law not to defend oneself but to deliberately damage people or companies. Of course, we discourage anyone from developing models that aim to perform such actions.

# 7 Conclusion

In this paper, we present the Belgian Statutory Article Retrieval Dataset (BSARD), a citizen-centric, French-native dataset for statutory article retrieval.
Within a larger effort to bridge the gap between people and the law, BSARD provides a means of evaluating and developing models capable of retrieving law articles relevant to a legal question posed by a layperson. We benchmark several strong information retrieval baselines that show promise for the feasibility of the task yet indicate room for improvement. In the future, we plan to build retrieval models that can handle lengthy statutory articles and inherently exploit the hierarchy of the law. In closing, we hope that our work sparks interest in developing practical and reliable statutory article retrieval models to help improve access to justice for all. + +# Acknowledgments + +This research is partially supported by the Sector Plan Digital Legal Studies of the Dutch Ministry of Education, Culture, and Science. In addition, this research was made possible, in part, using the Data Science Research Infrastructure (DSRI) hosted at Maastricht University. + +# References + +Joshua Ainslie, Santiago Ontañón, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, and Li Yang. 2020. ETC: encoding long and structured inputs in transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 268-284. Association for Computational Linguistics. +Nikolaos Aletras, Dimitrios Tsarapatsanis, Daniel Preotiuc-Pietro, and Vasileios Lampos. 2016. Predicting judicial decisions of the european court of human rights: a natural language processing perspective. PeerJ Computer Science, 2:e93. +Nigel J Balmer, Alexy Buck, Ash Patel, Catrina Denvir, and Pascoe Pleasence. 2010. Knowledge, capability and the experience of rights problems. London: PLEnet. +Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. CoRR, abs/2004.05150. +Emily M. Bender and Batya Friedman. 2018. 
Data statements for natural language processing: Toward mitigating system bias and enabling better science. + +Transactions of the Association for Computational Linguistics, 6:587-604. +Adam L. Berger, Rich Caruana, David Cohn, Dayne Freitag, and Vibhu O. Mittal. 2000. Bridging the lexical chasm: statistical approaches to answer-finding. In SIGIR 2000: Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 192-199. ACM. +Paheli Bhattacharya, Kaustubh Hiware, Subham Rajgaria, Nilay Pochhi, Kripabandhu Ghosh, and Saptarshi Ghosh. 2019. A comparative study of summarization algorithms applied to legal case judgments. In Advances in Information Retrieval - 41st European Conference on IR Research, volume 11437 of Lecture Notes in Computer Science, pages 413-428. Springer. +Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomás Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146. +Lukasz Borchmann, Dawid Wisniewski, Andrzej Gretkowski, Izabela Kosmala, Dawid Jurkiewicz, Lukasz Szalkiewicz, Gabriela Palka, Karol Kaczmarek, Agnieszka Kaliska, and Filip Gralinski. 2020. Contract discovery: Dataset and a few-shot semantic retrieval challenge with competitive baselines. In Findings of the Association for Computational Linguistics: EMNLP 2020, volume EMNLP 2020 of Findings of ACL, pages 4254-4268. Association for Computational Linguistics. +Ilias Chalkidis, Ion Androutsopoulos, and Nikolaos Aletras. 2019a. Neural legal judgment prediction in english. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 4317-4323. Association for Computational Linguistics. +Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, and Ion Androutsopoulos. 2019b. Large-scale multi-label text classification on EU legislation. 
In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 6314-6322. Association for Computational Linguistics. +Huajie Chen, Deng Cai, Wei Dai, Zehui Dai, and Yadong Ding. 2019. Charge-based prison term prediction with deep gating network. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 6361-6366. Association for Computational Linguistics. +Ting Chen, Yizhou Sun, Yue Shi, and Liangjie Hong. 2017. On sampling strategies for neural network-based collaborative filtering. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 767-776. + +Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M. Voorhees. 2020. Overview of the TREC 2019 deep learning track. CoRR, abs/2003.07820. +Catrina Denvir. 2016. Online and in the know? Public legal education, young people and the internet. Computers & Education, 92-93:204-220. +Xingyi Duan, Baoxin Wang, Ziyue Wang, Wentao Ma, Yiming Cui, Dayong Wu, Shijin Wang, Ting Liu, Tianxiang Huo, Zhen Hu, Heng Wang, and Zhiyuan Liu. 2019. CJRC: A reliable human-annotated benchmark dataset for chinese judicial reading comprehension. In 18th China National Conference on Chinese Computational Linguistics, volume 11856 of Lecture Notes in Computer Science, pages 439-451. Springer. +Leigh S Estabrook, G Evans Witt, and Harrison Rainie. 2007. Information searches that solve problems: How people use the Internet, libraries, and government agencies when they need help. Pew Internet & American Life Project. +Jean-Philippe Fauconnier. 2015. French word embeddings. +Biralatei Fawei, Adam Z. Wyner, and Jeff Z. Pan. 2016. Passing a USA national bar exam: a first corpus for experimentation. In Proceedings of the Tenth International Conference on Language Resources and Evaluation, pages 3373-3378. 
European Language Resources Association (ELRA). +Daniel Gillick, Alessandro Presta, and Gaurav Singh Tomar. 2018. End-to-end retrieval in continuous space. CoRR, abs/1811.08008. +Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomás Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018. European Language Resources Association (ELRA). +Ben Hachey and Claire Grover. 2006. Extractive summarisation of legal texts. Artificial Intelligence and Law, 14(4):305-345. +Margaret Hagan and Yue Li. 2020. Legal help search audit: Are search engines effective brokers of legal information? Available at SSRN 3623333. +Hamza Harkous, Kassem Fawaz, Rémi Lebret, Florian Schaub, Kang G. Shin, and Karl Aberer. 2018. Polisis: Automated analysis and presentation of privacy policies using deep learning. In 27th USENIX Security Symposium, pages 531-548. USENIX Association. +Matthew L. Henderson, Rami Al-Rfou, Brian Strope, Yun-Hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, and Ray Kurzweil. 2017. Efficient natural language response suggestion for smart reply. CoRR, abs/1705.00652. + +Dan Hendrycks, Collin Burns, Anya Chen, and Spencer Ball. 2021. CUAD: An expert-annotated nlp dataset for legal contract review. In Advances in Neural Information Processing Systems 31. +Sarah Holland, Ahmed Hosny, Sarah Newman, Joshua Joseph, and Kasia Chmielinski. 2018. The dataset nutrition label: A framework to drive higher data quality standards. arXiv preprint arXiv:1805.03677. +Nils Holzenberger, Andrew Blair-Stanek, and Benjamin Van Durme. 2020. A dataset for statutory reasoning in tax law entailment and question answering. 
In Proceedings of the Natural Legal Language Processing Workshop 2020 co-located with the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD 2020), volume 2645 of CEUR Workshop Proceedings, pages 31-38 CEUR-WS.org. +Zikun Hu, Xiang Li, Cunchao Tu, Zhiyuan Liu, and Maosong Sun. 2018. Few-shot charge prediction with discriminative legal attributes. In Proceedings of the 27th International Conference on Computational Linguistics, pages 487-498. Association for Computational Linguistics. +Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick S. H. Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, pages 6769-6781. Association for Computational Linguistics. +Mi-Young Kim, Juliano Rabelo, and Randy Goebel 2019. Statute law information retrieval and entailment. In Proceedings of the 6th Competition on Legal Information Retrieval and Entailment Workshop in association with the Seventeenth International Conference on Artificial Intelligence and Law, pages 283-289. ACM. +Kenton Lee, Ming-Wei Chang, and Kristina Toutanova 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 6086-6096. Association for Computational Linguistics. +Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumont, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierrick Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander Rush, and Thomas Wolf. 
2021 Datasets: A community library for natural language processing. In Proceedings of the 2021 Conference + +on Empirical Methods in Natural Language Processing: System Demonstrations, pages 175-184. Association for Computational Linguistics. +Marco Lippi, Przemyslaw Palka, Giuseppe Contissa, Francesca Lagioia, Hans-Wolfgang Micklitz, Giovanni Sartor, and Paolo Torroni. 2019. CLAUDETTE: an automated detector of potentially unfair clauses in online terms of service. Artificial Intelligence and Law, 27(2):117-139. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692. +Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In Proceedings of the 7th International Conference on Learning Representations. +Bingfeng Luo, Yansong Feng, Jianbo Xu, Xiang Zhang, and Dongyan Zhao. 2017. Learning to predict charges for criminal cases with legal basis. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2727-2736. Association for Computational Linguistics. +Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to information retrieval. Cambridge University Press. +Laura Manor and Junyi Jessy Li. 2019. Plain English summarization of contracts. In Proceedings of the Natural Legal Language Processing Workshop 2019, pages 1-11. Association for Computational Linguistics. +Louis Martin, Benjamin Müller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric de la Clergerie, Djamé Seddah, and Benoit Sagot. 2020. Camembert: a tasty french language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, pages 7203-7219. Association for Computational Linguistics. +Tomás Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. 
Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations, ICLR 2013. +Tomás Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pages 3111-3119. +Ha-Thanh Nguyen, Hai-Yen Thi Vuong, Phuong Minh Nguyen, Tran Binh Dang, Quan Minh Bui, Vu Trong Sinh, Chau Minh Nguyen, Vu D. Tran, Ken Satoh, and Minh Le Nguyen. 2020. JNLP team: Deep learning for legal processing in COLIEE 2020. CoRR, abs/2011.08071. + +Alejandro Ponce, Sarah Chamness Long, Elizabeth Andersen, Camilo Gutierrez Patino, Matthew Harman, Jorge A Morales, Ted Piccone, Natalia Rodriguez Cajamarca, Adriana Stephan, Kirssy Gonzalez, Jennifer VanRiper, Alicia Evangelides, Rachel Martin, Priya Khosla, Lindsey Bock, Erin Campbell, Emily Gray, Amy Gryskiewicz, Ayyub Ibrahim, Leslie Solis, Gabriel Hearn-Desautels, and Francesca Tinucci. 2019. Global Insights on Access to Justice 2019: Findings from the World Justice Project General Population Poll in 101 Countries. World Justice Project. +Juliano Rabelo, Mi-Young Kim, Randy Goebel, Masaharu Yoshioka, Yoshinobu Kano, and Ken Satoh. 2020. COLIEE 2020: Methods for legal document retrieval and entailment. In New Frontiers in Artificial Intelligence - JSAI-isAI 2020 Workshops, JURISIN, LENLS 2020 Workshops, volume 12758 of Lecture Notes in Computer Science, pages 196-210. Springer. +Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, pages 3980-3990. Association for Computational Linguistics. +Stephen E. Robertson, Steve Walker, Susan Jones, Micheline Hancock-Beaulieu, and Mike Gatford. 1994. Okapi at TREC-3. 
In Proceedings of The Third Text Retrieval Conference, TREC 1994, volume 500-225 of NIST Special Publication, pages 109-126. National Institute of Standards and Technology (NIST). +Nandan Thakur, Nils Reimers, Andreas Rückle, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track. +Chaojun Xiao, Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Yansong Feng, Xianpei Han, Zhen Hu, Heng Wang, and Jianfeng Xu. 2018. CAIL2018: A large-scale legal dataset for judgment prediction. CoRR, abs/1807.02478. +Chaojun Xiao, Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Tianyang Zhang, Xianpei Han, Zhen Hu, Heng Wang, and Jianfeng Xu. 2019. CAIL2019-SCM: A dataset of similar case matching in legal domain. CoRR, abs/1911.08962. +Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk. 2021. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In 9th International Conference on Learning Representations, ICLR 2021. OpenReview.net. +Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernandez Abrego, + +Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2020. Multilingual universal sentence encoder for semantic retrieval. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, ACL 2020, pages 87-94. Association for Computational Linguistics. + +Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Chaojun Xiao, Zhiyuan Liu, and Maosong Sun. 2018. Legal judgment prediction via topological learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3540-3549. Association for Computational Linguistics. 
Haoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2020. JEC-QA: A legal-domain question answering dataset. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, volume 34(05), pages 9701-9708. AAAI Press.

# Appendix

# A Legal Codes

Table 3 presents a detailed summary of the 32 publicly available Belgian codes collected for BSARD.

# B Evaluation Metrics

Let $\operatorname{rel}_q(a) \in \{0,1\}$ be the binary relevance label of article $a$ for question $q$, and $\langle i,a\rangle \in \mathcal{F}_q$ a result tuple (article $a$ at rank $i$) from the filter set $\mathcal{F}_q \subset \mathcal{C}$ of ranked articles retrieved for question $q$.

Recall. The recall $\mathrm{R}_q$ is the fraction of relevant articles retrieved for query $q$ w.r.t. the total number of relevant articles in the corpus $\mathcal{C}$, i.e.,

$$
\mathrm{R}_q = \frac{\sum_{\langle i,a\rangle \in \mathcal{F}_q} \operatorname{rel}_q(a)}{\sum_{a \in \mathcal{C}} \operatorname{rel}_q(a)}. \tag{7}
$$

Reciprocal rank. The reciprocal rank $(\mathrm{RR}_q)$ calculates the reciprocal of the rank at which the first relevant article is retrieved, i.e.,

$$
\mathrm{RR}_q = \max_{\langle i,a\rangle \in \mathcal{F}_q} \frac{\operatorname{rel}_q(a)}{i}. \tag{8}
$$

Average precision. The average precision $\mathrm{AP}_q$ is the mean of the precision values obtained after each relevant article is retrieved, that is

$$
\mathrm{AP}_q = \frac{\sum_{\langle i,a\rangle \in \mathcal{F}_q} \mathrm{P}_{q,i} \times \operatorname{rel}_q(a)}{\sum_{a \in \mathcal{C}} \operatorname{rel}_q(a)}, \tag{9}
$$

where $\mathrm{P}_{q,j}$ is the precision computed at rank $j$ for query $q$, i.e., the fraction of relevant articles retrieved for query $q$ w.r.t.
the total number of articles in the retrieved set $\{\mathcal{F}_q\}_{i=1}^{j}$:

$$
\mathrm{P}_{q,j} = \frac{\sum_{\langle i,a\rangle \in \{\mathcal{F}_q\}_{i=1}^{j}} \operatorname{rel}_q(a)}{\left|\{\mathcal{F}_q\}_{i=1}^{j}\right|}. \tag{10}
$$

We report the macro-averaged recall (R), mean reciprocal rank (MRR), and mean average precision (MAP), which are the average values of the corresponding metrics over a set of $n$ queries. Note that as those metrics are computed for a filter set of size $k = |\mathcal{F}_q| \ll |\mathcal{C}|$ (and not on the entire list of articles in $\mathcal{C}$), we report them with the suffix "@k".

# C Dataset Documentation

# C.1 Dataset Nutrition Labels

As a first way to document our dataset, we provide the dataset nutrition labels (Holland et al., 2018) for BSARD in Table 4.

# C.2 Data Statement

In addition to the data nutrition labels, we include the data statement (Bender and Friedman, 2018) for BSARD, which provides detailed context on the dataset so that researchers, developers, and users can understand how models built upon it might generalize, be appropriately deployed, and potentially reflect bias or exclusion.

Curation rationale. All law articles from the selected Belgian codes were included in our dataset, except those revoked (identifiable because mentioned before the article or empty content) and those with a duplicate number within the same code (namely, the articles from Act V, Book III of the Civil Code; from Sections 2, 2bis, and 3 of Chapter II, Act VIII, Book III of the Civil Code; from Act XVIII, Book III of the Civil Code; from the Preliminary Act of the Code of Criminal Instruction; from the Appendix of the Judicial Code). Not including the latter articles did not pose a vital concern because none of them were mentioned as relevant to any of the questions in our dataset.
Regarding the questions, all those that referenced at least one of the articles from our corpus were included in the dataset. + +Language variety. The questions and legal articles were collected in French (fr-BE) as spoken in Wallonia and Brussels-Capital region. + +
| Authority | Code | #Articles | #Relevant |
| --- | --- | --- | --- |
| Federal | Judicial Code | 2285 | 429 |
| | Code of Economic Law | 2032 | 98 |
| | Civil Code | 1961 | 568 |
| | Code of Workplace Welfare | 1287 | 25 |
| | Code of Companies and Associations | 1194 | 0 |
| | Code of Local Democracy and Decentralization | 1159 | 3 |
| | Navigation Code | 977 | 0 |
| | Code of Criminal Instruction | 719 | 155 |
| | Penal Code | 689 | 154 |
| | Social Penal Code | 307 | 23 |
| | Forestry Code | 261 | 0 |
| | Railway Code | 260 | 0 |
| | Electoral Code | 218 | 0 |
| | The Constitution | 208 | 5 |
| | Code of Various Rights and Taxes | 191 | 0 |
| | Code of Private International Law | 135 | 4 |
| | Consular Code | 100 | 0 |
| | Rural Code | 87 | 12 |
| | Military Penal Code | 66 | 1 |
| | Code of Belgian Nationality | 31 | 8 |
| Regional | Walloon Code of Social Action and Health | 3650 | 40 |
| | Walloon Code of the Environment | 1270 | 22 |
| | Walloon Code of Territorial Development | 796 | 0 |
| | Walloon Public Service Code | 597 | 0 |
| | Walloon Code of Agriculture | 461 | 0 |
| | Brussels Spatial Planning Code | 401 | 1 |
| | Walloon Code of Basic and Secondary Education | 310 | 0 |
| | Walloon Code of Sustainable Housing | 286 | 20 |
| | Brussels Housing Code | 279 | 44 |
| | Brussels Code of Air, Climate and Energy Management | 208 | 0 |
| | Walloon Animal Welfare Code | 108 | 0 |
| | Brussels Municipal Electoral Code | 100 | 0 |
| **Total** | | **22633** | **1612** |
Table 3: Summary of the number of articles collected (after pre-processing) from each of the Belgian codes considered for BSARD, as well as the number of articles found to be relevant for at least one of the legal questions.

Speaker demographic. Speakers were not directly approached for inclusion in this dataset and thus could not be asked for demographic information. Questions were collected, anonymized, and reformulated by Droits Quotidiens. Therefore, no direct information about the speakers' age and gender distribution or socioeconomic status is available. However, it is expected that most, but not all, of the speakers are adults (18+ years), speak French as a native language, and live in Wallonia or the Brussels-Capital region.

Annotator demographic. A total of six Belgian jurists from Droits Quotidiens contributed to annotating the questions. All have a law degree from a Belgian university and years of experience in providing legal advice and clarifications of the law. They range in age from 30-60 years, including one man and five women, gave their ethnicity as white European, speak French as a native language, and represent the upper middle class based on income levels.

Speech situation. The questions were written between 2018 and 2021 and collected in May 2021. They represent informal, asynchronous, edited, written language that does not exceed 44 words. No question contains hateful, aggressive, or inappropriate language, as they were all reviewed and reworded by Droits Quotidiens to be neutral, anonymous, and comprehensive. All the legal articles were written between 1804 and 2021 and collected in May 2021. They represent strong, formal, written language containing up to 5,790 words.

Text characteristics. Many articles complement or rely on other articles in the same or another code and thus contain (sometimes lengthy) legal references, which might be seen as noisy data.

Recording quality. N/A.

Other. N/A.

Provenance appendix. N/A.

# Data Facts

Belgian Statutory Article Retrieval Dataset (BSARD)

# Metadata

| Field | Value |
| --- | --- |
| Filename | articles_fr.csv*, questions_fr_train.csv†, questions_fr_test.csv‡ |
| Format | CSV |
| Url | https://doi.org/10.5281/zenodo.5217310 |
| Domain | natural language processing |
| Keywords | information retrieval, law |
| Type | tabular |
| Rows | 22633*, 886†, 222‡ |
| Columns | 6*, 6†, 6‡ |
| Missing | none |
| License | CC BY-NC-SA 4.0 |
| Released | August 2021 |
| Range | N/A |
| Description | This dataset is a collection of French native legal questions posed by Belgian citizens and law articles from the Belgian legislation. The articles come from 32 publicly available Belgian codes. Each question is labeled by one or several relevant articles from the corpus. The annotations were done by a team of experienced Belgian jurists. |

# Variables

| Variable | Description |
| --- | --- |
| id* | A unique ID number for the article. |
| article* | The full content of the article. |
| code* | The code to which the article belongs. |
| article_no* | The article number in the code. |
| description* | The concatenated headings of the article. |
| law_type* | Either "regional" or "national" law. |
| id†,‡ | A unique ID number for the question. |
| question†,‡ | The content of the question. |
| category†,‡ | The general topic of the question. |
| subcategory†,‡ | The precise topic of the question. |
| extra_description†,‡ | Extra categorization tags of the question. |
| article_ids†,‡ | A list of article IDs relevant to the question. |

# Provenance

Source: Belgian legislation (https://www.ejustice.just.fgov.be/loi/loi.htm) and Droits Quotidiens (https://droitsquotidiens.be).

Author: Antoine Louis (a.louis@maastrichtuniversity.nl).

Table 4: Dataset nutrition labels for BSARD.

# C.3 Intended Uses

The dataset is intended to be used by researchers to build and evaluate models on retrieving law articles relevant to an input legal question. Therefore, it should not be regarded as a reliable source of legal information at this point in time, as both the questions and articles correspond to an outdated version of the Belgian law from May 2021 (time of dataset collection). In the latter case, the user is advised to consult daily updated official legal resources (e.g., the Belgian Official Gazette).

# C.4 Hosting

We provide access to BSARD on Hugging Face Datasets (Lhoest et al., 2021) at https://huggingface.co/datasets/antoiloui/bsard. Additionally, the dataset is hosted on Zenodo at https://doi.org/10.5281/zenodo.5217310.
# C.5 Data Format

The dataset is stored as CSV files and can be read using standard libraries (e.g., the built-in csv module in Python) or the datasets library:

```python
from datasets import load_dataset
data = load_dataset("antoiloui/bsard")
```

# C.6 Reproducibility

We ensure the reproducibility of the experimental results by releasing our code on GitHub at https://github.com/maastrichtlawtech/bsard.

# C.7 Licensing

The dataset is publicly distributed under a CC BY-NC-SA 4.0 license, which allows freely sharing (i.e., copying and redistributing) and adapting (i.e., remixing, transforming, and building upon) the material, on the conditions that the latter is used for non-commercial purposes only, proper attribution is given (i.e., appropriate credit, a link to the license, and an indication of changes), and the same license as the original is used if one distributes an adapted version of the material. In addition, the code to reproduce the experimental results of the paper is released under the MIT license.

# C.8 Maintenance

The dataset will be supported and maintained by the Law & Tech Lab at Maastricht University. Any updates to the dataset will be communicated via the GitHub repository. All questions and comments about the dataset can be sent to Antoine Louis: a.louis@maastrichtuniversity.nl. Other contacts can be found at https://maastrichtuniversity.nl/law-and-tech-people.
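For reference, the evaluation metrics defined in Appendix B (Equations 7-10) can be sketched in Python as follows. This is an illustrative implementation, not the released evaluation code; function names and the list/set input format are our own choices.

```python
def recall_at_k(retrieved, relevant, k):
    """R@k (Eq. 7): fraction of all relevant articles found in the top-k results."""
    return sum(a in relevant for a in retrieved[:k]) / len(relevant)

def reciprocal_rank_at_k(retrieved, relevant, k):
    """RR@k (Eq. 8): inverse rank of the first relevant article, 0 if none is retrieved."""
    for rank, a in enumerate(retrieved[:k], start=1):
        if a in relevant:
            return 1.0 / rank
    return 0.0

def average_precision_at_k(retrieved, relevant, k):
    """AP@k (Eq. 9): mean of the precision values at the ranks of relevant articles."""
    hits, total = 0, 0.0
    for rank, a in enumerate(retrieved[:k], start=1):
        if a in relevant:
            hits += 1
            total += hits / rank  # precision at this rank (Eq. 10)
    return total / len(relevant)

def macro_average(metric, runs, k):
    """Macro-average a metric over (retrieved, relevant) pairs for a set of queries."""
    return sum(metric(r, rel, k) for r, rel in runs) / len(runs)
```

For example, with `retrieved = [3, 1, 2, 5]` and `relevant = {1, 5}`, recall@4 is 1.0, RR@4 is 0.5 (first relevant article at rank 2), and AP@4 is (1/2 + 2/4) / 2 = 0.5.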
# A Taxonomy of Empathetic Questions in Social Dialogs

Ekaterina Svikhnushina, Iuliana Voinea, Anuradha Welivita and Pearl Pu

School of Computer and Communication Sciences

EPFL, Lausanne, Switzerland

{ekaterina.svikhnushina, iuliana.voinea, kalpani.welivita, pearl.pu}@epfl.ch

# Abstract

Effective question-asking is a crucial component of a successful conversational chatbot. It could help the bots manifest empathy and render the interaction more engaging by demonstrating attention to the speaker's emotions. However, current dialog generation approaches do not model this subtle emotion regulation technique due to the lack of a taxonomy of questions and their purpose in social chitchat. To address this gap, we have developed an empathetic question taxonomy (EQT), with special attention paid to questions' ability to capture communicative acts and their emotion-regulation intents.
We further design a crowdsourcing task to annotate a large subset of the EmpatheticDialogues dataset with the established labels. We use the crowd-annotated data to develop automatic labeling tools and produce labels for the whole dataset. Finally, we employ information visualization techniques to summarize co-occurrences of question acts and intents and their role in regulating the interlocutor's emotion. These results reveal important question-asking strategies in social dialogs. The EQT classification scheme can facilitate computational analysis of questions in datasets. More importantly, it can inform future efforts in empathetic question generation using neural or hybrid methods.

# 1 Introduction

Questions constitute a considerable part of casual conversations and play many important social functions (Huang et al., 2017; Enfield et al., 2010). Asking follow-up questions about the speaker's statement indicates responsiveness, attention, and care for the partner (Bregman, 2020; Huang et al., 2017). Listeners who manifest such an empathetic and curious attitude are more likely to establish the common ground for meaningful communication (McEvoy and Plant, 2014) and appear more likable to the speakers (Huang et al., 2017).

The vital role of questions in social interaction makes question-asking a desirable property for open-domain chatbots. These chatbots aim to engage in a natural conversation with the users while practicing active listening to deliver understanding and recognition of users' feelings (Rashkin et al., 2019). In fact, generating meaningful questions is so important that this has become one of the central objectives of such agents (Xiao et al., 2020).

However, asking questions effectively is challenging, as not all questions can achieve a particular social goal, such as demonstrating attentiveness or empathy (Huang et al., 2017; Robinson and Heritage, 2006; Paukert et al., 2004).
Given the task complexity, automatic conversational question generation is still gaining momentum, with only a few results reported so far. See et al. (2019) suggested a way to control the number of questions produced by the model with conditional training. Wang et al. (2019) proposed a question-generation method that increases questions' semantic coherence with the answer, employing reinforcement learning followed by an adversarial training procedure. Wang et al. (2018) devised a model generating appropriate questions for a variety of topics by modeling the types of words used in a question (interrogatives, topic words, and ordinary words). These works presented approaches to produce contextually appropriate and diverse questions, but none of them considered the effect of questions on the interlocutor's emotional state. We attribute this deficiency to the lack of resources for analyzing and modeling various question-asking strategies in affect-rich social exchanges.

To address this gap, we present a categorization and analysis of questions in social dialogs, with four main contributions. First, we develop an Empathetic Question Taxonomy, EQT, by manually annotating a subset of the EmpatheticDialogues (ED) dataset (Rashkin et al., 2019) (§4). EQT delineates the acts and intents of questions. Question acts capture semantic-driven communicative actions of questions, while question intents describe the emotional effect the question should have on the dialog partner. For example, a listener may request information (question act) about the age of the speaker's daughter by asking "How old is she?" after learning about her success, with the aim of amplifying the speaker's pride in his child (question intent). Second, we design and launch a crowd-sourcing annotation task to grow the original labeled seed subset tenfold (§5). Third, we devise an automatic classification model, QBERT, to generate labels for the rest of the ED dataset to demonstrate one important application of the taxonomy (§6). QBERT can facilitate the development of chatbots that offer engaging and empathetic conversations by raising meaningful questions. Finally, we inspect co-occurrences of acts and intents and their effect on the interlocutor's emotion using visualization techniques (§7). The analysis illustrates the most prominent question-asking strategies in human emotional dialogs. To conclude, we discuss the implications of these results for future question generation approaches.

# 2 Related Work

Previously proposed taxonomies of dialog acts frequently differ in the natural language tasks they assist. The Dialog Act Markup in Several Layers (DAMSL) tag set was designed to enable computational modeling of conversational speech using statistical methods (Jurafsky et al., 1997; Core and Allen, 1997). It consists of 42 communicative acts derived from a Switchboard corpus. Eight of these labels describe different question types according to their semantic role, e.g., Wh-question or Rhetorical-Question. Several works proposed hierarchical taxonomies of dialog acts, targeted at modeling users' intents in human-machine conversations. Montenegro et al. (2019) introduced their annotation scheme for a symbolic dialog system intended to improve the lives of the elderly, while Yu and Yu (2021) designed a scheme for facilitating general human-machine chit-chat. In both works, the logs of human-machine interactions were used for producing the taxonomies. Each of them features labels devoted to questions, characterizing them either by a question word, e.g., How or What, or by the form of the expected answer, e.g., Open-ended or Yes/No question.
Finally, Welivita and Pu (2020) suggested a taxonomy of empathetic response intents in dialogs from the ED dataset with the purpose of improving controllability in neural dialog generation approaches. They further stated that Questioning is one of the most frequent intents of the empathetic listeners. However, none of these works focused on a fine-grained analysis of questions and their role in empathetic dialogs.

Meanwhile, several linguistic studies closely examined the pragmatics of questions and offered a number of classification schemes. Graesser et al. (1994) developed a scheme of 18 tags based on the information sought by the question. Their taxonomy applies well to transactional exchanges, but does not capture the social dimension. Freed (1994) studied the correspondence between the social function of questions and their syntactic form. She established 16 social question functions occurring in dyadic spoken conversations between friends. In another research effort, a group of linguists explored the range of social actions performed by questions across 10 languages (Enfield et al., 2010). The authors developed a coding scheme comprising 3 semantic question types and 7 social actions and applied it to questions in spontaneous spoken conversations (Stivers and Enfield, 2010). Finally, Huang et al. (2017) developed a taxonomy of 6 question types to describe questions occurring in their dataset of chat-based conversations between strangers instructed to get to know each other.

The described works provide an insightful basis for studying questions in social conversations. However, they do not consider the effect of questions on their addressee's emotional states, nor do they describe specific mechanisms to support computational modeling. Moreover, most of them apply to spoken dialogs, impeding the extension of their results to chat-based exchanges due to the inherent differences in these modalities.
Lastly, they relied mainly on manual annotation, yielding comparatively smaller datasets. In our study, we extended the derived taxonomy to a large corpus using crowd-sourcing and automatic methods and analyzed the emerging patterns on a large scale. We summarize the comparison of our question taxonomy with the existing schemes in Table 1.

| Taxonomy | # labels | social function | emotional function | dataset |
| --- | --- | --- | --- | --- |
| (Graesser et al., 1994) | 18 | ✗ | ✗ | ✗ |
| (Freed, 1994) | 16 | ✓ | ✗ | ✗ |
| (Enfield et al., 2010) | 7 | ✓ | ✗ | ✗ |
| (Huang et al., 2017) | 6 | ✓ | ✗ | ✗ |
| EQT | 21 | ✓ | ✓ | ✓ |

Table 1: Comparison of question taxonomies.

# 3 Dataset

For taxonomy derivation, we sought a dataset that contains social dialogs with diverse emotional expressions and could be applicable to train a chatbot with advanced question-generating abilities. We avoided datasets featuring multi-modal dialogs (IEMOCAP (Busso et al., 2008), MELD (Poria et al., 2019)) as well as transcribed spoken conversations (EmotionLines (Hsu et al., 2018), Switchboard (Jurafsky et al., 1997)). Such dialogs contain back-channel communication and other sensory signals that are not present in chat-based conversations and, therefore, are not well-suited for the modeling task. Similarly, we rejected datasets that assist other tasks than social conversation modeling, such as SQuAD (Rajpurkar et al., 2016) (reading comprehension) or CoQA (Reddy et al., 2019) (information gathering). Finally, we did not consider datasets from social media as they can contain toxic and aggressive responses (Zhang et al., 2018).

We opted for the EmpatheticDialogues (ED) dataset (Rashkin et al., 2019), a benchmark dataset for empathetic dialog generation containing 24,850 conversations grounded in emotional contexts. Each dialog is initiated by a speaker describing a feeling or experience and continued by a listener who was instructed to respond empathetically. The dialogs are evenly distributed over the 32 emotional contexts, covering various speaker sentiments (e.g., sad, joyful, proud). We found the ED dataset to be a rich source of question-asking, as over 60% of all dialogs contain a question in one of the listeners' turns, resulting in a total of 20K listener questions. Basic statistics of the dataset are given in Table 2.
| Descriptor | Value |
| --- | --- |
| # dialogs in total | 24,850 |
| # turns per dialog on avg. | 4.31 |
| # dialogs with at least one question from listener | 15,253 (61.4%) |
| # questions from listeners | 20,201 |
+ +Table 2: Statistics of the EmpatheticDialogues dataset. + +# 4 Defining Empathetic Question Taxonomy + +Given the community's interest in question-asking functionality for chatbots and its significance for empathetic response generation, we aimed at developing a taxonomy of listeners' questions asked in response to speakers' emotional inputs. For this purpose, being guided by prior literature review, we employed a qualitative coding method, which is an established approach for such tasks (Stivers and Enfield, 2010; Huang et al., 2017; Zeinert et al., 2021). Qualitative coding is a process of grouping and labeling similar types of data and iteratively validating the labels. + +To cover a diverse range of speakers' emotions, we sampled several hundred dialogs uniformly from the 32 emotional contexts in the ED corpus. The sample size was chosen to balance the need for the diversity of questions with researchers' ability to consider each question carefully and was consistent with prior practice. The coding process was informed by previous question classification schemes (Table 1) and knowledge about general principles of emotional regulation (Gross, 2013). Iterative adjustments were applied resulting from discussions of the concrete data. Specifically, the first author made several iterations of coding trials to develop an initial set of labels. Throughout the process, a number of review sessions were held with the last author to merge the labels into more focused classes. As a result, we developed the Empathetic Question Taxonomy (EQT) with two distinguished branches: question acts describe semantic-driven features of questions (e.g., ask for confirmation, positive rhetoric), whereas question intents characterize their emotion-regulation functions targeted at the interlocutor's emotional state (e.g., sympathize, amplify excitement). 
As will be revealed further (§7), an empathetic listener can use different question acts to deliver the same intent, justifying the proposed branching.

Overall, more than 310 questions were annotated. EQT consists of 9 labels for question acts and 12 labels for question intents. The granularity of the taxonomy was driven by earlier linguistic findings and empirical observations about the interplay of the labels in the two branches. For example, the question acts request information (Enfield et al., 2010), ask about consequence (Graesser et al., 1994), and ask about antecedent (Graesser et al., 1994) are related and could possibly be grouped. However, we decided to keep them separate, as listeners use them with unequal frequencies in positive and negative emotional contexts and combine them with different question intents (§7). Similarly, the initial set of labels for question intents was created based on the variety of emotions present in the dataset. We further reduced it to a manageable size to make it more applicable for an annotation task, while still preserving sufficient expressiveness of labels to represent subtleties of the data (Zeinert et al., 2021). We present the labels with their definitions below and provide several examples in Figure 1. Examples for each act and intent label are given correspondingly in Tables 4 and 5 from Appendix A.

# Question acts

Request information (38.7%): Ask for new factual information.

Ask about consequence (21.0%): Ask about the result of the described action or situation.

Ask about antecedent (17.1%): Ask about the reason or cause of the described state or event.

Suggest a solution (8.7%): Provide a specific solution to a problem in the form of a question.

Ask for confirmation (5.8%): Ask a question to confirm or verify the listener's understanding of something that has been described by the speaker.
Suggest a reason (5.2%): Suggest a specific reason or cause of the event or state described by the speaker in the form of a question.

Irony (1.3%): Ask a question that suggests the opposite of what the speaker may expect, usually to be humorous or pass judgement.

Negative rhetoric (1.3%): Ask a question to express a critical opinion or validate a speaker's negative point without expecting an answer.

Positive rhetoric (1.0%): Ask a question to make an encouraging statement or demonstrate agreement with the speaker about a positive point without expecting an answer.

# Question intents

Express interest (57.1%): Express the willingness to learn or hear more about the subject brought up by the speaker; demonstrate curiosity.

Express concern (20.3%): Express anxiety or worry about the subject brought up by the speaker.

Offer relief (4.8%): Reassure the speaker who is anxious or distressed.

Sympathize (3.9%): Express feelings of pity and sorrow for the speaker's misfortune.

Support (2.6%): Offer approval, comfort, or encouragement to the speaker; demonstrate an interest in and concern for the speaker's success.

Amplify pride (2.6%): Reinforce the speaker's feeling of pride.

Amplify excitement (1.9%): Reinforce the speaker's feeling of excitement.

Amplify joy (1.6%): Reinforce the speaker's glad feeling such as pleasure, enjoyment, or happiness.

De-escalate (1.6%): Calm down the speaker who is agitated, angry, or temporarily out of control.

Pass judgement (1.6%): Express a (critical) opinion about the subject brought up by the speaker.

Motivate (1.0%): Encourage the speaker to move onward.

Moralize speaker (1.0%): Judge the speaker.

To validate the interpretability of the labels and the efficacy of the instructions for the crowd-sourcing task, we invited two other members from our research group and asked them to annotate questions in 20 randomly selected dialogs, containing 25 questions.
The annotators were instructed to consider the preceding dialog turns while assigning the labels, as the same question might fall into different categories depending on the context. For example, the question "What happened!?" can be classified as Express interest or Express concern, depending on the valence of the speaker's emotion. We computed both the Fleiss kappa (Fleiss, 1971) and the observed agreement among the first author and the two annotators. The observed agreement was calculated as the percentage of questions with at least two agreed labels (Endriss and Fernández, 2013). We considered it a reliable measure of inter-rater agreement because the number of coding categories was large (9 for acts and 12 for intents), yielding relatively low chance agreement (1/9 ≈ 11.1% and 1/12 ≈ 8.3%, respectively). The agreement resulted in 92% for acts ($\kappa = 0.52$) and 80% for intents ($\kappa = 0.31$), supporting the satisfactory interpretability of EQT.

- My cat vomited on my shoes today (Negative)
- Is your cat ill? (Suggest a reason, Sympathize) or does cat always do that? (Request info, Express concern)
- no he just ate too much (Neutral)

- I got approved to adopt a dog! (Positive)
- Yay! I love dogs! Do you have any you want to get specifically or are you just going to look until you find one that clicks? (Ask about consequence, Amplify excitement)
- Oh I already picked one! I'll be picking her up this weekend. (Positive)

Figure 1: Examples of dialogs grounded in negative (top) and positive (bottom) emotional contexts. Listeners' questions are shown in bold with the assigned (act, intent) labels given in parentheses. The valence of the speaker's emotion in each turn is also indicated.

# 5 Crowd-Sourced Annotation

For further analysis, we annotated a larger subsample of the ED dataset with the EQT labels by designing and launching a crowd-sourcing task on Amazon Mechanical Turk (Mturk).
The design was refined based on three pilot studies: one internal and two Mturk-based. For the annotation, we sampled about 40% of dialogs from each of the original 32 emotional contexts. We only sampled dialogs with at least one question in one of the listener's turns. The dialogs were then pre-processed so that each dialog ended with a question requiring a label. Further, we distributed the dialogs into individual human intelligence tasks (HITs) and launched them on Mturk in a sequence of batches. For each HIT we collected annotations from three workers. The incentive for one HIT varied from \$0.4 to \$0.9, depending on the worker's performance and the task configuration. We describe the details of the task design and the annotation procedure below; exhaustive explanations of the dialog pre-processing and the task user interface are provided in Appendix B.

# 5.1 Task design

The interface consisted of four main components: instructions, terminology, a terminology quiz, and the annotation task. The instructions informed the workers about the purpose of the task. Next, the terminology page outlined the description of EQT, listing the definition of each label with examples. The terminology quiz contained six dialogs from the terminology page and invited the worker to select correct labels for the questions in each dialog. Finally, the annotation task included 25 dialogs, each ending with a listener turn with one or multiple questions. Under each question, labels from the two EQT branches were presented, and the worker had to select the single most suitable label within each set. Twenty of the 25 dialogs were treated as points for annotation, and the other 5 were bonus dialogs. For the bonus questions, we identified the gold labels during the manual annotation phase and used them to control workers' quality: a worker had to select the correct labels to score the points counting towards an additional incentive (\$0.2).

We required all workers who accepted one of our tasks for the first time to take the terminology quiz. Workers who assigned the correct labels to at least three questions could proceed to the annotation task and were granted a bonus payment for passing the quiz (\$0.1). The quiz was not required for workers who had already passed it once.

# 5.2 Quality control

In addition to the terminology quiz, we used several mechanisms to control the annotation quality. First, following Mturk recommendations, we only allowed workers with a 98% approval rate to access our tasks. Second, we rejected assignments whose completion time deviated significantly from the expected average. Further, we ran additional checks for workers who accepted several of our assignments simultaneously. Lastly, we computed the inter-rater agreement for each batch and discarded the submissions that harmed the agreement.

# 5.3 Results

Overall, we launched 556 HITs and 465 of them were completed. The rejection rate after the quality control was 4.7%. Upon obtaining the results, we first computed the Fleiss kappa scores for acts ($\kappa = 0.34$) and for intents ($\kappa = 0.27$) to validate that the agreement between the workers was acceptable. Then, we identified the final labels using a majority vote: if at least two workers agreed on a label, we chose it as the final label. This resulted in an 83.6% observed agreement score for acts and 75.8% for intents. The majority-vote approach has been shown to filter out the noisy judgments of amateurs, producing a labeled set of quality comparable to expert annotations (Nowak and Rüger, 2010). As a final check, we computed the kappa agreement between the crowd-sourced labels and the first author's annotations for a subset of 450 randomly sampled questions.
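The aggregation just described reduces to a few lines. The sketch below uses toy worker labels; the helper names are our own illustration, not the paper's code:

```python
from collections import Counter

def majority_label(votes):
    """Final label when at least two of the three workers agree, else None."""
    label, count = Counter(votes).most_common(1)[0]
    return label if count >= 2 else None

def observed_agreement(all_votes):
    """Share of questions on which at least two workers agreed."""
    agreed = [v for v in all_votes if majority_label(v) is not None]
    return len(agreed) / len(all_votes)

votes = [
    ["Express interest", "Express interest", "Express concern"],  # majority found
    ["Offer relief", "Sympathize", "Support"],                    # no majority
]
# majority_label(votes[0]) == "Express interest"; observed_agreement(votes) == 0.5
```

Fleiss' kappa additionally corrects the raw agreement for chance, which matters here because both label sets are large.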
The scores equaled 0.57 for acts (71.6% observed agreement) and 0.50 for intents (68.0% observed agreement), indicating moderate agreement, which we treat as satisfactory for this type of task. As a result, an act label was assigned to 6,433 questions and an intent label to 5,826 questions, with an intersection of 4,962 questions.

# 6 Automatic Labeling

To show how EQT can be operationalized, we demonstrate the use of the taxonomy for annotating the remainder of the ED dataset. We first formulate the question act and intent prediction problems and then build two classification models to address them. Before training, we augmented the labeled set using the $k$-Nearest-Neighbors ($k$-NN) method. We also tried training the classifiers without data augmentation, but their performance was weaker (see Appendix D for details).

# 6.1 Data Augmentation

We employed the Sentence-BERT (SBERT) framework (Reimers and Gurevych, 2019) to obtain embeddings for all questions with their contexts. We then used cosine similarity to find the $k$ labeled nearest neighbors of each question in the unlabeled set and assign the same labels to it. For the first step, we computed the embedding of each dialog turn using the roberta-base-nli-stsb-mean-tokens SBERT model and then combined the turn embeddings into a single embedding per question with a weighted average. We opted for a weighted average instead of concatenation to keep the embedding vector at a manageable size. We used a half-decaying weighting scheme, giving the highest weight to the final question to indicate its importance. The use of this weighting scheme is guided by our previous experiments of a similar nature, where we observed that models with decaying weights performed better than those without them (Welivita et al., 2021). Next, we tested several approaches for identifying semantically similar dialogs to propagate the labels.
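A minimal sketch of this augmentation step, assuming toy two-dimensional vectors in place of SBERT outputs (the function names and the exact thresholding rule are our own illustration, not the paper's code):

```python
import math
from collections import Counter

def weighted_average(turn_embeddings):
    """Combine per-turn embeddings into one question embedding using
    half-decaying weights 1, 1/2, 1/4, ...; turn_embeddings[0] is the
    final question turn, so it receives the highest weight."""
    weights = [0.5 ** i for i in range(len(turn_embeddings))]
    total = sum(weights)
    dim = len(turn_embeddings[0])
    return [sum(w * emb[d] for w, emb in zip(weights, turn_embeddings)) / total
            for d in range(dim)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def propagate_label(query, labeled, k=3, threshold=0.825):
    """Majority label over the k nearest labeled neighbours whose cosine
    similarity clears the threshold; None when no clear majority survives."""
    scored = sorted(((cosine(query, emb), lab) for emb, lab in labeled), reverse=True)
    votes = Counter(lab for sim, lab in scored[:k] if sim >= threshold)
    if not votes:
        return None
    label, count = votes.most_common(1)[0]
    return label if count >= 2 else None
```

The `count >= 2` check enforces a true majority among the surviving top-3 votes, so ambiguous neighborhoods leave the question unlabeled rather than mislabeled.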
One strategy was to take the same label as the top-1 NN, provided that the similarity was higher than a predefined threshold. The other strategy was to use the label identified by a majority vote over the top-3 NNs. We did not experiment with higher values of $k$ due to resource considerations. We ran several cross-validation experiments on the labeled set with a grid search over cosine-similarity thresholds. The top-3 majority-vote strategy produced higher accuracy, and a 0.825 cosine-similarity threshold gave an acceptable trade-off between accuracy (~76% for both label sets) and the number of labeled questions. We therefore applied this strategy to the whole dataset, which produced an additional 1,911 labels for question acts and 1,886 labels for question intents. More details are provided in Appendix C.

# 6.2 Classifier Models

Using the human-annotated and augmented labels, we trained two classifiers, which we collectively call QBERT. The QBERT models have identical architectures and vary only in the number of output categories in the final layer. Each model consists of a BERT-based representation network, an attention layer, one hidden layer, and a softmax layer. For the representation network, we used the architecture with 12 layers, 768 dimensions, 12 heads, and 110M parameters. We initialized it with the weights of the RoBERTa language model pre-trained by Liu et al. (2019) and used the same hyper-parameters as the authors for training. As input, we fed a listener question and the preceding dialog turns in reverse order. To prioritize the question, the half-decaying weighting scheme described above was applied to the token embeddings of each turn.

Before training, we took out a stratified random sample of 20% of the questions (1,500) as a test set. The test set contained 1,156 human-annotated and 344 SBERT-annotated questions.
We separately trained each model on 80% of the remaining datapoints (5,475 acts, 4,969 intents), keeping the rest as a validation set (1,369 acts, 1,243 intents). We trained each model for 15 epochs and, for prediction, retained the checkpoints with the lowest validation loss (see Appendix D for details). The classifiers achieved 74.7% accuracy for intents and 79.1% accuracy for acts on the test set. Accuracies broken down by human- and SBERT-annotated test samples are given in Table 3. According to previous work, human-human agreement can be used as a proxy for human accuracy (Kumar, 2014; Somasundaran and Chodorow, 2014). Given the agreement in our Mturk experiment (~75-85%), QBERT exhibited reasonable predictive accuracy and validated the applicability and usefulness of EQT for language modeling tasks.

| Label source | Question intents | Question acts |
| --- | --- | --- |
| human | 71.0% | 77.1% |
| SBERT | 86.9% | 87.5% |
| both | 74.7% | 79.1% |

Table 3: Accuracy of QBERT classifiers on different slices of the test data based on the source of annotations (human, SBERT, or both).

# 7 Analysis of Questioning Strategies

In this section we present an analysis of the questioning strategies adopted by the empathetic listeners. We base our examination on the human-annotated questions instead of the whole ED dataset to avoid any potential noise introduced by automatic classification. Visualizations for the whole dataset are included in Appendix E. Here, by a questioning strategy, we mean the combination of act and intent labels assigned to a question. We first analyzed which labels from the two EQT branches form such strategies by plotting the co-occurrences of each pair (Figure 2). Larger circles represent more frequent strategies, while an empty cell indicates that people do not use the given act to deliver the corresponding intent. For example, to amplify a partner's joy, one may request information for more details or ask about the consequences of the event, but is unlikely to raise a negative rhetorical question. Several strategies are much more frequent than others. The act Request information and the intent Express interest dominate in our dataset, occurring together for 39% of questions. They define the most general type of questions, which are probably the easiest to ask, which may explain why listeners use them often. At the same time, dialogs in the ED dataset are relatively short, and it can be difficult for listeners to fully understand the ideas and feelings of speakers in a couple of turns. In this case, requesting information and expressing interest demonstrates the listener's attentive curiosity about the situation.

![](images/ba4fd5db8ba61aad569272ad8e5817b680cf08770e1a384a13cb422a8c023d50.jpg)
Figure 2: Joint distribution of question intents and acts for 5,272 human-labeled questions. Blue circles are proportional to the frequency of each pair's co-occurrence.
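The joint counts behind such a co-occurrence plot are a simple tally over (act, intent) pairs; a sketch with made-up data (the variable names are ours):

```python
from collections import Counter

# One (act, intent) pair per annotated question; toy data with EQT labels.
pairs = [
    ("Request information", "Express interest"),
    ("Request information", "Express interest"),
    ("Suggest a reason", "Sympathize"),
]

cooccurrence = Counter(pairs)  # circle sizes in a Figure 2-style plot
total = sum(cooccurrence.values())
share = {p: c / total for p, c in cooccurrence.items()}  # fraction of all questions
```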
Once listeners feel more confident about the speakers' sentiments and contexts, they employ more specific question-asking strategies.

We further analyzed this phenomenon temporally across dialog turns (Figure 3). Primarily, we studied how listeners' questioning strategies affect speakers' emotions by visualizing the mappings between them. For this visualization, we used the 41 emotion and intent labels describing each turn in the ED dataset produced by Welivita and Pu (2020). To avoid clutter, we mapped the original 41 labels to 3 coarser categories, positive, negative, and neutral, using our best judgement (see Appendix E for details). Then, for the dialogs containing a question in the second turn, we plotted how speakers' emotions and listeners' questioning strategies shift over the first three turns. We computed the frequencies of all questioning strategies and, for the ones occurring in more than 0.5% of cases, we plotted the flow patterns. We restricted our analysis to the first three turns because over 70% of dialogs in the ED dataset have only four turns, which precludes studying the influence of questioning strategies on later speaker turns. To still get an intuition of how listeners' question-asking behavior changes over consecutive turns, we plotted the dynamics of the ratios of question act and intent labels across the dialog depth.

Figure 3a shows the flow rates between speakers' emotions and listeners' questioning strategies. As observed before, listeners most often use follow-up questions to elicit more details about the situation by expressing interest and requesting information. In most such cases, the speaker's emotion is preserved in their consecutive utterance, as the speaker elaborates on the first turn while maintaining the sentiment. When speakers explain themselves with sufficient clarity already in the first turn, listeners raise more precise questions, adapting the strategy to the affective context.
If speakers share a positive experience, listeners try to amplify their emotions by requesting more information or asking about the consequences of the situation. On the contrary, when speakers disclose a negative sentiment, listeners try to validate and alleviate their feelings. They typically intend to express concern, sympathize, offer relief, or de-escalate the issue, and achieve it by asking about what preceded or followed the situation and by politely suggesting possible solutions or potential reasons for the issue. These specific strategies demonstrate their effectiveness: almost half of the negative speaker emotions get mitigated after the question intervention, while two thirds of the positive emotions persist in the following speaker turn. Examples of dialogs showing how listeners use questions to treat both positive and negative speaker sentiments are given in Figure 1. Additional examples are available in Figure 9 of Appendix D.

![](images/f359c39fc1bbc3f412171ce71d9ae1f9c9fb08c8c7b8cc17ca530ebda3630e3e.jpg)
(a)

![](images/97dc4899ade2a7d22de042fb589e7d62654ccedd057360427a3cd76fefb4e1b2.jpg)
(b)

![](images/5bca6ff8a93c541b971c377e3a4f351a8259b07d155694c5e0d64755999dcf3f.jpg)
(c)

Figure 3: a) Mappings between emotions disclosed by the speakers and listeners' questioning strategies in the first three turns of the ED dialogs (human-labeled ED subset). b) Frequency distribution of question acts across dialog turns (human-labeled ED subset). c) Frequency distribution of question intents across dialog turns (human-labeled ED subset). Two prevalent intents were excluded from c) for visual clarity; their percentage rates computed over all questions ($n = 3940$ and $n = 1274$) are: Express interest: 54.3% → 57.9%; Express concern: 22.5% → 13.7%.

Figures 3b and 3c demonstrate how the ratios of different acts and intents evolve over two successive listener responses.
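Such per-turn distributions come down to normalized counts per dialog depth; a sketch with hypothetical records (names are ours):

```python
from collections import Counter

# (dialog turn index, act label) per question; toy records.
questions = [
    (2, "Request information"),
    (2, "Request information"),
    (2, "Suggest a reason"),
    (4, "Suggest a solution"),
]

def act_ratios(turn):
    """Relative frequency of each act among questions asked at a given turn."""
    counts = Counter(act for t, act in questions if t == turn)
    total = sum(counts.values())
    return {act: c / total for act, c in counts.items()}
```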
Even though the horizon of four dialog turns may be too short to trace all the patterns, a few observations can be made. With increasing dialog depth, the overall number of questions decreases, while two types become more prominent: general questions (Request information, Express interest) and questions aimed at alleviating speakers' negative emotions (e.g., Suggest a solution, Offer relief). This may indicate that listeners employ specific strategies to react to positive speaker emotions immediately after their disclosure, but in negative contexts they tend to ask for extra clarifications first and deliver targeted emotional treatment only in the next turn. As dialogs converge to more neutral exchanges, reducing the need to manage speakers' feelings, the ratio of questions demonstrating listeners' general curiosity about the subject increases.

Finally, we reflected on the scarcely represented labels. Among the acts, Positive rhetoric, Negative rhetoric, and Irony appear least frequently. These labels can be broadly classified as rhetorical questions. They typically serve self-expression rather than conversational engagement and are therefore less common than other forms of questions (Huang et al., 2017). Moreover, negative rhetorical prompts may harm conversation quality (Zhang et al., 2018), which could also explain why listeners avoided them in empathetic dialogs. The same reasoning applies to the two infrequent intents, Pass judgement and Moralize speaker. Another surprisingly rare intent is Motivate. We believe that motivation might be difficult to express in the form of a question. Moreover, for people without special training, expressing motivation might be more challenging than other intents, as it requires a more thorough approach to solving one's problems.

# 8 Limitations and Future Work

Due to the nature of the ED dataset, some EQT labels are less represented than others.
We kept them in the taxonomy because we observed their distinctive role in managing the speaker's emotions. Analyzing them further is crucial for identifying and designing effective questioning strategies for empathetic conversations, such as promoting motivational questions and avoiding judgmental ones. Additional samples for these categories could be elicited by applying the QBERT classifiers to other datasets capturing social dialogs.

Our taxonomy does not cover the phatic role of questions that typically occur during greetings, e.g., "What's up?" or "How's it going?" Such questions were very rare in the ED dataset. We chose not to analyze them, since these routine questions are the most superficial (Huang et al., 2017) and are unlikely to serve any emotion-regulation function.

In the design of our annotation task, we opted to ask the crowd workers to choose the single most specific label from each of the two EQT branches. This was done to facilitate the analysis of questioning strategies within the scope of this study. Nevertheless, according to Graesser et al. (1994), the most adequate classification schemes in the social sciences allow assigning an observation to multiple categories rather than only one. This also applies to our case. For example, for the question "Did you go through a breakup recently?", both Suggest a reason and Request information can be relevant. Future work can explore the possibility of using multiple applicable labels in addition to the most specific one. Additional labels can be obtained either by tagging the samples manually or by taking the top-N most confident predictions from the classifiers.

The results of this paper can facilitate the development of the question-asking mechanisms of conversational chatbots.
One can employ conditional training (See et al., 2019) to train an end-to-end neural model on a subset of the most effective questioning strategies, as defined by the co-occurrences of the EQT labels and their mappings with speakers' emotions (cf. Figure 3). To achieve even greater interpretability and controllability, researchers can devise architectures that dynamically model the selection of an appropriate questioning strategy before generating a question. The strategy can be selected based on the conversational history and the speaker's emotion and then passed into the question generation module. The main purpose of such modeling approaches is to lead an engaging empathetic conversation by raising meaningful questions that deliver the desired effect on the user's emotional state. Moreover, EQT along with the QBERT models can be used to label questions originating from other corpora or chat logs and to evaluate their effectiveness for regulating the speaker's emotions, as described above.

# 9 Conclusion

In this paper we introduced EQT, an Empathetic Question Taxonomy depicting the acts and intents of questions in social dialogs. We used crowd-sourcing and automatic methods to tag all listeners' questions from the ED dataset with the EQT labels, which validated their interpretability and produced useful annotations for future research. Further analysis of the dataset with visualization techniques shed light on the various question-asking strategies employed by listeners in response to speakers' emotionally charged inputs. We identified several question-asking behaviors useful for favorable emotional regulation. We expect that our findings will enable the development of more controllable and effective question-generation models.

# 10 Ethical Considerations

In this work, we used the Mturk platform to collect annotations for the dataset.
Crowd workers on Mturk are known to be underpaid by Western standards, earning a median hourly wage of only ~\$2/h (Hara et al., 2018). At the same time, monetary remuneration is not the only factor defining people's motivation to work on such crowdsourcing platforms (Kaufmann et al., 2011). For example, workers might also engage with HITs to learn new or train existing skills, pass free time, or meet new people. Taking these factors into account, we designed our annotation experiments so that workers received ~\$6/h on average, achieving a reasonable trade-off between the number of HITs we could launch with the available budget and the offered payment. While slightly lower than the US minimum wage (\$7.25), this was deemed fair compensation given that it is three times the reported median wage and that workers may have reasons to complete the tasks other than purely monetary reward. Nevertheless, we encourage future work of a similar nature to offer higher compensation to workers where possible.

# References

Peter Bregman. 2020. Validation. In M. Goldsmith and S. Osman, editors, Leadership in a Time of Crisis: The Way Forward in a Changed World, 100 Coaches. RosettaBooks.
Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N Chang, Sungbok Lee, and Shrikanth S Narayanan. 2008. IEMOCAP: Interactive emotional dyadic motion capture database. Language Resources and Evaluation, 42(4):335-359.
Mark G Core and James Allen. 1997. Coding dialogs with the DAMSL annotation scheme. In AAAI Fall Symposium on Communicative Action in Humans and Machines, volume 56, pages 28-35. Boston, MA.
Ulle Endriss and Raquel Fernández. 2013. Collective annotation of linguistic resources: Basic principles and a formal model. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 539-549, Sofia, Bulgaria.
Association for Computational Linguistics.
N.J. Enfield, Tanya Stivers, and Stephen C. Levinson. 2010. Question-response sequences in conversation across ten languages: An introduction. Journal of Pragmatics, 42(10):2615-2619.
Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378.
Alice F Freed. 1994. The form and function of questions in informal dyadic conversation. Journal of Pragmatics, 21(6):621-644.
Arthur C Graesser, Cathy L McMahon, and Brenda K Johnson. 1994. Question asking and answering. In Morton Ann Gernsbacher, editor, Handbook of Psycholinguistics. Academic Press.
James J Gross. 2013. Handbook of Emotion Regulation. Guilford Publications.
Kotaro Hara, Abigail Adams, Kristy Milland, Saiph Savage, Chris Callison-Burch, and Jeffrey P. Bigham. 2018. A data-driven analysis of workers' earnings on Amazon Mechanical Turk, pages 1-14. Association for Computing Machinery, New York, NY, USA.
Chao-Chun Hsu, Sheng-Yeh Chen, Chuan-Chun Kuo, Ting-Hao Huang, and Lun-Wei Ku. 2018. EmotionLines: An emotion corpus of multi-party conversations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Karen Huang, Michael Yeomans, Alison Wood Brooks, Julia Minson, and Francesca Gino. 2017. It doesn't hurt to ask: Question-asking increases liking. Journal of Personality and Social Psychology, 113(3):430.

Dan Jurafsky, Elizabeth Shriberg, and Debra Biasca. 1997. Switchboard SWBD-DAMSL shallow-discourse-function annotation coders manual. Institute of Cognitive Science Technical Report.
Nicolas Kaufmann, Thimo Schulze, and Daniel Veit. 2011. More than fun and money: Worker motivation in crowdsourcing - a study on Mechanical Turk.
Ritesh Kumar. 2014. Developing politeness annotated corpus of Hindi blogs.
In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 1275-1280, Reykjavik, Iceland. European Language Resources Association (ELRA).
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
P. McEvoy and R. Plant. 2014. Dementia care: Using empathic curiosity to establish the common ground that is necessary for meaningful communication. Journal of Psychiatric and Mental Health Nursing, 21(6):477-482.
César Montenegro, Asier López Zorrilla, Javier Mikel Olaso, Roberto Santana, Raquel Justo, Jose A. Lozano, and María Inés Torres. 2019. A dialogue-act taxonomy for a virtual coach designed to improve the life of elderly. Multimodal Technologies and Interaction, 3(3).
Stefanie Nowak and Stefan Rüger. 2010. How reliable are annotations via crowdsourcing: A study about inter-annotator agreement for multi-label image annotation. In Proceedings of the International Conference on Multimedia Information Retrieval, MIR '10, pages 557-566, New York, NY, USA. Association for Computing Machinery.
Amber Paukert, Brian Stagner, and Kerry Hope. 2004. The assessment of active listening skills in helpline volunteers. Stress, Trauma, and Crisis, 7(1):61-76.
Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, and Rada Mihalcea. 2019. MELD: A multimodal multi-party dataset for emotion recognition in conversations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 527-536, Florence, Italy. Association for Computational Linguistics.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas.
Association for Computational Linguistics.
Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic open-domain conversation models: A new benchmark and dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5370-5381, Florence, Italy. Association for Computational Linguistics.
Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249-266.
Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.
Jeffrey D. Robinson and John Heritage. 2006. Physicians' opening questions and patients' satisfaction. Patient Education and Counseling, 60(3):279-285. EACH Conference 2004.
Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. 2019. What makes a good conversation? How controllable attributes affect human judgments. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1702-1723, Minneapolis, Minnesota. Association for Computational Linguistics.
Swapna Somasundaran and Martin Chodorow. 2014. Automated measures of specific vocabulary knowledge from constructed responses ('use these words to write a sentence based on this picture'). In Proceedings of the Ninth Workshop on Innovative Use of NLP for Building Educational Applications, pages 1-11, Baltimore, Maryland. Association for Computational Linguistics.
Tanya Stivers and N.J. Enfield. 2010. A coding scheme for question-response sequences in conversation.
Journal of Pragmatics, 42(10):2620-2626.
Weichao Wang, Shi Feng, Daling Wang, and Yifei Zhang. 2019. Answer-guided and semantic coherent question generation in open-domain conversation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5066-5076, Hong Kong, China. Association for Computational Linguistics.
Yansen Wang, Chenyi Liu, Minlie Huang, and Liqiang Nie. 2018. Learning to ask questions in open-domain conversational systems with typed decoders. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2193-2203, Melbourne, Australia. Association for Computational Linguistics.

Anuradha Welivita and Pearl Pu. 2020. A taxonomy of empathetic response intents in human social conversations. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4886-4899, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Anuradha Welivita, Yubo Xie, and Pearl Pu. 2021. A large-scale dataset for empathetic response generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1251-1264, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Ziang Xiao, Michelle X. Zhou, Wenxi Chen, Huahai Yang, and Changyan Chi. 2020. If I hear you correctly: Building and evaluating interview chatbots with active listening skills. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, pages 1-14, New York, NY, USA. Association for Computing Machinery.
Dian Yu and Zhou Yu. 2021. MIDAS: A dialog act annotation scheme for open-domain human-machine spoken conversations.
In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1103-1120, Online. Association for Computational Linguistics.
Philine Zeinert, Nanna Inie, and Leon Derczynski. 2021. Annotating online misogyny. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3181-3197, Online. Association for Computational Linguistics.
Justine Zhang, Jonathan Chang, Cristian Danescu-Niculescu-Mizil, Lucas Dixon, Yiqing Hua, Dario Taraborelli, and Nithum Thain. 2018. Conversations gone awry: Detecting early signs of conversational failure. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1350-1361, Melbourne, Australia. Association for Computational Linguistics.

# A Examples from Empathetic Question Taxonomy

Tables 4 (acts) and 5 (intents) present the two EQT branches with examples for each label. The examples are selected from the initial manually annotated subset. For each label we include its frequency in the three corresponding sets: manually labeled, Mturk-labeled, and overall (manually, Mturk-, and automatically labeled combined). The frequencies for each label are approximately the same across the three sets, which validates that our annotation methods produced credible results. Examples of automatically assigned labels are given in Appendix D.

| Question Act | Definition and Example |
| --- | --- |
| Request information (38.7%, 52.5%, 51.4%) | Ask for new factual information.<br>- when i left my family to study in another city i got upset.<br>- I'm sorry to hear that. What are you studying? |
| Ask about consequence (21.0%, 19.2%, 17.9%) | Ask about the result of the action or situation described by the speaker.<br>- Our home was broken into<br>- Oh no! Did they steal a lot? |
| Ask about antecedent (17.1%, 10.5%, 11.3%) | Ask about the reason or cause of the event or state described by the speaker.<br>- Hi, I had a great vacation but something went wrong<br>- Oh no, I'm sorry to hear that. What happened? |
| Suggest a solution (8.7%, 5.7%, 8.0%) | Provide a specific solution to a problem in a form of a question.<br>- I lost my favorite jacket and I can't find it<br>- did you try redoing your steps of the last day? |
| Ask for confirmation (5.8%, 5.6%, 5.2%) | Ask a question to confirm or verify the listener's understanding about something that has been described by the speaker.<br>- I applied for a job last week.<br>- Oh did you? |
| Suggest a reason (5.2%, 3.7%, 4.1%) | Suggest a specific reason or cause of the event or state described by the speaker in a form of a question.<br>- i felt scared walking home alone the other day.<br>- That's terrible! Were you in a bad part of town or anything? |
| Positive rhetoric (1.0%, 1.3%, 1.1%) | Ask a question in order to make an encouraging statement or demonstrate agreement with the speaker about a positive point without expecting an answer.<br>- I couldn't pay for all my groceries and someone came up from the line behind and paid for the rest. I was so touched!<br>- Wow, how amazing is that!!? |
| Negative rhetoric (1.3%, 1.1%, 0.8%) | Ask a question in order to express a critical opinion or validate a speaker's negative point without expecting an answer.<br>- I swear my friend is always using me<br>- that sucks is she really your friend then? |
| Irony (1.3%, 0.3%, 0.2%) | Ask a question using words that suggest the opposite of what the listener intends, usually to be humorous or pass judgement.<br>- I ate 10 Big Macs the other day.<br>- oh my lord! only ten? |

Table 4: Classification of question acts with corresponding definitions and examples. Under each label its frequency is given for the three corresponding sets: manually labeled, Mturk labeled, and overall.
| Question Intent | Definition and Example |
| --- | --- |
| Express interest (57.1%, 55.2%, 60.2%) | Express the willingness to learn or hear more about the subject brought up by the speaker; demonstrate curiosity.<br>- I just applied for a higher paying position within my company.<br>- That's cool, what is the position? |
| Express concern (20.3%, 20.3%, 23.4%) | Express anxiety or worry about the subject brought up by the speaker.<br>- I cry every time I think of my sister.<br>- Why?? what happened to her!!? |
| Sympathize (3.9%, 7.3%, 5.1%) | Express feelings of pity and sorrow for the speaker's misfortune.<br>- my girlfriend cheated on me<br>- Oh no! How did you find out? |
| Offer relief (4.8%, 3.2%, 4.5%) | Reassure the speaker who is anxious or distressed.<br>- They stopped making donuts at my favorite bakery.<br>- Oh no! Can you get donuts somewhere else? |
| Amplify excitement (1.9%, 4.7%, 2.3%) | Reinforce the speaker's feeling of excitement.<br>- lol. Going on vacation to Florida in a couple weeks!<br>- Wow that's awesome! To the beach? |
| Support (2.6%, 1.8%, 1.0%) | Offer approval, comfort or encouragement to the speaker, demonstrate interest in and concern for the speaker's success.<br>- I studied so hard for my test.<br>- I hope you did well? |
| Amplify joy (1.6%, 1.7%, 0.9%) | Reinforce the speaker's glad feeling such as pleasure, enjoyment, or happiness.<br>- I just received my certification to teach english as a second language!<br>- Congrats!!! Do you already have a job lined up? |
| Amplify pride (2.6%, 1.7%, 0.7%) | Reinforce the speaker's feeling of pride.<br>- My nephew caught a huge bass this weekend!<br>- That is cool, did you teach him how to fish? |
| De-escalate (1.6%, 1.6%, 0.7%) | Calm down the speaker who is agitated, angry or temporarily out of control.<br>- My neighbor threw their nasty trash all over their yard and won't clean it up! It's sooo gross!<br>- Oh, that's disgusting! Have you tried to talk to them about it? |
| Moralize speaker (1%, 0.6%, 0.6%) | Judge the speaker.<br>- I broke my TV remote and i blamed it on my kid<br>- That's kinda terrible. Did you apologize to him? |
| Pass judgement (1.6%, 1.2%, 0.5%) | Express an opinion (especially critical) about the subject brought up by the speaker.<br>- I hope the government can give some free course about the benefit of staying calm and healthy<br>- Government? No way, it is interested in quite the opposite my friend. |
| Motivate (1%, 0.5%, 0.2%) | Encourage the speaker to move onward.<br>- This weekend is so boring so far<br>- yeah? nothing interesting whatsoever? why not make it exciting yourself? |
Table 5: Classification of question intents with corresponding definitions and examples. Under each label its frequency is given for the three corresponding sets: manually labeled, Mturk labeled, and overall.

# B Details about Mturk Annotation Task

# B.1 Dialog Pre-processing

Throughout our study, we only used those ED dialogs that contained questions in at least one listener turn. Since one dialog could contain several listener questions, for all downstream annotation tasks each such dialog was split into several separate dialogs, one per listener question. The resulting sub-dialogs were truncated so that each ended with its corresponding question; this allows every question in a dialog to be labeled without losing the preceding conversational context. Figure 4 shows an example of a dialog from the original ED dataset and the resulting dialogs after the split.

In the Mturk interface, if a given listener turn contained multiple questions, we showed the resulting sub-dialogs on the same page, one after another, for contextual consistency. But if the original dialog contained listener questions in several turns, we showed the resulting dialogs on two separate pages. Using the example from Figure 4, we would show the first resulting dialog on one page and the last two resulting dialogs together on another page.

# B.2 Task User Interface

The user interface for the annotation task is illustrated in Figure 5.

# Dialog 2/20

$\rightarrow$ I returned home from school one afternoon to news that my dog was run over by a car.

I'm so sorry! You must've been so sad. Did you find out who did it?

Select the correct labels for the question **in bold**, taking into account the context of the whole dialog. Select only one label for each of the two label sets. Be as specific as possible.
![](images/1c8b12c565127f0b268cd6e189d86f779aa68345cd41ba469bd7f0b0087764f5.jpg)
Figure 5: The user interface of the Mturk crowdsourcing annotation task.

Figure 4: Original and resulting dialogs after preprocessing.

# Original dialog
- Speaker: You are never going to believe what I did!
- Listener: What did you do?
- Speaker: Well, I normally do not feel comfortable lending things to my friends, but recently I mustered up the trust to loan my friend my vehicle.
- Listener: Ouch... Is it just for a day? Is your friend a safe driver?

# Resulting dialogs
- Speaker: You are never going to believe what I did!
- Listener: What did you do?

- Speaker: You are never going to believe what I did!
- Listener: What did you do?
- Speaker: Well, I normally do not feel comfortable lending things to my friends, but recently I mustered up the trust to loan my friend my vehicle.
- Listener: Ouch... Is it just for a day?

- Speaker: You are never going to believe what I did!
- Listener: What did you do?
- Speaker: Well, I normally do not feel comfortable lending things to my friends, but recently I mustered up the trust to loan my friend my vehicle.
- Listener: Ouch... Is it just for a day? Is your friend a safe driver?
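The splitting procedure described in Appendix B.1 can be sketched as follows. This is a minimal illustration; the turn representation and function name are our own, and question boundaries are approximated by question marks:

```python
def split_on_listener_questions(turns):
    """Split a dialog into sub-dialogs, one per listener question.

    `turns` is a list of (role, text) pairs with roles "Speaker"/"Listener".
    Each sub-dialog is truncated right after the question it corresponds to,
    so every question keeps its full preceding conversational context.
    """
    sub_dialogs = []
    for i, (role, text) in enumerate(turns):
        if role != "Listener":
            continue
        for j, ch in enumerate(text):
            if ch == "?":  # one sub-dialog per question mark in the turn
                sub_dialogs.append(turns[:i] + [(role, text[: j + 1])])
    return sub_dialogs

dialog = [
    ("Speaker", "You are never going to believe what I did!"),
    ("Listener", "What did you do?"),
    ("Speaker", "Well, I normally do not feel comfortable lending things to "
                "my friends, but recently I mustered up the trust to loan my "
                "friend my vehicle."),
    ("Listener", "Ouch... Is it just for a day? Is your friend a safe driver?"),
]
subs = split_on_listener_questions(dialog)
# Three sub-dialogs, matching Figure 4: the first ends at "What did you do?",
# the second at "Ouch... Is it just for a day?", the third keeps the full turn.
```

On the example dialog above this yields exactly the three resulting dialogs shown in Figure 4.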
# C Details about Data Augmentation with Lexical Similarity

# C.1 Setup and Results

We used a half-decaying weighting scheme to encode questions together with their preceding context for the data augmentation process. The highest weight was always assigned to the final question to give it preference. For example, if the dialog context consisted of three turns with embeddings $e_1$, $e_2$, $e_3$ and the fourth turn was a listener's question with embedding $e_4^*$, the final dialog embedding was $(8/15)e_4^* + (4/15)e_3 + (2/15)e_2 + (1/15)e_1$.

Figures 6 and 7 show the results of cross-validation runs for question acts and question intents with the Nearest-Neighbor label propagation approach. For each label set, we experimented with two similarity strategies: taking the same label as the top-1 most similar dialog according to cosine similarity (Max, sub-figures 6a and 7a) and taking the majority-vote label of the top-3 most similar dialogs (Vote, sub-figures 6b and 7b). For each cross-validation run we conducted a grid search over cosine-similarity thresholds between 0.7 and 1.

We also tried concatenating one-hot-encoded emotional context vectors with the dialog embeddings before running the cross-validation, but this did not improve accuracy, and the resulting plots were almost identical to Figures 6 and 7, so we did not proceed with this approach.

# C.2 Examples of Annotated Questions

Table 6 presents several examples of propagated labels obtained with the outlined data augmentation process, to give a better idea of the accuracy of this approach.
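The half-decaying weighting and the top-3 Vote strategy described in Appendix C.1 can be sketched as below. This is a simplified illustration with plain list-of-floats embeddings; all function names and the example threshold are our own:

```python
import math
from collections import Counter

def half_decay_weights(n):
    """Weight n turns so each earlier turn gets half the next one's weight.

    For n = 4 this yields [1/15, 2/15, 4/15, 8/15], with the final
    (question) turn receiving the highest weight.
    """
    raw = [2 ** i for i in range(n)]
    total = sum(raw)  # equals 2**n - 1
    return [r / total for r in raw]

def dialog_embedding(turn_embeddings):
    """Weighted sum of per-turn embedding vectors (last turn = the question)."""
    weights = half_decay_weights(len(turn_embeddings))
    dim = len(turn_embeddings[0])
    return [sum(w * e[i] for w, e in zip(weights, turn_embeddings))
            for i in range(dim)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def vote_label(query, labeled, threshold=0.8):
    """Top-3 majority vote ("Vote" strategy); abstain below the threshold.

    `labeled` is a list of (embedding, label) pairs from the annotated pool.
    """
    top3 = sorted(labeled, key=lambda x: cosine(query, x[0]), reverse=True)[:3]
    kept = [lab for emb, lab in top3 if cosine(query, emb) >= threshold]
    if not kept:
        return None  # no sufficiently similar neighbor: leave unlabeled
    return Counter(kept).most_common(1)[0][0]
```

The Max strategy is the special case that keeps only the single most similar dialog; the threshold would be tuned by the grid search over 0.7 to 1 mentioned above.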
![](images/98927154b2b5b5770efefe0ea78b3dc5886a923341db3686992da8367d1e9af3.jpg)

![](images/17ebb3b40c650a6d4c6092eb0ef0774d2060ecf0a02382e8ad33177d20056bb2.jpg)
(a)
(b)

![](images/f73d6bb1b52405e0a52e2224b6b104c034adff83eb1be664b2cc67d19f043062.jpg)
Figure 6: Cross-validation results for question acts for the two considered strategies: Max in sub-figure 6a and Vote in sub-figure 6b.

![](images/4101dc219518e66e5cd4658fd52d93c621d9497b5abf3c47195ed6d309bebc1c.jpg)
(a)
(b)
Figure 7: Cross-validation results for question intents for the two considered strategies: Max in sub-figure 7a and Vote in sub-figure 7b.

Head of Table 6
| Annotated question | Top-1 NN | Top-2 NN | Top-3 NN |
| --- | --- | --- | --- |
| - I get a good feeling when I think back to a birthday I had when I was a kid and all of my friends and I got to see a really funny movie at the mall.<br>- Aww! What movie did you go to see? (Request information, Express interest) | - I went to the movies by myself yesterday. I have no friends.<br>- what movie did you see? (0.87: Request information, Express interest) | - I was happy when we were going to a new movie last weekend. I had waited all summer for it.<br>- What movie was it? (0.87: Request information, Express interest) | - I'm going to see a film tonight at the cinema.<br>- oh really? what movie? (0.86: Request information, Express interest) |
| - It really sucked, since a month ago I was dating this girl and she dumped me so early on.<br>- I'm so sorry. Are you okay? (Request information, Express concern) | - I hurt me when my parents got divorced. I never thought that would happen<br>- I'm so sorry, are you okay? (0.92: Request information, Express concern) | - I am really feeling bad<br>- I'm so sorry! Is everything ok? (0.90: Request information, Express concern) | - I just found out that my girlfriend has been cheating on me. God this is the worst week of my life.<br>- I feel really sorry for you. Will you be okay? (0.84: Request information, Express concern) |
| - One time my mom bought an ice cream from Mcdonalds!<br>- Really? (Ask for confirmation, Express interest) | - I saw someone putting mayo on their ice cream.<br>- Really? (0.92: Ask for confirmation, Express interest) | - I accidentally ate someone else's cake at work<br>- Really? (0.91: Ask for confirmation, Express interest) | - I just ate 5 donuts by myself<br>- Really? (0.86: Negative rhetoric, Express interest) |
| - i was scared walking home last night<br>- Why was you scared was it too dark? (Suggest a reason, Express concern) | - I used to be so scared to go to sleep as a kid.<br>- How come? Were you scared of the dark? (0.92: Suggest a reason, Express concern) | - I stay away from the dark.<br>- Why do you do that? Are you scared of the dark? (0.86: Suggest a reason, Sympathize) | - i was scared walking home the other day<br>- Why were you scared? (0.83: Ask about antecedent, Express concern) |
| - I one time lost my trunks in the pool! People saw me in a way I didn't want!<br>- Oh no! That must have been super embarrassing! How did you react to that? (Ask about consequence, Sympathize) | - a girl i like at school told me today she doesn't like me in front of everyone<br>- Oh no! That must have been really embarrassing! How did you respond? (0.85: Ask about consequence, Sympathize) | - I fell down on stage while dancing, I felt so bad.<br>- oh dear, that must've been embarrassing, are you okay though? (0.84: Ask about consequence, Sympathize) | - One once at a swimming competition, I had a wardrobe malfunction in front of a lot of people<br>- Oh my goodness, that must have been humiliated. What did you do? (0.83: Ask about consequence, Sympathize) |
| - My neighbor died in a car crash.<br>- Oh my. I'm so sorry to hear that. What happened? (Ask about antecedent, Sympathize) | - My nephew died yesterday.<br>- I am so sorry to hear that. What happened? (0.89: Ask about antecedent, Sympathize) | - My pet ferret Fuzzy died the other day. I was so heart-broken.<br>- I'm so sorry to hear that. What happened? (0.88: Request information, Sympathize) | - When my pet died I felt liek I lost my family member, My best friend.<br>- Im sorry to hear that. What happened? (0.88: Ask about antecedent, Sympathize) |
| - My brother just turned 16 and he's about to get his first car! I'm so excited for him.<br>- Whoa that's exciting! What kind of car we looking at? (Request information, Amplify excitement) | - I can't wait! We just bought a car today! Going to pick it up soon!<br>- Oh nice! That is exciting! What kind of car did you get? (0.89: Request information, Amplify excitement) | - I just bought a brand new car<br>- How exciting! What kind of car is it? (0.86: Request information, Amplify excitement) | - I was surprised when my dad got me my first car. I was not expecting it<br>- That must have been exciting for you. What car was it? (0.85: Request information, Amplify excitement) |
Continuation of Table 6
| Annotated question | Top-1 NN | Top-2 NN | Top-3 NN |
| --- | --- | --- | --- |
| - I spent hours reviewing notes and course content to prepare myself for a few trials that a company wanted me to go through.<br>- Good job! Do you feel pretty prepared? (Request information, Support) | - I have an important job interview this week<br>- Have you prepared well for it? (0.85: Request information, Express interest) | - I have been studying for my final math exam all week long.<br>- I hope you do well on it! Do you feel prepared? (0.83: Ask for confirmation, Support) | - I've got a big interview on Friday. It for a job I really want.<br>- I hope it goes well! are you prepared? (0.83: Request information, Support) |
| - Friends threw me a surprise party yesterday.<br>- thats awesome, and happy birthday !!!<br>- Thanks! I got so many cool gifts! I was so happy.<br>- what kind of gifts did you get? (Ask about consequence, Amplify excitement) | - I was happy to find that at work my coworker prepared a birthday party for me. I was not expecting it.<br>- Wow. I bet that was a nice surprise. Did you get a lot of presents? (0.84: Ask about consequence, Amplify excitement) | - My friends threw me a surprise birthday party last year!<br>- That is very nice<br>- It was! I was shocked and I felt very loved.<br>- Did they brought any special gift? (0.84: Request information, Express interest) | - My friends planned a surprise party for my birthday.<br>- Exciting! Did you get any neat gifts? (0.84: Ask about consequence, Amplify excitement) |
| - I'm living my best life. I could not be any happier.<br>- good to know. and what makes your life so good, huh? (Request information, Amplify joy) | - I am so happy with my life right now.<br>- You sound very content. What makes you happy? (0.86: Request information, Express interest) | - I feel good. Everything finally seems to be working out.<br>- That's great! What are some things you're enjoying about life right now? (0.86: Request information, Amplify joy) | - I've been happy with the way things have been going in my life lately.<br>- That's awesome, glad to hear, what are you most happy with? (0.86: Ask about antecedent, Amplify joy) |
| - I was happy when my brother finished school. I was proud of him<br>- That is awesome. Was it high school or college? (Request information, Amplify pride) | - It felt great to see my son graduate. Like I succeeded as a parent.<br>- That's awesome. high school? (0.88: Request information, Amplify pride) | - I use to be the number one tennis player in the state.<br>- That is an awesome achievement! Was it for high school or college? (0.86: Request information, Amplify pride) | - I'm a PhD student and I'm taking a really hard class. I have to do well so I was really happy when I got an A on a test!<br>- thats awesome! what college you go to? (0.84: Request information, Express interest) |
| - I cheated at cards.<br>- Did you feel bad about it? (Ask about consequence, Moralize speaker) | - I cut someone off in traffic today<br>- Do you feel bad about it? (0.85: Ask about consequence, Moralize speaker) | - Yesterday, i had a night out with my friends, but i lied to partner that i will be staying late for work. I did not want to see her nagging<br>- That's really not good. Did you feel bad about it? (0.85: Negative rhetoric, Moralize speaker) | - I was really hungry today and ate my roomates' leftovers.<br>- Do you feel bad about it? (0.85: Ask about consequence, Moralize speaker) |
| - I stole money from my friend.<br>- oh.. why did you do that? (Ask about antecedent, Pass judgement) | - I stole money from my son's piggy bank.<br>- Why did you do that? (0.94: Ask about antecedent, Pass judgement) | - I stole money from someone at a party years ago and I still feel bad about it.<br>- Why did you do that? (0.91: Ask about antecedent, Pass judgement) | - I told my best friends secret to another one of our friends.<br>- Why did you do it? (0.89: Ask about antecedent, Pass judgement) |
Table 6: Examples of propagated labels obtained using majority vote from the top-3 Nearest-Neighbor (NN) dialogs according to cosine similarity. The first column shows the newly annotated question; the other three show the top-3 NN dialogs with their question labels and similarity values. Spelling and punctuation of the original source have been preserved.

![](images/6bc655e40bb95e92887edb5e08af87b3c254ad3f5a636a64375f765aec92d7c5.jpg)

![](images/4e1feedb3cb6993368e508ee77bbd5e911f081563340847298327a70e9ff5291.jpg)
(a)
(b)
Figure 8: Train and validation losses over the course of approximately 15 training epochs for question acts (sub-figure 8a) and question intents (sub-figure 8b).

# D Details about training automatic classifiers

For our automatic classifiers, we used GELU as the hidden activation function and applied a dropout of 0.1 to all layers and attention weights. For training, we used the Adam optimizer with $\beta_{1} = 0.9$, $\beta_{2} = 0.98$, $\epsilon = 1 \times 10^{-6}$, and a peak learning rate of $2 \times 10^{-5}$. The maximum number of input tokens was set to 100, and we used a batch size of 50. The evolution of train and validation losses over the 15 training epochs is shown in Figure 8. We used the Google Colab environment for training.

Classifiers trained only on the human-annotated subset performed several percentage points worse than those trained on the augmented data (see Section 6.2), reaching $75\%$ accuracy for acts and $70\%$ for intents on the same (human-annotated) test set. Therefore, in this paper, we focus on the results obtained with the augmented data.

Figure 9 shows several examples of automatically labeled questions in the ED dialogs. We specify both the predicted act and intent labels for each listener question, as well as the emotions expressed by speakers in each turn, to observe how they are influenced by the listeners' questions. Here we combine the pre-processed dialogs (cf.
Section B.1) back into their original format, which explains why some labeled questions appear in the middle of the dialogs.

# E Extended Analysis of Questioning Strategies

# E.1 Mapping of Emotions and Empathetic Intentions

Table 7 presents the mapping of 32 emotions (Rashkin et al., 2019) and 9 empathetic intents (Welivita and Pu, 2020) to the three coarser emotion categories of different valence, which we used to produce the visualizations for the analysis.

# E.2 Additional Plots for Human-Labeled Subset

Figures 10 and 11 break the flow rates between speakers' emotions and listeners' questioning strategies (Figure 3) down into separate mappings for acts and for intents, respectively.

# E.3 Analysis of Questioning Strategies on the Whole Dataset

For completeness, we include the same analytical visualizations as presented in Section 7 for the whole ED dataset (Figures 12, 13, 14, and 15). These figures show a higher presence of the more "general" categories (Request information, Express interest), presumably because the QBERT classifiers are slightly biased towards these classes due to the class imbalance in the training data. This caveat aside, the other major patterns revealed by the analysis of the human-annotated subset (cf. Section 7) are preserved in the figures produced for the whole ED dataset (including automatically annotated questions).

![](images/f18ef1b26efcdf88d7302b0cc0bd895888842078b26cea6d9d0ad8d93e9f6cdf.jpg)
Figure 10: Mappings between emotions disclosed by the speakers and question acts used by listeners in the first three turns of the ED dialogs (human-labeled ED subset).

![](images/e859dcb2a6dd0bcc0c3edba0c6e950de3273fc3a17e53c1215bafb02f3a34c13.jpg)
Figure 11: Mappings between emotions disclosed by the speakers and question intents used by listeners in the first three turns of the ED dialogs (human-labeled ED subset).
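For concreteness, the training setup reported in Appendix D can be collected into a small configuration, together with the GELU hidden activation it mentions (exact erf form). The dictionary keys and function name below are our own; this is a sketch, not the authors' code:

```python
import math

# Hyperparameters reported in Appendix D (key names are our own):
QBERT_TRAINING_CONFIG = {
    "optimizer": "Adam",
    "beta1": 0.9,
    "beta2": 0.98,
    "epsilon": 1e-6,
    "peak_learning_rate": 2e-5,
    "dropout": 0.1,          # applied to all layers and attention weights
    "max_input_tokens": 100,
    "batch_size": 50,
    "epochs": 15,
}

def gelu(x: float) -> float:
    """Exact GELU (Gaussian Error Linear Unit) via the error function.

    GELU(x) = x * Phi(x), where Phi is the standard normal CDF.
    """
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))
```

GELU behaves like the identity for large positive inputs and decays smoothly to zero for negative ones, which is why it is a common hidden activation in BERT-style classifiers.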
- I am proud of my girlfriend for getting a full time job, I am sure she will do great! (Positive)
- That's awesome! bet she will too! when does she start? (Request information, Express interest)
- She starts in exactly a week (Positive)
- woo hoo so you guys going out to celebrate? (Ask about consequence, Amplify excitement)

- I am so happy to be having a boy (Positive)
- That's great! Congratulations! Is this your first child? (Request information, Amplify joy)
- Thanks. Yes it is. I already got a crib and baby bath. (Neutral)

- My daughter scored the winning goal at her last soccer game. I was so happy that all her hard work paid off! (Positive)
- That's great. Does she practice a lot? (Request information*, Amplify pride)
- Yes, she practices almost every day after school with her friends and also with her team. She says she will be a professional player one day! (Positive)

- Man..... my cat died :( I feel horrible. (Negative)
- That's awful, how did your cat die? (Ask about antecedent, Sympathize)
- Old age. she had a good life but it's still tearing me up. (Neutral)

- I took a test last week that I had studied very hard for. I know I got most of the answers right, but I got a failing grade (Negative)
- Must've been a really difficult exam. Will there be other exams to balance it out? (Ask about consequence*, Offer relief*)
- The person sitting next to me copied my answers, so the teacher failed both of us. (Negative)
- I guess the teacher wasn't going to listen to you? (Suggest a reason, De-escalate) That sucks.

- I ordered a gift for a friend and it says it was delivered but I never received it. Now the company says it takes 14 days for a refund. (Negative)
- Don't you hate how "customer service" has no service anymore? (Negative rhetoric, Sympathize) Did you get the refund at least? (Suggest a solution, Offer relief)
- Still waiting.... That's the most upsetting.
Because they waste no time taking your money (Negative)

- I didn't realize that stealing was bad until I realized how it made me feel afterwards (Negative)
- So you probably felt pretty guilty huh. Did you return what you stole? (Ask about consequence, Moralize speaker)
- No, I was scared to get charged, but I stopped after that (Neutral)

Figure 9: Examples of questions labeled automatically with QBERT. Question acts and intents marked with a star* were annotated by Mturk workers.

| Category | Mapped emotions and intents |
| --- | --- |
| Positive | trusting, surprised, caring, content, joyful, excited, anticipating, hopeful, prepared, nostalgic, impressed, faithful, confident, proud, grateful |
| Neutral | neutral, encouraging, agreeing, suggesting, acknowledging, sympathizing, wishing, consoling, questioning |
| Negative | devastated, afraid, apprehensive, terrified, disappointed, disgusted, lonely, anxious, sad, embarrassed, annoyed, furious, ashamed, angry, sentimental, guilty, jealous |

Table 7: Mapping of the 32 emotions and 9 empathetic intents describing the EmpatheticDialogues dataset to three emotion categories of different valence.

![](images/629b511b779979f2b9a38c41c85f5c050938fbfd7e34ae3cd6bbe51bae9994c8.jpg)
Figure 12: Joint distribution of question intents and acts for 20,201 labeled questions (whole ED dataset). Blue circles are proportional to the frequency of each pair's co-occurrence.

![](images/5d9e3ce4c298b770e1e962eeef1ef268a518b9c154a3eb1b2ca55936dee08444.jpg)
Figure 13: a) Mappings between emotions disclosed by the speakers and listeners' questioning strategies in the first three turns of the ED dialogs (whole ED dataset). b) Frequency distribution of question acts across dialog turns (whole ED dataset). c) Frequency distribution of question intents across dialog turns.
Two prevalent intents were excluded for visual clarity; their percentage rates computed over all questions ($n = 14921$ and $n = 5043$) are: Express interest: $59.7\% \rightarrow 61.1\%$, Express concern: $24.9\% \rightarrow 19.3\%$.

![](images/4899321fc2792f76b9891857ec6f29481d4fc1a3f7b5183fa3d8bbf93dd006f1.jpg)

![](images/acbbf2d81658f5035c7ac8ebf1da3ad01822bf6fc7b3f1f609bee1b3f228092a.jpg)

![](images/27830f5ba3694dee9e1fe6942944b61cb1a7bf31936d08622437563b7b6b5c6e.jpg)
Figure 14: Mappings between emotions disclosed by the speakers and question acts used by listeners in the first three turns of the ED dialogs (whole ED dataset).

![](images/acb979c28cb0383694a9a9d25bad3ec1bc37939616cdcf4fbc1ddf54844a6bb7.jpg)
Figure 15: Mappings between emotions disclosed by the speakers and question intents used by listeners in the first three turns of the ED dialogs (whole ED dataset).

# A Token-level Reference-free Hallucination Detection Benchmark for Free-form Text Generation

Tianyu Liu$^{1,2*}$ Yizhe Zhang$^{3}$ Chris Brockett$^{4}$ Yi Mao$^{4}$ Zhifang Sui$^{1}$ Weizhu Chen$^{4}$ Bill Dolan$^{4}$

$^{1}$ Peking University $^{2}$ Tencent Cloud Xiaowei $^{3}$ Meta AI $^{4}$ Microsoft Corporation
{tianyu0421, szf}@pku.edu.cn, yizhe.zhang@hotmail.com
{chrisbkt, maoyi, wzchen, billdoll}@microsoft.com

# Abstract

Large pretrained generative models like GPT-3 often suffer from hallucinating non-existent or incorrect content, which undermines their potential merits in real applications. Existing work usually attempts to detect these hallucinations based on a corresponding oracle reference at a sentence or document level. However, ground-truth references may not be readily available for many free-form text generation applications, and sentence- or document-level detection may fail to provide the fine-grained signals that would prevent fallacious content in real time. As a first step to addressing these issues, we propose a novel token-level, reference-free hallucination detection task and an associated annotated dataset named HADES (HAllucination DEtection dataSet)$^{1}$. To create this dataset, we first perturb a large number of text segments extracted from English-language Wikipedia, and then verify these with crowd-sourced annotations.
To mitigate label imbalance during annotation, we utilize an iterative model-in-the-loop strategy. We conduct comprehensive data analyses and create multiple baseline models.

# 1 Introduction

Automatic text generation using neural natural language generation (NLG) systems is increasingly fluent and thus seemingly plausible in many real-world applications. Large-scale pretrained models like GPT-3 (Brown et al., 2020) have proven powerful at understanding and performing free-form text generation tasks at a human-quality level with only a few in-context examples, which dramatically reduces the manual labor needed in many text-based applications and services. Despite their great success, however, neural NLG systems using very large pretrained models struggle to generate factually accurate and trustworthy text (Devlin et al., 2019; Radford et al., 2019), and exhibit a propensity to hallucinate non-existent or incorrect content that is unacceptable in most user-oriented applications. This poses a major challenge for deploying production NLG systems with real-time generation, where post-examination is impossible.

Existing work has sought to detect hallucination and quantitatively measure generation consistency against a provided reference. Such reference-based hallucination detection has been proposed for abstractive summarization (Maynez et al., 2020), machine translation (Wang and Sennrich, 2020), data-to-text generation (Rebuffel et al., 2021), and image caption generation (Rohrbach et al., 2018). For many free-form text generation tasks, however, references are not readily available.
For example, in a production NLG system such as a social chatbot using real-time response generation or a document auto-completion system, the generation model often cannot pair its outputs with sufficient reference information, rendering reference-based methods less applicable: $i)$ it may be difficult to even know where to obtain the reference, as obtaining it may be as hard as generating consistent information in the first place; $ii)$ generation may happen in a real-time online setting that demands leveraging only the existing context to create new content.

One common setup qualitatively measures the level of hallucination at the sentence or document level (Dhingra et al., 2019; Scialom et al., 2019). Related tasks such as fake news detection (Zellers et al., 2019) or fact checking (Thorne and Vlachos, 2018) also adopt this strategy. However, sentence- or document-level detection may not always provide high-resolution signals sufficient to pinpoint the hallucinated text, or can only judge whether a generated sentence or a document as a whole is a hallucinated artifact. Consequently, these high-level strategies may be insufficient to avoid hallucinations. As an alternative, at decoding time of an NLG system, we suggest that if the locus of hallucination can be identified at the token level, it may be possible to guide beam search or suppress the probability of certain tokens in real time.

Input: ... She had a large family and lived with her grandparents .... In 1933 she gave birth to her first child .... In July 1926, many of her friends attended her funeral ...

Label1: grandparents $\rightarrow$ Not Hallucination
Label2: funeral $\rightarrow$ Hallucination

To this end, we propose a reference-free, token-level hallucination detection task and introduce an annotated training and benchmark testing dataset that we call HADES (HAllucination DEtection dataSet).
The reference-free property of this task yields greater flexibility in a broad range of generation applications. We expect the token-level property of this task to foster the development of models that can detect fine-grained signals of potential hallucination. In conjunction with consulting context to identify self-contradictory statements and access to commonsense and world knowledge, such fine-grained signals, when detected, should further mitigate real-time hallucination.

Our contributions include: 1) We propose a reference-free, token-level hallucination detection task for free-form text generation. 2) We support this task with a dataset that we call HADES, with $\sim 11\mathrm{k}$ instances extracted from English Wikipedia using an iterative data collection strategy to address data imbalance issues. We also present comprehensive analyses of statistical features to shed light on what is commonly recognized as hallucination in crowd-sourced judgments and on its salient characteristics in free-form text generation. 3) We create multiple baselines, including feature-based models and pretrained models, as a first step towards addressing the proposed task.

# 2 Task Overview

We formulate our hallucination detection task as a binary classification task. As shown in Fig 1, our goal is to assign either a "hallucination"$^{2}$ (abbreviated as "$\mathcal{H}$") or a "not hallucination" (abbreviated as "$\mathcal{N}$") label to the highlighted spans.

Raw Text: ... Failure of the government to maintain control of civil affairs, might have been caused by the allied bombing of German cities, or uprising of the millions of foreign forces working in German factories ... modified the plan with the intention of using it to take control of German cities, disarm and arrest the nazi leadership ...

Perturbed Text: ... Failure of the army$_{1}$ to assume$_{2}$ control of civil order$_{3}$ might have been caused by the allied bombing of german cities, or because$_{4}$ of the millions of jewish$_{5}$ forced laborers employed by$_{6}$ german factories ... modified the plan with the intention of using it to take control of german forces$_{10}$, to directly attack$_{11}$ the ss, and arrest the ss leaders$_{12}$ ...

(A) Contextual Perturbation

![](images/66400fc9cbf4f3c484fde304bafdc8d8f825a7178d088aa4ee281a520b079b91.jpg)

Do you think 'disarm$_{11}$' in SRC is consistent with 'to directly attack$_{11}$' in SUB according to the context?

Not really. I think 'to directly attack$_{11}$' in SUB is inconsistent with 'disarm$_{11}$' in SRC.

![](images/879b2db03a42a663cf338054a2b7f803180381cb356651ea47c71fac4dbb07b4.jpg)

![](images/0cf456e16b1f631a33974029c241120b0067050fdb586697807ffffa2cd1482b.jpg)
Figure 1: Overview of the reference-free token-level hallucination detection task.

Do you think 'working in$_{6}$' in SRC is consistent with 'employed by$_{6}$' in SUB according to the context?

Yes. I think 'employed by$_{6}$' in SUB is consistent with 'working in$_{6}$' in SRC.

![](images/64fcd485bab2cc3407f766bc0596e0ca6a6d131bee27d308eddfc7308852a7e3.jpg)
Figure 2: The data collection process of HADES.

(B) Human Annotation

To simulate real-world NLG applications, we propose two sub-tasks with "offline" and "online" settings. In the offline setting, it is assumed that generation is complete, so the model is able to perceive the bidirectional context. This could be used in the post-generation examination of NLG systems. For online detection, the model can only access the unidirectional preceding context, which simulates on-the-fly generation. Online detection is important in practice as it enables NLG systems to proactively forestall potential hallucinations.

# 3 Dataset Creation

To collect the HADES dataset, we first perturb "raw text" web data into "perturbed text" (Fig 2A) (Sec 3.2).
We then ask human annotators to assess whether the perturbed text spans are hallucinations given the original text (Fig 2B) (Sec 3.3).

# 3.1 Raw Data Collection

Our raw data are sampled from the English WIKI-40B dataset (Guo et al., 2020). WIKI-40B-EN is a cleaned collection of English Wikipedia articles. We randomly sample from the first paragraphs of these articles and filter out short texts of fewer than 5 sentences. We use Wikipedia as our text source since it is stylistically formal and of high quality, and covers diverse topics and domains.

# 3.2 Contextual Perturbation

To acquire machine-generated free-form text, we perturb the raw text$^{3}$ using BERT. In applying this contextual perturbation we maintained two principles: $i$) the fluency and syntactic correctness of the perturbed text should be preserved; $ii$) the perturbed text should be lexically diverse.

We leave the first two sentences in the raw text unchanged to serve as the preceding context, so as to avoid the "early token curse" (Press et al., 2020), where tokens at the beginning are evaluated with limited context. The text perturbation process is split into three pipelined operations, namely MASK, REPLACE and RANK.

- i) In the MASK operation, we mask the tokenized words to be replaced with the special token "[MASK]" in the BERT vocabulary. Starting from the third sentence, we randomly mask word spans by a pre-defined mask ratio $\rho$. By default we mask only one word in each perturbation, except for named entities identified by spaCy. We view the entity boundaries as minimal masking units to avoid collocation errors (e.g. "San Diego" should be masked as a whole). To reduce trivial instances, we do not mask stop words or punctuation identified by NLTK (Bird, 2006).
- ii) In the REPLACE operation, we leverage a pretrained BERT-base model to predict the masked span.
The mask-then-predict training framework of the BERT model contextualizes the replacement with both preceding and subsequent text. For better fluency, we replace the masked tokens from left to right, e.g. a 3-token REPLACE operation proceeds as "[MASK] [MASK] [MASK]" → "[A] [MASK] [MASK]" → "[A] [B] [MASK]" → "[A] [B] [C]". When performing the replacement, we remove the original token from the predicted distribution over the vocabulary at each position of the text span, to avoid duplicating the original text after perturbation. We compared several decoding strategies for token substitution, including greedy, top-k (k=5/10/50) and top-p (p=0.95/0.9/0.8) (Holtzman et al., 2020) sampling methods. For comparison, we sample 30 perturbed texts for each sampling method and count the number of incoherent perturbations. We choose top-k (k=10) sampling for its good trade-off between diversity (via the number of distinct tokens) and coherence (via the number of incoherent perturbations).

- iii) For each perturbed text, we substitute multiple word spans. Although locally coherent, the perturbed text may still exhibit some global incoherence and syntactic issues, especially for longer text. We thus postprocess the perturbed text with a RANK operation as an additional screening step. For each raw text, we generate 20 perturbed candidates and rank them according to language model perplexity using a GPT-2 (117M) model. We keep only the candidate with the lowest perplexity to ensure fluency and syntactic correctness.

# 3.3 Data Annotation

We ended up with $\sim 1\mathrm{M}$ perturbed text segments in the pool after contextual perturbation, not all of which contain hallucination, as the BERT model can generate factual information given that it is pretrained on a rich open web corpus. Thus, we sought to further annotate the automatically perturbed texts via crowd-sourcing.
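The MASK → REPLACE → RANK pipeline of Sec 3.2 can be sketched as below. This is a minimal illustration, not the released implementation: `predict_topk` and `perplexity` are hypothetical stand-ins for the BERT mask predictor and the GPT-2 perplexity scorer used in the paper.

```python
import random

def mask_spans(tokens, maskable, rho, rng):
    """MASK: mask roughly a fraction rho of the maskable positions with "[MASK]"."""
    n = max(1, int(rho * len(maskable)))
    picked = set(rng.sample(maskable, n))
    return ["[MASK]" if i in picked else t for i, t in enumerate(tokens)], picked

def replace_left_to_right(tokens, masked_positions, predict_topk, k, rng):
    """REPLACE: fill [MASK] slots left to right, sampling from the top-k
    candidates returned by predict_topk (a stand-in for the BERT predictor,
    assumed to already exclude the original token at each position)."""
    out = list(tokens)
    for i in sorted(masked_positions):
        out[i] = rng.choice(predict_topk(out, i, k))
    return out

def rank_by_perplexity(candidates, perplexity):
    """RANK: keep the candidate the LM scorer finds most fluent (lowest perplexity)."""
    return min(candidates, key=perplexity)
```

In the paper's setting, 20 candidates per raw text would be produced by repeated MASK + REPLACE calls and screened with `rank_by_perplexity` using GPT-2 (117M) as the scorer.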
Human annotation is prohibitively expensive at this scale, so instead of annotating all 1M perturbed texts, we annotated a subset that is less trivial and would lead to a more balanced distribution, using an iterative model-in-the-loop annotation approach that is conceptually related to active learning (Cohn et al., 1996; Jia and Liang, 2017; Zellers et al., 2018; Nie et al., 2020).

Human annotation settings To perform the annotations, we hired judges on an internal (the name is redacted for double-blind review) crowdsourcing platform comparable to AMT. The judges were limited to North American English speakers with good records (recognized as experts on the platform, rejection rate $\leq 1\%$) and were screened via a simple 10-question qualification test (answering 8 out of 10 questions correctly). They were paid \$0.15 per HIT, which is more than the prevailing local minimum wage. Protocols were implemented to block spammers in real time$^{5}$. For each annotation, both the original text and the perturbed text were shown to the judges, with the perturbed text span highlighted. The annotators were asked to determine whether the perturbed text spans are $\mathcal{H}$ (hallucination) or $\mathcal{N}$ (not hallucination) against the original text, in terms of factualness and semantic coherence given the context. Each pair was judged by 4 annotators, and by up to 6 if consensus was not reached. We retained only those annotations for which consensus was reached. Out of 12,719 annotated instances, $86.12\%$ reach consensus and are included in the HADES dataset; $78.47\%$ reach $\geq 80\%$ agreement among annotators, e.g. a 4/5 or 5/6 vote for the "hallucination" label; $71.24\%$ reach $100\%$ agreement. For inter-annotator agreement (IAA), the Krippendorff's alpha between the annotators is 0.87.

Iterative Model-in-the-loop annotation Annotating all perturbed text segments is expensive and time-consuming.
Thus, we resort to annotating a subset. We applied two principles for selecting the data to be annotated: $i)$ the data should be balanced. We found that with randomly sampled instances, the annotated label distribution is heavily skewed toward the "hallucination" class; presumably most contextualized perturbations result in factual inconsistency to a certain extent. However, we aim to have the number of instances in the two classes on par with each other, so that the ROC (receiver operating characteristic) curve of tested models can be better characterized. $ii)$ the data for annotation should be less trivial$^{6}$. Obvious instances contribute little to model training and method benchmarking, but cost as much annotation effort as other instances.

The challenge is that we cannot know a priori the annotation labels and ease of labeling, so selecting less trivial instances and forming a balanced label distribution for annotation is not straightforward. To address this challenge, we adopt an iterative model-in-the-loop annotation strategy. Specifically, we split the annotations into several rounds.

For each round$^{7}$, we first retrain a hallucination detection model (initialized with BERT) on the instances annotated in the previous rounds. This model is used for selecting the next batch of data to be annotated from the remaining unlabeled data.

To filter out trivial instances and focus on the more useful cases, we use a heuristic rule for automatic screening, abandoning instances to which the detection model assigns a very low or very high probability of the "hallucination" class (the threshold varies across rounds to yield a reasonable number of candidates).
To eliminate cases where the perturbed text paraphrases the original text, we also measured the cosine similarity between the replaced text (through the "[CLS]" representation) and the corresponding original content using a RoBERTa model (without fine-tuning), and then filtered out cases with a similarity score greater than 0.9. We also remove a large portion of obvious hallucination instances where the target text span is recognized as a DATE or NAME and replaced by a different DATE$^{8}$ or NAME.

In the initial rounds of annotation, we observed extreme label imbalance (around $90\%$ are the $\mathcal{H}$ class) between $\mathcal{H}$ (hallucination) and $\mathcal{N}$ (not hallucination) cases. To rebalance the label distribution so that each class received a decent amount of annotation, we performed additional subsampling based on the label predicted by the aforementioned detection model. We treat the human annotation of $\mathcal{H}$ and $\mathcal{N}$ cases as the oracle, indicating actual $\mathcal{H}/\mathcal{N}$. Since actual "hallucination" cases are dominant, we subsample from the instances that are predicted as $\mathcal{H}$ by the detection model to make the distribution of actual $\mathcal{H}/\mathcal{N}$ even. To do this, we estimate the true positive rate (TPR, $\alpha$), true negative rate (TNR, $\beta$) and precision ($\gamma$) of the detection model based on the annotations from the last round. The hope is that after subsampling, the number of actual $\mathcal{H}$ (TP + FN) roughly equals that of actual $\mathcal{N}$ (FP + TN). The estimated subsampling ratio $R$ for the predicted $\mathcal{H}$ (TP + FP) is given by:

$$
R = \frac {- 2 \alpha \beta \gamma + \alpha \beta + \beta \gamma + \alpha \gamma - \gamma}{(2 \gamma - 1) \alpha (1 - \beta)} \tag {1}
$$
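Equation (1) can be sanity-checked numerically: the class prior implied by $\alpha$, $\beta$, and $\gamma$ determines the confusion-matrix cells, and keeping a fraction $R$ of the predicted-$\mathcal{H}$ instances should leave equal numbers of actual $\mathcal{H}$ and actual $\mathcal{N}$. A small check (the rate values below are illustrative, not the paper's estimates):

```python
def subsample_ratio(alpha, beta, gamma):
    """Eq. (1): fraction of predicted-H instances to keep."""
    num = -2 * alpha * beta * gamma + alpha * beta + beta * gamma + alpha * gamma - gamma
    den = (2 * gamma - 1) * alpha * (1 - beta)
    return num / den

def balance_after_subsampling(alpha, beta, gamma):
    """Return (actual H, actual N) mass kept after subsampling predicted H."""
    # Class prior p = P(actual H) implied by the precision:
    #   gamma = alpha*p / (alpha*p + (1 - beta)*(1 - p))
    p = gamma * (1 - beta) / (alpha * (1 - gamma) + gamma * (1 - beta))
    tp, fn = alpha * p, (1 - alpha) * p              # predicted H / N among actual H
    fp, tn = (1 - beta) * (1 - p), beta * (1 - p)    # predicted H / N among actual N
    r = subsample_ratio(alpha, beta, gamma)
    return r * tp + fn, r * fp + tn                  # kept actual H, kept actual N
```

With, e.g., $\alpha = 0.9$, $\beta = 0.5$, $\gamma = 0.8$, the two kept masses come out equal, confirming the closed form.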
| Machine Generated Text in HADES (Hallucination → Factuality) | Hallucination Type |
| --- | --- |
| He became deputy major-general to the forces, with the acting rank of brigadier general. (brigadier → major) | Domain-specific knowledge |
| Retirement compensation arrangements (RCAs) are … no tax is paid by the owner/employee until benefits are received at death. (death → retirement) | Commonsense knowledge |
| This meeting discussed the drug and alcohol problems for many in their community. (many → teenager) | Incoherence or improper collocation |
| … is a designer/craftsman … he has also produced one-of-a-kind tables, chairs, and other furniture … the New York Times described him as one of 2019's leading businessmen. (businessmen → chair makers) | Unrelated to the central topic |
| Alfonzo Flores Ortiz … was a Colombian road racing cyclist from 1985 to 1987 … he was born in April, 1992 in Medellin. (born → died) | Conflict with preceding context |
| He also aided prominent documentary writer Joseph Margulies on his book, Guantanamo and the Abuse of Presidential Power. (documentary writer → civil rights attorney) | Conflict with succeeding context |
# 3.4 Data Analysis

Below we provide data statistics and characterize the composition and properties of HADES.

Data statistics In total, after accumulating annotations over several rounds, we obtain 12,719 instances with 71,226 HITs from judges. We conduct 14 rounds of annotation, increasing the annotation scale with each round (ranging from $\sim 200$ instances/round to $\sim 4000$ instances/round). Out of 12,719 annotated instances, 10,954 reached consensus among judges and are included in the HADES dataset. We split the dataset into train, validation and test sets with sizes of 8754, 1000 and 1200, respectively. In the final dataset, "hallucination" cases slightly outnumber "not hallucination" cases, with a ratio of $54.5\% / 45.5\%$. We summarize some typical hallucination types seen in the HADES dataset in Fig 3.

Parsing features In Fig 4 we show the ratio of "hallucination" $(\mathcal{H})$ / "not hallucination" $(\mathcal{N})$ cases for different Part-of-Speech (POS) and Named Entity Recognition (NER) tags, identified by spaCy. From a POS perspective, around two-thirds of the verbs and verbal phrases in the dataset are identified as "not hallucination", while for other types of words/phrases, "hallucination" cases are in the majority, e.g., most adverbs (ADV), adjectives (ADJ) and acronyms of proper nouns (PROPN) are labeled as "hallucination". Presumably many verbs or verbal phrases are lower in word concreteness (Nelson and Schreiber, 1992) than other word types (e.g. "make" and "create" can be used interchangeably in many circumstances), and thus, as we observe in our dataset, are less prone to be perturbed into hallucinations. For NER tags, about $90\%$ of word spans are not recognized as named entities. However, of the $10\%$ of remaining instances, over $90\%$ are "hallucination" cases.
| Label | Word Prob* | Entropy | TF-IDF | PPMI |
| --- | --- | --- | --- | --- |
| $\mathcal{H}$ | $5.85_{25.6}$ | $2.58_{1.49}$ | $.021_{.019}$ | $.198_{.134}$ |
| $\mathcal{N}$ | $1.30_{7.67}$ | $1.78_{1.07}$ | $.019_{.014}$ | $.216_{.129}$ |
![](images/18fb107856141ce2f64b56d098e4ad25446f0afee537adc3f9781166eef5c0fe.jpg)
Figure 3: Overview of different types of hallucination in the proposed HADES dataset.

Table 1: Analysis of statistical and model-based features of HADES. (A) Mean$_{\text{std}}$ statistics for the Hallucination ($\mathcal{H}$) and Not Hallucination ($\mathcal{N}$) labels (* indicates $\times 10^{-8}$). (B) Feature correlation heatmap between the hallucination label and word probability, entropy, TF-IDF and PPMI.

Statistical and model-based features To analyze the characteristics of hallucinations in HADES, we compute the correlation between a selected group of statistical/model-based features and hallucination labels. As shown in Table $1^{10}$, we obtain the average word probability and average word entropy of a given text span with a BERT-base model (without fine-tuning), as well as the term frequency-inverse document frequency (TF-IDF) and positive pointwise mutual information (PPMI) features of the given word span. Comparing the features of the two labels ($\mathcal{H}/\mathcal{N}$) (Table 1A), we observe that in our dataset hallucinations typically associate with higher entropy. A counter-intuitive observation is that hallucinations tend to have a higher average probability than factually consistent content. We presume the underlying reason might be that the word distribution generated by the machine diverges from the word distribution of real human-written text (Holtzman et al., 2020; See et al., 2019), owing to self-reinforcement of the current generation based on the previous generation. Consequently, many overconfident generation outputs are likely to fall into hallucination. We observe no strong correlation between hallucination labels and TF-IDF or PPMI, as demonstrated in Table 1B.

![](images/51e8ada1797105c2e0969993b5cd9be851548a48bd8067a3bcc20886cc8d738d.jpg)
Figure 4: Distributions of POS (left), NER (middle) and a breakdown of non-null NER tags (right) in HADES.
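The span-level features analyzed above follow standard definitions; a minimal sketch of them (illustrative formulas, not the paper's exact implementation, which derives probabilities and entropies from BERT) is:

```python
import math
from collections import Counter

def tf_idf(term, doc, docs):
    """TF-IDF of a term in one document against a small corpus (docs = token lists)."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in docs if term in d)   # document frequency
    return tf * math.log(len(docs) / df) if df else 0.0

def ppmi(w, c, pair_counts, w_counts, c_counts, total):
    """Positive pointwise mutual information of a (word, context) pair."""
    if pair_counts[(w, c)] == 0:
        return 0.0
    p_wc = pair_counts[(w, c)] / total
    p_w, p_c = w_counts[w] / total, c_counts[c] / total
    return max(0.0, math.log(p_wc / (p_w * p_c)))

def entropy(dist):
    """Shannon entropy (nats) of a predicted token distribution."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)
```

Span-level scores are then averages of these per-token quantities over the target span.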
# 4 Baseline Models

As an initial step towards tackling the proposed hallucination detection task and benchmarking methods, we create several baseline detection models$^{11}$.

Feature-based models As elaborated in Sec 3.4, the statistical/model-based features like average word probability, average entropy, TF-IDF and PPMI, as well as parsing features like POS and NER tags, can be vague indicators of hallucinations. The former two are context-aware and the latter four are not. We incorporate them as features to build classifiers including logistic regression (LR) and support vector machine (SVM) using scikit-learn (Pedregosa et al., 2011). The maximum number of iterations is set to 100, with an early-stopping strategy that stops training if the loss does not drop within 5 iterations.

Transformer-based models We also build baseline detection models based on pretrained transformer models including BERT, GPT-2, XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2020). These transformer-based models represent the state of the art, and can potentially better leverage context or embedded world knowledge to detect self-contradictory or anti-commonsense content.

Specifically, for an input text segment, we fine-tune a pretrained model $\mathcal{M}$ to predict binary hallucination labels $\mathbf{y}$ for each given text span. At inference time, from the last-layer hidden states $\mathbf{H} \in \mathbb{R}^{l \times h}$ ($h, l$ are the hidden size and sequence length, respectively) of $\mathcal{M}$, and given a target text span starting at position $s$ and ending at position $t$, we first obtain the representation $\mathbf{w} \in \mathbb{R}^h$ for the target span with max pooling (i.e., $\mathbf{w} = \max_{-}\mathrm{pool}(\mathbf{H}_{s:t})$). We then map $\mathbf{w}$ to a binary hallucination label $y \in \{0, 1\}$ with an MLP network using tanh as the activation.
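The inference-time span scoring just described can be sketched in NumPy as below. The random weights are stand-ins for the fine-tuned MLP parameters; the real model would take $\mathbf{H}$ from the transformer's last layer.

```python
import numpy as np

def span_logit(H, s, t, W1, b1, W2, b2):
    """Score a span: max-pool the hidden states H[s:t], then a 2-layer tanh MLP.

    H: (l, h) last-layer hidden states; the target span covers positions [s, t).
    """
    w = H[s:t].max(axis=0)            # (h,) span representation via max pooling
    hidden = np.tanh(w @ W1 + b1)     # (h // 2,) MLP hidden layer
    return float(hidden @ W2 + b2)    # scalar hallucination logit

rng = np.random.default_rng(0)
l, h = 12, 8
H = rng.normal(size=(l, h))
W1, b1 = rng.normal(size=(h, h // 2)), np.zeros(h // 2)
W2, b2 = rng.normal(size=(h // 2,)), 0.0
logit = span_logit(H, 3, 6, W1, b1, W2, b2)
label = int(logit > 0)                # predicted binary label for the span
```

Note that max pooling makes the span representation invariant to the order of the hidden states within the span, so positional information comes entirely from the contextualized encoder.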
During training, we fine-tune the model using a cross-entropy objective between the predicted labels and the actual labels.

# 5 Experimental Setup

Baseline configurations For the transformer-based baselines, we experiment with a variety of pretrained models via Hugging Face Transformers (Wolf et al., 2020), including BERT-large (335M), GPT2-medium (345M), XLNet-large (340M) and RoBERTa-large (355M). We use the Adam optimizer (Kingma and Ba, 2015) with different learning rates, i.e. 5e-3 for GPT2 and BERT and 1e-3 for the other models.

We explored multiple model architectures and setups to determine the optimal configuration using the BERT-large model. These include $i$) span representation with mean/max pooling; ii) the number of layers of the MLP network; iii) the hidden dimension
| Model | Acc | G-Mean (↑) | BSS (↓) | AUC | P ($\mathcal{N}$) | R ($\mathcal{N}$) | F1 ($\mathcal{N}$) | P ($\mathcal{H}$) | R ($\mathcal{H}$) | F1 ($\mathcal{H}$) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LR | 62.25 | 60.77 | - | - | 62.35 | 72.08 | 66.86 | 62.10 | 51.24 | 60.33 |
| SVM | 63.67 | 61.50 | - | - | 62.89 | 76.18 | 68.90 | 65.05 | 49.65 | 56.31 |
| BERT | 71.92 | 71.95 | 19.06 | 78.63 | 74.46 | 71.29 | 72.84 | 69.31 | 72.61 | 70.92 |
| RoBERTa | 72.83 | 70.94 | 18.78 | 78.72 | 74.06 | 74.76 | 74.41 | 71.43 | 70.67 | 71.05 |
| XLNet | 72.33 | 71.39 | 18.79 | 78.93 | 71.15 | 80.13 | 75.37 | 74.07 | 63.60 | 68.44 |

Table 2: Benchmark (numbers in percentages $(\%)$) for the offline setting on HADES, where detection models have access to the bidirectional context. P/R/F1 are reported per class ($\mathcal{N}$: not hallucination; $\mathcal{H}$: hallucination). $\downarrow/\uparrow$ indicates lower/higher is better. Significance tests are in the appendix.
| Model | Acc | G-Mean (↑) | BSS (↓) | AUC | P ($\mathcal{N}$) | R ($\mathcal{N}$) | F1 ($\mathcal{N}$) | P ($\mathcal{H}$) | R ($\mathcal{H}$) | F1 ($\mathcal{H}$) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-2 | 71.58 | 70.98 | 19.13 | 77.71 | 71.32 | 77.29 | 74.19 | 71.93 | 65.19 | 68.40 |
| BERT | 71.00 | 70.43 | 18.66 | 78.83 | 70.91 | 76.50 | 73.60 | 71.12 | 64.84 | 67.84 |
| RoBERTa | 70.67 | 70.14 | 19.77 | 77.07 | 70.74 | 75.87 | 73.22 | 70.58 | 64.84 | 67.59 |
| XLNet | 70.08 | 69.17 | 19.76 | 76.59 | 69.39 | 77.60 | 73.27 | 71.08 | 61.66 | 66.04 |
Table 3: Benchmark (numbers in percentages $(\%)$) for the online setting on HADES, where detection models only have access to the left context. $\downarrow/\uparrow$ indicates lower/higher is better. Significance tests are in the appendix.

of the MLP; $iv$) whether or not to freeze the parameters of $\mathcal{M}$ up to the last layer. We choose the best configuration according to model performance on the validation set. The best configuration uses max pooling, employs a 2-layer MLP with hidden dimension $h/2$, freezes the model parameters up to the last layer of $\mathcal{M}$, and fine-tunes only the binary MLP classifier. We apply the same network configuration to all other pretrained models, as empirically we see marginal performance gains from enumerating different configurations for individual pretrained models other than BERT.

As discussed in Sec. 2, HADES can serve as a benchmark for hallucination detection in both offline (the model can see bidirectional context) and online (only preceding context can be leveraged) settings. Note that we apply the feature-based baselines only in the offline setting (Table 2), because a good estimation of those features requires bidirectional context. The transformer with causal attention (GPT-2) can only fit the online setting.

Evaluation metrics We evaluate the baselines on HADES with standard classification metrics including accuracy, precision, recall, F1 and AUC (Area Under Curve) with respect to the ROC curve. We also utilize the G-Mean metric, which measures the geometric mean of sensitivity and specificity (Espindola and Ebecken, 2005) and has been reported to be especially useful in imbalanced label distribution scenarios. We also employ the Brier Skill Score (BSS) metric (Center, 2005), which calculates the mean squared error between the reference distribution and the hypothesis probabilities.
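The two less common metrics can be sketched directly from their definitions; this is an illustrative implementation of the descriptions above, not the paper's evaluation script.

```python
import math

def g_mean(y_true, y_pred):
    """Geometric mean of sensitivity and specificity (binary labels, 1 = H)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(1 for t in y_true if t == 1)
    neg = len(y_true) - pos
    sensitivity = tp / pos if pos else 0.0
    specificity = tn / neg if neg else 0.0
    return math.sqrt(sensitivity * specificity)

def brier(y_true, probs):
    """Mean squared error between hypothesis probabilities and reference labels."""
    return sum((p - t) ** 2 for t, p in zip(y_true, probs)) / len(y_true)
```

Because the G-Mean collapses to zero when either class is entirely missed, it penalizes degenerate classifiers that plain accuracy would reward under label imbalance.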
# 6 Results

Baseline performance Table 2 and Table 3 show the performance of the baseline models$^{12}$ in the offline and online settings, respectively. In both settings, the predictions for "not hallucination" cases have higher F1 scores than "hallucination" cases. All models perform better in the offline setting than in the online setting, indicating that the succeeding context of the target words helps identify hallucinations. The transformer-based baselines are generally on par with each other. Under the offline setting, the pretrained models outperform the feature-based models by a large margin; this indicates that a powerful contextualized feature extractor is important for successfully identifying hallucinations at fine granularity. Under the online setting, we observe that, for most of the metrics, GPT-2 yields the best performance of all baselines. Presumably, the causal language model pretraining makes GPT-2 perform better in the autoregressive (online) detection setting.

![](images/9cf976ca0cf995659fe26dfedcfd12cb9446b7b951fd0d1f702229a61157500b.jpg)
Figure 5: Visualization of predicted hallucination scores for a sample of GPT-3 generated text, provided by the BERT (large, offline) detector. Darker green signifies a higher risk of hallucination.

![](images/86e23e41213e9ecb43e70e3afb81ba5a0ba5daa636dc4d028ec2f4f445f3f7dc.jpg)
Figure 6: The performance of the BERT-large based detection model with different context lengths.

Context matters in HADES To investigate the extent to which contextual information helps hallucination detection in HADES, we run the BERT-large detection model with different context lengths and characterize its performance in both online and offline settings in Fig 6. Starting from the target words, we set a fixed-size (5/10/20/40/80/160) context window and truncate all text beyond this window.
As we enlarge the context window, model performance grows rapidly while the context length is smaller than 80, and then gradually converges. This observation highlights the importance of context in hallucination detection. Interestingly, we observe that the model obtains higher performance in the offline setting than in the online setting. The performance gap between the two settings is largest when the context length is around 75, and vanishes with long ($>150$) or short ($<20$) context windows. We surmise that for long ($>150$) context windows, the preceding context alone might already be adequate for detection, while for short ($<20$) context windows, the context, regardless of whether it is unidirectional or bidirectional, might not contain enough information for detection.

Model predictions on GPT-3 generated text We visualize the predictions of the BERT-large (offline) model on GPT-3 generated text in Fig 5. According to the 2021 census instruments$^{13}$, some identified spans like "greenhouse gas emission" and "complete enumeration" are indeed not included in the census; we assume they are flagged due to their topical or knowledge irrelevance to the "census of agriculture" in the pretraining corpus. Interestingly, the detection model predicts high hallucination risk for "structures and buildings", which differs subtly from "total greenhouse area including enclosed structures" (included in the instruments). This case study demonstrates the potential of our model for identifying hallucinated content in the actual outputs of large-scale pretrained models.
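The fixed-window truncation used in the context-length analysis above can be sketched as below; this is one plausible reading (window tokens per side in the offline setting, preceding tokens only in the online setting), not the paper's exact preprocessing.

```python
def truncate_context(tokens, s, t, window, setting):
    """Keep at most `window` context tokens around the target span [s, t).

    offline: up to `window` tokens on each side of the span;
    online: only the span plus up to `window` preceding tokens.
    """
    left = max(0, s - window)
    if setting == "online":
        return tokens[left:t]
    return tokens[left:t + window]
```

Sweeping `window` over 5/10/20/40/80/160 and re-running the detector reproduces the kind of curve shown in Fig 6.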
# 7 Related Work

Reference-based Hallucination Detection Apart from human verification (Chen and Bansal, 2018), researchers have developed effective reference-based methods which automatically detect hallucination in generated text using statistical n-gram matching (Dhingra et al., 2019; Liu et al., 2019), edit distance heuristics (Zhou et al., 2021), natural language inference (Kryscinski et al., 2020; Falke et al., 2019), information extraction (Zhang et al., 2020; Goodrich et al., 2019) or question answering (Scialom et al., 2019; Eyal et al., 2019; Wang et al., 2020a). Our approach differs from these in that we investigate the reference-free hallucination detection scenario.

To reduce hallucinations in the reference-based setting, researchers have applied iterative training (Nie et al., 2019), post-editing (Dong et al., 2020), soft constraints, e.g. attention manipulation (Kiddon et al., 2016; Hua and Wang, 2019; Tian et al., 2019) or optimal transport (Wang et al., 2020b), and template/scaffold-guided schemas with explicit plans (Ma et al., 2019; Moryossef et al., 2019; Balakrishnan et al., 2019; Du et al., 2020; Liu et al., 2021), e.g. text sequences which specify the narrative ordering, and implicit plans (Wiseman et al., 2018; Ye et al., 2020; Shen et al., 2020; Li and Rush, 2020), e.g. (structured) hidden variables that correspond to certain surface realizations.

Reference-free Detection Approaches Reference-free hallucination detection is closely related to fake news detection (Zellers et al., 2019; Zhou and Zafarani, 2020; Zhong et al., 2020), which aims to identify deliberate disinformation in a reference-free manner on social media and usually involves common-sense and world knowledge reasoning (Monti et al., 2019), and to fact checking (Thorne et al., 2018), where practitioners are asked to verify given claims without references by retrieving related evidence from Wikipedia.
Another line of research classifies sentence-level language specificity (Li and Nenkova, 2015; Gao et al., 2019), rated on a scale from 1 (very general) to 5 (very specific) for short text, e.g. tweets, according to human annotation.

The proposed hallucination detection aims to examine text at a finer granularity than fake news detection and fact checking. In the proposed task, most parts of the text remain faithful; our goal is to identify subtle hallucinations at the token level. Fake news detection and specificity assessment, on the other hand, usually focus on sentence- or document-level detection.

# 8 Conclusions

We have proposed a token-level, reference-free hallucination detection task and introduced a benchmark dataset, HADES, for identifying fine-grained hallucination in free-form text generation. To create this dataset, we perturbed texts to simulate hallucination in NLG systems, and applied an iterative model-in-the-loop annotation approach to annotate the perturbed text under an imbalanced label scenario. We have further provided comprehensive analyses of HADES and evaluated several baseline models to establish initial benchmarks. We hope that the proposed task and dataset will shed light on high-resolution hallucination detection in free-form text generation and will eventually lead to real-time hallucination prevention.

# Broader Impact and Ethical Consideration

This study aims to facilitate the recognition of potentially hallucinated content produced by large-scale pretrained models in free-form generation. We support this goal with a novel reference-free, token-level hallucination detection task and the corresponding annotated dataset HADES. A detection model trained on HADES could be useful in both online and offline settings. For online settings, it is possible to guide beam search or suppress the probability of hallucinated tokens through the detection model.
For offline settings, our system may expedite human-in-the-loop post-examination in product deployment.

We design our model to detect hallucination with respect to factual statements. The learned knowledge should transfer to other domains, such as social chatbots, whenever the conversation concerns certain facts (e.g., a celebrity or a historical event). Wikipedia covers a wide range of facts, domains, and topics, making it ideal for our study. We thus collect the HADES dataset from Wikipedia. All text on Wikipedia is licensed under the Creative Commons Attribution/Share-Alike 3.0 Unported License. During the annotation, all involved annotators participated voluntarily and received fair payment.

# Acknowledgments

The authors would like to thank the anonymous reviewers for their thoughtful and constructive comments. Tianyu and Zhifang gratefully acknowledge the support of the National Key Research and Development Program of China 2020AAA0106701 and National Science Foundation of China project U19A2065.

# References

Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani, Michael White, and Rajen Subba. 2019. Constrained decoding for neural NLG from compositional representations in task-oriented dialogue. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 831-844, Florence, Italy. Association for Computational Linguistics.
Steven Bird. 2006. NLTK: The Natural Language Toolkit. In Proceedings of the COLING/ACL 2006 Interactive Presentation Sessions, pages 69-72.
+Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind + +Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc. +NOAA-CIRES Climate Diagnostics Center. 2005. Brier skill scores, rocs, and economic value diagrams can report false skill. +Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675-686, Melbourne, Australia. Association for Computational Linguistics. +David A Cohn, Zoubin Ghahramani, and Michael I Jordan. 1996. Active learning with statistical models. Journal of artificial intelligence research, 4:129-145. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming-Wei Chang, Dipanjan Das, and William Cohen. 2019. Handling divergent reference texts when evaluating table-to-text generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4884–4895, Florence, Italy. Association for Computational Linguistics. 
+Yue Dong, Shuohang Wang, Zhe Gan, Yu Cheng, Jackie Chi Kit Cheung, and Jingjing Liu. 2020. Multifact correction in abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9320-9331, Online. Association for Computational Linguistics. +Yuheng Du, Shereen Oraby, Vittorio Perera, Min-min Shen, Anjali Narayan-Chen, Tagyoung Chung, Anushree Venkatesh, and Dilek Hakkani-Tur. 2020. Schema-guided natural language generation. In Proceedings of the 13th International Conference on Natural Language Generation, pages 283-295, Dublin, Ireland. Association for Computational Linguistics. +Rogério P Espindola and Nelson FF Ebecken. 2005. On extending f-measure and g-mean metrics to multiclass problems. WIT Transactions on Information and Communication Technologies, 35. + +Matan Eyal, Tal Baumel, and Michael Elhadad. 2019. Question answering as an automatic evaluation metric for news article summarization. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3938-3948, Minneapolis, Minnesota. Association for Computational Linguistics. +Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019. Ranking generated summaries by correctness: An interesting but challenging application for natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2214-2220, Florence, Italy. Association for Computational Linguistics. +Yifan Gao, Yang Zhong, Daniel Preoticiuc-Pietro, and Junyi Jessy Li. 2019. Predicting and analyzing language specificity in social media posts. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6415-6422. +Ben Goodrich, Vinay Rao, Peter J Liu, and Mohammad Saleh. 2019. Assessing the factual accuracy of generated text. 
In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 166-175. +Mandy Guo, Zihang Dai, Denny Vrandecic, and Rami Al-Rfou. 2020. Wiki-40B: Multilingual language model dataset. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 2440-2452, Marseille, France. European Language Resources Association. +Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In International Conference on Learning Representations. +Xinyu Hua and Lu Wang. 2019. Sentence-level content planning and style specification for neural text generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 591-602, Hong Kong, China. Association for Computational Linguistics. +Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021-2031, Copenhagen, Denmark. Association for Computational Linguistics. +Chloeé Kiddon, Luke Zettlemoyer, and Yejin Choi. 2016. Globally coherent text generation with neural checklist models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 329-339, Austin, Texas. Association for Computational Linguistics. + +Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In *ICLR (Poster)*. +Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332-9346, Online. Association for Computational Linguistics. +Junyi Li and Ani Nenkova. 2015. Fast and accurate prediction of sentence specificity. 
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29. +Xiang Lisa Li and Alexander Rush. 2020. Posterior control of blackbox generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2731-2743, Online. Association for Computational Linguistics. +Tianyu Liu, Fuli Luo, Pengcheng Yang, Wei Wu, Baobao Chang, and Zhifang Sui. 2019. Towards comprehensive description generation from factual attribute-value tables. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5985-5996, Florence, Italy. Association for Computational Linguistics. +Tianyu Liu, Xin Zheng, Baobao Chang, and Zhifang Sui. 2021. Towards faithfulness in open domain table-to-text generation from an entity-centric view. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13415-13423. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Roberta: A robustly optimized bert pretraining approach. +Shuming Ma, Pengcheng Yang, Tianyu Liu, Peng Li, Jie Zhou, and Xu Sun. 2019. Key fact as pivot: A two-stage model for low resource table-to-text generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2047-2057, Florence, Italy. Association for Computational Linguistics. +Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906-1919, Online. Association for Computational Linguistics. +Federico Monti, Fabrizio Frasca, Davide Eynard, Damon Mannon, and Michael M Bronstein. 2019. Fake news detection on social media using geometric deep learning. arXiv preprint arXiv:1902.06673. +Amit Moryossef, Yoav Goldberg, and Ido Dagan. 2019. 
Step-by-step: Separating planning from realization in neural data-to-text generation. In Proceedings of + +the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2267-2277, Minneapolis, Minnesota. Association for Computational Linguistics. +Douglas L Nelson and Thomas A Schreiber. 1992. Word concreteness and word structure as independent determinants of recall. Journal of memory and language, 31(2):237-260. +Feng Nie, Jin-Ge Yao, Jinpeng Wang, Rong Pan, and Chin-Yew Lin. 2019. A simple recipe towards reducing hallucination in neural surface realisation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2673–2679, Florence, Italy. Association for Computational Linguistics. +Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4885-4901, Online. Association for Computational Linguistics. +Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. the Journal of machine Learning research, 12:2825-2830. +Ofir Press, Noah A Smith, and Mike Lewis. 2020. Shortformer: Better language modeling using shorter inputs. arXiv preprint arXiv:2012.15832. +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. +Clément Rebuffel, Marco Roberti, Laure Soulier, Geoffrey Scoutheeten, Rossella Cancelliere, and Patrick Gallinari. 2021. Controlling hallucinations at word level in data-to-text generation. arXiv preprint arXiv:2102.02810. 
+Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, and Kate Saenko. 2018. Object hallucination in image captioning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4035-4045, Brussels, Belgium. Association for Computational Linguistics. +Thomas Scialom, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2019. Answers unite! unsupervised metrics for reinforced summarization models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3246-3256, Hong Kong, China. Association for Computational Linguistics. + +Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, and Christopher D. Manning. 2019. Do massively pretrained language models make better storytellers? In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 843-861, Hong Kong, China. Association for Computational Linguistics. +Xiaoyu Shen, Ernie Chang, Hui Su, Cheng Niu, and Dietrich Klakow. 2020. Neural data-to-text generation via jointly learning the segmentation and correspondence. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7155-7165, Online. Association for Computational Linguistics. +James Thorne and Andreas Vlachos. 2018. Automated fact checking: Task formulations, methods and future directions. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3346-3359, Santa Fe, New Mexico, USA. Association for Computational Linguistics. +James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERIFICATION. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819, New Orleans, Louisiana. Association for Computational Linguistics. +Ran Tian, Shashi Narayan, Thibault Sellam, and Ankur P Parikh. 2019. Sticking to the facts: Confident decoding for faithful data-to-text generation. arXiv preprint arXiv:1910.08684. +Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020a. Asking and answering questions to evaluate the factual consistency of summaries. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5008-5020, Online. Association for Computational Linguistics. +Chaojun Wang and Rico Sennrich. 2020. On exposure bias, hallucination and domain shift in neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3544-3552, Online. Association for Computational Linguistics. +Zhenyi Wang, Xiaoyang Wang, Bang An, Dong Yu, and Changyou Chen. 2020b. Towards faithful neural table-to-text generation with content-matching constraints. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1072-1086, Online. Association for Computational Linguistics. +Sam Wiseman, Stuart Shieber, and Alexander Rush. 2018. Learning neural templates for text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3174-3187, Brussels, Belgium. Association for Computational Linguistics. + +Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics. +Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc. +Rong Ye, Wenxian Shi, Hao Zhou, Zhongyu Wei, and Lei Li. 2020. Variational template machine for data-to-text generation. In International Conference on Learning Representations. +Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 93-104, Brussels, Belgium. Association for Computational Linguistics. +Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 9054-9065. Curran Associates, Inc. +Yuhao Zhang, Derek Merck, Emily Tsai, Christopher D. Manning, and Curtis Langlotz. 2020. Optimizing the factual correctness of a summary: A study of summarizing radiology reports. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5108-5120, Online. Association for Computational Linguistics. +Wanjun Zhong, Duyu Tang, Zenan Xu, Ruize Wang, Nan Duan, Ming Zhou, Jiahai Wang, and Jian Yin. 2020. Neural deepfake detection with factual structure of text. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2461-2470, Online. Association for Computational Linguistics. 
Chunting Zhou, Graham Neubig, Jiatao Gu, Mona Diab, Francisco Guzmán, Luke Zettlemoyer, and Marjan Ghazvininejad. 2021. Detecting hallucinated content in conditional neural sequence generation. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1393-1404, Online. Association for Computational Linguistics.

Xinyi Zhou and Reza Zafarani. 2020. A survey of fake news: Fundamental theories, detection methods, and opportunities. ACM Computing Surveys (CSUR), 53(5):1-40.

# A Detailed Statistical Analysis

In Table 4, we provide detailed statistical analyses for different POS and NER tags in the HADES dataset. Although the average word probability and average word entropy features differ among POS/NER tags, hallucinated content is typically associated with higher word probability and word entropy irrespective of the POS/NER tag. We do not observe a strong correlation between hallucination labels and the TF-IDF or PPMI features.

# B Annotation Interface

The annotation interface is provided in Fig 7.

Note that throughout the annotation process we choose to involve an even number of annotators (e.g., 4 or 6) for an instance (Sec 3.3). The reason is that we want to be able to involve extra annotators for controversial cases. If we instead picked an odd number of annotators (e.g., 5 rather than 4) for each datapoint (binary classification), all possible outcomes would be $0:5/1:4/2:3/3:2/4:1/5:0$ in terms of the ratio of Hallucination/Consistent labels, which means no extra annotators would ever be involved, since the annotators always reach a consensus (majority wins).

# C Subsampling Ratio For Label Rebalance

We adopt an iterative model-in-the-loop method in data annotation. Since we observe a label imbalance between "hallucination" $(\mathcal{H})$ and "not hallucination" $(\mathcal{N})$ in the initial rounds of annotation, we employ subsampling to rebalance the label distribution in Sec 3.3.
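As an aside, the annotator-parity argument from Appendix B above can be checked by brute-force enumeration: an odd panel can never split its binary votes evenly, so the extra-annotator mechanism would never fire. A minimal sketch (the panel sizes are illustrative):

```python
from itertools import product

# For each panel size n, check whether some Hallucination/Consistent vote
# split has no majority (a tie); a tie is what triggers involving extra
# annotators for controversial instances.
def can_tie(n_annotators):
    """True iff some binary vote split among n annotators is a tie."""
    return any(2 * sum(votes) == n_annotators
               for votes in product([0, 1], repeat=n_annotators))

# Only even panel sizes can tie, so only they can flag controversial cases.
print([n for n in range(3, 8) if can_tie(n)])  # -> [4, 6]
```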
We accumulate the data annotated in all previous rounds and train a detection model using the accumulated data. Then we apply the detection model to the unannotated data in the candidate data pool in order to select the next batch of data, as elaborated in Sec 3.3.

We assume that the human annotation for $\mathcal{H}$ and $\mathcal{N}$ cases is the oracle, indicating the actual $\mathcal{H}/\mathcal{N}$ . Since the actual "hallucinated" label is dominant, we subsample from the instances that are predicted by the detection model to be $\mathcal{H}$ , in order to even out the distribution of the actual $\mathcal{H}/\mathcal{N}$ . To do this, we estimate the true positive rate $^{14}$ (TPR, $\alpha$ ), the true negative rate (TNR, $\beta$ ), and the precision ( $\gamma$ ) of the detection model based on the annotations from the previous rounds:

$$
\mathrm{TPR} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}} \triangleq \alpha \tag{2}
$$

$$
\mathrm{TNR} = \frac{\mathrm{TN}}{\mathrm{TN} + \mathrm{FP}} \triangleq \beta \tag{3}
$$

$$
\mathrm{precision} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}} \triangleq \gamma \tag{4}
$$

where TP, FP, TN, and FN denote the numbers of "true positive", "false positive", "true negative", and "false negative" cases. We aim to subsample from the instances that are predicted as $\mathcal{H}$ by the detection model $(\mathrm{TP} + \mathrm{FP})$ with a subsampling ratio $s$ , so that the actual $\mathcal{H}$ $(\mathrm{TP} + \mathrm{FN})$ is roughly equal to the actual $\mathcal{N}$ $(\mathrm{FP} + \mathrm{TN})$ after the resampling. We denote TP and TN as $x$ and $y$ , and express FN and FP in terms of $x$ , $y$ , $\alpha$ , and $\beta$ :

$$
\mathrm{FN} = \frac{1 - \alpha}{\alpha} x \tag{5}
$$

$$
\mathrm{FP} = \frac{1 - \beta}{\beta} y \tag{6}
$$

By substituting FN and FP into Eq.
(4), we have:

$$
\gamma = \frac{x}{x + \frac{1 - \beta}{\beta} y} \tag{7}
$$

To make the distribution of the actual $\mathcal{H}/\mathcal{N}$ even $(s\mathrm{TP} + \mathrm{FN} = s\mathrm{FP} + \mathrm{TN})$ , we require:

$$
s x + \frac{1 - \alpha}{\alpha} x = s \frac{1 - \beta}{\beta} y + y \tag{8}
$$

By combining Eq. (7) and Eq. (8), we solve for the optimal subsampling ratio $s^*$ :

$$
s^* = \frac{-2\alpha\beta\gamma + \alpha\beta + \beta\gamma + \alpha\gamma - \gamma}{(2\gamma - 1)\,\alpha\,(1 - \beta)} \tag{9}
$$
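As a sanity check, the closed form in Eq. (9) can be verified numerically against a direct solve of Eqs. (5)-(8); a minimal sketch (the $\alpha$, $\beta$, $\gamma$ values are illustrative, not estimates from the paper):

```python
def optimal_subsampling_ratio(alpha, beta, gamma):
    """Closed-form s* from Eq. (9)."""
    num = -2 * alpha * beta * gamma + alpha * beta + beta * gamma + alpha * gamma - gamma
    den = (2 * gamma - 1) * alpha * (1 - beta)
    return num / den

def ratio_by_direct_solve(alpha, beta, gamma, tp=1.0):
    """Recover FP, TN, FN from Eqs. (4)-(6) and solve Eq. (8) for s.

    Assumes precision > 0.5 so that TP > FP and the solve is well defined.
    """
    fp = tp * (1 - gamma) / gamma   # invert Eq. (4): gamma = TP / (TP + FP)
    tn = fp * beta / (1 - beta)     # invert Eq. (6): FP = (1 - beta)/beta * TN
    fn = (1 - alpha) / alpha * tp   # Eq. (5)
    # Eq. (8): s*TP + FN = s*FP + TN  =>  s = (TN - FN) / (TP - FP)
    return (tn - fn) / (tp - fp)

alpha, beta, gamma = 0.8, 0.6, 0.7
print(optimal_subsampling_ratio(alpha, beta, gamma))  # ~0.6875
print(ratio_by_direct_solve(alpha, beta, gamma))      # ~0.6875, matches Eq. (9)
```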
| Tag | Word Prob ×1e-8 ($\mathcal{H}$) | Word Prob ×1e-8 ($\mathcal{N}$) | Entropy ($\mathcal{H}$) | Entropy ($\mathcal{N}$) | TF-IDF ($\mathcal{H}$) | TF-IDF ($\mathcal{N}$) | PPMI ($\mathcal{H}$) | PPMI ($\mathcal{N}$) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| POS:NOUN | 6.98±320 | 1.68±634 | 2.75±1.52 | 1.86±1.13 | .025±.021 | .023±.018 | .213±.145 | .228±.140 |
| POS:VERB | 2.5±19.3 | 0.69±2.89 | 2.25±1.25 | 1.76±1.00 | .019±.012 | .018±.011 | .206±.112 | .216±.119 |
| POS:ADJ | 8.16±44.8 | 2.86±18.9 | 2.95±1.46 | 2.38±1.23 | .021±.017 | .017±.009 | .180±.128 | .164±.117 |
| POS:ADV | 5.13±14.2 | 2.65±12.2 | 2.56±1.18 | 1.97±1.09 | .016±.011 | .014±.008 | .181±.114 | .182±.105 |
| POS:PROPN | 14.3±33.6 | 4.35±17.8 | 3.12±1.73 | 1.56±1.39 | .029±.026 | .033±.029 | .198±.150 | .312±.275 |
| POS:other | 9.56±31.1 | 3.28±15.7 | 2.64±1.61 | 1.26±0.97 | .013±.013 | .011±.010 | .158±.107 | .205±.092 |
| NER:null | 5.37±25.6 | 1.24±7.19 | 2.52±1.47 | 1.79±1.06 | .021±.019 | .019±.014 | .200±.132 | .215±.126 |
| NER:other | 8.43±25.4 | 5.06±21.5 | 2.93±1.56 | 1.65±1.44 | .023±.023 | .026±.024 | .189±.146 | .263±.237 |
| All | 5.85±25.6 | 1.30±7.67 | 2.58±1.49 | 1.78±1.07 | .021±.019 | .019±.014 | .198±.144 | .216±.129 |
Table 4: Detailed statistical features (mean±std) for "hallucinated" $(\mathcal{H})$ and "not hallucinated" $(\mathcal{N})$ cases.

Determine whether the text span highlighted in red in the modified text is consistent with that in the original paragraph. Focus on the highlighted words in context. You should ignore other inconsistencies that you may observe.

# Original

conrad tolendahl lally was the sole child born to lucy fedora wells and conrad colthurst whitley lally ; he arrived in 1882 . his noble french general great - grandfather had fought the british in india . his grandfather served through three wars in china with the royal navy before emigrating to canada . young lally was educated at private schools before matriculating at upper canada college . after graduation , he went into banking , opening and managing the first imperial bank of canada branch in banff in 1906 . two years later , he moved to wainwright , alberta to go partners in a general store . he became active in civic affairs , becoming mayor . however , as world war i erupted , he volunteered for military service with the royal flying corps .

# Replacement

conrad toendahl lally was the sole child born to lucy fedora wells and conrad colthurst whitley lally ; he arrived in 1882 . his noble - general great - grandfather had fought the british in china . his father served through the war in germany with the royal navy before emigrating to canada . conrad whitley wells was educated at public schools before matriculating at north york university . after graduating , he went into business , opening and running the first national bank of canada branch in fort garry in 1902 . two years later , he moved to calgary , alberta to go work in a grocery store . he became active in civic affairs , becoming mayor . however , as world war i began , he volunteered for military service with the royal flying corps .
+ +Figure 7: The annotation interface for the proposed hallucination detection task. \ No newline at end of file diff --git a/atokenlevelreferencefreehallucinationdetectionbenchmarkforfreeformtextgeneration/images.zip b/atokenlevelreferencefreehallucinationdetectionbenchmarkforfreeformtextgeneration/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..acdcc7402baec00219e361c37a4a3827baef43f8 --- /dev/null +++ b/atokenlevelreferencefreehallucinationdetectionbenchmarkforfreeformtextgeneration/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d0fab14d4eb9f03f01a9624f148ede9a8a4df4c84c30c9ac4cedf868cad831a +size 536926 diff --git a/atokenlevelreferencefreehallucinationdetectionbenchmarkforfreeformtextgeneration/layout.json b/atokenlevelreferencefreehallucinationdetectionbenchmarkforfreeformtextgeneration/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..86277de44814691aff3893070760d97138fbf6e8 --- /dev/null +++ b/atokenlevelreferencefreehallucinationdetectionbenchmarkforfreeformtextgeneration/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6a023cd50034aac6f2f5a4e26eb641d52be4ac065e2021914009ede72ac57373 +size 476256 diff --git a/avariationalhierarchicalmodelforneuralcrosslingualsummarization/83358d60-727e-4818-9946-019d83454dc4_content_list.json b/avariationalhierarchicalmodelforneuralcrosslingualsummarization/83358d60-727e-4818-9946-019d83454dc4_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..4631a7e9ada064a040590ce2603c3fd0370c73f8 --- /dev/null +++ b/avariationalhierarchicalmodelforneuralcrosslingualsummarization/83358d60-727e-4818-9946-019d83454dc4_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3642c03f32a7cb188fea2fe35f47b2aa567a4e52ebe9e773c4fcb1a03f1b929c +size 99222 diff --git 
a/avariationalhierarchicalmodelforneuralcrosslingualsummarization/83358d60-727e-4818-9946-019d83454dc4_model.json b/avariationalhierarchicalmodelforneuralcrosslingualsummarization/83358d60-727e-4818-9946-019d83454dc4_model.json new file mode 100644 index 0000000000000000000000000000000000000000..29fe6b85dbd6236a8950abe55af8f1788f18df50 --- /dev/null +++ b/avariationalhierarchicalmodelforneuralcrosslingualsummarization/83358d60-727e-4818-9946-019d83454dc4_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b5c066fa4ae2a57d28e2a1bfe92082c5e79965b5c7d2b69c26c8b644418feb96 +size 121103 diff --git a/avariationalhierarchicalmodelforneuralcrosslingualsummarization/83358d60-727e-4818-9946-019d83454dc4_origin.pdf b/avariationalhierarchicalmodelforneuralcrosslingualsummarization/83358d60-727e-4818-9946-019d83454dc4_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..fc10d02533b295278c87879ac4f7a7c9f512cd30 --- /dev/null +++ b/avariationalhierarchicalmodelforneuralcrosslingualsummarization/83358d60-727e-4818-9946-019d83454dc4_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b05578d35a0ecb91c9e838c5308f48158d5682d686bd682ef675dcebdf8b1b18 +size 702771 diff --git a/avariationalhierarchicalmodelforneuralcrosslingualsummarization/full.md b/avariationalhierarchicalmodelforneuralcrosslingualsummarization/full.md new file mode 100644 index 0000000000000000000000000000000000000000..96ec5eaeccc8de85e40caac574ae2630442f77b3 --- /dev/null +++ b/avariationalhierarchicalmodelforneuralcrosslingualsummarization/full.md @@ -0,0 +1,448 @@ +# A Variational Hierarchical Model for Neural Cross-Lingual Summarization + +Yunlong Liang $^{1*}$ , Fandong Meng $^{2}$ , Chulun Zhou $^{2,3}$ , Jinan $\mathbf{Xu}^{1\dagger}$ , Yufeng Chen $^{1}$ , Jinsong $\mathbf{Su}^{3}$ and Jie Zhou $^{2}$ + +1Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University, China + +$^{2}$ Pattern Recognition 
Center, WeChat AI, Tencent Inc, China

$^{3}$ School of Informatics, Xiamen University

{yunlongliang, jaxu, chenyf}@bjtu.edu.cn clzhou@stu.xmu.edu.cn

jssu@xmu.edu.cn {fandongmeng,withtomzhou}@tencent.com

# Abstract

The goal of cross-lingual summarization (CLS) is to convert a document in one language (e.g., English) into a summary in another language (e.g., Chinese). Essentially, the CLS task is a combination of machine translation (MT) and monolingual summarization (MS), and thus there exists a hierarchical relationship between MT&MS and CLS. Existing studies on CLS mainly focus on utilizing pipeline methods or jointly training an end-to-end model with an auxiliary MT or MS objective. However, it is very challenging for a model to directly conduct CLS, as it requires the abilities to both translate and summarize. To address this issue, we propose a hierarchical model for the CLS task, based on the conditional variational auto-encoder. The hierarchical model contains two kinds of latent variables, at the local and global levels, respectively. At the local level, there are two latent variables, one for translation and the other for summarization. At the global level, there is another latent variable for cross-lingual summarization, conditioned on the two local-level variables. Experiments on two language directions (English $\Leftrightarrow$ Chinese) verify the effectiveness and superiority of the proposed approach. In addition, we show that our model is able to generate better cross-lingual summaries than comparison models in the few-shot setting.

# 1 Introduction

Cross-lingual summarization (CLS) aims to summarize a document in a source language (e.g., English) into a different language (e.g., Chinese), and can be seen, to some extent, as a combination of machine translation (MT) and monolingual summarization (MS) (Orasan and Chiorean, 2008; Zhu et al., 2019).
CLS can help people efficiently grasp the core points of an article in a foreign language. Against the background of globalization, it has become increasingly important and is now coming into widespread use in real life.

Much research has been devoted to this task. To our knowledge, existing work mainly falls into two categories, i.e., pipeline and end-to-end learning methods. (i) The first category is pipeline-based, adopting either the translation-summarization (Leuski et al., 2003; Ouyang et al., 2019) or the summarization-translation (Wan et al., 2010; Orasan and Chiorean, 2008) paradigm. Although intuitive and straightforward, these methods generally suffer from the error propagation problem. (ii) The second category aims to train an end-to-end model for CLS (Zhu et al., 2019, 2020). For instance, Zhu et al. (2020) focus on using a pre-constructed probabilistic bilingual lexicon to improve the CLS model. Furthermore, some researchers resort to multi-task learning (Takase and Okazaki, 2020; Bai et al., 2021a; Zhu et al., 2019; Cao et al., 2020a,b). Zhu et al. (2019) separately introduce MT and MS to improve CLS. Cao et al. (2020a,b) design several additional training objectives (e.g., MS, back-translation, and reconstruction) to enhance the CLS model. And Xu et al. (2020) utilize a mixed-lingual pre-training method with several auxiliary tasks for CLS.

As pointed out by Cao et al. (2020a), it is challenging for a model to directly conduct CLS, as it requires the abilities to both translate and summarize. Although some methods have used the related tasks (e.g., MT and MS) to help CLS, the hierarchical relationship between MT&MS and CLS is not well modeled, even though it could explicitly enhance the CLS task. Apparently, how to effectively model this hierarchical relationship to exploit MT and MS is one of the core issues, especially when the CLS data are limited.
In many other related NLP tasks (Park et al., 2018; Serban et al., 2017; Shen et al., 2019, 2021), the Conditional Variational Auto-Encoder (CVAE) (Sohn et al., 2015) has shown its superiority in learning hierarchical structure with hierarchical latent variables, and it is often leveraged to capture the semantic connection between an utterance and the corresponding conversational context. Inspired by these works, we attempt to adapt the CVAE to model the hierarchical relationship between MT&MS and CLS.

Therefore, we propose a Variational Hierarchical Model, named VHM, that exploits translation and summarization simultaneously for the CLS task in an end-to-end framework. VHM employs hierarchical latent variables based on the CVAE to learn the hierarchical relationship between MT&MS and CLS. Specifically, VHM contains two kinds of latent variables, at the local and global levels, respectively. First, we introduce two local variables, one for translation and one for summarization. The two local variables are constrained to reconstruct the translation and the source-language summary, respectively. Then, we use the global variable, which is constrained to reconstruct the target-language summary, to explicitly exploit the two local variables for better CLS. This ensures that the global variable captures its relationship with the two local variables without any loss, preventing error propagation. For inference, we use the local and global variables to assist the cross-lingual summarization process.

We validate our proposed training framework on datasets of two different language pairs (Zhu et al., 2019): Zh2EnSum (Chinese $\Rightarrow$ English) and En2ZhSum (English $\Rightarrow$ Chinese). Experiments show that our model achieves consistent improvements in both language directions in terms of both automatic metrics and human evaluation, demonstrating its effectiveness and generalizability.
Few-shot evaluation further suggests that the local and global variables enable our model to generate satisfactory cross-lingual summaries compared to existing related methods.

Our main contributions are as follows:

- We are the first to build a variational hierarchical model via conditional variational auto-encoders that introduces a global variable to combine the local ones for translation and summarization simultaneously for CLS.
- Our model achieves consistent and significant performance gains, and remarkably outperforms most previous state-of-the-art methods after using mBART (Liu et al., 2020).
- Under the few-shot setting, our model still achieves better performance than existing approaches. Particularly, the fewer the data are, the greater the improvement we gain.

# 2 Background

Machine Translation (MT). Given an input sequence in the source language $X_{mt} = \{x_i\}_{i=1}^{|X_{mt}|}$ , the goal of the neural MT model is to produce its translation in the target language $Y_{mt} = \{y_i\}_{i=1}^{|Y_{mt}|}$ . The conditional distribution of the model is:

$$
p_{\theta}\left(Y_{mt} \mid X_{mt}\right) = \prod_{t=1}^{|Y_{mt}|} p_{\theta}\left(y_{t} \mid X_{mt}, y_{1:t-1}\right),
$$

where $\theta$ are the model parameters and $y_{1:t - 1}$ is the partial translation.

Monolingual Summarization (MS). Given an input article in the source language $X_{ms}^{src} = \{x_i^{src}\}_{i=1}^{|X_{ms}^{src}|}$ and the corresponding summary in the same language $X_{ms}^{tgt} = \{x_i^{tgt}\}_{i=1}^{|X_{ms}^{tgt}|}$ , monolingual summarization is formalized as:

$$
p_{\theta}(X_{ms}^{tgt} | X_{ms}^{src}) = \prod_{t=1}^{|X_{ms}^{tgt}|} p_{\theta}(x_{t}^{tgt} | X_{ms}^{src}, x_{1:t-1}^{tgt}).
$$

Cross-Lingual Summarization (CLS). 
In CLS, we aim to learn a model that can generate a summary in the target language $Y_{cls} = \{y_i\}_{i=1}^{|Y_{cls}|}$ for a given article in the source language $X_{cls} = \{x_i\}_{i=1}^{|X_{cls}|}$ . Formally:

$$
p_{\theta}\left(Y_{cls} \mid X_{cls}\right) = \prod_{t=1}^{|Y_{cls}|} p_{\theta}\left(y_{t} \mid X_{cls}, y_{1:t-1}\right).
$$

Conditional Variational Auto-Encoder (CVAE). The CVAE (Sohn et al., 2015) consists of a prior network and a recognition (posterior) network, where the latter guides the learning of the prior network via the Kullback-Leibler (KL) divergence (Kingma and Welling, 2013). An example is the variational neural MT model (Zhang et al., 2016a; Su et al., 2018a; McCarthy et al., 2020; Su et al., 2018c), which introduces a random latent variable $\mathbf{z}$ into the neural MT conditional distribution:

$$
p_{\theta}\left(Y_{mt} \mid X_{mt}\right) = \int_{\mathbf{z}} p_{\theta}\left(Y_{mt} \mid X_{mt}, \mathbf{z}\right) \cdot p_{\theta}(\mathbf{z} \mid X_{mt}) d\mathbf{z}. \tag{1}
$$

Given a source sentence $X_{mt}$ , a latent variable $\mathbf{z}$ is first sampled by the prior network from the encoder states, and the target sentence is then generated by the decoder: $Y_{mt} \sim p_{\theta}(Y_{mt}|X_{mt},\mathbf{z})$ , where $\mathbf{z} \sim p_{\theta}(\mathbf{z}|X_{mt})$ .

As it is hard to marginalize Eq. 
1, the CVAE training objective is a variational lower bound of the conditional log-likelihood:

$$
\begin{array}{l} \mathcal{L}(\theta, \phi; X_{mt}, Y_{mt}) = -\mathrm{KL}\left(q_{\phi}\left(\mathbf{z}^{\prime} \mid X_{mt}, Y_{mt}\right) \| p_{\theta}(\mathbf{z} \mid X_{mt})\right) \\ + \mathbb{E}_{q_{\phi}(\mathbf{z}^{\prime} | X_{mt}, Y_{mt})}\left[ \log p_{\theta}(Y_{mt} | \mathbf{z}, X_{mt}) \right] \\ \leq \log p(Y_{mt} | X_{mt}), \\ \end{array}
$$

where $\phi$ are the parameters of the CVAE.

# 3 Methodology

Fig. 1 gives an overview of our model, which consists of four parts: the encoder, the variational hierarchical modules, the decoder, and the training and inference procedure. Specifically, we aim to explicitly exploit MT and MS for CLS simultaneously. Therefore, we first use the encoder (§ 3.1) to prepare representations for the variational hierarchical modules (§ 3.2), which learn the two local variables used by the global variable for CLS. Then, we introduce the global variable into the decoder (§ 3.3). Finally, we elaborate on the training and inference process (§ 3.4).

# 3.1 Encoder

Our model is based on the Transformer (Vaswani et al., 2017) framework. As shown in Fig. 1, the encoder takes six types of inputs, $\{X_{mt}, X_{ms}^{src}, X_{cls}, Y_{mt}, X_{ms}^{tgt}, Y_{cls}\}$ , among which $Y_{mt}$ , $X_{ms}^{tgt}$ , and $Y_{cls}$ are only used for training the recognition networks. Taking $X_{mt}$ as an example, the encoder maps the input $X_{mt}$ into a sequence of continuous representations whose size varies with respect to the source sequence length. 
Specifically, the encoder consists of $N_e$ stacked layers, and each layer includes two sub-layers: a multi-head self-attention (SelfAtt) sub-layer and a position-wise feed-forward network (FFN) sub-layer:

$$
\begin{array}{l} \mathbf{s}_{e}^{\ell} = \operatorname{SelfAtt}\left(\mathbf{h}_{e}^{\ell-1}\right) + \mathbf{h}_{e}^{\ell-1}, \\ \mathbf{h}_{e}^{\ell} = \mathrm{FFN}(\mathbf{s}_{e}^{\ell}) + \mathbf{s}_{e}^{\ell}, \\ \end{array}
$$

where $\mathbf{h}_e^\ell$ denotes the state of the $\ell$ -th encoder layer and $\mathbf{h}_e^0$ denotes the initial embeddings.

Through the encoder, we prepare the representations of $\{X_{mt}, X_{ms}^{src}, X_{cls}\}$ for training the prior networks, the encoder, and the decoder. Taking $X_{mt}$ as an example, we follow Zhang et al. (2016a) and apply

![](images/f644700c3d63dc9576a2fbab9ead748415e56cdaf8bbb6569fd5cbebe6dae1e5.jpg)
Figure 1: Overview of the proposed VHM framework. The local variables $\mathbf{z}_{mt}$ and $\mathbf{z}_{ms}$ are tailored for translation and summarization, respectively; the global one, $\mathbf{z}_{cls}$ , is for cross-lingual summarization and conditions not only on the input but also on $\mathbf{z}_{mt}$ and $\mathbf{z}_{ms}$ . The solid grey lines indicate the training process, in which $\{\mathbf{z}_{mt}',\mathbf{z}_{ms}',\mathbf{z}_{cls}'\}$ are generated from the corresponding posterior distributions predicted by the recognition networks, which guide the learning of the prior networks. The dashed red lines indicate the inference process, in which $\{\mathbf{z}_{mt},\mathbf{z}_{ms},\mathbf{z}_{cls}\}$ are generated from the corresponding prior distributions predicted by the prior networks. The encoder is shared by all tasks with a bilingual vocabulary. 
mean-pooling over the output $\mathbf{h}_e^{N_e,X_{mt}}$ of the $N_{e}$ -th encoder layer:

$$
\mathbf{h}_{X_{mt}} = \frac{1}{|X_{mt}|} \sum_{i=1}^{|X_{mt}|} (\mathbf{h}_{e,i}^{N_{e}, X_{mt}}).
$$

Similarly, we obtain $\mathbf{h}_{X_{ms}^{src}}$ and $\mathbf{h}_{X_{cls}}$ .

For training the recognition networks, we obtain the representations of $\{Y_{mt},X_{ms}^{tgt},Y_{cls}\}$ ; taking $Y_{mt}$ as an example, we calculate:

$$
\mathbf{h}_{Y_{mt}} = \frac{1}{|Y_{mt}|} \sum_{i=1}^{|Y_{mt}|} \left(\mathbf{h}_{e,i}^{N_{e}, Y_{mt}}\right).
$$

Similarly, we obtain $\mathbf{h}_{X_{ms}^{tgt}}$ and $\mathbf{h}_{Y_{cls}}$ .

# 3.2 Variational Hierarchical Modules

Firstly, we design two local latent variational modules to learn the translation distribution from MT pairs and the summarization distribution from MS pairs, respectively. Then, conditioned on them, we introduce a global latent variational module to explicitly exploit them.

# 3.2.1 Local: Translation and Summarization

Translation. To capture the translation of the paired sentences, we introduce a local variable $\mathbf{z}_{mt}$ that is responsible for generating the target information. Inspired by Wang and Wan (2019), we use an isotropic Gaussian distribution as the prior distribution of $\mathbf{z}_{mt}$ : $p_{\theta}(\mathbf{z}_{mt}|X_{mt})\sim \mathcal{N}(\boldsymbol{\mu}_{mt},\sigma_{mt}^2\mathbf{I})$ , where $\mathbf{I}$ denotes the identity matrix and we have

$$
\boldsymbol{\mu}_{mt} = \operatorname{MLP}_{\theta}^{mt}\left(\mathbf{h}_{X_{mt}}\right), \tag{2}
$$

$$
\boldsymbol{\sigma}_{mt} = \mathrm{Softplus}(\mathrm{MLP}_{\theta}^{mt}(\mathbf{h}_{X_{mt}})),
$$

where $\mathrm{MLP}(\cdot)$ is a multi-layer perceptron and $\mathrm{Softplus}(\cdot)$ is a smooth approximation of the ReLU function. 
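The Gaussian parameterization in Eq. 2 can be made concrete: one MLP head produces $\boldsymbol{\mu}$, a Softplus-wrapped head produces $\boldsymbol{\sigma}$, and a sample of $\mathbf{z}_{mt}$ is drawn with the reparameterization trick used later in § 3.4. The following is a minimal pure-Python sketch; the single-hidden-layer MLP, the layer sizes, and the random weights are illustrative assumptions, not the paper's actual configuration:

```python
import math
import random

random.seed(0)

def mlp(x, w1, w2):
    # One-hidden-layer MLP with tanh activation (illustrative; the depth and
    # width of MLP_theta^{mt} are not specified here).
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w1]
    return [sum(wi * hi for wi, hi in zip(row, h)) for row in w2]

def softplus(v):
    # Softplus(x) = log(1 + e^x): smooth approximation of ReLU that keeps
    # every entry of sigma strictly positive.
    return [math.log1p(math.exp(x)) for x in v]

def prior_sample(h_x, w1, w_mu, w_sigma):
    mu = mlp(h_x, w1, w_mu)                      # Eq. 2: mu = MLP(h_X)
    sigma = softplus(mlp(h_x, w1, w_sigma))      # sigma = Softplus(MLP(h_X))
    eps = [random.gauss(0.0, 1.0) for _ in mu]   # reparameterization trick
    z = [m + s * e for m, s, e in zip(mu, sigma, eps)]
    return z, mu, sigma

d_h, d_z = 4, 2  # toy sizes for the pooled state and the latent variable
rnd = lambda r, c: [[random.uniform(-0.5, 0.5) for _ in range(c)] for _ in range(r)]
w1, w_mu, w_sigma = rnd(3, d_h), rnd(d_z, 3), rnd(d_z, 3)

h_x = [0.1, -0.2, 0.3, 0.4]  # stand-in for the mean-pooled h_{X_mt}
z_mt, mu, sigma = prior_sample(h_x, w1, w_mu, w_sigma)
assert len(z_mt) == d_z and all(s > 0 for s in sigma)
```

The recognition networks of Eq. 3 have the same shape; they only differ in conditioning on the concatenation of source and target representations.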
At training, the posterior distribution conditions on both the source input and the target reference, which provides translation information. Therefore, the prior network can learn a tailored translation distribution by approaching the recognition network via the KL divergence (Kingma and Welling, 2013): $q_{\phi}(\mathbf{z}_{mt}^{\prime}|X_{mt},Y_{mt}) \sim \mathcal{N}(\boldsymbol{\mu}_{mt}^{\prime},\boldsymbol{\sigma}_{mt}^{\prime 2}\mathbf{I})$ , where $\boldsymbol{\mu}_{mt}^{\prime}$ and $\boldsymbol{\sigma}_{mt}^{\prime}$ are calculated as:

$$
\boldsymbol{\mu}_{mt}^{\prime} = \operatorname{MLP}_{\phi}^{mt}\left(\mathbf{h}_{X_{mt}}; \mathbf{h}_{Y_{mt}}\right), \tag{3}
$$

$$
\boldsymbol{\sigma}_{mt}^{\prime} = \mathrm{Softplus}(\mathrm{MLP}_{\phi}^{mt}(\mathbf{h}_{X_{mt}}; \mathbf{h}_{Y_{mt}})),
$$

where $(\cdot;\cdot)$ denotes the concatenation operation.

Summarization. To capture the summarization in MS pairs, we introduce another local variable $\mathbf{z}_{ms}$ , which is responsible for generating the source-language summary. Similar to $\mathbf{z}_{mt}$ , we define its prior distribution as $p_{\theta}(\mathbf{z}_{ms}|X_{ms}^{src})\sim \mathcal{N}(\boldsymbol{\mu}_{ms},\sigma_{ms}^2\mathbf{I})$ , where $\boldsymbol{\mu}_{ms}$ and $\sigma_{ms}$ are calculated as:

$$
\boldsymbol{\mu}_{ms} = \mathrm{MLP}_{\theta}^{ms}\left(\mathbf{h}_{X_{ms}^{src}}\right), \tag{4}
$$

$$
\boldsymbol{\sigma}_{ms} = \mathrm{Softplus}(\mathrm{MLP}_{\theta}^{ms}(\mathbf{h}_{X_{ms}^{src}})).
$$

At training, the posterior distribution conditions on both the source input and the source-language summary that contains the summarization clue, and is thus responsible for guiding the learning of the prior distribution. 
Specifically, we define the posterior distribution as $q_{\phi}(\mathbf{z}_{ms}^{\prime}|X_{ms}^{src},X_{ms}^{tgt}) \sim \mathcal{N}(\boldsymbol{\mu}_{ms}^{\prime},\boldsymbol{\sigma}_{ms}^{\prime 2}\mathbf{I})$ , where $\boldsymbol{\mu}_{ms}^{\prime}$ and $\boldsymbol{\sigma}_{ms}^{\prime}$ are calculated as:

$$
\boldsymbol{\mu}_{ms}^{\prime} = \operatorname{MLP}_{\phi}^{ms}\left(\mathbf{h}_{X_{ms}^{src}}; \mathbf{h}_{X_{ms}^{tgt}}\right), \tag{5}
$$

$$
\boldsymbol{\sigma}_{ms}^{\prime} = \mathrm{Softplus}(\mathrm{MLP}_{\phi}^{ms}(\mathbf{h}_{X_{ms}^{src}}; \mathbf{h}_{X_{ms}^{tgt}})).
$$

# 3.2.2 Global: CLS

After obtaining $\mathbf{z}_{mt}$ and $\mathbf{z}_{ms}$ , we introduce the global variable $\mathbf{z}_{cls}$ , which aims to generate a target-language summary and can simultaneously exploit the local variables for CLS. Specifically, we first encode the source input $X_{cls}$ , condition on the two local variables $\mathbf{z}_{mt}$ and $\mathbf{z}_{ms}$ , and then sample $\mathbf{z}_{cls}$ . We define its prior distribution as $p_{\theta}(\mathbf{z}_{cls}|X_{cls},\mathbf{z}_{mt},\mathbf{z}_{ms})\sim \mathcal{N}(\boldsymbol{\mu}_{cls},\sigma_{cls}^2\mathbf{I})$ , where $\boldsymbol{\mu}_{cls}$ and $\sigma_{cls}$ are calculated as:

$$
\boldsymbol{\mu}_{cls} = \operatorname{MLP}_{\theta}^{cls}\left(\mathbf{h}_{X_{cls}}; \mathbf{z}_{mt}; \mathbf{z}_{ms}\right), \tag{6}
$$

$$
\boldsymbol{\sigma}_{cls} = \mathrm{Softplus}(\mathrm{MLP}_{\theta}^{cls}(\mathbf{h}_{X_{cls}}; \mathbf{z}_{mt}; \mathbf{z}_{ms})).
$$

At training, the posterior distribution conditions on the local variables, the CLS input, and the cross-lingual summary, which combines translation and summarization information. 
Therefore, the posterior distribution can teach the prior distribution. Specifically, we define the posterior distribution as $q_{\phi}(\mathbf{z}_{cls}^{\prime}|X_{cls}, \mathbf{z}_{mt}, \mathbf{z}_{ms}, Y_{cls}) \sim \mathcal{N}(\boldsymbol{\mu}_{cls}^{\prime}, \boldsymbol{\sigma}_{cls}^{\prime 2}\mathbf{I})$ , where $\boldsymbol{\mu}_{cls}^{\prime}$ and $\boldsymbol{\sigma}_{cls}^{\prime}$ are calculated as:

$$
\boldsymbol{\mu}_{cls}^{\prime} = \mathrm{MLP}_{\phi}^{cls}(\mathbf{h}_{X_{cls}}; \mathbf{z}_{mt}; \mathbf{z}_{ms}; \mathbf{h}_{Y_{cls}}),
$$

$$
\boldsymbol{\sigma}_{cls}^{\prime} = \operatorname{Softplus}\left(\mathrm{MLP}_{\phi}^{cls}\left(\mathbf{h}_{X_{cls}}; \mathbf{z}_{mt}; \mathbf{z}_{ms}; \mathbf{h}_{Y_{cls}}\right)\right). \tag{7}
$$

# 3.3 Decoder

The decoder adopts a structure similar to the encoder's, and each of the $N_{d}$ decoder layers includes an additional cross-attention sub-layer (CrossAtt):

$$
\mathbf{s}_{d}^{\ell} = \operatorname{SelfAtt}\left(\mathbf{h}_{d}^{\ell-1}\right) + \mathbf{h}_{d}^{\ell-1},
$$

$$
\mathbf{c}_{d}^{\ell} = \mathrm{CrossAtt}(\mathbf{s}_{d}^{\ell}, \mathbf{h}_{e}^{N_{e}}) + \mathbf{s}_{d}^{\ell},
$$

$$
\mathbf{h}_{d}^{\ell} = \mathrm{FFN}(\mathbf{c}_{d}^{\ell}) + \mathbf{c}_{d}^{\ell},
$$

where $\mathbf{h}_d^\ell$ denotes the state of the $\ell$ -th decoder layer.

As shown in Fig. 1, we first obtain the two local variables, either from the posterior distributions predicted by the recognition networks (the training process, solid grey lines) or from the prior distributions predicted by the prior networks (the inference process, dashed red lines). Then, conditioned on the two local variables, we generate the global variable $(\mathbf{z}_{cls}^{\prime} / \mathbf{z}_{cls})$ via the posterior (training) or prior (inference) network. 
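The two-level sampling flow just described, local variables first and then the global variable conditioned on them, can be sketched as follows. The `gaussian_head` stand-in replaces the learned prior networks of Eqs. 2, 4, and 6 with a fixed toy rule; only the hierarchical conditioning structure is the point of this sketch:

```python
import random

random.seed(0)

def sample_diag_gaussian(mu, sigma):
    # z = mu + sigma * eps: the reparameterization trick for a diagonal Gaussian.
    return [m + s * random.gauss(0.0, 1.0) for m, s in zip(mu, sigma)]

def gaussian_head(x, dim):
    # Stand-in for an MLP prior network mapping a conditioning vector to
    # (mu, sigma). The real networks learn this mapping; a fixed toy rule
    # keeps the hierarchy itself visible.
    s = sum(x) / len(x)
    return [s] * dim, [0.1] * dim

d_z = 2
h_mt, h_ms, h_cls = [0.2, 0.4], [0.1, -0.3], [0.5, 0.0]  # pooled encoder states

# Local priors: p(z_mt | X_mt) and p(z_ms | X_ms^src).
z_mt = sample_diag_gaussian(*gaussian_head(h_mt, d_z))
z_ms = sample_diag_gaussian(*gaussian_head(h_ms, d_z))

# Global prior conditions on the input AND both local variables,
# p(z_cls | X_cls, z_mt, z_ms), via concatenation as in Eq. 6.
cond = h_cls + z_mt + z_ms
z_cls = sample_diag_gaussian(*gaussian_head(cond, d_z))
assert len(z_cls) == d_z
```

At training time the same flow runs through the recognition networks instead, with the reference summaries appended to each conditioning vector.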
Finally, we incorporate $\mathbf{z}_{cls}^{(t)}$ into the state of the top layer of the decoder with a projection layer:

$$
\mathbf{o}_{t} = \operatorname{Tanh}\left(\mathbf{W}_{p}\left[ \mathbf{h}_{d,t}^{N_{d}}; \mathbf{z}_{cls}^{(t)} \right] + \mathbf{b}_{p}\right), \tag{8}
$$

where $\mathbf{W}_p$ and $\mathbf{b}_p$ are trainable parameters, and $\mathbf{h}_{d,t}^{N_d}$ is the hidden state at time-step $t$ of the $N_{d}$ -th decoder layer. Then, $\mathbf{o}_t$ is fed into a linear transformation and softmax layer to predict the probability distribution of the next target token:

$$
\mathbf{p}_{t} = \operatorname{Softmax}(\mathbf{W}_{o}\mathbf{o}_{t} + \mathbf{b}_{o}),
$$

where $\mathbf{W}_o$ and $\mathbf{b}_o$ are trainable parameters.

# 3.4 Training and Inference

The model is trained to maximize the conditional log-likelihood; since the marginal likelihood is intractable, this is converted to the following variational lower bound, which is maximized during training:

$$
\begin{array}{l} \mathcal{J}(\theta, \phi; X_{cls}, X_{mt}, X_{ms}^{src}, Y_{cls}, Y_{mt}, X_{ms}^{tgt}) = \\ -\operatorname{KL}\left(q_{\phi}\left(\mathbf{z}_{mt}^{\prime} \mid X_{mt}, Y_{mt}\right) \| p_{\theta}\left(\mathbf{z}_{mt} \mid X_{mt}\right)\right) \\ -\mathrm{KL}\left(q_{\phi}\left(\mathbf{z}_{ms}^{\prime} \mid X_{ms}^{src}, X_{ms}^{tgt}\right) \| p_{\theta}\left(\mathbf{z}_{ms} \mid X_{ms}^{src}\right)\right) \\ -\mathrm{KL}\left(q_{\phi}\left(\mathbf{z}_{cls}^{\prime} \mid X_{cls}, \mathbf{z}_{mt}, \mathbf{z}_{ms}, Y_{cls}\right) \| p_{\theta}\left(\mathbf{z}_{cls} \mid X_{cls}, \mathbf{z}_{mt}, \mathbf{z}_{ms}\right)\right) \\ + \mathbb{E}_{q_{\phi}}\left[ \log p_{\theta}\left(Y_{mt} \mid X_{mt}, \mathbf{z}_{mt}\right) 
\right] \\ + \mathbb{E}_{q_{\phi}}\left[ \log p_{\theta}\left(X_{ms}^{tgt} \mid X_{ms}^{src}, \mathbf{z}_{ms}\right) \right] \\ + \mathbb{E}_{q_{\phi}}\left[ \log p_{\theta}\left(Y_{cls} \mid X_{cls}, \mathbf{z}_{cls}, \mathbf{z}_{mt}, \mathbf{z}_{ms}\right) \right], \\ \end{array}
$$

where the variational lower bound includes reconstruction terms and KL divergence terms based on the three hierarchical variables. We use the reparameterization trick (Kingma and Welling, 2013) to estimate the gradients of the prior and recognition networks (Zhao et al., 2017).

During inference, the prior networks of MT and MS first generate the local variables. Then, conditioned on them, the global variable is produced by the prior network of CLS. Finally, only the global variable is fed into the decoder, which corresponds to the red dashed arrows in Fig. 1.

# 4 Experiments

# 4.1 Datasets and Metrics

Datasets. We evaluate the proposed approach on the Zh2EnSum and En2ZhSum datasets released by Zhu et al. (2019). Zh2EnSum and En2ZhSum are originally derived from Hu et al. (2015) and from Hermann et al. (2015) and Zhu et al. (2018), respectively. Both the Chinese-to-English and English-to-Chinese test sets are manually corrected. The training data involved in our experiments are listed in Tab. 1.

Zh2EnSum. It is a Chinese-to-English summarization dataset with 1,699,713 Chinese short texts (104 Chinese characters on average) paired with Chinese (18 Chinese characters on average) and English (14 tokens on average) short summaries. The dataset is split into 1,693,713 training pairs, 3,000 validation pairs, and 3,000 test pairs.
| Benchmark | ID | Task | Dataset | Size |
| --- | --- | --- | --- | --- |
| Zh2EnSum | D1 | CLS | Zh2EnSum | 1,693,713 |
| Zh2EnSum | D2 | MS | LCSTS | 1,693,713 |
| Zh2EnSum | D3 | MT | LDC | 2.08M |
| En2ZhSum | D4 | CLS | En2ZhSum | 364,687 |
| En2ZhSum | D5 | MS | ENSUM | 364,687 |
| En2ZhSum | D3 | MT | LDC | 2.08M |
Table 1: Involved training data. LCSTS (Hu et al., 2015) is a Chinese summarization dataset. The LDC corpora include LDC2000T50, LDC2002L27, LDC2002T01, LDC2002E18, LDC2003E07, LDC2003E14, LDC2003T17, and LDC2004T07. ENSUM consists of CNN/Dailymail (Hermann et al., 2015) and MSMO (Zhu et al., 2018).
| Models (Zh2EnSum) | Size (M) | Train (S) | Data |
| --- | --- | --- | --- |
| ATS-A | 137.60 | 30 | D1&D3 |
| MS-CLS | 211.41 | 48 | D1&D2 |
| MT-CLS | 208.84 | 63 | D1&D3 |
| MT-MS-CLS | 114.90 | 24 | D1&D2&D3 |
| VHM | 117.40 | 27 | D1&D2&D3 |
Table 2: Model details on Zh2EnSum. Size (M): the number of trainable parameters (in millions). Train (S): the number of seconds each model needs to train 100 batches of the cross-lingual summarization task with the same batch size (3072). Data: the training data, as listed in Tab. 1.
| Models (En2ZhSum) | Size (M) | Train (S) | Data |
| --- | --- | --- | --- |
| ATS-A | 115.05 | 25 | D4&D3 |
| MS-CLS | 190.23 | 65 | D4&D5 |
| MT-CLS | 148.16 | 72 | D4&D3 |
| MT-MS-CLS | 155.50 | 32 | D4&D5&D3 |
| VHM | 158.00 | 36 | D4&D5&D3 |
Table 3: Model details on En2ZhSum. Size (M): the number of trainable parameters (in millions). Train (S): the number of seconds each model needs to train 100 batches of the cross-lingual summarization task with the same batch size (3072). Data: the training data, as listed in Tab. 1.

The training data used in multi-task learning, the model size, and the training time are listed in Tab. 2.

En2ZhSum. It is an English-to-Chinese summarization dataset with 370,687 English documents (755 tokens on average) paired with multi-sentence English summaries (55 tokens on average) and Chinese summaries (96 Chinese characters on average). The dataset is split into 364,687 training pairs, 3,000 validation pairs, and 3,000 test pairs. The training data used in multi-task learning, the model size, and the training time are listed in Tab. 3.

Metrics. Following Zhu et al. (2020), 1) we evaluate all models with the standard ROUGE metric (Lin, 2004), reporting the F1 scores for ROUGE-1, ROUGE-2, and ROUGE-L. All ROUGE scores
| M# | Models | RG1 (Zh2En) | RG2 (Zh2En) | RGL (Zh2En) | MVS (Zh2En) | RG1 (En2Zh) | RG2 (En2Zh) | RGL (En2Zh) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| M1 | TETran (Zhu et al., 2019) | 24.34 | 9.14 | 20.13 | 0.64 | 28.19 | 11.40 | 25.77 |
| M2 | TLTran (Zhu et al., 2019) | 35.45 | 16.86 | 31.28 | 16.90 | 32.17 | 13.85 | 29.43 |
| M3 | TNCLS (Zhu et al., 2019) | 38.85 | 21.93 | 35.05 | 19.43 | 36.82 | 18.72 | 33.20 |
| M4 | ATS-A (Zhu et al., 2020) | 40.68 | 24.12 | 36.97 | 22.15 | 40.47 | 22.21 | 36.89 |
| M5 | MS-CLS (Zhu et al., 2019) | 40.34 | 22.65 | 36.39 | 21.09 | 38.25 | 20.20 | 34.76 |
| M6 | MT-CLS (Zhu et al., 2019) | 40.25 | 22.58 | 36.21 | 21.06 | 40.23 | 22.32 | 36.59 |
| M7 | MS-CLS-Rec (Cao et al., 2020a) | 40.97 | 23.20 | 36.96 | NA | 38.12 | 16.76 | 33.86 |
| M8 | MS-CLS* | 40.44 | 22.19 | 36.32 | 21.01 | 38.26 | 20.07 | 34.49 |
| M9 | MT-CLS* | 40.05 | 21.72 | 35.74 | 20.96 | 40.14 | 22.36 | 36.45 |
| M10 | MT-MS-CLS (Ours) | 40.65 | 24.02 | 36.69 | 22.17 | 40.34 | 22.35 | 36.44 |
| M11 | VHM (Ours) | 41.36†† | 24.64† | 37.15† | 22.55† | 40.98†† | 23.07†† | 37.12† |
| M12 | mBART (Liu et al., 2020) | 43.61 | 25.14 | 38.79 | 23.47 | 41.55 | 23.27 | 37.22 |
| M13 | MLPT (Xu et al., 2020) | 43.50 | 25.41 | 29.66 | NA | 41.62 | 23.35 | 37.26 |
| M14 | VHM + mBART (Ours) | 43.97† | 25.61† | 39.19† | 23.88 | 41.95† | 23.54† | 37.67† |
Table 4: ROUGE F1 scores (\%) and MoverScore scores (\%) on the Zh2EnSum test set, and ROUGE F1 scores (\%) on the En2ZhSum test set. RG and MVS refer to ROUGE and MoverScore, respectively. “*” denotes results obtained by running the authors' released code. “NA” indicates no such result in the original paper. “†” and “††” indicate statistically significant improvements (M11 vs. M4 and M14 vs. M12) with t-test $p < 0.05$ and $p < 0.01$ , respectively. “VHM + mBART” means that we use mBART weights to initialize our VHM.

are reported with the $95\%$ confidence interval measured by the official script; 2) we also evaluate the quality of English summaries in Zh2EnSum with MoverScore (Zhao et al., 2019).

# 4.2 Implementation Details

In this paper, we train all models using the standard Transformer (Vaswani et al., 2017) in the Base setting. For the other hyper-parameters, we mainly follow the settings described in Zhu et al. (2019, 2020) for a fair comparison. For more details, please refer to Appendix A.

# 4.3 Comparison Models

Pipeline Models. TETran (Zhu et al., 2019). It first translates the original article into the target language via Google Translator and then summarizes the translated text via LexRank (Erkan and Radev, 2004). TLTran (Zhu et al., 2019). It first summarizes the original article via a Transformer-based monolingual summarization model and then translates the summary into the target language via Google Translator.

End-to-End Models. TNCLS (Zhu et al., 2019). It directly uses the de-facto Transformer (Vaswani et al., 2017) to train an end-to-end CLS system. ATS-A (Zhu et al., 2020). It is an efficient model that attends to a pre-constructed probabilistic bilingual lexicon to enhance CLS. MS-CLS (Zhu et al., 2019). It simultaneously performs summary generation for both the CLS and MS tasks and calculates the total loss. MT-CLS (Zhu et al., 2019). It alternately trains the CLS and MT tasks. MS-CLS-Rec (Cao et al., 2020a). 
It jointly trains the MS and CLS systems with a reconstruction loss to mutually map the source and target representations. mBART (Liu et al., 2020). We use mBART (mbart.cc25) as model initialization and fine-tune it on the CLS task. MLPT (Mixed-Lingual Pre-training) (Xu et al., 2020). It applies mixed-lingual pre-training that leverages six related tasks, covering both cross-lingual tasks such as translation and monolingual tasks such as masked language models. MT-MS-CLS. It is our strong baseline, implemented by alternately training CLS, MT, and MS. Here, we keep the datasets used for MT and MS consistent with Zhu et al. (2019) for a fair comparison.

# 4.4 Main Results

Overall, we separate the models into three parts in Tab. 4: the pipeline, end-to-end, and multi-task

![](images/57e9f844f82efbb3b1b889e2127b6dcb8f7f394dd4ecb39d1edccb3ce8de1a64.jpg)
Figure 2: ROUGE F1 scores (\%) and MoverScore scores (\%) on the Zh2EnSum test set in the few-shot setting. $\mathbf{x}\%$ means that $\mathbf{x}\%$ of the CLS training dataset is used; e.g., $0.1\%$ represents that $0.1\%$ of the training dataset (about 1.7k instances) is used for training. The performance gap “Gap-H” (orange line) between “VHM” and “MT-MS-CLS” grows steadily as the amount of CLS training data decreases, similar to the performance gap “Gap-D” (red line) between “VHM” and “ATS-A”.

![](images/e079326a67d2d5f7ecaf52e75feaee5eefed055c741fa6a24909810d3913a1ce.jpg)

![](images/a2d3e3e06074c5b20f656613b8c55cf62f415b386c0db8acddc46fc0c00c9359.jpg)

![](images/0640d86c7064333d1167ff96bc0e2b49a49aa993cc4e0d8db263ef34e999fd82.jpg)

settings. In each part, we show the results of existing studies, our re-implemented baselines, and our approach, i.e., VHM, on the Zh2EnSum and En2ZhSum test sets.

Results on Zh2EnSum. 
Compared against the pipeline and end-to-end methods, VHM substantially outperforms all of them, e.g., it surpasses the previous best model “ATS-A” by a large margin of 0.68/0.52/0.18/0.40↑ points on RG1/RG2/RGL/MVS, respectively. Under the multi-task setting, compared to the existing best model “MS-CLS-Rec”, our VHM also consistently boosts the performance on three metrics (i.e., 0.39↑, 1.44↑, and 0.19↑ ROUGE points on RG1/RG2/RGL, respectively), showing its effectiveness. Our VHM also significantly surpasses our strong baseline “MT-MS-CLS” by 0.71/0.62/0.46/0.38↑ points on RG1/RG2/RGL/MVS, respectively, demonstrating the superiority of our model again.

After using mBART as model initialization, our VHM achieves state-of-the-art results on all metrics.

Results on En2ZhSum. Compared against the pipeline, end-to-end, and multi-task methods, our VHM presents remarkable ROUGE improvements over the existing best model “ATS-A” by a large margin, with about $0.51 / 0.86 / 0.23 \uparrow$ ROUGE gains on RG1/RG2/RGL, respectively. These results suggest that VHM consistently performs well in different language directions.

Our approach still notably surpasses our strong baseline “MT-MS-CLS” in terms of all metrics, which again shows the generalizability and superiority of our model.

# 4.5 Few-Shot Results

Due to the difficulty of acquiring cross-lingual summarization data (Zhu et al., 2019), we conduct experiments to investigate model performance when the CLS training data are limited, i.e., few-shot experiments. Specifically, we randomly choose $0.1\%$ , $1\%$ , $10\%$ , and $50\%$ of the CLS training dataset to conduct experiments. The results are shown in Fig. 2 and Fig. 3.

Results on Zh2EnSum. Fig. 2 shows that VHM significantly surpasses all comparison models under each setting. 
Particularly, under the $0.1\%$ setting, our model still achieves better performance than all baselines, suggesting that our variational hierarchical model also works well in the few-shot setting. Besides, we find that the performance gap between the comparison models and VHM grows as the amount of CLS training data becomes smaller. This is because a relatively larger proportion of translation and summarization data is then used, so the influence of MT and MS becomes greater, effectively strengthening the CLS model. Particularly, the performance gap “Gap-H” between MT-MS-CLS and VHM also grows, even though both models utilize the same data. This shows that the hierarchical relationship between MT&MS and CLS makes substantial contributions to the VHM model in terms of all four metrics. Consequently, our VHM achieves comparably stable performance.

Results on En2ZhSum. From Fig. 3, we observe findings similar to those on Zh2EnSum: VHM significantly outperforms all comparison models under each setting, again showing the generalizability and superiority of our model in the few-shot setting.

![](images/2fc4a85ebf093cc447a8d710ce291a2993c7515d26b6955991f45ae2f65274a5.jpg)
Figure 3: ROUGE F1 scores (\%) on the test set when using different amounts of CLS training data. The performance gap “Gap-H” (orange line) between “VHM” and “MT-MS-CLS” grows steadily on ROUGE-2 as the amount of CLS training data decreases, similar to the performance gap “Gap-D” (red line) between “VHM” and “ATS-A”.

![](images/414850cfca24632ad9a1109a79773bd980a01761d00e3b2e11342baa03501912.jpg)

![](images/f48d5924ca829a506e1ab9062c93146543214b81121ade28ccc9a291f8411685.jpg)
| # | Models | RG1/RG2/RGL/MVS (Zh2EnSum) | RG1/RG2/RGL (En2ZhSum) |
| --- | --- | --- | --- |
| 0 | VHM | 41.36/24.64/37.15/22.55 | 40.98/23.07/37.12 |
| 1 | -$\mathbf{z}_{mt}$ | 40.75/23.47/36.48/22.18 | 40.35/22.48/36.55 |
| 2 | -$\mathbf{z}_{ms}$ | 40.69/23.34/36.35/22.12 | 40.57/22.79/36.71 |
| 3 | -$\mathbf{z}_{mt}$&$\mathbf{z}_{ms}$ | 40.45/22.97/36.03/22.36 | 39.98/21.91/36.33 |
| 4 | -$\mathbf{z}_{cls}$ | 39.77/22.41/34.87/21.62 | 39.76/21.69/35.99 |
| 5 | -hierarchy | 40.47/22.64/34.96/21.78 | 39.67/21.79/35.87 |
Table 5: Ablation results (in the full setting). Row 1 denotes that we remove the local variable $\mathbf{z}_{mt}$ and sample $\mathbf{z}_{cls}$ from the source input and the other local variable $\mathbf{z}_{ms}$ ; similarly for row 2. Row 3 denotes that we remove both local variables $\mathbf{z}_{mt}$ and $\mathbf{z}_{ms}$ and sample $\mathbf{z}_{cls}$ only from the source input. Row 4 means that we remove the global variable $\mathbf{z}_{cls}$ and directly attend to the local variables $\mathbf{z}_{mt}$ and $\mathbf{z}_{ms}$ in Eq. 8. Row 5 represents that we keep the three latent variables but remove the hierarchical relation between $\mathbf{z}_{cls}$ and $\mathbf{z}_{mt} \& \mathbf{z}_{ms}$ .

# 5 Analysis

# 5.1 Ablation Study

We conduct ablation studies to investigate how well the local and global variables of our VHM work. Removing the variables as listed in Tab. 5, we have the following findings.

(1) Rows $1\sim 3$ vs. row 0 show that the model performs worse, especially when both local variables are removed (row 3), because the explicit translation or summarization information (or both) provided by the local variables, which is important to CLS, is missing. Besides, row 3 indicates that sampling $\mathbf{z}_{cls}$ from the source input alone leads to poor performance, showing the necessity of the hierarchical structure, i.e., using the global variable to exploit the local ones.
(2) Rows $4\sim 5$ vs. row 0 show that directly attending to the local translation and summarization variables cannot achieve good results, due to the lack of a global combination of them. This shows that the variational hierarchical design, i.e., using a global variable to exploit and combine the local ones, is indeed necessary.

# 5.2 Human Evaluation

Following Zhu et al. (2019, 2020), we conduct a human evaluation on 25 random samples from each of the Zh2EnSum and En2ZhSum test sets. 
We compare the summaries generated by our methods (MT-MS-CLS and VHM) with the summaries generated by ATS-A, MS-CLS, and MT-CLS in the full setting and the few-shot setting $(0.1\%)$ , respectively. We invite three graduate students to compare the generated summaries with the human-corrected references and assess each summary from three independent perspectives:

1. How informative (i.e., IF) is the summary?
2. How concise (i.e., CC) is the summary?
3. How fluent and grammatical (i.e., FL) is the summary?

Each property is assessed with a score from 1 (worst) to 5 (best). The average results are presented in Tab. 6 and Tab. 7.

Tab. 6 shows the results in the full setting. We find that our VHM outperforms all comparison models in all three aspects in both language directions, which further demonstrates the effectiveness and superiority of our model.

Tab. 7 shows the results in the few-shot setting, where only $0.1\%$ of the CLS training data is used for all models. We find that our VHM still performs better than all other models in all three perspectives on both datasets, again suggesting its generalizability and effectiveness under different settings.

# 6 Related Work

Cross-Lingual Summarization. Conventional cross-lingual summarization methods mainly focus on incorporating bilingual information into
| Models | IF (Zh2En) | CC (Zh2En) | FL (Zh2En) | IF (En2Zh) | CC (En2Zh) | FL (En2Zh) |
| --- | --- | --- | --- | --- | --- | --- |
| ATS-A | 3.44 | 4.16 | 3.98 | 3.12 | 3.31 | 3.28 |
| MS-CLS | 3.12 | 4.08 | 3.76 | 3.04 | 3.22 | 3.12 |
| MT-CLS | 3.36 | 4.24 | 4.14 | 3.18 | 3.46 | 3.36 |
| MT-MS-CLS | 3.42 | 4.46 | 4.22 | 3.24 | 3.48 | 3.42 |
| VHM | 3.56 | 4.54 | 4.38 | 3.36 | 3.54 | 3.48 |
Table 6: Human evaluation results in the full setting. IF, CC, and FL denote informative, concise, and fluent, respectively.
| Models | IF (Zh2EnSum) | CC (Zh2EnSum) | FL (Zh2EnSum) | IF (En2ZhSum) | CC (En2ZhSum) | FL (En2ZhSum) |
| --- | --- | --- | --- | --- | --- | --- |
| ATS-A | 2.26 | 2.96 | 2.82 | 2.04 | 2.58 | 2.68 |
| MS-CLS | 2.24 | 2.84 | 2.78 | 2.02 | 2.52 | 2.64 |
| MT-CLS | 2.38 | 3.02 | 2.88 | 2.18 | 2.74 | 2.76 |
| MT-MS-CLS | 2.54 | 3.08 | 2.92 | 2.24 | 2.88 | 2.82 |
| VHM | 2.68 | 3.16 | 3.08 | 2.56 | 3.06 | 2.88 |
Table 7: Human evaluation results in the few-shot setting (0.1%).

the pipeline methods (Leuski et al., 2003; Ouyang et al., 2019; Orasan and Chiorean, 2008; Wan et al., 2010; Wan, 2011; Yao et al., 2015; Zhang et al., 2016b), i.e., translate-then-summarize or summarize-then-translate. Due to the difficulty of acquiring cross-lingual summarization datasets, some previous studies focus on constructing datasets (Ladhak et al., 2020; Scialom et al., 2020; Yela-Bello et al., 2021; Zhu et al., 2019; Hasan et al., 2021; Perez-Beltrachini and Lapata, 2021; Varab and Schluter, 2021), mixed-lingual pre-training (Xu et al., 2020), knowledge distillation (Nguyen and Tuan, 2021), contrastive learning (Wang et al., 2021), or zero-shot approaches (Ayana et al., 2018; Duan et al., 2019; Dou et al., 2020), i.e., using machine translation (MT) or monolingual summarization (MS) or both to train the CLS system. Among them, Zhu et al. (2019) propose a round-trip translation strategy to obtain large-scale CLS datasets and present two multi-task learning methods for CLS. Based on this dataset, Zhu et al. (2020) use an end-to-end model that attends to a pre-constructed probabilistic bilingual lexicon to improve CLS. To further enhance CLS, some studies resort to a shared decoder (Bai et al., 2021a), more pseudo training data (Takase and Okazaki, 2020), or training on more related tasks (Cao et al., 2020b,a; Bai et al., 2021b). Wang et al. (2022) build a benchmark dataset for CLS in the dialogue domain. Different from these works, we propose a variational hierarchical model that introduces a global variable to simultaneously exploit and combine the local translation variable in MT pairs and the local summarization variable in MS pairs for CLS, achieving better results.

Conditional Variational Auto-Encoder. CVAE has demonstrated its superiority in many fields (Sohn et al., 2015; Liang et al., 2021a; Zhang et al., 2016a; Su et al., 2018b). 
For instance, in dialogue, Shen et al. (2019), Park et al. (2018) and Serban et al. (2017) extend CVAE with hierarchical latent variables to capture the semantic connection between an utterance and its context. Although CVAE has been widely used in NLP tasks, its adaptation to cross-lingual summarization for modeling hierarchical relationships is non-trivial and, to the best of our knowledge, has never been investigated in CLS before.

Multi-Task Learning. Conventional multi-task learning (MTL) (Caruana, 1997), which trains a model on multiple related tasks to improve representation learning and generalization, has been successfully applied in NLP (Collobert and Weston, 2008; Deng et al., 2013; Liang et al., 2021d,c,b). In CLS, conventional MTL has been explored to incorporate additional training data (MS, MT) into models (Zhu et al., 2019; Takase and Okazaki, 2020; Cao et al., 2020a). In this work, we instead focus on modeling the relations between the auxiliary tasks during training to make the most of them for better CLS.

# 7 Conclusion

In this paper, we propose to enhance the CLS model by simultaneously exploiting MT and MS. Given the hierarchical relationship between MT & MS and CLS, we propose a variational hierarchical model to explicitly exploit and combine them in the CLS process. Experiments on Zh2EnSum and En2ZhSum show that our model significantly improves the quality of cross-lingual summaries in terms of both automatic metrics and human evaluation. In particular, our model still performs well in the few-shot setting, suggesting its superiority and generalizability.

# Acknowledgements

The research work described in this paper has been supported by the National Key R&D Program of China (2020AAA0108001) and the National Natural Science Foundation of China (No. 61976015, 61976016, 61876198 and 61370130). Liang is supported by the 2021 Tencent Rhino-Bird Research Elite Training Program. 
The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve this paper. + +# References + +Ayana, shi-qi Shen, Yun Chen, Cheng Yang, Zhi-yuan Liu, and Maosong Sun. 2018. Zero-shot crosslingual neural headline generation. IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), 26(12):2319-2327. +Yu Bai, Yang Gao, and Heyan Huang. 2021a. Crosslingual abstractive summarization with limited parallel resources. In Proceedings of ACL-IJCNLP, pages 6910-6924, Online. Association for Computational Linguistics. +Yu Bai, Heyan Huang, Kai Fan, Yang Gao, Zewen Chi, and Boxing Chen. 2021b. Bridging the gap: Crosslingual summarization with compression rate. CoRR, abs/2110.07936. +Yue Cao, Hui Liu, and Xiaojun Wan. 2020a. Jointly learning to align and summarize for neural crosslingual summarization. In Proceedings of ACL, pages 6220-6231, Online. Association for Computational Linguistics. +Yue Cao, Xiaojun Wan, Jinge Yao, and Dian Yu. 2020b. Multisumm: Towards a unified model for multilingual abstractive summarization. In Proceedings of AAAI, volume 34, pages 11-18. +Rich Caruana. 1997. Multitask learning. In Machine Learning, pages 41-75. +Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of ICML, page 160-167. +Li Deng, Geoffrey E. Hinton, and Brian Kingsbury. 2013. New types of deep neural network learning for speech recognition and related applications: an overview. 2013 IEEE ICASSP, pages 8599-8603. +Zi-Yi Dou, Sachin Kumar, and Yulia Tsvetkov. 2020. A deep reinforced model for zero-shot cross-lingual summarization with bilingual semantic similarity rewards. In Proceedings of the Fourth Workshop on Neural Generation and Translation, pages 60-68, Online. Association for Computational Linguistics. +Xiangyu Duan, Mingming Yin, Min Zhang, Boxing Chen, and Weihua Luo. 2019. 
Zero-shot cross-lingual abstractive sentence summarization through teaching generation and attention. In Proceedings of ACL, pages 3162–3172, Florence, Italy. Association for Computational Linguistics. +Günes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research (JAIR), 22:457-479. +Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of AISTATS. + +Tahmid Hasan, Abhik Bhattacharjee, Wasi Uddin Ahmad, Yuan-Fang Li, Yong-Bin Kang, and Rifat Shahriyar. 2021. Crosssum: Beyond english-centric cross-lingual abstractive text summarization for $1500+$ language pairs. CoRR, abs/2112.08804. +Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proceedings of NIPS, pages 1693-1701. +Baotian Hu, Qingcai Chen, and Fangze Zhu. 2015. LC-STS: A large scale Chinese short text summarization dataset. In Proceedings of EMNLP, pages 1967-1972, Lisbon, Portugal. Association for Computational Linguistics. +Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR. +Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114. +Faisal Ladhak, Esin Durmus, Claire Cardie, and Kathleen McKeown. 2020. WikiLingua: A new benchmark dataset for cross-lingual abstractive summarization. In Findings of EMNLP, pages 4034-4048, Online. Association for Computational Linguistics. +Anton Leuski, Chin-Yew Lin, Liang Zhou, Ulrich German, Franz Josef Och, and Eduard Hovy. 2003. Cross-lingual c\*st\*rd: English access to Hindi information. ACM Transactions on Asian Language Information Processing, 2(3):245-269. +Yunlong Liang, Fandong Meng, Yufeng Chen, Jinan Xu, and Jie Zhou. 2021a. 
Modeling bilingual conversational characteristics for neural chat translation. In Proceedings of ACL, pages 5711-5724. +Yunlong Liang, Fandong Meng, Jinchao Zhang, Yufeng Chen, Jinan Xu, and Jie Zhou. 2021b. A dependency syntactic knowledge augmented interactive architecture for end-to-end aspect-based sentiment analysis. Neurocomputing. +Yunlong Liang, Fandong Meng, Jinchao Zhang, Yufeng Chen, Jinan Xu, and Jie Zhou. 2021c. An iterative multi-knowledge transfer network for aspect-based sentiment analysis. In *Findings of EMNLP*, pages 1768-1780. +Yunlong Liang, Chulun Zhou, Fandong Meng, Jinan Xu, Yufeng Chen, Jinsong Su, and Jie Zhou. 2021d. Towards making the most of dialogue characteristics for neural chat translation. In Proceedings of EMNLP, pages 67-79. +Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics. + +Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742. +Arya D. McCarthy, Xian Li, Jiatao Gu, and Ning Dong. 2020. Addressing posterior collapse with mutual information for improved variational neural machine translation. In Proceedings of ACL, pages 8512-8525. +Thong Nguyen and Luu Anh Tuan. 2021. Improving neural cross-lingual summarization via employing optimal transport distance for knowledge distillation. CoRR, abs/2112.03473. +Constantin Orăsàn and Oana Andreea Chiorean. 2008. Evaluation of a cross-lingual Romanian-English multi-document summariser. In Proceedings of LREC, Marrakech, Morocco. European Language Resources Association (ELRA). +Jessica Ouyang, Boya Song, and Kathy McKeown. 2019. A robust abstractive system for cross-lingual summarization. 
In Proceedings of NAACL, pages 2025–2031, Minneapolis, Minnesota. Association for Computational Linguistics. +Yookoon Park, Jaemin Cho, and Gunhee Kim. 2018. A hierarchical latent structure for variational conversation modeling. In Proceedings of NAACL, pages 1792-1801, New Orleans, Louisiana. Association for Computational Linguistics. +Laura Perez-Beltrachini and Mirella Lapata. 2021. Models and datasets for cross-lingual summarisation. In Proceedings of EMNLP, pages 9408-9423, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. +Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2020. MLSUM: The multilingual summarization corpus. In Proceedings of EMNLP, pages 8051-8067, Online. Association for Computational Linguistics. +Iulian Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In Proceedings of AAAI. +Lei Shen, Yang Feng, and Haolan Zhan. 2019. Modeling semantic relationship in multi-turn conversations with hierarchical latent variables. In Proceedings of ACL, pages 5497-5502, Florence, Italy. Association for Computational Linguistics. +Lei Shen, Fandong Meng, Jinchao Zhang, Yang Feng, and Jie Zhou. 2021. GTM: A generative triple-wise model for conversational question generation. In Proceedings of ACL, pages 3495-3506, Online. Association for Computational Linguistics. + +Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. In Proceedings of NIPS, pages 3483-3491. +Jinsong Su, Shan Wu, Deyi Xiong, Yaojie Lu, Xianpei Han, and Biao Zhang. 2018a. Variational recurrent neural machine translation. In Proceedings of AAAI. +Jinsong Su, Shan Wu, Biao Zhang, Changxing Wu, Yue Qin, and Deyi Xiong. 2018b. A neural generative autoencoder for bilingual word embeddings. 
Information Sciences, 424:287-300. +Jinsong Su, Jiali Zeng, Deyi Xiong, Yang Liu, Mingxuan Wang, and Jun Xie. 2018c. A hierarchy-to-sequence attentional neural machine translation model. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(3):623-632. +Sho Takase and Naoaki Okazaki. 2020. Multi-task learning for cross-lingual abstractive summarization. +Daniel Varab and Natalie Schluter. 2021. MassiveSumm: a very large-scale, very multilingual, news summarisation dataset. In Proceedings of EMNLP, pages 10150-10161, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NIPS, pages 5998-6008. +Xiaojun Wan. 2011. Using bilingual information for cross-language document summarization. In Proceedings of ACL, pages 1546-1555, Portland, Oregon, USA. Association for Computational Linguistics. +Xiaojun Wan, Huiying Li, and Jianguo Xiao. 2010. Cross-language document summarization based on machine translation quality prediction. In Proceedings of ACL, pages 917-926, Uppsala, Sweden. Association for Computational Linguistics. +Danqing Wang, Jiaze Chen, Hao Zhou, Xipeng Qiu, and Lei Li. 2021. Contrastive aligned joint learning for multilingual summarization. In *Findings of ACLIJCNLP*, pages 2739–2750, Online. Association for Computational Linguistics. +Jiaan Wang, Fandong Meng, Ziyao Lu, Duo Zheng, Zhixu Li, Jianfeng Qu, and Jie Zhou. 2022. Clidsum: A benchmark dataset for cross-lingual dialogue summarization. arXiv preprint arXiv:2202.05599. +Tianming Wang and Xiaojun Wan. 2019. T-cvae: Transformer-based conditioned variational autoencoder for story completion. In Proceedings of IJCAI, pages 5233-5239. +Ruochen Xu, Chenguang Zhu, Yu Shi, Michael Zeng, and Xuedong Huang. 2020. Mixed-lingual pretraining for cross-lingual summarization. 
In Proceedings of AACL, pages 536-541, Suzhou, China. Association for Computational Linguistics. + +Jin-ge Yao, Xiaojun Wan, and Jianguo Xiao. 2015. Phrase-based compressive cross-language summarization. In Proceedings of EMNLP, pages 118-127, Lisbon, Portugal. Association for Computational Linguistics. +Jenny Paola Yela-Bello, Ewan Oglethorpe, and Navid Rekabsaz. 2021. MultiHumES: Multilingual humanitarian dataset for extractive summarization. In Proceedings of EACL, pages 1713-1717, Online. Association for Computational Linguistics. +Biao Zhang, Deyi Xiong, Jinsong Su, Hong Duan, and Min Zhang. 2016a. Variational neural machine translation. In Proceedings of EMNLP, pages 521-530. +Jiajun Zhang, Yu Zhou, and Chengqing Zong. 2016b. Abstractive cross-language summarization via translation model enhanced predicate argument structure fusing. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 24(10):1842-1853. +Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of ACL, pages 654-664. +Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. 2019. MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance. In Proceedings of EMNLP-IJCNLP, pages 563-578, Hong Kong, China. Association for Computational Linguistics. +Junnan Zhu, Haoran Li, Tianshang Liu, Yu Zhou, Jiajun Zhang, and Chengqing Zong. 2018. MSMO: Multimodal summarization with multimodal output. In Proceedings of EMNLP, pages 4154-4164, Brussels, Belgium. Association for Computational Linguistics. +Junnan Zhu, Qian Wang, Yining Wang, Yu Zhou, Jiajun Zhang, Shaonan Wang, and Chengqing Zong. 2019. NCLS: Neural cross-lingual summarization. In Proceedings of EMNLP-IJCNLP, pages 3054-3064, Hong Kong, China. Association for Computational Linguistics. +Junnan Zhu, Yu Zhou, Jiajun Zhang, and Chengqing Zong. 2020. 
Attend, translate and summarize: An efficient method for neural cross-lingual summarization. In Proceedings of ACL, pages 1309–1321, Online. Association for Computational Linguistics.

# Appendix

# A Implementation Details

We train all models using the standard Transformer (Vaswani et al., 2017) in the Base setting, which contains a 6-layer encoder (i.e., $N_{e}$ ) and a 6-layer decoder (i.e., $N_{d}$ ) with 512-dimensional hidden representations. For the other hyper-parameters, we mainly follow the settings described in Zhu et al. (2019, 2020) for fair comparison. Specifically, the segmentation granularity is "subword to subword" for Zh2EnSum and "word to word" for En2ZhSum. All parameters are initialized via the Xavier initialization method (Glorot and Bengio, 2010), and all latent variables have a dimension of 128. Each mini-batch contains a set of document-summary pairs with roughly 4,096 source and 4,096 target tokens. We apply the Adam optimizer (Kingma and Ba, 2015) with $\beta_{1} = 0.9$ and $\beta_{2} = 0.998$ . Following Zhu et al. (2019), we train each task for about 800,000 iterations in all multi-task models (reaching convergence). To alleviate the degeneration problem of the variational framework, we apply KL annealing: the KL multiplier $\lambda$ gradually increases from 0 to 1 over 400,000 steps. All our methods without mBART initialization are trained and tested on a single NVIDIA Tesla V100 GPU. When using mBART as model initialization, we train on 8 NVIDIA Tesla V100 GPUs, with the number of tokens per GPU set to 2,048 and the number of training steps set to 400,000.

During inference, we use beam search with a beam size of 4 and a length penalty of 0.6.
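The linear KL annealing schedule described above can be written down directly. This is a minimal sketch: the function name and its use inside a training loop are illustrative, but the step counts mirror those reported in this appendix.

```python
def kl_multiplier(step: int, total_annealing_steps: int = 400_000) -> float:
    """Linear KL annealing: the multiplier lambda grows from 0 to 1
    over `total_annealing_steps` optimizer steps, then stays at 1."""
    return min(1.0, step / total_annealing_steps)

# Inside a (hypothetical) training loop, the annealed multiplier would
# scale the KL term of the variational objective, e.g.:
#   loss = reconstruction_loss + kl_multiplier(step) * kl_divergence
```

Starting the multiplier at 0 lets the decoder learn to reconstruct before the KL term pressures the posterior toward the prior, which is the standard remedy for posterior collapse.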
# A Well-Composed Text is Half Done!
Composition Sampling for Diverse Conditional Generation + +Shashi Narayan + +Google Research + +shashinarayan@google.com + +Goncalo Simoes + +Google Research + +gsimoes@google.com + +Yao Zhao + +Google Brain + +yaozhaoyz@google.com + +Joshua Maynez + +Google Research + +joshuahm@google.com + +Dipanjan Das + +Google Research + +dipanjand@google.com + +Michael Collins + +Google Research + +mjcollins@google.com + +Mirella Lapata + +Google Research + +lapata@google.com + +# Abstract + +We propose Composition Sampling, a simple but effective method to generate diverse outputs for conditional generation of higher quality compared to previous stochastic decoding strategies. It builds on recently proposed plan-based neural generation models (Narayan et al., 2021) that are trained to first create a composition of the output and then generate by conditioning on it and the input. Our approach avoids text degeneration by first sampling a composition in the form of an entity chain and then using beam search to generate the best possible text grounded to this entity chain. Experiments on summarization (CNN/DailyMail and XSum) and question generation (SQuAD), using existing and newly proposed automatic metrics together with human-based evaluation, demonstrate that Composition Sampling is currently the best available decoding strategy for generating diverse meaningful outputs. + +# 1 Introduction + +In many NLG tasks, it is important to be able to generate multiple diverse outputs from a model. Tasks like summarization (Mani, 2001; Nenkova and McKeown, 2011) and question generation (Zhou et al., 2017) exhibit one-to-many relationships; there can be multiple semantically diverse summaries or questions for the same source, and it may be useful for a model to be able to generate multiple outputs. 
Yet, the primary focus of recent research in NLG has been on improving the quality of single-best outputs (Raffel et al., 2019; Lewis et al., 2019; Dong et al., 2019; Zhang et al., 2020a; Narayan et al., 2021), while diversity remains an unsolved problem (Hashimoto et al., 2019; Zhang et al., 2021). This is particularly challenging in conditional generation, where diversity in the target sequence should not come at the cost of correctness or faithfulness; for example, alternate summaries are not valuable if they are unfaithful to the input document(s) (Maynez et al., 2020; Kryscinski et al., + +2020). In this work, we investigate decoding methods for generating semantically diverse text which is also faithful to its input focusing on two tasks, namely summarization and question generation. + +Beam search (Li et al., 2016; Wiseman et al., 2017) has proven successful for single-best generation (Rush et al., 2015; Barrault et al., 2020; Meister et al., 2020), but struggles to generate diverse output (Vijayakumar et al., 2016). Stochastic sampling strategies, such as top- $k$ sampling (Fan et al., 2018) and nucleus sampling (Holtzman et al., 2020), are better at generating diverse sequences but are not suitable for conditional generation as they degenerate, $^{1}$ producing output that is not faithful to the source. Figure 1 exposes degeneration in summary output using nucleus sampling. + +To address these shortcomings, we propose Composition Sampling, a simple but effective hybrid decoding method for diverse and faithful conditional generation. It builds on recently proposed generation models (Narayan et al., 2021) that are trained to first plan a semantic composition of the target and then generate the text conditioned on the composition and the input. Composition sampling first samples a composition in the form of an entity chain and then uses beam search to generate the best possible sequence grounded to the sampled entity chain. 
Unlike top- $k$ or nucleus sampling, it avoids degeneration by instilling diversity in composition, rather than directly on the surface form.

Our contributions can be summarized as follows: (a) we introduce Composition Sampling, a simple yet effective decoding method for diverse conditional generation, which combines planning with stochastic sampling; (b) we propose several metrics to compute semantic diversity in generated text; our metrics are complementary to lexical diversity

Human Written Summary: Chelsea star Eden Hazard is set to make his 100th top-flight appearance. Santi Cazorla should hit the same milestone when Arsenal meet Burnley. Both players have impressed since moving to the Premier League in 2012. Hazard has more goals this season but Cazorla has one more assist. Sportsmail's reporters choose the player who has excited them the most.
Beam Search: Eden Hazard and Santi Cazorla are both set to make their 100th Premier League appearances this weekend. Both players have been hugely influential since they moved to London. Here, Sportsmail's reporters choose the player they most enjoy seeing in action.
Nucleus Sampling: Eden Hazard and Santi Cazorlag will each make their 100th Premier League appearance this weekend. nightstandapplication.com. Sportsmail's hovercraft reporters choose their man of the match countermeasures.
Nucleus Sampling: By making their 100th Premier League appearances this weekend, Eden Hazard and Santi Cazor halibut will set new records. Here, Anna Coren and Dominic King select their favourites.
Composition Sampling: (Eden Hazard | Santi Cazorla | Chelsea | Arsenal | Premier League ||| London | 2012 |||) Eden Hazard and Santi Cazorla are set to make their 100th appearances for Chelsea and Arsenal respectively in the Premier League this weekend. Both players have been hugely influential since they moved to London in the summer of 2012. But who has been the most exciting import to watch? 
Composition Sampling: (Chelsea | Eden Hazard | Arsenal | Santi Cazorla || Sportsmail || London) Chelsea's Eden Hazard and Arsenal's Santi Cazorla will both make 100th appearances this weekend. Sportsmail's reporters pick the player they most enjoy seeing in action. Both players have been hugely influential since moving to London.

Figure 1: Human written summary, single-best predicted summary using beam search (beam size 8), and diverse summaries with nucleus sampling $(p = 0.95)$ and our composition sampling for a CNN/DailyMail article (shown in the Appendix, Figure 6). We highlight spans in orange that are not faithful to the input.

(e.g., Self-BLEU; Zhu et al. 2018; Alihosseini et al. 2019) and assess whether a set of diverse outputs are contextually dissimilar (Self-BERTscore; Zhang et al. 2020b) or non-entailing (Self-Entailment); and (c) finally, we introduce EDNA, a novel metric aiming to "Evaluate Diversity aNd fAithfulness" for summarization by quantifying whether summaries in a diverse set are faithful to their input without entailing each other.

Evaluation on two popular summarization tasks, namely highlight generation (CNN/DailyMail; Hermann et al. 2015) and extreme summarization (XSum; Narayan et al. 2018), and question generation (SQuAD; Rajpurkar et al. 2016; Zhou et al. 2017), shows that composition sampling is most effective in generating diverse summaries or questions. When assessed by humans, composition sampled summaries are as faithful as the best summaries produced with beam search. In comparison, nucleus sampled summaries can be as diverse but are far less faithful. Taken together, our results demonstrate that Composition Sampling is currently the best available decoding strategy for generating diverse and meaningful output. 
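Self-similarity metrics like Self-BLEU score each output in a set against the remaining outputs and average; lower scores indicate a more diverse set. As a rough, self-contained illustration (a simplification, not the exact metrics evaluated here), the sketch below measures average pairwise clipped n-gram precision over tokenized outputs:

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token sequence, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def overlap_precision(hyp, refs, n):
    """Clipped n-gram precision of `hyp` against a pool of other outputs."""
    hyp_counts = Counter(ngrams(hyp, n))
    if not hyp_counts:
        return 0.0
    ref_counts = Counter()
    for ref in refs:
        for g, c in Counter(ngrams(ref, n)).items():
            ref_counts[g] = max(ref_counts[g], c)
    clipped = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
    return clipped / sum(hyp_counts.values())

def self_overlap(outputs, n=2):
    """Average each output's overlap with the rest of the set.

    1.0 means the outputs are lexically identical at the n-gram level;
    0.0 means they share no n-grams (maximal lexical diversity).
    """
    scores = [overlap_precision(out, outputs[:i] + outputs[i + 1:], n)
              for i, out in enumerate(outputs)]
    return sum(scores) / len(scores)
```

For example, `self_overlap([["the", "cat", "sat"], ["the", "cat", "sat"]])` gives 1.0, while two outputs with no shared bigrams give 0.0. Full Self-BLEU additionally combines several n-gram orders with a brevity penalty, and the contextual variants (Self-BERTscore, Self-Entailment) replace n-gram overlap with model-based similarity.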
# 2 Background

Conditional generation tasks such as summarization (See et al., 2017), data-to-text generation (Wiseman et al., 2017), and machine translation (Bahdanau et al., 2015) are typically modeled using attention-based encoder-decoder architectures (Bahdanau et al., 2015; Gu et al., 2016; Vaswani et al., 2017). The encoder first encodes the input text $d$ and then the decoder predicts the output $s_{1:n}$ (e.g., the translation or summary of $d$ ) one token at a time as $p(s_i|s_1, \ldots, s_{i-1}; d)$ , where $n$ is the output length and $s_i$ is the $i$ th token in the output. Often these models benefit from large-scale task-agnostic pretraining (Song et al., 2019; Radford et al., 2018; Lewis et al., 2019; Rothe et al., 2020; Raffel et al., 2019; Zhang et al., 2020a).

Plan-based Conditional Generation Narayan et al. (2021) develop a plan-based approach for neural summarization; their decoder generates a composition $c_{1:m}$ of target summary $s$ as $p(c_j|c_1, \ldots, c_{j-1}; d)$ , and then the same decoder produces $s$ as $p(s_i|s_1, \ldots, s_{i-1}; c; d)$ conditioned on input $d$ and composition $c_{1:m}$ , with $m$ being the composition length. Specifically, they adopt entity chains as the composition $c$ of summary $s$ , under the assumption that entities in the chain ought to be observed in the output summary. During inference, the model takes document $d$ as input and generates $c; s$ , the concatenation of composition and summary sequences, instead of generating $s$ directly; $c$ and $s$ are prefixed with special markers "[CONTENT]" and "[SUMMARY]", respectively, as shown in Figure 2. If $s$ consists of multiple sentences, markers "||" denote sentence boundaries in composition $c$ .

![](images/85e71afd8a480ee8f641fca4cec664c30784896d4d70ccc80c1743072d81362c.jpg)
Figure 2: Illustration of composition sampling and other decoding strategies with vanilla and plan-based generation models. 
The term 'composition' is inspired by the quote "A Well-Composed Painting is Half Done" from French painter Pierre Bonnard. Images in black-and-white are early sketches or compositions of the painting in color. Nucleus or focus sampling often leads to hallucinations (highlighted spans in red); corresponding color images are blurred to illustrate this. (Credit: The image of "Anna Pavlovna of Russia" is taken from Wikipedia.)

The approach allows us to directly manipulate the content of summaries and their quality. For example, we might inspect the predicted chain during inference and drop entities which are not present in the input document, thereby controlling for hallucinations (Narayan et al., 2021). Outwith summarization, similar constraints can be easily adapted to other conditional generation tasks.

Maximization-Based Decoding In order to obtain the most likely output $\hat{s}$ from encoder-decoder models, we typically solve a maximization-based objective: $\hat{x} = \arg \max_x p(x|d)$ , where $x$ is either the predicted output text $s$ (for models without planning) or the concatenation of the predicted composition and the output text $c; s$ (for models with planning). It is standard practice to use beam search (Tillmann and Ney, 2003; Li et al., 2016; Wiseman et al., 2017), as solving this objective for the optimal sequence with neural sequence models is not tractable (Chen et al., 2018).

Stochastic Sampling for Diverse Decoding Sampling-based strategies have been widely used to induce diversity in language models. Temperature sampling uses a temperature to skew the distribution towards high-probability tokens at each decoding step (Ackley et al., 1985; Ficler and Goldberg, 2017; Fan et al., 2018), while top- $k$ sampling truncates the distribution to the $k$ highest-probability tokens (Fan et al., 2018; Holtzman et al., 2018; Radford et al., 2019). 
Similarly to top-$k$ sampling, nucleus sampling (Holtzman et al., 2020) also truncates the tail of the distribution, but chooses $k$ dynamically. At each decoding step, it samples high-probability tokens from a nucleus $N$, defined as the smallest subset of tokens from the vocabulary $V$ with cumulative probability $p' \geq p$, where $p$ is the pre-specified mass of the nucleus.

Aralikatte et al. (2021) introduce focus sampling to promote diversity in summarization models. It constructs a subset $V_{k} \subseteq V$ by sampling $k$ source-relevant and topical tokens from the vocabulary distribution. Standard beam search decoding is then used to generate a summary limited to $V_{k}$. However, the authors show that focus sampling is very sensitive to $k$; increasing it improves generation quality but at the cost of diversity.

# 3 Composition Sampling

Composition Sampling is a novel hybrid method which combines stochastic sampling with maximization-based decoding, while leveraging plan-based generation (Narayan et al., 2021). Specifically, we first employ nucleus sampling to obtain a diverse composition $c_{\mathrm{sample}}$ from $p(c|d)$, where $d$ is the input text and $c$ is an entity chain (prefixed with "[CONTENT]" in Figure 2). We then employ beam search to generate the most-likely output $s$ (prefixed with "[SUMMARY]" in Figure 2), given input $d$ and composition $c_{\mathrm{sample}}$, as $p(s|c_{\mathrm{sample}};d)$. We will experimentally show that composition sampling enables the generation of fluent, faithful, and diverse texts for conditional generation.

Why Entity Chains? Unlike top-$k$ or nucleus sampling, composition sampling avoids degeneration by introducing diversity in the composition, rather than directly on the surface form.
For this to effectively work, the choice of $c$ needs to be well correlated with an underlying notion of "semantic composition", which we want to "diversify"; if $c_{1}$ and $c_{2}$ are two semantic compositions for input $d$ such that $c_{1} \neq c_{2}$ , then two summaries $s_{1} = \arg \max_{s} p(s|c_{1}; d)$ and $s_{2} = \arg \max_{s} p(s|c_{2}; d)$ are bound to be diverse. In our work, we have chosen entity chains to model semantic compositions; entity chains have been widely studied to model entity-level lexical cohesion (Barzilay and Elhadad, 1997) and coherence (Halliday and Hasan, 1976; Azzam et al., 1999) in text. Also, entity chains are unique to $d$ , and thus can be easily distinguished from compositions for other inputs. Moreover, entity chains provide a very effective knob for content control in abstractive generation, e.g., compositions can be constrained to entities only present in the input document, thereby avoiding hallucinations and entity degeneration. + +Hypothesis 1: If the semantic composition $c$ of the output text $s$ corresponds to entity chains, then learning $p(c|d)$ is much easier than learning $p(s|d)$ ; $d$ is the input. Hence, we can sample from $p(c|d)$ with higher confidence than sampling directly from $p(s|d)$ , and then compute $\arg \max_{s} p(s|c; d)$ . + +We demonstrate the effectiveness of entity chains as a choice for $c$ using the summarization example in Figure 3. The negative log likelihood of generating the summary $s$ from scratch without planning $(-\log p(s|d))$ is 121.18, while the negative log likelihood of generating composition $c$ with planning $(-\log p(c|d))$ is 46.95; hence, the model is much more confident when sampling from $p(c|d)$ than directly from $p(s|d)$ . + +Why Grounded Generation? The generation of $s$ is inherently grounded to its entity composition $c$ ; following Narayan et al. (2021), the entity chains are extracted from their targets during training. 
Hence, once the hard part of planning the composition is done, the model is less perplexed during generation of the output.

![](images/cf028d0d9e7ba2f3b18fc975569dba98d5930421f327728e75285a3611932610.jpg)
Figure 3: Probabilities of generating underlined entities in the human-written reference summary from Figure 1 (input article shown in Figure 6): when the summary is generated directly (Generate, Summary), when the entity chain "Chelsea | Eden Hazard || Santi Cazorla | Arsenal | Burnley || Premier League | 2012 || Hazard | Cazorla | one || Sportsmail" is predicted first during planning (Plan-Generate, Entity Chain), and when the entities are predicted in the summary after planning (Plan-Generate, Summary). All probabilities were computed with PEGASUS (Zhang et al., 2020a) fine-tuned models. The symbol "_" denotes the start of a token.

In Figure 3, the plan-based model is more confident in predicting entities than its counterpart without planning; perplexities of predicting entities in the summary with and without planning are 0.24 and 1.36, respectively, and perplexities of generating the whole summary with and without planning are 1.15 and 1.48, respectively. In fact, despite the increased length of the target in the plan-based model (i.e., $c_{1:m}; s_{1:n}$ instead of $s_{1:n}$), we find that the perplexity of predicting the longer sequence $(c_{1:m}; s_{1:n})$ is lower than that of predicting just the summary without any planning, due to grounding (1.16 vs 1.48). Overall, $p(c; s|d)$, the plan-based approach, learns a more confident distribution at each decoding step than no planning, i.e., $p(s|d)$. For the example in Figure 3, the average cumulative probabilities for the top 15 tokens in the vocabulary distribution at each decoding step are 0.283 for $p(s|d)$ and 0.433 for $p(c; s|d)$.
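Concretely, the nucleus-sampling step described above (sampling from the smallest vocabulary subset whose cumulative probability reaches $p$) can be sketched as follows. This is a generic top-$p$ sampler over a toy next-token distribution, not the authors' implementation; the vocabulary size and probabilities are made up for illustration.

```python
import random

def nucleus_sample(probs, p=0.95, rng=random):
    """Sample a token id from the nucleus: the smallest set of tokens
    whose cumulative probability reaches p (Holtzman et al., 2020)."""
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    nucleus, mass = [], 0.0
    for i in order:
        nucleus.append(i)
        mass += probs[i]
        if mass >= p:  # smallest prefix with cumulative mass >= p
            break
    # Renormalize over the nucleus and draw one token.
    weights = [probs[i] / mass for i in nucleus]
    return rng.choices(nucleus, weights=weights, k=1)[0]

# Toy next-token distribution over a 5-token vocabulary.
probs = [0.5, 0.3, 0.1, 0.07, 0.03]
token = nucleus_sample(probs, p=0.8)  # nucleus here is {0, 1}
assert token in (0, 1)
```

In composition sampling, a sampler like this would be applied at each decoding step while generating the "[CONTENT]" plan, after which standard beam search produces the "[SUMMARY]" text.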

In the following we assess composition sampling for its ability to generate semantically diverse output for two tasks, namely summarization (Section 4) and question generation (Section 5).

# 4 Single Document Summarization

# 4.1 Datasets and Models

We evaluate our decoding strategy on two popular single-document summarization datasets: CNN/DailyMail highlight generation (Hermann et al., 2015) and XSum extreme summarization (Narayan et al., 2018), using the original train/validation/test splits. Inputs and outputs were truncated to 512 and 128 tokens for XSum, and 1,024 and 256 tokens for CNN/DailyMail.

We conduct experiments with state-of-the-art pretrained models for summarization, namely PEGASUS (Zhang et al., 2020a) and FROST (Narayan et al., 2021). Our PEGASUS finetuned model generates summaries directly, whereas FROST generates the entity chain followed by the summary. In both cases we use large transformer architectures (Vaswani et al., 2017) with $L = 16$, $H = 1{,}024$, $F = 4{,}096$, $A = 16$ (568M parameters), where $L$ denotes the number of layers for encoder and decoder Transformer blocks, $H$ is the hidden size, $F$ the feed-forward layer size, and $A$ the number of self-attention heads. Since this paper proposes a decoding strategy, there is no need to train new summarization models. We use the publicly available PEGASUS and FROST checkpoints. Training details and model hyperparameters can be found in Zhang et al. (2020a) and Narayan et al. (2021).

All models are decoded with a beam size of 8 and a length penalty of 0.8. For nucleus sampling and composition sampling, we use a nucleus probability $p$ of 0.95. For focus sampling (Aralikatte et al., 2021), we use $k = 10{,}000$.

# 4.2 Evaluation Metrics

We assess our decoding strategy for likelihood, fluency, relevance, faithfulness, and diversity, using both automatic and human evaluation. FROST models predict a plan in the form of an entity chain, followed by a summary.
All evaluations, except likelihood, are done on the summary, with the predicted entity chains stripped out. For each diverse decoding strategy, we sample 5 times for each test document and report the average.

Sequence Likelihood We report the perplexity of the generated sequence (i.e., entity chains concatenated with their summaries for planning models, and summaries only for the others) using various decoding strategies.

Lexical Fluency and Relevance We report ROUGE-L F1 scores (Lin and Hovy, 2003) against reference summaries.

Semantic Relevance We report BERTScore (Zhang et al., 2020b), which computes the contextual similarity between a candidate and its reference.

Faithfulness We follow Maynez et al. (2020) and report on textual entailment (Pasunuru and Bansal, 2018; Falke et al., 2019; Kryscinski et al., 2020). In particular, we report the probability of a summary entailing (Entailment) its input document, using a classifier trained by fine-tuning an uncased BERT-Large pretrained model (Devlin et al., 2019) on the Multi-NLI dataset (Williams et al., 2018).

We further assess faithfulness with human judges. Our annotators, proficient in English, were tasked to read a document and then grade its summary on a scale of 1-4 (entirely unfaithful, somewhat unfaithful, somewhat faithful, and entirely faithful); a summary is "entirely faithful" if its content is fully supported by or can be inferred from the document. We collected 3 ratings for each (document, summary) pair and report average system ratings (across documents). For summaries deemed "somewhat unfaithful" or "somewhat faithful", annotators were asked to also specify what was faithful or unfaithful, to improve agreement.

Diversity We report the number of times (out of five samples) a decoding technique is able to generate a completely new summary (Unique). We also use Self-BLEU (Zhu et al., 2018; Alihosseini et al., 2019) to measure lexical diversity in the generated summaries.
We consider all pairs of summaries out of 5 samples, and for each pair we compute the BLEU score (Papineni et al., 2002), considering one summary as a hypothesis and the other as a reference. We report the average BLEU score as the Self-BLEU of the document. The lower the Self-BLEU for a decoding strategy, the better it is at generating a diverse set of summaries.

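The pairwise-average scheme just described can be sketched generically; `pair_score` is a placeholder for the real pairwise scorer (sentence-level BLEU here, or an entailment classifier or BERTScore for related Self-* variants). The token-overlap scorer below is only an illustrative stand-in, not the metric used in the paper.

```python
from itertools import permutations

def self_metric(samples, pair_score):
    """Average pair_score over all ordered (hypothesis, reference) pairs
    of the sampled outputs; lower means the samples are more diverse."""
    pairs = list(permutations(samples, 2))
    return sum(pair_score(hyp, ref) for hyp, ref in pairs) / len(pairs)

def token_overlap(hyp, ref):
    # Illustrative stand-in for BLEU: fraction of hypothesis tokens
    # that also occur in the reference.
    hyp_toks, ref_toks = hyp.split(), set(ref.split())
    return sum(t in ref_toks for t in hyp_toks) / len(hyp_toks)

samples = ["the team won the cup", "the team lost the match", "fans celebrated the win"]
score = self_metric(samples, token_overlap)
assert 0.0 <= score <= 1.0
```

Swapping `pair_score` for an entailment probability or BERTScore yields the Self-Entailment and Self-BERTScore computations in exactly the same way.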
| Model | XSum R1 | XSum R2 | XSum RL | CNN/DM R1 | CNN/DM R2 | CNN/DM RL |
| --- | --- | --- | --- | --- | --- | --- |
| *Single* | | | | | | |
| GSum (Dou et al., 2020) | 45.40 | 21.89 | 36.67 | 45.94 | 22.32 | 42.48 |
| CTRLsum (He et al., 2020) | – | – | – | 45.65 | 22.35 | 42.50 |
| FAME (Aralikatte et al., 2021) | 45.31 | 22.75 | 37.46 | 42.95 | 20.79 | 39.90 |
| PEGASUS (Zhang et al., 2020a) | 47.56 | 24.87 | 39.40 | 44.05 | 21.69 | 40.98 |
| FROST (Narayan et al., 2021) | 47.80 | 25.06 | 39.76 | 45.11 | 22.11 | 42.01 |
| FROST++ (Narayan et al., 2021) | 44.94 | 21.58 | 37.20 | 45.08 | 22.14 | 41.99 |
| *Diverse* | | | | | | |
| Focus (FAME) | 42.76 | 19.89 | 34.97 | – | – | – |
| Nucleus (PEGASUS) | 38.49 | 16.57 | 30.99 | 36.27 | 15.10 | 33.46 |
| Nucleus (FROST) | 40.26 | 17.83 | 32.49 | 38.49 | 15.71 | 35.49 |
| Composition (FROST) | 45.12 | 22.24 | 36.98 | 41.76 | 18.94 | 38.69 |
| Composition (FROST++) | 43.82 | 20.35 | 35.89 | 42.37 | 19.48 | 39.28 |

Table 1: Comparison of decoding strategies with ROUGE: single-best vs diverse decoding (we report averages over 5 samples). Best results in each block are bold-faced. See Table 5 in the Appendix for more comparisons.

We propose two additional measures to capture semantic diversity in summaries: Self-Entailment and Self-BERTScore. Similar to Self-BLEU, we compute the Entailment score and BERTScore, respectively, for each possible pair of summaries and report the average. A lower Self-Entailment value suggests that the generated summaries do not entail each other. Analogously, a low Self-BERTScore value indicates that the decoding technique is able to generate more contextually dissimilar summaries.

We further assess diversity with human judges. Our annotators, proficient in English, again read two summaries (out of five samples) and then graded the pair on a scale of 1-4 (identical, somewhat identical, somewhat diverse, and diverse); the document was not shown in this assessment. Two summaries are "identical" if they are semantically equivalent, while the same information may be presented differently in the case of "somewhat identical". A "somewhat diverse" pair may introduce one or two new concepts or topics in one summary. A "diverse" pair should introduce new concepts or topics in each summary. We collected 3 ratings for each pair and report their average. This assessment was only done with single-sentence XSum summaries; in future work we will explore how to do this effectively for multi-sentence summaries.

Diversity and Faithfulness For summarization, diverse summaries are not meaningful if they are not faithful to the input. We propose EDNA, a novel measure for "Evaluating Diversity aNd fAithfulness" in summaries. EDNA is the harmonic mean of Entailment and (1 − Self-Entailment), i.e., $2 \cdot \mathrm{Ent} \cdot (1 - \mathrm{SelfEnt}) / (\mathrm{Ent} + 1 - \mathrm{SelfEnt})$; higher values of EDNA imply more faithful and diverse summaries. The reason EDNA relies on Self-Entailment to measure diversity is that the faithfulness metric is also based on Entailment. This means that both components are mapped to a similar output space (i.e., they both yield values between 0 and 1 obtained through the same trained model), making them more likely to be properly balanced when combined.

# 4.3 Results

Table 1 presents ROUGE results on the XSum and CNN/DailyMail test sets. The top block includes results for models which employ maximization-based decoding. GSum (Dou et al., 2020) is a state-of-the-art system which decodes summaries guided by an extractive model at test time. CTRLsum (He et al., 2020) controls the summarization output through keywords and automatically extracted sentences. FAME (Aralikatte et al., 2021) uses a focus attention mechanism to encourage the decoder to proactively generate tokens that are similar or topical to the input document. As mentioned earlier, FROST (Narayan et al., 2021) first generates an entity chain and then a summary, while $\mathrm{FROST}_{++}$ is a constrained variant which restricts the predicted entities to those present in the document. We also show results for a vanilla PEGASUS model (Zhang et al., 2020a) finetuned on our datasets.

The bottom block focuses on diverse decoding (we report averages across five samples). We show results with Focus sampling (Aralikatte et al., 2021) built on top of FAME, Nucleus sampling (Holtzman et al., 2020) with PEGASUS and FROST, and our Composition sampling.

Table 2 presents more detailed faithfulness and diversity results, on challenge sets consisting of 50 documents for each XSum and CNN/DailyMail

| Model | ppl | RL | BSc | Ent | Human (faith.) | Uniq | S-BL | S-Ent | S-BSc | Human (div.) | EDNA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| *XSum, single* | | | | | | | | | | | |
| FAME | – | 34.23 | 0.70 | 0.24 | 2.19 | – | – | – | – | – | – |
| PEGASUS | 0.51 | 40.69 | 0.76 | 0.40 | 2.52 | – | – | – | – | – | – |
| FROST | 0.31 | 40.90 | 0.75 | 0.37 | 2.63 | – | – | – | – | – | – |
| FROST++ | 0.71 | 33.75 | 0.70 | 0.44 | 2.78 | – | – | – | – | – | – |
| *XSum, diverse* | | | | | | | | | | | |
| Focus (FAME) | – | 29.19 | 0.66 | 0.23 | 1.88 | 2.6 | 89.51 | 0.62 | 0.91 | 1.84 | – |
| Nucleus (PEGASUS) | 1.47 | 31.10 | 0.68 | 0.24 | 2.00 | 5.0 | 26.22 | 0.10 | 0.68 | 3.11 | – |
| Nucleus (FROST) | 0.83 | 33.81 | 0.71 | 0.22 | 2.11 | 5.0 | 31.08 | 0.10 | 0.71 | 3.08 | – |
| Composition (FROST) | 0.51 | 36.95 | 0.73 | 0.27 | 2.37 | 4.7 | 58.94 | 0.17 | 0.79 | 2.73 | – |
| Composition (FROST++) | 0.74 | 33.87 | 0.70 | 0.43 | 2.75 | 3.5 | 76.87 | 0.40 | 0.84 | 2.25 | – |
| *CNN/DM, single* | | | | | | | | | | | |
| PEGASUS | 0.35 | 36.09 | 0.65 | 0.70 | 3.78 | – | – | – | – | – | – |
| FROST | 0.30 | 39.03 | 0.66 | 0.72 | 3.74 | – | – | – | – | – | – |
| FROST++ | 0.37 | 38.87 | 0.66 | 0.79 | 3.94 | – | – | – | – | – | – |
| *CNN/DM, diverse* | | | | | | | | | | | |
| Nucleus (PEGASUS) | 1.39 | 28.99 | 0.62 | 0.62 | 3.08 | 5.0 | 26.99 | 0.03 | 0.63 | – | – |
| Nucleus (FROST) | 1.04 | 31.58 | 0.63 | 0.56 | 3.08 | 5.0 | 29.60 | 0.03 | 0.64 | – | – |
| Composition (FROST) | 0.52 | 35.06 | 0.64 | 0.59 | 3.45 | 5.0 | 58.60 | 0.04 | 0.71 | – | – |
| Composition (FROST++) | 0.46 | 35.07 | 0.64 | 0.73 | 3.89 | 4.9 | 62.81 | 0.07 | 0.72 | – | – |

Table 2: Likelihood, faithfulness and diversity results on 50 documents sampled from XSum and CNN/DailyMail each. We report on perplexity (ppl), Entailment (Ent), Uniqueness (Uniq), Self-BLEU (S-BL), Self-Entailment (S-Ent), Self-BERTScore (S-BSc) and EDNA scores, along with ROUGE (RL) and BERTScore (BSc) for comparison. We also report on human judgements for faithfulness and diversity. Additional R1 and R2 numbers can be found in the Appendix (Table 6). Best results in the diverse block for each dataset are boldfaced. Scores for single-best decoded summaries are also presented for comparison. Focus (FAME) diverse predictions on CNN/DailyMail are not available. The lower a Self-* metric, the better the decoding method is at generating diverse outputs.

summaries. We construct these challenge sets by randomly selecting documents whose reference summaries have non-extractive entity chains in them; an entity chain is extractive if all entities in it can be found in the input document. Narayan et al. (2021) have found that models struggle to generate faithful summaries for documents with data-divergence issues (Dhingra et al., 2019). The same challenge sets were used for our human evaluations of faithfulness and diversity.

Composition Sampling is not as Performance Diminishing as Nucleus Sampling Single-best decoding for FROST achieves 39.76 ROUGE (RL) on XSum; nucleus and composition sampling fare worse, showing an average drop of 7.27 and 2.78, respectively. Similarly, for CNN/DailyMail, ROUGE drops for nucleus sampling by an average of 6.51 points, compared to an average drop of 3.28 points for composition sampling (with FROST). Nucleus sampling is even more damaging for non-plan-based models, such as PEGASUS; we see an average drop of 8.59 and 7.30 ROUGE points on XSum and CNN/DailyMail. These gaps are slightly larger for the challenging subsets in Table 2, which is expected due to the highly abstractive nature of the reference summaries therein.
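Since EDNA (Section 4.2) is defined as the harmonic mean of Entailment and (1 − Self-Entailment), it is straightforward to compute from those two scores; a minimal sketch (the input scores below are illustrative, not taken from the paper):

```python
def edna(entailment, self_entailment):
    """Harmonic mean of faithfulness (Entailment) and diversity
    (1 - Self-Entailment); higher is better on both axes."""
    diversity = 1.0 - self_entailment
    if entailment + diversity == 0.0:
        return 0.0
    return 2.0 * entailment * diversity / (entailment + diversity)

# A faithful but repetitive sampler is penalized relative to one that
# balances faithfulness and diversity:
assert edna(0.9, 0.9) < edna(0.6, 0.3)
```

As with any harmonic mean, a near-zero value on either axis drags EDNA toward zero, which is what makes it a joint measure rather than an average.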

On XSum, Composition Sampling with $\mathrm{FROST}_{++}$ performs slightly worse than with vanilla FROST in terms of ROUGE. This is due to the extremely abstractive nature of the XSum reference summaries (Maynez et al., 2020); as a result, a model is required to hallucinate factual content that is not necessarily faithful to the input (see examples of XSum summaries in the Appendix, Figure 5). But Composition($\mathrm{FROST}_{++}$) only keeps supported entities in the sampled plans, giving rise to summaries which diverge from their reference. This is not the case with CNN/DailyMail, which is mostly extractive, and we see that ROUGE performance improves with Composition($\mathrm{FROST}_{++}$) in Table 1.

Composition Sampling is more Confident in its Predictions than Nucleus Sampling Perplexity for FROST predictions increases from 0.31 to 0.83 for nucleus sampling, but only to 0.51 for composition sampling, on XSum. PEGASUS shows an even larger increase in perplexity (from 0.51 to 1.47) for nucleus sampling. Similar patterns are observed for CNN/DailyMail summaries.

Composition($\mathrm{FROST}_{++}$) is more perplexed when generating XSum summaries due to the reference summary divergence issue discussed earlier; perplexity increases from 0.51 to 0.74 compared to Composition(FROST). Interestingly, Composition($\mathrm{FROST}_{++}$) is almost as confident in generating diverse summaries as single-best beam decoding ($\mathrm{FROST}_{++}$; perplexities of 0.71 vs 0.74 for XSum). Unsurprisingly, Composition($\mathrm{FROST}_{++}$) is more confident in generating CNN/DailyMail summaries than Composition(FROST) (0.46 vs 0.52) due to their extractive nature.

Constrained Composition is Most Effective in Generating Meaningful Diverse Summaries It is no surprise that nucleus sampling is able to generate the most diverse summaries on both XSum and CNN/DailyMail (achieving the best scores for Self-BLEU, Self-Entailment, Self-BERTScore, and diversity as assessed by humans); however, these summaries perform poorly on faithfulness measures. Composition($\mathrm{FROST}_{++}$) is most effective in generating faithful summaries, as demonstrated automatically (with the best entailment scores on XSum and CNN/DailyMail) and by humans (with the highest ratings on XSum and CNN/DailyMail); these summaries are also diverse, achieving the highest EDNA scores on both summarization datasets.

We also examined whether models differ in terms of faithfulness and diversity as rated by our participants. We carried out pairwise comparisons using one-way ANOVA with post-hoc Tukey HSD tests ($p < 0.01$). The difference between Nucleus(PEGASUS) and Nucleus(FROST) is not significant. Nucleus(PEGASUS) was also not significantly more faithful than Focus(FAME) for XSum summaries. All other pairwise differences were significant for both faithfulness and diversity.

In sum, our results demonstrate that composition sampling is a better alternative to nucleus or focus sampling for generating meaningful diverse summaries. Figure 1 presents summaries from different decoding strategies for a CNN/DailyMail article. Other example predictions for XSum and CNN/DailyMail articles can be found in the Appendix (Figures 5-9).

Faithfulness and Diversity Metrics Correlate with Human Judgements We estimate the extent to which automatic metrics of faithfulness and diversity correlate with human ratings (using Spearman's rank correlation coefficient) in Table 3. In line with previous work (Maynez et al., 2020; Kryscinski et al., 2019), we find that entailment scores are best correlated with faithfulness (moderate, $0.40 \leq r \leq 0.59$).
Like Self-BLEU, Self-Entailment and Self-BERTScore are also strongly

| Metric | Faithfulness | Diversity |
| --- | --- | --- |
| ROUGE-L | 0.197 | -0.164 |
| BERTScore | 0.209 | -0.195 |
| Entailment | 0.588 | -0.067 |
| 1 - Self-BLEU | -0.208 | 0.880 |
| 1 - Self-Entailment | -0.187 | 0.771 |
| 1 - Self-BERTScore | -0.198 | 0.873 |
| EDNA | 0.482 | 0.174 |

Table 3: Different automatic metrics and their correlation against human assessments using Spearman's rank coefficient.

correlated with diversity ratings. Compared to other metrics which capture a single dimension, EDNA is positively correlated with both dimensions of diversity and faithfulness.

Finally, in Figure 4, we plot faithfulness and diversity scores for different decoding strategies with varying temperatures and nucleus probabilities. We find that summaries sampled with Composition(FROST++) are consistently more faithful than single-best Beam(FROST) summaries, but worse than summaries decoded with Beam(FROST++). Summaries sampled with Composition(FROST++) achieve the best EDNA score (with $p = 0.95$) amongst all diverse decoding strategies.

# 5 Question Generation

# 5.1 Dataset and Metrics

Question generation is often conceptualized as the task of generating a question from a passage-answer pair (Zhou et al., 2017). We experiment on SQuAD (Rajpurkar et al., 2016) and use the split of Zhou et al. (2017), consisting of 86,635, 8,965, and 8,964 source-target pairs for training, validation, and testing, respectively. We follow Cho et al. (2019) and report BLEU-4 (Top-1, the single-best accuracy), Oracle (Top-5, the best accuracy among the 5 sampled hypotheses), and Self-BLEU (as defined in Section 4).

# 5.2 Results

For our question generation experiments we also compare models which employ single-best decoding against models which adopt diverse decoding techniques. The top block in Table 4 presents results for NQG++ (Zhou et al., 2017), a pointer-generator-based model, CP+GSA (Zhao et al.,

![](images/1690424767f89770312bc79b243edd78438b946e0c5fe918920efdb74d2e850e.jpg)
Figure 4: Perplexity, entailment, self-entailment and EDNA scores on the CNN/DailyMail challenge set (Table 2) with varying temperatures (for random sampling) and nucleus probabilities (for nucleus and composition sampling).
For each diverse decoding strategy, we sample 5 times per document and report the average. + +![](images/c73e09b40cc7829c8332b97b277d6ed25b760d34bf30ce99cd8cecec1673775e.jpg) + +![](images/a266e435836a6cd66cae26d245de06bde941a045ab386130f3d48f8604a6f745.jpg) + +![](images/bc6586ed83b870a06c07386e22d4a10644ccee65020ce6c57a583231969e205c.jpg) + +
| Model | T1 | T5 | S-BL |
| --- | --- | --- | --- |
| *Single* | | | |
| NQG++ (Zhou et al., 2017) | 13.27 | – | – |
| CP+GSA (Zhao et al., 2018) | 16.85 | – | – |
| PEGASUS (Zhang et al., 2020a) | 22.17 | – | – |
| FROST (Narayan et al., 2021) | 21.04 | – | – |
| *Diverse* | | | |
| top-k Sampling | 11.53 | 17.65 | 45.99 |
| Diverse Beam Search | 13.38 | 18.30 | 74.80 |
| Mixture Decoder (Shen et al.) | 15.17 | 21.97 | 58.73 |
| Mixture Selector (Cho et al.) | 15.67 | 22.45 | 59.82 |
| Mixture Selector (Wang et al.) | 15.34 | 21.15 | 54.18 |
| Nucleus (PEGASUS) | 12.05 | 24.72 | 30.64 |
| Nucleus (FROST) | 10.64 | 22.49 | 25.50 |
| Composition (FROST) | 17.16 | 27.04 | 61.68 |
| Composition (FROST++) | 18.77 | 26.60 | 74.89 |

Table 4: Comparison of different decoding techniques on question generation. We report BLEU-4 Top-1 (T1) and Top-5 (T5) accuracy, and Self-BLEU (S-BL). Results for diverse decoding comparison models are taken from Wang et al. (2020). Best results in each block are bold-faced.

2018), a model which combines a pointer mechanism with a gated self-attention encoder, and finetuned PEGASUS and FROST models. The second block in the table contains several diverse decoding approaches including top-$k$ sampling (Fan et al., 2018), diverse beam search (Vijayakumar et al., 2018), mixture decoding (Shen et al., 2019) and mixture content selection (Cho et al., 2019; Wang et al., 2020). We compare these models against nucleus sampling with PEGASUS and FROST, and composition sampling with FROST.

As in our summarization experiments, we observe that composition sampling is not as performance diminishing as nucleus sampling, in terms of BLEU. FROST achieves a BLEU of 21.04 (top-1) in the single-best decoding setting; in comparison, BLEU drops for nucleus sampling by 10.40 points (on average), and by only 2.27 points for composition sampling ($\mathrm{FROST}_{++}$). Nucleus-sampled questions achieve the best pairwise diversity scores (Self-BLEU of 25.50), but a very low BLEU Top-1 score
An example in the Appendix (Figure 11) demonstrates the effectiveness of composition sampling in generating accurate and diverse questions compared to other sampling methods.[7] + +# 6 Conclusion + +We proposed Composition Sampling, a simple yet effective decoding method for faithful and diverse conditional generation. Our method is straightforward to implement and does not require any external system to augment the input during inference. Our experiments demonstrate that it is currently the best available decoding strategy for generating diverse and meaningful output. We also introduced Self-Entailment and Self-BERTScore, to automatically compute semantic diversity in summaries, and, EDNA, for jointly measuring faithfulness and diversity. + +# Acknowledgements + +We thank the reviewers, the ARR action editor, and the senior area chair for their valuable feedback. We would like to thank Ryan McDonald, Ankur Parikh, and Slav Petrov for their insightful comments. Many thanks also to Ashwin Kakarla and his team for their help with the human evaluation. + +# Ethical Considerations + +The nature of text generation leads to multiple ethical considerations when considering applications. The main failure mode is that the model can learn to mimic target properties in the training data that are not desirable. + +Faithfulness and Factuality Since models create new text, there is the danger that they may neither be faithful to the source material nor factual. This can be exacerbated when the data itself has highly abstractive targets, which require the model to generate words not seen in the source material during training. This often leads the model to generate content inconsistent with the source material (Maynez et al., 2020; Kryscinski et al., 2020; Gabriel et al., 2021). 
+ +Trustworthy Data If the data itself is not trustworthy (comes from suspect or malicious sources) the model will naturally become untrustworthy as it will ultimately learn the language and topics of the training data. For instance, if the training data is about Obama birther conspiracies, and the model is asked to generate information about the early life of Obama, there is a risk that false claims will be predicted by the model. + +Bias in Data Similarly, biases in the data around gender, race, etc., risk being propagated in the model predictions, which is common for most NLP tasks. This is especially true when the models are trained from non-contemporary data that do not represent current norms and practices (Blodgett et al., 2020). + +The above considerations are non-malicious, in that the model is merely learning to behave as its underlying source material. If users of such models are not aware of these issues and do not account for them, e.g., with better data selection and evaluation, then the generated text can be damaging. + +Generation models can also be misused in malicious ways. These include generating fake news, spam, and other text meant to mislead large sections of the general population. + +# References + +David H. Ackley, Geoffrey E. Hinton, and Terrence J. Sejnowski. 1985. A learning algorithm for Boltzmann machines. Cognitive Science, 9:147-169. +Danial Alihosseini, Ehsan Montahaei, and Mahdieh Soleymani Baghshah. 2019. Jointly measuring diver + +sity and quality in text generation models. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 90-98, Minneapolis, Minnesota. Association for Computational Linguistics. +Rahul Aralikatte, Shashi Narayan, Joshua Maynez, Sascha Rothe, and Ryan McDonald. 2021. Focus attention: Promoting faithfulness and diversity in summarization. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6078-6095, Online. Association for Computational Linguistics. +Saliha Azzam, Kevin Humphreys, and Robert Gaizauskas. 1999. Using coreference chains for text summarization. In Coreference and Its Applications. +Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473. +Loic Barrault, Magdalena Biesialska, Ondrej Bojar, Marta R. Costa-jussa, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Matthias Huck, Eric Joannis, Tom Kocmi, Philipp Koehn, Chi-kiu Lo, Nikola Ljubesic, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshiaki Nakazawa, Santanu Pal, Matt Post, and Marcos Zampieri. 2020. Findings of the 2020 conference on machine translation (WMT20). In Proceedings of the Fifth Conference on Machine Translation, pages 1-55, Online. Association for Computational Linguistics. +Regina Barzilay and Michael Elhadad. 1997. Using lexical chains for text summarization. In Intelligent Scalable Text Summarization. +Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5454-5476, Online. Association for Computational Linguistics. +Yining Chen, Sorcha Gilroy, Andreas Maletti, Jonathan May, and Kevin Knight. 2018. Recurrent neural networks as weighted language recognizers. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2261-2271, New Orleans, Louisiana. Association for Computational Linguistics. +Jaemin Cho, Minjoon Seo, and Hannaneh Hajishirzi. 2019. 
Mixture content selection for diverse sequence generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language + +Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3121-3131, Hong Kong, China. Association for Computational Linguistics. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming-Wei Chang, Dipanjan Das, and William Cohen. 2019. Handling divergent reference texts when evaluating table-to-text generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4884-4895, Florence, Italy. Association for Computational Linguistics. +Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In H. Wallach, H. Larochelle, A. Beygelzimer, F. Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 13042-13054. Curran Associates, Inc. +Zi-Yi Dou, Pengfei Liu, Hiroaki Hayashi, Zhengbao Jiang, and Graham Neubig. 2020. Gsum: A general framework for guided neural abstractive summarization. +Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1342-1352, Vancouver, Canada. Association for Computational Linguistics. +Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. 2019. 
Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1074-1084, Florence, Italy. Association for Computational Linguistics.
Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019. Ranking generated summaries by correctness: An interesting but challenging application for natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2214-2220, Florence, Italy. Association for Computational Linguistics.
Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898, Melbourne, Australia. Association for Computational Linguistics.
Jessica Ficler and Yoav Goldberg. 2017. Controlling linguistic style aspects in neural language generation. In Proceedings of the Workshop on Stylistic Variation, pages 94-104, Copenhagen, Denmark. Association for Computational Linguistics.
Saadia Gabriel, Asli Celikyilmaz, Rahul Jha, Yejin Choi, and Jianfeng Gao. 2021. GO FIGURE: A meta evaluation of factuality in summarization. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 478-487, Online. Association for Computational Linguistics.
Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631-1640, Berlin, Germany. Association for Computational Linguistics.
M. A. K. Halliday and Ruqaiya Hasan. 1976. Cohesion in English. Longman, London.
Tatsunori B. Hashimoto, Hugh Zhang, and Percy Liang. 2019.
Unifying human and statistical evaluation for natural language generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1689-1701, Minneapolis, Minnesota. Association for Computational Linguistics.
Junxian He, Wojciech Kryscinski, Bryan McCann, Nazneen Rajani, and Caiming Xiong. 2020. CTRLsum: Towards generic controllable text summarization.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1693-1701. Curran Associates, Inc.
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In International Conference on Learning Representations.
Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. 2018. Learning to write with cooperative discriminators. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1638-1649, Melbourne, Australia. Association for Computational Linguistics.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332-9346, Online. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. CoRR, abs/1910.13461.
Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016. Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202, Austin, Texas. Association for Computational Linguistics.
Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 150-157.
Inderjeet Mani. 2001. Automatic summarization, volume 3. John Benjamins Publishing.
Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906-1919, Online. Association for Computational Linguistics.
Clara Meister, Ryan Cotterell, and Tim Vieira. 2020. If beam search is the answer, what was the question? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2173-2185, Online. Association for Computational Linguistics.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797-1807, Brussels, Belgium.
Association for Computational Linguistics.
Shashi Narayan, Yao Zhao, Joshua Maynez, Gonçalo Simões, Vitaly Nikolaev, and Ryan McDonald. 2021. Planning with Learned Entity Prompts for Abstractive Summarization. Transactions of the Association for Computational Linguistics, 9:1475-1492.
Ani Nenkova and Kathleen McKeown. 2011. Automatic summarization. Foundations and Trends in Information Retrieval, 5(2-3):103-233.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Ramakanth Pasunuru and Mohit Bansal. 2018. Multi-reward reinforced summarization with saliency and entailment. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 646-653, New Orleans, Louisiana. Association for Computational Linguistics.
Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. 2020. ProphetNet: Predicting future n-gram for sequence-to-sequence pre-training. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2401-2410, Online. Association for Computational Linguistics.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Technical report, OpenAI.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Technical report, OpenAI.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. CoRR, abs/1910.10683.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.
Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. 2020. Leveraging pre-trained checkpoints for sequence generation tasks. Transactions of the Association for Computational Linguistics, 8:264-280.
Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379-389, Lisbon, Portugal. Association for Computational Linguistics.
Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083, Vancouver, Canada. Association for Computational Linguistics.
Tianxiao Shen, Myle Ott, Michael Auli, and Marc'Aurelio Ranzato. 2019. Mixture models for diverse machine translation: Tricks of the trade. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 5719-5728. PMLR.
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. MASS: Masked sequence to sequence pre-training for language generation. In Proceedings of the 36th International Conference on Machine Learning, volume 97, pages 5926-5936. PMLR.
Christoph Tillmann and Hermann Ney. 2003. Word reordering and a dynamic programming beam search algorithm for statistical machine translation. Computational Linguistics, 29(1):97-133.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need.
In Advances in Neural Information Processing Systems 30, pages 5998-6008.
Ashwin K. Vijayakumar, Michael Cogswell, Ramprasaath R. Selvaraju, Qing Sun, Stefan Lee, David J. Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural sequence models. CoRR, abs/1610.02424.
Ashwin K. Vijayakumar, Michael Cogswell, Ramprasaath R. Selvaraju, Qing Sun, Stefan Lee, David J. Crandall, and Dhruv Batra. 2018. Diverse beam search for improved description of complex scenes. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), the 30th Innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 7371-7379. AAAI Press.
Zhen Wang, Siwei Rao, Jie Zhang, Zhen Qin, Guangjian Tian, and Jun Wang. 2020. Diversify question generation with continuous content selectors and question type modeling. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2134-2143, Online. Association for Computational Linguistics.
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.
Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2253-2263, Copenhagen, Denmark. Association for Computational Linguistics.
Hugh Zhang, Daniel Duckworth, Daphne Ippolito, and Arvind Neelakantan. 2021. Trading off diversity and quality in natural language generation.
In Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval), pages 25-33, Online. Association for Computational Linguistics.
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020a. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 11328-11339. PMLR.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020b. BERTScore: Evaluating text generation with BERT. In Proceedings of the 8th International Conference on Learning Representations, Virtual Conference (formerly Addis Ababa, Ethiopia).
Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. 2018. Paragraph-level neural question generation with maxout pointer and gated self-attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3901-3910, Brussels, Belgium. Association for Computational Linguistics.
Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. 2017. Neural question generation from text: A preliminary study. CoRR, abs/1704.01792.
Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. In The 41st International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '18, pages 1097-1100, New York, NY, USA. Association for Computing Machinery.
| Models | XSum R1 | R2 | RL | CNN/DailyMail R1 | R2 | RL |
|---|---|---|---|---|---|---|
| *Single* | | | | | | |
| RoBERTaShare (Rothe et al., 2020) | 38.52 | 16.12 | 31.13 | 39.25 | 18.09 | 36.45 |
| MASS (Song et al., 2019) | 39.75 | 17.24 | 31.95 | 42.12 | 19.50 | 39.01 |
| BART (Lewis et al., 2019) | 45.14 | 22.27 | 37.25 | 44.16 | 21.28 | 40.90 |
| GSum (Dou et al., 2020) | 45.40 | 21.89 | 36.67 | **45.94** | 22.32 | 42.48 |
| UniLM (Dong et al., 2019) | – | – | – | 43.33 | 20.21 | 40.51 |
| T5 (Raffel et al., 2019) | – | – | – | 43.52 | 21.55 | 40.69 |
| ProphetNet (Qi et al., 2020) | – | – | – | 44.20 | 21.17 | 41.30 |
| CTRLsum (He et al., 2020) | – | – | – | 45.65 | **22.35** | **42.50** |
| FAME (Aralikatte et al., 2021) | 45.31 | 22.75 | 37.46 | 42.95 | 20.79 | 39.90 |
| PEGASUS (Zhang et al., 2020a) | 47.56 | 24.87 | 39.40 | 44.05 | 21.69 | 40.98 |
| FROST (Narayan et al., 2021) | **47.80** | **25.06** | **39.76** | 45.11 | 22.11 | 42.01 |
| FROST++ (Narayan et al., 2021) | 44.94 | 21.58 | 37.20 | 45.08 | 22.14 | 41.99 |
| *Diverse* | | | | | | |
| Focus (FAME) | 42.76 | 19.89 | 34.97 | – | – | – |
| Nucleus (PEGASUS) | 38.49 | 16.57 | 30.99 | 36.27 | 15.10 | 33.46 |
| Nucleus (FROST) | 40.26 | 17.83 | 32.49 | 38.49 | 15.71 | 35.49 |
| Composition (FROST) | **45.12** | **22.24** | **36.98** | 41.76 | 18.94 | 38.69 |
| Composition (FROST++) | 43.82 | 20.35 | 35.89 | **42.37** | **19.48** | **39.28** |

Table 5: Full set of ROUGE results on XSum and CNN/DailyMail test sets comparing different decoding techniques and SOTA models. Best results in each block are bold-faced.
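The R1/R2 scores in these tables are unigram and bigram overlap F1 between a system summary and the reference. As a rough illustration only (the official ROUGE implementation additionally applies stemming, and ROUGE-L uses longest-common-subsequence matching rather than fixed n-grams), a minimal sketch:

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_f1(reference, candidate, n=1):
    """ROUGE-N style F1: clipped n-gram overlap between reference and candidate."""
    ref = ngrams(reference.lower().split(), n)
    cand = ngrams(candidate.lower().split(), n)
    overlap = sum((ref & cand).values())  # counts clipped by the reference
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Example: two of three unigrams overlap, so P = R = F1 = 2/3
print(round(rouge_n_f1("the cat sat", "the cat ran", n=1), 4))  # prints 0.6667
```

Reported scores are then averaged over the test set (and scaled to 0-100, as in the tables above).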
Scores are ROUGE with respect to the reference summaries ("With Reference").

| Models | R1 | R2 | RL |
|---|---|---|---|
| **XSum** | | | |
| Focus (FAME) | 41.20 | 20.30 | 34.23 |
| PEGASUS | 49.49 | 28.43 | 40.69 |
| FROST | 49.12 | 28.35 | 40.90 |
| FROST++ | 41.15 | 19.66 | 33.75 |
| *Diverse* | | | |
| Focus (FAME) | 36.58 | 16.32 | 29.19 |
| Nucleus (PEGASUS) | 38.91 | 18.43 | 31.10 |
| Nucleus (FROST) | 41.96 | 20.77 | 33.81 |
| Composition (FROST) | 45.88 | 23.74 | 36.95 |
| Composition (FROST++) | 41.81 | 19.61 | 33.87 |
| **CNN/DailyMail** | | | |
| PEGASUS | 38.50 | 15.04 | 36.09 |
| FROST | 41.89 | 17.54 | 39.03 |
| FROST++ | 41.82 | 17.96 | 38.87 |
| *Diverse* | | | |
| Nucleus (PEGASUS) | 31.57 | 10.62 | 28.99 |
| Nucleus (FROST) | 34.62 | 11.78 | 31.58 |
| Composition (FROST) | 37.89 | 14.88 | 35.06 |
| Composition (FROST++) | 37.79 | 15.07 | 35.07 |

Table 6: Full set of ROUGE results on 50 documents sampled from XSum and CNN/DailyMail (see also Table 2 in the main paper).
| Models | R1 | R2 | RL |
|---|---|---|---|
| *Single-best with Beam Search* | | | |
| PEGASUS | 47.52 | 18.72 | 24.91 |
| FROST | 43.12 | 16.93 | 22.49 |
| *Diverse Decoding, Average of five runs* | | | |
| Nucleus (FROST) | 39.50 | 12.94 | 19.50 |
| Composition (FROST) | 42.47 | 15.43 | 21.43 |
| Composition (FROST++) | 42.37 | 15.78 | 21.90 |
| *Diverse Decoding, Best of five runs* | | | |
| Nucleus (FROST) | 44.40 | 16.86 | 23.03 |
| Composition (FROST) | 46.98 | 19.34 | 24.96 |
| Composition (FROST++) | 46.71 | 19.55 | 25.36 |

Table 7: ROUGE results on the Multi-News (Fabbri et al., 2019) multi-document summarization test set comparing different decoding techniques. The dataset contains 56K articles in total paired with multi-line human-written summaries from the site newser.com.
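Table 8 below reports "Pairwise S-BLEU": the average BLEU of each sampled output against the other samples, so higher values mean the sampled outputs are more alike (less diverse). A minimal pure-Python sketch, assuming whitespace tokenization and add-one-smoothed n-gram precisions (actual evaluations would use a standard sentence-BLEU implementation such as NLTK's or SacreBLEU's):

```python
import math
from collections import Counter
from itertools import permutations

def ngram_counts(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(reference, candidate, max_n=4):
    """Smoothed sentence-level BLEU with uniform n-gram weights and brevity penalty."""
    ref, cand = reference.split(), candidate.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        r, c = ngram_counts(ref, n), ngram_counts(cand, n)
        clipped = sum((r & c).values())          # n-grams matched, clipped by reference
        total = max(sum(c.values()), 1)
        log_prec += math.log((clipped + 1) / (total + 1)) / max_n  # add-one smoothing
    brevity = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))
    return brevity * math.exp(log_prec)

def pairwise_self_bleu(samples):
    """Average BLEU over every ordered pair of samples; higher = less diverse."""
    pairs = list(permutations(samples, 2))
    return sum(sentence_bleu(a, b) for a, b in pairs) / len(pairs)
```

Identical samples yield a self-BLEU of 1.0 and fully disjoint samples approach 0; on the tables' 0-100 scale, the 75.81 for Composition (FROST++) therefore marks the least diverse sample set among the diverse-decoding rows.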
| Models | BLEU-4 Top-1 | Oracle Top-5 | Pairwise S-BLEU |
|---|---|---|---|
| *Single-best with Beam Search* | | | |
| PEGASUS | 21.52 | – | – |
| FROST | 19.98 | – | – |
| *Diverse Decoding* | | | |
| Nucleus (PEGASUS) | 12.60 | 24.45 | 31.23 |
| Nucleus (FROST) | 10.98 | 22.61 | 26.36 |
| Composition (FROST) | 16.62 | 26.07 | 62.47 |
| Composition (FROST++) | 17.28 | 25.03 | 75.81 |
+ +Table 8: We also experimented with the split of Du et al. (2017) for SQuAD (Rajpurkar et al., 2016) question generation, consisting of 70,484, 10,570, and 11,877 examples for training, validation, and testing, respectively. Best results in each block are bold-faced. + +GOLD: Walsall have signed defender Luke Leahy on a two-year contract from Scottish Championship side Falkirk. + +Input: Leahy, 24, scored 12 goals in 158 appearances with Falkirk, having joined the club from Rugby Town in 2012. The left-back made 38 appearances last season, helping the club finish second in the Scottish second tier before they lost to Dundee United in the play-offs. He joins Walsall on a free transfer after his contract expired and is the League One club's first summer signing. Find all the latest football transfers on our dedicated page. + +# Single-best summaries + +Focus (PEGASUS) $(d\to t_d\to s)$ : Walsall have signed Falkirk defender Alex Leahy on a two-year deal. + +PEGASUS $(d\to s)$ : Walsall have signed defender Paddy Leahy from Scottish Championship side Falkirk on a three-year deal. + +FROST $(d\to c;s)$ :[CONTENT] Walsall | Falkirk | Liam Leahy | two [SUMMARY] Walsall have signed Falkirk defender Liam Leahy on a two-year deal. + +FROST $(d\to c_{\mathrm{drop}};s)$ : [CONTENT] Walsall | Falkirk | Leahy [SUMMARY] Walsall have signed Falkirk defender Leahy on a free transfer. + +# Focus Sampling: FAME + +$\mathbf{s}_1 \rightarrow$ Welsall have signed defender Adebayu "Adebayu"eahy on a two-year deal following his departure from Scottish Championship club Falkiri. + +s2 $\rightarrow$ Welsall have signed defender Adebayu ” Adebayu "eahy on a two-year deal from Scottish Championship club Falkock. + +s3 $\rightarrow$ Welsall have signed defender Adebayu ” Adebayu "eahy on a two-year deal from Scottish Championship club Falkock. 
+ +$\mathbf{s}_4 \rightarrow$ Welsall have signed defender Adebayu Leahys from Scottish Championship club Falk Falkiri for an undisclosed fee on a three-year deal. + +$\mathbf{s}_5 \rightarrow$ Welsall have signed defender Adebayu " Adebayu "eahny on a two-year deal following his departure from Scottish Championship club Falkock. + +# Nucleus Sampling: PEGASUS + +$\mathbf{s}_1 \rightarrow$ Walsall have signed defender Adam Leahy from fellow Scottish Championship side Falkirk on a two-year contract. + +$\mathbf{s}_2 \rightarrow$ Walsall have signed defender Matt Leahy on a two-year deal from Falkirk + +$\mathbf{s}_3 \rightarrow$ Walsall have signed Falkirk full-back Tyrone Leahy for an undisclosed fee. + +$\mathbf{s}_4 \rightarrow$ Walsall have signed defender Jason Leahy from Scottish Championship club Falkirk. + +$\mathbf{s}_5 \rightarrow$ Walsall have signed Driscoll defender Chris Leahy for an undisclosed fee from Scottish Championship side Falkirk. + +# Nucleus Sampling: FROST + +$\mathbf{c}_1; \mathbf{s}_1 \rightarrow [\text{CONTENT}]$ Walsall | Rory Leahy | Falkirk [SUMMARY] dawned on Walsall as they signed defender Rory Leahy on a season-long loan from Falkirk. + +$\mathbf{c_2}$ . $\mathbf{s_2}\rightarrow$ [CONTENT] Walsall | Falkirk | Liam Leahy [SUMMARY] Walsall have signed Falkirk defender Liam Leahy. + +$\mathbf{c}_3;\mathbf{s}_3\rightarrow [CONTENT]$ Falkirk | Wade Leahy | Walsall [SUMMARY] Former Falkirk defender Wade Leahy has joined Walsall + +for an undisclosed fee. + +$\mathbf{c_4};\mathbf{s_4}\rightarrow$ [CONTENT] Walsall | Todd Leahy | Scottish Championship | Falkirk [SUMMARY] Walsall have signed defender Todd Leahy from Scottish Championship side Falkirk. + +$\mathbf{c}_{5}; \mathbf{s}_{5} \rightarrow$ [CONTENT] Walsall | Greg Leahy | Scottish Championship | Falkirk | two [SUMMARY] Walsall have signed defender Greg Leahy from Scottish Championship side Falkirk on a two-year contract. 
+ +# Composition Sampling: FROST + +$\mathbf{c}_1;\mathbf{s}_1\rightarrow [CONTENT]Walsall\mid$ Rory Leahy | Falkirk [SUMMARY] Walsall have signed defender Rory Leahy from Falkirk. + +$\mathbf{c}_2; \mathbf{s}_2 \rightarrow [\text{CONTENT}]$ Walsall | Falkirk | Liam Leahy [SUMMARY] Walsall have signed Falkirk defender Liam Leahy. + +$\mathbf{c_3}$ . $\mathbf{s}_3\rightarrow$ [CONTENT] Falkirk | Wade Leahy | Walsall [SUMMARY] Falkirk defender Wade Leahy has joined Walsall. + +$\mathbf{c_4};\mathbf{s_4}\rightarrow$ [CONTENT] Walsall | Todd Leahy | Scottish Championship | Falkirk [SUMMARY] Walsall have signed defender Todd Leahy from Scottish Championship side Falkirk. + +$\mathbf{c}_5;\mathbf{s}_5\rightarrow$ [CONTENT] Walsall | Greg Leahy | Scottish Championship | Falkirk | two [SUMMARY] Walsall have signed defender Greg Leahy from Scottish Championship side Falkirk on a two-year deal. + +# Composition Sampling FROST++ + +$\mathbf{c}_1; \mathbf{s}_1 \rightarrow [\text{CONTENT}]$ Walsall | Leahy | Falkirk [SUMMARY] Walsall have signed defender Leahy from Falkirk. + +$\mathbf{c}_2; \mathbf{s}_2 \rightarrow [\text{CONTENT}]$ Walsall | Falkirk | Leahy [SUMMARY] Walsall have signed Falkirk defender Leahy on a free transfer. + +c3; s3 → [CONTENT] Falkirk | Leahy | Walsall [SUMMARY] Falkirk defender Leahy has joined Walsall on a free transfer. + +$\mathbf{c}_4; \mathbf{s}_4 \rightarrow [\text{CONTENT}]$ Walsall | Leahy | Scottish | Falkirk [SUMMARY] Walsall have signed defender Leahy from Scottish side Falkirk. + +$\mathbf{c}_5; \mathbf{s}_5 \rightarrow [\text{CONTENT}]$ Walsall | Leahy | Scottish | Falkirk [SUMMARY] Walsall have signed defender Leahy from Scottish side Falkirk. + +Figure 5: Example input article, its human written summary, and model predictions for the XSum dataset. We highlight spans in orange that are not faithful to the input. We use $c*$ and $s*$ to denote different compositions and their corresponding summaries. 
+ +# Chelsea star Eden Hazard vs Arsenal playmaker Santi Cazorla: As duo prepare to reach 100 Premier League games, who has excited our experts the most since 2012? + +Chelsea's Eden Hazard and Arsenal's Santi Cazorla are set to reach a Premier League milestone this weekend when they each make their 100th appearance. + +Both players have been hugely influential since they moved to London in the summer of 2012, but who has been the most exciting import to watch? + +Here, Sportsmail's reporters choose the player they most enjoy seeing in action. + +Eden Hazard (L) and Santi Cazorla are both set to make their 100th Premier League appearance this weekend. + +# Lee Clayton. + +Cazorla has wonderful balance. So does Hazard. Cazorla scores important goals. So does Hazard. Cazorla is two-footed. So is Hazard. Cazorla dances past opponents. So does Hazard. + +So, while there is not a lot to choose between them and Hazard is likely to get the most picks in this article, I am going for Cazorla. It's a personal choice. He is a wonderful footballer. I have paid to watch them both (and I will pay to watch them both again), but the little Spanish magician edges it for me. + +# VERDICT: CAZORLA. + +Cazorla, pictured in action against Burnley, has been an influential part of Arsenal's midfield this season. + +# Ian Ladyman. + +I remember when Manchester City baulked at paying Hazard's wages when the Belgian was up for grabs in 2012. Back then City thought the young forward had a rather high opinion of his own worth for a player who was yet to play in a major European league. + +In the early days of his time at Chelsea, it looked as though City may have been right. He showed flashes of brilliance but also looked rather too easy to push off the ball. + +Roll forward to 2015, however, and the 24-year-old has developed in to one of the most important players in the Barclays Premier League. 
Brave, strong and ambitious, Hazard plays on the front foot and with only one thought in this mind. + +Rather like Cristiano Ronaldo, he has also developed in to the type of player ever defender hates, simply because he gets back up every time he is knocked to the ground. He would get in every team in the Premier League and is one of the reasons Chelsea will win the title this season. + +# VERDICT: HAZARD. + +Hazard controls the ball under pressure from Stoke midfielder Stephen Ireland at Stamford Bridge. + +Dominic King. It has to be Hazard. I saw him play for Lille twice in the season before he joined Chelsea – once against St Etienne, the other was what proved to be his final appearance against Nancy. He scored two in the first match, a hat-trick the latter and played a different game to those around him. + +He hasn't disappointed since arriving here and I love the nonchalance with which he takes a penalty, his low centre of gravity and the way he can bamboozle defenders. If there is such a thing as £32 million bargain, it is Hazard. + +# VERDICT: HAZARD. + +Hazard celebrates after scoring a fine individual goal in Chelsea's 3-2 win against Hull in March. + +# Nick Harris. + +Now this is a tricky one because while Eden Hazard will frequently embark on a dribble or dink in a pass that will make you nod in appreciation, he'll also miss a penalty and make you groan. Whereas the older Cazorla, less flashy but no less of a technical master, is to my mind more of a fulcrum, more important relatively to the sum of Arsenal's parts than Hazard is to Chelsea. + +You'll gasp at Hazard but Cazorla's wow factor is richer. That's not to dismiss either: both are brilliant footballers, contributing goals, assists and flair. Any neutral would bite your hand off to have either playing in your team. + +Forced to pick though, it's Cazorla, for his consistency and crucially doing it in the biggest games. 
Exhibit A would be Manchester City 0 Arsenal 2 in January; goal, assist, all-round brilliance, against a big team, at an important time. + +# VERDICT: CAZORLA. + +Cazorla scores from the penalty spot in Arsena's 2-0 away win at Manchester City in January. + +# Riath Al-Samarrai. + +Eden Hazard for me. Cazorla is an utter delight, a little pinball of a man who is probably the most two-footed player I've seen. Put him in a tight space and then you see what makes him rare among the best. But Hazard is the top player in the Premier League, in my opinion. + +This is the sixth of his eight seasons as a professional where he has reached double figures and yet he offers so much more than goals (36 in 99 in the Premier League for Chelsea). He can beat a man and, better still, you sense he likes doing it. + +Technically, his passing and shooting are excellent and he also has a mind capable of sussing out the shapes and systems in front of him. That intelligence, more specifically. + +Figure 6: Input CNN/DailyMail article for summaries presented in Figure 1. + +
# Human Written Summary

Nick Clegg made the admission in a rare joint interview with his wife Miriam. Lib Dem said she decided against moving into ‘government mansion’. ‘Discussion’s a rather grand word for Miriam basically saying no,’ he joked. Miriam claims he has put ‘country above party’ at ‘great personal cost’. Tonight: Spotlight Nick Clegg tonight (Thursday) on ITV at 7.30pm.

# Single-best summaries using Beam Search

PEGASUS: Lib Dem leader reveals balance of power lies with wife Miriam Gonzalez Durantez. Couple invited cameras into the kitchen of their £1.5 million family home. They revealed why they decided to remain with their three sons in London. Mrs Clegg said: ‘It has been the best decision for all of us and if I may, also for Nick as well because you can see how politicians sometimes can get in a bubble and be completely distant from normal society’. Mr Clegg insisted he felt more full of ‘vim and vigour’ than he did in the start of the coalition. Focus groups conducted by the programme found Mr Clegg was characterised as the ‘chihuahua in a handbag’ of the government. He said: ‘If you know you disagree with someone, there’s no tension. I suspect meetings between Tony Blair and Gordon Brown were so fractious because they kind of should have agreed but they didn’t’.

FROST: [CONTENT] Nick Clegg | Miriam ||| Lib Dem | Whitehall ||| ||| Couple | ITV | Tonight ||| Putney | London [SUMMARY] Nick Clegg reveals balance of power lies with his wife Miriam in joint interview. Lib Dem leader was given option of moving family into Whitehall mansion. But the move was vetoed by his high flying lawyer wife after last election. Couple invited the cameras into their kitchen for ITV’s Tonight programme. They were seen drinking wine and cooking in the kitchen of their home in Putney, south west London.

FROST++: [CONTENT] Nick Clegg | Miriam ||| Lib Dem | Whitehall ||| ||| ITV | Tonight ||| Putney | London [SUMMARY] Nick Clegg reveals balance of power lies with his wife Miriam in joint interview. Lib Dem leader was given option of moving family into Whitehall mansion. But the move was vetoed by his high flying lawyer wife. The couple invited the cameras into their kitchen for ITV’s Tonight programme. They were seen drinking white wine and cooking in their home in Putney, London.
+ +Figure 7: Human written summary and single-best summaries using beam search for the CNN/DailyMail article in Figure 10. + +# Nucleus Sampling: PEGASUS + +$\mathbf{s}_1 \rightarrow$ Nick Clegg reveals balance902 of power lies with his wife Miriam Gonzalez Durantez irish. Couple performed joint interview to promote new series on ITV News Tonight. Cleggs spoke of pressure of moving from their Putney home into a taxpayer-funded mansion. +$\mathbf{s}_2 \rightarrow$ Lib Dem leader and wife Miriam give TV interview to ITV's Tonight program. Co-hosts have been pictured drinking white wine and cooking paella. They explained why she vetoed family heading to Whitehall mansion. Husband quipped: 'It's a grand word for Miriam basically saying no'. +$\mathbf{s}_3 \rightarrow$ Lib Dem leader admitted wife Miriam has the final say over family life. Couple chose not to move their three Laundry to Whitehall home earlier this May. +s4 $\rightarrow$ Nick Clegg and his wife Miriam Gonzalez Durantez open up in TV interview. Lib Dem leader revealed she Bloomberg-style 'discussions' in their home. Couple revealed they opted not to stay with their sons in their £1.5m house. +$\mathbf{s}_5 \rightarrow$ Liberal Democrats leader revealed balance of power lies 30-plus metres away. He brought cameras into family home due to Cameron and Miliband controversies. Lib Dem leader joked that wife Miriam vetoed their move to Whitehall. + +# Nucleus Sampling: FROST + +c1; s1 → [CONTENT] Liberal Democrats | Nick Clegg | Miriam Gonzalez Durantez ||| Putney | London ||| Cleggs ||| ITV ||| Couple [SUMMARY] Liberal Democrats leader Nick Clegg reveals balance of power with wife Miriam Gonzalez Durantez in joint interview. They invited cameras into kitchen of £1.5 million family home in Putney, south west London. Cleggs are seen trying white wine as they discuss family life and girlfriends. They were Furness on ITV programme and said they chose home to protect family. 
Couple say choosing home stopped them veering off from wider society 'in a bubble' +$\mathbf{c}_2; \mathbf{s}_2 \rightarrow [CONTENT]$ Lib Dem | ITV | Tonight | Miriam Gonzalez Durantez |||| Couple | Putney | London [SUMMARY] Lib Dem leader appeared on ITV's Tonight programme with wife Miriam Gonzalez Durantez. He was given the option of moving his family into a grace-and-favour government mansion but was vetoed. Couple invite cameras into family home in Putney, south west London to talk about family life. +$\mathbf{c}_3; \mathbf{s}_3 \rightarrow [CONTENT]$ Lib Dems | Miriam || Couple | ITV | Tonight || Putney | London || bestseller | Miliband [SUMMARY] Lib Dems leader revealed balance of power lies with wife Miriam. Couple invited cameras into kitchen of their home for ITV's Tonight programme. Asked why they kept the family home Galore in Putney, south west London. Documentary follows millions-selling bestseller's rave over Miliband'!! +\(\mathbf{c}_4; \mathbf{s}_4 \rightarrow [CONTENT] \text{Clegg} | \text{Putney} || || || | | Lib Dem [SUMMARY] Mrs Clegg explains why the family stayed in their £1.5 million home in Putney 1929. Comparing their relationship to that of a different marriage, she said: 'We just stand together and it's not any more of a difficulty than what many others have to go through'. Revealingly, suggests that although no longer planning a political showdown they are closer than they have ever been. Senior Lib Dem says he has learned how to ignore populist attacks. +$\mathbf{c}_5;\mathbf{s}_5\rightarrow [CONTENT]$ Lib Dem | ITV | Tonight || David Cameron | Lib Demaceae || Nick | Miriam Gonzalez Durantez [SUMMARY] Lib Dem leader allowed ITV's Tonight news programme into kitchen of their home. David Cameron elegant kitchens sometimes look like Lib Demaceae. Nick's high flying wife Miriam Gonzalez Durantez reveals balance of power is with her. + +Figure 8: Diverse summaries with nucleus sampling for the CNN/DailyMail article in Figure 10. 
We highlight spans in orange that are not faithful to the input document. We use $c*$ and $s*$ to denote different compositions and their corresponding summaries. + +# Composition Sampling: FROST + +$\mathbf{c}_1; \mathbf{s}_1 \rightarrow [CONTENT]$ Liberal Democrats | Nick Clegg | Miriam Gonzalez Durantez ||| Putney | London ||| Cleggs ||| ITV ||| Couple [SUMMARY] Liberal Democrats leader Nick Clegg reveals balance of power lies with wife Miriam Gonzalez Durantez. The couple invited the cameras into the kitchen of their £1.5 million family home in Putney, south west London. Cleggs were seen drinking white wine and cooking in the kitchen as they chatted about their family life. Focus groups conducted by the ITV programme found he was characterised as the 'chihuahua in a handbag' of the government. Couple believe that choosing to remain in their family home has stopped them from becoming cut off from the rest of the world. + +$\mathbf{c}_2; \mathbf{s}_2 \rightarrow [\text{CONTENT}]$ Lib Dem | ITV | Tonight | Miriam Gonzalez Durantez ||| ||| Couple | Putney | London [SUMMARY] Lib Dem leader appeared on ITV's Tonight programme with wife Miriam Gonzalez Durantez. He was given the option of moving his family into a grace-and-favour government mansion - but the move was vetoed by his wife. Couple invited the cameras into the kitchen of their £1.5 million family home in Putney, south west London. + +c3; s3 → [CONTENT] Lib Dems | Miriam ||| Couple | ITV | Tonight ||| Putney | London ||| bestseller | Miliband [SUMMARY] Lib Dems leader reveals balance of power lies with wife Miriam in joint interview. Couple invited the cameras into their kitchen for ITV's Tonight programme. They were seen drinking wine and cooking in their £1.5 million home in Putney, south west London. Interview comes after bestseller's row over Miliband's small kitchen. 
+ +$\mathbf{c}_4; \mathbf{s}_4 \rightarrow [\text{CONTENT}]$ Clegg | Putney || || || Lib Dem [SUMMARY] Mr Clegg and his wife invited the cameras into the kitchen of their Putney home. They were seen drinking wine and cooking as they chatted about their family life. The couple were asked why they decided to remain in their family home. Lib Dem leader was given the option of moving his family into a government mansion. + +$\mathbf{c}_5; \mathbf{s}_5 \rightarrow [CONTENT]$ Lib Dem | ITV | Tonight || David Cameron | Lib Demaceae || Nick | Miriam Gonzalez Durantez [SUMMARY] Lib Dem leader reveals balance of power in joint interview with wife on ITV's Tonight. David Cameron's large country kitchen and Lib Demaceae' small kitchen criticised. Nick and Miriam Gonzalez Durantez reveal why they stayed at home. + +# Composition Sampling: FROST++ + +c1; s1 → [CONTENT] Liberal Democrats | Nick Clegg | Miriam Gonzalez Durantez ||| Putney | London ||| Cleggs ||| ITV ||| [SUMMARY] Liberal Democrats leader Nick Clegg revealed balance of power lies with wife Miriam Gonzalez Durantez. The couple invited the cameras into the kitchen of their £1.5 million family home in Putney, south west London. The Cleggs were seen drinking white wine and cooking in the kitchen of their home. ITV programme found he was characterised as the 'chihuahua in a handbag' of the government. He also insisted he felt more full of 'vim and vigour' than he did in the start of the coalition. + +$\mathbf{c}_2; \mathbf{s}_2 \rightarrow [\text{CONTENT}]$ Lib Dem | ITV | Tonight | Miriam Gonzalez Durantez || Putney | London [SUMMARY] Lib Dem leader appeared on ITV's Tonight programme with wife Miriam Gonzalez Durantez. He was given the option of moving his family into a grace-and-favour government mansion - but the move was vetoed by his wife. The couple invited the cameras into the kitchen of their £1.5 million family home in Putney, south west London. 
+ +$\mathbf{c}_3; \mathbf{s}_3 \rightarrow [CONTENT]$ Lib | Miriam || ITV | Tonight || Putney | London || Miliband [SUMMARY] Lib Dem leader reveals balance of power lies with wife Miriam in joint interview. The couple invited the cameras into their kitchen for ITV's Tonight programme. They were seen drinking wine and cooking in their £1.5 million home in Putney, south west London. Comes after Miliband was widely mocked for posing with wife in his kitchen. + +$\mathbf{c}_4; \mathbf{s}_4 \rightarrow [\text{CONTENT}]$ Clegg | Putney || || || Lib Dem [SUMMARY] Mr Clegg and his wife invited the cameras into the kitchen of their Putney home. They were seen drinking wine and cooking as they chatted about their family life. The couple were asked why they decided to remain in their family home. Lib Dem leader was given the option of moving his family into a government mansion. + +$\mathbf{c}_5;\mathbf{s}_5\rightarrow$ [CONTENT] Lib Dem | ITV | Tonight ||| David Cameron | Lib ||| Nick | Miriam Gonzalez Durantez [SUMMARY] Lib Dem leader reveals balance of power in joint interview with wife on ITV's Tonight. Comes after David Cameron invited cameras into Lib Dem leader's country kitchen. Nick and Miriam Gonzalez Durantez were seen drinking wine and cooking. + +Figure 9: Diverse summaries with composition sampling for the CNN/DailyMail article in Figure 10. We highlight spans in orange that are not faithful to the input document. We use $c*$ and $s*$ to denote different compositions and their corresponding summaries. + +# Inside the Clegg kitchen: Over white wine and paella Nick reveals how Miriam put her foot down and refused to swap their family home for a grace-and-favour property + +It is a conversation that will be familiar to couples across the country. What one spouse thinks is a 'discussion', the other understands they are being overruled. 
+ +In a joint interview with his high flying lawyer wife Miriam Gonzalez Durantez, Nick Clegg revealed the balance of power lies where many long suspected: with her. + +After the last election, Mr Clegg was given the option of moving his family into a grace-and-favour government mansion - but the move was vetoed by his wife. + +After controversies over David Cameron's large country kitchen and Ed Miliband's small second kitchen, the couple invited the cameras into the kitchen of their £1.5 million family home in Putney, south west London for ITV's Tonight programme. Scroll down for video. + +Home: In a revealing joint interview, Liberal Democrats leader Nick Clegg (pictured) admitted his wife Miriam (right) makes the big decisions in their household. + +Mr Clegg is seen in the documentary drinking wine as his wife explains why she chose not to move her family into a government property. + +They revealed why they decided to remain with their three sons Antonio, Alberto, and Miguel, in the family home instead of making the move to Whitehall. + +Miriam, who uses her maiden name Gonzalez Durantez, told ITV News Political Editor Tom Bradby: + +'We had a lot of pressure at the time to go to one of the houses of the government. 'We discussed and thought the best thing would be for the children to stay here. + +Revealingly, Mr Clegg quipped: 'Discussion's a rather grand word for Miriam basically saying no.' + +But he quickly added: 'You were so right, you were so right.' + +However, the couple believe that choosing to remain in their family home has stopped them from becoming cut off from the rest of the world. + +Mrs Clegg said: 'If you look at it with perspective it has been the best decision for all of us and if I may, also for Nick as well because you can see how politicians sometimes can get in a bubble and be completely distant from normal society and I think if you're in your house in your neighbourhood, it's much easier really.' 
+ +The couple were asked why they decided to remain with their three sons Antonio, Alberto, and Miguel, in their £1.5 million family home in Putney, south west London. + +The couple believe that choosing to remain in their family home has stopped them from becoming cut off from the rest of the world. + +Asked how they coped with the 'terrific kicking' given to her husband she said she didn't take it 'too seriously'. 'Just like any other marriage, we just stand together and it's not any more of a difficulty than what many others have to go through and you know. You should never take it too seriously.' + +And if he wanted five more years Mr Clegg said: 'Ten, 15, 20 why not! In for a penny, in for a pound.' + +He also insisted he felt more full of 'vim and vigour' than he did in the start of the coalition. + +Focus groups conducted by the programme found Mr Clegg was characterised as the 'chihuahua in a handbag' of the government. When asked what kind of drink he was the participants settled on Babycham. + +Asked how they coped with the 'terrific kicking' given to her husband, Mrs Clegg said she didn't take it 'too seriously' + +The Cleggs were seen drinking white wine and cooking paella in the kitchen of their home as they chatted about their family life. + +Honest: 'Discussion's a rather grand word for Miriam basically saying no,' Mr Clegg (left) joked during the interview. + +Ed Miliband was widely mocked after he posed with wife Justine in this picture, which turned out to be a second kitchen in his north London home used for 'tea and snacks' + +David Cameron invited the cameras into his Oxfordshire home, where he revealed he did not plan to stand for a third term. + +Mr Clegg sought to explain why his relations with the Prime Minister always seemed to be so cordial. He said: 'If you know you disagree with someone, there's no tension. I suspect meetings between Tony Blair and Gordon Brown were so fractious because they kind of should have agreed but they didn't. 
'When David Cameron and I sit in a meeting, as we do week in week out, we kind of know that our starting point is that we come from different vantage points...'

He claimed not to read all newspapers, and had learned how to ignore attacks from his opponents.

'It sounds glib but I actually think you can't take it too seriously otherwise you spend all your time reacting to stuff and you just have to laugh at it because some of it is faintly silly.'

Mrs Clegg added that their close bond as a family has protected them from the political brickbats.

'From my point of view if I spend my time thinking about whatever a specific person may have said, I don't have any time to do what I want to do.

Figure 10: CNN/DailyMail input article for the summaries presented in Figures 7-9.

GOLD Question: What does the Premier of Victoria need to lead in the Legislative Assembly?

Context with Answer (in boldface): Answer: most seats Context: The Premier of Victoria is the leader of the political party or coalition with the **most seats** in the Legislative Assembly. The Premier is the public face of government and, with cabinet, sets the legislative and political agenda. Cabinet consists of representatives elected to either house of parliament. It is responsible for managing areas of government that are not exclusively the Commonwealth's, by the Australian Constitution, such as education, health and law enforcement. The current Premier of Victoria is Daniel Andrews.

# Single-best summaries

PEGASUS: How many seats does the Premier of Victoria have in the Legislative Assembly?

FROST: [CONTENT] Premier | Victoria | Legislative Assembly [SUMMARY] What does the Premier of Victoria have in the Legislative Assembly?

# Nucleus Sampling: PEGASUS

$\mathbf{s_1} \rightarrow$ The Premier of Victoria would have how many seats in the Legislative Assembly?

$\mathbf{s}_2 \rightarrow$ What is the politician MP expect to have in Legislative Assembly?
+ +$\mathbf{s}_3 \rightarrow$ Aside from being the leader of a political party or coalition, how is the Premier of Victoria Geometry of the Legislative Assembly? + +$\mathbf{s}_4 \rightarrow$ How many Legislative Assembly seats is the Premier of Victoria? + +$\mathbf{s}_5 \rightarrow$ What are the Legislative Assembly seats? + +# Nucleus Sampling: FROST + +$\mathbf{c_1};\mathbf{s_1}\rightarrow [\mathrm{CONTENT}]$ criteria | Premier | Victoria | Coalition [SUMMARY] What is a Varied criteria for a Premier of Victoria to possess in a Coalition? + +$\mathbf{c_2};\mathbf{s_2}\rightarrow [\mathrm{CONTENTS}]$ Premier | Victoria | leader | party | coalition | Legislative Assembly [SUMMARY] The Premier of Victoria isThe leader of the political party or coalition with to what in the Legislative Assembly? + +$\mathbf{c_3};\mathbf{s_3}\rightarrow [\mathrm{CONTENT}]$ number | Legislative Assembly | seats | Premier [SUMMARY] What is the number of Legislative Assembly seats that the Premier holds? + +$\mathbf{c_4};\mathbf{s_4}\rightarrow [\mathrm{CONTENT}]$ piece | legislature | leader | party | mixture | members [SUMMARY] What piece of the legislature does the leader of the party have a mixture of members? + +$\mathbf{c}_5; \mathbf{s}_5 \rightarrow [\text{CONTENT}]$ Premier | Victoria | Legislative Assembly [SUMMARY] What does the Premier of Victoria have in the Legislative Assembly + +# Composition Sampling: FROST + +$\mathbf{c}_1; \mathbf{s}_1 \rightarrow [\text{CONTENT}]$ Premier | Victoria | Legislative Assembly [SUMMARY] What does the Premier of Victoria have in the Legislative Assembly? + +$\mathbf{c_2};\mathbf{s_2}\rightarrow [CONTENT]$ Premier | party | coalition | Legislative Assembly [SUMMARY] The Premier of the political party or coalition has what in the Legislative Assembly? 
+ +$\mathbf{c}_3; \mathbf{s}_3 \rightarrow [\text{CONTENT}]$ Premier | Victoria | leader | party | Legislative Assembly [SUMMARY] The Premier of Victoria is the leader of the political party with what in the Legislative Assembly? + +$\mathbf{c}_4; \mathbf{s}_4 \rightarrow [\text{CONTENT}]$ Premier | Victoria | party | coalition [SUMMARY] What does the Premier of Victoria have in his political party or coalition? + +$\mathbf{c}_5; \mathbf{s}_5 \rightarrow [\text{CONTENT}]$ Premier | Victoria | leader | party | coalition | Legislative Assembly [SUMMARY] The Premier of Victoria is the leader of the political party or coalition with what in the Legislative Assembly? + +Figure 11: Example input passage with answer in boldface, human written question, and model predictions including diverse questions for the SQuAD Question Generation dataset. We highlight spans in orange that are not accurate with respect to the input context. We use $c*$ and $s*$ to denote different compositions and their corresponding questions. 
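The "Nucleus Sampling" outputs in Figures 8 and 11 are produced with top-p decoding (Holtzman et al., 2020): at each step, sampling is restricted to the smallest set of most-probable tokens whose cumulative probability reaches p. A minimal sketch of one such draw over a toy next-token distribution (the probabilities are illustrative, not from any of the models above):

```python
import random

def nucleus_sample(probs, p=0.9, rng=random):
    """Draw one token id from the smallest set of most-probable tokens
    whose cumulative mass reaches p; the low-probability tail is never sampled."""
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    nucleus, mass = [], 0.0
    for i in order:
        nucleus.append(i)
        mass += probs[i]
        if mass >= p:
            break
    weights = [probs[i] / mass for i in nucleus]  # renormalize inside the nucleus
    return rng.choices(nucleus, weights=weights, k=1)[0]

# toy next-token distribution over 5 tokens; with p=0.8 the nucleus is {0, 1}
probs = [0.5, 0.3, 0.1, 0.07, 0.03]
token = nucleus_sample(probs, p=0.8)
```

Because the tail is cut off per step, repeated draws still differ from one another, which is what produces the diverse (and occasionally unfaithful) summaries shown above.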
# "Is Whole Word Masking Always Better for Chinese BERT?": Probing on Chinese Grammatical Error Correction

Yong Dai $^{1*}$ , Linyang Li $^{2*}$ , Cong Zhou
$^{1*}$ , Zhangyin Feng $^{1}$ , Enbo Zhao $^{1}$ , Xipeng Qiu $^{2}$ , Piji Li $^{1}$ , Duyu Tang $^{1\dagger}$

$^{1}$ Tencent AI Lab, China

$^{2}$ Fudan University

{yongdai,brannzhou,enbozhao,aifeng,duyutang}@tencent.com

{linyangli19, xpqiu}@fudan.edu.cn

# Abstract

Whole word masking (WWM), which masks all subwords corresponding to a word at once, makes for a better English BERT model (Sennrich et al., 2016). For the Chinese language, however, there are no subwords because each token is an atomic character. A word in Chinese is instead a compositional unit consisting of multiple characters. This difference motivates us to investigate whether WWM leads to better context understanding for Chinese BERT. To this end, we introduce two probing tasks related to grammatical error correction and ask pretrained models to revise or insert tokens in a masked language modeling manner. We construct a dataset including labels for 19,075 tokens in 10,448 sentences. We train three Chinese BERT models with standard character-level masking (CLM), WWM, and a combination of CLM and WWM, respectively. Our major findings are as follows: First, when one character needs to be inserted or replaced, the model trained with CLM performs best. Second, when more than one character needs to be handled, WWM is the key to better performance. Finally, when fine-tuned on sentence-level downstream tasks, models trained with different masking strategies perform comparably.

# 1 Introduction

BERT (Devlin et al., 2018) is a Transformer-based pretrained model whose success started with the English language and has gradually spread to many others. The original BERT model is trained with character-level masking (CLM): a certain percentage (e.g., $15\%$) of tokens in the input sequence is masked, and the model learns to predict the masked tokens.
It is helpful to note that a word in the input sequence of BERT can be broken into multiple wordpiece tokens (Wu et al., 2016). For example, the input sentence "She is undeniably brilliant" is converted to the wordpiece sequence "She is un ##deni ##ably brilliant", where "##" is a special prefix indicating that the token should be attached to the previous one. Here the word "undeniably" is broken into three wordpieces {"un", "##deni", "##ably"}. In standard masked language modeling, CLM may mask any one of them. If the token "##ably" is masked, it is easier for the model to complete the prediction task because "un" and "##deni" are informative prompts. To address this, whole word masking (WWM) masks all three subtokens (i.e., {"un", "##deni", "##ably"}) within a word at once. For Chinese, however, each token is an atomic character that cannot be broken into smaller pieces. Many Chinese words are compounds consisting of multiple characters (Wood and Connelly, 2009). For example, "手机" (cellphone) is a word consisting of the two characters "手" (hand) and "机" (machine). Here, learning with WWM would lose the association among the characters that make up a word.

In this work, we introduce two probing tasks to study a Chinese BERT model's ability at character-level understanding. The first probing task is character replacement: given a sentence and a position where the corresponding character is erroneous, the task is to replace the erroneous character with the correct one. The second probing task is character insertion: given a sentence and the positions where a given number of characters should be inserted, the task is to insert the correct characters. We leverage the benchmark dataset on grammatical error correction (Rao et al., 2020a) and create a dataset including labels for 19,075 tokens in 10,448 sentences.
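The contrast between CLM and WWM can be made concrete with a small sketch over the wordpiece example above (the 15% rate follows the standard BERT recipe; the helper names and the `##`-grouping heuristic are ours, for illustration only):

```python
import random

def clm_mask_candidates(pieces, rate=0.15):
    """Character/token-level masking: every wordpiece is an independent candidate."""
    k = max(1, round(rate * len(pieces)))
    return set(random.sample(range(len(pieces)), k))

def wwm_mask_candidates(pieces, rate=0.15):
    """Whole word masking: '##'-continuation pieces are grouped with the
    piece that starts the word, and whole words are masked at once."""
    words = []  # one list of piece indices per whole word
    for i, piece in enumerate(pieces):
        if piece.startswith("##") and words:
            words[-1].append(i)  # continuation of the previous word
        else:
            words.append([i])    # start of a new word
    k = max(1, round(rate * len(words)))
    picked = random.sample(range(len(words)), k)
    return {i for w in picked for i in words[w]}

pieces = ["She", "is", "un", "##deni", "##ably", "brilliant"]
# CLM may mask "##ably" alone, leaving "un ##deni" as informative prompts;
# WWM that selects "un" always masks "##deni" and "##ably" along with it.
```

For Chinese, the grouping would come from a word segmenter rather than `##` prefixes, but the effect is the same: all characters of a segmented word are masked together.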
We train three baseline models on the same text corpus of 80B characters using CLM, WWM, and both CLM and WWM, respectively. We have the following major findings. (1) When one character needs to be inserted or replaced, the model trained with CLM performs best. Moreover, the model initialized from RoBERTa (Cui et al., 2019) and trained with WWM gradually gets worse with more training steps. (2) When more than one character needs to be handled, WWM is the key to better performance. (3) On sentence-level downstream tasks, the impact of these masking strategies is minimal, and models trained with them perform comparably.

# 2 Our Probing Tasks

We present two probing tasks with the goal of diagnosing the language understanding ability of Chinese BERT models. We describe the tasks and the dataset in this section.

The first probing task is character replacement, a subtask of grammatical error correction. Given a sentence $s = \{x_{1}, x_{2}, \ldots, x_{i}, \ldots, x_{n}\}$ of $n$ characters and an erroneous span $es = [i, i + 1, \ldots, i + k - 1]$ of $k$ characters, the task is to replace $es$ with a new span of $k$ characters.

The second probing task is character insertion, also a subtask of grammatical error correction. Given a sentence $s = \{x_{1}, x_{2}, \ldots, x_{i}, \ldots, x_{n}\}$ of $n$ characters, a position $i$, and a fixed number $k$, the task is to insert a span of $k$ characters between indices $i$ and $i + 1$.

We provide two examples of these probing tasks with $k = 1$ in Figure 1. For the character replacement task, the intended meaning of the sentence is "these are all my ideas". Due to the misuse of a character at the 7th position, its meaning changed significantly to "these are all my attention". The character replacement task is to replace the misused character "注" with "主".
For the character insertion task, what the writer wants to express is "Human is the most important factor." However, due to the lack of one character between the 5th and 6th positions, its meaning changed to "Human is the heaviest factor". The task is to insert "要" after the 5th position. Both tasks are also extended to multiple characters (i.e., $k \geq 2$); examples can be found in Section 3.2.

![](images/312ae2a73dcb84d57054b80ee0543bd9947f02ab4f82a0b50048c823dc4a3bd1.jpg)
Figure 1: Illustrative examples of the two probing tasks. For character replacement (upper box), the highlighted character at the 7th position should be replaced with another one. For character insertion (bottom box), one character should be inserted after the 5th position. English translations are given in parentheses.

We build a dataset based on the benchmarks of Chinese Grammatical Error Diagnosis (CGED) from 2016, 2017, 2018, and 2020 (Lee et al., 2016; Rao et al., 2017, 2018, 2020b). The CGED task seeks to identify grammatical errors in sentences written by non-native learners of Chinese (Yu et al., 2014). It covers four kinds of errors: insertion, replacement, redundancy, and ordering. The CGED dataset consists of sentence pairs, each comprising an erroneous sentence and an error-free sentence corrected by annotators. However, these sentence pairs do not provide information about the erroneous positions, which is indispensable for character replacement and insertion. To obtain such position information, we implement a modified character alignment algorithm (Bryant et al., 2017) tailored to the Chinese language. Through this algorithm, we obtain a dataset for insertion and replacement, both of which are suitable for examining the language learning ability of pretrained models. We leave the redundancy and ordering types to future work. The statistics of our dataset are detailed in Appendix A.
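The paper does not spell out its modified character alignment algorithm, but the underlying idea can be sketched with Python's standard `difflib`: align the erroneous and corrected sentences character by character and read the insertion/replacement positions off the opcodes. This is a rough illustration, not the authors' implementation, and the example sentences below are our reconstruction of the Figure 1 examples:

```python
import difflib

def extract_edits(erroneous, corrected):
    """Character-level alignment of a sentence pair; returns the
    insertion and replacement edits (redundancy/ordering ignored)."""
    edits = []
    matcher = difflib.SequenceMatcher(None, erroneous, corrected)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "replace" and (i2 - i1) == (j2 - j1):
            # same-length span: characters i1..i2-1 must be replaced
            edits.append(("replace", i1, i2, corrected[j1:j2]))
        elif op == "insert":
            # characters must be inserted before position i1 (0-indexed)
            edits.append(("insert", i1, j2 - j1, corrected[j1:j2]))
    return edits

# replacement: the 7th character (0-indexed position 6) is misused
print(extract_edits("这些都是我的注意", "这些都是我的主意"))
# insertion: one character is missing after the first 5 characters
print(extract_edits("人类是最重的因素", "人类是最重要的因素"))
```

Each extracted edit gives exactly the (position, length, gold span) triple that the probing tasks feed to the masked language model.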
+ +# 3 Experiments + +In this section, we first describe the BERT-style models that we examined, and then report numbers. + +# 3.1 Chinese BERT Models + +We describe the publicly available BERT models as well as the models we trained. + +
| Insertion | Len = 1 p@1 | Len = 1 p@10 | Len = 2 p@1 | Len = 2 p@10 | Len ≥ 3 p@1 | Len ≥ 3 p@10 | Avg. p@1 | Avg. p@10 |
|---|---|---|---|---|---|---|---|---|
| BERT-base | 76.0 | 97.0 | 37.2 | 76.0 | 14.4 | 50.1 | 42.5 | 74.4 |
| Ours-clm | 77.2 | 97.3 | 36.7 | 74.4 | 13.3 | 49.3 | 42.4 | 73.7 |
| Ours-wwm | 56.6 | 80.1 | 42.9 | 79.1 | 19.3 | 54.0 | 39.6 | 71.1 |
| Ours-clm-wwm | 71.3 | 95.1 | 42.6 | 80.9 | 20.6 | 53.0 | 44.8 | 76.3 |

| Replacement | Len = 1 p@1 | Len = 1 p@10 | Len = 2 p@1 | Len = 2 p@10 | Len ≥ 3 p@1 | Len ≥ 3 p@10 | Avg. p@1 | Avg. p@10 |
|---|---|---|---|---|---|---|---|---|
| BERT-base | 66.0 | 95.1 | 21.0 | 58.2 | 10.1 | 46.1 | 32.4 | 66.5 |
| Ours-clm | 67.4 | 96.6 | 20.4 | 58.3 | 7.4 | 36.9 | 31.7 | 63.9 |
| Ours-wwm | 34.8 | 68.2 | 25.7 | 65.3 | 7.4 | 35.2 | 22.6 | 56.2 |
| Ours-clm-wwm | 59.2 | 93.7 | 26.5 | 66.4 | 12.4 | 41.6 | 32.7 | 67.2 |
+ +Table 1: Probing results on character replacement and insertion. + +
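The p@k numbers in Table 1 measure whether the ground-truth character for each position appears among the model's top-k candidates. A minimal sketch of that metric (the candidate lists below are made up for illustration):

```python
def prediction_at_k(topk_candidates, gold_labels, k):
    """Percentage of positions whose gold character is covered by the
    model's top-k predictions for that position."""
    hits = sum(gold in cands[:k]
               for cands, gold in zip(topk_candidates, gold_labels))
    return 100.0 * hits / len(gold_labels)

# toy example: top candidate lists for three masked positions
cands = [["坏", "害", "毁"], ["严", "重"], ["不", "很", "太"]]
gold = ["坏", "重", "不"]
p1 = prediction_at_k(cands, gold, 1)   # only the 1st and 3rd positions hit at k=1
p10 = prediction_at_k(cands, gold, 10)
```

For multi-character spans, the same check is applied per position, which is why the Len = 2 and Len ≥ 3 columns are markedly lower than Len = 1.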
**Character Replacement**

| Input | Label | Prediction |
|---|---|---|
| 我没有权利破害别人的生活 (En: I have no right to destroy other people's lives.) | 坏 | 坏 (99.97%) |
| 代沟问题越来越深刻。 (En: The problem of generation gap is getting worse.) | 严重 | 严 (79.94%) 重 (91.85%) |

**Character Insertion**

| Input | Label | Prediction |
|---|---|---|
| 吸烟不但对自己的健康好,而且对非吸烟者带来不好的影响。 (En: Smoking is not only bad for your health, but also bad to non-smokers.) | 不 | 不 (99.98%) |
| 我下次去北京的时候,一定要吃北京烤鸭,我们在北京吃过的 是越南料理等外国的 (En: Next time I go to Beijing, I can not miss the Peking Duck. What we have eaten in Beijing are Vietnamese cuisine and other foreign dishes.) | 饭菜 | 美 (40.66%) 食 (33.55%) |
Figure 2: Top predictions of Ours-clm-wwm for the replacement and insertion types. For each position, the probability of the top prediction is given in parentheses. The model makes the correct prediction for the top three examples. For the bottom example, the prediction also makes sense, although it differs from the ground truth.

As mentioned earlier, BERT-base (Devlin et al., 2018) $^4$ is trained with the standard MLM objective. $^5$ To make a fair comparison of CLM and WWM, we train three simple Chinese BERT baselines from scratch $^6$ : (1) Ours-clm: we train this model using CLM. (2) Ours-wwm: this model differs only in that it is trained with WWM. (3) Ours-clm-wwm: this model is trained with both the CLM and WWM objectives. We train these three models on a text corpus of 80B characters consisting of news, wiki, and novel texts. For the WWM task, we first tokenize the raw data with the public word segmentation tool TexSmart (Zhang et al., 2020). The mask rate is $15\%$, which is commonly used in existing work. We use a maximum sequence length of 512 and the Adam optimizer (Kingma and Ba, 2014) with a batch size of 8,192. We set the learning rate to 1e-4 with a linear schedule, 5k warmup steps, and 100k training steps in total. Models are trained on 64 Tesla V100 GPUs for about 7 days.

# 3.2 Probing Results

We present the results on the two probing tasks here. Models are evaluated by prediction@k (p@k), denoting whether the ground truth for each position is covered in the top-k predictions. From Table 1, we can draw the following conclusions. First, Ours-clm consistently performs better than Ours-wwm on probing tasks where one character needs to be replaced or inserted. We suppose this is because WWM loses the association among the characters of a word. Second, WWM is crucial for better performance when more than one character needs to be corrected.
This phenomenon can be observed from the results of Ours-wwm and Ours-clm-wwm, which both adopt WWM and perform better than Ours-clm. Third, pretrained with a mixture of CLM and WWM, Ours-clm-wwm performs better than Ours-wwm in the one-character setting and better than Ours-clm when more than one character needs to be handled. For each probing task, two examples with predictions produced by Ours-clm-wwm are given in Figure 2.

![](images/d2ab1113ac205db7884d81bf6446f0c7bd491714be705b1025425fba96163459.jpg)

![](images/313ee3df41d77df447a6c48602bbdd433e2ce720e269f8d2e683cce54c3e7eaa.jpg)
Figure 3: Model performance at different training steps on the probing task of character insertion. The top and bottom figures give the results evaluated on spans with one and two characters, respectively.

# 3.3 Analysis

To further analyze how CLM and WWM affect performance on the probing tasks, we initialized our model from RoBERTa (Cui et al., 2019) and further trained the baseline models. We show the performance of these models at different training steps on the insertion task. From Figure 3 (top), we observe that as the number of training steps increases, the performance of Ours-wwm decreases.

In addition, we evaluate the performance of the trained BERT models on downstream tasks with model parameters fine-tuned. The performance of Ours-clm-wwm is comparable to that of Ours-wwm and Ours-clm. More information can be found in Appendix C.

# 4 Related Work

We describe related studies on Chinese BERT models and on probing BERT.

The authors of BERT (Devlin et al., 2018) provided the first Chinese BERT model, which was trained on Chinese Wikipedia data. On top of that, Cui et al. (2019) trained RoBERTa-wwm-ext with WWM on extended data. Cui et al. (2020) further trained a Chinese ELECTRA model and MacBERT, neither of which uses [MASK] tokens.
ELECTRA was trained with a token-level binary classification task that determines whether a token is the original one or an artificial replacement. In MacBERT, [MASK] tokens were replaced with synonyms, and the model was trained with WWM and n-gram masking. ERNIE (Sun et al., 2019) was trained with entity masking, which is similar to WWM except that the tokens corresponding to an entity are masked at once. Linguistic features are considered in more recent work. For example, AMBERT (Zhang and Li, 2020) and Lattice-BERT (Lai et al., 2021) both take word information into consideration, and ChineseBERT (Sun et al., 2021) utilizes the pinyin and glyphs of characters.

Probing aims to examine the language understanding ability of pretrained models like BERT while model parameters are frozen, i.e., without fine-tuning on downstream tasks. Petroni et al. (2019) study how well pretrained models learn factual knowledge. The idea is to design a natural language template with a [MASK] token, such as "the wife of Barack Obama is [MASK]". If the model predicts the correct answer "Michelle Obama", it shows that pretrained models learn factual knowledge to some extent. Similarly, Davison et al. (2019) study how pretrained models learn commonsense knowledge, and Talmor et al. (2020) examine tasks that require symbolic understanding. Wang and Hu (2020) propose to probe Chinese BERT models in terms of linguistic and world knowledge.

# 5 Conclusion

In this work, we present two Chinese probing tasks: character insertion and character replacement. We provide three simple pretrained models, dubbed Ours-clm, Ours-wwm, and Ours-clm-wwm, which are pretrained with CLM, WWM, and a combination of CLM and WWM, respectively. Ours-wwm is prone to losing the association among the characters of a word, resulting in poor performance on probing tasks where one character needs to be inserted or replaced. Moreover, WWM plays a key role when two or more characters need to be corrected.
# References

Christopher Bryant, Mariano Felice, and Edward Briscoe. 2017. Automatic annotation and evaluation of error types for grammatical error correction. Association for Computational Linguistics.
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2020. Revisiting pre-trained models for Chinese natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 657-668, Online. Association for Computational Linguistics.
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pre-training with whole word masking for Chinese BERT. arXiv preprint arXiv:1906.08101.
Joe Davison, Joshua Feldman, and Alexander M Rush. 2019. Commonsense knowledge mining from pretrained models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1173-1178.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Yuxuan Lai, Yijia Liu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2021. Lattice-BERT: Leveraging multi-granularity representations in Chinese pre-trained language models. arXiv preprint arXiv:2104.07204.
Lung-Hao Lee, Gaoqi Rao, Liang-Chih Yu, Endong Xun, Baolin Zhang, and Li-Ping Chang. 2016. Overview of NLP-TEA 2016 shared task for Chinese grammatical error diagnosis. In Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA2016), pages 40-48, Osaka, Japan. The COLING 2016 Organizing Committee.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? arXiv preprint arXiv:1909.01066.
Gaoqi Rao, Qi Gong, Baolin Zhang, and Endong Xun. 2018. Overview of NLPTEA-2018 shared task for Chinese grammatical error diagnosis. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications, pages 42-51, Melbourne, Australia. Association for Computational Linguistics.
Gaoqi Rao, Erhong Yang, and Baolin Zhang. 2020a. Overview of NLPTEA-2020 shared task for Chinese grammatical error diagnosis. In Proceedings of the 6th Workshop on Natural Language Processing Techniques for Educational Applications, pages 25-35.
Gaoqi Rao, Erhong Yang, and Baolin Zhang. 2020b. Overview of NLPTEA-2020 shared task for Chinese grammatical error diagnosis. In Proceedings of the 6th Workshop on Natural Language Processing Techniques for Educational Applications, pages 25-35, Suzhou, China. Association for Computational Linguistics.
Gaoqi Rao, Baolin Zhang, Endong Xun, and Lung-Hao Lee. 2017. IJCNLP-2017 task 1: Chinese grammatical error diagnosis. In Proceedings of the IJCNLP 2017, Shared Tasks, pages 1-8, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019.
ERNIE: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223.
Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu, and Jiwei Li. 2021. ChineseBERT: Chinese pretraining enhanced by glyph and pinyin information. arXiv preprint arXiv:2106.16038.
Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2020. oLMpics: On what language model pre-training captures. Transactions of the Association for Computational Linguistics, 8:743-758.
Zhiruo Wang and Renfen Hu. 2020. Intrinsic knowledge evaluation on Chinese language models. arXiv preprint arXiv:2011.14277.
C. Wood and V. Connelly. 2009. Contemporary perspectives on reading and spelling.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, Yin Tian, Qianqian Dong, Weitang Liu, Bo Shi, Yiming Cui, Junyi Li, Jun Zeng, Rongzhao Wang, Weijian Xie, Yanting Li, Yina Patterson, Zuoyu Tian, Yiwen Zhang, He Zhou, Shaoweihua Liu, Zhe Zhao, Qipeng Zhao, Cong Yue, Xinrui Zhang, Zhengliang Yang, Kyle Richardson, and Zhenzhong Lan. 2020a. CLUE: A Chinese language understanding evaluation benchmark. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4762-4772, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, et al. 2020b. CLUE: A Chinese language understanding evaluation benchmark. arXiv preprint arXiv:2004.05986.
Liang-Chih Yu, Lung-Hao Lee, and Liping Chang. 2014. Overview of grammatical error diagnosis for learning Chinese as a foreign language.
In Proceedings of the 1st Workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA'14), pages 42-47.
Haisong Zhang, Lemao Liu, Haiyun Jiang, Yangming Li, Enbo Zhao, Kun Xu, Linfeng Song, Suncong Zheng, Botong Zhou, Jianchen Zhu, Xiao Feng, Tao Chen, Tao Yang, Dong Yu, Feng Zhang, Zhanhui Kang, and Shuming Shi. 2020. TexSmart: A text understanding system for fine-grained NER and enhanced semantic analysis. arXiv preprint arXiv:2012.15639.
Xinsong Zhang and Hang Li. 2020. AMBERT: A pre-trained language model with multi-grained tokenization. arXiv preprint arXiv:2008.11869.

# A Statistics of the dataset
| | Replacement | Insertion | Total |
|---|---|---|---|
| Length = 1 | 5,522 | 4,555 | 10,077 |
| Length = 2 | 2,004 | 1,337 | 3,341 |
| Length ≥ 3 | 305 | 383 | 688 |
| No. sentences | 5,727 | 4,721 | 10,448 |
| No. spans | 7,831 | 6,275 | 14,106 |
| No. chars | 10,542 | 8,533 | 19,075 |
Table 2: The statistics of our dataset.

# B Probing results from models with different initialization

We also verify the performance of models initialized from BERT (Devlin et al., 2018) and RoBERTa (Cui et al., 2019) on the probing tasks. The results are detailed in Table 3, from which we can draw conclusions consistent with those in the previous section.

# C Evaluation on downstream tasks

We test the performance of BERT-style models on tasks including text classification (TNEWS, IFLYTEK), sentence-pair semantic similarity (AFQMC), coreference resolution (WSC), key word recognition (CSL), and natural language inference (OCNLI) (Xu et al., 2020a). We follow the standard fine-tuning hyper-parameters used in Devlin et al. (2018); Xu et al. (2020b); Lai et al. (2021) and report results on the development sets. The detailed results are shown in Table 4.
**Insertion**

| Model | Initialization | p@1 (Len=1) | p@10 (Len=1) | p@1 (Len=2) | p@10 (Len=2) | p@1 (Len≥3) | p@10 (Len≥3) | p@1 (Avg) | p@10 (Avg) |
|---|---|---|---|---|---|---|---|---|---|
| BERT-base | — | 76.0 | 97.0 | 37.2 | 76.0 | 14.4 | 50.1 | 42.5 | 74.4 |
| Ours-clm | from scratch | 77.2 | 97.3 | 36.7 | 74.4 | 13.3 | 49.3 | 42.4 | 73.7 |
| Ours-wwm | from scratch | 56.6 | 80.1 | 42.9 | 79.1 | 19.3 | 54.0 | 39.6 | 71.1 |
| Ours-clm-wwm | from scratch | 71.3 | 95.1 | 42.6 | 80.9 | 20.6 | 53.0 | 44.8 | 76.3 |
| Ours-clm | from BERT | 79.2 | 97.7 | 40.0 | 77.6 | 16.2 | 53.5 | 45.1 | 76.3 |
| Ours-wwm | from BERT | 61.2 | 87.7 | 43.4 | 79.4 | 20.1 | 56.4 | 41.6 | 74.5 |
| Ours-clm-wwm | from BERT | 73.1 | 96.1 | 41.8 | 80.6 | 20.6 | 56.7 | 45.2 | 77.8 |
| Ours-clm | from RoBERTa | 79.4 | 97.9 | 42.0 | 80.4 | 20.6 | 52.3 | 47.3 | 76.9 |
| Ours-wwm | from RoBERTa | 61.4 | 87.9 | 44.3 | 79.9 | 20.1 | 59.3 | 41.9 | 75.7 |
| Ours-clm-wwm | from RoBERTa | 77.3 | 97.5 | 46.8 | 83.3 | 22.5 | 58.7 | 48.9 | 79.8 |

**Replacement**

| Model | Initialization | p@1 (Len=1) | p@10 (Len=1) | p@1 (Len=2) | p@10 (Len=2) | p@1 (Len≥3) | p@10 (Len≥3) | p@1 (Avg) | p@10 (Avg) |
|---|---|---|---|---|---|---|---|---|---|
| BERT-base | — | 66.0 | 95.1 | 21.0 | 58.2 | 10.1 | 46.1 | 32.4 | 66.5 |
| Ours-clm | from scratch | 67.4 | 96.6 | 20.4 | 58.3 | 7.4 | 36.9 | 31.7 | 63.9 |
| Ours-wwm | from scratch | 34.8 | 68.2 | 25.7 | 65.3 | 7.4 | 35.2 | 22.6 | 56.2 |
| Ours-clm-wwm | from scratch | 59.2 | 93.7 | 26.5 | 66.4 | 12.4 | 41.6 | 32.7 | 67.2 |
| Ours-clm | from BERT | 69.0 | 96.9 | 24.5 | 64.7 | 8.4 | 47.3 | 34.0 | 69.6 |
| Ours-wwm | from BERT | 40.6 | 81.6 | 27.2 | 67.9 | 8.4 | 39.4 | 25.4 | 63.0 |
| Ours-clm-wwm | from BERT | 61.6 | 94.9 | 27.6 | 67.8 | 10.4 | 47.0 | 33.2 | 69.9 |
| Ours-clm | from RoBERTa | 69.7 | 96.8 | 26.7 | 68.0 | 12.1 | 51.7 | 36.2 | 72.2 |
| Ours-wwm | from RoBERTa | 41.7 | 80.9 | 28.2 | 68.2 | 12.4 | 47.2 | 27.4 | 65.4 |
| Ours-clm-wwm | from RoBERTa | 67.3 | 96.7 | 28.4 | 69.7 | 15.7 | 54.2 | 37.1 | 73.5 |

Table 3: Probing results from models with different initialization.
| Model | Initialization | TNEWS | IFLYTEK | AFQMC | OCNLI | WSC | CSL | Average |
|---|---|---|---|---|---|---|---|---|
| BERT-base | — | 57.1 | 61.4 | 74.2 | 75.2 | 78.6 | 81.8 | 71.4 |
| Ours-clm | from scratch | 57.3 | 60.3 | 72.8 | 73.9 | 79.3 | 68.7 | 68.7 |
| Ours-wwm | from scratch | 57.6 | 60.9 | 73.8 | 75.4 | 81.9 | 75.4 | 70.8 |
| Ours-clm-wwm | from scratch | 57.3 | 60.3 | 72.3 | 75.6 | 79.0 | 79.5 | 70.7 |
| Ours-clm | from BERT | 57.6 | 60.6 | 72.8 | 75.5 | 79.3 | 80.1 | 71.0 |
| Ours-wwm | from BERT | 58.3 | 60.8 | 71.7 | 76.1 | 79.9 | 80.7 | 71.3 |
| Ours-clm-wwm | from BERT | 58.1 | 60.8 | 72.3 | 75.8 | 80.3 | 79.9 | 71.2 |
| Ours-clm | from RoBERTa | 57.9 | 60.8 | 74.7 | 75.7 | 83.1 | 82.1 | 72.4 |
| Ours-wwm | from RoBERTa | 58.1 | 61.1 | 73.9 | 76.0 | 82.6 | 81.7 | 72.2 |
| Ours-clm-wwm | from RoBERTa | 58.1 | 61.0 | 74.0 | 75.9 | 84.0 | 81.8 | 72.5 |
Table 4: Evaluation results on the dev set of each downstream task. Model parameters are fine-tuned.
# Translation Error Detection as Rationale Extraction

Marina Fomicheva

University of Sheffield

m.fomicheva@sheffield.ac.uk

Lucia Specia

Imperial College London

l.specia@imperial.ac.uk

Nikolaos Aletras

University of Sheffield

n.aletras@sheffield.ac.uk

# Abstract

Recent Quality Estimation (QE) models based on multilingual pre-trained representations have achieved very competitive results in predicting the overall quality of translated sentences.
However, detecting specifically which translated words are incorrect is a more challenging task, especially when dealing with limited amounts of training data. We hypothesize that, not unlike humans, successful QE models rely on translation errors to predict overall sentence quality. By exploring a set of feature attribution methods that assign relevance scores to the inputs to explain model predictions, we study the behaviour of state-of-the-art sentence-level QE models and show that explanations (i.e. rationales) extracted from these models can indeed be used to detect translation errors. We therefore (i) introduce a novel semi-supervised method for word-level QE; and (ii) propose to use the QE task as a new benchmark for evaluating the plausibility of feature attribution, i.e. how interpretable model explanations are to humans.

# 1 Introduction

Quality Estimation (QE) is the task of predicting Machine Translation (MT) quality at inference time, when no gold standard human translation is available (Blatz et al., 2004; Specia et al., 2009). QE can be framed as a word-level or a sentence-level task. Both tasks have numerous practical applications, such as deciding whether a given MT output can be published without editing, or highlighting potential critical errors. Current QE approaches fine-tune powerful representations from pre-trained multilingual encoders such as BERT (Devlin et al., 2018) or XLM-R (Conneau et al., 2019). In the recent Shared Task on QE at WMT2020 (Specia et al., 2020), these approaches have achieved very high performance at predicting sentence-level translation quality (up to 0.9 Pearson correlation with human judgements for some language pairs). However, as evidenced by these results, the accuracy of word-level prediction still leaves room for improvement. This is partly due to the limited amount of training data. Word-level error annotation is especially time-consuming and expensive, as it requires work from bilingual experts.
In this work we introduce a new semi-supervised approach to word-level QE that removes the need for training data at the word level. To achieve this, we propose addressing QE as a rationale extraction task (Lei et al., 2016).

Explainability is a broad area aimed at explaining predictions of machine learning models (Lipton, 2016). Rationale extraction methods achieve this by selecting a portion of the input that justifies model output for a given data point. In translation, human perception of quality is guided by the presence of translation errors (Freitag et al., 2021). We hypothesize that sentence-level QE models also rely on translation errors to make predictions. If that is the case, explanations for sentence-level predictions can be used to detect translation errors, thus removing the need for word-level labeled training data. To extract model explanations, we use post hoc rationale extraction methods (Sundararajan et al., 2017), which try to explain the predictions of a given model (as opposed to modifying its architecture or introducing constraints during training), since one of our goals is to study to what extent existing QE models rely on the same information as humans to make predictions.

At the same time, by using word-level errors as explanations for sentence-level QE scores, we introduce a new benchmark for evaluating explainability methods. Recent work has introduced various datasets for measuring the agreement between rationales extracted from NLP models and those provided by humans (DeYoung et al., 2019). QE is different from these datasets in various important aspects. First, it is a regression task, as opposed to the binary or multiclass text classification mainly explored in previous work. Second, it is a multilingual task where the output score captures the relationship between source and target sentences.
Finally, manual annotation of translation errors is a practical task with a long tradition in MT research and translation studies (Lommel et al., 2014), and thus offers an interesting alternative to human explanations collected specifically for evaluating rationale extraction methods.

Our main contributions are:

- We introduce a novel semi-supervised approach for word-level QE. We provide practical recipes on how feature attribution methods can be used to derive information on translation errors from sentence-level models.
- We provide insights into the behaviour of state-of-the-art (SOTA) QE models by analysing attributions to different parts of the input sequence (source vs. target sentence, correct words vs. errors) at different hidden layers.
- We propose to use the QE task as a new benchmark for evaluating the plausibility of feature attribution explanations, i.e. how interpretable model explanations are to humans (Jacovi and Goldberg, 2020).

# 2 Background and Related Work

Quality Estimation Current SOTA models in sentence-level QE, which is typically framed as a regression task, mainly use multilingual representations from pre-trained transformers (Devlin et al., 2018), notably XLM-R (Conneau et al., 2019). The input to a sentence-level QE model is a concatenation of the source and translated sentences, separated by the [SEP] token. The sequence is encoded by the pre-trained Transformer model, and the [CLS] token is passed through a multilayer perceptron (MLP) layer to obtain a sentence-level score. During fine-tuning, both the parameters of the pre-trained model and the parameters corresponding to the MLP layer are updated.

Word-level QE is typically addressed as a binary classification task, where the QE model needs to predict a binary label indicating whether a word is correct or wrong for each word in the MT output (Lee, 2020).
As illustrated in Figure 1 (left), some supervised approaches use both sentence-level and word-level objectives in a multi-task setting, which results in superior performance (Kim et al., 2017; Lee, 2020). Methods that do not require word-level training data either need access to the MT model (Rikters and Fishel, 2017; Fomicheva et al., 2020b), or still treat the problem as a supervised task but use synthetically generated data for supervision (Tuan et al., 2021).

Rationale Extraction for NLP SOTA NLP models based on deep neural networks achieve high performance in a variety of tasks, often at the cost of interpretability (Lipton, 2016). Recent work aims to address this issue by focusing on two different goals. On the one hand, the aim is to produce justifications for model predictions that are plausible to the users, in order to increase users' trust (Ribeiro et al., 2016). On the other hand, the aim is to reveal the inner workings of the model and faithfully explain its predictions, so the explanation can be useful to model developers (Jacovi and Goldberg, 2020).

Typically, explainability methods operate by selecting a portion of the input that justifies model prediction for a single data point. This can be done either by modifying the model architecture, or by trying to explain the predictions of a given model. The first type of approaches (a.k.a. rationalization by construction) involves imposing restrictions on the generated rationales to satisfy certain constraints, e.g. compactness (Yu et al., 2019; Chalkidis et al., 2021). Note that such restrictions often result in lower performance and indeed are not guaranteed to explain the behaviour of an unconstrained model (Jain et al., 2020). The second type of approaches (the so-called post hoc methods) usually relies on feature attribution methods, which assign an importance value to each input feature of a network (Sundararajan et al., 2017; Schulz et al., 2020).
These methods do not allow for introducing useful biases during training, but focus on faithfully explaining model behaviour.

Feature attribution has a long tradition in image recognition tasks (Simonyan et al., 2013) and has only recently been applied to some NLP tasks, most commonly text classification (DeYoung et al., 2019). QE is fundamentally different from text classification, where clues are typically separate words or phrases (Zaidan et al., 2007) which often can be considered independently of the rest of the text. This independence assumption does not hold for the task of evaluating translation quality, where a word cannot be identified as a clue (e.g. a translation error) without considering the surrounding context.

![](images/90657f2804fd3aeede8cf088f4ca66b71959fb66dfb6c51507e307a3768034bf.jpg)
Figure 1: Fully supervised word-level QE (left) and semi-supervised word-level QE as rationale extraction (right). Dashed and solid lines represent training and test time, respectively.

![](images/b0df2f32eab1abaa27714b49a336c7c30968bcb208e0524a7d8de80c402b0ecc.jpg)

Furthermore, SOTA NLP models based on contextualized representations for input words make rationale extraction especially challenging, as the representation for a given word can encode not only the word identity but also its interactions with other words in the text. Recent work has revealed various interesting properties that characterize the information flow through hidden layers in deep transformer models (Voita et al., 2019; De Cao et al., 2020; Yun et al., 2021). We provide additional insights on this topic in Section 5.2.

# 3 Translation Error Prediction as Rationale Extraction

We propose framing semi-supervised word-level QE as rationale extraction from sentence-level QE models.
Instead of training a dedicated supervised model for word-level prediction, we propose deriving word-level scores from a strong sentence-level QE model by extracting explanations for model predictions (see Figure 1 (right)). Given a trained sentence-level QE model and the test data, rationale extraction methods detect the parts of the input that are relevant for model predictions on a sample-by-sample basis. We hypothesize that the words with the highest relevance scores should correspond to actual translation errors at the word level.

# 3.1 Approach

More formally, given the source sequence $\mathbf{x}^S = x_1^S,\dots,x_{|S|}^S$, the target sequence $\mathbf{x}^T = x_1^T,\dots,x_{|T|}^T$ and the QE model $M(\mathbf{x}^S,\mathbf{x}^T) = \hat{y}$ that predicts sentence MT quality, a feature attribution method produces a vector of attribution scores $\mathbf{a} = a_1,\dots,a_{|S|+|T|}$, which represent the contribution of each source and target word to the prediction $\hat{y}$.

Crucially, no word-level labels are required for training. For evaluation, the attribution scores are compared against binary gold labels $\mathbf{w} = w_{1},\dots,w_{|T|}\in \{0,1\}$ indicating whether each given word in the target sequence is an error or correct.

The predictive models for QE explored in our experiments are built by fine-tuning multilingual representations from pre-trained transformers. The Transformer model starts from context-agnostic representations consisting of positional and token embeddings. These representations are passed through a set of hidden layers, where at each layer the representations are iteratively updated via multi-head attention. This allows the hidden representation for each token to encode information on other words in the sentence.
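The word-level recipe in Section 3.1 reduces to a few lines once the attribution vector is available: slice the target portion out of $\mathbf{a} = a_1,\dots,a_{|S|+|T|}$ and rank target positions by score. The sketch below is purely illustrative; the function name and toy scores are ours, not from the paper.

```python
# Hypothetical helper: given attributions over the concatenated source and
# target tokens, keep the target slice and rank target word positions from
# most to least suspicious (highest attribution first).
def rank_error_candidates(attributions, src_len):
    target_scores = attributions[src_len:]
    return sorted(range(len(target_scores)), key=lambda i: -target_scores[i])

# Toy example: 3 source tokens followed by 4 target tokens.
a = [0.10, 0.20, 0.10, 0.05, 0.90, 0.30, 0.60]
print(rank_error_candidates(a, src_len=3))  # → [1, 3, 2, 0]
```

The words ranked first would then be compared against the gold labels $\mathbf{w}$ at evaluation time.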
We note that attribution to the input tokens or to the embedding layer can hardly succeed in detecting translation errors, as those cannot be identified independently from the context given by the source and target sentence. In this work, we perform feature attribution to hidden states at different layers and analyse which layer results in attribution scores that best correspond to translation errors.

# 3.2 Feature Attribution Methods

Feature attribution methods can be divided into those providing explanations by simplification, such as LIME (Ribeiro et al., 2016); gradient-based explanations (Sundararajan et al., 2017); and perturbation-based explanations (Schulz et al., 2020).

We select three popular methods for rationale extraction, which (i) do not require modifying the model architecture or re-training the model and (ii) allow attribution to hidden states. For comparison, we also use LIME, which operates directly on the input text. We note that this set is not exhaustive of SOTA rationale extraction methods. Our main goal is not to conduct a comparative study of feature attribution methods but rather to test whether it is possible to address word-level QE as a rationale extraction task without any word-level supervision.

LIME (Ribeiro et al., 2016) is a simplification-based explanation technique, which fits a sparse linear model in the vicinity of each test instance to approximate the decision boundary of the complex model. The data for fitting the linear model is produced by perturbing the given instance and computing model predictions. Linear model coefficients are then used as attribution scores for each input feature. For NLP tasks, features correspond to input tokens and perturbation is achieved by randomly removing words from the sequence.

Information Bottleneck is a perturbation-based method originally proposed by Schulz et al. (2020) for the task of image recognition.
The method applies the idea of the information bottleneck (Tishby and Zaslavsky, 2015) to feature attribution. Specifically, it injects noise into an intermediate layer representation. The amount of noise injected at the position corresponding to each input feature is optimized to minimize the loss of the main task while at the same time maximizing the overall amount of injected noise.

Integrated Gradients (Sundararajan et al., 2017) is a gradient-based method similar to the traditional salience and input*gradients approaches (Simonyan et al., 2013). The latter takes the signed partial derivatives of the output with respect to the input and multiplies them by the input itself. Intuitively, this is analogous to inspecting the products of model coefficients and feature values in linear models (Sundararajan et al., 2017). Integrated Gradients improves on this by defining a baseline input and computing the average gradient while the input varies along a linear path from the baseline input to the actual input. The baseline is defined by the user depending on the task. For image recognition, a black image is used as the baseline. It is not clear what such a baseline representation should be in the case of language tasks. Here, we select a zero baseline for simplicity. Better results could be achieved with a more informed choice of baseline, and we leave this to future work.

Attention Finally, we test attention as an attribution method. Self-attention mechanisms have been widely studied in the context of explainability (Jain and Wallace, 2019; Serrano and Smith, 2019; Bujel et al., 2021). To compute a single attention score for a transformer-based model with multi-head attention, we average the weights across the different attention heads.
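As a concrete illustration of Integrated Gradients with a zero baseline, the sketch below approximates the path integral with a midpoint Riemann sum on a toy differentiable "model" f(x) = Σ c_i x_i², whose gradient 2·c·x we supply analytically. For a real QE model the gradients would come from backpropagation through the network; all names and values here are our own assumptions for illustration.

```python
import numpy as np

def integrated_gradients(x, grad_fn, steps=200):
    """Attributions of f at x w.r.t. a zero baseline: (x - 0) times the
    average gradient along the straight path from the baseline to x."""
    x = np.asarray(x, dtype=float)
    baseline = np.zeros_like(x)                   # zero baseline, as in the paper
    alphas = (np.arange(steps) + 0.5) / steps     # midpoint rule on [0, 1]
    avg_grad = np.mean(
        [grad_fn(baseline + a * (x - baseline)) for a in alphas], axis=0
    )
    return (x - baseline) * avg_grad

# Toy model f(x) = sum(c * x**2) with analytic gradient 2*c*x.
c = np.array([1.0, 3.0, 0.5])
x = np.array([1.0, -2.0, 4.0])
attr = integrated_gradients(x, lambda z: 2 * c * z)
# For this f the attributions equal c * x**2 exactly, and by the
# completeness axiom they sum to f(x) - f(baseline).
print(attr, attr.sum())
```

The completeness property (attributions summing to the difference in model output between the input and the baseline) is what distinguishes Integrated Gradients from plain input*gradients.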
# 4 Experimental Setup

# 4.1 Evaluation Metrics

Given a test set with both sentence-level and word-level gold labels, we want to measure to what extent the words with the highest attributions according to the QE model correspond to human annotations for MT errors. Note that we cannot use the evaluation metrics traditionally employed for assessing the performance of word-level QE, such as the F1 score and Matthews correlation coefficient (Specia et al., 2020), as they require binary predictions, while feature attribution methods return continuous scores. Instead, we rely on metrics based on class probabilities (Atanasova et al., 2020). Since attribution methods proceed on an instance-by-instance basis and the scores produced for different instances are not necessarily comparable, we compute the evaluation metrics for each instance separately and average the results across all instances in the test set.

AUC score For each instance, we compute the area under the receiver operating characteristic curve (AUC score) to compare the continuous attribution scores $\mathbf{a}$ against the binary gold labels $\mathbf{w}$. For a test set with $N$ instances:

$$
\mathrm{AUC} = \frac{1}{N} \sum_{n} \mathrm{AUC}_n\left(\mathbf{w}_n, \mathbf{a}_n^{\mathbf{x}^T}\right) \tag{1}
$$

Average Precision The AUC score can be overly optimistic for imbalanced data. Therefore, we also use Average Precision (AP).

Recall at Top-K In addition, we report Recall-at-Top-K, commonly used in information retrieval. Applied to our setting, this metric computes the proportion of words with the highest attribution that correspond to translation errors against the total number of errors in the MT output. Thus, for a given instance (we omit the instance index $n$ here for simplicity):

$$
\operatorname{Rec@TopK} = \frac{1}{k} \sum_{j \in \mathbf{e}_{1:k}} \mathbf{w}_j \tag{2}
$$
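The per-instance metrics of this section (AUC, AP, Recall-at-Top-K, and the Accuracy-at-Top-1 also reported in this work) can be written out in a few lines. The sketch below uses pure-Python stand-ins for the usual library calls (in practice one would use sklearn's `roc_auc_score` and `average_precision_score`); `w` holds the binary gold labels of the target words (1 = "BAD") and `a` the attribution scores.

```python
def auc(w, a):
    # Rank-based AUC: probability that a random error outranks a random
    # correct word (ties count as half a win).
    pos = [s for s, y in zip(a, w) if y == 1]
    neg = [s for s, y in zip(a, w) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_precision(w, a):
    order = sorted(range(len(a)), key=lambda i: -a[i])
    hits, total = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if w[i] == 1:
            hits += 1
            total += hits / rank
    return total / sum(w)

def recall_at_top_k(w, a):
    k = sum(w)  # k = number of errors in the sentence, as in Eq. (2)
    order = sorted(range(len(a)), key=lambda i: -a[i])
    return sum(w[i] for i in order[:k]) / k

def acc_at_top_1(w, a):
    return w[max(range(len(a)), key=lambda i: a[i])]

w = [0, 1, 1, 0, 0]
a = [0.1, 0.9, 0.4, 0.5, 0.2]
print(auc(w, a), recall_at_top_k(w, a), acc_at_top_1(w, a))  # AUC ≈ 0.83, R@K = 0.5, A@1 = 1
```

Each function scores a single instance; following Eq. (1), the final numbers are averages over all test-set instances.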
| | Ro-En | Et-En | Ne-En |
|---|---|---|---|
| Pearson r | 0.84 | 0.66 | 0.66 |
| Average DA | 68.9 | 55.2 | 36.6 |
| Num. sentences (all data) | 1,000 | 1,000 | 1,000 |
| Num. sentences (DA < 70) | 438 | 640 | 935 |
| Error rate (all data) | 0.21 | 0.28 | 0.65 |
| Error rate (DA < 70) | 0.35 | 0.36 | 0.66 |
Table 1: General statistics for the MLQE-PE test sets: performance of sentence-level QE models (Pearson r), average DA score, total number of sentences in the test set, number of sentences with DA $< 70$, as well as the error rate in the full test set and in the subset of selected sentences.

where $\mathbf{e} = \text{argsort}(\mathbf{a}^{\mathbf{x}^T})$ is a sequence of indices corresponding to target words sorted by attribution score from highest to lowest and $k$ is the number of errors in the sentence. We then average the result across all instances in the test set.

Accuracy at Top-1 Finally, we report the proportion of sentences where the word with the highest attribution in the target corresponds to a translation error:

$$
\operatorname{Acc@Top1} = \frac{1}{N} \sum_{n} \mathbb{I}\left[\mathbf{w}_{\mathbf{e}_1} = 1\right] \tag{3}
$$

We note that the above metrics are not defined for sentences where all words are labelled as errors or correct. We exclude such sentences from evaluation.

# 4.2 Sentence-level QE

For sentence-level QE, we rely on TransQuest (Ranasinghe et al., 2020b), which was one of the top submissions to the WMT20 QE Shared Task (Specia et al., 2020). To facilitate the use of the feature attribution methods described above, we use our own implementation of the approach proposed by Ranasinghe et al. (2020b,a). It achieves comparable results to the ones reported by the authors. Due to limited computational resources, we use XLM-R-base as the underlying pre-trained Transformer model. We expect that using a more powerful sentence-level model would result in higher performance.

# 4.3 Data

We use the MLQE-PE (Multilingual Quality Estimation and Post-Editing) dataset described in Fomicheva et al. (2020a).3 MLQE-PE provides various types of manual MT evaluation for multiple language pairs.
The MT outputs were assigned a sentence-level score inspired by the Direct Assessment (DA) annotation (Graham et al., 2015; Guzmán et al., 2019) on a continuous [0, 100] scale capturing overall translation quality. In addition, the MT outputs were independently post-edited by professional translators. MT outputs and their corresponding post-edited versions were automatically aligned in order to derive word-level binary labels ("BAD" if the word was corrected, and "OK" otherwise), as well as an HTER score that corresponds to the average number of "BAD" labels in a sentence (Snover et al., 2006). We use these labels to evaluate the performance of the different feature attribution approaches. We treat "BAD" labels as the positive class and "OK" labels as the negative class in all of our experiments.4 We do not evaluate attribution to source words.

It is worth noting that word-level labels derived from post-editing do not capture error severity and do not always correspond to translation errors. However, due to the costs of collecting detailed error annotations for the large amounts of data required to train SOTA models, this is a standard way of approximating error annotation in QE (Specia et al., 2020).

To circumvent the above limitation, we leverage both types of sentence-level annotation (DA and HTER scores) in our experiments. We train sentence-level QE models with (i) DA scores and (ii) HTER scores. We evaluate both types of models using the word-level labels derived from post-editing as described above. We then conduct evaluation as follows:

1. We first evaluate explanations for DA-based models on the sentences with a sentence-level DA score lower than 70. $^6$

3https://github.com/sheffieldnlp/mlqe-pe
4The tokenization used internally by the XLM-R model is different from the tokenization used for producing word-level error labels. To map the attribution scores to the word labels we take their maximum value.
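The subword-to-word mapping from footnote 4 can be sketched as follows: per-piece attribution scores are collapsed to one score per word by taking their maximum. This is a minimal illustration with made-up scores; `word_ids` plays the role of the tokenizer's piece-to-word alignment (as returned, e.g., by HuggingFace fast tokenizers), with `None` marking special tokens.

```python
def aggregate_subword_attributions(scores, word_ids, num_words):
    """Collapse per-subword attribution scores to one score per word,
    taking the maximum over the pieces of each word (footnote 4)."""
    word_scores = [float("-inf")] * num_words
    for s, wid in zip(scores, word_ids):
        if wid is not None:              # skip special tokens like <s>, </s>
            word_scores[wid] = max(word_scores[wid], s)
    return word_scores

# 5 subword pieces belonging to 3 words.
scores   = [0.2, 0.7, 0.9, 0.3, 0.1]
word_ids = [0, 0, 1, 1, 2]
print(aggregate_subword_attributions(scores, word_ids, 3))  # → [0.7, 0.9, 0.1]
```

The resulting word-level scores are what gets compared against the "OK"/"BAD" labels in evaluation.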
5Despite the limitations, we have chosen this dataset because it provides (i) a sufficient amount of word-level training data, which allows us to compare our approach to a SOTA supervised approach; and (ii) access to the neural MT models that were used to produce the translations, thus enabling a comparison to an unsupervised glass-box approach.
6This threshold is selected based on the annotation guidelines described in Fomicheva et al. (2020a), as sentences assigned a score lower than 70 are guaranteed to have translation errors.
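As footnote 4 notes, attribution scores computed over XLM-R sentencepiece tokens are mapped to word-level labels by taking their maximum. A minimal sketch of this mapping, assuming the XLM-R convention that "▁" marks the start of a new word (the helper name is ours):

```python
def subword_to_word_attributions(sp_tokens, sp_scores):
    """Map sentencepiece-level attribution scores to word-level scores
    by taking the maximum over each word's subword pieces (footnote 4)."""
    words, scores = [], []
    for tok, score in zip(sp_tokens, sp_scores):
        if tok.startswith("▁") or not words:
            # "▁" starts a new word (also handles a leading piece without it)
            words.append(tok.lstrip("▁"))
            scores.append(score)
        else:
            # continuation piece: extend the word, keep the max score
            words[-1] += tok
            scores[-1] = max(scores[-1], score)
    return words, scores

words, scores = subword_to_word_attributions(
    ["▁trans", "lation", "▁error"], [0.2, 0.7, 0.4])
# words == ["translation", "error"], scores == [0.7, 0.4]
```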
| Method | Romanian-English (AUC / AP / A@1 / R@K) | Estonian-English (AUC / AP / A@1 / R@K) | Nepalese-English (AUC / AP / A@1 / R@K) |
| --- | --- | --- | --- |
| Gradients | 0.75 / 0.72 / **0.84** / 0.62 | **0.66** / **0.63** / **0.72** / **0.52** | 0.66 / 0.81 / **0.91** / 0.72 |
| Info. Bottleneck | 0.65 / 0.62 / 0.71 / 0.50 | 0.58 / 0.55 / 0.56 / 0.46 | 0.64 / 0.78 / 0.80 / 0.71 |
| Attention | **0.79** / **0.73** / 0.80 / **0.63** | 0.65 / 0.57 / 0.52 / 0.49 | **0.69** / **0.82** / 0.88 / **0.74** |
| LIME | 0.54 / 0.48 / 0.40 / 0.39 | 0.56 / 0.56 / 0.65 / 0.46 | 0.52 / 0.75 / 0.76 / 0.68 |
| Random | 0.50 / 0.43 / 0.36 / 0.33 | 0.50 / 0.47 / 0.38 / 0.37 | 0.50 / 0.70 / 0.62 / 0.65 |
| Glass-box | 0.74 / 0.66 / 0.66 / 0.55 | 0.69 / 0.63 / 0.65 / 0.54 | 0.64 / 0.79 / 0.78 / 0.73 |
| MicroTransQuest | 0.88 / 0.81 / 0.88 / 0.70 | 0.84 / 0.80 / 0.89 / 0.70 | 0.82 / 0.89 / 0.96 / 0.82 |
Table 2: AUC/AP scores, as well as accuracy at top-1 (A@1) and recall at top-K (R@K) for different rationale extraction methods on the test partition of the MLQE-PE dataset. Best rationale extraction results are highlighted in bold. Attributions are computed with respect to the hidden states at layer 10.

2. We also evaluate explanations for DA-based sentence-level models on the full subset of sentences that contain at least one word-level error.
3. Finally, we evaluate explanations for HTER-based sentence-level models on the full subset of sentences that contain at least one word-level error.

Interestingly, despite the discrepancy between the DA training objective and the word labels derived from post-editing, explanations for DA-based models achieve better accuracy. We report the results for (1) in the main body of the paper, while (2) and (3) are reported in Appendix B.

We select three language pairs for our experiments: Estonian-English (Et-En), Romanian-English (Ro-En) and Nepali-English (Ne-En), for which the best sentence-level performance was achieved at the WMT20 Shared Task. Table 1 shows statistics for the respective test sets. These three language pairs present very different conditions for the task. The sentence-level model for Ro-En has much stronger performance in terms of Pearson correlation with human judgements. Ne-En has substantially lower translation quality, where "BAD" words actually represent the majority class.

# 4.4 QE Benchmarks

We consider two benchmarks for word-level QE. On the one hand, we report the results for a strong supervised model based on pre-trained representations from XLM-R, adapted to predict word-level binary labels derived from post-editing. To report the metrics presented in Section 4.1, we use the probability of the positive class as attribution scores. On the other hand, we consider a fully unsupervised approach, which, however, requires access to the neural MT model that was used to generate the translations.
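For the supervised benchmark, the per-word probability of the positive ("BAD") class serves directly as the attribution score. A small sketch of that conversion, assuming a token classifier that outputs (OK, BAD) logits per word (the function name is ours):

```python
import numpy as np

def supervised_attributions(token_logits):
    """Turn per-word (OK, BAD) logits from a token classifier into
    attribution scores: the softmax probability of the BAD class."""
    # numerically stable softmax over the class dimension
    z = token_logits - token_logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs[:, 1]

scores = supervised_attributions(np.array([[2.0, 0.0], [0.0, 2.0]]))
# one confident-OK word (low score) and one confident-BAD word (high score)
```

These scores can then be evaluated with exactly the same word-level metrics as the feature attribution methods.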
Black-box Supervised QE We use the word-level architecture available as part of the TransQuest toolkit (Ranasinghe et al., 2020b). Similarly to the sentence-level TransQuest model, it relies on an XLM-Roberta-base pre-trained model fine-tuned for the token classification task. We use XLM-Roberta-base to be consistent with the sentence-level settings.

Glass-box Unsupervised QE Fomicheva et al. (2020b) propose to extract information from the MT system to predict translation quality in a fully unsupervised way. Following their work, we use log-probabilities from the neural MT model as attribution scores. The lower the log-probability corresponding to each word, the higher the chance that this word constitutes an error.

# 5 Results

# 5.1 QE as Rationale Extraction

Table 2 shows the performance of our approach with different rationale extraction methods, as well as SOTA word-level QE methods, on the MLQE-PE dataset. For the first three methods, we compute attributions to the hidden states at each layer, select the best-performing layer on the dev set, and report the results for that layer on the test set. First, our semi-supervised approach with all explanation methods substantially outperforms the random baseline.8 Among the different explanation methods, attention and integrated gradients achieve the best results. Second, the performance is comparable to or better than the glass-box QE benchmark, without requiring access to the neural MT model. For example, for Ro-En the AP scores achieved by the attention-based explanations and the glass-box word-level QE are 0.73 and 0.66, respectively. Third, the gap between the best-performing semi-supervised method and the supervised QE benchmark (MicroTransQuest) is the smallest for Ro-En, where the sentence-level QE model from which explanations are extracted is the strongest (see Table 1). Finally, on average, LIME-based explanations are substantially outperformed by the feature attribution methods. This agrees with our intuition that for the translation task, where context plays a fundamental role, attribution to hidden states achieves much better performance than direct perturbation of input words.

![](images/2ddee0b81a7baae4eadce61b0befc67998e45dee87890068097e89298a43fcaf.jpg)
Figure 2: Average attribution at each hidden layer on the toy task (left) and the MLQE-PE Et-En dataset (right). Attributions are computed with the information bottleneck attribution method (Schulz et al., 2020).

![](images/feb5810b62cbf473615e0c4f6cbfc56daf1bc764c14c35275dd7b0a3db8574da.jpg)

![](images/ea7c62b7540d95c03fb65f0cea8c29a3bf9b6c969c368a38a46328222c84c603.jpg)
Figure 3: AUC score at each hidden layer for the integrated gradients method.

![](images/6f9f62b79c01c6a8a5c2c8915aedafbfd55db2999132b6cd5d35be5d627897f4.jpg)
Figure 4: Example of an Estonian-English translation with attributions to the source (left) and target (right) sentences computed using the integrated gradients method for each hidden layer. The correct post-edited version of this translation is: Evald cannot believe that Pille is so attached to her.

8Note that the random baseline obtains relatively high scores for Ne-En for the proposed error detection methods as most of the words in the data correspond to errors, as shown in Table 1.

# 5.2 Analysis

Feature Attribution per Layer Figure 2 shows attributions to tokens of different types across hidden layers. On the left, we show the results for a toy task, where we artificially introduced easy-to-detect errors in human translations and trained a QE model with near-perfect performance to predict whether a given sentence contains errors (see Appendix A). On the right, we show the results for the MLQE-PE Et-En test set.
Similarly to the toy task, we observe that in the later layers the tokens corresponding to translation errors receive higher attribution scores. However, in the toy dataset, the source tokens have very low attributions. Here, in contrast, the model appears to be relying on the source as well as the target. This aligns very well with human evaluation, where both source and target sentences need to be considered in order to correctly determine translation quality.

![](images/d27f06727b53dcb357bc002f780ef388b0bb6e9255c2d7bcb76db58d2484a055.jpg)
Figure 5: Frequency of the tokens with the highest attribution in the neural MT training corpus. The Y-axis shows the frequency of the source (left) and target (right) tokens with the highest attribution scores in low-quality MT sentences (red) and high-quality MT sentences (blue). The X-axis corresponds to the hidden layers.

![](images/cb759b4d5dbb0921955e973c4e1dc8056bfcb8945dc8c93f315abddc4fc1e054.jpg)

Figure 3 shows performance across layers for the integrated gradients method. As expected, the same layers that assign the highest attribution to the bad tokens (layers 9-11) are the ones that achieve the best performance. This finding is consistent across language pairs and attribution methods. Interestingly, this is also consistent with the findings reported in Voita et al. (2019), who show that models trained with the MLM objective encode context information in intermediate layers, partially discarding the information on the identity of the input tokens, which is recovered at the latest layers.

So far we have studied the behavior of the QE models on sentences that contain errors. We now look at the pattern in the attribution scores for sentences which were assigned high quality by the model. We hypothesize that higher scores will be assigned to the words that are "easy" to translate.
To test this, we select high-quality and low-quality sentences (sentences with predicted scores below the 0.25 percentile and above the 0.75 percentile, respectively). Figure 5 shows the average frequency with which the words occur in the neural MT training dataset. The red line corresponds to the words with the highest attribution for high-quality MT sentences. The blue line corresponds to the words with the highest attribution for the low-quality MT sentences. The first plot corresponds to the source tokens and the second plot corresponds to the target tokens. As shown in the plots, when the model predicts high quality, the most frequent words receive the highest attribution as the information progresses through the network. By contrast, when low quality is predicted by the sentence-level model, the least frequent words receive the highest attribution.

Qualitative Analysis Figure 4 shows an example. Attributions are shown for sentencepiece tokens, which is the representation used internally by XLM-R. Interestingly, both the translation errors ("You" and "Pilate") and the corresponding words in the source ("Evald" and "Pille") receive higher attribution scores.

# 6 Conclusion

In this work, we propose a new semi-supervised approach for word-level QE that exploits feature attribution methods. We show that, for well-performing sentence-level models, our results approach the performance of supervised methods. We also propose QE as rationale extraction as a new benchmark for plausibility-based evaluation of explainability methods. We hope this work will encourage further research on improving the efficiency of word-level QE models with lightly supervised methods. This work opens many directions for future research: from improving the achieved results by tuning linear weights to combine attributions to hidden states at different layers, to exploring different underlying architectures and sentence-level training objectives.
+ +# Acknowledgements + +This work was supported by funding from the Bergamot project (EU H2020 Grant No. 825303). + +# References + +Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. 2020. A diagnostic study of explainability techniques for text classification. arXiv preprint arXiv:2009.13295. +John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto Sanchis, and Nicola Ueffing. 2004. Confidence estimation for machine translation. In Proceedings of the 20th International Conference on Computational Linguistics, Geneva, Switzerland. +Kamil Bujel, Helen Yannakoudakis, and Marek Rei. 2021. Zero-shot sequence labeling for transformer-based sentence classifiers. In Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021), pages 195–205, Online. Association for Computational Linguistics. +Ilias Chalkidis, Manos Fergadiotis, Dimitrios Tsarapatsanis, Nikolaos Aletras, Ion Androutsopoulos, and Prodromos Malakasiotis. 2021. Paragraph-level rationale extraction through regularization: A case study on european court of human rights cases. arXiv preprint arXiv:2103.13084. +Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettle-moyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116. +Nicola De Cao, Michael Schlichtkrull, Wilker Aziz, and Ivan Titov. 2020. How do decisions emerge across layers in neural models? interpretation with differentiable masking. arXiv preprint arXiv:2004.14992. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. +Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C Wallace. 2019. 
Eraser: A benchmark to evaluate rationalized nlp models. arXiv preprint arXiv:1911.03429. +Marina Fomicheva, Shuo Sun, Erick Fonseca, Frédéric Blain, Vishrav Chaudhary, Francisco Guzmán, Nina Lopatina, Lucia Specia, and André FT Martins. 2020a. Mlqe-pe: A multilingual quality estimation and post-editing dataset. arXiv preprint arXiv:2010.04480. +Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Frédéric Blain, Francisco Guzmán, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, and Lucia Specia. 2020b. Unsupervised quality estimation for neural machine translation. arXiv preprint arXiv:2005.10608. + +Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021. Experts, errors, and context: A large-scale study of human evaluation for machine translation. arXiv preprint arXiv:2104.14478. +Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2015. Can machine translation systems be evaluated by the crowd alone. *Natural Language Engineering*, pages 1-28. +Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. The flores evaluation datasets for low-resource machine translation: Nepali-english and sinhala-english. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6100-6113. +Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? CoRR, abs/2004.03685. +Sarthak Jain and Byron C Wallace. 2019. Attention is not explanation. arXiv preprint arXiv:1902.10186. +Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, and Byron C Wallace. 2020. Learning to faithfully rationalize by construction. arXiv preprint arXiv:2005.00115. +Hyun Kim, Jong-Hyeok Lee, and Seung-Hoon Na. 2017. 
Predictor-estimator using multilevel task learning with stack propagation for neural quality estimation. In Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Tasks Papers, pages 562-568, Copenhagen, Denmark.
Dongjun Lee. 2020. Two-phase cross-lingual language model fine-tuning for machine translation quality estimation. In Proceedings of the Fifth Conference on Machine Translation, pages 1024-1028, Online. Association for Computational Linguistics.
Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107-117, Austin, Texas. Association for Computational Linguistics.
Zachary Chase Lipton. 2016. The mythos of model interpretability. CoRR, abs/1606.03490.
Arle Lommel, Hans Uszkoreit, and Aljoscha Burchardt. 2014. Multidimensional quality metrics (MQM): A framework for declaring and describing translation quality metrics. Tradumàtica, (12):455-463.
Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2020a. TransQuest at WMT2020: Sentence-level direct assessment. arXiv preprint arXiv:2010.05318.

Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2020b. TransQuest: Translation quality estimation with cross-lingual transformers. arXiv preprint arXiv:2011.01536.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135-1144.
Matiss Rikters and Mark Fishel. 2017. Confidence through attention. arXiv preprint arXiv:1710.03743.
Karl Schulz, Leon Sixt, Federico Tombari, and Tim Landgraf. 2020. Restricting the flow: Information bottlenecks for attribution. arXiv preprint arXiv:2001.00396.
Sofia Serrano and Noah A Smith. 2019. Is attention interpretable? arXiv preprint arXiv:1906.03731.
+Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2013. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034. +Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of association for machine translation in the Americas, volume 200. CiteSeer. +Lucia Specia, Frédéric Blain, Marina Fomicheva, Erick Fonseca, Vishrav Chaudhary, Francisco Guzmán, and André F. T. Martins. 2020. Findings of the WMT 2020 shared task on quality estimation. In Proceedings of the Fifth Conference on Machine Translation, pages 743-764, Online. Association for Computational Linguistics. +Lucia Specia, Nicola Cancedda, Marc Dymetman, Marco Turchi, and Nello Cristianini. 2009. Estimating the sentence-level quality of machine translation systems. In Proceedings of the 13th Annual Conference of the European Association for Machine Translation, pages 28-35, Barcelona, Spain. +Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In International Conference on Machine Learning, pages 3319-3328. PMLR. +Naftali Tishby and Noga Zaslavsky. 2015. Deep learning and the information bottleneck principle. In 2015 IEEE Information Theory Workshop (ITW), pages 1-5. IEEE. +Yi-Lin Tuan, Ahmed El-Kishky, Adithya Renduchintala, Vishrav Chaudhary, Francisco Guzmán, and Lucia Specia. 2021. Quality estimation without human-labeled data. arXiv preprint arXiv:2102.04020. + +Elena Voita, Rico Sennrich, and Ivan Titov. 2019. The bottom-up evolution of representations in the transformer: A study with machine translation and language modeling objectives. arXiv preprint arXiv:1909.01380. +Mo Yu, Shiyu Chang, Yang Zhang, and Tommi S Jaakkola. 2019. Rethinking cooperative rationalization: Introspective extraction and complement control. arXiv preprint arXiv:1910.13294. 
Zeyu Yun, Yubei Chen, Bruno A Olshausen, and Yann LeCun. 2021. Transformer visualization via dictionary learning: contextualized embedding as a linear superposition of transformer factors. arXiv preprint arXiv:2103.15949.
Omar Zaidan, Jason Eisner, and Christine Piatko. 2007. Using "annotator rationales" to improve machine learning for text categorization. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 260-267.

# A Toy Dataset

We devise a toy task to test feature attribution performance for word-level QE. We artificially introduce easy-to-detect errors in human translations and train a QE model with near-perfect performance to predict the presence/absence of such errors in a sentence. Specifically, we sample 10K/1K/1K sentence pairs from the Es-En News-Commentary dataset (train/dev/test). Next, we artificially inject errors into half of the sentences at a rate of 0.1 using the following operations: insert, delete, or replace a random word, or swap two words selected at random.

We fine-tune an XLM-R-base model for a sentence-level binary classification task where sentences that contain errors are treated as the positive class, and sentences that do not contain errors as the negative class. The F1-score of this sentence-level classifier is 0.97. This is expected as the task is very easy.

# B Performance of Rationale Extraction Methods on HTER Data

Tables 4 and 5 show the performance of the proposed methods on the full subset of sentences that contain at least one word-level error, for sentence-level QE models trained with HTER and DA ground-truth scores. Pearson correlation for both types of models is shown in Table 3. Interestingly, even though for Ro-En and Et-En the performance of the sentence-level models is near identical, extracted rationales are more accurate for the model trained with DA judgements.
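The error-injection procedure of Appendix A can be sketched as follows. This is an illustrative sketch: the exact sampling scheme (per-operation rate, substitute vocabulary) is not specified in the paper, so those details are assumptions.

```python
import random

OPS = ("insert", "delete", "replace", "swap")

def inject_errors(words, rate=0.1, vocab=None, rng=random):
    """Corrupt a reference translation for the toy task (Appendix A):
    insert, delete or replace a random word, or swap two random words.
    Each operation is applied with probability `rate` (an assumption);
    `vocab` defaults to the sentence's own words."""
    words = list(words)
    vocab = vocab or words
    for op in OPS:
        if rng.random() >= rate or len(words) < 2:
            continue
        i = rng.randrange(len(words))
        if op == "insert":
            words.insert(i, rng.choice(vocab))
        elif op == "delete":
            del words[i]
        elif op == "replace":
            words[i] = rng.choice(vocab)
        else:  # swap two randomly chosen positions
            j = rng.randrange(len(words))
            words[i], words[j] = words[j], words[i]
    return words
```

Applying this to half of the sampled sentence pairs yields the positive (contains-errors) class for the toy classifier.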
| | Ro-En | Et-En | Ne-En |
| --- | --- | --- | --- |
| Pearson r (DA) | 0.84 | 0.66 | 0.66 |
| Pearson r (HTER) | 0.82 | 0.62 | 0.51 |
| Num. sentences (all data) | 1,000 | 1,000 | 1,000 |
| Num. sentences (with errors) | 714 | 889 | 945 |
| Error rate (all data) | 0.21 | 0.28 | 0.65 |
| Error rate (with errors) | 0.28 | 0.31 | 0.65 |
+ +Table 3: Statistics for MLQE-PE test sets: performance of sentence-level QE models (Pearson r), total number of sentences with at least one translation error, and the error rate in the full test set and in the subset of sentences with at least one error. + +
| Method | Romanian-English (AUC / AP / A@1 / R@K) | Estonian-English (AUC / AP / A@1 / R@K) | Nepalese-English (AUC / AP / A@1 / R@K) |
| --- | --- | --- | --- |
| Gradients | 0.73 / 0.65 / 0.72 / 0.54 | 0.64 / 0.56 / 0.61 / 0.45 | 0.66 / 0.81 / 0.90 / 0.71 |
| Info. Bottleneck | 0.59 / 0.49 / 0.50 / 0.36 | 0.54 / 0.47 / 0.42 / 0.37 | 0.62 / 0.76 / 0.78 / 0.69 |
| Attention | 0.76 / 0.65 / 0.67 / 0.53 | 0.63 / 0.51 / 0.45 / 0.41 | 0.69 / 0.81 / 0.87 / 0.73 |
| LIME | 0.51 / 0.39 / 0.29 / 0.29 | 0.55 / 0.49 / 0.54 / 0.39 | 0.52 / 0.73 / 0.72 / 0.66 |
| Random | 0.50 / 0.38 / 0.27 / 0.25 | 0.50 / 0.41 / 0.34 / 0.31 | 0.50 / 0.70 / 0.63 / 0.64 |
| Glass-box | 0.73 / 0.59 / 0.55 / 0.48 | 0.70 / 0.58 / 0.59 / 0.48 | 0.64 / 0.78 / 0.77 / 0.72 |
| MicroTransQuest | 0.86 / 0.74 / 0.76 / 0.62 | 0.83 / 0.74 / 0.79 / 0.64 | 0.82 / 0.89 / 0.96 / 0.82 |
+ +Table 4: AUC/AP scores, as well as accuracy at top-1 (A@1) and recall at top-K (R@K) for different rationale extraction methods on the MLQE-PE test set on the subset of sentences that contain at least one error for the sentence-level QE models trained to predict DA judgements. + +
| Method | Romanian-English (AUC / AP / A@1 / R@K) | Estonian-English (AUC / AP / A@1 / R@K) | Nepalese-English (AUC / AP / A@1 / R@K) |
| --- | --- | --- | --- |
| Gradients | 0.69 / 0.59 / 0.61 / 0.48 | 0.66 / 0.59 / 0.66 / 0.49 | 0.64 / 0.77 / 0.82 / 0.70 |
| Info. Bottleneck | 0.53 / 0.43 / 0.38 / 0.32 | 0.58 / 0.50 / 0.47 / 0.38 | 0.57 / 0.73 / 0.68 / 0.67 |
| Attention | 0.74 / 0.61 / 0.59 / 0.49 | 0.69 / 0.59 / 0.58 / 0.48 | 0.66 / 0.78 / 0.82 / 0.72 |
| LIME | 0.61 / 0.47 / 0.37 / 0.35 | 0.64 / 0.56 / 0.59 / 0.45 | 0.53 / 0.74 / 0.76 / 0.68 |
| Random | 0.50 / 0.38 / 0.27 / 0.25 | 0.50 / 0.41 / 0.33 / 0.32 | 0.50 / 0.70 / 0.63 / 0.64 |
| Glass-box | 0.73 / 0.59 / 0.55 / 0.48 | 0.70 / 0.58 / 0.59 / 0.48 | 0.64 / 0.78 / 0.77 / 0.72 |
| MicroTransQuest | 0.86 / 0.74 / 0.76 / 0.62 | 0.83 / 0.74 / 0.79 / 0.64 | 0.82 / 0.89 / 0.96 / 0.82 |
Table 5: AUC/AP scores, as well as accuracy at top-1 (A@1) and recall at top-K (R@K) for different rationale extraction methods on the MLQE-PE test set on the subset of sentences that contain at least one error for the sentence-level QE models trained to predict HTER.
# Two Birds with One Stone: Unified Model Learning for Both Recall and Ranking in News Recommendation

Chuhan Wu$^{\dagger}$ Fangzhao Wu$^{\ddagger *}$
Tao Qi† Yongfeng Huang†

$^{\dagger}$ Department of Electronic Engineering, Tsinghua University, Beijing 100084, China

$^{\ddagger}$ Microsoft Research Asia, Beijing 100080, China

{wuchuhan15,wufangzhao,taoqi.qt}@gmail.com

yfhuang@tsinghua.edu.cn

# Abstract

Recall and ranking are two critical steps in personalized news recommendation. Most existing news recommender systems conduct personalized news recall and ranking separately with different models. However, maintaining multiple models leads to high computational cost and poses great challenges to meeting the online latency requirements of news recommender systems. To handle this problem, in this paper we propose UniRec, a unified method for recall and ranking in news recommendation. In our method, we first infer a user embedding for ranking from the historical news click behaviors of a user with a user encoder model. We then derive the user embedding for recall by using the ranking embedding as the attention query to select a set of basis user embeddings, which encode different general user interests, and synthesizing them into a user embedding for recall. Extensive experiments on a benchmark dataset demonstrate that our method can improve both efficiency and effectiveness for recall and ranking in news recommendation.

# 1 Introduction

News recommendation techniques are widely used by many online news websites and apps to provide personalized news services (Wu et al., 2020b). Recall and ranking are two critical steps in personalized news recommender systems (Karimi et al., 2018; Wu et al., 2021a). As shown in Fig. 1, when a user visits a news platform, the recommender system first recalls a set of candidate news from a large-scale news pool, and then ranks the candidate news for personalized news display (Wu et al., 2020b).
Both news recall and ranking have been widely studied (Elkahky et al., 2015; Liu et al., 2019, 2020; Wu et al., 2020a; Wang et al., 2020; Wu et al., 2021c; Qi et al., 2021a,b,c,d). In online news recommender systems, recall and ranking are + +![](images/2b3a33cfc0986a4dfd5a87946f784491ec5639c58675c145d04061a1638a04a3.jpg) +Figure 1: A typical pipeline of news recommendation. + +usually conducted separately with different models, as shown in Fig. 1. However, maintaining separate models for news recall and ranking in large-scale news recommender systems usually leads to heavy computation and memory cost (Tan et al., 2020), and it may be difficult to meet the latency requirement of online news services. + +Learning a unified model for personalized news recall and ranking would be greatly beneficial for alleviating the computation load of news recommender systems. However, it is a non-trivial task because the goals of recall and ranking are not the same (Covington et al., 2016; Malkov and Yashunin, 2018). Ranking usually aims to accurately rank candidates based on their relevance to user interests (Wu et al., 2019b; Ge et al., 2020; Wu et al., 2021b; Wang et al., 2020), while recall mainly aims to form a candidate pool that can comprehensively cover user interests (Liu et al., 2020; Qi et al., 2021d). Thus, the model needs to adapt to the different goals of recall and ranking without hurting their performance. + +In this paper, we propose a news recommendation method named UniRec, which can learn a unified user model for personalized news recall and ranking. In our method, we first encode news into embeddings with a news encoder, and learn a user embedding for ranking from the embeddings of historical clicked news. We further derive the user embedding for recall by using the user embedding for ranking as the attention query to select a + +![](images/45a6bb9c51e7d0f60cb1c9c56038700c9e4ef2bb14037ba5f7546dc6bee9c4de.jpg) +Figure 2: The framework of UniRec. 
set of basis user embeddings that encode different general user interest aspects and synthesize them into a user embedding for recall. In the test phase, we only use the basis user embeddings with top attention weights to compose the user embedding for recall to filter noisy user interests. Extensive experiments on a real-world dataset demonstrate that our method can conduct personalized news recall and ranking with a unified model and meanwhile achieve promising recall and ranking performance.

# 2 Methodology

The overall framework of UniRec is shown in Fig. 2. We first learn a user embedding for ranking from the user's historical clicked news. We then derive a user embedding for recall from the user embedding for ranking and a set of basis user embeddings that encode different general interests. Their details are introduced as follows.

# 2.1 Ranking for News Recommendation

The ranking part aims to rank candidate news in a small candidate list according to user interests. Following Wu et al. (2020b), UniRec uses a news encoder that learns news embeddings from news texts and a user encoder that learns a user interest embedding for ranking from the embeddings of clicked news. The candidate news embedding and the user embedding for ranking are used to compute a click score for personalized news ranking. More specifically, suppose a user $u$ has $N$ historical clicked news $[D_1, D_2, \dots, D_N]$ . These clicked news are encoded into a sequence of news embeddings, denoted as $[\mathbf{r}_1, \mathbf{r}_2, \dots, \mathbf{r}_N]$ . The user encoder further takes this sequence as input, and outputs a user embedding $\mathbf{u}_{ra}$ for ranking. For a candidate news $D_i^c$ , we use the news encoder to obtain its embedding $\mathbf{r}_i^c$ . We follow Okura et al. (2017) to
The click scores of the news in a candidate list are used for personalized ranking. Following (Wu et al., 2019c), we use multi-head self-attention networks in both news and user encoders to capture the contexts of words and click behaviors, respectively. In addition, following (Devlin et al., 2019) we add position embeddings to capture the orders of words and behaviors. + +# 2.2 Recall for News Recommendation + +The recall part aims to select candidate news from a large news pool based on their relevance to user interests. To efficiently exploit user interest information for personalized news recall, we take the user embedding for ranking as input instead of rebuilding user interest representations from original user click behaviors. However, since the goals of ranking and recall are not the same (Kang and McAuley, 2019), the user embedding for ranking may not be suitable for news recall. Thus, we propose a method to distill a user embedding for recall from the user embedding for ranking. More specifically, we maintain a basis user embedding memory that encodes different general user interest aspects. We denote the $M$ basis user embeddings in the memory as $[\mathbf{v}_1,\mathbf{v}_2,\dots,\mathbf{v}_M]$ . We use the user embedding for ranking as the attention query to select basis user embeddings. We denote the attention weight of the $i$ -th basis user embedding as $\alpha_{i}$ which is computed as: + +$$ +\alpha_ {i} = \frac {\exp \left(\mathbf {u} _ {r a} \cdot \mathbf {w} _ {i}\right)}{\sum_ {j = 1} ^ {M} \exp \left(\mathbf {u} _ {r a} \cdot \mathbf {w} _ {j}\right)}, \tag {1} +$$ + +where the parameters $\mathbf{w}_i$ are served as the attention keys. Different from additive attention (Yang et al., 2016) where the attention keys and values are equivalent, in our approach the keys (i.e., $\mathbf{w}_i$ ) are different from the values (i.e., $\mathbf{v}_i$ ). 
This is because we expect the basis user embeddings to lie in a different space from the user embeddings for ranking, to better adapt to the recall task. The basis user embeddings are further synthesized into a unified user embedding $\mathbf{u}_{re}$ for recall by $\mathbf{u}_{re} = \sum_{i=1}^{M} \alpha_i \mathbf{v}_i$. We use a news encoder shared with the ranking part to obtain the embedding $\mathbf{r}^c$ of each candidate news $D^c$ in the news pool. The final recall relevance score $\hat{y}_{re}$ between user interest and candidate news is computed by $\hat{y}_{re} = \mathbf{u}_{re} \cdot \mathbf{r}^c$.

# 2.3 Model Training

We now introduce the model training details of UniRec. We use a two-stage training strategy that first learns the ranking part and then the recall part. Following prior work (Huang et al., 2013; Wu et al., 2019b,c), we use negative sampling to construct samples for contrastive model learning (Oord et al., 2018). For learning the ranking part, we use clicked news in each impression as positive samples, and we randomly sample $K$ non-clicked news displayed in the same impression as negative samples. The loss function is formulated as follows:

$$
\mathcal{L}_{ra} = -\log \left[ \frac{\exp(\hat{y}_{ra}^{+})}{\exp(\hat{y}_{ra}^{+}) + \sum_{i=1}^{K} \exp(\hat{y}_{ra}^{i-})} \right], \tag{2}
$$

where $\hat{y}_{ra}^{+}$ and $\hat{y}_{ra}^{i-}$ denote the predicted click scores of a positive sample and the corresponding $i$-th negative sample, respectively. By optimizing this loss function, the parameters of the news and user encoders are tuned. Motivated by Ying et al. (2018), we fix the news encoder after the ranking model converges. Then, to learn the recall part, we also use the clicked news of each user as positive samples, while we randomly select $T$ non-clicked news from the entire news set as negative samples, which aims to simulate the news recall scenario.
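Both the ranking loss in Eq. (2) and the recall loss defined next share the same sampled-softmax form. A minimal, self-contained sketch (the scores are illustrative):

```python
import math

def sampled_softmax_loss(pos_score, neg_scores):
    """-log( exp(s+) / (exp(s+) + sum_i exp(s_i-)) ): the shared form of
    the ranking loss (Eq. 2) and the recall loss (Eq. 3)."""
    m = max([pos_score] + neg_scores)          # subtract the max for stability
    denom = math.exp(pos_score - m) + sum(math.exp(s - m) for s in neg_scores)
    return -(pos_score - m) + math.log(denom)

# One positive click score against K = 4 in-impression negatives:
loss = sampled_softmax_loss(2.0, [0.5, -1.0, 0.0, 1.0])
```

The loss shrinks as the positive score grows relative to the negatives, which is exactly what pushes the user and news embeddings toward clicked pairs.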
The loss function for recall part training is as follows:

$$
\mathcal{L}_{re} = -\log \left[ \frac{\exp\left(\hat{y}_{re}^{+}\right)}{\exp\left(\hat{y}_{re}^{+}\right) + \sum_{i=1}^{T} \exp\left(\hat{y}_{re}^{i-}\right)} \right], \tag{3}
$$

where $\hat{y}_{re}^{+}$ and $\hat{y}_{re}^{i-}$ represent the predicted recall relevance scores of a positive sample and the corresponding $i$-th negative sample, respectively.

However, not all basis user embeddings are relevant to the interests of a given user. Thus, motivated by Principal Component Analysis (PCA), in the test phase we propose to use only the top $P$ basis user embeddings with the highest attention weights to compose the user embedding for recall. We denote these basis user embeddings as $[\mathbf{v}_{t_1},\mathbf{v}_{t_2},\dots,\mathbf{v}_{t_P}]$. We re-normalize their attention weights as follows:

$$
\alpha_{t_{i}} = \frac{\exp\left(\alpha_{t_{i}}\right)}{\sum_{j=1}^{P} \exp\left(\alpha_{t_{j}}\right)}. \tag{4}
$$

The user embedding $\mathbf{u}_{re}$ for recall is then built as $\mathbf{u}_{re} = \sum_{i=1}^{P} \alpha_{t_i} \mathbf{v}_{t_i}$, which attends more to the major interests of a user and filters out noisy basis user embeddings for better news recall.

# 2.4 Complexity Analysis

We briefly discuss the computational complexity. In existing news recommendation methods that conduct recall and ranking with separate models, the computational complexity of learning user embeddings is at least $O(N)$ for both recall and ranking, because they need to encode the entire user behavior sequence. UniRec has the same complexity for learning the user embedding for ranking, but the complexity of deriving the user embedding for recall is reduced to $O(M)$, where $M$ is usually much smaller than $N$. In addition, the attention network used for synthesizing the user embedding for recall may also be lighter-weight than the user encoder.
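To make the $O(M)$ cost of this recall head concrete, here is a minimal NumPy sketch of Eqs. (1) and (4); the dimensions and the random parameters are illustrative stand-ins for the learned keys and basis embeddings.

```python
import numpy as np

def recall_embedding(u_ra, W, V, P=None):
    """Attention over M basis user embeddings (Eq. 1); optionally keep the
    top-P weights and re-normalize them with a softmax (Eq. 4)."""
    logits = W @ u_ra                          # u_ra · w_i for each key
    alpha = np.exp(logits - logits.max())
    alpha /= alpha.sum()                       # Eq. (1)
    if P is None:
        return alpha @ V                       # u_re = sum_i alpha_i v_i
    top = np.argsort(alpha)[-P:]               # indices t_1, ..., t_P
    a = np.exp(alpha[top] - alpha[top].max())
    a /= a.sum()                               # Eq. (4): softmax over top-P weights
    return a @ V[top]

rng = np.random.default_rng(0)
d, M = 8, 20                                   # illustrative sizes (M = 20 in the paper)
u_ra = rng.standard_normal(d)
W = rng.standard_normal((M, d))                # attention keys w_i (learned)
V = rng.standard_normal((M, d))                # basis user embeddings v_i (learned)
u_re = recall_embedding(u_ra, W, V, P=5)       # test-phase variant: O(M), not O(N)
```

The whole head touches only the $M \times d$ key and value matrices, which is why deriving $\mathbf{u}_{re}$ avoids re-encoding the $N$-item click history.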
Thus, the total computational complexity can be effectively reduced.

# 3 Experiments

# 3.1 Dataset and Experimental Settings

We conduct experiments on a large-scale public news recommendation dataset named MIND (Wu et al., 2020b). It contains the news impression logs of 1 million users on Microsoft News collected over six weeks. The logs from the first five weeks are used for training and validation, and the remaining logs are used for testing. The detailed statistics of MIND are shown in Table 1.
| Statistic | Value | Statistic | Value |
|---|---|---|---|
| # Users | 1,000,000 | # News | 161,013 |
| # Impressions | 15,777,377 | # Click behaviors | 24,155,470 |
| Avg. news title len. | 11.52 | # Categories | 20 |
Table 1: Statistics of the MIND dataset.

In our experiments, following Wu et al. (2020b), we use news titles to learn news embeddings. The number of basis user embeddings is 20, and they are randomly initialized. The hyperparameter $P$, which controls the number of basis user embeddings used to compose the user embedding for recall in the test phase, is set to 5. The number of negative samples associated with each positive one is 4 for the ranking task and 200 for the recall task. Adam (Bengio and LeCun, 2015) is used as the optimizer. The batch size is 32. These hyperparameters are selected on the validation set. Following Wu et al. (2020b), we use AUC, MRR, nDCG@5, and nDCG@10 to evaluate news ranking performance. In addition, we use the recall rates of the top 100, 200, 500, and 1000 ranked news to evaluate news recall performance. We repeat every experiment 5 times.
| Methods | AUC | MRR | nDCG@5 | nDCG@10 |
|---|---|---|---|---|
| EBNR | 66.22±0.17 | 31.97±0.14 | 34.89±0.17 | 40.49±0.19 |
| DKN | 65.61±0.20 | 31.58±0.17 | 34.32±0.19 | 40.04±0.22 |
| NPA | 67.62±0.14 | 32.69±0.13 | 35.52±0.15 | 41.33±0.17 |
| NAML | 67.45±0.12 | 32.48±0.09 | 35.39±0.10 | 41.19±0.14 |
| NRMS | 68.24±0.09 | 33.38±0.10 | 36.34±0.10 | 42.12±0.13 |
| UniRec | 68.41±0.11 | 33.50±0.10 | 36.47±0.12 | 42.26±0.14 |
Table 2: Ranking performance of different methods.
| Methods | R@100 | R@200 | R@500 | R@1000 |
|---|---|---|---|---|
| YoutubeNet | 1.395±0.034 | 2.284±0.039 | 4.171±0.042 | 6.867±0.037 |
| Pinnersage | 1.431±0.020 | 2.340±0.018 | 4.252±0.017 | 6.927±0.019 |
| Octopus | 1.426±0.026 | 2.392±0.029 | 4.344±0.031 | 7.188±0.029 |
| UniRec(all) | 1.443±0.023 | 2.402±0.027 | 5.022±0.025 | 8.294±0.026 |
| UniRec(top) | 1.516±0.026 | 2.531±0.024 | 5.142±0.027 | 8.485±0.026 |
Table 3: Recall performance of different methods.

# 3.2 Performance Evaluation

We first compare the ranking performance of UniRec with several baseline methods: (1) EBNR (Okura et al., 2017), which uses a GRU (Cho et al., 2014) network for user interest modeling in news recommendation; (2) DKN (Wang et al., 2018), a deep knowledge-aware network for news recommendation; (3) NPA (Wu et al., 2019b), news recommendation with personalized attention; (4) NAML (Wu et al., 2019a), news recommendation with attentive multi-view learning; (5) NRMS (Wu et al., 2019c), news recommendation with multi-head self-attention. The ranking performance of different methods is shown in Table 2. We find that UniRec outperforms baseline methods such as NAML and NPA. This may be because self-attention has a stronger ability to model news and user interests. In addition, UniRec also slightly outperforms its base model NRMS. This is because UniRec can capture the orders of words and behaviors via position embeddings.

In the news recall task, we compare UniRec with top basis user embeddings (denoted as UniRec(top)) against the following baseline methods: (1) YoutubeNet (Covington et al., 2016), which uses the average of clicked news embeddings for recall; (2) Pinnersage (Pal et al., 2020), an item recall method based on hierarchical clustering; (3) Octopus (Liu et al., 2020), which learns an elastic number of user embeddings for item recall; (4) UniRec(all), a variant of UniRec that uses all basis user embeddings to compose the user embedding for recall. We show the recall performance of different methods in Table 3. We find that YoutubeNet is less performant than the other recall methods. This may be because different user behaviors have different importance in user interest modeling, and simply averaging their embeddings may be suboptimal. In addition, both UniRec(top) and UniRec(all) outperform the other baseline methods.
This is because our approach can exploit the user interest information inferred from the ranking module to enhance news recall. In addition, our approach is a unified model for both recall and ranking, which is more efficient in online systems than the other methods. Besides, UniRec(top) outperforms its variant UniRec(all). This may be because selecting the basis user embeddings with the top attention weights yields more accurate user interest embeddings by attending to major user interests and filtering out noisy ones. The above results validate the effectiveness of our method in both news ranking and recall.

# 3.3 Case Study

We verify the effectiveness of UniRec in news recall via several case studies. Fig. 3 shows the clicked news of a random user and several top news recalled by UniRec. From the user's clicked news, we can infer that this user may be interested in finance, sports, and TV shows. We find that the recall result of UniRec covers the interest categories of the clicked news while also retaining some diversity. This shows that UniRec can generate accurate and diverse personalized news recall results.
| | Category | Title |
|---|---|---|
| Clicked News | Finance | Chipotle customers say the chain is charging them hundreds of dollars in fake orders |
| | Sports | Every touchdown from every game in week 9 |
| | TV | Fresh Off the Boat canceled after six seasons |
| UniRec Recall | Sports | The Patriots opened with a grinding 16-play drive in which nearly everything went right |
| | Finance | Dean Foods files for bankruptcy |
| | TV | Viral Wheel of Fortune Contestant and His Wife Clarify Hilarious 'Loveless Marriage' Intro |
| | TV | 8 of the best and 8 of the worst TV shows that got canceled this year, so far |
| | Sports | Browns, Steelers brawl at end of Cleveland's 21-7 win |
![](images/9cd8c8d25da963442680dbfe5dcb364b01f73e5b96f0d3165a58baa5778cb0c7.jpg)
Figure 3: The news clicked by a randomly sampled user and the top news recalled by UniRec.

![](images/8592d6087fbfd0aad3215247a69e02a164d6e2b74ae41e3c7effd12d6d8a19d2.jpg)
Figure 4: Influence of the basis user embedding number.
Figure 5: Influence of the hyperparameter $P$.

# 3.4 Hyperparameter Analysis

Finally, we study the influence of two important hyperparameters in UniRec: the total number $M$ of basis user embeddings and the number $P$ of basis user embeddings used to compose the user embedding for recall. We first set $P = M$ and tune the value of $M$. The recall performance is shown in Fig. 4. We find the performance is suboptimal when $M$ is too small, which may be because diverse user interests cannot be covered by only a few basis user embeddings. However, the performance also degrades when $M$ is too large.

This may be because it becomes difficult to accurately select informative basis user embeddings for user interest modeling. In addition, the computation and memory costs also increase. Thus, we set $M$ to a medium value (i.e., 20) that yields the best performance. We then tune the value of $P$ under $M = 20$. The results are shown in Fig. 5. We find the performance is suboptimal when $P$ is very small. This is intuitive because the user interests cannot be fully covered. However, the performance also declines when $P$ is relatively large. This may be because basis user embeddings with relatively low attention weights are redundant or even noisy for user interest modeling. Thus, we choose to use 5 basis user embeddings to compose the user embedding for recall.

# 4 Conclusion

In this paper, we present a unified approach for recall and ranking in news recommendation. In our method, we first infer a user embedding for ranking from historical news click behaviors via a user encoder model.
Then we derive a user embedding for recall from the obtained user embedding for ranking by using it as the attention query to select from a set of basis user embeddings that encode different general user interests. Extensive experiments on a benchmark dataset validate the effectiveness of our approach in both news ranking and recall.

# Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant numbers U1936208, U1936216, and 61862002, and the research initiation project of Zhejiang Lab (No. 2020LC0PI01).

# References

Yoshua Bengio and Yann LeCun. 2015. Adam: A method for stochastic optimization. In ICLR.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, pages 1724-1734.

Paul Covington, Jay Adams, and Emre Sargin. 2016. Deep neural networks for YouTube recommendations. In RecSys, pages 191-198.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, pages 4171-4186.

Ali Mamdouh Elkahky, Yang Song, and Xiaodong He. 2015. A multi-view deep learning approach for cross domain user modeling in recommendation systems. In WWW, pages 278-288.

Suyu Ge, Chuhan Wu, Fangzhao Wu, Tao Qi, and Yongfeng Huang. 2020. Graph enhanced representation learning for news recommendation. In WWW, pages 2863-2869.

Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In CIKM, pages 2333-2338.

Wang-Cheng Kang and Julian McAuley. 2019. Candidate generation with binary codes for large-scale top-n recommendation. In CIKM, pages 1523-1532.

Mozhgan Karimi, Dietmar Jannach, and Michael Jugovac. 2018.
News recommender systems - survey and roads ahead. Information Processing & Management, 54(6):1203-1227.

Zheng Liu, Jianxun Lian, Junhan Yang, Defu Lian, and Xing Xie. 2020. Octopus: Comprehensive and elastic user representation for the generation of recommendation candidates. In SIGIR, pages 289-298.

Zheng Liu, Yu Xing, Fangzhao Wu, Mingxiao An, and Xing Xie. 2019. Hi-Fi Ark: Deep user representation via high-fidelity archive network. In IJCAI, pages 3059-3065.

Yu A. Malkov and Dmitry A. Yashunin. 2018. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. TPAMI, 42(4):824-836.

Shumpei Okura, Yukihiro Tagami, Shingo Ono, and Akira Tajima. 2017. Embedding-based news recommendation for millions of users. In KDD, pages 1933-1942.

Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.

Aditya Pal, Chantat Eksombatchai, Yitong Zhou, Bo Zhao, Charles Rosenberg, and Jure Leskovec. 2020. PinnerSage: Multi-modal user embedding framework for recommendations at Pinterest. In KDD, pages 2311-2320.

Tao Qi, Fangzhao Wu, Chuhan Wu, and Yongfeng Huang. 2021a. Personalized news recommendation with knowledge-aware interactive matching. In SIGIR, pages 61-70.

Tao Qi, Fangzhao Wu, Chuhan Wu, and Yongfeng Huang. 2021b. PP-Rec: News recommendation with personalized user interest and time-aware news popularity. In ACL, pages 5457-5467.

Tao Qi, Fangzhao Wu, Chuhan Wu, and Yongfeng Huang, and Xing Xie. 2021c. Uni-FedRec: A unified privacy-preserving news recommendation framework for model training and online serving. In EMNLP Findings, pages 1438-1448.

Tao Qi, Fangzhao Wu, Chuhan Wu, Peiru Yang, Yang Yu, Xing Xie, and Yongfeng Huang. 2021d. HieRec: Hierarchical user interest modeling for personalized news recommendation. In ACL, pages 5446-5456.

Qiaoyu Tan, Ninghao Liu, Xing Zhao, Hongxia Yang, Jingren Zhou, and Xia Hu. 2020.
Learning to hash with graph neural networks for recommender systems. In WWW, pages 1988-1998.

Heyuan Wang, Fangzhao Wu, Zheng Liu, and Xing Xie. 2020. Fine-grained interest matching for neural news recommendation. In ACL, pages 836-845.

Hongwei Wang, Fuzheng Zhang, Xing Xie, and Minyi Guo. 2018. DKN: Deep knowledge-aware network for news recommendation. In WWW, pages 1835-1844.

Chuhan Wu, Fangzhao Wu, Mingxiao An, Jianqiang Huang, Yongfeng Huang, and Xing Xie. 2019a. Neural news recommendation with attentive multi-view learning. In IJCAI, pages 3863-3869.

Chuhan Wu, Fangzhao Wu, Mingxiao An, Jianqiang Huang, Yongfeng Huang, and Xing Xie. 2019b. NPA: Neural news recommendation with personalized attention. In KDD, pages 2576-2584.

Chuhan Wu, Fangzhao Wu, Suyu Ge, Tao Qi, Yongfeng Huang, and Xing Xie. 2019c. Neural news recommendation with multi-head self-attention. In EMNLP, pages 6390-6395.

Chuhan Wu, Fangzhao Wu, Yongfeng Huang, and Xing Xie. 2021a. Personalized news recommendation: A survey. arXiv preprint arXiv:2106.08934.

Chuhan Wu, Fangzhao Wu, Yongfeng Huang, and Xing Xie. 2021b. User-as-graph: User modeling with heterogeneous graph pooling for news recommendation. In IJCAI, pages 1624-1630.

Chuhan Wu, Fangzhao Wu, Tao Qi, and Yongfeng Huang. 2020a. User modeling with click preference and reading satisfaction for news recommendation. In IJCAI, pages 3023-3029.

Chuhan Wu, Fangzhao Wu, Tao Qi, and Yongfeng Huang. 2021c. FeedRec: News feed recommendation with various user feedbacks. arXiv preprint arXiv:2102.04903.

Fangzhao Wu, Ying Qiao, Jiun-Hung Chen, Chuhan Wu, Tao Qi, Jianxun Lian, Danyang Liu, Xing Xie, Jianfeng Gao, Winnie Wu, et al. 2020b. MIND: A large-scale dataset for news recommendation. In ACL, pages 3597-3606.

Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In NAACL-HLT, pages 1480-1489.
Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L. Hamilton, and Jure Leskovec. 2018. Graph convolutional neural networks for web-scale recommender systems. In KDD, pages 974-983.
# Two-Step Question Retrieval for Open-Domain QA

Yeon Seonwoo\*, Juhee Son\*, Jiho Jin†, Sang-Woo Lee‡§, Ji-Hoon Kim‡§, Jung-Woo Ha‡§, Alice Oh†

†KAIST, ‡NAVER AI Lab, §NAVER CLOVA

{yeon.seonwoo,sjh5665,jinjh0123}@kaist.ac.kr

{sang.woo lee, genesisik, jungwoo.ha}@navercorp.com

alice.oh@kaist.edu

# Abstract

The retriever-reader pipeline has shown promising performance in open-domain QA but suffers from a very slow inference speed.
Recently proposed question retrieval models tackle this problem by indexing question-answer pairs and searching for similar questions. These models have shown a significant increase in inference speed, but at the cost of lower QA performance compared to retriever-reader models. This paper proposes a two-step question retrieval model, SQuID (Sequential Question-Indexed Dense retrieval), together with a distant supervision method for training it. SQuID uses two bi-encoders for question retrieval. The first-step retriever selects the top-k similar questions, and the second-step retriever finds the most similar question among the top-k. We evaluate the performance and the computational efficiency of SQuID. The results show that SQuID significantly improves the performance of existing question retrieval models with negligible loss in inference speed.$^{1}$

# 1 Introduction

Retriever-reader models in open-domain QA require a long time for inference (Izacard and Grave, 2021; Lewis et al., 2020b; Sachan et al., 2021; Mao et al., 2021a; Karpukhin et al., 2020). This has been identified as a bottleneck in building real-time QA systems, and question retrieval and phrase-indexed QA have been proposed to resolve this problem (Seo et al., 2018, 2019; Lee et al., 2020, 2021a,b; Lewis et al., 2021a,b). These approaches directly search for the answer to the input question in the corpus, skipping the computationally expensive machine reading step. In phrase-indexed QA, retrievers pre-index all phrases in the corpus and find the phrase most similar to the input question. In question retrieval, synthetic

![](images/2603be4c624753ec1d5a2e61c77ee0691171eae05ad85fa1f3ae700cbf15983a.jpg)
Figure 1: Trade-off between the open-domain QA performance and the inference time of existing question retrieval models (blue dots) and SQuID (red dots) on NaturalQuestions (NQ). The x-axis represents the inference speed and the y-axis represents the QA performance.
question-answer pairs are pre-indexed and searched by retrievers (Du et al., 2017; Duan et al., 2017; Fabbri et al., 2020; Lewis et al., 2020a).

Although recent question retrieval models significantly increase the inference speed, this improvement comes at the cost of QA performance. Several approaches have been applied to question retrieval models to overcome this degradation, such as adopting a cross-encoder (Mao et al., 2021b; Xiong et al., 2020) for re-ranking and increasing the model size (Lewis et al., 2021b). However, these approaches cause a significant loss of computational efficiency. Figure 1 shows the trade-off between the open-domain QA performance and the inference speed of question retrieval models.

We propose SQuID (Sequential Question-Indexed Dense retrieval), which significantly improves QA performance without losing computational efficiency. Our work follows previous work on neural re-ranking methods, which use a cross-encoder to re-rank the top-k passages retrieved by the first-step retriever (Lewis et al., 2021b; Xiong et al., 2020). Re-ranking methods have improved retrieval performance but incur huge computation costs due to the cross-encoder architecture. SQuID instead uses an additional bi-encoder retriever to avoid this loss of computational efficiency. We also provide distant supervision methods for training the additional retriever in the absence of training data for question retrievers.

We evaluate SQuID on NaturalQuestions (NQ) (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017). We conduct three types of experiments: open-domain QA, computational efficiency evaluation, and analysis of distant supervision methods for training the second-step retriever. Experimental results show that SQuID significantly outperforms the state-of-the-art question retrieval model by $4.0\%$ on NQ and $6.1\%$ on TriviaQA without losing computational efficiency.
Our main contribution is a sequential question retrieval model that improves both QA performance and inference speed, a meaningful step toward developing real-time open-domain QA systems.

# 2 Related Work

The problem of reducing the computational cost of open-domain QA has received much attention recently. The main bottleneck of a retriever-reader model is the machine reading step, and Seo et al. (2018, 2019) and Lee et al. (2021a) propose phrase-indexed QA, which retrieves the answer directly from the corpus without a machine reading step. These models pre-compute the contexts of phrases in a corpus and conduct lexical and semantic similarity searches between the given question and the contexts of phrases (Zhao et al., 2021; Yamada et al., 2021). Most related to our work are question retrieval models, which use question-generation models to build question-answer pairs and conduct a similarity search between the input question and the pre-indexed questions (Lewis et al., 2021a,b). These models significantly reduce the computational cost but result in lower performance. Our work provides an efficient question retrieval pipeline with distant supervision methods for training, whereas previous question retrieval models focus on indexing methods with less attention to the retrieval pipeline.

# 3 Method

Our method builds on the question retrieval pipeline proposed by Lewis et al. (2021b), in which question retrievers find the question most similar to the input question and return the answer of the selected question. In this study, we note that previous question retrievers are optimized not just for retrieval performance but also for maintaining inference speed over millions of texts (Lewis et al., 2021b). In this process, retrieval performance decreases as the retrievers are further optimized for computational efficiency.
We propose to use an additional retriever that takes the top-k predictions from the first retriever and selects the most similar question among them. The second-step retriever faces a much looser inference-speed constraint than the first retriever, since its search space contains only a few samples. This lets us focus solely on retrieval performance when designing its training method. The overall training and inference procedure of SQuID is illustrated in Figure 2. We describe the details of SQuID below.

# 3.1 Training

Since annotated question-question pairs are unavailable, we distantly supervise SQuID with heuristically selected positive and negative samples. We first select the top-k similar questions with the first-step retriever. Among these top-k questions, we choose positive and negative samples as follows. For positive samples, we choose questions whose answers are most similar to the ground-truth answer in terms of F1-score, the evaluation metric used in extractive QA (Rajpurkar et al., 2016). For negative samples, we sample questions with answers that differ from the ground-truth answer (Karpukhin et al., 2020; Xiong et al., 2021).

When the input question $q$ is provided with a positive sample $(q^{+})$ and $m$ negative samples $(q_{1}^{-},\dots,q_{m}^{-})$, our second-step retriever is trained to distinguish the positive sample from the negatives. The loss function is as follows:

$$
L(q, q^{+}, q_{1}^{-}, \dots, q_{m}^{-}) = -\log \left( \frac{e^{\operatorname{sim}(q, q^{+})}}{e^{\operatorname{sim}(q, q^{+})} + \sum_{i=1}^{m} e^{\operatorname{sim}(q, q_{i}^{-})}} \right). \tag{1}
$$

The similarity function is defined as the dot product of the two encoded vectors: $\operatorname{sim}(q_1,q_2) = E_Q(q_1)^T E_Q(q_2)$, where $E_{Q}(\cdot)$ is the question encoder of the second-step retriever.
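A minimal NumPy sketch of this training objective (Eq. 1) and of the second-step re-scoring over a top-k candidate set; the lookup-table "encoder" over integer question ids is a toy stand-in for $E_Q$, and all sizes are illustrative.

```python
import numpy as np

def second_step_loss(q, q_pos, q_negs, encode):
    """Eq. (1): -log softmax of sim(q, q+) against sim(q, q_i-),
    with sim(q1, q2) = E_Q(q1) · E_Q(q2)."""
    s_pos = encode(q) @ encode(q_pos)
    s_neg = np.array([encode(q) @ encode(n) for n in q_negs])
    scores = np.concatenate(([s_pos], s_neg))
    scores -= scores.max()                     # numerical stability
    return -(scores[0] - np.log(np.exp(scores).sum()))

def second_step_pick(q_vec, topk_ids, index):
    """Re-scoring: inner-product search over only the k pre-computed
    second-step question vectors returned by the first retriever."""
    return topk_ids[int(np.argmax(index[topk_ids] @ q_vec))]

# Toy run: random vectors stand in for encoder outputs of 100 questions.
rng = np.random.default_rng(0)
table = rng.standard_normal((100, 16))
encode = lambda qid: table[qid]
l = second_step_loss(0, 1, [2, 3, 4], encode)
best = second_step_pick(table[0], np.array([7, 1, 42]), table)
```

Because the second step scores only k candidates, its cost is negligible next to the first-step search over millions of pre-indexed questions.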
![](images/59b7461dda7f0c91458c7abd913deede3598aef2bcacc6869d65f9a29f74f967.jpg)
(a) Training procedure

![](images/2393992c166955fbfc42543ec55e06987a31f981aa33bbb98d35752926db42bb.jpg)
(b) Inference procedure
Figure 2: Illustration of the training and inference processes of SQuID. SQuID consists of two retrievers. The first-step retriever selects the top-k similar questions among the pre-indexed QAs. From the top-k results, (a) the second-step retriever is trained to distinguish the positive sample from the negative samples, and (b) it selects the most similar question at inference time.

# 3.2 Inference

Given a question $q$, the two retrievers of SQuID work in two steps. The first-step retriever selects the top-k similar questions. The retrieved questions are then mapped to the question vectors pre-computed by the second-step retriever. The second-step retriever selects the most similar question $q'$ from the top-k results using these question vectors. We use Maximum Inner Product Search (MIPS) for the second-step retrieval. Finally, SQuID returns the answer of $q'$ as the answer for $q$.

# 4 Experimental Setup and Results

We evaluate the performance and computational efficiency of SQuID on two open-domain QA datasets: NaturalQuestions (NQ) and TriviaQA. We also compare various distant supervision methods for training SQuID. We use exact match (EM) (Rajpurkar et al., 2016) to evaluate performance and the number of questions per second (Q/sec) to evaluate inference speed. The details of our experimental setup are described in Appendix A.2.

Question Retrievers on Open-Domain QA: We evaluate SQuID with two different first-step retrievers: BM25 and RePAQ-base256$^{2}$ (Lewis et al., 2021b). Table 1 shows that SQuID-BM25/DPR and SQuID-RePAQ/DPR achieve the best performance among question retrieval models on TriviaQA and NQ, respectively.
Note that SQuID-RePAQ/DPR outperforms RePAQ-base256 significantly with a + +negligible loss of inference speed; $4.0\%$ p EM gain on NQ and $6.1\%$ p gain on TriviaQA at $92.0\%$ speed (1266 Q/sec vs. 1376 Q/sec). + +Trade-off between QA Performance and Computational Efficiency: Table 1 shows the tradeoff between the open-domain QA performance and the inference speed of the three types of open-domain QA models. Comparing RePAQ-large and RAG-Sequence, we see a large performance gap of $3.3\%$ on NQ and $18.0\%$ on TriviaQA, and we also see a large speed gap of $624~\mathrm{Q / s}$ and $0.8$ Q/s. SQuID bridges this gap, achieving comparable performances to RAG-Sequence on NQ while maintaining the high inference speed. The performance gain on TriviaQA is not as high, and we conjecture that this is because RePAQ uses only questions from NQ in its filtering step. We leave a deeper study of this discrepancy for future research. + +Figure 1 illustrates the QA performance and inference speed of various configurations of RePAQ SQuID. We vary the encoder of the second-step retriever with different pre-trained models: DPR (Karpukhin et al., 2020), BERT-base/large (Devlin et al., 2019), and ALBERT-base/large (Lan et al., 2019). The first and second-step question encoders can be executed concurrently, so we run them in parallel and set the batch size as half to measure the inference speed (SQuID-DPR-parallel). We use the maximum batch size possible on a single V100-16GB GPU. The figure shows that results of SQuID all lie to the top right of the curve fitted to the RePAQ results, meaning that SQuID succeeds in improving both QA performance and inference + +
| Model Type | Model | NQ | TriviaQA | Inference speed (Q/sec) |
| --- | --- | --- | --- | --- |
| Question retrieval | RePAQ-base256 (Lewis et al., 2021b) | 40.0 | 38.8 | 1376 |
| Question retrieval | RePAQ-base (Lewis et al., 2021b) | 40.9 | 39.7 | 738 |
| Question retrieval | RePAQ-large (Lewis et al., 2021b) | 41.2 | 38.8 | 624 |
| Question retrieval | SQuID-BM25/DPR | 43.1 | 45.6 | 328 |
| Question retrieval | SQuID-RePAQ/DPR | 44.0 | 44.9 | 1006 (1266†) |
| Phrase-indexed | DensePhrase (Lee et al., 2021a) | 40.9 | 50.7 | 20.6* |
| Retriever-reader | RAG-Sequence (Lewis et al., 2020b) | 44.5 | 56.8 | 0.8 |
| Retriever-reader | FiD-large (Izacard and Grave, 2021) | 51.4 | 67.6 | 0.5* |
+ +Table 1: The open-domain QA performance (EM) and inference speed of SQuID and baselines on the NQ test set and the TriviaQA test set. We take the performance and inference speed of each baseline from their reported results. * indicates that the inference speed is taken from the original paper. † indicates that the inference speed is computed in the parallel computing setting. + +
| Supervision | BM25 | RePAQ |
| --- | --- | --- |
| w/o 2nd retriever | 34.4 | 40.0 |
| + Self | 39.5 | 40.4 |
| + Similar | 43.1 | 44.0 |
| + Similar / Self | 43.6 | 44.1 |
| + Same Answer | 43.4 | 44.4 |
+ +Table 2: The open-domain QA performance (EM) of SQuID under four different distant supervision methods on the NQ test set. + +speed. The detailed results are in Appendix A.1. + +Analysis on Positive Sampling Methods: We distantly supervise the second-step retriever because annotated question-question pairs are unavailable. We conduct experiments on various positive sampling methods for distant supervision: "Self", "Similar", "Similar/Self", and "Same Answer". Each method uses the following as the positive sample: + +1) the input question itself ("Self"), 2) a similar question with a similar answer ("Similar"), 3) a similar question if it has the ground-truth answer, or otherwise the input question itself ("Similar/Self"), and 4) a random question with the ground-truth answer ("Same Answer"). + +Table 2 shows the performance of SQuID-BM25 and SQuID-RePAQ-base256 on the NQ test set with the four distant supervision methods. The first row (w/o 2nd retriever) indicates the performance based only on the first-step retriever (BM25 or RePAQ-base256). The second-step retriever with the "Self" method improves the performance slightly, and the others improve it more significantly. The large gap between "Self" and the other methods shows that using the answer information is essential for distant supervision. + +Error Propagation Analysis: The error rate of each stage in a multi-stage model provides a better understanding of the model's performance boundary. In SQuID, the second-step retriever can only predict the correct answer when the top-50 question-answer pairs retrieved by the first-step retriever contain the answer. This indicates that the upper-bound performance of SQuID is determined by the performance of the first-step retriever. We measure the R@50 accuracy of the first-step retrievers on NQ and TriviaQA. The R@50 of BM25 and RePAQ is $64.07\%$ and $64.34\%$ on NQ, and $61.73\%$ and $59.10\%$ on TriviaQA, respectively.
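The two-step inference described in §3.2 can be sketched with brute-force MIPS over the top-k candidates. This is an illustrative sketch, not SQuID's implementation: the first-step scores and the pre-computed question vectors are random stand-ins for the BM25/RePAQ scores and the second-step (e.g., DPR-based) embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pre-indexed QA pairs: first-step scores stand in for BM25/RePAQ, and the
# index vectors stand in for question embeddings pre-computed by the
# second-step retriever.
n_index, dim, k = 1000, 64, 50
index_vectors = rng.standard_normal((n_index, dim)).astype(np.float32)
answers = [f"answer_{i}" for i in range(n_index)]

def first_step_topk(scores, k):
    # Step 1: take the top-k candidates under the first-step retriever.
    return np.argpartition(-scores, k)[:k]

def second_step_mips(query_vec, candidate_ids):
    # Step 2: exact maximum inner product search restricted to the top-k.
    scores = index_vectors[candidate_ids] @ query_vec
    return candidate_ids[int(np.argmax(scores))]

first_step_scores = rng.standard_normal(n_index)          # stubbed step 1
query_vec = rng.standard_normal(dim).astype(np.float32)   # stubbed step 2

candidates = first_step_topk(first_step_scores, k)
best = second_step_mips(query_vec, candidates)
print(answers[best])  # the answer of the most similar indexed question
```

Restricting the exact inner-product search to the top-k first-step candidates is what keeps the second step cheap relative to searching the full index.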
+ +# 5 Conclusion + +The trade-off between the performance and the inference speed is an important problem in open-domain QA. Recently proposed question retrieval models have shown significantly improved inference speed. However, this improvement came at the cost of a significantly lower QA performance by the question retrieval models compared to the state-of-the-art open-domain QA models. In this paper, we proposed a two-step question retrieval model, SQuID. We evaluated the open-domain QA performance and the inference speed of SQuID on two datasets: NaturalQuestions and TriviaQA. From the results, we showed that the sequential two-retriever approach in SQuID achieves a significant QA performance improvement over the existing question retrieval models, while retaining the advantage of faster inference speed. This improvement in both + +QA performance and inference speed is a meaningful step toward the development of real-time open domain QA systems. + +# Acknowledgements + +This work was partly supported by Naver Corp. and the Engineering Research Center Program through the National Research Foundation of Korea (NRF) funded by the Korean Government MSIT (NRF-2018R1A5A1059921). + +# References + +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT*. +Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In ACL. +Nan Duan, Duyu Tang, Peng Chen, and Ming Zhou. 2017. Question generation for question answering. In EMNLP. +Alexander Richard Fabbri, Patrick Ng, Zhiguo Wang, Ramesh Nallapati, and Bing Xiang. 2020. Template-based question generation from retrieved sentences for improved unsupervised question answering. In ACL. +Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In EACL. 
+Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In ACL. +Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In EMNLP. +Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: A benchmark for question answering research. TACL. +Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite bert for self-supervised learning of language representations. In ICLR. +Jinhyuk Lee, Minjoon Seo, Hannaneh Hajishirzi, and Jaewoo Kang. 2020. Contextualized sparse representations for real-time open-domain question answering. In ACL. + +Jinhyuk Lee, Mujeen Sung, Jaewoo Kang, and Danqi Chen. 2021a. Learning dense representations of phrases at scale. In ACL. +Jinhyuk Lee, Alexander Wettig, and Danqi Chen. 2021b. Phrase retrieval learns passage retrieval, too. arXiv. +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In ACL. +Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen-tau Yih, Tim Roektaschel, Sebastian Riedel, and Douwe Kiela. 2020b. Retrieval-augmented generation for knowledge-intensive nlp tasks. In NeurIPS. +Patrick Lewis, Pontus Stenetorp, and Sebastian Riedel. 2021a. Question and answer test-train overlap in open-domain question answering datasets. In EACL. 
+Patrick Lewis, Yuxiang Wu, Linqing Liu, Pasquale Minervini, Heinrich Kuttler, Aleksandra Piktus, Pontus Stenetorp, and Sebastian Riedel. 2021b. PAQ: 65 million probably-asked questions and what you can do with them. arXiv. +Yuning Mao, Pengcheng He, Xiaodong Liu, Yelong Shen, Jianfeng Gao, Jiawei Han, and Weizhu Chen. 2021a. Generation-augmented retrieval for open-domain question answering. In ACL. +Yuning Mao, Pengcheng He, Xiaodong Liu, Yelong Shen, Jianfeng Gao, Jiawei Han, and Weizhu Chen. 2021b. Reader-guided passage reranking for open-domain question answering. In ACL-Findings. +Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In EMNLP. +Devendra Singh Sachan, Siva Reddy, William Hamilton, Chris Dyer, and Dani Yogatama. 2021. End-to-end training of multi-document reader and retriever for open-domain question answering. arXiv. +Minjoon Seo, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2018. Phrase-indexed question answering: A new challenge for scalable document comprehension. In EMNLP. +Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2019. Real-time open-domain question answering with dense-sparse phrase index. In ACL. +Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N Bennett, Junaid Ahmed, and Arnold Overwijk. 2021. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In ICLR. + +
| Model | EM | Q/sec |
| --- | --- | --- |
| SQuID-RePAQ/DPR-parallel | 44.0 | 1266 |
| SQuID-RePAQ/DPR | 44.0 | 1006 |
| SQuID-RePAQ/BERT-large | 43.1 | 814 |
| SQuID-RePAQ/BERT-base | 43.1 | 1006 |
| SQuID-RePAQ/ALBERT-large | 42.2 | 677 |
| SQuID-RePAQ/ALBERT-base | 41.8 | 920 |
| RePAQ-base256 | 40.0 | 1376 |
| RePAQ-large | 41.2 | 624 |
| RePAQ-xlarge | 41.5 | 467 |
| RePAQ-base + Reranker-base | 45.7 | 41 |
| RePAQ-large + Reranker-xlarge | 46.2 | 7 |
+ +Table 3: EM score and inference speed on NQ for various configurations of SQuID and RePAQ. + +Wenhan Xiong, Xiang Li, Srini Iyer, Jingfei Du, Patrick Lewis, William Yang Wang, Yashar Mehdad, Scott Yih, Sebastian Riedel, Douwe Kiela, et al. 2020. Answering complex open-domain questions with multi-hop dense retrieval. In ICLR. + +Ikuya Yamada, Akari Asai, and Hannaneh Hajishirzi. 2021. Efficient passage retrieval with hashing for open-domain question answering. In ACL. + +Tiancheng Zhao, Xiaopeng Lu, and Kyusong Lee. 2021. SPARTA: Efficient open-domain question answering via sparse transformer matching retrieval. In NAACL-HLT. + +# A Appendix + +# A.1 Detailed results of Figure 1 + +Table 3 shows the detailed results of Figure 1. + +# A.2 Experimental Setup + +Training Details: We set the batch size to 2 per GPU and the number of negative samples to 16. We used the validation EM score for early stopping. SQuID was trained on a machine with four V100-16GB GPUs. We report the result of a single trial. + +Computational Environment for Measuring the Inference Speed: The inference speed of the baseline models and SQuID is measured with a V100-16GB GPU and 32 CPUs (Intel Xeon E5-2686v4). We report the mean of three separate trials. + +# A.3 License or Terms of Artifacts + +We use BERT, whose license (Apache License 2.0) permits modification and distribution. We also use RePAQ, whose license (CC BY-NC 4.0) permits modification and distribution. All models we used are publicly available.
\ No newline at end of file diff --git a/twostepquestionretrievalforopendomainqa/images.zip b/twostepquestionretrievalforopendomainqa/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..2d419c450cdbe37b483c00b8ca8957c9a9b8d1f6 --- /dev/null +++ b/twostepquestionretrievalforopendomainqa/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:50a01ac76b10ccd3607ce9aa801c0bdd40a423c47084e225a99c0e89a7166a12 +size 243945 diff --git a/twostepquestionretrievalforopendomainqa/layout.json b/twostepquestionretrievalforopendomainqa/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ef01a67169c21fb56a2411a7c89713ff5a45a0d7 --- /dev/null +++ b/twostepquestionretrievalforopendomainqa/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dce052c075bfe6b39d66d4d5b88c0950483e105db13ac11d95749c636618949c +size 173920 diff --git a/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/a36cb823-5d2b-46ba-86b5-3311e93ba1e5_content_list.json b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/a36cb823-5d2b-46ba-86b5-3311e93ba1e5_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..a21c47991a8a3c0f3bcc1fca1d3482633cdd075b --- /dev/null +++ b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/a36cb823-5d2b-46ba-86b5-3311e93ba1e5_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d43b5424b5172d728c0dd6d1358c998baf01e9c1b4e2881687b063093a36934 +size 83815 diff --git a/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/a36cb823-5d2b-46ba-86b5-3311e93ba1e5_model.json b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/a36cb823-5d2b-46ba-86b5-3311e93ba1e5_model.json new file mode 100644 index 0000000000000000000000000000000000000000..6cbe2805e0647d420d4baeffe3698a7727ca235d --- /dev/null +++ 
b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/a36cb823-5d2b-46ba-86b5-3311e93ba1e5_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c37f1fc21a4c10eda9ac51444f31509a08848f79edfa30a76aa4213be700913 +size 99208 diff --git a/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/a36cb823-5d2b-46ba-86b5-3311e93ba1e5_origin.pdf b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/a36cb823-5d2b-46ba-86b5-3311e93ba1e5_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d566aa0f1f10c7f467be7c02d42ec5b4e18c7e09 --- /dev/null +++ b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/a36cb823-5d2b-46ba-86b5-3311e93ba1e5_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7ff3c24042d786d18961be3c7a13e9e9c3b0f53c0bcb041d27a2e75280e75f81 +size 453482 diff --git a/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/full.md b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a06d749fab6a5cab53da5c7ebf5e0166fa822a36 --- /dev/null +++ b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/full.md @@ -0,0 +1,293 @@ +# Type-Driven Multi-Turn Corrections for Grammatical Error Correction + +Shaopeng Lai $^{1*}$ , Qingyu Zhou $^{2}$ , Jiali Zeng $^{2}$ , Zhongli Li $^{2}$ , Chao Li $^{2}$ , Yunbo Cao $^{2}$ , Jinsong Su $^{1,3\dagger}$ + +$^{1}$ School of Informatics, Xiamen University, China + +$^{2}$ Tencent Cloud Xiaowei, China + +3Key Laboratory of Digital Protection and Intelligent Processing of Intangible + +Cultural Heritage of Fujian and Taiwan, Ministry of Culture and Tourism, China splai@stu.xmu.edu.cn, {qingyuzhou, lemonzeng, neutrali, diegoli, yunbocao} @tencent.com, jssu@xmu.edu.cn + +# Abstract + +Grammatical Error Correction (GEC) aims to automatically detect and correct grammatical errors. 
In this task, dominant models are trained with one-iteration learning while performing multiple iterations of corrections during inference. Previous studies mainly focus on data augmentation approaches to combat the resulting exposure bias, which suffer from two drawbacks. First, they simply mix additionally-constructed training instances and original ones to train models, which fails to make models explicitly aware of the procedure of gradual corrections. Second, they ignore the interdependence between different types of corrections. In this paper, we propose a Type-Driven Multi-Turn Corrections approach for GEC. Using this approach, from each training instance, we additionally construct multiple training instances, each of which involves the correction of a specific type of errors. Then, we use these additionally-constructed training instances and the original one to train the model in turn. By doing so, our model is trained to not only correct errors progressively, but also exploit the interdependence between different types of errors for better performance. Experimental results and in-depth analysis show that our approach significantly benefits the model training. In particular, our enhanced model achieves state-of-the-art single-model performance on English GEC benchmarks. We release our code at https://github.com/DeepLearnXMU/TMTC. + +# 1 Introduction + +Grammatical Error Correction (GEC) aims at automatically detecting and correcting grammatical (and other related) errors in a text. It attracts much attention due to its practical applications in writing assistants (Napoles et al., 2017b; Ghufron and Rosyida, 2018), speech recognition systems (Karat et al., 1999; Wang et al., 2020; Kubis et al., 2020), etc. Inspired by the success of neural machine translation (NMT), some models adopt the same paradigm, namely NMT-based models.
They have been quite successful, especially with data augmentation approaches (Boyd, 2018; Ge et al., 2018; Xu et al., 2019; Grundkiewicz et al., 2019; Wang and Zheng, 2020; Takahashi et al., 2020). However, these models have been criticized for their inefficiency during inference (Chen et al., 2020; Sun et al., 2021). To tackle this issue, many researchers resort to the sequence-to-label (Seq2Label) formulation, achieving comparable or better performance with higher efficiency (Malmi et al., 2019; Awasthi et al., 2019; Stahlberg and Kumar, 2020; Omelianchuk et al., 2020). + +Despite their success, both NMT-based and Seq2Label models are trained with one-iteration learning, while correcting errors over multiple iterations during inference. As a consequence, they suffer from exposure bias and exhibit performance degradation (Ge et al., 2018; Lichtarge et al., 2019; Zhao and Wang, 2020; Parnow et al., 2021). To deal with this issue, Ge et al. (2018) propose to generate fluency-boost pseudo instances as additional training data. Besides, Parnow et al. (2021) dynamically augment training data by introducing predicted sentences with high error probabilities. + +However, the above-mentioned approaches construct pseudo data based on a GEC model or an error-generation model, so their quality depends heavily on the performance of these models. Thus, the error distribution of the pseudo data is biased and lacks diversity and practicality. Moreover, they simply mix original and pseudo data to train models, which are therefore unable to learn to correct errors progressively. Furthermore, they ignore the interdependence between different types of errors, which intuitively plays an important role in GEC. Taking Table 1 as an example, correcting "little" with "few" or "job" with "jobs" + +Erroneous Sentence: In my country there are little job because the economy is very bad. + +Reference Sentence: In my country there are few jobs because the economy is very bad.
+ +Table 1: An example of the interdependence between corrections. Please note that whichever error is corrected first, the other error can be corrected more easily. + +first can help the other error be corrected more easily. Therefore, we believe that how to construct and exploit pseudo data with editing-action corrections for GEC is still a problem worthy of in-depth study. + +In this paper, we first conduct quantitative experiments to investigate the performance improvements of a GEC model when provided with different types of error corrections. Experimental results show that performing append or replace corrections first indeed benefits the correction of other errors. Furthermore, we propose a Type-Driven Multi-Turn Corrections (TMTC) approach for GEC. Concretely, by correcting a certain type of errors while leaving others unchanged, we construct an intermediate sentence for each training instance and pair it with its raw erroneous sentence and reference sentence respectively, forming two additional training instances. During model training, using the former instance, we first guide the model to learn to correct the corresponding type of errors. Then, using the latter instance, we teach the model to correct the other types of errors with the help of the previous corrections. Overall, the contributions of our work are three-fold: + +- Through quantitative experiments, we investigate the interdependence between different types of corrections, with the finding that corrections of appending or replacing words significantly benefit correcting other errors. +- We propose a TMTC approach for GEC. To the best of our knowledge, our work is the first attempt to explore the interdependence between different types of errors for GEC. +- We conduct experiments and in-depth analysis to investigate the effectiveness of our proposed approach. Experimental results show that our enhanced model achieves state-of-the-art performance.
+ +# 2 Related Work + +Generally, there are two categories of models in GEC: Transformer-dominated NMT-based models (Boyd, 2018; Ge et al., 2018; Xu et al., 2019; Grundkiewicz et al., 2019; Wang and Zheng, 2020; Takahashi et al., 2020) and Seq2Label models led by GECToR (Malmi et al., 2019; Awasthi et al., 2019; Stahlberg and Kumar, 2020; Omelianchuk et al., 2020). The former consider GEC as a machine translation task, where the model is fed the erroneous sentence and then outputs the corrected sentence token by token. By comparison, Seq2Label models are able to correct grammatical errors more efficiently, and often even more accurately. Among them, the GECToR models (Omelianchuk et al., 2020) obtain remarkable performance. Typically, they adopt a pre-trained language model as the encoder to learn word-level representations and utilize a softmax-based classifier to predict designed editing-action labels. + +Since GEC models may fail to completely correct a sentence in just one inference iteration, some researchers resort to data augmentation, which has been widely used in other NLP studies (Song et al., 2020; Xu et al., 2020). For instance, Ge et al. (2018) propose to let the GEC model infer iteratively and design a fluency-boost learning approach. Specifically, they establish new erroneous-reference sentence pairs by pairing predicted less fluent sentences with their reference sentences during training. Likewise, to address the mismatch between training and inference of Seq2Label models, Parnow et al. (2021) apply a confidence-based method to construct additional training data by pairing low-confidence sentences with reference sentences. Note that these two methods also involve constructing pseudo data using sentences with partial errors. However, ours differs from them in two aspects. First, these two methods simply mix their pseudo data with the original data and still train models in a one-iteration learning manner.
By contrast, we decompose the one-iteration corrections into multiple turns, so as to make the model aware of gradual corrections. Second, these two methods ignore the interdependence between different types of errors, which is exploited by our proposed approach to enhance the model. + +![](images/0e335883a031b3e74adb13c385aaf164768a5033523d9243d0f65ad6d79405d4.jpg) +Figure 1: The procedure of our quantitative experiments. Each sentence is composed of five parts as illustrated, where Error(ACTION label) denotes the erroneous words that can be corrected via the corresponding editing-action label. We only correct one type of errors and compare the prediction results on the other types of errors. + +# 3 Background + +In this work, we choose GECToR (Omelianchuk et al., 2020) as our basic GEC model due to its efficiency and competitive performance. Typically, it considers the GEC task as a sequence-to-label task, where the candidate editing-action labels mainly include $KEEP (to keep the current word unchanged), $DELETE (to delete the current word), $APPEND_t (to append the word t after the current word), $REPLACE_t (to replace the current word with the word t), and some elaborate g-transformation labels (Omelianchuk et al., 2020) performing task-specific operations, such as $TRANSFORM_CASE_LOWER and $TRANSFORM_CASE_CAPITAL (to change the case of the current word). + +On the whole, the GECToR model is composed of an encoder based on a pre-trained language model and two linear classifiers: one for grammatical error detection (GED) and the other for GEC. The encoder reads the erroneous sentence $X_{e} = x_{1},x_{2},\ldots ,x_{N}$ and represents words with hidden states $\{h_i\}_{i = 1}^N$ , which are fed into the classifiers to predict the binary label sequence $Y = y_{1},y_{2},\dots,y_{N}$ for GED and the editing-action label sequence $T = t_{1},t_{2},\dots,t_{N}$ for GEC, respectively.
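A minimal numpy sketch of this two-head design follows; the pre-trained encoder is abstracted away (the hidden states `h` are given directly), and the label inventory is a tiny illustrative subset rather than GECToR's full label set.

```python
import numpy as np

rng = np.random.default_rng(0)

LABELS = ["$KEEP", "$DELETE", "$APPEND_the", "$REPLACE_few"]  # toy subset
hidden, n_labels = 32, len(LABELS)

# Two linear heads sharing the encoder's hidden states h_i.
W_ged = rng.standard_normal((hidden, 2)) * 0.1         # binary detection
W_gec = rng.standard_normal((hidden, n_labels)) * 0.1  # editing-action labels

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(h):
    # h: (N, hidden) hidden states for the N tokens of X_e.
    return softmax(h @ W_ged), softmax(h @ W_gec)

def loss(h, y, t):
    # L = L_d + L_c: summed negative log-likelihoods of the two heads.
    p_y, p_t = forward(h)
    idx = np.arange(len(y))
    return -np.log(p_y[idx, y]).sum() - np.log(p_t[idx, t]).sum()

h = rng.standard_normal((5, hidden))   # states for a 5-token sentence
y = np.array([0, 0, 1, 0, 0])          # GED: token 3 is erroneous
t = np.array([0, 0, 3, 0, 0])          # GEC: ... and needs $REPLACE_few
print(round(float(loss(h, y, t)), 3))
```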
Formally, the losses of the two classifiers can be formulated as + +$$
L _ {d} = - \sum_ {i = 1} ^ {N} \log p \left(y _ {i} \mid X _ {e}, \theta\right), \tag {1}
$$ + +$$
L _ {c} = - \sum_ {i = 1} ^ {N} \log p \left(t _ {i} \mid X _ {e}, \theta\right), \tag {2}
$$ + +where $\theta$ denotes the model parameters. Usually, the GECToR model is trained to optimize the sum of the two losses: $L = L_{d} + L_{c}$ . + +It is worth noting that the GECToR model is trained to correct all errors in a one-iteration manner, while correcting errors in a multiple-iteration way during inference (at most 5 iterations). Besides, three stages are involved in the training of the GECToR model, as shown in Table 2. + +# 4 Effect of the Interdependence between Different Types of Corrections + +In this section, we conduct several groups of quantitative experiments to explore the interdependence
| Dataset | #Instance | Stage |
| --- | --- | --- |
| PIE-synthetic (Awasthi et al., 2019) | 9,000,000 | I |
| Lang-8 (Tajiri et al., 2012) | 947,344 | II |
| NUCLE (Dahlmeier et al., 2013) | 56,958 | II |
| FCE (Yannakoudakis et al., 2011) | 34,490 | II |
| W&I+LOCNESS (Bryant et al., 2019) | 34,304 | II, III |
+ +Table 2: GECToR is trained on the PIE-synthetic dataset for pre-training at Stage I. Then, it is fine-tuned on Lang-8, NUCLE, FCE, and W&I+LOCNESS at Stage II. At Stage III, the final fine-tuning is conducted on W&I+LOCNESS. + +between corrections. + +We first train the GECToR model with Stage II only, for efficiency. All training settings are the same as the published parameters.1 Afterwards, we use the model to conduct corrections on the BEA-2019 (W&I+LOCNESS) dev set and the CoNLL-2014 test set (Ng et al., 2014), as well as their variants with some errors corrected manually. For simplicity, we only consider the three most frequent editing-action labels: $APPEND_t, $DELETE, and $REPLACE_t. + +Figure 1 shows the procedure of the quantitative experiments. Specifically, we separate each raw erroneous sentence into five parts: correct words; erroneous words that can be corrected by $APPEND_t, $DELETE, or $REPLACE_t, respectively; and words with other types of errors. If we want to investigate the influence of $APPEND_t, we first select the data containing $APPEND_t labels and denote them as $D(\text{APPEND})$ . Then we manually correct all the errors which should be corrected by $APPEND_t labels, obtaining the new subset $D(\text{APPEND}\checkmark)$ . Afterwards, we use our model to correct the erroneous sentences of the subsets $D(\text{APPEND})$ and $D(\text{APPEND}\checkmark)$ for just one iteration, and finally we evaluate and compare the model performance only on the predictions of
| Dataset | Evaluation | BEA-2019 (dev) Num. | Prec. | Rec. | F1 | CoNLL-2014 (test) Num. | Prec. | Rec. | F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Original Dataset | $APPEND_t | 2609 | 53.43 | 35.22 | 42.46 | 621 | 27.46 | 23.35 | 25.24 |
| Original Dataset | $DELETE | 1403 | 56.04 | 23.81 | 33.42 | 1115 | 51.89 | 18.48 | 27.25 |
| Original Dataset | $REPLACE_t | 3495 | 50.87 | 23.32 | 31.98 | 1398 | 38.57 | 18.45 | 24.96 |
| D(APPEND) | $DELETE | 904 | 62.63 | 20.02 | 30.34 | 496 | 47.52 | 13.51 | 21.04 |
| D(APPEND) | $REPLACE_t | 2079 | 49.71 | 20.30 | 28.83 | 660 | 28.57 | 11.21 | 16.10 |
| D(APPEND✓) | $DELETE | 904 | 68.84 | 26.88 | 38.66 (+8.32) | 496 | 59.06 | 17.74 | 27.29 (+6.22) |
| D(APPEND✓) | $REPLACE_t | 2079 | 67.46 | 36.99 | 47.78 (+18.95) | 660 | 48.96 | 28.64 | 36.14 (+20.04) |
| D(DELETE) | $APPEND_t | 1024 | 52.69 | 25.78 | 34.62 | 332 | 18.93 | 13.86 | 16.00 |
| D(DELETE) | $REPLACE_t | 1425 | 50.91 | 19.72 | 28.43 | 716 | 30.89 | 13.55 | 18.83 |
| D(DELETE✓) | $APPEND_t | 1024 | 57.14 | 27.73 | 37.34 (+2.72) | 332 | 22.77 | 15.36 | 18.35 (+2.35) |
| D(DELETE✓) | $REPLACE_t | 1425 | 55.02 | 22.32 | 31.75 (+4.32) | 716 | 36.17 | 16.62 | 22.78 (+3.95) |
| D(REPLACE) | $APPEND_t | 1762 | 52.76 | 29.34 | 37.71 | 443 | 23.92 | 18.74 | 21.01 |
| D(REPLACE) | $DELETE | 996 | 56.19 | 18.67 | 28.03 | 767 | 47.10 | 15.91 | 23.78 |
| D(REPLACE✓) | $APPEND_t | 1762 | 68.05 | 49.21 | 57.11 (+19.40) | 443 | 41.97 | 44.24 | 43.08 (+22.07) |
| D(REPLACE✓) | $DELETE | 996 | 69.33 | 34.04 | 45.66 (+17.63) | 767 | 61.08 | 25.16 | 35.64 (+11.86) |
+ +Table 3: Results of our quantitative experiments. $D(\text{ACTION})$ denotes a subset consisting of instances with the ACTION label. $D(\text{ACTION}\checkmark)$ denotes another version of $D(\text{ACTION})$ , where the corresponding errors have been manually corrected. + +$DELETE and $REPLACE_t. For example, by comparing the model performance with respect to the $DELETE label, we can draw the conclusion that appending some words first could help the model to achieve better predictions on $DELETE. + +Likewise, we conduct experiments with respect to the $DELETE and $REPLACE_t labels. Besides, we evaluate the performance for each type of label on the raw dataset without any constraints. Experimental results of the RoBERTa-based GECToR model (Liu et al., 2019) are listed in Table 3. We can observe that consistent performance improvements occur on both the W&I+LOCNESS dev set and the CoNLL-2014 test set, no matter which type of errors is corrected first. Moreover, it is surprising that if replacing or appending words is conducted beforehand, the model performance on correcting other types of errors is significantly improved. Meanwhile, deleting words does not benefit the others as much as the other two kinds of corrections. + +We also notice that the model improvements are positively associated with the number of manual corrections on the BEA-2019 dev set. However, the performance improvements on the CoNLL-2014 test set are not closely related to the number of manual corrections. Thus, we can conclude that the interdependence between different types of corrections indeed plays a more important role in performance improvements than the number of corrections. Having witnessed these experimental results, we can arrive at the following two conclusions: + +- GEC models can better deal with errors when some types of errors have been corrected. +- Corrections of appending words or replacing words help the model correct other types of errors more than deleting words.
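The subset construction behind these experiments can be sketched as follows, representing a sentence as (token, label) pairs. The tiny edit-application logic is a simplified, hypothetical stand-in for GECToR's actual label post-processing.

```python
KEEP = "$KEEP"

def contains(action, labeled):
    # Instance selection for D(ACTION): does any label have the given type?
    return any(lab.startswith(action) for _, lab in labeled)

def apply_type(action, labeled):
    # Build D(ACTION✓): apply only labels of one type, keep other errors.
    out = []
    for tok, lab in labeled:
        if not lab.startswith(action):
            out.append((tok, lab))        # leave other errors untouched
        elif lab == "$DELETE":
            continue                      # drop the token
        elif lab.startswith("$REPLACE_"):
            out.append((lab.split("_", 1)[1], KEEP))
        elif lab.startswith("$APPEND_"):
            out.append((tok, KEEP))
            out.append((lab.split("_", 1)[1], KEEP))
    return out

# "there are little job" -> manually correct only the $REPLACE errors
sent = [("there", KEEP), ("are", KEEP),
        ("little", "$REPLACE_few"), ("job", "$REPLACE_jobs")]
fixed = apply_type("$REPLACE", sent)
print(" ".join(tok for tok, _ in fixed))  # there are few jobs
```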
+ +Please note that we also conduct experiments using the XLNet-based GECToR model (Yang et al., 2019). A similar trend can be observed in the experimental results reported in Appendix §A.1. + +# 5 Our Approach + +In this section, we introduce our proposed Type-Driven Multi-Turn Corrections (TMTC) approach in detail. As concluded above, correcting certain types of errors first benefits correcting others; we thus decompose the one-iteration corrections of each training instance into multi-turn corrections, so as to make the model learn to perform corrections progressively. + +The key step of our approach is to construct an intermediate sentence for each training instance. Formally, each training instance is a sentence pair $(X_e, X_c)$ consisting of an erroneous sentence $X_e$ and a reference sentence $X_c$ . To construct its intermediate sentence $X'$ , we randomly select a subset of the grammatical errors and correct them manually while keeping the others unchanged. Then, $X'$ is paired with $X_e$ and $X_c$ to generate two new pairs: $(X_e, X')$ and $(X', X_c)$ , respectively. Figure 2 illustrates an example of constructing two additional training instances from a sentence pair. In this example, for
It should be noted that our constructed training instances are derived from the original training corpus, and thus their grammatical errors are also human-making. + +Based on the above findings mentioned in Section §4, we apply our approach to design three training strategies: APPEND-first, DELETE-first and REPLACE-first. Here, the ACTION-first strategy means that the model is trained to learn ACTION corrections in the first turn and then the others in the second turn. For example, when using the DELETE-first strategy, we keep the errors with "$DELETE" as target labels unchanged during the constructions of intermediate sentences. Using additionally-constructed training instances involving these sentences, the trained model will be encouraged to focus on performing corrections first via $DELETE. Table 4 lists the numbers of additionally-constructed training instances using these strategies. According to our findings concluded in Section §4, the models trained using APPEND-first and REPLACE-first strategies should perform better. + +Using our approach, we adopt different objectives to successively train our model. Specifically, + +
| Strategy | # Additional Instances |
| --- | --- |
| RANDOM | 367,814 |
| APPEND-first | 311,348 |
| DELETE-first | 326,100 |
| REPLACE-first | 296,683 |
Table 4: Numbers of additionally-constructed training instances. We also explore a training strategy that randomly corrects a subset of errors first; for convenience, we name this training strategy RANDOM.

we define the following training objectives $L_{c}^{(1)}$ and $L_{c}^{(2)}$ for the first and second turns, respectively:

$$
L_{c}^{(1)} = -\sum_{i=1}^{N} \mathbb{1}\left(t_{i}^{\prime} = t_{i}\right) \cdot \log p\left(t_{i}^{\prime} \mid X_{e}, \theta\right), \tag{3}
$$

$$
L_{c}^{(2)} = -\sum_{i=1}^{\bar{N}} \log p\left(\bar{t}_{i} \mid X^{\prime}, \theta\right), \tag{4}
$$

where $\{t_i^{\prime}\}_{i=1}^{N}$ and $\{\bar{t}_i\}_{i=1}^{\bar{N}}$ are the editing-action label sequences of the additionally-constructed training instances $(X_{e}, X^{\prime})$ and $(X^{\prime}, X_{c})$, respectively.

Notably, some grammatical errors remain within the intermediate sentences, and these should not be learned by the model in the first turn. Therefore, we omit the incorrect supervision signals in the definition of $L_{c}^{(1)}$ via an indicator function $\mathbb{1}(*)$, which shields the effect of incorrect losses. However, our additionally-constructed training instances contain fewer grammatical errors than the original ones, which causes the trained model to correct fewer errors. To address this defect, we still use the original training instances to continue training the model in a third turn.

Finally, we use all training instances to continue training our model with the following objective: $L' = L_c^{(1)} + L_c^{(2)} + L$. Our experimental results presented in Section §6 show that our additionally-constructed training instances and the original ones are complementary to each other.
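A plain-Python rendering of the turn-1 objective in Eq. 3 (our sketch, not the authors' implementation): the indicator simply zeroes out the loss at positions whose intermediate label $t'_i$ disagrees with the gold label $t_i$.

```python
import math

def turn1_loss(probs, turn1_labels, gold_labels):
    """Eq. 3 sketch: probs[i] maps candidate edit labels to the model's
    predicted probability at position i; turn1_labels are the t'_i targets
    toward X', gold_labels the t_i targets toward X_c. Positions where
    t'_i != t_i carry incorrect supervision and are masked out."""
    loss = 0.0
    for p, t_prime, t in zip(probs, turn1_labels, gold_labels):
        if t_prime == t:  # indicator 1(t'_i = t_i)
            loss -= math.log(p[t_prime])
    return loss

probs = [{"$KEEP": 0.8, "$DELETE": 0.2},
         {"$KEEP": 0.5, "$REPLACE_old": 0.5}]
# Position 1 still holds an uncorrected error in X' (t' = $KEEP while
# t = $REPLACE_old), so only position 0 contributes to the loss.
loss = turn1_loss(probs, ["$KEEP", "$KEEP"], ["$KEEP", "$REPLACE_old"])
assert abs(loss + math.log(0.8)) < 1e-12
```

The turn-2 objective (Eq. 4) is the same cross-entropy without the mask, since all labels toward $X_c$ are correct by construction.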
# 6 Experiment

# 6.1 Setup

To ensure a fair comparison, we train the GECToR models using the same training datasets and parameters as Omelianchuk et al. (2020), and then evaluate them on the BEA-2019 (W&I+LOCNESS) dev and test sets and the CoNLL-2014 test set. The details of the training data are listed in Table 2. Following Omelianchuk et al. (2020), we conduct
| Model | Pre-trained | BEA-2019 (dev) | | | CoNLL-2014 (test) | | |
| --- | --- | --- | --- | --- | --- | --- | --- |
| | | Prec. | Rec. | F0.5 | Prec. | Rec. | F0.5 |
| GECToR (Omelianchuk et al., 2020)† | RoBERTa | 50.30 | 30.50 | 44.50 | 67.50 | 38.30 | 58.60 |
| | XLNet | 47.10 | 34.20 | 43.80 | 64.60 | 42.60 | 58.50 |
| GECToR | RoBERTa | 49.80 | 37.61 | 46.77 | 66.56 | 45.08 | 60.77 |
| | XLNet | 45.55 | 39.81 | 44.27 | 64.04 | 48.67 | 60.24 |
| GECToR (RANDOM) | RoBERTa | 52.88 | 36.05 | 48.37 (+1.60) | 69.54 | 44.32 | 62.43 (+1.66) |
| GECToR (APPEND-first) | RoBERTa | 54.92 | 35.30 | 49.43 (+2.66) | 70.73 | 43.88 | 63.01 (+2.24) |
| GECToR (DELETE-first) | RoBERTa | 53.85 | 35.13 | 48.67 (+1.90) | 70.57 | 42.78 | 62.45 (+1.68) |
| GECToR (REPLACE-first) | RoBERTa | 54.78 | 34.82 | 49.14 (+2.37) | 70.20 | 43.92 | 62.70 (+1.93) |
| GECToR (RANDOM) | XLNet | 49.74 | 38.47 | 46.99 (+2.72) | 67.41 | 46.68 | 61.91 (+1.67) |
| GECToR (APPEND-first) | XLNet | 51.10 | 37.72 | 47.71 (+3.44) | 67.74 | 46.39 | 62.03 (+1.79) |
| GECToR (DELETE-first) | XLNet | 50.48 | 37.49 | 47.21 (+2.97) | 67.33 | 46.42 | 61.79 (+1.55) |
| GECToR (REPLACE-first) | XLNet | 51.96 | 37.19 | 48.14 (+3.87) | 69.36 | 46.30 | 63.08 (+2.84) |
+ +Table 5: Results of models in the dataset setting of Stage II Only. † indicates scores reported in previous papers. + +
| Model | Pre-trained | BEA-2019 (test) | | | CoNLL-2014 (test) | | |
| --- | --- | --- | --- | --- | --- | --- | --- |
| | | Prec. | Rec. | F0.5 | Prec. | Rec. | F0.5 |
| Dual-boost (Ge et al., 2018)† | – | – | – | – | 64.47 | 30.48 | 52.72 |
| GECToR (Omelianchuk et al., 2020)† | RoBERTa | 77.2 | 55.1 | 71.5 | 72.1 | 42.0 | 63.0 |
| | XLNet | 79.2 | 53.9 | 72.4 | 77.5 | 40.1 | 65.3 |
| GECToR(GST) (Parnow et al., 2021)† | RoBERTa | 77.5 | 55.7 | 71.9 | 74.1 | 42.2 | 64.4 |
| | XLNet | 79.4 | 54.5 | 72.8 | 78.4 | 39.9 | 65.7 |
| SAD(12+2) (Sun et al., 2021)† | BART | – | – | 72.9 | 71.0 | 52.8 | 66.4 |
| GECToR | RoBERTa | 78.02 | 53.49 | 71.53 | 72.93 | 40.02 | 63.11 |
| | XLNet | 80.23 | 51.76 | 72.36 | 77.63 | 40.11 | 65.57 |
| GECToR (RANDOM) | RoBERTa | 79.85 | 51.53 | 71.94 (+0.41) | 75.39 | 41.57 | 64.84 (+1.73) |
| GECToR (APPEND-first) | RoBERTa | 80.31 | 51.14 | 72.08 (+0.55) | 76.77 | 40.95 | 65.34 (+2.23) |
| GECToR (DELETE-first) | RoBERTa | 79.39 | 52.25 | 71.92 (+0.39) | 75.70 | 39.85 | 64.16 (+1.05) |
| GECToR (REPLACE-first) | RoBERTa | 81.27 | 50.67 | 72.51 (+0.98) | 77.36 | 40.35 | 65.37 (+2.26) |
| GECToR (RANDOM) | XLNet | 81.14 | 50.83 | 72.49 (+0.13) | 77.08 | 42.03 | 66.06 (+0.49) |
| GECToR (APPEND-first) | XLNet | 81.89 | 50.55 | 72.85 (+0.49) | 78.18 | 42.67 | 67.02 (+1.45) |
| GECToR (DELETE-first) | XLNet | 82.35 | 49.52 | 72.71 (+0.35) | 77.05 | 42.03 | 66.04 (+0.47) |
| GECToR (REPLACE-first) | XLNet | 81.33 | 51.55 | 72.91 (+0.55) | 77.83 | 41.82 | 66.40 (+0.83) |
Table 6: Results of models in the dataset setting of Three Stages of Training.

experiments in two dataset settings: Stage II Only and Three Stages of Training. Notably, in the latter setting, we only apply our approach at Stage II and Stage III for efficiency. Finally, we evaluate model performance with the official ERRANT (Bryant et al., 2017) and $M^2$ (Dahlmeier and Ng, 2012) scorers, respectively.

# 6.2 Main Results and Analysis

Stage II Only. In this setting, we compare the performance of GECToR with and without our approach.

Results are presented in Table 5. Notably, the results are consistent with our findings in Section §4. That is, since correcting some types of errors benefits the correction of other errors, all models trained with our approach perform significantly better than their corresponding baselines. Moreover, the GECToR models trained with the APPEND-first or REPLACE-first strategies are superior to the models trained with DELETE-first or RANDOM, echoing the conclusions in Section §4.

Three Stages of Training. In this setting, we compare our enhanced models with more baselines under the single-model setting, including the most related work, Dual-boost (Ge et al., 2018), GECToR(GST) (Parnow et al., 2021), and the current best NMT-based model, SAD(12+2) (Sun et al., 2021).

As reported in Table 6, we obtain results similar to those of Stage II Only. Our approach yields desirable improvements, with the APPEND-first and REPLACE-first strategies performing better. Overall, the GECToR models trained with our approach are comparable to or even better than SAD(12+2). Particularly, when ensembling our
| Dataset | Strategy | Evaluation | RoBERTa — BEA-2019 (dev) | | | | RoBERTa — CoNLL-2014 (test) | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | | | Num. | Prec. | Rec. | F1 | Num. | Prec. | Rec. | F1 |
| D(APPEND) | APPEND-first | $DELETE | 904 | 64.03 | 19.69 | 30.12 | 496 | 45.45 | 9.07 | 15.13 |
| | | $REPLACE_t | 2079 | 52.54 | 19.38 | 28.32 | 660 | 34.83 | 9.39 | 14.80 |
| D(APPEND✓) | APPEND-first | $DELETE | 904 | 79.17 | 33.63 | 47.20 (+17.08) | 496 | 68.18 | 18.15 | 28.66 (+13.53) |
| | | $REPLACE_t | 2079 | 73.49 | 36.80 | 49.04 (+20.72) | 660 | 60.84 | 28.48 | 38.80 (+24.00) |
| D(DELETE) | DELETE-first | $APPEND_t | 1024 | 54.31 | 22.75 | 32.07 | 332 | 24.53 | 11.75 | 15.89 |
| | | $REPLACE_t | 1425 | 52.75 | 18.88 | 27.80 | 716 | 35.19 | 10.61 | 16.31 |
| D(DELETE✓) | DELETE-first | $APPEND_t | 1024 | 60.28 | 25.49 | 35.83 (+3.76) | 332 | 30.32 | 14.16 | 19.30 (+3.41) |
| | | $REPLACE_t | 1425 | 59.16 | 22.67 | 32.78 (+4.98) | 716 | 40.32 | 13.97 | 20.75 (+4.44) |
| D(REPLACE) | REPLACE-first | $APPEND_t | 1762 | 55.32 | 27.13 | 36.41 | 443 | 28.74 | 16.03 | 20.58 |
| | | $DELETE | 996 | 58.13 | 19.38 | 29.07 | 767 | 50.00 | 11.34 | 18.49 |
| D(REPLACE✓) | REPLACE-first | $APPEND_t | 1762 | 73.57 | 47.56 | 57.77 (+21.36) | 443 | 53.82 | 42.89 | 47.74 (+27.16) |
| | | $DELETE | 996 | 77.99 | 36.65 | 49.86 (+20.79) | 767 | 71.75 | 25.16 | 37.26 (+18.77) |
+ +Table 7: Results of our quantitative experiments using models enhanced by our approach. Three groups of experiments are conducted on the same data subset as Table 3. + +
| Model | BEA-2019 (dev) | | | CoNLL-2014 (test) | | |
| --- | --- | --- | --- | --- | --- | --- |
| | Prec. | Rec. | F0.5 | Prec. | Rec. | F0.5 |
| GECToR | 49.80 | 37.61 | 46.77 | 66.56 | 45.08 | 60.77 |
| w/ TMTC | 54.92 | 35.30 | 49.43 | 70.73 | 43.88 | 63.01 |
| w/o turn 1 | 51.29 | 37.01 | 47.03 | 68.99 | 45.45 | 62.51 |
| w/o turn 2 | 50.43 | 37.30 | 47.12 | 66.94 | 44.60 | 61.31 |
| w/o original | 55.21 | 32.50 | 48.44 | 71.22 | 41.55 | 62.32 |
| mix data | 53.04 | 31.00 | 46.44 | 71.31 | 40.59 | 61.84 |
| w/o $\mathbb{1}(*)$ | 53.23 | 33.49 | 47.62 | 71.31 | 42.16 | 62.64 |
Table 8: Ablation study. Our model is based on RoBERTa and trained using APPEND-first. $\mathbb{1}(*)$ is the indicator function in Equation 3.

enhanced models with competitive GEC models, we obtain 77.93 $F_{0.5}$, achieving a SOTA score on the BEA-2019 test set.

Moreover, we find that our approach makes the trained models correct more cautiously. That is, the trained models tend to perform fewer but more precise corrections compared with the basic GECToR models. One of the underlying reasons is that our additionally-constructed training instances contain more $KEEP labels, especially in the second turn, which biases the label predictions of the model.

# 6.3 Ablation Study

We then conduct more experiments to investigate the effectiveness of various details of our proposed approach.

All experimental results are provided in Table 8. The results in lines 3-5 ("w/o turn 1", "w/o turn 2", "w/o original") demonstrate that our additionally-constructed training instances are complementary to the original ones. In addition, we also directly mix the additionally-constructed training instances and

![](images/6c1562b6c6f1e8b9670c1c71ddd0b686df78bd4eaa42eeda143b5c87f951a2c4.jpg)
Figure 3: Label predictions of the RoBERTa-based model on the BEA-2019 dev set in the first iteration of prediction.

the original ones to train a GECToR model. However, such a training strategy does not help the model learn much better, showing the advantage of learning error corrections gradually. Finally, as mentioned in Section §5, some grammatical errors should not be learned from the intermediate sentences. Here, we also report the performance of the GECToR model without omitting incorrect supervision signals. As shown in line 7 ("w/o $\mathbb{1}(*)$") of Table 8, the lower recall values indicate that these incorrect $KEEP labels make the model infer more conservatively.

# 6.4 Analysis

Correction Trend.
Here, we use the models trained under different strategies not only to evaluate the one-iteration performance with respect to our three investigated types of labels, but also to conduct the quantitative experiments again. By doing so, we can investigate whether our approach indeed guides the model to correct some types of errors first.
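All tables report $\mathrm{F}_{0.5}$, which weights precision twice as heavily as recall. The official scores come from ERRANT and the $M^2$ scorer, but the combination itself is the standard F-beta formula, sketched here for reference:

```python
def f_beta(precision, recall, beta=0.5):
    """Standard F-beta from precision and recall (both in percent here);
    beta = 0.5 favours precision, matching GEC evaluation practice."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Our re-implemented RoBERTa GECToR baseline on BEA-2019 (dev):
score = f_beta(49.80, 37.61)
assert abs(score - 46.77) < 0.01  # matches the F0.5 column in Table 5
```

This precision bias explains why strategies that lower recall but raise precision (e.g., the "fewer but more precise" behaviour discussed above) can still improve $\mathrm{F}_{0.5}$.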
| Model | BEA-2019 (dev) | | | CoNLL-2014 (test) | | |
| --- | --- | --- | --- | --- | --- | --- |
| | Prec. | Rec. | F0.5 | Prec. | Rec. | F0.5 |
| GECToR | 49.80 | 37.61 | 46.77 | 66.56 | 45.08 | 60.77 |
| GECToR(APP+REP+DEL) | 59.26 | 31.70 | 50.48 | 74.08 | 40.37 | 63.48 |
| GECToR(APP+DEL+REP) | 58.38 | 32.06 | 50.15 | 73.26 | 40.89 | 63.24 |
| GECToR(REP+APP+DEL) | 57.75 | 30.95 | 49.23 | 74.36 | 39.19 | 63.05 |
| GECToR(REP+DEL+APP) | 57.72 | 31.44 | 49.66 | 73.86 | 39.87 | 62.88 |
| GECToR(DEL+APP+REP) | 59.13 | 31.52 | 50.04 | 74.28 | 39.61 | 63.15 |
| GECToR(DEL+REP+APP) | 58.51 | 31.83 | 50.18 | 73.34 | 40.55 | 63.06 |
Table 9: Results of more fine-grained strategies. Experiments are conducted with the RoBERTa-based model trained in the Stage II Only setting.

![](images/5ace83c2fafb942b3b3cf8bade028c393f30a28956ac26884fecf4ebb1278aac.jpg)
Figure 4: The precision, recall, and $\mathrm{F}_{0.5}$ values with respect to different correction ratios.

As shown in Figure 3, our strategies indeed guide the model to correct the corresponding errors more precisely in the first iteration. Meanwhile, the fewer-but-more-precise predictions occur again with respect to the corresponding labels. For example, when considering only the model performance with respect to $APPEND_t, we observe that the model trained with APPEND-first obtains the highest precision score.

More importantly, returning to Table 7, the phenomenon that correcting some types of errors benefits the others is highlighted. It indicates that our approach indeed allows the trained model to capture the interdependence between different types of corrections.

Effect of Correction Ratio. As described in Section §5, the correction ratio is an important hyper-parameter that determines the number of manual corrections. Thus, we try different correction ratio values to investigate its effect on our approach. Figure 4 shows the performance of the trained model with varying correction ratios. As the correction ratio increases, the precision score drops and the recall score rises. By contrast, the overall $\mathrm{F}_{0.5}$ scores remain steady.

Effect of More Turns of Corrections. The above experimental results show that decomposing the conventional one-iteration training into two-turn training is useful for improving model training.

![](images/3bcbc019ac44e2ffabfbf7f81c0035140625409cc68761881ea8d7d9944a74a6.jpg)
Figure 5: The $\mathrm{F}_{0.5}$ scores of GECToR(RANDOM) with more turns of corrections.
A natural question arises: can the trained model be further improved if we use more turns of training?

To answer this question, we use the model trained with the RANDOM strategy to conduct experiments. Specifically, we decompose the one-iteration corrections into $K$ turns of corrections, where we construct intermediate sentences by cumulatively correcting $\frac{1}{K}$ of the errors in each turn. From Figure 5, we observe that more turns of corrections do not benefit our models over two-turn corrections under the RANDOM strategy, while incurring more training cost.

We also conduct experiments using more fine-grained strategies. For example, we can design a training strategy in which, after learning corrections of $APPEND_t, the model learns to correct errors of $REPLACE_t and then to correct the others. For convenience, we name this strategy APP+REP+DEL, where APP, REP, and DEL are abbreviations of $APPEND_t, $REPLACE_t, and $DELETE, respectively. As illustrated in Table 9, all models trained with our approach obtain slightly better performance when introducing more iterations of corrections. However, they require almost 1.5x the training time of our standard TMTC approach.

# 7 Conclusion

In this paper, we first conducted quantitative experiments to explore the interdependence between different types of corrections, finding that performing some types of corrections first, such as appending or replacing words, helps models to correct other errors. Furthermore, we propose a Type-Driven Multi-Turn Corrections (TMTC) approach for GEC, which allows the trained model not only to be explicitly aware of the progressive corrections, but also to exploit the interdependence between different types of corrections. Extensive experiments show that our enhanced model obtains comparable or better performance compared with the SOTA GEC model.
In the future, we plan to apply bidirectional decoding (Zhang et al., 2018; Su et al., 2019; Zhang et al., 2019) to further improve our approach. Besides, inspired by recent syntax-aware research (Li et al., 2021), we will explore the interdependence between corrections from other perspectives for GEC, such as syntax.

# Acknowledgment

The project was supported by the National Key Research and Development Program of China (No. 2020AAA0108004), the National Natural Science Foundation of China (No. 61672440), the Natural Science Foundation of Fujian Province of China (No. 2020J06001), and the Youth Innovation Fund of Xiamen (No. 3502Z20206059). We also thank the reviewers for their insightful comments.

# References

Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, and Vihari Piratla. 2019. Parallel iterative edit models for local sequence transduction. In EMNLP-IJCNLP, pages 4260-4270.
Adriane Boyd. 2018. Using Wikipedia edits in low resource grammatical error correction. In NUT@EMNLP, pages 79-84.
Christopher Bryant, Mariano Felice, Øistein E. Andersen, and Ted Briscoe. 2019. The BEA-2019 shared task on grammatical error correction. In BEA@ACL, pages 52-75.
Christopher Bryant, Mariano Felice, and Ted Briscoe. 2017. Automatic annotation and evaluation of error types for grammatical error correction. In ACL, pages 793-805.
Meng Hui Chen, Tao Ge, Xingxing Zhang, Furu Wei, and M. Zhou. 2020. Improving the efficiency of grammatical error correction with erroneous span detection and correction. In EMNLP, pages 7162-7169.
Daniel Dahlmeier and Hwee Tou Ng. 2012. Better evaluation for grammatical error correction. In NAACL, pages 568-572.
Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner English: The NUS corpus of learner English. In BEA@NAACL-HLT, pages 22-31.
Tao Ge, Furu Wei, and M. Zhou. 2018. Fluency boost learning and inference for neural grammatical error correction.
In ACL, pages 1055-1065.
M. Ghufron and Fathia Rosyida. 2018. The role of Grammarly in assessing English as a foreign language (EFL) writing. Lingua Cultura, 12:395-403.
Roman Grundkiewicz, Marcin Junczys-Dowmunt, and Kenneth Heafield. 2019. Neural grammatical error correction systems with unsupervised pre-training on synthetic data. In BEA@ACL, pages 252-263.
Michael Heilman, Aoife Cahill, Nitin Madnani, Melissa Lopez, Matthew Mulholland, and Joel Tetreault. 2014. Predicting grammaticality on an ordinal scale. In ACL, pages 174-180.
Clare-Marie Karat, Christine Halverson, Daniel B. Horn, and John Karat. 1999. Patterns of entry and correction in large vocabulary continuous speech recognition systems. In CHI '99, pages 568-575.
Marek Kubis, Zygmunt Vetulani, Mikolaj Wypych, and Tomasz Zietkiewicz. 2020. Open challenge for correcting errors of speech recognition systems. ArXiv, abs/2001.03041.
Zhongli Li, Qingyu Zhou, Chao Li, Ke Xu, and Yunbo Cao. 2021. Improving BERT with syntax-aware local attention. In Findings of ACL, pages 645-653.
Jared Lichtarge, Christopher Alberti, Shankar Kumar, Noam M. Shazeer, Niki Parmar, and Simon Tong. 2019. Corpora generation for grammatical error correction. In NAACL, pages 3291-3301.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. ArXiv, abs/1907.11692.
Eric Malmi, Sebastian Krause, Sascha Rothe, Daniil Mirylenka, and Aliaksei Severyn. 2019. Encode, tag, realize: High-precision text editing. In EMNLP-IJCNLP, pages 5054-5065.
Courtney Napoles, Keisuke Sakaguchi, and Joel Tetreault. 2017a. JFLEG: A fluency corpus and benchmark for grammatical error correction. In EACL, pages 229-234.
Courtney Napoles, Keisuke Sakaguchi, and Joel R. Tetreault. 2017b. JFLEG: A fluency corpus and benchmark for grammatical error correction. In EACL, pages 229-234.
Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The CoNLL-2014 shared task on grammatical error correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 1-14.
Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem Chernodub, and Oleksandr Skurzhanskyi. 2020. GECToR - grammatical error correction: Tag, not rewrite. In BEA@ACL, pages 163-170.
Kevin Parnow, Zuchao Li, and Hai Zhao. 2021. Grammatical error correction as GAN-like sequence labeling. In Findings of ACL, pages 3284-3290.
Linfeng Song, Ante Wang, Jinsong Su, Yue Zhang, Kun Xu, Yubin Ge, and Dong Yu. 2020. Structural information preserving for graph-to-text generation. In ACL, pages 7987-7998.
Felix Stahlberg and Shankar Kumar. 2020. Seq2Edits: Sequence transduction using span-level edit operations. In EMNLP, pages 5147-5159.
Jinsong Su, Xiangwen Zhang, Qian Lin, Yue Qin, Junfeng Yao, and Yang Liu. 2019. Exploiting reverse target-side contexts for neural machine translation via asynchronous bidirectional decoding. Artif. Intell., 277:103168.
Xin Sun, Tao Ge, Furu Wei, and Houfeng Wang. 2021. Instantaneous grammatical error correction with shallow aggressive decoding. In ACL/IJCNLP, pages 5937-5947.
Toshikazu Tajiri, Mamoru Komachi, and Yuji Matsumoto. 2012. Tense and aspect error correction for ESL learners using global context. In ACL, pages 198-202.
Yujin Takahashi, Satoru Katsumata, and Mamoru Komachi. 2020. Grammatical error correction using pseudo learner corpus considering learner's error tendency. In ACL SRW, pages 27-32.
Haoyu Wang, Shuyan Dong, Yue Liu, James Logan, Ashish Kumar Agrawal, and Yang Liu. 2020. ASR error correction with augmented transformer for entity retrieval. In INTERSPEECH, pages 1550-1554.
Lihao Wang and Xiaoqing Zheng. 2020. Improving grammatical error correction models with purpose-built adversarial examples. In EMNLP, pages 2858-2869.
Kun Xu, Linfeng Song, Yansong Feng, Yan Song, and Dong Yu. 2020. Coordinated reasoning for cross-lingual knowledge graph alignment. In AAAI, pages 9354-9361.
Shuyao Xu, Jiehao Zhang, Jin Chen, and Longlu Qin. 2019. Erroneous data generation for grammatical error correction. In BEA@ACL, pages 149-158.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In NeurIPS, pages 5754-5764.
Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In ACL, pages 180-189.
Biao Zhang, Deyi Xiong, Jinsong Su, and Jiebo Luo. 2019. Future-aware knowledge distillation for neural machine translation. TASLP, 27(12):2278-2287.
Xiangwen Zhang, Jinsong Su, Yue Qin, Yang Liu, Rongrong Ji, and Hongji Wang. 2018. Asynchronous bidirectional decoding for neural machine translation. In AAAI, pages 5698-5705.
Zewei Zhao and Houfeng Wang. 2020. MaskGEC: Improving neural grammatical error correction via dynamic masking. In AAAI, pages 1226-1233.

# A Appendix

# A.1 Quantitative Experiments on XLNet

We also conduct the quantitative experiments described in Section §4 using a model based on XLNet. The overall results are closely similar to Table 3, which indicates that our findings and conclusions are not specific to a certain model or a certain dataset, but are common among realistic human-made datasets.

# A.2 Evaluation on JFLEG

As suggested by reviewers, we evaluate our approach on the JFLEG (Napoles et al., 2017a) dataset, which focuses on fluency. As shown in Table 11 and Table 12, models trained with our approach obtain higher GLEU (Heilman et al., 2014) scores than the baselines, which demonstrates the effectiveness of decomposing one-iteration correction into multiple turns. However, the editing-action-based interdependence seems not very beneficial from the perspective of fluency.
| Dataset | Evaluation | XLNet — BEA-2019 (dev) | | | | XLNet — CoNLL-2014 (test) | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | | Num. | Prec. | Rec. | F1 | Num. | Prec. | Rec. | F1 |
| Original Dataset | $APPEND_t | 2609 | 50.61 | 38.06 | 43.45 | 621 | 24.40 | 26.09 | 25.21 |
| | $DELETE | 1403 | 52.79 | 25.66 | 34.53 | 1115 | 49.65 | 19.01 | 27.50 |
| | $REPLACE_t | 3495 | 49.10 | 24.12 | 32.35 | 1398 | 37.06 | 20.89 | 26.72 |
| D(APPEND) | $DELETE | 904 | 61.89 | 21.02 | 31.38 | 496 | 46.34 | 15.32 | 23.03 |
| | $REPLACE_t | 2079 | 50.65 | 20.68 | 29.37 | 660 | 32.30 | 14.24 | 19.77 |
| D(APPEND✓) | $DELETE | 904 | 72.66 | 30.86 | 43.32 (+11.94) | 496 | 68.18 | 18.15 | 28.66 (+5.63) |
| | $REPLACE_t | 2079 | 67.13 | 36.84 | 47.58 (+18.21) | 660 | 60.84 | 28.48 | 38.80 (+19.03) |
| D(DELETE) | $APPEND_t | 1024 | 50.27 | 27.44 | 35.50 | 332 | 18.09 | 16.57 | 17.30 |
| | $REPLACE_t | 1425 | 49.57 | 20.00 | 28.50 | 716 | 28.12 | 14.80 | 19.40 |
| D(DELETE✓) | $APPEND_t | 1024 | 54.91 | 28.42 | 37.45 (+1.95) | 332 | 30.32 | 14.16 | 19.30 (+2.00) |
| | $REPLACE_t | 1425 | 51.40 | 21.89 | 30.71 (+2.21) | 716 | 40.32 | 13.97 | 20.75 (+1.35) |
| D(REPLACE) | $APPEND_t | 1762 | 55.32 | 31.38 | 38.85 | 443 | 20.28 | 19.86 | 20.07 |
| | $DELETE | 996 | 56.37 | 20.88 | 30.48 | 767 | 45.16 | 16.43 | 24.09 |
| D(REPLACE✓) | $APPEND_t | 1762 | 65.47 | 50.91 | 57.28 (+18.43) | 443 | 53.82 | 42.89 | 47.74 (+27.67) |
| | $DELETE | 996 | 70.89 | 35.94 | 47.70 (+17.22) | 767 | 71.75 | 25.16 | 37.26 (+16.51) |
+ +Table 10: Results of our control experiment. Four groups of results are obtained by the same re-implemented GECToR model. + +
| Model | Pre-trained | BEA-2019 (dev) | | | CoNLL-2014 (test) | | | JFLEG (test) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | | Prec. | Rec. | F0.5 | Prec. | Rec. | F0.5 | GLEU |
| GECToR (Omelianchuk et al., 2020)† | RoBERTa | 50.30 | 30.50 | 44.50 | 67.50 | 38.30 | | |
| | XLNet | 47.10 | 34.20 | 43.80 | 64.60 | 42.60 | | |
| GECToR | RoBERTa | 49.80 | 37.61 | 46.77 | 66.56 | 45.08 | | |
| | XLNet | 45.55 | 39.81 | 44.27 | 64.04 | 48.67 | | |
| GECToR (RANDOM) | RoBERTa | 52.88 | 36.05 | 48.37 (+1.60) | 69.54 | 44.32 | | |
| GECToR (APPEND-first) | RoBERTa | 54.92 | 35.30 | 49.43 (+2.66) | 70.73 | 43.88 | | |
| GECToR (DELETE-first) | RoBERTa | 53.85 | 35.13 | 48.67 (+1.90) | 70.57 | 42.78 | | |
| GECToR (REPLACE-first) | RoBERTa | 54.78 | 34.82 | 49.14 (+2.37) | 70.20 | 43.92 | | |
| GECToR (RANDOM) | XLNet | 49.74 | 38.47 | 46.99 (+2.72) | 67.41 | 46.68 | | |
| GECToR (APPEND-first) | XLNet | 51.10 | 37.72 | 47.71 (+3.44) | 67.74 | 46.39 | | |
| GECToR (DELETE-first) | XLNet | 50.48 | 37.49 | 47.21 (+2.97) | 67.33 | 46.42 | | |
| GECToR (REPLACE-first) | XLNet | 51.96 | 37.19 | 48.14 (+3.87) | 69.36 | 46.30 | | |
Table 11: Results of models under the setting of Stage II Only. $\dagger$ indicates scores reported in previous papers.
| Model | Pre-trained | BEA-2019 (test) | | | CoNLL-2014 (test) | | | JFLEG (test) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | | Prec. | Rec. | F0.5 | Prec. | Rec. | F0.5 | GLEU |
| Dual-boost (Ge et al., 2018)† | – | – | – | – | 64.47 | 30.48 | 52.72 | – |
| GECToR (Omelianchuk et al., 2020)† | RoBERTa | 77.2 | 55.1 | 71.5 | 72.1 | 42.0 | 63.0 | – |
| | XLNet | 79.2 | 53.9 | 72.4 | 77.5 | 40.1 | 65.3 | – |
| GECToR(GST) (Parnow et al., 2021)† | RoBERTa | 77.5 | 55.7 | 71.9 | 74.1 | 42.2 | 64.4 | – |
| | XLNet | 79.4 | 54.5 | 72.8 | 78.4 | 39.9 | 65.7 | – |
| SAD(12+2) (Sun et al., 2021)† | BART | – | – | 72.9 | 71.0 | 52.8 | 66.4 | – |
| GECToR | RoBERTa | 78.02 | 53.49 | 71.53 | 72.93 | 40.02 | 63.11 | 42.96 |
| | XLNet | 80.23 | 51.76 | 72.36 | 77.63 | 40.11 | 65.57 | 43.11 |
| GECToR (RANDOM) | RoBERTa | 79.85 | 51.53 | 71.94 (+0.41) | 75.39 | 41.57 | 64.84 (+1.73) | 59.05 |
| GECToR (APPEND-first) | RoBERTa | 80.31 | 51.14 | 72.08 (+0.55) | 76.77 | 40.95 | 65.34 (+2.23) | 58.88 |
| GECToR (DELETE-first) | RoBERTa | 79.39 | 52.25 | 71.92 (+0.39) | 75.70 | 39.85 | 64.16 (+1.05) | 58.94 |
| GECToR (REPLACE-first) | RoBERTa | 81.27 | 50.67 | 72.51 (+0.98) | 77.36 | 40.35 | 65.37 (+2.26) | 59.03 |
| GECToR (RANDOM) | XLNet | 81.14 | 50.83 | 72.49 (+0.13) | 77.08 | 42.03 | 66.06 (+0.49) | 58.73 |
| GECToR (APPEND-first) | XLNet | 81.89 | 50.55 | 72.85 (+0.49) | 78.18 | 42.67 | 67.02 (+1.45) | 58.64 |
| GECToR (DELETE-first) | XLNet | 82.35 | 49.52 | 72.71 (+0.35) | 77.05 | 42.03 | 66.04 (+0.47) | 58.45 |
| GECToR (REPLACE-first) | XLNet | 81.33 | 51.55 | 72.91 (+0.55) | 77.83 | 41.82 | 66.40 (+0.83) | 58.42 |
+ +Table 12: Results of models under the settings of Three Stages of Training. \ No newline at end of file diff --git a/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/images.zip b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..9ad7e97165213cce82d14e27ffe1f1533124e351 --- /dev/null +++ b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9b983434ad44a57467a2888c46aa5d537e025d0bf936a2c9d345dacae966488a +size 1175580 diff --git a/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/layout.json b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..c3a554f0afef10b67839c2d3283b868943a06e37 --- /dev/null +++ b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:27088c78b2ee78692ce1698d8092fdf5810fec93aae246cf256c0e1a8bf34663 +size 347441 diff --git a/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/0f71414d-6600-48dc-a66b-696b19438140_content_list.json b/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/0f71414d-6600-48dc-a66b-696b19438140_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..69c58a25322f492cd6352c80e4fc7f7c23ace7fd --- /dev/null +++ b/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/0f71414d-6600-48dc-a66b-696b19438140_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2081c5df1ea753cf30355f4fa21dca18b1b8e4459625fa5fe94a036ddcbf7c0a +size 47350 diff --git a/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/0f71414d-6600-48dc-a66b-696b19438140_model.json 
b/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/0f71414d-6600-48dc-a66b-696b19438140_model.json new file mode 100644 index 0000000000000000000000000000000000000000..1504b47e3e557b5d636b65a68e47b16524d5221b --- /dev/null +++ b/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/0f71414d-6600-48dc-a66b-696b19438140_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8559716abcf61eca9d3383273ab9929c36f5040b1f7f928c9b76ef261d5db816 +size 54342 diff --git a/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/0f71414d-6600-48dc-a66b-696b19438140_origin.pdf b/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/0f71414d-6600-48dc-a66b-696b19438140_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b9006e64e09dd8070add8097ed11ed594888fcea --- /dev/null +++ b/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/0f71414d-6600-48dc-a66b-696b19438140_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f8fe5cb6bd62994a21177c007633682ec69c10d31a2a339a5fee0da363ee6f6b +size 470632 diff --git a/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/full.md b/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/full.md new file mode 100644 index 0000000000000000000000000000000000000000..09127d9fdb9038393ba296f6cd1ed9f51a32eeed --- /dev/null +++ b/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/full.md @@ -0,0 +1,195 @@ +# uFACT: Unfaithful Alien-Corpora Training for Semantically Consistent Data-to-Text Generation + +Tisha Anders, Alexandru Coca, Bill Byrne + +Department of Engineering, University of Cambridge, United Kingdom + +anderstisha@gmail.com ac2123@cam.ac.uk wjb31@cam.ac.uk + +# Abstract + +We propose uFACT (Un-Faithful Alien Corpora Training), a 
training corpus construction method for data-to-text (d2t) generation models. We show that d2t models trained on uFACT datasets generate utterances which represent the semantic content of the data sources more accurately compared to models trained on the target corpus alone. Our approach is to augment the training set of a given target corpus with alien corpora which have different semantic representations. We show that while it is important to have faithful data from the target corpus, the faithfulness of additional corpora only plays a minor role. Consequently, uFACT datasets can be constructed with large quantities of unfaithful data, minimising the need for faithful data. We show how uFACT can be leveraged to obtain state-of-the-art results on the WebNLG benchmark using METEOR as our performance metric. Furthermore, we investigate the sensitivity of the generation faithfulness to the training corpus structure using the PARENT metric, and provide a baseline for this metric on the WebNLG (Gardent et al., 2017) benchmark to facilitate comparisons with future work. + +# 1 Introduction + +Data-to-text (d2t) generation is the task of generating fluent text $t$ given a set of information units, linearised into data source string $d$ (Table 1). + +
| | |
| --- | --- |
| $d$ | {(name, Einstein), (born, 1879), (profession, physicist)} |
| $t$ | Einstein was a physicist, born in 1879. |
Table 1: Example of d2t system input $(d)$ and output $(t)$.

Training high-quality generation models requires corpora whose reference texts are faithful to the data sources representing their semantic content, i.e. the reference texts $t_r$ should have perfect information overlap with $d$. Most corpora are, however, noisy, with imperfect fact overlap between data $d$ and reference text $t_r$ (Dhingra et al., 2019a). The quality of the training data in that case negatively impacts the performance of a d2t generator trained on it, as well as making it difficult to estimate the true accuracy of a generation $t_g$, given $t_r$ (Parikh et al., 2020). Faithful examples are, however, expensive to obtain, and usually only available in small quantities. In the context of this scarcity, we propose the uFACT training set construction method. uFACT allows a generator to learn a more accurate d2t generation model from a mixture of faithful and unfaithful corpora, which reduces the need for vast quantities of faithful examples. For instance, our best-performing uFACT dataset contains 88,692 examples, of which only 20,000 (24.34%) examples (the ones from the target corpus) are guaranteed to be faithful. We find that our approach leads to significant improvements in PARENT (Dhingra et al., 2019b) and METEOR (Banerjee and Lavie, 2005) compared to the conventional approach of training a d2t generator on one large unfaithful corpus. We conclude that even unfaithful examples from other corpora can contribute to fluency and faithfulness. Our uFACT-trained T5 surpasses state-of-the-art performance for METEOR on the WebNLG dataset.

# 2 Related work

Early approaches (Reiter and Dale, 1997) formalize d2t generation as three subtasks: content determination, structuring/grouping of information, and surface realisation. A handcrafted system is designed to solve each task.
Recently, the focus has shifted towards end-to-end neural approaches, incorporating each of the subtasks into one system (Ferreira et al., 2019, Puduppully et al., 2018, Harkous et al., 2020). + +A number of end-to-end approaches to increasing faithfulness in d2t generation are curative, i.e. address generation quality post-hoc. For instance, Harkous et al. (2020) and Dušek and Kasner (2020) + +produce candidate generations first, and then judge faithfulness with a separate model, by checking entailment between $d$ and $t_{g}$ . Another approach to enhance faithfulness is to alter the generation model. Chen et al. (2020b) propose a generation model comprised of a copy-generate gate within an LSTM positional encoder. The gate acts as a soft switch between a copy-from-data mode and a language-generation mode. Kale (2020) utilise transfer learning to enhance their generation model, through pre-training on a large unsupervised, task-agnostic corpus. + +A different line of research focuses on preventative approaches, where the typical aim is to obtain a better model by improving the training data quality. Chen et al. (2020a) apply a unigram-based dataset selection process, by removing examples for which $t_r$ is not sufficiently related to $d$ . Parikh et al. (2020) also investigate this approach, releasing the noise-free ToTTo dataset, to ensure the training data does not encourage unfaithful generation. Filippova (2020) look for hallucinative examples in their dataset, either considering word-overlap, or comparing how strongly a language model vs. a conditional language model anticipates subsequent text. Dhingra et al. (2019b) develop the PARENT metric, a faithfulness-quantifying F-score that takes into account the data source in addition to the potentially divergent reference, providing a more robust assessment of the d2t mapping. + +In their work on model-agnostic meta-learning, Finn et al. 
(2017) note that training on different instances of a required task (e.g., training on different corpora) can facilitate learning a particular task. Inspired by this approach, we add other corpora with different semantic representations to the training dataset. We find not only that adding corpora boosts the semantic faithfulness of the d2t generator, but also that said corpora need not necessarily satisfy stringent faithfulness requirements, unlike the target corpus. + +# 3 Constructing a UFACT dataset + +Typically, a d2t generation model is obtained by task-specific fine-tuning, where a large-scale pretrained model such as T5 (Raffel et al., 2019) is fine-tuned on a small corpus. UFACT however, as an instance of mixed-corpus training, takes a different approach: examples from multiple corpora which do not share semantic representations, are linearised and tagged to form a large training + +corpus. A UFACT dataset is comprised of a target dataset for which we desire to maximise d2t generation fidelity and alien corpora. The latter are d2t corpora that may differ thematically and structurally from the target corpus and whose role is to improve generation fidelity on the target corpus. + +# 3.1 Corpora included in the UFACT dataset + +The uFACT datasets we experiment with are constructed from three corpora which differ significantly in size, vocabulary, intended purpose, and linearisation technique. Figure 1 displays the relative sizes of the uFACT datasets (FU and FUU), their faithful counterparts (FF and FFF), as well as other dataset compositions examined. + +![](images/213a5cd8e46ace0cf891af08382fae22bab5781e42b8199feddce7b349f1962e.jpg) +Figure 1: Dataset sizes The target corpus is WebNLG. Here U denotes unfaithful, describing a dataset that has not been curated while F stands for faithful, indicating a dataset that has been filtered to increase the faithfulness of the references to the data sources. See Appendix A for dataset curation approaches. 
WebNLG examples consist of up to seven RDF-triplets (subject-predicate-object), which are atomic entities of a knowledge graph, linearised into a string. 15 topics appear, of which 10 are seen in training.

WikiInfo2Text is based on slot-value pairs, imitating a table. Our WikiInfo2Text set (a subset of the original) comprises five topics (UK_place, Book, Automobile, Military_conflict & French_commune).

ViGGO (Juraska et al., 2019), a gaming dialogue corpus, has simple vocabulary, with 9 dialogue acts and 14 video game attributes available. The semantic representation consists of one dialogue act and 1-8 video game attributes, expressed as slot-value pairs that allow for lists of multiple values.

Table 2 shows a sample training point from each corpus. It also shows that in the joint dataset the data source of every example, $d$, is prepended with
| | |
|---|---|
| $d$ | `webnlg: <s> Einstein <p> born <o> 1879 ; <s> Einstein <p> job <o> physicist` |
| $t$ | Einstein was a physicist, born in 1879. |
| $d$ | `wikiinfo: <name> H for Homicide && <author> S. Grafton && <series> Alpha Mysteries` |
| $t$ | H for Homicide, by S. Grafton, is part of the Alpha Mysteries series. |
| $d$ | `viggo: <request Explanation> (<rating>:[excellent], <genres>:[shooter, RTS])` |
| $t$ | What is it about shooter and RTS games that you find so great? |
a dataset-specific tag (webnlg:, wikiinfo:, viggo:). Tags are usually task-based (e.g., translate eng-to-ger:) and have been shown to be particularly effective with Transformer models (Ribeiro et al., 2021). Treating each dataset as a different instance of the d2t task, as in the meta-learning approach, the tags reveal an example's affiliation with a dataset.

# 3.2 Assembling a uFACT dataset

In summary, a UFACT dataset is a mixed corpus comprising a target corpus (WebNLG) and alien datasets (WikiInfo2Text & ViGGO). The next section shows that while the target corpus should exhibit a maximal degree of faithfulness, the faithfulness of alien datasets plays a subordinate role. Therefore, in a UFACT dataset, the target corpus obeys the quality-over-quantity principle, whereas alien corpora prioritise quantity over quality.

# 4 Experiments

# 4.1 Experimental setup

We fine-tune the pre-trained T5-base (Raffel et al., 2020) from HuggingFace for one epoch with batch size 8. We report averages of 5 values, obtained from training the model with 5 different seeds. We measure METEOR, BLEU (up to 4-grams) and PARENT (Dhingra et al., 2019b), a metric specifically developed for d2t generation, considering both the reference text and the data source. PARENT uniquely assesses the faithfulness of the generation to the data source.

![](images/f96e7d3f9b70fc84b9ab0d2b7cfdd847d38e8c3083120fba8bbe4b4ffec7e772.jpg)
Figure 2: T5 instance PARENT scores for each model instance (i.e. data configuration). 'FUU\t' is a UFACT dataset without tags.

For computing PARENT, we use both the word-overlap $(\mathrm{P}(\mathrm{w}))$ and co-occurrence $(\mathrm{P}(\mathrm{c}))$ entailment models. All models are tested on the WebNLG test set, as in Harkous et al. (2020), to provide a fair comparison. The dataset compositions for different experiments are given in Figure 1.
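As a concrete illustration of the tagged linearisation shown in Table 2, the following minimal sketch builds a WebNLG-style input string. The function name and triplet format are our own assumptions for illustration, not the authors' released code:

```python
# Sketch of the tagged linearisation illustrated in Table 2.
# The function name and the triplet input format are assumptions,
# not the paper's actual preprocessing code.

def linearise_webnlg(triplets):
    """Linearise (subject, predicate, object) triplets with <s>/<p>/<o>
    markers and prepend the dataset-specific tag."""
    parts = [f"<s> {s} <p> {p} <o> {o}" for s, p, o in triplets]
    return "webnlg: " + " ; ".join(parts)

triplets = [("Einstein", "born", "1879"), ("Einstein", "job", "physicist")]
print(linearise_webnlg(triplets))
# webnlg: <s> Einstein <p> born <o> 1879 ; <s> Einstein <p> job <o> physicist
```

Analogous functions for the wikiinfo: and viggo: formats would only change the markers and separators; the dataset tag is what lets the model treat each corpus as a distinct instance of the d2t task.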
# 4.2 Effect of training dataset structure

Table 3 and Figure 2 show the effect of the training set structure on model performance.

Table 2: Examples of the three d2t corpora. WebNLG consists of subject-predicate-object triplets, marked as such with `<s>`, `<p>` and `<o>`. WikiInfo2Text has slot-value pairs, with slot names in angle brackets, and pairs separated by `&&`. ViGGO has limited vocabulary, but the hierarchical structure of a dialogue act (e.g., request Explanation) parametrized by slot-value pairs (e.g., `<rating>: [excellent]`).
| | Web. | Wik. | ViG. | P(w)↑ | P(c)↑ | M↑ | B↑ |
|---|---|---|---|---|---|---|---|
| 1 | U | - | - | 33.32 | 44.43 | 48.28 | 18.89 |
| 2 | F | - | - | 43.62 | 55.57 | 60.28 | 42.03 |
| 3 | F | F | - | 45.32 | 58.19 | 61.36 | 39.1 |
| 4 | F | F | F | 44.47 | 56.17 | 60.13 | 40.61 |
| 5 | F | U | - | 46.49 | 58.95 | 61.81 | 41.48 |
| 6 | F | U | U | 46.02 | 58.54 | 61.59 | 40.88 |
| 7 | F\t | U\t | U\t | 43.63 | 59.32 | 60.06 | 33.71 |
| 8 | U | F | F | 37.54 | 48.70 | 51.02 | 25.16 |
| 9 | U | U | U | 38.07 | 51.04 | 52.31 | 18.85 |
Table 3: Experimental results for T5 with different dataset configurations. PARENT, METEOR and BLEU scores are measured for dataset configurations involving WebNLG (target), WikiInfo2Text (alien) and ViGGO (alien), respectively. {F,U}\t = no tags. All numbers reported are averages of the scores of 5 models.

Training on single datasets (Table 3, rows 1-2) When training on the target dataset alone (i.e., WebNLG), a large performance boost is obtained on all metrics from using the faithful dataset WebNLG[F], despite the fact that it contains only $20\%$ of the examples in WebNLG[U] (Figure 1). This demonstrates the detrimental effect of unfaithful target datasets, which are commonly used, on d2t generation faithfulness. The METEOR score of 48.28 on WebNLG[U] is comparable to the range of $\sim 39 - 46$ reported in previous work (Ribeiro et al., 2021). Using faithful in-domain data has a large positive effect on all metrics (row 2).

Addition of faithful alien corpora (rows 3-4) When augmenting the target corpus with faithful alien corpora (i.e. F-F & F-F-F), the training corpus size increases by factors of 1.88 and 1.90, respectively. As expected, performance increases on PARENT and METEOR compared to faithful single-corpus training (F). However, F-F (i.e. just one alien dataset) outperforms F-F-F (two alien datasets). This may be because ViGGO has a complex semantic representation diverging from the tuple/triplet representation in the other datasets, differs considerably in domain from WebNLG and WikiInfo2Text, and only represents $0.92\%$ of the F-F-F dataset (Figure 1). Therefore, it may act as too strong a regulariser during the training phase. The decrease in BLEU coupled with increases in METEOR and PARENT suggests that the generation model stays more faithful to the table, while also phrasing the sentence in its own way.
Training on UFACT datasets (rows 5-6) Training on UFACT datasets F-U and F-U-U improves generator performance compared to training with the faithful counterparts (F-F & F-F-F) (rows 3-4). This increase shows that the faithfulness of the alien datasets WikiInfo2Text and ViGGO plays a subordinate role, and the model instead benefits from the sheer number of fluent examples. However, with the addition of ViGGO[U] (row 6 vs. row 5), no metric score is boosted, suggesting a constraint on alien datasets in terms of how much domains and, potentially, semantic representations can differ.

UFACT without tags (row 7) Training on the largest mixed corpus (F-U-U) without dataset-specific tags reduces every metric's score, with the exception of P(c), which increases by $1.33\%$. Coupled with the decrease in P(w) and BLEU, this suggests that the generated text contains less lexical overlap with the references.

Can the target corpus be unfaithful? (rows 8-9) We have seen that the large unfaithful target corpus WebNLG[U] alone is the worst-performing dataset configuration. The addition of alien corpora in this case, unlike in previous experiments, does not lead to state-of-the-art-like performance. Metric scores stay significantly below any dataset with a faithful target corpus, including the UFACT datasets. The low performance in unfaithful-target-corpus configurations shows that the straightforward addition of alien corpora does not automatically result in desirable scores, and therefore justifies UFACT's quality-over-quantity principle for the target corpus.

# 4.3 Analysis of UFACT efficacy

The above results indicate that faithfulness in the target corpus should not be compromised, not even to gain a larger training set (see largest dataset U-U-U vs. smallest dataset F, or simply F vs. U). Furthermore, faithful alien corpora cannot compensate for unfaithful target corpora (e.g. U-F-F vs. F).
While faithful examples are also desirable in alien datasets, the trade-off between performance and the effort of obtaining faithful examples is such that faithfulness is not worth pursuing at any cost, seeing that F-U / F-U-U outperform F-F / F-F-F.

The UFACT method, however, insists on the target corpus being faithful.

Models trained with $N = 2$ corpora outperform those with $N = 3$ in this paper, suggesting that adding corpora with significantly different domain coverage and semantic representations may be counterproductive when those corpora make up a tiny portion of the dataset. Accordingly, the regularising effect is mitigated in F-U-U, since the portion of ViGGO is higher (7.37%).

Both METEOR, a reference-based metric, and PARENT(c/w), which takes both the reference and the data source into account, increase when training on uFACT datasets compared to conventional training (row 6 vs. 1). These increases suggest the data source is more accurately represented in the generated text. Therefore, uFACT provides a method of training better d2t models with increased semantic faithfulness. The efficacy of mixed-corpus training shows that pretrained language models are powerful enough to learn and benefit from several tasks at once, provided the tasks are similar enough and sufficiently represented in the training set.

On WebNLG, uFACT achieves a new state-of-the-art result of 61.81 on METEOR (Ribeiro et al., 2021) (Table 4).
| Author | Model/Method | M | B |
|---|---|---|---|
| Castro Ferreira et al. (2019) | UPF-FORGe | 39.00 | 38.65 |
| Harkous et al. (2020) | DATATUNER | 42.40 | 52.90 |
| Kale (2020) | T5-large | 44.00 | 61.44 |
| Moryossef et al. (2019) | StrongNeural | 39.20 | 46.5 |
| Schmitt et al. (2020) | Graformer | 43.38 | 61.15 |
| Zhao et al. (2020) | PLANENC | 41.00 | 52.78 |
| our paper | UFACT | 61.81 | 41.84 |
Table 4: State-of-the-art results on WebNLG for METEOR and BLEU.

The comparatively low BLEU scores, in combination with high METEOR scores, are arguably desirable, since the $n$-gram precision metric BLEU rewards simply copying from potentially unfaithful $t_r$, whereas METEOR can also reward semantically equivalent rephrasings of $t_r$. The METEOR and BLEU results thus suggest high semantic overlap without copying. Meanwhile, the UFACT datasets F-U-U and F-U achieve the highest PARENT scores (Table 3, rows 5-6), ensuring semantic overlap with both reference and data source.

# 5 Conclusion

We have presented the UFACT method, which boosts the faithfulness of data-to-text generation models by appropriately constructing the training corpus. Training T5 on a mixture of d2t corpora results in a strong increase in semantic accuracy, as long as the target corpus remains faithful. UFACT's lax constraints on the majority of the training set mitigate the scarcity of faithful d2t corpora, thus making faithful d2t generation more practically feasible. The new state-of-the-art METEOR score shows that language models alone, if trained with a carefully constructed dataset, can be highly effective data-to-text generators.

# References

Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72, Ann Arbor, Michigan. Association for Computational Linguistics.
Thiago Castro Ferreira, Chris van der Lee, Emiel van Miltenburg, and Emiel Krahmer. 2019. Neural data-to-text generation: A comparison between pipeline and end-to-end architectures.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 552-562, Hong Kong, China. Association for Computational Linguistics. +Wenhu Chen, Yu Su, Xifeng Yan, and William Yang Wang. 2020a. KGPT: knowledge-grounded pretraining for data-to-text generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 8635-8648. Association for Computational Linguistics. + +Zhiyu Chen, Harini Eavani, Wenhu Chen, Yinyin Liu, and William Yang Wang. 2020b. Few-shot NLG with pre-trained language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 183-190, Online. Association for Computational Linguistics. +Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming-Wei Chang, Dipanjan Das, and William Cohen. 2019a. Handling divergent reference texts when evaluating table-to-text generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4884-4895, Florence, Italy. Association for Computational Linguistics. +Bhuwan Dhingra, Manaal Faruqui, Ankur P. Parikh, Ming-Wei Chang, Dipanjan Das, and William W. Cohen. 2019b. Handling divergent reference texts when evaluating table-to-text generation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 4884-4895. Association for Computational Linguistics. +Ondrej Dusek and Zdenek Kasner. 2020. Evaluating semantic accuracy of data-to-text generation with natural language inference. In Proceedings of the 13th International Conference on Natural Language Generation, pages 131-137, Dublin, Ireland. Association for Computational Linguistics. +Thiago Castro Ferreira, Chris van der Lee, Emiel van Miltenburg, and Emiel Krahmer. 2019. 
Neural data-to-text generation: A comparison between pipeline and end-to-end architectures. CoRR, abs/1908.09022. +Katja Filippova. 2020. Controlled hallucinations: Learning to generate faithfully from noisy data. CoRR, abs/2010.05873. +Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1126-1135. PMLR. +Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The webnlg challenge: Generating text from RDF data. In Proceedings of the 10th International Conference on Natural Language Generation, INLG 2017, Santiago de Compostela, Spain, September 4-7, 2017, pages 124-133. Association for Computational Linguistics. +Hamza Harkous, Isabel Groves, and Amir Saffari. 2020. Have your text and use it too! end-to-end neural data-to-text generation with semantic fidelity. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2410-2424, Barcelona, Spain (Online). International Committee on Computational Linguistics. + +Juraj Juraska, Kevin Bowden, and Marilyn Walker. 2019. ViGGO: A video game corpus for data-to-text generation in open-domain conversation. In Proceedings of the 12th International Conference on Natural Language Generation, pages 164-172, Tokyo, Japan. Association for Computational Linguistics. + +Mihir Kale. 2020. Text-to-text pre-training for data-to-text tasks. CoRR, abs/2005.10433. + +Amit Moryossef, Yoav Goldberg, and Ido Dagan. 2019. Step-by-step: Separating planning from realization in neural data-to-text generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2267-2277, Minneapolis, Minnesota. Association for Computational Linguistics. 
+ +Ankur Parikh, Xuezhi Wang, Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, and Dipanjan Das. 2020. ToTTo: A controlled table-to-text generation dataset. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1173-1186, Online. Association for Computational Linguistics. + +Ratish Puduppully, Li Dong, and Mirella Lapata. 2018. Data-to-text generation with content selection and planning. CoRR, abs/1809.00582. + +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. CoRR, abs/1910.10683. + +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67. + +Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. Natural Language Engineering, 3(1):57-87. + +Leonardo F. R. Ribeiro, Martin Schmitt, Hinrich Schütze, and Iryna Gurevych. 2021. Investigating pretrained language models for graph-to-text generation. ArXiv, abs/2007.08426. + +Martin Schmitt, Leonardo F. R. Ribeiro, Philipp Dufter, Iryna Gurevych, and Hinrich Schütze. 2020. Modeling graph structure via relative position for better text generation from knowledge graphs. CoRR, abs/2006.09242. + +Chao Zhao, Marilyn Walker, and Snigdha Chaturvedi. 2020. Bridging the structural gap between encoding and decoding for data-to-text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2481-2491, Online. Association for Computational Linguistics. + +# A Obtaining faithful versions of the corpora + +# A.1 WebNLG & ViGGO + +For WebNLG and ViGGO, faithful examples were retrieved from Harkous et al. 
(2020), by selecting semantic fidelity classifier training examples labelled accurate.

# A.2 WikiInfo2Text

Slot-value pairs with slot names which are by default irrelevant to the text (e.g. img_size, or other website-specific meta-data) were excluded from the respective example.

To be included in the training dataset, WikiInfo2Text examples had to obey two hand-crafted rules:

1. Generation-to-data-source length ratio:

- To prevent references from giving information beyond the data source, the number of characters in the reference was restricted, given the number of semantic components in the data source:

$$
\mathrm{len}(ref) < \tau \cdot \mathrm{num}_{\mathrm{datapts}}
$$

2. Overall reference text length:

- To avoid hallucinative reference texts, the number of characters in the reference was restricted:

$$
\mathrm{len}(ref) < \lambda
$$

Values for $\tau$ and $\lambda$ can be found in the table below. For WikiInfo2Text, we still perform some superficial cleaning to prevent extremely long examples from overloading the GPU.
| | $\tau$ | $\lambda$ |
|---|---|---|
| WikiInfo2Text[F] | 60 | 800 |
| WikiInfo2Text[U] | 150 | 1500 |
Table 5: WikiInfo2Text cleaning parameter settings
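The two hand-crafted filtering rules from Appendix A.2, with the parameter settings of Table 5, can be sketched as a simple predicate. This is our own illustration (the function name and the `num_datapts` argument, counting slot-value pairs in the data source, are assumptions):

```python
# Sketch of the WikiInfo2Text filtering rules (Appendix A.2).
# tau bounds the reference length relative to the number of semantic
# components; lam bounds the absolute reference length. Defaults are
# the WikiInfo2Text[F] settings from Table 5 (tau=60, lam=800).

def keep_example(ref, num_datapts, tau=60, lam=800):
    """Return True if the reference text passes both length rules."""
    ratio_ok = len(ref) < tau * num_datapts   # rule 1: length vs. data size
    length_ok = len(ref) < lam                # rule 2: absolute length cap
    return ratio_ok and length_ok

# A short reference describing three slot-value pairs passes;
# an overly long one is filtered out by the absolute bound.
print(keep_example("H for Homicide, by S. Grafton, is part of a series.", 3))  # True
print(keep_example("x" * 900, 20))                                             # False
```

Relaxing the parameters to the [U] settings (tau=150, lam=1500) keeps more, noisier examples, matching the quantity-over-quality role of alien corpora.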
# UNIMO-2: End-to-End Unified Vision-Language Grounded Learning

Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, Haifeng Wang

Baidu Inc., Beijing, China

{liwei85,gaocan01,niuguocheng,xiaoxinyan,liuhao24,liujiachen,wu_hua,wanghaifeng}@baidu.com

# Abstract

Vision-Language Pre-training (VLP) has achieved impressive performance on various cross-modal downstream tasks.
However, most existing methods can only learn from aligned image-caption data and rely heavily on expensive regional features, which greatly limits their scalability and performance. In this paper, we propose an end-to-end unified-modal pre-training framework, namely UNIMO-2, for joint learning on both aligned image-caption data and unaligned image-only and text-only corpora. We build a unified Transformer model to jointly learn visual representations, textual representations and semantic alignment between images and texts. In particular, we propose to conduct grounded learning on both images and texts via a shared grounded space, which helps bridge unaligned images and texts, and aligns the visual and textual semantic spaces on different types of corpora. The experiments show that our grounded learning method can improve textual and visual semantic alignment, improving performance on various cross-modal tasks. Moreover, benefiting from effective joint modeling of different types of corpora, our model also achieves impressive performance on single-modal visual and textual tasks. Our code and models are public at the UNIMO project page https://unimo-ptm.github.io/.

# 1 Introduction

Large-scale pre-training has drawn much attention in the communities of Computer Vision (CV), Natural Language Processing (NLP) and Multi-Modal (MM) learning due to its strong capability of generalization and efficient usage of large-scale data. However, in the existing literature, work on vision, language and vision-language representation learning is mostly studied separately with different training data sources. In the vision domain, pre-training on large-scale image corpora such as ImageNet (Deng et al., 2009), OpenImages (Kuznetsova et al., 2020) and JFT-300M (Dosovitskiy et al., 2020) has proven to be critical for learning transferable visual representations for various downstream tasks.
In NLP, pre-training on easily accessible unannotated text corpora greatly improves the capabilities of language understanding and generation (Devlin et al., 2019; Liu et al., 2019; Yang et al., 2019). Pre-training has also become the de-facto approach in vision-language modeling (Lu et al., 2019; Chen et al., 2020c; Li et al., 2020, 2019a; Yu et al., 2020). However, existing VLP methods require a massive amount of aligned image-text pairs, which are costly to collect and hard to scale up. The large volumes of image corpora in CV and text corpora in NLP cannot be effectively utilized. Thus, the scalability and the performance upper limit of existing VLP methods are largely restricted. As they only learn joint vision-language representations on image-text pairs, they are also difficult to adapt effectively to visual and textual tasks (Li et al., 2021b; Lin et al., 2020).

To address these limitations, we propose a new end-to-end unified-modal pre-training framework, namely UNIMO-2, for joint learning on various types of corpora, including images, texts, and image-caption pairs. Specifically, we build a unified Transformer model to jointly learn visual representations, textual representations, and cross-modal alignment from the three types of corpora. Both the visual and textual representations are learned end-to-end from raw images and textual sentences. Combining a large number of unaligned images and texts is not only expected to improve the performance of joint vision-language tasks, but also to improve the scalability of adapting to single-modal visual and textual tasks. However, it is challenging to bridge unaligned images and texts and effectively align the visual and textual semantic spaces on different types of corpora.

Only a few works have attempted to bridge unaligned images and texts by leveraging object tags from a pre-trained object detector as "anchor points" (Li et al., 2021a,b).
However, they all rely heavily on expensive object-centric visual feature extraction, thus facing the problems of limited visual expressive power and computation inefficiency. In this paper, in order to bridge the unpaired image and text corpora and align the visual and textual semantic spaces end-to-end, we propose to conduct grounded learning on images, texts, and image-text pairs via a shared grounded space. Specifically, we introduce a grounded dictionary shared by images and texts, which represents vision-language grounded semantics. To learn the grounded dictionary, we apply vector quantization on both visual and textual representations to group image patches and text tokens with similar semantics into grounded tokens. Furthermore, we design a Grounded Transformer architecture that lets visual and textual information be exchanged through the grounded tokens, which not only facilitates grounded dictionary learning, but also improves cross-modal alignment. Our grounded learning method can help bridge the textual and visual semantic spaces on unpaired image and text corpora to improve cross-modal fusion on different types of corpora.

We evaluate UNIMO-2 on a variety of representative vision-language understanding and generation tasks, including image/text retrieval, visual question answering, visual reasoning and image captioning. On all these tasks, UNIMO-2 obtains clear improvements compared to baselines that only learn on aligned image-caption data or that lack our grounded learning component. Moreover, we also evaluate our model on single-modal textual tasks such as natural language inference and visual tasks such as image classification (Deng et al., 2009). The results show that our model also achieves very impressive performance on these tasks, which demonstrates the strong scalability and adaptability of our model.
UNIMO-2 has the following advantages compared with previous methods:

- UNIMO-2 can jointly learn from both aligned and unaligned image and text corpora end-to-end, effectively alleviating corpus limitations and learning more generalized visual and textual representations on large volumes of different types of corpora.

- Benefiting from utilizing different types of corpora, UNIMO-2 has better scalability for different types of tasks, including both cross-modal tasks and single-modal tasks.

- Our grounded learning method can help align textual and visual semantic spaces more effectively, thereby greatly improving the performance of various cross-modal tasks. In particular, the performance of zero-shot image/text retrieval even outperforms CLIP pre-trained on an order of magnitude larger pair corpus.

# 2 Related Work

Vision-Language Pre-training Recent years have witnessed rapid progress in vision-and-language pre-training (VLP) (Li et al., 2019b; Lu et al., 2019; Chen et al., 2020c; Li et al., 2019a, 2020; Yu et al., 2020). Most existing mainstream VLP models adopt a two-stage training method, which first extracts region-based visual features using a pre-trained object detection model, and then combines the derived object-centric region features of images and text embeddings as the input of a Transformer (Vaswani et al., 2017) for cross-modal pre-training. These methods rely heavily on an off-the-shelf object detector like Faster R-CNN (Ren et al., 2016), typically pretrained on the Visual Genome dataset (Anderson et al., 2018). As the visual representation is not optimized towards a more generic cross-modal understanding, and extracting region features with an object detection model is time-consuming, they face the problems of limited visual expressive power and computation inefficiency, which makes them less scalable.

Some recent work has also explored VLP without object detection modules (Xu et al., 2021; Kim et al., 2021; Huang et al., 2021; Wang et al., 2021). They utilize either grid features from pretrained CNNs or patch features following ViT (Dosovitskiy et al., 2020); however, they use only limited image-caption pairs for cross-modal pretraining, so their scalability and performance are limited. Only a few works have explored utilizing unaligned images and texts for vision-language pre-training, including our previous work UNIMO (Li et al., 2021b) and U-VisualBERT (Li et al., 2021a). However, they all rely on pre-extraction of region-based visual features or object tags by time-consuming object detection. How to bridge unpaired visual and textual corpora end-to-end without using object detection remains challenging.

![](images/4640cad79e87e13910d51c5a41bab922b14691139ab5f679b79776c962289303.jpg)
Figure 1: Illustration of our UNIMO-2 framework. The left part shows the architecture of learning on image-text pairs, which produces grounded tokens based on the shared semantics in images and texts. The right part shows the architecture of learning on unpaired images and texts, which produces grounded tokens from image representations or text representations, respectively. As they share the same grounded dictionary, the grounded tokens act as "anchor points" to bridge the gap between images and texts.

![](images/5d111268707f4c9164c23503955ba4804dd92cbdd92361fed9ac790b8e1f9b14.jpg)

Grounded Learning Language grounding is an active field aiming at enriching textual representations with visual information, which has been shown to improve performance on a variety of core NLP tasks (Bruni et al., 2014; Baroni, 2016; Kiela, 2017). Kiela et al. (2018) investigate grounded sentence representations by training a sentence encoder to predict the image features of a given caption.
Tan and Bansal (2020) propose a vokenization method that maps language tokens to their related images. These works all enrich the language representation with visual information by learning a projection of text representations to corresponding images (Chrupała et al., 2015). Recently, Huang et al. (2021) propose an end-to-end VLP method that aggregates visual features from a CNN encoder into visual tokens with a visual dictionary. Liu et al. (2021) propose to improve cross-modal retrieval tasks by incorporating a shared discretized embedding space, which is utilized to compute matching scores between different modalities to complement the representations from individual encoders. These works all rely on image-text pairs to learn cross-modal representations and focus only on joint vision-language tasks. By contrast, our work is the first to jointly model both aligned and unaligned images and texts by end-to-end learning of a shared grounded semantic space, which improves modality alignment between both aligned and unaligned images and texts.

# 3 Approach

The overall architecture of our model is shown in Figure 1. UNIMO-2 is an end-to-end framework, which consists of a trainable Transformer-based visual encoder, a Transformer-based text encoder, a grounded dictionary (GD) embedding module, and a multi-layer Grounded Transformer for modality fusion. The visual encoder takes an image as input by splitting it into small patches, and produces high-level visual representations for all patches, similar to ViT (Dosovitskiy et al., 2020). The text encoder encodes textual tokens to produce high-level token representations. Based on the high-level representations of patches and tokens, we design a GD embedding module to group similar vision-language representations into grounded tokens with a shared grounded dictionary.
The Grounded Transformer is further adopted to fuse features from the vision and language modalities through interaction with the common grounded tokens. UNIMO-2 can be pre-trained end-to-end by joint Masked Language Modeling (MLM) on text, Image-Text Matching (ITM) on image-text pairs and Visual Contrastive Learning (VCL) on images. UNIMO-2 can also be easily adapted to various tasks, including visual, textual and cross-modal tasks.

# 3.1 End-to-End Grounded Learning

Humans acquire much of their knowledge through grounded learning: visual concepts can be acquired through language, and language acquisition emerges through visual interaction (Jones et al., 1991; Perfetti, 1998; Fincher-Kiefer, 2001; Andrews et al., 2009; Riordan and Jones, 2011). Inspired by this type of grounded learning, we propose to learn a shared semantic space (i.e., a grounded space) between images and texts to better align fine-grained visual and textual semantics. Specifically, based on the high-level visual representations of patches $V = \{v_{1},\ldots ,v_{M}\}$ and textual representations of tokens $T = \{t_1,\dots ,t_N\}$, we introduce a grounded dictionary to group similar visual and textual representations into the same grounded token. The grounded features not only help align the visual and textual semantics in aligned image-caption data, but also act as "anchor points" to help bridge the unaligned images and texts, as shown in Figure 1.

Grounded Dictionary Learning We define a grounded dictionary (GD) as a matrix $G \in \mathbb{R}^{C \times D}$, which contains $C$ embedding vectors of dimension $D$. The embedding vector for the $j^{th}$ grounded token is denoted $g_{j} \in \mathbb{R}^{D}$, $j \in \{1,2,\ldots,C\}$. Vector Quantization (VQ) is widely used to group continuous embeddings into groups of discrete latent variables (Oord et al., 2017; Liu et al., 2021; Huang et al., 2021).
For example, each patch or token can be mapped to a grounded token by finding its nearest neighbor in the GD, as in Oord et al. (2017).

Most existing VLP methods implicitly assume a one-to-one correspondence between the visual and textual modalities of image-text pairs. However, this hypothesis does not hold in reality, as most image-text pairs on the Web are noisy or only weakly correlated. To tackle this issue, instead of mapping each patch or token representation to a grounded token, we only detect the most significant shared semantics between image and text. We propose to find the top-$K$ most significant grounded tokens for both the textual and visual input. Specifically, let $x_{ij}$ denote the similarity between the embedding vectors of visual token $v_{i}$ and grounded token $g_{j}$, which is computed by:

$$
x_{ij} = \sigma\left(\eta \, v_{i}^{\top} g_{j}\right) \tag{1}
$$

where $\sigma$ denotes the sigmoid function, and $\eta$ denotes a learnable temperature parameter. Similarly, $y_{kj}$ denotes the similarity between the embedding vectors of textual token $t_k$ and grounded token $g_j$.

For image-text pairs, the accumulated score of the grounded token $g_{j}$ is computed as:

$$
s_{j} = \sum_{i=1}^{M} x_{ij} + \sum_{k=1}^{N} y_{kj} \tag{2}
$$

We obtain the top-$K$ most significant grounded tokens with the largest accumulated scores: $g_{1},\ldots ,g_{K} = Top_{K}\{s_{1},\ldots ,s_{C}\}$, where $K$ is a hyper-parameter. Note that setting $K = M + N$ approaches mapping every patch and token to its own grounded token, which would increase the computation cost and introduce noisy information into the grounded learning process. So, we set $K$ much smaller than $M + N$ to obtain the most significant, shared grounded tokens, which helps align fine-grained visual and textual representations while eliminating noisy or unrelated information in image-text pairs.
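As a concrete illustration, the top-$K$ selection of Equations 1 and 2 can be sketched in NumPy. The array sizes and names below are illustrative only, not values from the paper:

```python
import numpy as np

def select_grounded_tokens(V, T, G, eta=1.0, K=4):
    """V: (M, D) patch reps; T: (N, D) token reps; G: (C, D) grounded dictionary.
    Returns indices of the K grounded tokens with the largest accumulated scores."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    x = sigmoid(eta * V @ G.T)            # (M, C): patch-to-grounded similarities (Eq. 1)
    y = sigmoid(eta * T @ G.T)            # (N, C): token-to-grounded similarities
    s = x.sum(axis=0) + y.sum(axis=0)     # (C,): accumulated scores (Eq. 2)
    return np.argsort(-s)[:K]             # indices of the top-K grounded tokens

rng = np.random.default_rng(0)
V = rng.normal(size=(9, 16))    # 9 image patches, dimension 16
T = rng.normal(size=(5, 16))    # 5 text tokens
G = rng.normal(size=(32, 16))   # a tiny grounded dictionary (C = 32)
top_k = select_grounded_tokens(V, T, G, K=4)
```

For unpaired inputs, the same routine applies with only the `x` or only the `y` term contributing to `s`.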
For unpaired images or text, the accumulated score of each grounded token $g_{j}$ is $s_j = \sum_{i = 1}^{M}x_{ij}$ or $s_j = \sum_{k = 1}^{N}y_{kj}$, and the top-$K$ grounded tokens can be obtained similarly.

The grounded dictionary is randomly initialized and further updated end-to-end during pre-training. As the $Top_{K}$ function is non-differentiable, we introduce a grounding loss to help learn the grounded dictionary. Specifically, we propose a revised form of the Vector Quantization (VQ) algorithm (Oord et al., 2017), which uses the $l_{2}$ error to move the embedding vectors $g_{j}$ towards the mapped patch or token representations, as shown in the first term of Equation 3. For simplicity, here we take image input as an example. Since the volume of the embedding space is dimensionless, it can grow arbitrarily if the embeddings $g_{j}$ do not train as fast as the visual and textual encoder parameters. To make sure the encoder commits to an embedding and its output does not grow, we add a commitment loss, the second term in Equation 3. Thus, the total grounding loss becomes:

$$
\mathcal{L}_{GD} = \sum_{i=1}^{M} \Big\| sg(v_{i}) - \sum_{j} \frac{x_{ij}}{\sum_{k} x_{ik}} g_{j} \Big\|_{2}^{2} + \beta \sum_{j=1}^{K} \Big\| sg(g_{j}) - \sum_{i} \frac{x_{ij}}{s_{j}} v_{i} \Big\|_{2}^{2} \tag{3}
$$

where $sg(\cdot)$ denotes the stop-gradient operator, defined as the identity at forward computation time with zero partial derivatives, and $\beta$ denotes a weight parameter.

The grounded dictionary faces a cold-start problem for unpaired images and texts, so we apply curriculum learning on different types of corpora.

![](images/24f409b6e15fa06ec1d99848357cfc38a147ce0279c0fef193a3e72070d97ae1.jpg)
Figure 2: The self-attention architecture of Grounded Transformer. Cross-modal information is exchanged through the grounded tokens.
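The grounding loss of Equation 3 can be sketched in NumPy as follows. Here the stop-gradient $sg(\cdot)$ is simply the identity on values (in a real framework it would be a detach), and the array names and sizes are illustrative assumptions:

```python
import numpy as np

def grounding_loss(V, G_top, x, beta=0.25):
    """V: (M, D) patch reps; G_top: (K, D) selected grounded embeddings;
    x: (M, K) similarities (Eq. 1) restricted to the top-K grounded tokens."""
    s = x.sum(axis=0)                                   # (K,): accumulated scores s_j
    # Dictionary term: pull the soft mixture of grounded embeddings toward sg(v_i).
    mix_g = (x / x.sum(axis=1, keepdims=True)) @ G_top  # (M, D)
    dict_term = ((V - mix_g) ** 2).sum()
    # Commitment term: keep encoder outputs close to sg(g_j).
    mix_v = (x / s).T @ V                               # (K, D): sum_i (x_ij / s_j) v_i
    commit_term = ((G_top - mix_v) ** 2).sum()
    return dict_term + beta * commit_term

rng = np.random.default_rng(0)
V = rng.normal(size=(6, 8))                  # 6 patch representations
G_top = rng.normal(size=(3, 8))              # 3 selected grounded embeddings
x = 1.0 / (1.0 + np.exp(-V @ G_top.T))       # sigmoid similarities, as in Eq. 1
loss = grounding_loss(V, G_top, x)           # a non-negative scalar
```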
Specifically, we first train only on image-text pairs for 20 epochs to obtain a usable grounded embedding space, then further train on all three types of corpora to help bridge unpaired images and texts. To show what the GD has learned, we visualize some grounded tokens in Appendix A.

Grounded Transformer After obtaining the grounded tokens, we concatenate them with the visual and textual tokens as input to our Grounded Transformer for cross-modal fusion. Specifically, we propose to bridge visual and textual representations through the grounded tokens. As shown in Figure 2, cross-modal information can only be exchanged through the grounded tokens, which also pushes the grounded tokens to capture the most significant shared semantics between images and texts. In this way, our model is more robust to weakly correlated image-text pairs, since cross-modal interaction is modeled through the common grounded tokens. Furthermore, this self-attention architecture improves computational efficiency compared to the standard pairwise self-attention mechanism.

For unpaired images and texts, the Grounded Transformer also models the fusion of visual tokens or textual tokens with the grounded tokens. As the grounded dictionary captures common visual and textual semantics, it also helps learn cross-modal representations on unpaired images and texts.

# 3.2 Pre-training On Different Corpora

Based on the outputs of the Grounded Transformer, we adopt the Masked Language Modeling (MLM) and Image-Text Matching (ITM) pre-training tasks on image-text pairs. Furthermore, we also apply MLM on the text corpus and Visual Contrastive Learning (VCL) on images.

Masked Language Modeling We iteratively sample spans of text until 15% of the tokens have been selected. We sample the span length from a geometric distribution $l \sim Geo(p)$, where $p$ is set to 0.2, similar to SpanBERT (Joshi et al., 2020).
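The span-sampling procedure just described can be sketched as follows; the stopping rule and overlap handling are assumptions beyond what the text specifies:

```python
import numpy as np

def sample_mask_spans(seq_len, mask_ratio=0.15, p=0.2, seed=0):
    """Iteratively sample spans with lengths l ~ Geo(p) (mean 1/p = 5)
    until at least mask_ratio of the positions are selected."""
    rng = np.random.default_rng(seed)
    masked = set()
    budget = int(seq_len * mask_ratio)
    while len(masked) < budget:
        length = int(rng.geometric(p))          # span length l ~ Geo(p)
        start = int(rng.integers(0, seq_len))   # uniform span start
        masked.update(range(start, min(start + length, seq_len)))
    return sorted(masked)

positions = sample_mask_spans(seq_len=512)      # covers at least 15% of positions
```

Each selected position would then be replaced according to the 80/10/10 scheme described next.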
All tokens in the selected spans are replaced with a special [MASK] token, a random token, or the original token with probability 80%, 10% and 10%, respectively. The goal is to predict these masked tokens based on their surrounding context and all visual features. The MLM task is also applied to the text-only corpus, where masked tokens are predicted based only on the surrounding tokens.

Image-Text Matching To enhance cross-modal matching, we adopt the ITM task for pretraining, as in previous work (Chen et al., 2020c). We apply a binary classifier to the concatenated embedding features of the "[CLS]" token in the text and the "[CLS]" token in the image produced by the Grounded Transformer, to predict whether the input image and text are matched.

Visual Contrastive Learning UNIMO-2 learns representations on unpaired images by maximizing agreement between differently augmented views of the same image while minimizing similarity between different images via a contrastive loss in the latent space, similar to SimCLR (Chen et al., 2020a). We apply a stochastic data augmentation module that randomly transforms an image into two correlated views, which form a positive pair, with random images in the same minibatch serving as negative pairs. We combine augmentations of random cropping, random rotation and random color distortion, followed by resizing back to the original size.

# 3.3 Transferring To Different Tasks

Our model can be effectively finetuned on different types of tasks, including cross-modal, visual and textual tasks. For cross-modal tasks, the model architecture is the same as the pre-training architecture on image-text pairs, as shown in the left part of Figure 1. Grounded tokens are produced based on both the visual and textual representations to facilitate cross-modal understanding and generation. For visual tasks, the model architecture is the same as the pre-training architecture on images, as shown in the middle part of Figure 1.
Grounded tokens are obtained based on the visual representations from the Visual Transformer. As the grounded tokens contain shared semantics between images and texts, UNIMO-2 can learn language-grounded image representations for visual tasks. Similarly, for textual tasks the model architecture is the same as the pre-training architecture on text, as shown in the right part of Figure 1. Grounded tokens are obtained based on the textual representations from the Text Transformer. Again, the shared grounded space helps learn grounded text representations to facilitate textual tasks.

# 4 Experimental Settings

Pretraining Dataset Our pre-training datasets consist of three types: a text corpus, an image corpus, and image-text pairs. The text corpus includes two large-scale corpora: BookWiki and OpenWebText, which are part of the training dataset of RoBERTa (Liu et al., 2019). The image corpus consists of images without textual descriptions, including a subset of OpenImages (Krasin et al., 2017) and ImageNet-21k (Deng et al., 2009). Each image in these datasets carries a textual label. The image-text pairs are composed of four existing multi-modal datasets: COCO (Lin et al., 2014), Visual Genome (VG) (Krishna et al., 2017), Conceptual Captions (CC) (Sharma et al., 2018) and SBU Captions (Ordonez et al., 2011), which have also been widely used in previous VLP models. Detailed statistics are shown in the appendix. We also transform the label of each image into a sentence via prompts (e.g. "a photo of [label]") to create pseudo image-text pairs from the OpenImages and ImageNet-21k datasets for pretraining.

Implementation Detail UNIMO-2 consists of 12 layers of Visual Transformer, 12 layers of Text Transformer, and 12 layers of Grounded Transformer. The Visual Transformer is initialized from ViT-B/16. The Text Transformer and Grounded Transformer are both initialized from RoBERTa-Base. The maximum sequence length of text tokens is set to 512.
An Adam optimizer with initial learning rate 5e-5 and a linear learning rate decay schedule is utilized.

For the visual encoder, our model receives the raw image $\mathbf{x} \in \mathbb{R}^{H \times W \times C}$ and maps it into a flattened 1D sequence of patches $\mathbf{x}_p \in \mathbb{R}^{\frac{HW}{P^2} \times D}$ as input for the transformer, where $D$ is the fixed hidden size of the transformer layers and $P$ is the patch size. During pretraining, we use a $224 \times 224$ resolution with a fixed patch size of $16 \times 16$, resulting in a patch sequence of length $14 \times 14$ as visual tokens. During fine-tuning, we increase the image resolution to $384 \times 384$ and interpolate the positional encoding of image patches following Dosovitskiy et al. (2020). For the grounded embedding module, the grounded dictionary size $C$ is set to 2048, and the number of grounded tokens $K$ during both pre-training and finetuning is set to 100, which is much smaller than the maximum number of patches and tokens for pre-training (i.e. 709) and finetuning (i.e. 1089). We set $\beta = 0.25$ in all our experiments; the results did not vary noticeably for values ranging from 0.1 to 1.0. We compare different grounding settings in detail in Appendix A.

Finetuning Tasks To show the scalability of our model, we fine-tune it on three types of downstream tasks: (1) joint vision-language cross-modal tasks, (2) visual tasks, and (3) textual tasks. The cross-modal tasks include: visual question answering (VQA) on the VQA v2.0 dataset (Goyal et al., 2017), image captioning on the Microsoft COCO Captions dataset (Chen et al., 2015), visual entailment on the SNLI-VE dataset (Xie et al., 2019) and image-text retrieval on the Flickr30k dataset (Young et al., 2014). The visual tasks include image classification on the ImageNet-1k dataset (Krizhevsky et al., 2012).
The textual tasks include sentiment classification on the SST-2 dataset (Socher et al., 2013), natural language inference on the MNLI dataset (Williams et al., 2018), linguistic acceptability analysis on the CoLA dataset (Warstadt et al., 2019) and semantic similarity analysis on the STS-B dataset (Cer et al., 2017). Detailed statistics of the datasets and hyper-parameter settings for the above tasks are described in Appendix B.

# 5 Results and Analysis

We compare UNIMO-2 to a variety of state-of-the-art models on cross-modal, visual and textual tasks.

# 5.1 Cross-Modal Tasks

The evaluation results on the joint vision-language cross-modal tasks are shown in Table 1. We compare with most existing VLP models, including region-based models ViLBERT (Lu et al., 2019), UNITER (Chen et al., 2020c), Oscar (Li et al., 2020), Villa (Gan et al., 2020) and UNIMO (Li et al., 2021b), and end-to-end models ViLT (Kim et al., 2021), E2E-VLP (Xu et al., 2021), SOHO (Huang et al., 2021) and CLIP (Radford et al., 2021). The results show that UNIMO-2 achieves the best results on most benchmarks, outperforming both the base and large sizes of other
| Model | ZS-IR R@1/R@5 | ZS-TR R@1/R@5 | IR R@1/R@5 | TR R@1/R@5 | SNLI-VE Val/Test | VQA test-dev/std | Caption B@4/C |
|---|---|---|---|---|---|---|---|
| *Region-based Models Pretrained on Image-Text Pairs of CC, SBU, COCO and VG* | | | | | | | |
| ViLBERT | 31.86/61.12 | - | 58.20/84.90 | - | - | 70.55/70.92 | - |
| UNITER-Base | 66.16/88.40 | 80.70/95.70 | 72.52/92.36 | 85.90/97.10 | 78.59/78.28 | 72.70/72.91 | - |
| Villa-Base | - | - | 74.74/92.86 | 86.60/97.90 | 79.47/79.03 | 73.59/73.67 | - |
| Oscar-Base | - | - | - | - | - | 73.16/73.44 | 36.5/123.7 |
| UNIMO-Base | 62.44/86.16 | 77.40/95.10 | 74.66/93.40 | 89.70/98.40 | 80.00/79.10 | 73.79/74.02 | 38.8/124.4 |
| UNITER-Large | 68.74/89.20 | 83.60/95.70 | 75.56/94.08 | 87.30/98.00 | 79.39/79.38 | 73.82/74.02 | - |
| Villa-Large | - | - | 76.26/94.24 | 87.90/97.50 | 80.18/80.02 | 74.69/74.87 | - |
| Oscar-Large | - | - | - | - | - | 73.61/73.82 | 37.4/127.8 |
| UNIMO-Large | 72.14/91.14 | 85.80/96.80 | 78.04/94.24 | 89.40/98.90 | 81.11/80.63 | 75.06/75.27 | 39.6/127.7 |
| *End-to-End Models Pretrained on Image-Text Pairs of CC, SBU, COCO and VG. † denotes 400 Million pairs.* | | | | | | | |
| ViLT | 51.3/79.9 | 69.7/91.0 | 62.2/87.6 | 83.7/97.2 | - | 70.94/- | - |
| E2E-VLP | - | - | 73.58/92.42 | 86.24/97.50 | - | 73.25/73.67 | 36.2/117.3 |
| SOHO | - | - | 72.5/92.7 | 86.5/98.1 | 85.00/84.95 | 73.25/73.47 | - |
| CLIP† | 68.7/90.6 | 88.0/98.7 | - | - | - | - | - |
| Our Baseline | 65.11/87.44 | 78.80/94.38 | 78.52/94.02 | 91.62/98.72 | 80.37/80.43 | 75.69/75.87 | 38.5/128.4 |
| UNIMO-2 | 72.70/91.18 | 88.46/96.84 | 80.14/95.58 | 92.01/99.31 | 81.97/81.48 | 76.31/76.42 | 39.7/131.2 |

Table 1: Evaluation results on cross-modal tasks. ZS denotes zero-shot performance. IR and TR represent image retrieval and text retrieval, respectively. B@4 and C denote the BLEU-4 and CIDEr metrics, respectively. "Our Baseline" is identical to UNIMO-2 except that the grounded embedding module is removed; it is trained on the same corpus with the same experimental settings as UNIMO-2.
| Model | Zero-Shot Acc@1 | Finetuned Acc@1 |
|---|---|---|
| SimCLRv2 (Chen et al., 2020b) | - | 80.5 |
| CLIP-ViT(B/16) | 68.6 | 80.2 |
| Our Baseline | 58.2 | 80.7 |
| UNIMO-2 | 66.3 | 80.8 |

VLP models. In particular, UNIMO-2 achieves very good performance on zero-shot image/text retrieval, even outperforming CLIP (Radford et al., 2021), which was pre-trained on an order of magnitude larger corpus. The results demonstrate that UNIMO-2 obtains better cross-modal representations through joint end-to-end grounded learning on different types of corpora.

Furthermore, the performance of "Our Baseline", which simply removes the grounded embedding module from UNIMO-2, drops notably on all tasks, which demonstrates the effectiveness of our grounded learning method for cross-modal alignment. In particular, on the zero-shot image retrieval and text retrieval tasks, UNIMO-2 obtains 7.59 R@1 and 9.66 R@1 absolute gains over "Our Baseline". The results demonstrate that our grounded learning method helps align the visual and textual semantic spaces on different types of corpora to obtain more effective cross-modal representations.

Table 2: Evaluation results on visual tasks, compared to state-of-the-art representation learning methods. We report both the zero-shot and finetuned top-1 accuracy on ImageNet-1k. The finetuned result of CLIP-ViT is linear probe performance.
| Model | SST-2 Acc | MNLI Acc (m/mm) | CoLA Mat | STS-B Per |
|---|---|---|---|---|
| BERT | 92.7 | 84.4/- | - | - |
| RoBERTa | 94.8 | - | 63.6 | - |
| UniLM | 94.5 | 87.0/85.9 | 61.1 | 87.7 |
| UNITER | 89.7 | 80.8/- | 37.4 | - |
| VilBERT | 90.4 | 79.9/- | 36.1 | - |
| UNIMO | 95.1 | 86.8/86.7 | 65.4 | 91.0 |
| Our Baseline | 94.1 | 87.1/86.9 | 60.6 | 91.0 |
| UNIMO-2 | 94.7 | 87.5/87.5 | 62.1 | 91.2 |

Table 3: Evaluation results on textual tasks. Mat and Per denote the Matthews correlation coefficient and the Pearson correlation coefficient, respectively. All results are evaluated on the dev set.

# 5.2 Visual Tasks

UNIMO-2 can also be effectively adapted to visual tasks such as image classification. As UNIMO-2 learns effective cross-modal representations, it can classify images without finetuning. Specifically, the target labels of images can be transformed into pseudo image descriptions, such as "a photo of [label]". Then the zero-shot image-to-text retrieval method can be used to obtain the label for each image, similar to CLIP (Radford et al., 2021). Both the zero-shot and finetuned performance are compared to several state-of-the-art representation learning methods. The results in Table 2 show that UNIMO-2 achieves comparable performance with CLIP, which was pretrained on 400 million image-text pairs, in both the zero-shot and supervised settings. Moreover, UNIMO-2 clearly outperforms
| Group | Model | ZS-IR R@1 | ZS-TR R@1 | IR R@1 | TR R@1 | COCO Caption B@4/C | ZS-ImageNet Acc@1 | MNLI m/mm |
|---|---|---|---|---|---|---|---|---|
| | UNIMO-2 | 72.70 | 88.46 | 80.14 | 92.01 | 39.7/131.2 | 66.3 | 87.5/87.5 |
| GD | w/o GD (P) | 65.11 | 78.80 | 78.52 | 91.62 | 38.5/128.4 | 58.2 | 87.1/86.9 |
| | w/o GD (I) | 40.22 | 31.76 | 74.08 | 88.26 | 39.0/127.4 | 21.3 | 87.5/87.3 |
| | w/o G.T. | 70.10 | 85.01 | 78.84 | 91.12 | 39.6/130.1 | 66.4 | 87.1/86.8 |
| | 1-to-1 Map | 66.06 | 80.97 | 77.61 | 90.43 | 38.7/127.4 | 66.3 | 87.0/86.9 |
| Corpus | w/o Text | 70.00 | 85.50 | 78.90 | 90.24 | 39.0/128.7 | 65.0 | 84.9/85.0 |
| | w/o Images | 69.17 | 84.81 | 77.65 | 90.34 | 39.4/129.5 | 42.2 | 87.1/87.0 |
| | w/o Both | 70.06 | 84.12 | 78.17 | 91.32 | 39.3/129.3 | 43.0 | 85.9/85.7 |

Table 4: Ablation study on the effectiveness of our unified end-to-end grounded learning architecture.

"Our Baseline" in the zero-shot setting, achieving 8.1 Acc@1 absolute gains. The results demonstrate that UNIMO-2 also learns generalized visual representations through unified-modal learning on different types of corpora.

# 5.3 Textual Tasks

To show the effectiveness of UNIMO-2 on textual tasks, we further compare with both VLP models, including UNITER, VilBERT and UNIMO, and pre-trained language models, including BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019) and UniLM (Dong et al., 2019). The comparison results in Table 3 demonstrate that UNIMO-2 achieves much better performance than existing VLP models such as UNITER and VilBERT, and comparable performance to existing PLMs such as RoBERTa. UNIMO-2 also outperforms "Our Baseline" on all textual tasks.

The above results demonstrate the adaptability and scalability of our unified end-to-end VLP architecture for joint learning on both aligned and unaligned images and texts. Overall, UNIMO-2 not only achieves excellent performance on cross-modal tasks, but also performs very well on visual and textual tasks, which validates the superiority of our unified-modal learning architecture.

# 5.4 Analysis

Effectiveness of Grounded Learning We further validate the effectiveness of our grounded learning component through an ablation study. "w/o GD (P)" denotes removing the grounded learning component during both pre-training and inference, in order to validate its effectiveness for unified learning on different types of corpora. "w/o GD (I)" denotes keeping the grounded learning component during pre-training but removing it during inference, in order to validate the effectiveness of the grounded representations for downstream tasks.
"1-to-1 Map" denotes mapping each patch or token to a grounded token by finding its nearest neighbor in the grounded dictionary, similar to the vector quantization method in Oord et al. (2017). We compare their performance on three types of tasks, as shown in the top part of Table 4. The results demonstrate that our grounded learning (GD) method is essential to end-to-end joint learning from different types of corpora, helping bridge unaligned images and texts and improving vision-language semantic alignment. The learned grounded representations are also critical to both the cross-modal and single-modal downstream tasks. We further validate the effectiveness of our Grounded Transformer by replacing it with a traditional Transformer, denoted "w/o G.T.". The results show that the performance on cross-modal tasks drops notably compared to UNIMO-2, which demonstrates the effectiveness of our Grounded Transformer architecture.

Effectiveness of Unaligned Images and Texts To further validate the contribution of unaligned images and texts to cross-modal learning, we compare the performance of UNIMO-2 on different pre-training datasets. Specifically, we compare the performance of UNIMO-2 when removing the text corpus (i.e. "w/o Text"), the image corpus (i.e. "w/o Images"), or both (i.e. "w/o Both"). The comparison results in the bottom part of Table 4 show that removing either the text corpus or the image corpus consistently reduces performance on all three types of tasks: cross-modal, visual and textual. It is worth noting that performance on the image/text retrieval tasks drops notably when either the text-only corpus or the image-only corpus is removed, which demonstrates that unaligned corpora are also useful for cross-modal tasks. UNIMO-2 can effectively leverage unaligned images and texts to improve cross-modal learning.
+ +# 6 Conclusion + +In this work, we propose UNIMO-2, an end-to-end unified-modal pre-training framework that can learn from both aligned and unaligned image and text corpora. Our proposed grounded learning method can help bridge unpaired images and texts and align the textual and visual semantic spaces more effectively. Benefiting from effectively utilizing different types of corpora, UNIMO-2 has better scalability for different types of tasks. Experiments show that UNIMO-2 greatly improves the performance of various cross-modal tasks and also achieves very impressive performance on visual and textual tasks. The results also show that it is promising to further uniformly improve the performance of cross-modal, visual and textual tasks by utilizing larger scales of unpaired images and texts. + +# Acknowledgments + +This work was supported in part by the National Key R&D Program of China under Grant 2020YFB1406701. Xinyan Xiao is the corresponding author. + +# References + +Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6077-6086. +Mark Andrews, Gabriella Vigliocco, and David Vinson. 2009. Integrating experiential and distributional data to learn semantic representations. Psychological review, 116(3):463. +Marco Baroni. 2016. Grounding distributional semantics in the visual world. Language and Linguistics Compass, 10(1):3-13. +Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of artificial intelligence research, 49:1-47. +Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. 
In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada. Association for Computational Linguistics.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020a. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597-1607. PMLR.
Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey Hinton. 2020b. Big self-supervised models are strong semi-supervised learners. arXiv preprint arXiv:2006.10029.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325.
Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020c. Uniter: Universal image-text representation learning. In European Conference on Computer Vision, pages 104-120. Springer.
Grzegorz Chrupała, Ákos Kádár, and Afra Alishahi. 2015. Learning language through pictures. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 112-118, Beijing, China. Association for Computational Linguistics.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota.
Association for Computational Linguistics. +Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Advances in Neural Information Processing Systems, pages 13063-13075. +Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. +Rebecca Fincher-Kiefer. 2001. Perceptual components of situation models. Memory & Cognition, 29(2):336-343. + +Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, and Jingjing Liu. 2020. Large-scale adversarial training for vision-and-language representation learning. arXiv preprint arXiv:2006.06195. +Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6904-6913. +Zhicheng Huang, Zhaoyang Zeng, Yupan Huang, Bei Liu, Dongmei Fu, and Jianlong Fu. 2021. Seeing out of the box: End-to-end pre-training for vision-language representation learning. arXiv preprint arXiv:2104.03135. +Susan S Jones, Linda B Smith, and Barbara Landau. 1991. Object properties and knowledge in early lexical learning. *Child development*, 62(3):499-516. +Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77. +Andrej Karpathy and Li Fei-Fei. 2015. Deep visual-semantic alignments for generating image descriptions. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3128-3137. +Douwe Kiela. 2017. Deep embodiment: grounding semantics in perceptual modalities. Technical report, University of Cambridge, Computer Laboratory. +Douwe Kiela, Alexis Conneau, Allan Jabri, and Maximilian Nickel. 2018. Learning visually grounded sentence representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 408-418, New Orleans, Louisiana. Association for Computational Linguistics. +Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. Vilt: Vision-and-language transformer without convolution or region supervision. arXiv preprint arXiv:2102.03334. +Ivan Krasin, Tom Duerig, Neil Alldrin, Vittorio Ferrari, Sami Abu-El-Haija, Alina Kuznetsova, Hassan Rom, Jasper Uijlings, Stefan Popov, Andreas Veit, et al. 2017. Openimages: A public dataset for large-scale multi-label and multi-class image classification. Dataset available from https://github.com/openimages, 2(3):2-3. +Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123(1):32-73. + +Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25:1097-1105. +Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, et al. 2020. The open images dataset v4. International Journal of Computer Vision, 128(7):1956-1981. +Gen Li, Nan Duan, Yuejian Fang, Daxin Jiang, and Ming Zhou. 2019a. 
Unicoder-vl: A universal encoder for vision and language by cross-modal pretraining. arXiv preprint arXiv:1908.06066. +Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019b. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557. +Liunian Harold Li, Haoxuan You, Zhecan Wang, Alireza Zareian, Shih-Fu Chang, and Kai-Wei Chang. 2021a. Unsupervised vision-and-language pre-training without parallel images and captions. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5339-5350, Online. Association for Computational Linguistics. +Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, and Haifeng Wang. 2021b. UNIMO: Towards unified-modal understanding and generation via cross-modal contrastive learning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2592-2607, Online. Association for Computational Linguistics. +Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. 2020. Oscar: Object-semantics aligned pre-training for vision-language tasks. In European Conference on Computer Vision, pages 121-137. Springer. +Junyang Lin, An Yang, Yichang Zhang, Jie Liu, Jingren Zhou, and Hongxia Yang. 2020. Interbert: Vision-and-language interaction for multi-modal pretraining. arXiv preprint arXiv:2003.13198. +Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer. +Alexander H Liu, SouYoung Jin, Cheng-I Jeff Lai, Andrew Rouditchenko, Aude Oliva, and James Glass. 2021.
Cross-modal discrete representation learning. arXiv preprint arXiv:2106.05438. + +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. +Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visi-olinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems, pages 13-23. +Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 2017. Neural discrete representation learning. arXiv preprint arXiv:1711.00937. +Vicente Ordonez, Girish Kulkarni, and Tamara Berg. 2011. Im2text: Describing images using 1 million captioned photographs. Advances in neural information processing systems, 24:1143-1151. +Charles A Perfetti. 1998. The limits of co-occurrence: Tools and theories in language research. +Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020. +Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2016. Faster r-cnn: Towards real-time object detection with region proposal networks. IEEE transactions on pattern analysis and machine intelligence, 39(6):1137-1149. +Brian Riordan and Michael N Jones. 2011. Redundancy in perceptual and linguistic experience: Comparing feature-based and distributional models of semantic representation. Topics in Cognitive Science, 3(2):303-345. +Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556-2565, Melbourne, Australia. 
Association for Computational Linguistics. +Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics. +Hao Tan and Mohit Bansal. 2020. Vokenization: Improving language understanding with contextualized, visual-grounded supervision. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages + +2066-2080, Online. Association for Computational Linguistics. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008. +Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. 2021. Simvlm: Simple visual language model pretraining with weak supervision. arXiv preprint arXiv:2108.10904. +Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2019. Cola: The corpus of linguistic acceptability (with added annotations). +Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics. +Ning Xie, Farley Lai, Derek Doran, and Asim Kadav. 2019. Visual entailment: A novel task for fine-grained image understanding. arXiv preprint arXiv:1901.06706. +Haiyang Xu, Ming Yan, Chenliang Li, Bin Bi, Songfang Huang, Wenming Xiao, and Fei Huang. 2021. E2E-VLP: End-to-end vision-language pre-training enhanced by visual learning. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 503-513, Online. Association for Computational Linguistics. +Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems, pages 5753-5763. +Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67-78. +Fei Yu, Jiji Tang, Weichong Yin, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. 2020. Ernievil: Knowledge enhanced vision-language representations through scene graph. arXiv preprint arXiv:2006.16934. + +# A Grounded Learning Analysis + +Visualization of Grounded Dictionary To show the semantics of the grounded dictionary learned by UNIMO-2, we visualize the image patches and textual tokens that are grouped in each grounded token. We map each patch or token into a grounded token with which has the largest similarity between their representations by Equation 1. For each grounded token, the patches and tokens that have the largest similarity scores are selected and visualized. Several examples are shown in Figure 3, which demonstrate that each grounded token captures meaningful and consistent vision-language grounded semantics. + +Parameter Analysis In all our experiments, we utilize the default grounding settings that the grounded dictionary (GD) size $C$ is set as 2048 and the number of grounded tokens $K$ is set as 100. We further compare different grounding settings to explore the properties of the grounded semantic space for cross-modal learning. 
Specifically, we validate the performance of grounded learning with different grounded dictionary (GD) sizes $C$ from {1024, 2048, 4096, 8192} and different numbers of grounded tokens $K$ from {10, 20, 50, 100}. When comparing different GD sizes $C$, we set $K$ to 100. We likewise keep $C = 2048$ when comparing different settings of $K$. Furthermore, we also compare our method with the simplest Vector Quantization (VQ) method, which maps each visual or textual token to a grounded token by finding its nearest neighbor in the grounded dictionary, namely the "1-to-1 map". The number of grounded tokens for the "1-to-1 map" depends on the total number of image patches and textual tokens, which is 709 (i.e., $197 + 512$) during pre-training and 1089 (i.e., $577 + 512$) during finetuning.

For time efficiency, we only pre-train UNIMO-2 on the corpus of image-text pairs for 10 epochs under the above settings, and then compare performance on two representative cross-modal tasks, zero-shot image/text retrieval and image captioning, to validate their effectiveness on cross-modal alignment. The comparison results are shown in Table 5, which demonstrates that our grounded learning method achieves better performance on the two tasks when the GD size $C$ is set to 4096 or the number of grounded tokens $K$ is set to 50. Too large a $C$ increases the difficulty of learning, while too small a $C$ may restrict the volume of the grounded semantic space. Similarly, too small a $K$ loses shared semantics between images and texts, while too large a $K$ introduces noisy information. Although different settings behave differently, the performance of our grounded learning method is relatively stable. In particular, the "1-to-1 map" method achieves much worse results than our grounded learning method under all settings, which validates the effectiveness of our grounded learning method for cross-modal alignment.
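As a rough sketch, the nearest-neighbor assignment behind both the dictionary visualization and the "1-to-1 map" baseline could look like the following (cosine similarity and all array shapes are assumptions for illustration; the paper's Equation 1 is not reproduced here):

```python
import numpy as np

def assign_grounded_tokens(reps, dictionary):
    """Map each representation to the grounded token with the largest similarity.

    reps:       (n, d) array of image-patch or textual-token representations
    dictionary: (C, d) array holding the grounded dictionary
    returns:    (n,) array of grounded-token indices
    """
    reps_n = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    dict_n = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    sims = reps_n @ dict_n.T          # (n, C) cosine similarities
    return sims.argmax(axis=1)        # nearest grounded token per input
```

Under this 1-to-1 mapping, the number of grounded tokens grows with the number of inputs; UNIMO-2's learned grounded tokens instead aggregate many patches and tokens into $K$ slots, which is what shortens the fused sequence.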
Furthermore, our grounded learning method is much more computationally efficient than the "1-to-1 map", as the number of grounded tokens is much smaller, which largely reduces the sequence length during cross-modal fusion.

# B Experimental Settings

Pretraining Datasets The pre-training datasets consist of text corpora, image collections and image-text pairs. The detailed statistics are shown in Table 6.

Finetuning Tasks The multi-modal finetuning tasks include:

- VQA requires the model to answer natural language questions by selecting the correct answer from a multi-choice list based on an image. We conduct experiments on the widely used VQA v2.0 dataset (Goyal et al., 2017), which is built on the COCO (Chen et al., 2015) images. Similar to previous work, both the training and validation sets are used for training for the results on both the test-standard and test-dev splits.

- Image Caption requires the model to generate a natural language description of an image. We report our results on the Microsoft COCO Captions dataset (Chen et al., 2015). Following the Karpathy split (Karpathy and Fei-Fei, 2015), the dataset contains $113.2\mathrm{k} / 5\mathrm{k} / 5\mathrm{k}$ images for the train/val/test splits, respectively.

- Visual Entailment (SNLI-VE) is evaluated on the SNLI-VE dataset (Xie et al., 2019), which was derived from Flickr30K images and the Stanford Natural Language Inference (SNLI) dataset. The task is to determine the logical relationship (i.e., "Entailment", "Neutral" or "Contradiction") between a natural language statement and an image.
![](images/83e783fefa533b95f827367d351489e886132b0eb8446158430e83fbdccceada.jpg)
ID=67: cake, birthday, candles, happy, party

![](images/67705de23040ff0b2ab42346a23cd2ea532a75673abc6286715e4fe976f1ec42.jpg)
ID=74: airplane, flight, flying, plane, aircraft

![](images/f6633ee7c8e482bd8af7e17043ad0d259e4ca291df7c989e19bcad3bf41ad4e4.jpg)
ID=153: kitchen, cook, chefs, food, pans

![](images/6d058966916c5bb686a87d238f5bad053b47336ab950f751cf712af9d60b9288.jpg)
ID=680: glasses, sun, wearing, beach, sunny

![](images/e50fa7658d480929d3f30e10f0d78e25b4ad40a241a86d77e13e348349164aa2.jpg)
ID=885: phone, cell, mobile, cellphone, talking

![](images/41eacaf5853a6d3f1e17b908d9330aae215ae7a075abe9182590f8fdd1535518.jpg)
ID=1211: taxi, transports, city, traffic, street

Figure 3: Visualization of the grounded dictionary learned by UNIMO-2, which groups consistent semantics of image patches and textual tokens. Each grounded token reflects an abstraction of vision-language grounded semantics.
| Setting | Value | ZeroShot-IR (R@1 / R@5 / R@10) | ZeroShot-TR (R@1 / R@5 / R@10) | COCO Caption (B@4 / M / C / S) |
| --- | --- | --- | --- | --- |
| GD Size $C$ | 1024 | 58.52 / 82.19 / 88.92 | 71.10 / 90.14 / 95.17 | 37.58 / 29.18 / 123.53 / 22.23 |
| | 2048 | 60.32 / 84.02 / 89.72 | 75.84 / 91.91 / 95.56 | 37.62 / 29.12 / 123.38 / 22.16 |
| | 4096 | 64.10 / 86.41 / 91.79 | 77.91 / 94.38 / 96.75 | 38.07 / 29.20 / 124.18 / 22.20 |
| | 8192 | 61.20 / 85.29 / 90.73 | 75.84 / 92.50 / 96.15 | 37.86 / 29.03 / 124.23 / 22.33 |
| Top-$K$ | 10 | 57.79 / 82.66 / 89.47 | 69.92 / 91.42 / 95.56 | 37.36 / 28.92 / 122.93 / 22.15 |
| | 20 | 61.46 / 85.07 / 90.75 | 74.46 / 93.10 / 97.34 | 37.90 / 28.81 / 123.68 / 22.03 |
| | 50 | 63.49 / 86.13 / 91.54 | 77.32 / 93.10 / 96.65 | 38.38 / 29.17 / 125.31 / 22.39 |
| | 100 | 60.32 / 84.02 / 89.72 | 75.84 / 91.91 / 95.56 | 37.62 / 29.12 / 123.38 / 22.16 |
| 1-to-1 Map | - | 56.51 / 81.54 / 88.19 | 71.99 / 90.43 / 94.58 | 35.62 / 27.97 / 117.92 / 21.38 |
+ +Table 5: Parameter analysis for grounded learning. The top part validates the influence of GD size $C$ , and the middle part compares the performance of different number of grounded tokens $K$ used during learning. The bottom part shows the effectiveness of our grounded learning method compared with the existing VQ method. + +
| Type | Dataset | #Images | #Texts |
| --- | --- | --- | --- |
| Image-Text Pairs | COCO | 113K | 567K |
| Image-Text Pairs | VG | 108K | 5.41M |
| Image-Text Pairs | CC | 3.01M | 3.01M |
| Image-Text Pairs | SBU | 867K | 867K |
| Unaligned Images | ImageNet21K | 14M | - |
| Unaligned Images | Open Images | 1.7M | - |
| Unaligned Text | BookWiki | - | 16G |
| Unaligned Text | OpenWebText | - | 38G |
+ +Table 6: Statistics of the aligned image-text pairs, and unaligned images and texts for pre-training. + +
| Task | Image Src. | Train | Val | Test (test-standard) | Test (test-dev) |
| --- | --- | --- | --- | --- | --- |
| VQA | COCO | 83K (444K) | 41K (214K) | 81K (107K) | 81K (448K) |
| Image Caption | COCO | 113.2K | 5K | 5K | - |
| Visual Entailment | Flickr30K | 529.5K | 17.9K | 17.9K | - |
| Image-Text Retrieval | Flickr30K | 29K (145K) | 1K (5K) | 1K (5K) | - |

Cells are reported as #Images (#Text).
+ +Table 7: Statistics of the datasets for the cross-modal downstream tasks. + +
| Hyper-params | Textual Tasks | Visual Tasks |
| --- | --- | --- |
| Learning Rate | {1e-5, 2e-5, 3e-5} | {1e-4, 3e-4, 5e-4} |
| Batch Size | {16, 32} | 512 |
| Epochs | 10 | 10 |
| Warmup Ratio | 0.06 | 0.06 |
| Weight Decay | 0.01 | 0.01 |
Table 8: Hyper-parameters for fine-tuning on visual and textual tasks.

- Image-Text Retrieval is evaluated on the Flickr30k dataset (Young et al., 2014), which contains two sub-tasks: image retrieval (Flickr30k-IR) and text retrieval (Flickr30k-TR), depending on which modality is used as the retrieval target. We report the top-K retrieval results on the test sets, including R@1, R@5 and R@10.

The statistics of the datasets for the above multi-modal tasks are described in Table 7. The hyper-parameters for fine-tuning all the downstream tasks, including both the single-modal and cross-modal tasks, are shown in Tables 8 and 9, respectively. The full evaluation results (including R@1, R@5 and R@10) on the Image/Text Retrieval tasks and a comparison with other state-of-the-art VLP methods are shown in Table 10.
| Hyper-parameters | Image-Text Retrieval | SNLI-VE | VQA | COCO Caption |
| --- | --- | --- | --- | --- |
| Batch Size | 32 | 64 | 256 | 32 |
| Epoch | 40 | 10 | 12 | 10 |
| Learning Rate | 5e-6 (epochs 0-24), 5e-7 (24-32), 5e-8 (32-40) | 1e-5 | 4e-5 (epochs 0-5), 4e-6 (6-8), 4e-7 (9-12) | 1e-5 |
| Warmup Ratio | - | 0.06 | - | 0.06 |
| Weight Decay | 0.01 | 0.0 | 0.0 | 0.01 |
+ +Table 9: Hyper-parameters for fine-tuning on cross-modal tasks. + +
| Model | ZeroShot-IR (R@1 / R@5 / R@10) | ZeroShot-TR (R@1 / R@5 / R@10) | Finetuned-IR (R@1 / R@5 / R@10) | Finetuned-TR (R@1 / R@5 / R@10) |
| --- | --- | --- | --- | --- |
| ViLBERT-base | 31.86 / 61.12 / 72.80 | - | 58.20 / 84.90 / 91.52 | - |
| UNITER-base | 66.16 / 88.40 / 92.94 | 80.70 / 95.70 / 98.00 | 72.52 / 92.36 / 96.08 | 85.90 / 97.10 / 98.80 |
| Villa-base | - | - | 74.74 / 92.86 / 95.82 | 86.60 / 97.90 / 99.20 |
| UNIMO-base | 62.44 / 86.16 / 91.68 | 77.40 / 95.10 / 97.80 | 74.66 / 93.40 / 96.08 | 89.70 / 98.40 / 99.10 |
| UNITER-large | 68.74 / 89.20 / 93.86 | 83.60 / 95.70 / 97.70 | 75.56 / 94.08 / 96.76 | 87.30 / 98.00 / 99.20 |
| Villa-large | - | - | 76.26 / 94.24 / 96.84 | 87.90 / 97.50 / 98.80 |
| UNIMO-large | 72.14 / 91.14 / 94.98 | 85.80 / 96.80 / 98.80 | 78.04 / 94.24 / 97.12 | 89.40 / 98.90 / 99.80 |
| ViLT | 51.3 / 79.9 / 81.9 | 69.7 / 91.0 / 96.0 | 62.2 / 87.6 / 93.2 | 83.7 / 97.2 / 98.1 |
| E2E-VLP | - | - | 73.58 / 92.42 / 96.03 | 86.24 / 97.50 / 98.92 |
| SOHO | - | - | 72.5 / 92.7 / 96.1 | 86.5 / 98.1 / 99.3 |
| CLIP | 68.7 / 90.6 / 95.2 | 88.0 / 98.7 / 99.4 | - | - |
| Our Baseline | 65.11 / 87.44 / 92.62 | 78.80 / 94.38 / 97.63 | 78.52 / 94.02 / 96.63 | 91.62 / 98.72 / 99.51 |
| UNIMO-2 | 72.70 / 91.18 / 94.60 | 88.46 / 96.84 / 98.92 | 80.14 / 95.58 / 97.75 | 92.01 / 99.31 / 99.51 |
+ +Table 10: Full evaluation results on the Flickr30k retrieval tasks. \ No newline at end of file diff --git a/unimo2endtoendunifiedvisionlanguagegroundedlearning/images.zip b/unimo2endtoendunifiedvisionlanguagegroundedlearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..5ef8377e0246c971f8fc3cc2330132cf8e08c846 --- /dev/null +++ b/unimo2endtoendunifiedvisionlanguagegroundedlearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e52e19a279ee2ec07d0a2b3229d6645685d6ce5e9ddadc56612b09f441d62a8f +size 1193251 diff --git a/unimo2endtoendunifiedvisionlanguagegroundedlearning/layout.json b/unimo2endtoendunifiedvisionlanguagegroundedlearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..f0d4a5526d43f9539b50d31004f910628da618b0 --- /dev/null +++ b/unimo2endtoendunifiedvisionlanguagegroundedlearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ee9a3c44e4483db7821b124e2758ace8be9470332478260b3d1d7dfc69ca0347 +size 393774 diff --git a/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/22bf9639-18d1-4371-8318-8a1085d4648c_content_list.json b/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/22bf9639-18d1-4371-8318-8a1085d4648c_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..cd06d5682e5bd4d7261fdabae055a3e5c61925a1 --- /dev/null +++ b/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/22bf9639-18d1-4371-8318-8a1085d4648c_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c66a864620809c3eb157afefbf9f58075ccfce14a973399c707ed6c257147036 +size 41126 diff --git a/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/22bf9639-18d1-4371-8318-8a1085d4648c_model.json 
b/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/22bf9639-18d1-4371-8318-8a1085d4648c_model.json new file mode 100644 index 0000000000000000000000000000000000000000..ea271b0a98d90b4cc8ce605a7fee9101a8f53430 --- /dev/null +++ b/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/22bf9639-18d1-4371-8318-8a1085d4648c_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2f485e87163f4f94fbab447e64d1a4670d192483bc7eb003d3471147a47f2c60 +size 48237 diff --git a/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/22bf9639-18d1-4371-8318-8a1085d4648c_origin.pdf b/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/22bf9639-18d1-4371-8318-8a1085d4648c_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..08891e51d2a3390c731476a2ef78d062b3661f99 --- /dev/null +++ b/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/22bf9639-18d1-4371-8318-8a1085d4648c_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e35aea220ceec94a840ea759a1e62972ae6b6b76ec9e462f3fc2166c204c45dd +size 295773 diff --git a/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/full.md b/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..1a4e3e734bcbe691e32672c659df43b7ea02f7c6 --- /dev/null +++ b/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/full.md @@ -0,0 +1,188 @@ +# Unsupervised Word Segmentation with BERT Oriented Probing and Transformation + +Wei Li $^{1*}$ , Yuhan Song $^{2*}$ , Qi Su $^{3}$ , Yanqiu Shao $^{1}$ + +$^{1}$ School of Information Science, Beijing Language and Culture University + +$^{2}$ School of EECS, Peking University + +$^{3}$ School of Foreign Languages, Peking University + +liweitj47@blcu.edu.cn + +{songyuhan, 
sukia}@pku.edu.cn

shaoyanqiu@blcu.edu.cn

# Abstract

Word Segmentation is a fundamental step for understanding many languages. Previous neural approaches for unsupervised Chinese Word Segmentation (CWS) only exploit shallow semantic information, which can miss important context. Large-scale pre-trained language models (PLMs) have achieved great success in many areas. In this paper, we propose to take advantage of the deep semantic information embedded in a PLM (e.g., BERT) in a self-training manner, which iteratively probes and transforms the semantic information in the PLM into explicit word segmentation ability. Extensive experiment results show that our proposed approach achieves a state-of-the-art F1 score on two CWS benchmark datasets. The proposed method can also help understand low-resource languages and protect language diversity. $^{1}$

# 1 Introduction

There exist many low-resource fields and languages where labeled word segmentation data is inaccessible, which makes unsupervised word segmentation desirable. Previous unsupervised word segmentation methods mainly apply statistical models to either evaluate the quality of a possible segmented sequence with discriminative models (e.g., Mutual Information (Chang and Lin, 2003)) or estimate the generative probabilities with generative models (e.g., Hidden Markov Model (Chen et al., 2014)). However, these statistical methods can only make use of limited contextual information, thus yielding less competitive performance.

With the rise of neural networks, researchers have applied neural models to unsupervised word segmentation. Sun and Deng (2018) propose a segmental language model (SLM) to estimate the generative probability with recurrent networks. Although SLM can exploit more contextual information than statistical models, it is still weak in modeling deep semantic information, limited by its model capacity and training data scale.
Pre-trained language models trained on large-scale data have shown superior ability to model contextual information, and achieve great success in various tasks (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2019). Inspired by attempts to interpret BERT (Wu et al., 2020), we propose to take advantage of the semantic representation ability of BERT to evaluate the closeness between characters in a probing manner. To be more specific, we assume that the difference between masking one character and masking several adjacent characters as a whole reveals the closeness between that character and the adjacent ones.

Although this probing-based method can take advantage of the large amount of knowledge embedded in BERT, it only implicitly exploits the representation ability of BERT. To transfer the implicit knowledge into explicit segmentation boundaries, we propose to apply a self-training method that transforms the high-confidence segmentation decisions of the generative method into the traditional "BI" sequence labeling scheme, which is then treated as the supervision signal for a discriminative model.

To combine the advantages of both generative and discriminative models, we propose to iteratively train the discriminative model and the generative model under the supervision signal from their counterparts. To select the model with the best performance in the unsupervised setting, we propose an evaluation module that evaluates the quality of the word boundaries with masked prediction accuracy, based on the assumption that the closer two characters are, the bigger the loss that masking an adjacent character brings.

We conduct experiments on two Chinese Word Segmentation benchmark datasets in an unsupervised manner. Experiment results show that our method can outperform the strong baseline models and achieve state-of-the-art results in unsupervised CWS. Extensive analysis shows the effectiveness of the proposed modules.
We summarize our contributions as follows:

- We propose an unsupervised word segmentation method that segments tokens by probing and transforming a PLM with generative and discriminative modules, which are trained in a mutual-promotion manner and selected for inference with an evaluation module.
- Experiment results show that our proposed method achieves state-of-the-art results in unsupervised CWS. Extensive analysis testifies to the effectiveness of the proposed modules.

# 2 Related Work

Previous unsupervised word segmentation methods can be roughly classified into two categories: generative and discriminative. Generative models focus on finding the segmented sequence with the highest posterior probability. The Hierarchical Dirichlet Process (HDP) model (Goldwater et al., 2009), the Nested Pitman-Yor process (NPY) (Mochihashi et al., 2009), the Hidden Markov Model (HMM) (Chen et al., 2014) and SLM (Sun and Deng, 2018) are all different ways to estimate the generative probabilities of segmented sequences. On the other hand, discriminative models focus on designing a measure to evaluate the segmented sequences. Mutual Information (MI) (Chang and Lin, 2003), normalized Variation of Branching Entropy (nVBE) (Magistry and Sagot, 2012) and ESA (Wang et al., 2011) apply co-occurrence-based measurements to evaluate the segmented sequences.

# 3 Approach

In this section, we describe our BERT-oriented probing and transformation approach to unsupervised word segmentation. Our model mainly consists of three parts: a generative module that suggests plausible word boundaries by probing BERT, a discriminative module that transforms the implicit boundary information into explicit sequence labels, and an evaluation module that estimates the performance of the model in an unsupervised manner.
Algorithm 1: Unsupervised Word Segmentation Procedure

Require: generative module $G$, discriminative module $D$, evaluation module $E$, sequences to be segmented $X$

1. $i = 0$
2. while True:
   1. Segment the sequences $X$ with $G$ into $X^g$
   2. Transform the segmented $X^g$ into "BI" labels
   3. Train $D$ with the high-confidence segmentations in $X^g$
   4. Segment the sequences $X$ with the updated $D$ into $X^d$
   5. Train $G$ with the high-confidence segmentations in $X^d$
   6. Evaluate the segmented sequences $X^d$ with $E$: $e^i = E(X^d)$
   7. if $e^i < e^{i-1}$: return $D^{i-1}$
   8. $i \leftarrow i + 1$

# 3.1 Overview

Because our method works in an unsupervised manner, we obtain the initial word boundary information by probing BERT: the generative module reveals word boundaries by measuring the distance between masking a span as a whole and masking each token separately. This distance reflects the closeness between the masked token and the masked span. Then the discriminative module transforms the word boundaries suggested by the generative module into explicit segmentation labels to enable the self-training process. To combine the advantages of both the generative and discriminative modules, the two modules are iteratively trained on the high-confidence word boundaries suggested by the updated counterpart. To decide when to stop this iterative self-training procedure, an evaluation module is proposed to evaluate the segmented sequences; it early-stops the iterative process and keeps the model parameters that yield the best performance.

# 3.2 Generative Module

The proposed generative module works by probing a pre-trained language model (e.g., BERT) with masks on tokens. Assume the input sequence to be $[x_1, x_2, \dots, x_n]$. We first mask one token at a time in order. The representation at the $i$-th position given
$H_{i,j}$ is the representation given by BERT at $i$ -th position after masking both $x_{i}$ and $x_{j}$ . Note that it is different for the representation at $j$ -th position after masking both $x_{i}$ and $x_{j}$ , which we denote as $H_{j,i}$ . + +The intuition behind the generative model is that we assume if two tokens $x_{i}$ , $x_{j}$ are inherently close and should be combined as a word, the difference between masking $i$ -th and $j$ -th token together and solely masking $i$ -th token should be large, which is reflected by the probing distance $d$ , + +$$ +d = \frac {\left(\left| H _ {i , j} - H _ {i} \right| + \left| H _ {j , i} - H _ {j} \right|\right)}{2} +$$ + +On the contrary, if two tokens are loosely connected, $d$ should be small. This assumption follows the intuition that if $x_{i}$ is largely dependent on $x_{j}$ , masking $x_{j}$ should bring a relatively big influence on the representation. + +This indicator is applied to segment token sequence with a threshold, that is to say, if $d \geq threshold$ , we combine the two tokens $x_{i}$ and $x_{j}$ , if $d \leq threshold$ , we segment $x_{i}$ and $x_{j}$ . + +# 3.3 Discriminative Module + +The generative module can only exploit the implicit segmentation revealed by BERT. Furthermore, it is not very friendly when the word length gets longer. To overcome these drawbacks, we propose to transform the segmentation information provided by the generative module with high confidence into traditional supervised sequence labeling scheme with "BI" labels, which indicates the role (position) of the token to be "beginning" ("B") or "inside" ("I") of a word. + +We train the discriminative module by finetuning BERT on the transformed sequence labels with an additional output layer projecting the representation into "BI" labels. 
Since the results given by the generative module can be noisy, we only adopt the combined words with relatively high confidence, which is realized by strict thresholds for the generative module: if $d \geq threshold_{h}$, we combine the two tokens $x_{i}$ and $x_{j}$; if $d \leq threshold_{l}$, we segment $x_{i}$ and $x_{j}$. Here $threshold_{l}$ denotes the lower bound and $threshold_{h}$ the upper bound; pairs with $d$ between the two bounds are excluded from the training signal.

# 3.4 Iterative Training and Evaluation Module

We assume that the generative module and the discriminative module capture segmentation information from different aspects. Therefore, we
Formally, let $CE_{i-1,i}$ denote the cross-entropy of predicting the $i$-th token $x_{i}$ with BERT's masked language modeling head when the two adjacent tokens $x_{i-1}$ and $x_{i}$ are masked, and define $CE_{i,i+1}$ analogously. We assume that

$$
C E _ {i - 1, i} < C E _ {i, i + 1}
$$

if $x_{i}$ and $x_{i+1}$, rather than $x_{i-1}$ and $x_{i}$, belong to the same word, because $x_{i+1}$ provides more information for the prediction when $x_{i-1}$ and $x_{i}$ are masked.

We apply this principle to inspect the segmentation results from either the discriminative module or the generative module. When the evaluation module detects a performance decline, training stops, and the discriminative module with the best performance is used as the final word segmentation model.

# 4 Experiment

In this section, we report results and analysis on two CWS benchmark datasets, PKU and MSR, provided by the Second International Chinese Word Segmentation Bakeoff (SIGHAN 2005) (Emerson, 2005), for a fair comparison. The test sets of PKU and MSR contain 104K and 107K words respectively.

# 4.1 Settings

In this paper, we use the pre-trained BERT (base) model for Chinese and the corresponding tokenizer
| F1 score | PKU | MSR |
| --- | --- | --- |
| HDP (Goldwater et al., 2009) | 68.7 | 69.9 |
| NPY-3 (Mochihashi et al., 2009) | - | 80.7 |
| NPY-2 (Mochihashi et al., 2009) | - | 80.2 |
| ESA (Wang et al., 2011) | 77.8 | 80.1 |
| nVBE (Magistry and Sagot, 2012) | 80.0 | 81.3 |
| HDP + HMM (Chen et al., 2014) | 75.3 | 76.3 |
| Joint (Chen et al., 2014) | 81.1 | 81.7 |
| SLM-2 (Sun and Deng, 2018) | 80.2 | 78.5 |
| SLM-3 (Sun and Deng, 2018) | 79.8 | 79.4 |
| MSLM (Downey et al., 2021) | 62.9 | - |
| Proposal | 84.1 | 83.0 |
![](images/b05c7285d43de9ca78cc6bdf54f9add459365f980c06a792f56f755383d04a96.jpg)
Figure 1: The relation between the evaluation score and the F1 score on the development set. The evaluation score coheres well with the F1 score. We select the model with the best evaluation score, which also achieves the best F1 score on the development set.

released by Huggingface. The tokenizer splits each sentence into Chinese characters, so no word (segmentation) information is involved. We randomly initialize the discriminative module, which is trained for 2 epochs using sequence labels transformed from the generative module with high confidence. $threshold_{l}$ is set to 8 and $threshold_{h}$ to 12. We use the AdamW (Loshchilov and Hutter, 2019) optimizer with a learning rate of 1e-4.

# 4.2 Results

In Table 1 we show the F1 scores on PKU and MSR. Our model yields much better results than the previous models and achieves state-of-the-art performance. We attribute this to our model's ability to take advantage of a large pre-trained language model, which encodes abundant linguistic knowledge and can better model the context with its large model capacity.

Table 1: F1 scores on two word segmentation benchmark datasets. Our proposed method achieves state-of-the-art performance on both datasets. We take the results reported in the original papers.

| F1 score | PKU | MSR |
| --- | --- | --- |
| Generative Only | 74.8 | 72.5 |
| +Discriminative | 79.7 | 78.3 |
| +Discriminative & iterative | 80.5 | 78.9 |
| +Discriminative & mlm | 82.0 | 82.1 |
| Full Model | 84.1 | 83.0 |

Table 2: Ablation study results. "mlm" means using the MLM loss as a regularizer, as mentioned in Section 3.4. "iterative" means using the iterative training mentioned in Section 3.4. "Full model" means using Discriminative & mlm & iterative training.

Moreover, we observe that the neural model SLM does not outperform the traditional statistical Joint method, but gives better results than the other traditional generative models. This suggests that combining generative and discriminative methods benefits the results. Furthermore, unlike SLM-2 and SLM-3, our model does not need to constrain the maximum word length, which provides more flexibility. This is achieved by the discriminative module, which segments words under the sequence labeling scheme.

# 4.3 Ablation Study

In Table 2 we show the results of removing the designed modules. "Generative only" means we only use the generative module described in Section 3.2, where a hard threshold of 10 decides the word boundaries. "+Discriminative" means we use the discriminative module after learning from the generative module as described in Section 3.3, without iterative training or the MLM loss. The results show that revealing the implicit word boundary information by probing BERT alone only reaches performance comparable to traditional statistical models. Transforming the implicit knowledge into explicit segmentation labels (+Discriminative) gives a large boost, making better use of the large amount of semantic knowledge encoded in the PLM. Moreover, the proposed iterative training and MLM loss further improve the overall performance by combining the advantages of both the generative and discriminative modules.

Effect of Evaluation Module In Figure 1, we show the relation between the evaluation score described in Section 3.4 and the development-set F1 score.
We can see that the model with the best evaluation score also achieves the best F1 score on the development set, and the evaluation score generally tracks the variation of the F1 score. This makes the evaluation score a reasonable indicator for selecting the best model in the unsupervised setting.

# 4.4 Case Study

In Table 3 we show a concrete example of the segmentation results of SLM and our proposed method. Both methods produce largely correct word segments. The disagreement mainly lies in "送交市政府" (give to the city government). Compared with other words, "送交" is relatively rare and bears a very similar meaning to the single character "送", which leads SLM to wrongly split "送交" apart. In contrast, our method builds on BERT trained on a large corpus, which enables our model to recognize such relatively rare words. As for "市政府", where our model chooses to split, we assume this is because similar contexts such as "北京市" (Beijing City) are frequently seen, where "市" should be separated from "政府" (government). Furthermore, separating "市政府" into two words does not affect the understanding of the original text, and is mostly a matter of segmentation granularity.

# 5 Conclusion

In this paper, we propose a BERT-oriented probing and transformation method for unsupervised word segmentation. Our model converts the semantic information encoded in a PLM into word boundary information by probing and transforming the token representations into explicit sequence labels. Experimental results on two benchmark CWS datasets show that our method achieves state-of-the-art F1 scores. The proposed method works in an unsupervised manner, which can help the study of low-resource and endangered languages and thus protect language diversity.

# Acknowledgements

This research project is supported by the National Key R&D Program of China (No. 2019YFC1521200), the National Natural Science Foundation of China (No.
61872402), the Humanities and Social Science Project of the Ministry of Education (No. 17YJAZH068), and the Science Foundation of Beijing Language and Culture University (supported by "the Fundamental Research Funds for the Central Universities") (No. 21YBB19).

# References

Jason S. Chang and Tracy Lin. 2003. Unsupervised word segmentation without dictionary. In ROCLING 2003 Poster Papers, pages 355-359, Hsinchu, Taiwan. The Association for Computational Linguistics and Chinese Language Processing (ACLCLP).
Miaohong Chen, Baobao Chang, and Wenzhe Pei. 2014. A joint model for unsupervised Chinese word segmentation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 854-863, Doha, Qatar. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
C. Downey, Fei Xia, Gina-Anne Levow, and Shane Steinert-Threlkeld. 2021. A masked segmental language model for unsupervised natural language segmentation. ArXiv, abs/2104.07829.
Thomas Emerson. 2005. The second international Chinese word segmentation bakeoff. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing.
Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. 2009. A Bayesian framework for word segmentation: Exploring the effects of context. Cognition, 112(1):21-54.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Pierre Magistry and Benoit Sagot. 2012.
Unsupervised word segmentation: the case for Mandarin Chinese. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 383-387, Jeju Island, Korea. Association for Computational Linguistics.

| Model | Segmentation |
| --- | --- |
| Gold | 她保证,学生们的意见将送交市政府领导机关。 |
| SLM | 她保证,学生们的意见将送交市政府领导机关。 |
| Proposal | 她保证,学生们的意见将送交市政府领导机关。 |

Table 3: Segmentation results of SLM and our proposed method. The gold content can be loosely translated as "She promised that the suggestions of the students would be transferred to the leading agency of the city government."

Daichi Mochihashi, Takeshi Yamada, and Naonori Ueda. 2009. Bayesian unsupervised word segmentation with nested Pitman-Yor language modeling. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 100-108, Suntec, Singapore. Association for Computational Linguistics.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Zhiqing Sun and Zhi-Hong Deng. 2018. Unsupervised neural word segmentation for Chinese via segmental language modeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4915-4920, Brussels, Belgium. Association for Computational Linguistics.
Hanshi Wang, Jian Zhu, Shiping Tang, and Xiaozhong Fan. 2011. A new unsupervised approach to word segmentation. Computational Linguistics, 37(3):421-454.
Zhiyong Wu, Yun Chen, Ben Kao, and Qun Liu. 2020. Perturbed masking: Parameter-free probing for analyzing and interpreting BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4166-4176, Online. Association for Computational Linguistics.
# Unsupervised Natural Language Inference Using PHL Triplet Generation

Neeraj Varshney, Pratyay Banerjee, Tejas Gokhale, Chitta Baral
Arizona State University

{nvarshn2, pbanerj6, tgokhale, cbaral}@asu.edu

# Abstract

Transformer-based models achieve impressive performance on numerous Natural Language Inference (NLI) benchmarks when trained on respective training datasets.
However, in certain cases, training samples may not be available, or collecting them could be time-consuming and resource-intensive. In this work, we address this challenge and present an exploratory study of unsupervised NLI, a paradigm in which no human-annotated training samples are available. We investigate it under three settings, $PH$, $P$, and $NPH$, which differ in the extent of unlabeled data available for learning. As a solution, we propose a procedural data generation approach that leverages a set of sentence transformations to collect PHL (Premise, Hypothesis, Label) triplets for training NLI models, bypassing the need for human-annotated training data. Comprehensive experiments with several NLI datasets show that the proposed approach results in accuracies of up to $66.75\%$, $65.9\%$, and $65.39\%$ in the PH, P, and NPH settings respectively, outperforming all existing unsupervised baselines. Furthermore, fine-tuning our model with as little as $\sim 0.1\%$ of the human-annotated training dataset (500 instances) leads to $12.2\%$ higher accuracy than a model trained from scratch on the same 500 instances. Supported by this superior performance, we conclude with a recommendation for collecting high-quality task-specific data.

# 1 Introduction

Natural Language Inference (NLI) is the task of determining whether a "hypothesis" is true (Entailment), false (Contradiction), or undetermined (Neutral) given a "premise". State-of-the-art models have matched human performance on several NLI benchmarks, such as SNLI (Bowman et al., 2015), Multi-NLI (Williams et al., 2018), and Dialogue NLI (Welleck et al., 2019). This high performance can be partially attributed to the availability of large training datasets: SNLI (570k), Multi-NLI (392k),

![](images/c42bf119bf1ce0f7c4bbf2bf6d5ed11a668ad8c2803f11143924efd03fe2d619.jpg)
Figure 1: Illustrating our procedural data generation approach for unsupervised NLI.
A sentence is treated as a premise, and multiple hypotheses conditioned on each label (Entailment: E, Contradiction: C, Neutral: N) are generated using a set of sentence transformations.

and Dialogue-NLI (310k). For new domains, collecting such training data is time-consuming and can require significant resources. What if no training data were available at all?

In this work, we address the above question and explore unsupervised NLI, a paradigm in which no human-annotated training data is provided for learning the task. We study three different unsupervised settings, $PH$, $P$, and $NPH$, which differ in the extent of unlabeled data available for learning. In the PH-setting, unlabeled premise-hypothesis pairs are available, i.e., data without ground-truth labels. In the P-setting, only a set of premises is available, i.e., unlabeled partial inputs. The third setting, NPH, does not provide access to any training dataset, and is thus the hardest of the three unsupervised settings considered in this work.

![](images/dc267043b1ac4f3a2607316a68ca692e28bcdfac05d1a1ca0e266a2e95bf2281.jpg)
Figure 2: Comparing supervised NLI with our three unsupervised settings. For the unsupervised settings, we procedurally generate PHL triplets to train the NLI model. In the NPH setting, a premise pool is collected from raw text corpora such as Wikipedia and then used for generating PHL triplets. In the P setting, we directly apply these transformations on the available premises. In the PH setting, we leverage the P-setting model to pseudo-label and filter the provided unlabeled PH pairs and then train the NLI model using this pseudo-labeled dataset.

We propose to solve these unsupervised settings using a procedural data generation approach. Given a sentence, our approach treats it as a premise (P)
and generates multiple hypotheses (H) corresponding to each label ($\mathrm{L} = \text{Entailment, Contradiction, or Neutral}$) using a set of sentence transformations (refer to Figure 1). This results in the creation of Premise-Hypothesis-Label (PHL) triplets that can be used for training the NLI model. In the P and PH settings, we directly apply our sentence transformations to the available premises to generate PHL triplets. However, in the NPH setting, premises are not available. We tackle this challenge by incorporating a premise generation step that extracts sentences from various raw text corpora such as Wikipedia and short stories. We use these extracted sentences as premises to generate PHL triplets. In Figure 2, we compare the four settings (one supervised and three unsupervised) and show our approach to developing an NLI model for each setting.

To evaluate the efficacy of the proposed approach, we conduct comprehensive experiments with several NLI datasets. We show that our approach results in accuracies of $66.75\%$, $65.9\%$, and $65.39\%$ on the SNLI dataset in the PH, P, and NPH settings respectively, outperforming all existing unsupervised methods by $\sim 13\%$. We also conduct experiments in low-data regimes where a few human-annotated labeled instances are provided and show that further fine-tuning our models with these instances consistently achieves higher performance than models fine-tuned from scratch. For example, with just 500 labeled instances, our models achieve $8.4\%$ and $10.4\%$ higher accuracy on the SNLI and MNLI datasets respectively. Finally, we show that fine-tuning with 'adversarial' instances instead of randomly selected human-annotated instances further improves the performance of our models; it leads to $12.2\%$ and $10.41\%$ higher accuracy on SNLI and MNLI respectively.

In summary, our contributions are as follows:

1. We explore three unsupervised settings for NLI and propose a procedural data generation approach that outperforms the existing approaches by $\sim 13\%$ and raises the state-of-the-art unsupervised performance on SNLI to $66.75\%$.
2.
We also conduct experiments in low-data regimes and demonstrate that further fine-tuning our models with the provided instances achieves $8.4\%$ and $10.4\%$ higher accuracy on the SNLI and MNLI datasets respectively.
3. Finally, we show that using 'adversarial' instances for fine-tuning instead of randomly selected instances further improves the accuracy. It leads to $12.2\%$ and $10.41\%$ higher accuracy on SNLI and MNLI respectively. Supported by this superior performance, we conclude with a recommendation for collecting high-quality task-specific data.

We release the implementation$^{1}$ of our procedural data generation approach and hope that our work will encourage research on techniques that reduce reliance on expensive human-annotated data for training task-specific models.

# 2 Related Work

Unsupervised Question Answering: The unsupervised paradigm, where no human-annotated training data is provided for learning, has mostly been explored for the Question Answering (QA) task in NLP. The prominent approach synthesizes QA pairs and trains a model on the synthetically generated data. Lewis et al. (2019); Dhingra et al. (2018); Fabbri et al. (2020) propose template-based approaches, while Puri et al. (2020) leverage generative models such as GPT-2 (Radford et al., 2019) to synthesize QA pairs. Banerjee and Baral (2020) create synthetic graphs for commonsense knowledge and propose knowledge triplet learning. Wang et al. (2021) leverage the few-shot inference capability of GPT-3 (Brown et al., 2020) to synthesize training data for SuperGLUE (Wang et al., 2019) tasks. For visual question answering, Gokhale et al. (2020) use template-based data augmentation methods for negation and conjunction, and Banerjee et al. (2021) utilize image captions to generate training data. Gokhale et al. (2021) use linguistic transformations in a distributionally robust optimization setting for vision-and-language inference models.

Unsupervised NLI: In NLI, Cui et al.
(2020) propose a multimodal aligned contrastive decoupled learning method (MACD) and train a BERT-based text encoder. They assign a label (E, C, N) based on the cosine similarity between the representations of the premise and hypothesis learned by their text encoder. Our approach differs from MACD in that we leverage a procedural data generation step based on a set of sentence transformations and do not use data from other modalities. We use MACD as one of the baselines in our experiments.

# 3 Unsupervised NLI

In NLI, a premise-hypothesis pair $(P,H)$ is provided as input and the system needs to determine the relationship $L\in \{\text{Entailment},\text{Contradiction},\text{Neutral}\}$ between $P$ and $H$. In the supervised setting, a labeled dataset $D_{train} = \{(P_i,H_i),L_i\}_{i = 1}^M$ consisting of $M$ instances, usually human-annotated, is available for training. In the unsupervised setting, however, the labels $L_{i}$ are not available, posing a significant challenge for training NLI systems. Along with this standard unsupervised setting (referred to as PH), we consider two novel unsupervised settings (P and NPH) that differ in the extent of unlabeled data available for learning:

PH-setting: This corresponds to the standard unsupervised setting, where an unlabeled dataset of PH pairs $\{(P_i, H_i)\}_{i=1}^M$ is provided.

P-setting: In this setting, only the premises from $D_{train}$, i.e., $\{P_i\}_{i=1}^M$, are provided. It is an interesting setting, as large-scale NLI datasets such as SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018) were collected by presenting only the premises to crowd-workers and asking them to write a hypothesis corresponding to each label. Furthermore, this setting presents a harder challenge for training NLI systems than the PH-setting, as only partial inputs are provided.

NPH-setting: Here, no datasets (even with partial inputs) are provided.
Thus, it corresponds to the hardest unsupervised NLI setting considered in this work. This setting is of interest in scenarios where we need to make inferences on a test dataset but its corresponding training dataset is not available in any form.

From the above formulation, it can be seen that the hardness of the task increases with each successive setting $(\mathrm{PH} \rightarrow \mathrm{P} \rightarrow \mathrm{NPH})$, as less and less information is made available. To address the challenges of each setting, we propose a two-step approach: a pipeline for procedurally generating PHL triplets from the limited information provided in each setting (Section 4), followed by training an NLI model on this procedurally generated data (Section 5). Figure 2 highlights the differences between the four NLI settings (one supervised and three unsupervised) and summarizes our approach to developing an NLI model for each setting.

# 4 PHL Triplet Generation

To compensate for the absence of labeled training data, we leverage a set of sentence transformations and procedurally generate PHL triplets that can be used for training the NLI model. In the P and PH settings, we apply these transformations to the provided premise sentences. In the NPH setting, where premises are not provided, we extract sentences from various raw text corpora and apply the transformations to them to generate PHL triplets.

# 4.1 $\mathcal{P}$ : Premise Generation

We extract sentences from raw text sources, namely COCO captions (Lin et al., 2014), ROC Stories (Mostafazadeh et al., 2016), and Wikipedia, to compile a set of premises for the NPH setting. We use these text sources because they are easily available and contain a large number of diverse sentences from multiple domains.

ROC Stories is a collection of short stories consisting of five sentences each. We include all these sentences in our premise pool. MS-COCO is a dataset consisting of images with five captions each.
We add all captions to our premise pool. From Wikipedia, we segment paragraphs into individual sentences and add them to our premise pool.

We do not perform any sentence filtering during the premise collection process. However, each transformation (described in Subsection 4.2) has its own preconditions, such as the presence of verbs/adjectives/nouns, which automatically filter out sentences from the premise pool that cannot be used for PHL triplet generation.

# 4.2 $\mathcal{T}$ : Transformations

We now present our sentence transformations for each NLI label. Table 1 illustrates examples of PHL triplets generated by these transformations.

# 4.2.1 Entailment:

In NLI, the label is entailment when the hypothesis must be true if the premise is true.

Paraphrasing (PA): Paraphrasing expresses the meaning of a text (a restatement) in other words and hence yields entailment premise-hypothesis pairs. We use the Pegasus (Zhang et al., 2019) tool to generate up to 10 paraphrases of a sentence and use them as hypotheses with the original sentence as the premise$^2$.

Extracting Snippets (ES): We use the dependency parse tree to extract meaningful snippets from a sentence and use them as hypotheses with the original sentence as the premise. Specifically, we extract sub-trees that form a complete phrase or sentence. For example, from the sentence "A person with red shirt is running near the garden", we create the entailing hypotheses "A person is running near the garden", "A person is running", "A person is near the garden", etc. We implement 10 such techniques using spacy (Honnibal et al., 2020)$^2$.

Hypernym Substitution (HS): A hypernym of a word is its supertype; for example, "animal" is a hypernym of "dog". We use WordNet (Miller, 1995) to collect hypernyms and replace noun(s) in a sentence with their corresponding hypernyms to create entailment hypotheses.
For example, from the premise "A black dog is sleeping", we create "A black animal is sleeping". Note that swapping the premise and hypothesis in this case gives us another PH pair with a 'Neutral' relationship.

Pronoun Substitution (PS): Here, we leverage the Part-of-Speech (POS) tagging of spacy to heuristically substitute a noun with its mapped pronoun. For example, substituting "boy" with "he" in the sentence "boy is dancing in arena" results in the entailing hypothesis "he is dancing in arena".

Counting (CT): Here, we count nouns with common hypernyms and use several templates, such as "There are {count} {hypernym}s present", to generate entailing hypotheses. For instance, from the sentence "A motorbike and a car are parked", we create the hypothesis "Two automobiles are parked". We also create contradiction hypotheses using the same templates by simply changing the count value, as in "There are five automobiles present".

# 4.2.2 Contradiction:

The label is contradiction when the hypothesis can never be true if the premise is true.

Contradictory Words (CW): We replace noun(s) and/or adjective(s) (identified using spacy POS tagging) with their corresponding contradictory words. For example, replacing the word 'big' with 'small' in "He lives in a big house" results in the contradictory hypothesis "He lives in a small house". For contradictory adjectives, we collect antonyms from WordNet, and for nouns, we use the function 'most_similar' from gensim (Rehurek and Sojka, 2011)$^2$.
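As an illustration of the CW transformation, here is a minimal sketch; the tiny `CONTRAST` lexicon stands in for the WordNet antonyms and gensim `most_similar` neighbors used in the paper, and the whitespace tokenization replaces spacy POS tagging:

```python
# Toy contrast lexicon standing in for WordNet antonyms (adjectives) and
# gensim most_similar-derived contrastive nouns (illustrative assumption).
CONTRAST = {"big": "small", "dog": "cat", "walking": "driving"}

def contradictory_word_hypothesis(premise_tokens):
    """CW transformation sketch: swap the first token that has a contrastive
    counterpart, yielding a Contradiction hypothesis."""
    for i, tok in enumerate(premise_tokens):
        if tok in CONTRAST:
            hyp = list(premise_tokens)
            hyp[i] = CONTRAST[tok]
            return premise_tokens, hyp, "C"
    return None  # precondition failed: no swappable word in this sentence

p, h, label = contradictory_word_hypothesis("He lives in a big house".split())
print(" ".join(h), label)  # He lives in a small house C
```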
Contradictory Verb (CV): We collect contradictory verbs from gensim and create hypotheses in the following two ways: (i) substituting a verb with its contradictory verb: for example, from "A girl is walking", we create the hypothesis "A girl is driving"; and (ii) selecting other sentences from the premise pool that have the same subject as the original sentence but contain contradictory verbs: for example, sentences like "A young girl is driving fast on the street" and "There is a girl skiing with her mother". The second approach adds diversity to our synthetically generated PHL triplets$^2$.

| Transformation | Original Sentence (Premise) | Hypothesis | Label |
| --- | --- | --- | --- |
| PA | Fruit and cheese sitting on a black plate | There is fruit and cheese on a black plate | E |
| PA + ES + HS | A large elephant is very close to the camera | Elephant is close to the photographic equipment | E |
| CW-noun | Two horses that are pulling a carriage in the street | Two dogs that are pulling a carriage in the street | C |
| CV | A young man sitting in front of a TV | A man in green jersey jumping on baseball field | C |
| PA + CW | A woman holding a baby while a man takes a picture of them | A kid is taking a picture of a male and a baby | C |
| FCon | A food plate on a glass table | A food plate made of plastic on a glass table | N |
| PA + AM | Two dogs running through the snow | The big dogs are outside | N |

Table 1: Illustrative examples of PHL triplets generated from our proposed transformations. E, C, and N correspond to the NLI labels Entailment, Contradiction, and Neutral respectively.

Subject Object Swap (SOS): We swap the subject and object of a sentence to create a contradictory hypothesis. For example, from the sentence "A clock is standing on top of a concrete pillar", we create the contradictory hypothesis "a pillar is standing on top of a concrete clock".

Negation Introduction (NI): We introduce negation into a sentence to create a contradictory hypothesis. For example, from the sentence "Empty fog covered streets in the night", we create the hypothesis "Empty fog did not cover streets in the night".

Number Substitution (NS): Here, we change numbers (tokens with dependency tag 'nummod' in the parse tree) in a sentence. For example, changing 'four' to 'seven' in the sentence "Car has four red lights" results in a contradictory hypothesis.

Irrelevant Hypothesis (IrH): We sample sentences that have different subjects and objects than the premise sentence. For example, for the premise "Sign for an ancient monument on the roadside", we sample "A man goes to strike a tennis ball" as a contradictory hypothesis.

# 4.2.3 Neutral:

The label is neutral when the premise does not provide enough information to classify a PH pair as either entailment or contradiction.

Adding Modifiers (AM): We introduce a relevant modifier for noun(s) in the premise to generate a neutral hypothesis. For instance, in the sentence "A car parked near the fence", we insert the modifier 'silver' for the noun 'car' and create the hypothesis "A silver car parked near the fence". We collect relevant modifiers for nouns by parsing sentences in the premise pool and selecting tokens with dependency tag 'amod' and POS tag 'ADJ'.
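A minimal sketch of the AM transformation; the toy `MODIFIERS` lexicon stands in for the 'amod' adjectives that the paper mines from the premise pool with spacy, and whitespace tokenization replaces dependency parsing:

```python
# Toy modifier lexicon standing in for adjectives mined from the premise
# pool via spacy's 'amod' dependency tag (illustrative assumption).
MODIFIERS = {"car": "silver", "dog": "big", "fence": "wooden"}

def add_modifier_hypothesis(premise_tokens):
    """AM transformation sketch: insert a plausible modifier before the first
    noun that has one, yielding a Neutral hypothesis (the premise neither
    confirms nor contradicts the added detail)."""
    for i, tok in enumerate(premise_tokens):
        if tok in MODIFIERS:
            hyp = premise_tokens[:i] + [MODIFIERS[tok]] + premise_tokens[i:]
            return premise_tokens, hyp, "N"
    return None  # precondition failed: no noun with a known modifier

p, h, label = add_modifier_hypothesis("A car parked near the fence".split())
print(" ".join(h), label)  # A silver car parked near the fence N
```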
ConceptNet (Con): We add relevant information from ConceptNet (Speer et al., 2017) relations ('AtLocation', 'DefinedAs', etc.) to the premise to create a neutral hypothesis. For instance, from the sentence "Bunch of bananas are on a table", we create the hypothesis "Bunch of bananas are on a table at kitchen" using the 'AtLocation' relation.

Same Subject but Non-Contradictory Verb (SSNCV): For a premise, we select sentences from the premise pool that have the same subject as the premise and contain additional noun(s) but no contradictory verbs, and use them as neutral hypotheses. For instance, for the premise "A small child is sleeping in a bed with a bed cover", we sample "A child laying in bed sleeping with a chair near by" as a hypothesis.

We create more examples by swapping the premise and hypothesis of the collected PHL triplets and changing the label accordingly. For instance, swapping $P$ and $H$ in HS, ES, etc. results in neutral examples, while swapping $P$ and $H$ in AM and Con results in entailment examples. Furthermore, we note that the transformations ES, HS, PS, SOS, and NI result in PH pairs with high word overlap between the premise and hypothesis sentences, whereas the transformations PA, CV, IrH, SSNCV, etc. result in PH pairs with low word overlap. In order to add more diversity to the examples, we use composite transformations on the same sentence, such as PA + ES ($L = E$) and PA + CW ($L = C$), as shown in Table 1.

# 4.3 Data Validation

In order to measure the correctness of our procedurally generated PHL triplets, we validate 50 randomly sampled instances for each transformation. We find that nearly all the instances get correct label assignments in the case of the PA, HS, PS, NI, NS, IrH, and AM transformations, while the CW, Con, and SSNCV transformations result in a few mislabeled instances. Specifically, the SSNCV transformation results in the most errors (5). Appendix Section B provides examples of such instances.
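The swapping rule above amounts to a small label-flipping table. Here is a sketch using the E/C/N labels from Table 1; mapping C to C (treating contradiction as symmetric) is an assumption of this sketch, not stated in the text, and the rule applies only to the transformation families listed above:

```python
# Swap premise and hypothesis and flip the label accordingly.
# E -> N and N -> E follow the rules described in the text;
# C -> C is an assumption (contradiction treated as symmetric).
SWAPPED_LABEL = {"E": "N", "N": "E", "C": "C"}

def swap_phl(premise, hypothesis, label):
    """Return the swapped PHL triplet with its adjusted label."""
    return hypothesis, premise, SWAPPED_LABEL[label]
```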
While it is beneficial to have noise-free training examples, obtaining them would require more human effort and increase the data collection cost. Thus, in this work, we study how well we can do solely using the procedurally generated data, without investing human effort in either creating instances or eliminating noise.

# 5 Training NLI Model

In this section, we describe our approach to developing NLI models for each unsupervised setting. Table 13 (in Appendix) shows the sizes of the generated PHL datasets for each setting.

# 5.1 NPH-Setting

We use the Premise Generation function $(\mathcal{P})$ over raw-text sources, namely, COCO captions, ROC stories, and Wikipedia, i.e., $\mathcal{P}(\mathrm{COCO})$, $\mathcal{P}(\mathrm{ROC})$, and $\mathcal{P}(\mathrm{Wiki})$, to compile a set of premises, and apply the transformations $(\mathcal{T})$ over them to generate PHL triplets. We then train a transformer-based 3-class classification model (Section 6.1) on the generated PHL triplets for the NLI task.

# 5.2 P-Setting

In this slightly relaxed unsupervised setting, the premises of the training dataset are provided. We directly apply the transformation functions $(\mathcal{T})$ to the given premises to generate PHL triplets. As in the NPH setting, a 3-class classification model is trained on the generated PHL triplets.

# 5.3 PH-Setting

In this setting, unlabeled training data is provided. We present a 2-step approach to develop a model for this setting. In the first step, we create PHL triplets from the premises and train a model on the generated PHL triplets (same as in the P-setting). In the second step, we pseudo-label the unlabeled PH pairs using the model trained in Step 1.

Here, a naive approach to developing an NLI model would be to train on this pseudo-labeled dataset. This approach is limited by confirmation bias, i.e., overfitting to incorrect pseudo-labels predicted by the model (Arazo et al., 2020).
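All three settings share the same generation step, the composition $\mathcal{T}(\mathcal{P}(\cdot))$. The loop can be sketched as follows; the interface in which each transformation returns a (hypothesis, label) pair or None is an assumption made for illustration:

```python
def generate_phl(premises, transformations):
    """Apply each transformation to each premise, collecting PHL triplets.
    Each transformation maps a premise to (hypothesis, label) or None
    (None when the transformation does not apply to that premise)."""
    triplets = []
    for p in premises:
        for t in transformations:
            result = t(p)
            if result is not None:
                h, label = result
                triplets.append((p, h, label))
    return triplets
```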
We address this by filtering instances from the pseudo-labeled dataset based on the model's prediction confidence. We use the maximum softmax probability (maxProb) as the confidence measure and select only the instances with high prediction confidence for training the final NLI model. This approach is based on prior work (Hendrycks and Gimpel, 2017) showing that correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified examples. Furthermore, we investigate two ways of training the final NLI model:

Augmenting with $\mathcal{T}(P)$: Train using the selected pseudo-labeled dataset and the PHL triplets generated in Step 1.

Further Fine-tune P-Model: Further fine-tune the model obtained in Step 1 with the selected pseudo-labeled dataset instead of fine-tuning one from scratch.

# 6 Experiments

# 6.1 Experimental Setup

Datasets: We conduct comprehensive experiments with a diverse set of NLI datasets: SNLI (Bowman et al., 2015) (sentences derived from a single text genre), Multi-NLI (Williams et al., 2018) (sentences derived from multiple text genres), Dialogue NLI (Welleck et al., 2019) (sentences from dialogue contexts), and Breaking NLI (Glockner et al., 2018) (adversarial instances).

Model: We use the BERT-BASE model (Devlin et al., 2019) with a linear layer on top of the [CLS] token representation for training the 3-class classification model. We train models for 5 epochs with a batch size of 32 and a learning rate ranging from $1e{-}5$ to $5e{-}5$. All experiments are done with Nvidia V100 16GB GPUs.

Baseline Methods: We compare our approach with Multimodal Aligned Contrastive Decoupled learning (MACD) (Cui et al., 2020), the single-modal pre-training model BERT (Devlin et al., 2019), the multi-modal pre-training model LXMERT (Tan and Bansal, 2019), and VilBert (Lu et al., 2019).
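The maxProb selection step can be sketched as follows. The pure-Python softmax here is a stand-in for the classifier's output layer, and the 0.9 threshold is illustrative (the text does not fix a value):

```python
import math

def max_softmax_prob(logits):
    """Maximum softmax probability (maxProb), computed stably."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    return max(exps) / sum(exps)

def filter_by_maxprob(pairs, logits_list, threshold=0.9):
    """Keep pseudo-labeled PH pairs whose prediction confidence is high.
    Returns (premise, hypothesis, argmax label index) triplets."""
    kept = []
    for (p, h), logits in zip(pairs, logits_list):
        if max_softmax_prob(logits) >= threshold:
            label = max(range(len(logits)), key=logits.__getitem__)
            kept.append((p, h, label))
    return kept
```

A confidently predicted pair (one dominant logit) passes the filter, while a near-uniform prediction is discarded, which is exactly the behavior the Hendrycks and Gimpel (2017) observation motivates.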
# 6.2 Results

NPH-Setting: We utilize three raw-text sources, COCO, ROC, and Wikipedia, to compile a premise pool and then generate PHL triplets from those premises. Table 2 shows the accuracy of models in this setting. We use an equal number of PHL triplets (150k, class-balanced) for training the NLI models. We find that the model trained on PHL triplets generated from COCO captions as premises outperforms the ROC and Wikipedia models on all datasets. We attribute this superior performance
| Model | SNLI | MNLI mat. | MNLI mis. | DNLI | BNLI |
|---|---|---|---|---|---|
| BERT* | 35.09 | - | - | - | - |
| LXMERT* | 39.03 | - | - | - | - |
| VilBert* | 43.13 | - | - | - | - |
| T(P(C)) | 64.8 | 49.01 | 50.0 | 50.26 | 74.73 |
| T(P(R)) | 58.51 | 45.44 | 45.93 | 47.4 | 67.9 |
| T(P(W)) | 55.06 | 44.15 | 44.25 | 48.48 | 62.58 |
| T(P(C+R)) | 65.39 | 46.83 | 46.92 | 47.95 | 77.37 |
| T(P(C+R+W)) | 65.09 | 46.63 | 46.83 | 44.74 | 56.11 |

Table 2: Comparing accuracy of models in the NPH-setting. C, R, and W correspond to the premise sources COCO, ROC, and Wikipedia respectively. Results marked with * have been taken from (Cui et al., 2020).
| Approach | SNLI | MNLI mat. | MNLI mis. | DNLI | BNLI |
|---|---|---|---|---|---|
| BERT* | 35.09 | - | - | - | - |
| LXMERT* | 39.03 | - | - | - | - |
| VilBert* | 43.13 | - | - | - | - |
| MACD* | 52.63 | - | - | - | - |
| T(SNLI) | 65.72 | 49.56 | 50.00 | 43.27 | 67.78 |
| +T(P(C)) | 65.36 | 49.91 | 49.24 | 46.25 | 70.07 |
| +T(P(R)) | 65.90 | 48.53 | 48.36 | 44.97 | 66.43 |
Table 3: Comparing accuracy of various approaches in the P-Setting. Results marked with * have been taken from (Cui et al., 2020). Note that we utilize the premises of the SNLI training dataset only but evaluate on SNLI (in-domain), and MNLI, DNLI, BNLI (out-of-domain).

to the short, simple, and diverse sentences present in COCO, which resemble the premises of SNLI that were collected from the Flickr30K (Plummer et al., 2015) dataset. In contrast, Wikipedia contains lengthy and compositional sentences, resulting in premises that differ from those present in SNLI, MNLI, etc. Furthermore, we find that combining the PHL triplets of COCO and ROC leads to a slight improvement in performance on the SNLI $(65.39\%)$ and BNLI $(77.37\%)$ datasets.

P-Setting: Cui et al. (2020) presented MACD, which performs multi-modal pretraining using COCO and Flickr30K caption data for the unsupervised NLI task. It achieves $52.63\%$ on the SNLI dataset. Our approach outperforms MACD and the other single-modal and multi-modal baselines by $\sim 13\%$ on SNLI, as shown in Table 3. We also experiment with adding PHL triplets generated from COCO and ROC to the training dataset, which further improves the accuracy to $65.90\%$ and establishes a new state-of-the-art performance in this setting.
| Method | Data | SNLI | MNLI mat. | MNLI mis. |
|---|---|---|---|---|
| From Scratch | MaxProbFilt | 66.67 | 53.37 | 55.17 |
| From Scratch | MaxProbFilt+T(P) | 66.75 | 50.22 | 50.37 |
| Finetune P-model | MaxProbFilt | 65.60 | 52.97 | 53.44 |
Table 4: Comparing accuracy of our proposed approaches in the PH-Setting. Note that the models are trained using PH pairs only from the SNLI train-set but are also evaluated on MNLI (an out-of-domain dataset).

PH-Setting: Here, we first pseudo-label the given unlabeled PH pairs using the P-model and then select instances based on the maximum softmax probability (Section 5.3). We refer to this set of selected instances as the MaxProbFilt dataset. This approach results in an accuracy of $66.67\%$ on the SNLI dataset, as shown in Table 4. We investigate two more approaches to training the NLI model. In the first approach, we train using MaxProbFilt and the PHL triplets generated from premises. In the second approach, we further fine-tune the P-model with the MaxProbFilt dataset. We find that the first approach slightly improves the accuracy to $66.75\%$. This also represents our best performance across all the unsupervised settings. Furthermore, we observe improvements on the out-of-domain datasets as well ($53.37\%$ and $55.17\%$ on the MNLI matched and mismatched datasets, respectively).

# 6.3 Low-Data Regimes

We also conduct experiments in low-data regimes where a few labeled instances are provided. We select these instances from the training dataset of SNLI/MNLI using the following two strategies:

Random: Here, we randomly select instances from the corresponding training dataset. Further fine-tuning our NPH model with the selected instances consistently achieves higher performance than models fine-tuned from scratch, as shown in Table 5. With just 500 SNLI instances, i.e., $\sim 0.1\%$ of the training dataset, our models achieve $8.4\%$ and $8.32\%$ higher accuracy on SNLI (in-domain) and MNLI (out-of-domain), respectively. Furthermore, with 500 MNLI instances, our models achieve $10.37\%$ and $18.07\%$ higher accuracy on MNLI (in-domain) and SNLI (out-of-domain), respectively.
Adversarial: Here, we select those instances from the training dataset on which the NPH model makes incorrect predictions. This is similar to the ad
| Training Dataset | Method | 100 (SNLI) | 100 (MNLI) | 200 (SNLI) | 200 (MNLI) | 500 (SNLI) | 500 (MNLI) | 1000 (SNLI) | 1000 (MNLI) | 2000 (SNLI) | 2000 (MNLI) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| SNLI | BERT | 44.62 | 37.36 | 48.97 | 34.71 | 58.54 | 44.01 | 65.36 | 37.24 | 72.51 | 45.59 |
| SNLI | NPH (Random) | 64.82 | 49.72 | 65.06 | 50.48 | 66.97 | 52.33 | 70.61 | 56.75 | 73.7 | 59.0 |
| SNLI | NPH (Adv.) | 68.21 | 51.93 | 69.23 | 56.55 | 70.85 | 58.46 | 73.62 | 59.47 | 74.31 | 60.43 |
| MNLI | BERT | 35.12 | 36.01 | 35.14 | 36.58 | 46.16 | 47.1 | 47.64 | 56.21 | 53.68 | 63.3 |
| MNLI | NPH (Random) | 63.87 | 52.85 | 63.87 | 53.61 | 64.23 | 57.47 | 65.62 | 60.42 | 66.87 | 62.89 |

Table 5: Comparing performance of various methods on in-domain and out-of-domain datasets in low-data regimes (100-2000 training instances). 'BERT' method corresponds to fine-tuning BERT over the provided instances from SNLI/MNLI, 'NPH (Random)' corresponds to further fine-tuning our NPH model with the randomly sampled instances from SNLI/MNLI, 'NPH (Adv.)' corresponds to further fine-tuning our NPH model with the adversarially selected instances from SNLI/MNLI.
| Approach | Δ Accuracy |
|---|---|
| NPH model | 64.8% |
| - CV | -5.88% |
| - CW | -3.07% |
| - SSNCV | -2.63% |
| - Neg. | -0.70% |
| - IrH | -0.50% |
| - PS | -0.00% |
Table 6: Ablation Study of transformations in the NPH-Setting. Each row corresponds to the drop in performance on the SNLI dataset when trained without PHL triplets created using that transformation.

versarial data collection strategy (Nie et al., 2020; Kiela et al., 2021), where instances that fool the model are collected. Here, we do not simply fine-tune our NPH model with the adversarial examples, as that would lead to catastrophic forgetting (Carpenter and Grossberg, 1988). We tackle this by including 20000 randomly sampled instances from the generated PHL triplets and fine-tuning on the combined dataset. This further takes the performance to $70.85\%$ and $58.46\%$ on SNLI and MNLI, respectively, with 500 instances.

# 6.4 Analysis

Ablation Study: We conduct an ablation study to understand the contribution of individual transformations to NLI performance. Table 6 shows the performance drop observed on removing the PHL triplets created using a single transformation in the NPH-Setting. We find that Contradictory Words (CW) and Contradictory Verbs (CV) lead to the maximum drop in performance, $5.88\%$ and $3.07\%$ respectively. In contrast, the Pronoun Substitution (PS) transformation does not impact the performance significantly. Note that this does not imply
| Setting | Metric | C | E | N |
|---|---|---|---|---|
| NPH | Precision | 0.65 | 0.71 | 0.6 |
| NPH | Recall | 0.68 | 0.77 | 0.51 |
| P | Precision | 0.66 | 0.72 | 0.58 |
| P | Recall | 0.67 | 0.78 | 0.52 |
| PH | Precision | 0.64 | 0.74 | 0.60 |
| PH | Recall | 0.73 | 0.77 | 0.50 |

Table 7: Precision and Recall values achieved by our models under each unsupervised setting.
| NC | RS | SNLI-RS | SNLI-NC |
|---|---|---|---|
| 84.22 | 50.07 | 58.59 | 75.39 |
Table 8: Performance of our NPH model on Names-Changed (NC) and Roles-Switched (RS) adversarial test sets (Mitra et al., 2020).

that this transformation is not effective; rather, it means that the evaluation dataset (SNLI) does not contain instances requiring this transformation.

NC and RS Evaluation: We evaluate our model on the NER-Changed (NC) and Roles-Switched (RS) datasets presented in (Mitra et al., 2020), which test the ability to distinguish entities and roles. Our model achieves high performance on these datasets: specifically, $84.22\%$ on NC and $75.39\%$ on SNLI-NC, as shown in Table 8.

Label-Specific Analysis: Table 7 shows the precision and recall values achieved by our models. We observe that our models perform better on Entailment and Contradiction than on Neutral examples. This suggests that neutral examples are relatively more difficult. We provide examples of instances where our model makes incorrect predictions and conduct an error analysis in the Appendix.

# 7 Conclusion and Discussion

We explored three different settings in unsupervised NLI and proposed a procedural data generation approach that outperformed the existing unsupervised methods by $\sim 13\%$. Then, we showed that fine-tuning our models with a few human-authored instances leads to a considerable improvement in performance. We also experimented with using adversarial instances instead of randomly selected instances for this fine-tuning step and showed that it further improves the performance. Specifically, in the presence of just 500 adversarial instances, the proposed method achieved $70.85\%$ accuracy on SNLI, $12.2\%$ higher than the model trained from scratch on the same 500 instances.

This improvement in performance suggests the possibility of an alternative data collection strategy that not only results in high-quality data instances but is also resource efficient.
Using a model-in-the-loop technique has been shown to be effective for adversarial data collection (Nie et al., 2020; Kiela et al., 2021; Li et al., 2021; Sheng et al., 2021; Arunkumar et al., 2020). In these techniques, a model is first trained on a large dataset, and then humans are instructed to create adversarial samples that fool the model into making incorrect predictions, requiring the crowd-sourcing effort twice. In our method, however, a dataset designer can develop a set of simple functions (or transformations) to procedurally generate training data for the model and can directly instruct humans to create adversarial samples to fool the trained model. This is resource efficient and allows dataset designers to control the quality of their dataset.

# Ethical Considerations

We use existing public-domain text corpora such as Wikipedia, ROC Stories, and MS-COCO, and follow the protocol to use and adapt research data to generate our weakly-labeled dataset. We will release the code to generate our dataset. Any bias observed in NLI systems trained using our methods can be attributed to the source data and our transformation functions. However, no particular sociopolitical bias is emphasized or reduced specifically by our methods.

# Acknowledgements

We thank the anonymous reviewers for their insightful feedback. This research was supported by the DARPA SAIL-ON and DARPA CHESS programs. The views and opinions of the authors expressed herein do not necessarily state or reflect those of the funding agencies and employers.

# References

Eric Arazo, Diego Ortego, Paul Albert, Noel E O'Connor, and Kevin McGuinness. 2020. Pseudo-labeling and confirmation bias in deep semi-supervised learning. In 2020 International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE.
Anjana Arunkumar, Swaroop Mishra, Bhavdeep Sachdeva, Chitta Baral, and Chris Bryan. 2020.
Real-time visual feedback for educative benchmark creation: A human-and-metric-in-the-loop workflow. +Pratyay Banerjee and Chitta Baral. 2020. Self-supervised knowledge triplet learning for zero-shot question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 151-162, Online. Association for Computational Linguistics. +Pratyay Banerjee, Tejas Gokhale, Yezhou Yang, and Chitta Baral. 2021. WeaQA: Weak supervision via captions for visual question answering. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 3420-3435, Online. Association for Computational Linguistics. +Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Computational Linguistics. +Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc. +Gail A. Carpenter and Stephen Grossberg. 1988. The art of adaptive pattern recognition by a self-organizing neural network. Computer, 21(3):77-88. +Wanyun Cui, Guangyu Zheng, and Wei Wang. 2020. Unsupervised natural language inference via decoupled multimodal contrastive learning. 
In Proceedings of the 2020 Conference on Empirical Methods + +in Natural Language Processing (EMNLP), pages 5511-5520, Online. Association for Computational Linguistics. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Bhuwan Dhingra, Danish Danish, and Dheeraj Rajagopal. 2018. Simple and effective semi-supervised question answering. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 582-587, New Orleans, Louisiana. Association for Computational Linguistics. +Alexander Fabbri, Patrick Ng, Zhiguo Wang, Ramesh Nallapati, and Bing Xiang. 2020. Template-based question generation from retrieved sentences for improved unsupervised question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4508-4513, Online. Association for Computational Linguistics. +Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that require simple lexical inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650-655, Melbourne, Australia. Association for Computational Linguistics. +Tejas Gokhale, Pratyay Banerjee, Chitta Baral, and Yezhou Yang. 2020. Vqa-lol: Visual question answering under the lens of logic. In European conference on computer vision, pages 379-396. Springer. +Tejas Gokhale, Abhishek Chaudhary, Pratyay Banerjee, Chitta Baral, and Yezhou Yang. 2021. 
Semantically distributed robust optimization for vision-and-language inference. +Dan Hendrycks and Kevin Gimpel. 2017. A baseline for detecting misclassified and out-of-distribution examples in neural networks. Proceedings of International Conference on Learning Representations. +Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spaCy: Industrialstrength Natural Language Processing in Python. +Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, and Adina Williams. 2021. Dynabench: Rethinking benchmarking in NLP. In + +Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4110-4124, Online. Association for Computational Linguistics. +Patrick Lewis, Ludovic Denoyer, and Sebastian Riedel. 2019. Unsupervised question answering by cloze translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4896-4910, Florence, Italy. Association for Computational Linguistics. +Linjie Li, Jie Lei, Zhe Gan, and Jingjing Liu. 2021. Adversarial vqa: A new benchmark for evaluating the robustness of vqa models. In International Conference on Computer Vision (ICCV). +Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollar, and Larry Zitnick. 2014. Microsoft coco: Common objects in context. In ECCV. European Conference on Computer Vision. +Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc. +George A Miller. 1995. Wordnet: a lexical database for english. 
Communications of the ACM, 38(11):39-41. +A. Mitra, Ishan Shrivastava, and Chitta Baral. 2020. Enhancing natural language inference using new and expanded training data sets and new learning models. In AAAI. +Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839-849, San Diego, California. Association for Computational Linguistics. +Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4885-4901, Online. Association for Computational Linguistics. +Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In Proceedings of the IEEE international conference on computer vision, pages 2641-2649. + +Raul Puri, Ryan Spring, Mohammad Shoeybi, Mostofa Patwary, and Bryan Catanzaro. 2020. Training question answering models from synthetic data. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5811-5826, Online. Association for Computational Linguistics. + +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. + +Radim Rehurek and Petr Sojka. 2011. Gensim-python framework for vector space modelling. NLP Centre, Faculty of Informatics, Masaryk University, Brno, Czech Republic, 3(2). + +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. 
Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computational Linguistics.

Sasha Sheng, Amanpreet Singh, Vedanuj Goswami, Jose Alberto Lopez Magana, Wojciech Galuba, Devi Parikh, and Douwe Kiela. 2021. Human-adversarial visual question answering.

Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31.

Hao Tan and Mohit Bansal. 2019. LXMERT: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5100-5111, Hong Kong, China. Association for Computational Linguistics.

Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In NeurIPS.

Zirui Wang, Adams Wei Yu, Orhan Firat, and Yuan Cao. 2021. Towards zero-label language learning. arXiv preprint arXiv:2109.09193.

Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2019. Dialogue natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3731-3741, Florence, Italy. Association for Computational Linguistics.

Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana.
Association for Computational Linguistics.

Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2019. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization.

# Appendix

# A Transformations

In this section, we provide details about the proposed sentence transformations.

# A.1 Entailment

Table 9 shows examples of our transformations.

Paraphrasing (PA): Paraphrasing is an effective way of creating entailment examples, since a hypothesis that is simply a paraphrased version of the premise is always entailed. Furthermore, since the Pegasus tool is trained for abstractive text summarization, it often removes some information from the original sentence while paraphrasing. For instance, a paraphrase of the sentence "A boy is playing with a red ball" could be "Boy is playing with a ball". This prevents us from using the paraphrased sentence as the premise with the original sentence as the hypothesis, as the resulting $PH$ pair does not represent an entailment scenario (it is neutral in this case). It is non-trivial to detect such instances in an automated way. Hence, in order to avoid noisy examples, we only use the original sentence as the premise and the paraphrased sentences as hypotheses. We also explore back-translation (Sennrich et al., 2016), but it often results in noisy outputs and provides less diversity than the Pegasus tool. Hence, we use only the Pegasus tool for generating paraphrases of sentences.

Extracting Snippets (ES): Here, we provide details of the techniques used for extracting snippets from a text. Note that we use the dependency parse tree of the sentence to select/skip tokens when creating the hypothesis.

(i) We skip modifiers (tokens with dependency tag amod) that have no children in the parse tree. For example, from the sentence "The male surfer is riding a small wave", we create "The surfer is riding a small wave", "The male surfer is riding a wave", and "The surfer is riding a wave" as entailing hypotheses.
(ii) Similar to the previous technique, we skip adverb modifiers (advmod). For example, from the sentence "A very beautiful girl is standing outside the park", we create the entailment hypothesis "A beautiful girl is standing outside the park".

(iii) We skip adjectives that do not have the dependency tag conj and have no children in the parse tree. For example, from the sentence "A middle-aged man in a beige vest is sleeping on a wooden bench," we create "A middle-aged man in a vest is sleeping on a bench."

(iv) In another technique, we select the root token and all the tokens to the left of it. If this results in the selection of at least 3 tokens and one of them is a verb, we consider it a valid sentence and use it as an entailing hypothesis. For example, from the sentence "The male surfer is riding a small wave", we create "surfer is riding".

Hypernym Substitution (HS): Examples of hypernyms:

'alcohol': ['beverage', 'drink']
'apple': ['fruit']
'axe': ['edge tool']
'banana': ['fruit']
etc.

Pronoun Substitution (PS): For words in the list ['man', 'boy', 'guy', 'lord', 'husband', 'father', 'boyfriend', 'son', 'brother', 'grandfather', 'uncle'], we use 'he'/'someone'/'they', etc., and for words in the list ['woman', 'girl', 'lady', 'wife', 'mother', 'daughter', 'sister', 'girlfriend', 'grandmother', 'aunt'], we use 'she'/'someone'/'they', etc. In other cases, we use the pronoun 'they' or 'someone' or 'somebody'.

Counting (CT): We provide examples of templates we use to create counting hypotheses:

"There are {count} {hypernym} present",
"{count} {hypernym} are present",
"Several {hypernym} present",
"There are multiple {hypernym} present",
"There are more than {count} {hypernym} present",
"There are at least {count} {hypernym} present",
etc.

We also substitute the hypernym in the original sentence directly to create hypotheses, as shown in Table 9.
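Instantiating the counting templates is a one-line formatting step; a minimal sketch restricted to the templates listed above:

```python
# Counting (CT) templates from the appendix; the template set here is
# abbreviated to the examples given in the text.
COUNT_TEMPLATES = [
    "There are {count} {hypernym} present",
    "{count} {hypernym} are present",
    "There are multiple {hypernym} present",
    "There are more than {count} {hypernym} present",
    "There are at least {count} {hypernym} present",
]

def counting_hypotheses(count, hypernym):
    """Fill each template with the counted hypernym, e.g. 'two' people."""
    return [t.format(count=count, hypernym=hypernym) for t in COUNT_TEMPLATES]
```

For the premise "A man and woman setup a camera", counting with the hypernym 'people' and count 'two' produces hypotheses like the "Two people setup a camera" example in Table 9.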
# A.2 Contradiction

Table 10 shows examples of our transformations.

Contradictory Words (CW): For contradictory adjectives, we collect antonyms from WordNet, and for contradictory nouns, we use the 'most_similar' function from the gensim (Rehurek and Sojka, 2011) library, which returns words close (but distinct) to a given word$^2$. For instance, it returns words like 'piano', 'flute', and 'saxophone' when given the word 'violin'. In order to filter out the inflected forms of the same word or its synonyms from the list returned by the most_similar function, we remove words that have high semantic textual similarity (STS) with the given word. This step removes noisy contradictory word pairs to a large extent. Here, we provide examples of contradictory words:

'stove': ['heater']

'cucumber': ['onion', 'carrot', 'melon', 'turnip', 'eggplant', 'watermelon', 'radish']

'motorcycle': ['truck', 'scooter', 'car']

'kitchen': ['bedroom', 'bathroom', 'toilet']

etc.

Contradictory Verb (CV): We provide examples of contradictory verbs:

'stand': ['sprint', 'cycle', 'drive', 'jump', 'sit', etc.]

'play': ['sleep', 'cry', 'fight', 'drink', 'hunt', etc.]

'smile': ['cry', 'anger', 'frown', etc.]

etc.

# A.3 Neutral

Table 11 shows examples of our transformations.

Adding Modifiers (AM): We provide examples of modifiers collected using our approach:

'metal': ['large', 'circular', 'galvanized', 'heavy', 'dark', etc.]

'vegetable': ['steamed', 'cruciferous', 'green', 'uncooked', 'raw', etc.]

'park': ['quiet', 'neglected', 'vast', 'square', 'crowded', etc.]

etc.

ConceptNet: We use the ConceptNet relations 'AtLocation', 'DefinedAs', etc., and insert the node connected by these relations into the sentence, resulting in a neutral hypothesis.
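The similarity-based filtering of most_similar neighbors can be sketched without the gensim dependency. Here a character-overlap Jaccard score is a toy stand-in for the STS model used in the paper, and the 0.8 threshold is illustrative:

```python
def char_jaccard(a, b):
    """Toy similarity stand-in: character-set overlap between two words."""
    sa, sb = set(a.lower()), set(b.lower())
    return len(sa & sb) / len(sa | sb)

def filter_contradictory_words(word, neighbors, similarity=char_jaccard,
                               threshold=0.8):
    """Drop neighbors that are near-duplicates of `word` (inflections,
    synonyms) so only genuinely contradictory words remain."""
    return [n for n in neighbors
            if n.lower() != word.lower() and similarity(word, n) < threshold]
```

With the 'violin' example from the text, the inflected form 'violins' is filtered out while distinct instruments such as 'piano' and 'flute' survive as contradictory candidates.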
| Category | Original Sentence (Premise) | Hypothesis |
| --- | --- | --- |
| PA | Fruit and cheese sitting on a black plate. | There is fruit and cheese on a black plate. |
| ES | person relaxes at home while holding something. | person relaxes while holding something. |
| HS | A girl is sitting next to a blood hound. | A girl is sitting next to an animal. |
| PS | People are walking down a busy city street. | they are walking down a busy city street |
| CT | A man and woman setup a camera. | Two people setup a camera |
| Composite | A large elephant is very close to the camera. | elephant is close to the photographic equipment. |

Table 9: Illustrative examples of entailment transformations.
| Category | Original Sentence (Premise) | Hypothesis |
| --- | --- | --- |
| CW-noun | A small bathroom with a sink under a cabinet. | a small kitchen with a sink under a cabinet. |
| CW-adj | A young man is doing a trick on a surfboard. | A old man is doing a trick on a surfboard. |
| CV | A couple pose for a picture while standing next to a couch. | A couple sit in a chair on laptops |
| SOS | A man is flying a kite on the beach. | a beach is flying a kite on the man |
| NS | Two green traffics lights in a European city. | nine green traffics lights in a European city |
| IrH | A flock of sheep grazing in a field. | A man having fun as he glides across the water. |
| NI | A boy with gloves on a field throwing a ball. | a boy with gloves on a field not throwing a ball |
| Composite | A woman holding a baby while a man takes a picture of them | a kid is taking a picture of a male and a baby. |

Table 10: Illustrative examples of contradiction transformations.
| Category | Original Sentence (Premise) | Hypothesis |
| --- | --- | --- |
| AM | two cats are eating next to each other out of the bowl | two cats are eating next to each other out of the same bowl |
| SSNCV | A man holds an electronic device over his head. | man is taking photo with a small device |
| FCon | a food plate on a table with a glass. | a food plate on a table with a glass which is made of plastic. |
| Composite | two dogs running through the snow. | The big dogs are outside. |

Table 11: Illustrative examples of neutral transformations.
| Trans. | Premise | Hypothesis | Assigned Label | True Label |
| --- | --- | --- | --- | --- |
| PS | Two dogs on leashes sniffing each other as people walk in a outdoor market | Two dogs on leashes sniffing each other as they walk in a market | E | N |
| CT | Adult woman eating slice of pizza while standing next to building | There are 2 humans present | E | C |
| CW | Meal with meat and vegetables served on table | There is a meal with cheese and vegetables | C | N |
| SSNCV | A person riding skis down a snowy slope | A person riding skis in a body of water | N | C |
| SSNCV | A person on a skateboard jumping up into the air | A person jumping up in the air on a snow-board | N | C |
| CV | A male surfer riding a wave on the ocean | A surfer is surfing in the ocean near some swimmers | C | N |

Table 12: Examples of mis-labeled PHL triplets generated by our transformations.
Transformation T | NPH-Setting: T(P(C)), T(P(R)), T(P(W)) | P-Setting: T(SNLI)
Raw Sentences591490600548
PA50833072273475
ES236519687516
PS374113738
CT258243
Neg.117511752053990
CW978119116265
CV1149635505
NS731622491
SOS42818022976
AM1048125535327
SSNCV136327405
+ +Table 13: Sizes of PHL triplet datasets generated by our transformations for the unsupervised settings. All numbers are in thousands. C, R, W denote COCO, ROC Stories, and Wikipedia respectively. For P-Setting, we show stats for SNLI dataset. We do not include PH-Setting in this table because we leverage the PHL triplets generated using the P-Setting to solve it as described in Section 5.3. + +# B Data Validation + +Table 12 shows examples of mis-labeled instances generated by our transformations. + +# C Training NLI Model + +Table 13 shows sizes of the generated PHL datasets for each setting. \ No newline at end of file diff --git a/unsupervisednaturallanguageinferenceusingphltripletgeneration/images.zip b/unsupervisednaturallanguageinferenceusingphltripletgeneration/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..0cac50dd0268bd0940c7896a386f4d3fae3b80a8 --- /dev/null +++ b/unsupervisednaturallanguageinferenceusingphltripletgeneration/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:71d38dec59b0d412bde51569d23b5416427d58b205e1f6e6d107ad5a315948a6 +size 601308 diff --git a/unsupervisednaturallanguageinferenceusingphltripletgeneration/layout.json b/unsupervisednaturallanguageinferenceusingphltripletgeneration/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..9f2c87786e108e03fdb806d6c25446534cf90b6f --- /dev/null +++ b/unsupervisednaturallanguageinferenceusingphltripletgeneration/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:069dc08cb3572cc0c0e93f436ff172330fa77eadc635f0b3d46b8b894ae62e37 +size 435665 diff --git a/unsupervisedpreferenceawarelanguageidentification/63217194-2d95-47cc-9b34-7637fa3c2268_content_list.json b/unsupervisedpreferenceawarelanguageidentification/63217194-2d95-47cc-9b34-7637fa3c2268_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..0526c5889728d26f88510e51f2ce5becf2cad627 
--- /dev/null +++ b/unsupervisedpreferenceawarelanguageidentification/63217194-2d95-47cc-9b34-7637fa3c2268_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61291092ddc3d1587535f3768eb218af7dda279a46b90db248eea703e8f33c7d +size 42398 diff --git a/unsupervisedpreferenceawarelanguageidentification/63217194-2d95-47cc-9b34-7637fa3c2268_model.json b/unsupervisedpreferenceawarelanguageidentification/63217194-2d95-47cc-9b34-7637fa3c2268_model.json new file mode 100644 index 0000000000000000000000000000000000000000..72ad2209651fd15cf9e0f63373c62769a276623b --- /dev/null +++ b/unsupervisedpreferenceawarelanguageidentification/63217194-2d95-47cc-9b34-7637fa3c2268_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:29e5a79ef09522364c92a2676dc82faab173e933ba8e8ed8a68cca64c3f42685 +size 52867 diff --git a/unsupervisedpreferenceawarelanguageidentification/63217194-2d95-47cc-9b34-7637fa3c2268_origin.pdf b/unsupervisedpreferenceawarelanguageidentification/63217194-2d95-47cc-9b34-7637fa3c2268_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c5dfe73032cca0ce837b083d20f702e73c4e2379 --- /dev/null +++ b/unsupervisedpreferenceawarelanguageidentification/63217194-2d95-47cc-9b34-7637fa3c2268_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3c0140a5df3ed31c1c5647e3a77c5204e45dd6a9f6b5a3b4f0136406c6901f42 +size 487522 diff --git a/unsupervisedpreferenceawarelanguageidentification/full.md b/unsupervisedpreferenceawarelanguageidentification/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a5b4ef10c53a16c8b0ecc6c0eeda4ade35fbf208 --- /dev/null +++ b/unsupervisedpreferenceawarelanguageidentification/full.md @@ -0,0 +1,166 @@ +# Unsupervised Preference-Aware Language Identification + +Xingzhang Ren Baosong Yang* Dayiheng Liu Haibo Zhang Xiaoyu Lv Liang Yao Jun Xie DAMO Academy, Alibaba Group + +{xingzhang.rxz, yangbaosong.ybs, 
liudayiheng.1dyh, zhanhui.zhb, anzhi.lxy, yaoliang.yl, qingjing.xj}@alibaba-inc.com

# Abstract

Recognizing the language of ambiguous texts has become a main challenge in language identification (LID). When using multilingual applications, users have their own language preferences, which can be regarded as external knowledge for LID. Nevertheless, current studies do not consider such inter-personal variations, due to the lack of user-annotated training data. To fill this gap, we introduce preference-aware LID and propose a novel unsupervised learning strategy. Concretely, we construct a pseudo training set for each user by extracting training samples from a standard LID corpus according to his/her historical language distribution. In addition, we contribute the first user-labeled LID test set, called "U-LID". Experimental results reveal that our model can capture user traits and significantly outperforms existing LID systems on handling ambiguous texts. Our code and benchmark have been released.

# 1 Introduction

Language identification (LID) is widely applied in a range of web services where a multitude of languages may be present, such as translation systems, search engines, and social media (Yao et al., 2020a; Sun et al., 2020; Li et al., 2020; Bi et al., 2020; Xu et al., 2021). It predicts the natural language that a text is written in, and decides which language-specific model to invoke in downstream natural language processing (NLP) tasks (Lui et al., 2014; Yao et al., 2020b; Tambi et al., 2020).

Several recent studies have tackled LID by designing a feature set for a traditional or neural classifier (Kocmi and Bojar, 2017; Vo and Khoury, 2020; Jauhiainen et al., 2021). However, these approaches rely solely on textual information, regardless of external knowledge about the user. In a real-world scenario, there exists a large amount of
| User Input Text | Label | Prefer. | Baseline | Ours |
| --- | --- | --- | --- | --- |
| velo | es (veil) | es | en | es |
| velo | fr (bike) | fr | en | fr |
| fundas huawei y7 | es (huawei y7 cases) | es | en | es |
| kello kitty | en (hello kitty) | de | it | en |
Table 1: Examples of ambiguous texts that are difficult to recognize accurately. "Label" shows the language label that is annotated by a user and conforms to his/her input intention. "Prefer." denotes the language most frequently used by the corresponding user. "Baseline" and "Ours" indicate the predictions of the baseline LID system and the proposed model, respectively.

ambiguous user inputs, such as texts with false friends, code-switching, and misspellings, as shown in Table 1. On the one hand, the languages of these texts are difficult (even impossible) to identify explicitly without external knowledge. On the other hand, for different users, a good LID system should flexibly give different results for the same ambiguous input, thus conforming to the users' intentions (Lin et al., 2021). Classifying ambiguous user inputs thus remains a main challenge in LID (Xia et al., 2010; Stiller et al., 2010).

When drawing on a multilingual NLP application, every person has his/her own accustomed languages. This historical behavior implicitly mirrors the user's language preference and can be exploited for LID. To this end, we propose a task named preference-aware LID, where the historical language distribution of a user is leveraged for the disambiguation of mistakable texts and guides LID to predict different languages for different users.

A major bottleneck for this task lies in the lack of well-labeled training data. In particular, it is infeasible to obtain a large amount of ambiguous texts labeled with different languages by different users. To overcome this issue, we propose a novel unsupervised strategy that builds synthetic data for each user by sampling natural training examples according to his/her historical language distribution.

We build our model upon Transformer (Vaswani et al., 2017) and introduce two kinds of extensions. One directly revises the predicted probability of LID using the user language preference.
To maintain robustness, the other encodes the user traits as an inductive bias.

Our models are trained on a publicly available dataset extracted from Wikipedia. To evaluate their effectiveness, we construct a user-driven LID test set, "U-LID". The benchmark consists of 21 languages, each of which contains 500 examples collected from a real-world translation system and labeled by users. Extensive analyses demonstrate the superiority and robustness of our approach in recognizing error-prone cases.

# 2 Preliminary

Problem Formulation Given an input text $X$, the vanilla LID model with parameters $\theta$ predicts the probability of the language $y$ by $P(y|X;\theta)$. As an extension of conventional LID, preference-aware LID considers the traits of each user, thus facilitating the classification of ambiguous texts. In this paper, we treat the language preference of a user as external knowledge, which can be implicitly embodied in the historical language distribution $D^{(u)}$ of user $u$. Consequently, our task aims to model $P(y^{(u)}|X,D^{(u)};\theta)$, as illustrated in Figure 1.

User Annotated Test Set To assess the effectiveness of the proposed method, we construct a preference-aware LID test set called "U-LID". Each instance is represented as a triplet $\langle X, D^{(u)}, y^{(u)} \rangle$. The samples are collected from a real-world translation system, Alibaba Translate. We mine user-annotated data as follows: Given a user input, the translation system first returns a predicted language label and the associated translation results. When the user is dissatisfied with the prediction, he/she may change the predicted language label. We argue that this operation not only reflects the user's intention concerning the language, but also implies that the classification of the current input is error-prone. Accordingly, we collect texts whose predicted labels are revised by users.
The test set is further manually checked and carefully desensitized by linguistic experts to maintain data quality. Finally, the benchmark consists of 21 languages and 11,031 samples.

![](images/4ab1339eb1e367741354bca6050c8293528dd1c2e0de9a3548d435274fe68400.jpg)
Figure 1: Illustration of the preference-aware LID task. The input text "basket" is a false friend in English and French. Our model considers the user language preference $D^{(u)}$, thus being able to identify ambiguous texts and generate distinct results for different users.

The average word count per sample is 2.08, and the average character count is 13.27.

# 3 Methodology

# 3.1 Preference-Aware Model

Our model is built upon the advanced neural model Transformer (Vaswani et al., 2017). Given an input query $X$, the output token representations can be formally expressed as: $Z = \operatorname{Transformer}(X)$.

The final probability distribution is calculated by an output layer:

$$
Y = \operatorname {softmax} \left(W _ {o} \bar {Z} + b _ {o}\right), \tag {1}
$$

where $\bar{Z}$ denotes the mean of the token representations $Z$. $W_{o} \in \mathbb{R}^{L \times H}$ and $b_{o} \in \mathbb{R}^{L}$ are trainable parameters, with $H$ being the hidden size and $L$ the number of languages. $\operatorname{softmax}(\cdot)$ normalizes the scores into a probability distribution over labels.

We propose two types of preference-aware models that incorporate the user language preference into LID:

Revision-Based Model Intuitively, we can directly multiply the output $Y$ by the user language preference $D^{(u)}$. The final distribution is revised as:

$$
Y ^ {(u)} = \operatorname {softmax} \left(Y D ^ {(u)}\right). \tag {2}
$$

In this paradigm, we regard $D^{(u)}$ as a reviser at training time.
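Equations 1 and 2 can be made concrete with a short NumPy sketch. This is illustrative only: the parameters are randomly initialized rather than trained, and $YD^{(u)}$ is read as an element-wise product of the two $L$-dimensional vectors:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
L, H = 4, 8                              # number of languages, hidden size
W_o = rng.normal(size=(L, H))            # output-layer weights (untrained here)
b_o = np.zeros(L)
z_bar = rng.normal(size=H)               # mean token representation of the input

y = softmax(W_o @ z_bar + b_o)           # Eq. (1): vanilla LID distribution
d_u = np.array([0.7, 0.2, 0.05, 0.05])   # user's historical language distribution
y_u = softmax(y * d_u)                   # Eq. (2): preference-revised distribution
```

Since the revision in Eq. (2) only rescales an already-computed distribution, it can be applied to a frozen LID model at inference time.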
Note that the revision-based model can also be applied in a plug-and-play fashion, without any model training.

![](images/333ba11de3b63c62cc08042aa3b69093079026e1617a2c207ac277aee76bd5ac.jpg)
Figure 2: Illustration of the construction of synthetic data. We use the smoothed language preference of a user to sample examples from the standard training corpus.

Representation-Based Model A natural alternative is to encode the language preference into a representation, which then serves as an inductive bias in the output layer. Here, we assign $L$ trainable language embeddings $W_{e}\in \mathbb{R}^{L\times L}$. The user representation is the weighted sum of the language embeddings according to the user language distribution: $W_{e}D^{(u)}$. We modify Equation 1 as follows:

$$
Y ^ {(u)} = \operatorname {softmax} \left(W _ {o} \bar {Z} + W _ {e} D ^ {(u)} + b _ {o}\right). \tag {3}
$$

# 3.2 Unsupervised Training

The main challenge of our task lies in the lack of user-annotated training data. It is hard to construct a large number of training examples in the triplet form $\langle X, D^{(u)}, y^{(u)} \rangle$. Although we construct a test set by mining user operations on switching languages, this kind of approach depends on expensive manual review due to the massive noise.

To tackle this problem, we propose a novel unsupervised training strategy, as illustrated in Figure 2. In an existing LID training corpus $T$, each text is labeled with a language. Given the user historical language distribution $D^{(u)}$, we sample a subset $T^{(u)}$ from $T$ and guarantee that the language distribution of $T^{(u)}$ is consistent with $D^{(u)}$. Nevertheless, most people only use one or two languages, making their historical distributions concentrated on a few languages. Directly utilizing $D^{(u)}$ to sample training examples may cause an overconfidence problem. Firstly, the model may tend to overlook either the user information or the input text.
Secondly, texts whose language frequency is relatively low in $D^{(u)}$ may fail to be correctly classified, especially for languages not appearing in the user's historical inputs. Accordingly, we borrow the idea of up-sampling (Pereyra et al., 2017; Wan et al., 2022) in our approach. The final sampling distribution is calculated as:

$$
S ^ {(u)} = \operatorname {softmax} ((1 - \alpha) D ^ {(u)} + \alpha / L). \tag {4}
$$

Here, we set $\alpha = 0.01$ and collect 100 examples for each user by default. Besides, to maintain robustness and cope with the situation where a user's historical input is empty or inaccessible, we treat the uniform distribution as $D^{(u)}$ and supplement the synthetic corpus with the same number of standard training examples.

# 4 Experiments

# 4.1 Experimental Setting

Data Setting We collect 100 thousand (K) users from the log of the real-world translation system Alibaba Translate. For the standard LID corpus $T$, we follow Vo and Khoury (2020) and extract natural training data from the released datasets: the W2C corpus (Majlis and Zabokrtsky, 2012), the Common Crawl corpus (Schafer, 2016), and Tatoeba (Tiedemann and Thottingal, 2020). Finally, $T$ consists of 21 languages, each of which contains 5 million (M) samples. We evaluate models on the U-LID test set. Moreover, to investigate the robustness of our methods on the conventional LID task, we further use a publicly available test set, KB-21, from Kocmi and Bojar (2017), with a subset of 21 languages. KB-21 consists of 2,100 samples; the average numbers of words and characters per sample are 4.47 and 34.90, respectively.

Implementation Details We follow the Base model setting of Vaswani et al. (2017), except that the number of layers is set to 1 for computational efficiency.
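The construction of the synthetic set $T^{(u)}$ in Section 3.2 (smoothing with Equation 4, then sampling) can be sketched as follows. This is a minimal illustration with an invented toy corpus and invented helper names, not the released code:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sampling_distribution(d_u, alpha=0.01):
    # Eq. (4): smooth the user's historical distribution toward uniform
    d_u = np.asarray(d_u, dtype=float)
    return softmax((1 - alpha) * d_u + alpha / len(d_u))

def build_synthetic_set(corpus_by_lang, d_u, n=100, alpha=0.01, seed=0):
    """Sample n (text, label) pairs whose language mix follows the smoothed D^(u)."""
    rng = np.random.default_rng(seed)
    langs = list(corpus_by_lang)
    s_u = sampling_distribution(d_u, alpha)
    picked = rng.choice(len(langs), size=n, p=s_u)
    return [(str(rng.choice(corpus_by_lang[langs[i]])), langs[i]) for i in picked]

# Toy two-language corpus, invented for illustration.
toy = {"en": ["hello world", "good morning"], "fr": ["bonjour", "merci beaucoup"]}
synthetic = build_synthetic_set(toy, d_u=[0.9, 0.1], n=100)
```

The smoothing term $\alpha / L$ guarantees that every language keeps a nonzero sampling probability even when a user's history covers only one or two languages.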
To avoid out-of-vocabulary problems, we follow existing LID approaches in using character-based embeddings (Jauhiainen et al., 2019), with the vocabulary size set to 15K.

In this study, a 1-layer TRANSFORMER model serves as the baseline. We reimplement the widely used text classification models FASTTEXT (Joulin et al., 2017) and TEXTCNN (Kim, 2014), as well as the recent LID approach ATTENTIONCNN (Vo and Khoury, 2020), as listed in Table 2. In addition, we reproduce the state-of-the-art Naive Bayes model (Jauhiainen et al., 2021) from the VarDial 2021 task (Chakravarthi et al., 2021). Moreover, we also examine popular LID systems on our LID tasks,
| Model | U-LID | KB-21 |
| --- | --- | --- |
| *Existing LID Systems* | | |
| Langid.py (Lui and Baldwin, 2012) | 63.52 | 91.33 |
| LanideNN (Kocmi and Bojar, 2017) | 67.23 | 92.71 |
| *Reimplemented Models* | | |
| NAIVE BAYES (Jauhiainen et al., 2021) | 60.53 | 89.91 |
| FASTTEXT (Joulin et al., 2017) | 59.25 | 88.69 |
| TEXTCNN (Kim, 2014) | 61.58 | 91.24 |
| ATTENTIONCNN (Vo and Khoury, 2020) | 62.16 | 91.41 |
| *Ours* | | |
| TRANSFORMER (Baseline) | 67.35 | 92.81 |
| +Revision-Based Model | 89.23†† | 91.19 |
| +without training | 84.79†† | 92.81 |
| +Representation-Based Model | 88.74†† | 93.09† |
Table 2: Classification accuracy (ACC) on the test sets. For reference, when directly taking the user's preferred language as the predicted result, the ACC on U-LID is 66.42. The proposed preference-aware LID models show significant improvements on the U-LID task. Results of neural-based models are averaged over 5 independent runs; "†" and "††" indicate that the improvement over TRANSFORMER is statistically significant ($p < 0.05$ and $p < 0.01$, respectively), estimated by bootstrap sampling (Koehn, 2004).

including Langid.py $^5$ (Lui and Baldwin, 2012) and LanideNN $^6$ (Kocmi and Bojar, 2017).

For training, we used the Adam optimizer (Kingma and Ba, 2015) with the same learning rate schedule as Vaswani et al. (2017) and 8K warmup steps. Each batch consists of 1,024 examples, and the dropout rate is set to a constant 0.1. Models are trained on a single Tesla P100 GPU.

For the compared models, we use 1-3 grams to extract characters and words for FASTTEXT (Joulin et al., 2017). For TEXTCNN (Kim, 2014), we apply six filters with sizes 3, 3, 4, 4, 5, 5 and a hidden size of 512. For computational efficiency, a 1-layer network is used by default where applicable. Other configurations of our implementations are the same as the common settings described in the corresponding literature or released source code.

# 4.2 Results

The results are summarized in Table 2. Our models significantly outperform the compared methods by 17%-22% accuracy on the U-LID task, indicating the effectiveness of utilizing user information. Specifically, treating the user's language preference as a reviser performs best on U-LID, while

![](images/0f7e0d8c630d37e14281da3542fd7b14e29054bc16d06f270a3edcf2ceb2b299.jpg)
Figure 3: Effects of the number of historical inputs on U-LID. The representation-based model is more robust.
We attribute this to the overconfidence of revision-based model on user historical language distribution, which weakens the learning of LID model on original text classification. It is encouraging to see that revision-based model without training can yields considerable result on U-LID, in the meanwhile, does not affect the quality on KB-21 by feeding the uniform historical distribution. By contrast, representation-based model alleviates the overconfidence problem and achieves good performance in both U-LID and KB-21. Accordingly, we use representation-based model as the default setting in subsequent analyses. + +# 4.3 Analysis + +Robustness Analysis User's language preference greatly affects our model. The less the user historical inputs, the higher the uncertainty of user preference is. Accordingly, the robustness of our model is necessary to be assessed. We plot Figure 3 to show the effects of the number of historical inputs. Obviously, revision-based model yields lower accuracy when there exists relatively bare user historical information, verifying our hypothesis that the model suffers from the problem of overconfidence on historical language distribution. On the contrary, representation-based model draws a more smooth line, which demonstrates its robustness. + +Qualitative Analysis Table 1 shows several identification results. In the first two cases, "velo" is a Spanish and French false-friend. The third example is code-switching in which "huawei y7" is a mobile phone module, preceded by a Spanish word which means "case". For the last case, "kello" presents a misspelled English word "hello". Results indicate that vanilla LID model fails to correctly identify these cases, while our model can exactly predict distinct results that conform to the user intention. + +# 5 Conclusion + +We explore preference-aware LID. Major contributions of our work are four-fold: 1) We introduce preference-aware LID task that leverages user language preference to improve LID. 
We hope our work can attract more attention to techniques on this topic; 2) We propose a novel unsupervised strategy that guides the model to take the user's historical language distribution into account; 3) We collect U-LID and make it publicly available, which may contribute to subsequent research on LID; and 4) Extensive analyses indicate the effectiveness and robustness of our method, verifying that LID can profit from personalized information to make its results conform to the user's intention.

# Acknowledgments

We thank the anonymous reviewers for valuable comments. This research was supported by the National Key R&D Program of China under Grant No. 2018YFB1403202.

# References

Tianchi Bi, Liang Yao, Baosong Yang, Haibo Zhang, Weihua Luo, and Boxing Chen. 2020. Constraint translation candidates: A bridge between neural query translation and cross-lingual information retrieval. CoRR, abs/2010.13658.
Andrea Ceolin. 2021. Comparing the performance of CNNs and shallow models for language identification. In Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects, pages 102-112.
Bharathi Raja Chakravarthi, Gaman Mihaela, Radu Tudor Ionescu, Heidi Jauhiainen, Tommi Jauhiainen, Krister Lindén, Nikola Ljubesic, Niko Partanen, Ruba Priyadharshini, Christoph Purschke, et al. 2021. Findings of the VarDial evaluation campaign 2021. In Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects, pages 1-11.
Tommi Jauhiainen, Heidi Jauhiainen, and Krister Lindén. 2021. Naive Bayes-based experiments in Romanian dialect identification. In Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects, pages 76-83.
Tommi Jauhiainen, Marco Lui, Marcos Zampieri, Timothy Baldwin, and Krister Lindén. 2019. Automatic language identification in texts: A survey. Journal of Artificial Intelligence Research, 65:675-782.
Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomás Mikolov. 2017. Bag of tricks for efficient
Bag of tricks for efficient + +text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 2: Short Papers, pages 427-431. Association for Computational Linguistics. +Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar; A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1746-1751. ACL. +Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. +Tom Kocmi and Ondrej Bojar. 2017. Lanidenn: Multilingual language identification on character window. CoRR, abs/1701.03338. +Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In EMNLP. +Juntao Li, Chang Liu, Jian Wang, Lidong Bing, Hongsong Li, Xiaozhong Liu, Dongyan Zhao, and Rui Yan. 2020. Cross-lingual low-resource set-to-description retrieval for global e-commerce. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8212-8219. AAAI Press. +Huan Lin, Liang Yao, Baosong Yang, Dayiheng Liu, Haibo Zhang, Weihua Luo, Degen Huang, and Jinsong Su. 2021. Towards user-driven neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4008-4018. +Marco Lui and Timothy Baldwin. 2012. 
langid.py: An off-the-shelf language identification tool. In *The 50th Annual Meeting of the Association for Computational Linguistics*, Proceedings of the System Demonstrations, July 10, 2012, Jeju Island, Korea, pages 25-30. The Association for Computer Linguistics. +Marco Lui, Joy Han Lau, and Timothy Baldwin. 2014. Automatic detection and language identification of multilingual documents. Trans. Assoc. Comput. Linguistics, 2:27-40. +Martin Majlis and Zdenek Zabokrtsky. 2012. Language richness of the web. In Proceedings of the Eighth International Conference on Language Resources and Evaluation, LREC 2012, Istanbul, + +Turkey, May 23-25, 2012, pages 2927-2934. European Language Resources Association (ELRA). +Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, and Geoffrey E. Hinton. 2017. Regularizing neural networks by penalizing confident output distributions. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings. OpenReview.net. +Roland Schäfer. 2016. Commoncow: Massively huge web corpora from commoncrawl data and a method to distribute them freely under restrictive EU copyright laws. In Proceedings of the Tenth International Conference on Language Resources and Evaluation LREC 2016, Portooroz, Slovenia, May 23-28, 2016. European Language Resources Association (ELRA). +Juliane Stiller, Maria Gäde, and Vivien Petras. 2010. Ambiguity of queries and the challenges for query language detection. In CLEF 2010 LABs and Workshops, Notebook Papers, 22-23 September 2010, Padua, Italy, volume 1176 of CEUR Workshop Proceedings. CEUR-WS.org. +Shuo Sun, Suzanne Sia, and Kevin Duh. 2020. Clireval: Evaluating machine translation as a cross-lingual information retrieval task. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, ACL 2020, Online, July 5-10, 2020, pages 134-141. Association for Computational Linguistics. 
+Ritiz Tambi, Ajinkya Kale, and Tracy Holloway King. 2020. Search query language identification using weak labeling. In Proceedings of The 12th Language Resources and Evaluation Conference, LREC 2020, Marseille, France, May 11-16, 2020, pages 3520-3527. European Language Resources Association. +Jörg Tiedemann and Santhosh Thottingal. 2020. OPUS-MT - building open translation services for the world. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, EAMT 2020, Lisboa, Portugal, November 3-5, 2020, pages 479-480. European Association for Machine Translation. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008. +Duy-Tin Vo and Richard Khoury. 2020. Language identification on massive datasets of short messages using an attention mechanism CNN. In IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM 2020, The Hague, Netherlands, December 7-10, 2020, pages 16-23. IEEE.
+Liang Yao, Baosong Yang, Haibo Zhang, Boxing Chen, and Weihua Luo. 2020a. Domain transfer based data augmentation for neural query translation. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4521-4533, Barcelona, Spain (Online). International Committee on Computational Linguistics. +Liang Yao, Baosong Yang, Haibo Zhang, Weihua Luo, and Boxing Chen. 2020b. Exploiting neural query translation into cross lingual information retrieval. CoRR, abs/2010.13659. \ No newline at end of file diff --git a/unsupervisedpreferenceawarelanguageidentification/images.zip b/unsupervisedpreferenceawarelanguageidentification/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..faf17715fa5cfafcf1d8fb20a08519eeb829e773 --- /dev/null +++ b/unsupervisedpreferenceawarelanguageidentification/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:030679b25a0a9fb880bdf021317035c94cdf0f577fa645dc0955688688383e88 +size 186365 diff --git a/unsupervisedpreferenceawarelanguageidentification/layout.json b/unsupervisedpreferenceawarelanguageidentification/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..f38120428ce6581fe621693d63a2556d8d1bbc62 --- /dev/null +++ b/unsupervisedpreferenceawarelanguageidentification/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2822fe03ea1b1594fc2940cc8581ded790e4c3ad9369ed0501b0e90071d66431 +size 203869 diff --git a/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/c466d470-f7d8-427f-91ca-dd8d761f94e5_content_list.json b/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/c466d470-f7d8-427f-91ca-dd8d761f94e5_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..5daa0fa36c097d768ccc5ae646267f124ed33f7a --- /dev/null +++ 
b/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/c466d470-f7d8-427f-91ca-dd8d761f94e5_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5b421d7074f03099632420a148cb3303fc19d71103235f52a2e9fa81d477936b +size 82699 diff --git a/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/c466d470-f7d8-427f-91ca-dd8d761f94e5_model.json b/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/c466d470-f7d8-427f-91ca-dd8d761f94e5_model.json new file mode 100644 index 0000000000000000000000000000000000000000..ad7a64238eb17e752e1986904dd18d50788a01f5 --- /dev/null +++ b/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/c466d470-f7d8-427f-91ca-dd8d761f94e5_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:baef046b6ae08f15d464534d4cba9052b36bc3492d1e0da06a705afcb86228f3 +size 98164 diff --git a/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/c466d470-f7d8-427f-91ca-dd8d761f94e5_origin.pdf b/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/c466d470-f7d8-427f-91ca-dd8d761f94e5_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..cdc74890123de19d7fb80c9064334cd14413ae3b --- /dev/null +++ b/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/c466d470-f7d8-427f-91ca-dd8d761f94e5_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e8b9e4b9f694cc4598f15a67f2570fd07bfb35fcf254065eb51b5b1291bd707f +size 1510862 diff --git a/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/full.md 
b/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c4c306a33dd4dbfd8c84b84e22684141fac43e1c --- /dev/null +++ b/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/full.md @@ -0,0 +1,326 @@ +# Using Interactive Feedback to Improve the Accuracy and Explainability of Question Answering Systems Post-Deployment + +Zichao Li $^{1}$ , Prakhar Sharma $^{2}$ , Xing Han Lu $^{1}$ , Jackie C.K. Cheung $^{1}$ , Siva Reddy $^{1}$ + +$^{1}$ Mila, McGill University + +$^{2}$ University of California, Los Angeles + +zichao.li@mila.quebec + +# Abstract + +Most research on question answering focuses on the pre-deployment stage; i.e., building an accurate model for deployment. In this paper, we ask the question: Can we improve QA systems further post-deployment based on user interactions? We focus on two kinds of improvements: 1) improving the QA system's performance itself, and 2) providing the model with the ability to explain the correctness or incorrectness of an answer. We collect a retrieval-based QA dataset, FEEDBACKQA, which contains interactive feedback from users. We collect this dataset by deploying a base QA system to crowdworkers who then engage with the system and provide feedback on the quality of its answers. The feedback contains both structured ratings and unstructured natural language explanations. We train a neural model with this feedback data that can generate explanations and re-score answer candidates. We show that feedback data not only improves the accuracy of the deployed QA system but also other stronger non-deployed systems. 
The generated explanations also help users make informed decisions about the correctness of answers.$^{1}$

# 1 Introduction

Much of the recent excitement in question answering (QA) is in building high-performing models with carefully curated training datasets. Datasets like SQuAD (Rajpurkar et al., 2016), NaturalQuestions (Kwiatkowski et al., 2019) and CoQA (Reddy et al., 2019) have enabled rapid progress in this area. Most existing work focuses on the pre-deployment stage; i.e., training the best QA model before it is released to users. However, this is only one stage in the potential lifecycle of a QA system.

In particular, an untapped resource is the large amount of user interaction data produced after the initial deployment of the system. Gathering this data should in practice be relatively cheap, since users genuinely engage with QA systems (such as Google) for information needs and may provide feedback to improve their results.$^{2}$

Exploiting this kind of user interaction data presents new research challenges, since it typically consists of a variety of weak signals. For example, user clicks could indicate answer usefulness (Joachims, 2002), users could give structured feedback in the form of ratings to indicate usefulness (Stiennon et al., 2020), or they could give unstructured feedback in natural language explanations of why an answer is correct or incorrect. User clicks have been widely studied in the field of information retrieval (Joachims, 2002). Here we study the usefulness of interactive feedback in the form of ratings and natural language explanations.

Whilst there are different variants of QA tasks, this paper focuses primarily on retrieval-based QA (RQA; Chen et al. 2017; Lee et al. 2019). Given a question and a set of candidate answer passages, a model is trained to rank the correct answer passage the highest.
In practice, when such a system is deployed, a user may engage with the system and provide feedback about the quality of the answers. Such feedback is called interactive feedback. Due to the lack of a dataset containing interactive feedback for RQA, we create FEEDBACKQA.

FEEDBACKQA is a large-scale English QA dataset containing interactive feedback in two forms: user ratings (structured) and natural language explanations (unstructured) about the correctness of an answer. Figure 1 shows an example from FEEDBACKQA. The dataset construction has two stages: We first train an RQA model on the questions and passages, then deploy it on a crowdsourcing platform. Next, crowdworkers engage with this system and provide interactive feedback. To make our dataset practically useful, we focus on

![](images/9e4dc7b531b23f49d96a2254c5c68033606320bbcf7a3e72eb972ba3129a9478.jpg)
Figure 1: Users interact with the deployed QA model and give feedback. Feedback contains a rating (bad, good, could be improved, excellent) and a natural language explanation.

question answering on public health agencies for the Covid-19 pandemic. The base model for FEEDBACKQA is built on 28k questions and 3k passages from various agencies. We collect 9k interactive feedback data samples for the base model.

We investigate the usefulness of the feedback for improving the RQA system in terms of two aspects: answer accuracy and explainability. Specifically, we are motivated by two questions: 1) Can we improve the answer accuracy of RQA models by learning from the interactive feedback? and 2) Can we learn to generate explanations that help humans to discern correct and incorrect answers?

To address these questions, we use feedback data to train models that rerank the original answers as well as provide an explanation for the answers.
Our experiments show that this approach not only improves the accuracy of the base QA model for which feedback is collected but also that of other strong models for which feedback data is not collected. Moreover, we conduct human evaluations to verify the usefulness of explanations and find that the generated natural language explanations help users make informed and accurate decisions on accepting or rejecting answer candidates.

Our contributions are as follows:

1. We create the first retrieval-based QA dataset containing interactive feedback.
2. We demonstrate a simple method of using the feedback data to increase the accuracy and explainability of RQA systems.
3. We show that the feedback data not only improves the deployed model but also a stronger non-deployed model.

# 2 FEEDBACKQA Dataset

Recently, there have been efforts to collect feedback data in the form of explanations for natural language understanding tasks (Camburu et al. 2018; Rajani et al. 2019, inter alia). These contain explanations only for ground-truth predictions for a given input sampled from the training data, without any user-system interaction. Instead, we collect user feedback after deploying an RQA system, thereby collecting feedback for both correct and incorrect predictions. Table 1 presents a comprehensive comparison of FEEDBACKQA and existing natural language understanding (NLU) datasets with explanation data.

# 2.1 Dataset collection

In order to collect post-deployment feedback as in a real-world setting, we divide the data collection into two stages: pre-deployment (of an RQA model) and post-deployment.

Stage 1: Pre-deployment of a QA system We scrape Covid-19-related content from the official websites of WHO, US Government, UK Government, Canadian government, $^{3}$ and Australian government. We extract the questions and answer passages in the FAQ section.
To scale up the dataset, we additionally clean the scraped pages and extract additional passages for which we curate corresponding questions using crowdsourcing as if users were asking questions. We present details on this annotation process in Appendix A. We use this dataset to train a base RQA model for each source separately and deploy them. For the base model, we use a BERT-based dense retriever (Karpukhin + +
| Datasets | Task | Feedback Type | Interactive Feedback | Feedback for incorrect predictions |
| --- | --- | --- | --- | --- |
| e-SNLI (Camburu et al., 2018) | NLI | Free-form | ✗ | ✗ |
| CoS-E (Rajani et al., 2019) | Commonsense QA | Free-form | ✗ | ✗ |
| LIAR-PLUS (Alhindi et al., 2018) | Fact checking | Free-form | ✗ | ✗ |
| QED (Lamm et al., 2021) | Reading comprehension | Structured | ✗ | ✗ |
| NExT (Wang et al., 2019) | Text classification | Structured | ✗ | ✗ |
| FEEDBACKQA | Retrieval-based QA | Structured & Free-form | ✓ | ✓ |
+ +Table 1: Comparison of FEEDBACKQA with existing NLU datasets containing feedback in the form of structured representations (according to a schema) or natural language explanations (free-form). + +
| | #Passages | #Questions | #Feedback |
| --- | --- | --- | --- |
| Australia | 584 | 1783 | 2264 |
| Canada | 587 | 8844 | – |
| UK | 956 | 2874 | 3668 |
| US | 598 | 13533 | 2628 |
| WHO | 226 | 688 | 874 |
| Overall | 2951 | 27722 | 9434 |
Table 2: Number of samples in different domains of FEEDBACKQA. We split the data into train/validation/test sets in the ratio of $0.7:0.1:0.2$.

et al., 2020) combined with Poly-encoder (Miller et al., 2017) (more details are in Section 3.1).

Stage 2: Post-deployment of a QA system Since each domain has several hundred passages (Table 2), it is hard for a crowdworker to ask questions that cover a range of topics in each source. We thus collect questions for individual passages beforehand, similar to Stage 1, and use these as interactive questions. The question and the model's top-2 predictions are shown to the user, who gives feedback for each question-answer pair. The collected feedback consists of a rating, selected from excellent, good, could be improved, and bad, and a natural language explanation elaborating on the strengths and/or weaknesses of the answer. For each QA pair, we elicit feedback from three different workers. We adopted additional strategies to ensure the quality of the feedback data, the details of which are available in Appendix B. The resulting dataset statistics are shown in Table 2. In order to test whether interactive feedback also helps in out-of-distribution settings, we did not collect feedback for one of the domains (Canada).

# 2.2 FEEDBACKQA analysis

Table 3 shows examples of the feedback data, including both ratings and explanations. We find that explanations typically contain review-style text indicating the quality of the answer, or statements summarizing which parts are correct and why. Therefore, we analyze a sample of explanations using the following schema:

**Review** Several explanations start with a generic review such as This directly answers the question or It is irrelevant to the question. Sometimes users also highlight aspects of the answer that are good or can be improved. For instance, ... could improve grammatically ... suggests that the answer could be improved in terms of writing.

**Summary of useful content** refers to the part of the answer that actually answers the question.

**Summary of irrelevant content** points to information that is not useful for the answer, such as off-topic content or content addressing incorrect aspects.

**Summary of missing content** points to information the answer fails to cover.

We randomly sample 100 explanations and annotate them. Figure 2 shows the distribution of the types present in explanations for each rating label. Explanations usually contain some review-type information. Explanations for answers labeled as excellent or acceptable predominantly indicate the parts of the answer that are useful, while explanations for answers that can be improved indicate parts that are useful, wrong, or missing. Bad answers, as expected, often receive explanations that highlight parts that are incorrect or missing.

# 3 Experimental Setup

FEEDBACKQA contains two types of data. One is pre-deployment data $\mathcal{D}_{\mathrm{pre}} = (Q, A^{+}, \mathcal{A})$, where $Q$ is a question paired with its gold-standard answer passage $A^{+}$ from the domain corpus $\mathcal{A}$. The other is post-deployment feedback data $\mathcal{D}_{\mathrm{feed}} = (Q, A, Y, E)$, where $Q$ is a question paired with a candidate answer $A \in \mathcal{A}$ and corresponding feedback for the answer. The feedback consists of a rating $Y$ and an explanation $E$. We build
| Rating label | Explanation |
| --- | --- |
| Excellent | This answers the question directly. This answer provides information and recommendation on how people and adolescent can protect themselves when going online during the Covid-19 pandemic. |
| Acceptable | This answer, while adequate, could give more information as this is a sparse answer for a bigger question of what one can do for elderly people during the pandemic. |
| Could be improved | The answer relates and answers the question, but could improve grammatically and omit the "yes" |
| Could be improved | The answer is about some of the online risks but not about how to protect against them. |
| Bad | This does not answer the question. This information is about applying visa to work in critical sector. It does not provide any information on applying for Covid-19 pandemic visa event as asked in the question. |
Table 3: Examples of explanations and their associated rating labels. Span colors indicate the types of components: generic and aspect review; summary of useful content; summary of irrelevant content; summary of missing content.

![](images/3511ba8c12a059c947a635bccd34a36b436dae43d1d655485d91e94334f867f0.jpg)
Figure 2: Distribution of component types in 100 natural language feedback explanations for different rating labels.

two kinds of models on pre- and post-deployment data: RQA models on the pre-deployment data that can retrieve candidate answers for a given question, and feedback-enhanced RQA models on the post-deployment data that can rate an answer for a given question as well as generate an explanation for the answer. We use this rating to rerank the answer candidates. Therefore, in our setting, a feedback-enhanced RQA model is essentially a reranker. Since real-world QA systems evolve quickly, we decouple the reranker from the RQA model by using separate parameters for the reranker, independent of the RQA model. We train this reranker on the feedback data. This allows the reranker to be reused across many RQA models. We leave other ways to enhance RQA models with feedback data for future work. Below, we describe the architectures for the RQA models and feedback-based rerankers.

# 3.1 RQA Models (Pre-deployment)

We use dense passage retrievers (Karpukhin et al., 2020) to build the RQA models, where the similarity between the question embedding and the passage embedding is used to rank candidates. We use two variants of pre-trained models to obtain the
use question-agnostic passage representations, we use a poly-encoder (Humeau et al., 2020) to build question-sensitive document representations. In a poly-encoder, each passage is represented as multiple encodings, first independent of the question, but then a simple attention between the question and passage embeddings is used to compute question-sensitive passage representation, which is later used to compute the relevance of the passage for a given query. Humeau et al. show that the poly-encoder architecture is superior to alternatives like the bi-encoder (Karpukhin et al., 2020) without much sacrifice in computational efficiency. $^4$ + +Given pre-deployment training data $\mathcal{D}_{\mathrm{pre}} = (Q, A^{+}, \mathcal{A})$ , the RQA model parameterized by $\theta$ is trained to maximize the log-likelihood of the correct answer: + +$$ +\mathcal {J} _ {\theta} = \log P _ {\theta} (A ^ {+} | Q, \mathcal {A}) +$$ + +$$ +P _ {\theta} \left(A ^ {i} \mid Q, \mathcal {A}\right) = \frac {\exp \left(S \left(Q , A ^ {i}\right)\right)}{\sum_ {A \in \mathcal {A}} \exp \left(S \left(Q , A\right)\right)} \tag {1} +$$ + +Here $S(Q, A)$ denotes the dot product similarity between the question and passage embedding. As it is inefficient to compute the denominator over all passages during training, we adopt an in-batch negative sampling technique (Humeau et al., 2020), merging all of the $A^{+}$ in the same minibatch into a set of candidates. + +# 3.2 Feedback-enhanced RQA models (Post-deployment) + +On the post-deployment data $\mathcal{D}_{\mathrm{feed}} = (Q, A, Y, E)$ , we train a reranker that assigns a rating to an answer and also generates an explanation. We use BART parameterized by $\phi$ as the base of EXPLAINRATE because it is easy to adapt it to both explanation generation and rating classification. 
The encoder of the BART model takes as input the concatenation $[Q; \mathrm{SEP}; A]$, and the decoder generates an explanation $E$; after that, an additional fully-connected network predicts the rating $Y$ given the last hidden states of the decoder. The rating is used to score QA pairs, whereas the generated explanation is passed to humans to make an informed decision about accepting the answer. We also implement a variant where the model directly produces a rating without generating an explanation. Since each candidate answer is annotated by different annotators, an answer can have multiple rating labels. To account for this, we minimize the KL-divergence between the target label distribution and the predicted distribution:

$$
\mathcal{J}_{\phi} = -D_{\mathrm{KL}}\left(P(Y \mid Q, A) \,\big\|\, P_{\phi}(Y \mid Q, A)\right), \quad P(Y_{i} = y \mid Q_{i}, A_{i}) = \frac{C_{y,i}}{\sum_{y'} C_{y',i}} \tag{2}
$$

where $C_{y,i}$ is the count of the rating label $y$ for the $i$-th feedback.

In order to enhance an RQA model with the reranker, we first select the top-$k$ candidates according to the RQA model (in practice we set $k = 5$). The reranker then takes as input the concatenation of the question and each candidate, and generates a rating for each answer. We simply sum up the scores from the RQA model and the reranker model. In practice, we found that using the reranker probability of excellent worked better than normalizing the expectation of the rating score (from score 0 for label bad to 3 for excellent). So, we score the candidate answers as follows:

$$
S(A \mid \mathcal{A}, Q) = P_{\theta}(A = A^{+} \mid \mathcal{A}, Q) + P_{\phi}(y = \mathit{excellent} \mid A, Q) \tag{3}
$$

# 4 Experiments and Results

We organize the experiments based on the following research questions:

- RQ1: Does feedback data improve the base RQA model's accuracy?
- RQ2: Does feedback data improve the accuracy of RQA models that are stronger than the base model?
- RQ3: Do explanations aid humans in discerning between correct and incorrect answers?

We answer these questions by comparing the RQA models with the feedback-enhanced RQA models. The implementation and hyper-parameter details of each model are included in Appendix D.

# 4.1 RQ1: Does feedback data improve the base RQA model?

Model details. Our base model is the BERT RQA model which we deployed to collect the feedback data used to train the other models (Section 3.1).

For the feedback-enhanced RQA model, we use the BART-based reranker described in Section 3.2. We train one single model for all domains. We call this FEEDBACKRERANKER. We compare two variants of FEEDBACKRERANKER on the validation set: one directly predicts the rating, while the other first generates an explanation and then the rating. We found that the former performs slightly better (Appendix Table 10). We conjecture that learning an explanation-based rating model from the limited feedback data is a harder problem than directly learning a rating model. Therefore, for this experiment, we only use the rating prediction model (but note that the explanation-based rating model is already superior to the base RQA model).

To eliminate the confounding factor of having a larger number of model parameters introduced by the reranker, we train another reranker, VANILLARERANKER, on the pre-deployment data and compare it against the reranker trained on the feedback data. To convert the pre-deployment data into the reranker's expected format, we consider a correct answer's rating label to be excellent, and randomly sampled answer candidates$^{5}$ to be bad. Note that this dataset is much larger than the feedback data.

Finally, we combine the training data of FEEDBACKRERANKER and VANILLARERANKER and train a third reranker, COMBINEDRERANKER.
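The reranker's training target (Eq. 2) and the final answer score (Eq. 3) reduce to a few lines of arithmetic. The sketch below is illustrative only; the rating vocabulary ordering and all function names are our assumptions, not the paper's code.

```python
import numpy as np

# Rating vocabulary from the dataset description (order is our choice).
RATINGS = ["bad", "could be improved", "good", "excellent"]

def target_distribution(counts):
    """Empirical label distribution P(Y | Q, A) from per-answer annotator
    counts C_{y,i}, as in Eq. 2 (e.g. {'excellent': 2, 'good': 1})."""
    c = np.array([counts.get(r, 0) for r in RATINGS], dtype=float)
    return c / c.sum()

def kl_divergence(target, predicted, eps=1e-12):
    """D_KL(target || predicted); Eq. 2 trains the reranker to drive this
    toward zero for every annotated question-answer pair."""
    return float(np.sum(target * (np.log(target + eps) - np.log(predicted + eps))))

def combined_score(p_retriever, p_excellent):
    """Eq. 3: retriever probability plus the reranker's P(y = excellent)."""
    return p_retriever + p_excellent
```

Using the soft count-based target rather than a single majority label keeps the disagreement among the three annotators as part of the training signal.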
+ +To measure retrieval accuracy, we adopt Precision@1 (P@1) as our main metric. + +
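P@1 is simply the fraction of questions whose top-ranked passage is the gold answer; a minimal reference implementation (ours, for illustration):

```python
def precision_at_1(ranked_lists, gold_answers):
    """P@1: share of questions whose top-ranked candidate is the gold answer.
    ranked_lists[i] is the i-th question's candidates, best first."""
    hits = sum(1 for preds, gold in zip(ranked_lists, gold_answers)
               if preds[0] == gold)
    return hits / len(gold_answers)
```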
| Methods | Australia | US | Canada | UK | WHO | All | Beats |
| --- | --- | --- | --- | --- | --- | --- | --- |
| BERT RQA model ◆ | 47.25 | 65.30 | 81.49 | 48.50 | 81.19 | 64.75 | None |
| + FEEDBACKRERANKER * | 55.13 | 65.97 | 83.74 | 51.07 | 77.05 | 66.59 | ◆※ |
| + VANILLARERANKER ◆ | 54.29 | 64.80 | 83.20 | 49.63 | 77.96 | 65.98 | |
| + COMBINEDRERANKER ◆ | 55.63 | 67.54 | 84.99 | 53.21 | 78.51 | 67.97 | ◆※※ |

Table 4: Accuracy of the BERT RQA model, i.e., the deployed model, and its enhanced variants on the test set. FEEDBACKRERANKER is trained on the post-deployment feedback data, VANILLARERANKER is trained on the pre-deployment data, and COMBINEDRERANKER is trained on both. The column Beats indicates that the model significantly outperforms ($p$-value $< 0.05$) the competing methods. All of the results are averaged across 3 runs.
| Methods | Australia | US | Canada | UK | WHO | All | Beats |
| --- | --- | --- | --- | --- | --- | --- | --- |
| BART RQA model Y | 52.88 | 68.47 | 82.49 | 51.29 | 81.97 | 67.42 | None |
| + FEEDBACKRERANKER Y | 54.78 | 70.45 | 84.38 | 53.47 | 82.51 | 69.12 | Y II |
| + VANILLARERANKER II | 53.09 | 70.40 | 82.76 | 53.08 | 82.33 | 68.33 | Y |
| + COMBINEDRERANKER | 55.27 | 71.45 | 85.35 | 54.83 | 83.61 | 70.10 | Y Y II |

Table 5: Accuracy of the BART RQA model and its enhanced variants on the test set. Results are averaged across 3 runs.

Results. As shown in Table 4, the feedback-enhanced RQA model is significantly better than the base RQA model, by 1.84 points. Although VANILLARERANKER improves upon the base model, it is weaker than FEEDBACKRERANKER, and COMBINEDRERANKER is much stronger than any of these models, indicating that the learning signals present in the feedback data and the pre-deployment data are complementary to each other. Moreover, we also see improved performance on the Canada domain, although feedback data was not collected for that domain.

From these experiments, we conclude that feedback data can improve the accuracy of the base RQA model, not only for the domains for which feedback data is available but also for unseen domains (Canada).

# 4.2 RQ2: Does feedback data improve the accuracy of RQA models that are stronger than the base model?

If feedback data were only useful for the base RQA model, then its usefulness would be questionable, since the RQA development cycle is continuous and the base RQA model will eventually be replaced with a better model. For example, we find that the BART-based dense retriever is superior to the BERT RQA model: Table 9 in Appendix E shows validation-set results, which indicate that the BART RQA model's overall performance is nearly 4 points better than the BERT RQA model's.

To answer RQ2, we use the same FEEDBACKRERANKER and VANILLARERANKER to rescore the BART RQA predictions, even though feedback data was not collected for this model. We observe that the resulting model outperforms the BART RQA model in Table 5, indicating that the feedback data is still useful. Again, FEEDBACKRERANKER is superior to VANILLARERANKER although the feedback data has fewer samples than the pre-deployment data, and COMBINEDRERANKER has the best performance.
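This reuse is possible because the reranker is decoupled from the retriever (Section 3.2): rescoring any retriever's top-k output is mechanical. A small illustrative sketch (function name ours) of that rescoring step:

```python
def rerank_top_k(candidates, retriever_probs, reranker_p_excellent, k=5):
    """Rescore the retriever's top-k candidates as in Section 3.2:
    final score = retriever probability + reranker P(excellent).
    Works with any retriever's probabilities, which is what RQ2 exploits."""
    top = sorted(range(len(candidates)),
                 key=lambda i: retriever_probs[i], reverse=True)[:k]
    scored = [(candidates[i], retriever_probs[i] + reranker_p_excellent[i])
              for i in top]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Note that only the top-k candidates are ever passed through the reranker, so the extra cost per question is bounded by k reranker forward passes.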
These results suggest that the feedback data is useful not only for the base RQA model but also for other, stronger RQA models.

# 4.3 RQ3: Do explanations aid humans in discerning between correct and incorrect answers?

We conduct a human evaluation to investigate whether explanations are useful from the perspective of users. Unfortunately, rigorous definitions and automatic metrics of explainability remain open research problems. In this work, we simulate a real-world scenario, where the user is presented with an answer returned by the system as well as an explanation for the answer, and is asked to determine whether the answer is acceptable or not. Jacovi and Goldberg (2020) advocate utility metrics as proxies to measure the usefulness of explanations instead of directly evaluating an explanation, since plausible explanations do not necessarily increase the utility of the resulting system. Inspired by their findings, we measure whether explanations can: 1) help users to make accurate decisions when judging an answer (with respect to a ground truth) and 2) improve the agreement among users in accepting/rejecting an answer candidate. The former measures the utility of an explanation and the latter measures whether explanations invoke the same behavioral pattern across different users, irrespective of the utility of the explanation. Note that agreement and utility are not tightly coupled. For example, agreement can be higher even if the utility of an explanation is lower, when the explanation misleads end users into consistently selecting a wrong answer (González et al., 2021; Bansal et al., 2021).

We sample 60 feedback samples from the hidden split of the feedback data $\mathcal{D}_{\mathrm{feed}} = (Q, A, Y, E)$ for evaluation purposes. We evaluate four experimental setups on these samples, which vary in the type of explanation shown to the end users: 1) no explanation; 2) human-written explanations; 3) explanations generated by the BART model trained on the feedback data (Section 3.2); and 4) a summary of the answer candidate generated by a strong finetuned BART-based summarization model. The last setting is inspired by the observation in Section 2.2 that a large portion of explanations contain summaries of questions/answers. We investigate if a conventional summary of an answer is as useful as an explanation. For each of these setups, two crowdworkers assign a rating label to each answer candidate indicating the quality of the answer. Each setup has its own set of workers in order to avoid information leakage across setups (this simulates the A/B testing often used by production systems).

| Explanation | Accuracy | Agreement |
| --- | --- | --- |
| Blank | 69.17 | 0.31 |
| Human-written | 88.33 | 0.80 |
| BART feedback model | 81.67 | 0.71 |
| BART summarization model | 74.17 | 0.30 |

Table 6: Human evaluation results of the usefulness of explanations. Accuracy measures the utility of explanations in selecting the correct rating label for an answer, whereas agreement measures whether explanations invoke the same behaviour pattern across users.
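The agreement figures reported for this evaluation are Spearman rank correlations between the two annotators' rating sequences. A dependency-free sketch of that computation (ours; in practice `scipy.stats.spearmanr` is the standard implementation):

```python
def average_ranks(xs):
    """Ranks starting at 1; tied values share the mean of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of values tied with xs[order[i]].
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman rank correlation between two annotators' rating sequences."""
    ra, rb = average_ranks(a), average_ranks(b)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)
```

Because ratings are ordinal (bad < could be improved < good < excellent), a rank correlation is a more faithful agreement measure here than exact-match rates.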
Therefore, in public places such as supermarkets, in addition to physical distancing... + +Generated Explanation: This answer is excellent because it unequivocally states that gloves are not recommended. Therefore, it answers the question directly. It goes on to explain why gloves aren't recommended. This is a great answer. + +Human explanation: The question is answered directly. The answer states not to wear gloves in public. + +Generated Summary: The World Health Organization (WHO) has issued a guidance on the use of gloves in public places. + +Question: Can students work in aged care? + +Answer:...The Australian Government has announced that residential aged care facilities should take extra precautions to keep residents safe from COVID-19. Both individuals and management need to take responsibility for the health of visitors and staff at facilities to protect our most vulnerable community members. Aged care facilities will: closely monitor the health of staff... + +Generated Explanation: This answer does not address whether or not students are allowed to work in aged care. This answer only provides information on precautions to be taken by aged care facilities to keep residents safe. + +Human Explanation: The information here give explanation on guideline that aged care facility staffs should put in place and did not say anything about student working in aged care facility. + +Generated Summary: Residents in aged care facilities across Australia are being urged to take extra precautions to prevent the spread of a deadly virus. + +Table 7: Examples of different explanation types: model-generated and human-written explanation and model-generated summary. + +in FEEDBACKQA, as well as compute the agreement of workers with each other using Spearman correlation. Table 6 presents the results. All explanation types improve accuracy compared to the model with no explanations. This could be because any explanation forces the worker to think more about an answer. 
The human-written explanations have the highest utility and also lead to the highest agreement. Both the human-written explanations and the explanations generated by the BART feedback model have more utility and higher agreement than the BART summarization model. In fact, the summarization model leads to lower agreement.

These results indicate that explanations based on feedback data are useful for end users in discerning correct and incorrect answers, and they also improve the agreement across users.

Table 7 shows some examples of explanations that help users make more informed and accurate decisions. In the first example, the model-generated explanation points out the gap between the question and the answer candidate, despite a large number of overlapping keywords. Meanwhile, human explanations are generally more abstractive and shorter in nature (e.g., see the second example).

# 5 Related work

Retrieval-based question answering has been widely studied, from early work on rule-based systems (Kwok et al., 2001) to recently proposed neural models (Yang et al., 2019; Karpukhin et al., 2020). Most existing work focuses on improving accuracy and efficacy through modifications of the neural architecture (Karpukhin et al., 2020; Humeau et al., 2020), incorporation of external knowledge (Ferrucci et al., 2010), and retrieval strategies (Kratzwald and Feuerriegel, 2018). These methods focus on the pre-deployment stage of RQA models.

By contrast, we investigate methods to improve an RQA model post-deployment with interactive feedback. The proposed methods are agnostic to the architecture design and training methods of the base RQA model.

Learning from user feedback has been a long-standing problem in natural language processing.
Whilst earlier work proposes methods for using implicit feedback, for instance click-through data for document ranking (Joachims, 2002), recent work has explored explicit feedback such as explanations of incorrect responses by chatbots (Li et al., 2016; Weston, 2016) and correctness labels in conversational question answering and text classification (Campos et al., 2020). However, the feedback in these studies is automatically generated using heuristics, whereas our feedback data is collected from human users. Hancock et al. (2019) collect suggested responses from users to improve a chatbot, while we investigate the effect of natural feedback for RQA models.

Explainability and interpretability have received increasing attention in the NLP community recently. This paper aligns with recent efforts in collecting and harnessing explanation data for language understanding and reasoning tasks, such as natural language inference (Camburu et al., 2018; Kumar and Talukdar, 2020), commonsense question answering (Rajani et al., 2019), document classification (Srivastava et al., 2017), relation classification (Murty et al., 2020), reading comprehension (Lamm et al., 2021), and fact checking (Alhindi et al., 2018). The type of feedback in FEEDBACKQA differs from existing work in several aspects: 1) FEEDBACKQA has feedback data for both positive and negative examples, while most other datasets only contain explanations of positive ones; 2) FEEDBACKQA has both structured and unstructured feedback, while previous work mainly focuses on one of them; 3) the feedback in FEEDBACKQA is collected post-deployment; 4) while previous work aims to help users interpret model decisions, we investigate whether feedback-based explanations increase the utility of the deployed system.
We collect a new dataset, FEEDBACKQA, which contains interactive feedback in the form of ratings and natural language explanations. We propose a method to improve the RQA model with the feedback data, training a reranker to select an answer candidate as well as generate the explanation. We find that this approach not only increases the accuracy of the deployed model but also that of other, stronger models for which feedback data is not collected. Moreover, our human evaluation results show that both human-written and model-generated explanations help users make informed and accurate decisions about whether to accept an answer.

# 7 Limitations and Ethical considerations

The training and inference of a reranker with feedback data increase the usage of computational resources. We note that our feedback collection setup is a simulation of a deployed model. The feedback in real-world systems may contain sensitive information that should be handled with care. Moreover, real-world feedback could be noisy and is prone to adversarial attacks.

# 8 Acknowledgements

We would like to thank Andreas Madsen, Nathan Schucher, Nick Meade and Makesh Narsimhan for their discussion and feedback on our manuscript. We would also like to thank the Mila Applied Research team, especially Joumana Ghosn, Mirko Bronzi, Jeremy Pinto, and Cem Subakan, whose initial work on the Covid-19 chatbot inspired this work. This work is funded by Samsung Electronics. JC and SR acknowledge the support of the NSERC Discovery Grant program and the Canada CIFAR AI Chair program. The computational resources for this project are partly supported by Compute Canada.

# References

Tariq Alhindi, Savvas Petridis, and Smaranda Muresan. 2018. Where is your evidence: Improving fact-checking by justification modeling. In Proceedings of the First Workshop on Fact Extraction and VERIFICATION (FEVER), pages 85-90. Association for Computational Linguistics.
+Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, and Daniel Weld. 2021. Does the whole exceed its parts? the effect of ai explanations on complementary team performance. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1-16. +Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An empirical investigation of statistical significance in NLP. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 995-1005. +Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-SNLI: Natural Language Inference with Natural Language Explanations. In Advances in Neural Information Processing Systems 31, pages 9539-9549. +Jon Ander Campos, Kyunghyun Cho, Arantxa Otegi, Aitor Soroa, Eneko Agirre, and Gorka Azkune. + +2020. Improving conversational question answering systems after deployment using feedback-weighted learning. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2561-2571. +Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870-1879. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186. +David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A Kalyanpur, Adam Lally, J William Murdock, Eric Nyberg, John Prager, et al. 2010. Building watson: An overview of the deepqa project. AI magazine, 31(3):59-79. 
+Ana Valeria González, Gagan Bansal, Angela Fan, Yashar Mehdad, Robin Jia, and Srinivasan Iyer. 2021. Do explanations help users detect errors in open-domain QA? an evaluation of spoken vs. visual explanations. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1103-1116. +Braden Hancock, Antoine Bordes, Pierre-Emmanuel Mazare, and Jason Weston. 2019. Learning from dialogue after deployment: Feed yourself, chatbot! In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3667-3684. +Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2020. Poly-encoders: Transformer Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring. arXiv:1905.01969 [cs]. ArXiv: 1905.01969. +Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4198-4205. Association for Computational Linguistics. +Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In SIGKDD. Association for Computing Machinery. +Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of + +the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781. +Bernhard Kratzwald and Stefan Feuerriegel. 2018. Adaptive document retrieval for deep question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 576-581. +Sawan Kumar and Partha Talukdar. 2020. Nile: Natural language inference with faithful natural language explanations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8730-8742. 
+Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466. +Cody CT Kwok, Oren Etzioni, and Daniel S Weld. 2001. Scaling question answering to the web. In Proceedings of the 10th international conference on World Wide Web, pages 150-161. +Matthew Lamm, Jennimaria Palomaki, Chris Alberti, Daniel Andor, Eunsol Choi, Livio Baldini Soares, and Michael Collins. 2021. Qed: A framework and dataset for explanations in question answering. Transactions of the Association for Computational Linguistics, 9:790-806. +Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086-6096. +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880. +Jiwei Li, Alexander H Miller, Sumit Chopra, Marc'Aurelio Ranzato, and Jason Weston. 2016. Dialogue learning with human-in-the-loop. arXiv preprint arXiv:1611.09823. +Alexander H Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. Parlai: A dialog research software platform. In EMNLP (System Demonstrations). +Shikhar Murty, Pang Wei Koh, and Percy Liang. 2020. Expert: Representation engineering with natural language explanations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2106-2113. 
+ +Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4932-4942, Florence, Italy. Association for Computational Linguistics. +Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392. +Siva Reddy, Danqi Chen, and Christopher D Manning. 2019. Coqa: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249-266. +Shashank Srivastava, Igor Labutov, and Tom Mitchell. 2017. Joint concept learning and semantic parsing from natural language explanations. In Proceedings of the 2017 conference on empirical methods in natural language processing, pages 1527-1536. +Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020. Learning to summarize with human feedback. In Advances in Neural Information Processing Systems, volume 33, pages 3008-3021. +Ziqi Wang, Yujia Qin, Wenxuan Zhou, Jun Yan, Qinyuan Ye, Leonardo Neves, Zhiyuan Liu, and Xiang Ren. 2019. Learning from explanations with neural execution tree. In International Conference on Learning Representations. +Jason E Weston. 2016. Dialog-based language learning. Advances in Neural Information Processing Systems, 29:829-837. +Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with bertserini. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 72-77. 
# A Details of Data Collection

Passage curating: After scraping the websites, we collect the questions and answers from the Frequently-Asked-Questions pages directly. For those pages without explicit questions and answers, we extract the text content as passages and proceed to question collection.

Question collection: We hire crowd-source workers from English-speaking countries on the Amazon MTurk platform to write questions conditioned on the extracted passages. The workers are instructed not to ask overly generic questions or copy and paste directly from the passages.

A qualification test with two sections is used to select the best-performing workers. In the first section, the workers are asked to distinguish good questions from bad ones for given passages. The correct and incorrect questions were carefully designed to test various aspects of low-quality submissions we had received in the demo run. The second section requires writing a question given a passage. We manually review and score the questions. We paid $0.20 to workers for each question.

# B Details of Feedback Collection

We asked the workers to provide a rating and natural language feedback for question-answer pairs. For the qualification test, we labeled the ratings for multiple pairs of questions and answers. The workers are selected based on the accuracy of their rating labels. We paid $0.40 to workers for each piece of feedback.

# C Details of Human Evaluation

The worker assignment ensures that a worker rates the same question-answer pair only once. Otherwise there is a risk that workers blindly give the same judgement for a certain QA pair.

We adopt a qualification test similar to the one for feedback collection. We also include some dummy QA pairs, whose answer candidates were randomly sampled from the corpora, and we filter out the workers who fail to recognize them. We paid $0.30 to workers for each QA pair.
# D Implementation Details

Throughout the experiments, we used four 32-GB Nvidia Tesla V100 GPUs. The hyperparameter (learning rate, dropout rate) optimisation is performed for the RQA models only; standard fine-tuning hyperparameters of BART are used for building the FEEDBACKRERANKER model. We set the batch size to 16. We truncate the questions and passages to 50 and 512 tokens, respectively. The models are trained for 40 epochs. For the hyperparameter search, we used 5 trials; when reporting the final results, the best hyperparameter variant's performance was averaged across 3 different runs. All experiment runs finished within 20 hours.

| Method | lr | Dropout |
| --- | --- | --- |
| BERT (Bi-encoder) | 5.0e-05 | 0.1 |
| BERT (Poly-encoder) | 5.0e-05 | 0.1 |
| BART (Bi-encoder) | 9.53e-05 | 0.01026 |
| BART (Poly-encoder) | 4.34e-05 | 0.1859 |
| FEEDBACKRERANKER | 5.0e-05 | 0.1 |

Table 8: Hyper-parameter settings of different variants of QA models as well as EXPLAINRATE and RATEONLY. There is no pooling operation in the latter two models.

# E Validation performance

In addition to the Poly-encoders, we also explore Bi-encoders, and we find that their performance is consistently worse. Table 9 presents the performance of base QA models with different pretrained Transformer models and encoding methods on the validation set.
| Methods | Australia | US | Canada | UK | WHO | All |
| --- | --- | --- | --- | --- | --- | --- |
| BERT (Bi-encoder) | 44.57 | 64.24 | 81.12 | 50.55 | 81.85 | 64.47 |
| BERT (Poly-encoder) | 47.25 | 65.30 | 81.49 | 48.50 | 81.19 | 64.75 |
| BART (Bi-encoder) | 47.13 | 67.62 | 86.01 | 55.06 | 85.48 | 68.26 |
| BART (Poly-encoder) | 49.17 | 66.98 | 85.75 | 54.27 | 87.46 | 68.73 |

Table 9: The accuracy of different RQA models on the validation set. All of the results are averaged across 3 runs.
| Methods | Australia | US | Canada | UK | WHO | All |
| --- | --- | --- | --- | --- | --- | --- |
| BART RQA model | 49.17 | 66.98 | 85.75 | 54.27 | 87.46 | 68.73 |
| + FEEDBACKRERANKER with explanation-based rating | 51.34 | 69.09 | 84.20 | 56.87 | 87.79 | 69.86 |
| + FEEDBACKRERANKER with rating only | 51.09 | 68.57 | 86.84 | 58.21 | 88.78 | 70.70 |
| BERT RQA model | 47.25 | 65.30 | 81.49 | 48.50 | 81.19 | 64.75 |
| + FEEDBACKRERANKER with explanation-based rating | 51.34 | 70.15 | 83.72 | 53.71 | 84.49 | 68.68 |
| + FEEDBACKRERANKER with rating only | 51.09 | 68.46 | 84.18 | 55.69 | 85.15 | 68.91 |
Table 10: Accuracy of PIPELINE models using different feedback data to train the re-ranker on the validation set. All of the results are averaged across 3 runs.
# Using NLP to quantify the environmental cost and diversity benefits of in-person NLP conferences

Piotr Przybyla

Institute of Computer Science, Polish Academy of Sciences, Warsaw, Poland

piotr.przybyla@ipipan.waw.pl

Matthew Shardlow

Department of Computing and Mathematics, Manchester Metropolitan University, Manchester, UK

m.shardlow@mmu.ac.uk

# Abstract

The environmental costs of research are progressively important to the NLP community and their associated challenges are increasingly debated. In this work, we analyse the carbon cost (measured as CO2-equivalent) associated with journeys made by researchers attending in-person NLP conferences. We obtain the necessary data by text-mining all publications from the ACL anthology available at the time of the study $(n = 60,572)$ and extracting information about an author's affiliation, including their address. This allows us to estimate the corresponding carbon cost and compare it to previously known values for training large models. Further, we look at the benefits of in-person conferences by demonstrating that they can increase participation diversity by encouraging attendance from the region surrounding the host country. We show how the trade-off between carbon cost and diversity of an event depends on its location and type. Our aim is to foster further discussion on the best way to address the joint issue of emissions and diversity in the future.

# 1 Introduction

Figure 1 shows the increase in travel to the ACL annual meeting over the past 40 years. Whereas conferences used to be the privilege of a few academics, they are now attended by participants from companies, research institutes and universities across the world.
This comes with an increase in the total volume of work published, and with it an increase in the carbon emissions attributed to travelling to in-person events.

In this study we seek to quantify the impact of conferences that are increasingly diverse in terms of participation and location (undoubtedly beneficial) on the increased carbon emissions (undoubtedly detrimental). We base our analysis on publications spanning 55 years (1965-2020), taken from the ACL Anthology1. We use NLP tools to parse each document and identify the locations of the conference venues and lead researcher's institution. We answer the following questions:

1. Where is NLP research performed and presented?
2. What are the environmental costs?
3. Do conferences increase local participation?
4. Which events attract a diverse audience and how do they compare to non-physical venues?

![](images/8f0b9077b7f54135616bb214b790cbe0e7fd0872d4532e7253eab9f36d505ee9.jpg)
(a) ACL 1979: La Jolla, California, USA

![](images/886eb18e613366ffcd447793899eb0b569efa10292b0727abc17f8b97302e58f.jpg)
(b) ACL 1999: College Park, Maryland, USA

![](images/d4fc8055465aad4fe0327425d3f79aafe2c3a46048a95e5e4a1d02b6a3c499fd.jpg)
(c) ACL 2019: Florence, Italy

Figure 1: Visualisation of estimated journeys to the ACL annual meetings over 40 years. Maps for all major NLP conferences are included in the supplementary material.

To the best of our knowledge, our work is the first to quantitatively explore the relationship between the location of conferences in a research field and diversity of participation. We make our dataset and code available2 to enable further discussion on the costs and benefits of in-person meetings.

# 2 Related work

Environmental cost of travel and conferences: It is a well-established fact that conferences come with a climate cost (Ciers et al., 2019), which has recently become greater (Pierce et al., 2020).
This has led to calls to reduce or cancel the physical academic conference calendar (Johnson et al., 2020; Reay, 2003; Achakulvisut et al., 2020; Jackle, 2019; Dwyer, 2013).

The scientific discourse has included measuring and quantifying the emissions costs of conferences and the travel associated with them, from specific events (Astudillo and AzariJafari, 2018), to conference series (Neugebauer et al., 2020), or indeed looking at the total emissions of an entire discipline (Waring et al., 2014; Poom et al., 2017).

Travel is not the only cost associated with academic conferences, or research in general, with one PhD accounting for 21.5 tonnes of CO2-equivalent emissions (Achten et al., 2013), of which $35\%$ was attributed to conferences. Recent work shows that in France, a typical research lab might spend $64\%$ of its carbon outputs on conference travel, with the remaining $36\%$ made up mostly of commuting and energy usage (Mariette et al., 2021).

In response to the pandemic, many conferences have moved temporarily online. A meta-analysis of these online conferences showed that a major result of online delivery was a reduction in the registration fee, promoting access (Mubin et al., 2021). Further, online delivery may allay fears of high travel costs (Raby and Madden, 2021), as is often the case with top-tier conferences. The main barrier to online participation is a perception of reduced social (rather than academic) opportunities (Raby and Madden, 2021), although this may be overcome through facilitating interpersonal meetings, and social discussion (Achakulvisut et al., 2020). It should be noted that whilst travel is unnecessary in virtual conferences, there is still a quantifiable carbon cost due to the infrastructure required (Ong et al., 2012, 2014; Faber, 2021).
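The per-journey estimates surveyed in this literature come down to a great-circle distance multiplied by a per-kilometre emission factor. A minimal sketch; the haversine formula is standard, but `KG_CO2E_PER_KM` and `journey_cost_kg` are illustrative assumptions of ours, not figures or code from the cited studies:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0
# Illustrative flight emission factor (kg CO2-equivalent per passenger-km).
# An assumption for this sketch, not a value from the cited studies.
KG_CO2E_PER_KM = 0.15

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two points, in kilometres."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi, dlam = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def journey_cost_kg(origin, venue, round_trip=True):
    """CO2e estimate for one attendee; origin/venue are (lat, lon) pairs."""
    km = great_circle_km(*origin, *venue)
    return km * KG_CO2E_PER_KM * (2 if round_trip else 1)
```

Published factors vary widely with aircraft type, occupancy and radiative-forcing assumptions, which is one reason the estimates in the studies above differ.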
Academic conferences are not without their benefits, and a clear advantage of in-person conferences over online ones is the perceived value in social interaction (Raby and Madden, 2021). This argument is strengthened by the observation that citation rates are higher for work presented across longer distances (Chalvatzis and Ormosi, 2020). An important benefit of conferences is providing an opportunity for researchers to interact with peers from diverse cultural, linguistic, demographic and academic backgrounds. This goal is also recognised within the NLP field. $^3$

The high climate cost of academic conferences has led to policy considerations (Bossdorf et al., 2010), including the adoption of carbon offsetting programmes for participants (Holden et al., 2017), wise choices of locations to reduce the average journey distance (Wenner et al., 2019) and mandated reporting of climate costs for conferences (Cugniere et al., 2020). Adopting any of these policies would help to begin mitigating the environmental impact of academic travel. Similar discussion has already started in computer science conference communities, e.g. ACM (Pierce et al., 2020).

Environmental cost of ML and NLP research: In the field of ML and NLP, there has been an increasing trend towards openness in reporting of the emissions associated with AI research (Schwartz et al., 2020), especially that using deep learning (Henderson et al., 2020). Work has also been undertaken to estimate the overall cost of training machine learning (ML) models, taking into account not only the training time, but also the age of the hardware and server location (García-Martínez et al., 2019; Lacoste et al., 2019).

There have been a few efforts within our own field of NLP to better understand the impact that modern techniques are having on the environment and specifically to quantify the emissions costs of training ever larger neural networks (Strubell et al., 2019).
Benchmarking of NLP systems in terms of their energy consumption is a viable way to better understand the carbon cost of training such a model (Zhou et al., 2020). Taking into account factors such as resource utilisation can give a more accurate picture of the energy consumption of NLP models (Cao et al., 2020).

A recent trend in NLP is to create low-resource models that provide sufficient performance. For example, light transformer models are quicker to train and consequently have a lower carbon footprint (Sanh et al., 2019; Anderson and Gomez-Rodriguez, 2020). Transfer learning presents an opportunity for massive carbon savings. If a model can be trained that requires only minimal retraining for various other subtasks, then this prevents further carbon expenditure down the line. Maximising model reusability is a good strategy for reducing carbon emissions (Kocmi and Bojar, 2020).

# 3 Methods

To be able to answer the questions that motivate this work, we need certain data about the research process, in particular regarding the location of researchers' affiliations and conference venues. Since no such single source of information existed, we decided to combine publicly available resources to create a new dataset containing the information we required. The process we used to create this resource is detailed below:

Data structure: A publication is an independent piece of research presented to the community as a journal article or a presentation at a conference. For the purposes of this work, each publication is described by: (1) an identifier; (2) the first author's affiliation (identified by the domain name in their e-mail address); (3) the location of the first author's affiliation and (4) an event, to which the publication is assigned.

An event could be a track at a conference, a co-located meeting (e.g. a workshop) or a volume of a journal.
It is described by: (1) an identifier; (2) a name; and (3) a location, i.e. a physical place name for in-person events or a special tag (@) for journals and virtual conferences.

Note that in this model, we always take into account the first author, while in fact one person may attend a conference to present several publications (resulting in fewer trips) or more than one author may attend to present a single publication (resulting in more trips). Resolving this issue would require conference registration data, which are not publicly available. Further, the address of the primary affiliation does not necessarily match the researcher's starting location when travelling to a conference.

Text mining: In the process of gathering the data we rely on the XML version of the ACL Anthology available on GitHub4 (we used the version from 17.02.2021). From there we obtain the publications ( tag) and associated events ( tag) with titles and locations.

The crucial information missing from the XML structure is the author's affiliation and their location. This information is mined from the publication text: we download the publication PDF and use $PyMuPDF^5$ to convert it to plain text. Next, we extract the first e-mail domain occurring in the text through regular expressions (allowing for the curly brackets notation for account usernames) and treat it as the affiliation identifier. Then, we use spaCy (Honnibal et al., 2020) to process the text with the en_core_web_trf pipeline, based on RoBERTa (Liu et al., 2019). Among the text spans recognised by the named entity recogniser as belonging to the category GPE (geopolitical entity), the one occurring first after the first author's last name is considered their location. Entities occurring close to each other are grouped, so that multipart names, such as Cambridge, Massachusetts (USA), are located correctly.
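The domain-extraction step described above can be approximated with a single regular expression. A sketch under our own assumptions (the paper does not give its exact pattern, and `first_email_domain` is a hypothetical helper), including the curly-bracket notation for grouped account usernames:

```python
import re

# One e-mail address, optionally with a grouped username part such as
# {hapeng, jkasai}@cs.washington.edu; group 1 captures the domain.
EMAIL_RE = re.compile(r"(?:\{[\w.,\s-]+\}|[\w.-]+)@([\w-]+(?:\.[\w-]+)+)")

def first_email_domain(text):
    """Return the domain of the first e-mail address in `text`, or None."""
    match = EMAIL_RE.search(text)
    return match.group(1).lower() if match else None
```

The first domain found in the paper's text would then serve as the affiliation identifier.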
Finally, to interpret the location names for affiliations and events, we use the Geocoding API of Google Maps. This allows us to obtain geographical coordinates (longitude and latitude) and a country name for each location. We obtain continent information using the pycountry-convert Python package.

Missing data: The process described above may leave some of the data fields empty. This may be caused by information being omitted in the XML (the year or location of events) or in the PDF files (no affiliation address provided), or by imperfect named entity recognition.

In the case of events, we fill in the missing data based on co-located events and manual investigation. We also check which of the conferences in 2020 took place as in-person events in the locations advertised. In the case of affiliations, we look at all other publications with the same affiliation and identify the most common location. We assume this location may also be used for the publication in question. Note that some of the PDF files of the oldest publications are based on scanned typescripts. Extracting information from these would require OCR techniques, which was not attempted within the described work, resulting in a lower coverage of the earliest publications.

![](images/e203fd9dc5b9040dae426ed87f00a5ed7f8c77607d17904ec1aefddc9db9492c.jpg)
Figure 2: Distribution of NLP publications between affiliation locations (countries) in each year with the diversity index (white line, right axis).

![](images/f30cbc86732df520dc33eb545122164b55dd6fb9cad01d62b0c9b09a7900b812.jpg)
Figure 3: Distribution of NLP publications between event locations (countries, light grey = non-physical venues) with the diversity index (white line, right axis).

Diversity computation: To quantify the participation diversity, we use the Gini coefficient $G$. While it was originally proposed for assessing income inequality (Gini, 1912), it is widely used as a diversity measure, e.g.
of ecosystems (Lexerød and Eid, 2006), research databases (Weidlich and Filippov, 2016) or citation patterns (Leydesdorff et al., 2019). Since $G$ measures concentration, we define the diversity coefficient as $D = 1.0 - G$. $D$ takes values between 0.0 (least uniform distribution, i.e. all conferences happening in the same country) and 1.0 (perfectly uniform distribution, i.e. each country hosting the same number of events).

# 4 Results

The process described above results in a dataset of 60,572 publications associated with 1,991 events. In the following subsections we analyse them to answer some of the important questions about the costs and benefits of the NLP conference system.

Where is NLP research done? Regarding affiliations (e-mail domains), we see 5,501 different values in our dataset. Unsurprisingly for literature dating back to 1965, no domain could be found in a significant portion ($22\%$) of the publications. For the known affiliations, the research output is unequally distributed between them, with the top 207 domains ($3.76\%$) responsible for $50\%$ of the publications. Our diversity index $D$ takes the value 0.2303.

Regarding addresses, they are associated with 135 countries. Following the refining procedure described in the previous section, only $0.8\%$ of values remain unknown. The concentration here is even larger than in the case of affiliations: half of the output is generated by just 3 countries (US, China and Germany) and the $D$ coefficient equals 0.1087, indicating an even lower diversity amongst international publications in NLP venues.

How the contributions vary across years is shown in Figure 2. Coloured bars show the fraction of publications from a given year associated with each country, sorted by their global contribution (US=blue, China=orange, Germany=gold, UK=green, Japan=grey, France=light blue). Additionally, we show the diversity coefficient for each year (white line, right axis).
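The diversity coefficient plotted here can be computed with a standard Gini estimator; the paper does not specify the exact formulation used, so the following is an illustrative sketch:

```python
def gini(counts):
    """Gini coefficient of a list of non-negative counts (e.g. publications
    per country), using the standard sorted-values formula."""
    xs = sorted(counts)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

def diversity(counts):
    """D = 1 - G: close to 1.0 for a uniform distribution, close to 0.0
    when one country dominates (exactly 0 only in the large-n limit)."""
    return 1.0 - gini(counts)

print(diversity([10, 10, 10, 10]))  # 1.0: perfectly uniform
```

Note that with a finite number of countries the estimator only approaches 0.0 as concentration grows; the bounds quoted in the text hold in the limit.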
We can see that diversity was rising through most of the considered period, but since 2013 the trend has reversed.

Where is NLP research presented? In total, the 1,991 events were held in 48 different countries. The distribution of publications presented in each country is more uniform than the distributions above, with a diversity index of 0.3838.

Figure 3 shows how this distribution changed across the years. The bars correspond to the number of papers presented in each country in a given year, with the same colour coding as in Figure 2. We can see that the distribution changes drastically every year as major conferences move around the world. As before, the rising $D$ coefficient indicates increasing diversity. Moreover, while the number of articles presented in the most common country (US) was consistently high throughout the studied period, its relative contribution to the overall publication volume fell for many years. As in the previous plot, a new trend of falling diversity is visible from 2015. Finally, we can observe the changing role of non-physical venues (light grey bars): the share generated by online journals fell over the years, followed by the sudden change in 2020, when $96\%$ of work was presented online.

![](images/2b7f3dcb4ccccd0ef81bc300b0df0dfd56b48d499f0c39f0443b07d149c48524.jpg)
Figure 4: Average emissions per publication at local, regional and international conferences between 1965 and 2019.

![](images/583d306e632d8e37c01dabd2dc3fe0e1389fbb727569cc3272bb57045e37c9a5.jpg)
Figure 5: The average emission per publication (over 5-year periods) and total emission (yearly) between 1970 and 2019.

What are the environmental costs? Our dataset includes 51,116 publications, for which both the location of the research centre and the conference venue are known.
The average journey distance was $4,988\mathrm{km}$ and the longest distance travelled was $19,888\mathrm{km}$, from New Zealand to Spain.

To convert the number of kilometres travelled (to the conference and back) into carbon emission costs, we turned to data published by the UK Government to enable companies to report their emissions. This resource provided us with 5 years of historic emissions data (2016-2020) for short-haul and long-haul flights, giving the CO2 per passenger per kilometre for each year. We trained a linear regression model to estimate the carbon cost of air travel beyond this time span. Gains in flight efficiency have led to a reduction in carbon emissions, resulting in higher estimated costs for historic journeys. We used values for CO2-equivalent with radiative forcing, which give an estimate of the overall climate change impact of travel. We considered international flights to be those longer than $3,700\mathrm{km}$, in accordance with the guidelines associated with the data source. Journeys under this threshold were considered short haul, except for those shorter than $500\mathrm{km}$, for which we assumed that another, lower-carbon means of travel would be more likely (in our case we used figures from the same data source for train journeys). The data used to create the univariate linear regressions for predicting historic emissions are included in Appendix A.

Each event could simply be represented by its total emissions, but there are several issues with this approach. Firstly, the size of a conference (number of attendees) dictates its overall emissions cost. Therefore, we use the mean carbon cost of a publication at each event instead. Secondly, we compared events according to their geographic reach. International conferences are those that can be hosted anywhere in the world.
Regional conferences are those that are restricted to a specific region (we included LREC, which typically happens around the Mediterranean) and local conferences are those that happen in a single country (or a very narrow geographical region). The conferences included in each band are shown in Appendix B.

Figure 4 shows that international and regional conferences are the main emitters of greenhouse gases in the NLP field. Local conferences emit around a quarter of the CO2-equivalent (per publication) of international or regional conferences. Whilst regional conferences have traditionally tracked below the average emissions of international conferences, the gap between them is narrowing, as these conferences are increasingly treated as international events.

Figure 5 shows the discrepancy between the total CO2 emissions (in red, right axis) and the average CO2 emissions (in blue, left axis) over the same period across our entire dataset. We can see that whilst the average emissions fluctuate, they are generally stable at around 0.8-1.2 tonnes of CO2 emitted per publication. This stability is possibly due to the increasing distances travelled being offset by increasing flight efficiency. In contrast, the total amount of CO2-equivalent emitted by conferences has risen exponentially, hitting 1 million kg in 1998, 2 million kg in 2006, 3 million kg in 2016 and then jumping to over 6 million kg in 2018.

6,000 tonnes of CO2-equivalent equates to...
| Amount | Equivalent |
| ---: | --- |
| 1,304 | cars driven for a year |
| 722 | homes powered for a year |
| 13,892 | barrels of oil (energy production) |
| 99,212 | new trees planted (CO2 capture) |
| 339,172 | NLP pipelines trained |
| 168 | NLP pipelines optimised |
| 68,894 | Generic Transformers trained |
| 22 | Generic Transformers optimised |
| 71 | Instances of GPT-3 trained |
Table 1: Comparisons of recent annual conference emissions to familiar scenarios both within and outside of NLP.

![](images/7a493f9209d1cfcda9854f6ffadda700ef9ba5556f3917c3d91e39b21fa0ac0b.jpg)
Figure 6: Comparison of the number of journeys of a given distance (X axis, in km) made in two scenarios: as observed in the data and as expected under a random choice of events.

To put the value of around 6,000 tonnes of CO2-equivalent (the total emissions of NLP conferences in 2018) into context, we can compare it to the emissions of other activities. These are shown in Table 1 and were calculated using data from the website of the US Environmental Protection Agency. Data estimating the emissions involved in training NLP models (Strubell et al., 2019; Lasse et al., 2020) are also included.

What are the diversity benefits? We hypothesise that series of events occurring in different locations have the benefit of encouraging local researchers to attend, increasing the diversity of participation. In this section we seek to quantify this effect.

Firstly, we verify this hypothesis by comparing the distances researchers travelled for conferences (blue bars) to the distances they would need to travel if they were choosing venues randomly (orange bars) in Figure 6. The results clearly confirm our assumptions: the number of observed short trips, especially of a few hundred kilometres, is much higher than expected in a random-choice scenario, while the number of long trips, especially around 10,000 km, is greatly reduced. Using the data from the previous section, we can also estimate that, thanks to these choices, the carbon cost of all travel was $27.21\%$ lower (a total saving of 19,104 tonnes of CO2 according to emission rates of 2020).

Next, we can ask whether the priority given to local conferences depends on what country a researcher comes from.
To that end, we compute the relative travel length by dividing the observed mean travel distance by the mean travel distance in a 'random choice' scenario. Figure 7 shows all countries with at least 15 publications according to their relative travel length and GDP per capita in 2018 (Bolt and van Zanden, 2020). We can see that the longest journeys are made by researchers from countries in the Middle East, most of them considerably wealthy. Most countries that prefer nearby conferences have relatively low income, e.g. Serbia, the Philippines or Bulgaria.

Knowing that each event generates diversity by encouraging researchers from nearby countries to participate, we can now measure how well this effect works for different conferences. It might be expected that achieving high diversity comes at the cost of longer journeys. We verify this by plotting the diversity of in-person events against travel distance (average per publication) in Figure 8. Most events are indeed arranged along an upward trend, but some do not follow it. For example, we can see that EACL conferences deliver more diversity than others for the same travel distance. Some ACL meetings, on the contrary, are associated with very long travel and not much diversity. LREC events are clear outliers here, since they have by far the highest diversity for low distances. The dashed line corresponding to the diversity index of journals indicates that the diversity observed at many in-person events is much higher. Note that online conferences are not included in this analysis, since their format was often unclear to authors at the time of submission.

In Figure 9 we compare the mean participation diversity of events organised on a given continent across the years. Consistent with Figure 2, we see increasing diversity throughout most of the considered period for most continents.
Europe is the location of very diverse events, but the Asian ones appear to be catching up. The journals have seen relatively slow growth and remain much less diverse than in-person events, except for South America or Australia and Oceania, where too few conferences took place for our analysis.

![](images/95b0871bb2d8661010fcbce9256e941279f7f967a28d30b4dfbfd360fc04ef2d.jpg)
Figure 7: Relative travel length (mean distance of journeys made divided by mean distance of journeys expected under random venue choice) for countries with at least 15 publications, with respect to their continent and GDP per capita.

![](images/cdbccba1a38b9c9433f13eea786186ac24b9307001f7783ac0f3364c4280ca3f.jpg)
Figure 8: NLP events plotted with respect to diversity of participation (Y axis), mean travel distance (X axis) and number of publications (disc size).

# 5 Discussion

Our work covers the carbon cost and diversity gain associated with conferences in the ACL Anthology. We consider that it is timely to perform this analysis, given the shutdown of physical meetings brought on by the global COVID-19 pandemic, and we have focussed our analysis on conferences from before the pandemic began.

We have made a number of assumptions in our work. Most notably, we have assumed that only first authors travel from the location of their institution to the location of the conference (and back), without detours, via the easiest means of transport available to them. Our assumptions are consistent between events and, as such, our methodology gives a useful tool for comparing potential climate impact in the field of NLP and beyond.

![](images/567c142752c25d2874f4479e2a611c206e12138aaeb0f28654e7f08b2330a571.jpg)
Figure 9: Diversity of events held on each continent between 1965 and 2019. '@' refers to journals. Africa is not represented due to the lack of events there in the ACL Anthology.
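The distance computation itself is not detailed in the paper; assuming great-circle distances between the geocoded coordinates obtained in Section 3, the round trip of a single first author could be approximated with the haversine formula (function names and example coordinates are ours):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def round_trip_km(affiliation, venue):
    """Distance there and back for one first author, ignoring detours."""
    return 2 * haversine_km(*affiliation, *venue)

# Illustrative trip: Edinburgh to Melbourne (the ACL 2018 venue) and back.
edinburgh, melbourne = (55.95, -3.19), (-37.81, 144.96)
print(round(round_trip_km(edinburgh, melbourne)), "km")
```

Real itineraries with connecting flights are longer than the great-circle estimate, so figures computed this way are lower bounds.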
Figure 2 shows that whilst the diversity index grew consistently from 1970 to 2014, it has dropped since then, with 2020 having the lowest diversity index since 2008. We cannot explain the drop over this period without speculating; however, tracking this index will allow us to measure the change in diversity over the coming years.

Whereas previous work has claimed that non-physical venues promote diversity (Raby and Madden, 2021), our research broadens the picture, with Figure 8 demonstrating that whilst some events fall below the mean diversity index of online journals, many are above it; in particular, LREC and RANLP attract audiences from many countries. We chose not to make a direct comparison between in-person events and the pandemic-era online conferences of 2020 and 2021, since some events of the latter type were (at the point of submission) advertised as physical meetings, while others were in a hybrid format. However, extending our analysis to purely online and hybrid events is a clear direction for future work.

We were also able to quantify the carbon cost of travelling to physical events in terms of CO2-equivalent. Whilst this has unsurprisingly grown with the growth of the NLP field, the average carbon cost per paper has remained stable, indicating that gains in efficiency from better modes of transport are offset by increased travel distances. The total emissions in recent years have been as high as 6,000 tonnes of CO2-equivalent. It must also be noted that other activities of NLP research contribute to the total carbon cost generated by the field. For example, the carbon cost of all travel in a single year of NLP research equates to about 22 fully optimised transformer models trained from scratch (see Table 1). We must therefore address the carbon cost of the research itself, as well as the cost of flying to conferences.
Measuring the diversity impact of a conference happening in a certain place is not possible directly, since we cannot know who would participate if the event took place elsewhere. However, our data indicate a preference for local events, which is highest in low-income countries. Holding conferences across the globe allows researchers from diverse locations to attend an event without flying as far as in a scenario where all conferences were located in one region (as was the case in the early days of the ACL conferences). However, there is a cautionary tale in our data relating to the year 2018. In Figure 5, a large spike on the right-hand side corresponds to 2018, when a total of over 6 thousand tonnes of CO2-equivalent was attributed to conference travel. In that year the ACL annual meeting was held in Melbourne, Australia and LREC was held in Miyazaki, Japan. The effect is clear: researchers from Europe and North America, who usually attend these conferences, needed to travel further, increasing the emissions. Holding conferences in different locations will only lead to increased diversity if these events are advertised to and attended by a majority of people from the region in which they are held.

Our definition of the diversity index only takes into account the countries from which authors have attended, and does not measure other important factors of diversity (gender, race, economic status, native language, etc.). Whilst some of this information may be discernible from our data, most of it could only be discovered through author disclosure, which was not possible in our context. Reporting country-based diversity allows us to better understand the diversity of NLP research across the last 50 years.
Our work is designed as a focussed study of the ACL Anthology, and a similar analysis of broader scope (e.g., all computer science, or all science publications) would yield results allowing comparisons between disciplines. We were able to perform this analysis thanks to the ACL Anthology, which only covers papers in our field. Whilst other resources indexing AI and wider computer science, or even generic scientific literature, do exist (e.g., DBLP, Google Scholar, repositories such as OpenAire, event websites, etc.), these each have their own limitations, such as not including PDF links (only DOIs which point to journal websites), lacking a public API or covering only a subset of the literature. Event websites are a fruitful source for data mining, but each event has its own bespoke format and extracting data this way is slow.

We have attempted to give a view of the data that allows policy makers to make informed decisions on where the next NLP conference should be held. We have also made our data available to facilitate future research. Policy makers may wish to consider the high emissions impact of locating a conference in an area far away from the typical attendance base, and to weigh this against the potential diversity gain of locating a conference in a lower-wealth area. We expect that conference organisers will make different decisions based on the relative importance of the above factors to their communities.

# Acknowledgements

This work was supported by the Polish National Agency for Academic Exchange through a Polish Returns grant, number PPN/PPO/2018/1/00006.

# References

Titipat Achakulvisut, Tulakan Ruangrong, Isil Bilgin, Sofie Van Den Bossche, Brad Wyble, Dan FM Goodman, and Konrad P Kording. 2020. Point of view: Improving on legacy conferences by moving online. *Elife*, 9.
Wouter MJ Achten, Joana Almeida, and Bart Muys. 2013. Carbon footprint of science: More than flying. Ecological Indicators, 34:352-355.
Mark Anderson and Carlos Gómez-Rodríguez. 2020. Distilling neural networks for greener and faster dependency parsing. In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies, pages 2-13.
Miguel F Astudillo and Hessam AzariJafari. 2018. Estimating the global warming emissions of the LCA XVII conference: connecting flights matter. The International Journal of Life Cycle Assessment, 23(7):1512-1516.
Jutta Bolt and Jan Luiten van Zanden. 2020. Maddison style estimates of the evolution of the world economy. A new 2020 update. Technical report, Maddison Project.
Oliver Bossdorf, Madalin Parepa, and Markus Fischer. 2010. Climate-neutral ecology conferences: just do it! Trends in Ecology & Evolution, 25(2):61.
Qingqing Cao, Aruna Balasubramanian, and Niranjan Balasubramanian. 2020. Towards accurate and reliable energy measurement of NLP models. In Proceedings of SustainNLP: Workshop on Simple and Efficient Natural Language Processing, pages 141-148.
Konstantinos Chalvatzis and Peter L Ormosi. 2020. The carbon impact of flying to economics conferences: is flying more associated with more citations? Journal of Sustainable Tourism, 29(1):40-67.
Joachim Ciers, Aleksandra Mandic, Laszlo Daniel Toth, and Giel Op't Veld. 2019. Carbon footprint of academic air travel: A case study in Switzerland. Sustainability, 11(1):80.
Laure Cugniere, Diogo Veríssimo, Angeles Branas, and Guy Bigwood. 2020. From call to action: a roadmap to sustainable conferences. SocArXiv.
James Dwyer. 2013. On flying to ethics conferences: Climate change and moral responsiveness. IJFAB: International Journal of Feminist Approaches to Bioethics, 6(1):1-18.
Grant Faber. 2021. A framework to estimate emissions from virtual conferences. International Journal of Environmental Studies, 78(4):608-623.
Eva García-Martín, Crefeda Faviola Rodrigues, Graham Riley, and Håkan Grahn. 2019.
Estimation of energy consumption in machine learning. Journal of Parallel and Distributed Computing, 134:75-88.
Corrado Gini. 1912. Variabilità e mutabilità. Rome: Libreria Eredi Virgilio Veschi.
Peter Henderson, Jieru Hu, Joshua Romoff, Emma Brunskill, Dan Jurafsky, and Joelle Pineau. 2020. Towards the systematic reporting of the energy and carbon footprints of machine learning. Journal of Machine Learning Research, 21(248):1-43.
Matthew H Holden, Nathalie Butt, Alienor Chauvenet, Michaela Plein, Martin Stringer, and Iadine Chades. 2017. Academic conferences urgently need environmental policies. Nature Ecology & Evolution, 1(9):1211-1212.
Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spaCy: Industrial-strength Natural Language Processing in Python.
Sebastian Jäckle. 2019. We have to change! The carbon footprint of ECPR general conferences and ways to reduce it. European Political Science, 18(4):630-650.
Ruth Johnson, Andrada Fiscutean, and Serghei Mangul. 2020. Refining the conference experience for junior scientists in the wake of climate change. arXiv preprint arXiv:2002.12268.
Tom Kocmi and Ondrej Bojar. 2020. Efficiently reusing old models across languages via transfer learning. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, pages 19-28, Lisboa, Portugal. European Association for Machine Translation.
Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. 2019. Quantifying the carbon emissions of machine learning. arXiv preprint arXiv:1910.09700.
Lasse F. Wolff Anthony, Benjamin Kanding, and Raghavendra Selvan. 2020. Carbontracker: Tracking and predicting the carbon footprint of training deep learning models. In Proceedings of the ICML Workshop on Challenges in Deploying and Monitoring Machine Learning Systems. ICML.
Nils L. Lexerød and Tron Eid. 2006.
An evaluation of different diameter diversity indices based on criteria related to forest management planning. Forest Ecology and Management, 222(1-3):17-28.
Loet Leydesdorff, Caroline S. Wagner, and Lutz Bornmann. 2019. Interdisciplinarity as diversity in citation patterns among journals: Rao-Stirling diversity, relative variety, and the Gini coefficient. Journal of Informetrics, 13(1):255-269.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Jérôme Mariette, Odile Blanchard, Olivier Berné, and Tamara Ben Ari. 2021. An open-source tool to assess the carbon footprint of research. arXiv preprint arXiv:2101.10124.
Omar Mubin, Fady Alnajjar, Abdullah Shamail, Suleman Shahid, and Simeon Simoff. 2021. The new norm: Computer science conferences respond to Covid-19. Scientometrics, 126(2):1813-1827.
Sabrina Neugebauer, Maren Bolz, Rose Mankaa, and Marzia Traverso. 2020. How sustainable are sustainability conferences? Comprehensive life cycle assessment of an international conference series in Europe. Journal of Cleaner Production, 242:118516.
Dennis Ong, Tim Moors, and Vijay Sivaraman. 2012. Complete life-cycle assessment of the energy/CO2 costs of videoconferencing vs face-to-face meetings. In 2012 IEEE Online Conference on Green Communications (GreenCom), pages 50-55. IEEE.
Dennis Ong, Tim Moors, and Vijay Sivaraman. 2014. Comparison of the energy, carbon and time costs of videoconferencing and in-person meetings. Computer Communications, 50:86-94.
Benjamin C Pierce, Michael Hicks, Crista Lopes, and Jens Palsberg. 2020. Conferences in an era of expensive carbon. Communications of the ACM, 63(3):35-37.
Age Poom, Kati Orru, and Rein Ahas. 2017. The carbon footprint of business travel in the knowledge-intensive service sector.
Transportation Research Part D: Transport and Environment, 50:292-304.
Cassandra L Raby and Joan R Madden. 2021. Moving academic conferences online: Aids and barriers to delegate participation. Ecology and Evolution, 11(8):3646-3655.
David S Reay. 2003. Virtual solution to carbon cost of conferences. Nature, 424(6946):251.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. 2020. Green AI. Communications of the ACM, 63(12):54-63.
Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645-3650.
Timothy Waring, Mario Teisl, Eva Manandhar, and Mark Anderson. 2014. On the travel emissions of sustainability science research. Sustainability, 6(5):2718-2735.
Iwona E. Weidlich and Igor V. Filippov. 2016. Using the Gini coefficient to measure the chemical diversity of small-molecule libraries. Journal of Computational Chemistry, 37(22):2091-2097.
Fabian Wenner, Freke Caset, and Bart De Wit. 2019. Conference locations and sustainability aspirations: Towards an integrative framework? disP - The Planning Review, 55(1):34-51.
Xiyou Zhou, Zhiyu Chen, Xiaoyong Jin, and William Yang Wang. 2020. HULK: An energy efficiency benchmark platform for responsible natural language processing. arXiv preprint arXiv:2002.05829.

# A Values used in Calculations of Emissions per Passenger

Table 2 shows the kg of CO2-equivalent per passenger-kilometre used in our calculations to train a univariate linear regression model for historic prediction.

# B Conferences Analysed

To produce Figure 4, we selected specific conferences that we denoted as either local, regional or international.
Conferences were selected if they had a specific identifier in the ACL Anthology. The Python regular expressions used to match the identifiers, and the categorisation of each conference, are provided in Table 3. We also used these identifiers to produce the table of travel maps in the supplementary material.
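Applied in code, the Table 3 lookup can be sketched as follows (only a few illustrative patterns are reproduced, in a simplified form that assumes identifiers normalised to shapes such as "P19.1"; see Table 3 for the authors' full list):

```python
import re

# Simplified, illustrative patterns; not the authors' exact expressions.
CATEGORIES = {
    "International": [r"P\d\d\.1", r"D\d\d\.[123]", r"C\d\d\.1"],
    "Regional": [r"N\d\d\.1", r"L\d\d\.1", r"E\d\d\.1"],
    "Local": [r"R\d\d\.1", r"U\d\d\.1", r"Y\d\d\.1"],
}

def categorise(identifier):
    """Return the geographic band of an event identifier, or None."""
    for band, patterns in CATEGORIES.items():
        if any(re.fullmatch(p, identifier) for p in patterns):
            return band
    return None

print(categorise("P19.1"))  # International
```

Events matching no pattern are simply excluded from the banded comparison in Figure 4.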
| Mode of Transport | 2020 | 2019 | 2018 | 2017 | 2016 |
| --- | --- | --- | --- | --- | --- |
| Long-Haul Flight | 0.09994 | 0.10244 | 0.11131 | 0.1034 | 0.10035 |
| Short-Haul Flight | 0.08145 | 0.08291 | 0.08503 | 0.08432 | 0.08821 |
| Train Journey | 0.03659 | 0.04077 | 0.04383 | 0.04636 | |
Table 2: Carbon cost (kg of CO2-equivalent per passenger-kilometre) with respect to mode of transport and year.
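The univariate regression described in Section 4 can be sketched over these values; the banding thresholds follow the text, while the exact regression setup used by the authors is our assumption:

```python
def fit_line(xs, ys):
    """Ordinary least squares for a univariate linear model y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def band(distance_km):
    """Emission band of a one-way journey, per the thresholds in Section 4."""
    if distance_km < 500:
        return "train"
    return "short-haul" if distance_km <= 3700 else "long-haul"

# Long-haul kg CO2e per passenger-km, 2016-2020, from Table 2:
years = [2016, 2017, 2018, 2019, 2020]
factors = [0.10035, 0.1034, 0.11131, 0.10244, 0.09994]
a, b = fit_line(years, factors)
estimate_1998 = a * 1998 + b  # extrapolated historic factor
```

Because the fitted slope is slightly negative, extrapolating backwards yields higher per-kilometre factors for historic journeys, matching the observation in Section 4 that efficiency gains make older travel costlier per kilometre.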
| Event Name | ACL Anthology Identifiers | Categorisation |
| --- | --- | --- |
| ACL | r"P\d\d\.d", r"2020\.acl\.main" | International |
| EMNLP | r"D\d\d\.[123]", r"2020\.emnlp\.main" | International |
| COLING | r"C\d\d\.d", r"2020\.coling\.main" | International |
| CoNLL | r"K\d\d\.d", r"2020\.conll\.1" | International |
| NAACL | r"N\d\d\.d" | Regional |
| LREC | r"L\d\d\.d", r"2020\.lrec\.1" | Regional |
| EACL | r"E\d\d\.d" | Regional |
| IJCNLP | r"I\d\d\.d", "P15", "D19" | Regional |
| TALN | r"F\d\d\.d", "\d\d\d\d\.jeptalnrecital.*" | Local |
| RANLP | r"R\d\d\.d" | Local |
| ALTA | r"U\d\d\.d" | Local |
| PACLIC | r"Y\d\d\.d" | Local |
| ROCLING | r"O\d\d\.d" | Local |
| NoDaLiDa | r"W11\.46", r"W13\.56", r"W15\.18", r"W17\.2\$", r"W19\.61" | Local |
Table 3: Regular expressions used to match conferences.

# Using Pre-Trained Language Models for Producing Counter Narratives Against Hate Speech: a Comparative Study

Serra Sinem Tekiroğlu$^{2}$, Helena Bonaldi$^{1,2}$, Margherita Fanton$^{1,2*}$, Marco Guerini$^{2}$

$^{1}$ University of Trento, Italy
$^{2}$ Fondazione Bruno Kessler, Via Sommarive 18, Povo, Trento, Italy

tekiroglu@fbk.eu, hbonaldi@fbk.eu, margherita.fanton@ims.uni-stuttgart.de, guerini@fbk.eu

# Abstract

In this work, we present an extensive study on the use of pre-trained language models for the task of automatic Counter Narrative (CN) generation to fight online hate speech in English. We first present a comparative study to determine whether there is a particular Language Model (or class of LMs) and a particular decoding mechanism that are the most appropriate to generate CNs. Findings show that autoregressive models combined with stochastic decodings are the most promising. We then investigate how an LM performs in generating a CN with regard to an unseen target of hate. We find that a key element for successful 'out of target' experiments is not an overall similarity with the training data but the presence of a specific subset of training data, i.e. a target that shares some commonalities with the test target that can be defined a priori. We finally introduce the idea of a pipeline based on the addition of an automatic post-editing step to refine generated CNs.

# 1 Introduction

Hate Speech (HS) has found fertile ground in social media platforms. Actions undertaken by such platforms to tackle online hatred consist of identifying possible sources of hate and removing them by means of content deletion, account suspension or shadow-banning. However, these actions are often interpreted and denounced as censorship by the affected users and political groups (Myers West, 2018). For this reason, such restrictions can have the opposite effect of exacerbating the hostility of the haters (Munger, 2017). An alternative strategy, looming on the horizon, is based on the use of Counter Narratives.
CNs are "all communicative actions aimed at refuting hate speech through thoughtful and cogent reasons, and true and fact-bound arguments" (Schieb and Preuss, 2016). As a de-escalating measure, CNs have been proven to be successful in diminishing hate, while preserving the freedom of speech (Benesch, 2014; Gagliardone et al., 2015). An example of an $<HS, CN>$ pair is shown below:

HS: Women are basically childlike, they remain this way most of their lives. Soft and emotional. It has devastated our once great patriarchal civilizations.

CN: Without softness and emotions there would be just brutality and cruelty. Not all women are soft and emotional and many men have these characteristics. To perpetuate these socially constructed gender profiles maintains norms which oppress anybody.

Based on their effectiveness, CNs have started being employed by NGOs to counter online hate. Since it is impossible for NGO operators to manually write responses to all instances of hate, a line of NLP research has recently emerged, focusing on designing systems to automatically generate CN suggestions (Qian et al., 2019; Tekiroğlu et al., 2020; Fanton et al., 2021; Chung et al., 2021a; Zhu and Bhat, 2021). In this study, our main goal is to compare pre-trained language models (LMs) and decoding mechanisms in order to understand their pros and cons in generating CNs. Thus, we use various automatic metrics and manual evaluations with expert judgments to assess several LMs, representing the main categories of model architectures, and decoding methods. We further test the robustness of the fine-tuned LMs in generating CNs for an unseen target. Results show that autoregressive models are in general more suited for the task, and while stochastic decoding mechanisms can generate more novel, diverse, and informative outputs, deterministic decoding is useful in scenarios where more generic and less novel (yet 'safer') CNs are needed.
Furthermore, in out-of-target experiments we find that the similarity of targets (e.g. JEWS and MUSLIMS as religious groups) plays a crucial role in the effectiveness of portability to new targets. We finally show a promising research direction of leveraging gold human edits for building an additional automatic post-editing step to correct errors made by LMs during generation. To the best of our knowledge, this is the first study systematically analysing state-of-the-art pre-trained LMs in CN generation.

# 2 Related Work

In this section we first discuss standard approaches to hate countering and studies on CN effectiveness on Social Media Platforms, then the existing CN data collection and generation strategies.

Hate countering. NLP has started addressing the phenomenon of the proliferation of HS by creating datasets for automatic detection (Mathew et al., 2021; Cao et al., 2020; Kumar et al., 2018; Hosseinmardi et al., 2015; Waseem, 2016; Burnap and Williams, 2016). Several surveys provide a review of the existing approaches on the topic (Poletto et al., 2020; Schmidt and Wiegand, 2017; Fortuna and Nunes, 2018), also addressing the ethical challenges of the task (Kiritchenko et al., 2021). Still, automatic detection of HS presents some drawbacks (Vidgen and Derczynski, 2020). First of all, the datasets might include biases, and the models tend to replicate such biases (Binns et al., 2017; Davidson et al., 2019; Sap et al., 2019; Tsvetkov, 2020). Moreover, the end goals for which HS detection is employed are often denounced as censorship of the freedom of speech by concerned users (Munger, 2017; Myers West, 2018). In this scenario, NGOs have started employing CNs to counter online hate.
CNs have been shown to be effective in reducing linguistic violence (Benesch, 2014; Gagliardone et al., 2015; Schieb and Preuss, 2016; Silverman et al., 2016; Mathew et al., 2019); moreover, even if they might not influence the view of extremists, they are still effective in presenting alternative and non-hateful viewpoints to bystanders (Allison and Bussey, 2016; Anderson et al., 2014).

CN data collection. The existing studies for collecting CN datasets employ four main approaches. Crawling consists of automatically scraping websites, starting from an HS item and searching for possible CNs among the responses (Mathew et al., 2018, 2019). With crowdsourcing, CNs are written by non-expert paid workers as responses to provided hate content (Qian et al., 2019). Nichesourcing relies on a niche group of experts for data collection (De Boer et al., 2012), and it was employed by Chung et al. (2019) for CN collection using NGO operators. Hybrid approaches use a combination of LMs and humans to collect data (Wallace et al., 2019; Dinan et al., 2019; Vidgen et al., 2020). Studies on CN collection are presented in more detail by Tekiroglu et al. (2020); Fanton et al. (2021).

CN generation. Neural approaches to automatically generate CNs are beginning to be investigated. Fanton et al. (2021); Tekiroğlu et al. (2020); Qian et al. (2019) employ a mix of automatic and human intervention to generate CNs. Zhu and Bhat (2021) propose an entirely automated pipeline of candidate CN generation and filtering. Other lines of work include CN generation for under-resourced languages such as Italian (Chung et al., 2020), and the generation of knowledge-bound CNs, which allows the production of CNs based on grounded and up-to-date facts and plausible arguments, avoiding hallucination phenomena (Chung et al., 2021a). Instead, in our work we take a more foundational perspective, which is relevant for all the LM-based pipelines described above.
Therefore, we compare and assess various state-of-the-art pre-trained LMs in an end-to-end setting, which is developed as a downstream task for CN generation.

# 3 Methodology

In this section, we present the CN dataset, the language models, and the decoding mechanisms employed for our experiments.

# 3.1 Dataset for fine-tuning

For this study we rely on the dataset proposed by Fanton et al. (2021), which is the only available dataset that grants both the target diversity and the CN quality we aim for. The dataset was collected with a human-in-the-loop approach, by employing an autoregressive LM (GPT-2) paired with three expert human reviewers. It features 5003 $<HS, CN>$ pairs, covering several targets of hate including DISABLED, JEWS, LGBT+, MIGRANTS, MUSLIMS, POC, WOMEN. The residual categories are collapsed to the label OTHER. We partitioned the dataset into training, validation, and test sets with an $8:1:1$ ratio (i.e. 4003, 500 and 500 pairs), ensuring that all sets share the same target distribution, and that no HS is repeated across the sets.

# 3.2 Models

We experiment with 5 Transformer-based LMs (Vaswani et al., 2017) representing the main categories of model mechanisms: autoregressive, autoencoder, and seq2seq.

BERT. The Bidirectional Encoder Representations from Transformers was introduced by Devlin et al. (2019). It is a bidirectional autoencoder that can be adapted to text generation (Wang and Cho, 2019).

GPT-2. The Generative Pre-trained Transformer 2 is an autoregressive model built for text generation (Radford et al., 2019).

DialoGPT. The Dialogue Generative Pre-trained Transformer is an extension of GPT-2 specifically created for conversational response generation (Zhang et al., 2020).

BART. BART is a denoising autoencoder for pretraining seq2seq models (Lewis et al., 2020). The encoder-decoder architecture of BART is composed of a bidirectional encoder and an autoregressive decoder.

T5.
The Text-to-Text Transfer Transformer proposed by Raffel et al. (2020) is a seq2seq model with an encoder-decoder Transformer architecture.

While all the other models could be fine-tuned directly for the generation task, for BERT we warm-started an encoder-decoder model using BERT checkpoints, similar to the BERT2BERT model defined by Rothe et al. (2020). The fine-tuning details and hyperparameter settings can be found in Appendix A.1.

# 3.3 Decoding mechanisms

We utilize 4 decoding mechanisms: one deterministic (Beam Search) and three stochastic (Top-$k$, Top-$p$, and a combination of the two).

Beam Search (BS). The Beam Search algorithm is designed to pick the most-likely sequence (Li et al., 2016; Wiseman et al., 2017).

Top-$k$ ($\mathrm{Top}_k$). The sampling procedure proposed by Fan et al. (2018) selects a random word from the $k$ most probable ones, at each time step.

Top-$p$ ($\mathrm{Top}_p$). Also known as Nucleus Sampling, the parameter $p$ indicates the total probability mass for the pooled candidates, at each time step (Holtzman et al., 2020).

Combining Top-$p$ and Top-$k$ ($\mathrm{Top}_{pk}$). At the decoding stage, it is possible to combine the parameters $p$ and $k$. This is Nucleus Sampling constrained to the Top-$k$ most probable words.

In our experiments we used the following parameters as defaults: Beam Search with 5 beams and repetition penalty $= 2$; Top-$k$ with $k = 40$; Top-$p$ with $p = .92$; $\mathrm{Top}_{pk}$ with $k = 40$ and $p = .92$.

# 4 Evaluation metrics

We use several metrics to evaluate various aspects of CN generation.

Overlap metrics. These metrics depend on the $n$-gram similarity of the generated outputs to a set of reference texts in order to assess quality. We used our gold CNs as references and the CNs generated by the different models as candidates.
In particular, we employed three BLEU variants: BLEU-1 (B-1), BLEU-3 (B-3) and BLEU-4 (B-4) (Papineni et al., 2002), and ROUGE-L (ROU) (Lin, 2004).

Diversity metrics. They are used to measure how diverse and novel the produced CNs are. In particular, we utilized Repetition Rate (RR) to measure the repetitiveness across generated CNs, in terms of the average ratio of non-singleton $n$-grams present in the corpus (Bertoldi et al., 2013). It should be noted that RR is calculated as a corpus-based repetition score, i.e. inter-CN, instead of calculating intra-CN repetition of $n$-grams only. We also used Novelty (NOV) (Wang and Wan, 2018), based on Jaccard similarity, to compute the amount of novel content that is present in the generated CNs as compared to the training data.

Human evaluation metrics. Albeit more difficult to attain, human judgments provide a more reliable evaluation and a deeper understanding than automatic metrics (Belz and Reiter, 2006; Novikova et al., 2017). To this end, we specified the following dimensions for the evaluation of CNs. Suitableness (SUI): measures how suitable a CN is to the HS in terms of semantic relatedness and in terms of adherence to CN guidelines$^1$; Grammaticality (GRM): how grammatically correct a generated CN is; Specificity (SPE): how specific the arguments brought by the CN in response to the HS are; Choose-or-not (CHO): determines whether the annotators would select that CN to post-edit and use in a real case scenario, as in the setup presented by Chung et al. (2021b); Is-best (BEST): whether the CN is the absolute best among the ones generated for an HS (i.e. whether the annotators would pick exactly that CN if they had to use it in a real case scenario).

The first three dimensions are rated with a 5-point Likert scale and follow the evaluation procedure described by Chung et al. (2020), whereas both choose-or-not and is-best are binary ratings (0, 1).
Choose-or-not allows for multiple CNs to be selected for the same HS, while only one CN can be selected for is-best for each HS.

Toxicity.$^2$ It determines how "rude, disrespectful, or unreasonable" a text is. Toxicity has been employed both to detect the bias present in LMs (Gehman et al., 2020) and as a solution to mitigate such bias (Gehman et al., 2020; Xu et al., 2020).

Syntactic metrics. A high syntactic complexity can be used as a proxy for an LM's ability to generate complex arguments. We used the syntactic dependency parser of spaCy$^3$ for the task, focusing on the following measures: Maximum Syntactic Depth (MSD): the maximum depth among the dependency trees calculated over each sentence composing a CN. Average Syntactic Depth (ASD): the average depth of the sentences in each CN. Number of Sentences (NST): the number of sentences composing a CN.

# 5 Experiments

We performed two sets of experiments: first, we assessed how LMs perform in the task of generating CNs with different decoding mechanisms. Then, we selected the best model from the first round of experiments and tested its generalization capabilities when confronted with an unseen target of hate.

# 5.1 LMs and decoding experiments

For the first round of experiments, in order to avoid possible unfair assessments given by the open nature of the generative task (i.e. a highly suitable CN candidate could be scored low due to its difference from the single reference/gold CN), at test time we allowed the generation of several candidates for each HS+LM+decoding mechanism combination. We loosely drew inspiration from the Rank-$N$ Accuracy procedure and the 'generate, prune, select' procedure (Zhu and Bhat, 2021). In particular, given an LM and a decoding mechanism, we generated 5 CNs for each HS in the test set.

Automated evaluation and selection. We set up the automatic evaluation strategy as displayed in Figure 1.
First, we scored each CN with the overlap metrics presented in Section 4, using the gold CN as a reference. Next, we ranked the candidate CNs with respect to the overlap scores and computed the mean of the rankings. Then, we selected the best ones according to the following criteria:

$\mathbf{Best}_{\mathbf{LM}}$ selects, for each model, the single best CN for an HS among the 20 generated with the 4 decoding mechanisms.

$\mathbf{Best}_{\mathbf{D}}$ selects, for each decoding mechanism, the single best CN for an HS among the 25 generated with the 5 models.

$\mathbf{Best}_{\mathbf{LM} + \mathbf{D}}$ selects the single best CN among the 5 generated with each model-decoding combination. Moreover, we assessed the overall corpus-wise quality of the generated CNs with respect to the models, to the decoding mechanisms, and to the model-decoding combinations via the diversity metrics.

![](images/c91d3d64936c2366bde81e8a7d4ff99aa5c0050458659752ce799a841c433283.jpg)
Figure 1: Given an HS, 5 CNs are generated for each model-decoding combination. $\odot$ indicates the best CN per model $(\in \mathrm{Best}_{\mathrm{LM}})$. $\triangle$ indicates the best CN per decoding $(\in \mathrm{Best}_{\mathrm{D}})$. $\square$ indicates the best CN per model-decoding combination $(\in \mathrm{Best}_{\mathrm{LM} + \mathrm{D}})$.

Human evaluation on a sample. To perform the human evaluation we referred to the $\mathbf{Best}_{\mathbf{LM}}$ generations and sampled 200 instances from them. Each instance comprises an HS and 5 relevant CNs, each generated by a different model. We recruited 2 annotators who were trained extensively for the task following the procedure used by Fanton et al. (2021). The expert annotators were asked to evaluate the 5 CNs corresponding to the HS, according to the dimensions described in Section 4. We enriched the evaluation of this subset with the toxicity and the syntactic metrics.

# 5.2 Results of the first set of experiments

The results of the experiments on the LMs and the decoding mechanisms are reported in this section.$^4$
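The mean-of-ranks selection used for the Best* criteria of Section 5.1 can be sketched in a few lines of Python (an illustrative sketch, not the authors' code: `select_best` and its inputs are hypothetical names, and tie handling is an assumption):

```python
def select_best(candidates, metric_scores):
    """Pick the candidate whose mean rank across overlap metrics is lowest.

    `candidates` is a list of CN strings; `metric_scores[m]` holds one score
    per candidate for overlap metric m (higher score = better).
    """
    n = len(candidates)
    mean_rank = [0.0] * n
    for scores in metric_scores.values():
        # Rank candidates under this metric: rank 0 = highest score.
        order = sorted(range(n), key=lambda i: scores[i], reverse=True)
        for rank, i in enumerate(order):
            mean_rank[i] += rank / len(metric_scores)
    # Return the candidate with the best (lowest) mean rank.
    return candidates[min(range(n), key=lambda i: mean_rank[i])]
```

The same routine can be applied per model, per decoding mechanism, or per model-decoding combination to obtain each of the three selection pools.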
Best Model. The results of the comparison of the models on the $\mathbf{Best}_{\mathbf{LM}}$ generations can be found in Table 1. Regarding the overlap and diversity metrics, DialoGPT records the best or the second-best score in all the metrics, apart from novelty, where it still achieves a high score (0.643) close to the best performance (0.655). T5 also achieves high scores, especially on ROUGE, BLEU-1 and novelty.

BART, instead, is the best model according to the human evaluation metrics, apart from specificity. On the other hand, it shows poor performance in terms of diversity metrics, indicating that it tends to produce grammatical and suitable but very generic responses.

BERT records the worst scores for all the overlap and diversity metrics apart from novelty. However, it also achieves the best syntactic metric results. Therefore, it is evident that BERT's output is more complex, but very repetitive. The combination of these aspects eventually affects the clarity of BERT's output such that it yields poor results in the human evaluation, in particular for grammaticality (4.2, while other models are above 4.6). This poor grammaticality can also explain the syntactic scores, since the spaCy dependency parser was not trained to handle ungrammatical text and this could actually inflate the ASD and MSD scores.

GPT-2 overall yields very competitive results for several groups of metrics. It obtains the second-highest novelty score (0.653) and the best RR (7.736). It also achieves the second-best results on BLEU-3, maximum syntactic depth and number of sentences, and the best results on toxicity and specificity (2.880), indicating the ability to produce complex, suitable, focused and diverse CNs.

After the human evaluation we ran a qualitative interview with the annotators, whose feedback on the data strengthened the results we observed and the conclusions we drew. For instance, they reported the repetition of simple, yet catch-them-all, expressions (e.g.
"they are our brothers and sisters") regardless of the target. Further inspections found that those CNs were mainly produced by BERT, which is in line with BERT's RR score.

Best Decoding mechanism. The results calculated on the $\mathrm{Best_D}$ output are presented in Table 2. $\mathrm{Top}_k$ is the best-performing decoding mechanism, achieving the best results on the diversity metrics, BLEU-3 and BLEU-4. It is also the best-performing for specificity, maximum syntactic depth and number of sentences, and the second best for average syntactic depth and toxicity.

The other stochastic decoding mechanisms perform well too. $\mathrm{Top}_p$ yields competitive results on both diversity and overlap metrics; it is the second best for specificity, and achieves good results on the syntactic metrics. $\mathrm{Top}_{pk}$ has a good performance on the overlap metrics. It obtains the second-highest scores in most of the human evaluation metrics and the lowest in toxicity, and it reaches a reasonable specificity score.

On the other hand, BS does not achieve particularly good results, except for the ROUGE score. Even if it is the best decoding with respect to the human evaluation, this comes at the cost of specificity and diversity. Through a post-hoc manual analysis we observed that this was due to the deterministic nature of BS, which tends to choose the most probable sequences, i.e. the "safest", thus resulting in vague and repetitive outputs.

Best Model-Decoding combination. Here we briefly discuss the results of the evaluation obtained on the $\mathrm{Best}_{\mathrm{LM} + \mathrm{D}}$ generations. In particular, the autoregressive models GPT-2 and DialoGPT behave similarly with similar decoding mechanisms, such that BS outputs the best results for almost all the overlap metrics, and the worst for the diversity metrics. On the contrary, for the other models, the results achieved with stochastic decoding mechanisms are the best for the overlap metrics.
In almost all cases, we observe that the stochastic decoding mechanisms perform better on syntactic and diversity metrics and on toxicity, while for the human evaluation metrics BS tends to be the best, except for specificity. A detailed discussion can be found in Appendix A.2.

Discussion. In this set of experiments, we found that the autoregressive models perform the best according to a combination of several metrics that we deem particularly relevant (e.g. more novel, diverse, and informative outputs). Of course, more repetitive and conservative outputs can be preferred
| Model | ROU | B-1 | B-3 | B-4 | RR | NOV | Tox. | ASD | MSD | NST | SUI | SPE | GRM | CHO | BEST |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BART | 0.268 | 0.277 | 0.085 | 0.051 | 20.722 | 0.560 | 0.420 | 4.311 | 4.965 | 1.740 | 3.790 | 2.552 | 4.937 | 0.840 | 0.272 |
| BERT | 0.237 | 0.277 | 0.073 | 0.037 | 24.747 | 0.605 | 0.406 | 5.008 | 6.160 | 2.280 | 3.135 | 2.647 | 4.247 | 0.717 | 0.122 |
| T5 | 0.274 | 0.302 | 0.083 | 0.042 | 8.548 | 0.655 | 0.359 | 4.692 | 5.325 | 1.715 | 2.872 | 2.402 | 4.680 | 0.642 | 0.090 |
| DialoGPT | 0.273 | 0.304 | 0.093 | 0.052 | 8.248 | 0.643 | 0.343 | 4.677 | 5.575 | 1.895 | 3.392 | 2.755 | 4.880 | 0.767 | 0.245 |
| GPT-2 | 0.264 | 0.297 | 0.088 | 0.050 | 7.736 | 0.653 | 0.342 | 4.584 | 5.595 | 2.240 | 3.555 | 2.880 | 4.867 | 0.795 | 0.270 |
Table 1: Results for the overlap (ROU, B-1, B-3, B-4) and diversity (RR, NOV) metrics are calculated on the $\mathbf{Best}_{\mathbf{LM}}$ generations, while toxicity (Tox.), the syntactic metrics (ASD, MSD, NST) and the human evaluation (SUI, SPE, GRM, CHO, BEST) are calculated on the corresponding subset.
| Decoding | ROU | B-1 | B-3 | B-4 | RR | NOV | Tox. | ASD | MSD | NST | SUI | SPE | GRM | CHO | BEST | n (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BS | 0.287 | 0.299 | 0.096 | 0.059 | 21.579 | 0.561 | 0.398 | 4.415 | 5.048 | 1.684 | 3.936 | 2.497 | 4.925 | 0.826 | 0.222 | 18.7 |
| $\mathrm{Top}_{pk}$ | 0.287 | 0.320 | 0.106 | 0.059 | 11.404 | 0.639 | 0.352 | 4.676 | 5.488 | 1.932 | 3.324 | 2.647 | 4.688 | 0.764 | 0.212 | 29.3 |
| $\mathrm{Top}_k$ | 0.282 | 0.314 | 0.106 | 0.060 | 10.076 | 0.652 | 0.374 | 4.704 | 5.756 | 2.133 | 3.155 | 2.716 | 4.659 | 0.716 | 0.183 | 27.1 |
| $\mathrm{Top}_p$ | 0.285 | 0.319 | 0.105 | 0.060 | 11.270 | 0.640 | 0.381 | 4.753 | 5.671 | 2.068 | 3.149 | 2.687 | 4.681 | 0.723 | 0.189 | 24.9 |
Table 2: The results for the overlap and diversity metrics are calculated on the $\mathrm{Best_D}$ generations: for each decoding mechanism, there are 2500 CNs. The remaining metrics are calculated on a subset of 1000 CNs, whose distribution across decoding mechanisms is shown in the column "n".

when high precision of suitable CNs is required at the expense of being more generic and less novel. Still, regarding autoregressive models, it could be argued that the good performance of the GPT-2 model we fine-tuned is due to the fact that generated CNs and gold CNs derive from a similar distribution (GPT-2 was employed in the human-in-the-loop process used to create the reference dataset from Fanton et al. (2021)). While we recognize that this could partially explain the performance of our GPT-2 model, it does not explain the performance of DialoGPT, which is pre-trained on a completely different dataset. Therefore, we can reasonably conclude that autoregressive models are particularly suited for the task, regardless of the pre-training data.

With respect to the decoding mechanisms, we record high repetitiveness and low novelty for the deterministic decoding BS. Even if it reaches high scores in most of the human evaluation metrics, it fails to produce specific CNs, ending up generating suitable, yet generic responses. On the contrary, stochastic decoding mechanisms produce more novel and specific responses.

Example CNs generated in this set of experiments, along with some qualitative analysis, can be found in Appendix A.3.

# 5.3 Leave One Target Out experiments

In the second stage, we built a set of cross-domain experiments to capture the generalization capabilities of the best LM determined in the previous experiments. Specifically, we concentrate on assessing how much a pre-trained language model fine-tuned on a pool of hate targets can generalize to an unseen target.

Thus, for the out-of-target experiment we selected the LM that we deem the most prominent in order to reduce the number of LM configurations to compare. In particular, since we want to examine the generalization capability of the LM, the generation of novel CNs, in comparison to the training data, is given primary importance. Secondly, specificity is also crucial since it signifies the ability of the LM/decoding mechanism to generate accurate CNs and avoid vague yet suitable, catch-all CNs. In contrast, repetitiveness is an undesirable feature of CNs, as it signals the tendency of a model to produce less flexible content. Given these considerations, we chose to employ GPT-2 with $\mathrm{Top}_k$ decoding for the Leave One Target Out (LOTO) experiments since it is the configuration achieving the best trade-off amongst all the others.

This set of experiments is structured in 3 steps, replicated for each of the selected targets. We selected the targets with the highest number of examples (MUSLIMS, MIGRANTS, WOMEN, LGBT+ and JEWS) to have a sufficiently sized test set for each configuration.

First, we sampled from the Fanton et al. (2021) dataset 600 pairs for each LOTO target, in order to have a balanced setting. Additionally, POC and DISABLED were always kept in the training set, and we removed multi-target cases from OTHER. The resulting dataset consists of 3729 instances (further details are provided in Appendix A.4). Secondly, we fine-tuned 5 different configurations of the LM, and in each configuration one of the 5 LOTO targets is not present in the training data: LM-JEWS, LM-LGBT+, LM-MIGRANTS, LM-MUSLIMS and LM-WOMEN. Finally, we tested each LOTO model on the 600 HSs in the test set made of "left out" target examples. For instance, the model LM-JEWS is used for generating the CNs for the target JEWS, after being trained on $<HS, CN>$ data without any instances with the label JEWS.
We generated 5 CNs for each HS and selected the best CN according to the procedure described in Section 5.1. + +# Results of LOTO experiments + +We analyse the CNs generated with the LOTO models through overlap and diversity metrics (Table 3). We refer to Appendix A.4 for the comparison between RR calculated on the candidate CNs and the reference CNs of the Fanton et al. (2021) dataset. + +For all the targets we record higher novelty scores as compared to the previous experiments. Higher novelty ranges indicate that conditioning with new material (i.e. HS for the unseen targets) induces GPT-2 to produce new arguments. On the other hand, as expected, the overlap scores for LOTO are remarkably lower than those from the previous experiments (Table 3). Therefore, we can infer that generalizing to an unseen target is harder than generalizing to an unseen HS. + +
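The NOV scores discussed here follow the Jaccard-based definition of Section 4; a minimal word-level sketch (our simplified reading of the metric — the exact n-gram granularity and aggregation used in the paper are assumptions):

```python
def novelty(candidate: str, training_texts) -> float:
    """NOV-style score: 1 minus the maximum Jaccard similarity between the
    candidate CN's word set and the word set of any training text.
    Higher values mean more novel content with respect to training data."""
    cand = set(candidate.lower().split())
    if not cand:
        return 0.0
    best = 0.0
    for text in training_texts:
        ref = set(text.lower().split())
        union = cand | ref
        if union:
            # Jaccard similarity = |intersection| / |union|.
            best = max(best, len(cand & ref) / len(union))
    return 1.0 - best
```

Averaging this score over all generated CNs gives a corpus-level novelty figure comparable in spirit to the NOV column of Table 3.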
| LOTO Target | ROU | B-1 | B-3 | B-4 | RR | NOV |
|---|---|---|---|---|---|---|
| JEWS | 0.1609 | 0.1842 | 0.0134 | 0.0035 | 4.796 | 0.718 |
| LGBT+ | 0.1599 | 0.1828 | 0.0149 | 0.0055 | 4.620 | 0.718 |
| MIGRANTS | 0.1659 | 0.1915 | 0.0163 | 0.0038 | 4.707 | 0.720 |
| MUSLIMS | 0.1743 | 0.1934 | 0.0197 | 0.0059 | 5.314 | 0.712 |
| WOMEN | 0.1755 | 0.1988 | 0.0195 | 0.0068 | 4.632 | 0.729 |
Table 3: The overlap (ROU, B-1, B-3, B-4) and diversity (RR, NOV) metric scores for the various LOTO configurations.

We also found that the CNs generated in the LM-MUSLIMS and LM-WOMEN configurations obtain the highest overlap scores (Table 3). We hypothesize that the high scores can be explained by the presence of a target in the LOTO training data that is highly similar to the left-out one. To this end, we computed the novelty between each target subset of the training data and the LOTO test data for that configuration (see Appendix A.4 for details). The reference CNs for LM-MUSLIMS record the lowest novelty scores with respect to the JEWS subset of the training set (i.e. 0.761).

![](images/5b74241ee9bd2fa346b11a158cf3c5d3bc808ac886665febb40d559ca400fa06.jpg)

![](images/226c9bf5e96ae4956c9bcf896bbf2bf4cabf938a1a0f7edb06f5a56addbe148a.jpg)

![](images/52ec7784c04d949e4c0eb70a0da420d8a50f5370c8c03987ea0569fc0a896d22.jpg)
Figure 2: The correlation between the novelty of the reference CNs and overlap metrics: in each plot, the dots and the darker line correspond to the most influential target; the triangles and the lighter line correspond to the results calculated without it.

![](images/6b0dcb6ce459ce9ad4920b70bf6952e3ab2eae65d555780b1c094dcd76e512ce.jpg)

Thus, it can be interpreted as the most influential portion of training data for the target MUSLIMS. On the other hand, for LM-WOMEN the highest influence is recorded with the LGBT+ subset of the training data (i.e. 0.763). These results can be explained by the semantic similarity of the target MUSLIMS to JEWS, both being religious groups; and of WOMEN to LGBT+, both being related to gender issues.

As a complementary analysis, we consider two different computations of the reference CN novelty: with respect to the most influential target for each LOTO configuration, and with respect to the LOTO training data without the most influential target.
We computed the Pearson correlation between the overlap metrics and each of the two novelty computations. In Figure 2, we observe that removing the influential target from the training data strongly decreases the correlation with the overlap metrics (from an average of -0.889 to -0.416). Consequently, we can conclude that to obtain high overlap results in the LOTO experiments, it is necessary that the training data contains a target strongly connected to the left-out one. Most importantly, this connection is not arbitrarily decided but is based on an a-priori semantic similarity of the targets, as exemplified before.

Finally, we chose to generate also with the BS decoding mechanism, to use it as a baseline and compare it to the stochastic decoding mechanism ($\mathrm{Top}_k$). In particular, we computed the Pearson correlation between the novelty of the reference CNs and the novelty of the candidate CNs with respect to the corresponding training data (Figure 3). We can observe that for the BS generations the novelty of the candidate CNs is lower than for $\mathrm{Top}_k$ (0.67-0.74 vs. 0.75-0.77) and the correlation with the novelty of the reference is weaker (0.53 vs. 0.59). This confirms the lower generalization ability of the deterministic decoding mechanism (as compared to the stochastic one), which tends to produce generic and repetitive responses regardless of the semantic distances of the LOTO targets from the training data.

![](images/964300776e10302ce1b5314389e4f060a52d3c3fe5e67c9499428c11eca8baf5.jpg)
Figure 3: Reference and candidate CNs novelty, for Top-$k$ and BS LOTO generations.

![](images/d25aaf987bb09e34e40b4dff30b60ddef0a98f52e9c4ca72fc6bd50a018fc438.jpg)

# 6 Automatic Post-Editing

In the previous experiments we fine-tuned our models resorting to $<HS, CN>$ pairs alone. Still, the Fanton et al. (2021) dataset contains additional information that can be useful for our task, i.e. the original GPT-2 generations before undergoing human post-editing.
Thus, as a final experiment, we propose to further improve the CN generation by moving from an end-to-end framework to a two-stage pipeline, decoupling CN generation from its 'final refinement'. In particular, we propose the adoption of an Automatic Post-Editing (APE) stage in order to capture and utilize the nuances between the machine-generated CNs and their human post-edited versions. APE, which is used for automatically correcting errors made by machine translation (MT) systems before performing actual human post-editing, has been an important tool for MT (Knight and Chander, 1994; do Carmo et al., 2021). Considering its effectiveness in MT, we hypothesize that building a pipeline with CN generation and APE could alleviate the requirement of the final manual post-editing (Allen and Hogan, 2000; Chatterjee et al., 2019) to achieve better-constructed CNs.

To this end, we fine-tuned another instance of the GPT-2 medium model specifically for the post-editing task using the $<HS, CN_{or}, CN_{pe}>$ triplets$^5$, where $CN_{or}$ and $CN_{pe}$ denote the CNs originally generated by an LM and their human post-edited versions, respectively. The triplets were then filtered by removing those for which $CN_{or} = CN_{pe}$. More details about the experiment settings can be found in Appendix A.5.
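The triplet filtering and formatting can be sketched as follows (illustrative only: the prompt layout and the function name are our assumptions, not the paper's exact fine-tuning format):

```python
def build_ape_examples(triplets):
    """Turn (hs, cn_or, cn_pe) triplets into (prompt, target) pairs for
    fine-tuning a GPT-2 post-editor.

    Triplets whose original generation already equals the human post-edit
    carry no editing signal and are dropped, mirroring the filtering step
    described above.
    """
    examples = []
    for hs, cn_or, cn_pe in triplets:
        if cn_or.strip() == cn_pe.strip():
            continue  # CN_or == CN_pe: nothing was edited, skip
        prompt = f"HS: {hs}\nDraft CN: {cn_or}\nEdited CN:"
        examples.append((prompt, " " + cn_pe))
    return examples
```

At inference time the post-editor would be conditioned on the HS and a freshly generated draft CN, and asked to produce the refined version.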
| Data | $CN_{ape}$ | $CN_{or}$ | N/A |
| --- | --- | --- | --- |
| Fanton et al. (2021) | 26 | 14 | 60 |
| GPT-2 Topk | 37 | 19 | 44 |
Table 4: The human annotation results for the APE experiments, in terms of average preference percentages.

We conducted two human evaluations over the subsets of: i) the $CN_{or}$ of the Fanton et al. (2021) test samples, and ii) the CN outputs of the best model-decoding mechanism combination resulting from the first set of experiments, that yielded the top 50 Translation Error Rate (TER) (Snover et al., 2006) scores with respect to the $CN_{or}$. The two expert annotators were asked to state their preference between the two randomly ordered CNs, $CN_{or}$ and $CN_{ape}$ (the automatically post-edited output), for a given HS. The annotators were also allowed to declare a tie. The results, shown in Table 4, indicate that, although there are often ties and only a subset of the $CN_{or}$ is actually modified, when there is a preference it is predominantly in favour of the automatically post-edited versions over the GPT-2 generated CNs (26% vs. 14% for the test set, and 37% vs. 19% for the GPT-2 Topk generations, on average). Given these results, we believe that APE is a highly promising direction for increasing the efficacy of CN generation models, where generation quality and diversity are crucial and where obtaining or enlarging expert datasets to train better models is not simple.

# 7 Conclusion

In this work, we focus on automatic CN generation as a downstream task. First, we present a comparative study to determine the performance and peculiarities of several pre-trained LMs and decoding mechanisms. We observe that the best results overall (in terms of novelty and specificity) are achieved by the autoregressive models with stochastic decoding: GPT-2 with the $\mathrm{Top}_k$ decoding mechanism, and DialoGPT with the combination $\mathrm{Top}_{pk}$. At the same time, deterministic decoding can be used when more generic yet 'safer' CNs are preferred.
Then, we investigate the performance of LMs in zero-shot generation for unseen targets of hate. To this end, we fine-tuned five different versions of GPT-2, leaving out the examples pertaining to one target at each turn. We find that for each configuration there is a subset of the training data that is more influential with respect to the generated data (i.e., a target that shares some commonalities with the test target and that can be defined a-priori). Finally, we introduce an experiment that trains an automatic post-editing module to further improve CN generation quality. The notable human evaluation results pave the way for a promising future direction that decouples CN generation from its 'final refinement'.

# Ethical Considerations

Although tackling online hatred through CNs inherently protects freedom of speech and has been proposed as a better alternative to detect-remove-ban approaches, automating CN generation can still raise ethical concerns, and some measures must be taken to avoid undesired effects during research. We address the relevant ethical considerations and our remedies as follows:

Annotation Guidelines: The well-being of the annotators was our top priority during the whole study. Therefore, we strictly followed the guidelines created for CN studies (Fanton et al., 2021), which were adapted from Vidgen et al. (2019). The human evaluations were conducted with the help of two expert CN annotators. These experts were already trained for the CN generation task and employed for the work presented by Fanton et al. (2021). We further instructed them in the aims of each experiment, clearly explained the evaluation tasks, and then exemplified proper evaluation of $\langle HS, CN\rangle$ pairs using various types of CNs. Most importantly, we limited their exposure to hateful content by imposing a daily time limit on annotation.
Concerning the demographics, due to the harmful content that can be found in the data, all annotators were adult volunteers, fully aware of the objective of the study.

Dataset. We purposefully chose an expert-based dataset in order to avoid the risk of modeling the language of real individuals, so as to (i) prevent any privacy issue, and (ii) avoid modeling inappropriate CNs (e.g., containing abusive language) that could be produced by non-experts. The dataset also focuses on CN diversity while keeping the HSs as stereotypical as possible, so that our CN generation models have very limited diversity in hateful language, nearly precluding misuse.

Computational Task. CN generation models are not meant to be used autonomously, since even the best models can still produce substandard CNs containing inappropriate or negative language. Instead, following a human-computer cooperation paradigm, our focus is on building models that can help NGO operators by providing them with diverse and novel CN candidates for their hate countering activities, speeding up manual CN writing to a certain extent. This approach also motivates some of the measures we used during evaluation (namely choose-or-not and is-best).

Model Distribution. In addition to the limited and simplified hateful content in the dataset we selected, we further reduce the risk of misuse through a specific distribution strategy: i) we only make available the non-autoregressive models, in order to eliminate the risk of using over-generation for hate speech creation; ii) we distribute such models only for research purposes and through a request-based procedure, in order to keep track of possible users.

# References

Jeffrey Allen and Christopher Hogan. 2000. Toward the development of a post editing module for raw machine translation output: A controlled language perspective. In Third International Controlled Language Applications Workshop (CLAW-00), pages 62-71.
Kimberley R Allison and Kay Bussey. 2016. Cyberbystanding in context: A review of the literature on witnesses' responses to cyberbullying. Children and Youth Services Review, 65:183-194.
Jenn Anderson, Mary Bresnahan, and Catherine Musatics. 2014. Combating weight-based cyberbullying on Facebook with the dissenter effect. Cyberpsychology, Behavior, and Social Networking, 17(5):281-286.
Anja Belz and Ehud Reiter. 2006. Comparing automatic and human evaluation of NLG systems. In 11th Conference of the European Chapter of the Association for Computational Linguistics, pages 313-320.
Susan Benesch. 2014. Countering dangerous speech: New ideas for genocide prevention. Washington, DC: United States Holocaust Memorial Museum.
Nicola Bertoldi, Mauro Cettolo, and Marcello Federico. 2013. Cache-based online adaptation for machine translation enhanced computer assisted translation. In MT-Summit, pages 35-42.
Reuben Binns, Michael Veale, Max Van Kleek, and Nigel Shadbolt. 2017. Like trainer, like bot? Inheritance of bias in algorithmic content moderation. In Social Informatics, pages 405-415, Cham. Springer International Publishing.
Pete Burnap and Matthew L Williams. 2016. Us and them: identifying cyber hate on twitter across multiple protected characteristics. *EPJ Data Science*, 5(1):11.
Rui Cao, Roy Ka-Wei Lee, and Tuan-Anh Hoang. 2020. DeepHate: Hate speech detection via multi-faceted text representations. In 12th ACM Conference on Web Science, pages 11-20.
Félix do Carmo, Dimitar Shterionov, Joss Moorkens, Joachim Wagner, Murhaf Hossari, Eric Paquin, Dag Schmidtke, Declan Groves, and Andy Way. 2021. A review of the state-of-the-art in automatic post-editing. Machine Translation, 35(2):101-143.
Rajen Chatterjee, Christian Federmann, Matteo Negri, and Marco Turchi. 2019. Findings of the WMT 2019 shared task on automatic post-editing. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 11-28.
+Yi-Ling Chung, Elizaveta Kuzmenko, Serra Sinem Tekiroglu, and Marco Guerini. 2019. CONAN - COunter NArratives through nichesourcing: a multilingual dataset of responses to fight online hate speech. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2819-2829, Florence, Italy. Association for Computational Linguistics. +Yi-Ling Chung, Serra Sinem Tekiroglu, and Marco Guerini. 2020. Italian counter narrative generation to fight online hate speech. In Proceedings of the Seventh Italian Conference on Computational Linguistics CLiC-it. +Yi-Ling Chung, Serra Sinem Tekiroglu, and Marco Guerini. 2021a. Towards knowledge-grounded counter narrative generation for hate speech. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 899-914, Online. Association for Computational Linguistics. +Yi-Ling Chung, Serra Sinem Tekiroglu, Sara Tonelli, and Marco Guerini. 2021b. Empowering ngos in countering online hate messages. Online Social Networks and Media, 24:100150. + +Thomas Davidson, Debasmita Bhattacharya, and Ingmar Weber. 2019. Racial bias in hate speech and abusive language detection datasets. In Proceedings of the Third Workshop on Abusive Language Online, pages 25-35. +Victor De Boer, Michiel Hildebrand, Lora Aroyo, Pieter De Leenheer, Chris Dijkshoorn, Binyam Tesfa, and Guus Schreiber. 2012. Nichesourcing: harnessing the power of crowds of experts. In International Conference on Knowledge Engineering and Knowledge Management, pages 16-20. Springer. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186. +Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. 
Build it break it fix it for dialogue safety: Robustness from adversarial human attack. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4537-4546. +Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898, Melbourne, Australia. Association for Computational Linguistics. +Margherita Fanton, Helena Bonaldi, Serra Sinem Tekiroglu, and Marco Guerini. 2021. Human-in-the-loop for data collection: a multi-target counter narrative dataset to fight online hate speech. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, Online. Association for Computational Linguistics. +Paula Fortuna and Sérgio Nunes. 2018. A survey on automatic detection of hate speech in text. volume 51, page 85. ACM. +Iginio Gagliardone, Danit Gal, Thiago Alves, and Gabriela Martinez. 2015. Countering online hate speech. Unesco Publishing. +Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A Smith. 2020. Realtoxicityprompts: Evaluating neural toxic degeneration in language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 3356-3369. +Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. + +Homa Hosseinmardi, Sabrina Arredondo Mattson, Rahat Ibn Rafiq, Richard Han, Qin Lv, and Shivakant Mishra. 2015. Detection of cyberbullying incidents on the instagram social network. arXiv preprint arXiv:1503.03909. +Svetlana Kiritchenko, Isar Nejadgholi, and Kathleen C Fraser. 2021. 
Confronting abusive language online: A survey from the ethical and human rights perspective. Journal of Artificial Intelligence Research, 71:431-478. +Kevin Knight and Ishwar Chander. 1994. Automated postediting of documents. In AAAI, volume 94, pages 779-784. +Ritesh Kumar, Atul Kr Ojha, Shervin Malmasi, and Marcos Zampieri. 2018. Benchmarking aggression identification in social media. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), pages 1-11. +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics. +Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016. Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202, Austin, Texas. Association for Computational Linguistics. +Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81. +Binny Mathew, Navish Kumar, Pawan Goyal, Animesh Mukherjee, et al. 2018. Analyzing the hate and counter speech accounts on twitter. arXiv preprint arXiv:1812.02712. +Binny Mathew, Punyajoy Saha, Hardik Tharad, Subham Rajgaria, Prajwal Singhania, Suman Kalyan Maity, Pawan Goyal, and Animesh Mukherjee. 2019. Thou shalt not hate: Countering online hate speech. In Proceedings of the International AAAI Conference on Web and Social Media, volume 13, pages 369-380. +Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. 2021. Hatexplain: A benchmark dataset for explainable hate speech detection. 
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14867-14875. +Kevin Munger. 2017. Tweetment effects on the tweeted: Experimentally reducing racist harassment. Political Behavior, 39(3):629-649. + +Sarah Myers West. 2018. Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society, 20(11):4366-4383. +Jekaterina Novikova, Ondrej Dusek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for nlg. In 2017 Conference on Empirical Methods in Natural Language Processing, pages 2231-2242. Association for Computational Linguistics. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311-318. Association for Computational Linguistics. +Fabio Poletto, Valerio Basile, Manuela Sanguinetti, Cristina Bosco, and Viviana Patti. 2020. Resources and benchmark corpora for hate speech detection: a systematic review. Language Resources and Evaluation, pages 1-47. +Jing Qian, Anna Bethke, Yinyin Liu, Elizabeth Belding, and William Yang Wang. 2019. A benchmark dataset for learning to intervene in online hate speech. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4757-4766, Hong Kong, China. Association for Computational Linguistics. +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. 
Journal of Machine Learning Research, 21(140):1-67.
Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. 2020. Leveraging pre-trained checkpoints for sequence generation tasks. Transactions of the Association for Computational Linguistics, 8:264-280.
Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1668-1678.
Carla Schieb and Mike Preuss. 2016. Governing hate speech by means of counterspeech on Facebook. In 66th ICA Annual Conference, at Fukuoka, Japan, pages 1-23.
Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language processing. In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media, pages 1-10.
Tanya Silverman, Christopher J Stewart, Jonathan Birdwell, and Zahed Amanullah. 2016. The impact of counter-narratives. Institute for Strategic Dialogue, London. https://www.strategicdialogue.org/wp-content/uploads/2016/08/Impact-of-Counter-Narratives_ONLINE.pdf-73.
Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of Association for Machine Translation in the Americas, volume 200, 6. Cambridge, MA.
Serra Sinem Tekiroğlu, Yi-Ling Chung, and Marco Guerini. 2020. Generating counter narratives against online hate speech: Data and strategies. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1177-1190, Online. Association for Computational Linguistics.
Mengzhou Xia, Anjalie Field, and Yulia Tsvetkov. 2020. Demoting racial bias in hate speech detection. SocialNLP 2020, page 7.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need.
In Advances in neural information processing systems, pages 5998-6008. +Bertie Vidgen and Leon Derczynski. 2020. Directions in abusive language training data, a systematic review: Garbage in, garbage out. Plos one, 15(12):e0243300. +Bertie Vidgen, Alex Harris, Dong Nguyen, Rebekah Tromble, Scott Hale, and Helen Margetts. 2019. Challenges and frontiers in abusive content detection. In Proceedings of the third workshop on abusive language online, pages 80-93. +Bertie Vidgen, Tristan Thrush, Zeerak Waseem, and Douwe Kiela. 2020. Learning from the worst: Dynamically generated datasets to improve online hate detection. arXiv preprint arXiv:2012.15761. +Eric Wallace, Pedro Rodriguez, Shi Feng, Ikuya Yamada, and Jordan Boyd-Graber. 2019. Trick me if you can: Human-in-the-loop generation of adversarial question answering examples. Transactions of the Association for Computational Linguistics, 7(0):387-401. +Alex Wang and Kyunghyun Cho. 2019. BERT has a mouth, and it must speak: BERT as a Markov random field language model. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 30-36, Minneapolis, Minnesota. Association for Computational Linguistics. + +Ke Wang and Xiaojun Wan. 2018. Sentigan: Generating sentimental texts via mixture adversarial networks. In *IJCAI*, pages 4446-4452. +Zeerak Waseem. 2016. Are you a racist or am i seeing things? annotator influence on hate speech detection on twitter. In Proceedings of the first workshop on NLP and computational social science, pages 138-142. +Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2253-2263, Copenhagen, Denmark. Association for Computational Linguistics. +Jing Xu, Da Ju, Margaret Li, Y-Lan Boureau, Jason Weston, and Emily Dinan. 2020. Recipes for safety in open-domain chatbots. arXiv e-prints, pages arXiv-2010. 
Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. DIALOGPT: Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 270-278, Online. Association for Computational Linguistics.
Wanzheng Zhu and Suma Bhat. 2021. Generate, prune, select: A pipeline for counterspeech generation against online hate speech. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 134-149.

# A Appendix

# A.1 Fine-tuning details

Table 5 summarizes the training details of each model employed in the first set of experiments.
| Model | BA | EP | PAR | LR | PER | TL | EL |
| --- | --- | --- | --- | --- | --- | --- | --- |
| BART (base) | 4 | 4 | 139 M | 2E-05 | 24.659 | 2.358 | 2.417 |
| BERT Seq2Seq (base) | 4 | 3 | 247 M | 3E-05 | 11.209 | 2.845 | 3.205 |
| T5 (base) | 2 | 3 | 223 M | 5E-05 | 10.9248 | 2.412 | 3.205 |
| DialoGPT (medium) | 4 | 2 | 355 M | 5E-05 | 6.085 | 1.425 | 1.806 |
| GPT-2 (medium) | 2 | 2 | 355 M | 5E-05 | 8.929 | 1.320 | 2.189 |
Table 5: The training details for all the models employed in the first set of experiments: batch size (BA), number of training epochs (EP), parameters (PAR), learning rate (LR), perplexity (PER), and training and evaluation loss (TL and EL).

Since LM sizes are very different for each model, and since our main focus is not studying performance as a function of LM size, as a rule of thumb we chose one version smaller than the large version of each model, provided that they all have the same order of magnitude. This corresponds to the medium versions for both DialoGPT and GPT-2, and the base versions for the other models. GPT-2 and DialoGPT achieve the lowest perplexity, training, and evaluation loss, indicating a slightly more successful fine-tuning, which is reflected in the evaluations throughout the study.

We conducted a hyper-parameter search during the training phase of each model using the search space: learning rate: $\{1e - 5, 2e - 5, 3e - 5, 4e - 5, 5e - 5\}$; warm-up ratio: $\{0, 0.1\}$; batch size: $\{2, 4\}$; epochs: $\{2, 3, 4, 5\}$. The search was conducted using Optuna with 10 trials, minimizing the evaluation loss during training.

# A.2 Best models-decoding combination

Here we discuss the results for the overlap and diversity metrics obtained on the $\mathrm{Best}_{\mathrm{LM} + \mathrm{D}}$ generations (Table 6), and those calculated on the human evaluation subset (Tables 7 and 8).

BART. BART performs well with the stochastic decoding methods, in particular: $\mathrm{Top}_p$ for overlap, diversity, syntactic metrics, and grammaticality; $\mathrm{Top}_k$ for overlap metrics and toxicity; whereas $\mathrm{Top}_{pk}$ is the best decoding approach on human evaluation and RR, and the second best on ROUGE and BLEU-1. In contrast, BART does not achieve good results with deterministic approaches (i.e., BS).
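The hyper-parameter search described in Appendix A.1 used Optuna; a self-contained random-search analogue over the same space can be sketched as follows, where `eval_loss` is a placeholder for fine-tuning a model with a configuration and returning its evaluation loss:

```python
import random

# Search space from Appendix A.1.
SPACE = {
    "learning_rate": [1e-5, 2e-5, 3e-5, 4e-5, 5e-5],
    "warmup_ratio": [0, 0.1],
    "batch_size": [2, 4],
    "epochs": [2, 3, 4, 5],
}

def eval_loss(cfg):
    # Placeholder objective: in the paper this would fine-tune the LM with cfg
    # and return the evaluation loss; here it is a deterministic dummy value.
    return random.Random(str(sorted(cfg.items()))).random()

def random_search(space, n_trials=10, seed=0):
    # Draw n_trials random configurations and keep the one minimizing the loss
    # (the paper ran Optuna with 10 trials, also minimizing evaluation loss).
    rng = random.Random(seed)
    trials = [{k: rng.choice(v) for k, v in space.items()} for _ in range(n_trials)]
    return min(trials, key=eval_loss)

best = random_search(SPACE)
```

Optuna's samplers explore the space more cleverly than plain random draws, but with 10 trials over a small discrete space the two behave similarly.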
*Overlap metrics: ROU, B-1, B-3, B-4. Diversity metrics: RR, NOV.*

| Model | ROU | B-1 | B-3 | B-4 | RR | NOV |
| --- | --- | --- | --- | --- | --- | --- |
| BART BS | 0.2108 | 0.2129 | 0.0486 | 0.0283 | 21.1102 | 0.5692 |
| BART Toppk | 0.2331 | 0.2300 | 0.0605 | 0.0365 | 20.2645 | 0.5567 |
| BART Topk | 0.2349 | 0.2333 | 0.0652 | 0.0385 | 20.6587 | 0.5575 |
| BART Topp | 0.2329 | 0.2300 | 0.0621 | 0.0374 | 20.5476 | 0.5586 |
| BERT BS | 0.1735 | 0.2108 | 0.0249 | 0.0113 | 38.0349 | 0.5864 |
| BERT Toppk | 0.2034 | 0.2311 | 0.0484 | 0.0231 | 23.4417 | 0.6098 |
| BERT Topk | 0.2032 | 0.2320 | 0.0483 | 0.0229 | 22.2546 | 0.6129 |
| BERT Topp | 0.2044 | 0.2366 | 0.0500 | 0.0244 | 23.6447 | 0.6098 |
| T5 BS | 0.2144 | 0.2007 | 0.0409 | 0.0207 | 21.5518 | 0.5827 |
| T5 Toppk | 0.2236 | 0.2454 | 0.0466 | 0.0228 | 7.2996 | 0.6715 |
| T5 Topk | 0.2076 | 0.2384 | 0.0376 | 0.0136 | 5.3002 | 0.6922 |
| T5 Topp | 0.2159 | 0.2390 | 0.0430 | 0.0184 | 6.8353 | 0.6743 |
| DialoGPT BS | 0.2192 | 0.2272 | 0.0528 | 0.0312 | 21.6800 | 0.5280 |
| DialoGPT Toppk | 0.2132 | 0.2444 | 0.0437 | 0.0201 | 6.4158 | 0.6737 |
| DialoGPT Topk | 0.2023 | 0.2302 | 0.0320 | 0.0134 | 4.7278 | 0.6956 |
| DialoGPT Topp | 0.2093 | 0.2397 | 0.0385 | 0.0159 | 6.1472 | 0.6740 |
| GPT-2 BS | 0.2195 | 0.2132 | 0.0516 | 0.0313 | 23.0605 | 0.5402 |
| GPT-2 Toppk | 0.2055 | 0.2342 | 0.0384 | 0.0173 | 6.5899 | 0.6832 |
| GPT-2 Topk | 0.1956 | 0.2271 | 0.0345 | 0.0153 | 4.7624 | 0.7022 |
| GPT-2 Topp | 0.2014 | 0.2329 | 0.0388 | 0.0177 | 6.1944 | 0.6846 |
Table 6: The results computed on the $\mathrm{Best}_{\mathrm{M} + \mathrm{D}}$ generations (2500 CNs for each model-decoding mechanism combination).

BERT. With BS, BERT achieves the best or second-best result on all human evaluation metrics, except for specificity. For BERT the best decoding is $\mathrm{Top}_p$: it is the best performing on overlap metrics and the second best for novelty, and it achieves good results both on syntactic metrics and on human evaluation.

T5. $\mathrm{Top}_{pk}$ is the best decoding mechanism: it records the best results for overlap metrics and toxicity, and good results on syntactic and human evaluation metrics. As for $\mathrm{Top}_k$, it is the best for diversity, while $\mathrm{Top}_p$ is good on the syntactic metrics. BS achieves good results on human evaluation, except for specificity and is-best.

GPT-2. With $\mathrm{Top}_{pk}$, GPT-2 performs well on ROUGE, BLEU-1, suitability, grammaticality, and choose-or-not. With $\mathrm{Top}_p$, GPT-2 records the second-best result on BLEU scores and diversity metrics. With BS the model has the best performance on overlap metrics (except BLEU-1), and on suitability, grammaticality, and choose-or-not, but also the worst results on diversity metrics. Above all, $\mathrm{Top}_k$ is the decoding achieving the best compromise, reaching the best results for the diversity metrics, with a superior specificity score (3.15) corroborated by good performance on the other human evaluation metrics.

DialoGPT. $\mathrm{Top}_k$ performs best on diversity metrics and specificity; it records the second high-
| Model | Toxicity | ASD | MSD | NST | n |
| --- | --- | --- | --- | --- | --- |
| BART BS | 0.4870 | 3.8919 | 4.6757 | 1.8919 | 37 |
| BART Toppk | 0.3911 | 4.3592 | 4.9483 | 1.6207 | 58 |
| BART Topk | 0.4021 | 4.3798 | 5.0656 | 1.7377 | 61 |
| BART Topp | 0.4263 | 4.5038 | 5.0909 | 1.7727 | 44 |
| BERT BS | 0.3954 | 4.5556 | 5.3750 | 1.9167 | 24 |
| BERT Toppk | 0.4026 | 5.2299 | 6.2069 | 2.1379 | 58 |
| BERT Topk | 0.4157 | 4.8969 | 6.2969 | 2.5625 | 64 |
| BERT Topp | 0.4032 | 5.1019 | 6.2963 | 2.2593 | 54 |
| T5 BS | 0.4127 | 4.4844 | 4.6562 | 1.3438 | 32 |
| T5 Toppk | 0.3211 | 4.7754 | 5.3768 | 1.7826 | 69 |
| T5 Topk | 0.3441 | 4.6767 | 5.4200 | 1.7400 | 50 |
| T5 Topp | 0.3934 | 4.7245 | 5.5918 | 1.8367 | 49 |
| DialoGPT BS | 0.3635 | 4.2340 | 5.1277 | 1.8723 | 47 |
| DialoGPT Toppk | 0.3361 | 4.7264 | 5.5094 | 1.7547 | 53 |
| DialoGPT Topk | 0.3482 | 4.9333 | 6.1778 | 2.0000 | 45 |
| DialoGPT Topp | 0.3274 | 4.7970 | 5.5273 | 1.9636 | 55 |
| GPT-2 BS | 0.3540 | 4.8901 | 5.3617 | 1.4468 | 47 |
| GPT-2 Toppk | 0.3119 | 4.2530 | 5.4182 | 2.4000 | 55 |
| GPT-2 Topk | 0.3416 | 4.6771 | 5.8627 | 2.5686 | 51 |
| GPT-2 Topp | 0.3659 | 4.5663 | 5.7447 | 2.4894 | 47 |
est score on grammaticality. $\mathrm{Top}_p$ has the second-best result on diversity metrics and BLEU scores. BS is the best on overlap metrics (except BLEU-1), and also on almost all human evaluation metrics, while it is the worst on specificity and on diversity metrics. $\mathrm{Top}_{pk}$ is the one working best with DialoGPT, since it reaches very good scores with human and overlap metrics, and this does not invalidate diversity, for which it ranks 3rd out of 4.

Table 7: The results of the toxicity and the syntactic metrics calculated on the subset employed for the human evaluation, grouped by each combination of model and decoding mechanism. The size of each group is shown in the column "n".

# A.3 Examples of the generations

Hate speech: Islam is a violent religion that promotes terror and war.

DialoGPT - BS: The vast majority of Muslims in the world are peaceful and friendly people. I don't see how you can say such a thing about them.

BART - Topk: The vast majority of Muslims are peaceful people who have lived in peace and harmony for many years. How can you say this about a religion that promotes terror and war?

BERT - Topk: islam is a religion of peace, and it is not a peaceful religion of violence. It is about peace and understanding of the world's problems, and peace is about bringing peace and peace.
| Model | SUI | SPE | GRM | CHO | BEST | n |
| --- | --- | --- | --- | --- | --- | --- |
| BART BS | 3.7568 | 2.5270 | 4.9459 | 0.8108 | 0.2297 | 37 |
| BART Toppk | 3.7931 | 2.6121 | 4.9483 | 0.8534 | 0.3707 | 58 |
| BART Topk | 3.9672 | 2.5410 | 4.9016 | 0.8607 | 0.2951 | 61 |
| BART Topp | 3.5682 | 2.5114 | 4.9659 | 0.8182 | 0.1477 | 44 |
| BERT BS | 3.5208 | 2.5208 | 4.7917 | 0.7708 | 0.1250 | 24 |
| BERT Toppk | 3.1810 | 2.5776 | 4.2328 | 0.7155 | 0.1121 | 58 |
| BERT Topk | 3.0312 | 2.7031 | 4.1562 | 0.6797 | 0.1016 | 64 |
| BERT Topp | 3.0370 | 2.7130 | 4.1296 | 0.7407 | 0.1574 | 54 |
| T5 BS | 3.5781 | 2.2812 | 4.8438 | 0.7656 | 0.0781 | 32 |
| T5 Toppk | 2.8841 | 2.4928 | 4.5870 | 0.6667 | 0.1014 | 69 |
| T5 Topk | 2.4600 | 2.3200 | 4.6400 | 0.5600 | 0.0500 | 50 |
| T5 Topp | 2.8163 | 2.4388 | 4.7449 | 0.6122 | 0.1224 | 49 |
| DialoGPT BS | 4.1596 | 2.6064 | 4.9894 | 0.8511 | 0.3085 | 47 |
| DialoGPT Toppk | 3.3679 | 2.8019 | 4.8396 | 0.7830 | 0.2736 | 53 |
| DialoGPT Topk | 3.1333 | 2.9222 | 4.8556 | 0.7333 | 0.2111 | 45 |
| DialoGPT Topp | 2.9727 | 2.7000 | 4.8455 | 0.7091 | 0.1909 | 55 |
| GPT-2 BS | 4.3085 | 2.5000 | 4.9681 | 0.8830 | 0.2766 | 47 |
| GPT-2 Toppk | 3.4909 | 2.8000 | 4.8727 | 0.8273 | 0.2273 | 55 |
| GPT-2 Topk | 3.0392 | 3.1471 | 4.8431 | 0.7255 | 0.2549 | 51 |
| GPT-2 Topp | 3.4362 | 3.0638 | 4.7872 | 0.7447 | 0.3298 | 47 |
Table 8: For each model-decoding mechanism combination, the results of the metrics for the human evaluations. The size of each combination is shown in the column "n".

T5 - Topp: The vast majority of Muslims in the UK are peaceful people who share many values and respect.

GPT-2 - Topp: This is not true, the vast majority of Muslims is peaceful and peaceful Muslims are happy to live in any country they wish. They do not need to be a part of a hostile state and they should be treated as equals in society.

In these examples we can observe that the CN obtained with GPT-2, albeit not perfect, is the most complete and accurate. Those generated with T5 and DialoGPT are suitable as well. On the other hand, the CNs obtained with BART and BERT present some problems: the former contains a clear contradiction, and the latter is illustrative of some of the problems detected in BERT's generations, in that it is syntactically complex but very vague and repetitive, ultimately failing to convey a clear message.

Hate speech: The migrant crisis is not the fault of Western Europeans. Migrants are to blame for immigration.

DialoGPT - Topk: What if an entire religion was to blame only for the migration crisis? So you think a large number of Muslims are responsible for all the problems of our world?

BART - Topp: The migrant crisis is caused by a lack of integration and lack of knowledge of the world.

BERT - BS: many migrants come here to work and provide for their families. they are our brothers and sisters, and we should welcome them.

T5 - Topk: What evidence do you have for the evidence that migrants are to blame? Not all migrants have done anything wrong.

GPT-2 - Topk: The EU has put in place laws to combat illegal immigration and to prevent crimes committed by migrants. They have a duty to help stop crime and do what is necessary to solve the problems within their borders.
Migrants are very intelligent and do what they can to support their families while they study and work.

Once again, the GPT-2 and DialoGPT generations were chosen as the best CNs by the annotators. Moreover, these CNs were all generated via the $\mathrm{Top}_k$ decoding mechanism. The BART- and T5-generated CNs are acceptable, although they require some post-editing before being employed. Finally, BERT's output is still very vague: we can notice the presence of the recurring expression *our brothers and sisters*.

# A.4 Additional material for LOTO experiments

Table 9 displays the distribution of the examples with respect to the targets, in the reference dataset and in the configurations for the LOTO experiments (Section 5.3).

Table 10 presents the detailed results for the novelty of the reference CNs discussed in Section 5.3, while the RR for the CNs generated with the LOTO models and for the reference CNs is shown in Table 11. The rankings for these two RR computations are the same, and the ranges almost overlap. This means that leaving one target out does not impact the intra-corpora repetitiveness; if anything, the CNs generated with a LOTO model obtain a lower RR than the reference CNs. For the target MUSLIMS a high RR is recorded, both in the candidate and in the reference CNs. The high repetitiveness in the data for this target can contribute to the good results observed on overlap metrics too (Table 3 in
| Target | Samples in original dataset | Samples in LOTO experiment |
| --- | --- | --- |
| JEWS | 594 | 600 |
| LGBT+ | 617 | 600 |
| MIGRANTS | 957 | 600 |
| MUSLIMS | 1335 | 600 |
| WOMEN | 662 | 600 |
| DISABLED | 220 | 220 |
| POC | 352 | 352 |
| other | 266 | 157 |
| Total | 5003 | 3729 |
+ +Table 9: The targets coverage in the reference dataset (Fanton et al., 2021) and in the LOTO configurations. + +
| generation \ training | JEWS | LGBT+ | MIGRANTS | MUSLIMS | WOMEN |
| --- | --- | --- | --- | --- | --- |
| JEWS | - | 0.775 | 0.780 | 0.761 | 0.780 |
| LGBT+ | 0.781 | - | 0.783 | 0.765 | 0.763 |
| MIGRANTS | 0.782 | 0.775 | - | 0.764 | 0.777 |
| MUSLIMS | 0.775 | 0.770 | 0.769 | - | 0.776 |
| WOMEN | 0.789 | 0.771 | 0.783 | 0.775 | - |
+ +Table 10: The novelty of the reference CNs in the data from Fanton et al. (2021) (generation) with respect to the training data for the LOTO models (training). + +Section 5.3): it is easier that two outputs are similar if they use a limited and repeated number of words. + +
| Target | RR reference CN | RR candidate CN |
|---|---:|---:|
| JEWS | 5.071 | 4.796 |
| LGBT+ | 4.489 | 4.620 |
| MIGRANTS | 4.381 | 4.707 |
| MUSLIMS | 5.244 | 5.314 |
| WOMEN | 4.547 | 4.632 |
Table 11: The RR computed on the reference CNs (pertaining to the test set) and on the CNs generated with the LOTO models.

# A.5 APE Experiment Details

The dataset by Fanton et al. (2021) contains three versions of the same CN: the original CN generated by a GPT-2 model $\left(\mathrm{CN}_{or}\right)$, the expert post-edited versions obtained during the human-in-the-loop cycles $\left(\mathrm{CN}_{pe*}\right)$, and the final version rechecked by NGO experts $\left(\mathrm{CN}_{pe}\right)$.

For fine-tuning our APE model, we have thus used the triplets $$ and $$. In this way, we managed to roughly double the number of post-edit training samples, which is highly beneficial for model quality. When we filtered the triplets with a positive TER score between $\mathrm{CN}_{ed}$ and $\mathrm{CN}_{pe}$, or $\mathrm{CN}_{or}$ and $\mathrm{CN}_{pe}$, we obtained 4185 training, 596 test, and 568 validation samples, following the partition used in the first set of experiments as described in Section 3.1. Finally, the best fine-tuning configuration of the GPT-2 medium model for APE was obtained with a learning rate of 2e-5 for 3 epochs, resulting in a training loss of 3.34 and an evaluation loss of 1.23.
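The TER-based filtering step above can be sketched as follows. Since the paper does not specify its TER implementation, this sketch uses a simplified word-level proxy (edit distance normalized by reference length, ignoring TER's shift operation), and the function names are purely illustrative:

```python
def word_edit_distance(hyp, ref):
    """Word-level Levenshtein distance (a TER proxy that ignores shifts)."""
    h, r = hyp.split(), ref.split()
    dp = list(range(len(r) + 1))
    for i in range(1, len(h) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(r) + 1):
            cur = dp[j]
            # min over deletion, insertion, and (mis)match
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1, prev + (h[i - 1] != r[j - 1]))
            prev = cur
    return dp[-1]

def ter(hyp, ref):
    """Simplified TER: number of edits normalized by reference length."""
    return word_edit_distance(hyp, ref) / max(len(ref.split()), 1)

def keep_pair(cn_source, cn_post_edited):
    """Keep a training pair only when post-editing changed the CN (TER > 0)."""
    return ter(cn_source, cn_post_edited) > 0
```

Pairs whose TER is zero carry no post-editing signal, which is presumably why the paper discards them.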
# Virtual Augmentation Supported Contrastive Learning of Sentence Representations

Dejiao Zhang* Wei Xiao Henghui Zhu Xiaofei Ma Andrew O.
Arnold

AWS AI Labs, New York

# Abstract

Despite profound successes, contrastive representation learning relies on carefully designed data augmentations using domain-specific knowledge. This challenge is magnified in natural language processing, where no general rules exist for data augmentation due to the discrete nature of natural language. We tackle this challenge by presenting Virtual augmentation Supported Contrastive Learning of sentence representations (VaSCL). Originating from the interpretation that data augmentation essentially constructs the neighborhoods of each training instance, we in turn utilize the neighborhood to generate effective data augmentations. Leveraging the large training batch size of contrastive learning, we approximate the neighborhood of an instance via its K-nearest in-batch neighbors in the representation space. We then define an instance discrimination task regarding the neighborhood and generate the virtual augmentation in an adversarial training manner. We assess the performance of VaSCL on a wide range of downstream tasks and set a new state of the art for unsupervised sentence representation learning.

# 1 Introduction

Universal sentence representation learning has been a long-standing problem in Natural Language Processing (NLP). Leveraging distributed word representations (Bengio et al., 2003; Mikolov et al., 2013; Collobert et al., 2011; Pennington et al., 2014) as the base features to produce sentence representations was a common strategy in the field's early stages. However, these approaches are tailored to different target tasks and thereby yield less generic sentence representations (Yessenalina and Cardie, 2011; Socher et al., 2013; Kalchbrenner et al., 2014; Cho et al., 2014).

This issue has motivated more research efforts on designing generic sentence-level learning objectives or tasks.
Among them, supervised learning on the Natural Language Inference (NLI) datasets (Bowman et al., 2015a; Williams et al., 2017; Wang et al., 2018) has established benchmark transfer learning performance on various downstream tasks (Conneau et al., 2017; Cer et al., 2018; Reimers and Gurevych, 2019a; Zhang et al., 2021). Despite promising progress, the high cost of collecting annotations precludes its wide applicability, especially when the target domain has scarce annotations but differs significantly from the NLI datasets (Zhang et al., 2020).

On the other hand, unsupervised learning of sentence representations has seen a resurgence of interest with the recent successes in self-supervised contrastive learning. These approaches rely on two main components: data augmentation and an instance-level contrastive loss. The popular contrastive learning objectives of Chen et al. (2020) and He et al. (2020), and variants thereof, have empirically shown their effectiveness in NLP. However, the discrete nature of text makes it challenging to establish universal rules for effective text augmentation generation.
In a nutshell, data augmentation essentially constructs the neighborhoods of each instance, with the semantic content being preserved. We take this interpretation in the opposite direction by leveraging the neighborhood of an instance to guide augmentation generation. Benefiting from the large training batch of contrastive learning, we approximate the neighborhood of an instance via its K-nearest in-batch neighbors. We then define an instance discrimination task within this neighborhood and generate the virtual augmentation in an adversarial training manner. We run in-depth analyses and show that our VaSCL model leads to a more dispersed representation space in which the data semantics at different granularities are better captured. We evaluate our model on a wide range of downstream tasks and show that it consistently outperforms the previous state-of-the-art results by a large margin.

# 2 Related Work

Universal Sentence Representation Learning Arguably, the simplest and most common approaches for attaining sentence representations are bag-of-words (Harris, 1954) and variants thereof. However, bag-of-words suffers from data sparsity and a lack of sensitivity to word semantics. In the past two decades, distributed word representations (Bengio et al., 2003; Mikolov et al., 2013; Collobert et al., 2011; Pennington et al., 2014) have become more effective base features for producing sentence representations. The downside is that these approaches are tailored to their target tasks (Yessenalina and Cardie, 2011; Socher et al., 2013; Kalchbrenner et al., 2014; Cho et al., 2014), and the resulting sentence representations therefore attain limited transfer learning performance.

More recent efforts focus on directly designing the sentence-level learning objectives or tasks. In the supervised learning regime, Conneau et al. (2017) and Cer et al.
(2018) empirically show the effectiveness of leveraging the NLI task (Bowman et al., 2015a; Williams et al., 2017) to promote generic sentence representations. The task involves classifying each sentence pair into one of three categories: entailment, contradiction, or neutral. Reimers and Gurevych (2019b) further bolster the performance by using pre-trained transformers (Devlin et al., 2018; Liu et al., 2019) as the backbone.

On the other end of the spectrum, Hill et al. (2016); Bowman et al. (2015b) propose using denoising or variational autoencoders for sentence representation learning. Kiros et al. (2015); Hill et al. (2016) extend the distributional hypothesis to the sentence level and train an encoder-decoder to construct the surrounding context of each sentence. Alternatively, Logeswaran and Lee (2018) present a model that learns to discriminate the target context sentences from all contrastive ones.

Contrastive Learning Contrastive learning has driven many of the recent successes in sentence representation learning. Gao et al. (2021); Zhang et al. (2021) substantially advance the previous state-of-the-art results by leveraging the entailment sentences in NLI as positive pairs for optimizing properly designed contrastive loss functions. Nevertheless, we focus on unsupervised contrastive learning and form the positive pairs via data augmentation, since such methods are more cost-effective and applicable across different domains and languages.
Along this line, several approaches have been proposed recently, where the augmentations are obtained via dropout (Yan et al., 2021; Gao et al., 2021), back-translation (Fang and Xie, 2020), surrounding context sampling (Logeswaran and Lee, 2018; Giorgi et al., 2020), or perturbations conducted at different semantic levels (Wu et al., 2020; Yan et al., 2021; Meng et al., 2021).

Consistency Regularization Our work is also closely related to consistency regularization, which is often used to promote better performance by regularizing the model output to remain unchanged under plausible input variations that are often induced via data augmentations. Bachman et al. (2014); Sajjadi et al. (2016); Samuli and Timo (2017); Tarvainen and Valpola (2017) show that randomized data augmentations such as dropout, cropping, rotation, and flipping yield effective regularization. Berthelot et al. (2019, 2020); Verma et al. (2019) improve the performance by applying Mixup (Zhang et al., 2017) and its variants on top of stochastic data augmentations. However, data augmentation has long been a challenge in NLP, as there are no general rules for effective text transformations. Conversely, violations of consistency regularization can be used to find the perturbation to which a model is most sensitive. Therefore, we utilize consistency regularization to promote an informative virtual augmentation for each training instance in the representation space, while leveraging its approximated neighborhood to ensure the augmentation shares the semantic content of its original instance.

![](images/d268a670b09d29dbce650fa53a3a39550e29994fa5a0bcd2751f6d9c8f5f82d1.jpg)
Figure 1: Illustration of VaSCL. For each instance $x_{i}$ in a randomly sampled batch, we optimize (i) an instance-wise contrastive loss with the dropout-induced augmentation obtained by forwarding the same instance twice, i.e., $x_{i}$ and $x_{i'}$ denote the same text example; and (ii) a neighborhood-constrained instance discrimination loss with the virtual augmentation proposed in Section 3.2.

# 3 Method

# 3.1 Preliminaries

Self-supervised contrastive learning often aims to solve the instance discrimination task. In our scenario, let $f$ denote the transformer encoder that maps the $i^{\text{th}}$ input sentence $\mathbf{x}_i$ to its representation vector $\mathbf{e}_i = f(\mathbf{x}_i)$. Further let $h$ be the contrastive learning head and $\mathbf{z}_i = h(f(\mathbf{x}_i))$ denote the final output for $\mathbf{x}_i$. Let $\mathcal{B} = \{i, i'\}_{i=1}^{M}$ denote the indices of a randomly sampled batch of paired examples, where $\mathbf{x}_i, \mathbf{x}_{i'}$ are two independent variations of the $i^{\text{th}}$ instance. A popular loss function (Chen et al., 2020) for contrastive learning is defined as follows,

$$
\ell_{\mathcal{B}}(\mathbf{z}_i, \mathbf{z}_{i'}) = -\log \frac{e^{\mathrm{sim}(\mathbf{z}_i, \mathbf{z}_{i'})/\tau}}{e^{\mathrm{sim}(\mathbf{z}_i, \mathbf{z}_{i'})/\tau} + \sum_{j \in \mathcal{B} \backslash (i, i')} e^{\mathrm{sim}(\mathbf{z}_i, \mathbf{z}_j)/\tau}}, \tag{1}
$$

where $\tau$ is the temperature hyperparameter and $\mathrm{sim}(\cdot)$ denotes the cosine similarity, i.e., $\mathrm{sim}(\mathbf{z}_i, \mathbf{z}_{i'}) = \mathbf{z}_i^\top \mathbf{z}_{i'} / \|\mathbf{z}_i\|_2 \|\mathbf{z}_{i'}\|_2$. Similarly, $\ell_{\mathcal{B}}(\mathbf{z}_{i'},\mathbf{z}_i)$ is defined by exchanging the roles of $\mathbf{z}_i$ and $\mathbf{z}_{i'}$ in the above equation. Intuitively, Equation (1) defines the negative log-likelihood of classifying the $i^{\text{th}}$ instance as its positive $i'$ among all $2M - 1$ candidates within the same batch $\mathcal{B}$.
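Equation (1) can be sketched in NumPy as follows; this is a minimal illustration, assuming the batch of head outputs is stacked as a $[2M, d]$ matrix and the positive index is known:

```python
import numpy as np

def contrastive_loss(z, i, i_pos, tau=0.05):
    """Eq. (1): negative log-likelihood of classifying instance i as its
    positive i_pos among all 2M-1 other candidates in the batch z ([2M, d])."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit vectors -> dot = cosine
    sim = z @ z[i] / tau                              # similarity of every candidate to z_i
    sim = np.delete(sim, i)                           # an instance never competes with itself
    pos = z[i_pos] @ z[i] / tau
    return -pos + np.log(np.sum(np.exp(sim)))         # -log softmax of the positive
```

With a well-aligned positive the loss approaches zero, while a mismatched positive yields a large loss, mirroring the intuition below Equation (1).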
Therefore, minimizing the above log-loss guides the encoder to map the two elements of each positive pair close together in the representation space, and negative pairs farther apart.

Dropout-based contrastive learning As Equation (1) implies, the success of contrastive learning relies on the construction of effective positive pairs. However, it is challenging to generate strong and effective data transformations in NLP due to the discrete nature of natural language. This challenge is further demonstrated in a recent work (Gao et al., 2021), which shows that augmentations obtained by Dropout (Srivastava et al., 2014), i.e., $\mathbf{z}_i, \mathbf{z}_{i'}$ obtained by forwarding the same instance $\mathbf{x}_i$ twice, outperform common text augmentation strategies such as cropping, word deletion, or synonym replacement. Dropout provides a natural data augmentation by randomly masking its inputs or the hidden-layer nodes. The effectiveness of using Dropout as a pseudo data augmentation can be traced back to Bachman et al. (2014); Samuli and Timo (2017); Tarvainen and Valpola (2017). Nevertheless, with Dropout alone the augmentation strength is weak, leaving room for improvement, which we investigate in the following section.

# 3.2 Neighborhood Constrained Contrastive Learning with Virtual Augmentation

In essence, data augmentation can be interpreted as constructing the neighborhood of a training instance, with the semantic content being preserved. In this section, we take the interpretation in the opposite direction and leverage the neighborhood of each instance to generate the augmentation. To be more specific, let $\bar{\mathcal{B}} = \{i\}_{i=1}^{M}$ denote the indices of a randomly sampled batch with $M$ examples.
We first approximate the neighborhood $\mathcal{N}(i)$ of the $i^{\mathrm{th}}$ instance as its K-nearest neighbors in the representation space,

$$
\mathcal{N}(i) = \{k : \mathbf{e}_k \text{ has the top-K similarity with } \mathbf{e}_i \text{ among all other } M-1 \text{ instances in } \bar{\mathcal{B}}\}.
$$

We then define an instance-level contrastive loss regarding the $i^{\mathrm{th}}$ instance and its neighborhood as follows,

$$
\ell_{\mathcal{N}(i)}(\mathbf{z}_i^{\delta}, \mathbf{z}_i) = -\log \frac{e^{\mathrm{sim}(\mathbf{z}_i^{\delta}, \mathbf{z}_i)/\tau}}{e^{\mathrm{sim}(\mathbf{z}_i^{\delta}, \mathbf{z}_i)/\tau} + \sum_{k \in \mathcal{N}(i)} e^{\mathrm{sim}(\mathbf{z}_i^{\delta}, \mathbf{z}_k)/\tau}}. \tag{2}
$$

In the above equation, $\mathbf{z}_i^{\delta} = h(\mathbf{e}_i^{\delta})$ denotes the output of the contrastive learning head with the perturbed representation $\mathbf{e}_i^{\delta} = \mathbf{e}_i + \delta_i$ as input. Here, the initial perturbation $\delta_{i}$ is drawn as isotropic Gaussian noise. Equation (2) is thus the negative log-likelihood of classifying the perturbed $i^{\mathrm{th}}$ instance as itself rather than as one of its neighbors. The augmentation of the $i^{\mathrm{th}}$ instance is then obtained by identifying the optimal perturbation that maximally disturbs its instance-level identity within the neighborhood. That is,

$$
\begin{aligned} \delta_i^{*} &= \underset{\|\delta_i\|_2 \leq \Delta}{\arg\max}\ \ell_{\mathcal{N}(i)}(\mathbf{z}_i^{\delta}, \mathbf{z}_i), \\ \mathbf{e}_{i^{*}} &= \mathbf{e}_i + \delta_i^{*}. \end{aligned} \tag{3}
$$

For the $i^{\mathrm{th}}$ instance, denote $\mathcal{N}_{\mathrm{A}}(i)$ as the augmented neighborhood that consists of its $K$ nearest neighbors and their associated augmentations.
That is, $\mathcal{N}_{\mathrm{A}}(i) = \{k, k^{*}\}_{k=1}^{K}$, with $\mathbf{e}_k$ and $\mathbf{e}_{k^*}$ denoting the original representation and the augmented representation of the $k^{\mathrm{th}}$ nearest neighbor of instance $i$, respectively. Here, each augmentation $\mathbf{e}_{k^*}$ is obtained by solving Equation (3) with respect to the neighborhood $\mathcal{N}(k)$ of $\mathbf{e}_k$. We then discriminate the $i^{\mathrm{th}}$ instance and its augmentation from the augmented neighborhood $\mathcal{N}_{\mathrm{A}}(i)$,

$$
\ell_{\mathcal{N}_{\mathrm{A}}(i)} = \ell_{\mathcal{N}_{\mathrm{A}}(i)}(\mathbf{z}_i^{*}, \mathbf{z}_i) + \ell_{\mathcal{N}_{\mathrm{A}}(i)}(\mathbf{z}_i, \mathbf{z}_i^{*}). \tag{4}
$$

Here both terms on the right-hand side are defined in the same way as Equation (2), with respect to the augmentation $\mathbf{e}_i^{*}$ and the augmented neighborhood $\mathcal{N}_{\mathrm{A}}(i)$ of the $i^{\mathrm{th}}$ instance.

Putting it all together For each randomly sampled minibatch $\bar{\mathcal{B}}$ with $M$ samples, we minimize the following:

$$
\mathcal{L}_{\mathrm{VaSCL}} = \frac{1}{2M} \sum_{i=1}^{M} \Big\{ \ell_{\bar{\mathcal{B}}}(\mathbf{z}_i, \mathbf{z}_{i'}) + \ell_{\bar{\mathcal{B}}}(\mathbf{z}_{i'}, \mathbf{z}_i) + \ell_{\mathcal{N}_{\mathrm{A}}(i)}(\mathbf{z}_i, \mathbf{z}_i^{*}) + \ell_{\mathcal{N}_{\mathrm{A}}(i)}(\mathbf{z}_i^{*}, \mathbf{z}_i) \Big\} \tag{5}
$$

The last two terms on the right-hand side are defined in Equation (4). Note that $\ell_{\bar{\mathcal{B}}}(\mathbf{z}_i, \mathbf{z}_{i'})$ is defined in the same way as Equation (1), except that $\mathbf{z}_i, \mathbf{z}_{i'}$ are obtained by feeding the $i^{\mathrm{th}}$ instance in $\bar{\mathcal{B}}$ to the encoder twice.
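The neighborhood approximation and Equations (2)-(3) can be sketched as follows. The paper solves Equation (3) with gradient-based adversarial training; as an illustrative substitute, this sketch approximates the arg max by random search over the radius-$\Delta$ sphere and takes the head $h$ to be the identity map:

```python
import numpy as np

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def topk_neighbors(E, i, K):
    """N(i): indices of the K in-batch representations most similar to e_i."""
    sims = np.array([cos(E[j], E[i]) if j != i else -np.inf for j in range(len(E))])
    return np.argsort(sims)[::-1][:K]

def neighborhood_loss(E, i, delta, K=3, tau=0.05):
    """Eq. (2), with the contrastive head h taken as the identity map."""
    nbrs = topk_neighbors(E, i, K)
    z_d = E[i] + delta
    pos = np.exp(cos(z_d, E[i]) / tau)
    neg = sum(np.exp(cos(z_d, E[k]) / tau) for k in nbrs)
    return -np.log(pos / (pos + neg))

def virtual_augmentation(E, i, radius=0.1, n_trials=64, seed=0):
    """Eq. (3), approximated by random search on the radius-Delta sphere
    (the paper uses gradient ascent; this is an illustrative substitute)."""
    rng = np.random.default_rng(seed)
    best_delta, best_loss = None, -np.inf
    for _ in range(n_trials):
        d = rng.normal(size=E.shape[1])
        d *= radius / np.linalg.norm(d)          # project onto the Delta-sphere
        loss = neighborhood_loss(E, i, d)
        if loss > best_loss:
            best_delta, best_loss = d, loss
    return E[i] + best_delta                     # e_{i*} = e_i + delta_i^*
```

The returned vector is the virtual augmentation $\mathbf{e}_{i^*}$; in VaSCL it then enters the augmented-neighborhood discrimination terms of Equations (4)-(5).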
In summary, two instance discrimination tasks are posed for each training example: i) discriminating each instance and its dropout-induced variation from the other in-batch instances; and ii) separating each instance and its virtual augmentation from its K nearest neighbors and their associated virtual augmentations.

# 4 Experiment

In this section, we mainly evaluate VaSCL against SimCSE (Gao et al., 2021), which leverages the dropout-induced noise (Srivastava et al., 2014) as data augmentation. We show that VaSCL consistently outperforms SimCSE on various downstream tasks that involve semantic understanding at different granularities. We carefully study the regularization effects of VaSCL and empirically demonstrate that VaSCL leads to a more dispersed representation space in which the semantic structure is better encoded. Please refer to Appendix A for details of our implementations and the datasets used.

# 4.1 Evaluation Datasets

In addition to the popular semantic textual similarity (a.k.a. STS) tasks, we evaluate two additional downstream tasks: short text clustering and few-shot intent classification. Our motivation is twofold. First, these two tasks provide a new evaluation aspect that complements the pairwise similarity-oriented STS evaluation by assessing the high-level categorical semantics encoded in the representations. Second, two desirable challenges are posed: short text clustering requires more effective representations due to the weak signal each text example carries, and intent classification often suffers from data scarcity, since intents can vary significantly across different dialogue systems and intent examples are costly to collect.
| | STS12 | STS13 | STS14 | STS15 | STS16 | SICK-R | STS-B | Avg. |
|---|---:|---:|---:|---:|---:|---:|---:|---:|
| RoBERTa distil | 54.41 | 46.85 | 56.96 | 65.79 | 64.22 | 61.10 | 59.01 | 58.33 |
| SimCSE distil | 65.58 | 77.42 | 70.17 | 79.31 | 78.45 | 67.66 | 77.98 | 73.79 |
| VaSCL distil | 67.68 | 80.61 | 72.19 | 80.92 | 78.59 | 68.81 | 77.32 | 75.16 |
| RoBERTa base | 53.95 | 47.42 | 55.87 | 64.73 | 63.55 | 62.94 | 58.40 | 58.12 |
| SimCSE base | 68.88 | 80.46 | 73.54 | 80.98 | 80.68 | 69.54 | 80.29 | 76.34 |
| VaSCL base | 69.02 | 82.38 | 73.93 | 82.54 | 80.96 | 69.40 | 80.52 | 76.96 |
| RoBERTa large | 55.00 | 50.14 | 54.87 | 62.14 | 62.99 | 58.93 | 54.56 | 56.95 |
| SimCSE large | 69.83 | 81.29 | 74.42 | 83.77 | 79.79 | 68.89 | 80.66 | 76.95 |
| VaSCL large | 73.36 | 83.55 | 77.16 | 83.25 | 80.66 | 72.96 | 82.36 | 79.04 |
Table 1: Spearman rank correlation between the cosine similarity of sentence representation pairs and the ground truth similarity scores.

Semantic Textual Similarity The semantic textual similarity (STS) tasks are the most commonly used benchmark for evaluating sentence representations. STS consists of seven tasks, namely STS 2012-2016 (Agirre et al., 2012, 2013, 2014, 2015, 2016), the STS Benchmark (Cer et al., 2017), and SICK-Relatedness (Marelli et al., 2014). For each sentence pair in these datasets, a fine-grained similarity score ranging from 0 to 5 is provided.

Short Text Clustering Compared with general text clustering, short text clustering poses its own challenge due to a lack of signal. Nevertheless, texts containing only a few words grow at unprecedented rates on a wide range of popular platforms, including Reddit, Stack Overflow, Twitter, and Instagram. Clustering such texts into groups of similar texts plays a crucial role in many real-world applications such as topic discovery (Kim et al., 2013), trend detection (Mathioudakis and Koudas, 2010), and recommendation (Bouras and Tsogkas, 2017). We evaluate six benchmark datasets for short text clustering. As shown in Table 4, the datasets exhibit desirable diversity in both cluster sizes and the number of clusters contained in each dataset.

Intent Classification Intent classification aims to identify the intents of user utterances, which is a critical component of goal-oriented dialog systems. Attaining high intent classification accuracy is an important step towards solving many downstream tasks such as dialogue state tracking (Wu et al., 2019; Zhang et al., 2019) and dialogue management (Gao et al., 2018; Ham et al., 2020). A practical challenge is data scarcity, because different systems define different sets of intents, and it is costly to obtain enough utterance samples for each intent.
Therefore, few-shot learning has attracted much attention in this scenario, which is also our main focus. We evaluate four intent classification datasets originating from different domains. We summarize the data statistics in Appendix B.1.

# 4.2 Main Results

# 4.2.1 Evaluation Setup

Semantic Textual Similarity. Following Reimers and Gurevych (2019b) and Gao et al. (2021), in Table 1 we report the Spearman correlation between the cosine similarity of the sentence representation pairs and the ground truth similarity scores. Short Text Clustering. We evaluate the sentence representations using K-Means (MacQueen et al., 1967; Lloyd, 1982), given its simplicity, and report the clustering accuracy averaged over ten independent runs in Table 2. Intent Classification. We freeze the transformer and fine-tune a linear classification layer with the softmax-based cross-entropy loss. We merge the training and validation sets, from which we sample K training and validation samples per class. We report the mean and standard deviation of the test classification accuracy evaluated over five different splits in Table 3. We set the learning rate to 1e-04 and the batch size to 32. For each task, we train the model for 1000 iterations
| | Ag News | Search Snippets | Stack Overflow | Bio-medical | Tweet | Google News | Avg |
|---|---:|---:|---:|---:|---:|---:|---:|
| RoBERTa distil | 59.32 | 33.18 | 14.16 | 24.69 | 37.10 | 58.05 | 37.75 |
| SimCSE distil | 73.33 | 60.74 | 66.97 | 35.69 | 50.68 | 67.55 | 59.16 |
| VaSCL distil | 71.71 | 62.76 | 73.98 | 38.82 | 51.35 | 67.66 | 61.05 |
| RoBERTa base | 66.50 | 30.83 | 15.63 | 26.98 | 37.80 | 58.51 | 39.38 |
| SimCSE base | 65.53 | 55.97 | 64.18 | 38.12 | 49.16 | 65.69 | 56.44 |
| VaSCL base | 68.33 | 47.26 | 76.15 | 39.53 | 51.50 | 67.10 | 58.31 |
| RoBERTa large | 69.35 | 53.00 | 27.89 | 33.25 | 46.08 | 64.04 | 48.93 |
| SimCSE large | 62.93 | 51.55 | 54.11 | 35.39 | 50.92 | 67.86 | 53.79 |
| VaSCL large | 66.09 | 61.57 | 69.04 | 42.91 | 56.74 | 67.75 | 60.68 |
and evaluate on the validation set every 100 iterations. We report the test accuracy of the checkpoint achieving the best validation accuracy.

Table 2: Clustering accuracy reported on six short text clustering datasets.
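The clustering accuracy reported above is computed under the best one-to-one mapping between predicted cluster ids and ground-truth classes. A stdlib sketch that brute-forces the label permutation (assuming equal numbers of clusters and classes; at scale one would use the Hungarian algorithm, e.g. `scipy.optimize.linear_sum_assignment`):

```python
from itertools import permutations

def clustering_accuracy(y_true, y_pred):
    """Accuracy under the best one-to-one mapping of predicted cluster ids
    to classes. Brute force over permutations, for illustration only."""
    clusters = sorted(set(y_pred))
    classes = sorted(set(y_true))       # assumed same cardinality as clusters
    best = 0.0
    for perm in permutations(classes):
        mapping = dict(zip(clusters, perm))
        acc = sum(mapping[p] == t for p, t in zip(y_pred, y_true)) / len(y_true)
        best = max(best, acc)
    return best
```

This matching step is what makes K-Means cluster assignments comparable to the labeled categories regardless of how the cluster ids are numbered.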
| | | SNIPS | BANK77 | CLINC150 | HWU64 |
|---|---|---|---|---|---|
| 5-Shot | RoBERTa | 76.71±4.84 | 38.77±2.29 | 55.19±1.99 | 51.52±2 |
| | SimCSE | 76.94±2.53 | 67.48±1.63 | 72.84±1.5 | 66.1±1.9 |
| | VaSCL | 78.51±1.39 | 70.10±1.76 | 74.23±1.17 | 67.06±2.17 |
| 10-Shot | RoBERTa | 85.63±2.43 | 46.55±1.84 | 60.55±1.16 | 57.47±0.91 |
| | SimCSE | 85.14±2.18 | 72.19±0.88 | 77.13±0.76 | 70.87±1.35 |
| | VaSCL | 84.83±1.05 | 75.25±0.81 | 79.15±0.82 | 72.43±1.12 |
| 20-Shot | RoBERTa | 88.14±1.54 | 51.65±1.42 | 63.51±1.08 | 60.93±1.27 |
| | SimCSE | 88.43±1.2 | 75.13±0.78 | 78.59±0.78 | 74.44±0.74 |
| | VaSCL | 89.11±1.29 | 78.06±0.37 | 81.39±0.60 | 76.39±0.26 |
Table 3: Few-shot learning evaluation of Intent Classification. Each result is aggregated over 5 independent splits. We choose RoBERTa-base as the backbone.

# 4.2.2 Evaluation Results

We report the evaluation results in Tables 1, 2, and 3. As we can see, both SimCSE and VaSCL largely improve the performance of the pre-trained language models, while VaSCL consistently outperforms SimCSE on most tasks. To be more specific, we attain $0.6\% - 2.1\%$ averaged absolute improvement over SimCSE on seven STS tasks and $1.8\% - 6.9\%$ averaged absolute improvement on six short text clustering tasks. We also achieve considerable improvements over SimCSE on intent classification tasks under different few-shot learning scenarios. We do not include the evaluation on ATIS in Table 3, as this dataset is highly imbalanced, with one single class accounting for more than $73\%$ of the data. Please refer to Appendix C for details.

# 4.3 Analysis

To better understand what enables the good performance of VaSCL, we carefully analyze the representations at different semantic granularities.

Neighborhood Evaluation on Categorical Data We first evaluate the neighborhood statistics on StackOverflow (Xu et al., 2017), which contains 20 balanced categories, each with 1000 text instances. For each instance, we retrieve its K nearest (top-K) neighbors in the representation space, among which those from the same class as the instance itself are treated as positives. In Figure 2a, we report both the percentage of true positives and the average distance of an instance to its top-K neighbors. For each top-K value, the evaluation is averaged over all 20,000 instances.

As indicated by the small distance values reported in Figure 2a, the representation space of the original RoBERTa model is tight and incapable of uncovering the categorical structure of the data. In contrast, both VaSCL and SimCSE are capable of scattering representations apart while better capturing the semantic structures.
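The neighborhood statistics of Figure 2a, i.e., the fraction of same-class top-K neighbors and the mean neighbor distance, can be sketched as follows (cosine distance is assumed, matching the similarity measure used elsewhere in the paper):

```python
import numpy as np

def neighborhood_stats(E, labels, K):
    """For each instance, retrieve its K nearest neighbors by cosine distance and
    report (i) the fraction sharing its label and (ii) the mean neighbor distance."""
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    dist = 1.0 - E @ E.T                   # pairwise cosine distances
    np.fill_diagonal(dist, np.inf)         # an instance is not its own neighbor
    precision, avg_dist = [], []
    for i in range(len(E)):
        nbrs = np.argsort(dist[i])[:K]
        precision.append(np.mean(labels[nbrs] == labels[i]))
        avg_dist.append(np.mean(dist[i][nbrs]))
    return float(np.mean(precision)), float(np.mean(avg_dist))
```

A high precision with a larger mean distance corresponds to the dispersed-yet-structured representation space the analysis attributes to VaSCL.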
Compared with SimCSE, VaSCL leads to even more dispersed representations with categorical structures being better encoded. This is also demonstrated by the better performance attained on both clustering and few-shot learning reported in Tables 2&3. + +Fine-grained Semantic Understanding We then compare VaSCL against SimCSE and RoBERTa on encoding more fine-grained semantic concepts. We randomly sample 20,000 premises from the combined set of SNLI (Bowman et al., 2015a) and MNLI (Williams et al., 2017), where + +![](images/af95dd164e885bff654ea0aa63f5c3f221f216c2692e400c3d74d29455af4158.jpg) +(a) Neighborhood evaluation on StackOverflow. Instances from the same category are treated as true positives. + +![](images/033b4c8bd70f6335a72ce8d81901d166306e2e3a442be2982d0025b19e9f3a5a.jpg) +(b) Fine-grained semantics encoding evaluation on NLI. +Figure 2: VaSCL leads to more dispersed representation with data structure being better uncovered. + +the associated entailment and contradiction hypotheses are also sampled for each premise instance. In Figure 2b, we report both the distributions of the pairwise distances of the entailment or the contradiction pairs (left). While on the right-hand side, we plot the distance of each premise to its entailment hypothesis over that to its contradiction hypothesis (right). + +We observe the same trend that both SimCSE and VaSCL well separate different instances apart in the representation space while better discriminating each premise's entailment hypothesis from the contradiction one. Figure 2b also demonstrates that VaSCL outperforms SimCSE on better capturing the fine-grained semantics when separating different instances apart. This advantage of VaSCL is further validated by Table 1, where VaSCL consistently outperforms SimCSE on the STS tasks that require pairwise semantic inference on an even more fine-grained scale. 
+ +# 4.4 Explicit Data Augmentation + +To better evaluate our virtual augmentation-oriented VaSCL model, we compare it against different explicit data augmentation strategies that operate directly on the discrete text. Specifically, we consider the following approaches: $\underline{WDel}$ (random word deletion) removes words from the input text at random; $\underline{WNet}$ (WordNet synonym substitution) transforms a text instance by replacing its words with their WordNet synonyms (Morris et al., 2020; Ren et al., 2019); and $\underline{CTxt}$ (contextual synonym substitution) leverages pre-trained transformers to find the top-n most suitable words of the input text for substitution (Kobayashi, 2018). For each strategy, we evaluate three augmentation strengths by changing $5\%$ , $10\%$ , and $20\%$ of the words of each text instance. For a positive pair $(x_i, x_i')$ , $x_i$ denotes the original text and $x_i'$ is the associated augmentation. We also explore the case where both $x_i$ and $x_i'$ are transformations of the original text, which we find yields worse performance. + +Virtual Augmentation Performs Better The performance of explicit text augmentation is evaluated using the standard dropout for training, i.e., "SimCSE w/ {WDel/WNet/CTxt}" in Figure 3. As Figure 3a shows, contrastive learning with moderate explicit text augmentations, i.e., augmentation strength below $20\%$ , does yield better sentence representations than the original RoBERTa model. Nevertheless, both virtual augmentation strategies, i.e., SimCSE and VaSCL, substantially outperform all three explicit text augmentation strategies on almost all downstream tasks. Although a bit surprising, especially considering the performance gap between SimCSE and the explicit augmentations, this comparison provides a new perspective on the underlying challenge of designing effective transformations that operate directly on the discrete text.
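As a concrete example, the WDel strategy can be sketched as follows; the helper below and its `strength` parameter are illustrative assumptions rather than the exact implementation used in the paper:

```python
import random

def word_deletion(text: str, strength: float = 0.1, seed=None) -> str:
    """WDel-style augmentation: randomly drop a `strength` fraction of the
    words in `text`, preserving the order of the words that remain."""
    rng = random.Random(seed)
    words = text.split()
    n_delete = int(len(words) * strength)
    dropped = set(rng.sample(range(len(words)), n_delete))
    kept = [w for i, w in enumerate(words) if i not in dropped]
    return " ".join(kept or words[:1])  # never return an empty string
```

WNet and CTxt follow the same interface, except that the selected words are replaced with WordNet or transformer-predicted synonyms rather than deleted.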
+ +VaSCL Outperforms SimCSE Figure 3a also empirically demonstrates that VaSCL outperforms SimCSE regardless of whether explicit text augmentations are present. The only exception occurs when the explicit augmentation strength is too large, i.e., when $20\%$ of the words of each text are perturbed. One possible explanation is that large perturbations applied directly to discrete text introduce undesired noise, which can violate the coherent semantics maintained by a neighborhood and hence make it hard for VaSCL to generate effective virtual augmentations. + +New Linguistic Patterns Are Required Another observation drawn from Figure 3a is that both SimCSE and VaSCL attain worse performance on most downstream tasks when combined with explicit text augmentations. Although VaSCL does improve the performance of explicit augmentations + +![](images/0f9913fc0be063de8f7df8e29e291e18e44cdd8f285ec52e8b00477b17dbeafa.jpg) +(a) Virtual augmentation vs. explicit augmentation. For each downstream task, we report the mean performance averaged over all its subtasks. The explicit augmentations are evaluated using SimCSE (dropout) for training, i.e., "SimCSE w/ {WDel/WNet/CTxt}". + +![](images/9d9477aae6e61d937299f160fc010f9973f4809bc609c6030190d6487189322e.jpg) + +![](images/aeff020776de8b8daa6e7287b3bb5b6ebbeaf3813a2ba7bf629c494ada4ce7fb.jpg) + +Figure 3: Comparing and combining virtual augmentation with explicit augmentation. +![](images/c7e8e7b8fbe65d3cbf38fd13f8408395e29b15b80f2c8e62a1ff45e9388f1b1e.jpg) +(b) Cosine similarity between each original training example and its augmentation, evaluated on the representation spaces of different models. From left to right, the augmentations are obtained via WDel, WNet, and CTxt. Each point is averaged over 20,000 randomly sampled training examples and their augmentations. We exclude "SimCSE w/ WNet" and "VaSCL w/ WNet" for better visualization. Please refer to Figure 4 in the Appendix for the full plot.
+ +![](images/1d31a33f9be5ef91835e8d5468ee26bc95854764ba1cf39a253225996bf1e35e.jpg) + +![](images/dcb3d492f99d32db5b8b3c8762216dee65494bf831257b3124f5faaaf679e11b.jpg) + +in most cases, this is undesirable, as we had expected a win-win outcome in which moderate explicit augmentations further enhance VaSCL. We hypothesize that the new and informative linguistic patterns required for the expected performance gain are missing. + +To validate our hypothesis, in Figure 3b we report the cosine similarity between each original training example and its augmentation, evaluated on the representation spaces of different models. Our observation is twofold. First, the representations induced by RoBERTa and by the model trained with contextual synonym substitution ("SimCSE w/ CTxt") are very similar in all three settings, which also explains why "SimCSE w/ CTxt" attains performance similar to RoBERTa on the downstream tasks. We attribute this to the fact that CTxt leverages the transformer itself to generate augmentations, which hence carry few unseen and effective linguistic patterns. Second, as indicated by the comparatively smaller similarity values in Figure 3b, the incorporation of explicit augmentations tightens the representation spaces of both SimCSE and VaSCL, which also results in worse performance on downstream tasks. One possible explanation is that all three explicit augmentations are weak and noisy, which harms both the instance discrimination force and the semantic relevance of each neighborhood. + +# 5 Conclusion + +In this paper, we present a virtual augmentation-oriented contrastive learning framework for unsupervised sentence representation learning. Our key insight is that data augmentation can be interpreted as constructing the neighborhoods of each training instance, which can, in turn, be leveraged to generate effective data augmentations. We evaluate VaSCL on a wide range of downstream tasks and substantially advance the state-of-the-art results.
Moreover, we conduct in-depth analyses and show that VaSCL leads to a more dispersed representation space with the data semantics at different granularities being better encoded. + +On the other hand, we observe a performance drop for both SimCSE and VaSCL when combined with the explicit text augmentations. We suspect this is caused by the linguistic patterns generated by explicit augmentations being less informative yet noisy. We hypothesize that effective data augmentation operations on the discrete texts could complement our virtual augmentation approach if new and informative linguistic patterns are generated. + +# References + +Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, et al. 2015. Semeval-2015 task 2: Semantic textual similarity, english, spanish and pilot on interpretability. In Proceedings of the 9th international workshop on semantic evaluation (SemEval 2015), pages 252-263. +Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. Semeval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th international workshop on semantic evaluation (SemEval 2014), pages 81-91. +Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez Agirre, Rada Mihalcea, German Rigau Claramunt, and Janyce Wiebe. 2016. Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In Proceedings of the 10th international workshop on semantic evaluation (SemEval 2016), pages 497-511. +Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pilot on semantic textual similarity.
In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 385-393. +Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, and Weiwei Guo. 2013. *sem 2013 shared task: Semantic textual similarity. In Second joint conference on lexical and computational semantics (*SEM), volume 1: proceedings of the Main conference and the shared task: semantic textual similarity, pages 32-43. +Philip Bachman, Ouais Alsharif, and Doina Precup. 2014. Learning with pseudo-ensembles. Advances in neural information processing systems, 27:3365-3373. +Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. The journal of machine learning research, 3:1137-1155. +David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, and Colin Raffel. 2020. Remixmatch: Semi-supervised learning with distribution matching and augmentation anchoring. In International Conference on Learning Representations. + +David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raffel. 2019. Mixmatch: A holistic approach to semi-supervised learning. arXiv preprint arXiv:1905.02249. +Christos Bouras and Vassilis Tsogkas. 2017. Improving news articles recommendations via user clustering. International Journal of Machine Learning and Cybernetics, 8(1):223-237. +Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015a. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326. +Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. 2015b. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349. +Inigo Casanueva, Tadas Temčinas, Daniela Gerz, Matthew Henderson, and Ivan Vulić. 
2020. Efficient intent detection with dual sentence encoders. arXiv preprint arXiv:2003.04807. +Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055. +Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175. +Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709. +Kyunghyun Cho, Bart Van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259. +Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of machine learning research, 12:2493-2537. +Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. arXiv preprint arXiv:1705.02364. +Alice Coucke, Alaa Saade, Adrien Ball, Theodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, et al. 2018. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. arXiv preprint arXiv:1805.10190. + +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. +Hongchao Fang and Pengtao Xie. 2020. Cert: Contrastive self-supervised learning for language understanding. arXiv preprint arXiv:2005.12766.
+
Jianfeng Gao, Michel Galley, and Lihong Li. 2018. Neural approaches to conversational ai. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 1371-1374. +Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. arXiv preprint arXiv:2104.08821. +John M Giorgi, Osvald Nitski, Gary D Bader, and Bo Wang. 2020. Declutr: Deep contrastive learning for unsupervised textual representations. arXiv preprint arXiv:2006.03659. +Donghoon Ham, Jeong-Gwan Lee, Youngsoo Jang, and Kee-Eung Kim. 2020. End-to-end neural pipeline for goal-oriented dialogue systems using gpt-2. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 583-592. +Zellig S Harris. 1954. Distributional structure. Word, 10(2-3):146-162. +Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729-9738. +Charles T Hemphill, John J Godfrey, and George R Doddington. 1990. The atis spoken language systems pilot corpus. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990. +Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1367-1377, San Diego, California. Association for Computational Linguistics. +Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188. +Hwi-Gang Kim, Seongjoo Lee, and Sunghyon Kyeong. 2013. Discovering hot topics using twitter streaming data social topic detection and geographic clustering.
In 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2013), pages 1215-1220. IEEE. + +Taeuk Kim, Kang Min Yoo, and Sang-goo Lee. 2021. Self-guided contrastive learning for bert sentence representations. arXiv preprint arXiv:2106.07345. +Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. +Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Skip-thought vectors. arXiv preprint arXiv:1506.06726. +Sosuke Kobayashi. 2018. Contextual augmentation: Data augmentation by words with paradigmatic relations. arXiv preprint arXiv:1805.06201. +Stefan Larson, Anish Mahendran, Joseph J Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K Kummerfeld, Kevin Leach, Michael A Laurenzano, Lingjia Tang, et al. 2019. An evaluation dataset for intent classification and out-of-scope prediction. arXiv preprint arXiv:1909.02027. +Xingkun Liu, Arash Eshghi, Pawel Swietojanski, and Verena Rieser. 2021. Benchmarking natural language understanding services for building conversational agents. In Increasing Naturalness and Flexibility in Spoken Dialogue Interaction, pages 165-183. Springer. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. +Stuart Lloyd. 1982. Least squares quantization in PCM. IEEE transactions on information theory, 28(2):129-137. +Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence representations. arXiv preprint arXiv:1803.02893. +James MacQueen et al. 1967. Some methods for classification and analysis of multivariate observations.
In Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, volume 1, pages 281-297. Oakland, CA, USA. +Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, Roberto Zamparelli, et al. 2014. A sick cure for the evaluation of compositional distributional semantic models. In Lrec, pages 216-223. Reykjavik. +Michael Mathioudakis and Nick Koudas. 2010. Twittermonitor: trend detection over the twitter stream. In Proceedings of the 2010 ACM SIGMOD International Conference on Management of data, pages 1155-1158. + +Yu Meng, Chenyan Xiong, Payal Bajaj, Saurabh Tiwary, Paul Bennett, Jiawei Han, and Xia Song. 2021. Coco-lm: Correcting and contrasting text sequences for language model pretraining. arXiv preprint arXiv:2102.08473. +Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119. +John X. Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. Textattack: A framework for adversarial attacks, data augmentation, and adversarial training in nlp. +James Munkres. 1957. Algorithms for the assignment and transportation problems. Journal of the society for industrial and applied mathematics, 5(1):32-38. +Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In NIPS-W. +Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543. +Xuan-Hieu Phan, Le-Minh Nguyen, and Susumu Horiguchi. 2008. Learning to classify short and sparse text & web with hidden topics from large-scale data collections. 
In Proceedings of the 17th international conference on World Wide Web, pages 91-100. +Md Rashadul Hasan Rakib, Norbert Zeh, Magdalena Jankowska, and Evangelos Milios. 2020. Enhancement of short text clustering by iterative classification. arXiv preprint arXiv:2001.11631. +Nils Reimers and Iryna Gurevych. 2019a. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084. +Nils Reimers and Iryna Gurevych. 2019b. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics. +Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In Proceedings of the 57th annual meeting of the association for computational linguistics, pages 1085-1097. +Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. 2016. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. Advances in neural information processing systems, 29:1163-1171. + +Laine Samuli and Aila Timo. 2017. Temporal ensembling for semi-supervised learning. In International Conference on Learning Representations (ICLR), volume 4, page 6. +Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642. +Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958. +Antti Tarvainen and Harri Valpola. 2017. 
Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. arXiv preprint arXiv:1703.01780. +Vikas Verma, Kenji Kawaguchi, Alex Lamb, Juho Kannala, Yoshua Bengio, and David Lopez-Paz. 2019. Interpolation consistency training for semi-supervised learning. arXiv preprint arXiv:1903.03825. +Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. +Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426. +Chien-Sheng Wu, Andrea Madotto, Ehsan Hosseini-Asl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019. Transferable multi-domain state generator for task-oriented dialogue systems. arXiv preprint arXiv:1905.08743. +Zhuofeng Wu, Sinong Wang, Jiatao Gu, Madian Khabsa, Fei Sun, and Hao Ma. 2020. Clear: Contrastive learning for sentence representation. arXiv preprint arXiv:2012.15466. +Jiaming Xu, Bo Xu, Peng Wang, Suncong Zheng, Guanhua Tian, and Jun Zhao. 2017. Self-taught convolutional neural networks for short text clustering. Neural Networks, 88:22-31. +Yuanmeng Yan, Rumei Li, Sirui Wang, Fuzheng Zhang, Wei Wu, and Weiran Xu. 2021. ConSERT: A contrastive framework for self-supervised sentence representation transfer. arXiv preprint arXiv:2105.11741. +Ainur Yessenalina and Claire Cardie. 2011. Compositional matrix-space models for sentiment analysis. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 172-182. + +Jianhua Yin and Jianyong Wang. 2016. A model-based approach for text clustering with outlier detection. In 2016 IEEE 32nd International Conference on Data Engineering (ICDE), pages 625-636. IEEE.
+ +Dejiao Zhang, Shang-Wen Li, Wei Xiao, Henghui Zhu, Ramesh Nallapati, Andrew O Arnold, and Bing Xiang. 2021. Pairwise supervised contrastive learning of sentence representations. arXiv preprint arXiv:2109.05424. + +Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. 2017. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412. + +Jian-Guo Zhang, Kazuma Hashimoto, Chien-Sheng Wu, Yao Wan, Philip S Yu, Richard Socher, and Caiming Xiong. 2019. Find or classify? dual strategy for slot-value predictions on multi-domain dialog state tracking. arXiv preprint arXiv:1910.03544. + +Xiang Zhang and Yann LeCun. 2015. Text understanding from scratch. arXiv preprint arXiv:1502.01710. + +Yan Zhang, Ruidan He, Zuozhu Liu, Kwan Hui Lim, and Lidong Bing. 2020. An unsupervised sentence embedding method by mutual information maximization. arXiv preprint arXiv:2009.12061. + +# A Implementation + +Following the original SimCSE work (Gao et al., 2021), we adopt $10^{6}$ randomly sampled sentences from English Wikipedia as training data. + +We implement our models with PyTorch (Paszke et al., 2017). We use the pre-trained RoBERTa models as the backbone. We choose a two-layer MLP with size $(d\times d,d\times 128)$ to optimize our contrastive learning losses, where $d$ denotes the dimension of the sentence representations. We use Adam (Kingma and Ba, 2015) as our optimizer with a constant learning rate of $5e-4$, which we scale to $5e-6$ for updating the backbones/transformers. We set the virtual augmentation strength of VaSCL, i.e., $\Delta$ in Equation (3), to 15 for both DistilRoBERTa and RoBERTa-base, and 30 for RoBERTa-large. + +We train SimCSE (Gao et al., 2021) using a learning rate of $3e-5$ for optimizing both the contrastive learning head and the backbone. We also tried the default learning rate of $1e-5$ (suggested in Gao et al. (2021)) as well as our learning rate setup for optimizing the RoBERTa models with SimCSE, and found that $3e-5$ yields better performance. For both SimCSE and VaSCL, we set the batch size to 1024, train all models for five epochs, and evaluate on the development set of STS-B every 500 iterations. We report all our evaluations on the downstream tasks with the associated checkpoints attaining the best performance on the validation set of STS-B. + +# B Dataset Statistics + +# B.1 Intent Classification Dataset + +We evaluate our model on four intent classification datasets: (1) SNIPS (Coucke et al., 2018) is an SLU benchmark that consists of 7 distinct intents. (2) BANKING77 (Casanueva et al., 2020) is a large fine-grained single banking domain intent dataset with 77 intent classes. (3) HWU64 (Liu et al., 2021) contains 25,716 examples for 64 intents in 21 domains. (4) CLINC150 (Larson et al., 2019) spans 150 intents and 23,700 examples across 10 domains. SNIPS is limited to only a small number of classes, which oversimplifies the intent detection task and does not emulate the true environment of commercial systems. The remaining three datasets contain much more diversity and are more challenging. + +# B.2 Short Text Clustering Dataset
| Dataset | N | $\bar{W}$ | C | ImN |
| --- | --- | --- | --- | --- |
| AgNews | 8.0K | 23 | 4 | 1 |
| SearchSnippets | 12.3K | 18 | 8 | 7 |
| StackOverflow | 20K | 8 | 20 | 1 |
| Biomedical | 20K | 13 | 20 | 1 |
| GoogleNews | 11.1K | 28 | 152 | 143 |
| Tweet | 2.5K | 8 | 89 | 249 |
+ +Table 4: Statistics of six short text clustering datasets. N: number of text samples; $\bar{W}$ : average number of words per text example; C: number of clusters; ImN: imbalance number, defined as the size of the largest class divided by that of the smallest class. + +- SearchSnippets is extracted from web search snippets and contains 12,340 snippets associated with 8 groups (Phan et al., 2008). +- StackOverflow is a subset of the challenge data published by Kaggle, where 20,000 question titles associated with 20 different categories are selected by Xu et al. (2017). + +![](images/5c609cfae83e9e608a4bd6d232d573c9543272635c8afa4828a75fec7b8b523d.jpg) +(a) Evaluating VaSCL in the presence of different explicit data augmentation strategies. + +![](images/6d6d66e99efcf91dbd29ee14948e30ce9a238733bde840d16cf5c42418d5ee2b.jpg) + +![](images/911f9c750f97e374581fb87ce509ce9e77785d594456b31488e410cbcb556a27.jpg) + +![](images/901ed3cd88c890b3c54f4e52f10aa911c29583e53f5aac04a5f89c3af65ec2a4.jpg) +(b) Cosine similarity between the representations of each original training example and its augmentation, evaluated on different models. From left to right, the augmentations are obtained via WDel, WNet, and CTxt. Each point is averaged over 20,000 randomly sampled training examples. + +![](images/31a52d24f7e81b300cf624a8662163251dddaf4cd963be5415de1374e26cf05f.jpg) +Figure 4: Comparing and combining virtual augmentation with explicit text augmentations. (Full plot of Figure 3 in Section 4.4.) + +![](images/c5ab23c4b06e26142106a0cd64524b1e13fe76c92f35490029176e4e3b712a92.jpg) + +- Biomedical is a subset of PubMed data distributed by BioASQ, where 20,000 paper titles from 20 groups are randomly selected by Xu et al. (2017). +- AgNews is a subset of news titles (Zhang and LeCun, 2015), which contains 4 topics selected by Rakib et al. (2020). +- Tweet consists of 89 categories with 2,472 tweets in total (Yin and Wang, 2016).
+- GoogleNews contains titles and snippets of 11109 news articles related to 152 events (Yin and Wang, 2016). + +# C Full Evaluation of Intent Classification + +ATIS (Hemphill et al., 1990) is a benchmark for the air travel domain. This dataset is highly imbalanced, with the largest class containing $73\%$ of all the training and validation examples. Moreover, more than $60\%$ classes have less than 20 examples. We thereby exclude this task in our evaluation. \ No newline at end of file diff --git a/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/images.zip b/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..d5b921969929782e1a7e9a58ef586bd2512c7020 --- /dev/null +++ b/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:787020883c40cb6b475a7d6a00c756e8281d51389689c3cb96178de771db2bc5 +size 556574 diff --git a/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/layout.json b/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..c1b0f36d791e80aaece28ba09cc0a856d4d492e7 --- /dev/null +++ b/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9472ba21d39d1f7161849629e02396be87ef47df8ace12da3b038e17de225915 +size 427671 diff --git a/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/1b07c5cd-6c0d-416a-85c4-4f4153c49b67_content_list.json b/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/1b07c5cd-6c0d-416a-85c4-4f4153c49b67_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..09c1269cde8b0aa6bfa2a7e80ea5ed4ed27d921b --- /dev/null +++ 
b/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/1b07c5cd-6c0d-416a-85c4-4f4153c49b67_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1fcef02565674fa448ba34032aa96a7f80f34326ab6382550520368faf1c90b4 +size 72833 diff --git a/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/1b07c5cd-6c0d-416a-85c4-4f4153c49b67_model.json b/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/1b07c5cd-6c0d-416a-85c4-4f4153c49b67_model.json new file mode 100644 index 0000000000000000000000000000000000000000..18179c5c7cadd8de1703f6b0b1e8f49c42599ae6 --- /dev/null +++ b/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/1b07c5cd-6c0d-416a-85c4-4f4153c49b67_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5a86a6d9069381ecc2ba8bf31c18f8212e55b0de05e858fcb291f9c38907f5f1 +size 89189 diff --git a/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/1b07c5cd-6c0d-416a-85c4-4f4153c49b67_origin.pdf b/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/1b07c5cd-6c0d-416a-85c4-4f4153c49b67_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3b823f1c9d1856894625d7d033ac3c1a25b4bca5 --- /dev/null +++ b/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/1b07c5cd-6c0d-416a-85c4-4f4153c49b67_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:68405f294496435133e98df33d4bfb034859f691ef7905029cd817b865ba0d94 +size 1640211 diff --git a/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/full.md b/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/full.md new file mode 100644 index 0000000000000000000000000000000000000000..161a8ee07250ff7ecc72a5a16a107ec49f960fcb --- /dev/null +++ b/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/full.md @@ -0,0 +1,242 @@ +# VISITRON: Visual Semantics-Aligned Interactively Trained 
Object-Navigator + +Ayush Shrivastava1*, Karthik Gopalakrishnan2, Yang Liu2, Robinson Piramuthu2, Gokhan Tur2, Devi Parikh1, Dilek Hakkani-Tur2 + +1Georgia Tech, 2Amazon Alexa AI + +{ayshrv, parikh}@gatech.edu + +{karthgop, yangliud, robinpir, gokhatur, hakkanit}@amazon.com + +# Abstract + +Interactive robots navigating photo-realistic environments need to be trained to effectively leverage and handle the dynamic nature of dialogue in addition to the challenges underlying vision-and-language navigation (VLN). In this paper, we present VISITRON, a multi-modal Transformer-based navigator better suited to the interactive regime inherent to Cooperative Vision-and-Dialog Navigation (CVDN). VISITRON is trained to: i) identify and associate object-level concepts and semantics between the environment and dialogue history, ii) identify when to interact vs. navigate via imitation learning of a binary classification head. We perform extensive pre-training and fine-tuning ablations with VISITRON to gain empirical insights and improve performance on CVDN. VISITRON's ability to identify when to interact leads to a natural generalization of the gameplay mode introduced by Roman et al. (2020) for enabling the use of such models in different environments. VISITRON is competitive with models on the static CVDN leaderboard and attains state-of-the-art performance on the Success weighted by Path Length (SPL) metric. + +# 1 Introduction + +Large pre-trained Transformer-based language models (Vaswani et al., 2017) are ubiquitous in natural language processing (NLP) and have performed very well in interactive settings such as open-domain (Gopalakrishnan et al., 2019; Huang et al., 2020) and task-oriented dialogue (Kim et al., 2020). 
The success of Transformers and the pre-train/fine-tune paradigm in NLP has also inspired their adoption in vision-and-language research, with cross-modal representations being learned (Li et al., 2020) and utilized for tasks like image and object captioning, visual question answering, visual commonsense reasoning, and visual dialogue.

![](images/d220a93a5c6d829b50df7c3839196cf4200caa9882b6966486b1b69345edbaa5.jpg)
Figure 1: Cooperative Vision-and-Dialog Navigation (CVDN) with Dynamic Question-Asking

Vision-and-language navigation (VLN) is a challenging cross-modal research task in which agents must learn to navigate in response to natural language instructions in simulated photo-realistic environments. VLN has been studied extensively since the advent of the Room-to-Room (R2R) dataset (Anderson et al., 2018b), and there has been growing interest recently in pushing the pre-train/fine-tune paradigm towards VLN, with work on leveraging disembodied corpora (Majumdar et al., 2020) to learn cross-modal pre-trained representations that can improve embodied VLN performance. As depicted in Figure 1, the Cooperative Vision-and-Dialog Navigation (CVDN) dataset (Thomason et al., 2020) allows for dialogue with a guide during navigation: a navigator can ask natural language questions of a guide when it needs assistance, and the guide responds in natural language using privileged knowledge of the environment accessible only to it. This expands beyond the traditional VLN task towards deployable interactive agents that are more robust and generalizable. But preliminary navigator modeling using CVDN is still VLN-style via the Navigation from Dialog History (NDH) task, treating the dialogue history as a static instruction.
+ +In this paper, we present work on training VISITRON, a multi-modal Transformer-based navigator with a focus on tackling challenges unique to CVDN: i) moving beyond rote memorization to associative learning in order to learn to identify and acquire visio-linguistic concepts and semantics while interacting in new environments, and ii) learning when to ask questions (Chi et al., 2020). VISITRON builds off the recent cross-modal object-semantics aligned pre-training (OSCAR) strategy and uses object-tags as explicit anchor points during training to learn to associate the environment's visual semantics with the textual dialogue history, thus allowing for interaction/experience-grounded (Bisk et al., 2020) visio-linguistic concepts and semantics identification and acquisition. VISITRON is trained in a data-driven fashion to identify when to engage in dialogue, i.e., ask questions, vs. when to navigate, thus providing the first known empirical baselines for this task. We also present empirical results from various first-principles modeling ablations performed with VISITRON. We demonstrate that for CVDN, panoramic viewpoint selection is a better formulation than discrete turn-based action prediction, akin to what has been seen on VLN with R2R (Fried et al., 2018). We observe that multi-task learning with long-trajectory VLN datasets leads to significant CVDN performance gains relative to training on CVDN alone. VISITRON is competitive with models on the leaderboard for the static NDH task on EvalAI (Yadav et al., 2019), attaining state-of-the-art performance on the Success weighted by Path Length (SPL) metric. Given VISITRON's design and ability to identify when to engage in dialogue, we also propose a generalization of the game-play mode introduced by Roman et al. (2020) for jointly fine-tuning and evaluating VISITRON and future such models with pre-trained guides to help them easily adapt to their guides' capabilities. 
# 2 Background

# 2.1 Vision-and-Language Navigation

The Vision-and-Language Navigation (VLN) task requires an agent spawned in an indoor environment at a starting position $s_0$ to follow natural language instructions $x$ and navigate to a target position $s_{goal}$. This can also be seen as a Partially Observable Markov Decision Process $\mathcal{M} = \langle S, \mathcal{A}, P_s, r \rangle$, where $S$ is the visual state space, $\mathcal{A}$ is the discrete action space, $P_{s}$ is the unknown environment distribution from which the next state is drawn, and $r\in \mathbb{R}$ is the reward function (Hao et al., 2020). At a given time step $t$, the agent receives an RGB image observation $obs(s_{t})$, where $s_t\in S$. Based on the observation, the agent takes an action $a_{t}\in \mathcal{A}$, transitions into the next state $s_{t+1}\sim P_s(\cdot \mid s_t,a_t)$, and receives a new image observation $obs(s_{t+1})$. To end the episode, the agent must select the special STOP action. A $T$-step trajectory can be represented as $\pmb{\tau} = [s_0,a_0,s_1,a_1,\dots ,s_T,a_T]$. The episode is considered successful if the agent stops within $\epsilon$ distance of the goal, i.e., $|s_T - s_{goal}|\leq \epsilon$. Using a training dataset $\mathcal{D} = \{(\pmb{\tau},\pmb{x})\}$ of expert trajectory $\pmb{\tau}$ and instruction $\pmb{x}$ pairs, the goal is to train a policy $\pi_{\theta}(\tau \mid x)$ with parameters $\pmb{\theta}$ that maximizes the log-likelihood of the target trajectory given the instructions $\pmb{x}$:
$$
\max_{\pmb{\theta}} \; \mathbb{E}_{(\pmb{\tau}, \pmb{x}) \sim \mathcal{D}} \, \mathcal{L}_{\pmb{\theta}}(\pmb{\tau}, \pmb{x}), \qquad \mathcal{L}_{\pmb{\theta}}(\pmb{\tau}, \pmb{x}) = \log \pi_{\pmb{\theta}}(\pmb{\tau} \mid \pmb{x}) = \sum_{t=0}^{T} \log \pi_{\pmb{\theta}}(\pmb{a}_{t} \mid \pmb{s}_{t}, \pmb{x}) \tag{1}
$$

Several datasets have been released for VLN based on Matterport3D (Chang et al., 2017), a large-scale RGB-D dataset containing $\sim 10000$ panoramic views built from $\sim 194000$ RGB-D images of 90 building-scale scenes. The most popular VLN dataset based on Matterport3D is the Room-to-Room (R2R) dataset (Anderson et al., 2018b), containing $\sim 7200$ trajectories and 3 natural language instructions per trajectory. For the validation and test sets, seen and unseen splits are created to easily evaluate how well an agent generalizes. Room-4-Room (R4R) (Jain et al., 2019) is an augmentation of R2R wherein existing short trajectories in R2R are joined to form longer, more challenging trajectories. Room-across-Room (RxR) (Ku et al., 2020) is a newly introduced dataset with several properties, including but not limited to multilingual instructions, larger scale (for each language, $\sim 14000$ trajectories with 3 instructions per trajectory), fine-grained spatio-temporal grounding, and follower demonstrations.

A navigating agent's actions typically belong to a pre-defined discrete set comprising options such as FORWARD, LEFT, RIGHT, etc. Predicting the next best action from this low-level visuomotor space (Fried et al., 2018) of actions is referred to as turn-based action prediction. Given the nature of the aforementioned VLN datasets, it is also possible to have a navigating agent's actions belong to the panoramic space, wherein the agent selects the next best viewpoint in the navigation graph from the panoramic space visible to it at its current location.
This is referred to as viewpoint selection.

# 2.2 Cooperative Vision-and-Dialog Navigation

Cooperative Vision-and-Dialog Navigation (CVDN) is a recently introduced dataset (Thomason et al., 2020) collected by partnering crowd-workers in simulated photo-realistic environments. One worker acts as a NAVIGATOR, seeking to navigate to a goal and interacting in natural language with a GUIDE along the way if it needs assistance. The other worker acts as a GUIDE, answering the NAVIGATOR's questions while having privileged access to the best next steps the NAVIGATOR should take according to an ORACLE full-state shortest path planner. The collection of each CVDN instance begins with the state $(S, T_{O}, s_{0}, G)$, where $S$ is the environment in which the agents are placed, $s_{0}$ is the start location of the NAVIGATOR, $G$ is the goal region, and $T_{O}$ is the initial hint given to both agents about the goal region containing object $O$. At any time step $t$, the NAVIGATOR can make one of three choices: i) take a sequence of $k_{t}$ navigation steps $N_{t} = [n_{t}^{1}, n_{t}^{2}, \ldots, n_{t}^{k_{t}}]$, ii) ask a question $Q_{t}$ of the GUIDE, iii) declare its current position as the goal region. If a question is asked, the GUIDE looks at the $l$ next steps along the shortest path to the goal and replies with an answer $A_{t}$. The instance ends when the NAVIGATOR reaches $G$. Thus, a CVDN instance comprises $\left[(S, T_{O}, s_{0}, G), \langle N_{0}, Q_{1}, A_{1}, N_{1}, Q_{2}, A_{2}, N_{2}, \ldots, Q_{m}, A_{m}, N_{m} \rangle\right]$, where $m$ is the number of dialogue exchanges between the NAVIGATOR and GUIDE, and $N_{0}$ is the sequence of navigation steps before the $1^{\text{st}}$ exchange.

# 2.2.1 Navigation from Dialog History (NDH)

With the CVDN dataset, the NDH task for the NAVIGATOR was introduced (Thomason et al., 2020), in which the NAVIGATOR needs to navigate towards a goal given a dialogue history.
Specifically, the NAVIGATOR is spawned at the terminal position of $N_{t-1}$ (or $s_0$ in the case of $N_0$) in environment $S$ and is given $(T_O, Q_{1:t}, A_{1:t})$. The task is to predict the navigation steps that bring the agent closer to the goal region $G$. To train a NAVIGATOR agent for this task, the navigation steps used for supervision can be provided in any of three forms: i) human NAVIGATOR steps, $N_{t}$: the navigation steps that were taken by the human NAVIGATOR after the dialogue exchange at time step $t$; ii) ORACLE steps, $O_{t}$: the shortest path steps accessible to the GUIDE when it gave the answer $A_{t}$; iii) MIXED: a mix of both human NAVIGATOR and ORACLE supervision, where the supervision path is $N_{t}$ when $e(O_{t}) \in N_{t}$, and $O_{t}$ otherwise, where $e(\cdot)$ represents the terminal position of a sequence of navigation steps. The NAVIGATOR agent is trained VLN-style using Equation 1 on NDH instances extracted as described above from the CVDN instances, and evaluated on NDH instances using VLN metrics such as Goal Progress and Success weighted by Path Length (SPL), defined in Section 4.1. In the CVDN literature, it has been observed that MIXED supervision typically performs best, followed by ORACLE and human NAVIGATOR supervision respectively. However, for all our experiments, we pick the human NAVIGATOR supervision mode to establish a lower bound on performance for VISITRON.

# 2.2.2 Gameplay Mode

In the CVDN dataset, a human NAVIGATOR cooperates with a human GUIDE to find a goal region $G$ with target object $O$. Roman et al. (2020) introduced the game-play mode, which is essentially an agent-agent replica of this dynamic dataset creation process wherein the two trained agents consume each other's outputs.
This mode can be applied during both fine-tuning and evaluation, and it helps understand how well a pre-trained NAVIGATOR agent adapts to the capabilities of different GUIDE agents in a dynamic/interactive setting. For consistency with the game-play mode notation introduced by Roman et al. (2020), we denote the role of asking questions that is intrinsic to the NAVIGATOR by QUESTIONER. Thus, in a game-play mode episode, at $t = 0$ (prior to the first QA exchange), the NAVIGATOR takes $N_{0}$ steps given the initial hint $T_{O}$. For time steps $t > 0$, the QUESTIONER generates a question $Q_{t}$, the GUIDE generates an answer $A_{t}$ having access to the next $l$ steps in the shortest path, and then the NAVIGATOR generates $N_{t}$ navigation steps of length $k_{t}$. All agents have access to the entire visual navigation $(N_{0:t-1})$ and dialogue $(Q_{1:t-1}A_{1:t-1})$ histories in addition to the initial hint $T_{O}$. The QUESTIONER asks questions every $4^{\text{th}}$ time-step, a hard-coded heuristic used by Roman et al. (2020) since their NAVIGATOR does not know when to ask questions. The episode ends when the NAVIGATOR declares that the current position is in the goal region $G$ or a maximum number of turns (20) are played. The NAVIGATOR's performance in game-play mode is measured using Goal Progress (see Section 4.1). While the focus of our work is not to train a QUESTIONER, we ensure our NAVIGATOR is equipped with the ability to identify when to ask questions. This leads to our proposed general game-play mode, wherein the aforementioned description of a regular game-play mode episode still holds but the hard-coded heuristic of asking questions every $4^{\text{th}}$ time-step is eliminated, i.e., the NAVIGATOR decides when a question must be asked to continue game-play.
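The generalized game-play episode described above can be sketched as a simple control loop in which the NAVIGATOR itself triggers each QA exchange. This is an illustrative sketch, not the authors' implementation; the agent callables and their signatures are our assumptions.

```python
def general_gameplay_episode(initial_hint, start, wants_to_ask, ask, answer,
                             navigate, declares_goal, max_turns=20):
    """One generalized game-play episode: the learned when-to-ask decision
    (`wants_to_ask`) replaces the every-4th-time-step heuristic."""
    history = [initial_hint]            # T_O, visible to all agents
    position = start                    # s_0
    for _ in range(max_turns):
        if wants_to_ask(position, history):     # NAVIGATOR's binary decision
            question = ask(position, history)   # Q_t from the QUESTIONER role
            history.append(question)
            history.append(answer(question))    # A_t from the GUIDE
        steps = navigate(position, history)     # N_t
        if steps:
            position = steps[-1]
        if declares_goal(position, history):    # NAVIGATOR ends the episode
            break
    return position, history
```

With stub agents, the loop asks only when the decision head fires and stops once the goal declaration fires, matching the episode description above.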
# 2.3 OSCAR

The OSCAR pre-training strategy (Li et al., 2020) for cross-modal Transformers uses object tags detected in images as anchor points to ease the learning of semantic alignments between images and text. The input is represented as Word-Tag-Image $(\boldsymbol{w}, \boldsymbol{q}, \boldsymbol{v})$, where $\boldsymbol{w}$ and $\boldsymbol{q}$ are the sequences of word embeddings of the text and object tags respectively, and $\boldsymbol{v}$ is the sequence of region features of the image. To generate $\boldsymbol{v}$, Faster R-CNN (Ren et al., 2015) is used to extract the visual semantics of each region as $(v', z)$, where $v' \in \mathbb{R}^P$ ($P = 2048$) is the region feature and $z \in \mathbb{R}^6$ is the region position represented by the coordinates of the top-right and bottom-left corners and the height & width. $v'$ and $z$ are concatenated to form a position-sensitive region feature, which is further transformed into $v$ using a projection layer such that $v$ has the same dimension as the input token embeddings. The model is then pre-trained with a Masked Token Loss (MTL) and a Contrastive Loss (CL):

$$
\begin{aligned}
\mathcal{L}_{\text{Pre-training}} &= \mathcal{L}_{MTL} + \mathcal{L}_{CL} \\
&= -\mathbb{E}_{(\boldsymbol{v}, \boldsymbol{h}) \sim \mathcal{D}} \log p\left(h_{i} \mid \boldsymbol{h}_{\backslash i}, \boldsymbol{v}\right) - \mathbb{E}_{(\boldsymbol{h}^{\prime}, \boldsymbol{w}) \sim \mathcal{D}} \log p\left(y \mid f(\boldsymbol{h}^{\prime}, \boldsymbol{w})\right)
\end{aligned}
$$

The MTL is akin to that in BERT (Devlin et al., 2019), masking the input tokens $(\boldsymbol{w}, \boldsymbol{q})$ with a probability of $15\%$ and predicting them.
The CL is computed by polluting the object tags $\boldsymbol{q}$ with a probability of $50\%$ with randomly chosen object tags from the dataset; a feed-forward layer on top of [CLS] predicts whether the input contains the original image representation or a polluted one. In the previous equation, $\pmb{h} = [\pmb{w},\pmb{q}]$, $\pmb{h}' = [\pmb{q},\pmb{v}]$, $h_{\backslash i}$ are the surrounding tokens of masked token $h_i$, $f(\cdot)$ denotes the binary classifier where $y = 0$ if the object tags are polluted and 1 otherwise, and $\mathcal{D}$ is the dataset. OSCAR uses a collection of popular image-text datasets for pre-training, including but not limited to Conceptual Captions (Sharma et al., 2018), MS-COCO (Lin et al., 2014), Flickr30K (Young et al., 2014) and GQA (Hudson and Manning, 2019). Such datasets typically have images of objects taken from ideal angles, whereas a navigating agent will see objects from different vantage points, which also motivates augmenting OSCAR and performing an additional phase of navigation-specific pre-training.

# 3 Approach

The policy for NDH (and VLN) can be decomposed into an encoder-decoder setup, $\pi_{\pmb{\theta}} = f_{\pmb{\theta}_{E}}\circ f_{\pmb{\theta}_{D}}$:

- A vision-language encoder $f_{\theta_E}:\{s_{1:t},x\} \to z_t$, where $s_{1:t}$ are visual states, $x$ is the dialogue history (or instructions for VLN) and $z_{t}$ is the joint latent representation at time step $t$.
- An action decoder $f_{\theta_D} : \{s_t, z_t, a_{t-1}\} \to a_t$, where $a_t$ is the next action.

We model $\pi_{\theta}$ with VISITRON, a visio-linguistic Transformer-based model. VISITRON's encoder is structurally similar to OSCAR's Transformer (Li et al., 2020). This is by design, to enable easy transfer of visual semantics-aligned representations learned from disembodied image-text data.
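The encoder-decoder decomposition above amounts to a two-function composition per time step; a minimal sketch with illustrative callables standing in for $f_{\theta_E}$ and $f_{\theta_D}$ (the function names and data representation are our assumptions):

```python
def policy_step(encode, decode, visual_states, dialogue, prev_action):
    """One step of the decomposed policy pi = f_D o f_E: the encoder fuses
    the visual states s_{1:t} with the dialogue history x into z_t, and the
    decoder maps (s_t, z_t, a_{t-1}) to the next action a_t."""
    z_t = encode(visual_states, dialogue)                 # f_E
    return decode(visual_states[-1], z_t, prev_action)    # f_D
```

Any concrete encoder (here, a Transformer) and decoder (here, an attention-based LSTM) can be dropped into this interface.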
We make navigation-specific modifications to OSCAR, but they are all structured as augmentations of modules rather than removal of network components, enabling us to use the pre-trained weights of OSCAR's Transformer to initialize large portions of our encoder. The augmentations are described in Section 3.1. As with OSCAR, the input to VISITRON's encoder is represented as Word-Tag-Image $(w, q, v)$, where $w$ and $q$ are the sequences of word embeddings of the text and object tags respectively, and $v$ is the sequence of region features of the image. We represent the panorama as 36 views, extract Faster R-CNN (Ren et al., 2015) region features $r'$ from each view and add a positional vector $p$, $r = (r', p)$. To incorporate 3D direction, we add a direction embedding $d$ to the region features, $v = r + d$. $d$ is a 128-dimensional orientation vector formed by repeating $[\sin \phi; \cos \phi; \sin \omega; \cos \omega]$ 32 times, where $\phi$ and $\omega$ are the heading and elevation poses. In addition to the standard [CLS] and [SEP], we also use [TAR], [NAV], [GUI] as delimiter tokens for the initial target hint, the NAVIGATOR's questions and the GUIDE's answers respectively. While this input structure is dialogue-specific, it is amenable to instruction-based datasets for multi-tasking.

![](images/1c3a15f606fa508cdf6145c35b8af2490fed5354a46793e0227858c54f0bb17b.jpg)
Figure 2: VISITRON's Encoder Architecture and Semantics-Aligned Navigation Pre-Training Tasks

# 3.1 VISITRON Pre-Training

We adopt a two-stage pre-training strategy, initializing VISITRON's encoder with weights from OSCAR to begin with web-scale disembodied visio-linguistic representations, followed by facilitating a domain shift to navigation and actions by pre-training on navigation data.
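The direction-fused region features described above can be sketched directly; the function names are ours, and the element-wise fusion assumes the region feature has already been projected to the embedding's length purely for illustration.

```python
import math

def direction_embedding(heading, elevation, repeats=32):
    """128-d orientation vector: [sin phi; cos phi; sin omega; cos omega]
    tiled `repeats` (32) times."""
    base = [math.sin(heading), math.cos(heading),
            math.sin(elevation), math.cos(elevation)]
    return base * repeats

def fuse_direction(region_feature, heading, elevation):
    """v = r + d: add the direction embedding to a region feature of the
    same length (illustrative; the real model operates at the encoder's
    hidden size after projection)."""
    d = direction_embedding(heading, elevation)
    assert len(region_feature) == len(d)
    return [r + x for r, x in zip(region_feature, d)]
```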
For each navigation trajectory, we extract $(\boldsymbol{w}, \boldsymbol{q}, \boldsymbol{v}, \boldsymbol{a})$ tuples, where $\boldsymbol{w}$ is the dialogue history/instruction, $\boldsymbol{q}$ is the sequence of object tags from the current panorama, $\boldsymbol{v}$ is the sequence of region features and $\boldsymbol{a}$ is the direction in the $360^{\circ}$ panoramic space where the next node in the trajectory is located (Fried et al., 2018). The pre-training objectives are:

1. Masked Language Modeling: Input word tokens are replaced with [MASK] with $15\%$ probability and the masked token $x_{i}$ is predicted conditioned on the surrounding tokens $x_{\backslash i}$.
2. Masked Object Tag Prediction: Object tags are replaced with [MASK] with $15\%$ probability. A feed-forward head on top of [MASK] is used to predict the tag from a distribution over Faster R-CNN semantic classes. This provides more fine-grained object supervision than OSCAR's global masked token loss over both object tags and text, since it computes a distribution over the object detector's semantic classes instead of over the entire input vocabulary.
3. Directional Grounding: The [CLS] hidden state goes into a feed-forward head to predict $\mathbf{a}$.

Figure 2 illustrates VISITRON's encoder architecture and the pre-training objectives we use, with an extracted tuple from a sample NDH instance.

# 3.2 VISITRON Fine-Tuning

After pre-training the encoder, we pair it with an attention-based Long Short-Term Memory (LSTM) action decoder (Hochreiter and Schmidhuber, 1997), as shown in Figure 3. At time-step $t$, the decoder (cell state $d_t$) takes the previous action $a_{t-1}$ and the panoramic ResNet features extracted from the current location/state, and decodes the next action $a_t$ while attending to the VISITRON encoder's cross-modal representation of its input.
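This decoder is fine-tuned with the trajectory log-likelihood of Equation 1, i.e., behavior cloning on expert actions. A minimal sketch of the per-trajectory loss (the data representation is our assumption):

```python
import math

def trajectory_nll(action_probs, expert_actions):
    """Negative of Equation 1 for one (trajectory, dialogue) pair:
    -sum_t log pi(a_t | s_t, x). `action_probs[t]` maps each candidate
    action (a viewpoint or a turn) to the policy's probability at step t;
    `expert_actions[t]` is the supervised action."""
    return -sum(math.log(probs_t[a_t])
                for probs_t, a_t in zip(action_probs, expert_actions))
```

For instance, a uniform policy over two candidate viewpoints at each of three steps incurs a loss of $3\log 2$; training lowers this by concentrating probability mass on the expert's choices.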
After this LSTM is fine-tuned, the same stack is frozen and a randomly initialized two-layer feed-forward head is added and trained with a binary cross-entropy loss to learn to classify when to ask a question. The supervision for this head comes from the elongated CVDN instances defined in Section 2.2, with time-steps when a question was asked serving as positive labels and the remaining time-steps during which navigation occurs serving as negative labels. Note that, as described in Section 2.1, the decoder's actions can belong to either the panoramic space or the low-level visuomotor space (Fried et al., 2018), leading to independent formulations for viewpoint selection and turn-based action prediction.

# 4 Experiments

In this section, we first describe the evaluation metrics we adopt.

Table 1: Pre-Training Ablations (Fine-Tuning and Evaluating on NDH)
| # | Semantics-aligned pre-training curriculum (Stage 1, Web (OSCAR): Contrastive + Masked LM, Object Tags; Stage 2, Navigation: Masked LM, Masked Object Tag Prediction, Directional Grounding) | Val Seen GP (m) ↑ | Val Seen SPL (%) ↑ | Val Seen SR (%) ↑ | Val Seen nDTW (%) ↑ | Val Unseen GP (m) ↑ | Val Unseen SPL (%) ↑ | Val Unseen SR (%) ↑ | Val Unseen nDTW (%) ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | No pre-training and no object tags | 4.76 | 36.56 | 46.07 | 30.97 | 2.09 | 9.96 | 22.49 | 6.50 |
| 2 |  | 4.82 | 50.73 | 58.11 | 47.34 | 2.67 | 24.88 | 34.29 | 24.21 |
| 3 |  | 4.38 | 45.15 | 52.09 | 41.14 | 2.30 | 13.03 | 24.81 | 8.63 |
| 4 |  | 5.09 | 25.92 | 41.10 | 17.91 | 1.90 | 11.27 | 23.48 | 5.62 |
| 5 |  | 4.83 | 48.22 | 56.02 | 47.01 | 2.70 | 24.04 | 32.86 | 23.46 |
| 6 |  | 5.34 | 55.16 | 61.78 | 54.83 | 2.71 | 24.56 | 32.52 | 24.51 |
![](images/fe0939deffc6693b7fa8e145b52cf566960baebf893ee66f178f565506504d9e.jpg)
Figure 3: The NAVIGATOR predicts navigation actions, given the dialogue history and visual observations. The same stack decides when to ask the GUIDE a question. A similar setup can be used for question generation.

We then describe and discuss our experimental observations from ablations performed during VISITRON pre-training and fine-tuning, respectively. We present our observations for question-asking classification for CVDN, establishing a strong baseline for future models. We finally present and discuss our observations from submitting our model checkpoints to the static EvalAI leaderboard for CVDN.

# 4.1 Evaluation Metrics

We evaluate VISITRON's ability to navigate to the goal with the following metrics:

- Goal Progress (GP) measures the difference between the distance from the start position to the final goal and the distance from the end position to the final goal. It is used to determine how much progress in meters the agent has made towards the final goal.
- Success weighted by (Normalized Inverse) Path Length (SPL), introduced by Anderson et al. (2018a), provides a measure of success normalized by the ratio between the length of the shortest path and the selected path.
- Success Rate (SR) measures the success of an episode. If the agent stops within 3 meters of the goal, it is considered a success.
- Normalized Dynamic Time Warping (nDTW), introduced by Ilharco et al. (2019), helps measure a navigator agent's fidelity to the dialogue history/instruction by softly penalizing deviations from the reference path.

We evaluate the question-asking classification head by computing accuracy and balanced accuracy (Brodersen et al., 2010). The latter accounts for the natural class imbalance of more navigation time-steps than question-asking time-steps expected in dialogue-based navigation by computing the average of recall obtained on each class.
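Balanced accuracy, as used here, is simply the unweighted mean of per-class recall; a minimal sketch (function name ours):

```python
def balanced_accuracy(y_true, y_pred):
    """Average of the recall obtained on each class (Brodersen et al., 2010),
    which is insensitive to the navigation-vs-question class imbalance."""
    recalls = []
    for cls in sorted(set(y_true)):
        idx = [i for i, y in enumerate(y_true) if y == cls]
        recalls.append(sum(y_pred[i] == cls for i in idx) / len(idx))
    return sum(recalls) / len(recalls)
```

On labels `[0,0,0,0,0,0,1,1]` with predictions `[0,0,0,0,0,0,1,0]`, plain accuracy is 7/8 but balanced accuracy is only (1 + 1/2)/2 = 3/4, exposing the missed minority class.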
# 4.2 Pre-Training Ablations

Using NDH and R2R trajectories, we pre-train VISITRON as described in Section 3.1. We experiment with the cumulative addition of each pre-training stage and objective to obtain an ablative understanding of their effect on the downstream NDH task. Results are shown in Table 1. We see that our pre-training strategy helps: the best performance on Val Seen (as measured by all metrics) is obtained when using all pre-training stages and objectives. We also see that Goal Progress (GP) is highest on Val Unseen in this setting (an absolute increase of 0.62 relative to no pre-training). Rows 3-4 demonstrate the efficacy of our second-stage masked language modeling (MLM) task, helping improve Val Seen GP from 4.38 to 5.09. Rows 4-5 demonstrate the efficacy of our newly introduced masked object tag prediction task as a means towards experience-driven identification and acquisition of concepts and semantics.

Table 2: Fine-Tuning Ablations
| # | Action Space | Multi-Task Fine-Tuning (NDH +) | Val Seen GP (m) ↑ | Val Seen SPL (%) ↑ | Val Seen SR (%) ↑ | Val Seen nDTW (%) ↑ | Val Unseen GP (m) ↑ | Val Unseen SPL (%) ↑ | Val Unseen SR (%) ↑ | Val Unseen nDTW (%) ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Turn-based Action Prediction | ✗ | 1.15 | 9.66 | 11.78 | 26.86 | 1.60 | 13.02 | 14.77 | 29.28 |
| 2 | Turn-based Action Prediction | ✓ (RxR) | 1.50 | 12.30 | 15.18 | 19.95 | 0.97 | 11.52 | 15.44 | 20.49 |
| 3 | Viewpoint Selection | ✗ | 5.34 | 55.16 | 61.78 | 54.83 | 2.71 | 24.56 | 32.52 | 24.51 |
| 4 | Viewpoint Selection | ✓ (RxR) | 5.11 | 12.33 | 25.65 | 4.66 | 3.25 | 10.74 | 27.34 | 3.78 |
This task yields significant increases in all metrics across both validation seen and unseen splits. Rows 5-6 show that our directional grounding task for pre-training the encoder plays a particularly important role: the increase in both GP and nDTW suggests that this task improves VISITRON's ability to navigate closer to the goal while ensuring that dialogue fidelity is maintained in the process, by aligning encoder representations with the direction along the reference path.

# 4.3 Fine-Tuning Ablations

Next, we perform ablations during fine-tuning, leveraging all objectives from Table 1 since our previous analysis demonstrated their effectiveness. For VLN agents, it has been shown that viewpoint selection in the panoramic space is a better formulation than turn-based action prediction in the low-level visuomotor space (Fried et al., 2018). However, it is not immediately obvious whether this extrapolates to dialogue-based navigation as in CVDN, so we experiment with both formulations for our NAVIGATOR. Given the sparsity of NDH instances ($\sim 4k$) for fine-tuning, we also study whether multi-task fine-tuning with the RxR dataset helps boost performance. Table 2 presents the fine-tuning ablation results. Rows 1 and 3 demonstrate that panoramic viewpoint selection is a better formulation than turn-based action prediction for CVDN, with all metrics increasing significantly when switching to viewpoint selection. Further, we see in rows 3 and 4 that multi-task fine-tuning leads to better CVDN generalization, with Val Unseen GP increasing from 2.71 to 3.25 when multi-tasking with viewpoint selection. However, this increase in GP occurs alongside a decrease in nDTW, SPL and SR.
This decrease can be attributed to the fact that the RxR dataset has very long trajectories, which prime the model to take long paths to the final CVDN goal (which GP cares about), well beyond the next 5 GUIDE steps in the NDH instance that nDTW, SPL and SR evaluate against.

# 4.4 Question-Asking Classification and Leaderboard Evaluation

We pick the VISITRON model checkpoint with the highest GP in Table 2 (row 4), and perform imitation learning of the question-asking classification head as described in Section 3.2. We evaluate the classification head by creating elongated CVDN instances from the validation sets as described in Section 2.2, akin to how supervision was provided during training: time-steps when a question was asked serve as positive instances and the remaining time-steps during which navigation occurs serve as negative instances. As seen in Table 3, our approach establishes a strong baseline for future work on identifying when to ask questions vs. when to navigate with CVDN, as measured by accuracy and balanced accuracy on Val Unseen. It is important to note that our design choice of adding and training a separate head for this task, while keeping the navigator stack frozen, ensures that there is no direct impact on navigation performance itself. This is unlike approaches that directly augment the navigation action space with a special question-asking action, where navigation actions themselves are affected by the presence of an additional competing variable for shared total probability mass.

Table 3: Question-Asking Classification Performance
| Metric (%) | Val Seen | Val Unseen |
| --- | --- | --- |
| Accuracy | 68.05 | 67.87 |
| Balanced Accuracy | 63.33 | 61.09 |
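The supervision behind these numbers can be sketched as labeling each time-step of an elongated CVDN instance; the event encoding (`'N'` for a navigation step, `'Q'` for a question) is an illustrative assumption of ours:

```python
def question_asking_labels(episode):
    """Binary supervision for the when-to-ask head: time-steps where the
    human NAVIGATOR asked a question are positives, navigation time-steps
    are negatives. Also returns the positive rate, which shows the class
    imbalance motivating balanced accuracy as an evaluation metric."""
    labels = [1 if event == "Q" else 0 for event in episode]
    return labels, sum(labels) / len(labels)
```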
We submitted this model checkpoint to the CVDN leaderboard aimed at the static NDH task. We observe in Table 4 that this model checkpoint's performance is competitive with state-of-the-art models, with a hidden test GP of 3.11. However, the low hidden test SPL of 12 indicates the impact that multi-task fine-tuning with long RxR paths had on this checkpoint's ability to take short paths to the goal, as discussed in Section 4.3. Given this expected decrease in SPL when utilizing such long trajectories, we also created a model checkpoint by multi-task fine-tuning VISITRON on NDH, R2R and R4R. We observe that this model checkpoint obtains a state-of-the-art SPL of 25 alongside an associated decrease in GP to 2.40.

Table 4: NDH Hidden Test Set Performance
| # | Method | GP (m) ↑ | SPL (%) ↑ |
| --- | --- | --- | --- |
| 1 | MT-RCM + EnvAg (Wang et al., 2020) | 3.91 | 17 |
| 2 | BabyWalk (Zhu et al., 2020b) | 3.65 | 11 |
| 3 | VISITRON | 3.11 | 12 |
| 4 | Cross-modal Memory Network (Zhu et al., 2020c) | 2.95 | 14 |
| 5 | PREVALENT (Hao et al., 2020) | 2.44 | 24 |
| 6 | VISITRON (Best SPL) | 2.40 | 25 |
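The GP/SPL trade-off in Table 4 follows directly from the metric definitions in Section 4.1; a minimal sketch, with SPL per Anderson et al. (2018a) and function names ours:

```python
def goal_progress(start_to_goal, end_to_goal):
    """GP: meters of progress towards the goal over the episode."""
    return start_to_goal - end_to_goal

def spl(successes, shortest_lengths, taken_lengths):
    """Success weighted by Path Length, averaged over episodes: success is
    discounted by how much longer the taken path is than the shortest one."""
    per_episode = (s * l / max(p, l)
                   for s, l, p in zip(successes, shortest_lengths, taken_lengths))
    return sum(per_episode) / len(successes)
```

An agent primed to take long paths can still cut its distance to the goal (high GP) while losing SPL: a successful episode whose path is twice the shortest length scores an SPL of only 0.5.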
# 5 Related Work

Vision-and-language pre-training (Tan et al., 2019; Lu et al., 2019; Sun et al., 2019; Chen et al., 2020; Zhou et al., 2020) has grown to become a popular area of research, primarily aimed at solving downstream tasks such as image captioning, visual question answering and image retrieval. This line of work typically involves learning cross-modal representations using self-supervised objectives with a co-attention Transformer that fuses the two modalities, represented by input token embeddings and visual region features, where the latter are typically sourced from Faster R-CNN (Ren et al., 2015).

Research in vision-and-language navigation (VLN) has also seen tremendous progress (Fried et al., 2018; Ke et al., 2019; Anderson et al., 2019; Tan et al., 2019; Zhu et al., 2020a) since the advent of the Room-to-Room (R2R) dataset (Anderson et al., 2018b) based on Matterport3D (Chang et al., 2017), with the scope for further advances only increasing with the recent release of the much larger, densely annotated and multilingual Room-across-Room (RxR) dataset (Ku et al., 2020). As an extension to VLN, the recent Cooperative Vision-and-Dialog Navigation (CVDN) dataset (Thomason et al., 2020) allows for training interactive navigator and guide agents. The dominant focus of research with CVDN so far has been the Navigation from Dialog History (NDH) task introduced with CVDN, which is equivalent to treating the dialogue history as a VLN-style fixed instruction. The NDH formulation allows for easy transfer and multi-task learning (Hao et al., 2020; Wang et al., 2020; Zhang et al., 2020) with VLN. However, state-of-the-art VLN models such as VLN-BERT (Majumdar et al., 2020) rely on the fully-observable setting when framing the task as ahead-of-time path selection, which is fundamentally at odds with the need for dialogue in CVDN: dialogue is aimed at enabling the navigating agent to succeed while it makes navigation decisions and decides it needs assistance.
The recent Recursive Mental Model (RMM) (Roman et al., 2020) for CVDN attempts to address this by introducing a simulated dialogue game-play mode, where a trained navigator is fine-tuned jointly with a pre-trained guide and evaluated in this mode. However, the RMM navigator does not dynamically ask questions, instead relying on a hard-coded heuristic of asking questions after every 4th navigation time-step. VISITRON's design naturally leads to a generalization of this game-play mode which eliminates the aforementioned heuristic.

Our work is similar to recent work (Hao et al., 2020) on leveraging pre-trained cross-modal representations for the NDH task. However, our work takes on the added goals of learning when to ask questions and of associative learning of visio-linguistic concepts and semantics, to ensure they can be identified and acquired when interacting in new environments, which are key requirements for full cooperative vision-and-dialogue navigation.

# 6 Conclusion and Future Work

We presented VISITRON, a Transformer-based navigator designed to identify and acquire visio-linguistic concepts and semantics and make decisions, all key traits for the interactive navigation inherent to CVDN. We demonstrated the efficacy of our approach via experiments and ablations. We proposed generalizing the game-play regime introduced with RMM (Roman et al., 2020) to enable interactive fine-tuning and evaluation of VISITRON-like models with pre-trained guides. The trade-off between GP and SPL in dialogue-based navigation, Sim-to-Real transfer (Anderson et al., 2021), and robustness of dialogue-based navigation in the presence of speech recognition errors (Gopalakrishnan et al., 2020) are all important problems that merit detailed investigation in future work.

# 7 Societal Impact

The primary dataset of interest for our work on interactive navigation in photo-realistic indoor environments, Cooperative Vision-and-Dialog Navigation (CVDN), is an English-only dataset.
We also multi-task with several other datasets, namely R2R, R4R and RxR, but RxR is the only multilingual dataset and covers English, Hindi and Telugu. Due to CVDN being English-only, we utilized the English portion of the RxR data during multi-task fine-tuning. There are over 6500 known languages spoken in the world today and vision-and-dialog navigation research could, in principle, be deployed in every home in the world, but due to current data limitations, it can only be deployed in English-speaking homes. Our modeling methods should transfer to other languages given a sufficient volume of data, but ensuring that might not be possible for low-resource or endangered languages. VISITRON may benefit from new training schemes and modeling improvements to account for such scenarios. When deployed in real homes, speech would be the primary modality for most humans to interact with such robots. While speech recognition research has advanced considerably, ensuring accurate speech recognition across various speaker populations and accents is still challenging. Errors in speech recognition could impact VISITRON's ability to navigate accurately, so making VISITRON robust to speech recognition errors will be necessary, potentially via augmentation of the language component of its training data with synthetic and actual speech recognition errors (Gopalakrishnan et al., 2020).

During navigation, VISITRON needs access to neighboring viewpoints to select from. Each environment in CVDN contains an underlying navigation graph that provides this information, which might not be the case in real unseen environments. In its absence, additional modules can be added that generate a local navigation graph based on the surroundings (Anderson et al., 2021). Datasets in the vision-and-language navigation space such as R2R and CVDN typically consider the environment to be static.
Obstacle avoidance methods need to be added to models built using these datasets to avoid hazardous collisions in a dynamic environment, such as with moving humans and pets.

Large language models are known to have a high carbon footprint associated with training them (Strubell et al., 2019). VISITRON is about the same size as BERT (Devlin et al., 2019), which is now ubiquitously used in both academic and industrial settings and can be trained reasonably fast. The carbon footprint of this work was maintained within permissible limits by using a maximum of 8 Tesla V100 GPUs for training.

# Acknowledgments

Many thanks to Jesse Thomason and Aishwarya Padmakumar for useful technical discussions and actionable feedback on multiple versions of this paper. We would also like to thank the anonymous reviewers for their service and useful feedback.

# References

Peter Anderson, Angel Chang, Devendra Singh Chaplot, Alexey Dosovitskiy, Saurabh Gupta, Vladlen Koltun, Jana Kosecka, Jitendra Malik, Roozbeh Mottaghi, Manolis Savva, et al. 2018a. On evaluation of embodied navigation agents. arXiv preprint arXiv:1807.06757.
+Peter Anderson, Ayush Shrivastava, Devi Parikh, Dhruv Batra, and Stefan Lee. 2019. Chasing ghosts: Instruction following as bayesian state tracking. In Advances in Neural Information Processing Systems, pages 371-381.
+Peter Anderson, Ayush Shrivastava, Joanne Truong, Arjun Majumdar, Devi Parikh, Dhruv Batra, and Stefan Lee. 2021. Sim-to-real transfer for vision-and-language navigation. In Conference on Robot Learning, pages 671-681. PMLR.
+Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. 2018b. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3674-3683.
+Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, and Joseph Turian. 2020. Experience grounds language. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8718-8735, Online. Association for Computational Linguistics. +Kay Henning Brodersen, Cheng Soon Ong, Klaas Enno Stephan, and Joachim M. Buhmann. 2010. The balanced accuracy and its posterior distribution. In Proceedings of the 2010 20th International Conference on Pattern Recognition, ICPR '10, page 3121-3124, USA. IEEE Computer Society. +Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. 2017. Matterport3d: Learning from rgb-d data in indoor environments. International Conference on 3D Vision (3DV). + +Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Uniter: Universal image-text representation learning. In European conference on computer vision, pages 104-120. Springer. +Ta-Chung Chi, Minmin Shen, Mihail Eric, Seokhwan Kim, and Dilek Hakkani-tur. 2020. Just ask: An interactive learning framework for vision and language navigation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 2459-2466. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186. +Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, and Trevor Darrell. 2018. Speaker-follower models for vision-and-language navigation. 
In Advances in Neural Information Processing Systems, pages 3314-3325. +Karthik Gopalakrishnan, Behnam Hedayatnia, Qinlang Chen, Anna Gottardi, Sanjeev Kwatra, Anu Venkatesh, Raefer Gabriel, and Dilek Hakkani-Tur. 2019. Topical-chat: Towards knowledge-grounded open-domain conversations. In INTERSPEECH. +Karthik Gopalakrishnan, Behnam Hedayatnia, Longshaokan Wang, Yang Liu, and Dilek Hakkani-Tur. 2020. Are neural open-domain dialog systems robust to speech recognition errors in the dialog history? an empirical study. In INTERSPEECH. +Weituo Hao, Chunyuan Li, Xiujun Li, Lawrence Carin, and Jianfeng Gao. 2020. Towards learning a generic agent for vision-and-language navigation via pretraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13137-13146. +Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780. +Minlie Huang, Xiaoyan Zhu, and Jianfeng Gao. 2020. Challenges in building intelligent open-domain dialog systems. ACM Transactions on Information Systems (TOIS), 38(3):1-32. +Drew A Hudson and Christopher D Manning. 2019. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6700-6709. + +Gabriel Ilharco, Vihan Jain, Alexander Ku, Eugene Ie, and Jason Baldridge. 2019. General evaluation for instruction conditioned navigation using dynamic time warping. In ViGIL@NeurIPS. +Vihan Jain, Gabriel Magalhaes, Alexander Ku, Ashish Vaswani, Eugene Ie, and Jason Baldridge. 2019. Stay on the path: Instruction fidelity in vision-and-language navigation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1862-1872. +Liyiming Ke, Xiujun Li, Yonatan Bisk, Ari Holtzman, Zhe Gan, Jingjing Liu, Jianfeng Gao, Yejin Choi, and Siddhartha Srinivasa. 2019. 
Tactical rewind: Self-correction via backtracking in vision-and-language navigation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6741-6749. +Seokhwan Kim, Mihail Eric, Karthik Gopalakrishnan, Behnam Hedayatnia, Yang Liu, and Dilek Hakkani-Tur. 2020. Beyond domain APIs: Task-oriented conversational modeling with unstructured knowledge access. In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 278-289, 1st virtual meeting. Association for Computational Linguistics. +Alexander Ku, Peter Anderson, Roma Patel, Eugene Ie, and Jason Baldridge. 2020. Room-across-room: Multilingual vision-and-language navigation with dense spatiotemporal grounding. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4392-4412. +Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. 2020. Oscar: Object-semantics aligned pre-training for vision-language tasks. In European Conference on Computer Vision, pages 121-137. Springer. +Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer. +Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems, pages 13-23. +Arjun Majumdar, Ayush Shrivastava, Stefan Lee, Peter Anderson, Devi Parikh, and Dhruv Batra. 2020. Improving vision-and-language navigation with imagetext pairs from the web. In European Conference on Computer Vision, pages 259-274. Springer. +Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. 
Advances in neural information processing systems, 28:91-99. + +Homero Roman Roman, Yonatan Bisk, Jesse Thomason, Asli Celikyilmaz, and Jianfeng Gao. 2020. Rmm: A recursive mental model for dialog navigation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 1732-1745. +Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556-2565. +Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in nlp. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645-3650. +Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. 2019. Videobert: A joint model for video and language representation learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7464-7473. +Hao Tan, Licheng Yu, and Mohit Bansal. 2019. Learning to navigate unseen environments: Back translation with environmental dropout. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2610-2621. +Jesse Thomason, Michael Murray, Maya Cakmak, and Luke Zettlemoyer. 2020. Vision-and-dialog navigation. In Conference on Robot Learning, pages 394-406. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008. +Xin Eric Wang, Vihan Jain, Eugene Ie, William Yang Wang, Zornitsa Kozareva, and Sujith Ravi. 2020. Environment-agnostic multitask learning for natural language grounded navigation. 
In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIV 16, pages 413–430. Springer. +Deshraj Yadav, Rishabh Jain, Harsh Agrawal, Prithvi-jit Chattopadhyay, Taranjeet Singh, Akash Jain, Shiv Baran Singh, Stefan Lee, and Dhruv Batra. 2019. Evalai: Towards better evaluation systems for ai agents. arXiv preprint arXiv:1902.03570. +Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67-78. + +Yubo Zhang, Hao Tan, and Mohit Bansal. 2020. Diagnosing the environment bias in vision-and-language navigation. arXiv preprint arXiv:2005.03086. +Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason Corso, and Jianfeng Gao. 2020. Unified vision-language pre-training for image captioning and vqa. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 13041-13049. +Fengda Zhu, Yi Zhu, Xiaojun Chang, and Xiaodan Liang. 2020a. Vision-language navigation with self-supervised auxiliary reasoning tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10012-10022. +Wang Zhu, Hexiang Hu, Jiacheng Chen, Zhiwei Deng, Vihan Jain, Eugene Ie, and Fei Sha. 2020b. Babywalk: Going farther in vision-and-language navigation by taking baby steps. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2539-2556. +Yi Zhu, Fengda Zhu, Zhaohuan Zhan, Bingqian Lin, Jianbin Jiao, Xiaojun Chang, and Xiaodan Liang. 2020c. Vision-dialog navigation by exploring cross-modal memory. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10730-10739. 
# Visualizing the Relationship Between Encoded Linguistic Information and Task Performance

Jiannan Xiang\*, Huayang Li\*, Defu Lian\*, Guoping Huang\*, Taro Watanabe\*, Lemao Liu\*

Carnegie Mellon University $\spadesuit$ Nara Institute of Science and Technology

$\diamond$ University of Science and Technology of China $\clubsuit$ Tencent AI
Lab

jiannanx@cs.cmu.edu, li.huayang.lh6@is.naist.jp

liandefu@ustc.edu.cn, donkeyhuang@tencent.com

taro@is.naist.jp, lemaoliu@gmail.com

# Abstract

Probing is a popular approach for analyzing whether linguistic information is captured by a well-trained deep neural model, but it is hard to answer how a change in the amount of encoded linguistic information will affect task performance. To this end, we study the dynamic relationship between the encoded linguistic information and task performance from the viewpoint of Pareto Optimality. Its key idea is to obtain a set of models which are Pareto-optimal in terms of both objectives. From this viewpoint, we propose a method to optimize the Pareto-optimal models by formalizing it as a multi-objective optimization problem. We conduct experiments on two popular NLP tasks, i.e., machine translation and language modeling, and investigate the relationship between several kinds of linguistic information and task performances. Experimental results demonstrate that the proposed method is better than a baseline method. Our empirical findings suggest that some syntactic information is helpful for NLP tasks, whereas encoding more syntactic information does not necessarily lead to better performance, because the model architecture is also an important factor.

# 1 Introduction

Recent years have witnessed great success of deep neural networks for natural language processing tasks, such as language modeling (Zaremba et al., 2014; Merity et al., 2018) and Neural Machine Translation (Bahdanau et al., 2015; Vaswani et al., 2017). The excellent task performance they achieved sparked interest in interpreting their underlying mechanism.
Since linguistic knowledge is crucial in natural languages, an emerging body of literature uses probes (Conneau et al., 2018; Alt et al., 2020; Saleh et al., 2020; Cao et al., 2021) to investigate whether a standard model trained

![](images/0115ce830f97feed69af32222e157313092a49fc9ac32c72700c5ffd387c880e.jpg)
Figure 1: Illustration of Pareto frontier by a toy example. Triangle $(\triangle)$ corresponds to the standard checkpoint with best performance and each circle $(\bigcirc)$ corresponds to a sampled checkpoint. The y-axis indicates the linguistic information $\mathcal{I}$ encoded by the model, and x-axis indicates the negative loss value $-\mathcal{L}$ .

towards better task performance also captures the linguistic information. From the perspective of information theory, Voita and Titov (2020) and Pimentel et al. (2020b) show that probes can be used to estimate the amount of linguistic information captured by a fixed model.

However, the above probing only extracts linguistic information from a fixed standard model, which helps little in understanding the relationship between the task performance and the linguistic information encoded by the model. For example, under their methodology, it is difficult to answer the following two questions. First, would adding linguistic information be beneficial for an NLP model? Second, is it harmful when this linguistic information is reduced? Therefore, it is still an open and intriguing question to reveal how task performance changes with respect to different amounts of linguistic information.

To this end, this paper proposes a novel viewpoint to study the relationship between task performance and the amount of linguistic information, inspired by the criterion of Pareto Optimality, which is widely used in economics (Greenwald and Stiglitz, 1986).
Our main idea is to obtain Pareto-optimal models on a test set in terms of both linguistic information and task performance and then visualize their relationship along these optimal models. By comparing a standard model with these optimal models, it becomes clear whether adding the encoded information helps improve the task performance over the standard model, as illustrated in Figure 1, where the points on the line are Pareto-optimal and the red triangle denotes the standard model with best performance.

Nevertheless, it is typically intractable to obtain the Pareto-optimal models according to both dimensions on test data. To address the challenge, we propose a principled method to approximately optimize the Pareto-optimal models on the training data, which can be expected to generalise well on test sets according to statistical learning theory (Vapnik, 1999). Formally, the approach can be regarded as a multi-objective optimization problem: during the learning procedure, it optimizes two objectives, i.e., the task performance and the extracted linguistic information. In addition, we develop a computationally efficient algorithm to address the optimization problem. By inspecting the trend of those Pareto-optimal points, the relationship between task performance and linguistic information can be clearly illustrated. Back to our questions, we also consider two instances within the proposed methodology: one aims to maximize the amount of linguistic information (i.e., adding) while the other tries to minimize it (i.e., reducing).

We conduct experiments on two popular NLP tasks, i.e., machine translation and language modeling, and choose three different linguistic properties, including two syntactic properties (Part-of-Speech and dependency labels) and one phonetic property. We investigate the relationship between NMT performance and each kind of syntactic information, and the relationship between LM performance and phonetic information.
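The frontier-selection idea above can be made concrete with a small sketch. Each candidate model is summarized as a point of (negative test loss, estimated linguistic information), as in Figure 1, and only points not dominated on both coordinates are kept. The candidate values below are invented for illustration, not results from the paper.

```python
# Toy sketch of selecting Pareto-optimal models in the plane of Figure 1.
# Each point is (negative task loss, estimated linguistic information);
# higher is better on both axes. All values are invented for illustration.

def pareto_frontier(points):
    """Keep every point that no other point beats on both coordinates."""
    frontier = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] >= p[0] and q[1] >= p[1] and q != p
            for j, q in enumerate(points)
            if j != i
        )
        if not dominated:
            frontier.append(p)
    return frontier

# Five hypothetical checkpoints (neg_loss, info):
candidates = [(-3.0, 0.2), (-2.0, 0.5), (-1.0, 0.4), (-1.5, 0.6), (-2.5, 0.1)]
print(pareto_frontier(candidates))  # [(-1.0, 0.4), (-1.5, 0.6)]
```

Note that the surviving points trade the two objectives off against each other: the best-performing checkpoint and the most-informative checkpoint can both lie on the frontier.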
For machine translation, we use LSTM, i.e., RNN-search (Bahdanau et al., 2015), and Transformer (Vaswani et al., 2017) as the main model architectures, and conduct our experiments on $\mathrm{En} \Rightarrow \mathrm{De}$ and $\mathrm{Zh} \Rightarrow \mathrm{En}$ tasks. For language modeling, we employ the LSTM model and conduct experiments on the Penn Treebank dataset. The experimental results show that: i) syntactic information encoded by NMT models is important for the MT task and reducing it leads to sharply decreased performance; ii) the standard NMT model obtained by maximum likelihood estimation (MLE) is Pareto-optimal for Transformer, but this is not the case for LSTM-based NMT; iii) reducing the phonetic information encoded by LM models only makes task performance drop slightly.

In summary, our contributions are three-fold:

1. We make an initial attempt to study the relationship between encoded linguistic information and task performance, i.e., how the change of linguistic information affects the performance of models.
2. We propose a new viewpoint from Pareto Optimality as well as a principled approach, formulated as a multi-objective optimization problem, to visualize the relationship.
3. Our experimental results show that encoding more linguistic information does not necessarily yield better task performance, depending on the specific model architecture.

# 2 Related Work

Probe With the impressive performance of Neural Network models for NLP tasks (Sutskever et al., 2014; Luong et al., 2015; Vaswani et al., 2017; Devlin et al., 2019; Xu et al., 2020), people are becoming interested in understanding neural models (Ding et al., 2017; Li et al., 2019, 2020). One popular interpretation method is the probe (Conneau et al., 2018), also known as auxiliary prediction (Adi et al., 2017) and diagnostic classification (Hupkes et al., 2018), which aims to understand how neural models work and what information they have encoded and used.
From the perspective of information theory, Voita and Titov (2020) and Pimentel et al. (2020b) show that probes can be used to estimate the amount of linguistic information captured by a model. However, recent research studies point out that probes fail to demonstrate whether the information is used by models. For example, Hewitt and Liang (2019) show that the probe can also achieve high accuracy in predicting randomly generated tags, which are useless for the task. And Ravichander et al. (2021) present that the representations encode the linguistic properties even if they are invariant and not required for the task. Instead of studying the encoded linguistic information by training a probe for fixed representations, in this work we study how changes in the amount of linguistic information affect the performance of NLP tasks.

Information Removal Information removal is crucial in the areas of transfer learning (Ganin and Lempitsky, 2015; Tzeng et al., 2017; Long et al., 2018) and fairness learning (Xie et al., 2017; Elazar and Goldberg, 2018), where people want to remove domain information or bias from learned representations. One popular method is Adversarial Learning (Goodfellow et al., 2014; Ganin and Lempitsky, 2015), which trains a classifier to predict the properties of representations, e.g., domain information or gender bias, while the feature extractor tries to fool the classifier. In this work, when using our method to reduce the linguistic information in the representations, we find that our multi-objective loss function has the same form as adversarial learning, which provides a theoretical guarantee for using adversarial learning to find the Pareto-optimal solutions to a multi-objective problem.

Recently, Elazar et al. (2020) also propose to study the role of linguistic properties with the idea of information removal (Ravfogel et al., 2020).
However, the representations obtained by their method may not be Pareto-optimal, because it only minimizes the mutual information but ignores the objective of task performance. On the contrary, our proposed method optimizes towards both objectives, so our results can be used to visualize the relationship between linguistic properties and task performance.

Pareto Optimality The idea of Pareto Optimality (Mas-Colell et al., 1995) is an important criterion in economics, where the goal is to characterize situations in which no variable can be better off without making at least one other variable worse off. It has also been widely used in the areas of sociology and game theory (Beckman et al., 2002; Chinchuluun et al., 2008). In addition, in artificial intelligence, Martínez et al. (2020) use Pareto optimality to solve the group fairness problem and Duh et al. (2012) proposed to optimize an MT system on multiple metrics based on the theory of Pareto optimality. In particular, Pimentel et al. (2020a) propose a variant of probing on the hidden representation of deep models and consider Pareto optimality in terms of both objectives, similar to our work. Compared with their work, one difference is the choice of objectives. Another significant difference is that they optimize the probing model in a conventional fashion, and thus are unable to study the relationship between linguistic information and task performance.

# 3 Visualizing Relationship via Pareto Optimality

We consider the relationship between linguistic information and task performance for two popular tasks in NLP, i.e., machine translation and language modeling. Let $\boldsymbol{x} = \{x_{1}, x_{2}, \dots, x_{N}\}$ be a sentence and $s = \{s_{1}, s_{2}, \dots, s_{N}\}$ be the labels of the linguistic property of $\boldsymbol{x}$ , where $s_{i}$ is the label for $x_{i}$ , e.g., a POS tag.
On both tasks, a deep model typically encodes $\boldsymbol{x}$ into a hidden representation $\boldsymbol{h}$ with a sub-network $E$ parameterized by $\theta_{e}$ : $h = E(\boldsymbol{x})$ , and then uses another sub-network $D$ parameterized by $\theta_{d}$ to map $h$ into an output.

# 3.1 Background

$h$ and Loss in NMT An NMT architecture aims to output a target sentence $\pmb{y} = \{y_{1},y_{2},\dots,y_{M}\}$ for a given source sentence $\pmb{x}$ according to $P(\pmb{y}|\pmb{x};\theta)$ (Zaremba et al., 2014; Vaswani et al., 2017), where $\theta$ indicates a set of parameters of a sequence-to-sequence neural network, which contains an encoder $E$ and a decoder $D$ . We define $\pmb{h}$ as the output of the encoder. To train $\theta$ , the MLE loss is usually minimized on a training dataset. For NMT, the loss is defined as follows:

$$
L _ {\theta} (\boldsymbol {x}, \boldsymbol {y}) = - \sum_ {j = 1} ^ {M} \log P \left(y _ {j} \mid \boldsymbol {x}, \boldsymbol {y} _ {< j}; \theta\right) \tag {1}
$$

In our experiments, we consider two models, namely the LSTM (Bahdanau et al., 2015) and Transformer (Vaswani et al., 2017).

$h$ and Loss in LM For the language modeling task, a deep model typically generates a token $x_{j}$ based on $\pmb{x}_{< j}$ according to $P(x_{j}|\pmb{x}_{< j};\theta)$ . Here the sub-network $E$ is set as one hidden layer to encode $\pmb{x}_{< j}$ into $h_{< j}$ and $D$ is set as the sub-network to generate $x_{j}$ on top of $h_{< j}$ . The parameter $\theta$ is optimized by the following MLE loss:

$$
L _ {\theta} (\boldsymbol {x}) = - \sum_ {j = 1} ^ {N} \log P (x _ {j} | \boldsymbol {x} _ {< j}; \theta).
$$

To make notations consistent for both NMT and LM, in the rest of this paper, we follow the form of Eq.
(1) and re-write the $L_{\theta}(\pmb {x})$ in LM as $L_{\theta}(\pmb {x},\pmb {y})$ , where $\pmb{y}$ is a shifted version of $\pmb{x}$ , i.e., $\pmb {y} = \{x_2,\dots ,x_N\}$ .

Encoded Information Let $\operatorname {I}(\boldsymbol {h},\boldsymbol {s})$ denote the linguistic information in the representation $h$ , i.e., the mutual information between $\pmb{h}$ and the linguistic label $s$ . Since the probability $p(h,s)$ is unknown, it is intractable to compute $\operatorname {I}(h,s)$ directly. Following Pimentel et al. (2020b), we approximately estimate $\operatorname {I}(\pmb {h},\pmb {s})$ by using a probing model $q$ as follows:

$$
\begin{array}{l} \operatorname {I} (\boldsymbol {h}, \boldsymbol {s}) = \mathrm {H} (\boldsymbol {s}) - \mathrm {H} (\boldsymbol {s} | \boldsymbol {h}) \\ \approx \mathrm {H} (\boldsymbol {s}) - \min _ {\theta_ {q}} L _ {\theta_ {q}} (\boldsymbol {h}, \boldsymbol {s}) \tag {2} \\ = \mathrm {H} (\boldsymbol {s}) + \max _ {\theta_ {q}} \sum_ {i} \log q (s _ {i} | \boldsymbol {h}; \theta_ {q}) \\ \end{array}
$$

where $\mathrm{H}(s)$ is the entropy of the linguistic labels, $\mathrm{H}(\boldsymbol{s}|\boldsymbol{h})$ is the ideal cross entropy, and $L_{\theta_q}(\boldsymbol{h},\boldsymbol{s}) = -\sum_{i} \log q(s_i|\boldsymbol{h};\theta_q)$ is the cross-entropy loss of the probe model $q$ parameterized by $\theta_q$ ; minimizing this loss is equivalent to maximizing the log-likelihood in the last line.

Theory of Pareto Optimality Pareto optimality (Mas-Colell et al., 1995) is essentially involved in the multi-objective optimization problem. Suppose that we have $K$ different objectives $M_{k}$ to evaluate a parameter $\theta^{\prime}$ , i.e.,

$$
\arg \max _ {\theta^ {\prime}} [ M _ {1} (\theta^ {\prime}); M _ {2} (\theta^ {\prime}); \dots ; M _ {K} (\theta^ {\prime}) ]. \tag {3}
$$

There are two important concepts in Pareto optimality as follows:

Definition 1.
Pareto Optimal: A parameter $\theta^{*}$ is Pareto-optimal iff no other parameter dominates it, i.e., there exists no $\theta^{\prime}$ such that $M_{i}(\theta^{\prime})\geq M_{i}(\theta^{*})$ for all $i = 1,\dots,K$ and $M_{j}(\theta^{\prime}) > M_{j}(\theta^{*})$ for some $j$.

Definition 2. Pareto Frontier: The set of all Pareto-optimal parameters is called the Pareto frontier.

# 3.2 Viewpoint via Pareto Optimality

Motivation Suppose $\theta$ is a given model parameter, $L(\theta)$ is its task performance on a test set, and $I(\theta)$ is the amount of linguistic information encoded in its hidden representation. Conventionally, if one could find a function $f$ such that $I = f(L)$ for any $\theta$, it would be trivial to study their relationship by visualizing $f$. Unfortunately, in complicated situations such as the one illustrated in Figure 1, no such function exists, because of the large number of many-to-many correspondences between the two variables.

Our Viewpoint Pareto optimality, a well-known criterion in economics (Mas-Colell et al., 1995), is widely used to analyze relationships among multiple variables in complicated environments (Chinchuluun et al., 2008). In our context, it is also a powerful tool for revealing the relationship between encoded linguistic information and task performance. Consider the Pareto frontier in Figure 1: since the capacity of a model is fixed and linguistic information may compete with other kinds of information, capturing more linguistic information may reduce the amount of information from other sources that are also helpful for the model. Conversely, if increasing the amount of linguistic information constantly led to performance gains, i.e., if linguistic information were complementary to translation, only a single Pareto-optimal point would exist, in the top-right corner.
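As a concrete illustration of these definitions, the dominance check and frontier extraction can be written in a few lines. The sketch below is plain Python with made-up objective tuples (larger is better on both axes); it is illustrative only and not part of our implementation:

```python
def dominates(a, b):
    """True iff point a dominates b: a is at least as good on every
    objective and strictly better on at least one (cf. Definition 1)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_frontier(points):
    """Keep exactly the points that no other point dominates (Definition 2)."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# e.g., hypothetical (linguistic information, BLEU) pairs for four models:
models = [(1.0, 20.0), (2.0, 22.0), (1.5, 21.0), (2.0, 19.0)]
frontier = pareto_frontier(models)  # only (2.0, 22.0) survives here
```

Note that two mutually non-dominating points, e.g., one stronger on each objective, both remain on the frontier; this is what allows the frontier to trace out a trade-off curve rather than a single point.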
Therefore, in this paper, we propose to study the relationship between $I(\theta)$ and $L(\theta)$ from the viewpoint of Pareto optimality. Our key idea is to consider only Pareto-optimal models, rather than all models as the conventional method does. By the definition of Pareto optimality, there are no many-to-many correspondences between the two variables along the Pareto frontier, so their relationship can be visualized by the trend of the frontier points, as shown in Figure 1. Taking Figure 1 as an example, to answer the questions raised before, we can see that adding more information can improve task performance relative to a standard model. Under this viewpoint, the core challenge is to obtain a set of models that are Pareto-optimal on a test dataset.

It is natural to employ a heuristic method to approximately obtain the Pareto-optimal models, as follows. First, we randomly select a number of checkpoints during standard training and probe each checkpoint by optimizing its corresponding probing model $q$, as shown in Eq. (2). Second, we record the task performance and the amount of linguistic information of each selected model on a test set. Finally, we find the Pareto-optimal points and obtain the Pareto frontier. However, when using this method in our experiments, we found that all checkpoints encode similar amounts of linguistic information and that their task performance is worse than that of the optimal model. Hence, in the next section, a new method is presented to approximately derive the Pareto-optimal models.

# 4 Methodology

# 4.1 Multi-Objective Optimization

To study the relationship between linguistic information and task performance, our goal is to obtain a set of models $\theta$ that are Pareto-optimal on test data with respect to both objectives.
Inspired by statistical learning theory (Vapnik, 1999), we propose to optimize models towards both objectives on a given training dataset; such models are expected to generalize well, i.e., to be Pareto-optimal on unseen test data. Formally, our approach can be formulated as the following multi-objective optimization problem:

$$
\arg \min _ {\theta} [ L _ {\theta} (\boldsymbol {x}, \boldsymbol {y}); - \mathrm {I} (\boldsymbol {h}, \boldsymbol {s}) ] \tag{4}
$$

where minimizing $L_{\theta}(\pmb{x}, \pmb{y})$ promotes task performance and maximizing $\mathrm{I}(\boldsymbol{h}, \boldsymbol{s})$ encourages the model to encode more linguistic information in its representation. Once we obtain a set of Pareto-optimal models, we can observe how increasing the encoded linguistic information affects task performance.

To further study how reducing the encoded linguistic information affects task performance, we optimize a similar multi-objective problem:

$$
\arg \min _ {\theta} [ L _ {\theta} (\boldsymbol {x}, \boldsymbol {y}); \mathrm {I} (\boldsymbol {h}, \boldsymbol {s}) ] \tag{5}
$$

The only difference between Eq. (4) and Eq. (5) is that the former maximizes $\mathrm{I}(\pmb {h},\pmb {s})$ while the latter minimizes it.

Since $\mathrm{H}(\boldsymbol{s})$ is a constant term, we can plug Eq. (2) into the two problems above and obtain the following reduced multi-objective problems:

$$
\arg \min _ {\theta} [ L _ {\theta} (\boldsymbol {x}, \boldsymbol {y}); \min _ {\theta_ {q}} L _ {\theta_ {q}} (\boldsymbol {h}, \boldsymbol {s}) ] \tag{6}
$$

$$
\arg \min _ {\theta} [ L _ {\theta} (\boldsymbol {x}, \boldsymbol {y}); - \min _ {\theta_ {q}} L _ {\theta_ {q}} (\boldsymbol {h}, \boldsymbol {s}) ] \tag{7}
$$

Notice that in the above equations, $\min_{\theta_q} L_{\theta_q}(\boldsymbol{h}, \boldsymbol{s})$ resembles conventional probing if $\boldsymbol{h}$ is a fixed representation.
However, unlike standard probing, which is applied on top of a fixed $\boldsymbol{h}$ determined by the standard model, here $\boldsymbol{h}$ is the representation obtained from an encoder $E$ parameterized by $\theta_e$ that is itself being optimized. It is also worth noting that the Pareto frontiers obtained from Eq. (6) and Eq. (7) are independent, even though they involve a similar measurement, because Pareto optimality is only defined with respect to one fixed set of objectives.

# 4.2 Optimization Algorithm

To solve the above multi-objective problems, we use the linear-combination method to find a set of solutions, and then filter out the non-Pareto-optimal points to obtain the Pareto frontier. The details of our algorithm are given below.

Optimization Process Since the optimization method for Eq. (6) is similar to that for Eq. (7), in the following we take Eq. (6) as an example. Inspired by Duh et al. (2012), we employ a two-step strategy to find the Pareto frontier and address the multi-objective problems.

![](images/76d5e50e06bf1ad66cb86045d3e4598ca175cf3b7e0884686e061b59bf61b057.jpg)
Figure 2: Overview of our multi-objective optimization method, where $L_{y} = L_{\theta}(\boldsymbol{x}, \boldsymbol{y})$ and $L_{\theta_q} = L_{\theta_q}(\boldsymbol{h}, \boldsymbol{s})$. In the backward propagation, the GM layer multiplies the gradient by $\pm \lambda$, i.e., $\lambda$ for Eq. (6) and $-\lambda$ for Eq. (7).

In the first step, we adopt a method to find Pareto-optimal solutions to the problem. Several methods exist, such as linear combination, PMO (Duh et al., 2012), and APStar (Martínez et al., 2020). In this work, we adopt the linear-combination method because of its simplicity.
Specifically, we select a coefficient set $\{\lambda_k \mid \lambda_k > 0\}$ and minimize the following interpolated objective for each coefficient $\lambda_k$:

$$
\arg \min _ {\theta} \left(L _ {\theta} (\boldsymbol {x}, \boldsymbol {y}) + \lambda_ {k} \min _ {\theta_ {q}} L _ {\theta_ {q}} (\boldsymbol {h}, \boldsymbol {s})\right) \tag{8}
$$

Notice that the first term of the loss function, $L_{\theta}(\boldsymbol{x}, \boldsymbol{y})$, is a function of both the encoder parameters $\theta_e$ and the decoder parameters $\theta_d$, while the second term, $\min_{\theta_q} L_{\theta_q}(\boldsymbol{h}, \boldsymbol{s})$, is a function of $\theta_e$ only. Therefore, when minimizing Eq. (8), we apply a Gradient-Multiple (GM) layer on the representations before feeding them into the probe model. As shown in Fig. 2, in the forward pass the GM layer acts as an identity transform, while in the backward pass it multiplies the gradient by $\pm \lambda$ and passes it to the preceding layers. Note that with multiplier $-\lambda$, the GM layer is identical to the Gradient Reversal Layer (Ganin and Lempitsky, 2015).

Suppose $\{\theta_k^*\}$ is the set of solutions to Eq. (8), one for each $\lambda_k$. In the second step, to obtain more accurate solutions, we filter the non-Pareto-optimal points out of $\{\theta_k^*\}$. Finally, we obtain the Pareto frontier of the multi-objective problem according to the definition of Pareto optimality.

Algorithm 1 Optimization Algorithm
Input: $\Lambda = \{\lambda_k\}$, learning rate $\eta$
Output: Pareto frontier set $\mathcal{P} = \{\langle \theta_e^i, \theta_d^i, \theta_q^i \rangle\}$
1: $\mathcal{M} = \{\}$ (empty model set)
2: for $\lambda_k \in \Lambda$ do minimize Eq.
(8)
3: Randomly initialize $\theta_e^k$, $\theta_d^k$, and $\theta_q^k$
4: while not converged do
5: $\theta_e^k = \theta_e^k - \eta \left( \frac{\partial L_\theta(\boldsymbol{x},\boldsymbol{y})}{\partial \theta_e} + \lambda_k \frac{\partial L_{\theta_q}(\boldsymbol{h},\boldsymbol{s})}{\partial \theta_e} \right)$
6: $\theta_d^k = \theta_d^k - \eta \frac{\partial L_\theta(\boldsymbol{x},\boldsymbol{y})}{\partial \theta_d}$
7: $\theta_q^k = \theta_q^k - \eta \frac{\partial L_{\theta_q}(\boldsymbol{h},\boldsymbol{s})}{\partial \theta_q}$
8: end while
9: Re-train a probe model $\theta_{q'}^k$ on top of the fixed encoder $\theta_e^k$
10: Add $\langle \theta_e^k, \theta_d^k, \theta_{q'}^k \rangle$ to $\mathcal{M}$
11: end for
12: $\mathcal{P} = \{\}$ (Pareto frontier set)
13: for all $\langle \theta_e^k, \theta_d^k, \theta_{q'}^k \rangle \in \mathcal{M}$ do
14: if IsParetoOptimal( $\theta_e^k, \theta_d^k, \theta_{q'}^k$ ) then
15: Add $\langle \theta_e^k, \theta_d^k, \theta_{q'}^k \rangle$ into $\mathcal{P}$
16: end if
17: end for

Detailed Algorithm The overall optimization algorithm for Eq. (6) is shown in Algorithm 1. In principle, at every step updating $\theta$ while minimizing Eq. (8), we should re-train the probe model $\theta_q$ for many steps to minimize $L_{\theta_q}(\boldsymbol{h}, \boldsymbol{s})$, in order to estimate $\mathrm{H}(\boldsymbol{s}|\boldsymbol{h})$ precisely. However, this is time-consuming and inefficient. Instead, after each update of $\theta$, we update $\theta_q$ by only one step (line 7 of Algorithm 1). Empirically, we find this approximation to be very effective.

In addition, as mentioned by Elazar and Goldberg (2018), information leakage may occur when minimizing the mutual information. Therefore, after training is finished, we fix the deep model and re-train another probe model to estimate $\mathrm{H}(\boldsymbol{s}|\boldsymbol{h})$ more precisely (line 9 in Algorithm 1). When maximizing the mutual information, we find no difference between the $\mathrm{H}(\boldsymbol{s}|\boldsymbol{h})$ estimates from the jointly trained and the re-trained probe models.
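To make the mechanics of Algorithm 1 concrete, the following toy scalar instantiation (plain Python; the quadratic losses merely stand in for $L_{\theta}(\boldsymbol{x},\boldsymbol{y})$ and $L_{\theta_q}(\boldsymbol{h},\boldsymbol{s})$, and all names are made up for illustration) shows the $\lambda_k$-scaled probe gradient added to the encoder update in line 5, i.e., the effect of the GM layer, and the single probe step per iteration of line 7:

```python
def train(lmbda, eta=0.1, steps=500):
    """Toy version of Algorithm 1's inner loop with scalar parameters:
      task loss   L(theta_e, theta_d)  = (theta_e - 1)**2 + (theta_d - 2)**2
      probe loss  Lq(theta_e, theta_q) = (theta_q - theta_e)**2
    """
    theta_e = theta_d = theta_q = 0.0
    for _ in range(steps):
        # Encoder: task gradient plus the probe gradient scaled by +lambda
        # (Eq. 6); the GM layer would use -lambda for Eq. 7 instead.
        g_e = 2 * (theta_e - 1) + lmbda * (-2 * (theta_q - theta_e))
        g_d = 2 * (theta_d - 2)          # decoder sees only the task loss
        g_q = 2 * (theta_q - theta_e)    # probe: one gradient step per iteration
        theta_e -= eta * g_e
        theta_d -= eta * g_d
        theta_q -= eta * g_q
    return theta_e, theta_d, theta_q
```

With these toy losses the two objectives do not conflict, so for a small coefficient (e.g., $\lambda = 1$) the loop converges to $\theta_e \approx 1$, $\theta_d \approx 2$, and $\theta_q \approx \theta_e$; in the paper's setting the objectives compete, and sweeping $\lambda_k$ traces out different trade-off points.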
# 5 Experimental Settings

# 5.1 Dataset

We conduct experiments on both machine translation and language modeling tasks. For machine translation, we conduct experiments on the $\mathrm{En}\Rightarrow\mathrm{De}$ and $\mathrm{Zh}\Rightarrow\mathrm{En}$ translation tasks. For the $\mathrm{En}\Rightarrow\mathrm{De}$ task, we use the WMT14 corpus, which contains 4M sentence pairs. For the $\mathrm{Zh}\Rightarrow\mathrm{En}$ task, we use the LDC corpus, which consists of 1.25M sentence pairs; we choose NIST02 as our validation set and NIST06 as our test set. For the language modeling task, we use the Penn Treebank$^{2}$ dataset. We preprocess our data using byte-pair encoding (Sennrich et al., 2016) and keep all tokens in the vocabulary. For machine translation, we use the case-insensitive 4-gram BLEU score (Papineni et al., 2002) to measure task performance, which has been shown to correlate well with the MLE loss (?); for language modeling, we use the MLE loss directly to evaluate task performance.

# 5.2 Linguistic Properties

For machine translation, we study part-of-speech (POS) tags and dependency (DEP) labels in this work. Since there are no gold labels for the MT datasets, we use the Stanza toolkit$^{3}$ (Qi et al., 2020) to annotate source sentences and run our algorithm on the resulting pseudo labels, following Sennrich and Haddow (2016) and Li et al. (2018). We clean the labels and remove from the dataset the sentences that Stanza fails to parse. To study whether all kinds of linguistic information are critical for neural models, we also investigate phonetic information on the language modeling task. More precisely, the probing model must predict the first character of the International Phonetic Alphabet (IPA) transcription of each word.$^{4}$ We obtain the labels with the open-source toolkit English-to-IPA$^{5}$. We use the mutual information $\mathrm{I}(\boldsymbol{h},\boldsymbol{s}) = \mathrm{H}(\boldsymbol{s}) - \mathrm{H}(\boldsymbol{s}|\boldsymbol{h})$ to evaluate the amount of information in the representations.
Since $\mathrm{H}(\boldsymbol{s})$ is a constant, we only compare $\mathrm{H}(\boldsymbol{s}|\boldsymbol{h})$ in the experiments. Note that $\mathrm{H}(\boldsymbol{s}|\boldsymbol{h})$ is estimated by our probe model $q$.

# 5.3 Implementation Details

All of our models are implemented with Fairseq$^{6}$ (Ott et al., 2019). For the NMT experiments, our LSTM model consists of a bi-directional 2-layer encoder with 256 hidden units and a 2-layer decoder with 512 hidden units, and the probe model is a 2-layer MLP with 512 hidden units.

$^{2}$ https://deepai.org/dataset/penn-treebank
$^{3}$ https://github.com/stanfordnlp/stanza
$^{4}$ For example, given the input sentence "This dog is so cute", the probing model is asked to predict "ð d ɪ s k".
$^{5}$ https://github.com/mphilli/English-to-IPA
$^{6}$ https://github.com/pytorch/fairseq

![](images/4323d34e4d3012d6b57477a245e1d13391a31f2921d5a8a56716974ecef162b7.jpg)

![](images/c062bcd8773d36202a4d8f29e5ecd712e66421a68561de2b6ba8271f3eb9264d.jpg)

![](images/7606519993d0c6a3555f4a69885af695ef0a4d1d72ce26371b03d433f51381ca.jpg)

![](images/4049d66b55e36f4c526fb720affb39f710068b645b88641acef7c151de703fba.jpg)

![](images/1b24c1857805d8bbba10723941b9d7ef621929e7d855fd51bd8d0cccc3dc0bad.jpg)
Figure 3: Experiments on the WMT14 corpus. Triangles $(\triangle)$ denote the model trained by minimizing the MLE loss, circles $(\bigcirc)$ denote the models obtained by our method, and the models on the line $(—)$ form the Pareto frontier.

![](images/9381e4ca9e5bc7581a185eb75c1e7be30099c2404fa594db7ab3785ae248a5d6.jpg)

![](images/2d88ae23b31d7cfa3bd12d1d016b6e55468967a50d4144183f96de915555a6af.jpg)

![](images/d2c0183289d864ff90ef3fd29b028f3f995d0740c821df7b4774edc14cbed2b0.jpg)

![](images/471b833ef910db125cc0a9bea09f5452da86b6ae961fa06d61b54c9f4869042f.jpg)
Figure 4: Comparison with the baseline method. Triangles $(\triangle)$ denote the standard model trained by minimizing the MLE loss. The green and blue lines are the frontiers obtained from the baseline method and our method, respectively.
Our Transformer model consists of a 6-layer encoder and a 6-layer decoder, with the same hyper-parameters as the base model of Vaswani et al. (2017), and the probe model is a 6-layer Transformer encoder. For the LM experiments, our model is a 2-layer LSTM with 256 hidden units, and the probe model is a 2-layer MLP with 256 hidden units. More training details are given in Appendix A.

# 6 Experiment Results

In the following experiments, "Model + Property" (e.g., "Transformer+Pos") corresponds to Eq. (4) and studies how adding the information of a linguistic property affects task performance. In contrast, "Model - Property" (e.g., "Transformer-Pos") corresponds to Eq. (5) and studies how removing that information affects task performance. It is worth noting that merging the frontiers of + Property and - Property would lead to trivial results, because the Pareto-optimal points of + Property are more likely to dominate. However, the frontier of - Property is helpful for answering whether reducing the encoded linguistic information affects model performance. Therefore, we plot the Pareto frontiers for the two objectives independently.

# 6.1 Soundness of Methodology

The heuristic method mentioned earlier can be considered a simple, straightforward baseline for measuring the relationship. To set up this baseline, we first save checkpoints every 1,000 steps while training a standard model. We then randomly sample 30 checkpoints for probing and plot a scatter diagram of BLEU against encoded linguistic information.

As shown in Figure 4, we compare our proposed method with the heuristic method in the setting of "Transformer+Pos".
Compared with the baseline method, the frontier obtained from our method is better: for each model explored by the baseline, there exists at least one model explored by our method that is larger in both objectives, i.e., encoded linguistic information and BLEU score. The main reason is that the baseline's objective considers only task performance, and most checkpoints contain similar amounts of encoded linguistic information. Therefore, the models optimized by our multi-objective method are closer to the globally Pareto-optimal points$^7$, making the revealed relationship between encoded linguistic information and task performance more reliable. Hence, in the next subsection, our proposed method is used to visualize the relationship between encoded linguistic information and task performance for neural models.

![](images/6a3c662761aca73a51fee1bb23438e3c8ff01273bc8b4a66557c6b258d3c338d.jpg)

![](images/9aeb0d3ab73a964e9b8e0dc365029865c78f032b76db3e1797d8c07bc6a38173.jpg)

![](images/acecfd39476ddc5baf036559ac5c9c397d5802647712eb273fe27cddd79cd46f.jpg)

![](images/b0a563baf43cf1f405cab16fb936702709e22690ceef149fafbba9625c68bf05.jpg)

![](images/5e4da49c4c2f32b0446cdf4a9ec02e10dd9751cb7a33431d714834ac7e31d87f.jpg)
Figure 5: Experimental results on the LDC corpus. The format is the same as in Fig. 3.

![](images/04c8e466d69d83a5fdb7cd2c2b8cc96f264902c9818121dadd48e2e6ac0f1c14.jpg)

![](images/8ea63cbc81506e476f5b572057d26a7c3a636b0e756a9af68cc30036ffbfc63a.jpg)

![](images/d22806e88b3a38bf069ac15824a85cb0431dc8284bb162f79384d63ad780aa87.jpg)

# 6.2 Visualization Results

Results on NMT The results of machine translation on the WMT dataset are shown in Figure 3. For LSTM-based NMT, we observe that the standard model, i.e., the $\triangle$ in Figure 3, is not on the Pareto frontier in Figure 3 (a,c).
In other words, by adding linguistic information into the LSTM model, it is possible to obtain a model that contains more POS or DEP information and at the same time achieves a better BLEU score than the model obtained by standard training. In contrast, for Transformer-based NMT, the standard model is on the Pareto frontier, as shown in Figure 3 (e,g). This finding offers an explanation for a pattern in NMT research: many efforts (Luong et al., 2016; Nădejde et al., 2017; Bastings et al., 2017; Hashimoto and Tsuruoka, 2017; Eriguchi et al., 2017) have been devoted to improving LSTM-based NMT by explicitly modeling linguistic properties, but few to Transformer-based NMT (McDonald and Chiang, 2021; Currey and Heafield, 2019). In addition, when removing linguistic information from the LSTM or the Transformer, the standard model is very close to the lower right of the Pareto frontier, or even on the frontier, as shown in Figure 3 (b,d,f,h). This result shows that removing linguistic information always hurts the performance of NMT models, for both the LSTM and the Transformer, indicating that encoding POS and DEP information is important for the NMT task. Similar trends are observed on the LDC dataset, as shown in Figure 5. More details about the effect of randomness on our approach are given in Appendix B.

![](images/ab660e79684be1a39a19473093a5b1e66ed178f9e0e2da7c60af6b466160db2c.jpg)
Figure 6: Experimental results on the PTB dataset.

![](images/3a96f495b5ffc214ded15afb9acd2e9fe789d72afe906261ee963562ecb4468e.jpg)

Results on LM The experiments above have shown that syntactic information is important for NMT models; a natural question is whether all kinds of linguistic information are important for neural models. To answer this question, we investigate the influence of phonetic information on a language model. Figure 6 depicts the relationship between encoded phonetic information and task performance for an LSTM-based language model.
In Figure 6 (a), we find that the performance of the Pareto-optimal models drops slightly when an LSTM model is forced to encode more phonetic information. Moreover, as the Pareto frontier in Figure 6 (b) shows, removing phonetic information from an LSTM model leads only to a slight change in performance. These experiments demonstrate that encoded phonetic information may not be critical for an LSTM-based language model, suggesting that not all kinds of linguistic information are crucial for LSTM-based LM and that it is not promising to further improve language modeling with phonetic information.

# 7 Conclusion

This paper studies the relationship between linguistic information and task performance and proposes a new viewpoint inspired by the criterion of Pareto optimality. We formulate this goal as a multi-objective problem and present an effective method to address it by leveraging the theory of Pareto optimality. We conduct experiments on both MT and LM tasks and study their performance with respect to linguistic information sources. Experimental results show that the presented approach is more plausible than the baseline method in the sense that it explores better models in terms of both encoded linguistic information and task performance. In addition, we obtain the following findings: i) the syntactic information encoded by NMT models is important for the MT task, and reducing it leads to sharply decreased performance; ii) the standard NMT model obtained by minimizing the MLE loss is Pareto-optimal for the Transformer, but not for LSTM-based NMT; iii) reducing the phonetic information encoded by LM models leads only to a slight performance drop.

# Acknowledgement

We would like to thank the anonymous reviewers for their constructive comments. L. Liu is the corresponding author.

# References

Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017.
Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
Christoph Alt, Aleksandra Gabryszak, and Leonhard Hennig. 2020. Probing linguistic features of sentence-level representations in neural relation extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1534-1545, Online. Association for Computational Linguistics.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Jasmijn Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima'an. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1957-1967, Copenhagen, Denmark. Association for Computational Linguistics.
Steven R Beckman, John P Formby, W James Smith, and Buhong Zheng. 2002. Envy, malice and pareto efficiency: An experimental examination. Social Choice and Welfare, 19(2):349-367.
Steven Cao, Victor Sanh, and Alexander Rush. 2021. Low-complexity probing via finding subnetworks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 960-966, Online. Association for Computational Linguistics.
Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Mike Schuster, Noam Shazeer, Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. 2018. The best of both worlds: Combining recent advances in neural machine translation.
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 76-86, Melbourne, Australia. Association for Computational Linguistics.
Altannar Chinchuluun, Panos M Pardalos, Athanasios Migdalas, and Leonidas Pitsoulis. 2008. Pareto optimality, game theory and equilibria. Springer.
Alexis Conneau, German Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single $\$\&!\#*$ vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 2126-2136, Melbourne, Australia. Association for Computational Linguistics.
Anna Currey and Kenneth Heafield. 2019. Incorporating source syntax into transformer-based neural machine translation. In Proceedings of the Fourth Conference on Machine Translation, pages 24-33, Florence, Italy. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Yanzhuo Ding, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Visualizing and understanding neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1150-1159, Vancouver, Canada. Association for Computational Linguistics.
Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation.
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1723-1732, Beijing, China. Association for Computational Linguistics. +Kevin Duh, Katsuhito Sudoh, Xianchao Wu, Hajime Tsukada, and Masaaki Nagata. 2012. Learning to translate with multiple objectives. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1-10, Jeju Island, Korea. Association for Computational Linguistics. +Yanai Elazar and Yoav Goldberg. 2018. Adversarial removal of demographic attributes from text data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 11-21, Brussels, Belgium. Association for Computational Linguistics. +Yanai Elazar, Shauli Ravfogel, Alon Jacovi, and Yoav Goldberg. 2020. When bert forgets how to pos: Amnesic probing of linguistic properties and mlm predictions. arXiv preprint arXiv:2006.00995. +Akiko Eriguchi, Yoshimasa Tsuruoka, and Kyunghyun Cho. 2017. Learning to parse and translate improves neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 72-78, Vancouver, Canada. Association for Computational Linguistics. +Yaroslav Ganin and Victor S. Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, pages 1180-1189. JMLR.org. +Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 2672-2680. 
+Bruce C Greenwald and Joseph E Stiglitz. 1986. Externalities in economies with imperfect information and incomplete markets. The quarterly journal of economics, 101(2):229-264. + +Kazuma Hashimoto and Yoshimasa Tsuruoka. 2017. Neural machine translation with source-side latent graph parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 125-135, Copenhagen, Denmark. Association for Computational Linguistics. +John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733-2743, Hong Kong, China. Association for Computational Linguistics. +Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907-926. +Jason Lee, Dustin Tran, Orhan Firat, and Kyunghyun Cho. 2020. On the discrepancy between density estimation and sequence generation. In Proceedings of the Fourth Workshop on Structured Prediction for NLP, pages 84-94, Online. Association for Computational Linguistics. +Jierui Li, Lemao Liu, Huayang Li, Guanlin Li, Guoping Huang, and Shuming Shi. 2020. Evaluating explanation methods for neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 365-375, Online. Association for Computational Linguistics. +Xintong Li, Guanlin Li, Lemao Liu, Max Meng, and Shuming Shi. 2019. On the word alignment from neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1293-1303, Florence, Italy. Association for Computational Linguistics. +Xintong Li, Lemao Liu, Zhaopeng Tu, Shuming Shi, and Max Meng. 2018. 
Target foresight based attention for neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1380-1390, New Orleans, Louisiana. Association for Computational Linguistics. +Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I. Jordan. 2018. Conditional adversarial domain adaptation. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montreal, Canada, pages 1647-1657. +Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multitask sequence to sequence learning. In 4th International Conference on Learning Representations, + +ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings. +Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics. +Natalia Martínez, Martin Bertran, and Guillermo Sapiro. 2020. Minimax pareto fairness: A multi objective perspective. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 6755-6764. PMLR. +Andreu Mas-Colell, Michael Dennis Whinston, Jerry R Green, et al. 1995. Microeconomic theory, volume 1. Oxford university press New York. +Colin McDonald and David Chiang. 2021. Syntax-based attention masking for neural machine translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 47-52, Online. Association for Computational Linguistics. +Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 
2018. Regularizing and optimizing LSTM language models. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.

Maria Nădejde, Siva Reddy, Rico Sennrich, Tomasz Dwojak, Marcin Junczys-Dowmunt, Philipp Koehn, and Alexandra Birch. 2017. Predicting target language CCG supertags improves neural machine translation. In Proceedings of the Second Conference on Machine Translation, pages 68-79, Copenhagen, Denmark. Association for Computational Linguistics.

Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Tiago Pimentel, Naomi Saphra, Adina Williams, and Ryan Cotterell. 2020a. Pareto probing: Trading off accuracy for complexity. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 3138-3153, Online. Association for Computational Linguistics.

Tiago Pimentel, Josef Valvoda, Rowan Hall Maudslay, Ran Zmigrod, Adina Williams, and Ryan Cotterell. 2020b. Information-theoretic probing for linguistic structure. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4609-4622, Online. Association for Computational Linguistics.

Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020.
Stanza: A python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 101-108, Online. Association for Computational Linguistics. +Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guarding protected attributes by iterative nullspace projection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7237-7256, Online. Association for Computational Linguistics. +Abhilasha Ravichander, Yonatan Belinkov, and Eduard Hovy. 2021. Probing the probing paradigm: Does probing accuracy entail task relevance? In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics, pages 3363-3377, Online. Association for Computational Linguistics. +Abdelrhman Saleh, Tovly Deutsch, Stephen Casper, Yonatan Belinkov, and Stuart Shieber. 2020. Probing neural dialog models for conversational understanding. In Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, pages 132-143, Online. Association for Computational Linguistics. +Rico Sennrich and Barry Haddow. 2016. Linguistic input features improve neural machine translation. In Proceedings of the First Conference on Machine Translation, pages 83-91, Berlin, Germany. Association for Computational Linguistics. +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1715-1725, Berlin, Germany. Association for Computational Linguistics. +Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. 
In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104-3112.

Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. 2017. Adversarial discriminative domain adaptation. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 2962-2971. IEEE Computer Society.

Vladimir N Vapnik. 1999. An overview of statistical learning theory. IEEE Transactions on Neural Networks, 10(5):988-999.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.

Elena Voita and Ivan Titov. 2020. Information-theoretic probing with minimum description length. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 183-196, Online. Association for Computational Linguistics.

Qizhe Xie, Zihang Dai, Yulun Du, Eduard H. Hovy, and Graham Neubig. 2017. Controllable invariance through adversarial feature learning. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 585-596.

Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. 2020. LayoutLM: Pre-training of text and layout for document image understanding. In KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, CA, USA, August 23-27, 2020, pages 1192-1200. ACM.

Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.
| BLEU mean | BLEU var | H(POS\|h) mean | H(POS\|h) var |
|---|---|---|---|
| 21.08 | 0.00407 | 0.1113 | 0 |
| 21.32 | 0.01536 | 0.1093 | 0 |
| 21.49 | 0.01847 | 0.108 | 0 |
| 21.52 | 0.00060 | 0.1123 | 0 |
Table 1: Experiment results from the LSTM + POS setting; "mean" and "var" denote the mean and the variance over the selected checkpoint window.

# A Training Details

On the WMT14 corpus, training one LSTM model on 4 V100 GPUs takes 5 hours, and training one Transformer on 8 V100 GPUs takes 8 hours. On the LDC corpus, training one LSTM model on 4 V100 GPUs takes 3 hours, and training one Transformer on 8 V100 GPUs takes 3 hours. On the PTB dataset, training one LSTM model on 1 V100 GPU takes 6 minutes.

When running our algorithm, we empirically observe that when $\lambda$ is below 0.01, the optimized models show little difference compared with the standard model, and when $\lambda$ is larger than 0.1, the algorithm becomes unstable and fails to converge to Pareto-optimal solutions. We therefore take ten values at equal intervals from 0.1 to 0.01 as $\lambda$ in Eq. 8 and train ten models, one per $\lambda$, for each condition. We then plot all the models and their Pareto frontier in the experiments.

# B Effects of Randomness

Following the method of Chen et al. (2018), we check whether randomness affects our experimental results. Specifically, we select a window of size 3 around the best checkpoint and report the mean and variance over the selected window. The results are shown in Table 1. Because repeating the experiments under all settings would be too expensive, we randomly select 4 models from the LSTM + POS setting. As shown in the table, all the variances are small, and the variances of the entropy even reach 0. This suggests that the random disturbance in our experiments is small and thus our results are reliable.
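The checkpoint-window check above can be sketched in a few lines; `window_stats` is a hypothetical helper name, not code from the paper:

```python
import numpy as np

def window_stats(scores, best_idx, radius=1):
    """Mean and variance of a metric over a window of checkpoints
    centred on the best one (window size 3 for radius=1)."""
    lo = max(0, best_idx - radius)
    hi = min(len(scores), best_idx + radius + 1)
    window = np.asarray(scores[lo:hi], dtype=float)
    return float(window.mean()), float(window.var())

# e.g. BLEU of three consecutive checkpoints around the best one
mean, var = window_stats([21.0, 21.2, 21.1], best_idx=1)
```

A small variance over the window indicates that the reported score is stable against checkpoint choice.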
# Weighted self Distillation for Chinese word segmentation

Rian He$^{1}$, Shubin Cai$^{*2}$, Zhong Ming$^{*3}$, Jialei Zhang$^{4}$
National Engineering Laboratory for Big Data System Computing Technology
College of Computer Science and Software Engineering
Shenzhen University, Shenzhen 518060, China
$^{1}$herian2020@email.szu.edu.cn $^{2}$shubin@szu.edu.cn $^{3}$mingz@szu.edu.cn $^{4}$zhangjialei2021@email.szu.edu.cn

# Abstract

Recent research shows that multi-criteria resources and n-gram features are beneficial to Chinese Word Segmentation (CWS).
However, these methods rely heavily on such additional information and focus less on the model itself. We thus propose a novel neural framework, named Weighted self Distillation for Chinese word segmentation (WeiDC). The framework, which only requires unigram features, adopts self-distillation with four hand-crafted weight modules and two teacher-model configurations. Experimental results show that WeiDC can exploit character features to learn contextual knowledge and achieves state-of-the-art or competitive performance under strictly closed test settings on the SIGHAN Bakeoff benchmark datasets. Moreover, further experiments and analyses demonstrate the robustness of WeiDC. The source code for this paper is available on GitHub$^1$.

# 1 Introduction

Chinese is written without explicit word delimiters, while numerous Natural Language Processing (NLP) applications are word-based. CWS is therefore a fundamental and essential first step for most Chinese language processing tasks.

Following many previous researchers (Sun and Xu, 2011; Chen et al., 2015; Ke et al., 2021), we choose [B, I/M, E, S] tags (Beginning, Inside/Middle, End, Single character), which mark the precise position of a character within a word. Figure 1 gives a simple example.

Char: 我 喜 欢 大 自 然 。 Tag: S B E B I E S

Figure 1: The [B, I, E, S] tagging scheme. "我喜欢大自然。" ("I love nature.")

Generally, a CWS model consists of three important parts: embedding, encoder, and decoder. Since Google published Mikolov et al. (2013a) and Mikolov et al. (2013b), distributed representations have been widely used in NLP due to their low dimensionality and efficiency in capturing semantic similarity.
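The [B, I, E, S] scheme of Figure 1 is deterministic given a gold segmentation; a minimal sketch (the helper name `bies_tags` is ours, not the authors'):

```python
def bies_tags(words):
    """Map a gold word segmentation to per-character [B, I, E, S] tags."""
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append("S")  # single-character word
        else:
            # multi-character word: B, then I for interior characters, then E
            tags.extend(["B"] + ["I"] * (len(w) - 2) + ["E"])
    return tags

# Figure 1: "我 / 喜欢 / 大自然 / 。"  ->  S B E B I E S
tags = bies_tags(["我", "喜欢", "大自然", "。"])
```

Segmentation then reduces to predicting one of these four tags per character.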
Most researchers focus on the encoder, for which proposals include Maximum Entropy (ME) (Berger et al., 1996), feed-forward neural networks (Zheng et al., 2013), recursive neural networks (Wang and Xu, 2017), long short-term memory (LSTM) (Chen et al., 2015), deep bidirectional Transformers such as BERT (Tian et al., 2020), and other models. As for the decoder, in addition to softmax, the Conditional Random Field (CRF) (Lafferty et al., 2001) usually plays a vital role because it can use rich contextual features in the annotation process.

With the prevalence of pre-training and fine-tuning, Transformer-based pre-trained models have dominated the field of CWS in recent years. Given sufficient training data, pre-trained models (Nakkiran et al., 2020; Xu et al., 2020) have achieved remarkable results. However, these models may suffer from poor prediction accuracy on rare or out-of-vocabulary (OOV) words. Moreover, Huang and Zhao (2007) confirm that the loss of word segmentation accuracy caused by OOV words is at least 5 times greater than that caused by segmentation ambiguity. We believe that improving accuracy on OOV words is worth further exploration.

Unlike traditional Knowledge Distillation (KD) methods, self-distillation teaches a student network by itself instead of using a separate teacher (Xu and Liu, 2019; Zhang et al., 2019). Specifically, during training, either the best student model so far or the student model from the last iteration is saved as the teacher model for the next training epoch.

Moreover, we believe that the student model
(2015), we utilize the information gap between the pseudo labels, predicted by the teacher or student model, and the real labels to obtain the hand-crafted weight matrix. From another perspective, the process of acquiring weight matrices can also be seen as a kind of communication between teacher and student. Finally, to demonstrate the impact of WeiDC more precisely, we temporarily ignore all external information.

Our contributions are summarized below. We propose WeiDC, which only requires unigram features and adopts self-distillation with four hand-crafted weight modules and two teacher-model configurations. Since there are few established choices of weight measures, designing a feasible method to obtain rational weight values is itself a challenge. We also perform various experiments, such as testing robustness in low-resource settings, and explore the efficiency of our framework by combining different encoders and decoders. Experimental results on four widely used benchmark datasets confirm that WeiDC achieves state-of-the-art or competitive performance, especially in OOV recall.

# 2 Related Work

Xue and Converse (2002) first treat CWS as a sequence labeling task and train a maximum entropy tagger on the dataset. Xu (2003) demonstrates the strength of the sequence labeling method in the CWS bakeoffs (Sproat and Emerson, 2003), especially its results on $\mathsf{R}_{\mathsf{OOV}}$ (recall of out-of-vocabulary words). Researchers thus turned their attention to sequence labeling methods (Peng et al., 2004; Zhao et al., 2006; Zhao and Kit, 2008). Huang and Zhao (2007) conclude that treating word segmentation as a character labeling problem balances the recognition of vocabulary words and unregistered words, because all words are recognized through one unified character tagging process. In general, our research is related to the following works.
Pre-trained Frameworks Transformer-based pre-trained models, such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and ZEN (Diao et al., 2020), have demonstrated excellent performance in CWS tasks. Qiu et al. (2020) propose one unified model for multi-criteria CWS by leveraging the powerful ability of the Transformer encoder. Huang et al. (2020) also use BERT to capture various annotation criteria among datasets. Ke et al. (2021) propose METASEG, a CWS-specific pre-trained model. Tian et al. (2020) and Liu et al. (2021) consider the combination of lexicon features and BERT for CWS. Huang et al. (2021) propose a semi-supervised neural method based on a RoBERTa encoder using pseudo labels.

Knowledge Distillation Hinton et al. (2015) first propose knowledge distillation, using a larger network to teach a smaller one. Tang et al. (2019) distill knowledge from BERT, a state-of-the-art language representation model, into a simple heterogeneous model. Huang et al. (2020) also distill knowledge from BERT into a truncated (3- or 6-layer) BERT to balance computational cost and segmentation accuracy on CWS tasks. Jiao et al. (2020) adopt multiple distillation strategies to reduce the number of parameters of pre-trained language models. Huang et al. (2021) collect massive unlabeled data and distill knowledge from the teacher model to the student model by generating pseudo labels. Zhang et al. (2019) put forward self-distillation, which has recently been used in computer vision but is not yet common in NLP.

To summarize, to further improve word segmentation accuracy, many researchers make use of lexicon information (Tian et al., 2020; Liu et al., 2021), multi-criteria labeled data (Chen et al., 2017; Huang et al., 2020; Qiu et al., 2020; Ke et al., 2020), and even unlabeled data (Sun and Xu, 2011; Zhang et al., 2013; Huang et al., 2021).
# 3 The WeiDC Framework

Huang and Zhao (2007) point out that CWS is the first step of most Chinese information processing systems and usually relies on shallow textual information, such as character features, which is distinct from the idea of "understand first, then segment". As shown in Figure 2, we adopt the traditional word segmentation scheme but add self-distillation and weight modules to the training phase.

![](images/9da150c90bafe5b2c8fa0e39b70db4c659cbb901ab1453f6c9d538cb07aebe9e.jpg)
Figure 2: The WeiDC framework. The sentence "千载难逢天外客" ("A once-in-a-lifetime visitor from outside the sky") is from the MSR testing corpus; it is difficult to split "天外客" ("a visitor from outside the sky").

# 3.1 The Sequential Part

The traditional word segmentation scheme consists of the embedding layer, encoder layer, and decoder layer. Formally, $x$ is always seen as all
Furthermore, we also take RoBERTa $^3$ as our encoder to explore the impact of various pre-trained models on the CWS experiments. + +Decoder layer Compared with Hidden Markov Models, Lafferty et al. (2001) present CRF for building probabilistic models to mark and segment the sequence data with weak independence assumptions. + +$$ +p \left(y _ {i} \mid x _ {i}\right) = \frac {\exp \left(W _ {c} \cdot z _ {i} + b _ {c}\right)}{\sum_ {y _ {i - 1} y _ {i}} \exp \left(W _ {c} \cdot z _ {i} + b _ {c}\right)} \tag {1} +$$ + +In addition, softmax is also a frequent decoder, which can efficiently convert logit to probability regardless of intrinsic correlation. + +$$ +p \left(y _ {i} \mid x _ {i}\right) = \log \frac {\exp \left(z _ {i} ^ {d}\right)}{\sum_ {d} ^ {\mathcal {D}} \exp \left(z _ {i} ^ {d}\right)} \tag {2} +$$ + +where $z_{i}\in \mathbb{R}^{|\mathcal{D}|}$ is logits and $z_{i}^{d}$ is the value at dimension $d$ in $z_{i}$ . $p(y_{i}|x_{i})$ is the corresponding probability value. $W_{c}\in \mathbb{R}^{|\mathcal{D}|\times |\mathcal{D}|}$ and $b_{c}\in \mathbb{R}^{|\mathcal{D}|}$ are trainable parameters of CRF. $y_{i - 1}y_{i}$ models the state from $y_{i - 1}$ to $y_{i}$ . + +We continue to operate on the probability $(p(y|x))$ to get the predicted label $(\hat{y})$ . + +$$ +\hat {y} = \operatorname {a r g m a x} p (y | x) \tag {3} +$$ + +Through comparative experiments, Qiu et al. (2020) conclude that with or without CRF does not make much difference. Since CRF is more complex and the training cost is higher, we mainly try softmax to decode logits to make full use of computing resources. + +# 3.2 Weight Mechanism + +During one training epoch, the pseudo labels $(\hat{y})$ from $t$ or $s$ are compared with corresponding true labels (y), which can be expressed by formula 4. $t$ and $s$ indicate that $\hat{y}$ come from the teacher model or student model, respectively. $\eta$ refers to the information difference between $\hat{y}$ and y. 
$$
\eta_{m} = |\hat{y}_{m} - y|, \quad m = t, s \tag{4}
$$

Equation 4 applies an absolute-value operation: when a pseudo label equals the corresponding true label we get 0, and otherwise a positive number. Since the
+ +$$ +w _ {w e i} ^ {4} = 2 \cdot \eta_ {t} - \eta_ {s} + 2 \tag {10} +$$ + +We must add 2 to ensure that the minimum value of $w_{wei}^{4}$ is 1. + +Finally, according to different weight modules, all possible values of a single character (marked as k) are shown in Table 1. The above four weight mechanisms show that different key factors affect the weight value. In other words, for the same pseudo label, different reference factors will lead to various weight values. + +
| $\eta_t^k$ | $\eta_s^k$ | $w_{wei,k}^{1}$ | $w_{wei,k}^{2}$ | $w_{wei,k}^{3}$ | $w_{wei,k}^{4}$ |
|---|---|---|---|---|---|
| 1 | 1 | 3 | 4 | 4 | 3 |
| 1 | 0 | 2 | 3 | 2 | 4 |
| 0 | 1 | 2 | 2 | 3 | 1 |
| 0 | 0 | 1 | 1 | 1 | 2 |
Table 1: All possible weight values for character $k$.

For example, if we consider that low-frequency words better reflect a model's performance, we can increase their weights to penalize the loss of misclassifying them. As a result, the student model will pay more attention to low-frequency words.

According to the distillation scenario or learning needs, appropriate reference factors should be chosen when designing the weight calculation. Here, we take the segmentation difficulty of words as the reference standard.

# 3.3 Distillation

Unlike self-training, self-distillation takes a fully supervised way to dig the potential of the model itself, requiring no auxiliary models or data. In this paper, the teacher model comes from two sources: either the student model from the last iteration ($D_{last}$) or the student model with the best historical performance ($D_{best}$).

The student also learns from two sources of information: the predicted probabilities from the teacher and the one-hot ground-truth labels. Hence, the final loss $\mathcal{L}_{KD}$ consists of two parts, the cross-entropy loss $\mathcal{L}_{CE}$ and the distillation loss $\mathcal{L}_{Distill}$:

$$
\mathcal{L}_{KD} = (1 - \alpha) \cdot \mathcal{L}_{CE} + \alpha \cdot \mathcal{L}_{Distill} \tag{11}
$$

To balance the two losses, we need a coefficient $\alpha$, which is set to a fixed value during the training phase.

$\mathcal{L}_{CE}$ penalizes the cross-entropy between the predicted label $\hat{y}$ and the true label $y$:

$$
\mathcal{L}_{CE} = -\sum_{x} y \log \hat{y}_{(x)} \tag{12}
$$

$\mathcal{L}_{Distill}$ reduces the mean-squared-error between the teacher's logits $z^{(T)}$ and the student's logits $z^{(S)}$, where $w_{wei}$ can be any of the four weight types above.
$$
\mathcal{L}_{Distill} = \left\| w_{wei} \cdot z^{(T)} - w_{wei} \cdot z^{(S)} \right\|_{2}^{2} \tag{13}
$$

To better isolate the effect of WeiDC, temperature-based distillation is not considered here.
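Equations 4-13 can be sketched in NumPy as follows; this is a minimal illustration under our own naming (`agreement`, `weight`, `kd_loss`), not the authors' implementation. The `weight` function reproduces the values in Table 1:

```python
import numpy as np

def agreement(pred, gold):
    """Eta of Eqs. 4-6: 1 where a pseudo label matches the gold label, else 0."""
    return (np.asarray(pred) == np.asarray(gold)).astype(float)

def weight(eta_t, eta_s, kind=1):
    """The four hand-crafted weight vectors of Eqs. 7-10."""
    if kind == 1:
        return eta_t + eta_s + 1
    if kind == 2:
        return 2 * eta_t + eta_s + 1
    if kind == 3:
        return eta_t + 2 * eta_s + 1
    return 2 * eta_t - eta_s + 2  # kind == 4

def kd_loss(z_t, z_s, w, ce, alpha=0.3):
    """Eq. 11: (1 - alpha) * L_CE + alpha * L_Distill, where L_Distill is the
    weighted squared error of Eq. 13 between teacher and student logits."""
    w = np.asarray(w, dtype=float).reshape(-1, 1)  # broadcast over the label dim
    distill = float(np.sum((w * np.asarray(z_t) - w * np.asarray(z_s)) ** 2))
    return (1 - alpha) * ce + alpha * distill
```

For example, a character that both teacher and student tag correctly gets weight 3 under the first module, while one that both get wrong keeps the base weight 1.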
| Dataset | MSR train | MSR test | PKU train | PKU test | AS train | AS test | CITYU train | CITYU test |
|---|---|---|---|---|---|---|---|---|
| Char | 4,050K | 184K | 1,826K | 173K | 8,368K | 198K | 2,403K | 68K |
| Word | 2,368K | 107K | 1,110K | 104K | 5,450K | 123K | 1,456K | 41K |
| Char types | 5,168 | 2,838 | 4,698 | 2,934 | 5,979 | 3,628 | 4,832 | 2,663 |
| Word types | 88,119 | 12,923 | 55,303 | 13,148 | 141,339 | 18,759 | 69,085 | 8,993 |
| OOV Rate (%) | - | 2.7 | - | 5.8 | - | 4.3 | - | 7.2 |
Table 2: Corpus details of the four CWS datasets.

# 4 Experiments

# 4.1 Dataset and Evaluation Metric

The second SIGHAN international Chinese word segmentation bakeoff (Emerson, 2005), which includes the MSR, PKU, AS, and CITYU datasets, is frequently used in CWS tasks. Since AS and CITYU are in traditional Chinese characters, we convert them into simplified characters, following previous studies (Chen et al., 2015; Qiu et al., 2020; Tian et al., 2020). We use these datasets in the following experiments; corpus details are listed in Table 2.

We choose precision (P), recall (R), F-score, and $R_{OOV}$, the recall of out-of-vocabulary (OOV) words, to evaluate segmentation performance. Specifically, we first record the word information in the complete training corpus and then divide the corpus into a training set and a validation set. We use no extra resources beyond the training corpus to train our model.

# 4.2 Baselines

Depending on whether a pre-trained model such as BERT is used as the encoder, we select two types of baselines: non-pretrained models and pre-trained models.

Non-pretrained Models Chen et al. (2017) propose adversarial multi-criteria learning for CWS by exploiting the underlying shared knowledge across multiple heterogeneous criteria. Ma et al. (2018) also point out that using external knowledge can improve CWS accuracy. Gong et al. (2019) provide a more flexible solution to transfer the learned information to new criteria. They all use bidirectional LSTM encoders. Qiu et al.
(2020) propose one unified model for multi-criteria CWS based on the Transformer encoder. Through the Gaussian-masked Directional (GD) Transformer, Duan and Zhao (2020) try to further strengthen the model itself for CWS.

Pre-trained Models Huang et al. (2020) propose a domain-adaptive segmenter to exploit various open-domain knowledge. Tian et al. (2020) use key-value memory networks to incorporate wordhood information, with BERT or ZEN as the encoder. Ke et al. (2021) put forward a CWS-specific pre-trained model to alleviate the discrepancy between pre-trained models and downstream CWS tasks. Nguyen et al. (2021) propose a span labeling approach to model n-gram features for word segmentation.

# 4.3 Training Details

All experiments run on hardware with an Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz and an NVIDIA Tesla V100. Following previous works (Ma et al., 2018; Qiu et al., 2020), we randomly select 10% of the training data for development and only use the test set at the end of the training phase. Similar to previous work (Tian et al., 2020), we apply the same preprocessing to all datasets.

During fine-tuning, we use Adam with a learning rate of 2e-5. Both train_batch_size and eval_batch_size are 16. As for the trade-off hyperparameter $\alpha$, we randomly select 1% of the training set to explore the influence of various $\alpha$ values on WeiDC, and observe that WeiDC performs better when $\alpha$ is 0.3.

Besides, we train all models for up to 50 epochs with early stopping strategies, such as "patient epochs"
| Model | MSR F | MSR R_OOV | PKU F | PKU R_OOV | AS F | AS R_OOV | CITYU F | CITYU R_OOV | AVG F | AVG R_OOV |
|---|---|---|---|---|---|---|---|---|---|---|
| Chen et al. (2017) * | 96.04 | 71.6 | 94.32 | 72.67 | 94.75 | 75.37 | 95.55 | 81.4 | 95.17 | 75.26 |
| Ma et al. (2018) † | 98.1 | 80.0 | 96.1 | 78.8 | 96.2 | 70.7 | 97.2 | 87.5 | 96.9 | 79.25 |
| Gong et al. (2019) * | 97.78 | 64.2 | 96.15 | 69.88 | 95.22 | 77.33 | 96.22 | 73.58 | 96.34 | 77.82 |
| Qiu et al. (2020) *† | 98.05 | 78.92 | 96.41 | 78.91 | 96.44 | 76.39 | 96.91 | 86.91 | 96.95 | 80.28 |
| Duan and Zhao (2020) | 97.6 | - | 95.5 | - | 95.7 | - | 95.4 | - | 96.05 | - |
| Huang et al. (2020) * | 97.9 | 84.0 | 96.7 | 81.6 | 96.7 | 77.3 | 97.6 | 90.1 | 97.23 | 83.25 |
| Tian et al. (2020) † (BERT) | 98.28 | 86.67 | 96.51 | 86.76 | 96.58 | 78.48 | 97.8 | 87.57 | 97.29 | 84.87 |
| Tian et al. (2020) † (ZEN) | 98.4 | 84.87 | 96.53 | 85.36 | 96.62 | 79.64 | 97.93 | 90.15 | 97.37 | 85.0 |
| Ke et al. (2021) *‡ | 98.5 | 83.03 | 96.92 | 80.9 | 97.01 | 80.89 | 98.2 | 90.66 | 97.66 | 83.87 |
| Nguyen et al. (2021) † | 98.31 | 85.32 | 96.56 | 85.83 | 96.62 | 79.36 | 97.74 | 87.45 | 97.31 | 84.49 |
| WeiDC (BERT) | 98.28 | 86.39 | 96.59 | 87.21 | 96.76 | 80.23 | 97.79 | 87.58 | 97.36 | 85.35 |
| WeiDC (RoBERTa) | 98.43 | 87.17 | 96.74 | 87.48 | 96.59 | 79.26 | 97.95 | 89.93 | 97.43 | 85.96 |
of 3 and "minimum F value" of 0.0001. Specifically, when the gap between the current F value and the optimal F value is less than 0.0001, we do not replace our saved model, to avoid updating the teacher model too frequently. Table 4 summarizes all the vital parameters.

Table 3: The first two blocks record different baselines, non-pre-trained and pre-trained respectively. The last block reports our scores. $\star$ uses a multi-criteria learning framework, which means the marked training data differ from the rest. $\dagger$ uses lexicons or n-gram features. $\ddagger$ uses a CWS-specific pre-trained model.
| Hyperparameter | Value | Hyperparameter | Value |
|---|---|---|---|
| minimum F value | 1e-4 | train_batch_size | 16 |
| num_train_epochs | 50 | eval_batch_size | 16 |
| patient_epochs | 3 | learning_rate | 2e-5 |
| train : eval | 9 : 1 | alpha (α) | 0.3 |
Table 4: Hyperparameters of WeiDC.

We adopt the [B, I, E, S] tagging scheme in our experiments. To explore the influence of different weight modules on CWS, we try only BERT and RoBERTa as encoders. For BERT, we follow the default settings of the original paper (Devlin et al., 2019). In addition to combining the four weight modules and two types of teacher models, we also conduct some exploratory experiments, such as testing the performance of WeiDC on a small amount of training data.

# 5 Results and Analysis

In this section, we first report the results of WeiDC and compare them with state-of-the-art work. We then explore the robustness of WeiDC through extensive experiments in different low-resource settings. We also analyze the impact of OOV words on the model. Finally, we perform several NER tasks to test WeiDC's effectiveness.

# 5.1 Main Results

Several observations can be drawn from Table 3 and Table 5, which report the overall F-score and OOV recall.

First, Table 3 demonstrates that pre-trained models, with substantial prior knowledge, perform better than non-pre-trained models, especially in OOV recall. Compared with the baselines listed in Table 3, these results not only confirm that self distillation and the weight mechanism benefit CWS without any auxiliary data or CWS-specific pre-trained models, but also illustrate that the design of WeiDC enhances the model's learning ability.

Second, as shown in Table 5, WeiDC achieves strong results on $\mathbf{R}_{OOV}$ while maintaining competitive F-scores. For instance, with BERT as the encoder, WeiDC improves the F-score by $0.16\%$ on average, from $97.2\%$ to $97.36\%$, and the $\mathbf{R}_{OOV}$ score by $1.71\%$ on average, from $83.64\%$ to $85.35\%$.
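The teacher-checkpoint update rule from Section 4.3 can be made concrete. The sketch below is an illustrative reconstruction, not the authors' code: the saved (teacher) checkpoint is replaced only when the dev F-score improves by at least the "minimum F value" (1e-4), and training stops after `patient_epochs` (3) epochs without such an improvement.

```python
# Hedged sketch of the early-stopping / teacher-update rule (Section 4.3).
# `run_early_stopping` is a hypothetical helper name, not from the paper.

MIN_F_DELTA = 1e-4   # "minimum F value" from Table 4
PATIENCE = 3         # "patient_epochs" from Table 4

def run_early_stopping(f_scores, min_f_delta=MIN_F_DELTA, patience=PATIENCE):
    """Return (best_f, epochs_run) for a stream of per-epoch dev F-scores."""
    best_f = float("-inf")
    bad_epochs = 0
    epochs_run = 0
    for f in f_scores:
        epochs_run += 1
        if f - best_f >= min_f_delta:
            best_f = f       # replace the saved (teacher) checkpoint
            bad_epochs = 0
        else:
            bad_epochs += 1  # improvement below the minimum F value
            if bad_epochs >= patience:
                break        # stop training
    return best_f, epochs_run
```

Under this rule, small fluctuations in the dev F-score never overwrite the saved teacher, which matches the paper's motivation of avoiding overly frequent teacher updates.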
Third, in most cases, $D_{best}$ outperforms $D_{last}$, and we speculate that updating the teacher model too frequently is detrimental to the learning process of the student model. Besides, different CWS tasks require different weight modules, so it is essential to choose a weight mechanism according to the characteristics of the dataset.

Fourth, with BERT as the encoder and softmax as the decoder, our base model is already strong, but the improvement of WeiDC on $\mathbf{R}_{OOV}$ scores is still substantial. Specifically, under the current
| Model | MSR F | MSR $R_{OOV}$ | PKU F | PKU $R_{OOV}$ | AS F | AS $R_{OOV}$ | CITYU F | CITYU $R_{OOV}$ | AVG F | AVG $R_{OOV}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| BERT (base) | 98.22 | 85.22 | 96.5 | 85.6 | 96.44 | 77.37 | 97.63 | 86.35 | 97.2 | 83.64 |
| +$D_{best}$ | 98.22 | 85.58 | 96.59 | 87.04 | 96.64 | 79.51 | 97.68 | 86.52 | 97.28 | 84.66 |
| +$D_{best}$ + $w_{wei}^{2}$ | 98.17 | 86.07 | 96.53 | 88.03 | 96.71 | 80.57 | 97.6 | 85.4 | 97.25 | 85.02 |
| +$D_{best}$ + $w_{wei}^{4}$ | 98.28 | 86.39 | 96.59 | 87.21 | 96.76 | 80.23 | 97.79 | 87.58 | 97.36 | 85.35 |
| RoBERTa (base) | 98.33 | 86.74 | 96.58 | 87.04 | 96.34 | 76.14 | 97.8 | 88.8 | 97.26 | 84.68 |
| +$D_{best}$ | 98.43 | 86.67 | 96.56 | 86.34 | 96.52 | 78.47 | 97.84 | 89.38 | 97.34 | 85.22 |
| +$D_{best}$ + $w_{wei}^{2}$ | 98.33 | 86.21 | 96.79 | 88.34 | 96.6 | 79.26 | 97.96 | 90.33 | 97.42 | 86.04 |
| +$D_{best}$ + $w_{wei}^{4}$ | 98.43 | 87.17 | 96.74 | 87.48 | 96.59 | 79.26 | 97.95 | 89.93 | 97.43 | 85.96 |
Table 5: Ablation studies combining self distillation and the four weight modules. Complete results can be found in Appendix Tables 10 and 11.
| Model | 1% F | 1% $R_{OOV}$ | 5% F | 5% $R_{OOV}$ | 10% F | 10% $R_{OOV}$ | 20% F | 20% $R_{OOV}$ | 50% F | 50% $R_{OOV}$ | 80% F | 80% $R_{OOV}$ | 100% F | 100% $R_{OOV}$ | AVG F | AVG $R_{OOV}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BERT (base) | 93.92 | 83.38 | 94.37 | 77.65 | 94.72 | 76.74 | 95.83 | 83.46 | 96.15 | 85.13 | 96.33 | 84.18 | 96.5 | 85.6 | 95.4 | 82.31 |
| +$D_{best}$ | 93.7 | 82.95 | 95 | 82.33 | 95.79 | 86.56 | 95.98 | 85.63 | 96.34 | 85.6 | 96.36 | 84.91 | 96.59 | 87.04 | 95.68 | 85.0 |
| +$D_{best}$ + $w_{wei}^{2}$ | 93.29 | 83.3 | 95.37 | 87.86 | 95.69 | 87.36 | 95.82 | 86.39 | 96.35 | 87.96 | 96.56 | 87.73 | 96.53 | 88.03 | 95.66 | 86.95 |
+ +Table 6: Scores on PKU test set in low-resource settings. + +experimental conditions (listed in table 4), $w_{wei}^{4}$ has the best overall performance on all data sets, while $w_{wei}^{3}$ has the worst performance. + +Last, RoBERTa outperforms BERT when we deal with the CWS task. If CRF is used as the decoder, the CWS model seems to be more prone to overfitting, resulting in worse word segmentation. + +# 5.2 Low-Resource Settings + +In real life, the training corpus is usually insufficient, and it is valuable to measure the performance of CWS models in some low-resource settings. The partition criterion of our training sets follows Ke et al. (2021), whose sampling rates are 0.01, 0.05, 0.1, 0.2, 0.5, 0.8, and 1.0. For easy operation, we will obtain the above training datasets after randomizing the original training dataset but finally test on the same original testing dataset. + +We decided to perform the above experiment on PKU without changing any parameters in Table 4. We first took BERT as the base model and gradually added $D_{best}$ and $w_{wei}^2$ . Related results of the experiment are shown in Table 6. + +We notice that the performance of all models is greatly affected by sampling rates, especially at a low ratio such as $1\%$ and $5\%$ . In addition, self distillation can significantly improve the effect of CWS, and weight mechanisms can further increase the $\mathbf{R}_{OOV}$ scores. + +Specifically, when the sampling rate drops from $100\%$ to $5\%$ , "BERT + $D_{best}$ " and "BERT + $D_{best}+$ + +$w_{wei}^{2}$ have better F1 scores than "BERT". For $R_{OOV}$ scores, "BERT" decreases by $7.95\%$ while that of "BERT + $D_{best}$ " only decreases by $4.71\%$ . Surprisingly, "BERT + $D_{best}$ + $w_{wei}^{2}$ " almost always maintains high $R_{OOV}$ scores, fluctuating between $87\%$ and $88\%$ . We do not pay too much attention to $1\%$ , because the sample size may be too small to reflect the real performance of the model. 
Generally speaking, the above results confirm that WeiDC is robust when manual annotation resources are insufficient.

# 5.3 OOV Words

The above experiments show that WeiDC performs well on $\mathbf{R}_{OOV}$. To verify the performance of each model on OOV words, we train all models on the PKU training corpus but evaluate them on the other test sets.

We first quantify the discrepancy between the training set of PKU and the test sets of MSR, AS and CITYU. For visual comparison, we also list the distribution of OOV words in the PKU test set. See Table 7 for details. Note that both AS and CITYU are traditional Chinese datasets, where words may differ slightly, such as "铁公路" ("iron road") on CITYU versus "铁路" ("railway") on PKU.

As shown in Table 8, WeiDC performs better than the base model on almost all three test tasks, especially in $R_{OOV}$. According to Table 7 and Table 8, the effect of WeiDC on the test set with a higher
| $OOV_{word}$ | PKU Type | PKU Freq | MSR Type | MSR Freq | AS Type | AS Freq | CITYU Type | CITYU Freq |
|---|---|---|---|---|---|---|---|---|
| NotInPKU_Train | 2863 | 6006 | 4100 | 8110 | 8386 | 18006 | 3099 | 6726 |
| All Test Word | 13148 | 104372 | 12923 | 106873 | 18759 | 122610 | 8993 | 40936 |
| OOV Rate (%) | 21.78 | 5.75 | 31.73 | 7.60 | 44.70 | 14.69 | 34.46 | 16.43 |
Table 7: OOV words for the four CWS test sets. "NotInPKU_Train" denotes words that appear in a test set but not in the PKU training set. Column "Type" counts distinct word types, while column "Freq" counts token occurrences.
| Model | MSR F | MSR $R_{OOV}$ | AS F | AS $R_{OOV}$ | CITYU F | CITYU $R_{OOV}$ |
|---|---|---|---|---|---|---|
| BERT (base) | 86.95 | 20.51 | 90.05 | 71.82 | 90.77 | 73.51 |
| +$D_{best}$ | +0.0 | +0.88 | +0.45 | +2.38 | +0.52 | +2.2 |
| +$D_{best}$ + $w_{wei}^{2}$ | -0.08 | +0.81 | +0.47 | +2.41 | +0.51 | +3.06 |
frequency of OOV words is more distinct. However, the sheer number of OOV word types seems less influential.

We finally examined the PKU and MSR datasets to find out why all models perform poorly on MSR. The word segmentation standards of the two corpora differ considerably; for example, "最大" ("biggest") is one word on MSR but two words, "最 大" ("most" and "big"), on PKU, which directly causes all models to perform better on AS and CITYU but poorly on MSR.

# 5.4 NER Tasks

Similar to CWS, Named Entity Recognition (NER) can also be cast as sequence labeling. To further explore the effectiveness of the weight mechanism and compare which weight mechanism performs better, we conduct several NER experiments. All hyperparameters are the same as in the CWS task. The results are shown in Appendix Table 13.

We draw the following observations. First, the hand-crafted weight module improves sequence labeling tasks, whether CWS or NER. Second, $w_{wei}^{4}$ has the best overall performance among all weight mechanisms and is a good choice when the characteristics of the training dataset are unclear.

Moreover, the labeling rules of different datasets vary widely, so it is almost impossible to design a universal weight mechanism. This also explains why our chosen parameters do not always yield the best results. To focus on experimental exploration, we did not spend much time on parameter tuning.

# 6 Case Study

For CWS, it is very hard to obtain the right segmentation if two adjacent words, such as "天外" ("outside the sky") and "客" ("guest"), both appear for the first time, as shown in Table 9. Unfortunately, WeiDC cannot handle this problem properly either. However, we find that if we add some informative context, our model can still produce reasonable results.

Table 8: Train on PKU, but test on the other three datasets.
| | Raw text | BERT | +$D_{best}$ + $w_{wei}^{2}$ |
|---|---|---|---|
| Gold | 千载难逢 天外 客 | | |
| Original | 千载难逢天外客 | 千载难逢 天外客 | 千载难逢天外客 |
| Replace 1 | 天外的人,千载难逢天外客 | 天外的人,千载难逢 天外客 | 天外的人,千载难逢 天外客 |
| Replace 2 | 天外的客,千载难逢天外客 | 天外的客,千载难逢 天外客 | 天外的客,千载难逢 天外客 |
| Replace 3 | 天外的流星,来做客,千载难逢天外客 | 天外的流星,来做客,千载难逢 天外客 | 天外的流星,来做客,千载难逢 天外客 |
| Replace 4 | 客人说,见到了天外来的流星,千载难逢天外客 | 客人说,见到了天外来的流星,千载难逢 天外客 | 客人说,见到了天外来的流星,千载难逢 天外客 |
Table 9: "千载难逢天外客" ("A once-in-a-lifetime visitor from outside the sky"). For each example, the first entry is the raw text, and the last two entries are the segmentation results of BERT and WeiDC, respectively. Both models are trained on PKU.

Although in some cases both "天外客" ("a visitor from outside the sky") and "天外 客" ("outside the sky" and "guest") are plausible readings, here we assume that "天外 客" ("outside the sky" and "guest") is the correct one and let the models learn it by enriching the semantic environment.

First, according to "Replace 1" and "Replace 4", if only "天外" ("outside the sky") appears in the preceding text, BERT obtains "天外 客" ("outside the sky" and "guest") at the cost of inconsistent segmentation criteria for "天外" ("outside the sky"). For WeiDC, "天外客" ("a visitor from outside the sky") is treated as a derivative of "天外" ("outside the sky"), as shown in "Replace 1". After the semantic information is enriched, the possibility of "天外" ("outside the sky") becoming an independent word increases, so the correct result is obtained. We also notice that when the text content is rich, WeiDC obtains the desired results even if there is interfering information such as "外来" ("outside") in the added semantic knowledge.

Second, from "Replace 2", when "的" ("of") sits between "天外" ("outside the sky") and "客" ("guest"), both BERT and WeiDC learn the right segmentation position by treating "的" ("of") as a single-character word. We analyzed the PKU training set for further exploration and found that "的" ("of") is indeed a high-frequency single-character word. When we blur the semantic information, as shown in "Replace 3", WeiDC treats "天外客" ("a visitor from outside the sky") as one word, while BERT still obtains the correct segmentation. We speculate that the added interfering information hurts when the text content is small. From another perspective, WeiDC has a strong ability to learn contextual knowledge from different semantic environments to assist CWS tasks.
Last but not least, we make heatmaps to visualize the word segmentation process in Figure 3.

![](images/75f5a2cc2fd1b11448a763e702c5b5ce03a8a9a27a5deda821b84c2c02bf4523.jpg)
(a) "A once-in-a-lifetime visitor from outside the sky"

![](images/a08673a6df158f3ef3a5860b4f2c558a3ecce17b168cb5b91d4eb88f743e6246.jpg)
(b) "A visitor from outside the sky, a once-in-a-lifetime visitor from outside the sky"

Figure 3: Heatmaps of the label probability.

# 7 Conclusion

In this paper, we proposed a novel framework named WeiDC, which makes good use of the knowledge in teacher models through self distillation. The framework follows the sequence labeling paradigm and is the first to apply self distillation and a weight mechanism to CWS, combining four hand-crafted weight modules and two types of teacher models. Experimental results show that WeiDC achieves high performance on four CWS datasets, ranking second in average F-score and first in average $\mathrm{R}_{OOV}$ score.

However, for non-sequence-labeling problems such as text classification, a paragraph corresponds to only one tag, so the number of labels may be too small for the method in this paper to be effective. How to address this limitation deserves further exploration. It is also promising to investigate whether more efficient weight methods exist.

# 8 Acknowledgments

We thank the anonymous reviewers for constructive and expert comments, and acknowledge the support of the National Natural Science Foundation of China (No. 61836005).

# References

Adam L. Berger, Stephen Della Pietra, and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39-71.
Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, and Xuanjing Huang. 2015. Long short-term memory neural networks for Chinese word segmentation.
In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1197-1206. Association for Computational Linguistics. +Xinchi Chen, Zhan Shi, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-criteria learning for Chinese word segmentation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1193-1203. Association for Computational Linguistics. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics. +Shizhe Diao, Jiaxin Bai, Yan Song, Tong Zhang, and Yonggang Wang. 2020. ZEN: pre-training chinese text encoder enhanced by n-gram representations. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pages 4729-4740. Association for Computational Linguistics. + +Sufeng Duan and Hai Zhao. 2020. Attention is all you need for chinese word segmentation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 3862-3872. Association for Computational Linguistics. +Thomas Emerson. 2005. The second international Chinese word segmentation bakeoff. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, SIGHAN@IJCNLP 2005, Jeju Island, Korea, 14-15, 2005. Association for Computational Linguistics. +Jingjing Gong, Xinchi Chen, Tao Gui, and Xipeng Qiu. 2019. 
Switch-lstms for multi-criteria Chinese word segmentation. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 6457-6464. AAAI Press. +Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. stat, 1050:1-9. +Changning Huang and Hai Zhao. 2007. Chinese word segmentation: A decade review. Journal of Chinese information processing, 27:8-19. +Kaiyu Huang, Junpeng Liu, Degen Huang, Deyi Xiong, Zhuang Liu, and Jinsong Su. 2021. Enhancing Chinese word segmentation via pseudo labels for practicability. In Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, volume ACL/IJCNLP 2021 of Findings of ACL, pages 4369-4381. Association for Computational Linguistics. +Weipeng Huang, Xingyi Cheng, Kunlong Chen, Taifeng Wang, and Wei Chu. 2020. Towards fast and accurate neural Chinese word segmentation with multi-criteria learning. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 2062-2072. International Committee on Computational Linguistics. +Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. Tinybert: Distilling BERT for natural language understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pages 4163-4174. Association for Computational Linguistics. +Zhen Ke, Liang Shi, Erli Meng, Bin Wang, Xipeng Qiu, and Xuanjing Huang. 2020. Unified multi-criteria Chinese word segmentation with BERT. CoRR, abs/2004.05808. 
+ +Zhen Ke, Liang Shi, Songtao Sun, Erli Meng, Bin Wang, and Xipeng Qiu. 2021. Pre-training with meta learning for chinese word segmentation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 5514-5523. Association for Computational Linguistics. +John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML 2001), Williams College, Williamstown, MA, USA, June 28 - July 1, 2001, pages 282-289. Morgan Kaufmann. +Gina-Anne Levow. 2006. The third international Chinese language processing bakeoff: Word segmentation and named entity recognition. In Proceedings of the Fifth Workshop on Chinese Language Processing, SIGHAN@COLING/ACL 2006, Sydney, Australia, July 22-23, 2006, pages 108-117. Association for Computational Linguistics. +Wei Liu, Xiyan Fu, Yue Zhang, and Wenming Xiao. 2021. Lexicon enhanced chinese sequence labeling using BERT adapter. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 5847-5858. Association for Computational Linguistics. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692. +Ji Ma, Kuzman Ganchev, and David Weiss. 2018. State-of-the-art Chinese word segmentation with bilstms. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 4902-4908. 
Association for Computational Linguistics. +Tomás Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings. +Tomás Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, + +Lake Tahoe, Nevada, United States, pages 3111-3119. +Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. 2020. Deep double descent: Where bigger models and more data hurt. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. +Duc-Vu Nguyen, Linh-Bao Vo, Dang Van Thin, and Ngan Luu-Thuy Nguyen. 2021. Span labeling approach for Vietnamese and chinese word segmentation. In PRICAI 2021: Trends in Artificial Intelligence - 18th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2021, Hanoi, Vietnam, November 8-12, 2021, Proceedings, Part II, volume 13032 of Lecture Notes in Computer Science, pages 244-258. Springer. +Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese segmentation and new word detection using conditional random fields. In COLING 2004, 20th International Conference on Computational Linguistics, Proceedings of the Conference, 23-27 August 2004, Geneva, Switzerland, pages 562-568. COLING. +Nanyun Peng and Mark Dredze. 2015. Named entity recognition for chinese social media with jointly trained embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 548-554. 
The Association for Computational Linguistics. +Xipeng Qiu, Hengzhi Pei, Hang Yan, and Xuanjing Huang. 2020. A concise model for multi-criteria Chinese word segmentation with transformer encoder. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of *Findings of ACL*, pages 2887-2897. Association for Computational Linguistics. +Richard Sproat and Thomas Emerson. 2003. The first international chinese word segmentation bakeoff. In Proceedings of the Second Workshop on Chinese Language Processing, SIGHAN 2003, Sapporo, Japan, July 11-12, 2003, pages 133-143. +Weiwei Sun and Jia Xu. 2011. Enhancing Chinese word segmentation using unlabeled data. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, EMNLP 2011, 27-31 July 2011, John McIntyre Conference Centre, Edinburgh, UK. A meeting of SIGDAT, a Special Interest Group of the ACL, pages 970-979. Association for Computational Linguistics. +Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling task-specific knowledge from BERT into simple neural networks. CoRR, abs/1903.12136. + +Yuanhe Tian, Yan Song, Fei Xia, Tong Zhang, and Yonggang Wang. 2020. Improving Chinese word segmentation with wordhood memory networks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 8274-8285. Association for Computational Linguistics. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008. +Chunqi Wang and Bo Xu. 2017. Convolutional neural network with word embeddings for Chinese word segmentation. 
In Proceedings of the Eighth International Joint Conference on Natural Language Processing, IJCNLP 2017, Taipei, Taiwan, November 27 - December 1, 2017 - Volume 1: Long Papers, pages 163-172. Asian Federation of Natural Language Processing.
Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, and Ming Zhou. 2020. BERT-of-Theseus: Compressing BERT by progressive module replacing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 7859-7869. Association for Computational Linguistics.
Nianwen Xue. 2003. Chinese word segmentation as character tagging. Computational Linguistics and Chinese Language Processing, 8(1):29-48.
Ting-Bing Xu and Cheng-Lin Liu. 2019. Data-distortion guided self-distillation for deep neural networks. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 5565-5572. AAAI Press.
Nianwen Xue and Susan P. Converse. 2002. Combining classifiers for Chinese word segmentation. In The First Workshop on Chinese Language Processing, SIGHAN@COLING 2002, Taipei, Taiwan, August 24 - September 1, 2002, pages 1-7.
Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, and Kaisheng Ma. 2019. Be your own teacher: Improve the performance of convolutional neural networks via self distillation. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pages 3712-3721. IEEE.
Longkai Zhang, Houfeng Wang, Xu Sun, and Mairgup Mansur. 2013. Exploring representations from unlabeled data with co-training for Chinese word segmentation.
In Proceedings of the 2013 Conference + +on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 311-321. Association for Computational Linguistics. +Yue Zhang and Jie Yang. 2018. Chinese NER using lattice LSTM. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1554-1564. Association for Computational Linguistics. +Hai Zhao, Changning Huang, and Mu Li. 2006. An improved chinese word segmentation system with conditional random field. In Proceedings of the Fifth Workshop on Chinese Language Processing, SIGHAN@COLING/ACL 2006, Sydney, Australia, July 22-23, 2006, pages 162-165. Association for Computational Linguistics. +Hai Zhao and Chunyu Kit. 2008. An empirical comparison of goodness measures for unsupervised Chinese word segmentation with a unified framework. In Third International Joint Conference on Natural Language Processing, IJCNLP 2008, Hyderabad, India, January 7-12, 2008, pages 9-16. Association for Computer Linguistics. +Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for chinese word segmentation and POS tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 647-657. Association for Computational Linguistics. + +
| Model | MSR F | MSR $R_{OOV}$ | PKU F | PKU $R_{OOV}$ | AS F | AS $R_{OOV}$ | CITYU F | CITYU $R_{OOV}$ | AVG F | AVG $R_{OOV}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| BERT (base) | 98.22 | 85.22 | 96.5 | 85.6 | 96.44 | 77.37 | 97.63 | 86.35 | 97.2 | 83.64 |
| +$D_{best}$ | 98.22 | 85.58 | 96.59 | 87.04 | 96.64 | 79.51 | 97.68 | 86.52 | 97.28 | 84.66 |
| +$D_{best}$+$w_{wei}^{1}$ | 98.16 | 85.75 | 96.63 | 87.29 | 96.68 | 80.62 | 97.78 | 86.52 | 97.31 | 85.05 |
| +$D_{best}$+$w_{wei}^{2}$ | 98.17 | 86.07 | 96.53 | 88.03 | 96.71 | 80.57 | 97.6 | 85.4 | 97.25 | 85.02 |
| +$D_{best}$+$w_{wei}^{3}$ | 98.11 | 85.61 | 96.5 | 86.33 | 96.67 | 80.57 | 97.68 | 86.59 | 97.24 | 84.78 |
| +$D_{best}$+$w_{wei}^{4}$ | 98.28 | 86.39 | 96.59 | 87.21 | 96.76 | 80.23 | 97.79 | 87.58 | 97.36 | 85.35 |
| +$D_{last}$ | 98.16 | 86.43 | 96.64 | 86.93 | 96.51 | 78.22 | 97.63 | 86.04 | 97.24 | 84.41 |
| +$D_{last}$+$w_{wei}^{2}$ | 97.82 | 86.07 | 96.53 | 87.08 | 96.67 | 80.51 | 97.77 | 87.3 | 97.2 | 85.24 |
| +$D_{last}$+$w_{wei}^{4}$ | 98.16 | 86.21 | 96.58 | 87.81 | 96.68 | 80.11 | 97.68 | 86.76 | 97.28 | 85.22 |
| +$D_{best}$+$w_{wei}^{2}$+CRF | 98.17 | 85.37 | 96.37 | 85.26 | 96.75 | 80.96 | 97.79 | 86.86 | 97.27 | 84.61 |
| +$D_{best}$+$w_{wei}^{4}$+CRF | 98.16 | 85.61 | 96.48 | 86.59 | 96.77 | 81.63 | 97.63 | 85.81 | 97.26 | 84.91 |
+ +Table 10: Take BERT as the base model. + +
| Model | MSR F | MSR $R_{OOV}$ | PKU F | PKU $R_{OOV}$ | AS F | AS $R_{OOV}$ | CITYU F | CITYU $R_{OOV}$ | AVG F | AVG $R_{OOV}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| RoBERTa (base) | 98.33 | 86.74 | 96.58 | 87.04 | 96.34 | 76.14 | 97.8 | 88.8 | 97.26 | 84.68 |
| +$D_{best}$ | 98.43 | 86.67 | 96.56 | 86.34 | 96.52 | 78.47 | 97.84 | 89.38 | 97.34 | 85.22 |
| +$D_{best}$ + $w_{wei}^{1}$ | 98.35 | 88.55 | 96.64 | 87.39 | 96.53 | 78.58 | 97.95 | 90.03 | 97.37 | 86.14 |
| +$D_{best}$ + $w_{wei}^{2}$ | 98.33 | 86.21 | 96.79 | 88.34 | 96.6 | 79.26 | 97.96 | 90.33 | 97.42 | 86.04 |
| +$D_{best}$ + $w_{wei}^{3}$ | 98.25 | 87.88 | 96.57 | 87.23 | 96.6 | 79.41 | 97.9 | 89.58 | 97.33 | 86.03 |
| +$D_{best}$ + $w_{wei}^{4}$ | 98.43 | 87.17 | 96.74 | 87.48 | 96.59 | 79.26 | 97.95 | 89.93 | 97.43 | 85.96 |
| +$D_{last}$ | 98.4 | 87.45 | 96.53 | 87.19 | 96.48 | 78.36 | 97.89 | 89.93 | 97.33 | 85.73 |
| +$D_{last}$ + $w_{wei}^{2}$ | 98.15 | 86.89 | 96.7 | 88.39 | 96.54 | 79.21 | 97.94 | 90.2 | 97.33 | 86.17 |
| +$D_{last}$ + $w_{wei}^{4}$ | 98.23 | 87.88 | 96.67 | 88.09 | 96.67 | 79.81 | 97.98 | 89.82 | 97.39 | 86.4 |
| +$D_{best}$ + $w_{wei}^{2}$ + CRF | 98.41 | 87.0 | 96.63 | 86.86 | 96.55 | 79.09 | 97.9 | 89.28 | 97.37 | 85.56 |
Table 11: Take RoBERTa as the base model.

# A CWS Appendix

Combining the two encoders and two decoders, the final results on the four datasets are included in Tables 10 and 11. All experiments adopt the same hyperparameters, as shown in Table 4.

We speculate that RoBERTa benefits from longer training and larger training batches than BERT. In addition, some training tricks used in RoBERTa may also improve the pre-trained model, such as removing the next-sentence-prediction objective, training on longer sequences, and dynamically changing the masking pattern applied to the training data.

To our surprise, when CRF is used as the decoder, the CWS model seems more prone to overfitting, resulting in worse word segmentation. However, we also notice that CRF performs well on the AS dataset with BERT as the encoder, suggesting that softmax may not truly outperform CRF; we consider that the current parameters are simply more suitable for softmax. A more detailed analysis is available in Section 5.
| Dataset | WEIBO train | WEIBO test | WEIBO dev | RESUME train | RESUME test | RESUME dev | MSRA train | MSRA test | MSRA dev |
|---|---|---|---|---|---|---|---|---|---|
| Sentences | 1.4k | 0.27k | 0.27k | 3.8k | 0.48k | 0.46k | 46.4k | 4.4k | - |
| Chars | 73.8k | 14.8k | 14.5k | 124.1k | 15.1k | 13.9k | 2169.9k | 172.6k | - |
| Entities | 1.89k | 0.42k | 0.39k | 1.34k | 0.15k | 0.16k | 74.8k | 6.2k | - |
Table 12: Corpus details of the three NER datasets.
| Model | WEIBO P | WEIBO R | WEIBO F | RESUME P | RESUME R | RESUME F | MSRA P | MSRA R | MSRA F | AVG P | AVG R | AVG F |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BERT (base) | 68.01 | 66.27 | 67.15 | 94.58 | 95.34 | 94.96 | 95.66 | 94.03 | 94.84 | 86.08 | 85.21 | 85.65 |
| +$D_{best}$ | 68.83 | 66.03 | 67.4 | 94.34 | 96.07 | 95.2 | 94.84 | 94.87 | 94.86 | 86.0 | 85.66 | 85.82 |
| +$D_{best}$+$w_{wei}^{1}$ | 70.12 | 69.62 | 69.87 | 95.21 | 96.32 | 95.76 | 95.09 | 94.27 | 94.68 | 86.81 | 86.74 | 86.77 |
| +$D_{best}$+$w_{wei}^{2}$ | 70.1 | 66.75 | 68.38 | 95.52 | 95.46 | 95.49 | 95.39 | 94.74 | 95.06 | 87.0 | 85.65 | 86.31 |
| +$D_{best}$+$w_{wei}^{3}$ | 69.93 | 70.1 | 70 | 95.32 | 96.2 | 95.76 | 95.48 | 94.73 | 95.1 | 86.91 | 87.01 | 86.95 |
| +$D_{best}$+$w_{wei}^{4}$ | 71.08 | 70.57 | 70.83 | 94.8 | 95.15 | 94.98 | 95.84 | 94.64 | 95.24 | 87.24 | 86.79 | 87.02 |
Table 13: NER tasks. Take BERT as the base model.

# B NER Appendix

Corpus details of MSRA (Levow, 2006), WEIBO (Peng and Dredze, 2015), and RESUME (Zhang and Yang, 2018) are summarized in Table 12. We have no access to OntoNotes 4, so we did not test on it. All experiments adopt the same hyperparameters, as shown in Table 4. We do not list the latest results on these NER tasks, as we only explore whether WeiDC works for NER and which weight mechanism is more robust.

As shown in Table 13, $w_{wei}^{4}$ performs the best on the WEIBO and MSRA datasets but the worst on the RESUME dataset, indicating that it is difficult, if not impossible, to design a universal weight mechanism. Nevertheless, the overall performance of $w_{wei}^{4}$ is still higher than that of the other weight mechanisms. How to integrate weight mechanisms and knowledge distillation into NER tasks more naturally requires further exploration and research.

In addition to such NER tasks, non-sequence-labeling tasks, such as text classification, usually have only one label per sentence, which may limit the application of WeiDC.
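The entity-level precision, recall, and F-scores reported in Table 13 follow the standard exact-match convention: an entity counts as correct only if both its span and its type match the gold annotation. The following is a minimal sketch (our illustration, not the authors' evaluation code), assuming [B, I, E, S]-style tags with a type suffix such as `B-PER`; `extract_spans` and `prf` are hypothetical helper names.

```python
# Hedged sketch of entity-level P/R/F evaluation over BIES-style tags.

def extract_spans(tags):
    """Return a set of (start, end, type) spans from a BIES tag sequence.
    For simplicity the type is taken from the closing (E/S) tag."""
    spans, start = set(), None
    for i, tag in enumerate(tags):
        head, _, etype = tag.partition("-")
        if head == "S":                      # single-token entity
            spans.add((i, i, etype))
            start = None
        elif head == "B":                    # entity begins
            start = i
        elif head == "E" and start is not None:
            spans.add((start, i, etype))     # entity ends
            start = None
        # "I" simply continues the current entity; "O" is outside
    return spans

def prf(gold_tags, pred_tags):
    """Exact-match entity-level precision, recall, and F1."""
    gold, pred = extract_spans(gold_tags), extract_spans(pred_tags)
    correct = len(gold & pred)
    p = correct / len(pred) if pred else 0.0
    r = correct / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```

The AVG columns of Table 13 would then correspond to the unweighted mean of these per-dataset scores across WEIBO, RESUME, and MSRA.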
\ No newline at end of file diff --git a/weightedselfdistillationforchinesewordsegmentation/images.zip b/weightedselfdistillationforchinesewordsegmentation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..9f8e734a786c5266fcec1b23e1e1881fee39cbd3 --- /dev/null +++ b/weightedselfdistillationforchinesewordsegmentation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0edbe809f7cdcab9f7b706f54f18508585af50bc06da3ecd98921f33c3bcf4a5 +size 866744 diff --git a/weightedselfdistillationforchinesewordsegmentation/layout.json b/weightedselfdistillationforchinesewordsegmentation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..644d6d289185415548bb755e19893a39912cd3e0 --- /dev/null +++ b/weightedselfdistillationforchinesewordsegmentation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0cc27957bbbb6bef6b43e1829e4a0b55cf828f7362c855698c995e0398f58134 +size 444380 diff --git a/whatdoesittaketobakeacakethereciperefcorpusandanaphoraresolutioninproceduraltext/928fa972-f20c-459e-bee4-213b5cceabc0_content_list.json b/whatdoesittaketobakeacakethereciperefcorpusandanaphoraresolutioninproceduraltext/928fa972-f20c-459e-bee4-213b5cceabc0_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..8f7c298235d596953f6383e8464f8611c392050b --- /dev/null +++ b/whatdoesittaketobakeacakethereciperefcorpusandanaphoraresolutioninproceduraltext/928fa972-f20c-459e-bee4-213b5cceabc0_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b63f355c17de7cedea4d6115e4301e985ff4001115ee2390c8d85c51d9c7b0b8 +size 101035 diff --git a/whatdoesittaketobakeacakethereciperefcorpusandanaphoraresolutioninproceduraltext/928fa972-f20c-459e-bee4-213b5cceabc0_model.json b/whatdoesittaketobakeacakethereciperefcorpusandanaphoraresolutioninproceduraltext/928fa972-f20c-459e-bee4-213b5cceabc0_model.json new file mode 100644 index 
# What does it take to bake a cake? The RecipeRef corpus and anaphora resolution in procedural text

Biaoyan Fang$^{1}$, Timothy Baldwin$^{3,1}$ and Karin Verspoor$^{2,1}$

$^{1}$The University of Melbourne, Australia

$^{2}$RMIT University, Australia

$^{3}$MBZUAI, Abu Dhabi

biaoyanf@student.unimelb.edu.au

{tbaldwin, karin.verspoor}@unimelb.edu.au

# Abstract

Procedural text contains rich anaphoric phenomena, yet has not received much attention in NLP.
To fill this gap, we investigate the textual properties of two types of procedural text, recipes and chemical patents, and generalize an anaphora annotation framework developed for the chemical domain for modeling anaphoric phenomena in recipes. We apply this framework to annotate the RecipeRef corpus with both bridging and coreference relations. Through comparison to chemical patents, we show the complexity of anaphora resolution in recipes. We demonstrate empirically that transfer learning from the chemical domain improves resolution of anaphora in recipes, suggesting transferability of general procedural knowledge.

# 1 Introduction

Anaphora resolution is a core component in information extraction tasks (Poesio et al., 2016; Rösiger, 2019) and critical for various downstream natural language processing tasks, such as named entity recognition (Dai et al., 2019) and machine translation (Stanovsky et al., 2019). Anaphora subsumes two primary types, coreference (Ng, 2017; Clark and Manning, 2015) and bridging (Asher and Lascarides, 1998; Rösiger et al., 2018). Most anaphora corpora (Pradhan et al., 2012; Ghaddar and Langlais, 2016; Poesio et al., 2008), however, only focus on either coreference or bridging. For comprehensive anaphora resolution, it is increasingly important to have both types annotated.

Current research on anaphora resolution is mostly based on declarative text (Pradhan et al., 2012; Ghaddar and Langlais, 2016; Rösiger, 2018a; Hou et al., 2018), such as news or dialogue. Procedural text, such as chemical patents or instruction manuals, has received limited attention despite being critical for human knowledge (Yamakata et al., 2020). Yet correct resolution of entities is the cornerstone of procedural text comprehension—resolution of anaphora in these texts is required to determine what action applies to which entity.

We focus in this work on the procedural text type of recipes. As shown in Fig.
1, recipes have rich and complex anaphora phenomena. Here, the expression the biscuits appears several times in text; while each occurrence relates to the same biscuits concept, their state and semantic meaning vary. + +Our aim in this paper is to address anaphora resolution in procedural text, especially for recipes, identifying anaphoric references and determining the relationships among the entities. We first investigate the textual properties of procedural texts, i.e. chemical patents and recipes. We then adapt an existing anaphora annotation schema developed for chemical patents (Fang et al., 2021a,b) to recipes, and define four types of anaphora relationships, encompassing coreference and bridging. We further create a dataset based on this schema and achieve high inter-annotator agreement with two annotators experienced with the domain. We additionally explore the feasibility of applying transfer learning from the chemical domain to model recipe anaphora resolution. The dataset and related code are publicly available. $^{1}$ + +Our contributions in this paper include: (1) adaptation of the anaphora annotation framework from chemical patents for modeling anaphoric phenomena in recipes; (2) creation of a publicly accessible recipe anaphora resolution dataset based on the annotation framework (Fang et al., 2022); (3) investigation of the textual properties of chemical patents and recipes; and (4) demonstration of the benefit of utilizing procedural knowledge from the chemical domain to enhance recipe anaphora resolution via transfer learning. + +![](images/3c557aa255b570ce5a9fc2868fd8a2338c97f2a0b21738f084d391209e8bf880.jpg) +Figure 1: Excerpt of a recipe annotated for anaphora. Different color links represent different anaphora relation types. Detailed anaphora relation definitions are provided in Section 3.3. 
# 2 Related Work

Anaphora subsumes two referring types: (1) coreference — expressions in the text that refer to the same entity (Clark and Manning, 2015; Ng, 2017); and (2) bridging — expressions that do not refer to the same entity, but are linked via semantic, lexical, or encyclopedic relations (Asher and Lascarides, 1998; Hou et al., 2018).

Existing anaphora corpora mostly focus on declarative text, across a range of domains (Poesio et al., 2008; Pradhan et al., 2012; Ghaddar and Langlais, 2016; Cohen et al., 2017). There have been attempts to annotate procedural text corpora for anaphora, but most focus exclusively on coreference (Mysore et al., 2019; Friedrich et al., 2020).

Pradhan et al. (2012) developed the CoNLL 2012 corpus for generic coreference resolution. It consists of declarative texts including news and magazine articles, across three languages — English, Chinese, and Arabic. This corpus adopted the OntoNotes 5.0 (Weischedel et al., 2013) annotation scheme, modeling coreference in terms of two subtypes: (1) identity, where the anaphoric references and referents are identical; and (2) appositive, where a noun phrase is modified by an immediately-adjacent noun phrase. It models coreference as a clustering task, ignoring the direction of relations. Following largely the same annotation framework, the WikiCoref corpus (Ghaddar and Langlais, 2016) targeted Wikipedia texts. The InScript corpus (Modi et al., 2016) consists of 1,000 stories from 10 different scenarios corresponding to a "script", i.e. a standardised sequence of events. The corpus includes coreference annotations for noun phrases.

BioNLP-ST 2011 (Nguyen et al., 2011) is a gene-related coreference corpus based on abstracts from biomedical publications. It consists of four types of coreference: RELAT (relative pronouns or relative adjectives, e.g. that), PRON (pronouns, e.g. it), DNP (definite NPs or demonstrative NPs, e.g.
NPs that begin with the) and APPOS (coreferences in apposition). As it only focuses on gene-related annotation, coreference coverage is limited. CRAFT-ST 2019 (Cohen et al., 2017) annotates 97 full biomedical articles for coreference resolution, based on a slightly-modified version of the OntoNotes 5.0 annotation scheme. Compared to the BioNLP 2011 corpus, it contains a wider range of relation types, and is not limited to only abstracts. SCIERC (Luan et al., 2018) contains 500 abstracts from scientific articles with coreference annotations.

Due to the complexities of defining bridging (Zeldes, 2017; Hou et al., 2018), different corpora have adopted different definitions of bridging. According to Rösiger et al. (2018), bridging can be divided into: (1) referential, where the anaphoric references rely on the referent to be interpretable (e.g. a new town hall – the door, the old oak tree – leaves, etc.); and (2) lexical, encompassing lexical-semantic relations, such as meronymy or hyponymy (e.g. Europe and Spain are in a whole-part relation). The ARRAU corpus (Poesio et al., 2008) consists of three types of declarative text: news, dialogue and narrative text. The bridging annotations are mostly lexical, with a much smaller number of referential references. The ISNotes corpus (Hou et al., 2018) is based on 50 Wall Street Journal (WSJ) texts from the OntoNotes corpus, and contains both coreference and referential bridging. Similar to ISNotes, BASHI (Rösiger, 2018a) is based on another 50 WSJ texts from OntoNotes with referential bridging. With the same annotation scheme as BASHI, SciCorp (Rösiger, 2016) focuses on scientific text and referential bridging.

A small number of domain-specific anaphora corpora have been developed for procedural text. The ChEMU-ref corpus (Fang et al., 2021a) contains 1,500 chemical patent excerpts describing chemical reactions. Based on generic and chemical knowledge, the corpus contains five types of anaphora relationships, i.e.
Coreference, Transfers, Reaction-associated, Work-up, and Contained. Friedrich et al. (2020) developed the SOFC-Exp corpus based on 45 material sciences articles, for the purposes of information extraction. The corpus is primarily targeted at named entity recognition and relation extraction, with coreference as a secondary annotation task, based on coindexation between a common noun or pronoun and a more specific mention earlier in the text. Also in the context of material sciences, Mysore et al. (2019) annotated 230 synthesis procedures for coreference, largely based on text in parentheses and coreferent abbreviations.

Recent work in recipe comprehension includes visual instructions (Huang et al., 2017; Nishimura et al., 2020) and linguistic texts (Agarwal and Miller, 2011; Kiddon et al., 2015; Jiang et al., 2020) across Japanese (Harashima and Hiramatsu, 2020; Harashima et al., 2016) and English (Batra et al., 2020; Marin et al., 2019). Most research analyzes the text of recipes as a workflow graph based on actions (Kiddon et al., 2015; Mori et al., 2014; Yamakata et al., 2020), where the vertices represent named entities (e.g. action, food, etc.) and edges represent relational structure (e.g. action complement, food complement, etc.). Although interactions among ingredients can be derived via action nodes, this approach does not sufficiently capture anaphora phenomena, i.e. coreference and bridging. The RISEC corpus (Jiang et al., 2020) identifies candidate expressions for zero anaphora verbs in English recipes. However, it does not capture generic anaphoric phenomena.

In terms of modeling, most research has handled coreference and bridging separately due to limited data availability (and a lack of annotated datasets containing both coreference and bridging).
+ +For coreference resolution, span ranking models (Lee et al., 2017, 2018) have become the benchmark method, supplanting mention ranking models (Clark and Manning, 2015, 2016a,b; Wiseman et al., 2015, 2016). Various span ranking variants have been proposed (Zhang et al., 2018; Grobol, 2019; Kantor and Globerson, 2019), and achieved strong results. With the increasing number of coreference corpora, transfer learning (Brack et al., 2021; Xia and Van Durme, 2021) involving pretraining on a source domain and fine-tuning on a target domain has shown great potential at improving coreference resolution. Bridging methods can be categorised into: (1) rule-based methods (Hou et al., 2014; Rösiger et al., 2018; Rösiger, 2018b); and (2) machine learning methods (Hou, 2018a,b, 2020; Yu and Poesio, 2020). Hou (2020) modeled bridging resolution as a question answering task, and fine-tuned the question answering model from generic question answering corpora. By utilizing transfer learning, they achieved a stronger performance on the bridging task. Yu and Poesio (2020) proposed a joint training framework for bridging and coreference resolution based on an end-to-end coreference model (Lee et al., 2017). Similar to coreference, they modeled bridging as a clustering task. Through joint training, they achieved substantial improvements for bridging, but the impact on coreference was less clear. Fang et al. (2021a) adopted the same end-to-end framework for joint training, modeling bridging as a mention pair classification task, and achieved improvements on both subtasks. + +# 3 Annotation Scheme + +In this section, we describe our adapted annotation scheme for recipe anaphora annotation. The complete annotation guideline is available at Fang et al. (2022). 
# 3.1 Corpus Selection

We create our RecipeRef dataset by randomly sampling texts from RecipeDB (Batra et al., 2020), a large, diverse recipe database containing 118,171 English recipes, covering 268 processes and more than 20,262 ingredients. Each recipe consists of an ingredient list and an instruction section. We select the instruction section of each recipe, which details the steps for preparing the dish.

# 3.2 Mention Types

As our goal is to capture anaphora in recipes, we focus on ingredient-related expressions. In line with previous work (Pradhan et al., 2012; Cohen et al., 2017; Fang et al., 2021a; Ghaddar and Langlais, 2016), we leave out singleton mentions, i.e. mentions that are not involved in any anaphora relation (as defined in Section 3.3) are not annotated. The mention types considered for anaphora relations are listed below.

Ingredient Terms: In recipes, ingredient terms are essential as they indicate what ingredients are used, in the form of individual words or phrases, such as butter, endive heads, red peppers, or garlic powder.

Referring Expressions: We consider referring expressions to be pronouns (e.g. it or they) and generic phrases (e.g. soup, or the pastry mixture) used to represent ingredients that were previously introduced in the recipe text.

We adopt several criteria in annotating mentions:

- Premodifiers: One of the key challenges in procedural text is to track state changes in entities. It is critical to include premodifiers, as they play an important role in identifying an entity's state. We consider ingredients with premodifiers to be atomic mentions, e.g. chopped chicken, roasted red peppers, and four sandwiches.$^{2}$
- Numbers: In some cases, standalone numeric expressions can be used to refer to ingredients, and in such cases are considered to be mentions. Examples of this are 1 in Beat eggs, 1 at a time, and three in Combine together to make a sandwich. Repeat to make three.
# 3.3 Relation Types

A core challenge in procedural text comprehension is tracking the state of each entity (Dalvi et al., 2018; Tandon et al., 2018). Recipes contain rich information about changes in the state of ingredients. As shown in Fig. 1, to obtain the biscuits in line 6, the biscuits in line 1 have gone through several processes, involving physical (e.g. flatten) and chemical (e.g. bake) change. Capturing labeled interactions between ingredients provides a richer understanding of ingredients and their interactions (i.e. where each ingredient comes from).

![](images/fee9b57920d8b608ad954895970a6c8bfe3adee5c253ecd3f1685d8fe5f1f3a0.jpg)
Figure 2: Overall schema of anaphora relations for recipes.

There are two basic types of anaphora: coreference and bridging. In recipes, we define bridging according to three subtypes of referring relations, based on the state of entities (with coreference making up the fourth subtype). The overall schema of anaphora relations for recipes is shown in Fig. 2.

In anaphora resolution, an antecedent is a linguistic expression that anchors the interpretation of a second expression, the anaphor, which cannot be interpreted in isolation or has little meaning on its own. Anaphors are linked to antecedents via anaphora relations. Consistent with previous work, we limit anaphors to link to antecedents appearing earlier in the text (i.e. we do not annotate instances of cataphora, which we found to occur very rarely in recipe texts), and the direction of links is preserved.

# 3.3.1 Coreference

In general applications, coreference focuses on expressions that refer to the same entity in the real world (Clark and Manning, 2015; Ng, 2017). In procedural text, the state of an entity can be changed by an action applied to that entity. To capture state changes, we add an extra constraint on coreference, requiring that the two mentions refer to the same entity in the same state.
To eliminate ambiguity in linking coreferent mentions, the closest antecedent is linked for a given anaphor.

# 3.3.2 Bridging

As discussed in Section 3.3.1, we consider the state of entities to interface with anaphora in procedural text. As such, we define three subtypes of bridging relations, based on the state of the entities involved.

TRANSFORMED A one-to-one anaphoric link for an ingredient that is meaning-wise the same
but has undergone physical/chemical change (e.g. peeling, baking, or boiling). For example, in Fig. 1, the biscuits in lines 4 and 5 are annotated as TRANSFORMED because of the bake action that changes the state of the biscuits in line 4.

INGREDIENT(WITHOUT-STATE-CHANGE)-ASSOCIATED A one-to-many relationship between a processed food mention and its source ingredients, where the source ingredients have not undergone a state change (i.e. physical/chemical change). As shown in Fig. 1, the cheese in line 5 refers to its source ingredients the mozzarella and Parmesan cheese in line 4 and there is no state change. Thus, they are annotated as INGREDIENT(WITHOUT-STATE-CHANGE)-ASSOCIATED.

INGREDIENT(WITH-STATE-CHANGE)-ASSOCIATED A one-to-many relationship between a processed food mention and its source ingredients, involving a state change. As an example, the biscuits in Fig. 1 line 6 is a combination of previously-mentioned source ingredients (i.e. the sauce, a pinch of the oregano, pepperoni, the cheese, and the biscuits) involving a state change through baking. They are thus annotated as INGREDIENT(WITH-STATE-CHANGE)-ASSOCIATED.

# 3.4 Comparison with Chemical Patents

As shown in Table 1, chemical patents and recipes have many commonalities. They use similar language to describe the application of processes (e.g. combination or removal) to source entities to obtain new entities, making it feasible to adapt the anaphora annotation scheme from chemical patents (Fang et al., 2021a,b) to recipes.

| Process | Chemical Patents | Recipes |
| --- | --- | --- |
| Combination | ...5-Isopropylisoxazol-3-carboxylic acid (1.00 g, 6.45 mmol) was dissolved in methanol (20 mL), and thionyl chloride (1.51 g, 12.9 mmol) was slowly added at 0°C. The reaction solution was slowly warmed to 25°C and stirred for 12 hour... | ... mix 2 tablespoons of the olive oil, chili powder, allspice, salt, and pepper in a small bowl and brush the turkey all over with the spice mixture... |
| Removal | ...the mixture was extracted three times with ethyl acetate (50 mL). The combined ethyl acetate layer was washed with saturated brine (50 mL) and dried over anhydrous sodium sulfate... | ...add chicken thighs to the broth and simmer until cooked through, about 10 minutes. remove chicken with slotted spoon and set aside; when cool enough to handle, slice thinly. continue to simmer broth, return to pot... |

Table 1: Examples of processes in chemical patents and recipes.

However, there are some key differences in the annotation schemes.

- Domain Differences: Some relation types defined for chemical patents are domain-specific, e.g. the WORK-UP relation is specific to chemistry and cannot be directly applied to recipes.
- Determining State Change: In both chemical patents and recipes, anaphora resolution aims to capture anaphoric relations between mentions involving possible state changes. In the chemical domain, we are most concerned with chemical changes (e.g. oxidation or acidification). However, in the recipe domain, we are also interested in physical changes (e.g. chop or slice).
- Rich Semantic Meaning in Recipes: Ingredient terms in recipes may represent a combination of ingredients. As shown in Fig. 1, the biscuits in line 6 represents a combination of previously-mentioned ingredients and not just the biscuit ingredient itself. However, in chemical patents, chemical names have specific meanings and cannot be semantically extended. This is a key challenge in resolving anaphora in recipes.
- Variability in Instruction Descriptions: Although chemical patents and recipes have similar structure, instruction descriptions in recipes are structurally more variable. In chemical patents, processed entities are mostly used directly in the immediately-following process. However, processed entities in recipes can be mentioned far later in the text (esp. in "modular" recipes, e.g. where a cake, cake filling, and cake icing are separately prepared, and only combined in a final step).
- Hierarchical Structure in Recipe Relation Types: Anaphora relation types in recipes are defined hierarchically (as shown in Fig. 2), such that a simplified version of the recipe anaphora resolution task, without considering state change, can be easily derived. In chemical patents, there is no clear way of simplifying the scheme while preserving the anaphoric relations.

| | RecipeRef | ChEMU-ref |
| --- | --- | --- |
| Documents | 80 | 1,125 |
| Sentences | 999 | 5,768 |
| Tokens per sentence | 12.6 | 27.6 |
| Mentions | 1,408 | 17,023 |
| Mentions per doc | 17.6 | 15.1 |
| COREF | 229 / 415 | 3,243 |
| COREF per doc | 2.9 / 5.2 | 2.9 |
| Bridging* | 1,104 / 918 | 12,796 |
| Bridging* per doc | 13.8 / 11.5 | 11.4 |
| TR | 186 / — | — |
| IWOA | 91 / 918 | — |
| IWA | 827 / — | — |

Table 2: Corpus statistics. For ChEMU-ref, we include the training and development sets. "COREF", "TR", "IWOA" and "IWA" denote the COREFERENCE, TRANSFORMED, INGREDIENT(WITHOUT-STATE-CHANGE)-ASSOCIATED and INGREDIENT(WITH-STATE-CHANGE)-ASSOCIATED relations, respectively. "/" separates the number of relations with and without consideration of state change. "Bridging*" is the total number of bridging relations across all subtypes.

# 4 Task Definition

Following the approach of Fang et al. (2021a), anaphora resolution is modeled as a two-step task: (1) mention detection; and (2) anaphora relation detection.

As anaphora relation types in recipes are defined hierarchically, we can derive a simplified version of the recipe anaphora resolution task by removing state changes. That is, COREFERENCE and TRANSFORMED can be merged when we remove consideration of state changes, and INGREDIENT(WITHOUT-STATE-CHANGE)-ASSOCIATED and INGREDIENT(WITH-STATE-CHANGE)-ASSOCIATED can similarly be merged. As such, we evaluate recipe anaphora resolution both with state change (4-way) and without state change (2-way).

As our corpus includes one-to-many anaphoric relations for bridging, standard coreference evaluation metrics (Luo, 2005; Recasens and Hovy, 2011; Moosavi and Strube, 2016), which assume a given mention only occurs in a unique cluster, are not suitable for this task. Although coreference relations, being one-to-one, could be evaluated with these metrics, to maintain a unified evaluation for bridging and coreference we utilize precision, recall and F1 as our core metrics. Specifically, we follow the evaluation of the ChEMU-ref corpus, scoring coreference from two perspectives: (1) surface coreference, where a coreferent anaphor must link to its closest antecedent; and (2) atom coreference, where a coreferent anaphor may link to any correct antecedent (Kim et al., 2012).
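The difference between the two scoring perspectives can be made concrete with a short sketch (the helper names are ours; a gold cluster is given as a position-sorted list of mention offsets). Surface scoring only credits a link to the closest antecedent, while atom scoring accepts a link to any correct earlier mention:

```python
def surface_links(cluster):
    """One link per anaphor, to its closest preceding antecedent."""
    mentions = sorted(cluster)
    return {(cur, prev) for prev, cur in zip(mentions, mentions[1:])}

def atom_pairs(cluster):
    """All (anaphor, antecedent) pairs: any earlier mention is correct."""
    mentions = sorted(cluster)
    return {(mentions[i], mentions[j])
            for i in range(1, len(mentions)) for j in range(i)}

def prf(pred, gold):
    """Precision, recall and F1 over sets of (anaphor, antecedent) links."""
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# one gold cluster: "the biscuits" mentioned at token offsets 3, 40 and 95
gold_cluster = [3, 40, 95]
# predicted links: (95, 3) skips the closest antecedent at offset 40
pred = {(40, 3), (95, 3)}

print(prf(pred, surface_links(gold_cluster)))  # (0.5, 0.5, 0.5)
print(prf(pred, atom_pairs(gold_cluster)))     # (1.0, 0.666..., 0.8)
```

Under surface scoring the long-range link counts as an error; under atom scoring it is accepted, which is why atom F1 is consistently higher in the results below.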
For manual annotation, we use the Brat rapid annotation tool. To achieve high quality, we went through 8 rounds of annotation training and refinement of the anaphora annotation guidelines with two annotators experienced with the recipe domain. In each round of training, the annotators independently annotated 10 recipes (different for each round of annotation) and met afterwards to compare annotation results. Further refinements of the annotation guidelines were made based on the discussion.

After training, we reached a high inter-annotator agreement (IAA) of Krippendorff's $\alpha = 0.85$, mention-level $F1 = 0.88$, and relation-level $F1 = 0.67$. As a point of comparison, the corresponding values after the first round of annotator training were 0.45, 0.51 and 0.29, respectively.

We use 80 double-annotated recipes with harmonized annotations as our corpus. The statistics of this corpus in comparison with the ChEMU-ref corpus (Fang et al., 2021a) are shown in Table 2.

# 5 Methodology

To investigate the benefit of transfer learning from the chemical domain, we follow the configuration of Fang et al. (2021a), modeling bridging as a classification task and adopting the benchmark end-to-end neural coreference model of Lee et al. (2017, 2018) for joint training of the two anaphora resolution types.
For each span $x_i$, the model learns: (1) a mention score $s_m(i)$ for mention detection:

$$
s_m(i) = w_s \cdot \mathrm{FFNN}_s(s_i)
$$

(2) a distribution $P(\cdot)$ over possible antecedent spans $Y(i)$ for coreference resolution:

$$
P(y) = \frac{\exp\left(s_c(i, y)\right)}{\sum_{y' \in Y(i)} \exp\left(s_c(i, y')\right)}
$$

where $s_c(i, y)$ is the output of a feed-forward neural network over the span pair embedding $s_{i,y}$; and (3) a pair-wise score $s_b(i, y)$ for each possible antecedent span $y$ for bridging resolution:

$$
s_b(i, y) = \mathrm{softmax}(w_b \cdot \mathrm{FFNN}_b(s_{i,y}))
$$
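A minimal sketch of these scoring functions, with toy dimensions and random weights standing in for the trained BiLSTM features and FFNN parameters (the feature vectors $\phi$ are omitted; all names and sizes here are illustrative, not the paper's):

```python
import math
import random

random.seed(0)
D = 8   # toy span-embedding size (assumption: the real model uses BiLSTM features)
H = 16  # toy FFNN hidden size

def ffnn(x, w1, w2):
    """One-hidden-layer feed-forward scorer with ReLU, scalar output."""
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return sum(w * h for w, h in zip(w2, hidden))

def rand_matrix(rows, cols):
    return [[random.gauss(0.0, 0.1) for _ in range(cols)] for _ in range(rows)]

# toy span embeddings s_i for five candidate spans
spans = [[random.gauss(0.0, 1.0) for _ in range(D)] for _ in range(5)]

# mention score s_m(i) = w_s . FFNN_s(s_i)
w1_m = rand_matrix(H, D)
w2_m = [random.gauss(0.0, 0.1) for _ in range(H)]
mention_scores = [ffnn(s, w1_m, w2_m) for s in spans]

# span pair embedding s_{i,y} = [s_i, s_y, s_i o s_y]  (phi(i, y) omitted)
def pair_embed(si, sy):
    return si + sy + [a * b for a, b in zip(si, sy)]

# antecedent distribution P(y) for span 4 over earlier spans Y(4) = {0, 1, 2, 3}
w1_c = rand_matrix(H, 3 * D)
w2_c = [random.gauss(0.0, 0.1) for _ in range(H)]
pair_scores = [ffnn(pair_embed(spans[4], spans[y]), w1_c, w2_c) for y in range(4)]

# numerically stable softmax over the candidate antecedents
m = max(pair_scores)
exps = [math.exp(s - m) for s in pair_scores]
probs = [e / sum(exps) for e in exps]
assert abs(sum(probs) - 1.0) < 1e-9
```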
+ +A span pair embedding $s_{i,y}$ is obtained by the concatenation of each span embedding $(s(i), s(y))$ and the element-wise multiplication of the span embeddings $(s(i) \circ s(y))$ and a feature vector $(\phi(i, y))$ for span pair $i$ and $y$ : + +$$ +s _ {i, y} = [ s (i), s (y), s (i) \circ s (y), \phi (i, y) ] +$$ + +For mention loss, we use cross-entropy loss: + +$$ +\begin{array}{l} L _ {m} = - \sum_ {i = 1} ^ {\lambda T} m _ {i} * \log (\operatorname {s i g m o i d} (s _ {m} (i))) \\ + \left(1 - m _ {i}\right) * \log \left(1 - \operatorname {s i g m o i d} \left(s _ {m} (i)\right)\right) \\ \end{array} +$$ + +where: + +$$ +m _ {i} = \left\{ \begin{array}{l l} 0 & \text {s p a n} i \notin \mathrm {G O L D} _ {m} \\ 1 & \text {s p a n} i \in \mathrm {G O L D} _ {m} \end{array} \right. +$$ + +and $\mathrm{GOLD}_m$ is the set of gold mentions that are involved in anaphora relations. + +For coreference resolution, we compute the loss as follows, where $\mathrm{GOLD}_c(i)$ is the gold coreferent antecedents that span $i$ refers to: + +$$ +L _ {c} = \log \prod_ {i = 1} ^ {\lambda T} \sum_ {\hat {y} \in Y (i) \bigcap \mathrm {G O L D} _ {c} (i)} P (\hat {y}) +$$ + +For bridging resolution, the loss is obtained by multiclass cross-entropy: + +$$ +L _ {b} = - \sum_ {c = 1} ^ {K _ {c}} \sum_ {i = 1} ^ {\lambda T} \sum_ {y} b _ {i, j, c} \log (s _ {b} (i, y, c)) +$$ + +where $K_{c}$ represents the number of bridging categories, $s_b(i,j,c)$ denotes the prediction of $s_b(i,j)$ under category $c$ , and: + +$$ +b _ {i, j, c} = \left\{ \begin{array}{l l} 0 & \text {s p a n p a i r} (i, j) \notin \operatorname {G O L D} _ {b} (c) \\ 1 & \text {s p a n p a i r} (i, j) \in \operatorname {G O L D} _ {b} (c) \end{array} \right. +$$ + +where $\mathrm{GOLD}_b(c)$ is the gold bridging relation under category $c$ . 
We compute the total loss as $L = L_m + L_{ref}$, where:

$$
L_{ref} = \left\{ \begin{array}{ll} L_c & \text{for coreference} \\ L_b & \text{for bridging} \\ L_c + L_b & \text{for joint training} \end{array} \right.
$$

# 6 Experiments

In this section, we present experimental results both with and without state change for recipe anaphora resolution. We use a similar configuration to Lee et al. (2018). Specifically, as pretrained token embeddings we use the concatenation of 300-dimensional GloVe embeddings (Pennington et al., 2014), 1024-dimensional ELMo word representations (Peters et al., 2018), and 8-dimensional character embeddings learned from a character CNN with windows of 3, 4, and 5 characters. Each feed-forward neural network consists of two hidden layers with 150 dimensions and rectified linear units (Nair and Hinton, 2010). Gold mentions are kept separate for coreference and bridging; for joint training, they are combined.

We use 10-fold cross-validation to evaluate our model on recipe anaphora resolution. Since end-to-end model performance varies due to random initialization (Lee et al., 2017), we randomly shuffle the dataset 5 times and run cross-validation 3 times for each shuffle. Averaged results are reported.

Table 3 shows our primary results, without state change. For coreference resolution, we provide experimental results on both surface and atom coreference metrics. For bridging resolution, we focus on overall bridging results. Since surface and atom coreference metrics show the same trends in performance, we use surface coreference and overall bridging to compute overall results.

Overall, joint training achieves a $26.2\%$ $F_{1}$ score for surface coreference and a $26.9\%$ $F_{1}$ score for bridging, a $+1.4\%$ and $+0.9\%$ $F_{1}$ score absolute improvement over the component-wise models.
As such, joint training improves the performance of both tasks. Compared to precision, recall in anaphor and relation detection is lower, indicating the complexity of anaphoric forms in recipes.

We also experimented with joint coreference resolution and change-of-state classification, and observed similar trends in the results, at reduced performance levels due to the difficulty of additionally predicting state changes (as shown in Appendix A).

| Relation | Method | P_A | R_A | F_A | P_R | R_R | F_R |
| --- | --- | --- | --- | --- | --- | --- | --- |
| COREF (Surface) | coreference | 62.0 ± 1.0 | 37.8 ± 0.8 | 46.1 ± 0.8 | 33.6 ± 0.9 | 20.4 ± 0.6 | 24.8 ± 0.7 |
| | joint_train | 65.2 ± 0.9 | 37.5 ± 0.9 | 46.7 ± 0.8 | 36.8 ± 0.9 | 21.0 ± 0.6 | 26.2 ± 0.7 |
| COREF (Atom) | coreference | 62.0 ± 1.0 | 37.8 ± 0.8 | 46.1 ± 0.8 | 46.8 ± 1.1 | 26.1 ± 0.7 | 32.9 ± 0.7 |
| | joint_train | 65.2 ± 0.9 | 37.5 ± 0.9 | 46.7 ± 0.8 | 50.4 ± 1.1 | 26.7 ± 0.7 | 34.4 ± 0.8 |
| Bridging | bridging | 56.1 ± 1.2 | 35.1 ± 0.9 | 41.7 ± 0.8 | 36.3 ± 0.9 | 21.5 ± 0.8 | 26.0 ± 0.7 |
| | joint_train | 57.7 ± 1.3 | 35.5 ± 0.9 | 42.7 ± 0.8 | 38.0 ± 0.8 | 21.9 ± 0.7 | 26.9 ± 0.7 |
| Overall | joint_train | 62.1 ± 0.7 | 37.0 ± 0.5 | 46.0 ± 0.5 | 37.4 ± 0.7 | 21.8 ± 0.5 | 27.1 ± 0.5 |

Table 3: Anaphora resolution results based on 10-fold cross-validation without considering state change. Models were trained over 10,000 epochs, and averaged over 3 runs with 5 different random seeds (a total of $5 \times 3 \times 10$ runs). Models are trained for "coreference", "bridging" or "joint_train" (both tasks jointly). "P", "R" and "F" denote precision, recall and F1; subscript "A" marks anaphor prediction and "R" relation prediction.
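The evaluation protocol summarized in the Table 3 caption (5 shuffles, 3 repeated runs per shuffle, 10 folds) can be sketched as follows; `cross_validation_runs` is a hypothetical helper, and model training itself is elided:

```python
import random

def cross_validation_runs(n_docs=80, n_folds=10, n_shuffles=5, n_repeats=3):
    """Enumerate (shuffle, repeat, fold) runs: shuffle the corpus, split it
    into folds, and repeat each cross-validation several times to average
    out random-initialization variance. Training is elided."""
    runs = []
    docs = list(range(n_docs))
    for s in range(n_shuffles):
        order = docs[:]
        random.Random(s).shuffle(order)          # a fresh shuffle per seed
        folds = [order[i::n_folds] for i in range(n_folds)]
        for r in range(n_repeats):
            for k in range(n_folds):
                test = folds[k]
                train = [d for j, f in enumerate(folds) if j != k for d in f]
                runs.append((s, r, k, len(train), len(test)))
    return runs

runs = cross_validation_runs()
print(len(runs))  # 5 shuffles x 3 repeats x 10 folds = 150 runs
```

With the 80-document RecipeRef corpus, each run trains on 72 documents and tests on the held-out 8, and reported numbers are means (with standard errors) over all 150 runs.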
| Relation | Method | F_A | F_R |
| --- | --- | --- | --- |
| COREF (Surface) | coreference | 46.1 ± 0.8 | 24.8 ± 0.7 |
| COREF (Surface) | - w/ transfer | 46.7 ± 0.8 | 25.3 ± 0.7 |
| COREF (Surface) | joint_train | 46.7 ± 0.8 | 26.2 ± 0.7 |
| COREF (Surface) | - w/ transfer | 45.3 ± 0.9 | 26.9 ± 0.7 |
| COREF (Atom) | coreference | 46.1 ± 0.8 | 32.9 ± 0.7 |
| COREF (Atom) | - w/ transfer | 46.7 ± 0.8 | 33.5 ± 0.8 |
| COREF (Atom) | joint_train | 46.7 ± 0.8 | 34.4 ± 0.8 |
| COREF (Atom) | - w/ transfer | 45.3 ± 0.9 | 33.9 ± 0.8 |
| Bridging | bridging | 41.7 ± 0.8 | 26.0 ± 0.7 |
| Bridging | - w/ transfer | 40.6 ± 0.9 | 26.7 ± 0.7 |
| Bridging | joint_train | 42.7 ± 0.8 | 26.9 ± 0.7 |
| Bridging | - w/ transfer | 43.4 ± 0.8 | 27.9 ± 0.7 |
| Overall | joint_train | 46.0 ± 0.5 | 27.1 ± 0.5 |
| Overall | - w/ transfer | 45.2 ± 0.6 | 27.9 ± 0.5 |

Table 4: Experiments with transfer learning, without considering state change. "$F_A$" denotes the F1 score for anaphor prediction, and "$F_R$" for relation prediction.

As discussed in Section 3.4, chemical patents and recipes have similar text structure. Based on the hypothesis that this structural similarity can lead to successful domain transfer, we experiment with transfer learning from the chemical domain to recipes. Specifically, we pretrain the anaphora resolution model on the ChEMU-ref corpus (Fang et al., 2021a,b) for 10,000 epochs, and fine-tune it on the recipe corpus.

Table 4 shows the results with transfer learning, demonstrating consistent improvements on both coreference and bridging resolution. Overall, we achieve a $27.9\%$ $F_{1}$ score for relation prediction under joint training and transfer learning, a $+0.8\%$ absolute $F_{1}$ improvement. Incorporating procedural knowledge also improves the component-wise models, by $+0.5\%$ and $+0.7\%$ absolute $F_{1}$ for surface coreference and bridging, respectively.

We performed error analysis on 5 randomly-selected batches from 10-fold cross-validation based on joint training. There are two primary causes of error. First, the model struggles to capture the semantics of ingredient terms as they are combined with other ingredients. As discussed in Section 3.4, ingredient terms can semantically represent a mixture; e.g., the biscuits in Fig. 1 line 6 and the yellowtail in Table 5 Ex 1 each represent a mixture of previous ingredients that includes the key ingredient (biscuits and yellowtail, respectively). The model fails to capture the fact that these mentions incorporate multiple antecedents, and incorrectly analyzes them as COREFERENCE. The second cause of error is failure to detect state change, mostly in falsely analyzing TRANSFORMED as COREFERENCE, and INGREDIENT(WITHOUT-STATE-CHANGE)-ASSOCIATED as INGREDIENT(WITH-STATE-CHANGE)-ASSOCIATED. 
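Every score in these tables is reported as a mean with a ± interval over the 5 shuffles × 3 runs × 10 folds described in Section 6. A minimal aggregation helper might look like the following; whether the interval is a standard deviation or a standard error is not stated in the text, so computing the standard error here is an assumption:

```python
import statistics


def aggregate_f1(run_scores):
    """Aggregate per-run F1 scores into (mean, spread), both rounded
    to one decimal as in the result tables. The spread computed here
    is the standard error of the mean (an assumption; the paper does
    not say which interval it reports)."""
    mean = statistics.mean(run_scores)
    spread = statistics.stdev(run_scores) / len(run_scores) ** 0.5
    return round(mean, 1), round(spread, 1)
```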
+ +Errors in coreference resolution occur due to two primary factors: (1) imbalance of coreference and bridging; and (2) entities with different surface expressions. As shown in Table 2, coreference relations are not common in recipes, making it hard for models to capture coreference links. Models also fail to capture the coreference relationship of entities in the face of lexical variation. + +In bridging resolution, models also tend to predict anaphoric links as INGREDIENT(WITH-STATE-CHANGE)-ASSOCIATED due to its predominance in the annotated data. Furthermore, given that it is a many-to-one relation, models + +
| Ex | Excerpt |
| --- | --- |
| 1 | Season the yellowtail fillets with salt and pepper, then dust 1 side only with flour, shaking off any excess. In a medium sized saute pan, heat the olive oil until just nearly smoking and add the yellowtail, flour side down... |
| 2 | In a bowl, mash the corned beef as much as you can. Add the tinned tomatoes, onions and curry powder. Mix well until the mixture becomes free of any lump of corned beef. Transfer to a frying pan on a medium heat, cook the mixture for about 10-15 minutes until the mixture is heated through... |
| 3 | In a ceramic or glass bowl, combine chiles, orange juice, lemon juice, and orange peel. Add the fish and refrigerate for 4 to 6 hours, stirring occasionally until the fish loses all translucency. You may leave in the refrigerator overnight to marinate, if desired. Remove the fish, reserving the juice. |
| 4 | ...Add the white wine and passion fruit. Over medium heat, reduce by 3/ the liquid in the pan will begin to look thick and bubbly. Remove the pan from the heat and slowly whisk in the butter a little bit at a time, making sure all butter is whisked in before adding more... |

Table 5: Examples of anaphora phenomena from the RecipeRef dataset.

tend to over-predict INGREDIENT(WITH-STATE-CHANGE)-ASSOCIATED relations to mentions which are not associated with the given anaphor. A natural explanation for this is that span-pair predictions are made independently of one another, and there is no way for the model to capture interactions between anaphors. Simultaneously evaluating candidate antecedents might address this issue.

By incorporating procedural knowledge via transfer learning, models achieve better performance. The improvement takes two main forms. First, mention detection improves. For example, in Table 5 Ex 3, the juice and its related anaphoric relations are predicted by models with transfer learning, but not captured by standard joint training models. Second, detection of lexically-varied coreferent mentions improves. In Ex 4, standard joint training models fail to capture the COREFERENCE relation between the butter and all butter due to variation in expression, but this relation is correctly captured by models with transfer learning.

Directions for future work include: (1) joint learning with COREFERENCE and TRANSFORMED relations, which differ only in whether there is a state change, such that considering them together may be effective; (2) incorporation of external knowledge, including knowledge about ingredient entities, which may further improve transfer learning; and (3) utilization of transformer-based models (Joshi et al., 2020; Xia and Van Durme, 2021), which have been shown to perform well in general-domain coreference settings.

# 7 Conclusion

In this paper, we have extended earlier work on anaphora resolution over chemical patents to the domain of recipes. We adapted the annotation schema and guidelines for chemical patents, and created a labeled anaphora resolution corpus for recipes. 
We further defined two tasks for modeling anaphora phenomena in recipes, with and without consideration of state change. Our experiments show the benefit of joint training, and also of transfer learning from the chemical domain.

# Acknowledgements

This work was done in the framework of the ChEMU project, supported by Australian Research Council Linkage Project LP160101469 and Elsevier. A graduate research scholarship was provided by the University of Melbourne Faculty of Engineering and IT to Biaoyan Fang. We would also like to thank Dr. Christian Druckenbrodt, Dr. Saber A. Akhondi, and Dr. Camilo Thorne from Elsevier, as well as our two expert recipe annotators Kate Baldwin and Ayah Tayeh, for their contributions in refining the annotation guidelines.

# References

Rahul Agarwal and Kevin Miller. 2011. Information extraction from recipes. Department of Computer Science, Stanford University-2008.
Nicholas Asher and Alex Lascarides. 1998. Bridging. Journal of Semantics, 15(1):83-113.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations (ICLR 2015), San Diego, USA.
Devansh Batra, Nirav Diwan, Utkarsh Upadhyay, Jushaan Singh Kalra, Tript Sharma, Aman Kumar Sharma, Dheeraj Khanna, Jaspreet Singh Marwah, Srilakshmi Kalathil, Navjot Singh, Rudraksh Tuwani, and Ganesh Bagler. 2020. RecipeDB: A resource for exploring recipes. Database, 2020.
Arthur Brack, Daniel Uwe Müller, Anett Hoppe, and Ralph Ewerth. 2021. Coreference resolution in research papers from multiple domains. In Proc. of the 43rd European Conference on Information Retrieval, online.
Kevin Clark and Christopher D. Manning. 2015. Entity-centric coreference resolution with model stacking. In Proc. 
of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1405-1415, Beijing, China. +Kevin Clark and Christopher D. Manning. 2016a. Deep reinforcement learning for mention-ranking coreference models. In Proc. of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2256-2262, Austin, USA. +Kevin Clark and Christopher D. Manning. 2016b. Improving coreference resolution by learning entity-level distributed representations. In Proc. of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 643-653, Berlin, Germany. +K Bretonnel Cohen, Arrick Lanfranchi, Miji Joo-young Choi, Michael Bada, William A Baumgartner, Natalya Panteleyeva, Karin Verspoor, Martha Palmer, and Lawrence E Hunter. 2017. Coreference annotation and resolution in the Colorado Richly Annotated Full Text (CRAFT) corpus of biomedical journal articles. BMC Bioinformatics, 18(1):372. +Zeyu Dai, Hongliang Fei, and Ping Li. 2019. Coreference aware representation learning for neural named entity recognition. In *IJCAI*, pages 4946-4953. +Bhavana Dalvi, Lifu Huang, Niket Tandon, Wen tau Yih, and Peter Clark. 2018. Tracking state changes in procedural text: A challenge dataset and models for process paragraph comprehension. In *NAACL*. +Biaoyan Fang, Christian Druckenbrodt, Saber A Akhondi, Jiayuan He, Timothy Baldwin, and Karin Verspoor. 2021a. ChEMU-ref: A corpus for modeling anaphora resolution in the chemical domain. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1362-1375, Online. Association for Computational Linguistics. +Biaoyan Fang, Christian Druckenbrodt, Saber A. Akhondi, Camilo Thorne, Timothy Baldwin, and Karin Verspoor. 2022. RecipeRef corpus for modeling anaphora resolution from the procedural text of recipes. 
Mendeley Data.
Biaoyan Fang, Christian Druckenbrodt, Colleen Yeow Hui Shiuan, Sacha Novakovic, Ralph Hössel, Saber A. Akhondi, Jiayuan He, Meladel Mistica, Timothy Baldwin, and Karin Verspoor. 2021b. ChEMU-Ref dataset for modeling anaphora resolution in the chemical domain. Mendeley Data.
Annemarie Friedrich, Heike Adel, Federico Tomazic, Johannes Hingerl, Renou Benteau, Anika Marusczyk, and Lukas Lange. 2020. The SOFC-exp corpus and neural approaches to information extraction in the materials science domain. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1255-1268, Online. Association for Computational Linguistics.
Abbas Ghaddar and Philippe Langlais. 2016. WikiCoref: An English coreference-annotated corpus of Wikipedia articles. In Proc. of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 136-142, Portorož, Slovenia.
Loic Grobol. 2019. Neural coreference resolution with limited lexical context and explicit mention detection for oral French. In Proc. of the Second Workshop on Computational Models of Reference, Anaphora and Coreference, pages 8-14, Minneapolis, USA.
Jun Harashima, Michiaki Ariga, Kenta Murata, and Masayuki Ioki. 2016. A large-scale recipe and meal data collection as infrastructure for food research. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 2455-2459, Portorož, Slovenia. European Language Resources Association (ELRA).
Jun Harashima and Makoto Hiramatsu. 2020. Cookpad parsed corpus: Linguistic annotations of Japanese recipes. In Proceedings of the 14th Linguistic Annotation Workshop, pages 87-92, Barcelona, Spain. Association for Computational Linguistics.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural Computation*, 9(8):1735-1780.
Yufang Hou. 2018a. A deterministic algorithm for bridging anaphora resolution. In Proc. 
of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1938-1948, Brussels, Belgium. +Yufang Hou. 2018b. Enhanced word representations for bridging anaphora resolution. In Proc. of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 1-7, New Orleans, USA. +Yufang Hou. 2020. Bridging anaphora resolution as question answering. In Proc. of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1428-1438, Online. +Yufang Hou, Katja Markert, and Michael Strube. 2014. A rule-based system for unrestricted bridging resolution: Recognizing bridging anaphora and finding links to antecedents. In Proc. of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 2082-2093, Doha, Qatar. +Yufang Hou, Katja Markert, and Michael Strube. 2018. Unrestricted bridging resolution. Computational Linguistics, 44(2):237-284. +De-An Huang, Joseph J Lim, Li Fei-Fei, and Juan Carlos Niebles. 2017. Unsupervised visual-linguistic reference resolution in instructional videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2183-2192. + +Yiwei Jiang, Klim Zaporojets, Johannes Deleu, Thomas Demeester, and Chris Develder. 2020. Recipe instruction semantics corpus (RISc): Resolving semantic structure and zero anaphora in recipes. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 821-826, Suzhou, China. Association for Computational Linguistics. +Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77. +Ben Kantor and Amir Globerson. 2019. 
Coreference resolution with entity equalization. In Proc. of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), pages 673-677, Florence, Italy. +Chloe Kiddon, Ganesa Thandavam Ponnuraj, Luke Zettlemoyer, and Yejin Choi. 2015. Mise en place: Unsupervised interpretation of instructional recipes. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 982-992, Lisbon, Portugal. Association for Computational Linguistics. +Jin-Dong Kim, Ngan Nguyen, Yue Wang, Jun'ichi Tsujii, Toshihisa Takagi, and Akinori Yonezawa. 2012. The Genia event and protein coreference tasks of the BioNLP shared task 2011. BMC Bioinformatics, 13(11):S1. +Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proc. of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188-197, Copenhagen, Denmark. +Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to-fine inference. In Proc. of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 687-692, New Orleans, USA. +Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3219-3232, Brussels, Belgium. Association for Computational Linguistics. +Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In Proc. of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing (EMNLP 2005), pages 25-32, Vancouver, Canada. + +Javier Marin, Aritro Biswas, Ferda Ofli, Nicholas Hynes, Amaia Salvador, Yusuf Aytar, Ingmar Weber, and Antonio Torralba. 2019. 
Recipe1M+: A dataset for learning cross-modal embeddings for cooking recipes and food images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(1):187-203.
Ashutosh Modi, Tatjana Anikina, Simon Ostermann, and Manfred Pinkal. 2016. InScript: Narrative texts annotated with script information. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3485-3493, Portorož, Slovenia. European Language Resources Association (ELRA).
Nafise Sadat Moosavi and Michael Strube. 2016. Which coreference evaluation metric do you trust? A proposal for a link-based entity aware metric. In Proc. of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 632-642, Berlin, Germany.
Shinsuke Mori, Hirokuni Maeta, Yoko Yamakata, and Tetsuro Sasada. 2014. Flow graph corpus from recipe texts. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 2370-2377, Reykjavik, Iceland. European Language Resources Association (ELRA).
Sheshera Mysore, Zachary Jensen, Edward Kim, Kevin Huang, Haw-Shiuan Chang, Emma Strubell, Jeffrey Flanigan, Andrew McCallum, and Elsa Olivetti. 2019. The materials science procedural text corpus: Annotating materials synthesis procedures with shallow semantic structures. In Proceedings of the 13th Linguistic Annotation Workshop, pages 56-64, Florence, Italy. Association for Computational Linguistics.
Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted Boltzmann machines. In Proc. of the 27th International Conference on Machine Learning (ICML 2010), Haifa, Israel.
Vincent Ng. 2017. Machine learning for entity coreference resolution: A retrospective look at two decades of research. In Proc. of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI'17), pages 4877-4884, San Francisco, USA.
Ngan Nguyen, Jin-Dong Kim, and Jun'ichi Tsujii. 2011. 
Overview of BioNLP 2011 protein coreference shared task. In Proc. of BioNLP Shared Task 2011 Workshop, pages 74-82, Portland, USA. +Taichi Nishimura, Suzuki Tomori, Hayato Hashimoto, Atsushi Hashimoto, Yoko Yamakata, Jun Harashima, Yoshitaka Ushiku, and Shinsuke Mori. 2020. Visual grounding annotation of recipe flow graph. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4275-4284, Marseille, France. European Language Resources Association. +Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proc. of the 2014 Conference on + +Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. +Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, USA. +Massimo Poesio, Ron Artstein, et al. 2008. Anaphoric annotation in the ARRAU corpus. In Proc. of the Sixth International Conference on Language Resources and Evaluation (LREC 2008), Marrakech, Morocco. +Massimo Poesio, Roland Stuckardt, and Yannick Versley. 2016. Anaphora Resolution. Springer. +Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Proc. of EMNLP-CoNLL 2012: Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1-40, Jeju, Korea. +Marta Recasens and Eduard Hovy. 2011. BLANC: Implementing the Rand index for coreference evaluation. Natural Language Engineering, 17(4):485-510. +Ina Rosiger. 2016. Scicorp: A corpus of English scientific articles annotated for information status analysis. In Proc. 
of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 1743-1749, Portorož, Slovenia.
Ina Rösiger. 2018a. BASHI: A corpus of Wall Street Journal articles annotated with bridging links. In Proc. of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018).
Ina Rösiger. 2018b. Rule- and learning-based methods for bridging resolution in the ARRAU corpus. In Proc. of the First Workshop on Computational Models of Reference, Anaphora and Coreference, pages 23-33, New Orleans, USA.
Ina Rösiger. 2019. Computational modelling of coreference and bridging resolution. Ph.D. thesis, Stuttgart University.
Ina Rösiger, Arndt Riester, and Jonas Kuhn. 2018. Bridging resolution: Task definition, corpus resources and rule-based experiments. In Proc. of the 27th International Conference on Computational Linguistics, pages 3516-3528, Santa Fe, USA.
Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679-1684, Florence, Italy. Association for Computational Linguistics.
Niket Tandon, Bhavana Dalvi Mishra, Joel Grus, Wen-tau Yih, Antoine Bosselut, and Peter Clark. 2018. Reasoning about actions and state changes by injecting commonsense knowledge. In EMNLP.
Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, and Ann Houston. 2013. OntoNotes release 5.0. Linguistic Data Consortium Catalog No. LDC2013T19.
Sam Wiseman, Alexander M. Rush, Stuart Shieber, and Jason Weston. 2015. Learning anaphoricity and antecedent ranking features for coreference resolution. In Proc. 
of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1416-1426, Beijing, China. +Sam Wiseman, Alexander M. Rush, and Stuart M. Shieber. 2016. Learning global features for coreference resolution. In Proc. of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 994-1004, San Diego, USA. +Patrick Xia and Benjamin Van Durme. 2021. Moving on from OntoNotes: Coreference resolution model transfer. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5241-5256, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. +Yoko Yamakata, Shinsuke Mori, and John Carroll. 2020. English recipe flow graph corpus. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 5187-5194, Marseille, France. European Language Resources Association. +Juntao Yu and Massimo Poesio. 2020. Multitask learning-based neural bridging reference resolution. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3534-3546, Barcelona, Spain (Online). International Committee on Computational Linguistics. +Amir Zeldes. 2017. The GUM corpus: Creating multilayer resources in the classroom. Language Resources and Evaluation, 51(3):581-612. +Rui Zhang, Cicero Nogueira dos Santos, Michihiro Yasunaga, Bing Xiang, and Dragomir Radev. 2018. Neural coreference resolution with deep biaffine attention by joint mention detection and mention clustering. In Proc. of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 102-107, Melbourne, Australia. + +# A Additional Experimental Results + +In the following tables, we provide detailed experimental results. 

Table 6 provides anaphora resolution results with state change based on 10-fold cross validation.

Table 7 provides a full comparison of transfer learning per anaphora relation with state change based on 10-fold cross validation.

Table 8 provides a full comparison of transfer learning per anaphora relation without state change based on 10-fold cross validation.

Table 9 provides a full comparison of transfer learning for coreference resolution based on 10-fold cross validation, under the standard coreference evaluation metrics MUC, BCUBED, and CEAFE. Models are trained with the same settings (e.g., data partitions, training epochs) as in Section 6, but evaluated with these standard coreference metrics. We consider "Ave. F" the main evaluation metric, computed by averaging the F1 scores of MUC, BCUBED, and CEAFE.

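The "Ave. F" metric described above is just the unweighted mean of the three per-metric F1 scores, rounded as in the tables; a direct transcription:

```python
def average_f(f_scores):
    """Unweighted mean of the three coreference F1 scores (one per
    evaluation metric), rounded to one decimal as in Table 9."""
    assert len(f_scores) == 3
    return round(sum(f_scores) / len(f_scores), 1)
```

For example, F1 scores of 12.7, 15.7, and 18.5 give an Ave. F of 15.6, matching the first row of Table 9.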
| Relation | Method | P_A | R_A | F_A | P_R | R_R | F_R |
| --- | --- | --- | --- | --- | --- | --- | --- |
| COREF (Surface) | coreference | 46.5 ± 2.2 | 13.3 ± 0.7 | 19.7 ± 0.9 | 22.7 ± 2.0 | 6.2 ± 0.5 | 9.2 ± 0.7 |
| COREF (Surface) | joint_train | 48.6 ± 1.9 | 15.3 ± 0.7 | 22.0 ± 0.9 | 28.7 ± 1.7 | 8.6 ± 0.5 | 12.5 ± 0.7 |
| COREF (Atom) | coreference | 46.5 ± 2.2 | 13.3 ± 0.7 | 19.7 ± 0.9 | 27.9 ± 2.1 | 7.5 ± 0.5 | 11.2 ± 0.8 |
| COREF (Atom) | joint_train | 48.6 ± 1.9 | 15.3 ± 0.7 | 22.0 ± 0.9 | 33.5 ± 1.8 | 9.8 ± 0.5 | 14.4 ± 0.7 |
| Bridging | bridging | 51.7 ± 1.0 | 25.3 ± 0.6 | 33.2 ± 0.6 | 36.3 ± 0.8 | 19.4 ± 0.6 | 24.5 ± 0.6 |
| Bridging | joint_train | 52.6 ± 1.0 | 24.6 ± 0.6 | 32.7 ± 0.7 | 37.7 ± 0.8 | 19.1 ± 0.6 | 24.7 ± 0.6 |
| TR | bridging | 47.0 ± 2.3 | 16.6 ± 0.9 | 23.0 ± 1.2 | 32.9 ± 1.9 | 13.2 ± 0.8 | 17.3 ± 0.9 |
| TR | joint_train | 52.0 ± 2.3 | 16.0 ± 0.9 | 22.9 ± 1.1 | 37.5 ± 2.2 | 13.2 ± 0.8 | 17.9 ± 1.0 |
| IWOA | bridging | 5.9 ± 1.6 | 3.3 ± 1.1 | 3.7 ± 1.1 | 3.1 ± 1.1 | 2.3 ± 1.1 | 2.3 ± 1.0 |
| IWOA | joint_train | 4.3 ± 1.3 | 2.4 ± 0.7 | 2.7 ± 0.7 | 2.5 ± 1.0 | 0.9 ± 0.4 | 1.1 ± 0.4 |
| IWA | bridging | 55.2 ± 1.2 | 36.8 ± 1.0 | 42.9 ± 0.9 | 37.9 ± 0.9 | 22.7 ± 0.8 | 27.3 ± 0.7 |
| IWA | joint_train | 55.6 ± 1.2 | 35.8 ± 1.0 | 42.3 ± 0.9 | 39.4 ± 1.0 | 22.4 ± 0.8 | 27.5 ± 0.7 |
| Overall | joint_train | 51.6 ± 0.8 | 21.5 ± 0.4 | 29.9 ± 0.5 | 36.3 ± 0.7 | 17.3 ± 0.5 | 23.0 ± 0.5 |
+ +Table 6: Anaphora resolution results based on 10-fold cross validation with state change. Models were trained over 10,000 epochs, and averaged over 3 runs with 5 different random seeds (a total of $5 \times 3 \times 10$ runs). Models are trained for "coreference", "bridging" or "joint_train" (both tasks jointly). "F_A" denotes the F1 score for anaphor prediction, and "F_R" for relation prediction. + +
| Relation | Method | P_A | R_A | F_A | P_R | R_R | F_R |
| --- | --- | --- | --- | --- | --- | --- | --- |
| COREF (Surface) | coreference | 45.6 ± 2.3 | 13.9 ± 0.8 | 20.0 ± 1.0 | 27.9 ± 2.1 | 8.3 ± 0.6 | 11.9 ± 0.8 |
| COREF (Surface) | joint_train | 43.4 ± 2.3 | 12.3 ± 0.7 | 18.1 ± 1.0 | 24.5 ± 1.9 | 6.5 ± 0.5 | 9.7 ± 0.6 |
| COREF (Atom) | coreference | 45.6 ± 2.3 | 13.9 ± 0.8 | 20.0 ± 1.0 | 32.9 ± 2.2 | 9.4 ± 0.6 | 13.7 ± 0.8 |
| COREF (Atom) | joint_train | 43.4 ± 2.3 | 12.3 ± 0.7 | 18.1 ± 1.0 | 29.1 ± 2.1 | 7.6 ± 0.5 | 11.3 ± 0.7 |
| Bridging | bridging | 53.4 ± 1.0 | 24.9 ± 0.5 | 33.3 ± 0.6 | 38.9 ± 0.8 | 19.8 ± 0.6 | 25.7 ± 0.6 |
| Bridging | joint_train | 55.2 ± 1.0 | 25.6 ± 0.6 | 34.3 ± 0.6 | 39.6 ± 0.8 | 19.7 ± 0.5 | 25.8 ± 0.6 |
| TR | bridging | 50.6 ± 2.2 | 17.8 ± 0.9 | 24.3 ± 1.0 | 37.8 ± 2.1 | 14.3 ± 0.8 | 18.9 ± 0.9 |
| TR | joint_train | 53.8 ± 2.4 | 16.5 ± 0.9 | 23.5 ± 1.2 | 36.3 ± 2.2 | 12.9 ± 0.8 | 17.3 ± 0.9 |
| IWOA | bridging | 4.4 ± 1.4 | 1.9 ± 0.6 | 2.3 ± 0.7 | 1.2 ± 0.5 | 0.5 ± 0.2 | 0.6 ± 0.2 |
| IWOA | joint_train | 5.0 ± 1.5 | 2.9 ± 1.1 | 3.3 ± 1.1 | 2.6 ± 1.1 | 1.9 ± 1.0 | 2.0 ± 1.0 |
| IWA | bridging | 56.9 ± 1.2 | 35.4 ± 1.0 | 42.4 ± 0.9 | 40.5 ± 0.9 | 23.1 ± 0.7 | 28.5 ± 0.7 |
| IWA | joint_train | 58.2 ± 1.2 | 37.8 ± 1.0 | 44.4 ± 0.9 | 41.5 ± 0.9 | 23.4 ± 0.7 | 29.0 ± 0.7 |
| Overall | joint_train | 53.2 ± 0.8 | 21.3 ± 0.4 | 30.0 ± 0.5 | 37.9 ± 0.7 | 17.5 ± 0.4 | 23.6 ± 0.5 |
+ +Table 7: Experiments with transfer learning based on 10-fold cross validation with state change. Models were trained over 10,000 epochs, and averaged over 3 runs with 5 different random seeds (a total of $5 \times 3 \times 10$ runs). Models are trained for "coreference", "bridging" or "joint_train" (both tasks jointly). "F_A" denotes the F1 score for anaphor prediction, and "F_R" for relation prediction. + +
| Relation | Method | P_A | R_A | F_A | P_R | R_R | F_R |
| --- | --- | --- | --- | --- | --- | --- | --- |
| COREF (Surface) | coreference | 63.3 ± 0.9 | 37.8 ± 0.8 | 46.7 ± 0.8 | 34.4 ± 0.9 | 20.5 ± 0.6 | 25.3 ± 0.7 |
| COREF (Surface) | joint_train | 66.4 ± 1.0 | 35.4 ± 0.9 | 45.3 ± 0.9 | 39.7 ± 1.0 | 21.0 ± 0.6 | 26.9 ± 0.7 |
| COREF (Atom) | coreference | 63.3 ± 0.9 | 37.8 ± 0.8 | 46.7 ± 0.8 | 47.8 ± 1.1 | 26.3 ± 0.7 | 33.5 ± 0.8 |
| COREF (Atom) | joint_train | 66.4 ± 1.0 | 35.4 ± 0.9 | 45.3 ± 0.9 | 52.2 ± 1.2 | 25.8 ± 0.7 | 33.9 ± 0.8 |
| Bridging | bridging | 55.5 ± 1.3 | 33.1 ± 0.9 | 40.6 ± 0.9 | 38.0 ± 1.0 | 21.5 ± 0.7 | 26.7 ± 0.7 |
| Bridging | joint_train | 58.4 ± 1.2 | 35.8 ± 0.9 | 43.4 ± 0.8 | 40.3 ± 1.0 | 22.3 ± 0.6 | 27.9 ± 0.7 |
| Overall | joint_train | 63.0 ± 0.7 | 35.8 ± 0.6 | 45.2 ± 0.6 | 39.8 ± 0.6 | 22.0 ± 0.5 | 27.9 ± 0.5 |
+ +Table 8: Experiments with transfer learning based on 10-fold cross validation without state change. Models were trained over 10,000 epochs, and averaged over 3 runs with 5 different random seeds (total $5 \times 3 \times 10$ runs). Models are trained for "coreference", "bridging" or "joint_train" (both tasks jointly). "F_A" denotes the F1 score for anaphor prediction, and "F_R" for relation prediction. + +
| State | Method | MUC P | MUC R | MUC F | BCUBED P | BCUBED R | BCUBED F | CEAFE P | CEAFE R | CEAFE F | Ave. F |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| With state | coreference | 30.1 ± 2.0 | 8.8 ± 0.6 | 12.7 ± 0.8 | 37.9 ± 1.8 | 10.8 ± 0.5 | 15.7 ± 0.7 | 46.2 ± 1.7 | 12.1 ± 0.5 | 18.5 ± 0.7 | 15.6 ± 0.7 |
| With state | - w/ transfer | 35.1 ± 2.0 | 11.2 ± 0.6 | 16.0 ± 0.8 | 40.8 ± 1.8 | 12.4 ± 0.6 | 17.8 ± 0.7 | 48.3 ± 1.7 | 12.9 ± 0.5 | 19.6 ± 0.7 | 17.8 ± 0.7 |
| With state | joint_train | 30.4 ± 1.7 | 10.9 ± 0.7 | 15.3 ± 0.9 | 37.1 ± 1.6 | 12.3 ± 0.6 | 17.4 ± 0.8 | 43.0 ± 1.6 | 13.5 ± 0.6 | 19.9 ± 0.8 | 17.5 ± 0.8 |
| With state | - w/ transfer | 36.4 ± 2.2 | 9.5 ± 0.6 | 14.2 ± 0.8 | 41.8 ± 2.0 | 10.5 ± 0.5 | 15.7 ± 0.7 | 46.1 ± 1.8 | 11.4 ± 0.5 | 17.6 ± 0.7 | 15.8 ± 0.7 |
| Without state | coreference | 50.5 ± 1.1 | 32.2 ± 0.8 | 38.7 ± 0.8 | 49.3 ± 0.9 | 30.2 ± 0.7 | 36.5 ± 0.6 | 54.6 ± 0.8 | 28.1 ± 0.7 | 36.5 ± 0.7 | 37.2 ± 0.7 |
| Without state | - w/ transfer | 51.9 ± 1.1 | 30.3 ± 0.8 | 37.7 ± 0.8 | 51.9 ± 1.0 | 28.4 ± 0.6 | 35.7 ± 0.6 | 55.4 ± 0.8 | 27.6 ± 0.5 | 36.5 ± 0.5 | 36.6 ± 0.6 |
| Without state | joint_train | 53.4 ± 1.1 | 32.2 ± 0.9 | 39.5 ± 0.9 | 53.6 ± 1.0 | 30.1 ± 0.8 | 37.5 ± 0.7 | 56.2 ± 0.8 | 29.6 ± 0.7 | 38.2 ± 0.7 | 38.4 ± 0.7 |
| Without state | - w/ transfer | 54.5 ± 1.1 | 30.2 ± 0.8 | 38.2 ± 0.8 | 55.4 ± 1.1 | 28.4 ± 0.6 | 36.6 ± 0.6 | 57.0 ± 0.8 | 29.2 ± 0.6 | 38.1 ± 0.6 | 37.6 ± 0.7 |
+ +Table 9: Results based on standard coreference evaluation metrics, i.e. MUC, BCUBED, and CRAFE, based on 10-fold cross validation without state change. Models were trained over 10,000 epochs, and averaged over 3 runs with 5 different random seeds (a total of $5 \times 3 \times 10$ runs). Models are trained for "coreference", or "joint_train" (both tasks jointly). "Ave. F" denotes the average F1 score of MUC, BCUBED, and CRAFE. \ No newline at end of file diff --git a/whatdoesittaketobakeacakethereciperefcorpusandanaphoraresolutioninproceduraltext/images.zip b/whatdoesittaketobakeacakethereciperefcorpusandanaphoraresolutioninproceduraltext/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..745c250306cf4427f35700c26ece0e295264fb6b --- /dev/null +++ b/whatdoesittaketobakeacakethereciperefcorpusandanaphoraresolutioninproceduraltext/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:149d98e16ab25f412b2c19dc0b00ddcb7050c283d930853169c03a1fe273d96f +size 821339 diff --git a/whatdoesittaketobakeacakethereciperefcorpusandanaphoraresolutioninproceduraltext/layout.json b/whatdoesittaketobakeacakethereciperefcorpusandanaphoraresolutioninproceduraltext/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..5d38bfc48ae166b7971d59963c24c4aadb366cbd --- /dev/null +++ b/whatdoesittaketobakeacakethereciperefcorpusandanaphoraresolutioninproceduraltext/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:22741fdc7eaf76da4f9c751c7e07731e3546ab5c2ebb23f828a61e8ead9630fd +size 421922 diff --git a/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/3f6b5ce8-aea6-402b-a84c-fccce228cf0f_content_list.json b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/3f6b5ce8-aea6-402b-a84c-fccce228cf0f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..6c8aa4514b096d71ff0c48ad92e765a3c0e24f34 --- /dev/null +++ 
b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/3f6b5ce8-aea6-402b-a84c-fccce228cf0f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:03c189b408ea05e3caaee25aa98d8ca58f8ecbc9cf486e898d3a587789ac5104 +size 76436 diff --git a/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/3f6b5ce8-aea6-402b-a84c-fccce228cf0f_model.json b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/3f6b5ce8-aea6-402b-a84c-fccce228cf0f_model.json new file mode 100644 index 0000000000000000000000000000000000000000..014d6a36ac7f3ba1438c3df9b732e83e6582e4ca --- /dev/null +++ b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/3f6b5ce8-aea6-402b-a84c-fccce228cf0f_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f922418710645f70845575ce27570ae477f0a5bbc96d5fd780f595a3dce7a8c0 +size 93601 diff --git a/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/3f6b5ce8-aea6-402b-a84c-fccce228cf0f_origin.pdf b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/3f6b5ce8-aea6-402b-a84c-fccce228cf0f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c2e1c50aca4a4af414927c32e1f2b09e3fd20118 --- /dev/null +++ b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/3f6b5ce8-aea6-402b-a84c-fccce228cf0f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fe35aa83e423e74b3c123520a29c343de090bf85499d8cbc92544dc93a4a8ad6 +size 604606 diff --git a/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/full.md b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..e850b91d5010d683686c00cc5e9561d66196f702 --- /dev/null +++ b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/full.md @@ -0,0 +1,390 @@ +# What is wrong with you?: 
Leveraging User Sentiment for Automatic Dialog Evaluation + +Sarik Ghazarian $^{1*}$ Behnam Hedayatnia $^{2}$ Alexandros Papangelis $^{2}$ Yang Liu $^{2}$ Dilek Hakkani-Tur $^{2}$ + +1 University of Southern California / Information Sciences Institute + +2 Amazon Alexa AI + +sarik@isi.edu + +{behnam,papangea,yangliud,hakkanit}@amazon.com + +# Abstract + +Accurate automatic evaluation metrics for open-domain dialogs are in high demand. Existing model-based metrics for system response evaluation are trained on human annotated data, which is cumbersome to collect. In this work, we propose to use information that can be automatically extracted from the next user utterance, such as its sentiment or whether the user explicitly ends the conversation, as a proxy to measure the quality of the previous system response. This allows us to train on a massive set of dialogs with weak supervision, without requiring manual system turn quality annotations. Experiments show that our model is comparable to models trained on human annotated data. Furthermore, our model generalizes across both spoken and written open-domain dialog corpora collected from real and paid users. + +# 1 Introduction + +Relying on human evaluation to determine the quality of open-domain dialog systems is not an efficient approach in terms of time and cost. Automatic evaluation can be a good replacement for human annotations and can increase the pace of open-domain dialog system development. However, standard word-overlap metrics (BLEU, ROUGE, Perplexity) do not correlate well with human judgements of open-domain dialog systems (Deriu et al., 2020; Liu et al., 2016) because of the diverse set of outputs that can be relevant given a dialog context. 
+ +A solution for better automatic evaluation methods is to train reference-free evaluators that learn how to assess the generated responses given dialog contexts from different aspects such as relevancy (Tao et al., 2018; Ghazarian et al., 2019; Lan et al., 2020), engagement (Ghazarian et al., 2020), fluency (Zhang et al., 2021b; Pang et al., 2020), + +contradiction (Pang et al., 2020; Nie et al., 2021) amongst others. It is also important to get some holistic evaluation at the dialog level in order to assess the dialogs as a whole (Zhang et al., 2021a; Li et al., 2021; Mehri and Eskenazi, 2020; Finch et al., 2021). + +Recently, Mehri and Eskenazi (2020); Eskenazi et al. (2019) have shown the effectiveness of looking into the next user utterance as a proxy to measure the quality of the chatbot's generated responses. See and Manning (2021) have shown that predicting next user satisfaction helps to select more relevant system utterances. Inspired by works in this area, we propose to automatically extract features from the next user utterance, such as sentiment, to use as a proxy to evaluate system responses. The advantage of our method is that we do not need to train on data with human annotations for turn level quality, and instead can rely on available large datasets with automatically extracted annotations. + +Most existing automatic evaluators focus purely on open-domain text-based dialog systems. In addition to textual interactions, we perform experiments on voice-based interactions that were collected via paid and real users. Furthermore, we compute correlations with a real user's own (referred to as first party, 1P) rating when available, in addition to annotations by third party (3P) annotators. Our contributions include: + +1. training an automatic evaluator on the sentiment of the next user utterance in a weakly supervised fashion to evaluate system responses, +2. 
outperforming existing automatic evaluation metrics on both text and voice-based open-domain dialog datasets, +3. a turn-level annotated open-domain text-based dialog dataset that we will release. $^1$ + +![](images/77e656667be2a929d2a5a4de2e90bfde470d57a486c4cc8fa7884857a4ad5109.jpg) +Figure 1: Training/Inference for turn quality estimation. The dotted arrows show how $q_{i}$ , which represents the system turn quality for system response $r_{i}$ , is constructed for training. For our regression model indicated by the red arrow, $s_{i + 1}$ (user sentiment) and $e_{i + 1}$ (user stop) are summed together to create $q_{i}$ . For our classification model indicated by the blue arrow, $q_{i}$ is equal to $t_{i}$ . In the example dialog, the user expresses negative sentiment in $u_{i + 1}$ . The sentiment score -1.97 is used as the reference label $q_{i}$ , indicating the quality of response $r_{i}$ . + +# 2 Methods for Automatic Evaluation + +For turn quality estimation, the task is defined as follows: given a dialog context and a system response in the last turn, $D = [u_{1}, r_{1} \dots u_{i}, r_{i}]$ (where $u_{i}$ and $r_{i}$ are the user utterance and system response respectively for the $i^{th}$ turn in a dialog), determine if $r_{i}$ is an appropriate response. $q_{i}$ indicates the quality of response $r_{i}$ and will be used as our reference label when training the model. Figure 1 shows our model architecture. We train a BERT-base (Devlin et al., 2019) model that encodes the dialog context and the latest system response. We use the pooled representation output by the BERT model and pass it through a linear layer to determine the quality of the response. Depending on the reference label used to train this model, we adopt a classification or regression setup, described below. + +- Classification model trained using turn level annotations. 
When annotations for system responses are available in our training data (a binary label $t_i$ as shown in Figure 1 for response $r_i$, indicating whether the system response is appropriate), we train a classification model using such reference labels.

- Regression model trained using next user sentiment. Obtaining turn level annotations for dialogs is costly. In this work, we explore using weak supervision to approximate response quality. Eskenazi et al. (2019) argued that the follow-up user utterance should be used to evaluate the quality of a system response, as doing so increased agreement amongst human annotators. Motivated by this, we propose to use the sentiment of the next user utterance as a proxy to estimate the quality of the previous system response. In Figure 1, $s_{i+1}$ is the sentiment score for the next user utterance $u_{i+1}$. Note that this information is automatically generated from the user utterance, and thus allows us to leverage data without turn level annotations. Since such sentiment scores are often continuous, we use a regression model for these target labels.

- Next user stop signal. We also examine whether the next user utterance stops a dialog ($e_{i+1}$ in Figure 1). $e_{i+1}$ is 0 if the user stops the dialog and 1 if they continue. We use this as an additional signal by summing it with the sentiment score above to form the target labels for model training.

For dialog level evaluation, we follow previous work and use mean aggregation to estimate dialog level ratings from turn level scores (Lowe et al., 2017; Ghazarian et al., 2019, 2020; Lan et al., 2020; Yeh et al., 2021). In our experiments, we show how aggregated turn level quality and user sentiment scores correlate with dialog level ratings.
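The regression targets described above reduce to one line of arithmetic; the sketch below is an illustrative reconstruction from the description of Figure 1 (the function name is ours, not from a released implementation):

```python
def make_weak_label(next_user_sentiment: float, user_continues: int) -> float:
    """Weak-supervision target q_i for system response r_i.

    next_user_sentiment: s_{i+1}, valence of the next user utterance on
        the [-3, 3] scale produced by the sentiment model.
    user_continues: e_{i+1}, 0 if the user stops the dialog after this
        turn, 1 if they continue.
    """
    # Reg (Sentiment + User Stop): the two signals are summed to form q_i.
    # The Reg (Sentiment) variant uses next_user_sentiment alone.
    return next_user_sentiment + user_continues


# Figure 1's example: the user replies with negative sentiment (-1.97)
# but continues the dialog, so the combined target is -0.97.
q_i = make_weak_label(-1.97, 1)
```

Either variant yields a continuous target, which is why a regression head, rather than a classifier, sits on top of the BERT encoder in this setup.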
+ +# 3 Dialog Datasets + +As described earlier, most previous work in automatic evaluation focuses on text-based open-domain dialog systems (Yeh et al., 2021; Lan et al., 2020; Sinha et al., 2020; Huang et al., 2020; Ghazarian et al., 2020). Additionally most dialog datasets are collected via crowdworkers. While we also evaluate on written (text-based) dialogs, the primary dataset in our work consists of spoken (voice-based) interactions between a dialog system and a real user. + +# 3.1 Open Domain Dialog System + +We first describe the open-domain dialog system used for our spoken dialog data collection. The + +
| Dialog Split | Number of Interactions (Train/Dev/Test) | Avg. Number of Turns (Train/Dev/Test) | 3P turn quality | 3P rating | 1P rating |
| --- | --- | --- | --- | --- | --- |
| PUI | - / - / 87 | - / - / 14.5 |  |  |  |
| RUI-1P | 6215 / 690 / - | 10.3 / 10.8 / - |  |  |  |
| RUI-3P | 500 / 55 / 132 | 11.1 / 10.7 / 14.3 |  |  |  |
| ConTurE | - / - / 119 | - / - / 8.95 |  |  |  |
+ +Table 1: Dataset Statistics for Spoken and Written dialog datasets. RUI (Real User Interactions) + +![](images/4bf24f08afa5b665cf7b179c4c8cb9127a8eaa0e74b41cdb427b9457159ecd24.jpg) +Figure 2: Architecture of our open-domain dialog system. NER = Named Entity Recognition, DA = Dialog Act + +architecture of our dialog system is shown in Figure 2. Every user utterance in the dialog is sent into an ASR system whose output goes through a series of NLU modules that classifies topics, dialog acts, sentiment, extracts entities, and detects if the user utterance is offensive. Our system then calls multiple response generators (called responders) for the given dialog context and logs all the generated response candidates within the State Manager. The final response is selected based on a rule-based ranking strategy, and then sent to the TTS module whose output is presented to the user. + +For popular topics in open domain dialogs, such as movies, music, recent news, we develop template-based responders (highlighted in green in Figure 2) for the given dialog state. An example state and response for the movie domain is: when the user turns mentions a movie name (based on the NER result), we respond with information about the actor, the rating, or the plot of this certain movie. In addition to topic-specific template-based responders, our system includes other template-based responders for some special dialog contexts, such as greetings, topic switches, etc. + +For every user turn, we also apply a neural network-based response generation (NRG) model to produce a response, highlighted in purple in Figure 2. Our NRG Responder is a GPT2-XL (Radford et al., 2019) based model trained on real user-system interactions described in Section 3.2. + +The rule-based response ranker uses predefined logic and selects a template-based responder when + +it is available and the user topic matches that responder, otherwise it uses the NRG response as a fall back. 
Since our system has only a few template-based responders, it uses NRG responses most of the time.

# 3.2 Spoken Dialogs

We deploy the dialog system described above within the Alexa Prize Socialbot framework (Ram et al., 2018) to interact with real users. A user initiates an interaction with our dialog system and consents to have their data collected. A turn within an interaction is specified as a user utterance-system response pair. These interactions end when the user requests to stop the conversation. At the end of each interaction, users are given the opportunity to leave a rating in the range of 1 to 5. We define these ratings as $1P$ ratings, as they come from the same users who interacted with the conversational agent. We denote this dataset as Real User Interactions $(RUI)^{2}$. Our data consists of approximately 100k interactions and 5 million turns. This dataset is used to train the NRG Responder mentioned in the previous section. We discuss its training details in the Appendix.

Not every user leaves a rating; therefore, we take a sample of interactions from $RUI$ that contain user ratings and denote this dataset as $RUI-1P$.

In addition to real user interactions, we form a dataset of interactions from paid users who were
+ +To obtain turn quality labels, we annotate a subset of $RUI-1P$ at the turn level. Given a complete interaction, an experienced annotator was asked to annotate each system response either as 1 or 0, where 1 indicates the response is appropriate and vice versa for 0. Additionally, we ask annotators to leave a dialog level rating in the range of 1 to 5. We define this turn and dialog level annotations as $3P$ turn quality and $3P$ ratings respectively, since they came from annotators who rated other users' interactions. We denote this annotated data as $RUI-3P$ . An example of a turn level annotation is shown in the Appendix. We also perform the same annotation on the $PUI$ data. Table 1 shows the statistics for each of these collections and available annotations for each dataset. + +To obtain sentiment labels, we leverage the BiLSTM sentiment model from (Kim et al., 2020), which was trained on spoken dialog data and automatically tag user utterances with sentiment. The model takes in both audio and textual features and outputs a real-valued valence score on a scale from -3 to 3, which measures the degree of the utterance's positivity/negativity. + +# 3.3 Written Dialogs + +We sample a set of dialogs released from the Interactive Evaluation of Dialog track (Gunasekara et al., 2020) to be annotated for turn quality. These dialogs were collected from invited participants conversing with knowledge-grounded response generation models through textual exchanges, and have been publicly released4. The original authors of this dataset asked Amazon Mechanical Turk (AMT) workers to rate 2200 interactions on multiple dialog level dimensions, such as coher + +ent, informative, overall. The full list of dialog level annotation dimensions is included in the Appendix. However, these dialogs do not have turn level annotations. In order to evaluate our models at the turn level, we sample 119 dialogs with an average length of 8 turns. 
For each turn, we ask three AMT workers to rate whether they dislike, somewhat like or like the Chatbot's response with a score of either 0, 1, or 2 respectively. To help workers judge response quality, we ask them to look at how relevant and interesting a response is. We use majority voting to determine the final score. In the case of ties we use a score from an internal author. The Krippendorff's alpha score is 0.31 representing fair agreement between annotators. We denote these assessments as $3P$ turn quality since the AMT workers are rating other workers' dialogs. We denote this dataset as Conversational Turns Evaluation (ConTurE) and publicly release it. $^5$ + +# 4 Results and Discussions + +We compare our method with a suite of open source models from (Yeh et al., 2021) $^4$ including RUBER, BERT-RUBER, PONE, PredictiveEngagement and FED (Tao et al., 2018; Ghazarian et al., 2019; Lan et al., 2020; Ghazarian et al., 2020; Mehri and Eskenazi, 2020). + +Table 2 shows the automatic turn level quality estimation, measured using both Pearson and Spearman correlation against turn level annotations on three datasets for different methods. On the spoken dialog testsets (RUI-3P and PUI) the baseline models perform poorly. In contrast, our Classification(3P) model trained using $3P$ turn quality achieves the highest correlation (0.29/0.28) on RUI-3P. This can be partly explained by the matched training and testing setup. We observe promising results for the Reg (Sentiment + User Stop) model which was trained with next user sentiment information combined with stop signal which achieves the highest correlation on the PUI test set and a correlation of (0.22/0.23) on RUI-3P. This demonstrates the effectiveness of weak supervision. We compare different training sizes RUI-1P (40%) versus RUI-1P and show the expected benefit of more data for model training. We also see that our models outperform the baseline models on the ConTurE testset. 
It is important to note that all the baseline models have been designed and evaluated + +
| Training Set | Model (Ref label) | RUI-3P (test set) Pearson | RUI-3P (test set) Spearman | PUI Pearson | PUI Spearman | ConTurE Pearson | ConTurE Spearman |
| --- | --- | --- | --- | --- | --- | --- | --- |
| - | RUBER | -0.08 | -0.07 | -0.1 | -0.1 | -0.01 | -0.03 |
| - | BERT-RUBER | 0.01 | 0.02 | -0.02 | -0.02 | -0.007 | 0.004 |
| - | PONE | 0.01 | 0.004 | -0.02 | -0.03 | 0 | 0.01 |
| - | PredictiveEng | -0.11 | -0.11 | -0.06 | -0.05 | -0.11 | -0.09 |
| - | FED | -0.006 | -0.02 | -0.03 | -0.04 | 0.11 | 0.10 |
| **Our method** |  |  |  |  |  |  |  |
| RUI-3P | Classification (3P) | 0.29 | 0.28 | 0.23 | 0.24 | -0.01 | 0.11 |
| RUI-1P | Reg (Sentiment) | 0.15 | 0.12 | 0.19 | 0.16 | 0.34 | 0.34 |
| RUI-1P | Reg (Sentiment + User Stop) | 0.22 | 0.23 | 0.35 | 0.3 | 0.3 | 0.33 |
| RUI-1P (40%) | Reg (Sentiment + User Stop) | 0.2 | 0.22 | 0.29 | 0.24 | 0.31 | 0.32 |
using written dialogs, and though our models were fine-tuned only on spoken dialogs, they are able to generalize to a different modality. FED has been shown to be a good dialog-level evaluator (Yeh et al., 2021). However, we see in Table 2 that FED achieves low performance for turn-level evaluation. This matches the conclusion of Mehri and Eskenazi (2020), who point out that FED captures dialog-level qualities from its training data, Reddit, better than turn-level qualities.

Table 3 shows the correlation results of the aggregated turn level scores with $3P$ ratings and $1P$ ratings on the spoken dataset. From the first row, we can see that there is a moderate positive correlation between the aggregated mean of $3P$ turn quality and $3P$ ratings (0.50/0.46), but a very low positive correlation with $1P$ ratings (0.16/0.12). This may be due to the fact that Likert scale ratings can have lower inter-annotator agreement (Belz and Kow, 2010). Additionally, the 3P annotators had access to the whole interaction and could re-read the context, in contrast to 1P users, who may forget what happened earlier in the interaction as it is spoken. Another reason is that 3P annotators only read the transcript of the dialog for turn or dialog evaluation, and may miss tones in the utterances that can affect 1P user ratings. When using the user sentiment scores, we can see that through mean aggregation they have positive correlation with both $3P$ ratings (0.48/0.46) and $1P$ ratings (0.38/0.37). The higher correlation of user sentiment (as opposed to $3P$ turn quality) with $1P$ ratings is partly because of the different signals used in 3P annotation, as discussed above. These results suggest sentiment can be used to estimate dialog level ratings, as done in previous work such as Kim et al. (2020).
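The dialog-level estimation used above is just mean aggregation of turn-level scores followed by a correlation. A self-contained sketch with toy numbers (not the paper's data; `pearson` is a plain reimplementation of what `scipy.stats.pearsonr` computes):

```python
def dialog_score(turn_scores):
    """Mean-aggregate per-turn scores into a single dialog-level score."""
    return sum(turn_scores) / len(turn_scores)


def pearson(xs, ys):
    """Plain Pearson correlation; scipy.stats.pearsonr returns the same value."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5


# Toy data, NOT the paper's: binary 3P turn-quality labels per dialog,
# and the corresponding 1-5 dialog-level ratings.
turn_quality_per_dialog = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 1, 1], [0, 1, 0, 0]]
dialog_ratings = [4, 2, 5, 3]

aggregated = [dialog_score(turns) for turns in turn_quality_per_dialog]
r = pearson(aggregated, dialog_ratings)  # higher aggregated quality tracks higher ratings
```

Spearman correlation is obtained the same way after replacing the raw values with their ranks.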
Overall, we see that the sentiment of the next user utterance serves as a reasonable proxy for the quality of the previous system response; hence, when there is not much data with turn level quality annotations, we can train models using weak supervision derived from the next user utterance. In this study, we use sentiment scores obtained from user utterances in speech-based dialogs; therefore, acoustic features were used to obtain such sentiment information. Since speech-based sentiment and emotion recognition have been widely studied, training the sentiment model for user utterances does not require much additional annotation, and thus we can rely on existing models. We also explored using sentiment based on text alone, but observed some issues in our preliminary study. For example, when users reply with a 'no' to a question, it is classified as negative; however, this may not indicate a problem with the previous system response. We plan to further investigate this in future work, which will allow us to better utilize the more abundant text-based dialog data. Example outputs from both FED and our model are shown in the Appendix.

Table 2: Correlation between both baseline and our model outputs against $3P$ turn quality for spoken and written datasets. For our method, the reference labels used for the Classification or Reg (Regression) models are presented.
|  | 3P Ratings (P) | 3P Ratings (S) | 1P Ratings (P) | 1P Ratings (S) |
| --- | --- | --- | --- | --- |
| 3P turn quality | 0.50 | 0.46 | 0.16 | 0.12 |
| User sentiment | 0.48 | 0.46 | 0.38 | 0.37 |
Table 3: Correlation between turn level information (3P turn quality and user turn sentiment) and dialog level ratings on RUI-3P. P=Pearson, S=Spearman.

# 5 Conclusion

In this work, we show that instead of training on manually annotated data, we can train on the sentiment of the next user utterance in a weakly supervised manner to evaluate system responses. We show that our model generalizes across domains and performs well on a written dialog dataset. In future work we will investigate methods beyond simple aggregation for dialog level estimation, as well as using more text-based dialog data.

# 6 Ethics and Broader Impact

Our work involves leveraging user sentiment to evaluate the quality of system responses. We acknowledge that we are using data from real users who have not been paid for these interactions. We also acknowledge there may be biases in the demographics of the user population. We conducted our ConTurE annotation through Amazon Mechanical Turk. We pay Turkers $12 per hour, which is well above the federal minimum wage.

# References

Anja Belz and Eric Kow. 2010. Comparing rating scales and preference judgements in language evaluation. In Proceedings of the 6th International Natural Language Generation Conference.
Jan Deriu, Alvaro Rodrigo, Arantxa Otegi, Guillermo Echegoyen, Sophie Rosset, Eneko Agirre, and Mark Cieliebak. 2020. Survey on evaluation methods for dialogue systems. Artificial Intelligence Review, pages 1-56.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Maxine Eskenazi, Shikib Mehri, Evgeniia Razumovskaia, and Tiancheng Zhao. 2019.
Beyond turing: Intelligent agents centered on the user. arXiv preprint arXiv:1901.06613. +James D Finch, Sarah E Finch, and Jinho D Choi. 2021. What went wrong? explaining overall dialogue quality through utterance-level impacts. In Proceedings of the 3rd Workshop on Natural Language Processing for Conversational AI, pages 93-101. +Sarah E Finch, James D Finch, Ali Ahmadvand, Xiangjue Dong, Ruixiang Qi, Harshita Sahijwani, Sergey Volokhin, Zihan Wang, Zihao Wang, Jinho D Choi, et al. 2020. Emora: An inquisitive social chatbot who cares for you. Alexa Prize Proceedings. +Sarik Ghazarian, Johnny Wei, Aram Galstyan, and Nanyun Peng. 2019. Better automatic evaluation of open-domain dialogue systems with contextualized embeddings. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 82-89. +Sarik Ghazarian, Ralph Weischedel, Aram Galstyan, and Nanyun Peng. 2020. Predictive engagement: An efficient metric for automatic evaluation of open-domain dialogue systems. In Proceedings of the + +AAAI Conference on Artificial Intelligence, volume 34, pages 7789-7796. +Chulaka Gunasekara, Seokhwan Kim, Luis Fernando D'Haro, Abhinav Rastogi, Yun-Nung Chen, Mihail Eric, Behnam Hedayatnia, Karthik Gopalakrishnan, Yang Liu, Chao-Wei Huang, et al. 2020. Overview of the ninth dialog system technology challenge: Dstc9. arXiv preprint arXiv:2011.06486. +Lishan Huang, Zheng Ye, Jinghui Qin, Liang Lin, and Xiaodan Liang. 2020. Grade: Automatic graph-enhanced coherence metric for evaluating open-domain dialogue systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9230-9240. +Juraj Juraska, Kevin K Bowden, Lena Reed, Vrindavan Harrison, Wen Cui, Omkar Patil, Rishi Rajasekaran, Angela Ramirez, Cecilia Li, Eduardo Zamora, et al. 2021. Athena 2.0: Contextualized dialogue management for an alexa prize socialbot. 
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP). +Yelin Kim, Joshua Levy, and Yang Liu. 2020. Speech sentiment and customer satisfaction estimation in socialbot conversations. Proc. Interspeech 2020, pages 1833-1837. +Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. +Tian Lan, Xian-Ling Mao, Wei Wei, Xiaoyan Gao, and Heyan Huang. 2020. Pone: A novel automatic evaluation metric for open-domain generative dialogue systems. ACM Transactions on Information Systems (TOIS), 39(1):1-37. +Zekang Li, Jinchao Zhang, Zhengcong Fei, Yang Feng, and Jie Zhou. 2021. Conversations are not flat: Modeling the dynamic information flow across dialogue utterances. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 128-138. +Chia-Wei Liu, Ryan Lowe, Iulian Vlad Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122-2132. +Ryan Lowe, Michael Noseworthy, Iulian V Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic turing test: Learning to evaluate dialogue responses. arXiv preprint arXiv:1708.07149. +Shikib Mehri and Maxine Eskenazi. 2020. Unsupervised evaluation of interactive dialog with dialogpt. In Proceedings of the 21th Annual Meeting of the + +Special Interest Group on Discourse and Dialogue, pages 225-235. +Yixin Nie, Mary Williamson, Mohit Bansal, Douwe Kiela, and Jason Weston. 2021. I like fish, especially dolphins: Addressing contradictions in dialogue modeling. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1699-1713.
Bo Pang, Erik Nijkamp, Wenjuan Han, Linqi Zhou, Yixian Liu, and Kewei Tu. 2020. Towards holistic and automatic evaluation of open-domain dialogue generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3619-3629.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
Ashwin Ram, Rohit Prasad, Chandra Khatri, Anu Venkatesh, Raefer Gabriel, Qing Liu, Jeff Nunn, Behnam Hedayatnia, Ming Cheng, Ashish Nagar, et al. 2018. Conversational AI: The science behind the Alexa Prize. arXiv preprint arXiv:1801.03604.
Abigail See and Christopher Manning. 2021. Understanding and predicting user dissatisfaction in a neural generative chatbot. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 1-12, Singapore and Online. Association for Computational Linguistics.
Koustuv Sinha, Prasanna Parthasarathi, Jasmine Wang, Ryan Lowe, William L Hamilton, and Joelle Pineau. 2020. Learning an unreferenced metric for online dialogue evaluation. arXiv preprint arXiv:2005.00583.
Chongyang Tao, Lili Mou, Dongyan Zhao, and Rui Yan. 2018. RUBER: An unsupervised method for automatic evaluation of open-domain dialog systems. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. TransferTransfo: A transfer learning approach for neural network based conversational agents. arXiv preprint arXiv:1901.08149.
Yi-Ting Yeh, Maxine Eskenazi, and Shikib Mehri. 2021. A comprehensive assessment of dialog evaluation metrics.
In The First Workshop on Evaluations and Assessments of Neural Conversation Systems, pages 15-33. +Chen Zhang, Yiming Chen, Luis Fernando D'Haro, Yan Zhang, Thomas Friedrichs, Grandee Lee, and Haizhou Li. 2021a. Dynaeval: Unifying turn and dialogue level evaluation. In Proceedings of the + +59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5676-5689. +Chen Zhang, Luis Fernando D'Haro, Rafael E Banchs, Thomas Friedrichs, and Haizhou Li. 2021b. Deep am-fm: Toolkit for automatic dialogue evaluation. In Conversational Dialogue Systems for the Next Decade, pages 53-69. Springer. + +# A Appendices + +# A.1 Hyperparameters for the turn level quality estimation model + +All our BERT models were finetuned with a batch size of 8 and a learning rate of 1e-5 with the Adam optimizer (Kingma and Ba, 2014). We train each model for 10 epochs and select the best model by computing correlation on the RUI-3P (dev set). + +# A.2 NRG responder hyperparameters + +We train our NRG Responder models on the RUI dataset described in Section 3.2. This dataset is split into a 80/10/10 train, valid, test set. Our model is initialized with GPT2-XL (Radford et al., 2019) and is finetuned with a batch size of 2 and a learning rate of $6.25\mathrm{e} - 5$ with the Adam optimizer. We train the model for 3 epochs and we finetune both the language modeling head and multiple choice Head of GPT2 in a TransferTransfo fashion (Wolf et al., 2019). For the multiple choice head, one randomly selected negative candidate was used. We leverage the HuggingFace's transformers library for all our models. $^6$ + +# A.3 Turn level annotation example + +# Example + +Dialog History: + +System: I can talk about news, movies, music. What topic would you like to talk about today? + +User: news + +System: Sure. I love sports! what is the sport that you watched the most? 
+ +Turn quality: 0 + +Reason for annotation: The system response was off-topic + +Figure 3: Example of 3P turn annotations. Due to privacy concerns, this example dialog is from an internal author. + +# A.4 Turn level statistics + +# ConTurE + +
| Score | Percentage of Turns |
| --- | --- |
| 0 | 30.7% |
| 1 | 22.2% |
| 2 | 47% |

# RUI-turn

| Score | Percentage of Turns |
| --- | --- |
| 0 | 35.3% |
| 1 | 64.7% |
Table 4: Statistics of turn level annotations for both the ConTurE and RUI-3P datasets.

# A.5 Dialog level scores for ConTurE

We take the mean of the turn level annotations and compute the correlation against each dialog level dimension in the original DSTC9 dataset. We see that the annotations have moderate correlation with all the parameters, the highest being with 'human (overall)' (0.45/0.48). This shows that even though the turn and dialog level annotations were left by two different Turkers, the turn annotations seem reliable.
| Dialog level parameter | Pearson | Spearman |
| --- | --- | --- |
| consistent | 0.38 | 0.40 |
| likeable | 0.42 | 0.45 |
| diverse | 0.23 | 0.25 |
| informative | 0.30 | 0.34 |
| coherent | 0.32 | 0.37 |
| human (overall) | 0.45 | 0.48 |
| understanding | 0.36 | 0.42 |
| flexible | 0.33 | 0.40 |
| topic depth | 0.34 | 0.35 |
| error recovery | 0.37 | 0.40 |
| inquisitive | 0.20 | 0.27 |
Table 5: Correlation between ConTurE turn-level and dialog-level scores.

# A.6 Example outputs of turn quality estimation

We pick a mixture of examples to show where our model works and fails. For each example we show the ground-truth turn quality left by our human annotators and the score from each model.

# Example 1

Dialog History:

User: Who is Donald Trump?

System: that's a funny question, well it isn't really surprising that he isn't one of the three wealthiest presidents in american history, i know they measure by inflation but still.

Human Annotation (0-2) = 2

Classification(3P) = 0

Reg(Sentiment) = 0.14

Reg(Sentiment + User Stop) = 1.24

# Example 2

Dialog History:

User: Covid19 is a virus that's spreading all over the world

System: i guess it is his book then, do you like to read?

Human Annotation (0-2) = 0

Classification(3P) = 0

Reg(Sentiment) = -0.12

Reg(Sentiment + User Stop) = 1.04

Figure 4: Examples from both our Classification and Reg (Regression) models on the ConTurE dataset. In example 1 the classification model incorrectly predicts the system response as a bad response. For the Reg models there is no threshold set on the output, and scores must be interpreted in a relative manner. We see that the Reg models' output score in example 2 is lower than that in example 1, showing the model captures that the system response in example 2 was of worse quality than the one in example 1.

# Example 1

Dialog History:

...

System: i see. i find sports so exciting.

User: i don't

System: well, lets switch the subject. i am fascinated with the world. are you interested in travel?
Human Annotation (0,1) = 1

Classification(3P) = 1

Reg(Sentiment) = 0.004

Reg(Sentiment + User Stop) = 0.96

# Example 2

Dialog History:

System: i like dogs as well. i think they are much better than dogs. so, what's one of your favorite places you've been?

User: i don't like dogs

System: i am not a big fan of dogs either. i would rather stay in the united states. so, what did you eat for dinner last night?

Human Annotation (0,1) = 0

Classification(3P) = 0

Reg(Sentiment) = -0.15

Reg(Sentiment + User Stop) = 0.67

# Example 3

Dialog History:

...

System: i recently read a really nice uplifting news article. want to hear about it?

User: yes

System: sure. here's some positive, uplifting news i found. every day, we find our balance. want to hear more?

User: yes

System: that's great. what do you think about it?

Human Annotation (0,1) = 0

Classification(3P) = 0

Reg(Sentiment) = 0.20

Reg(Sentiment + User Stop) = 1.20

Figure 5: Examples from both our Classification and Reg (Regression) models on a spoken interaction. Due to privacy concerns, this example interaction is from an internal author. The classification model is able to predict all examples correctly, which makes sense as it was trained on similar annotations. Both Reg models output a lower score for the system response in example 2 than in example 1, showing they are both able to capture that the system response in example 2 is of worse quality. However, in example 3 the Reg models incorrectly give a high score to the system response that ignores the user's request.
A higher output score from the FED model means the response is more appropriate. Below are examples where the FED model incorrectly predicts that the system response is appropriate even though it was marked as inappropriate by the human annotator.

# Example 1

Dialog History:

System: probably!

User: you're not a good chat bot

System: i haven't been there myself

Human Annotation (0-2) = 0

FED = 4.13

Reg(Sentiment) = -0.01

# Example 2

Dialog History:

...

System: do you like horses?

User: No

System: no, not really. you?

Human Annotation (0-2) = 0

FED = 4.39

Reg(Sentiment) = 0.09

# Example 3

Dialog History:

System: hello! how are you? i just found out that judge judy makes $ 123,000 per episode!

User: oh wow! that's a lot! how much did oprah make?

System: she gets paid $ 123

Human Annotation (0-2) = 0

FED = 4.18

Reg(Sentiment) = 0.17

# Example 4

Dialog History:

System: I love gold! Especially when I buy trucks with it!

User: hell yeah brother

System: Yeah. I also like gold, money, power metal, and trucks.

Human Annotation (0-2) = 0

FED = 4.03

Reg(Sentiment) = 0.29

Figure 6: In both examples 1 and 2 the last system response ignores the previous user utterance and therefore is marked as inappropriate. The FED model assigns a high score to these system responses. For example 3 both the FED and Reg(Sentiment) models incorrectly give a high score to the system response, which is factually incorrect. For example 4 both the FED and Reg(Sentiment) models incorrectly give a high score to the system response, which shows repetition.
![](images/139bbe7dd1ebd0a83c76d472a7a7d3ba8a15647eee06b3c1c6d40e4d7cdd99e7.jpg)
(a) Regression (Sentiment)

![](images/512eeb42ce1c9f639967fcf6cc0faf11d5a4cb86a647f0655c7bae55492e6e04.jpg)
(b) Regression (Sentiment + User Stop)

Figure 7: We plot the model output scores for the Regression (Sentiment) and Regression (Sentiment + User Stop) models for each reference label, i.e., Class 0 and Class 1. We see that for Regression (Sentiment + User Stop) in Figure 7b the separation between model outputs for Class 0 and Class 1 becomes more pronounced compared to Regression (Sentiment) in Figure 7a.

![](images/d40348e57c42e276604881d63c5bcd8b81aec41927d10d0b9ffdaa3d416c9402.jpg)
Figure 8: We plot the model probability outputs from the Classification(3P) model for each reference label, i.e., Class 0 and Class 1. We use a threshold of 0.5 such that any score at or above it is considered a good response (1) and vice versa. We see that for the reference label Class 1 most probability scores are below the threshold.
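The 0.5 decision rule described for Figure 8 can be sketched as below. This is an illustrative sketch, not the paper's code; the function names are assumptions.

```python
def threshold_predictions(probs, threshold=0.5):
    """Map model probability outputs to binary response quality:
    scores at or above `threshold` count as a good response (1), else bad (0)."""
    return [1 if p >= threshold else 0 for p in probs]

def agreement(preds, labels):
    """Fraction of thresholded predictions that match the reference labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)
```

With this rule, a reference-label Class 1 example whose probability falls below 0.5 (as most do in Figure 8) is counted as a miss by `agreement`.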
\ No newline at end of file diff --git a/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/images.zip b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..1c6bd44599c491e214cd64da18ee62ee9c1fc8e2 --- /dev/null +++ b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a5d780fff426e1c79bb0f73bace415c2d1b1b9e76a184c5361f28af9acac7978 +size 320129 diff --git a/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/layout.json b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..30fafb6901622eb064628e4db1e93fe0650f8d04 --- /dev/null +++ b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:826f5efa711480dddc7b2cf3ae2c99307ed6e5a7a103a9ea2b5a474fa30d086e +size 373966 diff --git a/whattolearnandhowtowardeffectivelearningfromrationales/ade13865-242c-4e8b-88fd-b3ce5fc00cc7_content_list.json b/whattolearnandhowtowardeffectivelearningfromrationales/ade13865-242c-4e8b-88fd-b3ce5fc00cc7_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..7527bda93ee3852decc0d53d4bf284551fff05fc --- /dev/null +++ b/whattolearnandhowtowardeffectivelearningfromrationales/ade13865-242c-4e8b-88fd-b3ce5fc00cc7_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:91f848dad30cd4cbf418e3de040488048e312b1c862425499c626848b735268b +size 107535 diff --git a/whattolearnandhowtowardeffectivelearningfromrationales/ade13865-242c-4e8b-88fd-b3ce5fc00cc7_model.json b/whattolearnandhowtowardeffectivelearningfromrationales/ade13865-242c-4e8b-88fd-b3ce5fc00cc7_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..8c55fe5b93f909019703ad8b9f72107a959c9d21 --- /dev/null +++ b/whattolearnandhowtowardeffectivelearningfromrationales/ade13865-242c-4e8b-88fd-b3ce5fc00cc7_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:37267828779f30029bc572cd3bf76eb4c28b677cd4ff0177b94cdc15e20d429d +size 128155 diff --git a/whattolearnandhowtowardeffectivelearningfromrationales/ade13865-242c-4e8b-88fd-b3ce5fc00cc7_origin.pdf b/whattolearnandhowtowardeffectivelearningfromrationales/ade13865-242c-4e8b-88fd-b3ce5fc00cc7_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..08be2eb8f524626ad474876c5399aafa9599482f --- /dev/null +++ b/whattolearnandhowtowardeffectivelearningfromrationales/ade13865-242c-4e8b-88fd-b3ce5fc00cc7_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a92ee1b62d4620ca14f40a959c5de13c6851536b90c3b14bbb0c789c791a307a +size 1104866 diff --git a/whattolearnandhowtowardeffectivelearningfromrationales/full.md b/whattolearnandhowtowardeffectivelearningfromrationales/full.md new file mode 100644 index 0000000000000000000000000000000000000000..45d05ee5c91861b823b825d5401760b47177d7a4 --- /dev/null +++ b/whattolearnandhowtowardeffectivelearningfromrationales/full.md @@ -0,0 +1,468 @@ +# What to Learn, and How: Toward Effective Learning from Rationales + +Samuel Carton + +University of Chicago + +carton@uchicago.edu + +Surya Kanoria + +University of Colorado Boulder + +surya.kanoria@colorado.edu + +# Chenhao Tan + +University of Chicago + +chenhao@uchicago.edu + +# Abstract + +Learning from rationales seeks to augment model prediction accuracy using human-annotated rationales (i.e. subsets of input tokens) that justify their chosen labels, often in the form of intermediate or multitask supervision. While intuitive, this idea has proven elusive in practice. 
We make two observations about human rationales via empirical analyses: 1) maximizing rationale supervision accuracy is not necessarily the optimal objective for improving model accuracy; 2) human rationales vary in whether they provide sufficient information for the model to exploit for prediction. Building on these insights, we propose several novel loss functions and learning strategies, and evaluate their effectiveness on three datasets with human rationales. Our results demonstrate consistent improvements over baselines in both label and rationale accuracy, including a $3\%$ accuracy improvement on MultiRC. Our work highlights the importance of understanding properties of human explanations and exploiting them accordingly in model training. + +# 1 Introduction + +In the past several years, explainability has become a prominent issue in machine learning, addressing concerns about the safety and ethics of using large, opaque models for decision-making. As interest has grown in explanations for understanding model behavior, so has interest grown in soliciting gold-standard explanations from human annotators and using them to inject useful inductive biases into models (Hase and Bansal, 2021). Many such explanation datasets have become available recently (Wiegreffe and Marasović, 2021). + +A common format for explanations in NLP is the rationale, a subset of input tokens that are relevant to the decision. A popular architecture for generating such explanations is the rationale model, + +# (A) Unsupervised rationale + +[CLS] susan wanted to have a birthday party. she called all of her friends . she has five friends . her mom said that susan can invite them all to the party . her first friend could not go to the party because she was sick . her second friend was going out of town . her third friend was not so sure if her parents would let her . the fourth friend said maybe . the fifth friend could go to the party for sure . 
susan was a little sad , on the day of the party , all five friends showed up . each friend had a present for susan . susan was happy and sent each friend a thank you card the next week . [SEP] how many people did susan call ? | 5 [SEP] + +# Prediction: False + +# (B) Human rationale + +[CLS] susan wanted to have a birthday party. she called all of her friends . she has five friends , her mom said that susan can invite them all to the party . her first friend could not go to the party because she was sick . her second friend was going out of town . her third friend was not so sure if her parents would let her . the fourth friend said maybe . the fifth friend could go to the party for sure . susan was a little sad . on the day of the party , all five friends showed up . each friend had a present for susan . susan was happy and sent each friend a thank you card the next week . [SEP] how many people did susan call ? | 5 [SEP] + +# Prediction: True + +Table 1: An example of unsupervised versus human-provided rationale in MultiRC. The unsupervised model struggles to localize its attention and makes an incorrect prediction. The same model makes a correct prediction by only looking at the human rationale. + +an explain-then-predict architecture which first extracts a rationale from the input and then makes a prediction from the rationale-masked text (that is, only the tokens included in rationale) (Lei et al., 2016; DeYoung et al., 2019). Without external supervision on this rationale, we typically pursue parsimony via a sparsity objective. Table 1A shows an example unsupervised rationale. + +With the benefit of a human-annotated rationale for the true label, we can begin to understand model mistakes in terms of reliance on inappropriate features (and correct them). In the example above, the unsupervised rationale suggests that the model's + +mistake is due to missing key information about how many friends Susan has (i.e., "five"). 
Forcing the model to see these key tokens by only using the human rationale as the input fixes this mistake (Table 1B). Prior work has shown that this is not a fluke. For some datasets, human rationales consistently improve model accuracy over baseline when used as an input mask, by orienting model attention toward informative tokens and away from confounding ones (Carton et al., 2020). + +Knowing that human rationales contain useful predictive signal, the key question becomes: can we improve model prediction accuracy by incorporating human rationales into training? + +Numerous approaches to using human rationales in training have been tried, including: regularizing the parameters of a (linear) model (Zaidan et al., 2007); regularizing model output gradients (Ross et al., 2017); regularizing internal transformer attention weights (Jayaram and Allaway, 2021); and direct supervision on a rationale model (DeYoung et al., 2019), which serves as our baseline approach in this paper. These approaches have generally failed to significantly improve model prediction accuracy (Hase and Bansal, 2021). + +A quality these prior approaches have in common is treating human rationales as internally and collectively uniform in predictive utility. That is, any token included in the human rationale is treated as equally important to include in the input representation; vice versa for tokens excluded. Furthermore, all human rationales are weighted equally. + +The reality, we demonstrate empirically via ablation studies in §4, is that the predictive utility of human rationales is distributed unevenly between tokens in a rationale, and unevenly between rationales in a dataset. Based on this analysis, we suggest that learning objectives which weight every token equally (accuracy in the case of direct supervision), and every rationale equally, are not optimal for improving downstream model accuracy. 
+ +We operationalize these hypotheses in four distinct modifications to the baseline rationale model architecture. Three of these modify the naive token-wise accuracy supervision objective, and the fourth implements "selective supervision", ignoring unhelpful human rationales in training. + +Evaluating on three datasets, our proposed methods produce varying levels of improvement over both a baseline BERT model and a baseline BERT-to-BERT supervised rationale model, ranging from + +substantial for MultiRC $(3\%)$ to marginal for ESNLI $(0.4\%)$ . Additionally, our methods also improve rationale prediction performance. + +Taken together, our results demonstrate the importance of considering the variance of predictive utility both between and within human rationales as a source of additional training signal. Our proposed modifications help pave the way toward truly effective and general learning from rationales. + +# 2 Related Work + +# 2.1 Rationalization + +The extractor-predictor rationale model proposed by Lei et al. (2016) and described in more detail in §5, is an approach to feature attribution, which is one among many families of explanation methods (see Vilone and Longo (2020) for a recent survey). + +Recent work has extended the original architecture in various ways, including replacing the use of reinforcement learning with differentiable binary variables (Bastings et al., 2020; DeYoung et al., 2019), alternatives to the original sparsity objective (Paranjape et al., 2020; Antognini and Faltings, 2021), and additional modules which change the interaction dynamics between the extractor and predictor (Carton et al., 2018; Yu et al., 2019; Chang et al., 2020). Pipeline models (Lehman et al., 2019) are similar, but train the two modules separately rather than end-to-end. + +Rationale models are a powerful approach to NLP explanations because of how specific objectives can be put on the properties of the rationale, but they have some downsides. 
First, they are unstable, the extractor often collapsing to all-0 or all-1 output (DeYoung et al., 2019; Yu et al., 2019). We introduce an engineering trick in §5 that appears to lessen this risk. Also, with end-to-end training comes the risk of information leakage between the extractor and predictor (Jethani et al., 2021; Hase et al., 2020; Yu et al., 2021). This idea of leakage plays a part in how we estimate explanation predictive utility in section §4. + +# 2.2 Learning from Explanations + +Wiegreffe and Marasović (2021) present a review of explainable NLP datasets, a number of which have been incorporated into the ERASER collection and benchmark (DeYoung et al., 2019). + +Early work in learning from human explanations include Zaidan et al. (2007) and Druck et al. (2009), and a line of work termed "explanatory debugging" + +(Kulesza et al., 2015; Lertvittayakumjorn and Toni, 2021). More recent work spans a variety of approaches, categorized by Hase and Bansal (2021) into regularization (e.g., Ross et al. (2017)), data augmentation (e.g., Hancock et al. (2018)), and supervision over intermediate outputs (e.g., DeYoung et al. (2019); Jayaram and Allaway (2021)). + +Significant improvements to model accuracy as a result of explanation learning have proven elusive. Studies occasionally claim such improvement, such as Rieger et al. (2020), which observes general improvements on a medical vision task. More commonly their claims pertain to secondary objective such as explanation quality (e.g., Plumb et al. (2020)), robustness (e.g., Ross et al. (2017), Srivastava et al. (2020)), or few-shot learning (e.g., Yao et al. (2021)). Hase and Bansal (2021) gives an overview of the problem and discusses circumstances under which learning from explanations is liable to work. Our paper contributes to this discussion by considering the variance of training signal quality both within and between human rationales, and how to exploit these variances. 
# 3 Data

We consider three datasets in this work. All three are document-query text comprehension tasks, where the task is to determine whether the query is true or false given the document. We use the train/development/test splits offered by DeYoung et al. (2019). Table 2 shows the basic statistics of each dataset based on the training set.

- MultiRC (Khashabi et al., 2018). A reading comprehension dataset of 32,091 document-question-answer triplets that are true or false. Rationales consist of 2-4 sentences from a document that are required to answer the given question.
- FEVER (Thorne et al., 2018). A fact verification dataset of 76,051 snippets of Wikipedia articles paired with claims that they support or refute. Rationales consist of a single contiguous snippet, so the basic unit of rationale is the sentence.
- E-SNLI (Camburu et al., 2018). A textual entailment dataset of 568,939 short snippet-claim pairs, where each snippet refutes, supports, or is neutral toward its claim. Input texts are much shorter than in MultiRC and FEVER, and rationales are at the token level.
| Dataset | Text length | Rationale length | Rationale granularity |
| --- | --- | --- | --- |
| MultiRC | 336.0 | 52.0 | sentence |
| FEVER | 355.9 | 47.0 | sentence |
| E-SNLI | 23.5 | 6.1 | token |
+ +Table 2: Basic statistics of the datasets. + +# 4 Analysis + +To understand properties of human rationales for the purpose of learning from rationales, we analyze the effect of human rationales when they are used as inputs to a trained model. + +# 4.1 Human Rationales have Predictive Utility + +A basic question about the viability of learning from rationales is whether human rationales bear the potential for improving model performance. That is, do human explanations successfully reveal useful tokens while occluding confounding tokens, such that a model evaluated only on the revealed tokens is able to get improved performance relative to the full input? We refer to such rationale-redacted inputs as rationalized inputs. + +We define sufficiency-accuracy (SA) as how accurate the model is across a corpus of rationalized input. This is an aggregate measure, similar to sufficiency as defined in DeYoung et al. (2019) but focused on absolute performance rather than similarity to baseline model output. We refer to the sufficiency-accuracy of the human rationales as human sufficiency-accuracy (HSA). + +Estimating sufficiency-accuracy is problematic. The natural way to probe whether the tokens in a rationale are sufficient for an accurate prediction is to remove the non-included tokens from the input, run the model on just the included tokens, and assess its accuracy. But a version of the input where a majority of tokens are removed or masked (by a [MASK] special token in the case of BERT), is out-of-distribution relative to the training data, which has no removal or masking. This difference may lead to unpredictable output from the model when tested on masked input. This masking-is-OOD problem has not received much discussion in the literature, though Jacovi and Goldberg (2021) propose to mitigate it with random masking during model training. The effect of this problem will be to underestimate the sufficiency-accuracy of rationales tested against an un-adapted model. 
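Concretely, sufficiency-accuracy over a corpus of rationalized inputs can be sketched as below. This is a simplified illustration under assumed names, not the paper's implementation: `predict` stands in for the fine-tuned classifier, and `mode` selects [MASK] substitution vs. removal.

```python
def rationalize(tokens, rationale, mode="substitute"):
    """Redact non-rationale tokens, either by [MASK] substitution or removal."""
    if mode == "substitute":
        return [t if r else "[MASK]" for t, r in zip(tokens, rationale)]
    return [t for t, r in zip(tokens, rationale) if r]

def sufficiency_accuracy(examples, predict, mode="substitute"):
    """Accuracy of `predict` when shown only rationale-redacted inputs.

    `examples` is an iterable of (tokens, binary_rationale, label) triples."""
    examples = list(examples)
    correct = sum(predict(rationalize(x, a, mode)) == y for x, a, y in examples)
    return correct / len(examples)
```

Human sufficiency-accuracy (HSA) is this quantity computed with the human-annotated rationales; the masking-is-OOD caveat above applies to whatever model `predict` wraps.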
The opposite problem stems from overfitting rather than OOD issues: label leakage. A human rationale may contain signal about the true label that goes beyond the semantics of the tokens included in the rationale, and a model trained on human-rationalized input may learn to pick up on these spurious signals. A known example is in E-SNLI, where annotators received different explanation instructions depending on their chosen label. This issue is discussed in several recent papers (Yu et al., 2021; Jethani et al., 2021; Hase et al., 2020), albeit mostly concerning model-generated rather than human explanations. The effect of this problem will be to overestimate the sufficiency-accuracy of rationales tested against an adapted model.

![](images/2ce7e5e1513e6b22a7bef0289fd1b1f4017a793921264b62fafd90c029f6d896.jpg)
(a) Fine-tuned on full input (unadapted).

![](images/e04e752b518e9d383efcb95647baf1dd8f9274293e518a817598c1abb25761c2.jpg)
(b) Fine-tuned on both full and human-rationalized input (adapted).

Figure 1: Baseline performance vs. human sufficiency-accuracy for rationalized inputs with token removal and [MASK] token substitution. As rationalized inputs differ from the full-text inputs in the original training set, we also build a calibrated model trained on both full-text inputs and rationalized inputs.

![](images/d96ec84140265f5f3585bd6eb0010652279e224667f64a825a7a7b77b990dae1.jpg)
(a) All samples

![](images/a775246833f4407b15d0508195585edf086aa8df720fe5dbd87cf265ef9f47a6.jpg)
(b) Human sufficiency-accuracy $= 1$

![](images/b3af762223e303960fd9da792d2d9216f9d28a3f401c73925bf2a268bc58b995.jpg)
(c) Human sufficiency-accuracy $= 0$

Figure 2: Sufficiency-accuracy of human rationales on the baseline BERT model with increasing levels of corruption via swaps, drops, and additions. Model performance decreases quickly when we drop rationale tokens, but stays high as we add non-rationale tokens. These effects are moderated by HSA.
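A minimal sketch of the adaptation step (the calibrated model fine-tuned on both full and human-rationalized input): each full-text example is simply paired with a rationale-masked copy in the training data. Names here are illustrative assumptions, not the paper's code.

```python
def with_rationalized_copies(examples, mask_token="[MASK]"):
    """Augment (tokens, rationale, label) training triples with
    rationale-masked copies, so rationalized inputs are no longer
    out-of-distribution for the fine-tuned model."""
    augmented = []
    for tokens, rationale, label in examples:
        masked = [t if r else mask_token for t, r in zip(tokens, rationale)]
        augmented.append((tokens, label))   # full input
        augmented.append((masked, label))   # rationalized input
    return augmented
```

Note the trade-off discussed above: a model adapted this way may exploit label leakage in the human rationales, so its sufficiency-accuracy is an optimistic estimate.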
Fig. 1 shows sufficiency-accuracy results for human rationales on both unadapted and adapted models. We expand on the analysis presented by Carton et al. (2020) by showing results for both masking-via-removal and masking-via-[MASK]-token-substitution.

Fig. 1a shows that token removal suffers less from the masking-is-OOD problem on an unadapted model than [MASK] token substitution. [MASK] token substitution results in lower accuracy across the board, while removal improves baseline accuracy for MultiRC, matches it for FEVER, and lowers it for E-SNLI.

With adaptation (Fig. 1b), token removal and [MASK] token substitution have near-identical effects, improving accuracy by a large margin for MultiRC and E-SNLI, and by a small margin for FEVER. The near-$100\%$ sufficiency-accuracy for E-SNLI is probably due to label leakage.

If an unadapted model is liable to underestimate sufficiency-accuracy, and an adapted model to overestimate it, then we suggest that the potential benefit of learning from rationales lies somewhere between the two. Under this hypothesis, the figure suggests that MultiRC has a large potential benefit, FEVER a small one, and E-SNLI an unclear benefit depending on how much of the predictive utility of E-SNLI rationales is due to label leakage. The results in §6 ultimately bear out these expectations.

# 4.2 Importance of Rationale Accuracy

We focus on MultiRC, where evaluating a non-rationale-adapted fine-tuned BERT model on human-rationalized data results in a sufficiency-accuracy of $74\%$, a significant improvement over the normal test accuracy of $68\%$. But how robust is this improvement to rationale prediction error? We examine how the sufficiency-accuracy of human rationales changes as they are corrupted by random addition, dropping, and swapping of tokens.

In this analysis, an $N\%$ drop removes $N\%$ of tokens from each rationale in the dataset, reducing recall to $(100 - N)\%$.
An $N\%$ addition adds tokens numbering $N\%$ the size of each rationale, from the set of non-rationale tokens, reducing precision to $\frac{100}{100 + N}$ . An $N\%$ swap performs both operations, swapping $N\%$ of rationale tokens for the same number of non-rationale tokens. + +The "dropped" curve in Fig. 2a shows that human rationales afford improved accuracy over the + +baseline until roughly $40\%$ of tokens have been dropped from them, suggesting that a minimum of $60\%$ recall is needed to derive an advantage from human rationales over the full input. Per the "added" curve, adding the same number of irrelevant tokens to the rationale has a much less severe impact on accuracy, suggesting that errors of omission are significantly worse than errors of inclusion for learning from rationales. + +Fig. 2b and 2c respectively show the effect of this perturbation on high- and low-sufficiency-accuracy human rationales, which constitute $74\%$ and $26\%$ of rationales respectively for this model. High-SA rationales follow a similar trend to the whole population, but the recall requirement is lower than Fig. 2a to exceed model accuracy with the full input (the "dropped" curve meets the blue line at $50\%$ ). In comparison, low-SA rationales demonstrate interesting properties. These rationales actually have a sabotaging effect in a quarter of cases: the model would have an accuracy of $27\%$ with the full input, which is lowered to $0\%$ by the presence of these rationales. Also, addition and dropping have a similar effect in mitigating this sabotage. Similar results hold on FEVER and E-SNLI except the apparent required recall is much higher $(>90\%)$ for both methods (see the appendix), indicating challenges for learning from rationales on these datasets. 
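The drop/add/swap corruptions can be sketched as follows. This is a simplified per-rationale version under assumed names; the paper's exact implementation may differ.

```python
import random

def corrupt_rationale(rationale, n_pct, mode, seed=0):
    """Corrupt a binary rationale mask.

    drop: zero out n_pct% of rationale tokens (recall falls to (100 - n_pct)%).
    add:  turn on non-rationale tokens numbering n_pct% of the rationale size
          (precision falls to 100 / (100 + n_pct)).
    swap: do both, trading rationale tokens for non-rationale ones."""
    rng = random.Random(seed)
    out = list(rationale)
    on = [i for i, r in enumerate(out) if r == 1]    # rationale positions
    off = [i for i, r in enumerate(out) if r == 0]   # non-rationale positions
    k = round(len(on) * n_pct / 100)
    if mode in ("drop", "swap"):
        for i in rng.sample(on, min(k, len(on))):
            out[i] = 0
    if mode in ("add", "swap"):
        for i in rng.sample(off, min(k, len(off))):
            out[i] = 1
    return out
```

Sweeping `n_pct` for each mode and measuring sufficiency-accuracy on the corrupted masks reproduces the shape of the curves in Fig. 2.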
+ +In summary, our analyses inspire two general observations about learning from rationales: 1) moving away from naive accuracy (toward recall, for example) as a rationale supervision objective, and 2) focusing on useful rationales over harmful ones. + +# 5 Methods + +We propose architecture changes based on these insights. Our code is available at https://github.com/ChicagoHAI/learning-from-rationales. + +# 5.1 Background and Baseline Models + +Our training data include input tokens, their corresponding rationales, and labels. Formally, an instance is denoted as $(x,\alpha ,y)$ , where $x = (x_{1},\ldots ,x_{L})$ is a text sequence of length $L$ and human rationale $\alpha$ of the same length. $\alpha_{i} = 1$ indicates that token $x_{i}$ is part of the rationale (and relevant for the prediction), $\alpha_{i} = 0$ otherwise. + +We use HuggingFace's BERT-base-uncased (Devlin et al., 2018; Wolf et al., 2020) as the basis for our experiments and analysis. Used in the standard way, BERT ignores $\alpha$ and is fine-tuned on tuples + +![](images/1a27bbe6f0fc9d09ed15ad162ce6e09656496cc4e283105dc7c15b643140e3ff.jpg) +Figure 3: Illustration of our multi-task framework. Our main innovation lies in how we define rationale loss for the supervised case and the masking function $m$ . + +of $(x,y)$ . This is our simplest baseline. + +Rationale model. We use the rationale model of Lei et al. (2016) for both supervised and unsupervised rationale generation, in its updated BERT-to-BERT form (DeYoung et al., 2019). This model consists of two BERT modules: a rationale extractor $g$ that generates a binary attention mask $\hat{\alpha}$ as the rationale, and a predictor $f$ which makes a prediction using the rationalized input via a masking function $m$ on $x$ and $\hat{\alpha}$ (Fig. 3): + +$$ +\begin{array}{l} g (\boldsymbol {x}) \rightarrow \hat {\boldsymbol {\alpha}}, \\ f (m (\boldsymbol {x}, \hat {\boldsymbol {\alpha}})) \rightarrow \hat {y}. 
\\ \end{array}
$$

The two components are trained in tandem. In the unsupervised scenario, the joint objective function consists of a prediction loss term and a rationale sparsity term, encouraging the model to retain only those tokens in $x$ that are necessary for accurate prediction:

$$
\mathcal{L}_{u} = \mathcal{L}_{p}(y, \hat{y}) + \lambda_{sp} \|\hat{\boldsymbol{\alpha}}\|,
$$

where $\mathcal{L}_p$ is typically cross entropy.

In the supervised scenario, given a human rationale $\alpha$, we replace the sparsity objective with a rationale supervision objective:

$$
\mathcal{L}_{su} = \mathcal{L}_{p}(y, \hat{y}) + \frac{\lambda_{su}}{L} \sum_{i=1}^{L} \mathcal{L}_{p}(\boldsymbol{\alpha}_{i}, \hat{\boldsymbol{\alpha}}_{i}),
$$

where $\lambda_{su}$ is a hyperparameter that controls the weight of the rationale loss relative to the label loss.

Each of these scenarios represents a baseline for our experiments. We refer to the unsupervised version as the unsupervised rationale model, and the supervised version as the supervised rationale model.

Implementation details. The original Lei et al. (2016) model generates binary rationales by Bernoulli sampling from continuous probability values produced by the generator, and uses the REINFORCE algorithm (Williams, 1992) to propagate approximate gradients through this non-differentiable operation.

We instead use Gumbel Softmax (Jang et al., 2017) to generate differentiable approximate binary rationale masks. In this framework, the generator produces logits $z_{i}$, to which random noise $G \sim \mathrm{Gumbel}(0,1)$ is added before applying a softmax to produce class probabilities $c_{i}$. This approximates a discrete distribution parameterized by $e^{z_i}$. We then use the positive-class probability $c_{i}^{1}$ as the rationale value $\hat{\alpha}_{i}$.
$$
\boldsymbol{c}_{i} = \operatorname{softmax}(\boldsymbol{z}_{i} + \boldsymbol{G}), \quad \boldsymbol{G} \sim \mathrm{Gumbel}(0, 1); \qquad \hat{\alpha}_{i} = \boldsymbol{c}_{i}^{1}.
$$

Generating stable rationales. We find it helpful as an engineering trick to pre-train the predictor layer of this model on the full input before training the predictor and extractor on the joint objective. This step appears to mitigate some of the issues this model has with rationale collapse, noted for example by DeYoung et al. (2019).

Given $\hat{\alpha}_i$, we mask non-rationale tokens by multiplicatively substituting the [MASK] token vector across their vector representations, analogously to what is done during the masked-LM pretraining of the BERT model:

$$
m_{s}(\boldsymbol{x}_{i}, \hat{\boldsymbol{\alpha}}_{i}) = \hat{\boldsymbol{\alpha}}_{i} \cdot \boldsymbol{e}_{i} + (1 - \hat{\boldsymbol{\alpha}}_{i}) \cdot \boldsymbol{e}_{[\mathrm{MASK}]},
$$

where $e_i$ represents the embedding associated with $x_i$ and $e_{[\mathrm{MASK}]}$ is the embedding for the [MASK] token. We never mask the special tokens [CLS] or [SEP], and we also set $\hat{\alpha}_i = 1$ for the query in MultiRC and FEVER because the query is always part of human rationales in these two datasets.

# 5.2 Learning from Human Rationales

Inspired by the analysis in §4, we propose four strategies for improving the efficacy of learning from rationales: 1) tuning class weights for rationale supervision; 2) enforcing sentence-level rationalization; 3) using non-occluding "importance embeddings"; and 4) selectively supervising only rationales with high sufficiency-accuracy. The first three are designed to loosen the supervision's dependence on flat token-wise accuracy, while the last tries to operationalize our observations about helpful versus non-helpful rationales.

Class weights. Rationales may become more effective enablers of model prediction accuracy at different balances of precision and recall.
We can adjust the precision–recall balance by assigning different weights to the positive and negative classes in rationale supervision:

$$
\mathcal{L}_{w} = \mathcal{L}_{p}(y, \hat{y}) + \frac{1}{L} \sum_{i=1}^{L} (1 + \lambda_{su}^{1} \alpha_{i}) \mathcal{L}_{p}(\alpha_{i}, \hat{\alpha}_{i}),
$$

where $\lambda_{su}^{1}$ controls the relative weight of rationale versus non-rationale tokens. In particular, as discussed in §4, we find that increased recall is associated with increased model accuracy. We therefore explore several values of $\lambda_{su}^{1}$ in our experiments to encourage higher recall.

Sentence-level rationalization. Another departure from strict token-wise accuracy is to rationalize at the sentence rather than the token level. Given a function $sent$ mapping a token $x_{i}$ to its enclosing sentence $s$, we average token-level logits $z_{i}$ across each sentence to produce a binary mask at the sentence level, then propagate that mask value to all tokens of the sentence:

$$
\hat{\alpha}_{i} = \hat{\alpha}^{s}_{sent(i)},
$$

where $z^s = \frac{1}{|\{i \mid sent(i) = s\}|}\sum_{\{i \mid sent(i) = s\}} z_i$ is used to generate $\hat{\alpha}^{s}_{sent(i)}$.

Importance embeddings. Another way to mitigate the impact of false negatives in predicted rationales is for those tokens to remain visible to the predictor. This variant uses additive embeddings rather than occluding masks: a two-element embedding layer provides one embedding for rationale tokens and one for non-rationale tokens, added to the input vectors according to the predicted rationale. Input tokens are thus tagged as important or unimportant, but the predictor $f$ is free to learn how to use these tags for maximum label accuracy, rather than being fully blinded to "unimportant" tokens.
$$
m_{e}(\boldsymbol{x}_{i}, \hat{\alpha}_{i}) = \boldsymbol{e}_{i} + (1 - \hat{\alpha}_{i}) \cdot \boldsymbol{e}_{\text{non-rationale}} + \hat{\alpha}_{i} \cdot \boldsymbol{e}_{\text{rationale}}.
$$

An important drawback of this approach is that the predictor now has access to the full input instead of only the rationalized input, so these rationales provide only a weak guarantee that important tokens are actually used to make predictions. This method also represents a large distribution shift from full text, so we find it necessary to calibrate the predictor using human rationales, as described in Fig. 1b.

Selective supervision. Our fourth modification attempts to improve rationale prediction performance on high-sufficiency-accuracy rationales by selectively supervising only on human rationales with this property, ignoring those where human rationales do not allow a correct prediction.

| Dataset | Model | Acc. | Rationale F1 | Rationale Prec. | Rationale Rec. | Human Suff. Acc. | Masking | Granularity | Pos. class weight | Selective supervision |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MultiRC | BERT baseline | 68.1 | - | - | - | 73.9 | - | Tokens | - | - |
| MultiRC | Unsupervised rationale model | 67.2 | 22.2 | 18.5 | 27.9 | 71.2 | [MASK] | Tokens | - | - |
| MultiRC | Supervised rationale model | 67.0 | 46.5 | 41.5 | 52.9 | 70.8 | [MASK] | Tokens | 1.0 | No |
| MultiRC | Best overall model | 71.2 | 57.1 | 44.9 | 78.4 | 74.5 | Embeddings | Sentences | 5.0 | No |
| FEVER | BERT baseline | 90.2 | - | - | - | 89.4 | - | Tokens | - | - |
| FEVER | Unsupervised rationale model | 88.3 | 22.6 | 20.5 | 25.1 | 88.7 | [MASK] | Tokens | - | - |
| FEVER | Supervised rationale model | 90.7 | 68.4 | 61.7 | 76.7 | 91.1 | [MASK] | Tokens | 1.0 | No |
| FEVER | Best overall model | 91.5 | 81.2 | 83.5 | 79.1 | 91.6 | Embeddings | Sentences | 1.0 | No |
| E-SNLI | BERT baseline | 89.7 | - | - | - | 73.9 | - | Tokens | - | - |
| E-SNLI | Unsupervised rationale model | 88.9 | 40.6 | 28.2 | 72.6 | 85.0 | [MASK] | Tokens | - | - |
| E-SNLI | Supervised rationale model | 87.8 | 58.7 | 47.7 | 76.0 | 89.4 | [MASK] | Tokens | 1.0 | No |
| E-SNLI | Best overall model | 90.1 | 59.6 | 45.5 | 86.2 | 92.3 | Embeddings | Tokens | 3.0 | No |

Table 3: Best-performing model variant compared to baseline models.

Specifically, for every training batch, we use the true human rationales $\boldsymbol{\alpha}$ as an input mask for the BERT predictor to obtain the HSA for each document. HSA then serves as a weight on the human rationale supervision during the main training batch:

$$
\mathcal{L}_{ss} = \mathcal{L}_{p}(y, \hat{y}) + \mathbb{I}\big(y = f(m(\boldsymbol{x}, \boldsymbol{\alpha}))\big) \, \frac{\lambda_{su}}{L} \sum_{i=1}^{L} \mathcal{L}_{p}(\boldsymbol{\alpha}_{i}, \hat{\boldsymbol{\alpha}}_{i}).
$$

By weighting supervision this way, we hope to ignore low-quality human rationales during training and focus instead on those that enable good accuracy.

# 6 Results

# 6.1 Experiment Setup

Our goal in this experiment is to understand the impact of our four proposed model/training modifications. We do this with a comprehensive scan: we try three positive rationale supervision class weights $\lambda_{su}^{1}$ ($\{0, 2, 4\}$), and toggle sentence-level rationalization, importance embeddings, and selective supervision on and off. In addition, we vary the rationale supervision loss weight $\lambda_{su}$ over $\{0.5, 1, 2\}$. This results in 72 models for MultiRC and FEVER, and 36 models for E-SNLI (for which sentence-level rationalization is not applicable).

The best resulting model is our best overall model. The best model with a positive class weight of 1.0 (i.e., identical class weights for rationale and non-rationale tokens) and no other learning strategy enabled is our baseline supervised rationale model. We additionally train three unsupervised rationale models with sparsity weights 0.15, 0.25, and 0.35, selecting as representative the one that produced the sparsest rationales while maintaining a reasonable level of accuracy (in this architecture, there is invariably a trade-off between accuracy and sparsity).
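A minimal sketch of the selective-supervision objective $\mathcal{L}_{ss}$ above, assuming a per-token binary cross entropy for the rationale loss; the helper names are ours, not the paper's:

```python
import numpy as np

def bce(p, y, eps=1e-9):
    """Per-token binary cross entropy between soft predictions p and 0/1 targets y."""
    p = np.clip(p, eps, 1.0 - eps)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def selective_loss(label_loss, label_from_human_mask, gold_label,
                   alpha, alpha_hat, lam_su=1.0):
    """L_ss: rationale supervision is applied only when the predictor recovers
    the gold label from the human-rationale-masked input, i.e., the HSA
    indicator I(y = f(m(x, alpha)))."""
    hsa = 1.0 if label_from_human_mask == gold_label else 0.0
    rationale_loss = bce(np.asarray(alpha_hat), np.asarray(alpha)).mean()
    return label_loss + hsa * lam_su * rationale_loss

# toy check: supervision is dropped when the human rationale fails
alpha = np.array([1.0, 0.0, 1.0])
alpha_hat = np.array([0.9, 0.2, 0.6])
loss_kept = selective_loss(0.5, "True", "True", alpha, alpha_hat)
loss_dropped = selective_loss(0.5, "False", "True", alpha, alpha_hat)
```

When the human rationale does not recover the gold label, only the label loss remains, so low-quality rationales contribute no supervision signal.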
To evaluate the performance of our models, we consider both the accuracy of the predicted labels ($\hat{y}$) and the performance of rationale prediction in terms of F1, precision, and recall. We use PyTorch Lightning (Falcon et al., 2019) for training, with a learning rate of 2e-5 and gradient accumulation over 10 batches for all models. Early stopping was based on validation set loss with a patience of 3, evaluated every fifth of an epoch. Training was performed on two 24GB NVIDIA Titan RTX GPUs.

# 6.2 Model Performance

Table 3 compares our best overall model against the baselines and lists the learning strategies used in each model.

Prediction accuracy. For MultiRC, the best model includes every proposed modification (sentence-level rationalization, importance embeddings, class weights) except selective supervision, and yields a 3-point improvement over the baseline accuracy, from $68.1\%$ to $71.2\%$. We observe a more modest improvement on FEVER, where the best model uses sentence-level rationalization and importance embeddings and scores a 1-point improvement, from $90.2\%$ to $91.5\%$. We note, however, that this approaches the accuracy of the model with access to a human rationale oracle ($91.6\%$). Finally, we observe a tiny improvement of $0.4\%$ on E-SNLI, though our proposed methods do improve upon the unsupervised and supervised rationale model baselines, both of which fall below the BERT baseline.

A McNemar's significance test with Bonferroni correction between the best and baseline models finds that the accuracy improvement is significant for MultiRC and FEVER ($p = 2\mathrm{e}{-7}$ and $3\mathrm{e}{-6}$, respectively) and not significant for E-SNLI ($p = 0.1$). The limited improvement on E-SNLI echoes the performance drop in Fig. 1a without adaptation, suggesting that human rationales in this dataset are too idiosyncratic to improve model performance.

Factor analysis. We use regression analysis to
| Method | MultiRC | FEVER | E-SNLI |
| --- | --- | --- | --- |
| Sentences | .015*** | .001 | - |
| Class weights | .017*** | .007*** | .005 |
| Importance embeddings | .012*** | .006*** | -.010** |
| Selective supervision | .004 | -.006*** | -.032*** |

Table 4: Regression coefficients for the effect of each proposed method on overall prediction accuracy.
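A factor analysis of this kind can be sketched as an ordinary least-squares regression of accuracy on dummy-coded method indicators. The configurations and accuracies below are illustrative placeholders, not the paper's data:

```python
import numpy as np

# One row per trained model; columns toggle the four proposed methods
# (sentences, class weights, importance embeddings, selective supervision).
X = np.array([
    [0, 0, 0, 0],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 1, 0],
], dtype=float)
acc = np.array([0.681, 0.696, 0.698, 0.693, 0.685, 0.712])  # made-up accuracies

# Prepend an intercept column and solve OLS; coef[1:] are per-method effects,
# analogous to the coefficients reported in Table 4.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, residuals, rank, _ = np.linalg.lstsq(A, acc, rcond=None)
intercept, effects = coef[0], coef[1:]
```

A full treatment would also report significance (the asterisks in Table 4), e.g. via t-tests on the coefficient standard errors.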
| Dataset | Sel. Sup. | Acc. | F1 (high-HSA) | F1 (low-HSA) |
| --- | --- | --- | --- | --- |
| MultiRC | No | 71.2 | 59.3 | 57.2 |
| MultiRC | Yes | 71.0 | 56.2 | 54.1 |
| FEVER | No | 91.5 | 79.0 | 72.5 |
| FEVER | Yes | 90.6 | 61.2 | 57.0 |
| E-SNLI | No | 90.1 | 61.2 | 48.0 |
| E-SNLI | Yes | 88.8 | 49.0 | 44.9 |

Table 5: Label accuracy and predicted rationale F1 for high- versus low-HSA examples.

understand the impact of the different modifications on model accuracy. Table 4 suggests that rationale class weighting has the largest positive effect on accuracy across datasets. Importance embeddings have a positive effect for MultiRC and FEVER and a negative effect for E-SNLI, while sentence-level rationalization improves only MultiRC.

Selective supervision is found to have a nonexistent or negative effect across all three datasets. Table 5 details this result, showing model accuracy and rationale performance for the best model with (Yes) versus without (No) selective supervision. If our method succeeded, F1 for high-HSA examples would increase from the "No" to the "Yes" models while remaining flat or decreasing for low-HSA examples. Indeed, we observe lower rationale F1 for low-HSA examples, but rationale F1 also drops substantially for high-HSA examples, possibly because of the reduced available training data.

Rationale performance. Although our modifications are designed to improve label prediction performance, they also improve rationale prediction performance in most cases. The only exception is the reduced precision on E-SNLI compared to the supervised rationale model.

# 6.3 Qualitative Analysis

Table 6 shows three examples, each drawn from a different dataset, to illustrate different outcomes. For each example, we show the human rationale and the predicted rationales of both the baseline supervised rationale model and our best overall model. Incorrect predictions are colored red.

Example 6a shows an instance sampled from MultiRC where our best model, with higher recall and sentence-level rationalization, more successfully captures the (sufficient) information present in the human rationale, allowing a correct prediction where the supervised rationale model fails.

Example 6b presents a contrasting example from the FEVER dataset.
The human rationale omits important context: that Legendary Entertainment is a subsidiary of Wanda Group, which makes it harder to infer that it is not a subsidiary of Warner Bros. Our best model succeeds at capturing this snippet in its rationale, but still predicts the incorrect label, illustrating that a rationale sufficient for humans does not always produce a correct label.

Finally, example 6c shows a case where the baseline supervised rationale model succeeds while our best model fails. This is a hard-to-interpret example, mainly a demonstration of the limitations of rationales as an explanatory device for certain kinds of tasks. This raises a question: how relevant are rationales as an explanation or learning mechanism when models like GPT-3 (Brown et al., 2020) are increasingly capable of human-level natural language explanations (Table 7)?

Our position is that however an explanation is presented, meaning is still localized within text, so rationales can still serve as a useful interface for scrutinizing or controlling model logic, even if they require additional translation to be comprehensible to humans. Work that hybridizes the two ideas, such as Zhao and Vydiswaran (2020), may represent a good way of resolving this issue.

# 7 Discussion

The analysis in §4 explores the limits of potential improvement from learning from rationales. It suggests two insights toward improved learning from rationales: 1) insofar as they boost model accuracy, not all human rationale tokens are equally valuable, e.g., false positives cause less degradation than false negatives; and 2) we could in principle boost label accuracy with good rationale accuracy on useful (high-SA) rationales and low accuracy on useless (low-SA) ones.

We exploit these two insights with four modifications to the baseline architecture.
Three of these diverge from flat rationale supervision accuracy: rationale supervision class weighting, sentence-level rationalization, and importance embeddings. The last, selective supervision, pursues utility-discriminative weighting during model training. + +
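Of the token-supervision methods above, sentence-level rationalization is the easiest to sketch: average token logits within each sentence ($z^s$ in §5.2) and broadcast the sentence decision back to its tokens. The thresholding at zero below is our illustrative choice:

```python
import numpy as np

def sentence_pool(logits, sent_ids):
    """Average token-level rationale logits within each sentence (z^s) and
    broadcast a hard sentence-level decision back to every token of that
    sentence."""
    logits = np.asarray(logits, dtype=float)
    sent_ids = np.asarray(sent_ids)
    pooled = np.empty_like(logits)
    for s in np.unique(sent_ids):
        idx = sent_ids == s
        pooled[idx] = logits[idx].mean()
    return (pooled > 0).astype(float)  # hard sentence-level mask

# two sentences: tokens 0-2 form sentence 0, tokens 3-4 form sentence 1;
# sentence 0 has mean logit 0.5 (kept), sentence 1 has mean -1.6 (dropped)
mask = sentence_pool([2.0, -1.0, 0.5, -3.0, -0.2], [0, 0, 0, 1, 1])
```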
(A) MultiRC: Best model beats supervised baseline

Passage and query: [CLS] there have been many organisms that have lived in earths past . only a tiny number of them became fossils . still , scientists learn a lot from fossils . fossils are our best clues about the history of life on earth . fossils provide evidence about life on earth . they tell us that life on earth has changed over time . fossils in younger rocks look like animals and plants that are living today . fossils in older rocks are less like living organisms . fossils can tell us about where the organism lived . was it land or marine ? fossils can even tell us if the water was shallow or deep . fossils can even provide clues to ancient climates . [SEP] what can we tell about former living organisms from fossils ? | | how they adapted [SEP]

Predictions: human rationale: False; baseline supervised rationale: True; best model: False.

(B) FEVER: Human rationale is insufficient

Passage and claim: [CLS] legendary entertainment - lrb - also known as legendary pictures or legendary - rrb - is an american media company based in burbank , california , the company was founded by thomas tull in 2000 and in 2005 , concluded an agreement to co - produce and co - finance films with warner bros ., and began a similar arrangement with universal studios in 2014 . since 2016 , legendary has been a subsidiary of the chinese conglomerate wanda group . [SEP] legendary entertainment is a subsidiary of warner bros pictures . [SEP]

Predictions: human rationale: Supports; baseline supervised rationale: Supports; best model: Supports.

(C) E-SNLI: Supervised baseline beats best model

Premise and hypothesis: [CLS] a big dog catches a ball on his nose [SEP] a big dog is sitting down while trying to catch a ball [SEP]

Predictions: human rationale: Neutral; baseline supervised rationale: Neutral; best model: Contradiction.
+ +Table 6: Examples of human, supervised baseline, and best model rationales and predictions. + +
| Source | Natural language explanation |
| --- | --- |
| Human | There is no indication that the dog is sitting down while playing catch on his nose. |
| Human | A dog can catch a ball by not to sitting down. |
| GPT-3 | The entailment of this sentence is that the dog is sitting down, and the contradiction would be if the dog was standing up. This sentence is neutral, meaning it doesn't entail or contradict anything. |
Table 7: Examples of natural language explanations for the "neutral" prediction on the E-SNLI example from Table 6c. See Appendix §D for GPT-3 prompt details.

Taken together, our proposed methods yield a substantial $3\%$ improvement over baseline performance for MultiRC, a $1\%$ improvement on FEVER, and a tiny $0.4\%$ improvement on E-SNLI, mirroring the potential improvements observed in the analysis. We find that all three token supervision methods are useful in achieving this, while selective supervision has a marginal or negative effect.

In summary, our results support the potential for learning from rationales in certain datasets, and demonstrate the importance of understanding the properties of human rationales to properly exploit them for this purpose. We believe that these two insights are useful steps towards effective learning from rationales, and could yield even greater improvements if operationalized optimally.

Limitation. A limitation of our analysis is that all three datasets are document-query style reading comprehension tasks, as opposed to, e.g., sentiment analysis. Because of the popularity of this type of task in NLP benchmarks, such datasets represent a majority of what is available in the ERASER collection (DeYoung et al., 2019). By contrast, sentiment is often scattered throughout a text, so human rationales for sentiment are likely to contain redundant signal, which could affect their predictive utility. We leave a more comprehensive survey of NLP tasks for future work.

Acknowledgments. We thank anonymous reviewers for their feedback, and members of the Chicago Human+AI Lab for their insightful suggestions. This work is supported in part by research awards from Amazon, IBM, Salesforce, and NSF IIS-2126602.

# References

Diego Antognini and Boi Faltings. 2021. Rationalization through Concepts. arXiv:2105.04837 [cs]. ArXiv: 2105.04837.
Jasmijn Bastings, Wilker Aziz, and Ivan Titov. 2020.
Interpretable Neural Predictions with Differentiable Binary Variables. arXiv:1905.08160 [cs]. ArXiv: 1905.08160. +Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. arXiv:2005.14165 [cs]. ArXiv: 2005.14165. +Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-snli: Natural language inference with natural language explanations. In Proceedings of NeurIPS. +Samuel Carton, Qiaozhu Mei, and Paul Resnick. 2018. Extractive Adversarial Networks: High-Recall Explanations for Identifying Personal Attacks in Social Media Posts. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3497-3507, Brussels, Belgium. Association for Computational Linguistics. +Samuel Carton, Anirudh Rathore, and Chenhao Tan. 2020. Evaluating and Characterizing Human Rationales. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9294-9307, Online. Association for Computational Linguistics. +Shiyu Chang, Yang Zhang, Mo Yu, and Tommi Jaakkola. 2020. Invariant Rationalization. In Proceedings of the 37th International Conference on Machine Learning, pages 1448-1458. PMLR. ISSN: 2640-3498. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805 [cs]. ArXiv: 1810.04805. +Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2019. 
ERASER: A Benchmark to Evaluate Rationalized NLP Models. arXiv:1911.03429 [cs]. ArXiv:1911.03429.
Gregory Druck, Burr Settles, and Andrew McCallum. 2009. Active Learning by Labeling Features. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 81-90, Singapore. Association for Computational Linguistics.
William Falcon et al. 2019. Pytorch lightning. GitHub. Note: https://github.com/PyTorchLightning/pytorch-lightning, 3.
Braden Hancock, Paroma Varma, Stephanie Wang, Martin Bringmann, Percy Liang, and Christopher Ré. 2018. Training classifiers with natural language explanations. In Proceedings of ACL.
Peter Hase and Mohit Bansal. 2021. When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data. arXiv:2102.02201 [cs]. ArXiv: 2102.02201.
Peter Hase, Shiyue Zhang, Harry Xie, and Mohit Bansal. 2020. Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language? arXiv:2010.04119 [cs]. ArXiv: 2010.04119.
Alon Jacovi and Yoav Goldberg. 2021. Aligning Faithful Interpretations with their Social Attribution. arXiv:2006.01067 [cs]. ArXiv:2006.01067.
Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical Reparameterization with Gumbel-Softmax. arXiv:1611.01144 [cs, stat]. ArXiv: 1611.01144.
Sahil Jayaram and Emily Allaway. 2021. Human Rationales as Attribution Priors for Explainable Stance Detection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5540-5554, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Neil Jethani, Mukund Sudarshan, Yindalon Aphinyanaphongs, and Rajesh Ranganath. 2021. Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations. arXiv:2103.01890 [cs, stat]. ArXiv:2103.01890.
Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018.
Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers).
Todd Kulesza, Margaret Burnett, Weng-Keen Wong, and Simone Stumpf. 2015. Principles of Explanatory Debugging to Personalize Interactive Machine Learning. In Proceedings of the 20th International Conference on Intelligent User Interfaces, pages 126-137, Atlanta Georgia USA. ACM.
Eric Lehman, Jay DeYoung, Regina Barzilay, and Byron C. Wallace. 2019. Inferring Which Medical Treatments Work from Reports of Clinical Trials. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3705-3717, Minneapolis, Minnesota. Association for Computational Linguistics.
Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing Neural Predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107-117.
Piyawat Lertvittayakumjorn and Francesca Toni. 2021. Explanation-Based Human Debugging of NLP Models: A Survey. arXiv:2104.15135 [cs]. ArXiv: 2104.15135.
Bhargavi Paranjape, Mandar Joshi, John Thickstun, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction. arXiv:2005.00652 [cs]. ArXiv:2005.00652.
Gregory Plumb, Maruan Al-Shedivat, Angel Alexander Cabrera, Adam Perer, Eric Xing, and Ameet Talwalkar. 2020. Regularizing Black-box Models for Improved Interpretability. arXiv:1902.06787 [cs, stat]. ArXiv: 1902.06787.
Laura Rieger, Chandan Singh, William Murdoch, and Bin Yu. 2020. Interpretations are Useful: Penalizing Explanations to Align Neural Networks with Prior Knowledge. In International Conference on Machine Learning, pages 8116-8126. PMLR.
ISSN: 2640-3498.
Andrew Slavin Ross, Michael C. Hughes, and Finale Doshi-Velez. 2017. Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations. arXiv preprint arXiv:1703.03717.
Megha Srivastava, Tatsunori Hashimoto, and Percy Liang. 2020. Robustness to Spurious Correlations via Human Annotations. arXiv:2007.06661 [cs, stat]. ArXiv: 2007.06661.
James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and verification. In Proceedings of NAACL.
Giulia Vilone and Luca Longo. 2020. Explainable Artificial Intelligence: a Systematic Review. arXiv:2006.00093 [cs]. ArXiv:2006.00093.
Sarah Wiegreffe, Jack Hessel, Swabha Swayamdipta, Mark Riedl, and Yejin Choi. 2021. Reframing Human-AI Collaboration for Generating Free-Text Explanations. arXiv:2112.08674 [cs]. ArXiv: 2112.08674.
Sarah Wiegreffe and Ana Marasovic. 2021. Teach Me to Explain: A Review of Datasets for Explainable NLP. arXiv:2102.12060 [cs]. ArXiv: 2102.12060.

Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2020. HuggingFace's Transformers: State-of-the-art Natural Language Processing. arXiv:1910.03771 [cs]. ArXiv: 1910.03771.
Huihan Yao, Ying Chen, Qinyuan Ye, Xisen Jin, and Xiang Ren. 2021. Refining Neural Networks with Compositional Explanations. arXiv:2103.10415 [cs]. ArXiv: 2103.10415.
Mo Yu, Shiyu Chang, Yang Zhang, and Tommi S. Jaakkola. 2019. Rethinking Cooperative Rationalization: Introspective Extraction and Complement Control. arXiv preprint. ArXiv: 1910.13294.
Mo Yu, Yang Zhang, Shiyu Chang, and Tommi S. Jaakkola. 2021. Understanding Interlocking Dynamics of Cooperative Rationalization. arXiv:2110.13880 [cs].
ArXiv:2110.13880. +Omar Zaidan, Jason Eisner, and Christine Piatko. 2007. Using "Annotator Rationales" to Improve Machine Learning for Text Categorization. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 260-267, Rochester, New York. Association for Computational Linguistics. +Xinyan Zhao and V. G. Vinod Vydiswaran. 2020. LIREx: Augmenting Language Inference with Relevant Explanation. arXiv:2012.09157 [cs]. ArXiv: 2012.09157. + +
| Dataset | Method | Role | Accuracy | Rationale F1 | Precision | Recall | Human Suff. Acc. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MultiRC | Sentences | Best with | 71.2 | 57.1 | 44.9 | 78.4 | 74.5 |
| MultiRC | Sentences | Best without | 70.6 | 41.6 | 27.7 | 84.1 | 75.8 |
| MultiRC | Class-weights | Best with | 71.2 | 57.1 | 44.9 | 78.4 | 74.5 |
| MultiRC | Class-weights | Best without | 70.8 | 55.2 | 66.1 | 47.4 | 76.5 |
| MultiRC | Importance embeddings | Best with | 71.2 | 57.1 | 44.9 | 78.4 | 74.5 |
| MultiRC | Importance embeddings | Best without | 71.0 | 53.6 | 39.7 | 82.5 | 75.8 |
| MultiRC | Selective supervision | Best with | 71.0 | 53.6 | 39.7 | 82.5 | 75.8 |
| MultiRC | Selective supervision | Best without | 71.2 | 57.1 | 44.9 | 78.4 | 74.5 |
| FEVER | Sentences | Best with | 91.5 | 81.2 | 83.5 | 79.1 | 91.6 |
| FEVER | Sentences | Best without | 91.3 | 72.4 | 61.3 | 88.5 | 91.6 |
| FEVER | Class-weights | Best with | 91.5 | 79.6 | 73.1 | 87.3 | 91.8 |
| FEVER | Class-weights | Best without | 91.5 | 81.2 | 83.5 | 79.1 | 91.6 |
| FEVER | Importance embeddings | Best with | 91.5 | 81.2 | 83.5 | 79.1 | 91.6 |
| FEVER | Importance embeddings | Best without | 91.4 | 80.0 | 74.9 | 85.9 | 91.8 |
| FEVER | Selective supervision | Best with | 90.6 | 56.4 | 41.4 | 88.6 | 90.4 |
| FEVER | Selective supervision | Best without | 91.5 | 81.2 | 83.5 | 79.1 | 91.6 |
| E-SNLI | Class-weights | Best with | 90.1 | 59.6 | 45.5 | 86.2 | 92.3 |
| E-SNLI | Class-weights | Best without | 89.9 | 62.2 | 55.7 | 70.4 | 92.0 |
| E-SNLI | Importance embeddings | Best with | 90.1 | 59.6 | 45.5 | 86.2 | 92.3 |
| E-SNLI | Importance embeddings | Best without | 89.9 | 33.5 | 20.2 | 100.0 | 72.5 |
| E-SNLI | Selective supervision | Best with | 88.8 | 49.0 | 33.2 | 93.4 | 84.0 |
| E-SNLI | Selective supervision | Best without | 90.1 | 59.6 | 45.5 | 86.2 | 92.3 |
Table 8: Comparison of the best model with each proposed factor against the best model without that factor.

# A Detailed Factor Analysis

Table 8 compares, for each proposed method, the performance of the best model using that method with the best model not using it. The story shown here is similar to the regression analysis in Table 4, but one new insight is that the improvement in model prediction performance appears to be driven by the sentence-level rationalization method, as it cuts down on stray tokens dropped from or added to the predicted rationales.

# B Rationale Perturbation on FEVER and E-SNLI

Furthering the analysis in §4.2, we extend the human rationale perturbation experiment to FEVER and E-SNLI.

Fig. 4 shows the results for FEVER. Fig. 4a shows that the baseline accuracy is so high for this dataset that merely matching it requires near-perfect prediction of human rationales.

Moreover, even for documents with HSA = 1, model performance drops below baseline upon dropping just $\sim 10\%$ of tokens (equivalent to rationale recall of $\sim 0.9$) in Fig. 4b. Interestingly, model performance remains consistently above the baseline when adding non-rationale tokens (equivalent to decreasing rationale precision). In comparison, model performance for MultiRC in Fig. 2b drops below baseline only after dropping $\sim 50\%$ of the tokens.

For FEVER examples with HSA = 0 (Fig. 4c), model performance remains consistently below the baseline accuracy, supporting the second hypothesis in §4.2. The need to predict rationales near-perfectly on FEVER may explain the difference in model performance improvements between MultiRC and FEVER.

Fig. 5 covers E-SNLI. We see that model performance decreases after dropping rationale tokens (signifying decreasing recall) and consistently remains below the baseline.
In contrast, model performance shows a slight improvement after adding non-rationale tokens (signifying a decrease in rationale precision). Moreover, for documents with HSA = 1, model performance drops below baseline at $\sim 3\%$ corruption when dropping or swapping rationale tokens, whereas it plateaus with the addition of non-rationale tokens. These insights highlight the substantial challenges in learning from explanations on E-SNLI.

![](images/1d48b477b4e8d528ea92273c0844af81c67d10881e8992e9ff6b5348b2126700.jpg)
(a) All samples

![](images/3a7f7c30333a5195b822948514abeef01d5a9f5f6433ad561a9a68296b89e95e.jpg)
(b) Human sufficiency-accuracy $= 1$ (c) Human sufficiency-accuracy $= 0$

![](images/55b36a482d0c13e6a552c3a75f545c2c75ff0802e714d6dccbc31c60acb23e43.jpg)

![](images/010709602a35c3d61831b392cbf36bbfab331b68c627cfe4f637259edde50672.jpg)
Figure 4: Performance of corrupted rationales for FEVER. Model performance drops below baseline accuracy immediately both when dropping human rationales (i.e., recall $\downarrow$) and when adding non-rationale tokens (i.e., precision $\downarrow$). For HSA = 1, model performance remains consistently above baseline when adding non-rationale tokens (i.e., precision $\downarrow$).
(a) All samples
Figure 5: Performance of corrupted rationales for E-SNLI. Model performance for human rationales remains below baseline accuracy and slightly increases with the addition of non-rationale tokens (i.e., precision $\downarrow$). Even for HSA $= 1$, model performance drops below baseline accuracy at just $\sim 4\%$ corruption.
+ +![](images/2c8adc95304b984a03a5b8e7be57c71d27218018cecddc5184e270370e951dc7.jpg) +(b) Human sufficiency-accuracy $= 1$ (c) Human sufficiency-accuracy $= 0$ + +![](images/113824d964b81342c4817c72bb92a6b8fb48cc47e9c729b929a48e510acc7643.jpg) + +# C Rationale Perturbation for Adapted Models + +We perform the same perturbation analysis on a calibrated model trained on both full and rationalized inputs, for which distribution shift from masking is less of a concern. + +In Fig. 6, for MultiRC, we find that model performance plateaus with the addition of non-rationale tokens and drops quickly with the removal of rationale tokens, even for a calibrated model. This observation is consistent for FEVER (Fig. 7). + +For E-SNLI, we find different properties using a calibrated BERT model compared to the standard BERT model shown in Fig. 5a. + +In contrast to MultiRC and FEVER, we find that the model performance drops more rapidly with the addition of non-rationale tokens compared to the removal of rationale tokens. This is consistent for documents with HSA = 1, suggesting that for E-SNLI, rationale precision may be more important when using a calibrated model. Similar to FEVER, we see the model performance drop below the baseline with very little corruption of rationales, echoing the need to perfectly mimic human rationalization for effective learning from rationales on this dataset. + +# D GPT-3 Prompt + +We generate a zero-shot GPT-3 (Brown et al., 2020) explanation using the Davinci model variant on the OpenAI Playground, with a modified version of the prompt proposed by Wiegreffe et al. (2021): + +Let's explain classification decisions. + +A big dog catches a ball on his nose. + +question: A big dog is sitting down while trying to catch a ball. + +entailment, contradiction, or neutral? + +A second prompting step asking for an explanation is not needed, as GPT-3 gives its prediction in the form of a natural language explanation.
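The zero-shot setup above amounts to concatenating the premise, the hypothesis, and the label question into a single prompt. A minimal sketch of the prompt construction is below; the helper name is ours, and the commented-out completions call (engine name, parameters) is an assumption about the OpenAI API of that period rather than the authors' exact invocation.

```python
def build_nli_prompt(premise, hypothesis):
    """Assemble the zero-shot NLI prompt in the style of
    Wiegreffe et al. (2021), modified as in Appendix D."""
    return (
        "Let's explain classification decisions.\n"
        f"{premise}\n"
        f"question: {hypothesis}\n"
        "entailment, contradiction, or neutral?"
    )

prompt = build_nli_prompt(
    "A big dog catches a ball on his nose.",
    "A big dog is sitting down while trying to catch a ball.",
)
# The completion would then be requested from the Davinci engine,
# e.g. (hypothetical invocation, not from the paper):
# openai.Completion.create(engine="davinci", prompt=prompt, max_tokens=64)
```

Because the completion itself states the label inside an explanation, the single prompt serves as both classifier and explainer.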
+ +![](images/9df01b92f38b5ba6823dd2215c959f862ae7c8b57e0349100f34c48ec8b32991.jpg) +(a) All samples + +![](images/36718f1447a117fbb717a5db34b193c2c32cbb6b1b7e1aa599fb92ddbc8a8f78.jpg) +Figure 6: Performance of corrupted rationales for MultiRC using a calibrated model. Model performance decreases consistently when we drop human rationales (i.e., recall $\downarrow$ ), whereas the model performance stays high as we add non-rationale tokens (i.e., precision $\downarrow$ ). The impact of recall is moderated when HSA = 1. + +![](images/e8ca83984babaab1f6530915532c9f227f84974f5991e25bab0fd457367f3eab.jpg) +(b) Human sufficiency-accuracy $= 1$ (c) Human sufficiency-accuracy $= 0$ + +![](images/a0ea726fe99f4329f4fe2030e87b64315c6213a98889614893219f12916f9cd7.jpg) +(a) All samples + +![](images/f45d70970207bffb9f530931e727667abac39584c9710d54e70eb75cb0595f02.jpg) +Figure 7: Performance of corrupted rationales for FEVER using a calibrated model. Model performance decreases quickly when we drop human rationales (i.e., recall $\downarrow$ ), whereas the model performance remains above baseline as we add non-rationale tokens (i.e., precision $\downarrow$ ). + +![](images/fcfe30c2298280e3a5d9f67decb02d66b6f66dd38120c741112d7ca8bf30b06a.jpg) +(b) Human sufficiency-accuracy $= 1$ (c) Human sufficiency-accuracy $= 0$ + +![](images/c2f479fe57c8cbb3fad21fb93f47334192eacf4845c7db7bd8679c9803ab678b.jpg) +(a) All samples + +![](images/f721346a468b061df827aaa63e1cb25ae8797e35d4a9e337b8d9efa3a09ae175.jpg) +Figure 8: Performance of corrupted rationales for E-SNLI using a calibrated model. Model performance decreases quickly when we add non-rationale tokens (i.e., precision $\downarrow$ ), whereas the model performance drops less rapidly as we drop rationale tokens (i.e., recall $\downarrow$ ).
+ +![](images/b04bd4f619fac04a0b5fb53fc0c4b41a0f13e92c8917ccdb75bbd7d40ef1fb5f.jpg) +(b) Human sufficiency-accuracy $= 1$ (c) Human sufficiency-accuracy $= 0$ \ No newline at end of file diff --git a/whattolearnandhowtowardeffectivelearningfromrationales/images.zip b/whattolearnandhowtowardeffectivelearningfromrationales/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b810dfc192167963018d372e1f3b0d4239512b6f --- /dev/null +++ b/whattolearnandhowtowardeffectivelearningfromrationales/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce63919c5808d935b14430bd34f76232e0c1a261e1044a6cb4900e03209e1981 +size 974769 diff --git a/whattolearnandhowtowardeffectivelearningfromrationales/layout.json b/whattolearnandhowtowardeffectivelearningfromrationales/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ca493626643659bec08c759af77bb059a0126169 --- /dev/null +++ b/whattolearnandhowtowardeffectivelearningfromrationales/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f9a733306d46bc29bb11d6317d20c36e396f087d119ff2f1e70d572a194c2e1d +size 511061 diff --git a/whatworksanddoesntworkadeepdecoderforneuralmachinetranslation/18573deb-d0f8-465d-9b30-4e20d6c39e29_content_list.json b/whatworksanddoesntworkadeepdecoderforneuralmachinetranslation/18573deb-d0f8-465d-9b30-4e20d6c39e29_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..897cbf869b759107c49e142b59c5f7e63c8a30d6 --- /dev/null +++ b/whatworksanddoesntworkadeepdecoderforneuralmachinetranslation/18573deb-d0f8-465d-9b30-4e20d6c39e29_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:20ac2f36d167851586ab1fa9cf9da8fe85c5af8470215707aebc93de4929575f +size 90019 diff --git a/whatworksanddoesntworkadeepdecoderforneuralmachinetranslation/18573deb-d0f8-465d-9b30-4e20d6c39e29_model.json 
b/whatworksanddoesntworkadeepdecoderforneuralmachinetranslation/18573deb-d0f8-465d-9b30-4e20d6c39e29_model.json new file mode 100644 index 0000000000000000000000000000000000000000..2eda0c97c9ac1259d64117e37bc62e045b3a03fd --- /dev/null +++ b/whatworksanddoesntworkadeepdecoderforneuralmachinetranslation/18573deb-d0f8-465d-9b30-4e20d6c39e29_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:68325b608247f0a50e74e158f17655123bccb6c314340a6c1ece598099686a3a +size 108890 diff --git a/whatworksanddoesntworkadeepdecoderforneuralmachinetranslation/18573deb-d0f8-465d-9b30-4e20d6c39e29_origin.pdf b/whatworksanddoesntworkadeepdecoderforneuralmachinetranslation/18573deb-d0f8-465d-9b30-4e20d6c39e29_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..9a68d3fbb4dfefbb57de7bb8373dacfc5e71f385 --- /dev/null +++ b/whatworksanddoesntworkadeepdecoderforneuralmachinetranslation/18573deb-d0f8-465d-9b30-4e20d6c39e29_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f639dd18ba26fdb137ca6dad9691054a31fd39d6ee85a0db82f4480fe0fed546 +size 489198 diff --git a/whatworksanddoesntworkadeepdecoderforneuralmachinetranslation/full.md b/whatworksanddoesntworkadeepdecoderforneuralmachinetranslation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..81eb4381753dac812741099ac8f4390c470552f7 --- /dev/null +++ b/whatworksanddoesntworkadeepdecoderforneuralmachinetranslation/full.md @@ -0,0 +1,301 @@ +# What Works and Doesn't Work, A Deep Decoder for Neural Machine Translation + +Zuchao Li $^{1}$ , Yiran Wang $^{2}$ , Masao Utiyama $^{2,*}$ , Eiichiro Sumita $^{2}$ , Hai Zhao $^{1*}$ , and Taro Watanabe $^{3}$ + +$^{1}$ Shanghai Jiao Tong University (SJTU), Shanghai, China + +$^{2}$ National Institute of Information and Communications Technology (NICT), Kyoto, Japan + +$^{3}$ Nara Institute of Science and Technology (NAIST), Nara, Japan + +charlee@sjtu.edu.cn, 
{yiran.wang,mutiyama}@nict.go.jp, + +eiichiro-sumita@nict.go.jp, zhaohai@cs.sjtu.edu.cn, taro@is.naist.jp + +# Abstract + +Deep learning has demonstrated performance advantages in a wide range of natural language processing tasks, including neural machine translation (NMT). Transformer NMT models are typically strengthened by deeper encoder layers, but deepening their decoder layers usually results in failure. In this paper, we first identify the cause of the failure of the deep decoder in the Transformer model. Inspired by this discovery, we then propose approaches to improving it, with respect to model structure and model training, to make the deep decoder practical in NMT. Specifically, with respect to model structure, we propose a cross-attention drop mechanism to allow the decoder layers to perform their own different roles, to reduce the difficulty of deep-decoder learning. For model training, we propose a collapse reducing training approach to improve the stability and effectiveness of deep-decoder training. We experimentally evaluated our proposed Transformer NMT model structure modification and novel training methods on several popular machine translation benchmarks. The results showed that our approaches make it possible to deepen the NMT model by increasing the number of decoder layers while preventing the deepened decoder from degrading to an unconditional language model. In contrast to prior work that deepens an NMT model only on the encoder side, our method can deepen the model on both the encoder and decoder at the same time, resulting in a deeper model and improved performance. + +# 1 Introduction + +With the help of the deep neural network, the feature extraction capability of models has been substantially enhanced (Schmidhuber, 2015; LeCun et al., 2015). Deep neural network models are also popular for natural language processing (NLP) tasks.
The most typical deep neural network models in NLP are based on the convolutional neural network (CNN) (Gehring et al., 2017) and Transformer (Vaswani et al., 2017) structures, and deep pretrained Transformer language models have begun to dominate NLP. The deep neural network model has also attracted substantial interest in neural machine translation (NMT), for both theoretical research (Wang et al., 2019; Li et al., 2020a, 2021a; Kong et al., 2021) and competition evaluation (Zhang et al., 2020; Wu et al., 2020b,a; Meng et al., 2020). Because it has been demonstrated that deep neural network models can benefit from an enriched representation, deep NMT models also show advantages with respect to translation performance (Wu et al., 2019; Wei et al., 2020). + +Although deep models have been extensively studied in machine translation and are frequently used to improve translation performance, almost all work on deepening models has focused on increasing the number of encoder layers; there has been very little research on deepening the decoder. Through preliminary experiments on varying the number of decoder layers in the Transformer NMT model, we observed that, when the decoder is deepened beyond a certain number of layers, the translation performance of the overall model fails to improve; moreover, it declines rapidly to near zero. This demonstrates that there are flaws in the current structure or training method, and that the deep-decoder NMT model cannot be trained as-is. + +By analyzing the training process of the deep-decoder model, we found that the training perplexity of the model was relatively low, but the translation performance of the obtained model was much worse than that of a shallow model. Inspired by this phenomenon, we hypothesize that, as the decoder deepens, the model may increasingly ignore the source inputs and degenerate to an unconditional language model, even though a low perplexity can be obtained on the training set.
In this case, the purpose of translation learning is not achieved, and thus the model training fails. + +According to our hypotheses, preventing the decoder from degenerating to an unconditional language model is the key to overcoming the failure of deep-decoder NMT model training. Consequently, we propose two aspects of model improvement: model structure and model training. In model structure, the only difference between the decoder of the NMT model and that of the unconditional language model is cross-attention; therefore, we focus mainly on this structure. In model training, we aim to make the decoder output distant from the output of the unconditional language model, to avoid fitting the target sentences while ignoring the source inputs in the training dataset. + +Specifically, we propose a cross-attention drop (CAD) mechanism for the deep-decoder layer structure. This mechanism stems from our suspicion that the degeneration of the deep decoder to an unconditional language model was caused by training difficulties resulting from too many cross-attention sublayers. Because the purpose of cross-attention is to force the decoder layer to obtain features from the source representation, the different layers in the deep decoder should perform distinct roles. However, the conventional deep decoder requires each layer to extract source features similarly, thus increasing the training difficulty. As a result, to minimize training loss, the model chooses to memorize the training target sentences directly and ignore the source inputs. In this mechanism, we drop the cross-attention in some decoder layers to lower the overall training difficulty, thereby preventing the failure of deep-decoder training.
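The CAD mechanism amounts to choosing, per decoder layer, whether the cross-attention sublayer is kept. A minimal sketch of such a layer-selection helper, assuming a hypothetical every-k-th-layer keep pattern (the paper does not commit to a specific pattern at this point):

```python
def cad_pattern(num_layers, keep_every=2):
    """Return a per-layer flag: True if the decoder layer keeps its
    cross-attention sublayer, False if cross-attention is dropped.
    Dropped layers reduce to self-attention + feed-forward only, so
    fewer layers must extract source features, which lowers the overall
    training difficulty of a deep decoder (the CAD intuition).
    The every-k-th keep pattern is an assumption for illustration."""
    return [(i + 1) % keep_every == 0 for i in range(num_layers)]

# e.g., a 12-layer decoder keeping cross-attention in every 2nd layer:
pattern = cad_pattern(12, keep_every=2)  # 6 of 12 layers keep cross-attention
```

At construction time, a decoder would then instantiate the cross-attention sublayer only for the layers flagged `True`.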
In addition to structural changes, we also propose a decoder dropout regularization (DDR) loss and an anti-LM-degradation (ALD) loss for joint model optimization, based on contrastive learning; these increase the stability of deep-decoder NMT model training and avoid degeneration to an unconditional language model. + +Our experiments were conducted mainly on two popular machine translation benchmarks: WMT14 English-to-German and English-to-French. The results of the experimental exploration of decoders with different depths show that a successfully trained deep decoder greatly benefits the overall translation performance and can work with a deep encoder to achieve even higher translation quality. Moreover, the novel training approaches that we propose both increase the stability of the training of the deep-decoder model and enable additional improvements. + +# 2 Related Work + +# 2.1 Deep NMT Model + +In computer vision tasks, it has been found that increasing the depth of convolutional neural networks can significantly increase the performance (He et al., 2016). As deep neural networks have become widely used in NLP tasks, machine translation tasks have also incorporated deep neural networks for modeling, using an encoder-decoder architecture based on a recurrent neural network (RNN) (Sutskever et al., 2014; Bahdanau et al., 2015). Since the emergence of the Transformer-based model (Vaswani et al., 2017), the deep model has become the mainstream baseline model for machine translation (Li et al., 2021d). The Transformer NMT model employs a deeper architecture than the RNN-based model, with six encoder layers and six decoder layers. During the same time period, Gehring et al. (2017) introduced an encoder-decoder architecture wholly based on CNNs, which increased both the number of encoder layers and the number of decoder layers to 20.
In addition to structural design, unsupervised learning has also become another important branch of NMT (Lample et al., 2018; Li et al., 2019a, 2020b, 2021c; Nguyen et al., 2021). + +Because greater model capacity has the potential to contribute significantly to quality improvement (Zhang et al., 2019b; Li et al., 2019b; Parnow et al., 2021), deepening a model is regarded as a good method of boosting the capacity of the model with the same architecture. It has been shown that deeper networks extract more expressive features (Mhaskar et al., 2016; Telgarsky, 2016; Eldan and Shamir, 2016), which has resulted in improved performance for vision tasks (He et al., 2016; Srivastava et al., 2015) over the past few years. In Transformer NMT models, there have also been numerous studies on deepening the model for better performance. Bapna et al. (2018) took the first step toward training extraordinarily deep models by deepening the encoders for translation, but discovered that simply increasing the encoder depth of a basic Transformer model was insufficient. Because of the difficulty of training, models utterly failed to learn. Transparent attention has also been proposed to regulate deep-encoder gradients; this eases the optimization of deeper models and results in consistent gains with a 16-layer Transformer encoder. + +Following research on deepening the encoder to obtain a deep NMT model, as in Bapna et al. (2018), Wu et al. (2019) proposed a two-stage training strategy with three special model structural designs for constructing deep NMT models with eight encoder layers. Wang et al. (2019) proposed a dynamic linear combination mechanism and successfully trained a Transformer model with a 30-layer encoder, with the proposed mechanism shortening the path from upper-level layers to lower-level layers to prevent the gradient from vanishing or exploding. Zhang et al.
(2019a) proposed a depth-scale initialization for improving norm preservation and a merged attention sublayer that integrates a simplified average-based self-attention sublayer into the cross-attention module. Fan et al. (2020) employed a layer-drop mechanism to train a 12-6 Transformer NMT model and pruned subnetworks during inference without fine-tuning. More recently, Wei et al. (2020) proposed to attend the decoder to multigranular source information at different scales, thereby boosting the training of very deep encoders without special training strategies. Li et al. (2020a) developed a shallow-to-deep training strategy and employed sparse connections across blocks to successfully train a 48-layer encoder model. Kong et al. (2021) studied using deep-encoder and shallow-decoder models to improve decoding speed while maintaining high translation quality. Most of these related studies focused on deepening the encoder for deep NMT models, whereas there have been very few studies on deepening the decoder. Herein lies the most significant dissimilarity between our work and this related work. + +# 2.2 Contrastive Learning in NLP + +Contrastive learning (Hadsell et al., 2006) is an effective approach to learning and is usually used for unsupervised learning because of its unique characteristics. It has achieved significant success in various computer vision tasks (Misra and van der Maaten, 2020; Zhuang et al., 2019; Tian et al., 2020; He et al., 2020; Chen et al., 2020). Gao et al. (2021) introduced a simple contrastive learning framework for unsupervised learning of sentence embeddings, which performed as well as previous supervised approaches. Wu et al. (2020c) employed multiple sentence-level augmentation strategies (such as word and span deletion, reordering, and substitution) with a sentence-level contrastive learning objective to pretrain a language model for a noise-invariant sentence representation. Fang et al.
(2020) pretrained language representation models using contrastive self-supervised learning at the sentence level by predicting whether two back-translated sentences originate from the same sentence. In Giorgi et al. (2021), a universal sentence embedding encoder was trained with a self-supervised contrastive objective to minimize the distance between the embeddings of textual segments randomly sampled from nearby locations in the same document. Pan et al. (2021) demonstrated the effectiveness of contrastive learning in NMT, particularly for the zero-shot machine translation situation. Current contrastive learning for NMT primarily employs cross-lingual representation similarity, whereas we aim to prevent the outputs of the deep decoder and the unconditional language model from becoming too similar, thus preventing degradation. Li et al. (2021b) presented a contrastive learning-reinforced domain adaptation approach for NMT. Part of our method is similar in purpose to that of Miao et al. (2021), but theirs is designed to keep the NMT model from becoming over-confident, whereas ours tackles the problem of the deep decoder collapsing into an unconditional language model.
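The contrastive intuition described above, keeping the conditional decoder's output distribution away from an unconditional language model's, can be illustrated with a minimal sketch. The hinge form, the choice of KL divergence, the margin value, and the function names are illustrative assumptions, not the paper's DDR/ALD definitions:

```python
import math

def kl_div(p, q, eps=1e-12):
    """KL(p || q) between two next-token probability distributions,
    given as equal-length lists of probabilities."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def anti_lm_margin_loss(p_decoder, p_uncond_lm, margin=1.0):
    """Hinge loss that is zero once the decoder's distribution is at
    least `margin` nats of KL away from the unconditional LM's;
    otherwise it penalizes the similarity, discouraging the decoder
    from collapsing into the unconditional language model."""
    return max(0.0, margin - kl_div(p_decoder, p_uncond_lm))
```

When the two distributions coincide the loss equals the full margin, and it vanishes once the decoder is conditioned strongly enough on the source that its predictions diverge from the LM's.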
+ +# 3 Our Method + +Given bilingual parallel sentences $\langle \mathbf{X},\mathbf{Y}\rangle$ , the NMT model learns a set of parameters $\Theta$ by maximizing the likelihood of the target sentence, i.e., minimizing the negative log-likelihood loss $\mathcal{J}_{\mathrm{NLL}}(\mathbf{Y}|\mathbf{X};\Theta)$ , where the likelihood is the product of the conditional probabilities of all target words: + +$$
\begin{array}{l} \mathcal{J}_{\mathrm{NLL}}(\mathbf{Y}|\mathbf{X};\boldsymbol{\Theta}) = -\log \prod_{i=1}^{|\mathbf{Y}|} P\left(\mathrm{Y}_{i} \mid \mathbf{Y}_{<i}, \mathbf{X}; \boldsymbol{\Theta}\right) \tag{1} \\ = -\sum_{i=1}^{|\mathbf{Y}|} \log P\left(\mathrm{Y}_{i} \mid \mathbf{Y}_{<i}, \mathbf{X}; \boldsymbol{\Theta}\right), \end{array}
$$ + +where $|\mathbf{Y}|$ represents the sequence length of $\mathbf{Y}$ , $\mathrm{Y}_i$ represents the $i$ -th token of sequence $\mathbf{Y}$ , and $\mathbf{Y}_{