diff --git a/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/200b215b-a02d-42ad-b7b9-f094eb49facd_content_list.json b/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/200b215b-a02d-42ad-b7b9-f094eb49facd_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..797a048db31c61e97782d59d6d4bf542a978e3a1 --- /dev/null +++ b/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/200b215b-a02d-42ad-b7b9-f094eb49facd_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1131b752f73ca4c843460ffe6309290e1d6f702d77878be610f443594feffa36 +size 86985 diff --git a/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/200b215b-a02d-42ad-b7b9-f094eb49facd_model.json b/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/200b215b-a02d-42ad-b7b9-f094eb49facd_model.json new file mode 100644 index 0000000000000000000000000000000000000000..609b2b1dce49f7804bf64c725fa81eb07c2d6265 --- /dev/null +++ b/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/200b215b-a02d-42ad-b7b9-f094eb49facd_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:af5916f3684aa5a664fb97b0e2087a2ecc565822c9566739183c56adbbfc8128 +size 100252 diff --git a/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/200b215b-a02d-42ad-b7b9-f094eb49facd_origin.pdf b/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/200b215b-a02d-42ad-b7b9-f094eb49facd_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..52d26cb4be98b26810c4f852c013086775bb6bd4 --- /dev/null +++ b/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/200b215b-a02d-42ad-b7b9-f094eb49facd_origin.pdf @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:6da46688ad6aa068062ebbc003fcd042331d00374164094ec61aa6f7481bfb89 +size 403100 diff --git a/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/full.md b/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/full.md new file mode 100644 index 0000000000000000000000000000000000000000..f11e9b5277b8ebac45567588e11176d6cc91a8af --- /dev/null +++ b/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/full.md @@ -0,0 +1,420 @@

# A BERT-based Distractor Generation Scheme with Multi-tasking and Negative Answer Training Strategies

Ho-Lam Chung $^{1}$ , Ying-Hong Chan $^{2}$ , Yao-Chung Fan $^{3}$

Department of Computer Science and Engineering

National Chung Hsing University,

Taichung, Taiwan

$^{1}$ holam.chung@protonmail.com

$^{2}$ harry831120@gmail.com

$^{3}$ yfan@nchu.edu.tw

# Abstract

In this paper, we investigate the following two limitations of existing distractor generation (DG) methods. First, the quality of the existing DG methods is still far from practical use; there is still considerable room for DG quality improvement. Second, the existing DG designs are mainly for single distractor generation, whereas for practical MCQ preparation, multiple distractors are desired. Aiming at these goals, we present a new distractor generation scheme with multi-tasking and negative answer training strategies for effectively generating multiple distractors. The experimental results show that (1) our model advances the state-of-the-art result from 28.65 to 39.81 (BLEU 1 score) and (2) the generated multiple distractors are diverse and show strong distracting power for multiple-choice questions.
# 1 Introduction

Given a passage, a question, and an answer phrase, the goal of distractor generation (DG) is to generate context-related wrong options (i.e., distractors) for multiple-choice questions (MCQ). Pioneering studies (Gao et al., 2019; Yeung et al., 2019; Zhou et al., 2019) have demonstrated the feasibility of generating distractors based on deep learning techniques.

While significant advances for DG have been reported in the literature, we find that the existing DG results are still far from practical use. In this paper, we investigate the following two issues for distractor generation: (1) DG quality improvement and (2) multiple distractor generation.

DG Quality Improvement There is still room for improvement toward high-quality distractor generation. By manually examining the DG results generated by the existing methods, we find that the results are still far from ideal for practical use. Thus, one goal of our research is to improve the DG quality further.

# Example 1

Context Omitted. (See Appendix)

Question

- Why did Mr. King want to send Henry away?

Answer

- Because Henry was too lazy.

Gen. Distractors

- $d_{1}$ : Because Henry didn't want to go.
- $d_{2}$ : Because Henry didn't want to go to the bookstore.

# Example 2

Context Omitted. (See Appendix)

Question

- Which of the following women would look most attractive?

Answer

- A short red-haired woman who wears a purple hat.

Gen. Distractors

- $d_{1}$ : A young woman who wears a white hat.
- $d_{2}$ : A woman who wears a white hat.

Table 1: Two examples showing the issue of generating multiple distractors by a simple beam search. Note that the generated distractors (i.e., $d_{1}$ and $d_{2}$ ) are essentially the same statement with only slight differences in word usage. Such results lower the distracting power of the prepared MCQ.

For the quality issue, in this paper, we explore the employment of the BERT model for performance improvement.
As known, employing transformer-based language models has been shown to be useful for improving NLP tasks. Thus, we investigate the BERT model's application to DG and report our design in this paper.

Multiple Distractor Generation The existing DG methods mainly focus on single distractor generation. However, for practical MCQ preparation, multiple distractors are desired. For more than one distractor, the existing practice is to keep multiple results given by a beam search strategy. However, we find that in many cases the generated distractors all refer to the same concept/thing. In fact, the generated distractors are all decoded from the same latent representation, which raises the concern that they might be semantically similar. In Table 1, we show two DG examples of this problem. In the illustrated examples, one can observe that the generated distractors are essentially the same statement with only a slight difference in word usage. Such results lower the distracting power for MCQ preparation.

To address this limitation, we propose to view the multiple distractor generation/selection problem as a coverage problem, rather than individually selecting the top- $k$ distractors based on prediction probability. In other words, we propose to choose a distractor set that maximizes the difficulty of the multiple-choice question, rather than individually picking results with the highest probability but with similar semantics.

The contributions of this paper are (1) a new DG model based on the employment of the BERT model; the experimental evaluation on benchmark datasets shows that our model outperforms the existing best models (Zhou et al., 2019) and pushes the state-of-the-art result from 28.65 to 39.81 (BLEU 1 score); (2) an investigation of employing a multiple-choice question answering task to evaluate DG performance; and (3) an investigation of the multiple distractor generation problem as a coverage problem.
The experiment results demonstrate that the generated multiple distractors are diverse and show strong distracting power for multiple-choice questions.

The rest of this paper is organized as follows. In Section 2, we introduce our model design for single distractor generation. In Section 3, we introduce our multiple distractor schemes and the incorporation of a question answering model for distractor selection. In Section 4, we report the results of the performance analysis. In Section 5, we review the literature related to this work. Finally, Section 6 concludes our study and discusses future work.

# 2 BERT Distractor Generation

# 2.1 BERT Model Review

The BERT model and its family (Liu et al., 2019; Lan et al., 2019) are composed of a stack of multi-layer bidirectional Transformer encoders. The input to a BERT model is a sequence of tokens. For a given token, its input representation to the BERT model is first constructed by summing the corresponding token, segment, and position embeddings. The input embeddings then travel through the pre-trained/fine-tuned BERT for task learning and prediction. In general, BERT can be employed in two levels of language modeling tasks: sequence-level classification and token-level prediction. For these tasks, there are three special tokens, [C], [S], and [M]. The embedding of the [C] token is designed to be used as the aggregate sequence representation for classification tasks. The [S] token is designed to distinguish the different sentences of a token sequence (to signal information from multiple sentences, as the input token sequence can be a pack of multiple sentences). On the other hand, the [M] token is designed to be used in token-level prediction (e.g., predicting a masked token based on context words, or predicting the starting/ending probabilities for span-based tasks such as QA tasks).
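The input packing described above can be illustrated with a toy sketch (illustrative names only, not the paper's code): each position carries a token, a segment id, and a position id, whose embeddings BERT sums into one input representation. Here `[CLS]`/`[SEP]`/`[MASK]` correspond to the [C]/[S]/[M] tokens in the text.

```python
# Toy sketch of BERT-style input packing (illustrative, not the paper's code):
# each position gets a token, a segment id, and a position id; their
# embeddings are summed to form the model input.

def pack_inputs(sentence_a, sentence_b):
    """Pack two token lists into one BERT-style sequence with segment ids."""
    tokens = ["[CLS]"] + sentence_a + ["[SEP]"] + sentence_b + ["[SEP]"]
    # segment 0 covers [CLS], the first sentence, and its [SEP];
    # segment 1 covers the second sentence and the final [SEP]
    segments = [0] * (len(sentence_a) + 2) + [1] * (len(sentence_b) + 1)
    positions = list(range(len(tokens)))
    return tokens, segments, positions

tokens, segments, positions = pack_inputs(
    ["why", "did", "he", "go"], ["because", "[MASK]"])
```

In an actual model, the `[MASK]` position's final hidden state would be projected to the vocabulary to predict the missing token, as described next.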
As reported in (Chan and Fan, 2019; Dong et al., 2019), BERT is essentially an auto-encoder language modeling design, which aims to reconstruct the original data from corrupted inputs. If BERT is asked to predict a sequence of consecutive masked tokens, it often produces incoherent and rambling results. For example, when using BERT to predict three consecutive masked tokens [M] [M] [M], the same prediction result for all three tokens is often observed. This is because the context (the information for predicting the tokens) for the masked tokens is nearly the same except for the position embedding, making the generated sentences incoherent. Thus, we take the previously decoded results into consideration when decoding the next distractor token, as introduced in the next subsection.

# 2.2 BERT-based Distractor Generation (BDG)

In a distractor generation scenario, there are three given inputs: (1) a paragraph $P$ , (2) an answer $A$ , and (3) a question $Q$ . For ease of discussion, let $C$ (referred to as a context sequence) denote the sequence of tokens given by concatenating $P$ , $Q$ , and $A$ .

Our BDG model generates distractor tokens in an auto-regressive manner. Specifically, the BDG model predicts one token at a time based on (1) the given context sequence $C$ and (2) the previously predicted distractor tokens. The BDG model takes multiple iterations to generate a distractor. In Table 2, we show a running example of the BDG model. Note that our model predicts a token based
| Iter. | Input Sequence | Predict |
| --- | --- | --- |
| 1 | [C] C [S] [M] | Because |
| 2 | [C] C [S] Because [M] | Henry |
| 3 | [C] C [S] Because Henry [M] | didn't |
| 4 | [C] C [S] Because Henry didn't [M] | want |
| 5 | [C] C [S] Because Henry didn't want [M] | to |
| 6 | [C] C [S] Because Henry didn't want to [M] | go |
| 7 | [C] C [S] Because Henry didn't want to go [M] | . |
| 8 | [C] C [S] Because Henry didn't want to go. [M] | [S] |
Table 2: A Running Example for the BDG scheme

on $C$ and the previously generated tokens at each iteration. For example, at Iteration 1, we generate "Because" based on $C$ . At Iteration 2, we generate "Henry" based on $C$ and the token "Because", and at Iteration 3, we generate "didn't" based on $C$ , "Because", and "Henry". The generation terminates when [S] is predicted. In this example, "Because Henry didn't want to go." is the final generated result.

Specifically, the input sequence $X_{i}$ at Iteration $i$ to BERT is

$$
X_{i} = \left([\mathrm{C}], C, [\mathrm{S}], \hat{d}_{1}, \dots, \hat{d}_{i}, [\mathrm{M}]\right)
$$

Let $\mathbf{h}_{[\mathrm{M}]} \in \mathbb{R}^{h}$ denote the hidden representation of [M] in $X_{i}$ returned by the BERT transformer stacks. The prediction of $\hat{d}_{i+1}$ is given by a linear layer transformation $\mathbf{W}_{\mathrm{DG}} \in \mathbb{R}^{h \times |V|}$ followed by a softmax activation over the vocabulary dimension as follows.

$$
p(w \mid X_{i}) = \operatorname{softmax}\left(\mathbf{h}_{[\mathrm{M}]} \cdot \mathbf{W}_{\mathrm{DG}} + \mathbf{b}_{\mathrm{DG}}\right)
$$

$$
\hat{d}_{i+1} = \operatorname*{argmax}_{w}\, p(w \mid X_{i})
$$

Subsequently, the newly generated token $\hat{d}_{i+1}$ is appended to form $X_{i+1}$ , and the distractor generation process is repeated on the new $X_{i+1}$ until [S] is predicted. Our loss function is as follows.

$$
\underset{\theta}{\operatorname{minimize}}\; -\sum_{\forall (C, D)} \sum_{i=0}^{|D|} \log_{2} p\left(d_{i+1} \mid C, d_{1:i}; \theta\right)
$$

# 2.3 Multi-task with Parallel MLM

From the experiment results (presented in a later section), we see that the BDG model advances the state-of-the-art result (Zhou et al., 2019) from 28.65 to 35.30 (BLEU 1 score).
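The iterative decoding illustrated in Table 2 can be written as a simple greedy loop. Below is a minimal sketch in which a toy predictor stands in for BERT's masked-token head; all names are illustrative, not the paper's implementation.

```python
# Minimal sketch of the BDG decoding loop of Table 2 (illustrative names;
# a toy predictor stands in for BERT's masked-token prediction head).

def generate_distractor(context, predict_masked, max_len=20):
    """Greedy auto-regressive decoding: re-encode the context plus the
    tokens decoded so far with a single [M] slot, take the predicted token
    for [M], and stop when [S] is predicted."""
    decoded = []
    for _ in range(max_len):
        x = ["[C]"] + context + ["[S]"] + decoded + ["[M]"]
        token = predict_masked(x)   # argmax over the vocabulary for [M]
        if token == "[S]":
            break
        decoded.append(token)
    return decoded

# Toy predictor that replays the distractor of the running example.
context = ["why", "did", "Mr.", "King", "send", "Henry", "away", "?"]
target = ["Because", "Henry", "didn't", "want", "to", "go", "."]

def toy_predictor(x):
    i = len(x) - len(context) - 3   # number of tokens decoded so far
    return target[i] if i < len(target) else "[S]"

# Reconstructs ["Because", "Henry", "didn't", "want", "to", "go", "."]
distractor = generate_distractor(context, toy_predictor)
```

The key point the sketch makes concrete is that the model is re-run once per output token, with exactly one [M] slot at the end, unlike the parallel masked prediction discussed next.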
While the token-level evaluation result looks promising, we find that the generation results still have room for improvement.

For performance improvement, we first propose to jointly train BDG and a parallel MLM (P-MLM) architecture for distractor generation to enhance the quality of BDG. The P-MLM scheme for generating distractors is structured as follows.

For a given context $C$ , the input sequence $X$ to the P-MLM model is formulated as

$$
X = \big([\mathrm{C}], C, [\mathrm{S}], [\mathrm{M}]_{d_{1}}, [\mathrm{M}]_{d_{2}}, \dots, [\mathrm{M}]_{d_{|D|}}\big)
$$

Let $\mathbf{h}_{[\mathrm{M}]_{d_{i}}} \in \mathbb{R}^{h}$ denote the hidden representation of $[\mathrm{M}]_{d_{i}}$ in $X$ returned by the BERT transformer stacks. The prediction of $\hat{d}_{i}$ is given by a linear layer transformation $\mathbf{W}_{\mathrm{P\text{-}MLM}} \in \mathbb{R}^{h \times |V|}$ followed by a softmax activation over the vocabulary dimension as follows.

$$
p(w \mid X) = \operatorname{softmax}\left(\mathbf{h}_{[\mathrm{M}]_{d_{i}}} \cdot \mathbf{W}_{\mathrm{P\text{-}MLM}} + \mathbf{b}_{\mathrm{P\text{-}MLM}}\right)
$$

$$
\hat{d}_{i} = \operatorname*{argmax}_{w}\, p(w \mid X)
$$

The loss function for P-MLM is

$$
\underset{\theta}{\operatorname{minimize}}\; -\sum_{\forall (C, D)} \phi_{\mathrm{P\text{-}MLM}}(C, D)
$$

$$
\phi_{\mathrm{P\text{-}MLM}}(C, D) = \sum_{\forall d_{i}} \log_{2} p\left(d_{i} \mid C, [\mathrm{M}]_{d_{i}}; \theta\right)
$$

We propose to jointly train P-MLM and BDG with the following multi-tasking loss function. Note that $\gamma$ is a hyper-parameter controlling the weighting between the two tasks (see Subsection 4.6 for the effect of the $\gamma$ value).
$$
\underset{\theta}{\operatorname{minimize}}\; -\sum_{\forall (C, D)} \left[\phi_{\mathrm{BDG}}(C, D) + \gamma \cdot \phi_{\mathrm{P\text{-}MLM}}(C, D)\right]
$$

$$
\phi_{\mathrm{BDG}}(C, D) = \sum_{i=0}^{|D|} \log_{2} p\left(d_{i+1} \mid C, d_{1:i}; \theta\right)
$$

The multi-task design is motivated by the following observations. First, as mentioned, we target
| | P.M. | Gold |
| --- | --- | --- |
| # of cases on BLEU 1 | 57 | 12 |
| # of cases on BLEU 2 | 55 | 4 |
| # of cases on BLEU 3 | 48 | 0 |
| # of cases on BLEU 4 | 35 | 0 |
| # of cases on ROUGE-L | 55 | 1 |
Table 3: Answer Copying Problem on P.M.

at learning distractor generation from real reading comprehension examinations (RACE-like MCQ), and we find that many questions in the RACE dataset are summary-oriented; many questions ask "what is the best title for this passage?" or "what is this passage about?" Such questions require the model to have the capability of passage semantic summarization. While the original BDG scheme successfully generates fluent sentences, we find that it may over-fit in sentence writing and under-fit in learning the passage semantic understanding capability. Note that the sequential-MLM design (BDG) is essentially a one-by-one masked token prediction architecture. Such a method may over-focus on the guess of a single token and ignore the overall semantics. Thus, we propose to incorporate the multi-task learning setting to prevent the potential over-fitting problem. From the experiments, we find that the multi-task learning setting indeed improves the quality of distractor generation.

# 2.4 Answer Negative Regularization

In addition to the multi-task design, from examining the DG results we observe that in many cases there is an answer copying problem: the generated distractors are similar to the given answers. To quantify this phenomenon, we count such cases. In Table 3, we show the number of cases in which the generated distractor $\hat{D}$ has a token-level similarity score greater than 0.95 with respect to the answer $A$ . We also show the counts for the gold distractors (the human-invented distractors from the RACE dataset). As the comparison in Table 3 shows, there is a significant gap between the human-invented distractors and the model-generated ones.

Motivated by the answer copying problem, we propose to incorporate a loss (referred to as the answer negative loss) to discourage predicting tokens in $A$ when predicting $\hat{d}_{i}$ .
With the answer negative loss,

![](images/592683ed1bf58ba722f408779378e6854876747c99db128114dc8f33fc5680e6.jpg)
Figure 1: The Multi-tasking Architecture

our loss function for BDG is as follows.

$$
\underset{\theta}{\operatorname{minimize}}\; -\sum_{\forall (C, D)} \left(\phi_{\mathrm{AN}}(C, D) + \gamma \cdot \phi_{\mathrm{P\text{-}MLM}}(C, D)\right)
$$

$$
\phi_{\mathrm{AN}}(C, D) = \sum_{i=0}^{|D|} \Big(\log_{2} p\left(d_{i+1} \mid C, d_{1:i}; \theta\right) + \sum_{\forall a_{j} \in A} \log_{2}\big(1 - p\left(a_{j} \mid C, [\mathrm{M}]_{a_{j}}; \theta\right)\big)\Big) \tag{1}
$$

The answer negative loss is motivated by the expectation that the generated distractor $\hat{D}$ should use words different from those in $A$ .

The overall architecture for training our BDG model is shown in Figure 1. The core structure of our distractor generation is mainly based on the sequential recurrent MLM decoding mechanism; that is, during the testing stage, we use the results from the sequential recurrent MLM decoding part. During the training stage, however, we incorporate the parallel MLM decoding mechanism by jointly considering answer negative regularization and the sentence-level distractor loss, as shown in the right part of the architecture in Figure 1.

# 3 Multiple Distractor Generation

# 3.1 Selecting Distractors by Entropy Maximization

As mentioned, another point that can be improved for DG is that the existing methods mainly focus on single distractor generation. For obtaining more than one distractor, the existing practice is to select the results on different beam search paths as multiple options, which lowers the power of distracting a reader in MCQ preparation.

Our viewpoint is to select a distractor set (by considering semantic diversity) rather than individually selecting top- $k$ distractors based on prediction probability.
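This selection-as-coverage idea can be sketched concretely: score every candidate distractor triple by the entropy of the four-option answer distribution produced by a reading comprehension scorer, and keep the highest-entropy set. The sketch below stubs out that scorer with a toy function; all names are illustrative, not the paper's code.

```python
# Sketch of distractor-set selection by entropy maximization (illustrative
# names; the MRC scorer is a toy stub, not a trained model).
import itertools
import math

def entropy(probs):
    """Shannon entropy (base 2) of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def select_triple(candidates, option_probs):
    """Pick the distractor triple whose option set yields the most
    uncertain (highest-entropy) answer distribution."""
    return max(itertools.combinations(candidates, 3),
               key=lambda triple: entropy(option_probs(triple)))

# Toy stand-in for the MRC model: near-duplicate distractors leave most of
# the probability mass on the answer; a diverse set spreads it out.
def toy_option_probs(triple):
    if "d4" in triple:
        return [0.25, 0.25, 0.25, 0.25]   # answer + triple, maximally confusing
    return [0.7, 0.1, 0.1, 0.1]

best = select_triple(["d1", "d2", "d3", "d4"], toy_option_probs)
```

Under this toy scorer, any triple containing the diverse candidate `"d4"` wins, since a uniform four-option distribution attains the maximum entropy of 2 bits.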
Based on this view, we propose to incorporate a multi-choice reading comprehension (MRC) model for ranking/selecting distractor sets. First, let $\mathbb{M}_{\mathrm{MRC}}$ be an MRC model. Note that $\mathbb{M}_{\mathrm{MRC}}$ takes a passage $P$ , a question $Q$ , and a set of options (including an answer $A$ and distractors $D_{1}, D_{2}, \ldots, D_{n}$ ) as input and outputs $[p_{A}, p_{D_{1}}, \ldots, p_{D_{n}}]$ as the answer probabilities of the options. $\mathbb{M}_{\mathrm{MRC}}$ is trained by maximizing the answer probability $p_{A}$ while minimizing the probabilities $[p_{D_{1}}, \ldots, p_{D_{n}}]$ .

With $\mathbb{M}_{\mathrm{MRC}}$ , our idea is as follows. First, let $\mathbb{M}_{\mathrm{BDG}}$ be a BDG model for distractor generation. Also, let $\hat{D} = \{\hat{d}_1, \hat{d}_2, \dots, \hat{d}_n\}$ be the set of distractors generated by the BDG model. In a common MCQ setting, there are four options (one answer $A$ and three distractors $d_{i}, d_{j}, d_{k}$ ) for each question. Our idea is to enumerate all possible triples from $\{\hat{d}_1, \hat{d}_2, \dots, \hat{d}_n\}$ . That is, we have a triple set

$$
\{(d_{i}, d_{j}, d_{k}) \mid i \neq j \neq k,\; d_{i}, d_{j}, d_{k} \in \hat{D}\}
$$

For a given passage $P$ , question $Q$ , and answer $A$ , our goal is to find a triple $(d_i, d_j, d_k)$ forming an option set $O$ (i.e., $\{d_i, d_j, d_k, A\}$ ) that maximizes the following entropy function.

$$
\operatorname{maximize}\; -\sum_{\forall o_{i} \in O} p_{o_{i}} \log_{2} p_{o_{i}} \tag{2}
$$

# 3.2 BDG-EM

The idea of selecting distractors by entropy maximization can be further generalized by employing multiple DG models. For having multiple DG models, our idea is to leverage the variants of the BDG model (i.e., models with/without answer negative regularization or with/without both answer negative regularization and P-MLM multi-task training).
Let $\hat{D}$ , $\hat{D}_{\mathrm{PM}}$ , and $\hat{D}_{\mathrm{PM+AN}}$ be the distractor sets generated by the BDG model without both answer negative regularization and P-MLM multi-task training, by the BDG model without answer negative regularization, and by the full BDG model, respectively. That is, we have the following triple set.

$$
\left\{\left(d_{i}, d_{j}, d_{k}\right) \mid d_{i} \in \hat{D},\; d_{j} \in \hat{D}_{\mathrm{PM}},\; d_{k} \in \hat{D}_{\mathrm{PM+AN}}\right\}
$$

With this triple set, the triple that maximizes Eq. (2) is selected as the final distractors.

# 4 Performance Evaluation

# 4.1 Experimental Settings

Datasets We follow the setting of (Gao et al., 2019) to evaluate our framework on the RACE (Lai et al., 2017) dataset. RACE contains 27,933 articles with 97,687 questions from English examinations for Chinese students in grades 7 to 12. We use the data split setting from (Gao et al., 2019). Table 4 reports the statistics for the test data set. All sentences are tokenized by the WordPiece tokenizer (Wu et al., 2016).

Implementation Details Our models are implemented with the huggingface transformers framework (Wolf et al., 2019). All experiments are based on the bert-base-cased model. For optimization in training, we use AdamW as the optimizer with an initial learning rate of 5e-5 for all baselines and our model. The maximum number of epochs is set to 6 with a batch size of 30 on two RTX Titan GPUs. We also make our code and model available at https://github.com/voidful/BDG

# 4.2 Compared Methods

In the experiments, we mainly compare the following distractor generation methods.

- CO-Att. We compare with the state-of-the-art method reported in (Zhou et al., 2019). The model is based on an LSTM augmented with a co-attention mechanism.
- DS-Att. We also compare with the method based on an LSTM augmented with the dynamic and static attention designs reported in (Gao et al., 2019). This method serves as a baseline for distractor generation based on seq2seq RNN architectures.
- GPT We also experiment with a model based on GPT (Radford et al., 2018) to learn distractor generation. This scheme serves as a baseline built on a transformer-based pre-trained model.
- BDG The scheme without the answer negative technique and parallel masked-LM multi-task training.
- $\mathbf{BDG}_{\mathbf{PM}}$ The BDG scheme with the parallel masked-LM multi-task training ( $\gamma = 1$ ).
- $\mathbf{BDG_{AN + PM}}$ The BDG scheme with both techniques ( $\gamma = 1$ ).
| Statistic | Value |
| --- | --- |
| Train samples | 96501 |
| Test samples | 12284 |
| Avg. article length | 335.6 |
| Avg. distractor length | 8.6 |
| Avg. question length | 10.0 |
| Avg. answer length | 8.3 |
| Avg. distractor number | 2.1 |
Table 4: Training Data Statistics

# 4.3 Token Score Comparison

We employ BLEU (Papineni et al., 2002) and ROUGE-L (Lin, 2004) scores to evaluate the performance of the compared methods. The BLEU scores evaluate average n-gram precision against a set of reference sentences, with a penalty for overly short sentences. The ROUGE-L measure is the recall of the longest common sub-sequences.

The comparison results are summarized in Table 5. There are three observations to note. First, one can see that our models significantly outperform the existing methods (i.e., DS-Att. and CO-Att.). Our best performing model advances the state-of-the-art result from 28.65 to 39.81 (BLEU 1 score). Second, as shown, the methods based on transformer models outperform the RNN-based models. This result again demonstrates the effectiveness of employing pre-trained transformer models for downstream tasks. Third, one may notice that our BERT-based models outperform the GPT-based model. We believe the reason is that the distractors in the RACE dataset are mostly summary-type sentences that require semantic understanding. The GPT-based model may over-focus on sentence writing, fail to capture the whole context needed to generate summary-type sentences, and therefore obtain lower scores.

We also provide experiment results to observe the effectiveness in reducing the answer copying problem discussed in Subsection 2.4. In Table 6, we show the number of cases in which the generated distractor $\hat{D}$ has a token-level similarity score greater than 0.95 with respect to the answer $A$ . From the experiment results, we see that there is a significant improvement made by the BDG schemes.

# 4.4 MCQ Model Accuracy Comparison

In this set of experiments, we evaluate the DG quality via the RACE reading comprehension task (Lai et al., 2017). Our idea is that a poorly generated DG result will reduce the difficulty of an MCQ task. Thus, we propose to incorporate an MCQ answering model (also trained on the RACE dataset) to evaluate the accuracy of a multiple-choice question with the distractors generated by the compared models. Specifically, given $C$ , $Q$ , and $A$ , we generate three distractors $D_{1}$ , $D_{2}$ , and $D_{3}$ , and then submit the multiple-choice question to the RACE model. Randomly generated results will be the easiest task to solve, and the best generated results will challenge the MCQ model. Therefore, we use the accuracy of the model as a metric: the higher the accuracy, the worse the generation quality.

The training details of the RACE model are as follows. We use PyTorch Transformers (Wolf et al., 2019) and the roberta-base-openai-detector model fine-tuned by OpenAI (Solaiman et al., 2019) with a maximum of 512 tokens to implement the model. AdamW with a learning rate of 1e-5 is used for fine-tuning. The model is trained for 10 epochs on 2 GPUs (V100) with gradient accumulation every two steps, which makes the batch size approximately equal to 18. Model checkpoints are saved and evaluated on the validation set every 5,000 steps. We select the top checkpoint based on the evaluation loss on the validation set. The RACE dataset includes a middle and a high school subset. The total numbers of passages and questions are 27,933 and 97,687 respectively. The Middle subset averages about 250 words per passage, while the High subset averages 350 words per passage.

In this set of experiments, we compare BDG, $\mathrm{BDG}_{\mathrm{PM}}$ , $\mathrm{BDG}_{\mathrm{AN+PM}}$ , the BDG model with entropy maximization (called $\mathrm{BDG}_{\mathrm{EM}}$ , introduced in Subsection 3.2) with the beam search size set to 3, and the BDG model ensemble introduced in Subsection 3.2. In addition, we also experiment with GPT, a scheme that takes randomly selected distractors from the data as the DG result, and a scheme that uses the gold distractors.
The results of the compared methods are summarized in Table 7.

We have the following findings to note about the results shown in Table 7. First, as expected, the method with randomly selected distractors yields the highest MCQA accuracy, as randomly selected distractors obviously lower the difficulty of the MCQ task. Second, all our models lead to lower accuracy than the gold distractors, showing the effectiveness of the proposed models. Third, as expected, our $\mathrm{BDG_{EM}}$ provides the best performing result on this metric.
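The accuracy metric used above can be sketched as a small evaluation loop (a minimal sketch with a stubbed answering model; all names are illustrative, not the paper's code): lower accuracy of the answering model indicates harder, i.e., more distracting, option sets.

```python
# Sketch of the MCQ-accuracy metric (illustrative names; the answering
# model is a toy stub rather than a trained RACE model).

def mcq_accuracy(questions, answer_fn):
    """questions: list of (options, gold_index) pairs; answer_fn maps an
    option list to the model's chosen index. Returns the fraction of
    questions the model answers correctly."""
    correct = sum(1 for options, gold in questions
                  if answer_fn(options) == gold)
    return correct / len(questions)

# Toy answering model that always picks the first option.
toy_questions = [(["A", "B", "C", "D"], 0), (["A", "B", "C", "D"], 2)]
acc = mcq_accuracy(toy_questions, lambda options: 0)
```

In the paper's setting, `answer_fn` would be the fine-tuned RACE model and each question's options would be the gold answer plus the three generated distractors.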
| | BLEU 1 | BLEU 2 | BLEU 3 | BLEU 4 | ROUGE-L |
| --- | --- | --- | --- | --- | --- |
| $\mathrm{BDG_{AN+PM}}$ | 39.52 | 24.29 | 17.28 | 13.28 | 33.40 |
| $\mathrm{BDG_{PM}}$ | 39.81 | 24.81 | 17.66 | 13.56 | 34.01 |
| BDG | 35.30 | 20.65 | 13.66 | 9.53 | 31.11 |
| GPT | 36.49 | 20.75 | 13.31 | 9.31 | 31.59 |
| DS-Att. | 27.32 | 14.69 | 9.29 | 6.47 | 15.12 |
| CO-Att. | 28.65 | 15.15 | 9.77 | 7.01 | 15.39 |
+ +Table 5: Performance Comparison on Token Scores + +
| | $\mathrm{BDG_{AN+PM}}$ | $\mathrm{BDG_{PM}}$ | BDG | GPT | Gold | Random |
| --- | --- | --- | --- | --- | --- | --- |
| BLEU 1 | 43 | 57 | 115 | 124 | 12 | 0 |
| BLEU 2 | 40 | 55 | 115 | 121 | 4 | 0 |
| BLEU 3 | 37 | 48 | 109 | 109 | 0 | 0 |
| BLEU 4 | 30 | 35 | 97 | 88 | 0 | 0 |
| ROUGE-L | 42 | 55 | 122 | 123 | 1 | 0 |
+ +Table 6: The Effect on Mitigating Answer Copying Problem + +
| Method | Accuracy |
| --- | --- |
| Random Selected Distractors | 88.10% |
| Gold Distractor | 78.00% |
| GPT | 78.07% |
| BDG | 73.96% |
| $\mathrm{BDG_{PM}}$ | 74.34% |
| $\mathrm{BDG_{AN+PM}}$ | 74.05% |
| $\mathrm{BDG_{EM}}$ | 69.44% |
Table 7: Comparison by MCQ Accuracy

# 4.5 Qualitative Examination by Case Study

In this subsection, we present case studies to examine the improvement in the multiple distractor generation scenario. We use the same examples introduced in Section 1 for comparison. First, as mentioned, the naive employment of the beam search strategy produces similar DG results. As shown in the examples, the distractors generated by BDG are about the same concept. However, as shown in Table 8, $\mathrm{BDG_{EM}}$ produces distractors that are more diverse with respect to each other. The results demonstrate the effectiveness of our $\mathrm{BDG_{EM}}$ scheme for generating multiple distractors for MCQ preparation.

# 4.6 Parameter Study on $\gamma$

In this subsection, we examine the effects of varying the value of the parameter $\gamma$ . The results are shown in Table 9. From the results, we can see that the best setting for BDG trained with parallel-MLM is $\gamma = 6$ , and for BDG trained with both answer negative regularization and parallel-MLM, the best setting is $\gamma = 7$ .

# 5 Related Work

The DG research can be categorized from different perspectives. First, for the DG task type, there are two main task categories: cloze-style distractor generation and reading comprehension (RC) distractor generation. In the cloze-style DG task, distractor generation is viewed as a word-filling problem. In general, the first step is to extract distractor candidates from the context or some knowledge base, and the next step is to rank the extracted distractors to produce the final result. Along this direction, the models are mainly based on similarity heuristics (Sumita et al., 2005; Mitkov et al., 2006; Guo et al., 2016; Ren and Zhu, 2020) or supervised machine learning (Liang et al., 2018; Yeung et al., 2019). The distractors generated for cloze-style DG are mainly at the word/phrase level.
On the other hand, RC-type DG focuses on generating sentence-level distractors for reading-comprehension-level testing, such as summarizing an article or understanding the author's opinion (Gao et al., 2019; Zhou et al., 2019). For sentence-level distractors, neural models are commonly employed, as it is difficult to generate a semantically rich and fluent distractor from the question, content, and answer. In this paper, we also focus on generative sentence-level DG for the RC task. However, as mentioned in the introduction, we find the existing DG results are still far from human level. The best SOTA result (in terms of BLEU 1 score) is about 29, which is far from ideal for practical use. Aiming at this point, we explore the employment of transformer-based pre-trained models for performance improvement. For clarity of comparison, we summarize the existing studies on distractor generation in Table 10.

# Example 1

Context Omitted. (See Appendix)

Question

- Why did Mr. King want to send Henry away?

Answer

- Because Henry was too lazy.

BDG

- $d_{1}$ : Because Henry didn't want to go.
- $d_{2}$ : Because Henry didn't want to go to the bookstore.
- $d_{3}$ : Because Henry didn't want to go out.

$\mathrm{BDG_{EM}}$

- $d_{1}$ : Because Henry didn't want to go.
- $d_{2}$ : Because Henry wanted to be rich.
- $d_{3}$ : Because Henry wanted to be a clever man.

# Example 2

Context Omitted. (See Appendix)

Question

- Which of the following women would look most attractive?

Answer

- A short red-haired woman who wears a purple hat.

BDG

- $d_{1}$ : A young woman who wears a white hat.
- $d_{2}$ : A woman who wears a white hat.

$\mathrm{BDG_{EM}}$

- $d_{1}$ : A short black woman with big, round faces.
- $d_{2}$ : A young woman who doesn't like a white hat.
- $d_{3}$ : A little woman who wears a pink hat.

# 6 Conclusion

We present a state-of-the-art neural model based on a pre-trained transformer-based model for DG.
We introduce two techniques, Answer Negative Regularization and Multi-task with Parallel MLM, to boost the DG performance. In addition, we also introduce BDG ensemble with an entropy maximization mechanism to enhance the DG quality by + +Table 8: Qualitative Examination by Case Study + +
| Model | BLEU 1 | BLEU 2 | BLEU 3 | BLEU 4 | ROUGE L |
| --- | --- | --- | --- | --- | --- |
| PM ($\gamma = 1$) | 36.97 | 22.07 | 14.82 | 10.50 | 32.64 |
| PM ($\gamma = 2$) | 38.45 | 23.21 | 15.81 | 11.36 | 33.18 |
| PM ($\gamma = 3$) | 39.23 | 24.27 | 17.04 | 12.78 | 33.82 |
| PM ($\gamma = 4$) | 39.22 | 24.24 | 17.08 | 12.95 | 34.05 |
| PM ($\gamma = 5$) | 39.74 | 24.50 | 17.29 | 13.09 | 34.11 |
| PM ($\gamma = 6$) | 39.81 | 24.81 | 17.66 | 13.56 | 34.01 |
| PM ($\gamma = 7$) | 39.37 | 24.13 | 17.09 | 13.07 | 33.45 |
| AN+PM ($\gamma = 1$) | 37.49 | 22.08 | 13.73 | 10.44 | 32.40 |
| AN+PM ($\gamma = 2$) | 38.25 | 22.81 | 15.33 | 10.91 | 32.99 |
| AN+PM ($\gamma = 3$) | 38.71 | 23.54 | 16.26 | 12.04 | 33.82 |
| AN+PM ($\gamma = 4$) | 38.84 | 23.70 | 16.57 | 12.46 | 33.53 |
| AN+PM ($\gamma = 5$) | 39.19 | 23.97 | 16.96 | 12.92 | 33.67 |
| AN+PM ($\gamma = 6$) | 39.58 | 24.23 | 17.11 | 13.11 | 33.38 |
| AN+PM ($\gamma = 7$) | 39.52 | 24.29 | 17.28 | 13.28 | 33.40 |
Table 9: Performance Comparison on Token Scores with Different $\gamma$ Settings

leveraging a reading comprehension model. By experimental evaluation, our models outperform the existing best-performing models and advance the state-of-the-art result to 39.81 (BLEU 1 score).

# Acknowledgement

This work was supported by the Ministry of Science and Technology, Taiwan, under projects No. 109-2221-E-005-058-MY3 and 107-2221-E-005-064-MY2.

# References

Jun Araki, Dheeraj Rajagopal, Sreecharan Sankaranarayanan, Susan Holm, Yukari Yamakawa, and Teruko Mitamura. 2016. Generating questions and multiple-choice answers using semantic analysis of texts. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1125-1136.

Ying-Hong Chan and Yao-Chung Fan. 2019. A recurrent BERT-based model for question generation. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 154-162.

Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Advances in Neural Information Processing Systems, pages 13042-13054.

Yifan Gao, Lidong Bing, Piji Li, Irwin King, and Michael R Lyu. 2019. Generating distractors for reading comprehension questions from real examinations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6423-6430.

Qi Guo, Chinmay Kulkarni, Aniket Kittur, Jeffrey P Bigham, and Emma Brunskill. 2016. Questimator: Generating knowledge assessments for arbitrary topics. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI'16). AAAI Press.

Girish Kumar, Rafael E Banchs, and Luis Fernando D'Haro. 2015. RevUP: Automatic gap-fill question generation from educational texts.
In Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 154-161. + +Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683. + +Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942. + +
| Work | Word/phrase | Sentence | Cloze | R.C. | Extractive | Generative | Model Type |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Gao et al. 2019 |  | Y | Y | Y |  | Y | RNN |
| Zhou et al. 2019 |  | Y | Y | Y |  | Y | RNN |
| Araki et al. 2016 |  | Y |  | Y | Y |  | Non-neural model |
| Welbl et al. 2017 | Y |  | Y |  | Y |  | Random forests |
| Guo et al. 2016 | Y |  | Y |  | Y |  | Word2Vec |
| Kumar et al. 2015 | Y |  | Y |  | Y | Y | SVM |
| Liang et al. 2017 | Y |  | Y |  |  | Y | GAN |
| Liang et al. 2018 | Y | Y | Y |  | Y |  | Non-neural model |
Table 10: An Overview of the Existing DG works

Chen Liang, Xiao Yang, Neisarg Dave, Drew Wham, Bart Pursel, and C Lee Giles. 2018. Distractor generation for multiple choice questions using learning to rank. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 284-290.

Chen Liang, Xiao Yang, Drew Wham, Bart Pursel, Rebecca Passonneau, and C Lee Giles. 2017. Distractor generation with generative adversarial nets for automatically creating fill-in-the-blank questions. In Proceedings of the Knowledge Capture Conference, pages 1-4.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

Ruslan Mitkov, Ha Le An, and Nikiforos Karamanis. 2006. A computer-aided environment for generating multiple-choice test items. Natural Language Engineering, 12(2):177-194.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318. Association for Computational Linguistics.

Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf.

Siyu Ren and Kenny Q Zhu. 2020. Knowledge-driven distractor generation for cloze-style multiple choice questions. arXiv preprint arXiv:2004.09853.

Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Radford, and Jasmine Wang. 2019. Release strategies and the social impacts of language models.
arXiv preprint arXiv:1908.09203.

Eiichiro Sumita, Fumiaki Sugaya, and Seiichi Yamamoto. 2005. Measuring non-native speakers' proficiency of English by using a test with automatically-generated fill-in-the-blank questions. In Proceedings of the Second Workshop on Building Educational Applications Using NLP, pages 61-68.

Johannes Welbl, Nelson F Liu, and Matt Gardner. 2017. Crowdsourcing multiple choice science questions. arXiv preprint arXiv:1707.06209.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.

Chak Yan Yeung, John SY Lee, and Benjamin K Tsou. 2019. Difficulty-aware distractor generation for gap-fill items. In Proceedings of the 17th Annual Workshop of the Australasian Language Technology Association, pages 159-164.

Xiaorui Zhou, Senlin Luo, and Yunfang Wu. 2019. Co-attention hierarchical network: Generating coherent long distractors for reading comprehension.

# Appendix
**Content.** The building is shaking. A woman with a baby in her arms is trying to open the door, but fails. Finding no way, she rushes into her bedroom and there they survive the earthquake. In a factory building, as the workshop floor swings under the terrible shaking, workers run for safety. Some hide under the machines and survive, but others who try to run outside are killed by the falling ceilings. These scenes, played by actors and actresses, are from a film of science education *Making a Split Second Decision* shown in 1998 on China Central TV in memory of Tangshan Earthquake. By studying actual cases in the earthquake areas and scientific experiments, experts find that buildings remain untouched for the first 12 seconds of an earthquake. In this short time, one has the best chance of surviving an earthquake by staying near the inside walls, in bedrooms and under beds, experts concluded in the film. "Earthquakes seem to catch the lives of those who run," said many survivors in the earthquake areas, describing how their friends were killed on the doorways or along the stair steps as they tried to get out of the building. Their advice was proved in the film, "Take a hiding-place where you are rather than run, unless you are sure you can reach a safe open place in ten seconds."

**Question.** The workers who try to run outside the building die because?

**Answer.** They don't have enough time to run outside.

**Distractor.** They don't know how to get out of the building.
+ +Table 11: BDG showcase + +
**Content.** Henry found work in a bookstore after he finished middle school. He wouldn’t do anything but wanted to get rich. Mr.King thought he was too lazy and was going to send him away. Henry was afraid and had to work hard. It was a cold morning. It was snowing and there was thin ice on the streets. Few people went to buy the books and the young man had nothing to do. He hated to read, so he watched the traffic. Suddenly he saw a bag fall off a truck and it landed by the other side of the street. It must be full of expensive things. Henry said to himself. I have to get it, or others will take it away. He went out of the shop and ran across the street. A driver saw him and began to whistle, but he didn’t hear it and went on running. The man drove aside, hit a big tree and was hurt in the accident. Two weeks later Henry was taken to court. A judge asked if he heard the whistle when he was running across the street. He said that something was wrong with his ears and he could hear nothing. “But you’ve heard me this time.” said the judge. “Oh, I’m sorry. Now I can hear with one ear.” “Cover the ear with your hand and listen to me with your deaf one. Well, can you hear me?” “No, I can’t, Sir.”

**Question.** Why did Mr.King want to send Henry away?

**Answer.** Because Henry was too lazy.

**BDG.**

- Because Henry didn’t want to go.
- Because Henry didn’t want to go out.
- Because Henry didn’t want to go to the bookstore.

**BDG ensemble.**

- Because Henry didn’t want to go.
- Because Henry wanted to be rich.
- Because Henry wanted to be a clever man.
+ +Table 12: Context for Example 1 + +
**Content.** Most of the time, people wear hats to protect themselves from weather conditions. Hats are also worn to show politeness and as signs of social position. But nowadays, hats, especially women's hats, are much more than that. More exactly, hats have changed into fashion and style symbols by many movie stars. What's more, people now consider many different features when choosing even a simple hat. Many designers point out that, when choosing the right hat, it's important to consider the color of your skin as well as your hair, your height, and the shape of your face. First of all, the color of the hat should match the color of your skin and hair. For instance, black hats should be avoided if you are dark skinned. If a purple hat is placed on top of red hair, one will look as attractive as a summer flower. Second, the height of the hat is also an important point. Tall women should not go for hats with tall crowns, just as short women should choose hats with upturned brims to give the look of height. Third, and most importantly, the shape of the face decides the kind of hat one should pick. A small, gentle hat that fits the head looks good on a small face. However, women with big, round faces should choose a different style. As the saying goes, 'Fine feathers make fine birds.' A good hat can not only help your dress but also support your features, so why not choose the best possible one next time you want to be in public?

**Question.** According to the article, which of the following women would look most attractive?

**Answer.** A short red-haired woman who wears a purple hat.

**BDG.**

- A young woman who wears a white hat.
- A young woman who doesn't like a white hat.
- A woman who wears a white hat.

**BDG ensemble.**

- A short black woman with big, round faces.
- A young woman who doesn't like a white hat.
- A little woman who wears a pink hat.
+ +Table 13: Context for Example 2 + +
**Content.** Memory, they say, is a matter of practice and exercise. If you have the wish and really made a conscious effort, then you can quite easily improve your ability to remember things. But even if you are successful, there are times when your memory seems to play tricks on you. Sometimes you remember things that really did not happen. One morning last week, for example, I got up and found that I had left the front door unlocked all night, yet I clearly remember locking it carefully the night before. Memory "trick" work the other way as well. Once in a while you remember not doing something, and then find out that you did. One day last month, for example, I was sitting in a barber shop waiting for my turn to get a haircut, and suddenly I realized that I had got a haircut two days before at the barber shop across the street from my office. We always seem to find something funny and amusing in incidents caused by people's forgetfulness or absent-mindedness. Stories about absent-minded professors have been told for years, and we never got tired of hearing new ones. Unfortunately, however, absent-mindedness is not always funny. There are times when "trick" of our memory can cause us great trouble.

**Question.** Which of the following statements is true according to the passage?

**Answer.** One night the writer forgot to lock the front door.

**BDG.**

- The writer couldn't find a hair cut in the barber shop.
- The writer couldn't find a hair cut in the shop.

**BDG ensemble.**

- The writer didn't want to open the front door.
- The writer couldn't find the reason why he left the front door.
+ +Table 14: Yet another example for BDG multiple distractor generation \ No newline at end of file diff --git a/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/images.zip b/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..6ab5bbc7cb17dc6756dd93812616bc463c40269a --- /dev/null +++ b/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:148c1062046c3862575af78830edabd7627596012ac987bc4e51a29b0085fc31 +size 1186766 diff --git a/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/layout.json b/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..c8e5d355acf27a1216522205d0cdee08e648df28 --- /dev/null +++ b/abertbaseddistractorgenerationschemewithmultitaskingandnegativeanswertrainingstrategies/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a5ce0a1b5c83a9f61e110dd8dd32f98b9ca3ec3c183b1ab248b0efb283b241cb +size 406341 diff --git a/abstractivemultidocumentsummarizationviajointlearningwithsingledocumentsummarization/a0c61489-b7ee-4e8a-a1ea-343af81a791f_content_list.json b/abstractivemultidocumentsummarizationviajointlearningwithsingledocumentsummarization/a0c61489-b7ee-4e8a-a1ea-343af81a791f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..835a90a419c4095d370eaa13f0f7384e66a710c7 --- /dev/null +++ b/abstractivemultidocumentsummarizationviajointlearningwithsingledocumentsummarization/a0c61489-b7ee-4e8a-a1ea-343af81a791f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:74377c75fdda39ae84bab66ebf49c361911547aa5cc39f951dff15e479622194 +size 
71406 diff --git a/abstractivemultidocumentsummarizationviajointlearningwithsingledocumentsummarization/a0c61489-b7ee-4e8a-a1ea-343af81a791f_model.json b/abstractivemultidocumentsummarizationviajointlearningwithsingledocumentsummarization/a0c61489-b7ee-4e8a-a1ea-343af81a791f_model.json new file mode 100644 index 0000000000000000000000000000000000000000..26132c32a60ca6cddb09c4eda16c047d0d4387bf --- /dev/null +++ b/abstractivemultidocumentsummarizationviajointlearningwithsingledocumentsummarization/a0c61489-b7ee-4e8a-a1ea-343af81a791f_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7b5b9a772403fd5fdf2cfe53037bfb6f2aa870bbdf4d0ed63262e767b6f4655e +size 84299 diff --git a/abstractivemultidocumentsummarizationviajointlearningwithsingledocumentsummarization/a0c61489-b7ee-4e8a-a1ea-343af81a791f_origin.pdf b/abstractivemultidocumentsummarizationviajointlearningwithsingledocumentsummarization/a0c61489-b7ee-4e8a-a1ea-343af81a791f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..fd884893d6a87e88e8b3b3e9ca7d007240a2fc68 --- /dev/null +++ b/abstractivemultidocumentsummarizationviajointlearningwithsingledocumentsummarization/a0c61489-b7ee-4e8a-a1ea-343af81a791f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:03093f8b19278d719fee3fdfb3eb500e51567d8182e9eea8d39d417698c73823 +size 910897 diff --git a/abstractivemultidocumentsummarizationviajointlearningwithsingledocumentsummarization/full.md b/abstractivemultidocumentsummarizationviajointlearningwithsingledocumentsummarization/full.md new file mode 100644 index 0000000000000000000000000000000000000000..fe78f4ac70a93d29af93e49363d7f1a795d42f3e --- /dev/null +++ b/abstractivemultidocumentsummarizationviajointlearningwithsingledocumentsummarization/full.md @@ -0,0 +1,340 @@ +# Abstractive Multi-Document Summarization via Joint Learning with Single-Document Summarization + +Hanqi Jin, Xiaojun Wan + +Center for Data Science, Peking 
University

Wangxuan Institute of Computer Technology, Peking University

The MOE Key Laboratory of Computational Linguistics, Peking University

{jinhanqi,wanxiaojun}@pku.edu.cn

# Abstract

Single-document and multi-document summarizations are very closely related in both task definition and solution method. In this work, we propose to improve neural abstractive multi-document summarization by jointly learning an abstractive single-document summarizer. We build a unified model for single-document and multi-document summarizations by fully sharing the encoder and decoder and utilizing a decoding controller to aggregate the decoder's outputs for multiple input documents. We evaluate our model on two multi-document summarization datasets: Multi-News and DUC-04. Experimental results show the efficacy of our approach, and it can substantially outperform several strong baselines. We also verify the helpfulness of single-document summarization to the abstractive multi-document summarization task.

# 1 Introduction

Document summarization aims at producing a fluent, condensed summary for a given document or document set. It involves identifying important information and filtering out redundant information from the input sources. While single-document summarization takes a single source document as input, multi-document summarization requires producing a summary from a cluster of thematically related documents. There are two primary methodologies for document summarization: extractive and abstractive. Extractive methods directly select important sentences from the original documents, which is relatively simple but faces the drawbacks of information redundancy and incoherence between sentences. Abstractive methods enable generating new words, phrases, and sentences, and are able to generate better summaries with higher readability and conciseness. In this paper, we focus on abstractive document summarization.
Empowered by large parallel datasets automatically harvested from online news websites, sequence-to-sequence learning has shown promising results on abstractive single-document summarization (See et al., 2017; Paulus et al., 2018; Tan et al., 2017; Celikyilmaz et al., 2018). Compared with single-document summarization, annotated multi-document summarization datasets are often scarce. Several works have explored adapting the neural encoder-decoder model trained for single-document summarization to multi-document summarization. Zhang et al. (2018) add a document set encoder to extend the neural abstractive model trained on a large-scale single-document summarization corpus to the multi-document summarization task. Lebanoff et al. (2018) incorporate the maximal marginal relevance method into a neural encoder-decoder model trained for single-document summarization to address the information redundancy in multi-document summarization.

Single-document and multi-document summarizations are very closely related in both task definition and solution method (Wan, 2010). Both tasks need to deal with document-level input, identify the important content of documents, and paraphrase the important information to generate the summary, while the main difference is that multi-document summarization involves summarizing multiple input documents. Since the two tasks are closely related, it is promising to learn the two summarization tasks jointly. Compared with single-document summarization, multi-document summarization needs to handle multiple input documents. A simple method is to concatenate the multiple documents into a long flat text and treat it as a long sequence-to-sequence task. However, this blurs the boundaries between documents and loses the hierarchy within the document cluster. It is natural to regard multi-document summarization as a two-stage process of summarizing every single document and then merging the multiple summaries.
Nevertheless, this two-stage process is not trivial: it is difficult to utilize a multi-document summarization corpus to train the single-document summarization model, and the synthesis of multiple summaries involves eliminating redundant parts and organizing related paragraphs or sentences, which are also challenges to be solved.

In this work, we propose a joint learning approach that improves neural abstractive multi-document summarization by using a single-document summarization corpus to address these issues. Our approach first uses a shared document encoder to encode each document in the document set, then uses a shared decoder to predict the word probabilities for each document, and finally applies a decoding controller to aggregate all output probabilities from the summary decoder to make the final prediction at each decoding step. The shared encoder and decoder are jointly trained on the single-document summarization data. In this way, we can unify single-document and multi-document summarizations into one architecture and make better use of single-document and multi-document corpora, so that both tasks can benefit from joint learning, especially the multi-document summarization task.

We evaluate our approach on the benchmark multi-document summarization datasets, Multi-News and DUC-04, and it brings substantial improvements over several strong baselines for multi-document summarization. We leverage CNN/DailyMail, a single-document summarization dataset, to perform joint learning with Multi-News. We also test the performance on the CNN/DailyMail test set, and joint learning also brings certain performance improvements for the single-document summarization baselines.

In summary, we make the following contributions in this paper:

- To the best of our knowledge, we are the first to explore joint learning for neural abstractive single-document and multi-document summarizations.
- We propose a unified model by fully sharing the encoder and decoder and utilizing a decoding controller to aggregate the decoder's outputs for multiple input documents.
- Experimental results show that our approach substantially outperforms several strong baselines, and single-document summarization is verified to be very helpful to neural abstractive multi-document summarization.

Our code is publicly available at https://github.com/zhongxia96/MDS-and-SDS.

# 2 Related Work

# 2.1 Multi-Document Summarization

The methods for multi-document summarization can generally be categorized as extractive and abstractive. The extractive methods produce a summary by extracting and merging sentences from the input documents, while the abstractive methods generate a summary using arbitrary words and expressions based on an understanding of the documents. Due to the lack of available training data, most previous multi-document summarization methods were extractive (Erkan and Radev, 2004; Christensen et al., 2013; Yasunaga et al., 2017). Recently, two multi-document summarization datasets have been proposed: one with very long inputs, aimed at generating Wikipedia articles (Liu et al., 2018), and another dedicated to generating a comprehensive summary of multiple real-time news reports (Fabbri et al., 2019). Several works have begun to explore abstractive multi-document summarization. Liu et al. (2018) concatenated multiple source documents into a long flat text and modeled multi-document summarization as a long sequence-to-sequence task. Liu and Lapata (2019) represented cross-document relationships via an attention mechanism that allows sharing information, as opposed to simply concatenating text spans and processing them as a flat sequence. Fabbri et al. (2019) incorporated MMR into a hierarchical pointer-generator network to address the information redundancy in multi-document summarization. The above works were all trained and tested on multi-document summarization corpora.
# 2.2 Adaptation Methods from Single- to Multi-Document Summarization

Since neural abstractive models have achieved promising results on single-document summarization (See et al., 2017; Paulus et al., 2018; Gehrmann et al., 2018; Celikyilmaz et al., 2018), some works trained abstractive summarization models on a single-document dataset and adjusted the model to adapt to the multi-document summarization task. Zhang et al. (2018) added a document set encoder into the single-document summarization framework and tuned the pre-trained model on the multi-document summarization dataset. Lebanoff et al. (2018) combined an extractive summarization algorithm (MMR) for sentence extraction to reweigh the original sentence importance distribution learned in the abstractive single-document summarization model. In this work, we propose to jointly learn the two summarization tasks and build a unified model. It utilizes a shared encoder-decoder to summarize a document and uses a decoding controller to aggregate all decoders' outputs. Compared with the above adaptation methods, our method can make better use of multi-document and single-document corpora and improve the effectiveness of single-document summarization at the same time.

![](images/0893f885d29e3d35f433c537e1b9477127edb0dba1646d8668c2dda1d828743d.jpg)
Figure 1: The overview of our model.

# 3 Methodology

# 3.1 Overview and Notations

Multi-document summarization takes a document cluster $D = \{D_{1},D_{2},\ldots ,D_{I}\}$ as the input and produces the summary $Y$, where $I$ is the number of documents. Each document $D_{i} = (x_{i,1},x_{i,2},\dots,x_{i,N_{i}})$ is a sequence of $N_{i}$ words, and $Y = (y_{1},y_{2},\dots,y_{M})$ is a sequence of $M$ words. Compared with multi-document summarization, single-document summarization has only one input document. To unify the notation, single-document summarization is regarded as the special input case of $I = 1$.
As illustrated in Figure 1, our model consists of a document encoder, a summary decoder, and a decoding controller. Different documents in multi-document summarization share the document encoder and summary decoder. Single-document summarization also shares the document encoder and summary decoder with multi-document summarization. A decoding controller is applied to aggregate the outputs of the summary decoder for multiple input documents.

The shared document encoder reads each input document $D_{i}$ and builds its contextual-level representation $C_i$.

$$
C_i = \operatorname{encoder}(D_i) \tag{1}
$$

In each decoding step $t$, the shared summary decoder produces the vocabulary distribution of the next word given the previously predicted words and each input document $D_{i}$.

$$
P_i^t = \operatorname{decoder}(C_i, y_{1:t-1}) \tag{2}
$$

Note that for multi-document summarization, the same sequence of previous words $y_{1:t-1}$ (i.e., the partial summary) is used for decoding for every document of the multiple inputs.

Since single-document summarization only summarizes one input document, the summary decoder can make the final prediction based on the output vocabulary distribution. For multi-document summarization, a decoding controller is applied to aggregate the multiple vocabulary distributions from the summary decoder for the multiple input documents.

$$
P_f^t = \sum_{i=1}^{I} P_i^t z_i^t \tag{3}
$$

Here $z_{i}^{t}$ is the importance weight for each of the multiple vocabulary distributions in the $t$-th step.

The following sections will introduce the document encoder, the summary decoder, and the decoding controller, respectively.

# 3.2 Document Encoder

The document encoder reads an input document $D_{i}$ and constructs its contextual-level representation.
For multi-document summarization, multiple input documents can be processed in parallel. This part is the same as the Transformer encoder (Vaswani et al., 2017), and we will give a brief introduction. The document encoder is composed of a stack of $L$ identical layers. Each layer has two sub-layers, where the first sub-layer is a multi-head self-attention mechanism, and the second sub-layer is a position-wise fully connected feed-forward network. A residual connection (He et al., 2016) is employed around each of the two sub-layers, followed by layer normalization (Ba et al., 2016).

Tokens of each input document are first represented by word embeddings. Let $e_{i,j}$ denote the embedding assigned to word $x_{i,j}$. Since the Transformer is a non-recurrent model, we need to add the "positional embedding" $p_j$ to the word embedding to indicate the position of the word in the document, and the input representation can be obtained by simply adding these two representations: $w_{i,j} = e_{i,j} + p_j$. We take $\{w_{i,1}, w_{i,2}, \ldots, w_{i,N_i}\}$ as the input to the document encoder. For convenience, we denote the input of the first layer as $h^0$ and the output of the $l$-th layer as $h^l$. The multi-head self-attention sub-layer takes the output of the previous layer as the input to construct the contextual-level representation, while the FFN sub-layer is used to transform the representation further.

$$
\tilde{h} = \operatorname{LayerNorm}\left(h^{l-1} + \operatorname{MHAtt}\left(h^{l-1}, h^{l-1}\right)\right)
$$

$$
h^l = \operatorname{LayerNorm}(\tilde{h} + \operatorname{FFN}(\tilde{h})) \tag{4}
$$

The final output $h^L$ is fed to the summary decoder, and it is also fed to the decoding controller for multi-document summarization. For convenience, we denote the output for document $D_i$ as $C_i$.
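As a concrete, deliberately simplified sketch of Eq. (4), the NumPy code below implements one encoder layer, substituting single-head scaled dot-product attention for the multi-head variant and using random placeholder weights rather than trained parameters; all function and variable names here are our own illustrative choices, not from the authors' code:

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Layer normalization (Ba et al., 2016) over the feature dimension,
    # without the learned gain/bias for brevity.
    mu = x.mean(-1, keepdims=True)
    sigma = x.std(-1, keepdims=True)
    return (x - mu) / (sigma + eps)

def self_attention(h, Wq, Wk, Wv):
    # Single-head scaled dot-product attention: MHAtt(h, h) reduced to one head.
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)
    return weights @ v

def encoder_layer(h, params):
    # Eq. (4): h~ = LayerNorm(h + MHAtt(h, h)); h^l = LayerNorm(h~ + FFN(h~))
    h_tilde = layer_norm(h + self_attention(h, *params["att"]))
    ffn = np.maximum(0.0, h_tilde @ params["W1"]) @ params["W2"]  # position-wise FFN with ReLU
    return layer_norm(h_tilde + ffn)

rng = np.random.default_rng(0)
d = 8  # toy model dimension
params = {"att": [rng.standard_normal((d, d)) for _ in range(3)],
          "W1": rng.standard_normal((d, 4 * d)),
          "W2": rng.standard_normal((4 * d, d))}
h0 = rng.standard_normal((5, d))   # N_i = 5 tokens (word + positional embeddings)
C_i = encoder_layer(h0, params)    # one contextual vector per token
print(C_i.shape)                   # (5, 8)
```

Stacking $L$ such layers on top of $h^0$ yields the contextual representation $C_i$ that Eq. (1) denotes.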
# 3.3 Summary Decoder

In each decoding step, the summary decoder takes the decoded subsequence $(y_{1},y_{2},\dots ,y_{t - 1})$ as the input and predicts the probability distribution of generating the next word for each input document $D_{i}$. Similar to the document encoder, the summary decoder is also a stack of $L$ identical layers. Each layer consists of three sub-layers: a masked multi-head self-attention mechanism, a multi-head cross-attention mechanism over the output of the encoder stack, and a position-wise feed-forward network.

We also need to add the "positional embedding" to the word embedding in the same way as in the document encoder. Let $d^{l}$ denote the output of the $l$-th layer in the summary decoder, and $d^{0}$ the input of the first layer. The masked multi-head self-attention sub-layer encodes the information of the decoded subsequence. The output of the self-attention is fed to the cross-attention sub-layer and the feed-forward network. The cross-attention sub-layer performs multi-head attention over the output $C_{i}$ of the document encoder.

$$
\tilde{d} = \operatorname{LayerNorm}\left(d^{l-1} + \operatorname{MHAtt}\left(d^{l-1}, d^{l-1}\right)\right)
$$

$$
g = \operatorname{LayerNorm}(\tilde{d} + \operatorname{MHAtt}(\tilde{d}, C_i))
$$

$$
d^l = \operatorname{LayerNorm}(g + \operatorname{FFN}(g)) \tag{5}
$$

Let $U_{i}^{t}$ denote the output of the $L$-th layer for document $D_{i}$ at position $t$. The output $U_{i}^{t}$ is passed through a softmax layer to calculate the generation distribution of the next word over the target vocabulary.

$$
\hat{P}_i^t = \operatorname{softmax}\left(U_i^t W_g + b_g\right) \tag{6}
$$

where $W_{g}\in \mathbb{R}^{d_{model}\times d_{vocab}}$, $b_{g}\in \mathbb{R}^{d_{vocab}}$, and $d_{vocab}$ is the size of the target vocabulary.
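Eq. (6) is simply a linear projection of the decoder output followed by a softmax over the target vocabulary. A minimal sketch, using toy dimensions and random placeholder weights purely for illustration:

```python
import numpy as np

def generation_distribution(U_t, W_g, b_g):
    # Eq. (6): \hat{P}_i^t = softmax(U_i^t W_g + b_g), a distribution over d_vocab words.
    logits = U_t @ W_g + b_g
    exp = np.exp(logits - logits.max())   # subtract max for numerical stability
    return exp / exp.sum()

rng = np.random.default_rng(1)
d_model, d_vocab = 8, 100                 # toy sizes; real models are much larger
U_t = rng.standard_normal(d_model)        # decoder output for document D_i at step t
W_g = rng.standard_normal((d_model, d_vocab))
b_g = rng.standard_normal(d_vocab)
P_hat = generation_distribution(U_t, W_g, b_g)
print(round(float(P_hat.sum()), 6))       # 1.0: a valid probability distribution
```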
To tackle the problem of out-of-vocabulary (OOV) words, we compute the copy attention $\varepsilon_i^t$ between $U_{i}^{t}$ and the input representations $C_i$ , which allows copying words from the source text, and obtain the copy distribution (Gu et al., 2016).

$$
\varepsilon_{i}^{t} = \operatorname{softmax}(U_{i}^{t} C_{i}^{\top})
$$

$$
\tilde{P}_{i}^{t} = \sum_{j=1}^{N_{i}} \varepsilon_{i,j}^{t} o_{i,j} \tag{7}
$$

where $o_{i,j}$ is the one-hot indicator vector for $w_{i,j}$ .

The generation probability $\eta_i^t\in [0,1]$ is calculated from the decoder output $U_{i}^{t}$ .

$$
\eta_{i}^{t} = \sigma\left(U_{i}^{t} W_{\eta} + b_{\eta}\right) \tag{8}
$$

where $W_{\eta}\in \mathbb{R}^{d_{model}\times 1}$ and $b_{\eta}\in \mathbb{R}$ . The overall distribution for document $D_{i}$ is obtained by combining the two distributions with $\eta_i^t$ :

$$
P_{i}^{t} = \eta_{i}^{t} \hat{P}_{i}^{t} + (1 - \eta_{i}^{t}) \tilde{P}_{i}^{t} \tag{9}
$$

# 3.4 Decoding Controller

Multi-document summarization requires producing a summary for a cluster of thematically related documents. While the summary decoder predicts a vocabulary distribution for each input document, the decoding controller aggregates these distributions to predict the final vocabulary distribution for multi-document summarization; Figure 2 shows an example. To aggregate the distributions well, the controller needs to grasp the theme of the document cluster. We first use attention pooling over the document encoder outputs to obtain a representation for each document, and then adopt a bidirectional LSTM (Hochreiter and Schmidhuber, 1997; Schuster and Paliwal, 1997) to encode the document representations in the cluster.
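The per-document copy/generation mixture (Eqs. 7-9) and the weighted aggregation performed by the decoding controller can be sketched in NumPy as follows; all shapes and weight names are illustrative assumptions, not the paper's code:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def copy_distribution(U_t, C, src_ids, vocab_size):
    # Eq. (7): attention over source positions, scattered into vocabulary space.
    eps = softmax(U_t @ C.T)            # one weight per source token
    P_copy = np.zeros(vocab_size)
    np.add.at(P_copy, src_ids, eps)     # equivalent to sum_j eps_j * one-hot(w_j)
    return P_copy

def document_distribution(U_t, C, src_ids, P_gen, W_eta, b_eta):
    # Eqs. (8)-(9): gate between generating and copying for one document.
    eta = sigmoid(U_t @ W_eta + b_eta)
    return eta * P_gen + (1.0 - eta) * copy_distribution(U_t, C, src_ids, P_gen.size)

def aggregate(P_docs, z):
    # The controller's weighted combination of per-document distributions
    # into the final vocabulary distribution (a convex mixture).
    return z @ P_docs                   # (I,) @ (I, V) -> (V,)
```

Because both mixtures are convex combinations of probability distributions, each `document_distribution` and the aggregated output again sum to one.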
Then, we take the output of the bidirectional LSTM as the initial state of another, unidirectional LSTM, which is used to calculate the weights with which the next word is drawn from each document.

**Attention Pooling** The attention pooling operation is applied over the contextual-level representations $C_i = (c_{i,1}, c_{i,2}, \dots, c_{i,N_i})$ to obtain a fixed-length representation $\hat{c}_i$ for document $D_i$ . We first transform each input vector $c_{i,j}$ into an attention score $a_{i,j}$ and a value vector $v_{i,j}$ , and then calculate a probability distribution $\hat{a}_i$ over the words within document $D_i$ based on the attention scores.

$$
a_{i,j} = c_{i,j} W_{a}
$$

$$
v_{i,j} = c_{i,j} W_{v} \tag{10}
$$

$$
\hat{a}_{i,j} = \frac{\exp(a_{i,j})}{\sum_{k=1}^{N_i} \exp(a_{i,k})}
$$

where $W_{a}\in \mathbb{R}^{d_{model}\times 1}$ and $W_{v}\in \mathbb{R}^{d_{model}\times d_{model}}$ . Finally, we get the document vector $\hat{c}_i$ by weighting the value vectors.

$$
\hat{c}_{i} = \sum_{j=1}^{N_i} \hat{a}_{i,j} v_{i,j} \tag{11}
$$

![](images/6ea2f95de05b9ce54d3514dcfb757143ec9e33b2003d0ec57f8370ee517e6617.jpg)
Figure 2: The decoding controller weighs the multiple output distributions to predict the next word. Simply averaging the vocabulary distributions yields the word "is"; weighting them with the decoding controller yields the correct word "lives".

A bidirectional LSTM is adopted to further encode the document representations $\{\hat{c}_1,\hat{c}_2,\dots ,\hat{c}_I\}$ . The forward LSTM reads the document representations from left to right and gets
a sequence of hidden states $\left(\overrightarrow{f}_1,\overrightarrow{f}_2,\dots ,\overrightarrow{f}_I\right)$ . The backward LSTM reads the document representations in reverse, from right to left, and produces another sequence of hidden states $\left(\overleftarrow{f}_1,\overleftarrow{f}_2,\dots ,\overleftarrow{f}_I\right)$ . We add the last forward hidden state $\overrightarrow{f}_I$ and the last backward hidden state $\overleftarrow{f}_{1}$ to obtain the output $r$ of the bidirectional LSTM.

$$
r = \overleftarrow{f}_{1} + \overrightarrow{f}_{I} \tag{12}
$$

The output $r$ is used as the initial state of the unidirectional LSTM. At decoding step $t$ , the unidirectional LSTM takes the previous word $y_{t-1}$ as input and produces the new state $s_t$ .

$$
s_{t} = \operatorname{LSTM}\left(s_{t-1}, y_{t-1}\right) \tag{13}
$$

We calculate the weights $z^t$ using $s_t$ and the decoder outputs $U^t = \{U_1^t, U_2^t, \dots, U_I^t\}$ :

$$
z^{t} = \operatorname{softmax}\left(U^{t} W_{z} s_{t}^{\top}\right) \tag{14}
$$

where $W_{z}\in \mathbb{R}^{d_{model}\times d_{model}}$ .

The final vocabulary distribution for multi-document summary generation is the interpolation of all output distributions.

$$
P_{f}^{t} = \sum_{i=1}^{I} z_{i}^{t} P_{i}^{t} \tag{15}
$$

# 3.5 Objective Function

We jointly learn single-document and multi-document summarization in a unified model. Our goal is to maximize the probability of the output summary $Y$ given a single document $S$ or a document set $D$ . We use $\mathcal{T}_s$ to denote the single-document training set and $\mathcal{T}_m$ the multi-document training set, and compute the negative log-likelihood loss for single-document and multi-document summarization, respectively.
$$
L_{s} = -\frac{1}{|\mathcal{T}_{s}|} \sum_{(S, Y) \in \mathcal{T}_{s}} \log P(Y | S)
$$

$$
L_{m} = -\frac{1}{\left|\mathcal{T}_{m}\right|} \sum_{(D, Y) \in \mathcal{T}_{m}} \log P(Y | D) \tag{16}
$$

For simplicity, we optimize the sum of the two losses.

# 4 Experiment

# 4.1 Datasets

We conduct experiments on the recently released Multi-News dataset (Fabbri et al., 2019) and the standard DUC multi-document summarization datasets (Over et al., 2007). The Multi-News dataset contains 44,972 document-summary pairs for training, 5,622 for development, and 5,622 for test. The number of source documents per summary ranges from 2 to 10. DUC-03 and DUC-04 contain 30 and 50 topics, respectively; each topic has 10 documents paired with 4 different human-written references. CNN/DailyMail (Hermann et al., 2015; Nallapati et al., 2016) is a large-scale single-document summarization dataset, containing 287,226 document-summary pairs for training, 13,368 for development, and 11,490 for test.

# 4.2 Implementation Details

We train the model on the Multi-News and CNN/DailyMail datasets. Since different datasets have different stylistic characteristics, we use a different BOS token for each dataset in the decoding phase. We take DUC-04 as the test set, and use DUC-03 for tuning the model when evaluating on DUC-04. We set the model hyperparameters based on preliminary experiments on the Multi-News and CNN/DailyMail development sets. We prune the vocabulary to 50k, and handle OOV words by replacing each unknown word with the source word receiving the maximum copy-attention weight. We set the dimension of word embeddings and hidden units $d_{model}$ to 512, and the number of feed-forward units to 1024. We use 4 heads for the multi-head self-attention, masked multi-head self-attention, and multi-head cross-attention. The number of layers $L$ is set to 6.
We set the dropout rate to 0.1 and use the Adam optimizer with an initial learning rate $\alpha = 0.0001$ , momentum parameters $\beta_{1} = 0.9$ and $\beta_{2} = 0.999$ , and weight decay $\epsilon = 10^{-5}$ . The learning rate is halved if the validation loss on the development set increases for two consecutive epochs. We use a mini-batch size of 10, and beam search with a beam size of 5 for decoding.

# 4.3 Metrics and Baselines

We use ROUGE (Lin, 2004) to evaluate the generated summaries. Following previous work, we report ROUGE F1 on the Multi-News and DUC-04 datasets. We compare our model with several typical baselines as well as several recently proposed ones.

PGN (See et al., 2017) is an RNN-based model with an attention mechanism that allows the system to copy words from the source text via pointing for abstractive summarization. CopyTransformer (Gehrmann et al., 2018) augments the Transformer by using one randomly chosen attention head as the copy distribution. Hi-MAP (Fabbri et al., 2019) expands the pointer-generator network into a hierarchical network and integrates an MMR module to calculate sentence-level scores. These baselines are trained on the Multi-News corpus and were compared in Fabbri et al. (2019), the paper that released the Multi-News dataset; we report their results directly from that paper. PG-MMR (Lebanoff et al., 2018) combines MMR with an abstractive model trained on the CNN/DailyMail corpus to generate summaries from multi-document inputs, and requires no multi-document summarization training corpus. SDS-to-MDS (Zhang et al., 2018) extends a neural abstractive model trained on the CNN/DailyMail dataset to the multi-document summarization task, leveraging a multi-document summarization corpus to tune the pre-trained single-document summarization model. It originally conducted experiments on the DUC datasets; we also reproduce the method on the Multi-News dataset.
In addition, we implement CopyTransformer\*, which jointly learns single-document and multi-document summarization and is trained on the CNN/DailyMail and Multi-News corpora. It concatenates the multiple
| Model | R-1 | R-2 | R-SU4 |
| --- | --- | --- | --- |
| LexRank (Erkan and Radev, 2004) | 38.27 | 12.70 | 13.20 |
| TextRank (Mihalcea and Tarau, 2004) | 38.44 | 13.10 | 13.50 |
| MMR (Carbonell and Goldstein, 1998) | 38.77 | 11.98 | 12.91 |
| PGN (See et al., 2017) | 41.85 | 12.91 | 16.46 |
| CopyTransformer (Gehrmann et al., 2018) | 43.57 | 14.03 | 17.37 |
| Hi-MAP (Fabbri et al., 2019) | 43.47 | 14.89 | 17.41 |
| SDS-to-MDS (Zhang et al., 2018) | 44.74 | 15.93 | 19.44 |
| CopyTransformer\* | 45.03 | 16.35 | 19.59 |
| Ours | 46.26 | 17.02 | 20.46 |

Table 1: ROUGE F1 evaluation results on the Multi-News test set.
| Model | R-1 | R-2 | R-SU4 |
| --- | --- | --- | --- |
| LexRank (Erkan and Radev, 2004) | 35.56 | 7.87 | 11.86 |
| TextRank (Mihalcea and Tarau, 2004) | 33.16 | 6.13 | 10.16 |
| MMR (Carbonell and Goldstein, 1998) | 30.14 | 4.55 | 8.16 |
| PGN (See et al., 2017) | 31.43 | 6.03 | 10.01 |
| CopyTransformer (Gehrmann et al., 2018) | 28.54 | 6.38 | 7.22 |
| PG-MMR (Lebanoff et al., 2018) | 36.42 | 9.36 | 13.23 |
| Hi-MAP (Fabbri et al., 2019) | 35.78 | 8.90 | 11.43 |
| SDS-to-MDS (Zhang et al., 2018) | 36.70 | 7.83 | 12.40 |
| CopyTransformer\* | 36.48 | 8.22 | 12.29 |
| Ours | 37.24 | 8.60 | 12.67 |

Table 2: ROUGE F1 evaluation results on the DUC-04 dataset.

input documents into a long flat text, and treats multi-document summarization as a long single-document summarization task. The best hyperparameter configuration is chosen for each model.

# 4.4 Automatic Evaluation

Following previous work, we report ROUGE-1 (unigram), ROUGE-2 (bigram), and ROUGE-SU4 (skip bigrams with a maximum distance of 4 words) scores as the metrics for automatic evaluation (Lin and Hovy, 2003). In Table 1, we report the results on Multi-News, where our proposed model outperforms all baseline models. CopyTransformer performs much better than PGN, with a 1.72-point improvement in ROUGE-1 F1, which demonstrates the superiority of the Transformer architecture. The methods that leverage a single-document corpus (i.e., SDS-to-MDS, CopyTransformer\*, and ours) perform much better than those trained only on the multi-document corpus (i.e., PGN, CopyTransformer, and Hi-MAP). Our model gains improvements of 1.52 points over SDS-to-MDS and 1.23 points over CopyTransformer\* on ROUGE-1 F1, which verifies the effectiveness of the proposed architecture for the multi-document summarization task.

In Table 2, we report the results on the DUC-04 test set. Our model achieves scores of 37.24, 8.60, and 12.67 on the three ROUGE metrics, respectively. PG-MMR and Hi-MAP obtain higher scores on ROUGE-2 or ROUGE-SU4 F1, as they additionally employ the MMR technique to reduce redundancy. Our proposed model achieves the best ROUGE-1 F1 among all compared models, indicating that it transfers well across datasets.

# 4.5 Human Evaluation

To further evaluate the quality of the generated summaries, we carry out a human evaluation focusing on three aspects: fluency, informativeness, and non-redundancy. The fluency indicator measures whether the summary is well-formed and grammatical.
The informativeness indicator reflects whether the summary covers the salient points of the input documents. The non-redundancy indicator measures whether the summary contains repeated information. We sample 50 instances from the Multi-News test set and employ five graduate students to rate the summaries. Each judge evaluates all system outputs for a given sample; three judgments are obtained per sample, and the final scores are averaged across judges.

Results are presented in Table 3. Our model performs much better than all baselines, and the Spearman correlation coefficients between annotators are high, supporting the validity of the human evaluation. On fluency, our model achieves a score of 3.5, higher than the 3.42 of CopyTransformer\* and the 3.3 of SDS-to-MDS, indicating that our model reduces grammatical errors and improves the readability of the summaries. On informativeness, our model exceeds CopyTransformer\* by 0.16 and SDS-to-MDS by 0.2, indicating that it effectively captures the salient information. On non-redundancy, our model also outperforms all baselines, indicating that the proposed method better avoids repeated information in the generated summaries.

# 4.6 Ablation Study

We perform an ablation study to investigate the influence of joint learning with single-document summarization and the effectiveness of the decoding controller. First, we train the model only on the Multi-News dataset to verify the helpfulness
| Model | Fluency | Informativeness | Non-redundancy |
| --- | --- | --- | --- |
| CopyTransformer (Gehrmann et al., 2018) | 3.1 | 3.08 | 2.94 |
| Hi-MAP (Fabbri et al., 2019) | 2.98 | 2.94 | 3.02 |
| SDS-to-MDS (Zhang et al., 2018) | 3.3 | 3.22 | 3.18 |
| CopyTransformer\* | 3.42 | 3.26 | 3.24 |
| Ours | 3.5 | 3.42 | 3.36 |
| Spearman | 0.732 | 0.715 | 0.698 |

Table 3: Human evaluation. The ratings are on a Likert scale of 1 (worst) to 5 (best).
of single-document summarization to the abstractive multi-document summarization task. Then, we replace the decoding controller with a fixed weight vector $z = [1/I, \dots, 1/I]$ , simply averaging the vocabulary distributions from the summary decoder, to verify the effectiveness of the decoding controller.

| Model | R-1 | R-2 | R-SU4 |
| --- | --- | --- | --- |
| Ours | 46.26 | 17.02 | 20.46 |
| w/o joint learning | 44.64 | 16.14 | 19.06 |
| w/o decoding controller | 44.94 | 16.07 | 19.11 |

Table 4: Results of the ablation study on the Multi-News test set.

Table 4 presents the results. The ROUGE-1 F1 score drops by 1.62 and the ROUGE-2 F1 score by 0.88 when the model is trained only on the Multi-News dataset, indicating that joint learning with single-document summarization benefits multi-document summarization. The ROUGE-1 F1 score drops by 1.32 and the ROUGE-2 F1 score by 0.95 after the decoding controller is removed, showing that the decoding controller can effectively aggregate the outputs of the summary decoder over multiple input documents.

# 4.7 Discussion

**Performance on Single-Document Summarization** In Table 5, we report the results on the CNN/DailyMail test set. CopyTransformer\* outperforms CopyTransformer by 0.71 points on ROUGE-1 F1, which indicates that joint learning can also improve single-document summarization. Compared with CopyTransformer\*, our method gains a further 0.31 points on ROUGE-1 F1, indicating that it makes better use of the multi-document corpus to improve single-document summarization performance.

**Performance against the Number of Input Documents** The number of input documents may affect summarization performance, so we further test our model and the strong baseline
CopyTransformer\* with respect to different numbers of input documents on the Multi-News test set. The number of input documents in the test set ranges from 2 to 7.

| Model | R-1 | R-2 | R-L |
| --- | --- | --- | --- |
| Lead-3 | 40.34 | 17.70 | 36.57 |
| PGN (See et al., 2017) | 39.53 | 17.28 | 36.38 |
| CopyTransformer | 40.68 | 18.26 | 37.38 |
| CopyTransformer\* | 41.39 | 18.58 | 38.03 |
| Ours | 41.70 | 18.86 | 38.36 |

Table 5: ROUGE F1 evaluation results on the CNN/DailyMail test set.

![](images/f26e9a6133c0a8ab3d23b604efeec0ed0b4c43c8c9f363a999cdbcf48f589214.jpg)
Figure 3: ROUGE-2 F1 score for different numbers of input documents, for the CopyTransformer\* baseline and our model, on the Multi-News test set.

In Figure 3, we can see that the performance of both models drops as the number of input documents increases. The curve of our model always lies above that of CopyTransformer\*, and our model obtains better results than CopyTransformer\* when more documents are given.

# 5 Conclusion and Future Work

In this paper, we propose a joint learning approach that improves neural abstractive multi-document summarization by using a single-document summarization dataset. Specifically, we use a shared document encoder and summary decoder to process each document in the document set, and apply a decoding controller to aggregate all output probabilities from the summary decoder for multi-document summarization. The shared encoder and decoder are jointly trained on the single-document summarization dataset. Experimental results show that our approach substantially outperforms several strong multi-document summarization baselines and achieves state-of-the-art or very competitive performance on the Multi-News and DUC-04 datasets.

In the future, we will incorporate BERT or other pre-trained language models into our model to further improve its performance.

# Acknowledgments

This work was supported by the National Natural Science Foundation of China (61772036), the Beijing Academy of Artificial Intelligence (BAAI), and the Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology).
We appreciate the anonymous reviewers for their helpful comments. Xiaojun Wan is the corresponding author.

# References

Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. CoRR, abs/1607.06450.

Jaime G. Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In SIGIR '98: Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, August 24-28 1998, Melbourne, Australia, pages 335-336. ACM.

Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1662-1675. Association for Computational Linguistics.

Janara Christensen, Mausam, Stephen Soderland, and Oren Etzioni. 2013. Towards coherent multi-document summarization. In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings, June 9-14, 2013, Westin Peachtree Plaza Hotel, Atlanta, Georgia, USA, pages 1163-1173. The Association for Computational Linguistics.

Günes Erkan and Dragomir R. Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summarization. J. Artif. Intell. Res., 22:457-479.

Alexander Richard Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir R. Radev. 2019. Multi-News: A large-scale multi-document summarization dataset and abstractive hierarchical model. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 1074-1084. Association for Computational Linguistics.

Sebastian Gehrmann, Yuntian Deng, and Alexander M. Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 4098-4109. Association for Computational Linguistics.

Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR.

Karl Moritz Hermann, Tomás Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1693-1701.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.

Logan Lebanoff, Kaiqiang Song, and Fei Liu. 2018. Adapting the neural encoder-decoder framework from single to multi-document summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 4131-4141. Association for Computational Linguistics.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81.

Chin-Yew Lin and Eduard H. Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, HLT-NAACL 2003, Edmonton, Canada, May 27 - June 1, 2003. The Association for Computational Linguistics.

Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating Wikipedia by summarizing long sequences. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.

Yang Liu and Mirella Lapata. 2019. Hierarchical transformers for multi-document summarization. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers, pages 5070-5081. Association for Computational Linguistics.

Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, EMNLP 2004, A meeting of SIGDAT, a Special Interest Group of the ACL, held in conjunction with ACL 2004, 25-26 July 2004, Barcelona, Spain, pages 404-411. ACL.

Ramesh Nallapati, Bowen Zhou, Cícero Nogueira dos Santos, Caglar Gülçehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016, Berlin, Germany, August 11-12, 2016, pages 280-290. ACL.

Paul Over, Hoa Dang, and Donna Harman. 2007. DUC in context. Inf. Process. Manage., 43(6).

Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.

Mike Schuster and Kuldip K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Trans. Signal Processing, 45(11):2673-2681.

Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1073-1083. Association for Computational Linguistics.

Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. Abstractive document summarization with a graph-based attentional neural model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1171-1181. Association for Computational Linguistics.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5998-6008.

Xiaojun Wan. 2010. Towards a unified approach to simultaneous single-document and multi-document summarizations. In COLING 2010, 23rd International Conference on Computational Linguistics, Proceedings of the Conference, 23-27 August 2010, Beijing, China, pages 1137-1145. Tsinghua University Press.

Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Parek, Krishnan Srinivasan, and Dragomir R. Radev. 2017. Graph-based neural multi-document summarization. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), Vancouver, Canada, August 3-4, 2017, pages 452-462. Association for Computational Linguistics.

Jianmin Zhang, Jiwei Tan, and Xiaojun Wan. 2018. Towards a neural network approach to abstractive multi-document summarization. CoRR, abs/1804.09010.
# Accurate polyglot semantic parsing with DAG grammars

Federico Fancellu Akos Kádár Ran Zhang Afsaneh Fazly

Samsung AI Centre Toronto (SAIC Toronto)

{federico.f, ran.zhang, a.fazly}@samsung.com

# Abstract

Semantic parses are directed acyclic graphs (DAGs), but in practice most parsers treat them as strings or trees, mainly because models that predict graphs are far less understood. This simplification, however, comes at a cost: there is no guarantee that the output is a well-formed graph. A recent work by Fancellu et al.
(2019) addressed this problem by proposing a graph-aware sequence model that utilizes a DAG grammar to guide graph generation. We significantly improve upon this work by proposing a simpler architecture as well as more efficient training and inference algorithms that can always guarantee the well-formedness of the generated graphs. Importantly, unlike Fancellu et al., our model does not require language-specific features, and can hence harness the inherent ability of DAG-grammar parsing in multilingual settings. We perform monolingual as well as multilingual experiments on the Parallel Meaning Bank (Abzianidze et al., 2017). Our parser outperforms previous graph-aware models by a large margin, and closes the performance gap between string-based and DAG-grammar parsing.

# 1 Introduction

Semantic parsers map a natural language utterance into a machine-readable meaning representation, thus helping machines understand natural language data and perform inference and reasoning over it. Various semantic formalisms have been explored as the target meaning representation for semantic parsing, including dependency-based compositional semantics (Liang et al., 2013), abstract meaning representation (AMR; Banarescu et al., 2013), minimal recursion semantics (MRS; Copestake et al., 2005), and discourse representation theory (DRT; Kamp, 1981). Despite meaningful differences across formalisms and parsing models, a representation in any of these formalisms can be expressed as a directed acyclic graph (DAG).

![](images/6f3333853c2a7ad3790b0cfa686d59d92eabfc7c3020ef56af74ba32fd4eafc1.jpg)
Figure 1: The discourse representation structure for 'We barred the door and locked it'. For ease of reference in later figures, each box includes a variable corresponding to the box itself, at top right in gray.

Consider for instance the sentence 'We barred the door and locked it', whose meaning representation as a Discourse Representation Structure (DRS) is shown in Figure 1.
A DRS is usually represented as a set of nested boxes (e.g. $b_{1}$ ), containing variable-bound discourse referents (e.g. 'lock(e2)'), semantic constants (e.g. 'speaker'), predicates (e.g. AGENT) expressing relations between variables and constants, and discourse relations between the boxes (e.g. CONTINUATION). This representation can be expressed as a DAG by turning referents and constants into vertices, and predicates and discourse relations into connecting edges, as shown in Figure 2.

How can we parse a sentence into a DAG? Commonly adopted approaches view graphs as strings (e.g. van Noord and Bos, 2017; van Noord et al., 2018) or trees (e.g. Zhang et al., 2019a; Liu et al., 2018), taking advantage of the linearized graph representations provided in annotated data (e.g. Figure 3, where the graph in Figure 2 is represented in PENMAN notation (Goodman, 2020)).

![](images/53e19befb60c813803c791689afd53dac33cf82b6578be810b4ade542e8f8562.jpg)
Figure 2: The DRS of Figure 1 expressed as a DAG.

Figure 3: The DAG of Figure 2 expressed as a string.

```lisp
(b1 / □ :CONTINUATION1 (b2 / □ :DRS (e1 / bar
                                      :AGENT (c1 / speaker)
                                      :THEME (x1 / door)))
        :CONTINUATION2 (b3 / □ :DRS (e2 / lock
                                      :AGENT c1
                                      :PATIENT x1)))
```

An advantage of these linearized representations is that they allow the use of well-understood sequential decoders and provide a general framework for parsing into any formalism. However, such representations are unaware of the overall graph structure they build, as well as of reentrant semantic relations, such as coordination, coreference, and control, that are widespread in language. Parsers such as Zhang et al. (2019b), although able to generate reentrancies in their output, do so by simply predicting pointers back to already generated nodes.

Parsing directly into DAGs, although desirable, is less straightforward than string-based parsing.
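To make the well-formedness problem concrete, here is a toy check (our own illustration, not from the paper): a token sequence produced by a free-running string decoder is a valid PENMAN linearization only if, among other conditions, its brackets balance, and nothing in a plain sequence model enforces this.

```python
def penman_balanced(tokens):
    """Check one necessary well-formedness condition of a PENMAN-style
    linearization: parentheses must balance and never close early."""
    depth = 0
    for tok in tokens:
        depth += tok.count("(") - tok.count(")")
        if depth < 0:          # closed a bracket that was never opened
            return False
    return depth == 0          # everything opened was eventually closed

print(penman_balanced("( b1 / box :DRS ( e1 / bar ) )".split()))  # True
```

A grammar-constrained decoder, by contrast, makes such post-hoc checks unnecessary, since every derivation step is valid by construction.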
Whereas probabilistic models of strings and trees are ubiquitous in NLP, it is at present an active problem in modern formal language theory to develop formalisms that allow one to define probability distributions over DAGs of practical interest. A successful line of work derives semantic graphs using graph grammars, which generate a graph by rewriting non-terminal symbols with graph fragments. Among these, hyperedge replacement grammar (HRG) has been explored for parsing into semantic graphs (Habel, 1992; Chiang et al., 2013). However, parsing with HRGs is not practical due to its complexity and the large number of possible derivations per graph (Groschwitz et al., 2015). Work has therefore looked at ways of constraining the space of possible derivations, usually in the form of alignment or syntax (Peng et al., 2015). For example, Groschwitz et al. (2018) and Donatelli et al. (2019) extracted fine-grained typed grammars whose productions are aligned to the input sentence and combined over a dependency-like structure. Similarly, Chen et al. (2018) draw on constituent parses to combine HRG fragments.

Björklund et al. (2016) show that there exists a restricted subset of HRGs, Restricted DAG grammar (RDG), that provides a unique derivation per graph. A unique derivation means that a graph is generated by a unique sequence of productions, which can then be predicted using sequential decoders, without the need for an explicit alignment model or an underlying syntactic structure. Furthermore, the grammar places hard constraints on the rewriting process, which can be used to guarantee the well-formedness of output graphs during decoding. Drawing on this result, recent work by Fancellu et al. (2019, henceforth FA19) introduces recurrent neural network RDGs, a sequential decoder that models graph generation as a rewriting process with an underlying RDG. However, despite this promising framework, the approach of FA19 falls short in several aspects.
In this paper, we address these shortcomings and propose an accurate, efficient, polyglot model for neural RDG parsing. Specifically, our contributions are as follows:

Grammar: In practice, RDGs extracted from training graphs can be large and sparse. We show a novel factorization of the RDG production rules that reduces the sparsity of the extracted grammars. Furthermore, we use RDGs extracted from fully human-annotated training data to filter out samples from a larger, noisy, machine-generated dataset that cannot be derived using such grammars. We find that this strategy not only drastically reduces the size of the grammar, but also improves final performance.

Model: FA19 use a syntactic-parsing-inspired architecture, a stackLSTM, trained on a gamut of syntactic and semantic features. We replace this with a novel architecture that allows for batched input, while adding a multilingual transformer encoder that relies on word-embedding features only.

Constrained decoding: We identify a limitation in the decoding algorithm presented by FA19, in that it only partially makes use of the well-formedness constraints of an RDG. We describe the source of this error, implement a correction, and show that we can guarantee well-formed DAGs.

Multilinguality: Training data in languages other than English is often small and noisy. FA19 addressed this issue with cross-lingual models using features available only for a small number of languages, but did not observe improvements over monolingual baselines in languages other than English. We instead demonstrate the flexibility of RDGs by extracting a joint grammar from graph annotations in different languages. At the same time, we make full use of our multilingual encoder to build a polyglot model that can accept training data in any language, allowing us to experiment with different combinations of data.
Our results tell a different story: models that use combined training data from multiple languages always substantially outperform monolingual baselines.

We test our approach on the Parallel Meaning Bank (PMB, Abzianidze et al., 2017), a multilingual graphbank. Our experimental results demonstrate that our new model outperforms that of FA19 by a large margin on English while fully exploiting the power of RDGs to always guarantee a well-formed graph. We also show that the ability to train simultaneously on multiple languages substantially improves performance for each individual language. Importantly, we close the performance gap between graph-aware parsing and state-of-the-art string-based models.

# 2 Restricted DAG Grammar

We model graph generation as a process of graph rewriting with an underlying grammar. Our grammar is a restricted DAG grammar (RDG, Björklund et al., 2016), a type of context-free grammar designed to model linearized DAGs. For ease of understanding, we represent fragments in grammar productions as strings. This is shown in Figure 4, where the right-hand-side (RHS) fragment can be represented as its left-to-right linearization, with reentrant nodes flagged by a dedicated $ symbol.

An RDG is a tuple $\langle P, N, \Sigma, S, V \rangle$ where $P$ is a set of productions of the form $\alpha \rightarrow \beta$; $N$ is the set of non-terminal symbols $\{L, T_0, \dots, T_n\}$ up to a maximum rank $n$; $\Sigma$ is the set of terminal symbols; $S$ is the start symbol; and $V$ is an unbounded set of variable references $\{\$1, \$2, \dots\}$, whose role is described below.

![](images/2f900035edad98263833c8be14121f91d1aed44e35031df604e8578afadbc696.jpg)
Figure 4: An example production for a grammar. The graph fragment on the right-hand side can be replaced with a string representing its depth-first traversal.
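Concretely, the production of Figure 4 can be stored as a small record; this is an illustrative sketch of our own (field names and the helper below are not from the paper's code):

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class Production:
    """A sketch of one RDG production."""
    lhs_rank: int    # rank i of the LHS non-terminal T_i
    variable: str    # RHS variable carrying the semantic type, e.g. 'b1'
    reentrant: bool  # True if the variable is marked v*
    edges: tuple     # (edge label, successor) pairs; a successor is either a
                     # non-terminal call such as "T2($1,$2)" or a reference "$1"

# The production of Figure 4: S -> (b1 / L :CONT T2($1,$2) :CONT T2($1,$2))
r1 = Production(0, "b1", False, (("CONT", "T2($1,$2)"), ("CONT", "T2($1,$2)")))

def expected_reentrancies(p):
    """Distinct variable references on the RHS: how many reentrant
    nodes the derivation still owes us."""
    refs = set()
    for _, succ in p.edges:
        refs.update(re.findall(r"\$\d+", succ))
    return len(refs)

print(expected_reentrancies(r1))  # 2
```

Here the two distinct references $1 and $2 signal that exactly two reentrant nodes must appear later in the derivation, which is the bookkeeping described next.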
The left-hand side (LHS) $\alpha$ of a production $p\in P$ is a function $T_{i}\in N$ (where $i$ is the rank) that takes $i$ variable references as arguments. Variable references are what ensure the well-formedness of a generated graph in an RDG: they keep track of how many reentrancies are expected in a derivation, as well as of how these are connected to their neighbouring nodes. Rank, in turn, indicates how many reentrancies are present in a graph derivation. For instance, in the graph fragment in Figure 4, given that there are two variable references and a non-terminal of rank 2, we expect two reentrant nodes at some point in the derivation. The RHS $\beta$ is a typed fragment made up of three parts: a variable $v$ describing the semantic type, a label non-terminal $L$, and a list of tuples $\langle e,s\rangle$ where $e$ is an edge label from a set of labels $E$ and $s$ is either a non-terminal function $T$ or a variable reference. The non-terminal $L$ can only be rewritten as a terminal symbol $l\in \Sigma$. If a node is reentrant, we mark it with a superscript $*$ on $v$. Variable references are percolated down the derivation and are replaced once a reentrant variable $v^{*}$ is found on the RHS.

Following FA19, we show in Figure 5 a complete derivation that reconstructs the graph in Figure 2. Our grammar derives strings by first rewriting the start symbol $S$, a non-terminal function $T_{0}$. At each subsequent step, the leftmost non-terminal function in the partially derived string is rewritten, with special handling for variable references described below. A derivation is complete when no non-terminals remain.

Variable references are resolved when applying a production that maps a reentrant variable name

| Step | Production | Result |
|---|---|---|
| 1 | r1 | `(b1/L :CONT T2($1,$2) :CONT T2($1,$2))` |
| 2 | r2 | `(b1/□ :CONT (b2/L :DRS T2($1,$2)) :CONT T2($1,$2))` |
| 3 | r3 | `(b1/□ :CONT (b2/□ :DRS (e1/L :AGENT T1($1) :THEME T1($2))) :CONT T2($1,$2))` |
| 4 | r4 | `(b1/□ :CONT (b2/□ :DRS (e1/bar :AGENT (c*/L) :THEME T1($2))) :CONT T2(c,$2))` |
| 5 | r5 | `(b1/□ :CONT (b2/□ :DRS (e1/bar :AGENT (c*/speaker) :THEME (x*/L))) :CONT T2(c,x))` |
| 6 | r2 | `(b1/□ :CONT (b2/□ :DRS (e1/bar :AGENT (c*/speaker) :THEME (x*/doorp))) :CONT (b3/□ :DRS T2(c,x)))` |
| 7 | r6 | `(b1/□ :CONT (b2/□ :DRS (e1/bar :AGENT (c*/speaker) :THEME (x*/doorp))) :CONT (b3/□ :DRS (e2/lock :AGENT c :PATIENT x)))` |
Figure 5: A full RDG derivation for the graph in Figure 2. At each step $t$, the leftmost non-terminal $T_{n}$ (in blue) is rewritten into a fragment (underlined) and its label non-terminal $L$ (in red) is replaced with a terminal. Variable references are percolated down the derivation unless a reentrant variable $v^{*}$ is found (steps 4 and 5).

to a reference, as shown for production $r_4$, where the variable $c$ is mapped to `$1`. Once this mapping is performed, all instances of `$1` in the RHS are replaced by the corresponding variable name. In this way, the reference to $c$ is tracked throughout the derivation, becoming the target of AGENT in $r_6$. The same applies in $r_5$, where $x$ is mapped to `$2`.

All our fragments are delexicalized. This is achieved by the separate non-terminal $L$, which at every step is rewritten into the corresponding terminal label (e.g. bar). Delexicalization allows us to reduce the size of the grammar and to factorize the prediction of fragments and labels separately.

However, DAG grammars can still be large due to the many combinations in which edge labels and their corresponding non-terminals can appear in a fragment. For this reason, we propose a further simplification where edge labels are replaced with placeholders $\hat{e}_1\ldots \hat{e}_{|e|}$, which we exemplify using the production in Figure 4 as follows:

$$
S \rightarrow (b_1 / L\ \hat{e}_1\ T_2(\$1,\$2)\ \hat{e}_2\ T_2(\$1,\$2))
$$

After a fragment is predicted, the placeholders are replaced with actual edge labels by a dedicated module (see § 3.2 for more details).

Comparison with Groschwitz et al. (2018)'s AM algebra. RDG is very similar to other graph grammars proposed for semantic parsing, in particular to Groschwitz et al. (2018)'s AM algebra used for AMR parsing. Their framework relies on a fragment extraction process similar to ours, where each node in a graph, along with its outgoing edges, makes up a fragment. However, the two grammars differ mainly in how typing and as a
However, the two grammars differ mainly in how typing and as a + +consequence, composition is thought of: whereas in the AM algebra both the fragments themselves and the non-terminal edges are assigned thematic types (e.g. S[object], O[bject], MOD[ifier]), we only place rank information on the non-terminals and assign a more generic semantic type to the fragment. + +The fine-grained thematic types in the AM algebra add a level of linguistic sophistication that RDG lacks, in that fragments fully specify the roles a word is expected to fill. This ensures that the output graphs are always semantically well-formed; in that AM algebra behaves very similar to CCG. However this sophistication not only requires ad-hoc heuristics that are tailored to a specific formalism (AMR in this case) but also relies on alignment information with the source words. + +On the other hand, our grammar is designed to predict a graph structure in sequential models. Composition is constrained by the rank of a nonterminal so to ensure that at each decoding step the model is always aware of the placement of reentrant nodes. However, we do not ensure semantic well-formedness in that words are predicted separately from their fragments and we do not rely on alignment information. In that our grammar extraction algorithm does not rely on any heuristics and can be easily applied to any semantic formalism. + +# 3 Architecture + +Our model is an encoder-decoder architecture that takes as input a sentence and generates a DAG $G$ as a sequence of fragments with their corresponding labels, using the rewriting system in § 2. In what follows we describe how we obtain the logits for + +each target prediction, all of which are normalized with the softmax function to yield probability distributions. A detailed diagram of our architecture is shown in Figure 7 in Appendix A. 
# 3.1 Encoder

We encode the input sentence $w_{1}, \ldots, w_{n}$ using a pre-trained multilingual BERT (mBERT) model (Devlin et al., 2018). The final word-level representations are obtained by mean-pooling the sub-word representations of mBERT computed using the WordPiece algorithm (Schuster and Nakajima, 2012). We do not rely on any additional (language-specific) features, hence making the encoder polyglot. The word vectors are then fed to a two-layer BiLSTM encoder, whose forward and backward states are concatenated to produce the final token encodings $\mathbf{s}_1^{enc}, \ldots, \mathbf{s}_n^{enc}$.

# 3.2 Decoder

The backbone of the decoder is a two-layer LSTM, with a separate attention mechanism for each layer. Our decoding strategy follows steps similar to those in Figure 5. At each step we first predict a delexicalized fragment $f_{t}$, and then substitute a terminal label $l_{t}$ in place of $L$. We initialize the decoder LSTM with the encoder's final state $\mathbf{s}_{\mathbf{n}}^{\mathrm{enc}}$. At each step $t$, the network takes as input $[\mathbf{f}_{t-1}; \mathbf{l}_{t-1}]$, the concatenation of the embeddings of the fragment and of the label output at the previous time step. At $t = 0$, we initialize both fragment and label encodings with a $\langle \text{START} \rangle$ token. The first layer of the decoder is responsible for predicting fragments. The second layer takes as input the output representations of the first layer, and predicts terminal labels. The following paragraphs provide details on fragment and label prediction.

Fragment prediction. We make the prediction of a fragment dependent on the embedding of the parent fragment and the decoder history. We define as parent fragment the fragment containing the non-terminal that the current fragment rewrites; for instance, in Figure 5, the fragment in step 1 is the parent of the fragment underlined in step 2.
Following this intuition, at time $t$, we concatenate the hidden state of the first layer $\mathbf{h}_t^1$ with a context vector $\mathbf{c}_t^1$ and the embedding of its parent fragment $\mathbf{u}_t$. The logits for fragment $f_t$ are predicted with a single linear layer $\mathbf{W}^{f}[\mathbf{c}_{t}^{1};\mathbf{u}_{t};\mathbf{h}_{t}^{1}] + \mathbf{b}$. We compute $\mathbf{c}_{t}^{1}$ using a standard soft attention mechanism (Xu et al., 2015) as follows, where $\mathbf{s}_{1:N}^{\mathrm{enc}}$ represents the concatenation of all encoder hidden states:

$$
\mathbf{c}_t^1 = \sum_{i}^{N} \alpha_i \mathbf{s}_i^{\mathrm{enc}} \tag{1}
$$

$$
\mathbf{a} = \mathrm{MLP}^1\left[\mathbf{h}_t^1; \mathbf{s}_{1:N}^{\mathrm{enc}}\right] \tag{2}
$$

$$
\alpha_i = \frac{e^{\mathbf{a}_i}}{\sum_{j} e^{\mathbf{a}_j}} \tag{3}
$$

$$
\mathrm{MLP}^1(\mathbf{x}) = \mathrm{ReLU}(\mathbf{W}\mathbf{x} + \mathbf{b}) \tag{4}
$$

Label prediction. Terminal labels in the output graph can either correspond to a lemma in the input sentence (e.g. 'bar', 'lock') or to a semantic constant (e.g. 'speaker'). We make use of this distinction by incorporating a selection mechanism that learns to predict either a lemma from the input or a token from a label vocabulary $L$. We concatenate the hidden state of the second layer $\mathbf{h}_t^2$ with the embedding of the fragment predicted at the current time step $\mathbf{f}_t$ and the second-layer context vector $\mathbf{c}_t^2$. Let us refer to this representation as $\mathbf{z}_t = [\mathbf{f}_t; \mathbf{h}_t^2; \mathbf{c}_t^2]$. The context vector for the second layer is computed in the same way as $\mathbf{c}_t^1$, but using $\mathbf{h}_t^2$ in place of $\mathbf{h}_t^1$ and separate attention MLP parameters.
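The soft attention of Equations (1)-(4) amounts to scoring each encoder state with a small MLP and taking the softmax-weighted average of the states. A dependency-free sketch with toy numbers (the one-unit "MLP" and all weight values are illustrative, not the model's actual parameters):

```python
import math

def mlp_scores(h_t, s_enc, w, b):
    # Eqs. (2) and (4): a_i = ReLU(w . [h_t; s_i] + b), one score per state
    return [max(0.0, sum(wj * xj for wj, xj in zip(w, h_t + s_i)) + b)
            for s_i in s_enc]

def soft_attention(h_t, s_enc, w, b):
    a = mlp_scores(h_t, s_enc, w, b)
    exp_a = [math.exp(ai) for ai in a]
    z = sum(exp_a)
    alpha = [ea / z for ea in exp_a]          # Eq. (3): softmax over scores
    dim = len(s_enc[0])
    c_t = [sum(alpha[i] * s_enc[i][d] for i in range(len(s_enc)))
           for d in range(dim)]               # Eq. (1): weighted sum of states
    return alpha, c_t

# three encoder states of dimension 2, one decoder state of dimension 2
s_enc = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
h_t = [0.5, 0.5]
alpha, c_t = soft_attention(h_t, s_enc, w=[1.0, 1.0, 1.0, 1.0], b=0.0)
```

With these toy weights, the third encoder state receives the largest score and therefore the largest attention weight; the second-layer attention works identically with its own parameters.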
To compute the logits for label prediction, we apply a linear transformation to the encoder representations, $\mathbf{e} = \mathbf{W}^s \mathbf{s}_{1:N}^{\mathrm{enc}}$. We concatenate the resulting vector with the label embedding matrix $\mathbf{L}$ and compute the dot product $\mathbf{z}_t^T[\mathbf{e}; \mathbf{L}]$ to obtain the final unnormalized scores jointly for all tokens in the input and in $L$.

In the PMB, each label is also annotated with its sense tag and with information about whether it is presupposed in the context. We predict the former, $s_t$, from a class of sense tags $S$ extracted from the training data, and the latter, $p_t$, a binary variable, by passing $\mathbf{z}_t$ through two distinct linear layers to obtain the logits for each.

Edge factorization. In § 2, we discussed how we make grammars less sparse by replacing the edge labels in a production fragment with placeholders. From a modelling perspective, this allows us to factorize edge label prediction: the decoder first predicts all the fragments in the graph and then predicts the edge labels $e_1 \dots e_{|e|}$ that substitute in place of the placeholders.

To do so, we cache the intermediate representations $\mathbf{z}_t$ over time and use them as features to replace the edge placeholders $\hat{e}_i$ with the corresponding true edge labels $e_i$. To obtain the edge-label logits, we pass the second-layer representations of the child fragment $\mathbf{z}^c$ and of the parent fragment $\mathbf{z}^p$ to a pairwise linear layer: $\mathbf{W}^e[\mathbf{W}^c\mathbf{z}^c \odot \mathbf{W}^p\mathbf{z}^p]$.

# 3.3 Graph-aware decoding

At inference time, our graph decoder rewrites non-terminals left-to-right by choosing the fragment with the highest probability, and then predicts terminal and/or edge labels. The rank of a non-terminal and the variable references it takes as arguments place a hard constraint on the fragment that rewrites in its place (as shown in § 2).
Only by satisfying these constraints can the model ensure the well-formedness of generated graphs.

By default, our decoder does not explicitly follow these constraints and can substitute a non-terminal with any fragment in the grammar. This is to assess whether a vanilla decoder can learn to substitute in a fragment that correctly matches a non-terminal. On top of the vanilla decoder, we then exploit these hard constraints in two ways, as follows:

Rank prediction. We incorporate information about rank as a soft constraint during learning by having the model predict it at each time step. This means that the model can still predict a fragment whose rank and variable references do not match those of a non-terminal, but it is guided not to do so. We treat rank prediction as a classification task, using the same features as fragment prediction, which we pass to a linear layer: $\mathbf{r}_t = \mathbf{W}^r [\mathbf{c}_t^1;\mathbf{u}_t;\mathbf{h}_t^1] + \mathbf{b}^r$. Note that the range of predicted ranks is determined by the training grammar, so it is not possible to generate a rank that has not been observed and has no associated rules.

Constrained decoding. We explicitly ask the model to choose only amongst those fragments that match the rank and variable references of a non-terminal. This may override model predictions, but always ensures that a graph is well-formed. To ensure well-formedness, FA19 only check for rank. This can lead to infelicitous consequences. Consider for instance the substitution in Figure 6. The fragments at the bottom of the middle and right representations are both of rank 2, but whereas the first allows the edges to refer back to the reentrant nodes, the second introduces an extra reentrant node, therefore leaving one of the reentrant nodes disconnected.
Checking just for rank is therefore not enough; one also needs to check whether a reentrant node that will substitute into a variable reference has already been generated. If not, any fragment of the same rank can be accepted. If such a node already exists, only fragments that do not introduce another reentrant node can be accepted. This constrained decoding strategy is what allows us to always generate well-formed graphs; we integrate this validation step into the decoding algorithm when selecting the candidate fragment.

Finally, we integrate these hard constraints into the softmax layer as well. Instead of normalizing the logits across all fragment types with a single softmax operation, we normalize them separately for each rank. Errors are then only propagated through the subset of parameters in $\mathbf{W}^f$ and $\mathbf{b}^f$ responsible for the logits within the target rank $r_t$.

# 3.4 Training objective

Our objective is to maximize the log-likelihood of the full graph, $P(G|s)$, approximated by a decomposition over each prediction task separately:

$$
\sum_{t} \Big[ \log P(f_{t}) + \log P(\ell_{t}) + \log P(r_{t}) + \log P(s_{t}) + \log P(p_{t}) + \sum_{i} \log P(e_{i}) \Big] \tag{5}
$$

where $f_{t}$ is the fragment; $\ell_t$ is the label; $r_t$ is the (optional) rank of $f_{t}$; $s_t$ and $p_t$ are the sense and presupposition labels of the terminal label $\ell_t$; and $e_1 \dots e_{|e|}$ are the edge labels of $f_{t}$. To prevent our model from overfitting, rather than directly optimizing the log-likelihoods, we apply label smoothing to each prediction term (Szegedy et al., 2016).
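The objective of Eq. (5) with label smoothing can be sketched as follows (our own toy illustration: each prediction task contributes a smoothed negative log-likelihood, and the per-task losses at a step are summed):

```python
import math

def smoothed_nll(probs, gold, eps=0.1):
    """Label-smoothed NLL (Szegedy et al., 2016): the target puts 1-eps
    on the gold class and spreads eps uniformly over the rest."""
    k = len(probs)
    target = [(1 - eps) if i == gold else eps / (k - 1) for i in range(k)]
    return -sum(t * math.log(p) for t, p in zip(target, probs))

def step_loss(task_probs, task_gold, eps=0.1):
    """Sum the losses of the per-task predictions at one decoding step,
    e.g. fragment, label, rank, sense, presupposition and edge labels."""
    return sum(smoothed_nll(p, g, eps) for p, g in zip(task_probs, task_gold))

# two toy tasks at one step: a 3-way and a 2-way prediction
loss = step_loss([[0.7, 0.2, 0.1], [0.6, 0.4]], [0, 1])
```

Because the smoothed target is never one-hot, confident-but-wrong predictions are penalized more than confident-and-right ones, which is the regularizing effect the paragraph above relies on.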
# 4 Experimental setup

# 4.1 Data

We evaluate our parser on the Parallel Meaning Bank (Abzianidze et al., 2017), a multilingual graphbank where sentences in four languages (English (en), Italian (it), German (de) and Dutch (nl)) are annotated with their semantic representations in the form of Discourse Representation Structures (DRS). We test on v.2.2.0 to compare with previous work, and present the first results on v.3.0 for all four languages. We also present results when training on both gold and silver data, where the latter is ${\sim}10\times$ larger but contains machine-generated
parses, of which only a small fraction has been manually edited. Statistics for both versions of the PMB are reported in Appendix B.

Our model requires an explicit grammar, which we obtain by automatically converting each DAG in the training data into a sequence of productions. This conversion follows the one in FA19 with minor changes; we include details in Appendix C.

Statistics on the grammars extracted from the PMB are presented in Table 1, where, along with the number of training instances and fragments, we report the average rank, an indication of how many reentrancies (on average) are present in the graphs. RDGs can be large, especially in the case of silver data, where incorrect parses lead to a larger number of extracted fragments and to more complex, noisy constructions, as attested by the higher average ranks. More importantly, we show that removing the edge labels from the fragments leads to a drastic reduction in the number of fragments, especially for the silver corpora.

| | # training instances | # fragments (+edge label) | # fragments (-edge label) | avg. rank |
|---|---|---|---|---|
| PMB2.2.0-g | 4585 | 1196 | 232 | 1.56 |
| PMB2.2.0-s | 63960 | 17414 | 2586 | 2.85 |
| PMB3-g | 6618 | 1695 | 276 | 2.22 |
| PMB3-s | 94776 | 36833 | 6251 | 3.01 |
| PMB3-it | 2743 | 1827 | 378 | 2.32 |
| PMB3-de | 5019 | 4025 | 843 | 2.61 |
| PMB3-nl | 1238 | 1338 | 318 | 2.29 |

Table 1: Statistics for the grammars extracted from the PMB (g = gold; s = silver).

# 4.2 Evaluation metrics

To evaluate our parser, we compare its output DRSs to the gold-standard graph structures. For this, we use the Counter tool of van Noord et al. (2018), which calculates an F-score by searching for the best match between the variables of the predicted and the gold-standard graphs. Counter's search algorithm is similar to the SMATCH evaluation system for AMR parsing (Cai and Knight, 2013).

There may be occurrences where our graph is deemed ill-formed by Counter; we assign these graphs a score of 0. This ill-formedness is, however, not due to the model itself but to specific requirements placed on the output DRS by the Counter script.
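For intuition, the score Counter reports reduces to micro precision/recall/F1 over the best clause matching found by its search; the helper below is our own sketch, not the Counter implementation (the matching itself, akin to SMATCH's hill-climbing, is assumed to be already computed):

```python
# Micro precision/recall/F1 over matched clauses, in the spirit of the
# Counter tool (van Noord et al., 2018). Ill-formed graphs score 0,
# as in our evaluation.

def prf(matched, predicted, gold):
    if predicted == 0 or gold == 0:
        return 0.0, 0.0, 0.0
    p = matched / predicted
    r = matched / gold
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1

# toy counts: 8 of 10 predicted clauses match a gold graph with 11 clauses
p, r, f1 = prf(matched=8, predicted=10, gold=11)
```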
| | P | R | F1 |
|---|---|---|---|
| baseline | 80.0 | 70.9 | 75.2 |
| + rank-prediction | 81.0 | 72.3 | 76.4 |
| + constrained-decoding | 80.5 | 75.2 | 77.8 |
| + edge-factorization | 82.5 | 78.5 | 80.4 |
| ours-best + silver | 83.8 | 80.6 | 82.2 |
| ours-best + filtering | 83.1 | 80.5 | 81.8 |
Table 2: Ablation results on the dev portion of PMB2.2.0. The top half shows results for models trained on gold data only. The bottom half shows results for models trained on silver+gold data.

# 5 Experimental Results

We first present results of ablation experiments to understand which model configuration performs best (§ 5.1). We then compare our best-performing model with several existing semantic parsers (§ 5.2), and present our model's performance in multilingual settings (§ 5.3).

# 5.1 Ablation experiments

Table 2 shows results for our model in various settings. Our baseline is trained on gold data alone, uses a full grammar and performs unconstrained decoding, with and without rank prediction. Note that unconstrained decoding can lead to ill-formed graphs. To better understand the effect of this, we compare the performance of the baseline with a model that uses constrained decoding and thus always generates well-formed graphs. We train all our models on a single TitanX GPU v100. We report hyperparameters and other training details in Appendix D.

Our results differ from those of FA19, who show that a baseline model outperforms one with constrained decoding. Not only do we find that constrained decoding outperforms the baseline, but we also observe that without it, 26 graphs (${\sim}4\%$) are ill-formed. In addition, our results show that predicting edge labels separately from fragments (edge factorization) leads to a substantial improvement in performance, while also drastically reducing the size of the grammar (as shown in Table 1).

We also train our best-performing model (ours-best) on the silver and gold data combined (+silver). This is to assess whether more data, albeit noisy, results in better performance.
However, noisy data can lead to a noisy grammar. To reduce this noise, we experiment with first extracting a grammar from the gold training set and using it to filter the silver set, keeping only instances that can be derived with the gold grammar (+filtering). The filtering results in a much smaller grammar (232 vs. 2586 fragments), while sacrificing only a small percentage ($10\%$) of the training instances.

![](images/5badca3391c7dd085eeb66b45606412d9a9661c22d27da85d03d6e884e50830d.jpg)
Figure 6: Example of a correct (middle) and a wrong (right) substitution of a non-terminal function (left, in blue) during constrained decoding.

van Noord et al. (2019), Liu et al. (2019) and FA19 found that models trained on silver data require additional fine-tuning on gold data alone to achieve the best performance; we follow this strategy in our experiments as well. Overall, the results show that adding silver data improves performance, and that filtering the input silver data leads to only a slight loss in performance while keeping the size of the grammar small.

# 5.2 Comparison to previous work

We compare our best-performing model against previous work on PMB2.2.0. We first compare with models trained solely on gold data. Besides the DAG-grammar parser of FA19, we compare with the transition-based stackLSTM of Evang (2019), which uses a buffer-stack architecture to predict a DRS fragment for each input token using the alignment information in the PMB; our graph parser does not make use of such information and relies solely on attention.

We then compare our best-performing model with two models trained on gold plus silver data. van Noord et al. (2019) present a seq2seq parser that decodes an input sentence into a concatenation of clauses, essentially a flattened version of the boxes in Figure 1.
Similar to FA19, their model also uses a wide variety of language-dependent features, including part-of-speech, dependency and CCG tags, while ours relies solely on word embeddings. In this respect, our model is similar to that of Liu et al. (2019), which uses the same architecture as the model of van Noord et al. (2019) but replaces the LSTM encoder with a transformer model, without
| | P | R | F1 |
|---|---|---|---|
| Fancellu et al. (2019) | - | - | 73.4 |
| Evang (2019) | - | - | 74.4 |
| ours-best | 84.5 | 81.3 | 82.9 |
| van Noord et al. (2019) | - | - | 86.8 |
| Liu et al. (2019) | 85.8 | 84.5 | 85.1 |
| ours-best + silver | 86.1 | 83.6 | 84.9 |
the use of additional features.

Table 3: Comparison with previous work on the test portion of PMB2.2.0. Results in the top half are for models trained on gold data, whereas the bottom half shows results for models trained on silver+gold data.

Results are summarized in Table 3. When trained on gold data alone, our model outperforms previous models by a large margin, without relying on alignment information or any extra features besides word embeddings. When trained on silver+gold, we close the performance gap with state-of-the-art models that decode into strings, despite relying solely on multilingual word embeddings.

# 5.3 Multilingual experiments

Table 4 shows the results on languages other than English. In our multilingual experiments, we first train and test monolingual models in each language. In addition, we perform zero-shot experiments by training a model on English and testing it on the other languages (cross-lingual). We also take full advantage of the fact that our models rely solely on multilingual word embeddings, and experiment with two other multilingual settings: the bilingual models are trained on data in English plus data in a target language (and tested on the target language), while the polyglot models combine the training data of all four languages (and are tested on each language). Parameters for all languages in the bilingual and polyglot models are fully shared.

| | en | de | nl | it |
|---|---|---|---|---|
| *PMB2.2.0* | | | | |
| FA19 (monolingual) | - | 67.9 | 65.8 | 75.9 |
| FA19 (cross-lingual) | - | 63.5 | 65.1 | 72.1 |
| Ours (cross-lingual) | - | 73.4 | 73.9 | 76.9 |
| *ours-best (various), trained and tested on PMB3* | | | | |
| monolingual | 80.0 | 64.2 | 60.9 | 71.5 |
| cross-lingual | - | 73.2 | 74.1 | 75.2 |
| bilingual | - | 71.8 | 76.0 | 77.7 |
| polyglot | 79.8 | 72.5 | 74.1 | 77.9 |

Table 4: Results for the multilingual experiments on the test sets for PMB2.2.0 (top half) and PMB3.0 (bottom half). For the sake of brevity, we report only $\mathrm{F}_1$ scores here, and refer the reader to Table 6 in Appendix E for Precision and Recall values.

FA19 only experiment with a cross-lingual model trained with additional language-dependent features, some of which are available only for a small number of languages (on PMB2.2.0). We therefore compare our cross-lingual models with theirs on PMB2.2.0. We then introduce the first results on PMB3, where we experiment with the other two multilingual settings.

Our results tell a different story from FA19's: all of our multilingual models (bilingual, polyglot and cross-lingual) outperform the corresponding monolingual baselines. We hypothesize that this is mainly because, for languages other than English, only small silver training sets are available, and adding the large gold English data helps dramatically with performance. This hypothesis is reinforced by the fact that a cross-lingual model trained on English data alone reaches a performance comparable to that of the other two models.

# 6 Conclusions

In this paper, we have introduced a graph parser that can fully harness the power of DAG grammars in a seq2seq architecture. Our approach is efficient and fully multilingual, always guarantees well-formed graphs, and can rely on small grammars, while outperforming previous graph-aware parsers in English, Italian, German and Dutch by a large margin. At the same time, we close the gap between string-based and RDG-based decoding. In the future, we plan to extend this work to other semantic formalisms (e.g. AMR, UCCA) as well as to test on other languages, so as to encourage work in languages other than English.

# Acknowledgments

We thank three anonymous reviewers for their useful comments.
Research was conducted at Samsung AI Centre Toronto and funded by Samsung Research, Samsung Electronics Co., Ltd. + +# References + +Lasha Abzianidze, Johannes Bjerva, Kilian Evang, Hessel Haagsma, Rik Van Noord, Pierre Ludmann, Duc-Duy Nguyen, and Johan Bos. 2017. The parallel meaning bank: Towards a multilingual corpus of translations annotated with compositional meaning representations. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. + +Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffith, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th linguistic annotation workshop and interoperability with discourse, pages 178-186. + +Valerio Basile and Johan Bos. 2013. Aligning formal meaning representations with surface strings for wide-coverage text generation. In Proceedings of the 14th European Workshop on Natural Language Generation, pages 1-9, Sofia, Bulgaria. Association for Computational Linguistics. + +Henrik Björklund, Frank Drewes, and Petter Ericson. 2016. Between a rock and a hard place-uniform parsing for hyperedge replacement DAG grammars. In Proceedings of the International Conference on Language and Automata Theory and Applications, pages 521-532. Springer. + +Shu Cai and Kevin Knight. 2013. Smatch: An evaluation metric for semantic feature structures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 748-752. + +Yufei Chen, Weiwei Sun, and Xiaojun Wan. 2018. Accurate shrg-based semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 408-418. + +David Chiang, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, Bevan Jones, and Kevin Knight. 2013. Parsing graphs with hyperedge replacement grammars. 
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 924-932.

Ann Copestake, Dan Flickinger, Carl Pollard, and Ivan A. Sag. 2005. Minimal recursion semantics: An introduction. Research on Language and Computation, 3(2-3):281-332.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT 2019, pages 4171-4186.

Lucia Donatelli, Meaghan Fowlie, Jonas Groschwitz, Alexander Koller, Matthias Lindemann, Mario Mina, and Pia Weißenhorn. 2019. Saarland at MRP 2019: Compositional parsing across all graphbanks. In Proceedings of the Shared Task on Cross-Framework Meaning Representation Parsing at the 2019 Conference on Natural Language Learning, pages 66-75.

Kilian Evang. 2019. Transition-based DRS parsing using stack-LSTMs. In Proceedings of the IWCS Shared Task on Semantic Parsing, Gothenburg, Sweden. Association for Computational Linguistics.

Federico Fancellu, Sorcha Gilroy, Adam Lopez, and Mirella Lapata. 2019. Semantic graph parsing with recurrent neural network DAG grammars. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 2769-2778.

Sorcha Gilroy. 2019. Probabilistic graph formalisms for meaning representations.

Michael Wayne Goodman. 2020. Penman: An open-source library and tool for AMR graphs. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 312-319.

Jonas Groschwitz, Alexander Koller, and Christoph Teichmann. 2015. Graph parsing with s-graph grammars. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1481-1490.
Jonas Groschwitz, Matthias Lindemann, Meaghan Fowlie, Mark Johnson, and Alexander Koller. 2018. AMR dependency parsing with a typed semantic algebra. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).

Annegret Habel. 1992. Hyperedge Replacement: Grammars and Languages, volume 643. Springer Science & Business Media.

Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328-339, Melbourne, Australia. Association for Computational Linguistics.

Hans Kamp. 1981. A theory of truth and semantic representation. Formal Semantics – the Essential Readings, pages 189-222.

Percy Liang, Michael I. Jordan, and Dan Klein. 2013. Learning dependency-based compositional semantics. Computational Linguistics, 39(2):389-446.

Jiangming Liu, Shay B. Cohen, and Mirella Lapata. 2018. Discourse representation structure parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 429-439.

Jiangming Liu, Shay B. Cohen, and Mirella Lapata. 2019. Discourse representation structure parsing with recurrent neural networks and the transformer model. In Proceedings of the IWCS Shared Task on Semantic Parsing.

Rik van Noord, Lasha Abzianidze, Antonio Toral, and Johan Bos. 2018. Exploring neural methods for parsing discourse representation structures. Transactions of the Association for Computational Linguistics, 6:619-633.

Rik van Noord and Johan Bos. 2017. Neural semantic parsing by character-based translation: Experiments with abstract meaning representations. Computational Linguistics in the Netherlands (CLIN).

Rik van Noord, Antonio Toral, and Johan Bos. 2019. Linguistic information in neural semantic parsing with multiple encoders.
In Proceedings of the 13th International Conference on Computational Semantics – Short Papers, pages 24-31.

Xiaochang Peng, Linfeng Song, and Daniel Gildea. 2015. A synchronous hyperedge replacement grammar based approach for AMR parsing. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, pages 32-41.

Mike Schuster and Kaisuke Nakajima. 2012. Japanese and Korean voice search. In Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5149-5152. IEEE.

Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune BERT for text classification? In Proceedings of the China National Conference on Chinese Computational Linguistics, pages 194-206. Springer.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818-2826.

Rik van Noord, Lasha Abzianidze, Hessel Haagsma, and Johan Bos. 2018. Evaluating scoped meaning representations. In Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC).

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. arXiv, abs/1910.03771.

Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 2048-2057, Lille, France. PMLR.

Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019a. AMR Parsing as Sequence-to-Graph Transduction.
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Florence, Italy. Association for Computational Linguistics. +Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019b. Broad-coverage semantic parsing as transduction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). + +# A System architecture + +An illustration of our system architecture is shown in Figure 7. + +# B PMB - data statistics + +
| | train | dev | test |
| --- | --- | --- | --- |
| PMB2.2.0-g | 4597 (4585) | 682 | 650 |
| PMB2.2.0-s | 67965 (63960) | - | - |
| PMB3-g | 6620 (6618) | 885 | 898 |
| PMB3-s | 97598 (94776) | - | - |
| PMB3-it | 2772 (2743)* | 515 | 547 |
| PMB3-de | 5250 (5019)* | 417 | 403 |
| PMB-nl | 1301 (1238)* | 529 | 483 |
Table 5: Data statistics for the PMB v.2.2.0 and 3.0 ($g$ - gold; $s$ - silver). Numbers in parentheses are the training instances for which we were able to extract a derivation tree. \*: training instances for languages other than English are silver, whereas dev and test are gold.

# C DAG-grammar extraction

Our grammar extraction consists of three steps:

Preprocess the DRS. First, we treat all constants as lexical elements and bind them to a variable $c$. For instance, in Figure 1 we bind 'speaker' to a variable $c_{1}$ and change the relations $\mathrm{AGENT}(e_1, \text{'speaker'})$ and $\mathrm{AGENT}(e_2, \text{'speaker'})$ into $\mathrm{AGENT}(e_1, c_1)$ and $\mathrm{AGENT}(e_2, c_1)$, respectively. Second, we deal with multiple lexical elements that map to the same variable (e.g. $\mathrm{cat}(x_1) \wedge \mathrm{entity}(x_1)$, where the second predicate specifies the 'nature' of the first) by renaming the second variable as $i$ and creating a dummy relation $\mathrm{Of}$ that maps from the first to the second. Finally, we get rid of relations that generate cycles. We found 25 cycles in the PMB, all related to the same phenomenon, where the relations 'Role' and 'Of' have inverted source and target (e.g. person(x1) - Role - mother(x4), mother(x4) - Of - person(x1)). We remove cyclicity by merging the two relations into one edge label. All these changes are reverted before evaluation.

Convert the DRS into a DAG. We convert all main boxes, lexical predicates and constants (now bound to a variable) to nodes, whereas binary relations between predicates and boxes are treated as edges. For each box, we identify a root variable (if any) and attach it as a child to the box-node with an edge :DRS. A root variable is defined as a variable belonging to a box that is *not* at the receiving end of any binary predicate; in Figure 1, these are $e_1$ and $e_2$ for $b_2$ and $b_3$ respectively.
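The box-to-node conversion just described can be sketched in a few lines. This is an illustrative reconstruction under assumed data structures (`box_vars`, `relations`), not the authors' code:

```python
# Illustrative sketch of the DRS-to-DAG conversion described above:
# binary relations become labelled edges, and each box's root variable
# (a member of the box that is not the target of any binary relation)
# is attached to the box-node via a :DRS edge.

def drs_to_dag(box_vars, relations):
    """box_vars: {box: [variables it contains]}
    relations: [(source, label, target)] binary predicates between variables."""
    targets = {t for _, _, t in relations}
    edges = [(s, label, t) for s, label, t in relations]
    for box, variables in box_vars.items():
        # Root variable: a member of the box that no binary predicate points to.
        for root in (v for v in variables if v not in targets):
            edges.append((box, ":DRS", root))
    return edges

# Toy example loosely following Figure 1: e1 is the root of box b2,
# since only c1 and x1 appear as targets of binary predicates.
edges = drs_to_dag(
    {"b2": ["e1", "x1"]},
    [("e1", "AGENT", "c1"), ("e1", "THEME", "x1")],
)
```

Here `x1` is excluded as a root because it is the target of `THEME`, so only `e1` is linked to `b2` via `:DRS`.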
We then follow the binary relations to expand the graph. In doing so, we also incorporate presuppositional boxes in the graph (i.e. $b_4$ in Figure 1). Each of these boxes contains predicates that are presupposed in context (usually definite descriptions like 'the door'). To link presupposed boxes to the main boxes (i.e. to get a fully connected DAG), we assign a (boolean) presupposition feature to the root variable of the presupposed box (this feature is marked with the superscript $p$ in Figure 2). Any descendant predicates of this root variable are considered part of the presupposed DRS. During post-processing, when we need to reconstruct the DRS out of a DAG, we rebuild the presupposed box around variables for which presupposition is predicted as 'True', and their descendants.

Note that Basile and Bos (2013) proposed a similar conversion to generate Discourse Representation Graphs (DRG), exemplified in Figure 8 using our working example. We argue that our representation is more compact in that: 1) we ignore 'in' edges, by which each variable is explicitly marked as part of a box by means of a dedicated edge. This is possible since each box (the square nodes) has a main predicate and all its descendants belong to the box; 2) we treat binary predicates (e.g. AGENT) as edge labels and not nodes; 3) we remove presupposition boxes (in Figure 8, the subgraph rooted in a P-labelled edge) and assign a (boolean) presupposition variable to the presupposed predicates.

Convert the DAGs into derivation trees. DAGs are converted into derivation trees in two passes following the algorithm in Björklund et al. (2016), which we summarize here; the reader is referred to the original paper for further details. The algorithm consists of two steps: first, for each node $n$ we traverse the graph post-order and store information on the reentrant nodes in the subgraph rooted at $n$.
To be more precise, each outgoing edge $e_i$ from $n$ defines a subgraph $s_i$ along which we extract a list of all the reentrant nodes we encounter. This list also includes the node itself, if reentrant.

We then traverse the tree depth-first to collect the grammar fragments and build the derivation tree. Each node contains information on its variable (and type), lexical predicate and features, as well as a list of the labels on the outgoing edges that we plug into the fragments. In order to add variable

![](images/bd0606013c684fef92b56a6e5f9fababeaf53f23813374ea5faffc6d4e82c20d.jpg)
Figure 7: Overview of our architecture, following the description in § 3. Our encoder (on the left) computes multilingual word embeddings using MBERT, which then feed into a 2-layer BiLSTM. At time step $t$, a 2-layer decoder LSTM (on the right) reconstructs a graph $G$ by predicting fragment $f_{t}$ and terminal label $l_{t}$. Additionally, parsing on the PMB requires predicting, for each label $l_{t}$, a sense tag $s_{t}$ and presupposition information $p_{t}$ (a boolean flag). To predict $f_{t}$ we use the hidden state of the decoder's first layer (in blue) along with context vector $c_{t}^{f}$ and information about the parent fragment $u_{t}$ (yellow edges). All other predictions are made using the hidden state of the decoder's second layer (in red) along with a separate context vector $c_{t}^{l}$. Both context vectors are computed using soft attention over the input representations (top left). Predicted fragments are used to substitute the leftmost non-terminal in the partial graph $G$ (in pink), as shown at the top for $G_{2}\ldots G_{5}$. For $G_{1}$ the first predicted fragment initializes the graph (this corresponds to substituting the start symbol $S$). The edge labels in the fragments above are replaced with placeholders $e_{1}\ldots e_{|e|}$ to display how edge factorization works.
We assume here, for brevity, that $G_{5}$ is our final output graph and show the prediction of two edges that substitute in place of the placeholders (box at the bottom). For edge prediction, we use a bundle of features collected during decoding, namely the parent and children fragment embedding $f_{t}$ , the second layer hidden state (in red) and the context vector $c^{l}$ at time $t$ . + +
| | en (P/R/F) | de (P/R/F) | nl (P/R/F) | it (P/R/F) |
| --- | --- | --- | --- | --- |
| monolingual | 81.6 / 78.4 / 80 | 64.5 / 64 / 64.2 | 62.6 / 59.2 / 60.9 | 72.4 / 70.6 / 71.5 |
| cross-lingual | - | 72.8 / 73.6 / 73.2 | 73.4 / 74.9 / 74.1 | 74.2 / 76.2 / 75.2 |
| bilingual | - | 72 / 71.5 / 71.8 | 76.7 / 75.3 / 76 | 76.8 / 78.6 / 77.7 |
| polyglot | 81 / 78.8 / 79.8 | 72.2 / 72.9 / 72.5 | 74.3 / 73.8 / 74.1 | 78.2 / 77.5 / 77.9 |
![](images/721e9445bf77264457befc020b0f238307b77617a9ca08546321355dcad2fdb6.jpg)
Figure 8: The DRS of Figure 2 expressed as a Discourse Representation Graph (DRG).

references, if any, we need to know whether there are any reentrant nodes that are shared across the subgraphs $s_i \ldots s_{|e|}$. If so, these become variable references. If the node $n$ itself is reentrant, we flag it with * so that we know that its variable name can substitute a variable reference.

# D Implementation Details

We use the pre-trained uncased multilingual BERT base model from Wolf et al. (2019). All models trained on English data, monolingual or multilingual, share the same hyper-parameter settings. Languages other than English in PMB v3.0 have less training data, especially Dutch and Italian. Hence, we reduce the model capacity across the board and increase dropout to avoid over-fitting. Hyper-parameter settings are shown in Table 7.

We found fine-tuning the BERT model necessary to achieve good performance. Following Sun et al. (2019) and Howard and Ruder (2018), we experimented with different fine-tuning strategies, all applied after model performance plateaued:

1. setting a constant learning rate for BERT layers
2. gradually unfreezing BERT layer by layer with a decaying learning rate
3. slanted-triangular learning rate scheduling following Howard and Ruder (2018).

We concluded that strategy 1 works best for our task, with a fine-tuning learning rate of 2e-5 for English and a smaller learning rate of 1e-5 for other languages.

Table 6: Results for the multilingual experiments on PMB v.3.0 (test set). Monolingual results (top half) are compared with different combinations of multilingual training data (bottom half).
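Of the three strategies, the slanted-triangular schedule (strategy 3) has the most explicit closed form, from Howard and Ruder (2018). A minimal sketch follows; the hyper-parameter values (`cut_frac`, `ratio`) are the ULMFiT defaults, not necessarily the settings used in these experiments:

```python
from math import floor

# Slanted-triangular learning-rate schedule (Howard and Ruder, 2018):
# a short linear warm-up to lr_max, then a long linear decay.
# cut_frac and ratio here are illustrative defaults from ULMFiT.
def slanted_triangular_lr(t, total_steps, lr_max=2e-5, cut_frac=0.1, ratio=32):
    cut = floor(total_steps * cut_frac)            # step at which lr peaks
    if t < cut:
        p = t / cut                                # increase phase
    else:
        p = 1 - (t - cut) / (cut * (1 / cut_frac - 1))  # decay phase
    return lr_max * (1 + p * (ratio - 1)) / ratio

# lr starts at lr_max/ratio, peaks at lr_max after 10% of training,
# then decays linearly back toward lr_max/ratio.
```

Strategy 1 (a constant fine-tuning learning rate for the BERT layers, separate from the base learning rate) needs no schedule at all, which is consistent with it being the simplest of the three options.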
| Parameter | en | de/nl/it |
| --- | --- | --- |
| **Model parameters** | | |
| BERT | 768 | 768 |
| Num of encoder layers | 2 | 2 |
| Encoder | 2@512 | 1@512 |
| Fragment/Relation/Label | 100 | 75 |
| Edge prediction layer | 100 | 75 |
| Decoder | 1024 | 512 |
| **Optimization parameters** | | |
| Optimizer | ADAM | ADAM |
| Learning rate | 0.001 | 0.001 |
| Weight decay | 1e-4 | 1e-4 |
| Gradient clipping | 5 | 5 |
| Label smoothing ε | 0.1 | 0.1 |
| BERT fine-tune LR | 2e-5 | 1e-5 |
| Dropout | 0.33 | 0.5 |
+ +Table 7: Hyper-parameter Settings + +# E Multilingual experiments - full results + +All results for the multilingual experiments including precision and recall are shown in Table 6. \ No newline at end of file diff --git a/accuratepolyglotsemanticparsingwithdaggrammars/images.zip b/accuratepolyglotsemanticparsingwithdaggrammars/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e1b77e9b2cdb760c7d5938d67d8293137cf8115d --- /dev/null +++ b/accuratepolyglotsemanticparsingwithdaggrammars/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a79b646291ea0fb0e5863268667dd8ed11ff44020d0af5fed3810be6d1011399 +size 518540 diff --git a/accuratepolyglotsemanticparsingwithdaggrammars/layout.json b/accuratepolyglotsemanticparsingwithdaggrammars/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..857ca026153049ca24de3423c57e7e8aa7b590a0 --- /dev/null +++ b/accuratepolyglotsemanticparsingwithdaggrammars/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2985c86999a77801c1f9e62032a87a19c28b9f4ce5636da6f6b6e886d46ef2cc +size 449131 diff --git a/acompareaggregatetransformerforunderstandingdocumentgroundeddialogue/5cc63071-3d79-4af8-bd73-f6002698aecf_content_list.json b/acompareaggregatetransformerforunderstandingdocumentgroundeddialogue/5cc63071-3d79-4af8-bd73-f6002698aecf_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..fd6f12a335b7a12ba33e6e15a12ff191d247c548 --- /dev/null +++ b/acompareaggregatetransformerforunderstandingdocumentgroundeddialogue/5cc63071-3d79-4af8-bd73-f6002698aecf_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3ac68d515b1bd34977c702886be8c5e3111bd04d190d598b3479b7792472a4e8 +size 71117 diff --git a/acompareaggregatetransformerforunderstandingdocumentgroundeddialogue/5cc63071-3d79-4af8-bd73-f6002698aecf_model.json 
b/acompareaggregatetransformerforunderstandingdocumentgroundeddialogue/5cc63071-3d79-4af8-bd73-f6002698aecf_model.json new file mode 100644 index 0000000000000000000000000000000000000000..c38ddd4e25ccd60e0a5337424f2db6ee402c6ec0 --- /dev/null +++ b/acompareaggregatetransformerforunderstandingdocumentgroundeddialogue/5cc63071-3d79-4af8-bd73-f6002698aecf_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:782dc5dc6cf9f9edeebc79e4aaf2bab83e0512f9ec9a5f61e22d87ee93d8c70e +size 86018 diff --git a/acompareaggregatetransformerforunderstandingdocumentgroundeddialogue/5cc63071-3d79-4af8-bd73-f6002698aecf_origin.pdf b/acompareaggregatetransformerforunderstandingdocumentgroundeddialogue/5cc63071-3d79-4af8-bd73-f6002698aecf_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d6c6c1a3ba6c0021143cc65921b415305f4e843b --- /dev/null +++ b/acompareaggregatetransformerforunderstandingdocumentgroundeddialogue/5cc63071-3d79-4af8-bd73-f6002698aecf_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bd802c6edc4edff151eaf8094638df77ed2279bbf6de3a4a0010a6565b48cb71 +size 524574 diff --git a/acompareaggregatetransformerforunderstandingdocumentgroundeddialogue/full.md b/acompareaggregatetransformerforunderstandingdocumentgroundeddialogue/full.md new file mode 100644 index 0000000000000000000000000000000000000000..67a389bb0cba8a2407b664e75abaee31580e6700 --- /dev/null +++ b/acompareaggregatetransformerforunderstandingdocumentgroundeddialogue/full.md @@ -0,0 +1,308 @@ +# A Compare Aggregate Transformer for Understanding Document-grounded Dialogue + +Longxuan Ma, Weinan Zhang, Runxin Sun, Ting Liu + +Research Center for Social Computing and Information Retrieval + +Harbin Institute of Technology, Harbin, Heilongjiang, China + +{lxma,wnzhang,rxsun,tliu}@ir.hit.edu.cn + +# Abstract + +Unstructured documents serving as external knowledge of the dialogues help to generate more informative responses. 
Previous research has focused on knowledge selection (KS) from the document given the dialogue. However, dialogue history that is not related to the current dialogue may introduce noise into the KS process. In this paper, we propose a Compare Aggregate Transformer (CAT) to jointly denoise the dialogue context and aggregate the document information for response generation. We design two different comparison mechanisms to reduce noise (before and during decoding). In addition, we propose two metrics for evaluating document utilization efficiency based on word overlap. Experimental results on the CMU.DoG dataset show that the proposed CAT model outperforms the state-of-the-art approach and strong baselines.

# 1 Introduction

Dialogue systems (DS) attract great attention from industry and academia because of their wide application prospects. Sequence-to-sequence models (Seq2Seq) (Sutskever et al., 2014; Serban et al., 2016) have been verified to be an effective framework for the DS task. However, one problem of Seq2Seq models is that they tend to generate generic responses that provide deficient information (Li et al., 2016; Ghazvininejad et al., 2018). Previous researchers proposed different methods to alleviate this issue. One way is to improve models' ability to extract information from conversations. Li et al. (2016) introduced Maximum Mutual Information (MMI) as the objective function for generating diverse responses. Serban et al. (2017) proposed a latent variable model to capture posterior information of the golden response. Zhao et al. (2017) used conditional variational autoencoders to learn discourse-level diversity for neural dialogue models.

# Document:

Movie Name: The Shape of Water. Year: 2017. Director: Guillermo del Toro. Genre: Fantasy, Drama. Cast: Sally Hawkins as Elisa Esposito, a mute cleaner who works at a secret government laboratory. ... Critical Response: one of del Toro's most stunningly successful works ...

# Dialogue:

S1: I thought The Shape of Water was one of Del Toro's best works. What about you?
S2: Yes, his style really extended the story.
S1: I agree. He has a way with fantasy elements that really helped this story be truly beautiful. It has a very high rating on rotten tomatoes, too.
S2: Sally Hawkins acting was phenomenally expressive. Didn't feel her character was mentally handicapped.
S1: The characterization of her as such was definitely off the mark.

Figure 1: One DGD example in the CMU.DoG dataset. S1/S2 means Speaker-1/Speaker-2, respectively.

The other way is to introduce external knowledge, either unstructured knowledge texts (Ghazvininejad et al., 2018; Ye et al., 2019; Dinan et al., 2019) or structured knowledge triples (Liu et al., 2018; Young et al., 2018; Zhou et al., 2018a), to help open-domain conversation generation by producing responses conditioned on selected knowledge.

The Document-grounded Dialogue (DGD) (Zhou et al., 2018b; Zhao et al., 2019; Li et al., 2019) is a new way to use external knowledge. It establishes a conversation mode in which relevant information can be obtained from the given document. One example of DGD is presented in Figure 1. Two interlocutors talk about the given document and freely reference text segments during the conversation.

To address this task, two main challenges need to be considered in a DGD model: 1) determining which of the historical conversations are related to the current conversation, and 2) using the current conversation and the related conversation history to select proper document information and to generate an informative response. Previous work (Arora et al., 2019; Zhao et al., 2019; Qin et al., 2019; Tian et al., 2020; Ren et al., 2019) generally focused on selecting knowledge with all the conversations. However, the relationship between historical conversations and the current conversation has not been studied enough.
For example, in Figure 1, the italicized utterance from S2, "Yes, his style really extended the story.", is related to the dialogue history, while the boldfaced utterance from S2, "Sally Hawkins acting was phenomenally expressive. Didn't feel her character was mentally handicapped.", has no direct relationship with the historical utterances. When the latter serves as the last utterance, the dialogue history is not conducive to generating a response.

In this paper, we propose a novel Transformer-based (Vaswani et al., 2017) model for understanding the dialogue and generating informative responses in the DGD, named Compare Aggregate Transformer (CAT). Previous research (Sankar et al., 2019) has shown that the last utterance is the most important guidance for response generation in the multi-turn setting. Hence we divide the dialogue into the last utterance and the dialogue history, then measure the effectiveness of the dialogue history. If the last utterance and the dialogue history are related, we need to consider all the conversations to filter the document information. Otherwise, the dialogue history amounts to noise, and its impact should be conditionally eliminated. For this purpose, on one hand, the CAT filters the document information with the last utterance; on the other hand, the CAT uses the last utterance to guide the dialogue history and employs the guiding result to filter the given document. We judge the importance of the dialogue history by comparing the two parts, then aggregate the filtered document information to generate the response. Experimental results show that our model can generate more relevant and informative responses than competitive baselines. When the dialogue history is less relevant to the last utterance, our model is verified to be even more effective.
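The gating idea above can be illustrated with a deliberately simplified sketch. This is NOT the CAT model (which learns the relevance gate with attention, see §3.3); here a crude token-overlap score stands in for the learned relevance, just to show how a low history relevance suppresses the history-selected document view:

```python
# Conceptual sketch only: a hand-crafted Jaccard overlap replaces CAT's
# learned relevance gate, and plain lists replace document feature tensors.

def relevance(last_utt, history):
    """Crude stand-in for the learned relevance between the last
    utterance and the dialogue history (Jaccard word overlap)."""
    a, b = set(last_utt.lower().split()), set(history.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def aggregate(doc_feats_by_history, doc_feats_by_last, last_utt, history):
    """Down-weight the history-selected document features when the history
    is unrelated to the last utterance, then combine the two views."""
    g = relevance(last_utt, history)
    return [g * h + l for h, l in zip(doc_feats_by_history, doc_feats_by_last)]
```

When the history is unrelated, `g` approaches 0 and only the last-utterance view of the document survives, mirroring the behavior the paper argues for.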
The main contributions of this paper are:

(1) We propose a compare aggregate method to determine the relationship between the historical dialogues and the last utterance. Experiments show that our model outperforms strong baselines on the CMU.DoG dataset.
(2) We propose two new metrics to evaluate document knowledge utilization in the DGD. They are both based on N-gram overlap among the generated response, the dialogue, and the document.

# 2 Related Work

The DGD maintains a dialogue pattern in which external knowledge can be obtained from the given document. Most recently, several DGD datasets (Zhou et al., 2018b; Moghe et al., 2018; Qin et al., 2019; Gopalakrishnan et al., 2019) have been released for exploiting unstructured document information in conversations.

Models addressing the DGD task can be classified into two categories based on their encoding process for dialogues: one is parallel modeling and the other is incremental modeling. For the first category, Moghe et al. (2018) used a generation-based model that learns to copy information from the background knowledge and a span prediction model that predicts the appropriate response span in the background knowledge. Liu et al. (2019) claimed to be the first to unify knowledge triples and long texts as a graph, then employed reinforcement learning for flexible multi-hop knowledge graph reasoning. To improve the use of background knowledge, Zhang et al. (2019) first adopted the encoder state of the utterance history context as a query to select the most relevant knowledge, then employed a modified version of BiDAF (Seo et al., 2017) to point out the most relevant token positions of the background sequence. Meng et al. (2019) used a decoding switcher to predict the probabilities of executing reference decoding or generation decoding.
Some other researchers (Zhao et al., 2019; Arora et al., 2019; Qin et al., 2019; Meng et al., 2019; Ren et al., 2019) also followed this parallel encoding method. For the second category, Kim et al. (2020) proposed a sequential latent knowledge selection model for knowledge-grounded dialogue. Li et al. (2019) designed an incremental transformer to encode multi-turn utterances along with knowledge in the related document. Meanwhile, a two-pass deliberation decoder (Xia et al., 2017) was used for response generation. However, the relationship between the dialogue history and the last utterance is not well studied. In this paper, we propose a compare aggregate method to investigate this problem. It should be pointed out that when the target response changes the topic, the task is to detect whether the topic has ended and to initiate a new topic (Akasaki and Kaji, 2019). We do not study the conversation initiation problem in this paper, although we may take it as future work.

![](images/f705c094ba4b7002d512abb5a84388ace9d234093b7d112c43fbfff93b8d1682.jpg)
Figure 2: The architecture of the CAT model. "utter" is short for utterance. "doc" is short for document.

![](images/f8e466a46a15a8275eb1935e18c64d7e867f40420f03fc35f7c0aadaa61661a9.jpg)

![](images/fa00f625c90717e95881ca3df8f1f77fdd33a1f93d135b20d360ebd228d96d5c.jpg)

# 3 The Proposed CAT Model

# 3.1 Problem Statement

The inputs of the CAT model are the given document $\mathbf{D} = (D_{1}, D_{2}, \dots, D_{d})$ with $d$ words, the dialogue history $\mathbf{H} = (H_{1}, H_{2}, \dots, H_{h})$ with $h$ words, and the last utterance $\mathbf{L} = (L_{1}, L_{2}, \dots, L_{l})$ with $l$ words.
The task is to generate the response $\mathbf{R} = (R_{1}, R_{2}, \dots, R_{r})$ with $r$ tokens with probability:

$$
P(\mathbf{R} | \mathbf{H}, \mathbf{L}, \mathbf{D}; \Theta) = \prod_{i=1}^{r} P(R_{i} | \mathbf{H}, \mathbf{L}, \mathbf{D}, \mathbf{R}_{<i}; \Theta), \tag{1}
$$

where $\mathbf{R}_{< i} = (R_1, R_2, \dots, R_{i-1})$ and $\Theta$ denotes the model parameters.

# 3.2 Encoder

The structure of the CAT model is shown in Figure 2. The hidden dimension of the CAT model is $\widehat{h}$. We use the Transformer structure (Vaswani et al., 2017). The self-attention is calculated as follows:

$$
\operatorname{Attention}(\mathbf{Q}, \mathbf{K}, \mathbf{V}) = \operatorname{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d_{k}}}\right)\mathbf{V}, \tag{2}
$$

where $\mathbf{Q}, \mathbf{K}$, and $\mathbf{V}$ are the query, the key, and the value, respectively; $d_{k}$ is the dimension of $\mathbf{Q}$ and $\mathbf{K}$. The encoder and the decoder stack $N$ ($N = 3$ in our work) identical layers of multi-head attention (MAtt):

$$
\operatorname{MAtt}(\mathbf{Q}, \mathbf{K}, \mathbf{V}) = [\mathbf{A}_{1}, \dots, \mathbf{A}_{n}]\mathbf{W}^{O}, \tag{3}
$$

$$
\mathbf{A}_{i} = \operatorname{Attention}\left(\mathbf{Q}\mathbf{W}_{i}^{Q}, \mathbf{K}\mathbf{W}_{i}^{K}, \mathbf{V}\mathbf{W}_{i}^{V}\right), \tag{4}
$$

where $\mathbf{W}_i^Q, \mathbf{W}_i^K, \mathbf{W}_i^V (i = 1, \dots, n)$ and $\mathbf{W}^O$ are learnable parameters.

The encoder of CAT consists of two branches, as shown in Figure 2 (a). The left branch learns the information selected by the dialogue history $\mathbf{H}$; the right branch learns the information chosen by the last utterance $\mathbf{L}$. After the self-attention process, we get $\mathbf{H}_s = \mathrm{MAtt}(\mathbf{H},\mathbf{H},\mathbf{H})$ and $\mathbf{L}_s = \mathrm{MAtt}(\mathbf{L},\mathbf{L},\mathbf{L})$.
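Equations (2)-(4) can be sketched in plain Python; this is a single-head, toy-dimension illustration of scaled dot-product attention, not the model's actual multi-head implementation (which adds the learned projections $\mathbf{W}_i^Q, \mathbf{W}_i^K, \mathbf{W}_i^V, \mathbf{W}^O$):

```python
import math

# Plain-Python sketch of scaled dot-product attention (Eq. 2), one head,
# with sequences represented as nested lists of floats.

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Q: [n_q x d_k], K: [n_k x d_k], V: [n_k x d_v]."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # Scaled dot-product scores of this query against every key.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        w = softmax(scores)
        # Weighted sum of the value rows.
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out

# Self-attention: the same sequence serves as query, key and value,
# as in H_s = MAtt(H, H, H) above (single head here).
H = [[1.0, 0.0], [0.0, 1.0]]
H_s = attention(H, H, H)
```

Each output row is a convex combination of the value rows, with each position attending most strongly to itself in this toy self-attention example.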
Then we employ $\mathbf{L}_s$ to guide $\mathbf{H}$: $\mathbf{H}^1 = \mathrm{MAtt}(\mathbf{L}_s,\mathbf{H},\mathbf{H})$, where $\mathbf{H}^1$ is the hidden state at the first layer. We then adopt $\mathbf{H}^1$ to select knowledge from the document $\mathbf{D}$: $\mathbf{D}^1 = \mathrm{FF}(\mathrm{MAtt}(\mathbf{H}^1,\mathbf{D},\mathbf{D}))$, where FF is the feed-forward process. In the second layer, $\mathbf{D}^1$ is the input: $\mathbf{D}_s^1 = \mathrm{MAtt}(\mathbf{D}^1,\mathbf{D}^1,\mathbf{D}^1)$, $\mathbf{H}^2 = \mathrm{MAtt}(\mathbf{D}_s^1,\mathbf{H},\mathbf{H})$, $\mathbf{D}^2 = \mathrm{FF}(\mathrm{MAtt}(\mathbf{H}^2,\mathbf{D},\mathbf{D}))$. After $N$ layers, we obtain the information $\mathbf{D}^n$ selected by $\mathbf{H}$. In the right branch, we use $\mathbf{L}_s$ to filter $\mathbf{D}$; $\widetilde{\mathbf{D}}^n$ is the information selected by $\mathbf{L}$.

# 3.3 Comparison Aggregate

As demonstrated by Sankar et al. (2019), the last utterance plays a fundamental role in response generation. We need to preserve the document information filtered by $\mathbf{L}$, and determine how much of the information selected by $\mathbf{H}$ is needed. We propose two different compare aggregate methods: one is concatenation before decoding and the other is attended comparison in the decoder.

# 3.3.1 Concatenation

We apply average pooling to $\mathbf{H}_s$ and $\mathbf{L}_s$ to get their vector representations $\mathbf{H}_{sa}$ and $\mathbf{L}_{sa} \in \mathbb{R}^{\widehat{h} \times 1}$, respectively.
The concatenation method calculates a relevance score $\alpha$ to determine the importance of $\mathbf{D}^n$ as follows:

$$
\alpha = \tanh\left(\mathbf{H}_{sa}\mathbf{W}^{H} + \mathbf{L}_{sa}\mathbf{W}^{L}\right), \tag{5}
$$

$$
\mathbf{D}_{\text{final}} = \left[\operatorname{sigmoid}\left(\mathbf{W}^{\alpha}\alpha\right) * \mathbf{D}^{n}; \widetilde{\mathbf{D}}^{n}\right], \tag{6}
$$

where $\mathbf{W}^H$, $\mathbf{W}^L \in \mathbb{R}^{\widehat{h} \times \widehat{h}}$ and $\mathbf{W}^\alpha \in \mathbb{R}^{1 \times \widehat{h}}$ are learnable parameters; $[\mathbf{X};\mathbf{Y}]$ is the concatenation of $\mathbf{X}$ and $\mathbf{Y}$ in the sentence dimension; * is element-wise multiplication. Note that since $\mathbf{D}^n$ is guided by $\mathbf{H}$, the concatenation method performs a second-level comparison between $\mathbf{H}$ and $\mathbf{L}$ and then transfers the topic-aware $\mathbf{D}_{final}$ to the two-pass Deliberation Decoder (DD) (Xia et al., 2017). The structure of the DD is shown in Figure 2 (b). The first pass takes $\mathbf{L}$ and $\mathbf{D}_{final}$ as inputs and learns to generate a contextually coherent response $\mathbf{R}^1$. The second pass takes $\mathbf{R}^1$ and the document $\mathbf{D}$ as inputs and learns to inject document knowledge. The DD aggregates document, conversation, and topic information to generate the final response $\mathbf{R}^2$. The loss is computed from both the first and the second passes:

$$
L = -\sum_{m=1}^{M}\sum_{i=1}^{r}\left(\log P\left(R_{i}^{1}\right) + \log P\left(R_{i}^{2}\right)\right), \tag{7}
$$

where $M$ is the total number of training examples; $R_{i}^{1}$ and $R_{i}^{2}$ are the $i$-th words generated by the first and second decoder pass, respectively.

# 3.3.2 Attended Comparison

We employ an Enhanced Decoder (Zheng and Zhou, 2019) to perform the attended comparison. The structure of our Enhanced Decoder is illustrated in Figure 2 (c).
It accepts $\mathbf{D}^n$ , $\widetilde{\mathbf{D}}^n$ , and the response $\mathbf{R}$ as inputs, and applies a different way of comparing and aggregating. The merge attention computes weights across all inputs: + +$$
\mathbf{P} = \left[\mathbf{R}; \mathbf{D}^{n}; \widetilde{\mathbf{D}}^{n}\right]\mathbf{W}_{P}, \tag{8}
$$ + +$$
\mathbf{V}_{\text{merge}} = P_{R}\mathbf{R} + P_{D}\mathbf{D}^{n} + P_{\widetilde{D}}\widetilde{\mathbf{D}}^{n}, \tag{9}
$$ + +where $\mathbf{W}_{P}$ is a learnable parameter matrix. The last dimension of $\mathbf{P}$ is 3. $P_{R}$ , $P_{D}$ and $P_{\widetilde{D}}$ are the softmax results of $\mathbf{P}$ . $\mathbf{V}_{\text{merge}}$ and $\mathbf{L}$ are used for the next utterance attention as shown in Figure 2 (c). The output of the Enhanced Decoder is connected to the second layer of the DD, and we define this new structure as the Enhanced Deliberation Decoder (EDD). The loss is the same as Eq. (7). + +# 4 Experiments + +# 4.1 Dataset + +We evaluate our model on the CMU.DoG (Zhou et al., 2018b) dataset. It contains 4112 dialogs based on 120 documents. Each document contains four sections, such as the movie introduction and scenes. A related section is given for every few consecutive utterances; however, the conversations are not constrained to the given section. In our setting, we use the full document (with all four sections) as external knowledge. The average length of the documents is around 800 words. We concatenate consecutive utterances of the same person into one utterance. For training, we remove the first two or three rounds of greeting sentences. Each sample contains one document, two or more historical utterances, one last utterance, and one golden response. For testing, we use two different versions of the test set. The first follows the same processing as the training data; we name it the Reduced version.
The second is constructed by comparing the document sections on which the conversation is based: we preserve only the examples in which the dialogue history and the last utterance are grounded in different document sections. For example, the dialogue history is based on section 2, while the last utterance and the response are based on section 3. We name it the Sampled version; it is used to test our model's ability to comprehend topic transfer in conversations. The data statistics are shown in Table 1. Please refer to Zhou et al. (2018b) for more details. It is worth noting that the Sampled version does not represent the proportion of all conversational topic transfers, but it demonstrates this problem better than the Reduced version. We also tested our method on the Holl-E (Moghe et al., 2018) dataset. Since the processing of that dataset and the experimental conclusions obtained are similar to those for CMU.DoG, we do not present them in this paper. + +# 4.2 Baselines + +We evaluated several competitive baselines. + +
| Dataset | U.Num (train / dev / test) | W/Utter |
| --- | --- | --- |
| Original | 72922 / 3626 / 11577 | 18.6 |
| Reduced | 66332 / 3269 / 10502 | 19.7 |
| Sampled | 66332 / 3269 / 1317 | 19.6 |
+ +Table 1: Statistics of the CMU.DoG dataset. "U.Num" denotes the number of utterances, and "W/Utter" denotes the average number of words per utterance. + +# 4.2.1 RNN-based models + +VHRED: A Hierarchical Latent Variable Encoder-Decoder Model (Serban et al., 2017), which introduces a global (semantic-level) latent variable $Z$ to address the problem that HRED (Serban et al., 2016) has difficulty generating meaningful, high-quality replies. $Z$ is calculated from the encoder RNN outputs and the context RNN outputs. The latent variable $Z$ carries some high-level semantic information, which encourages the model to extract abstract semantic concepts. Please refer to Serban et al. (2017) for more details. We use $Z$ to capture the topic transfer in conversations and test three different settings. In the first setting, we do not employ document knowledge and use only the dialogue as input to generate the response; we denote it VHRED(-k). In the second, we use the same encoder RNN with shared parameters to learn representations of the document and the utterance, then concatenate their final hidden states as the input of the context RNN; it is denoted VHRED(c). In the third, we use word-level dot-attention (Luong et al., 2015) to obtain a document-aware utterance representation and use it as the input of the context RNN; it is termed VHRED(a). + +# 4.2.2 Transformer-based models + +T-DD/T-EDD: Both use the Transformer as the encoder. The inputs are the concatenation of the dialogue and the document. These two models encode the dialogue in parallel without detecting topic transfer. The T-DD uses a Deliberation Decoder (DD) as the decoder; the T-EDD uses an Enhanced Deliberation Decoder (EDD) as the decoder. + +ITDD (Li et al., 2019): It uses an Incremental Transformer Encoder (ITE) and a two-pass Deliberation Decoder (DD). The Incremental Transformer uses multi-head attention to incorporate document sections and context into each utterance's encoding process.
ITDD incrementally models dialogues without detecting topic transitions. + +# 4.3 Evaluation Metrics + +Automatic Evaluation: We employ perplexity (PPL) (Bengio et al., 2000), BLEU (Papineni et al., 2002), and ROUGE (Lin, 2004). The PPL of the gold response is measured; lower perplexity indicates better performance. BLEU measures the n-gram overlap between a generated response and a gold response. Since there is only one reference for each response, BLEU scores are extremely low. ROUGE measures the n-gram overlap based on recall. Since the conversations are constrained by the background material, ROUGE is reliable. + +We also introduce two metrics to automatically evaluate Knowledge Utilization (KU); both are based on $N$ -gram overlaps. We denote a document, its conversation, and the generated response in the test set as $(\mathbf{D},\mathbf{C},\mathbf{R})$ . The $N$ -gram sets of $\mathbf{D}$ , $\mathbf{C}$ , and $\mathbf{R}$ are termed $\mathbf{G}_d^N,\mathbf{G}_c^N$ and $\mathbf{G}_r^N$ , respectively. The set of overlapping $N$ -grams of $\mathbf{G}_d^N$ and $\mathbf{G}_r^N$ is recorded as $\mathbf{G}_{dr}^N$ . Tuples which are in $\mathbf{G}_{dr}^N$ but not in $\mathbf{G}_c^N$ are denoted $\mathbf{G}_{dr - c}^N$ . Then $\mathbf{KU} = \text{len}(\mathbf{G}_{dr - c}^N) / \text{len}(\mathbf{G}_{dr}^N)$ reflects how many $N$ -grams of the document are used in the generated replies, where $\text{len}(\mathbf{G})$ is the number of tuples in $\mathbf{G}$ . The larger the KU, the more $N$ -grams of the document are utilized. Since low-frequency tuples may be more representative of text features, we define the reciprocal of the frequency of each tuple $k$ in $\mathbf{G}$ as $\mathbf{R}_k^G$ , which represents the importance of a tuple.
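The KU metric reduces to simple set operations over $N$-gram tuples; here is a small illustrative sketch (whitespace tokenization and the example texts are ours, not the paper's):

```python
def ngrams(tokens, n):
    # Set of n-gram tuples of a token sequence.
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ku(doc, conv, resp, n=2):
    # G_dr: n-grams shared by document D and response R;
    # G_{dr-c}: shared n-grams that do NOT already appear in the conversation C.
    g_dr = ngrams(doc, n) & ngrams(resp, n)
    g_dr_c = g_dr - ngrams(conv, n)
    return len(g_dr_c) / len(g_dr) if g_dr else 0.0

doc = "the shape of water is a 2017 american fantasy film".split()
conv = "is it a fantasy film".split()
resp = "it is a 2017 american fantasy film".split()
print(ku(doc, conv, resp, n=2))  # 0.8
```

QKU additionally weights each tuple in $\mathbf{G}_{dr-c}$ by its inverse frequency, so it requires frequency counts over the whole test set rather than a single triple.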
Then the Quality of Knowledge Utilization (QKU) is calculated as: + +$$
\mathbf{QKU} = \sum_{(\mathbf{D},\mathbf{C},\mathbf{R})} \frac{\sum_{k}\mathbf{R}_{k}^{G_{r}}}{\sum_{k}\mathbf{R}_{k}^{G_{d}}}, \quad k \in \mathbf{G}_{dr-c}. \tag{10}
$$ + +If a tuple $k$ is more important in the response ( $\mathbf{R}_k^{G_r}$ ) but less important in the document ( $\mathbf{R}_k^{G_d}$ ), the QKU becomes larger. Thus, a smaller QKU means a higher quality of the used document knowledge. + +Human Evaluation: We randomly sampled 100 conversations from the Sampled test set and obtained 800 responses from the eight models. Five graduate students serve as judges. They score each response with access to the previous dialogue and the document. We use three metrics: Fluency, Coherence, and Informativeness. Fluency measures whether the response is a human-like utterance. Coherence measures whether the response is coherent with the dialogue context. Informativeness measures whether the response contains relevant and correct information from the document. They are scored from 1 to
| Model | PPL | BLEU (%) | ROUGE-L | KU-2/3 (%) | QKU-2/3 |
| --- | --- | --- | --- | --- | --- |
| VHRED(-k) | 97.3◇ (99.3)* | 0.49* (0.49)* | 7.80* (7.82)* | -/- (-/-) | -/- (-/-) |
| VHRED(c) | 80.2◇ (85.4)* | 0.79* (0.77)* | 8.64* (8.63)* | 12.0/27.0◇ (12.1/27.6)◇ | 3.36/2.82◇ (3.35/2.80)◇ |
| VHRED(a) | 77.2◇ (78.5)* | 0.84* (0.80)* | 8.98* (8.99)* | 13.7/31.7◇ (13.1/31.3)* | 3.23/2.72* (3.23/2.72)* |
| T-DD | 18.2* (20.5)* | 0.90* (0.89)* | 9.23* (9.24)* | 8.0/23.1* (8.0/23.0)* | 2.55/1.94* (2.55/1.95)* |
| T-EDD | 18.2* (20.3)* | 0.91* (0.90)* | 9.35* (9.36)* | 8.3/23.5* (8.1/23.4)* | 2.45/1.91* (2.45/1.92)* |
| ITDD | 16.2* (18.7)* | 1.01* (0.99)* | 10.12◇ (10.10)* | 9.0/24.5* (9.1/24.4)* | 2.18/1.84* (2.15/1.82)* |
| CAT-EDD | 16.0* (18.2)* | 1.14* (1.14)* | 11.10* (11.12)* | 9.5/24.8* (9.7/24.9)* | 2.12/1.77* (2.11/1.76)* |
| CAT-DD | 15.2 (16.1) | 1.22 (1.21) | 11.22 (11.22) | 11.0/26.5 (11.1/26.4) | 2.08/1.64 (2.05/1.62) |
+ +Table 2: Automatic evaluations on the CMU.DoG dataset. $\cdot (\cdot)$ means Reduced (Sampled) test data. We take the CAT-DD as the base model for the significance test; $\diamond$ and $*$ stand for $p < 0.05$ and $p < 0.01$ , respectively. + +5 (1: very bad, 2: bad, 3: acceptable, 4: good, 5: very good). Overall inter-rater agreement measured by Fleiss' Kappa is 0.32 ("fair"). + +# 4.4 Experimental Setup + +We use OpenNMT-py (Klein et al., 2017) as the code framework. For all models, the pre-trained 300-dimensional word embeddings (Mikolov et al., 2013) are shared by the dialogue, the document, and the generated responses, and the hidden size is 300. For the RNN-based models, a 3-layer bidirectional GRU and a 3-layer GRU are applied for the encoder and decoder, respectively. For the Transformer-based models, the number of layers in both the encoder and the decoder is set to 3, the number of heads in multi-head attention is 8, and the filter size is 2048. We use Adam ( $\alpha = 0.001$ , $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ , and $\epsilon = 10^{-8}$ ) (Kingma and Ba, 2015) for optimization. The beam size is set to 5 in the decoder. We truncate the document to 800 words and each dialogue utterance to 40 words. All models are trained on a TITAN X (Pascal) GPU. The average training time per epoch is around 40 minutes for the Transformer-based models and around 20 minutes for the RNN-based models. + +# 5 Analysis + +# 5.1 Experimental Results Study + +Table 2 shows the automatic evaluations for all models on the Reduced (Sampled) dataset. The dialogue history is 2 rounds. We only present ROUGE-L, as ROUGE-1/2 show the same trend as ROUGE-L. From the experiments we can see that the range of KU-2 (8.0-13.7) is smaller than that of KU-3 (23.1-31.7) on the Reduced data, indicating that KU-3 better reflects the amount of knowledge used than KU-2. + +Among the RNN-based models, the VHRED(-k) gets the worst PPL/BLEU/ROUGE, which reveals the importance of injecting document knowledge in the DGD task.
We did not calculate the KU/QKU of the VHRED(-k) since the model does not use document knowledge. The VHRED(a) gets better PPL/BLEU/ROUGE/KU/QKU than the VHRED(c) model, which means that finer-grained extraction of document information benefits response generation more. + +Among the Transformer-based models, the ITDD model gets better PPL/BLEU/ROUGE-L/KU/QKU than the T-DD model, which means the incremental encoding method is stronger than parallel encoding. The CAT-DD and the CAT-EDD models achieve better performance than the T-DD and the T-EDD models, respectively. This indicates that our Compare-Aggregate method is helpful for understanding the dialogue. The CAT-EDD model outperforms the ITDD model on all metrics, which indicates that our CAT module automatically learns the topic transfer between the conversation history and the last utterance, as we expected. The CAT-EDD does not perform as well as the CAT-DD, which shows that it is necessary to set up an independent mechanism to learn topic transfer, rather than learning it automatically through attention in the decoder. + +Compared with the RNN-based models, the Transformer-based models get better performance on PPL/BLEU/ROUGE. This shows that the latter are better at converging to the ground truth. The VHRED(c) and the VHRED(a) get better KU but worse QKU than the Transformer-based models. This means that the latent variable models increase the diversity of replies and use more document tuples, but their ability to extract unique tuples is not as good as that of the Transformer-based ones. + +Table 3 shows the manual evaluations for all models on the Reduced (Sampled) dataset. The CAT-DD model gets the highest scores on Fluency/Coherence/Informativeness. When experimenting with the Sampled test set, we can see that the advantages of our models become greater than
| Model | Flu. | Coh. | Inf. |
| --- | --- | --- | --- |
| VHRED(-k) | 3.71 (3.72) | 2.82 (2.72) | 3.01 (2.82) |
| VHRED(c) | 3.73 (3.82) | 3.04 (3.11) | 3.03 (3.05) |
| VHRED(a) | 3.84 (3.77) | 3.11 (3.14) | 3.22 (3.06) |
| T-DD | 3.84 (3.82) | 3.03 (3.06) | 3.03 (3.06) |
| T-EDD | 3.84 (3.83) | 3.02 (3.08) | 3.05 (3.05) |
| ITDD | 3.90 (3.91) | 3.11 (3.12) | 3.43 (3.42) |
| CAT-EDD | 4.02 (3.93) | 3.12 (3.33) | 3.33 (3.41) |
| CAT-DD | 4.09 (4.09) | 3.39 (3.43) | 3.44 (3.61) |
+ +Table 3: Manual evaluations on the CMU.DoG dataset. Flu./Coh./Inf. mean Fluency/Coherence/Informativeness, respectively; $\cdot (\cdot)$ means Reduced (Sampled) test data. + +
| Models | PPL | BLEU | KU-2 (%) / QKU-2 |
| --- | --- | --- | --- |
| CAT-DD | 16.1 | 1.21 | 11.1 / 2.05 |
| w/o-left | 19.8* | 0.90* | 8.2* / 2.56* |
| w/o-(5,6) | 18.7* | 0.93* | 9.1* / 2.48◇ |
| w/o-(G) | 18.2* | 0.96* | 9.2◇ / 2.46* |
+ +Table 4: Ablation study on the Sampled test set. We take the CAT-DD as the base model for the significance test; $\diamond$ and $*$ stand for $p < 0.05$ and $p < 0.01$ , respectively. w/o means without. + +the results of the Reduced version in both automatic and manual evaluations. Our model shows more advantages on datasets with more topic transfer. + +# 5.2 Ablation Study + +Table 4 illustrates the ablation study of the CAT-DD model. w/o-left means the left branch is removed, and the model degenerates to T-DD, which takes the last utterance and the document as inputs. We can see that all automatic evaluation metrics drop significantly, indicating that the dialogue history cannot simply be ignored. w/o-(5,6) is a model without Eq. (5) and (6), which is equivalent to simply connecting the outputs of the left and the right encoder branches. The results show that the model's ability to distinguish conversational topic transfer is weakened. w/o-(G) is a model that removes the utter-attention in the left branch, which means we do not use $\mathbf{L}$ to guide $\mathbf{H}$ ; the structure of the left branch becomes the same as the right branch, with $\mathbf{H}$ as input. The performance declines, which indicates that the guiding process is useful. The significance tests (two-tailed Student's t-test) on PPL/BLEU/KU-2/QKU-2 reveal the effectiveness of each component. + +# 5.3 History Round Study + +We use the CAT-DD model and the Sampled test set to study the influence of the number of historical dialogue rounds. For example, setting the dialogue history to 0 means we use only the last utterance, and the CAT-DD becomes the w/o-left model in the ablation study. + +![](images/b7d82530d8c82d0bd2341b2ce94ccd260e83c855fc11bdf542a0c2e8acd02290.jpg) + +![](images/1597b8a4eeae7e4cdf1c8f20c8121e38ec810ff0f696f9085a71cffed837bc87.jpg) +Figure 3: The effect of dialogue history rounds on the VHRED(a)/ITDD/CAT-DD models. The abscissa represents the historical dialogue rounds. The ordinate represents the BLEU/KU-3/QKU-3 values.
Setting the dialogue history to $N$ means we use $N$ rounds of dialogue history as the input of the left branch. We set the conversation history to $0/1/2/3/4$ to test the responses of the VHRED(a)/ITDD/CAT-DD models. Figure 3 shows the trends of BLEU/KU-3/QKU-3. The top figure shows the BLEU trend; the CAT-DD reaches its maximum when the number of rounds is 2, and further increasing the rounds does not significantly improve generation. In the middle figure, as the dialogue history increases from 0 to 2 rounds, the VHRED(a) and the CAT-DD show a visible improvement on KU-3, which shows that the information contained in the dialogue history can be identified and affects the extraction of document information. The ITDD model is not as sensitive as the others on KU-3, indicating that the incremental encoding structure pays more attention to the information of the last utterance. The bottom figure shows the trend of QKU-3. As the dialogue history increases, the ITDD model remains stable while the VHRED(a) and the CAT-DD models show a declining trend, which again indicates that the VHRED(a) and the CAT-DD are more sensitive to the dialogue history. + +# 5.4 History Importance Study + +Figure 4 shows the average $\mathrm{sigmoid}(\mathbf{W}^{\alpha}\alpha)$ value in the CAT-DD model over the Reduced/Sampled test sets and the Validation set. A higher value means a stronger correlation between the last utterance and the dialogue history. We can see that + +![](images/372fbcb5f1a09dec13a85581cba3871d2a18c3725c3815f326201b5cd38974ee.jpg) +Figure 4: The rating of dialogue history in the CAT-DD model on the Reduced and Sampled test sets. The abscissa represents the dialogue rounds and the ordinate represents the correlation score in the model. +Figure 5: Case study on the CMU.DoG Sampled dataset. S1/S2 mean Speaker-1/Speaker-2, respectively. (w/o-(5,6)) and (w/o-(G)) are models from the ablation study.
Document: +... sally hawkins as elisa esposito, a mute cleaner who works at a secret government laboratory. michael shannon as colonel richard strickland ... rating rotten tomatoes: 92% The shape of water is a 2017 american fantasy film ... it stars sally hawkins, michael shannon, richard jenkins, Doug jones, michael stuhlbarg, and octavia spencer ...
Dialogue history: +S1: I wonder if it's a government creation or something captured from the wild. i would assume the wild. +S2: It was captured for governmental experiments. +The last Utterance: +S1: Is it a big name cast?
Ground truth: +S2: Sally hawkins played the role of the mute cleaner, michael shannon played the role of colonel richard strickland.
Generated response: +VHRED(a): it has rating rotten tomatoes: 92%. +TDD: i am not sure about it. +ITDD: yes, sally hawkins as elisa esposito. +CAT-DD: sally hawkins, michael shannon, richard jenkins, doug jones, michael stuhlbarg, and octavia spencer. +(w/o-(5,6)): yes, sally hawkins works at a secret government laboratory. +(w/o-(G)): it is a 2017 american fantasy film.
on the Reduced test set and the Validation set, the relevance score is higher than on the Sampled data, which shows that the last utterance and the dialogue history are less related in the latter. Our model captures this change and performs better on the Sampled data than on the Reduced data. When the number of historical rounds increases from 1 to 2, the relevance score drops noticeably for all datasets, which means that increasing the dialogue history introduces more unrelated information. When the dialogue history increases from 2 to 6 rounds, no dataset shows a significant change, indicating that adding more dialogue rounds does not improve the model's ability to recognize topic change. + +# 5.5 Case Study + +In Figure 5, we randomly select an example from the Sampled test set for a case study. The document, the dialogue history, the last utterance, and the ground truth are presented. We can observe that the last utterance is irrelevant to the dialogue history. The generated responses of the different models are listed below them. The VHRED(a) and CAT-DD(w/o-G) models misunderstand the dialogue and use the wrong document knowledge. The TDD gives a generic reply. The ITDD model answers correctly but without enough document information. The CAT-DD(w/o-(5,6)) model gives a response that is influenced by the irrelevant dialogue history which we aim to eliminate. Only the CAT-DD model generates a reasonable reply and uses the correct document knowledge, which means it correctly understands the dialogue. + +# 6 Conclusion + +We propose the Compare Aggregate method to understand Document-grounded Dialogue (DGD). The dialogue is divided into the last utterance and the dialogue history. The relationship between the two parts is analyzed to denoise the dialogue context and aggregate the document information for response generation. Experiments show that our model outperforms previous work in both automatic and manual evaluations.
Our model can better understand the dialogue context and select proper document information for response generation. We also propose Knowledge Utilization (KU) and Quality of Knowledge Utilization (QKU), which are used to measure the quantity and quality of the imported external knowledge, respectively. In the future, we will further study the topic transition problem and the knowledge injecting problem in the DGD. + +# Acknowledgments + +This paper is supported by the National Natural Science Foundation of China under Grant No. 62076081, No.61772153 and No.61936010. + +# References + +Satoshi Akasaki and Nobuhiro Kaji. 2019. Conversation initiation by diverse news contents introduction. In *NAACL-HLT (1)*, pages 3988-3998. Association for Computational Linguistics. +Siddhartha Arora, Mitesh M. Khapra, and Harish G. Ramaswamy. 2019. On knowledge distillation from complex networks for response prediction. In NAACL-HLT (1), pages 3813-3822. Association for Computational Linguistics. + +Yoshua Bengio, Réjean Ducharme, and Pascal Vincent. 2000. A neural probabilistic language model. In NIPS, pages 932-938. MIT Press. +Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In ICLR (Poster). OpenReview.net. +Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In AAAI, pages 5110-5117. AAAI Press. +Karthik Gopalakrishnan, Behnam Hedayatnia, Qinlang Chen, Anna Gottardi, Sanjeev Kwatra, Anu Venkatesh, Raefer Gabriel, Dilek Hakkani-Tur, and Amazon Alexa AI. 2019. Topical-chat: Towards knowledge-grounded open-domain conversations. Proc. Interspeech 2019, pages 1891-1895. +Byeongchang Kim, Jaewoo Ahn, and Gunhee Kim. 2020. Sequential latent knowledge selection for knowledge-grounded dialogue. CoRR, abs/2002.07510. +Diederik P. Kingma and Jimmy Ba. 2015. 
Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. +Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada. Association for Computational Linguistics. +Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In HLT-NAACL, pages 110-119. The Association for Computational Linguistics. +Zekang Li, Cheng Niu, Fandong Meng, Yang Feng, Qian Li, and Jie Zhou. 2019. Incremental transformer with deliberation decoder for document grounded conversations. In ACL (1), pages 12-21. Association for Computational Linguistics. +Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81. +Shuman Liu, Hongshen Chen, Zhaochun Ren, Yang Feng, Qun Liu, and Dawei Yin. 2018. Knowledge diffusion for neural dialogue generation. In ACL (1), pages 1489-1498. Association for Computational Linguistics. +Zhibin Liu, Zheng-Yu Niu, Hua Wu, and Haifeng Wang. 2019. Knowledge aware conversation generation with reasoning on augmented graph. CoRR, abs/1903.10245. + +Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1412-1421. The Association for Computational Linguistics. +Chuan Meng, Pengjie Ren, Zhumin Chen, Christof Monz, Jun Ma, and Maarten de Rijke. 2019. Refnet: A reference-aware network for background based conversation. CoRR, abs/1908.06449. +Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 
2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 3111-3119. +Nikita Moghe, Siddhartha Arora, Suman Banerjee, and Mitesh M. Khapra. 2018. Towards exploiting background knowledge for building conversation systems. In EMNLP, pages 2322-2332. Association for Computational Linguistics. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL, pages 311-318. ACL. +Lianhui Qin, Michel Galley, Chris Brockett, Xiaodong Liu, Xiang Gao, Bill Dolan, Yejin Choi, and Jianfeng Gao. 2019. Conversing by reading: Contentful neural conversation with on-demand machine reading. In ACL (1), pages 5427-5436. Association for Computational Linguistics. +Pengjie Ren, Zhumin Chen, Christof Monz, Jun Ma, and Maarten de Rijke. 2019. Thinking globally, acting locally: Distantly supervised global-to-local knowledge selection for background based conversation. CoRR, abs/1908.09528. +Chinnadhurai Sankar, Sandeep Subramanian, Chris Pal, Sarath Chandar, and Yoshua Bengio. 2019. Do neural dialog systems use the conversation history effectively? an empirical study. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 32-37. +Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In *ICLR (Poster)*. OpenReview.net. +Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. + +Building end-to-end dialogue systems using generative hierarchical neural network models. 
In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 3776-3784. +Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C. Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 3295-3301. +Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In NIPS, pages 3104-3112. +Zhiliang Tian, Wei Bi, Dongkyu Lee, Lanqing Xue, Yiping Song, Xiaojiang Liu, and Nevin L. Zhang. 2020. Response-anticipated memory for on-demand knowledge integration in response generation. CoRR, abs/2005.06128. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 5998-6008. +Yingce Xia, Fei Tian, Lijun Wu, Jianxin Lin, Tao Qin, Nenghai Yu, and Tie-Yan Liu. 2017. Deliberation networks: Sequence generation beyond one-pass decoding. In NIPS, pages 1784-1794. +Hao-Tong Ye, Kai-Ling Lo, Shang-Yu Su, and Yun-Nung Chen. 2019. Knowledge-grounded response generation with deep attentional latent-variable model. CoRR, abs/1903.09813. +Tom Young, Erik Cambria, Iti Chaturvedi, Hao Zhou, Subham Biswas, and Minlie Huang. 2018. Augmenting end-to-end dialogue systems with commonsense knowledge. In AAAI, pages 4970-4977. AAAI Press. +Yangjun Zhang, Pengjie Ren, and Maarten de Rijke. 2019. Improving background based conversation with context-aware knowledge pre-selection. CoRR, abs/1906.06685. +Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In ACL (1), pages 654-664. Association for Computational Linguistics.
+Xueliang Zhao, Chongyang Tao, Wei Wu, Can Xu, Dongyan Zhao, and Rui Yan. 2019. A document-grounded matching network for response selection in retrieval-based chatbots. In *IJCAI*, pages 5443-5449. ijcai.org. +Wen Zheng and Ke Zhou. 2019. Enhancing conversational dialogue models with grounded knowledge. In CIKM, pages 709-718. ACM. + +Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018a. Commonsense knowledge aware conversation generation with graph attention. In *IJCAI*, pages 4623-4629. ijcai.org. +Kangyan Zhou, Shrimai Prabhumoye, and Alan W. Black. 2018b. A dataset for document grounded conversations. In EMNLP, pages 708-713. Association for Computational Linguistics. \ No newline at end of file diff --git a/acompareaggregatetransformerforunderstandingdocumentgroundeddialogue/images.zip b/acompareaggregatetransformerforunderstandingdocumentgroundeddialogue/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..59fbef81f1b1ef6588ac099dc97bf28c8bdf1563 --- /dev/null +++ b/acompareaggregatetransformerforunderstandingdocumentgroundeddialogue/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:24bfc3e39a69d20efee17703f2cc3f997a09838eb0b0db554fb66eedd9c7aea9 +size 469255 diff --git a/acompareaggregatetransformerforunderstandingdocumentgroundeddialogue/layout.json b/acompareaggregatetransformerforunderstandingdocumentgroundeddialogue/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..1f342b715811e12434730b114d63120c29c2b06a --- /dev/null +++ b/acompareaggregatetransformerforunderstandingdocumentgroundeddialogue/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6d83ef1be3df0fe6875c1db5d62334f1de91d98c87209a879b65435c2c4ca51e +size 376133 diff --git a/aconcisemodelformulticriteriachinesewordsegmentationwithtransformerencoder/6484072a-ed0b-4aaa-ae63-8c2e14ddc8fc_content_list.json 
b/aconcisemodelformulticriteriachinesewordsegmentationwithtransformerencoder/6484072a-ed0b-4aaa-ae63-8c2e14ddc8fc_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..6edc65ca8ef3538cb613c72f17d44b85d44ead1d --- /dev/null +++ b/aconcisemodelformulticriteriachinesewordsegmentationwithtransformerencoder/6484072a-ed0b-4aaa-ae63-8c2e14ddc8fc_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:56ba7825325ff18ed5cc667444ea304e543144017ddf6746e0d4a7da5e258918 +size 80437 diff --git a/aconcisemodelformulticriteriachinesewordsegmentationwithtransformerencoder/6484072a-ed0b-4aaa-ae63-8c2e14ddc8fc_model.json b/aconcisemodelformulticriteriachinesewordsegmentationwithtransformerencoder/6484072a-ed0b-4aaa-ae63-8c2e14ddc8fc_model.json new file mode 100644 index 0000000000000000000000000000000000000000..738604770c7b2b4c43ff1e6c8c49190817f4c994 --- /dev/null +++ b/aconcisemodelformulticriteriachinesewordsegmentationwithtransformerencoder/6484072a-ed0b-4aaa-ae63-8c2e14ddc8fc_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:10da4fda7af9ad0c6c9ccaad9fb47bfa6660deb698f0593cc7b8af26b1b1abd6 +size 96138 diff --git a/aconcisemodelformulticriteriachinesewordsegmentationwithtransformerencoder/6484072a-ed0b-4aaa-ae63-8c2e14ddc8fc_origin.pdf b/aconcisemodelformulticriteriachinesewordsegmentationwithtransformerencoder/6484072a-ed0b-4aaa-ae63-8c2e14ddc8fc_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5edc574519854dd1150919f17e550d126ec2269d --- /dev/null +++ b/aconcisemodelformulticriteriachinesewordsegmentationwithtransformerencoder/6484072a-ed0b-4aaa-ae63-8c2e14ddc8fc_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2e0686c6f4d5044135c6e566851daa1b7d5c7e0938590481a6becc4c84d2e21c +size 690471 diff --git a/aconcisemodelformulticriteriachinesewordsegmentationwithtransformerencoder/full.md 
b/aconcisemodelformulticriteriachinesewordsegmentationwithtransformerencoder/full.md new file mode 100644 index 0000000000000000000000000000000000000000..d5522466d37873a5f44ff3675d25e2cb3672efb0 --- /dev/null +++ b/aconcisemodelformulticriteriachinesewordsegmentationwithtransformerencoder/full.md @@ -0,0 +1,397 @@ +# A Concise Model for Multi-Criteria Chinese Word Segmentation with Transformer Encoder + +Xipeng Qiu*, Hengzhi Pei, Hang Yan, Xuanjing Huang + +Shanghai Key Laboratory of Intelligent Information Processing, Fudan University + +School of Computer Science, Fudan University + +{xpqiu, hvpei16, hyan19, xjhuang} @fudan.edu.cn + +# Abstract + +Multi-criteria Chinese word segmentation (MCCWS) aims to exploit the relations among the multiple heterogeneous segmentation criteria and further improve the performance of each single criterion. Previous work usually regards MCCWS as different tasks, which are learned together under the multi-task learning framework. In this paper, we propose a concise but effective unified model for MCCWS, which is fully-shared for all the criteria. By leveraging the powerful ability of the Transformer encoder, the proposed unified model can segment Chinese text according to a unique criterion-token indicating the output criterion. Besides, the proposed unified model can segment both simplified and traditional Chinese and has an excellent transfer capability. Experiments on eight datasets with different criteria show that our model outperforms our single-criterion baseline model and other multi-criteria models. Source codes of this paper are available on Github1. + +# 1 Introduction + +Chinese word segmentation (CWS) is a preliminary step to process Chinese text. The mainstream CWS methods regard CWS as a character-based sequence labeling problem, in which each character is assigned a label to indicate its boundary information. 
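As a concrete illustration of this character-based labeling formulation, under the common B/M/E/S scheme (begin/middle/end of a word, or a single-character word — one typical choice of boundary labels, not necessarily the exact scheme used by every method cited here), a segmented sentence maps to per-character labels as follows:

```python
def words_to_tags(words):
    # Map each segmented word to per-character boundary labels (B/M/E/S).
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append("S")
        else:
            tags += ["B"] + ["M"] * (len(w) - 2) + ["E"]
    return tags

# "林丹 | 赢得 | 总冠军" (Lin Dan | won | the championship), CTB-style segmentation.
print(words_to_tags(["林丹", "赢得", "总冠军"]))  # ['B', 'E', 'B', 'E', 'B', 'M', 'E']
```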
Recently, various neural models have been explored to reduce the effort of feature engineering (Chen et al., 2015a,b; Qun et al., 2020; Wang and Xu, 2017; Kurita et al., 2017; Ma et al., 2018).

Chen et al. (2017) later proposed multi-criteria Chinese word segmentation (MCCWS) to effectively utilize heterogeneous resources with different segmentation criteria. Specifically, they regard each segmentation criterion as a single

| Corpora | Lin Dan | won | the championship |
| --- | --- | --- | --- |
| CTB | 林丹 | 赢得 | 总冠军 |
| PKU | 林 丹 | 赢得 | 总 冠军 |
| MSRA | 林丹 | 赢得 | 总 冠军 |
Table 1: Illustration of different segmentation criteria.

task under the framework of multi-task learning, where a shared layer is used to extract the criteria-invariant features, and a private layer is used to extract the criteria-specific features.

However, it is unnecessary to use a specific private layer for each criterion. These different criteria often have partial overlaps. For the example in Table 1, the segmentation of "林丹(Lin Dan)" is the same in the CTB and MSRA criteria, and the segmentation of "总|冠军(the championship)" is the same in the PKU and MSRA criteria. All three criteria have the same segmentation for the word "赢得(won)". Although these criteria are inconsistent, they share some partial segmentation. Therefore, it is interesting to use a unified model for all the criteria. At the inference phase, a criterion-token is taken as input to indicate the target segmentation criterion. Following this idea, Gong et al. (2018) used multiple LSTMs and a criterion switcher at every position to automatically switch the routing among these LSTMs. He et al. (2019) used a shared BiLSTM to deal with all the criteria by adding two artificial tokens at the beginning and end of an input sentence to specify the target criterion. However, due to the long-range dependency problem, it is hard for a BiLSTM to carry the criterion information to each character in a long sentence.

In this work, we propose a concise unified model for the MCCWS task by integrating shared knowledge from multiple segmentation criteria. Inspired by the success of the Transformer (Vaswani et al., 2017), we design a fully shared architecture for MCCWS, where a shared Transformer encoder is

![](images/3344882337096096ba922c77b58fc91c7d08e64aea31af306d959a219641998d.jpg)
(a) CTB

![](images/bbd6ce3f2e8829c5036df29d431e643c567c073d7d7c6ee826594029457cbc7f.jpg)
(b) PKU
Figure 1: Unified model for MCCWS. $\left[\cdot \right]$ is a special token indicating the output criterion.
The label $\{B,M,E,S\}$ of each character indicates whether it is the beginning, middle, or end of a word, or a single-character word.

used to extract the criteria-aware contextual features, and a shared decoder is used to predict the criteria-specific labels. An artificial token is added at the beginning of the input sentence to determine the output criterion. A similar idea is used in the field of machine translation: Johnson et al. (2017) used a single model to translate between multiple languages. Figure 1 illustrates our model. There are two reasons to use the Transformer encoder for MCCWS. The primary reason is its neatness and ingenious simplicity in modeling the criterion-aware context representation for each character. Since the Transformer encoder uses the self-attention mechanism to capture the interaction between every two tokens in a sentence, each character can immediately perceive the information of the criterion-token as well as the context information. The secondary reason is that the Transformer encoder has potential advantages in capturing long-range context information and has better parallel efficiency than the popular LSTM-based encoders. Finally, we evaluate on eight segmentation criteria over five simplified Chinese and three traditional Chinese corpora. Experiments show that the proposed model is effective in improving the performance of MCCWS.

The contributions of this paper could be summarized as follows.

- We propose a concise unified model for MCCWS based on the Transformer encoder, which adopts a single fully-shared model to segment sentences with a given target criterion. It is attractive in practice to use a single model to produce multiple outputs with different criteria.

- By a thorough investigation, we show the feasibility of using a unified CWS model to segment both simplified and traditional Chinese (see Sec. 4.3). We think it is a promising direction for CWS to exploit the collective knowledge of these two kinds of Chinese.
- The learned criterion embeddings reflect the relations between different criteria, which gives our model better transfer capability to a new criterion (see Sec. 4.4) just by finding a new criterion embedding in the latent semantic space.
- This is a first attempt to train the Transformer encoder from scratch for the CWS task. Although we mainly address its conciseness and suitability for MCCWS in this paper and do not intend to optimize a specific Transformer encoder for single-criterion CWS (SCCWS), we show that the Transformer encoder is also valid for SCCWS. The potential advantages of the Transformer encoder are that it can effectively extract the long-range interactions among characters and has better parallel ability than LSTM-based encoders.

# 2 Background

In this section, we briefly describe the background of our work.

# 2.1 Neural Architecture for CWS

Usually, the CWS task can be viewed as a character-based sequence labeling problem. Specifically, each character in a sentence $X = \{x_{1},\dots,x_{T}\}$ is labelled as one of $y\in \mathcal{L} = \{B,M,E,S\}$, indicating the beginning, middle, or end of a word, or a single-character word. The aim of the CWS task is to figure out the ground-truth labels $Y^{*} = \{y_{1}^{*},\ldots ,y_{T}^{*}\}$:

$$
Y^{*} = \underset{Y \in \mathcal{L}^{T}}{\arg\max}\, p(Y|X). \tag{1}
$$

Recently, various neural models have been widely used in CWS and can effectively reduce the efforts of feature engineering. The modern architecture of neural CWS usually consists of three components:

**Embedding Layer:** In neural models, the first step is to map discrete language symbols into a distributed embedding space. Formally, each character $x_{t}$ is mapped as $\mathbf{e}_{x_t} \in \mathbb{R}^{d_e}$, where $d_{e}$ is a hyper-parameter indicating the size of the character embedding.
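The BMES labeling scheme of Sec. 2.1 can be made concrete with a small sketch (`to_bmes` is an illustrative helper name, not from the paper):

```python
def to_bmes(words):
    """Convert a segmented sentence (a list of words) to per-character BMES labels."""
    labels = []
    for w in words:
        if len(w) == 1:
            labels.append("S")                 # a single-character word
        else:
            labels.append("B")                 # beginning of a multi-character word
            labels.extend("M" * (len(w) - 2))  # middle characters, if any
            labels.append("E")                 # end of the word
    return labels

# "林丹 | 赢得 | 总冠军" segmented under the CTB criterion:
print(to_bmes(["林丹", "赢得", "总冠军"]))  # ['B', 'E', 'B', 'E', 'B', 'M', 'E']
```

Different criteria produce different word lists for the same characters, hence different label sequences.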
![](images/edd9ddb767e43c58c260b0348a78b40fcd8e4558c41a5c1a6c71a4919b99606d.jpg)
(a) SCCWS

![](images/769f64e47027aac3947bc7da7274e5264b86f6df8e264e277309b251aa8b687d.jpg)
(b) MTL-based MCCWS
Figure 2: Architectures of SCCWS and MCCWS. The shaded components are shared for different criteria.

![](images/bc2dcba0f6ee0ce02fbb0de78e36e055d666bc828e9f2591daba15395f8f7f02.jpg)
(c) Unified MCCWS

Encoding Layer: The encoding layer is to extract the contextual features for each character.

For example, a prevalent choice for the encoding layer is the bi-directional LSTM (BiLSTM) (Hochreiter and Schmidhuber, 1997), which can incorporate information from both sides of the sequence:

$$
\mathbf{h}_{t} = \operatorname{BiLSTM}\left(\mathbf{e}_{x_{t}}, \overrightarrow{\mathbf{h}}_{t-1}, \overleftarrow{\mathbf{h}}_{t+1}, \theta_{e}\right), \tag{2}
$$

where $\overrightarrow{\mathbf{h}}_t$ and $\overleftarrow{\mathbf{h}}_t$ are the hidden states at step $t$ of the forward and backward LSTMs respectively, and $\theta_e$ denotes all the parameters in the BiLSTM layer.

Besides BiLSTM, CNN is also alternatively used to extract features.

Decoding Layer: The extracted features are then sent to a conditional random field (CRF) (Lafferty et al., 2001) layer or a multi-layer perceptron (MLP) for tag inference.

When using CRF as the decoding layer, $p(Y|X)$ in Eq (1) can be formalized as:

$$
p(Y|X) = \frac{\Psi(Y|X)}{\sum_{Y^{\prime} \in \mathcal{L}^{n}} \Psi(Y^{\prime}|X)}, \tag{3}
$$

where $\Psi(Y|X)$ is the potential function.
In a first-order linear-chain CRF, we have:

$$
\Psi(Y|X) = \prod_{t=2}^{n} \psi(X, t, y_{t-1}, y_{t}), \tag{4}
$$

$$
\psi(X, t, y^{\prime}, y) = \exp\left(\delta(X, t)_{y} + \mathbf{b}_{y^{\prime}y}\right), \tag{5}
$$

where $\mathbf{b}_{y'y} \in \mathbb{R}$ is a trainable parameter corresponding to the label pair $(y', y)$, and the score function $\delta(X, t) \in \mathbb{R}^{|\mathcal{L}|}$ calculates the score of each label for tagging the $t$-th character:

$$
\delta(X, t) = \mathbf{W}_{\delta}^{\top} \mathbf{h}_{t} + \mathbf{b}_{\delta}, \tag{6}
$$

where $\mathbf{h}_t$ is the hidden state of the encoder at step $t$, and $\mathbf{W}_{\delta} \in \mathbb{R}^{d_h \times |\mathcal{L}|}$ and $\mathbf{b}_{\delta} \in \mathbb{R}^{|\mathcal{L}|}$ are trainable parameters.

When using MLP as the decoding layer, $p(Y|X)$ in Eq (1) is directly predicted by an MLP with a softmax output layer:

$$
p\left(y_{t}|X\right) = \operatorname{MLP}\left(\mathbf{h}_{t}, \theta_{d}\right), \quad \forall t \in [1, T] \tag{7}
$$

where $\theta_{d}$ denotes all the parameters in the MLP layer.

Most current state-of-the-art CWS models (Chen et al., 2015a; Xu and Sun, 2016; Liu et al., 2016; Yang et al., 2018; Qun et al., 2020) mainly focus on single-criterion CWS (SCCWS). Figure 2a shows the architecture of SCCWS.

# 2.2 MCCWS with Multi-Task Learning

To improve the performance of CWS by exploiting multiple corpora with heterogeneous criteria, Chen et al. (2017) utilize the multi-task learning framework to model the shared information among these different criteria.
Formally, assuming that there are $M$ corpora with heterogeneous segmentation criteria, we refer to $\mathcal{D}_m$ as corpus $m$ with $N_{m}$ samples:

$$
\mathcal{D}_{m} = \left\{\left(X_{n}^{(m)}, Y_{n}^{(m)}\right)\right\}_{n=1}^{N_{m}}, \tag{8}
$$

where $X_{n}^{(m)}$ and $Y_{n}^{(m)}$ denote the $n$-th sentence and the corresponding labels in corpus $m$ respectively.

The encoding layer introduces a shared encoder to mine the common knowledge across multiple corpora, together with the original private encoder. The architecture of MTL-based MCCWS is shown in Figure 2b.

Concretely, for corpus $m$, a shared encoder and a private encoder are first used to extract the criterion-agnostic and criterion-specific features:

$$
\mathbf{H}^{(s)} = \operatorname{enc}_{s}\left(\mathbf{e}_{X}; \theta_{e}^{(s)}\right), \tag{9}
$$

$$
\mathbf{H}^{(m)} = \operatorname{enc}_{m}\left(\mathbf{e}_{X}; \theta_{e}^{(m)}\right), \quad \forall m \in [1, M] \tag{10}
$$

where $\mathbf{e}_X = \{\mathbf{e}_{x_1},\dots ,\mathbf{e}_{x_T}\}$ denotes the embeddings of the input characters $x_{1},\dots ,x_{T}$, $\operatorname{enc}_s(\cdot)$ represents the shared encoder and $\operatorname{enc}_m(\cdot)$ represents the private encoder for corpus $m$; $\theta_e^{(s)}$ and $\theta_{e}^{(m)}$ are the shared and private parameters respectively. The shared and private encoders are usually implemented with RNN or CNN networks.

Then a private decoder is used to predict the criterion-specific labels. For the $m$-th corpus, the probability of the output labels is

$$
p_{m}(Y|X) = \operatorname{dec}_{m}\left([\mathbf{H}^{(s)}; \mathbf{H}^{(m)}]; \theta_{d}^{(m)}\right), \tag{11}
$$

where $\operatorname{dec}_m(\cdot)$ is a private CRF or MLP decoder for corpus $m$ $(m\in [1,M])$, taking the shared and private features as inputs; $\theta_d^{(m)}$ denotes the parameters of the $m$-th private decoder.
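The shared-private scheme of Eqs. (9)–(11) can be sketched with toy linear maps standing in for the RNN/CNN encoders and an MLP-style decoder (all dimensions and weights below are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d_e, d_h, n_labels, M = 5, 8, 6, 4, 3   # sentence length, dims, |labels|, #criteria

# Toy parameters: one shared encoder, plus a private encoder/decoder per corpus.
W_shared = rng.normal(size=(d_e, d_h))
W_private = [rng.normal(size=(d_e, d_h)) for _ in range(M)]
W_dec = [rng.normal(size=(2 * d_h, n_labels)) for _ in range(M)]

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict(e_X, m):
    """Per-character label distribution for corpus m (Eqs. 9-11, MLP-style decoder)."""
    H_s = e_X @ W_shared                       # criterion-agnostic features, Eq. (9)
    H_m = e_X @ W_private[m]                   # criterion-specific features, Eq. (10)
    H = np.concatenate([H_s, H_m], axis=-1)    # [H^(s); H^(m)]
    return softmax(H @ W_dec[m])               # private decoder, Eq. (11)

e_X = rng.normal(size=(T, d_e))                # embedded input characters
p = predict(e_X, m=1)
print(p.shape)   # (5, 4): one distribution over {B, M, E, S} per character
```

Note that every corpus carries its own private encoder and decoder, which is exactly the redundancy the unified model of Sec. 3 removes.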
Objective The objective is to maximize the log likelihood of the true labels on all the corpora:

$$
\mathcal{J}_{\text{seg}}\left(\Theta^{m}, \Theta^{s}\right) = \sum_{m=1}^{M} \sum_{n=1}^{N_{m}} \log p_{m}\left(Y_{n}^{(m)} \mid X_{n}^{(m)}; \Theta^{m}, \Theta^{s}\right), \tag{12}
$$

where $\Theta^m = \{\theta_e^{(m)},\theta_d^{(m)}\}$ and $\Theta^s = \{\mathbf{E},\theta_e^{(s)}\}$ denote all the private and shared parameters respectively; $\mathbf{E}$ is the embedding matrix.

# 3 Proposed Unified Model

In this work, we propose a more concise architecture for MCCWS, which adopts the Transformer encoder (Vaswani et al., 2017) to extract the contextual features for each input character. In our proposed architecture, both the encoder and the decoder are shared by all the criteria. The only difference for each criterion is that a unique token is taken as input to specify the target criterion, which makes the shared encoder capture the criterion-aware representation. Figure 2 illustrates the difference between our proposed model and the previous models. A more detailed architecture for MCCWS is shown in Figure 3.

# 3.1 Embedding Layer

Given a sentence $X = \{x_{1},\ldots ,x_{T}\}$, we first map it into a vector sequence where each token is a $d_{model}$-dimensional vector. Besides the standard character embedding, we introduce three extra embeddings: criterion embedding, bigram embedding, and position embedding.

1) Criterion Embedding: Firstly, we add a unique criterion-token at the beginning of $X$ to indicate the output criterion. For the $m$-th criterion, the criterion-token is $[m]$. We use $\mathbf{e}_{[m]}$ to denote its embedding. Thus, the model can learn the relations between different criteria in the latent embedding space.

![](images/619a786c6bc475207e01831d24da6755d4466433b739e9f6f6bf7128c21dcbfb.jpg)
Figure 3: Proposed Model for MCCWS.
2) Bigram Embedding: As shown in (Chen et al., 2015b; Shao et al., 2017; Zhang et al., 2018), character-level bigram features can significantly benefit the CWS task. Following their settings, we also introduce the bigram embedding to augment the character-level unigram embedding. The representation of character $x_{t}$ is

$$
\mathbf{e}_{x_{t}}^{\prime} = \operatorname{FC}\left(\mathbf{e}_{x_{t}} \oplus \mathbf{e}_{x_{t-1}x_{t}} \oplus \mathbf{e}_{x_{t}x_{t+1}}\right), \tag{13}
$$

where $\mathbf{e}$ denotes the $d$-dimensional embedding vector for a unigram or bigram, $\oplus$ is the concatenation operator, and $\operatorname{FC}$ is a fully connected layer mapping the concatenated embedding of dimension $3d$ into the embedding $\mathbf{e}_{x_t}^{\prime} \in \mathbb{R}^{d_{model}}$.

3) Position Embedding: To capture the order information of a sequence, a position embedding $PE$ is used for each position. The position embedding can be learnable parameters or predefined. In this work, we use the predefined position embedding following (Vaswani et al., 2017). For the $t$-th character in a sentence, its position embedding is defined by

$$
PE_{t,2i} = \sin\left(t / 10000^{2i/d_{model}}\right), \tag{14}
$$

$$
PE_{t,2i+1} = \cos\left(t / 10000^{2i/d_{model}}\right), \tag{15}
$$

where $i$ denotes the dimensional index of the position embedding.

Finally, the embedding matrix of the sequence $X = \{x_{1},\dots ,x_{T}\}$ with criterion $m$ is formulated as

$$
\mathbf{H} = \left[ \mathbf{e}_{[m]} + PE_{0};\ \mathbf{e}_{x_{1}}^{\prime} + PE_{1};\ \dots ;\ \mathbf{e}_{x_{T}}^{\prime} + PE_{T} \right], \tag{16}
$$

where $\mathbf{H} \in \mathbb{R}^{(T + 1) \times d_{model}}$, and $(T + 1)$ and $d_{model}$ represent the length and the dimension of the input vector sequence.
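Assembling the input matrix $\mathbf{H}$ of Eq. (16), with the sinusoidal position embeddings of Eqs. (14)–(15), might look like the following sketch (random stand-ins for the learned embedding tables):

```python
import numpy as np

def positional_encoding(length, d_model):
    """Predefined sinusoidal position embeddings, Eqs. (14)-(15); d_model must be even."""
    pos = np.arange(length)[:, None]              # position t
    i = np.arange(d_model // 2)[None, :]          # dimensional index i
    angles = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((length, d_model))
    pe[:, 0::2] = np.sin(angles)                  # even dimensions
    pe[:, 1::2] = np.cos(angles)                  # odd dimensions
    return pe

d_model, T = 8, 4
rng = np.random.default_rng(0)
e_criterion = rng.normal(size=(1, d_model))   # e_[m] for the criterion-token [m]
e_chars = rng.normal(size=(T, d_model))       # e'_{x_t} after the bigram FC layer

# Eq. (16): prepend the criterion-token, then add the position embeddings.
H = np.vstack([e_criterion, e_chars]) + positional_encoding(T + 1, d_model)
print(H.shape)   # (T + 1, d_model) = (5, 8)
```

Switching the target criterion only swaps the single row `e_criterion`; everything else stays identical.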
# 3.2 Encoding Layer

In sequence modeling, RNN and CNN often suffer from the long-term dependency problem and cannot effectively extract the non-local interactions in a sentence. Recently, the fully-connected self-attention architecture, such as the Transformer (Vaswani et al., 2017), achieves great success in many NLP tasks.

In this work, we adopt the Transformer encoder as our encoding layer, in which several multi-head self-attention layers are used to extract the contextual feature for each character.

Given a sequence of vectors $H \in \mathbb{R}^{(T + 1) \times d_{model}}$, a single-head self-attention projects $H$ into three different matrices: the query matrix $Q \in \mathbb{R}^{(T + 1) \times d_k}$, the key matrix $K \in \mathbb{R}^{(T + 1) \times d_k}$ and the value matrix $V \in \mathbb{R}^{(T + 1) \times d_v}$, and uses scaled dot-product attention to get the output representation.

$$
Q, K, V = HW^{Q}, HW^{K}, HW^{V} \tag{17}
$$

$$
\operatorname{Attn}(Q, K, V) = \operatorname{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V, \tag{18}
$$

where the matrices $W^{Q}\in \mathbb{R}^{d_{model}\times d_{k}}, W^{K}\in \mathbb{R}^{d_{model}\times d_k}, W^V\in \mathbb{R}^{d_{model}\times d_v}$ are learnable parameters and $\operatorname{softmax}(\cdot)$ is performed row-wise.

The Transformer encoder consists of several stacked multi-head self-attention layers and fully-connected layers. Assuming the input of the multi-head self-attention layer is $H$, its output $\tilde{H}$ is calculated by

$$
Z = \text{layer-norm}\left(H + \operatorname{MultiHead}(H)\right), \tag{19}
$$

$$
\tilde{H} = \text{layer-norm}\left(Z + \operatorname{FFN}(Z)\right), \tag{20}
$$

where $\text{layer-norm}(\cdot)$ represents layer normalization (Ba et al., 2016).

All the tasks with the different criteria use the same encoder.
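A minimal single-head sketch of Eqs. (17)–(20), with random stand-ins for the learnable weights (one head and a one-layer FFN; the paper's model stacks several multi-head layers):

```python
import numpy as np

rng = np.random.default_rng(0)
T1, d_model, d_k = 5, 8, 8          # (T + 1) tokens including the criterion-token

W_Q, W_K, W_V = (rng.normal(size=(d_model, d_k)) for _ in range(3))
W_1 = rng.normal(size=(d_model, d_model))
W_2 = rng.normal(size=(d_model, d_model))

def layer_norm(x, eps=1e-6):
    mu, sigma = x.mean(-1, keepdims=True), x.std(-1, keepdims=True)
    return (x - mu) / (sigma + eps)

def attn(H):
    """Scaled dot-product self-attention, Eqs. (17)-(18), single head."""
    Q, K, V = H @ W_Q, H @ W_K, H @ W_V
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)       # row-wise softmax
    return weights @ V

def encoder_layer(H):
    """One Transformer encoder block, Eqs. (19)-(20)."""
    Z = layer_norm(H + attn(H))                      # residual + layer-norm
    ffn = np.maximum(0.0, Z @ W_1) @ W_2             # position-wise FFN (ReLU)
    return layer_norm(Z + ffn)

H = rng.normal(size=(T1, d_model))
print(encoder_layer(H).shape)   # (5, 8)
```

Because every row of the attention matrix covers all tokens, each character attends to the criterion-token in a single step, which is the point made above about carrying the criterion information.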
Nevertheless, given a different criterion-token $[m]$, the encoder can effectively extract the criterion-aware representation for each character.

# 3.3 Decoding Layer

In the standard multi-task learning framework, each task has its private decoder to predict the task-specific labels. Different from the previous work, we use a shared decoder for all the tasks since we have extracted the criterion-aware representation for each character. In this work, we use CRF as the decoder since it is slightly better than MLP (see Sec. 4.2).

With the fully-shared encoder and decoder, our model is more concise than the shared-private architectures (Chen et al., 2017; Huang et al., 2019).

# 4 Experiments

Datasets We use eight CWS datasets from SIGHAN2005 (Emerson, 2005) and SIGHAN2008 (Jin and Chen, 2008). Among them, the AS, CITYU, and CKIP datasets are in traditional Chinese, while the MSRA, PKU, CTB, NCC, and SXU datasets are in simplified Chinese. Except where otherwise stated, we follow the setting of (Chen et al., 2017; Gong et al., 2018) and translate the AS, CITYU and CKIP datasets into simplified Chinese. We do not balance the datasets, and we randomly pick $10\%$ of the examples from the training set as the development set for all datasets. Similar to the previous work (Chen et al., 2017), we preprocess all the datasets by replacing continuous Latin characters and digits with a unique token, and converting all digits, punctuation and Latin letters to half-width to deal with the full/half-width mismatch between the training and test sets.

We have checked the annotation schemes of the different datasets, which are only partially shared; no two datasets have the same scheme. According to our statistics, the average overlap is about $20.5\%$ for 3-grams and $4.4\%$ for 5-grams.

Table 2 gives the details of the eight datasets after preprocessing.
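The preprocessing just described (collapsing runs of Latin letters and digits into unique tokens and converting full-width characters to half-width) might be sketched as follows; the `<NUM>`/`<ENG>` token names are illustrative, not necessarily the authors' exact choices:

```python
import re

def to_halfwidth(text):
    """Convert full-width ASCII-range characters (U+FF01..U+FF5E) to half-width."""
    return "".join(
        chr(ord(c) - 0xFEE0) if 0xFF01 <= ord(c) <= 0xFF5E else c
        for c in text
    )

def preprocess(text):
    text = to_halfwidth(text)
    # Collapse each run of digits or Latin letters into a single unique token.
    return re.sub(
        r"[0-9]+|[A-Za-z]+",
        lambda m: "<NUM>" if m.group(0)[0].isdigit() else "<ENG>",
        text,
    )

print(preprocess("他在２００８年加入Ｇｏｏｇｌｅ工作"))  # 他在<NUM>年加入<ENG>工作
```

Normalizing before tokenization keeps the training and test vocabularies consistent, which is the full/half-width mismatch the paper refers to.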
For the training and development sets, lines are split into shorter sentences or clauses at punctuation marks, in order to enable faster batching.

Pre-trained Embedding Based on (Chen et al., 2015b; Shao et al., 2017; Zhang et al., 2018), n-gram features are of great benefit to Chinese word segmentation and POS tagging tasks. Thus we use unigram and bigram embeddings for our models. We first pre-train unigram and bigram embeddings on the Chinese Wikipedia corpus by the method proposed in (Ling et al., 2015), which improves standard word2vec by incorporating token order information.

Hyper-parameters We use the Adam optimizer (Kingma and Ba, 2014) with the same warmup strategy as (Vaswani et al., 2017). The development set is used for parameter tuning. All the models are trained for 100 epochs. Pre-trained embeddings are fixed for the first 80 epochs and then updated during the following epochs. After

Table 2: Details of the eight datasets after preprocessing. "Word Types" represents the number of unique words. "Char Types" is the number of unique characters. "OOV Rate" is the out-of-vocabulary rate.
| Corpora | | | #Words | #Chars | Word Types | Char Types | OOV Rate |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Sighan05 | MSRA | Train | 2.4M | 4.0M | 75.4K | 5.1K | 1.32% |
| | | Test | 0.1M | 0.2M | 11.9K | 2.8K | |
| | AS | Train | 5.4M | 8.3M | 128.8K | 5.8K | 2.20% |
| | | Test | 0.1M | 0.2M | 18.0K | 3.4K | |
| | PKU | Train | 1.1M | 1.8M | 51.2K | 4.6K | 2.06% |
| | | Test | 0.1M | 0.2M | 12.5K | 2.9K | |
| | CITYU | Train | 1.1M | 1.8M | 43.4K | 4.2K | 3.69% |
| | | Test | 0.2M | 0.4M | 23.2K | 3.6K | |
| Sighan08 | CTB | Train | 0.6M | 1.0M | 40.5K | 4.2K | 3.80% |
| | | Test | 0.1M | 0.1M | 11.9K | 2.9K | |
| | CKIP | Train | 0.7M | 1.1M | 44.7K | 4.5K | 4.29% |
| | | Test | 0.1M | 0.1M | 14.2K | 3.1K | |
| | NCC | Train | 0.9M | 1.4M | 53.3K | 5.3K | 3.31% |
| | | Test | 0.2M | 0.2M | 20.9K | 3.9K | |
| | SXU | Train | 0.5M | 0.8M | 29.8K | 4.1K | 2.60% |
| | | Test | 0.1M | 0.2M | 11.6K | 2.8K | |
+ +
| Hyper-parameter | Value |
| --- | --- |
| Embedding size $d$ | 100 |
| Hidden state size $d_{model}$ | 256 |
| Transformer encoder layers | 6 |
| Attention heads | 4 |
| Batch size | 256 |
| Dropout ratio | 0.2 |
| Warmup steps | 4000 |
Table 3: Hyper-Parameter Settings

each training epoch, we test the model on the dev set, and the model with the highest $F1$ on the dev set is used for the test set. Table 3 shows the detailed hyper-parameters.

# 4.1 Overall Results

Table 4 shows the experiment results of the proposed model on the test sets of the eight CWS datasets.

We first compare our Transformer encoder with the previous models in the single-criterion scenario. The comparison is presented in the upper block of Table 4. Since Switch-LSTMs (Gong et al., 2018) is designed for MCCWS, it is just slightly better than BiLSTM in the single-criterion scenario. The Transformer encoder brings a noticeable improvement over the LSTM-based encoders of (Chen et al., 2017; Gong et al., 2018), and gives comparable performance to (Ma et al., 2018). In this work, we do not intend to prove the superiority of the Transformer encoder over LSTM-based encoders in the single-criterion scenario. Our purpose is to build a concise unified model based on the Transformer encoder for MCCWS.

In the multi-criteria scenario, we compare our unified model with the BiLSTM (Chen et al., 2017) and Switch-LSTMs (Gong et al., 2018). The lower block of Table 4 displays the contrast. Firstly, although different criteria are trained together, our unified model achieves better performance on all datasets except CTB. Compared to the single-criterion scenario, the multi-criteria scenario obtains a 0.42 gain in average $F1$ score. Moreover, our unified model brings a significant improvement of 5.05 in OOV recall. Secondly, compared to previous MCCWS models, our unified model also achieves a better average $F1$ score. Especially, our unified model significantly outperforms the unified BiLSTM (He et al., 2019), which indicates the Transformer encoder is more effective in carrying the criterion information than BiLSTM.
The reason is that the Transformer encoder can model the interaction between the criterion-token and each character directly, while BiLSTM needs to carry the criterion information step by step from the two ends to the middle of the input sentence. The criterion information could be lost for long sentences.

About 200 sentences are shared by more than one dataset with different segmentation schemes, but they are not much harder to segment correctly: their $F1$ score is 96.84.

Figure 4 visualizes the 2D PCA projection of the learned embeddings of the eight different criteria. Generally, the eight criteria are mapped into dispersed points in the embedding space, which indicates
| Models | | MSRA | AS | PKU | CTB | CKIP | CITYU | NCC | SXU | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Single-Criterion Models** | | | | | | | | | | |
| Stacked BiLSTM (Ma et al., 2018) | F | 97.4 | 96.2 | 96.1 | 96.7 | - | 97.2 | - | - | - |
| BiLSTM (Chen et al., 2017) | F | 95.84 | 94.2 | 93.3 | 95.3 | 93.06 | 94.07 | 92.17 | 95.17 | 94.14 |
| Switch-LSTMs (Gong et al., 2018) | F | 96.46 | 94.51 | 95.74 | 97.09 | 92.88 | 93.71 | 92.12 | 95.57 | 94.76 |
| Transformer Encoder | F | 98.07 | 96.06 | 96.39 | 96.41 | 95.66 | 96.32 | 95.57 | 97.08 | 96.45 |
| Transformer Encoder | OOV | 73.75 | 73.05 | 72.82 | 82.82 | 79.05 | 83.72 | 71.81 | 77.95 | 76.87 |
| **Multi-Criteria Models** | | | | | | | | | | |
| BiLSTM (Chen et al., 2017) | F | 96.04 | 94.64 | 94.32 | 96.18 | 94.26 | 95.55 | 92.83 | 96.04 | 94.98 |
| Switch-LSTMs (Gong et al., 2018) | F | 97.78 | 95.22 | 96.15 | 97.26 | 94.99 | 96.22 | 94.12 | 97.25 | 96.12 |
| Unified BiLSTM (He et al., 2019) | F | 97.2 | 95.4 | 96.0 | 96.7 | - | 96.1 | - | 96.4 | - |
| Our Unified Model | F | 98.05 | 96.44 | 96.41 | 96.99 | 96.51 | 96.91 | 96.04 | 97.61 | 96.87 |
| Our Unified Model | OOV | 78.92 | 76.39 | 78.91 | 87 | 82.89 | 86.91 | 79.3 | 85.08 | 81.92 |
+ +Table 4: Overall results on eight CWS datasets. F and OOV indicate the $F1$ score and OOV recall, respectively. The upper block consists of single-criterion models. Since Stacked BiLSTM (Ma et al., 2018) is a strong SOTA model, the other comparable CWS models are omitted for brevity. The lower block consists of multi-criteria models. + +
| Models | MSRA | AS | PKU | CTB | CKIP | CITYU | NCC | SXU | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Unified Model | 98.05 | 96.44 | 96.41 | 96.99 | 96.51 | 96.91 | 96.04 | 97.61 | 96.87 |
| w/o CRF | 98.02 | 96.42 | 96.41 | 96.9 | 96.59 | 96.87 | 95.96 | 97.5 | 96.83 |
| w/o bigram | 97.41 | 96 | 96.25 | 96.71 | 96 | 96.31 | 94.62 | 96.84 | 96.27 |
| w/o pre-trained emb. | 97.51 | 96.06 | 96.02 | 96.47 | 96.22 | 95.99 | 94.82 | 96.76 | 96.23 |
Table 5: Ablation experiments.

that each criterion is different from the others. Among them, MSRA is especially distinct. A possible reason is that named entities are regarded as whole words in the MSRA criterion, which clearly distinguishes it from the other criteria.

![](images/23c4446c1d619ddc4e5036c79a9d62df689705596a28208898d464d69f1271c9.jpg)
Figure 4: Visualization of the criterion embeddings.

# 4.2 Ablation Study

Table 5 shows the effectiveness of each component in our model.

The first ablation study is to verify the effectiveness of the CRF decoder, which is popular in most CWS models. The comparison between the first two lines indicates that using CRF or not makes little difference. Since a model with CRF takes longer to train and run inference, we suggest not using CRF in Transformer encoder models in practice.

The other two ablation studies are to evaluate the effect of the bigram feature and pre-trained embeddings. We can see that their effects vary across datasets. Some datasets are more sensitive to the bigram feature, while others are more sensitive to pre-trained embeddings. In terms of average performance, the bigram feature and pre-trained embeddings are important and boost the performance considerably, but neither component is a clear winner.

# 4.3 Joint Training on Both Simplified and Traditional Corpora

In the above experiments, the traditional Chinese corpora (AS, CITYU, and CKIP) are translated into simplified Chinese. However, it might be more attractive to jointly train a unified model directly on the mixed corpora of simplified and traditional Chinese without translation. As a reference, a single model has been used to translate between multiple languages in the field of machine translation (Johnson et al., 2017).
+ +To thoroughly investigate the feasibility of this idea, we study four different settings to train our model on simplified and traditional Chinese corpora. + +1. The first setting ("8Simp") is to translate all + +
| Models | MSRA | AS | PKU | CTB | CKIP | CITYU | NCC | SXU | Avg. F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 8Simp | 98.05 | 96.44 | 96.41 | 96.99 | 96.51 | 96.91 | 96.04 | 97.61 | 96.87 |
| 8Trad | 97.98 | 96.39 | 96.49 | 96.99 | 96.49 | 96.86 | 95.98 | 97.48 | 96.83 |
| 5Simp, 3Trad | 98.03 | 96.52 | 96.6 | 96.94 | 96.38 | 96.8 | 96.02 | 97.55 | 96.86 |
| 8Simp, 8Trad | 98.04 | 96.41 | 96.43 | 96.99 | 96.54 | 96.85 | 96.08 | 97.52 | 96.86 |
Table 6: Joint training on both the simplified and traditional Chinese corpora.

| 苹果 (apple) | 蘋果 (apple) | 爱好 (hobby) | 愛好 (hobby) | 担心 (worry) | 擔心 (worry) |
| --- | --- | --- | --- | --- | --- |
| 坚果 (nut) | 微軟 (Microsoft) | 热爱 (love) | 熱愛 (love) | 关心 (care) | 關心 (care) |
| 谷歌 (Google) | 黃油 (butter) | 兴趣 (interest) | 爱好 (hobby) | 怀疑 (doubt) | 顧慮 (misgiving) |
| 华为 (Huawei) | 現貨 (goods in stock) | 爱好 (hobby) | 興趣 (interest) | 顾虑 (misgiving) | 懷疑 (doubt) |
| 黃油 (butter) | 果凍 (jelly) | 梦想 (dream) | 夢想 (dream) | 担忧 (concern) | 擔憂 (concern) |
| 鮮果 (fresh fruit) | 京東 (JD) | 愛玩 (playful) | 愛玩 (playful) | 责怪 (blame) | 憂慮 (anxiety) |
| 微软 (Microsoft) | 賣家 (seller) | 痴迷 (addict) | 喜愛 (adore) | 伤心 (sad) | 責怪 (blame) |
| 诺基 (Nokia) | 苹果 (apple) | 乐趣 (pleasure) | 習慣 (habit) | 嫌弃 (disfavour) | 傷心 (sad) |
| 蘋果 (apple) | 售後 (after-sales) | 喜爱 (adore) | 樂趣 (pleasure) | 忧虑 (anxiety) | 担心 (worry) |
Table 7: Qualitative analysis for the joint embedding space of simplified and traditional Chinese. Given the target bigram, we list its top 8 similar bigrams. Bigrams shown in red in the original paper are in traditional Chinese.

the corpora into simplified Chinese. For the pre-trained embeddings, we use the simplified Chinese Wikipedia dump to pre-train the unigram and bigram embeddings. This is the same setting as in the previous experiments.

2. The second setting ("8Trad") is to translate all the corpora into traditional Chinese. For the pre-trained embeddings, we first convert the Wikipedia dump into traditional Chinese characters, then we use this converted corpus to pre-train unigram and bigram embeddings.
3. The third setting ("5Simp, 3Trad") is to keep the original characters of the five simplified Chinese corpora and three traditional Chinese corpora without translation. The unified model can take simplified or traditional Chinese sentences as input. In this way, we pre-train the unigram and bigram embeddings in a joint simplified and traditional Chinese embedding space. We merge the Wikipedia corpora used in "8Trad" and "8Simp" to form a mixed corpus, which contains both simplified and traditional Chinese characters. The unigram and bigram embeddings are pre-trained on this mixed corpus.
4. The last setting ("8Simp, 8Trad") is to simultaneously train our model on both the eight simplified Chinese corpora in "8Simp" and the eight traditional Chinese corpora in "8Trad". The pre-trained word embeddings are the same as in "5Simp, 3Trad".

Table 6 shows that there is little difference among the settings. This indicates that it is feasible to train a unified model directly on the two kinds of Chinese characters.

To better understand the quality of the learned joint embedding space of simplified and traditional Chinese, we conduct a qualitative analysis to illustrate the most similar bigrams for a target bigram.
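This qualitative analysis amounts to a cosine nearest-neighbor search over the bigram embeddings; a toy sketch (the vectors below merely stand in for the learned embeddings):

```python
import numpy as np

def most_similar(target, emb, k=3):
    """Return the k bigrams whose embeddings have the highest cosine similarity to target."""
    v = emb[target]
    v = v / np.linalg.norm(v)
    scores = {
        b: float(w @ v / np.linalg.norm(w))
        for b, w in emb.items() if b != target
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy joint embedding space: a simplified/traditional pair gets nearby vectors.
emb = {
    "苹果": np.array([1.0, 0.1, 0.0]),
    "蘋果": np.array([0.9, 0.2, 0.0]),
    "微软": np.array([0.7, 0.6, 0.1]),
    "梦想": np.array([0.0, 0.1, 1.0]),
}
print(most_similar("苹果", emb))   # ['蘋果', '微软', '梦想']
```

In a well-aligned joint space, the top neighbor of a simplified bigram is typically its traditional counterpart, as Table 7 shows for the real embeddings.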
Similar bigrams are retrieved based on the cosine similarity calculated using the learned embeddings. As shown in Table 7, the traditional Chinese bigrams are similar to their simplified Chinese counterparts, and vice versa. The results show that the simplified and traditional Chinese bigrams are aligned well in the joint embedding space.

# 4.4 Transfer Capability

Since, except for the criterion embedding, all other parts of the unified model are shared across criteria, we want to explore whether a trained unified model can be transferred to a new criterion only by learning a new criterion embedding with few examples.

We use the leave-one-out strategy to evaluate the transfer capability of our unified model. We first train a model on seven datasets, then learn only the new criterion embedding with a few training instances from the held-out dataset. This scenario is also discussed in (Gong et al., 2018), and Figure 5 presents their and our outcomes (averaged $F1$ score). There are two observations. Firstly, for every number of samples, the transferred model largely outperforms the models learned from scratch. We believe this indicates that learning a new criterion embedding is an effective way to transfer a trained unified model to a new criterion. Secondly, our model also has better transferability than Switch-LSTMs (Ours $(trans)$ versus Switch-LSTMs $(trans)$).

![](images/707006c3da5bd0de0c45ff5d343d7f115e2e7d0766fd237f1ccaa6d0e47a0278.jpg)
Figure 5: Evaluation of the transfer capability. Switch-LSTMs and Ours are models trained on the given instances from scratch. Switch-LSTMs $(trans)$ and Ours $(trans)$ are models learned in transfer fashion.

# 5 Related Work

The previous work on MCCWS can be categorized into two lines.

One line is multi-task-based MCCWS. Chen et al.
(2017) proposed a multi-criteria learning framework for CWS, which uses a shared layer to extract the common underlying features and a private layer for each criterion to extract criteria-specific features. Huang et al. (2019) proposed a domain-adaptive segmenter to capture diverse criteria based on Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018).

Another line is unified MCCWS. Gong et al. (2018) presented Switch-LSTMs to segment sentences, which consist of several LSTM layers and use a criterion switcher at every position to change the routing among these LSTMs automatically. However, the complexity of the model makes Switch-LSTMs hard to apply in practice. He et al. (2019) used a shared BiLSTM by adding two artificial tokens at the beginning and end of an input sentence to specify the output criterion. However, due to the long-range dependency problem, it is hard for BiLSTM to carry the criterion information to each character in a long sentence.

Compared to the above two unified models, we use the Transformer encoder in our unified model, which can elegantly model the criterion-aware context representation for each character. With the Transformer, we just need a special criterion-token to specify the output criterion. Each character can directly attend to the criterion-token to be aware of the target criterion. Thus, we can use a single model to produce different segmented results for different criteria. Different from (Huang et al., 2019), which uses the pre-trained Transformer BERT and several extra projection layers for different criteria, our model is fully shared and more concise.

# 6 Conclusion and Future Work

We propose a concise unified model for MCCWS, which uses the Transformer encoder to extract the criterion-aware representation according to a unique criterion-token. Experiments on eight corpora show that our proposed model outperforms the previous models and has a stronger transfer capability.
The conciseness of our model makes it easy to apply in practice.

In this work, we only adopt the vanilla Transformer encoder, since we just want to utilize its self-attention mechanism to neatly model the criterion-aware context representation for each character. Therefore, it is promising for future work to look for a more effective adapted Transformer encoder for the CWS task or to utilize pre-trained models (Qiu et al., 2020), such as BERT-based MCCWS (Ke et al., 2020). Besides, we are also planning to incorporate other sequence labeling tasks into the unified model, such as POS tagging and named entity recognition.

# Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 62022027 and 61976056), Science and Technology on Parallel and Distributed Processing Laboratory (PDL).

# References

Lei Jimmy Ba, Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. CoRR, abs/1607.06450.
Xinchi Chen, Xipeng Qiu, Chenxi Zhu, and Xuanjing Huang. 2015a. Gated recursive neural network for Chinese word segmentation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.
Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, and Xuanjing Huang. 2015b. Long Short-Term Memory Neural Networks for Chinese Word Segmentation. In EMNLP, pages 1197-1206.
Xinchi Chen, Zhan Shi, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-criteria learning for Chinese word segmentation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1193-1203.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.
T. Emerson. 2005. The second international Chinese word segmentation bakeoff. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, pages 123-133. Jeju Island, Korea.
Jingjing Gong, Xinchi Chen, Tao Gui, and Xipeng Qiu. 2018. Switch-LSTMs for multi-criteria Chinese word segmentation. arXiv preprint arXiv:1812.08033.
Han He, Lei Wu, Hua Yan, Zhimin Gao, Yi Feng, and George Townsend. 2019. Effective neural solution for multi-criteria word segmentation. In Smart Intelligent Computing and Applications, pages 133-142. Springer.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.
Weipeng Huang, Xingyi Cheng, Kunlong Chen, Taifeng Wang, and Wei Chu. 2019. Toward fast and accurate neural Chinese word segmentation with multi-criteria learning. arXiv preprint arXiv:1903.04190.
G. Jin and X. Chen. 2008. The fourth international Chinese language processing bakeoff: Chinese word segmentation, named entity recognition and Chinese POS tagging. In Sixth SIGHAN Workshop on Chinese Language Processing, page 69.
Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, et al. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.
Zhen Ke, Liang Shi, Erli Meng, Bin Wang, Xipeng Qiu, and Xuanjing Huang. 2020. Unified multi-criteria Chinese word segmentation with BERT. arXiv preprint arXiv:2004.05808.
Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Shuhei Kurita, Daisuke Kawahara, and Sadao Kurohashi. 2017. Neural joint model for transition-based Chinese syntactic analysis. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1204-1214.
John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data.
In Proceedings of the Eighteenth International Conference on Machine Learning.
Wang Ling, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015. Two/too simple adaptations of word2vec for syntax problems. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1299-1304.
Yijia Liu, Wanxiang Che, Jiang Guo, Bing Qin, and Ting Liu. 2016. Exploring segment representations for neural segmentation models. arXiv preprint arXiv:1604.05499.
Ji Ma, Kuzman Ganchev, and David Weiss. 2018. State-of-the-art Chinese word segmentation with BiLSTMs. arXiv preprint arXiv:1808.06511.
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained models for natural language processing: A survey. SCIENCE CHINA Technological Sciences.
Nuo Qun, Hang Yan, Xipeng Qiu, and Xuanjing Huang. 2020. Chinese word segmentation via BiLSTM+Semi-CRF with relay node. Journal of Computer Science and Technology, 35(5):1115-1126.
Yan Shao, Christian Hardmeier, Jörg Tiedemann, and Joakim Nivre. 2017. Character-based joint segmentation and POS tagging for Chinese using bidirectional RNN-CRF. arXiv preprint arXiv:1704.01314.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
Chunqi Wang and Bo Xu. 2017. Convolutional neural network with word embeddings for Chinese word segmentation. arXiv preprint arXiv:1711.04411.
Jingjing Xu and Xu Sun. 2016. Dependency-based gated recursive neural network for Chinese word segmentation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 567-572.
Jie Yang, Yue Zhang, and Shuailong Liang. 2018. Subword encoding in lattice LSTM for Chinese word segmentation. arXiv preprint arXiv:1810.12594.
+ +Meishan Zhang, Nan Yu, and Guohong Fu. 2018. A simple and effective neural model for joint word segmentation and POS tagging. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), 26(9):1528-1538. \ No newline at end of file diff --git a/aconcisemodelformulticriteriachinesewordsegmentationwithtransformerencoder/images.zip b/aconcisemodelformulticriteriachinesewordsegmentationwithtransformerencoder/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..50715ae04a08912aecf71ef6e8b53294ec7ef2a9 --- /dev/null +++ b/aconcisemodelformulticriteriachinesewordsegmentationwithtransformerencoder/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:93800a298a78b043dcfe77d766894558d94056c250655e6d5a5ca34f2e77d885 +size 547906 diff --git a/aconcisemodelformulticriteriachinesewordsegmentationwithtransformerencoder/layout.json b/aconcisemodelformulticriteriachinesewordsegmentationwithtransformerencoder/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..29d552cb300e5a587e2edef8a82ac1c9b5202bff --- /dev/null +++ b/aconcisemodelformulticriteriachinesewordsegmentationwithtransformerencoder/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:48a1a48f38ad336bc0716b62bfc97892edbdd5c608d30d8962dd74c3fa54e5f4 +size 395808 diff --git a/activelearningapproachestoenhancingneuralmachinetranslation/80b307df-133a-4edf-b209-48832d1b757e_content_list.json b/activelearningapproachestoenhancingneuralmachinetranslation/80b307df-133a-4edf-b209-48832d1b757e_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..b91c68ff4a9d58f3b541257623482563279dbe5d --- /dev/null +++ b/activelearningapproachestoenhancingneuralmachinetranslation/80b307df-133a-4edf-b209-48832d1b757e_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a7fdb1c1200a9ee59ba9cabbc0ff0db130278764c8581549586e06c235811ca7 +size 77426 diff 
--git a/activelearningapproachestoenhancingneuralmachinetranslation/80b307df-133a-4edf-b209-48832d1b757e_model.json b/activelearningapproachestoenhancingneuralmachinetranslation/80b307df-133a-4edf-b209-48832d1b757e_model.json new file mode 100644 index 0000000000000000000000000000000000000000..a7f94965b6b68cbe9f2f562cad33abd14195bf3b --- /dev/null +++ b/activelearningapproachestoenhancingneuralmachinetranslation/80b307df-133a-4edf-b209-48832d1b757e_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:90eb7a7668413ef2d57d5df1231cf5af6ab60863b5312851d28f82fe560542c9 +size 96427 diff --git a/activelearningapproachestoenhancingneuralmachinetranslation/80b307df-133a-4edf-b209-48832d1b757e_origin.pdf b/activelearningapproachestoenhancingneuralmachinetranslation/80b307df-133a-4edf-b209-48832d1b757e_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5eff67235755ad777e33cf622057fe47a27e11e9 --- /dev/null +++ b/activelearningapproachestoenhancingneuralmachinetranslation/80b307df-133a-4edf-b209-48832d1b757e_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:76c611dbd86758850f5a242df4ad56b00dff8439342e8ae089f32a9714d23057 +size 638656 diff --git a/activelearningapproachestoenhancingneuralmachinetranslation/full.md b/activelearningapproachestoenhancingneuralmachinetranslation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..24280844635c884ef0765095c0afd0cd38b4d7ff --- /dev/null +++ b/activelearningapproachestoenhancingneuralmachinetranslation/full.md @@ -0,0 +1,377 @@ +# Active Learning Approaches to Enhancing Neural Machine Translation + +Yuekai Zhao $^{1}$ Haoran Zhang $^{1}$ Shuchang Zhou $^{2}$ Zhihua Zhang $^{3}$ + +1 Academy for Advanced Interdisciplinary Studies, Peking University + +2 Megvii Inc. 
$^{3}$ School of Mathematical Sciences, Peking University

{yuekaizhao, haoran_zhang}@pku.edu.cn

zsc@megvii.com

zhzhang@math.pku.edu.cn

# Abstract

Active learning is an efficient approach for mitigating data dependency when training neural machine translation (NMT) models. In this paper we explore new training frameworks by incorporating active learning into various techniques such as transfer learning and iterative back-translation (IBT) under a limited human translation budget. We design a word frequency based acquisition function and combine it with a strong uncertainty based method. The combined method steadily outperforms all other acquisition functions in various scenarios. As far as we know, we are the first to do a large-scale study on actively training Transformer (Vaswani et al., 2017) for NMT. Specifically, with a human translation budget of only $20\%$ of the original parallel corpus, we manage to surpass Transformer trained on the entire parallel corpus in three language pairs.

# 1 Introduction

Impressive progress has been made in neural machine translation (NMT) in the past few years (Luong et al., 2015; Gehring et al., 2017; Vaswani et al., 2017; Wu et al., 2019). However, the general training procedure requires tremendous amounts of high-quality parallel data to achieve a deep model's full potential. The scarcity of training data is a common problem for many language pairs and often leads to poor NMT performance.

Moreover, constructing a parallel corpus is a slow and laborious process. Professional human translators and well-trained proofreaders are needed. Although several dual learning (He et al., 2016; Bi et al., 2019) and unsupervised learning (Artetxe et al., 2018; Lample et al., 2017; Lample and Conneau, 2019) approaches have been successfully used, they are often inferior to the supervised models. In such cases, active learning might be a good
The goal of active learning in NMT is to train a well-performing model under a limited human translation budget. We achieve this goal by using some particularly designed acquisition functions to select informative sentences to construct a training corpus. + +Acquisition functions can be categorized into two types: model related and model agnostic. For the former, the methods we use are all based on the idea of uncertainty. For the latter, we devise a word frequency based method which takes linguistic features into consideration. Both types of acquisition functions have been proven to be beneficial in active NMT training, especially when they are appropriately combined. + +Data augmentation techniques that consume no human translation budget are worth exploring in active NMT training. If the parallel corpus of a related language pair is available, transfer learning (Zoph et al., 2016; Kim et al., 2019) might be a good choice. Otherwise, we propose a new training framework that integrates active learning with iterative back-translation (IBT) (Hoang et al., 2018). We achieve success in both the settings, especially when active learning bonds with IBT. + +The main contributions of this work are listed as follows: 1) To the best of our knowledge, we are the first to give a comprehensive study of active learning in NMT under various settings. 2) We propose a word frequency based acquisition function which is model agnostic and effective. This acquisition function can further enhance existing uncertainty based methods, achieving even better results in all settings. 3) We design a new training framework for active iterative back-translation as well as a simple data augmentation technique. With a human translation budget of only $20\%$ of the original parallel corpus, we can achieve better BLEU scores than the fully supervised Transformer does (Vaswani et al., 2017). 
![](images/1cc03f24469976d3053accca75dff9da82c9abb053ad7ad5325a2ca1c370b384.jpg)
Figure 1: (a) shows the diagram of vanilla supervised NMT training. A parallel corpus is available and used to train the model. (b) shows active NMT training. An acquisition function can use the model to score each sentence in the source side monolingual corpus. A parallel corpus is gradually constructed by employing an oracle (human translator) to translate the sentences with high scores. (c) shows active iterative back-translation. An acquisition function can use $Model_{A \to B}$ to score the untranslated sentences in language A. One part of the high score sentences is translated by an oracle (new parallel corpus); another part is translated by $Model_{A \to B}$ (new synthetic corpus). The new parallel corpus and new synthetic corpus are used for training $Model_{B \to A}$, and vice versa.

# 2 Related Work

Active learning As for natural language processing, active learning is well studied in text classification (Zhang et al., 2017; Ru et al., 2020) and named entity recognition (Shen et al., 2017; Siddhant and Lipton, 2018; Prabhu et al., 2019). Peris and Casacuberta (2018) applied attention based acquisition functions for NMT. Liu et al. (2018) introduced reinforcement learning to actively train an NMT model.

Data selection in NMT Although active learning has not been thoroughly studied in NMT, the related data selection problem attracts some attention. van der Wees et al. (2017); Wang et al. (2018a) deliberately designed weighted sampling methods, which accelerate training and improve performance. Wang et al. (2018b); Pham et al. (2018) focused on noisy data, coming up with algorithms to filter harmful sentence pairs. Wang et al. (2019) simultaneously dealt with domain data selection and clean data selection. Fadaee and Monz (2018); Poncelas et al. (2019); Dou et al. (2020) considered domain data selection in back-translation.
Wang and Neubig (2019) proposed a method to select relevant sentences from other languages to bring performance gains in low resource NMT. Furthermore, Ruiter et al. (2019) tried to extract possible parallel data from bilingual Wikipedia.

Interactive NMT Interactive NMT exploits user feedback to help improve translation systems. Real-world (Kreutzer et al., 2018) or simulated user feedback includes highlighting accurate translation chunks (Petrushkov et al., 2018) or correcting errors made by the machine (Peris and Casacuberta, 2018; Domingo et al., 2019). Kreutzer and Riezler (2019) took the cost of different types of supervision (feedback) into account, which resembles the idea of active learning.

# 3 Methodology

We give a detailed description of active neural machine translation (NMT) in this section. Basic settings and some terminologies are introduced in Section 3.1. In Section 3.2 and Section 3.3, various acquisition functions are presented and explained. Section 3.4 deals with combining active learning with transfer learning and iterative back-translation. Figure 1 is an illustration of different training frameworks in NMT.

# 3.1 Active NMT

Several terminologies need to be clarified before introducing the active NMT training loop, namely, acquisition function, oracle and budget.

Acquisition Function An acquisition function gives a score to each untranslated sentence in the monolingual corpus. Sentences with higher scores are more likely to be selected for the training corpus. Acquisition functions fall into two types, model related and model agnostic. A model related acquisition function takes a sentence as the model input and gives a score depending on the model output. A model agnostic acquisition function often concerns the informativeness of the sentence itself, and can score each sentence before training the model.

Oracle An oracle is a gold standard for a machine learning task.
For NMT, an oracle (typically an expert human translator) can output the ground-truth translation of a given source sentence. A parallel corpus is gradually constructed by employing an oracle to translate the selected sentences.

Budget Budget means the total cost one can afford for employing an oracle. For NMT, we need to hire human experts to translate sentences. To simulate active NMT training, throughout all our experiments the cost is the number of words translated.

In the beginning, we have a large-scale monolingual corpus of the source language. We do several rounds of active training until the total budget is used up. In each round, five steps are taken:

- Use an acquisition function to score each untranslated sentence.
- Sort the untranslated sentences according to the scores in descending order.
- Select high score untranslated sentences until the token budget in this round is used up.
- Remove the selected sentences from the monolingual corpus and employ an oracle to translate them.
- Add these new sentence pairs to the parallel corpus and retrain the NMT model.

We use Transformer throughout our experiments. As this architecture is commonly used and our implementation differs little from the original, we skip an exhaustive background description of the underlying model; one can refer to Vaswani et al. (2017) for details. The active NMT training loop is shown in part (b) of Figure 1.

# 3.2 Model Related Acquisition Functions

All model related acquisition functions we try are based on uncertainty. Settles and Craven (2008) tried these methods on sequence labeling tasks. For NMT, we use greedy decoding to generate a synthetic translation of each sentence $x = (x_{1},\dots ,x_{n})$ in the monolingual corpus $U$. We denote this synthetic translation as $\hat{y} = (\hat{y}_1,\dots ,\hat{y}_m)$.
In the $i^{\mathrm{th}}$ decoding step, the model outputs a probability distribution over the entire vocabulary, $P_{\theta}(\cdot \mid x,\hat{y}_{< i})$.

Least Confident (lc) A direct interpretation of model uncertainty is the average confidence level on the generated translation. We strengthen the model on its weaknesses and force it to learn more from intrinsically hard sentences.

$$
\frac{1}{m}\sum_{i=1}^{m}\left[1 - P_{\theta}\left(\hat{y}_{i} \mid x, \hat{y}_{<i}\right)\right] \tag{1}
$$

Minimum Margin (margin) Margin means the average probability gap between the model's most confident word $y_{i,1}^{*}$ and second most confident word $y_{i,2}^{*}$ in each decoding step. With a small margin, the model is unable to distinguish the best translation from an inferior one.

$$
-\frac{1}{m}\sum_{i=1}^{m}\left[P_{\theta}\left(y_{i,1}^{*} \mid x, \hat{y}_{<i}\right) - P_{\theta}\left(y_{i,2}^{*} \mid x, \hat{y}_{<i}\right)\right] \tag{2}
$$

Token Entropy (te) Concentrated distributions tend to have low entropy. Entropy is also an appropriate measurement of uncertainty. In NMT, we calculate the average entropy over decoding steps, as given by the following equation.

$$
\frac{1}{m}\sum_{i=1}^{m}\mathrm{entropy}\left(P_{\theta}(\cdot \mid x, \hat{y}_{<i})\right) \tag{3}
$$

Total Token Entropy (tte) To avoid favoring long sentences, we average over sentence length in the above three methods. However, it remains a question whether querying long sentences should be discouraged. We design an acquisition function to examine this issue by removing the $\frac{1}{m}$ term from Token Entropy.

# 3.3 Model Agnostic Acquisition Functions

Uncertainty based acquisition functions depend purely on probability. We propose a model agnostic acquisition function that focuses on linguistic features.
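For concreteness, the four uncertainty scores of Section 3.2 can be computed directly from the per-step output distributions. The sketch below is illustrative only (`uncertainty_scores` is not the authors' code); each distribution is a plain list of probabilities.

```python
import math

def uncertainty_scores(step_probs):
    """step_probs: one probability distribution over the vocabulary per
    greedy decoding step. Returns lc, margin, te and tte (Eqs. (1)-(3))."""
    m = len(step_probs)
    # Eq. (1): greedy decoding picks the argmax token, so P(y_hat_i) = max(p)
    lc = sum(1.0 - max(p) for p in step_probs) / m
    # Eq. (2): negated average gap between the two most confident words
    margin = -sum(sorted(p)[-1] - sorted(p)[-2] for p in step_probs) / m
    # Eq. (3): average Shannon entropy of the per-step distributions
    ent = lambda p: -sum(q * math.log(q) for q in p if q > 0)
    te = sum(ent(p) for p in step_probs) / m
    tte = m * te  # total token entropy: drop the 1/m normalization
    return {"lc": lc, "margin": margin, "te": te, "tte": tte}
```

Under all four scores, higher values mark sentences on which the model is less certain and which are therefore queried first.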
In NMT, it is important to enable the model to translate unseen future sentences. In other words, we wish to choose sentences that are representative of all the untranslated sentences but less similar to what has previously been selected.

Algorithm 1 Decay Logarithm Frequency Acquisition Function
Input: Selected Corpus $L$; Untranslated Corpus $U$; Token Budget $b$; Positive Constants $\lambda_{1},\lambda_{2}$
Output: New Selected Sentences $B$
1: $B = \emptyset$; $\hat{U} = \emptyset$
2: for $s$ in $U$ do
3: calculate $lf(s)$ by Equation (6)
4: end for
5: for $s$ in sort($U$) by $lf$ score do
6: calculate $delfy(s)$ by Equation (7)
7: $\hat{U} = \hat{U}\cup \{s\}$
8: end for
9: for $s$ in sort($U$) by $delfy$ score do
10: if $\mathrm{Cost}(B\cup \{s\}) > b$ then
11: break
12: end if
13: $B = B\cup \{s\}$
14: end for

In each active training round, we have a set of untranslated sentences on the source language side, denoted as $U$. The sentences selected in previous active training rounds are denoted as $L$. We denote a sentence as $s = (s_1, \dots, s_K)$, which differs from the notation in Section 3.2 because we are now working at the word level instead of the subword level. First, we define the logarithm frequency of a word $w$ in $U$, namely $F(w|U)$:

$$
G(w \mid U) = \log (C(w \mid U) + 1) \tag{4}
$$

$$
F(w \mid U) = \frac{G(w \mid U)}{\sum_{w^{\prime} \in U} G\left(w^{\prime} \mid U\right)} \tag{5}
$$

where $C(w \mid \cdot)$ means the number of occurrences of word $w$ in a certain sentence set.

As shown in Equation (6), the representativeness of a sentence $s$ is determined by its average logarithm word frequency in $U$. A decay factor $\lambda_{1} \geq 0$ is introduced to make the model pay more attention to words that are uncommon in the previously selected corpus $L$.
+ +$$ +l f (s) = \frac {\sum_ {i = 1} ^ {K} F \left(s _ {i} \mid U\right) \times e ^ {- \lambda_ {1} C \left(s _ {i} \mid L\right)}}{K} \tag {6} +$$ + +Directly using $lf$ scores is problematic. The algorithm favors a small number of function words (like "a", "the") which account for a high proportion of the entire corpus. Also, redundancy breaks out since sentences of similar content share similar scores. These two drawbacks are disastrous for building a well-performing translation system. + +A gradual reranking is used to ease these two problems. Equation (6) is employed for the first round of sorting. $\hat{U} (s)$ is the set of all sentences that have a higher $lf$ score than $s$ . If $s$ has a high $lf$ score, but each word $s_i$ in $s$ frequently appears in $\hat{U} (s)$ , we use a decay term $e^{-\lambda_2C(s_i|\hat{U}(s))}$ to cut down its score. In this way, we tend to discard repetitive sentences and filter out insignificant function words. Details can be found in Equations (7) and (8). $\lambda_{1}$ and $\lambda_{2}$ are non-negative constants. + +$$ +d e l f y (s) = \frac {\sum_ {i = 1} ^ {K} F \left(s _ {i} \mid U\right) \times D e c a y \left(s _ {i}\right)}{K} \tag {7} +$$ + +$$ +D e c a y \left(s _ {i}\right) = e ^ {- \lambda_ {1} C \left(s _ {i} \mid L\right)} \times e ^ {- \lambda_ {2} C \left(s _ {i} \mid \hat {U} (s)\right)} \tag {8} +$$ + +We name this model agnostic acquisition function as decay logarithm frequency (delfy) which is summarized in Algorithm 1. + +# 3.4 Active NMT with Data Augmentation + +Directly incorporating active learning into NMT can be beneficial. However, is there any technique that consumes no extra budget to further improve translation performance? The answer depends on the availability of some related parallel corpus. Transferring knowledge from a related language pair can be considered if an extra parallel corpus is available. Iterative back-translation is worth trying if not. 
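Before turning to these augmentation settings, the delfy scoring of Section 3.3 (Equations (4)-(8) and Algorithm 1) can be sketched compactly. This is an illustrative sketch, not the authors' code: `delfy_rank`, `lam1`, and `lam2` are hypothetical names, and the final select-until-budget step is omitted.

```python
import math
from collections import Counter

def delfy_rank(U, L, lam1=1.0, lam2=1.0):
    """Rank untranslated sentences U (lists of words) by delfy;
    L is the corpus selected in previous active rounds."""
    count_U = Counter(w for s in U for w in s)
    G = {w: math.log(c + 1) for w, c in count_U.items()}   # Eq. (4)
    Z = sum(G.values())
    F = {w: g / Z for w, g in G.items()}                   # Eq. (5)
    count_L = Counter(w for s in L for w in s)

    def lf(s):                                             # Eq. (6)
        return sum(F[w] * math.exp(-lam1 * count_L[w]) for w in s) / len(s)

    # Second pass: walk sentences in lf order; count_hat accumulates the
    # word counts of the higher-ranked set U_hat(s) used in Eqs. (7)-(8).
    count_hat, scored = Counter(), []
    for s in sorted(U, key=lf, reverse=True):
        score = sum(F[w] * math.exp(-lam1 * count_L[w])
                         * math.exp(-lam2 * count_hat[w]) for w in s) / len(s)
        scored.append((s, score))
        count_hat.update(s)
    return sorted(scored, key=lambda x: x[1], reverse=True)
```

The second pass is what pushes down repeated sentences and overly frequent function words relative to a plain lf ranking.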
Transfer Learning We assume that there exists a rich parallel corpus in a related translation direction, e.g., we try to build a German-English NMT system and we have access to French-English sentence pairs. The model is initialized by training on this related parallel corpus. Active NMT training is carried out as described in Section 3.1 after model initialization.

Iterative Back-Translation Iterative back-translation (IBT) (Sennrich et al., 2016a; Hoang et al., 2018) proves to be of help in boosting model performance. IBT offers a data augmentation technique that is budget free (no human translator needed) when considering active NMT training. However, simply using the entire monolingual corpus to generate a synthetic parallel corpus hurts rather than improves model performance.

Algorithm 2 The Framework for Active Iterative Back-Translation (IBT)
Input: Active IBT Rounds $R$; Parallel Corpus $L = \{L_A,L_B\}$; Monolingual Corpus $U_{A},U_{B}$; Initialized NMT Models $M_{A\rightarrow B}$, $M_{B\rightarrow A}$; Acquisition Function $\Phi$; Token Budget $b$; Oracle $O$; Token Number in Synthetic Sentences $\alpha$
Output: $M_{A\to B}$, $M_{B\to A}$
1: for $i$ in 1 to $R$ do
2: $\vec{A_i} = \Phi (U_A,L_A,M_{A\rightarrow B},b)$
3: $\vec{B_i} = O(\vec{A_i})$; $U_{A} = U_{A}\setminus \vec{A_i}$
4: $\vec{P_i} = \Phi (U_A,L_A,M_{A\rightarrow B},\alpha)$
5: $\vec{Q_i} = M_{A\rightarrow B}(\vec{P_i})$
6: $L_{A} = L_{A}\cup \vec{A_i}$; $L_{B} = L_{B}\cup \vec{B_i}$
7: Train $M_{B\rightarrow A}$ on $\{(L_B\cup \vec{Q_i}),(L_A\cup \vec{P_i})\}$
8: $\vec{B_i} = \Phi (U_B,L_B,M_{B\rightarrow A},b)$
9: $\vec{A_i} = O(\vec{B_i})$; $U_{B} = U_{B}\setminus \vec{B_i}$
10: $\vec{Q_i} = \Phi (U_B,L_B,M_{B\rightarrow A},\alpha)$
11: $\vec{P_i} = M_{B\rightarrow A}(\vec{Q_i})$
12: $L_{A} = L_{A}\cup \vec{A_i}$; $L_{B} = L_{B}\cup \vec{B_i}$
13: Train $M_{A\rightarrow B}$ on $\{(L_A\cup \vec{P_i}),(L_B\cup \vec{Q_i})\}$
14: end for
We design some experiments to validate this argument; detailed results can be seen in Appendix B.

Two reasons may cause these poor results. First, the quality of the synthetic corpus varies. Some of the synthetic sentence pairs can be beneficial, while others only introduce chaos into the NMT model. Second, the percentage of the synthetic corpus in the entire training corpus is too high. To cope with these two problems, we propose a new Active IBT framework. Models of opposite translation directions are responsible for constructing the training corpus for each other. Sentences with the highest acquisition function scores are divided into two parts. One part is translated by an oracle to enrich the parallel corpus. The other part is used to generate a new synthetic corpus. In this way, we manage to control the quality as well as the percentage of the synthetic corpus.

This framework is shown in part (c) of Figure 1, and some details can be found in Algorithm 2.

Algorithm 3 Active IBT++ (LAN A to LAN B)
Input: Active IBT Rounds $R$; Merge Numbers $k_{1}$, $k_{2}$; Final Parallel Corpus $L^{++} = \{L_{A}, L_{B}\}$; $M_{A \rightarrow B, i}$, $M_{B \rightarrow A, i}$, $i \in \{1, 2, \dots, R\}$; Synthetic Corpus $\overleftarrow{P_{i}}$, $\overleftarrow{Q_{i}}$, $i \in \{1, 2, \dots, R\}$
Output: $M_{A \rightarrow B}$
1: for $j$ in 1 to $k_{1}$ do
2: $\tilde{L}_{A,j} = M_{B \rightarrow A, R - j + 1}(L_{B})$
3: $\tilde{L}_{B,j} = M_{A \rightarrow B, R - j + 1}(L_{A})$
4: $L^{++} = L^{++} \cup \{\tilde{L}_{A,j}, L_{B}\} \cup \{L_{A}, \tilde{L}_{B,j}\}$
5: end for
6: for $j$ in 1 to $k_{2}$ do
7: $L^{++} = L^{++} \cup \{\overleftarrow{P}_{R - j + 1}, \overleftarrow{Q}_{R - j + 1}\}$
8: end for
9: $M_{A \rightarrow B} = \text{Retrain } M_{A \rightarrow B, 1}$ on $L^{++}$

Active IBT++ Active learning aims at choosing informative sentences to train the model. Is there any way to exploit more value from these selected sentences?
Inspired by Nguyen et al. (2019), we propose some further data augmentation techniques applied after Active IBT is done. The models of the last $k_{1}$ rounds are used to translate the final parallel corpus, so that each selected sentence has diversified translations. We merge the diversified parallel corpus with the synthetic corpus of a specific translation direction in the last $k_{2}$ rounds. Duplicate sentence pairs are filtered out. The NMT model is re-initialized and trained on this enlarged training corpus.

We name this technique Active IBT++ and summarize it in Algorithm 3. For simplicity, we only consider one translation direction in Algorithm 3. The same technique can easily be applied in the other translation direction.

# 4 Experiments

# 4.1 Dataset, Preprocessing and Implementation

We experiment on three language pairs, namely, German-English (DE-EN), Russian-English (RU-EN) and Lithuanian-English (LT-EN). To simulate active NMT training, we use the parallel corpus from the WMT 2014 shared task (DE-EN, RU-EN) and the WMT 2019 shared task (LT-EN). For Russian-English, we randomly choose an extra 2M sentence pairs from the UN corpus$^1$. The number of sentence pairs in each language pair is 4M (DE-EN), 4M (RU-EN) and 0.8M (LT-EN). Tokenization is done by Moses$^2$. We employ BPE (Sennrich et al., 2016b) to generate a shared vocabulary for each language pair. The BPE merge operation numbers are 20K (LT-EN) and 32K (DE-EN, RU-EN). For active NMT with or without transfer learning, we only experiment on translating into English. In contrast, for active iterative back-translation (IBT), evaluation is carried out on translating both from and into English. The evaluation metric is BLEU (Papineni et al., 2002).

![](images/1d74d7c9cfe439ac54c38b8836e19f31fbee5186eea9762bc1d1493594b03272.jpg)
(a) news test 2014, DE-EN
![](images/90e931de86fc294fb79f946f53c04285d03e65a437ed1fcf2e8a52ace2413b3c.jpg)
(b) news test 2014, RU-EN
![](images/4ba1d0c4cac661e87d8089994b49947662c255d5df72c0de180ed15547b6eef9.jpg)
(c) news test 2019, LT-EN
Figure 2: Active NMT, BLEU scores on the test dataset.

![](images/89c1dd9cd0c08fe76658587564424f4dc2dc2758013cee1c58b6b6714db06d8e.jpg)
(a) news test 2014, DE-EN
![](images/b12ec0e2ba9edcbfa0d0103936cefb63847f3fd9819f8f82d231c73d475e4836.jpg)
(b) news test 2014, RU-EN
![](images/728612d62a39324b8497ba6a2ec7bd54b749a642000001c1044f157300b06f9e.jpg)
(c) news test 2019, LT-EN
Figure 3: Active NMT with Transfer Learning, BLEU scores on the test dataset.

Model hyperparameters are identical to Transformer base (Vaswani et al., 2017). The Adam optimizer (Kingma and Ba, 2014) is used with a learning rate of $7 \times 10^{-4}$. We use the same learning rate scheduling strategy as Vaswani et al. (2017), with a warmup of 4000 steps. During training, the label smoothing factor and the dropout probability are set to 0.1. $\lambda_{1}$ and $\lambda_{2}$ in Algorithm 1 are both set to 1.0.

Our implementation is based on PyTorch$^3$. All models are trained on 8 RTX 2080Ti GPU cards with a mini-batch of 4096 tokens. We stop training
Our model-agnostic acquisition function (delfy) also outperforms the best uncertainty-based acquisition function. Since delfy and the uncertainty-based functions capture different aspects of the informativeness of a sentence, we try combining delfy with a well-performing uncertainty-based acquisition function, token entropy (te). To avoid the magnitude problem, we add the ranks given by these two acquisition functions rather than their raw scores. For example, if a sentence gets the highest delfy score as well as the second-highest te score, then its delfy rank is 1 and its te rank is 2, so its combined score is $1 + 2 = 3$ . Since we sort sentences in descending order of their scores, we multiply the rank sum by $-1$ . We name this combined acquisition function te-delfy.

![](images/3e7b9fb8124276c81b8fe0c84811dd5b43a3c3a263d4f91ced14e2cb41ecda7f.jpg)
(a) news test 2014, DE-EN

![](images/50bc8290a20fdc8fd3865f7732573ae49b317648901dce6ffdb319ae64b4a5d9.jpg)
(b) news test 2014, RU-EN

![](images/4ea19b3e3997ba80436d8c5007d026874ecc85f6b3c21185c1632a506ce48d5f.jpg)
(c) news test 2019, LT-EN

![](images/ab6c0ff2c24b66937f92a1dc8ef6fa9e05c836d262bafd85b546634ca2caa465.jpg)
(d) news test 2014, EN-DE

![](images/688f8f4c1a94a749d89a2455131d80a929d7fd2d5de26c7df1629b01a36db61a.jpg)
(e) news test 2014, EN-RU

![](images/1cedb14c9d25eab568cd86a1676bec2397e2311674bf092bbfba3b233448a54d.jpg)
(f) news test 2019, EN-LT

Figure 4: Active Iterative Back-Translation, BLEU scores on the test dataset.

Our combined method (te-delfy) proves to be more effective, outperforming all the other acquisition functions in every active NMT training round in all three language pairs. Specifically, in the last active training round, te-delfy surpasses the best uncertainty-based acquisition function by 1.4 BLEU points in DE-EN, 1.6 BLEU points in RU-EN and 1.1 BLEU points in LT-EN.
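The rank combination described above can be sketched in a few lines. This is our illustration, not the paper's released code; the function and variable names are ours.

```python
def te_delfy(te_scores, delfy_scores):
    """Combine two acquisition functions by summing ranks.

    Rank 1 goes to the highest score of each function; the rank sum is
    negated so that sorting candidates in descending order of the
    returned score still selects the most informative sentences first.
    """
    def to_ranks(scores):
        order = sorted(range(len(scores)), key=lambda i: -scores[i])
        ranks = [0] * len(scores)
        for rank, idx in enumerate(order, start=1):
            ranks[idx] = rank
        return ranks

    te_ranks = to_ranks(te_scores)
    delfy_ranks = to_ranks(delfy_scores)
    return [-(t + d) for t, d in zip(te_ranks, delfy_ranks)]
```

For the example in the text, a sentence with delfy rank 1 and te rank 2 receives the score $-(1 + 2) = -3$.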
# 4.3 Active NMT with Transfer Learning

To evaluate different acquisition functions in active NMT with transfer learning, we start from a French-to-English NMT model. The parallel corpus for building this initial model contains 4M sentence pairs randomly selected from the WMT 2014 shared task. To share the vocabulary between different languages, we latinize all the Russian sentences4.

Figure 3 shows the results. All the active acquisition functions remain advantageous compared with the random selection baseline, except total token entropy (tte). Our combined method (te-delfy) is also the best in most active training rounds. Te-delfy yields the best final results, beating the best uncertainty-based acquisition function by 0.5 BLEU points in DE-EN, 0.3 BLEU points in RU-EN and 0.5 BLEU points in LT-EN. However, in active NMT with transfer learning, the performance gains brought by the different acquisition functions are not as large as in active NMT (Section 4.2).

# 4.4 Active Iterative Back-Translation

For active iterative back-translation (IBT), we randomly select $10\%$ of the entire parallel corpus to train an initial NMT model. The initial model is shared across different acquisition functions. We do 10 rounds of Active IBT training. In each round, $1\%$ of the entire parallel corpus is added to the training corpus, so the total token budget remains $20\%$ as in Section 4.2 and Section 4.3. For $\alpha$ in Algorithm 2, we use as much as half of the amount of authentic parallel corpus available in the current Active IBT round. $k_{1}$ and $k_{2}$ in Algorithm 3 are set to 3 and 6 respectively.

Results are summarized in Figure 4. Our combined method (te-delfy) becomes even more powerful than it is in active NMT, leading all the way until the final round in all the experiments. All active acquisition functions we try surpass the random
| Method | Setting | DE→EN | EN→DE | RU→EN | EN→RU | LT→EN | EN→LT |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Transformer Base | Entire Corpus | 32.5 | 27.3 | 33.9 | 36.6 | 24.2 | 20.3 |
| Random | Active IBT | 29.4 | 23.6 | 28.4 | 30.5 | 21.2 | 15.7 |
| Best Uncertainty | Active IBT | 31.5 | 25.5 | 32.1 | 33.9 | 23.0 | 19.5 |
| Delfy (Ours) | Active IBT | 31.3 | 26.1 | 32.0 | 34.4 | 23.6 | 20.0 |
| Te-delfy (Ours) | Active IBT | 31.9 | 26.9 | 33.5 | 36.1 | 23.8 | 20.3 |
| Te-delfy (Ours) | Active IBT++ | 32.8 | 27.4 | 35.0 | 37.4 | 25.4 | 21.3 |
Table 1: Comparison between Active IBT models in the final round, Active IBT++ models and the fully supervised Transformer. The best results are all achieved by te-delfy. The token budget is $20\%$ of the entire parallel corpus.

![](images/5322e734b7eb9af2a967caeb4f5fd90d33cadc47e59fa4f402564a87da535904.jpg)

![](images/3f5bbc2961d8c5491e786c52e025d780fa8cd7101d90dcfb8cc234892d9532cf.jpg)

![](images/ea7e9b5c305ad659bc4ca36b82fb6440831c9e4538a3167bb17dc43a7b6c905c.jpg)

![](images/72411aa119a25bc8bdf58800f3db36d610bf56e599fc241a8b3e16affb602af2.jpg)

![](images/1cf86a669bab62d33faef106d59dd0caca6147a807f7c1d8e0ec60ef829a52dd.jpg)

![](images/54ea9b8a72a5b475d007e064f660ecc2d3cc4878a8ab5d9c6b58e2a9a25809a3.jpg)

![](images/3eb1957cd68480305254b70e370ceaa373cd50c56cf33b9eb05d7a5d58195f19.jpg)

![](images/d51a60fc87613cb83c7036453400b613c62be25d86f342ef6d4e1a977f73e663.jpg)

![](images/f9accb195e960a836e1ddaa466b9c9a1f453da571a064516ac13af10f31caaba.jpg)

![](images/08b51d9eb57bd3b2f1e1f4cfd115a313107749eb22a2e369fca0913bce3de8ed.jpg)

![](images/c26c67a1595d040ec0595f73f17ad84edbfd4918ded517cf8be66e6bca734252.jpg)

![](images/76439ca8fa52e9e29e0eec12a7e441d016db47f4e3a8ee0bf1bdc17815c162fb.jpg)

Figure 5: Text analysis of selected sentences, including average sentence length, vocabulary coverage and MTLD score. Panels: (a) DE-EN, (b) RU-EN, (c) LT-EN.

baseline by a large margin, with a minimum performance gain of 1.1 BLEU points. We argue that synthetic sentence pairs need more sophisticated selection criteria than authentic ones: low-quality pseudo-parallel data can damage rather than help model performance.

We compare the actively learned models against the fully supervised Transformer in Table 1. The best results are all achieved by te-delfy, which further confirms its superiority. Active IBT++ (Algorithm 3) is applied with te-delfy.
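The Active IBT++ corpus construction (Algorithm 3) can be sketched in a few lines for one translation direction. This is a hedged illustration: the function and argument names are ours, `translators` stands for the translation functions of the models from the last $k_{1}$ rounds, and `synthetic_rounds` for the synthetic pair lists of the last $k_{2}$ rounds.

```python
def active_ibt_pp_corpus(selected_src, translators, synthetic_rounds):
    """Sketch of Active IBT++ corpus construction for one direction."""
    corpus = set()
    for translate in translators:        # models from the last k1 rounds
        for src in selected_src:
            corpus.add((src, translate(src)))  # diversified translations
    for pairs in synthetic_rounds:       # synthetic corpora, last k2 rounds
        corpus.update(pairs)             # the set filters out duplicates
    return sorted(corpus)                # train a re-initialized model on this
```

A re-initialized NMT model is then trained on the returned, deduplicated corpus, as in Algorithm 3.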
With a token budget of $20\%$ of the entire parallel corpus, we can surpass the vanilla Transformer in every translation direction. These results show that Active IBT and Active IBT++ are promising approaches for enhancing NMT models.

# 5 Analysis

# 5.1 Linguistic Features

To find the common features of the sentences that are beneficial to translation, we analyze the final parallel corpus constructed by each acquisition function in active NMT from four aspects. All analyses are done at the word level instead of the subword level. First, we study the impact of the average sentence length. Second, we study the vocabulary coverage by calculating the ratio of the vocabulary size of the selected corpus to the total/test vocabulary size. Finally, the lexical diversity of the selected corpus is analyzed with the MTLD metric (McCarthy and Jarvis, 2010). Analyses are done on random selection, the best uncertainty-based method, delfy and te-delfy. The results are shown in Figure 5.

Most algorithms tend to choose medium-length sentences rather than extremely long or short ones. We also try sentence length itself as an acquisition function (choosing the longest or shortest sentences), which proves to perform poorly (Appendix A). Vocabulary coverage varies among acquisition functions, with random selection always being the lowest. Higher vocabulary coverage means fewer unseen words, which may produce a more knowledgeable model. Also, delfy and te-delfy always achieve higher MTLD scores than the other two methods. Note that a higher vocabulary coverage does not necessarily mean a higher diversity score: in LT-EN and RU-EN, delfy always has a larger vocabulary size than te-delfy, but its selected corpus is less diverse. In general, a good acquisition function should favor medium-length sentences and a large vocabulary coverage. Meanwhile, a diversified training corpus is also beneficial to model performance.
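The MTLD diversity metric used in Figure 5 can be sketched as follows. This is a one-directional simplification (the full metric of McCarthy and Jarvis (2010) averages a forward and a backward pass) using the standard 0.72 type-token-ratio threshold; the function name is ours.

```python
def mtld_forward(tokens, ttr_threshold=0.72):
    """Forward-pass MTLD: mean length of token runs keeping TTR >= 0.72."""
    factors, types, count = 0.0, set(), 0
    for tok in tokens:
        types.add(tok)
        count += 1
        if len(types) / count < ttr_threshold:
            factors += 1.0           # a full factor is complete
            types, count = set(), 0  # reset for the next segment
    if count:                        # partial credit for the leftover run
        ttr = len(types) / count
        factors += (1.0 - ttr) / (1.0 - ttr_threshold)
    return len(tokens) / factors if factors else float(len(tokens))
```

A fully repetitive token stream scores low, while all-distinct tokens score the stream's own length, which is why a large vocabulary does not automatically imply a high MTLD.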
| Methods | Easy→Hard | Hard→Easy |
| --- | --- | --- |
| lc | 16.0 | 17.5 |
| margin | 16.3 | 18.3 |
| te | 15.9 | 18.7 |
| tte | 16.1 | 18.6 |
| delfy | 16.9 | 19.1 |
| te-delfy | 16.0 | 19.8 |
Table 2: We validate the necessity of active learning when there is a limited human translation budget. Hard $\rightarrow$ Easy corresponds to active learning; Easy $\rightarrow$ Hard represents reverse active learning. We experiment on EN-LT with a token budget of $20\%$ of the entire parallel corpus. Active learning results are always better than reverse active learning results.

# 5.2 Reverse Active Learning

Active learning chooses difficult samples for the model. In contrast, several curriculum learning methods (Zhang et al., 2018; Platanios et al., 2019; Liu et al., 2020; Zhou et al., 2020) accelerate model convergence by starting training with easy data samples and gradually moving to hard ones. The success of curriculum learning makes it reasonable to ask whether the reverse of active learning is also beneficial. Reverse active learning selects the sentences with the lowest acquisition function scores in each round. We compare active learning and reverse active learning in Table 2. Reverse active learning lags behind active learning with all the acquisition functions we try. Moreover, reverse active learning cannot beat the random baseline of 18.5 BLEU points. Curriculum learning emphasizes the training process of networks (easy to hard), which might accelerate convergence. However, when the amount of training data is limited, active learning is the better choice.

# 6 Conclusion

We evaluate various acquisition functions on active NMT, active NMT with transfer learning and active iterative back-translation (IBT). Our experimental results strongly indicate that active learning is beneficial to NMT. Our combined method (te-delfy) achieves the best final BLEU score in every experiment we run. Also, the proposed Active IBT++ framework efficiently exploits the selected parallel corpus to further enhance model accuracy. These techniques may also be useful for unsupervised NMT.
Active pre-training is worth trying, and active IBT has already proven its capability. We leave the study of more acquisition functions in more NMT scenarios to future work.

# Acknowledgments

Yuekai Zhao and Zhihua Zhang have been supported by the Beijing Natural Science Foundation (Z190001), National Key Research and Development Project of China (No. 2018AAA0101004), and Beijing Academy of Artificial Intelligence (BAAI).

# References

Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. In Proceedings of the Sixth International Conference on Learning Representations.
Tianchi Bi, Hao Xiong, Zhongjun He, Hua Wu, and Haifeng Wang. 2019. Multi-agent learning for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 856-865, Hong Kong, China. Association for Computational Linguistics.
Miguel Domingo, Mercedes García-Martínez, Amando Estela, Laurent Bié, Alexandre Helle, Álvaro Peris, Francisco Casacuberta, and Manuel Herranz. 2019. Demonstration of a neural machine translation system with online learning for translators. arXiv preprint arXiv:1906.09000.
Zi-Yi Dou, Antonios Anastasopoulos, and Graham Neubig. 2020. Dynamic data selection and weighting for iterative back-translation. ArXiv, abs/2004.03672.
Marzieh Fadaee and Christof Monz. 2018. Back-translation sampling by targeting difficult words in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 436-446, Brussels, Belgium. Association for Computational Linguistics.
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1243-1252. JMLR.org.
Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Advances in Neural Information Processing Systems, pages 820-828.

Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative back-translation for neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 18-24, Melbourne, Australia. Association for Computational Linguistics.
Yunsu Kim, Yingbo Gao, and Hermann Ney. 2019. Effective cross-lingual transfer of neural machine translation models without shared vocabularies. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1246-1257, Florence, Italy. Association for Computational Linguistics.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Published as a conference paper at the 3rd International Conference on Learning Representations, San Diego, 2015.
Julia Kreutzer, Shahram Khadivi, Evgeny Matusov, and Stefan Riezler. 2018. Can neural machine translation be improved with user feedback? arXiv preprint arXiv:1804.05958.
Julia Kreutzer and Stefan Riezler. 2019. Self-regulated interactive sequence-to-sequence learning. arXiv preprint arXiv:1907.05190.
Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. Advances in Neural Information Processing Systems (NeurIPS).
Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2017. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043.
Ming Liu, Wray Buntine, and Gholamreza Haffari. 2018. Learning to actively learn neural machine translation. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 334-344.
Xuebo Liu, Houtim Lai, Derek F. Wong, and Lidia S. Chao. 2020.
Norm-based curriculum learning for neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 427-436, Online. Association for Computational Linguistics. +Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics. +Philip M. McCarthy and Scott Jarvis. 2010. Mtld, vocd-d, and hd-d: A validation study of sophisticated approaches to lexical diversity assessment. Behavior Research Methods, 42(2):381-392. + +Xuan-Phi Nguyen, Shafiq Joty, Wu Kui, and Ai Ti Aw. 2019. Data diversification: An elegant strategy for neural machine translation. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311-318. Association for Computational Linguistics. +Álvaro Peris and Francisco Casacuberta. 2018. Active learning for interactive neural machine translation of data streams. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 151-160, Brussels, Belgium. Association for Computational Linguistics. +Pavel Petrushkov, Shahram Khadivi, and Evgeny Matusov. 2018. Learning from chunk-based feedback in neural machine translation. arXiv preprint arXiv:1806.07169. +Minh Quang Pham, Josep M Crego, Jean Senellart, and François Yvon. 2018. Fixing translation divergences in parallel corpora for neural mt. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2967-2973. +Emmanouil Antonios Platanios, Otilia Stretcu, Graham Neubig, Barnabas Poczos, and Tom Mitchell. 2019. Competence-based curriculum learning for neural machine translation. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1162-1172, Minneapolis, Minnesota. Association for Computational Linguistics. +Alberto Poncelas, Gideon Maillette de Buy Wenniger, and Andy Way. 2019. Adaptation of machine translation models with back-translated data using transductive data selection methods. arXiv preprint arXiv:1906.07808. +Ameya Prabhu, Charles Dognin, and Maneesh Singh. 2019. Sampling bias in deep active classification: An empirical study. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4058-4068, Hong Kong, China. Association for Computational Linguistics. +Dongyu Ru, Yating Luo, Lin Qiu, Hao Zhou, Lei Li, Weinan Zhang, and Yong Yu. 2020. Active sentence learning by adversarial uncertainty sampling in discrete space. ArXiv, abs/2004.08046. +Dana Ruiter, Cristina Espana-Bonet, and Josef van Genabith. 2019. Self-supervised neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1828-1834. + +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computational Linguistics. +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics. +Burr Settles and Mark Craven. 2008. 
An analysis of active learning strategies for sequence labeling tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '08, page 1070-1079, USA. Association for Computational Linguistics. +Yanyao Shen, Hyokun Yun, Zachary Lipton, Yakov Kronrod, and Animashree Anandkumar. 2017. Deep active learning for named entity recognition. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 252-256, Vancouver, Canada. Association for Computational Linguistics. +Aditya Siddhant and Zachary C. Lipton. 2018. Deep bayesian active learning for natural language processing: Results of a large-scale empirical study. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2904-2909, Brussels, Belgium. Association for Computational Linguistics. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008. +Rui Wang, Masao Utiyama, and Eiichiro Sumita. 2018a. Dynamic sentence sampling for efficient training of neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 298-304. +Wei Wang, Isaac Caswell, and Ciprian Chelba. 2019. Dynamically composing domain-data selection with clean-data selection by "co-curricular learning" for neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1282-1292, Florence, Italy. Association for Computational Linguistics. +Wei Wang, Taro Watanabe, Macduff Hughes, Tetsuji Nakagawa, and Ciprian Chelba. 2018b. Denoising neural machine translation training with trusted data and online data selection. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 133-143, Brussels, Belgium. 
Association for Computational Linguistics. + +Xinyi Wang and Graham Neubig. 2019. Target conditioned sampling: Optimizing data selection for multilingual neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5823-5828, Florence, Italy. Association for Computational Linguistics. +Marlies van der Wees, Arianna Bisazza, and Christof Monz. 2017. Dynamic data selection for neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1400-1410, Copenhagen, Denmark. Association for Computational Linguistics. +Felix Wu, Angela Fan, Alexei Baevski, Yann N. Dauphin, and Michael Auli. 2019. Pay less attention with lightweight and dynamic convolutions. CoRR, abs/1901.10430. +Xuan Zhang, Gaurav Kumar, Huda Khayrallah, Kenton Murray, Jeremy G Winnup, Marianna J Martindale, Paul McNamee, Kevin Duh, and Marine Carpuat. 2018. An empirical exploration of curriculum learning for neural machine translation. arXiv preprint arXiv:1811.00739. +Ye Zhang, Matthew Lease, and Byron C Wallace. 2017. Active discriminative text representation learning. In Thirty-First AAAI Conference on Artificial Intelligence. +Yikai Zhou, Baosong Yang, Derek F. Wong, Yu Wan, and Lidia S. Chao. 2020. Uncertainty-aware curriculum learning for neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6934-6944, Online. Association for Computational Linguistics. +Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568-1575, Austin, Texas. Association for Computational Linguistics. 
# Active Sentence Learning by Adversarial Uncertainty Sampling in Discrete Space

Dongyu Ru†,‡, Jiangtao Feng†, Lin Qiu‡, Hao Zhou†, Mingxuan Wang†, Weinan Zhang‡, Yong Yu‡, Lei Li†

†ByteDance AI Lab

{fengjiangtao, zhouhao.nlp, wangmingxuan.89, lileilab}@bytedance.com

$^{\ddagger}$ Shanghai Jiao Tong University

{maxru,lqiu,wnzhang,yyu}@apex.sjtu.edu.cn

# Abstract

Active learning for sentence understanding aims at discovering informative unlabeled data for annotation and therefore reducing the demand for labeled data.
We argue that the typical uncertainty sampling method for active learning is time-consuming and can hardly work in real time, which may lead to ineffective sample selection. We propose adversarial uncertainty sampling in discrete space (AUSDS) to retrieve informative unlabeled samples more efficiently. AUSDS maps sentences into a latent space generated by popular pre-trained language models, and discovers informative unlabeled text samples for annotation via adversarial attack. The proposed approach is extremely efficient compared with traditional uncertainty sampling, with more than 10x speedup. Experimental results on five datasets show that AUSDS outperforms strong baselines in effectiveness.

# 1 Introduction

Deep neural models have become popular in natural language processing (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2018). Neural models usually consume massive labeled data, which requires a huge amount of human labor. But data are not born equal: informative samples with high uncertainty are decisive for the decision boundary and are worth labeling. Thus, selecting such worth-labeling data from an unlabeled text corpus for annotation is an effective way to reduce human labor and obtain informative data.

Active learning approaches are a straightforward choice for reducing such human labor. Previous works, such as uncertainty sampling (Lewis and Gale, 1994), need to traverse all unlabeled data to find informative unlabeled samples, which are always near the decision boundary with large entropy. However, the traversal process is very time-consuming and thus cannot be executed frequently (Settles and Craven, 2008). A common choice is to perform the sampling process only after every fixed period: the learner samples and labels informative unlabeled data, then trains the model until convergence (Deng et al., 2018).

We argue that infrequently performing uncertainty sampling may lead to the "ineffective sampling" problem.
In the early phase of training, the decision boundary changes quickly, which makes previously collected samples less effective after several updates of the model. Ideally, uncertainty sampling should therefore be performed frequently in the early phase of model training.

In this paper, we propose adversarial uncertainty sampling in discrete space (AUSDS) to address the ineffective sampling problem for active sentence learning by introducing more frequent sampling at significantly lower cost. Specifically, we propose to leverage adversarial attacks (Goodfellow et al., 2014; Kurakin et al., 2016) for the selection of informative samples with high uncertainty, which significantly narrows down the search space. Fig. 1 shows the difference between uncertainty sampling and AUSDS. Typical uncertainty sampling (Fig. 1.a) traverses all the unlabeled samples to obtain samples of high uncertainty for each sampling run, which is costly, with time complexity $O(\text{Unlabeled Data Size})$ . AUSDS (Fig. 1.b) first projects a labeled text to the decision boundary, yielding an adversarial data point, and then searches for the nearest neighbors of this point. The computational cost of AUSDS is significantly smaller than that of typical uncertainty sampling, with time complexity $O(\text{Batch Size})$ . However, it is non-trivial for AUSDS to perform adversarial attacks, which require adversarial gradients on sentences, since texts live in a discrete space.
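The complexity contrast in Fig. 1 can be made concrete with a toy sketch. All names here are ours; the attack step is a one-step FGSM-style perturbation, and a real implementation would replace the naive linear KNN scan with a precomputed index over the pool.

```python
import math

def entropy(probs):
    """Prediction entropy: high for samples near the decision boundary."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def uncertainty_sample(pool, predict_proba, k):
    """Fig. 1(a): score every unlabeled sample -> O(pool size) per query."""
    return sorted(pool, key=lambda x: -entropy(predict_proba(x)))[:k]

def ausds_sample(batch_states, grads, pool_states, k, eps=0.1):
    """Fig. 1(b): FGSM-style step from a labeled batch toward the boundary,
    then KNN in latent space; the attack cost scales with the batch."""
    sign = lambda g: (g > 0) - (g < 0)
    picked = []
    for h, g in zip(batch_states, grads):
        adv = [hi + eps * sign(gi) for hi, gi in zip(h, g)]  # adversarial point
        dist = lambda i: sum((a - b) ** 2 for a, b in zip(pool_states[i], adv))
        # naive scan; an ANN index would make this sub-linear in the pool size
        picked.extend(sorted(range(len(pool_states)), key=dist)[:k])
    return sorted(set(picked))  # indices of the nearest unlabeled sentences
```

The first function mirrors the classic pool traversal; the second mirrors AUSDS's attack-then-KNN loop over the current batch.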
We propose to include a pre-trained neural encoder, such as BERT (Devlin et al., 2018), to map unlabeled sentences into a continuous space, over which the adversarial attack is performed. Since not every adversarial data point in the encoding space can be mapped back to one of the unlabeled sentences, we propose to use the k-nearest neighbor (KNN) algorithm (Altman, 1992) to find the unlabeled sentences most similar to the adversarial data points (the adversarial samples). Besides, empirically, we mix some random samples into the uncertainty samples to alleviate the sampling bias issue mentioned by Huang et al. (2010). Finally, the mixed samples are sent to an oracle annotator to obtain their labels and are appended to the labeled data set.

![](images/f13758c0af5e419534be69657a53ad981333bd21a72ecb967ab3951e54201412.jpg)
(a) Uncertainty sampling: enumerating all unlabeled data to find the most uncertain samples

![](images/253b846138934017ceeafdd8337d1db8d4d9778030e89af5f6f572346b7f80f2.jpg)
(b) AUSDS: adversarial attack over current batch + KNN search in pre-trained LM space

Figure 1: Comparison between uncertainty sampling and AUSDS for active learning.

We deploy AUSDS for active sentence learning and conduct experiments on five datasets across two NLP tasks, namely sequence classification and sequence labeling. Experimental results show that AUSDS outperforms random sampling and uncertainty sampling strategies.

Our contributions are summarized as follows:

- We propose AUSDS for active sentence learning, which first introduces the adversarial attack for sentence uncertainty sampling, alleviating the ineffective sampling problem.
- We propose to map sentences into the pre-trained LM encoding space, which makes adversarial uncertainty sampling available in the discrete sentence space.
- Experimental results demonstrate that our active sentence learning framework with AUSDS, which we call the AUSDS learning framework, outperforms strong baselines in sampling effectiveness with acceptable running time.

# 2 Related Work

This work focuses on reducing the labeled data size, with the help of pre-trained LMs, in solving sentence learning tasks. The proposed AUSDS approach is related to two different research topics, active learning and adversarial attack.

# 2.1 Active Learning

Active learning algorithms can be categorized into three scenarios, namely membership query synthesis, stream-based selective sampling, and pool-based active learning (Settles, 2009). Our work is most related to pool-based active learning, which assumes that there is a small set of labeled data and a large pool of unlabeled data available (Lewis and Gale, 1994). To reduce the demand for more annotations, the learner starts from the labeled data, selects one or more queries from the unlabeled data pool for annotation, learns from the newly labeled data, and repeats.

The pool-based active learning scenario has been studied in many real-world applications, such as text classification (Lewis and Gale, 1994; Hoi et al., 2006), information extraction (Settles and Craven, 2008) and image classification (Joshi et al., 2009). Among the query strategies of existing active learning approaches, uncertainty sampling (Joshi et al., 2009; Lewis and Gale, 1994) is the most popular and widely used. The basic idea of uncertainty sampling is to enumerate the unlabeled samples and compute an uncertainty measurement, such as information entropy, for each sample. The enumeration and uncertainty computation make the sampling process costly, so it cannot be performed frequently, which induces the ineffective sampling problem.
![](images/763fad6239e43e3757b58861249a9d233c32ae84b7f17c6c843241e043b756e5.jpg)
Figure 2: Overview of the active sentence learning framework with AUSDS. Some notations are labeled along with the corresponding components.

There are some works that focus on accelerating the costly uncertainty sampling process. Jain et al. (2010) propose a hashing method to accelerate the sampling process to sub-linear time. Deng et al. (2018) propose to train an adversarial discriminator to select informative samples directly and avoid computing the rather costly sequence entropy. Nevertheless, the above works are still computationally expensive and cannot be performed frequently, which means the ineffective sampling problem still exists.

# 2.2 Adversarial Attack

Adversarial attacks were originally designed to approximate the smallest perturbation for a given latent state to cross the decision boundary (Goodfellow et al., 2014; Kurakin et al., 2016). As machine learning models are often vulnerable to adversarial samples, adversarial attacks have been used as an important surrogate to evaluate the robustness of deep learning models before they are deployed (Biggio et al., 2013; Szegedy et al., 2013). Existing adversarial attack approaches can be categorized into three groups: one-step gradient-based approaches (Goodfellow et al., 2014; Rozsa et al., 2016), iterative methods (Kurakin et al., 2016) and optimization-based methods (Szegedy et al., 2013).

Inspired by the shared goal of adversarial attacks and uncertainty sampling, in this paper, instead of considering adversarial attacks as a threat, we propose to combine the two approaches to achieve real-time uncertainty sampling. Some works share a similar but distinct idea. Li et al. (2018) introduce active learning strategies into black-box attacks to enhance query efficiency. Pal et al.
(2020) also use active learning strategies to reduce the number of queries for model extraction attacks. Zhu and Bento (2017) propose to train Generative Adversarial Networks to generate samples by minimizing the distance to the decision boundary directly, which belongs to the query synthesis scenario, unlike ours. Ducoffe and Precioso (2018) also introduce adversarial attacks into active learning by augmenting the training set with adversarial samples of unlabeled data, which is infeasible in discrete space. Note that none of the works above share the same scenario as our problem setting.

# 3 Active Sentence Learning with AUSDS

We propose the AUSDS learning framework, an efficient and effective computational framework for active sentence learning. An overview of the framework is shown in Fig. 2. The framework consists of two blocks: a training block and a sampling block (AUSDS). The training block learns from the labeled data, whereas the sampling block retrieves valuable unlabeled samples, whose latent states are close to the decision boundary in the latent space, from the unlabeled text corpus. Note that the definition of the latent space can differ across encoders and tasks. The samples retrieved by the sampling block are further sent to an oracle annotator to obtain their labels, and the newly labeled samples are appended to the labeled data.

In this section, we first introduce the AUSDS method by showing how it selects samples that are critical to the decision boundary in the latent space. We then present the computational procedure of the full-fledged framework in detail.

Algorithm 1 Active Sentence Learning with Adversarial Uncertainty Sampling in Discrete Space
Input: an unlabeled text corpus $T_0$ , an oracle $O$ , a labeled data set $D_0 = \{(s, O(s)) \mid s \in S_0\}$ where $S_0$ is a small initial text corpus, pre-trained LM $f_e$ , fine-tuning interval $j$ , and fine-tuning step $k$ .
Output: a well-trained model $f = (f_e, f_d)$
1: Train $f_d$ on $D_0$ with frozen $f_e$ ;
2: Construct a discrete bijective mapper $M$ , where $M(s) = f_e(s) \in \mathcal{H}$ and $M^{-1}(f_e(s)) = s \in T_0$ ;
3: Sample a training batch $B_0$ from $D_0$ ;
4: $i \gets 0$
5: while $|T_i| > 0$ do
6: Train decoder $f_d$ on $B_i$ with frozen encoder $f_e$ ;
7: Generate adversarial data points $A \subset \mathcal{H}$ using the adversarial attack algorithm;
8: Retrieve adversarial samples $S_a = \{s_a = M^{-1}(x) \in T_i | x \in \mathrm{KNN}(A)\}$ ;
9: Mix $S_a$ with random samples $S_r$ , where $|S_a| : |S_r| = p : 1 - p$ ;
10: Select the top-k ranked samples $S_{add}$ from $S_a$ w.r.t. the information entropy;
11: Label new data $Q \gets \{(s, O(s)) | s \in S_{add}\}$ ;
12: Update labeled data $D_{i+1} \gets D_i \cup Q$ ;
13: Remove newly labeled data from the unlabeled dataset: $T_{i+1} \gets T_i - S_{add}$ ;
14: Sample a training batch $B_{i+1}$ from $Q$ and $D_{i+1}$ at the ratio $q : 1 - q$ ;
15: if $i \bmod j = 0$ then
16: Fine-tune $f$ with $D_{i+1}$ for $k$ steps;
17: Update the mapper $M$ with the fine-tuned encoder $f_e$ and text corpus $T_{i+1}$ ;
18: end if
19: $i \gets i + 1$
20: end while

# 3.1 AUSDS

AUSDS first defines a latent space, over which sentences are distinguishable according to the model's decision boundary. The latent space is usually determined by the encoder architecture and the downstream task. We detail the latent space definitions for specific encoders and tasks in Sec. 4.1.

First, we sample a batch of labeled texts and compute their representations as well as their gradients in the latent space. Using the latent states and their gradients, we perform adversarial attacks to generate adversarial data points $A$ near the decision boundary in the latent space.
Adversarial attacks are performed using the following existing approaches:

- Fast Gradient Value (FGV) (Rozsa et al., 2016): a one-step gradient-based approach with high efficiency. The adversarial data points are generated by:

$$
\mathbf{x}' = \mathbf{x} + \lambda \cdot \nabla_{\mathbf{x}} F_d(\mathbf{x}) \tag{1}
$$

where $\lambda$ is a hyperparameter and $F_d$ is the cross-entropy loss on $\mathbf{x}$ .

- DeepFool (Moosavi-Dezfooli et al., 2016): an iterative approach that finds the minimal perturbation sufficient to change the estimated label.

- C&W (Carlini and Wagner, 2017): an optimization-based approach with the optimization problem defined as:

$$
\operatorname{minimize}\; D(\mathbf{x}, \mathbf{x}') + c \cdot g(\mathbf{x}') \tag{2}
$$

where $g(\cdot)$ is a manually designed function satisfying $g(\mathbf{x}) \leq 0$ if and only if $\mathbf{x}$ 's label is a specific target label, and $D$ is a distance measurement such as the Minkowski distance.

FGV is computationally efficient, whereas the other two methods typically find more precise adversarial data points at a larger computational cost. We use all of them in our experiments to show the effectiveness of AUSDS.

In our sentence learning scenario, the adversarial data points $A$ cannot be grounded on real natural language text samples. Thus we perform a k-nearest neighbor (KNN) search (Altman, 1992) to find unlabeled text samples whose latent states are nearest to the adversarial data points $A$ .

We implement the KNN search using Faiss (Johnson et al., 2017), an efficient similarity search library with GPU support. The computational cost of the KNN search comes from two processes: constructing a sample mapper $M$ between the text and latent spaces, and searching for latent states similar to the adversarial data points.
The sample mapper $M$ here is constructed as a hash map, which is computationally efficient, to memorize the mapping between an unlabeled text $s$ and its latent representation $\mathbf{x}$ . The sample mapper is only reconstructed when the encoder is updated, and infrequent encoder updates contribute to efficiency. Besides, the searching process is also fast ( $100 \times$ faster than generating $A$ ) thanks to Faiss. Thus it is possible to perform AUSDS frequently at batch level without harming the overall computation.

After acquiring adversarial samples $S_{a}$ via the KNN search, we mix $S_{a}$ with random samples $S_{r}$ drawn from the unlabeled text corpus $T_{i}$ at a ratio of $p : 1 - p$ , where $p$ is a hyperparameter determined on the development set. The motivation for adding random samples is to balance exploration and exploitation, thus preventing the model from continuously retrieving samples in a small neighborhood.

We perform top-k ranking over the information entropy of the mixed samples to further retrieve the samples with higher uncertainty. Since the size of the mixed set is comparable to the batch size, the computational cost is acceptable. The remaining samples are then sent to an oracle annotator $O$ to obtain their labels.

# 3.2 Active Learning Framework

The overall procedure of the proposed framework equipped with AUSDS is outlined in Algorithm 1.

Initialization The initialization stage is shown in Algorithm 1, lines 1-4. We first initialize our encoder $f_{e}$ with the pre-trained LM, which can be $\mathrm{BERT}_{\mathrm{BASE}}$ (Devlin et al., 2018) or ELMo (Peters et al., 2018). The decoder is built upon the latent space and is randomly initialized. After building up the neural model architecture, we train only the decoder on the existing labeled data $D_{0}$ to compute an initial decision boundary on the latent space. Meanwhile, we construct an initial discrete sample mapper $M$ used by the sampling block.
Finally, we sample a training batch $B_{0}$ from the labeled data corpus $D_{0}$ and set the current training step $i$ to 0.

Training The training stage is shown in Algorithm 1, line 6. With the defined decoder $f_{d}$ and a training batch $B_{i}$ , we train the decoder with a cross-entropy loss (Fig. 2.b). Note that during the training process, we freeze the encoder as well as the latent space; a frozen latent space contributes to computational efficiency because the mapper $M$ need not be reconstructed.

Sampling The sampling stage is shown in Algorithm 1, lines 7-14. As shown in Sec. 3.1, given the gradients w.r.t. the latent states on the current batch $B_i$ during training, the sampling process generates the adversarial samples $S_a$ and labels the samples with high uncertainty from a mixture of $S_a$ and randomly injected unlabeled data $S_r$ . The labeled samples $Q$ are removed from the unlabeled text corpus and inserted into the labeled data, resulting in $T_{i + 1}$ and $D_{i + 1}$ respectively. Then we create a new training batch consisting of samples from $Q$ and $D_{i + 1}$ at a ratio of $q : 1 - q$ , which favors the newly selected data $Q$ , because the newly selected samples are considered more critical to the current decision boundary.

Fine-Tuning The fine-tuning stage is shown in Algorithm 1, lines 15-18. We fine-tune the encoder for $k$ steps after every $j$ batches are trained. During the fine-tuning process, both the encoder and the decoder are trained on the accumulated labeled data set $D_{i+1}$ . The encoder is fine-tuned to enhance overall performance; experiments show that the final performance degrades considerably without updating the encoder. We then update the mapper $M$ for future KNN searches, because fine-tuning the encoder changes the projection from texts to the latent space, which requires renewing the sample mapper $M$ . The algorithm terminates when the unlabeled text corpus $T_i$ is used up.
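To make one sampling step (Algorithm 1, lines 7-10) concrete, the sketch below is illustrative only, not the authors' implementation: a precomputed latent state and its gradient stand in for the encoder and decoder loss, the one-step FGV update of Eq. (1) generates an adversarial data point, and a brute-force nearest-neighbour scan stands in for the Faiss-based KNN search. All names and the toy data are hypothetical.

```python
import numpy as np

def fgv_attack(x, grad, lam=0.5):
    """Eq. (1): one-step Fast Gradient Value perturbation of a latent state."""
    return x + lam * grad

def knn_search(adv_points, unlabeled_emb, k=1):
    """Brute-force stand-in for the Faiss search: for each adversarial
    data point, return indices of the k nearest unlabeled embeddings."""
    hits = []
    for a in adv_points:
        dists = np.linalg.norm(unlabeled_emb - a, axis=1)  # Euclidean distance
        hits.extend(np.argsort(dists)[:k])
    return sorted(set(hits))  # de-duplicate candidate samples

# Toy latent space: 2-D embeddings of an "unlabeled corpus".
unlabeled_emb = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
x = np.array([0.2, 0.2])      # latent state of one labeled batch sample
grad = np.array([1.0, 1.0])   # gradient of the loss w.r.t. x (assumed given)
adv = fgv_attack(x, grad)     # pushed toward the decision boundary
picked = knn_search([adv], unlabeled_emb, k=1)
print(picked)                 # indices of adversarial samples to annotate
```

In the real framework the unlabeled embeddings come from the frozen pre-trained LM encoder and the search runs over the whole corpus, which is why Faiss replaces the O(n) scan above.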
# 4 Experiments

We evaluate the AUSDS learning framework on sequence classification and sequence labeling tasks. For the oracle labeler $O$ , we directly use the labels provided by the datasets. In all the experiments, we report the average results of 5 runs with different random seeds to alleviate the influence of randomness.

# 4.1 Set-up

Dataset. We use five datasets, namely the Stanford Sentiment Treebank (SST-2 / SST-5) (Socher et al., 2013), the Microsoft Research Paraphrase Corpus (MRPC) (Dolan et al., 2004), AG's News Corpus (AG News) (Zhang et al., 2015) and the CoNLL 2003 Named Entity Recognition dataset (CoNLL'03) (Sang and De Meulder, 2003). The statistics can be found in Table 1. The train/development/test sets follow the original settings in those papers. We use accuracy for sequence classification and f1-score for sequence labeling as the evaluation metric.

| Dataset | Task | Sample Size |
| --- | --- | --- |
| SST-2 (Socher et al., 2013) | sequence classification | 11.8k sentences, 215k phrases |
| SST-5 (Socher et al., 2013) | sequence classification | 11.8k sentences, 215k phrases |
| MRPC (Dolan et al., 2004) | sequence classification | 5,801 sentence pairs |
| AG News (Zhang et al., 2015) | sequence classification | 12k sentences |
| CoNLL'03 (Sang and De Meulder, 2003) | sequence labeling | 22k sentences, 300k tokens |

Table 1: The 5 datasets used for our sentence learning experiments, across sequence classification and sequence labeling tasks.

| Dataset | RM | US | AUSDS (FGV) | AUSDS (DeepFool) | AUSDS (C&W) |
| --- | --- | --- | --- | --- | --- |
| SST-2 | 1061x | 1x | 38x | 38x | 28x |
| SST-5 | 1939x | 1x | 52x | 52x | 38x |
| MRPC | 97x | 1x | 14x | 14x | 11x |
| AG News | 1434x | 1x | 51x | 47x | 38x |
| CoNLL'03 | 45x | 1x | 10x | — | — |

Table 2: The average speedup of each sampling step in comparison with US on the 5 datasets with BERT as the encoder. The statistics are collected using a Tesla V100 GPU. US scans the unlabeled data once when $2\%$ of the data are labeled. AUSDS with DeepFool and C&W on CoNLL'03 is omitted because these adversarial attack methods are not suitable for sequence labeling.

Baseline Approaches. We use two common baseline approaches in NLP active learning to compare with our framework, namely random sampling (RM) and entropy-based uncertainty sampling (US). For sequence classification tasks, we adopt the widely used Max Entropy (ME) (Berger et al., 1996) as the uncertainty measurement:

$$
H^{ME}(\mathbf{x}) = -\sum_{m=1}^{c} P(\mathbf{y} = m | \mathbf{x}) \log P(\mathbf{y} = m | \mathbf{x}) \tag{3}
$$

where $c$ is the number of classes. For sequence labeling tasks, we use the total token entropy (TTE) (Settles and Craven, 2008) as the uncertainty measurement:

$$
H^{TTE}(\mathbf{x}) = -\sum_{i=1}^{N} \sum_{m=1}^{l} P(\mathbf{y}_i = m | \mathbf{x}) \log P(\mathbf{y}_i = m | \mathbf{x}) \tag{4}
$$

where $N$ is the sequence length and $l$ is the number of labels.

Latent Space Definition We use adversarial attacks in our AUSDS learning framework to find informative samples, which relies on a well-defined latent space. Two types of latent spaces are defined here based on the encoder architectures and tasks:

1.
For pre-trained LMs like BERT (Devlin et al., 2018), which has an extra token [CLS] for sequence classification, we directly use its latent state $\mathbf{x}$ as the representation of the whole sentence in the latent space $\mathcal{H}$ .
2. For the other circumstances where no such special token is available, a mean-pooling operation is applied to the encoder output, i.e. $\mathbf{x} = \frac{1}{n}\sum_{t=1}^{n}h_t$ , where $h_t$ denotes the contextual word representation of the $t_\mathrm{th}$ token produced by the encoder. The latent space $\mathcal{H}$ is spanned by all the latent states.

Implementation Details. We implement our framework based on the $\mathrm{BERT}_{\mathrm{BASE}}$ model and ELMo. The configurations of the two models are the same as reported in (Devlin et al., 2018) and (Peters et al., 2018), respectively. The implementation of the KNN search is introduced in Sec. 3.1. For the remaining hyperparameters in our framework: 1) the batch size and the size of $Q$ are set to 32 (16 on the MRPC dataset); 2) the fine-tuning interval $j$ and the fine-tuning step size $k$ are set to 50 steps; 3) the ratio $q$ is set to 0.3. All the tuning experiments are performed on the dev sets of the five datasets. The accumulated labeled data set $D$ is initialized identically for the different approaches, taking $0.1\%$ of the whole unlabeled data ( $0.5\%$ for MRPC because the dataset is relatively small).
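As a small illustration of latent space definition 2 above, mean-pooling reduces to averaging the contextual token vectors. This is a minimal sketch, not the authors' code; `h` is a stand-in for real encoder output.

```python
import numpy as np

def mean_pool(h):
    """Sentence representation x = (1/n) * sum_t h_t, where h has shape
    (n_tokens, hidden_dim): average the contextual token vectors."""
    return h.mean(axis=0)

# Three 4-dimensional token states standing in for encoder output.
h = np.array([[1.0, 0.0, 2.0, 0.0],
              [3.0, 0.0, 0.0, 0.0],
              [2.0, 0.0, 1.0, 0.0]])
print(mean_pool(h))  # -> [2. 0. 1. 0.]
```

The resulting vector lives in the same hidden-dimensional space as the [CLS] state, so the rest of the pipeline (attack, KNN search) is unchanged.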
| Dataset | Method | 2% | 4% | 6% | 8% | 10% |
| --- | --- | --- | --- | --- | --- | --- |
| SST-2 | RM | 87.78(.003) | 89.85(.004) | 89.85(.010) | 89.69(.004) | 90.26(.008) |
| | US | 87.74(.004) | 90.25(.006) | 90.38(.008) | 90.25(.006) | 91.27(.007) |
| | AUSDS (FGV) | 89.18(.002) | 89.88(.008) | 89.16(.014) | 91.07(.005) | 89.95(.003) |
| | AUSDS (DeepFool) | 88.74(.004) | 90.06(.003) | 89.84(.007) | 90.74(.006) | 91.58(.002) |
| | AUSDS (C&W) | 87.97(.003) | 89.95(.005) | 90.83(.007) | 90.12(.003) | 91.13(.001) |
| SST-5 | RM | 49.45(.010) | 50.01(.007) | 50.88(.006) | 50.39(.014) | 51.35(.005) |
| | US | 49.10(.008) | 49.54(.009) | 50.63(.008) | 50.90(.012) | 51.43(.005) |
| | AUSDS (FGV) | 49.57(.006) | 50.36(.008) | 50.09(.009) | 50.19(.014) | 50.62(.011) |
| | AUSDS (DeepFool) | 50.20(.012) | 51.87(.003) | 51.74(.012) | 50.97(.012) | 51.23(.007) |
| | AUSDS (C&W) | 48.28(.012) | 48.78(.014) | 51.58(.007) | 51.40(.010) | 47.42(.006) |
| MRPC | RM | 67.33(.008) | 68.31(.006) | 68.56(.018) | 70.06(.021) | 71.15(.020) |
| | US | 62.14(.090) | 69.34(.005) | 69.11(.010) | 70.53(.017) | 71.49(.016) |
| | AUSDS (FGV) | 68.89(.014) | 69.30(.023) | 70.28(.015) | 70.06(.012) | 69.30(.019) |
| | AUSDS (DeepFool) | 67.92(.009) | 68.88(.017) | 69.68(.017) | 71.69(.014) | 71.55(.012) |
| | AUSDS (C&W) | 67.91(.014) | 68.53(.017) | 70.46(.012) | 70.49(.012) | 68.89(.016) |
| AG News | RM | 89.89(.003) | 90.89(.002) | 91.37(.002) | 91.79(.002) | 92.21(.002) |
| | US | 90.29(.006) | 91.59(.007) | 92.34(.003) | 92.71(.001) | 93.01(.001) |
| | AUSDS (FGV) | 90.75(.002) | 91.55(.002) | 92.26(.003) | 92.62(.001) | 93.16(.001) |
| | AUSDS (DeepFool) | 90.67(.004) | 91.65(.004) | 92.43(.004) | 92.66(.004) | 93.12(.002) |
| | AUSDS (C&W) | 90.24(.002) | 91.29(.002) | 92.30(.004) | 92.90(.002) | 93.10(.003) |
| CoNLL'03 | RM | 80.42(.002) | 83.38(.002) | 85.39(.005) | 86.78(.005) | 87.42(.003) |
| | US | 78.12(.002) | 81.49(.019) | 84.45(.004) | 86.73(.008) | 87.79(.004) |
| | AUSDS (FGV) | 80.65(.006) | 83.60(.003) | 85.98(.010) | 87.10(.004) | 87.83(.003) |
| | AUSDS (DeepFool) | — | — | — | — | — |
| | AUSDS (C&W) | — | — | — | — | — |

Table 3: The convergence results w.r.t. the label size in the training-from-scratch setting with BERT as the encoder. The label size denotes the ratio of labeled data. The numbers are the averaged results of 5 runs on the test set. The best results at each label size are marked in bold. AUSDS with DeepFool and C&W on CoNLL'03 is omitted because these adversarial attack methods are not suitable for sequence labeling.
| Label Size | 2% | 4% | 6% | 8% | 10% |
| --- | --- | --- | --- | --- | --- |
| RM | 81.58(.004) | 82.90(.006) | 83.53(.008) | 82.15(.016) | 84.40(.006) |
| US | 78.23(.007) | 80.34(.003) | 81.99(.006) | 82.34(.008) | 82.21(.004) |
| AUSDS (FGV) | 81.22(.004) | 83.25(.001) | 84.18(.005) | 84.49(.004) | 84.62(.009) |
| AUSDS (DeepFool) | 82.37(.003) | 83.31(.004) | 83.77(.002) | 84.68(.001) | 84.73(.005) |
| AUSDS (C&W) | 81.27(.006) | 84.02(.007) | 82.76(.002) | 84.40(.002) | 83.58(.012) |

Table 4: The convergence results w.r.t. the label size in the training-from-scratch setting with ELMo as the encoder on SST-2. The label size denotes the ratio of labeled data. The best results at each label size are marked in bold.

# 4.2 Sampling Effectiveness

AUSDS achieves higher sampling effectiveness than uncertainty sampling, which suffers from the sampling bias problem. The main criterion for evaluating an active learning approach is sampling effectiveness, namely the model performance with a limited amount of unlabeled data being sampled and labeled. Our AUSDS learning framework is compared with the two baselines using the same amount of labeled data. The budgets are set to $2\%$ , $4\%$ , $6\%$ , $8\%$ , and $10\%$ of all labeled data in each dataset. We label at most $10\%$ of the whole training data, because active learning focuses on training with a quite limited amount of labeled data by selecting more valuable examples to label; with enough labeled data available, it makes little difference whether active learning is performed or not. We believe that with less labeled data, the performance gap, namely the difference in sampling effectiveness, is more obvious.

We propose the training-from-scratch setting to better evaluate sampling effectiveness, in which models are trained from scratch using the labeled data sampled by the different approaches at various labeled data sizes. We argue that simply training the model until convergence after each sampling step, which we call the continuous training setting, can easily induce the problem of sampling bias (Huang et al., 2010). Biased models in the early training phase lead to worse performance even after more informative samples are given. Thus the performance of models during sampling cannot reflect the real informativeness of the selected samples.

![](images/ed53641ce71159b7a731e969ef2ae2946d8ea145279f08a0aaa8d2d1b1f01850.jpg)
(a) Margin during Training

![](images/4ae13bbdf2e28b0834f726e3f7fb34886d2b3427d024d5df9440193f7e0e57c9.jpg)
(b) Margin Distribution
Figure 3: The margin of outputs on samples selected by different sampling strategies on SST-5. The margin denotes the difference between the largest and the second-largest output probabilities over classes. The lower the margin, the closer the sample lies to the decision boundary. Fig. (a) shows the average margin at each sampling step during training. The margins of samples selected by RM and US on the whole unlabeled data are also plotted as references. Fig. (b) shows the margin distribution of samples selected from sampling step 800 to 1000, where the average uncertainty becomes steady. US in Fig. (b) is omitted for better visualization.

The from-scratch training results are shown in Table 3. Our framework consistently outperforms the random baseline because it selects more informative samples for identifying the shape of the decision boundary. It also outperforms common uncertainty sampling in most cases under the same labeled data size limits, because the frequent sampling in our approach alleviates the sampling bias issue. Uncertainty sampling suffers from sampling bias because of the frequent variation of the decision boundary in the early phase of training, which results in ineffective sampling: the decision boundary is determined by only a small number of labeled examples in the early phase, and an easily biased decision boundary may lead to sampling examples that have high uncertainty given the current model state but are not representative of the whole unlabeled data. From the overall results on the five standard benchmarks of the 2 NLP tasks, we observe that AUSDS achieves better sampling effectiveness with DeepFool for sequence classification and FGV for sequence labeling. The results of C&W are also included for completeness and comparison.
To show that our AUSDS framework does not heavily depend on BERT, we conduct experiments on SST-2 with ELMo as the encoder, which has a different network structure. The results in Table 4 show that in this setting, our AUSDS framework still achieves higher sampling effectiveness, while original uncertainty sampling gets stuck in a more severe sampling bias problem. The results of this experiment are also evidence of the generalization ability of our framework to other pre-trained LM encoding spaces.

# 4.3 Computational Efficiency

AUSDS is computationally more efficient than uncertainty sampling. Our AUSDS is computationally efficient enough to be performed at batch level, thus achieving real-time effective sampling. The average sampling speeds of the different approaches are compared w.r.t. US (Table 2).

We observe that uncertainty sampling can hardly work in a real-time sampling setting because of its costly sampling process. Our AUSDS is more than 10x faster than common uncertainty sampling, and the larger the unlabeled data pool, the more significant the acceleration. Our framework spends more computation time than the random sampling baseline, but is still fast enough for real-time batch-level sampling. Moreover, the sampling-effectiveness results in Sec. 4.2 show that the extra computation for adversarial samples is worthwhile, given the clear performance gains with the same amount of labeled data.

# 4.4 Samples Uncertainty

AUSDS indeed selects examples with higher uncertainty. We plot the margins of the outputs of samples selected with different sampling strategies on SST-5 in Fig. 3. We use the margin as a measurement of the distance to the decision boundary; lower margins indicate positions closer to the decision boundary. As shown in Fig. 3(a), the samples selected by our AUSDS with different attack approaches achieve lower average margins during sampling.
Samples from step 800 to 1000 are collected to estimate the margin distribution, as shown in Fig. 3(b). Our AUSDS has a better capability to capture samples with higher uncertainty, as their margin distributions lie further to the left. Uncertainty sampling performed on the whole unlabeled data gets the most uncertain samples, but it is very time-consuming and cannot be applied frequently.

In short, AUSDS achieves better sampling effectiveness than US because its more efficient batch-level sampling alleviates the problem of sampling bias. Adversarial attacks can be an effective way to find critical data points near the decision boundary.

# 5 Conclusion

Uncertainty sampling is an effective way of reducing the labeled data size in sentence learning, but its high latency may lead to an ineffective sampling problem. In this study, we propose adversarial uncertainty sampling in discrete space for active sentence learning to address the ineffective sampling problem. The proposed AUSDS is more efficient than traditional uncertainty sampling by leveraging adversarial attacks and projecting discrete sentences into a pre-trained LM space. Experimental results on five datasets show that the proposed approach outperforms strong baselines in most cases and achieves better sampling effectiveness.

# Acknowledgments

The corresponding author is Yong Yu. The SJTU team is supported by "New Generation of AI 2030" Major Project 2018AAA0100900 and NSFC (61702327, 61772333, 61632017, 81771937). We thank Rong Ye, Huadong Chen, Xunpeng Huang, and the anonymous reviewers for their insightful and detailed comments.

# References

Naomi S Altman. 1992. An introduction to kernel and nearest-neighbor nonparametric regression. The American Statistician, 46(3):175-185.
Adam L Berger, Vincent J Della Pietra, and Stephen A Della Pietra. 1996. A maximum entropy approach to natural language processing.
Computational linguistics, 22(1):39-71. +Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. 2013. Evasion attacks against machine learning at test time. In Joint European conference on machine learning and knowledge discovery in databases, pages 387-402. Springer. +Nicholas Carlini and David Wagner. 2017. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39-57. IEEE. +Yue Deng, KaWai Chen, Yilin Shen, and Hongxia Jin. 2018. Adversarial active learning for sequences labeling and generation. In *IJCAI*, pages 4012-4018. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. +Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Proceedings of the 20th international conference on Computational Linguistics, page 350. Association for Computational Linguistics. +Melanie Ducoffe and Frederic Precioso. 2018. Adversarial active learning for deep networks: a margin based approach. arXiv preprint arXiv:1802.09841. +Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. +Steven CH Hoi, Rong Jin, and Michael R Lyu. 2006. Large-scale text categorization by batch mode active learning. In Proceedings of the 15th international conference on World Wide Web, pages 633-642. ACM. +Sheng-Jun Huang, Rong Jin, and Zhi-Hua Zhou. 2010. Active learning by querying informative and representative examples. In Advances in neural information processing systems, pages 892-900. +Prateek Jain, Sudheendra Vijayanarasimhan, and Kristen Grauman. 2010. Hashing hyperplane queries to near points with applications to large-scale active learning. 
In Advances in Neural Information Processing Systems, pages 928-936.
Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2017. Billion-scale similarity search with GPUs. arXiv preprint arXiv:1702.08734.

Ajay J Joshi, Fatih Porikli, and Nikolaos Papanikolopoulos. 2009. Multi-class active learning for image classification. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 2372-2379. IEEE.
Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2016. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533.
David D Lewis and William A Gale. 1994. A sequential algorithm for training text classifiers. In SIGIR'94, pages 3-12. Springer.
Pengcheng Li, Jinfeng Yi, and Lijun Zhang. 2018. Query-efficient black-box attack by active learning. arXiv preprint arXiv:1809.04913.
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. 2016. Deepfool: A simple and accurate method to fool deep neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Soham Pal, Yash Gupta, Aditya Shukla, Aditya Kanade, Shirish Shevade, and Vinod Ganapathy. 2020. Activethief: Model extraction using active learning and unannotated public data. In AAAI.
Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
Andras Rozsa, Ethan M Rudd, and Terrance E Boult. 2016. Adversarial diversity and hard positive generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 25-32.
Erik F Tjong Kim Sang and Fien De Meulder. 2003.
Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003. +Burr Settles. 2009. Active learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences. +Burr Settles and Mark Craven. 2008. An analysis of active learning strategies for sequence labeling tasks. In Proceedings of the conference on empirical methods in natural language processing, pages 1070-1079. Association for Computational Linguistics. +Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models + +for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642. +Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. +Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in neural information processing systems, pages 649-657. +Jia-Jie Zhu and José Bento. 2017. Generative adversarial active learning. arXiv preprint arXiv:1702.07956. 
# Active Testing: An Unbiased Evaluation Method for Distantly Supervised Relation Extraction

Pengshuai Li $^{1}$ , Xinsong Zhang $^{2}$ , Weijia Jia $^{3,1*}$ and Wei Zhao $^{4}$

$^{1}$ Dept.
of CSE, Shanghai Jiao Tong University, Shanghai, China $^{2}$ ByteDance AI Lab $^{3}$ Institute of AI & Future Networks, Beijing Normal University (Zhuhai) & UIC, PR China $^{4}$ American University of Sharjah, Sharjah, United Arab Emirates

pengshuai.li@sjtu.edu.cn zhangxinsong.0320@bytedance.com jiawj@bnu.edu.cn wzhao@aus.edu

# Abstract

Distant supervision has been a widely used method for neural relation extraction for its convenience of automatically labeling datasets. However, existing works on distantly supervised relation extraction suffer from the low quality of test sets, which leads to considerably biased performance evaluation. These biases not only result in unfair evaluations but also mislead the optimization of neural relation extraction. To mitigate this problem, we propose a novel evaluation method named active testing that utilizes both the noisy test set and a few manual annotations. Experiments on a widely used benchmark show that our proposed approach yields approximately unbiased evaluations for distantly supervised relation extractors.

# 1 Introduction

Relation extraction aims to identify relations between a pair of entities in a sentence. It has been thoroughly researched by supervised methods with hand-labeled data. To break the bottleneck of manual labeling, distant supervision (Mintz et al., 2009) automatically labels raw text with knowledge bases. It assumes that if a pair of entities have a known relation in a knowledge base, all sentences containing these two entities may express the same relation. Clearly, the automatically labeled datasets in distant supervision contain large numbers of sentences with wrong relation labels. However, previous works only focus on wrongly labeled instances in training sets and neglect those in test sets. Most of them estimate their performance with held-out evaluation on noisy test sets, which yields inaccurate evaluations of existing models and seriously misleads model optimization.
As shown in Table 1, we compare the results of held-out evaluation and human evaluation for the same model on the widely used benchmark dataset NYT-10 (Riedel et al., 2010). The biases between human evaluation and existing held-out evaluation are over $10\%$ , and are mainly caused by wrongly labeled instances in the test set, especially false negative instances.
| Evaluations | P@100 | P@200 | P@300 |
| --- | --- | --- | --- |
| Held-out Evaluation | 83 | 77 | 69 |
| Human Evaluation | 93 (+10) | 92.5 (+15.5) | 91 (+22) |
Table 1: The Precision at top K predictions $(\%)$ of the model of Lin et al. (2016) under held-out evaluation and human evaluation on NYT-10. Results are obtained by our implementations.

A false negative instance is an entity pair labeled as non-relation, even if it has at least one relation in reality. This problem is caused by the incompleteness of existing knowledge bases. For example, over $70\%$ of people included in Freebase have no place of birth (Dong et al., 2014). From a random sampling, we deduce that about $8.75\%$ of entity pairs in the test set of NYT-10 are misclassified as non-relation. Clearly, these mislabeled entity pairs yield biased evaluations and lead to inappropriate optimization for distantly supervised relation extraction.

In this paper, we propose an active testing approach to estimate the performance of distantly supervised relation extraction. Active testing has been proven effective in evaluating vision models with large-scale noisy datasets (Nguyen et al., 2018). We design an iterative procedure with two stages per iteration: a vetting stage and an estimating stage. In the vetting stage, we adopt an active strategy to select batches of the most valuable entity pairs from the noisy test set for annotation. In the estimating stage, a metric estimator is proposed to obtain a more accurate evaluation.

With a few vetting-estimating iterations, evaluation results can come dramatically close to those of human evaluation by using limited vetted data and all noisy data. Experimental results demonstrate that the proposed evaluation method yields approximately unbiased estimations for distantly supervised relation extraction.

# 2 Related Work

Distant supervision (Mintz et al., 2009) was proposed to deal with large-scale relation extraction with automatic annotations.
A series of studies have been conducted with human-designed features in distantly supervised relation extraction (Riedel et al., 2010; Surdeanu et al., 2012; Takamatsu et al., 2012; Angeli et al., 2014; Han and Sun, 2016). In recent years, neural models have been widely used to extract semantic meanings accurately without hand-designed features (Zeng et al., 2015; Lin et al., 2017; Zhang et al., 2019). To alleviate the influence of wrongly labeled instances in distant supervision, these neural relation extractors integrated techniques such as the attention mechanism (Lin et al., 2016; Han et al., 2018; Huang and Du, 2019), generative adversarial nets (Qin et al., 2018a; Li et al., 2019), and reinforcement learning (Feng et al., 2018; Qin et al., 2018b). However, none of the above methods pay attention to the biased and inaccurate test set. Though human evaluation can yield accurate evaluation results (Zeng et al., 2015; Alt et al., 2019), labeling all the instances in the test set is too costly.

# 3 Task Definition

In the distant supervision paradigm, all sentences containing the same entity pair constitute a bag. Researchers train a relation extractor based on bags of sentences and then use it to predict relations of entity pairs. Suppose that a distantly supervised model returns a confident score $s_i = \{s_{i1}, s_{i2} \ldots s_{ip}\}$ for entity pair $i \in \{1 \ldots N\}$ , where $p$ is the number of relations, $N$ is the number of entity pairs, and $s_{ij} \in (0, 1)$ . $y_i = \{y_{i1}, y_{i2} \ldots y_{ip}\}$ and $z_i = \{z_{i1}, z_{i2} \ldots z_{ip}\}$ respectively represent automatic labels and true labels for entity pair $i$ , where $y_{ij}$ and $z_{ij}$ are both in $\{0, 1\}$ .

In widely used held-out evaluation, existing methods observe two key metrics: precision at top K $(P@K)$ and the Precision-Recall curve $(PR \, curve)$.
To compute both metrics, the confident scores of all entity pairs are sorted in descending order, which is defined as $s' = \{s_1', s_2' \ldots s_P'\}$ where $P = Np$ . Automatic labels and true labels are denoted as $y' = \{y_1', \ldots, y_P'\}$ and $z' = \{z_1', \ldots, z_P'\}$ . In summary, $P@K$ and $R@K$ can be described by the following equations,

$$
P@K\{z_{1}^{\prime} \ldots z_{P}^{\prime}\} = \frac{1}{K} \sum_{i \leq K} z_{i}^{\prime} \tag{1}
$$

$$
R@K\{z_{1}^{\prime} \ldots z_{P}^{\prime}\} = \frac{\sum_{i \leq K} z_{i}^{\prime}}{\sum_{i \leq P} z_{i}^{\prime}} \tag{2}
$$

Held-out evaluation replaces $z'$ with $y'$ to compute $P@K$ and $R@K$ , which obviously leads to incorrect results.

# 4 Methodology

In this section, we present the general framework of our method. A small randomly sampled set is vetted in the initial state. Each iteration then has two steps: 1) select a batch of entity pairs with a customized vetting strategy, label them manually, and add them to the vetted set; 2) use a new metric estimator to evaluate existing models with the noisy set and the vetted set jointly. After a few vetting-evaluating iterations, the unbiased performance of relation extraction is appropriately estimated. In summary, our method consists of two key components: a vetting strategy and a metric estimator.

# 4.1 Metric Estimator

Our test set consists of two parts: 1) a noisy set $U$ in which we only know the automatic label $y_{i}^{\prime}$ ; 2) a vetted set $V$ in which we know both the automatic label $y_{i}^{\prime}$ and the manual label $\tilde{z}_{i}^{\prime}$ . We treat the true label $z_{i}^{\prime}$ as a latent variable and $\tilde{z}_{i}^{\prime}$ as its observed value. The performance evaluation mainly depends on the estimation of $z_{i}^{\prime}$ .
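Before turning to the estimator, the held-out metrics of Equations (1) and (2) amount to sorting predictions by confident score and counting positives in the top $K$. A minimal sketch (function and variable names are ours, not from the paper):

```python
def precision_recall_at_k(scores, labels, k):
    """Compute P@K and R@K (Equations 1-2) from per-prediction confident
    scores and 0/1 labels; assumes at least one positive label exists."""
    # Sort labels by descending confident score, as in the paper.
    ranked = [z for _, z in sorted(zip(scores, labels), key=lambda t: -t[0])]
    hits = sum(ranked[:k])        # correct predictions among the top K
    p_at_k = hits / k
    r_at_k = hits / sum(ranked)   # recall normalizes by all positives
    return p_at_k, r_at_k
```

Held-out evaluation passes the noisy labels $y'$ as `labels`, while human evaluation passes the true labels $z'$; that substitution is what produces the gaps reported in Table 1.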
In our work, we estimate the probability as

$$
p(z_{i}^{\prime}) = \prod_{i \in U} p(z_{i}^{\prime} \mid \Theta) \prod_{i \in V} \delta(z_{i}^{\prime} = \tilde{z}_{i}^{\prime}) \tag{3}
$$

where $\Theta$ represents all available elements such as the confident score, noisy labels and so on. We make the assumption that the distribution of true latent labels is conditioned on $\Theta$ .

Given posterior estimates $p(z_i^\prime \mid \Theta)$ , we can compute the expected performance by replacing the true latent label by its probability. Then, the precision and recall equations can be rewritten as

$$
E[P@K] = \frac{1}{K} \left( \sum_{i \in V_K} \tilde{z}_{i}^{\prime} + \sum_{i \in U_K} p(z_{i}^{\prime} = 1 \mid \Theta) \right) \tag{4}
$$

$$
E[R@K] = \frac{\sum_{i \in V_K} \tilde{z}_{i}^{\prime} + \sum_{i \in U_K} p(z_{i}^{\prime} = 1 \mid \Theta)}{\sum_{i \in V} \tilde{z}_{i}^{\prime} + \sum_{i \in U} p(z_{i}^{\prime} = 1 \mid \Theta)} \tag{5}
$$

where $U_K$ and $V_K$ denote the unvetted and vetted subsets of the $K$ highest-scoring examples in the total set $U \cup V$ .

To predict the true latent label $z_{i}^{\prime}$ for a specific relation, we use the noisy label $y_{i}^{\prime}$ and the confident score $s_{i}^{\prime}$ . This posterior probability can be derived as (see appendix for proof)

$$
p(z_{i}^{\prime} \mid y_{i}^{\prime}, s_{i}^{\prime}) = \frac{p(y_{jk} \mid z_{jk}) \, p(z_{jk} \mid s_{jk})}{\sum_{v} p(y_{jk} \mid z_{jk} = v) \, p(z_{jk} = v \mid s_{jk})} \tag{6}
$$

where $v \in \{0, 1\}$ , and $s_{jk}, y_{jk}, z_{jk}$ are the corresponding elements of $s_i', y_i', z_i'$ before sorting the confident scores. Given a few vetted data, we fit $p(y_{jk} \mid z_{jk})$ by standard maximum likelihood estimation (counting frequencies).
$p(z_{jk} \mid s_{jk})$ is fitted using logistic regression. For each relation, there is a specific logistic regression function to fit.

# 4.2 Vetting Strategy

In this work, we apply a strategy based on maximum expected model change (MEMC) (Settles, 2009). The vetting strategy is to select the sample that yields the largest expected change of the performance estimate. Let $E_{p(z'|V)}Q$ be the expected performance based on the distribution $p(z'|V)$ estimated from the current vetted set $V$ . After vetting example $i$ and updating that estimator, it becomes $E_{p(z'|V,z_i')}Q$ . The change caused by vetting example $i$ can be written as

$$
\Delta_{i}(z_{i}^{\prime}) = \left| E_{p(z^{\prime}|V)} Q - E_{p(z^{\prime}|V, z_{i}^{\prime})} Q \right| \tag{7}
$$

For precision at top K, this expected change can be written as

$$
E_{p(z_{i}^{\prime} \mid V)}\left[ \Delta_{i}(z_{i}^{\prime}) \right] = \frac{2}{K} p_{i} (1 - p_{i}) \tag{8}
$$

where $p_i = P(z_i' = 1 \mid \Theta)$ . For the PR curve, every point depends on $P@K$ for different $K$ , so this vetting strategy is also useful for the PR curve.

With this vetting strategy, the most valuable data is always selected first. Therefore, the vetting budget is the only factor controlling the vetting procedure. In this approach, we take it as a hyperparameter. When the budget is used up, the vetting stops. The procedure is described in Algorithm 1.
# Algorithm 1 Active Testing Algorithm

Require: unvetted set $U$ , vetted set $V$ , vetting budget $T$ , vetting strategy $VS$ , confident score $S$ , estimator $p(z^{\prime})$

1: while $T > 0$ do
2: select a batch of items $B \subseteq U$ with vetting strategy $VS$
3: vet $B$ and get manual labels $\tilde{z}^{\prime}$
4: $U = U - B$ , $V = V \cup B$
5: fit $p(z')$ with $U, V, S$
6: $T = T - |B|$
7: end while

# 5 Experiment

We conduct extensive experiments to support our claims: 1) the proposed active testing is able to obtain more accurate results by introducing very few manual annotations; 2) held-out evaluation misdirects the optimization of relation extraction, which is further demonstrated through the re-evaluation of eight up-to-date relation extractors.

# 5.1 Experimental Setting

Dataset. Our experiments are conducted on the widely used benchmark NYT-10 (Riedel et al., 2010) and an accurately labeled dataset named NYT-19, which contains 500 randomly selected entity pairs from the test set of NYT-10. It contains 106 positive entity pairs and 394 negative entity pairs, among which 35 entity pairs are false negatives. NYT-19 has been well labeled by NLP researchers.

Initialization. We use PCNN+ATT (Lin et al., 2016) as the baseline relation extractor. To be more convincing, we also provide the experimental results of BGRU+ATT in the appendix. The initial state of the vetted set includes all the positive entity pairs of the test set in NYT-10 and 150 vetted negative entity pairs. The batch size for vetting is 20 and the vetting budget is set to 100 entity pairs.

# 5.2 Effect of Active Testing

We evaluate the performance of PCNN+ATT with held-out evaluation, human evaluation and our method. The results are shown in Table 2 and Figure 1. Due to the high cost of manually labeling the whole test set, we use the PR curve on NYT-19 to simulate that on NYT-10.
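To make the vetting-estimating loop concrete, the following sketch combines the posterior of Equation (6), the estimator of Equation (4), and the MEMC selection of Equation (8). It is a simplified illustration rather than the paper's implementation: we assume $p(z = 1 \mid s) = s$ instead of fitting a per-relation logistic regression, and all names are ours.

```python
def posterior(y, s, p_y_given_z):
    # Equation 6, with p(z=1|s) = s assumed for illustration; the paper
    # instead fits a per-relation logistic regression for p(z|s).
    num = p_y_given_z[(y, 1)] * s
    return num / (num + p_y_given_z[(y, 0)] * (1.0 - s))

def expected_p_at_k(items, k, p_y_given_z):
    # Equation 4: vetted items contribute their manual label z~,
    # unvetted items contribute their posterior p(z=1 | y, s).
    top = sorted(items, key=lambda it: -it["s"])[:k]
    return sum(it["z"] if it["vetted"] else posterior(it["y"], it["s"], p_y_given_z)
               for it in top) / k

def active_testing(items, k, budget, batch_size, p_y_given_z):
    # Algorithm 1 with the MEMC strategy: vet the unvetted items whose
    # expected change (Equation 8, proportional to p(1-p)) is largest.
    while budget > 0:
        unvetted = [it for it in items if not it["vetted"]]
        if not unvetted:
            break
        def memc(it):
            p = posterior(it["y"], it["s"], p_y_given_z)
            return p * (1.0 - p)
        unvetted.sort(key=memc, reverse=True)
        for it in unvetted[:min(batch_size, budget)]:
            it["vetted"] = True   # manual annotation reveals it["z"]
            budget -= 1
    return expected_p_at_k(items, k, p_y_given_z)
```

With a budget large enough to vet everything, the estimate reduces to the exact $P@K$; with a small budget, unvetted items fall back to their posterior probabilities.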
| Model | Evaluations | P@100 | P@200 | P@300 |
| --- | --- | --- | --- | --- |
| PCNN+ATT | Held-out Evaluation | 83 | 77 | 69 |
| | Our method | 91.2 | 88.4 | 83.4 |
| | Human Evaluation | 93 | 92.5 | 91 |
Table 2: The Precision at top K predictions $(\%)$ of PCNN+ATT under held-out evaluation, our method and human evaluation on NYT-10.

![](images/2b3808c1c66d22410e717dea6a37d2b05e7c1deee142e4bde1d2c7c292d1b90e.jpg)
Figure 1: The PR curve of PCNN+ATT on NYT-19.

To measure the distance between two curves, we sample 20 equidistant points on each curve and calculate the Euclidean distance of the two vectors. In this way, our method obtains a distance of 0.17 to the curve of human evaluation, while the corresponding distance for held-out evaluation is 0.72. We can observe that 1) the performance biases between manual evaluation and held-out evaluation are too significant to be neglected; 2) the huge biases caused by wrongly labeled instances are dramatically alleviated by our method. Our method obtains at least $8.2\%$ closer precision to manual evaluation than held-out evaluation.

# 5.3 Effect of Vetting Strategy

We compare our MEMC strategy with a random vetting strategy, as shown in Figure 2. The distances from the curves of the MEMC and random vetting strategies to that of human evaluation are 0.176 and 0.284, respectively. From the figure, we can conclude that the proposed vetting strategy is much more effective than the random vetting strategy. With the same vetting budget, MEMC obtains a more accurate performance estimate over most of the range.

![](images/cfc5bb2181f250373ef5ed06895689efd0cf7e2aa5e3e6c8cd393cb7dac05f1d.jpg)
Figure 2: The PR curves of PCNN+ATT evaluated with various vetting strategies on NYT-19.

# 5.4 Re-evaluation of Relation Extractors

With the proposed performance estimator, we re-evaluate eight up-to-date distantly supervised relation extractors.
| Model | P@100 (%) | P@200 (%) | P@300 (%) |
| --- | --- | --- | --- |
| Zeng et al. 2015 | 88.0 | 85.1 | 82.3 |
| Lin et al. 2016 | 91.2 | 88.9 | 83.8 |
| Liu et al. 2017 | 94.0 | 89.0 | 87.0 |
| Qin et al. 2018b | 88.8 | 86.2 | 84.8 |
| Qin et al. 2018a | 87.0 | 83.8 | 80.8 |
| Liu et al. 2018 | 95.7 | 93.4 | 89.9 |
| BGRU | 94.4 | 89.5 | 84.7 |
| BGRU+ATT | 95.1 | 90.1 | 87.1 |
Table 3: The P@N precision of distantly supervised relation extractors on NYT-10. All the methods are implemented with the same framework and run in the same run-time environment.

From Table 3, we can observe that: 1) The relative ranking of the models according to precision at top $K$ remains almost the same, except for Qin et al. 2018b and Qin et al. 2018a; although GANs and reinforcement learning are helpful for selecting valuable training instances, they tend to overfit. 2) Most models achieve the improvements claimed in their papers in the high confident score interval. 3) BGRU performs better than the other base models, while the BGRU-based method of Liu et al. 2018 achieves the highest precision. More results and discussions can be found in the Appendix.

# 6 Conclusion

In this paper, we propose a novel active testing approach for distantly supervised relation extraction, which evaluates the performance of relation extractors with both noisy data and a few vetted data. Our experiments show that the proposed evaluation method is approximately unbiased and valuable for the future optimization of distantly supervised relation extraction.

# Acknowledgements

This work is partially supported by Chinese National Research Fund (NSFC) Key Project No. 61532013 and No. 61872239; BNU-UIC Institute of Artificial Intelligence and Future Networks funded by Beijing Normal University (Zhuhai) and AI and Data Science Hub, BNU-HKBU United International College (UIC), Zhuhai, Guangdong, China.

# References

Christoph Alt, Marc Hübner, and Leonhard Hennig. 2019. Fine-tuning pre-trained transformer language models to distantly supervised relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1388-1398.

Gabor Angeli, Julie Tibshirani, Jean Wu, and Christopher D Manning. 2014. Combining distant and partial supervision for relation extraction.
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1556-1567.

Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 601-610.

Jun Feng, Minlie Huang, Li Zhao, Yang Yang, and Xiaoyan Zhu. 2018. Reinforcement learning for relation classification from noisy data. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI), pages 5779-5786.

Xianpei Han and Le Sun. 2016. Global distant supervision for relation extraction. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI), pages 2950-2956.

Xu Han, Pengfei Yu, Zhiyuan Liu, Maosong Sun, and Peng Li. 2018. Hierarchical relation extraction with coarse-to-fine grained attention. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2236-2245.

Yuyun Huang and Jinhua Du. 2019. Self-attention enhanced CNNs and collaborative curriculum learning for distantly supervised relation extraction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 389-398.

Pengshuai Li, Xinsong Zhang, Weijia Jia, and Hai Zhao. 2019. GAN driven semi-distant supervision for relation extraction. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3026-3035.

Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2017. Neural relation extraction with multi-lingual attention. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 34-43.

Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 2124-2133.

Tianyi Liu, Xinsong Zhang, Wanhao Zhou, and Weijia Jia. 2018. Neural relation extraction via inner-sentence noise reduction and transfer learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2195-2204.

Tianyu Liu, Kexiang Wang, Baobao Chang, and Zhifang Sui. 2017. A soft-label method for noise-tolerant distantly supervised relation extraction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1790-1795.

Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the AFNLP (ACL-IJCNLP), pages 1003-1011.

Phuc Xuan Nguyen, Deva Ramanan, and Charless C. Fowlkes. 2018. Active testing: An efficient and robust framework for estimating accuracy. In ICML, pages 3759-3768.

Pengda Qin, Weiran Xu, and William Yang Wang. 2018a. DSGAN: Generative adversarial training for distant supervision relation extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 496-505.

Pengda Qin, Weiran Xu, and William Yang Wang. 2018b. Robust distant supervision relation extraction via deep reinforcement learning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 2137-2147.

Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010.
Modeling relations and their mentions without labeled text. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML-PKDD), pages 148-163.

Burr Settles. 2009. Active learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences.

Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 455-465.

Shingo Takamatsu, Issei Sato, and Hiroshi Nakagawa. 2012. Reducing wrong labels in distant supervision for relation extraction. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL), pages 721-729.

Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1753-1762.

Xinsong Zhang, Pengshuai Li, Weijia Jia, and Hai Zhao. 2019. Multi-labeled relation extraction with attentive capsule network. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI), pages 3243-3249.

# A Appendices

# A.1 Logistic Regression

Here we provide the derivation of Equation 6 in the main paper.
$$
\begin{aligned} p(z_{i}^{\prime} \mid y_{i}^{\prime}, s_{i}^{\prime}) &= \frac{p(z_{i}^{\prime}, y_{i}^{\prime}, s_{i}^{\prime})}{\sum_{v} p(z_{i}^{\prime} = v, y_{i}^{\prime}, s_{i}^{\prime})} \\ &= \frac{p(z_{jk}, y_{jk}, s_{jk})}{\sum_{v} p(z_{jk} = v, y_{jk}, s_{jk})} \\ &= \frac{p(y_{jk} \mid z_{jk}, s_{jk}) \, p(z_{jk} \mid s_{jk})}{\sum_{v} p(y_{jk} \mid z_{jk} = v, s_{jk}) \, p(z_{jk} = v \mid s_{jk})} \end{aligned}
$$

We assume that given $z_{jk}$ , the observed label $y_{jk}$ is conditionally independent of $s_{jk}$ , which means $p(y_{jk} \mid z_{jk}, s_{jk}) = p(y_{jk} \mid z_{jk})$ . The expression simplifies to:

$$
p(z_{i}^{\prime} \mid y_{i}^{\prime}, s_{i}^{\prime}) = \frac{p(y_{jk} \mid z_{jk}) \, p(z_{jk} \mid s_{jk})}{\sum_{v} p(y_{jk} \mid z_{jk} = v) \, p(z_{jk} = v \mid s_{jk})}
$$

# A.2 Vetting Strategy

Here we provide the derivation of Equation 8 in the main paper.

$$
\begin{aligned} E_{p(z_{i}^{\prime} \mid V)}\left[ \Delta_{i}(z_{i}^{\prime}) \right] &= p_{i} \frac{1}{K} \left| 1 - p_{i} \right| + (1 - p_{i}) \frac{1}{K} \left| 0 - p_{i} \right| \\ &= \frac{2}{K} p_{i} (1 - p_{i}) \end{aligned}
$$
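The expected-change derivation above can also be checked numerically by enumerating the two possible vetting outcomes $z' \in \{0, 1\}$. A small sketch (ours, for illustration) compares the enumeration with the closed form of Equation 8:

```python
def expected_change_enumerated(p, k):
    # Vetting an item in the top K replaces its posterior p by the observed
    # label z' in {0, 1}, moving the P@K estimate by |z' - p| / K. Weight
    # the two outcomes by their probability under the current estimator.
    return p * abs(1.0 - p) / k + (1.0 - p) * abs(0.0 - p) / k

def expected_change_closed_form(p, k):
    # The closed form of Equation 8: (2/K) * p * (1 - p).
    return 2.0 * p * (1.0 - p) / k
```

Both expressions peak at $p = 0.5$ and vanish at $p \in \{0, 1\}$, which is why MEMC prefers the most uncertain entity pairs.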
| Model | Evaluations | P@100 | P@200 | P@300 |
| --- | --- | --- | --- | --- |
| BGRU+ATT | Held-out Evaluation | 82 | 78.5 | 74.3 |
| | Our method | 95.2 | 90.1 | 87.1 |
| | Human Evaluation | 98 | 96 | 95 |
Table 4: The Precision at top K predictions $(\%)$ of BGRU+ATT under held-out evaluation, our method and human evaluation on NYT-10.

![](images/17beb0430d93fe93d946135413bedf09d1e4cfe616d8cb4b44422fc2eb216352.jpg)
Figure 3: The PR curve of BGRU+ATT on NYT-19.

# A.3 Experimental result of BGRU+ATT

We also evaluate the performance of BGRU+ATT with held-out evaluation, human evaluation and our method. The results are shown in Table 4 and Figure 3. Our method obtains a distance of 0.15 to the curve of human evaluation, while the corresponding distance for held-out evaluation is 0.55.

# A.4 The result of different iterations

We record in Figure 4 the distance, at different iterations, between the curves obtained by our method and manual evaluation. We can observe that the evaluation results obtained by our method become closer to human evaluation while the number of annotated entity pairs is less than 100. When the number exceeds 100, the distance no longer drops rapidly but begins to fluctuate.

![](images/464a7c96bf28ab85d1aa3e3b380ac3433dbf64bb5ce5595fa2dd802bb6693e4e.jpg)
Figure 4: The result of different iterations for the active testing algorithm with PCNN+ATT and BGRU+ATT.

# B Case Study

We present realistic cases from NYT-10 to show the effectiveness of our method. In Figure 6, all cases are selected from the top 300 predictions of PCNN+ATT. These instances are all negative instances and have the automatic label $NA$ in NYT-10. In held-out evaluation, relation predictions for these instances are judged as wrong. However, part of them are in fact false negative instances and have the corresponding relations, which causes considerable biases between manual and held-out evaluation. In our approach, those relation predictions for false negative instances are given a high probability of being corrected. At the same time, true negative instances are accurately identified and given a low (near zero) probability.
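The curve-distance measure used in Section 5.2 and Appendix A.3 (20 equidistant samples per curve, then the Euclidean distance of the two resulting vectors) can be sketched as follows. The paper does not specify the sampling details, so this sketch assumes the points are spaced equidistantly along the recall axis and that precision is linearly interpolated between curve points; all names are ours.

```python
import math

def curve_distance(curve_a, curve_b, n_points=20):
    """Distance between two PR curves given as (recall, precision) pairs
    sorted by increasing recall."""
    def interp(curve, r):
        # Linear interpolation of precision at recall r.
        for (r0, p0), (r1, p1) in zip(curve, curve[1:]):
            if r0 <= r <= r1:
                w = 0.0 if r1 == r0 else (r - r0) / (r1 - r0)
                return p0 + w * (p1 - p0)
        return curve[-1][1]   # fall back to the last point
    # Sample over the recall range covered by both curves.
    lo = max(curve_a[0][0], curve_b[0][0])
    hi = min(curve_a[-1][0], curve_b[-1][0])
    rs = [lo + (hi - lo) * i / (n_points - 1) for i in range(n_points)]
    return math.sqrt(sum((interp(curve_a, r) - interp(curve_b, r)) ** 2
                         for r in rs))
```

Under this reading, identical curves are at distance 0, and the distances 0.17/0.72 (PCNN+ATT) and 0.15/0.55 (BGRU+ATT) reported in the paper compare each evaluation method's curve against the human-evaluation curve.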
# C Re-evaluation Discussion

Detailed descriptions and discussions of the re-evaluation experiments are provided in this section.

# C.1 Models

PCNN (Zeng et al., 2015) is the first neural method used in distant supervision without human-designed features.

PCNN+ATT (Lin et al., 2016) further integrates a selective attention mechanism to alleviate the influence of wrongly labeled instances. The selective attention mechanism generates attention weights over multiple instances, which is expected to reduce the weights of noisy instances dynamically.

PCNN+ATT+SL (Liu et al., 2017) is a development of PCNN+ATT. To correct wrong labels at the entity-pair level during training, the labels of entity pairs are dynamically changed according to the confident score of the predicted labels. Clearly, this method highly depends on the quality of the label generator, which has great potential to overfit.

PCNN+ATT+RL (Qin et al., 2018b) adopts reinforcement learning to overcome the wrong labeling problem in distant supervision. A deep reinforcement learning agent is designed to choose correctly labeled instances based on the performance change of the relation classifier. After that, PCNN+ATT is applied to the filtered data for relation classification.

PCNN+ATT+DSGAN (Qin et al., 2018a) is an adversarial training framework that learns a sentence-level true-positive generator. The positive samples produced by the generator are labeled as negative to train the generator, and the optimal generator is obtained when the discriminator cannot differentiate them. The generator is then adopted to filter the distant supervision training dataset, and PCNN+ATT is applied to do relation extraction on the new dataset.

BGRU is a recurrent neural network that can effectively extract global sequence information. It is a powerful fundamental model widely used in natural language processing tasks.

BGRU+ATT is a combination of BGRU and the selective attention.
STPRE (Liu et al., 2018) extracts relation features with BGRU. To reduce inner-sentence noise, the authors utilize a Sub-Tree Parse (STP) method to remove irrelevant words. Furthermore, model parameters are initialized with prior knowledge learned from an entity type prediction task by transfer learning.

![](images/806b545d81b19684384b563b93dcf282a9c961da15618504059efe472d3b452d.jpg)
Figure 5: PR curve of distantly supervised relation extractors on NYT-10 with the proposed active testing.

# C.2 Discussion

In this section, we additionally provide PR curves to show the performance of the baselines. From both Table 3 and Figure 5, we observe that: 1) The relative ranking according to the PR curve is quite different from that under held-out evaluation. 2) The selective attention has limited help in improving the overall performance, even though it may have positive effects at high confident scores. 3) The soft-label method greatly improves the accuracy at high confident scores but significantly reduces the overall performance. We deduce that it is severely
| Instances | Sentence | Real Label | Prediction | Probability |
| --- | --- | --- | --- | --- |
| false negative | He renewed that call four years ago in a document jointly written with Ami Ayalon, a former chief of Israel's shin bet security agency and a leader of the labor party. | /person/nationality | /person/nationality | 1.0 (vetted) |
| | But, if so, you probably would not be familiar with the town of Ramapo in Rockland County. | /location/contain | /location/contain | 0.842 |
| | Mr. voulgaris lives in oyster bay but has summered on shelter island since he was a child growing up in Huntington in western Suffolk County. | /location/contain | /location/contain | 0.837 |
| true negative | His visit opened a new level of debate in Israel about the possibility of negotiations with the Syrian president, Bashar Al-Assad. | NA | /person/nationality | 0.0 (vetted) |
| | They are in the united states, the United Kingdom and Canada, among other places, but not in the Jewish settlements of the west bank. | NA | /administrative_division/country | 0.0 |
| | Mr. spielberg and stacey snider, the former Universal Pictures studio chairman who joined DreamWorks last year as chief executive, have sole authority to greenlight films that cost $ 85 million or less. | NA | /person/company | 0.088 |
Figure 6: A case study of the active testing approach for distantly supervised relation extraction. The entities are labeled in red. $1.0(\text{vetted})$ and $0.0(\text{vetted})$ mean that the entity pair is vetted in our method.

affected by the unbalanced instance numbers of different relations, which makes the label generator over-fit to frequent labels. 4) In terms of the overall performance indicated by the PR curves, BGRU is the most solid relation extractor. \ No newline at end of file diff --git a/activetestinganunbiasedevaluationmethodfordistantlysupervisedrelationextraction/images.zip b/activetestinganunbiasedevaluationmethodfordistantlysupervisedrelationextraction/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..97f96dd6a7a86cc75223afaa31277914fe2ad4ab --- /dev/null +++ b/activetestinganunbiasedevaluationmethodfordistantlysupervisedrelationextraction/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c666bd6c57581fe3a76ec4d54b6a757ccecac90b9e9bb1cd6090ae809b476ced +size 398639 diff --git a/activetestinganunbiasedevaluationmethodfordistantlysupervisedrelationextraction/layout.json b/activetestinganunbiasedevaluationmethodfordistantlysupervisedrelationextraction/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..72d47b95cf8f9066fbd35c3506da0ea202ea74e0 --- /dev/null +++ b/activetestinganunbiasedevaluationmethodfordistantlysupervisedrelationextraction/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6982e4bc021a7914592f07b657fc26d5d77dbed80188f15130d5c84283b4d361 +size 304243 diff --git a/actordoublecriticincorporatingmodelbasedcriticfortaskorienteddialoguesystems/ea6019ac-bb6b-406e-aca5-4d658eddd193_content_list.json b/actordoublecriticincorporatingmodelbasedcriticfortaskorienteddialoguesystems/ea6019ac-bb6b-406e-aca5-4d658eddd193_content_list.json new file mode 100644 index
0000000000000000000000000000000000000000..2a0a2885cafc2d7854e79a7850f5bd879fd6e450 --- /dev/null +++ b/actordoublecriticincorporatingmodelbasedcriticfortaskorienteddialoguesystems/ea6019ac-bb6b-406e-aca5-4d658eddd193_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:738d5f21aadc6dccdb898dbdf973d56de9aafa0425a36c1144d74192ddefc16c +size 65737 diff --git a/actordoublecriticincorporatingmodelbasedcriticfortaskorienteddialoguesystems/ea6019ac-bb6b-406e-aca5-4d658eddd193_model.json b/actordoublecriticincorporatingmodelbasedcriticfortaskorienteddialoguesystems/ea6019ac-bb6b-406e-aca5-4d658eddd193_model.json new file mode 100644 index 0000000000000000000000000000000000000000..3128d32acd06e87acc09f952f0a7422ea69ade21 --- /dev/null +++ b/actordoublecriticincorporatingmodelbasedcriticfortaskorienteddialoguesystems/ea6019ac-bb6b-406e-aca5-4d658eddd193_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c70eded82ad25000f3c9644a454aba2b2f993e6aaa197fafa9c01e9d7962876a +size 82462 diff --git a/actordoublecriticincorporatingmodelbasedcriticfortaskorienteddialoguesystems/ea6019ac-bb6b-406e-aca5-4d658eddd193_origin.pdf b/actordoublecriticincorporatingmodelbasedcriticfortaskorienteddialoguesystems/ea6019ac-bb6b-406e-aca5-4d658eddd193_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..62f3d17423f46a33fac86836d9126d2837382bb7 --- /dev/null +++ b/actordoublecriticincorporatingmodelbasedcriticfortaskorienteddialoguesystems/ea6019ac-bb6b-406e-aca5-4d658eddd193_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6167f20cb45502878f770a6edf1736829e45da619a631c429180f6e1e72fd40 +size 441475 diff --git a/actordoublecriticincorporatingmodelbasedcriticfortaskorienteddialoguesystems/full.md b/actordoublecriticincorporatingmodelbasedcriticfortaskorienteddialoguesystems/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..0078748eae6ce4464ed68d32c98aadb949418461 --- /dev/null +++ b/actordoublecriticincorporatingmodelbasedcriticfortaskorienteddialoguesystems/full.md @@ -0,0 +1,306 @@

# Actor-Double-Critic: Incorporating Model-Based Critic for Task-Oriented Dialogue Systems

Yen-Chen Wu $^{1}$ , Bo-Hsiang Tseng $^{1}$ , and Milica Gašić $^{2}$

1University of Cambridge, UK

{ycw30,bht26}@cam.ac.uk

2Heinrich Heine University Düsseldorf, Germany

gasic@uni-duesseldorf.de

# Abstract

In order to improve the sample-efficiency of deep reinforcement learning (DRL), we implemented the imagination augmented agent (I2A) in spoken dialogue systems (SDS). Although I2A achieves a higher success rate than the baselines by augmenting the predicted future into a policy network, its complicated architecture introduces unwanted instability. In this work, we propose actor-double-critic (ADC) to improve the stability and overall performance of I2A. ADC simplifies the architecture of I2A to reduce excessive parameters and hyperparameters. More importantly, a separate model-based critic shares parameters between actions and makes back-propagation explicit. In our experiments on the Cambridge Restaurant Booking task, ADC enhances success rates considerably and shows robustness to imperfect environment models. In addition, ADC exhibits stability and sample-efficiency, significantly reducing the baseline standard deviation of success rates and reaching the $80\%$ success rate with half the training data.

# 1 Introduction

Spoken Dialogue Systems (SDS) enable human-computer interaction via natural language. The core of SDS, dialogue management, can be formulated as an RL problem (Levin et al., 1997; Young et al., 2013; Williams, 2008). Great advancements can be achieved with deep RL algorithms (Dhingra et al., 2016; Chang et al., 2017; Budzianowski et al., 2017; Casanueva et al., 2017; Liu et al., 2018; Gao et al., 2018; Takanobu et al., 2019; Wu et al., 2020).
Yet, deep RL methods are notoriously expensive in terms of the number of interactions they require. Even relatively simple tasks can require thousands of labelled dialogues, and modelling complex behaviour, such as a multi-domain application, might need substantially more (Gašić et al., 2011; Li et al., 2016; Su et al., 2016).

Model-based reinforcement learning (MBRL) is one way of improving sample-efficiency in RL (Tamar et al., 2016; Silver et al., 2016; Gu et al., 2016; Nagabandi et al., 2018; Oh et al., 2017). By learning the environment model, we can predict the future states that follow a certain action. In a dialogue system, that means the system can predict the user's behaviour. In contrast, model-free RL algorithms only learn a mapping from belief states to Q-values and do not make use of the user behaviour patterns in the training data. In other words, model-free RL wastes interactions by going through similar transitions multiple times to get accurate return estimations.

Dyna-Q (Sutton, 1990; Sutton et al., 2012) has achieved some success in SDS (Peng et al., 2018; Su et al., 2018; Wu et al., 2019; Zhang et al., 2019) by generating training data for agents while continually improving its environment model from real interactions between agents and users. Nevertheless, the noisy data generated by inaccurate environment models could adversely affect the experience replay buffer and result in convergence toward suboptimal performance. This problem is even more critical in real-world tasks, such as real-world dialogue systems, where training a good environment model is challenging.

I2A (Weber et al., 2017) addresses this problem by augmenting model-based information into the input of the policy network in order to filter out the noise generated by poor environment models. However, I2A introduced unwanted instability when we applied it to a dialogue system, due to its complex architecture and excessive hyper-parameters.
The unstable performance makes it even harder to tune the parameters.

In this paper, we propose Actor-Double-Critic (ADC), a new architecture to augment model-based information into the policy network. By training two critics from model-free and model-based data separately and combining them in an ensemble, we reduce the number of redundant parameters and make back-propagation more efficient. In the Cambridge Restaurant dialogue system task, experimental results show a substantial improvement in success rates. Regarding sample efficiency, ADC takes only half of the baseline's training data to achieve the $80\%$ success rate. In addition, ADC is the most stable approach among all considered baselines. Compared to a model-free actor-critic algorithm, ACER (Wang et al., 2016), it reduces the standard deviation of success rates from 7.7 to 1.2. It also proves more stable than the Bayesian model-free algorithm GP-SARSA (Gasic et al., 2010).

![](images/94b929442a22d5fb621bd5694e76a51a3f4902a990fff94b69a698b3d5857703.jpg)
a)

![](images/2df7a85b968688e12f44115b9135a236d8eeb3d918bcb6a9955e0f772220c43f.jpg)
b)
Figure 1: ADC architecture. Green blocks indicate predicted belief states. a) The environment model predicts the belief state at the next time step, $b_{t+1,a_i}$ , conditioned on an action $a_i$ . b) The actor outputs the policy $\pi$ as in a standard actor-critic architecture. c) The two critics estimate $Q$ -values based on the current belief state and the predicted next belief states, respectively. The final $Q$ -values are the weighted sum of the outputs of the two critics. Note that the model-based critic predicts the $i$ -th $Q$ -value based on $b_{t+1,a_i}$ , so this process is repeated for all actions $a_i \in A$ to obtain all of the $Q$ -values.

![](images/b437e42dfcb675469081a6fc71d8cf8d9645d28f59cf8a9fb98f821cec3e07cc.jpg)
# 2 Dialogue management through reinforcement learning

Dialogue management can be cast as a continuous MDP (Young et al., 2013) composed of a continuous multivariate belief state space $B$ , a finite set of actions $A$ and a reward function $R(b_{t},a_{t})$ . The belief state $b$ is a probability distribution over all possible (discrete) states. At a given time $t$ , the agent (policy) observes the belief state $b_{t}\in B$ and executes an action $a_{t}\in A$ . The agent then receives a reward $r_t$ drawn from $R(b_{t},a_{t})$ . The policy $\pi$ is defined as a function $\pi :B\times A\to [0,1]$ that with probability $\pi (b,a)$ takes an action $a$ in a state $b$ . For any policy $\pi$ and $b\in B$ , the value function $V^{\pi}$ corresponding to $\pi$ is defined as:

$$
V ^ {\pi} (b) = \mathbb {E} \left\{r _ {t} + \gamma r _ {t + 1} + \dots | b _ {t} = b, \pi \right\} \tag {1}
$$

where $0 \leq \gamma \leq 1$ is a discount factor and $r_t$ is a one-step reward. The objective of reinforcement learning is to find an optimal policy $\pi^{*}$ , i.e. a policy that maximises the value function in each belief state. Equivalently, the goal is to find an optimal policy $\pi^{*}$ that maximises the discounted total return

$$
R = \sum_ {t = 0} ^ {T - 1} \gamma^ {t} r _ {t} \left(b _ {t}, a _ {t}\right) \tag {2}
$$

over a dialogue with $T$ turns, where $r_t(b_t, a_t)$ is the reward when taking action $a_t$ in dialogue state $b_t$ at turn $t$ and $\gamma$ is the discount factor.

# 3 Imagination Augmented Agent (I2A)

I2A (Weber et al., 2017) manages to implicitly incorporate all the possible future information into the policy network. It can be divided into three levels:

- Imagination core. An environment model is trained to predict future states and rewards conditioned on an action. By interacting with a baseline actor, the environment model is used to simulate potential trajectories.
- Single imagination roll-out.
To efficiently use these simulated trajectories, the agent learns an encoder that extracts information from these imaginations, including both states and rewards. The encoder is designed to select useful information and ignore the noisy information generated by imperfect models.
- Augmentative architecture. For each possible action, the simulated trajectories are generated. All the information extracted from the trajectories is concatenated together and provided as additional context to a policy network.

However, we found that I2A's hierarchical architecture is not stable enough when applied to SDS tasks. This architecture contains several fragile components that have a strong impact on performance, such as the environment model and the roll-out policy network. Excessive hyperparameters, such as the roll-out depth and the embedded feature sizes for the encoder, also make it hard to conduct parameter tuning and to apply I2A to real-world applications.

# 4 Actor-Double-Critic (ADC)

To increase the stability of the augmenting-style approaches, we simplify the previous architecture and propose a key component, the model-based critic. As illustrated in Figure 1, we train two critics based on model-free and model-based information respectively and combine their outputs by a weighted sum in an ensemble.

In this section, we explain why we simplify the architecture in these ways and the benefits of using a model-based critic.

# 4.1 Simplified architecture

To reduce the model complexity, we simplify the architecture in the following three ways:

- Our environment model predicts only the next belief state $b_{t+1, a_i}$ conditioned on an action $a_i$ : the model does not predict rewards. That is because the reward signals in the SDS domain are sparse and hard to predict.
- In I2A, the pre-trained environment model is not updated while learning the policy, since the policy network is robust to imperfect models.
Besides, obtaining pre-training data is not challenging in a simulated game; in the real world, however, pre-training data for SDS is hard to collect. In our approach, in order to improve the sample efficiency, the environment model is updated during policy learning.
- We discard the roll-out policy network. Because the policy keeps changing, the predicted action sequences change as well. Since we aim to reduce the uncertainties in our framework, the roll-out length is set to 1 and no roll-out policy network is used.

# 4.2 Model-based critic

By definition, a $Q$ -value can be decomposed as:

$$
Q _ {i} ^ {\pi} \left(b _ {t}, a _ {i}\right) = r _ {t} + \gamma V \left(b _ {t + 1, a _ {i}}\right) \tag {3}
$$

In dialogue system tasks, $r_t$ is typically set to $-1$ for each turn to penalize lengthy dialogues, as in our experimental setting. At the end of a dialogue, $r_t$ varies depending on the result, yet we do not need to predict $Q$ -values at that time. Hence, $r_t$ is a constant in Equation 3 for dialogue system tasks. Given that $r_t$ and $\gamma$ are constants, we can train an estimator for $Q_i^\pi (b_t)$ based on the next belief state $b_{t + 1,a_i}$ , which is predicted by the environment model.

We call this estimator the model-based critic in the actor-critic framework, while the original one is the model-free critic. Compared to previous approaches, adopting the model-based critic has the following three benefits:

# 4.2.1 Parameter sharing

Note that given $b_{t+1,a_i}$ , the model-based critic of ADC predicts only one value $Q_i$ . To obtain all of the $Q$ -values, we first predict the next belief states $b_{t+1,a_i} \forall a_i \in A$ using the environment model, and then map each of them to $Q_i$ with the model-based critic. Parameters of the model-based critic are shared across actions, and the model complexity is reduced.

In I2A, $b_{t+1,a_i} \forall a_i \in A$ are concatenated as a large input vector.
This means the number of parameters of the model-based path of I2A grows with the number of actions, which is not the case in ADC. In practice, the number of parameters in I2A (1.4 million) is around five times that of ADC (240 thousand).

# 4.2.2 No redundant connections

As shown in Equation 3, $Q_{i}$ does not depend on the other predicted belief states $b_{t+1,a_j}$ where $i \neq j$ ; $Q_{i}$ results only from the predicted belief state $b_{t+1,a_i}$ . But I2A concatenates all of the predicted belief states and the current belief state together to predict the $Q$ -values. That is, most of the connections in I2A should be updated to zero weights after training. Using the model-based critic eliminates these redundant connections and predicts one $Q_{i}$ at a time, improving the stability of the algorithm.

```txt
Algorithm 1: Actor-Double-Critic for Dialogue Policy Learning
Input: total training epochs N, the environment model E with
       parameters θ_E, the model-based critic MB, the model-free
       critic MF, the actor (policy network) P with parameters θ_P,
       the experience replay D

 1: pre-train E with pre-collected conversational data
 2: for n = 1 : N do
        // Reinforcement learning
 3:     while s is not a terminal state do
 4:         predict b_{t+1,a_i} for all a_i in A using E
 5:         predict Q_MB using MB
 6:         compute Q(b_t, a_i) by Eq. 4
 7:         with probability ε select a random action a,
            otherwise select a = argmax_{a'} P(b, a')
 8:         execute a, and observe the next belief state b' and reward r
 9:         update the dialogue state to b'
10:         store (b, a, r, b') in D
11:     end
12:     sample random minibatches of (b, a, r, b') from D
13:     update θ_MF, θ_MB via minibatch Q-learning according to Eq. 4, 5
14:     update θ_P according to ACER or another actor-critic algorithm
        // Environment model learning
15:     sample random minibatches of training samples (b, a, r, b') from D
16:     update θ_E via minibatch SGD of multi-task learning
17: end
```
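One decision step of ADC (the action-selection loop of Algorithm 1, with the ensemble of Eq. 4) can be sketched as follows. This is a minimal illustration with hypothetical linear stand-ins for the environment model and the two critics; the variable names and toy sizes are assumptions for illustration, not the paper's neural-network implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

N_BELIEF, N_ACTIONS, W = 8, 4, 0.5  # toy sizes and ensemble weight (illustrative)

# Hypothetical linear stand-ins for the learned components.
env_model = rng.normal(size=(N_ACTIONS, N_BELIEF, N_BELIEF))  # one transition map per action
mf_critic = rng.normal(size=(N_BELIEF, N_ACTIONS))  # model-free critic: b_t -> Q for every action
mb_critic = rng.normal(size=N_BELIEF)  # model-based critic: b_{t+1} -> one Q (shared across actions)

def adc_q_values(b_t):
    """Ensemble Q-values of Eq. 4: Q = w * Q_MF(b_t, a_i) + (1 - w) * Q_MB(b_{t+1,a_i})."""
    q_mf = b_t @ mf_critic  # model-free path: all Q-values from the current belief state
    # Model-based path: predict b_{t+1,a_i} for every action, then apply the
    # *same* model-based critic to each prediction (parameter sharing, Sec. 4.2.1).
    b_next = np.stack([env_model[a] @ b_t for a in range(N_ACTIONS)])
    q_mb = b_next @ mb_critic
    return W * q_mf + (1.0 - W) * q_mb

b_t = rng.random(N_BELIEF)
b_t /= b_t.sum()  # a belief state is a probability distribution
q = adc_q_values(b_t)
greedy_action = int(np.argmax(q))  # Algorithm 1 takes this action with probability 1 - epsilon
```

Because `mb_critic` is a single function applied to each predicted next state in turn, its parameter count stays fixed as the action set grows, which is the parameter-sharing argument of Section 4.2.1.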
| Agent | #Parameters |
| --- | --- |
| ACER | 110 K |
| I2A (Model-free path) | 80 K |
| I2A (Model-based path) | 1.2 M |
| I2A (Total) | 1.4 M |
| ADC (Model-based critic) | 110 K |
| ADC (Total) | 240 K |
| Environment Model | 16 K |
Table 1: Comparison of the number of parameters.

# 4.2.3 Explicit update signals

We can also predict $Q^{\pi}(b_{t})$ through the model-free critic. The final $Q$ -values are the weighted sum of the two critics' outputs, combined in an ensemble to lower the variance.

$$
Q ^ {\pi} \left(b _ {t}, a _ {i}\right) = Q _ {MF} ^ {\pi} \left(b _ {t}, a _ {i}\right) \cdot w + Q _ {MB} ^ {\pi} \left(b _ {t + 1, a _ {i}}\right) \cdot (1 - w), \tag {4}
$$

where $Q_{MF}^{\pi}(b_t, a_i)$ is the output of the model-free critic, $Q_{MB}^{\pi}(b_{t+1,a_i})$ is the output of the model-based one, and $w$ is a weight parameter. We abbreviate them as $Q_{MF}^{\pi}$ and $Q_{MB}^{\pi}$ to keep the expressions succinct. The model selects information from the model-free path ( $w = 1$ ) when the environment model is noisy, or from the model-based path ( $w = 0$ ) when it provides more accurate information. During training, we compute a loss for each critic; $w$ is a hyperparameter.

$$
loss _ {critics} = \left(Q _ {MF} ^ {\pi} - Q ^ {ret}\right) ^ {2} + \left(Q _ {MB} ^ {\pi} - Q ^ {ret}\right) ^ {2} \tag {5}
$$

where $Q^{ret}$ is the target for $Q^{\pi}$ computed with the Retrace algorithm (Munos et al., 2016).

Note that in each training iteration we update the two critics at the same time. In I2A, we cannot identify whether errors come from the model-based path or the model-free path. In our approach, the information clearly flows from two sources instead of one ambiguous source. We also tried back-propagating a single loss from $Q^{\pi}$ through the whole network, but the results are better when we back-propagate the loss defined in Equation 5. This result again demonstrates the necessity of the two-critic architecture.
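The benefit of updating each critic against $Q^{ret}$ separately (Equation 5), rather than back-propagating one loss from the ensemble output, can be seen on a toy scalar example (all values below are illustrative, not the paper's implementation): the separate loss drives each critic to the target, while a single ensemble loss only constrains their weighted sum.

```python
# Toy comparison of the two back-propagation styles with scalar "critics".
Q_RET, W, LR, STEPS = 3.0, 0.5, 0.1, 200  # illustrative target and hyperparameters

# Separate loss (Eq. 5): each critic has its own squared-error term against Q^ret.
q_mf, q_mb = 0.0, 10.0
for _ in range(STEPS):
    q_mf -= LR * 2 * (q_mf - Q_RET)  # gradient of (q_mf - Q_RET)^2
    q_mb -= LR * 2 * (q_mb - Q_RET)  # gradient of (q_mb - Q_RET)^2
# Both critics individually converge to Q_RET.

# Single loss on the ensemble output: only the weighted sum is constrained.
p_mf, p_mb = 0.0, 10.0
for _ in range(STEPS):
    err = W * p_mf + (1 - W) * p_mb - Q_RET
    p_mf -= LR * 2 * err * W
    p_mb -= LR * 2 * err * (1 - W)
ensemble = W * p_mf + (1 - W) * p_mb
# The ensemble converges to Q_RET, but p_mf and p_mb individually do not.
```

Under the single loss, the individual values drift far from the target even though their weighted sum matches it; the separate loss pins each critic to $Q^{ret}$, which is consistent with the stability effect discussed in Section 5.4.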
# 5 Experiments

# 5.1 Setup

Experiments are conducted on the Cambridge restaurant domain from the PyDial toolkit (Ultes et al., 2017) with a goal-driven user simulator operating on the semantic level (Schatzmann et al., 2007; Schatzmann and Young, 2009), an LSTM-based NLU model (Mrksic et al., 2016), and an NLG model (Wen et al., 2015). During training, an agent is updated whenever a dialogue terminates, which constitutes one iteration. Every 200 training dialogues, the agent is tested on 500 dialogues. 10 random seeds were run for each approach to analyze the variance arising from different initializations. The mean $\pm$ standard deviation is depicted as the shaded area in Figures 2 and 3. The x-axes of Figures 2 and 3 are in log scale to put emphasis on both the early stage and the final performance of the training process.

![](images/0f4f81ffbd9d60796e00df5e7ec91e0f5391f3c80fcce06249bf977b91e32627.jpg)
Figure 2: Comparison with baselines. Left: learning curves of success rate. Right: average turns per dialogue.

![](images/55f324994ae014a5c9b9169cc415ceab5dbe244272ec5a0dcfc0dcba423ccd40.jpg)

![](images/0a21c88f759c05224a18c0f1a1b922732d1163f4dcebe80bdd004fc01cb30f0d.jpg)
Figure 3: Left: comparison between different update algorithms. Right: experiment on robustness to imperfect models over different architectures.

![](images/844cd2d2c460a7ccef414eab8b5b291007cfd29eb891545b52e8a1606605a4ac.jpg)

User simulator. To account for ASR errors, a $15\%$ semantic error rate (SER) is included in the user simulator. The maximum dialogue length is set to 25 turns and $\gamma$ is 0.99. The reward is defined as 20 for a successful dialogue minus the number of turns taken to complete the dialogue.

Implementation details. The input for all models is the full dialogue belief state $b$ of size 268, and the output action space consists of 16 possible actions. For NN-based algorithms, the mini-batch size is 64.
$\epsilon$ -greedy exploration is used, with $\epsilon$ linearly reduced from 0.3 down to 0 over the training process. The two hidden layers are of size 300 and 100 for the actor and the critic. The Adam optimiser was used with an initial learning rate of 0.001 (Kingma and Ba, 2014). For algorithms employing experience replay, the replay memory has a capacity of 2000 interactions.

# 5.2 Dialogue agents for comparison

- GP-SARSA is a Bayesian baseline, which provides a stable performance by utilising uncertainty estimates.
- ACER is the model-free actor-critic baseline and can be perceived as a model-free counterpart of the proposed method. According to the benchmark results (Casanueva et al., 2017), it performs better than other actor-critic methods such as A2C (Fatemi et al., 2016) and eNAC (Su et al., 2017). Since ADC can be applied to any model-free actor-critic method, we do not report the performance of all RL algorithms here. In this paper, we focus on the gap between ACER and ADC rather than the absolute performance. For a fair comparison, the pre-training data used by model-based
| Agent | Suc. | Std. | Required data |
| --- | --- | --- | --- |
| ACER | 78.1 | ±7.7 | 1200 |
| GP-SARSA | 89.6 | ±3.3 | 800 |
| I2A | 93.9 | ±2.3 | 750 |
| ADC | 95.8 | ±1.2 | 600 |
Table 2: Final performance of each agent after training with 4000 dialogues. Tested over 10 runs, for each algorithm we report 1) the average success rate, 2) the standard deviation of success rates, and 3) the average amount of data required to reach the $80\%$ success rate. The latter two metrics evaluate stability and sample efficiency, respectively.

approaches were put into the experience buffer of ACER at the beginning of the training.

- I2A is the model-based baseline. The environment model is pre-trained with 400 dialogues generated by interactions between a simulated user and an agent.
- ADC is the proposed method. The ensemble weight $w$ is 0.5 for each critic. The environment model setting is the same as for I2A.

# 5.3 Comparison with baselines

Success rate. As shown in the left part of Figure 2 and in Table 2, ADC outperforms the other methods considerably in terms of sample-efficiency, stability, and success rate. I2A performs better than ACER but remains fragile to initialization, as shown by the shaded areas. Compared to I2A, ADC halves the standard deviation of final success rates, from 2.3 to 1.2.

In contrast, GP-SARSA is quite stable due to its Bayesian nature. While the standard deviation of the final success rate of I2A is smaller than that of GP-SARSA, I2A is more unstable in the early stage of the training process. It is worth noticing that ADC is even more stable than GP-SARSA, and reaches higher performance in the end. In terms of sample efficiency, ADC uses only half of the data (600 dialogues) to reach the $80\%$ success rate, compared to ACER (1200 dialogues).

Average turns per dialogue. As shown in the right part of Figure 2, GP-SARSA takes more turns than the other algorithms, and this decreases only slightly during training. We found that GP-SARSA tends to take more turns to confirm user intention in order to stabilize its performance, while some of these confirmations are not necessary. The other approaches steadily reduce the number of turns over the course of training.

# 5.4 Different back-propagation styles

In the left part of Figure 3, the red line is the learning curve of the agent that back-propagates only one loss from the ensemble output $Q$ , while the brown line is the agent that updates each critic separately, where the loss back-propagated from the ensemble output only passes through the ensemble weight $w$ .

We can note that the agent with the separate loss function (as in Equation 5) is more stable than the other method. This is because when the ensemble $Q$ is close to $Q^{ret}$ , $Q_{MF}$ and $Q_{MB}$ are not necessarily close to the target $Q^{ret}$ . In contrast, the separate update ensures that each output value is accurate.

# 5.5 Robustness to imperfect models

In order to examine the impact of the environment model on ADC, we propose another baseline, actor-model-based-critic (AMC). AMC uses only the model-based critic to predict $Q$ -values, without the model-free critic, so the quality of the environment model is critical to AMC. In the experiment, a good environment model is pre-trained with 400 dialogues, and a poor environment model is pre-trained with only 200 dialogues.

In the right part of Figure 3, we can observe that ADC maintains its performance with the poor model, while AMC's performance drops substantially. This might be because a poor environment model cannot lead to accurate value prediction. The aid from a model-free critic is thus substantial.

# 5.6 Comparison in different environment settings

To further investigate the properties of ADC, we test it on 6 different environment (simulated user) settings. For each setting, we report the final performance of each agent after training it with 4000 dialogues. The semantic error rate (SER) models the noise from the ASR and NLU channel (Thomson et al., 2012).
In addition to the standard user, an unfriendly one is defined, where the user barely provides any extra information to the system. The action masking mechanism is used in environments 1 & 3 to reduce the action space. The settings of the simulated users are listed in Table 3.

The results are shown in Table 4. In clean environments (1 & 3), ACER learns well after 4000
| | Env. 1 | Env. 2 | Env. 3 | Env. 4 | Env. 5 | Env. 6 |
| --- | --- | --- | --- | --- | --- | --- |
| SER | 0% | 0% | 15% | 15% | 15% | 30% |
| Masks | On | Off | On | Off | On | On |
| User | Standard | Standard | Standard | Standard | Unfriendly | Standard |
+ +Table 3: The settings of different environments. + +
| Task | GP-SARSA Suc. | GP-SARSA Turns | ACER Suc. | ACER Turns | I2A Suc. | I2A Turns | ADC Suc. | ADC Turns |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Env. 1 | **99.2%** | 6.4 | 98.6% | 6.0 | 97.9% | 6.0 | 99.1% | 6.0 |
| Env. 2 | 95.7% | 7.2 | 87.3% | 6.5 | 79.8% | 5.8 | **98.7%** | 6.0 |
| Env. 3 | 95.8% | 7.7 | 95.3% | 7.1 | **96.3%** | 7.0 | 96.1% | 7.0 |
| Env. 4 | 89.6% | 8.2 | 78.1% | 6.7 | 93.9% | 7.2 | **95.8%** | 6.8 |
| Env. 5 | 92.5% | 9.6 | 94.0% | 8.2 | 94.2% | 8.0 | **95.6%** | 8.0 |
| Env. 6 | 90.0% | 9.0 | 81.0% | 8.1 | 87.9% | 8.1 | **92.0%** | 7.9 |
Table 4: Success rates and average turns after 4000 training dialogues. The highest success rate is highlighted.

dialogues. Yet, in noisy environments (2 & 4), ADC outperforms ACER significantly. In environment 5, an unfriendly user was used, but this does not affect the algorithms much because the action mask is used, so the number of available actions is reduced and the task is therefore less difficult. It is worth noting that in environment 6, ADC outperforms the hand-crafted policy (89.6% (Casanueva et al., 2017)), demonstrating the flexibility of reinforcement learning to learn from environments. Overall, ADC demonstrates its robustness in all environments, especially in the environments without action masks.

# 5.7 Case study

To further investigate the behaviour of different agents during the training process, we sampled a dialogue session in environment 4 (the setting of the environment is described in section 5.6) after 500 epochs. The ACER and ADC agents have $57\%$ and $88\%$ success rates, respectively.

As shown in Table 5, ACER informs the restaurant in the early stage, while ADC is more conservative and takes more turns. ADC asks more questions before giving the recommended restaurant and sometimes confirms the booking to make sure the one it provides fulfills all the requirements. Besides, ACER keeps asking the same question and sometimes gives a completely wrong reply. That is because, without the aid of the environment model,
Our contributions are two-fold: + +1) We adopted I2A, a model-based reinforcement learning approach, on SDS and demonstrated that it can elaborate rich information generated by environment models. 2) Our proposed algorithm further reduces instability by introducing a simple architecture to augment model-based information into policy network. We used ACER as an actor-critic model-free baseline, but this method can augment any deep actor-critic algorithm. + +One interesting topic for future research is model-based actors. In our experiments, incorporating a model-based actor did not work as effectively as ADC. We plan to solve the problems inhibiting in model-based actor and make this algorithm applicable to policy learning approaches (Schulman et al., 2017; Takanobu et al., 2019). + +
Simulation Sample

Cambridge Restaurant Booking. User Goal — Constraints: (Area: North), (Price range: Dont Care), (Food type: Chinese); Requests: (Address: None), (Name: None), (Description: None); Patience: 4

| # | ACER agent (57% Success rate) | ADC agent (88% Success rate) |
| --- | --- | --- |
| 1 | Hello. I want to find a restaurant in Cambridge. | Hello. I want to find a restaurant in Cambridge. |
| 2 | agt: Do you want Asian oriental food? | What price range do you want? expensive or moderate? |
| 3 | No, I want some Chinese food. | usr: I don't care. |
| 4 | agt: Ok, I will book Midsummer House Restaurant for you. | agt: Asian oriental or Mexican food? |
| 5 | usr: Any other choices? | usr: Chinese food, please. |
| 6 | The phone number of Midsummer House Restaurant is 01223 369299. | agt: What else do you want? |
| 7 | usr: Ok. | usr: Just a Chinese restaurant in the north. |
| 8 | agt: The postcode of Midsummer House Restaurant is CB4 1HA. | agt: Do you want some expensive food? |
| 9 | usr: I want a restaurant. | usr: Yes, please. |
| 10 | agt: The phone number of Midsummer House Restaurant is 01223 369299. | Hakka is a Chinese restaurant in the north area. |
| 11 | usr: In the north area. | usr: Any other choices? |
| 12 | agt: The address of Midsummer House Restaurant is Midsummer Common. | agt: The Hotpot is also a Chinese restaurant in the north area. |
| 13 | usr: I don't care. | usr: What is the address? |
| 14 | agt: The postcode of Midsummer House Restaurant is CB4 1HA. | agt: Which price range do you want? |
| 15 | usr: What's the food type of Midsummer House? | usr: Give me the address of The Hotpot. |
| 16 | agt: North or centre? | The address of The Hotpot is 66 Chesterton Road Chesterton. |
| 17 | usr: North. | usr: What is the description? |
| 18 | agt: The food type of Midsummer House is British. | agt: Which price range do you want? |
| 19 | usr: I want some light bites. | usr: What is the description of The Hotpot? The Hotpot serves a variety of Chinese dishes at their restaurant and can cater for private parties of up to five five guests upon request. |
| 20 | agt: North or south? | agt: Thank you! Bye. |
| 21 | usr: North. | |
| | Repeat turns 20, 21 till the user is out of patience. **Failure** | **Success** |
Table 5: Sample dialogue sessions by ACER and ADC agents trained at epoch 500 in environment 4 with $15\%$ SER. The bold sentences are improper or repetitive responses from the agent. (agt: agent, usr: user)

# Acknowledgement

Yen-Chen Wu and Bo-Hsiang Tseng are supported by Cambridge Overseas Trust and the Ministry of Education, Taiwan. Milica Gašić's contribution is part of a project that has received funding from the European Research Council (ERC) under the Grant agreement No. 804636.

# References

Paweł Budzianowski, Stefan Ultes, Pei-Hao Su, Nikola Mrkšić, Tsung-Hsien Wen, Inigo Casanueva, Lina Rojas-Barahona, and Milica Gašić. 2017. Subdomain modelling for dialogue management with hierarchical reinforcement learning. arXiv preprint arXiv:1706.06210.

Inigo Casanueva, Paweł Budzianowski, Pei-Hao Su, Nikola Mrkšić, Tsung-Hsien Wen, Stefan Ultes, Lina Rojas-Barahona, Steve Young, and Milica Gašić. 2017. A benchmarking environment for reinforcement learning based task oriented dialogue management. arXiv preprint arXiv:1711.11023.

Cheng Chang, Runzhe Yang, Lu Chen, Xiang Zhou, and Kai Yu. 2017. Affordable on-line dialogue policy learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2200-2209.

Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, and Li Deng. 2016. Towards end-to-end reinforcement learning of dialogue agents for information access. arXiv preprint arXiv:1609.00777.

Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, and Kaheer Suleman. 2016. Policy networks with two-stage training for dialogue systems. arXiv preprint arXiv:1606.03152.

Jianfeng Gao, Michel Galley, and Lihong Li. 2018. Neural approaches to conversational AI. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 1371-1374.

Milica Gašić, Filip Jurčíček, Simon Keizer, François Mairesse, Blaise Thomson, Kai Yu, and Steve Young. 2010.
Gaussian processes for fast policy optimisation of POMDP-based dialogue managers. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 201-204. Association for Computational Linguistics.

Milica Gašić, Filip Jurčíček, Blaise Thomson, Kai Yu, and Steve Young. 2011. On-line policy optimisation of spoken dialogue systems via live interaction with human subjects. In Automatic Speech Recognition and Understanding (ASRU), 2011 IEEE Workshop on, pages 312-317. IEEE.

Shixiang Gu, Timothy Lillicrap, Ilya Sutskever, and Sergey Levine. 2016. Continuous deep Q-learning with model-based acceleration. In International Conference on Machine Learning, pages 2829-2838.

Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Esther Levin, Roberto Pieraccini, and Wieland Eckert. 1997. Learning dialogue strategies within the Markov decision process framework. In 1997 IEEE Workshop on Automatic Speech Recognition and Understanding Proceedings, pages 72-79. IEEE.

Jiwei Li, Alexander H Miller, Sumit Chopra, Marc'Aurelio Ranzato, and Jason Weston. 2016. Dialogue learning with human-in-the-loop. arXiv preprint arXiv:1611.09823.

Bing Liu, Gokhan Tur, Dilek Hakkani-Tur, Pararth Shah, and Larry Heck. 2018. Dialogue learning with human teaching and feedback in end-to-end trainable task-oriented dialogue systems. arXiv preprint arXiv:1804.06512.

Nikola Mrkšić, Diarmuid Ó Séaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2016. Neural belief tracker: Data-driven dialogue state tracking. arXiv preprint arXiv:1606.03777.

Rémi Munos, Tom Stepleton, Anna Harutyunyan, and Marc Bellemare. 2016. Safe and efficient off-policy reinforcement learning. In Advances in Neural Information Processing Systems, pages 1054-1062.

Anusha Nagabandi, Gregory Kahn, Ronald S Fearing, and Sergey Levine. 2018.
Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 7559-7566. IEEE.

Junhyuk Oh, Satinder Singh, and Honglak Lee. 2017. Value prediction network. In Advances in Neural Information Processing Systems, pages 6118-6128.

Baolin Peng, Xiujun Li, Jianfeng Gao, Jingjing Liu, and Kam-Fai Wong. 2018. Integrating planning for task-completion dialogue policy learning. arXiv preprint arXiv:1801.06176.

Jost Schatzmann, Blaise Thomson, Karl Weilhammer, Hui Ye, and Steve Young. 2007. Agenda-based user simulation for bootstrapping a POMDP dialogue system. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers, pages 149-152. Association for Computational Linguistics.

Jost Schatzmann and Steve Young. 2009. The hidden agenda user simulation model. IEEE Transactions on Audio, Speech, and Language Processing, 17(4):733-747.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.

David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley, Gabriel Dulac-Arnold, David Reichert, Neil Rabinowitz, Andre Barreto, et al. 2016. The predictron: End-to-end learning and planning. arXiv preprint arXiv:1612.08810.

Pei-Hao Su, Paweł Budzianowski, Stefan Ultes, Milica Gašić, and Steve Young. 2017. Sample-efficient actor-critic reinforcement learning with supervised data for dialogue management. arXiv preprint arXiv:1707.00130.

Pei-Hao Su, Milica Gašić, Nikola Mrkšić, Lina Rojas-Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. On-line active reward learning for policy optimisation in spoken dialogue systems. arXiv preprint arXiv:1605.07669.
Shang-Yu Su, Xiujun Li, Jianfeng Gao, Jingjing Liu, and Yun-Nung Chen. 2018. Discriminative deep Dyna-Q: Robust planning for dialogue policy learning. arXiv preprint arXiv:1808.09442.

Richard S Sutton. 1990. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Machine Learning Proceedings 1990, pages 216-224. Elsevier.

Richard S Sutton, Csaba Szepesvári, Alborz Geramifard, and Michael P Bowling. 2012. Dyna-style planning with linear function approximation and prioritized sweeping. arXiv preprint arXiv:1206.3285.

Ryuichi Takanobu, Hanlin Zhu, and Minlie Huang. 2019. Guided dialog policy learning: Reward estimation for multi-domain task-oriented dialog. arXiv preprint arXiv:1908.10719.

Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel. 2016. Value iteration networks. In Advances in Neural Information Processing Systems, pages 2154-2162.

Blaise Thomson, Milica Gašić, Matthew Henderson, Pirros Tsiakoulis, and Steve Young. 2012. N-best error simulation for training spoken dialogue systems. In Spoken Language Technology Workshop (SLT), 2012 IEEE, pages 37-42. IEEE.

Stefan Ultes, Lina M Rojas Barahona, Pei-Hao Su, David Vandyke, Dongho Kim, Inigo Casanueva, Paweł Budzianowski, Nikola Mrkšić, Tsung-Hsien Wen, Milica Gašić, et al. 2017. PyDial: A multi-domain statistical dialogue system toolkit. Proceedings of ACL 2017, System Demonstrations, pages 73-78.

Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Rémi Munos, Koray Kavukcuoglu, and Nando de Freitas. 2016. Sample efficient actor-critic with experience replay. arXiv preprint arXiv:1611.01224.

Théophane Weber, Sébastien Racanière, David P Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adrià Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, et al. 2017. Imagination-augmented agents for deep reinforcement learning. arXiv preprint arXiv:1707.06203.
Tsung-Hsien Wen, Milica Gašić, Nikola Mrkšić, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. arXiv preprint arXiv:1508.01745.

Jason D Williams. 2008. The best of both worlds: Unifying conventional dialog systems and POMDPs. In Ninth Annual Conference of the International Speech Communication Association.

Yen-Chen Wu, Bo-Hsiang Tseng, and Carl Edward Rasmussen. 2020. Improving sample-efficiency in reinforcement learning for dialogue systems by using trainable-action-mask. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8024-8028. IEEE.

Yuexin Wu, Xiujun Li, Jingjing Liu, Jianfeng Gao, and Yiming Yang. 2019. Switch-based active deep Dyna-Q: Efficient adaptive planning for task-completion dialogue policy learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7289-7296.

Steve Young, Milica Gašić, Blaise Thomson, and Jason D Williams. 2013. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160-1179.

Zhirui Zhang, Xiujun Li, Jianfeng Gao, and Enhong Chen. 2019. Budgeted policy learning for task-oriented dialogue systems. arXiv preprint arXiv:1906.00499.
# Adapting BERT for Word Sense Disambiguation with Gloss Selection Objective and Example Sentences

Boon Peng Yap, Andrew Koh, Eng Siong Chng

Nanyang Technological University,
Singapore

{boonpeng001, andr0081, aseschng}@ntu.edu.sg

# Abstract

Domain adaptation or transfer learning using pre-trained language models such as BERT has proven to be an effective approach for many natural language processing tasks. In this work, we propose to formulate word sense disambiguation as a relevance ranking task, and fine-tune BERT on a sequence-pair ranking task to select the most probable sense definition given a context sentence and a list of candidate sense definitions. We also introduce a data augmentation technique for WSD using existing example sentences from WordNet. Using the proposed training objective and data augmentation technique, our models are able to achieve state-of-the-art results on the English all-words benchmark datasets.

# 1 Introduction

In natural language processing, Word Sense Disambiguation (WSD) refers to the task of identifying the exact sense of an ambiguous word given the context (Navigli, 2009). More specifically, WSD associates ambiguous words with predefined senses from an external sense inventory, e.g. WordNet (Miller, 1995) and BabelNet (Navigli and Ponzetto, 2010).

Recent studies in learning contextualized word representations from language models, e.g. ELMo (Peters et al., 2018), BERT (Devlin et al., 2019) and GPT-2 (Radford et al., 2019), attempt to alleviate the issue of insufficient labeled data by first pre-training a language model on a large text corpus through self-supervised learning. The weights from the pre-trained language model can then be fine-tuned on downstream NLP tasks such as question answering and natural language inference. For WSD, pre-trained BERT has been utilized in multiple ways with varying degrees of success. Notably, Huang et al. (2019) proposed GlossBERT, a model based on fine-tuning BERT on a sequence-pair binary classification task, and achieved state-of-the-art results in terms of single-model performance on several English all-words WSD benchmark datasets.
In this paper, we extend the sequence-pair WSD model and propose a new task objective that can better exploit the inherent relationships within positive and negative sequence pairs. Briefly, our contribution is two-fold: (1) we formulate WSD as a gloss selection task, in which the model learns to select the best context-gloss pair from a group of related pairs; (2) we demonstrate how to make use of additional lexical resources, namely the example sentences from WordNet, to further improve WSD performance.

We fine-tune BERT using the gloss selection objective on SemCor (Miller et al., 1994) plus additional training instances constructed from the WordNet example sentences, and evaluate its impact on several commonly used benchmark datasets for English all-words WSD. Experimental results show that the gloss selection objective can indeed improve WSD performance, and using WordNet example sentences as additional training data can offer a further performance boost.

# 2 Related Work

BERT (Devlin et al., 2019) is a language representation model based on a multi-layer bidirectional Transformer encoder (Vaswani et al., 2017). Previous experimental results have shown that significant improvement can be achieved in many downstream NLP tasks through fine-tuning BERT on those tasks. Several methods have been proposed to apply BERT for WSD. In this section, we briefly describe two commonly used approaches: feature-based and fine-tuning approaches.

# 2.1 Feature-based Approaches

Feature-based WSD systems make use of contextualized word embeddings from BERT as input features for task-specific architectures. Vial et al. (2019) used the contextual embeddings as inputs to a Transformer-based classifier. They proposed two sense vocabulary compression techniques to reduce the number of output classes by exploiting the semantic relationships between different senses.
The Transformer-based classifiers were trained from scratch using the reduced output classes on SemCor and the WordNet Gloss Corpus (WNGC). Their ensemble model, which consists of 8 independently trained classifiers, achieved state-of-the-art results on the English all-words WSD benchmark datasets.

Besides deep learning-based approaches, Loureiro and Jorge (2019) and Scarlini et al. (2020) construct sense embeddings using the contextual embeddings from BERT. The former generates sense embeddings by averaging the contextual embeddings of sense-annotated tokens taken from SemCor, while the latter constructs sense embeddings by concatenating the contextual embeddings of BabelNet definitions with the contextual embeddings of Wikipedia contexts. For WSD, both approaches make use of the constructed sense embeddings in nearest neighbor classification (kNN), in which the simple 1-nearest neighbor approach from Scarlini et al. (2020) showed substantial improvement over the nominal category of the English all-words WSD benchmark datasets.

# 2.2 Fine-tuning Approaches

Fine-tuning WSD systems directly adjust the pre-trained weights on annotated corpora rather than learning new weights from scratch. Du et al. (2019) fine-tuned two separate and independent BERT models simultaneously: one to encode sense-annotated sentences and another to encode sense definitions from WordNet. The hidden states from the two encoders are then concatenated and used to train a multilayer perceptron classifier for WSD.

Huang et al. (2019) proposed GlossBERT, which fine-tunes BERT on a sequence-pair binary classification task. The training data consists of context-gloss pairs constructed using annotated sentences from SemCor and sense definitions from WordNet 3.0. Each context-gloss pair contains a sentence from SemCor with a target word to be disambiguated (context) and a candidate sense definition of the target word from WordNet (gloss).
During fine-tuning, GlossBERT classifies each context-gloss pair as either positive or negative depending on whether the sense definition corresponds to the correct sense of the target word in the context. Each context-gloss pair is treated as an independent training instance and is shuffled to a random position at the start of each training epoch. At inference, the context-gloss pair with the highest output score from the positive neuron among the candidates is chosen as the best answer.

In this paper, we use similar context-gloss pairs as inputs for our proposed WSD model. However, instead of treating each context-gloss pair as an independent training instance, we group related context-gloss pairs into one training instance, i.e. context-gloss pairs with the same context but different candidate glosses are considered as one group. Using groups of context-gloss pairs as training data, we formulate WSD as a ranking/selection problem where the most probable sense is ranked first. By processing all related candidate senses in one go, the WSD model is able to learn better discriminating features between positive and negative context-gloss pairs.

# 3 Methodology

We describe the implementation details of our approaches in this section. When customizing BERT for WSD, we use a linear layer consisting of just one neuron in the output layer to compute the relevance score for each context-gloss pair, in contrast to the binary classification layer used in GlossBERT.

Additionally, we also extract example sentences from WordNet 3.0 and use them as additional training data on top of the sense-annotated sentences from SemCor.

# 3.1 Gloss Selection Objective

Following Huang et al. (2019), we construct positive and negative context-gloss pairs by combining annotated sentences from SemCor and sense definitions from WordNet 3.0.
A positive pair contains a gloss representing the correct sense of the target word, while a negative pair contains the gloss of a negative candidate sense. Each target word in the contexts is surrounded with two special [TGT] tokens. We group context-gloss pairs with the same context and target word into a single training instance so that they are processed sequentially by the neural network. As illustrated in Figure 1, the output layer takes the hidden states of the [CLS] token from

![](images/e7fe1ddd00f481ed18163c369360363e687f6e012f43429eeb1a68496e0a2864.jpg)
Figure 1: Visualisation of the gloss selection objective when computing the loss value for a training instance. The context "He turned slowly and began to crawl back up the bank toward the rampart." is annotated with the target word "bank". A training instance consists of n context-gloss pairs (n=4 in this case), including 1 positive pair (shown in green) and n-1 negative pairs (shown in red). The order of the context-gloss pairs within each training instance is randomized during the dataset construction step.

each context-gloss pair as input and calculates the corresponding relevance score. A softmax layer then aggregates the relevance scores from the same group and computes the training loss using cross entropy as the loss function.
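The grouped softmax-and-cross-entropy computation can be sketched in a few lines of numpy; the relevance scores below are made-up placeholders for the BERT output scores $Rel(context, gloss)$:

```python
import numpy as np

def gloss_selection_loss(score_groups, positive_indices):
    """Cross-entropy over groups of relevance scores, one group per
    training instance (variable group size n_i)."""
    losses = []
    for scores, y in zip(score_groups, positive_indices):
        z = np.exp(scores - scores.max())   # numerically stable softmax
        p = z / z.sum()                     # p_ij over the n_i candidates
        losses.append(-np.log(p[y]))        # -log p of the positive pair
    return float(np.mean(losses))           # average over the batch

# Placeholder Rel(context, gloss) scores for two training instances,
# with 4 and 3 candidate glosses respectively.
groups = [np.array([2.0, -1.0, 0.5, -0.3]), np.array([0.1, 3.0, -2.0])]
loss = gloss_selection_loss(groups, positive_indices=[0, 1])

# At test time the top-scoring pair gives the predicted sense.
predicted = [int(np.argmax(g)) for g in groups]  # -> [0, 1]
```

Because the softmax normalizes within each group, raising the score of the positive pair necessarily pushes down all its competing candidates, which is the intended contrast to scoring each pair independently.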
Formally, the gloss selection objective is given as follows:

$$
loss = - \frac{1}{m} \sum_{i=1}^{m} \left[ \sum_{j=1}^{n_i} \mathbb{1}(y_i, j) \log(p_{ij}) \right] \tag{1}
$$

where $m$ is the batch size, $n_i$ is the number of candidate glosses for the $i$-th training instance, $\mathbb{1}(y_i, j)$ is the binary indicator of whether index $j$ equals the index of the positive context-gloss pair $y_i$, and $p_{ij}$ is the softmax value for the $j$-th candidate sense of the $i$-th training instance, computed using the following equation:

$$
p_{ij} = \frac{\exp(Rel(context_i, gloss_{ij}))}{\sum_{k=1}^{n_i} \exp(Rel(context_i, gloss_{ik}))} \tag{2}
$$

where $Rel(context, gloss)$ denotes the relevance score of a context-gloss pair from the output layer. A similar formulation was presented for web document ranking (Huang et al., 2013) and question-answering natural language inference (Liu et al., 2019). In the case of WSD, we are only interested in the top-1 context-gloss pair. Hence, during testing, we select the context-gloss pair with the highest relevance score and its corresponding sense as the most probable sense for the target word.

# 3.2 Data Augmentation using Example Sentences

Most synsets in WordNet 3.0 include one or more short sentences illustrating the usage of the synset members (i.e. synonyms). We introduce a relatively straightforward data augmentation technique that combines the example sentences with positive/negative glosses into additional context-gloss pairs. First, example sentences (contexts) are extracted from each synset, and target words are identified via keyword matching and annotated with two [TGT] tokens. Then, context-gloss pairs are constructed by combining the annotated contexts with positive and negative glosses.
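The construction steps just described can be sketched as follows; the mini sense inventory here is a hypothetical stand-in for WordNet 3.0, not actual WordNet data:

```python
# Hypothetical mini sense inventory standing in for WordNet 3.0:
# each sense has a gloss and example sentences illustrating its usage.
inventory = {
    "bank": [
        {"gloss": "a financial institution that accepts deposits",
         "examples": ["he cashed a check at the bank"]},
        {"gloss": "sloping land beside a body of water",
         "examples": ["they pulled the canoe up on the bank"]},
    ],
}

def augment(lemma):
    """Turn each example sentence into one extra training instance:
    annotate the target word with [TGT] tokens (keyword matching), then
    pair the context with the positive gloss and the negative glosses."""
    instances = []
    senses = inventory[lemma]
    for i, sense in enumerate(senses):
        for sentence in sense["examples"]:
            context = sentence.replace(lemma, f"[TGT] {lemma} [TGT]", 1)
            pairs = [(context, s["gloss"], int(j == i))
                     for j, s in enumerate(senses)]
            instances.append(pairs)
    return instances

extra = augment("bank")  # 2 extra instances, each with 2 context-gloss pairs
```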
Using this technique, we were able to obtain 37,596 additional training instances (about $17\%$ more training instances).

# 4 Experiments

In this section, we introduce the datasets and experiment settings used to fine-tune BERT. We also present the evaluation results of each model and compare them against existing WSD systems.

# 4.1 Datasets

Both training and testing datasets were obtained from the unified evaluation framework for WSD (Raganato et al., 2017b). Our training dataset for gloss selection consists of two parts: a baseline dataset with 226,036 training instances constructed from SemCor, and an augmented dataset with 37,596 training instances constructed using the data augmentation method. When constructing the context-gloss pairs for the training datasets, we select a maximum of $n = 6$ context-gloss pairs per training instance; for the testing datasets, all possible candidate context-gloss pairs are considered.

The testing dataset contains 5 benchmark datasets from previous Senseval and SemEval competitions, including Senseval-2 (SE2), Senseval-3
| | System | SE07 (Dev) | SE2 | SE3 | SE13 | SE15 | Noun | Verb | Adj | Adv | ALL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| KB | Most frequent sense baseline | 54.5 | 65.6 | 66.0 | 63.8 | 67.1 | 67.7 | 49.8 | 73.1 | 80.5 | 65.5 |
| | $\mathrm{Lesk}_{\mathrm{ext}}+\mathrm{emb}$ | 56.7 | 63.0 | 63.7 | 66.2 | 64.6 | 70.0 | 51.1 | 51.7 | 80.6 | 64.2 |
| | Babelfy | 51.6 | 67.0 | 63.5 | 66.4 | 70.3 | 68.9 | 50.7 | 73.2 | 79.8 | 66.4 |
| Sup | IMS+emb | 62.6 | 72.2 | 70.4 | 65.9 | 71.5 | 71.9 | 56.6 | 75.9 | 84.7 | 70.1 |
| | LSTM-LP | 63.5 | 73.8 | 71.8 | 69.5 | 72.6 | - | - | - | - | - |
| | Bi-LSTM | - | 71.1 | 68.4 | 64.8 | 68.3 | 69.5 | 55.9 | 76.2 | 82.4 | 68.4 |
| | HCAN | - | 72.8 | 70.3 | 68.5 | 72.8 | 72.7 | 58.2 | 77.4 | 84.1 | 71.1 |
| Feat | $\mathrm{LMMS}_{2348}$ (BERT) | 68.1 | 76.3 | 75.6 | 75.1 | 77.0 | - | - | - | - | 75.4 |
| | SemCor+WNGC, hypernyms (single) | - | - | - | - | - | - | - | - | - | 77.1 |
| | SemCor+WNGC, hypernyms (ensemble) | 73.4 | 79.7 | 77.8 | 78.7 | 82.6 | 81.4 | 68.7 | 83.7 | 85.5 | 79.0 |
| | $\mathrm{SENSEMBERT}_{\mathrm{sup}}$ | - | - | - | - | - | 80.4 | - | - | - | - |
| | BEM | 74.5 | 79.4 | 77.4 | 79.7 | 81.7 | 81.4 | 68.5 | 83.0 | 87.9 | 79.0 |
| | $\mathrm{EWISER}_{\mathrm{hyper}}$ | 75.2 | 80.8 | 79.0 | 80.7 | 81.8 | 82.9 | 69.4 | 83.6 | 87.3 | 80.1 |
| FT | $\mathrm{BERT}_{\mathrm{def}}$ | - | 76.4 | 74.9 | 76.3 | 78.3 | 78.3 | 65.2 | 80.5 | 83.8 | 76.3 |
| | GlossBERT (Sent-CLS-WS) | 72.5 | 77.7 | 75.2 | 76.1 | 80.4 | 79.3 | 66.9 | 78.2 | 86.4 | 77.0 |
| Ours | $\mathrm{BERT}_{\mathrm{base}}$ (baseline) | 73.6 | 79.4 | 76.8 | 77.4 | 81.5 | 80.6 | 67.9 | 82.2 | 87.3 | 78.2 |
| | $\mathrm{BERT}_{\mathrm{base}}$ (augmented) | 73.6 | 79.3 | 76.9 | 79.1 | 82.0 | 81.3 | 67.7 | 82.2 | 87.9 | 78.7 |
| | $\mathrm{BERT}_{\mathrm{large}}$ (baseline) | 73.0 | 79.9 | 77.4 | 78.2 | 81.8 | 81.2 | 68.8 | 81.5 | 88.2 | 78.7 |
| | $\mathrm{BERT}_{\mathrm{large}}$ (augmented) | 72.7 | 79.8 | 77.8 | 79.7 | 84.4 | 82.6 | 68.5 | 82.1 | 86.4 | 79.5 |
Table 1: F1-score (\%) on the English all-words WSD benchmark datasets in Raganato et al. (2017b). SE07 is the development set; SE2, SE3, SE13 and SE15 are test sets; Noun, Verb, Adj, Adv and ALL are computed on the concatenation of all datasets. The systems are grouped into 5 categories: i) knowledge-based systems (KB), i.e. the most frequent sense baseline, $\mathrm{Lesk}_{\mathrm{ext}}+\mathrm{emb}$ (Basile et al., 2014) and Babelfy (Moro et al., 2014); ii) supervised models (Sup), i.e. IMS+emb (Iacobacci et al., 2016), LSTM-LP (Yuan et al., 2016), Bi-LSTM (Raganato et al., 2017a) and HCAN (Luo et al., 2018); iii) feature-based approaches using contextual embeddings from BERT (Feat), i.e. $\mathrm{LMMS}_{2348}$ (Loureiro and Jorge, 2019), SemCor+WNGC (Vial et al., 2019), $\mathrm{SENSEMBERT}_{\mathrm{sup}}$ (Scarlini et al., 2020), BEM (Blevins and Zettlemoyer, 2020) and $\mathrm{EWISER}_{\mathrm{hyper}}$ (Bevilacqua and Navigli, 2020); iv) fine-tuning approaches using BERT (FT), i.e. $\mathrm{BERT}_{\mathrm{def}}$ (Du et al., 2019) and GlossBERT (Huang et al., 2019); v) our models (Ours).

(SE3), SemEval-07 (SE07), SemEval-13 (SE13), and SemEval-15 (SE15). Following Huang et al. (2019) and others, we choose SemEval-07 as the development set for tuning hyperparameters.

# 4.2 Experiment Settings

We experiment with both uncased $\mathrm{BERT}_{\mathrm{base}}$ and $\mathrm{BERT}_{\mathrm{large}}$ models. $\mathrm{BERT}_{\mathrm{base}}$ consists of 110M parameters with 12 Transformer layers, 768 hidden units and 12 self-attention heads, while $\mathrm{BERT}_{\mathrm{large}}$ consists of 340M parameters with 24 Transformer layers, 1024 hidden units and 16 self-attention heads. We use the implementation from the transformers package (Wolf et al., 2019). In total, we trained 4 models in 2 setups: (1) $\mathrm{BERT}_{\mathrm{base/large}}$ (baseline), using only the baseline dataset; (2) $\mathrm{BERT}_{\mathrm{base/large}}$ (augmented), using the concatenation of the baseline and augmented datasets.
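As a sketch of the scoring setup described in Section 3, the single-neuron output head over the [CLS] hidden state can be mocked up as below; the encoder itself is stubbed out with random vectors, so this is an illustrative stand-in, not the actual transformers-based implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fine-tuning hyperparameters reported in the paper.
config = {"learning_rate": 2e-5, "batch_size": 128, "epochs": 4,
          "max_pairs_per_instance": 6}

HIDDEN = 768  # [CLS] hidden size for BERT-base

# The output layer: a single linear neuron mapping the [CLS] hidden
# state of a context-gloss pair to one relevance score.
W = rng.normal(scale=0.02, size=HIDDEN)
b = 0.0

def relevance_score(cls_state):
    return float(cls_state @ W + b)

# Random stand-ins for the encoder outputs of 6 candidate pairs.
cls_states = rng.normal(size=(config["max_pairs_per_instance"], HIDDEN))
scores = [relevance_score(h) for h in cls_states]
best = int(np.argmax(scores))  # index of the predicted sense
```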
At fine-tuning, we set the initial learning rate to 2e-5 with a batch size of 128 over 4 training epochs. The remaining hyperparameters are kept at the default values specified in the transformers package.

# 4.3 Evaluation Results

We evaluate the performance of each model and report the F1-scores in Table 1, along with the results from other WSD systems.

All 4 of our models trained on the proposed gloss selection objective show substantial improvement over the non-ensemble systems across all benchmark datasets, which signifies the effectiveness of this task formulation. The addition of the augmented training set further improves the performance, particularly in the noun category. It is worth noting that Du et al. (2019) and Huang et al. (2019) reported slightly worse or identical results when fine-tuning on $\mathrm{BERT}_{\mathrm{large}}$, but both of our models fine-tuned on $\mathrm{BERT}_{\mathrm{large}}$ obtain considerably better results than their $\mathrm{BERT}_{\mathrm{base}}$ counterparts. This may be partially attributed to the fact that we were using the recently released whole-word masking variant of $\mathrm{BERT}_{\mathrm{large}}$, which was shown to have better performance on the Multi-Genre Natural Language Inference (MultiNLI) benchmark. Although the $\mathrm{BERT}_{\mathrm{large}}$ (augmented) model has a lower F1-score on the development dataset, it outperforms the ensemble system consisting of eight independent $\mathrm{BERT}_{\mathrm{large}}$ models on three testing datasets and achieves the best F1-score on the concatenation of all datasets.

To illustrate that the improvement in WSD performance comes from the gloss selection objective rather than hyperparameter settings, we fine-tune a $\mathrm{BERT}_{\mathrm{base}}$ model on the unaugmented training set using the same hyperparameter settings as GlossBERT (Huang et al., 2019), i.e.
setting the learning rate and batch size to 2e-5 and 64 respectively, and using 4 context-gloss pairs for each target word. As shown in Table 2, our model fine-tuned with the proposed gloss selection objective consistently outperforms GlossBERT across all benchmark datasets under the same hyperparameter settings.
| System | SE07 | SE2 | SE3 | SE13 | SE15 |
| --- | --- | --- | --- | --- | --- |
| GlossBERT | 72.5 | 77.7 | 75.2 | 76.1 | 80.4 |
| $\mathrm{BERT}_{\mathrm{base}}$ | 73.0 | 79.1 | 77.3 | 77.4 | 81.0 |
Table 2: Comparison of F1-score $(\%)$ on different benchmark datasets between GlossBERT and a $\mathrm{BERT}_{\mathrm{base}}$ model fine-tuned with the gloss selection objective.

# 5 Conclusion

We proposed the gloss selection objective for supervised WSD, which formulates WSD as a relevance ranking task based on context-gloss pairs. Our models fine-tuned with this objective outperform other non-ensemble systems on five English all-words benchmark datasets. Furthermore, we demonstrate how to generate additional training data without external annotations using existing example sentences from WordNet, which provides an extra performance boost and enables our single-model system to surpass the state-of-the-art ensemble system by a considerable margin on a number of benchmark datasets.

# Acknowledgements

We thank the meta-reviewer, the three anonymous reviewers and Ms. Vu Thi Ly for their insightful feedback and suggestions.

# References

Pierpaolo Basile, Annalina Caputo, and Giovanni Semeraro. 2014. An enhanced Lesk word sense disambiguation algorithm through a distributional semantic model. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1591-1600.

Michele Bevilacqua and Roberto Navigli. 2020. Breaking through the $80\%$ glass ceiling: Raising the state of the art in word sense disambiguation by incorporating knowledge graph information. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2854-2864, Online.

Terra Blevins and Luke Zettlemoyer. 2020. Moving down the long tail of word sense disambiguation with gloss informed bi-encoders. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1006-1017, Online.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.

Jiaju Du, Fanchao Qi, and Maosong Sun. 2019. Using BERT for word sense disambiguation. arXiv preprint arXiv:1909.08358.

Luyao Huang, Chi Sun, Xipeng Qiu, and Xuan-Jing Huang. 2019. GlossBERT: BERT for word sense disambiguation with gloss knowledge. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3500-3505.

Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM International Conference on Information & Knowledge Management, pages 2333-2338.

Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. Embeddings for word sense disambiguation: An evaluation study. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 897-907.

Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487-4496.

Daniel Loureiro and Alipio Jorge. 2019. Language modelling makes sense: Propagating representations through WordNet for full-coverage word sense disambiguation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5682-5691.

Fuli Luo, Tianyu Liu, Zexue He, Qiaolin Xia, Zhifang Sui, and Baobao Chang. 2018. Leveraging gloss knowledge in neural word sense disambiguation by hierarchical co-attention.
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1402-1411.

George A. Miller. 1995. WordNet: a lexical database for English. Communications of the ACM, 38(11):39-41.

George A. Miller, Martin Chodorow, Shari Landes, Claudia Leacock, and Robert G. Thomas. 1994. Using a semantic concordance for sense identification. In Proceedings of the Workshop on Human Language Technology, pages 240-243. Association for Computational Linguistics.

Andrea Moro, Alessandro Raganato, and Roberto Navigli. 2014. Entity linking meets word sense disambiguation: a unified approach. Transactions of the Association for Computational Linguistics, 2:231-244.

Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Computing Surveys (CSUR), 41(2):1-69.

Roberto Navigli and Simone Paolo Ponzetto. 2010. BabelNet: Building a very large multilingual semantic network. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 216-225. Association for Computational Linguistics.

Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.

Alessandro Raganato, Claudio Delli Bovi, and Roberto Navigli. 2017a. Neural sequence learning models for word sense disambiguation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1156-1167.

Alessandro Raganato, Jose Camacho-Collados, and Roberto Navigli. 2017b. Word sense disambiguation: A unified evaluation framework and empirical comparison.
In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 99-110.

Bianca Scarlini, Tommaso Pasini, and Roberto Navigli. 2020. SensEmBERT: Context-enhanced sense embeddings for multilingual word sense disambiguation. In Proc. of AAAI.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.

Loic Vial, Benjamin Lecouteux, and Didier Schwab. 2019. Sense vocabulary compression through the semantic knowledge of WordNet for neural word sense disambiguation. In Wordnet Conference, page 108.

Thomas Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, et al. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.

Dayu Yuan, Julian Richardson, Ryan Doherty, Colin Evans, and Eric Altendorf. 2016. Semi-supervised word sense disambiguation with neural models. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1374-1385.

# Appendix

# A Additional Details on Experiment Settings

All models are trained using a single Nvidia Tesla K40 GPU with 12 GB of memory. Gradient accumulation is used to accommodate large batch sizes.

For the hyperparameter search, we manually tuned for the optimal combination using the following candidate values:

- BERT variant: {cased, uncased}
- Maximum number of glosses per context: {4, 6}
- Batch size: $\{32, 64, 128\}$
- Initial learning rate: $\{2\mathrm{e} - 5,3\mathrm{e} - 5,5\mathrm{e} - 5\}$
- Warm-up steps: $\{0, 0.1 * \text{total steps}\}$

At the testing stage, the model checkpoints with the highest F1 score on the development dataset (SemEval-07), evaluated every 1000 steps over 4 training epochs, are selected for evaluation on the testing dataset. We use the scoring script downloaded from http://lcl.uniroma1.it/wsdeval/home.
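The manual sweep over these candidate values amounts to a small grid search. The sketch below only enumerates the combinations; the key names are hypothetical, the training and scoring calls are omitted, and nothing here is the authors' actual code:

```python
from itertools import product

# Candidate values from Appendix A (keys are illustrative names).
grid = {
    "bert_variant": ["cased", "uncased"],
    "max_glosses_per_context": [4, 6],
    "batch_size": [32, 64, 128],
    "learning_rate": [2e-5, 3e-5, 5e-5],
    "warmup_fraction": [0.0, 0.1],  # 0 or 10% of total training steps
}

def all_configs(grid):
    """Yield every hyperparameter combination in the grid."""
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(all_configs(grid))
print(len(configs))  # 2 * 2 * 3 * 3 * 2 = 72 combinations
```

Each combination would then be trained once, keeping for testing the checkpoint that scores best on SemEval-07 when evaluated every 1000 steps.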
# Adapting Coreference Resolution to Twitter Conversations

Berfin Aktas*

SFB1287, Research Focus Cognitive Sciences

University of Potsdam, Germany

Veronika Solopova*

Free University of Berlin, Germany

berfinaktas@uni-potsdam.de solopov97@zedat.fu-berlin.de

Annalena Kohnert

Department of Language Science and Technology

Saarland University, Germany

annalenakohnert@gmail.com

Manfred Stede

SFB1287, Research
Focus Cognitive Sciences

University of Potsdam, Germany

stede@uni-potsdam.de

# Abstract

The performance of standard coreference resolution is known to drop significantly on Twitter texts. We improve the performance of the system of Lee et al. (2018), which is originally trained on OntoNotes, by retraining on manually-annotated Twitter conversation data. Further experiments combining different portions of OntoNotes with Twitter data show that selecting text genres for the training data can beat the mere maximization of training data amount. In addition, we inspect several phenomena such as the role of deictic pronouns in conversational data, and present additional results for variant settings. Our best configuration improves the performance of the "out of the box" system by $21.6\%$ .

# 1 Introduction and Related Work

Twitter messages present a discourse genre that includes noisy informal language with abbreviations and purposeful typos, use of nonstandard symbols such as # and @ signs, unintended misspellings, etc., which makes them challenging for NLP applications. We are here interested in the task of automated coreference resolution for nominal mentions in Twitter conversations, i.e., threads of messages that specifically reply to one another. In addition to non-standard words, Twitter conversations also show peculiar phenomena of referring, such as exophoric pointers to non-linguistic content in attached visual media, and mixed pronominal references to the same entity due to the nature of multi-user conversations (Aktas et al., 2018).

Thus, tweets are a complicated genre for coreference resolution, but at the same time highly relevant for many applications that seek to extract information or opinions from users' messages. In this paper, we use a state-of-the-art resolution system built with the OntoNotes corpus (Pradhan et al., 2007) and experiment with adding annotated Twitter conversations to the training data.
Next, we consider the different - spoken and written - genres included in the OntoNotes corpus. We thus conduct experiments with training on different portions, and we show that carefully selecting genre subsets beats the straightforward "taking as much as possible". Overall, our best configuration improves the "out of the box" performance of the system by Lee et al. (2018) on Twitter data by $21.6\%$ . + +To our knowledge, there is no work specifically on adapting coreference resolution to Twitter, other than the aforementioned study of Aktas et al. (2018), which showed a significant drop in performance when a system with OntoNotes models is applied to Twitter. More generally, one of the few studies on domain adaptation for coreference resolution is (Do et al., 2015), which adapts the Berkeley system (Durrett and Klein, 2013) to narrative stories. Do et al. do not retrain the system but add linguistic features of narratives as soft constraints to the resolver. - At the same time, Twitter-adaptation has been investigated for other NLP tasks, such as NER. As an example, in (Ritter et al., 2011), performance is measured using tools trained with Twitter-related and out-of-domain data. + +Regarding OntoNotes genre differences, Uryupina and Poesio (2012) and Pradhan et al. (2013) report varying performance in coreference resolution for distinct corpus sections; this work inspired our experiments reported in the following. Section 2 describes our data sets, and Section 3 the experiments. Section 4 provides various additional analyses that shed light on the domain adaptation problem, and Section 5 concludes. + +# 2 Data + +For our experiments, we use the English portion of the OntoNotes benchmark used as training set in the CoNLL-2012 shared task (Pradhan et al., 2012). It has texts from spoken and written registers, and contains gold annotations at different layers, including coreference chains, i.e., sets of mentions referring to the same entity. 
Spoken data includes telephone conversations (tc), broadcast conversations (bc), and broadcast news (bn); written data contains magazine (mz), newswire (nw), pivot text (pt) and web blogs (wb). As shown in Table 1, the ONT corpus contains 1289K tokens in 2632 documents (in CoNLL terminology, documents are the units of independent annotation). + +
| | docs | tokens | chains | mentions |
| --- | --- | --- | --- | --- |
| ONT | 2632 | 1289K | 34K | 152K |
| tc | 111 | 81K | 1931 | 12K |
| bc | 284 | 144K | 4236 | 18K |
| bn | 711 | 172K | 6138 | 21K |
| mz | 410 | 164K | 3534 | 13K |
| nw | 622 | 387K | 9404 | 34K |
| pt | 320 | 210K | 6611 | 42K |
| wb | 174 | 131K | 2993 | 12K |
| TW' | 185 | 48K | 1534 | 6K |
Table 1: Corpus size and basic coreference statistics

Our second dataset is the Twitter Conversation corpus (TW) presented by Aktas et al. (2018). The conversations are tree structures where each tweet has a parent (i.e. the tweet it is replied to), except for the initial tweet starting the conversation. A tree can be shallow, with many replies on just one level, or it can be deep when participants interact with each other across several turns. The corpus holds 1756 tweets in 185 threads, a thread being defined as a path from the root to a leaf node of a conversation tree. $69\%$ of the coreference chains in this dataset contain co-referential relations across tweets; hence, considering conversation context is important. We illustrate a thread structure with one example of coreference chain annotation in Figure 1.

The original TW corpus was annotated with a scheme slightly different from that of ONT. For systematic comparison, we modified the TW annotations so that they are conceptually parallel to ONT; we thus call the dataset TW' here.

![](images/d262428a2b73753ade0e708decc70af16b3bad495746760aa4bf2e5ae067a2af.jpg)
Figure 1: A thread sample in TW

# 3 Experiments

For our experiments, we chose 'e2e-coref' (Lee et al., 2018), an update of the end-to-end neural coreference resolver presented at EMNLP 2017. It introduced a refined approach based on differentiable approximation to higher-order inference, and ELMo embeddings (Peters et al., 2018) for span scoring, which significantly improved performance on English ONT. The approach achieved 73.0 F1, representing the 2018 state of the art. Due to its cost efficiency, speed and flexibility, it was later used as the basis for several recent state-of-the-art models, including SpanBERT (Joshi et al., 2020).

# 3.1 Test set
| | Tokens | Chains | Mentions |
| --- | --- | --- | --- |
| train | 44885 | 1411 | 5946 |
| test | 3260 | 123 | 408 |
Table 2: Twitter train/test distribution

Our main goal is to see how different training set configurations affect the coreference resolution performance on Twitter data. In order to achieve informative results, as the data is not linearly distributed and highly variable, we selected a representative test set not via random sampling, but through statistical analysis of three features: the number of tokens, chains and mentions per document. To faithfully represent threads of all lengths, we determined the documents where these variables are situated either on the median, or in the first and fourth quartiles of the respective distribution, while omitting obvious outliers (see Figure 2). Because of the linear correlation of the three parameters shown in Figure 3, we could make sure to only select the documents where all three are in the same range of their distributions.

![](images/b7f2044cebab594baa6bcfae32e8d1ee0b5fe73c3c447dfb6d69d84ba47cbfb0.jpg)
Figure 2: Distribution of the three considered parameters. U, L, M mark the fourth (upper) quartile, the first (lower) quartile, and the median, respectively.

Among the pre-screened files, we checked each document, marking features of the annotated mentions (person, number, gender) as well as Twitter phenomena (hash-tags, user names, pronouns with typos, etc.). With this information, we excluded threads without enough coverage and variability of the phenomena in focus. As the threads are not evenly distributed in their total length, we compared the average, median and sum of each of the three characteristics in the whole corpus with those of the determined test set, confirming that all values lie under the $15\%$ threshold of the total number. The final distribution is shown in Table 2.
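The quartile-based pre-screening described above can be sketched as follows, assuming each document is represented as a dict of its three statistics; the helper names and the toy data are illustrative, not the authors' code or the actual TW' figures:

```python
from statistics import quantiles

def quartile_band(values, v):
    """Band of v within values: 'L' (first quartile), 'U' (fourth
    quartile), or 'M' (the middle half around the median)."""
    q1, _, q3 = quantiles(values, n=4)
    if v <= q1:
        return "L"
    if v >= q3:
        return "U"
    return "M"

def candidate_documents(docs):
    """Keep documents whose token, chain and mention counts all fall
    into the same band of their respective distributions."""
    feats = {k: [d[k] for d in docs] for k in ("tokens", "chains", "mentions")}
    selected = []
    for d in docs:
        bands = {quartile_band(feats[k], d[k]) for k in feats}
        if len(bands) == 1:  # all three statistics agree on the band
            selected.append(d)
    return selected

# Toy, perfectly correlated documents plus one discordant document:
docs = [{"tokens": 10 * i, "chains": i, "mentions": 2 * i} for i in range(1, 11)]
outlier = {"tokens": 100, "chains": 1, "mentions": 20}  # long but chain-poor
print(len(candidate_documents(docs)))                    # correlated docs all pass
print(outlier in candidate_documents(docs + [outlier]))  # discordant doc is dropped
```

Documents surviving this screen would then still be checked manually for coverage of the mention and Twitter-specific phenomena, as described above.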
![](images/9ef91cd1ab977c2ed9b0a794cbe1ad2bf4a78cfaf1a8d0bd561950c35cfe2b6e.jpg)
Figure 3: Each blue data point represents the chain and token counts for one document, while red points denote the mention and token information of the same documents.

# 3.2 Baseline Experiments

For evaluation, we use the official CoNLL-2012 scripts, measuring the average of precision, recall and F1 for the MUC, B3 and CEAFE metrics. After we successfully reproduced the published e2e-coref results, we measured how a model trained on ONT
| Test | Tokens | Chains | Mentions |
| --- | --- | --- | --- |
| A - ONT | 1289K | 34K | 152K |
| B - TW only | 44.8K | 1.4K | 5.9K |
| C - TW+ONT | 1333.8K | 35.4K | 157.9K |
| D - TW+spok | 269.8K | 7.5K | 35.9K |
| E - TW+writ | 269K | 5.8K | 22.8K |
Table 3: Experimental setup

performs on our Twitter test set (Test A). The resulting 45.18 F1 (see Table 4) is almost $28\%$ lower than the result reported on the official ONT test set.

A second baseline results from using only the TW' Twitter corpus as training data, which led to 60.8 F1 (Test B). Although this model is based on a rather small training set, it already improves significantly on baseline A and points to the difference between in-domain and out-domain training.

# 3.3 Effects of selecting training (sub-)sets

Noting that the presence of Twitter data in the training set is beneficial, for Test C we merged ONT and TW', with the latter forming $3.35\%$ of the total size (see Table 3). The results show not only a performance increase of $17\%$ in comparison to Test A, but also a $2\%$ gain over Test B, demonstrating that combining ONT and TW' can be crucial for the learning effect. To study this in more detail, we measured how performance on the test set reacts to training on different subsets of ONT. We roughly distinguished spoken, spontaneous language from written or edited texts.

Hence, in Test D, the training set consists of Twitter data and only ONT's spoken genres, viz. broadcast conversations and telephone conversations. As a consequence, the proportion of Twitter data in the training set rises from $3.35\%$ to $16.6\%$ . We found an increase in overall performance by $4.3\%$ , indicating that the written genres may add confusion rather than benefit to this task. However, it is not entirely clear whether the improvement results from excluding the written genres or from increasing the proportion of Twitter data.

To answer this question, we proceeded to Test E, which combines the proportion of Twitter data present in Test D with documents from the written genres; we chose newswire (nw) and magazines (mz). Test E scores F1 61.25, which is $5.5\%$ lower than Test D.
This result may partly be due to the sparsity of the written data, with a smaller amount of chains and mentions present in the written genre + +
| Test | Rec. | Prec. | F1 | Rec.¹ | Prec.¹ | F1¹ | Rec.² | Prec.² | F1² |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **MUC** | | | | | | | | | |
| A-ONT | 38.24 | 55.89 | 45.41 | 35.74 | 51.36 | 42.15 | 41.05 | 66.47 | 50.75 |
| B-TW only | 56.84 | 74.65 | 64.54 | 50.95 | 70.89 | 59.29 | - | - | - |
| C-TW+ONT | 60.35 | 71.07 | 65.27 | 46.38 | 67.77 | 55.07 | 62.8 | 73.06 | 67.54 |
| D-TW+spok | 62.1 | 77.97 | 68.41 | 47.9 | 75.44 | 58.6 | 61.75 | 72.72 | 66.79 |
| E-TW+writ | 60.35 | 71.36 | 65.39 | 54.75 | 69.23 | 61.14 | 62.45 | 73.85 | 67.68 |
| **B3** | | | | | | | | | |
| A-ONT | 35.14 | 56.02 | 43.18 | 33.19 | 51.68 | 40.42 | 37.21 | 66.78 | 47.79 |
| B-TW only | 51.64 | 68.77 | 58.99 | 46.31 | 63.52 | 53.57 | - | - | - |
| C-TW+ONT | 55.95 | 66.02 | 60.57 | 44.58 | 63.04 | 52.23 | 58.29 | 68.97 | 63.18 |
| D-TW+spok | 58.25 | 74.16 | 65.25 | 46.46 | 71.45 | 56.31 | 57.16 | 68.48 | 62.31 |
| E-TW+writ | 55.19 | 63.9 | 59.23 | 49.28 | 60.4 | 54.28 | 59.24 | 68.85 | 63.68 |
| **CEAFE** | | | | | | | | | |
| A-ONT | 44.5 | 49.76 | 46.98 | 43.26 | 47.59 | 45.32 | 49.13 | 61.04 | 54.44 |
| B-TW only | 50.97 | 69.66 | 58.87 | 44.54 | 65.96 | 52.96 | - | - | - |
| C-TW+ONT | 56.68 | 67.68 | 61.69 | 50.0 | 65.48 | 56.71 | 59.29 | 70.12 | 64.25 |
| D-TW+spok | 61.81 | 71.06 | 66.12 | 53.94 | 68.2 | 60.24 | 59.64 | 64.92 | 62.17 |
| E-TW+writ | 52.4 | 67.85 | 59.13 | 46.01 | 64.06 | 53.55 | 58.14 | 67.47 | 62.46 |
| **Average** | | | | | | | | | |
| A-ONT | 39.29 | 53.89 | 45.18 | 37.39 | 50.21 | 42.6 | 42.46 | 64.76 | 50.99 |
| B-TW only | 53.15 | 71.025 | 60.8 | 47.27 | 66.58 | 55.27 | - | - | - |
| C-TW+ONT | 57.76 | 68.25 | 62.51 | 46.9 | 65.43 | 54.67 | 60.12 | 70.71 | 65.0 |
| D-TW+spok | 60.72 | 74.39 | 66.8 | 49.43 | 71.69 | 58.3 | 59.51 | 68.7 | 63.76 |
| E-TW+writ | 55.98 | 67.7 | 61.25 | 50.01 | 64.56 | 56.32 | 59.94 | 70.05 | 64.60 |
Table 4: Results. F1 $^1$ and F1 $^2$ are calculated after removing first and second person pronouns, and verb mentions, respectively; they are discussed in Section 4.

documents (cf. Table 3), but still indicates an advantage of the spoken portion of ONT over the written one.

# 4 Additional Analyses

To gain further insight into the adaptation of coreference resolution to Twitter, we quantitatively and qualitatively compare the results of the best-performing test (D) to the baselines (see Table 5).

**Mention length** For all tests, the average token length of mentions additionally predicted by the system (spurious predictions) is significantly longer $(\mathrm{p} \leq 0.05)$ than that of the correct predictions. The higher the proportion of ONT training data (whose mentions are on avg. 0.72 tokens longer than in TW'), the longer those predictions are. At the same time they are significantly shorter $(\mathrm{p} \leq 0.05)$ than the missed gold predictions. Hence there is a tendency to select longer spans (especially when training on ONT), but these are also more error-prone.

**Twitter-specific tokens** Hashtags and usernames caused many errors in Test A. In tweets that are replies, user addresses are inserted at the beginning, so the majority of such tweet-initial usernames are not part of the syntax and have not been annotated. Table 5 shows that many of those names are incorrectly detected as mentions, while hashtags are completely ignored. With Twitter training data in Test B, identification of Twitter-specific tokens works better. Tweet-initial usernames are ignored as mentions and some usernames and hashtags are now correctly predicted. Test D shows further improvements for syntactically-integrated hashtags, but usernames or non-integrated hashtags still remain unresolved.

**Pronouns** Although they are relatively evenly distributed in the gold annotations, more 3rd person pronouns are resolved than 1st and 2nd ps.
pronouns in Test A, resulting in an overall F1 of 0.769. In Test B with Twitter training data, which is rich in pronouns, pronoun performance improves for 1st and especially 2nd ps., and remains the same for 3rd ps., improving the F1 to 0.917. In Test D, pronoun performance is slightly worse (0.905).

As the entire training data in B and D is conversational, which by nature has many 1st and 2nd ps. pronouns, we repeated all tests after removing those chains containing only 1st and 2nd ps. pronouns. This is to make sure that the improvement is not exclusively caused by easy detection of these pronouns. The results are in column F1 $^1$ in Table 4. While deictic pronouns have a major impact on F1, we still see improvements over the baseline for all tests but C, meaning that generally, detection of other anaphoric expressions improves as well.

**Verb annotations** Verb mentions are possible in ONT if they co-refer with a nominal mention (Pradhan et al., 2007), but they are not annotated in TW'. Thus four predicted verb mentions in Test A, of which two are correctly linked with the demonstrative pronoun that, are counted as erroneous predictions. After adding training data from TW' in Test D, however, no verbal mentions are predicted. To check the influence of this annotation difference, we also ran all tests with the verbal annotations removed from ONT, which reduced mentions by $2.4\%$ and chains by $3.6\%$ . Column F1 $^2$ in Table 4 shows the results. While training with only spoken genres outperformed more written-dominant training data in previous experiments, we now see the opposite, with Test D giving the worst results. These variations motivate looking further into the specific effects of different training data combinations and how verb annotations (both generally and depending on text genres) influence an otherwise purely nominal coreference resolution task.
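The chain filtering used for the F1¹ recomputation can be sketched as follows; the pronoun list and the chain-as-list-of-strings representation are illustrative assumptions, not the authors' exact filter:

```python
# First/second person ("deictic") pronoun forms; an illustrative list.
DEICTIC = {"i", "me", "my", "mine", "myself",
           "we", "us", "our", "ours", "ourselves",
           "you", "your", "yours", "yourself", "yourselves"}

def drop_deictic_only_chains(chains):
    """Remove every chain whose mentions are all 1st/2nd person
    pronouns, keeping chains that contain any other mention."""
    return [chain for chain in chains
            if not all(m.lower() in DEICTIC for m in chain)]

chains = [
    ["I", "me", "my"],         # dropped: only 1st person pronouns
    ["You", "your"],           # dropped: only 2nd person pronouns
    ["Sascha", "she", "her"],  # kept: 3rd person chain
    ["we", "the team", "us"],  # kept: contains a non-pronoun mention
]
filtered = drop_deictic_only_chains(chains)
print(filtered)
```

Scoring on the surviving chains then shows how much of the gain is attributable to anaphoric expressions other than easy deictic pronouns.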
**Chain Linking** The last section of Table 5 shows that Test B improves the number of correctly predicted chains compared to Test A, and it further increases in Test D, almost doubling from Test A. Partially correct chains also increase over the tests, and the number of missed entities (cases where not a single mention of an entity is predicted) is reduced by $51.3\%$ . Notably, chains consisting only of identical strings profited the most from the combined training set in D.

# 5 Conclusion

We showed that the performance of a state-of-the-art "standard" coreference resolution system run on Twitter conversations can improve by $21.6\%$ by adding in-domain training data. In fact, even small amounts of added in-domain data can have an impact. Further, interestingly, for the out-domain
| | Gold | A | B | D |
| --- | --- | --- | --- | --- |
| Pred. Mentions | 408 | 305 | 307 | 334 |
| Username | 8 | 51 | 6 | 5 |
| tweet-initial | 1 | 44 | 0 | 0 |
| Hashtags | 11 | 0 | 4 | 5 |
| Correctly Pred. | 408 | 218 | 265 | 293 |
| Avg. #tokens | 1.64 | 1.41 | 1.13 | 1.18 |
| Pronouns | 219 | 149 | 199 | 194 |
| 1st person | 57 | 38 | 53 | 50 |
| 2nd person | 64 | 26 | 63 | 62 |
| 3rd person | 68 | 60 | 61 | 59 |
| Username | 8 | 6 | 5 | 5 |
| tweet-initial | 1 | 1 | 0 | 0 |
| Hashtags | 11 | 0 | 3 | 5 |
| Pred. Chains | 123 | 110 | 90 | 107 |
| Correct Chains | - | 18 | 27 | 37 |
| Partially Correct | - | 10 | 11 | 14 |
| Missed Entities | - | 39 | 32 | 20 |
+ +Table 5: Properties of predicted mentions and chains + +training data (ONT), the choice of genre can make a bigger difference than the bare amount of data. Our additional analyses considered two more variants of the main experiment design: While all results given in Table 4 indicate that adding Twitter data to the training set improves the performance significantly, the best combination of in-domain and out-domain data can depend on specific factors as discussed in section 4. Also, we showed that improvements from Twitter training data do not result just from the large proportion of 1st and 2nd ps. pronouns (as one might have wondered). Finally, we tested the effect of removing verb mentions from ONT, which exhibits different patterns than other setups regarding the best combination of training data. The result encourages deeper exploration of training data arrangements in terms of these features. + +In future work we plan to focus more on the specific kinds of training data portions and examine the influence of spoken versus written register, and on that of formal versus informal language (which need not necessarily coincide). + +# Acknowledgments + +We thank the anonymous reviewers for their helpful comments and suggestions. This work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projektnummer 317633480 - SFB 1287, Project A03. + +# References + +Berfin Aktas, Tatjana Scheffler, and Manfred Stede. 2018. Anaphora Resolution for Twitter Conversations: An Exploratory Study. In Proceedings of the Workshop on Computational Models of Reference, Anaphora, and Coreference, CRAC@HLT-NAACL 2018, New Orleans, Louisiana. Association for Computational Linguistics. +Quynh Ngoc Thi Do, Steven Bethard, and Marie-Francine Moens. 2015. Adapting Coreference Resolution for Narrative Processing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2262-2267, Lisbon, Portugal. 
Association for Computational Linguistics. +Greg Durrett and Dan Klein. 2013. Easy Victories and Uphill Battles in Coreference Resolution. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1971-1982, Seattle, Washington, USA. Association for Computational Linguistics. +Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving Pre-training by Representing and Predicting Spans. Transactions of the Association for Computational Linguistics, 8:64-77. +Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-Order Coreference Resolution with Coarse-to-Fine Inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 687-692, New Orleans, Louisiana. Association for Computational Linguistics. +Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics. +Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Björkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards Robust Linguistic Analysis using OntoNotes. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 143-152. Association for Computational Linguistics. +Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 Shared Task: Modeling Multilingual Unrestricted Coreference in OntoNotes. In Joint Conference on EMNLP and CoNLL - Shared Task, pages 1-40. Association for Computational Linguistics. 
Sameer Pradhan, Lance Ramshaw, Ralph Weischedel, Jessica Macbride, and Linnea Micciulla. 2007. Unrestricted Coreference: Identifying Entities and Events in OntoNotes. International Conference on Semantic Computing, 0:446-453.

Alan Ritter, Sam Clark, Mausam, and Oren Etzioni. 2011. Named Entity Recognition in Tweets: An Experimental Study. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1524-1534, Edinburgh, Scotland, UK. Association for Computational Linguistics.

Olga Uryupina and Massimo Poesio. 2012. Domain-specific vs. Uniform Modeling for Coreference Resolution. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 187-191, Istanbul, Turkey. European Language Resources Association (ELRA).

# A Appendix: Alignment of Annotation Schemes

We use two corpora, Twitter Conversations (TW) and OntoNotes, in the experiments presented in the paper. Only the identity relations are annotated in both of the corpora, and mentions building singleton chains (i.e. chains containing only 1 item) are not considered as markables in either of them. However, their annotation schemes are not fully aligned; there exist differences in the definition of markables. For the sake of comparability of the experimental results, we aligned the types of annotated markables as much as possible by applying semi-automated procedures. We summarize below the main differences we determined and the handling strategies we applied to harmonize them:

- In TW, predicative nouns (e.g. This is [a fake account]) and headless relative clauses having the grammatical role of a noun phrase (e.g. A mature male kangaroo doing [what] it's built for) are considered as markables, but not so in OntoNotes. We removed the predicative noun and relative pronoun annotations in TW.
- In TW, appositions (e.g. [His wife], [Florence], fell ill.) are annotated separately from the preceding noun they co-refer with.
In the CoNLL formatted version of OntoNotes that we use, appositions are merged with the nominals they modify (e.g. [His wife, Florence], fell ill.). Therefore, the apposite modifiers in TW are merged with the preceding co-referring noun phrase. + +- Generic "you" instances are annotated in TW but not in OntoNotes. We removed generic "you" annotations from TW. +- In TW, "reflexives" are annotated as separate mentions even if they are used for focus (e.g. [The president] [himself] said this). However, the focus reflexives are both annotated as a separate markable and also a part of the span of the preceding co-referring noun phrase in OntoNotes (e.g. [The president [himself]] said this). Therefore, the focus reflexives in TW are added to the span of the preceding co-referring noun phrase. + +If the removal of a mention made the remaining chain a singleton (i.e. only 1 mention left in the chain), the whole chain is removed from the annotations, as no singleton chains are allowed in the OntoNotes scheme. + +# B Appendix: Preprocessing the Data + +In the TW dataset: + +- We normalized parentheses, converting left and right bracket tokens into '-LRB-' and '-RRB-', respectively. +- We converted all smiley and emoji tokens into the placeholder strings "%smiley" and "%emoji", respectively. +- We did not apply any preprocessing to hashtags and @-usernames. + +# C Appendix: Experimental Setup + +The experiments are conducted on two servers with GeForce GTX 1080 GPUs.
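To make the normalization in Appendix B concrete, here is a minimal Python sketch. The smiley/emoji detection rules below are illustrative assumptions, not the exact rules used for TW, and the `%smiley`/`%emoji` placeholder strings are taken from the bullet list above:

```python
import re

def normalize_tweet_tokens(tokens):
    """Sketch of the Appendix B normalization (illustrative, not the original code).

    - map '(' / ')' to '-LRB-' / '-RRB-'
    - map smiley and emoji tokens to placeholder strings
    - leave hashtags and @-usernames untouched
    """
    # Stand-in smiley pattern: eyes, optional nose, mouth (e.g. ":)", ";-D")
    smiley_re = re.compile(r"^[:;=8][-o^]?[)(\[\]dpDP/\\]$")
    # Stand-in emoji pattern: common Unicode emoji/symbol ranges
    emoji_re = re.compile(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]")
    out = []
    for tok in tokens:
        if tok == "(":
            out.append("-LRB-")
        elif tok == ")":
            out.append("-RRB-")
        elif smiley_re.match(tok):
            out.append("%smiley")
        elif emoji_re.search(tok):
            out.append("%emoji")
        else:
            out.append(tok)  # hashtags and @-usernames pass through unchanged
    return out
```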
\ No newline at end of file diff --git a/adaptingcoreferenceresolutiontotwitterconversations/images.zip b/adaptingcoreferenceresolutiontotwitterconversations/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b81acee37471aa856b2b93d7954000d7515ec13a --- /dev/null +++ b/adaptingcoreferenceresolutiontotwitterconversations/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1b0c258a3ddd9b9c24041efa266f5c51a9c032fc67633d22831d7b78e518da0 +size 407439 diff --git a/adaptingcoreferenceresolutiontotwitterconversations/layout.json b/adaptingcoreferenceresolutiontotwitterconversations/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ee102fa464fa5eaf0d547bd2e367109cdde9648f --- /dev/null +++ b/adaptingcoreferenceresolutiontotwitterconversations/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:15b1ebe11108b52c4be0b51f4de460f739f74bd003481ee12a069eccdc2d3fa3 +size 194165 diff --git a/adaptingopendomainfactextractionandverificationtocovidfactthroughindomainlanguagemodeling/a3afd639-6954-4d2c-8995-63cc366878d0_content_list.json b/adaptingopendomainfactextractionandverificationtocovidfactthroughindomainlanguagemodeling/a3afd639-6954-4d2c-8995-63cc366878d0_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..f01c21751ba948436530f77e592a6d2dd3a54086 --- /dev/null +++ b/adaptingopendomainfactextractionandverificationtocovidfactthroughindomainlanguagemodeling/a3afd639-6954-4d2c-8995-63cc366878d0_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d7dd339f71aa0f4f6d8bcd3e35e8526816e9a035701759d11f709e61fc5e46ff +size 46253 diff --git a/adaptingopendomainfactextractionandverificationtocovidfactthroughindomainlanguagemodeling/a3afd639-6954-4d2c-8995-63cc366878d0_model.json 
b/adaptingopendomainfactextractionandverificationtocovidfactthroughindomainlanguagemodeling/a3afd639-6954-4d2c-8995-63cc366878d0_model.json new file mode 100644 index 0000000000000000000000000000000000000000..d2e9f029f67c21b82c935f39d0099320642f4a3c --- /dev/null +++ b/adaptingopendomainfactextractionandverificationtocovidfactthroughindomainlanguagemodeling/a3afd639-6954-4d2c-8995-63cc366878d0_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:66c580882cf68e66afd6d8f37bd8a629ea5312071a6bc687c13ff74189ed7b98 +size 55269 diff --git a/adaptingopendomainfactextractionandverificationtocovidfactthroughindomainlanguagemodeling/a3afd639-6954-4d2c-8995-63cc366878d0_origin.pdf b/adaptingopendomainfactextractionandverificationtocovidfactthroughindomainlanguagemodeling/a3afd639-6954-4d2c-8995-63cc366878d0_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..93b9cd6a4464aab29c926515b6e9d4add0efb2f7 --- /dev/null +++ b/adaptingopendomainfactextractionandverificationtocovidfactthroughindomainlanguagemodeling/a3afd639-6954-4d2c-8995-63cc366878d0_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca8acaa4d404f325d0de909d61b9831ad1af7291e7f0938478e8905470c3096f +size 279686 diff --git a/adaptingopendomainfactextractionandverificationtocovidfactthroughindomainlanguagemodeling/full.md b/adaptingopendomainfactextractionandverificationtocovidfactthroughindomainlanguagemodeling/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a3dc94805d54f9dc74e5a593c12028b143bd5327 --- /dev/null +++ b/adaptingopendomainfactextractionandverificationtocovidfactthroughindomainlanguagemodeling/full.md @@ -0,0 +1,204 @@ +# Adapting Open Domain Fact Extraction and Verification to COVID-FACT through In-Domain Language Modeling + +Zhenghao Liu $^{1,2}$ , Chenyan Xiong $^{5}$ , Zhuyun Dai $^{6}$ , Si Sun $^{4}$ , Maosong Sun $^{1,3}$ , Zhiyuan Liu $^{1,3}$ + +$^{1}$ Department of Computer 
Science and Technology, Tsinghua University, Beijing, China + +Institute for Artificial Intelligence, Tsinghua University, Beijing, China + +Beijing National Research Center for Information Science and Technology + +$^{2}$ State Key Lab on Intelligent Technology and Systems, Tsinghua University, Beijing, China + +$^{3}$ Beijing Academy of Artificial Intelligence + +$^{4}$ Department of Electronic Engineering, Tsinghua University, Beijing, China + +$^{5}$ Microsoft Research, Redmond, USA + +$^{6}$ Carnegie Mellon University, USA + +# Abstract + +With the epidemic of COVID-19, verifying scientifically false online information, such as fake news and maliciously fabricated statements, has become crucial. However, the lack of training data in the scientific domain limits the performance of fact verification models. This paper proposes an in-domain language modeling method for fact extraction and verification systems. We propose SciKGAT, which combines the advantages of open-domain literature search, state-of-the-art fact verification systems and in-domain medical knowledge through language modeling. Our experiments on SCIFACT, a dataset of expert-written scientific fact verification, show that SciKGAT achieves a $30\%$ absolute improvement in precision. Our analyses show that such improvement stems from our in-domain language model, which picks up more related evidence pieces and verifies facts more accurately. Our codes and data are released via $\mathrm{Github}^{1}$ . + +# 1 Introduction + +Online content with false information, such as lies, rumors and conspiracy theories, has been growing significantly and spreading widely during the COVID-19 epidemic. An automatic fact-checking system is urgently needed to check these scientific claims and avoid undesired consequences. Automatic fact-checking has drawn lots of attention from the NLP community.
Researchers mainly focus on stopping misinformation transmission through videos and texts (Cinelli et al., 2020; Hossain et al., 2020; Li et al., 2020; Serrano et al., 2020). + +The scientific fact verification task (Wadden et al., 2020) was introduced to address COVID-FACT checking with high-quality articles spanning domains from basic science to clinical medicine. Nevertheless, the small-scale training data of SCIFACT may limit the performance of COVID-FACT checking. The state-of-the-art model (Wadden et al., 2020) achieves only $46.6\%$ precision on fact verification, which is hard for users to trust. + +This paper presents the Scientific KGAT (SciKGAT) to deal with low-resource COVID-FACT verification. SciKGAT employs an in-domain language model in the fact extraction and verification pipeline (Thorne et al., 2018; Wadden et al., 2020) to adapt fact-checking to the COVID domain. The in-domain language model transfers COVID domain knowledge into pre-trained language models through continued training, and learns COVID-oriented medical token semantics with mask language model based training. The state-of-the-art fact verification model KGAT (Liu et al., 2020; Ye et al., 2020) is also used in SciKGAT for multi-evidence reasoning in the fact verification module. + +Our experiments show that in-domain language modeling improves various components of the fact extraction and verification pipeline, yielding more accurate evidence selection and fact verification. Our in-domain language modeling improves fact verification by more than $10\%$ absolute $\mathrm{F_1}$ score and $30\%$ absolute precision (from $46.6\%$ to $76\%$ ) over the previous state of the art on SCIFACT. Such improvement shows that our model provides a set of solutions for low-resource fact verification tasks such as COVID-19 fact checking.
+ +# 2 Related Work + +Existing fact extraction and verification models usually employ a three-step pipeline system (Chen et al., 2017): document retrieval (abstract retrieval), sentence selection (rationale selection) and fact verification (Thorne et al., 2018; Wadden et al., 2020). + +Early fact verification methods concatenate all evidence pieces (Nie et al., 2019; Wadden et al., 2020) for fact verification. KGAT (Liu et al., 2020) conducts fine-grained multiple-evidence reasoning with a graph and achieves the state of the art for fact verification (Ye et al., 2020). + +The reasoning ability of the pre-trained language model is crucial and helps improve fact verification performance (Devlin et al., 2019; Li et al., 2019; Zhou et al., 2019; Soleimani et al., 2019). Some work (Beltagy et al., 2019; Lee et al., 2020) transfers medical domain knowledge into pre-trained language models for better medical semantic understanding, which provides a potential way to deal with the COVID-FACT checking problem. + +# 3 Methodology + +This section describes our SciKGAT for fact extraction and verification. We first introduce the pipeline of fact extraction and verification (Sec. 3.1) and then continuously train the BERT based model (Sec. 3.2) for the whole pipeline. + +# 3.1 Preliminary + +Given a claim $c$ , we aim to predict the claim label $y$ . We usually implement the fact extraction and verification pipeline with three steps: abstract retrieval, rationale selection and fact verification. + +Abstract Retrieval. Given the claim $c$ and the abstract collection $D = \{a_{1},\ldots ,a_{l}\}$ , we aim to retrieve three abstracts for the following steps. + +We first retrieve the top 100 abstracts with TF-IDF from the abstract collection $D$ , following previous work (Wadden et al., 2020).
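The first-stage retrieval can be sketched with a plain TF-IDF cosine ranker; this is a simplified stand-in for the paper's setup (which retrieves the top 100 from the full abstract collection before BERT reranking), using only the standard library:

```python
import math
from collections import Counter

def tfidf_topk(claim, abstracts, k=3):
    """Rank abstracts against a claim by TF-IDF cosine similarity (illustrative sketch)."""
    docs = [a.lower().split() for a in abstracts]
    n = len(docs)
    # document frequency and smoothed inverse document frequency
    df = Counter(t for d in docs for t in set(d))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}

    def vec(tokens):
        tf = Counter(tokens)
        return {t: tf[t] * idf.get(t, 0.0) for t in tf}

    def cos(u, v):
        dot = sum(u[t] * v.get(t, 0.0) for t in u)
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    q = vec(claim.lower().split())
    ranked = sorted(range(n), key=lambda i: -cos(q, vec(docs[i])))
    return ranked[:k]
```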
For the claim $c$ and abstract $a = \{e_1, \dots, e_k\}$ with $k$ evidence pieces and title $t$ , we concatenate the claim, title and abstract to get the representation $\mathcal{H}$ of the pair $\langle c, a \rangle$ with BERT (Devlin et al., 2019): + +$$ +\mathcal{H} = \mathrm{BERT}([\mathrm{CLS}] \circ c \circ [\mathrm{SEP}] \circ t \circ a \circ [\mathrm{SEP}]), \tag{1} +$$ + +where $\circ$ is the concatenation operation. The representation $\mathcal{H}$ of $\langle c, a \rangle$ consists of representations of tokens from both claim and evidence. The 0-th representation $\mathcal{H}_0$ denotes the [CLS] representation. The relevance label $y_a$ between claim $c$ and abstract $a$ is calculated: + +$$ +p(y_{a} \mid c, a) = \mathrm{softmax}_{y_{a}}(\mathrm{MLP}(\mathcal{H}_{0})). \tag{2} +$$ + +We rerank abstracts according to the probability $p(y_{a} = 1|c,a)$ and retain the top 3 abstracts. + +Rationale Selection. Given the retrieved abstract $a$ , rationale selection focuses on selecting relevant sentences for fact verification. + +Similarly, for the evidence $e$ of the retrieved abstract $a$ , we can get the representation $H$ of the claim and evidence pair $\langle c, e \rangle$ : + +$$ +H = \mathrm{BERT}([\mathrm{CLS}] \circ c \circ [\mathrm{SEP}] \circ e \circ [\mathrm{SEP}]). \tag{3} +$$ + +Then we predict the relevance label $y_{r}$ of claim $c$ and evidence $e$ : + +$$ +p(y_{r} \mid c, e) = \mathrm{softmax}_{y_{r}}(\mathrm{MLP}(H_{0})). \tag{4} +$$ + +The related evidence pieces $(p(y_{r} = 0|c,e) < p(y_{r} = 1|c,e))$ are retained to form the retrieved evidence set $E = \{e_1,\dots ,e_q\}$ of each abstract $a$ . + +Fact Verification. For the claim $c$ and retrieved evidence set $E$ , the fact verification model aims to predict the claim label $y$ .
We employ the state-of-the-art model KGAT (Liu et al., 2020) as our fact verification module. For the $i$ -th evidence $e_i$ in the evidence set $E$ , we can get the sentence pair representation $H^i$ of the $i$ -th pair $\langle c, e_i \rangle$ through BERT. Then the probability of claim label $y$ is calculated: + +$$ +p(y \mid c, E) = \mathrm{KGAT}(H^{1}, \dots, H^{q}). \tag{5} +$$ + +# 3.2 Continuous In-Domain Training + +To deal with low-resource COVID-FACT checking, we propose continuous training methods to transfer domain knowledge into pretrained language models. + +For COVID-FACT checking, medical domain knowledge is useful to understand medical words (Beltagy et al., 2019). However, medical domain pre-trained language models become outdated as medicine develops or new viruses, such as COVID-19, emerge. + +Continuous in-domain training provides a potential way to deal with this problem using the latest medical corpus. Hence we propose two in-domain language models for the fact extraction and verification pipeline with continuous training. + +Rationale prediction based training. We first introduce rationale-prediction-style training to continuously train BERT for better reasoning ability on COVID-FACT. For the claim and evidence $\langle c, e \rangle$ , we optimize the BERT model with supervision from SCIFACT: + +$$ +L_{r}(c, e) = \mathrm{CrossEntropy}\left(p\left(y_{r} \mid c, e\right), y_{r}^{*}\right), \tag{6} +$$ + +where $y_{r}^{*}$ denotes the ground-truth rationale prediction label of the pair $\langle c, e \rangle$ . Then we get a supervised in-domain language model, BERT-RP, for the fact verification module.
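Equations (2), (4) and (6) share the same shape: a softmax over an MLP applied to the [CLS] vector, trained with cross-entropy. The numeric sketch below illustrates that shape only; the BERT encoder is mocked by a random vector, and a single linear layer stands in for the MLP:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

class RelevanceHead:
    """Classifier over a mock [CLS] representation H_0, in the spirit of Eqs. (2)/(4).

    In the paper H_0 comes from BERT over "[CLS] claim [SEP] evidence [SEP]";
    here it is just a random vector, and the MLP is a single linear layer.
    """
    def __init__(self, hidden=16, n_labels=2):
        self.W = rng.normal(size=(hidden, n_labels)) * 0.1

    def prob(self, h0):
        return softmax(h0 @ self.W)            # p(y | c, e)

    def loss(self, h0, y_true):
        # cross-entropy as in Eq. (6): -log p(y* | c, e)
        return -np.log(self.prob(h0)[y_true])

head = RelevanceHead()
h0 = rng.normal(size=16)                        # mock [CLS] vector
p = head.prob(h0)                               # probabilities over {irrelevant, relevant}
```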
| Model | Dev Sentence Prec. | Dev Sentence Rec. | Dev Sentence F1 | Dev Abstract Prec. | Dev Abstract Rec. | Dev Abstract F1 | Test Sentence Prec. | Test Sentence Rec. | Test Sentence F1 | Test Abstract Prec. | Test Abstract Rec. | Test Abstract F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **Baselines** | | | | | | | | | | | | |
| SciBERT | 45.78 | 38.52 | 41.84 | 51.93 | 44.98 | 48.21 | - | - | - | - | - | - |
| RoBERTa | 46.51 | 38.25 | 41.98 | 53.30 | 46.41 | 49.62 | 38.6 | 40.5 | 39.5 | 46.6 | 46.4 | 46.5 |
| **SciKGAT** | | | | | | | | | | | | |
| KGAT | 57.07 | 31.97 | 40.98 | 72.73 | 38.28 | 50.16 | - | - | - | - | - | - |
| SciKGAT (w. A) | 42.07 | 47.81 | 44.76 | 47.66 | 58.37 | 52.47 | 40.50 | 48.38 | 44.09 | 47.06 | 57.66 | 51.82 |
| SciKGAT (w. AR) | 50.00 | 47.81 | 48.88 | 53.15 | 56.46 | 54.76 | 41.67 | 45.95 | 43.70 | 47.47 | 54.96 | 50.94 |
| SciKGAT (Full) | 74.36 | 39.62 | 51.69 | 84.26 | 43.54 | 57.41 | 61.15 | 42.97 | 50.48 | 76.09 | 47.30 | 58.33 |
+ +Table 1: Overall Performance of Fact Extraction and Verification. RoBERTa is the large version. SciKGAT (w. A) and SciKGAT (w. AR) are ablation models with the abstract retrieval and evidence selection of SciKGAT. + +Mask language model based training. To help the model better comprehend the semantics of COVID related words, we substitute tokens with [MASK] and ask the model to generate appropriate tokens to fill them. With continuous training, the language model sees the language from the new corpus and is thus able to pick up new terminologies, such as COVID-19. Continuous training with a COVID related corpus better captures the context and semantics of such new terminologies (Gururangan et al., 2020). + +We use data from the COVID-19 Open Research Dataset Challenge $^2$ for continuous training, which focuses on medical topics. In this corpus, there are about 86K papers before 2020, which are about coronaviruses but not about COVID-19, and 54K papers after 2020. Based on the filters used by AI2 to create this dataset, the papers from 2020 onward are almost all about COVID-19. Thus, roughly $40\%$ of the papers in this corpus concern COVID-19 (Wang et al., 2020). + +# 4 Experimental Methodology + +This section describes the dataset, evaluation metrics, baselines, and implementation details. + +Dataset. The recently released dataset SCIFACT (Wadden et al., 2020) is leveraged in our experiments. It consists of 1,409 annotated claims with 5,183 scientific articles. All claims are classified as SUPPORT, CONTRADICT or NOT ENOUGH INFO. The training, development and testing sets contain 809, 300 and 300 claims, respectively. FEVER (Thorne et al., 2018) is also used to train the fact verification modules of the baselines and our models. FEVER consists of 185,455 annotated claims with 5,416,537 Wikipedia documents. + +Evaluation Metrics.
Precision, Recall and $\mathbf{F}_1$ score are used to evaluate model performance, following SCIFACT (Wadden et al., 2020). These evaluations are inspired by the FEVER score (Thorne et al., 2018) and consider whether the evidence is selected correctly at the abstract and sentence levels. + +Baselines. Since the scientific fact verification task was only recently released, our baselines are mainly from Wadden et al. (2020). They first use TF-IDF for abstract retrieval and then use RoBERTa (Large) and SciBERT for rationale selection. KGAT and RoBERTa (Large) are leveraged for fact verification. The rationale selection module is trained with SCIFACT and the fact verification module is trained with data from FEVER and SCIFACT (Wadden et al., 2020). + +Implementation Details. In all experiments, we use SciBERT, RoBERTa (Base) and RoBERTa (Large) (Liu et al., 2019; Beltagy et al., 2019), with Hugging Face's PyTorch implementation$^3$ . Adam is utilized for parameter optimization. For rationale selection, we keep the same setting as Wadden et al. (2020). For abstract retrieval and fact verification, we set the max length to 256, the learning rate to 2e-5, the batch size to 8 and the number of gradient accumulation steps to 4 during training. The other parameters are kept the same as in KGAT (Liu et al., 2020). + +For the abstract retrieval module, we follow previous work (MacAvaney et al., 2020) and fine-tune our in-domain language model with the medical corpus from MS MARCO (Bajaj et al., 2016) to fit our abstract retrieval module to open-domain COVID related literature search. + +# 5 Evaluation Result + +This section first tests the overall performance of SciKGAT. Then it studies the impact of our in-domain language modeling techniques on knowledge transfer. Finally, it provides case studies.
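As a reference point for reading the result tables, set-level precision, recall and F1 over predicted vs. gold evidence can be computed as below. This is a simplification: the official SCIFACT scorer imposes additional correctness conditions beyond set overlap.

```python
def prf1(predicted, gold):
    """Precision/recall/F1 between a predicted and a gold evidence set (simplified sketch)."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)                     # true positives
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```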
| Ablation | Model | Retrieval Prec. | Retrieval Rec. | Retrieval F1 | Sentence Prec. | Sentence Rec. | Sentence F1 | Abstract Prec. | Abstract Rec. | Abstract F1 |
|---|---|---|---|---|---|---|---|---|---|---|
| Abstract Retrieval | TF-IDF | 16.11 | 69.38 | 26.15 | 46.51 | 38.25 | 41.98 | 53.30 | 46.41 | 49.62 |
| | w. SciBERT | 19.78 | 85.17 | 32.10 | 42.09 | 47.27 | 44.53 | 48.18 | 56.94 | 52.19 |
| | w. SciBERT-MLM | 20.33 | 87.56 | 33.00 | 42.07 | 47.81 | 44.76 | 47.66 | 58.37 | 52.47 |
| Rationale Selection | SciBERT | 36.90 | 65.03 | 47.08 | 43.22 | 46.99 | 45.03 | 48.94 | 55.02 | 51.80 |
| | SciBERT-MLM | 43.73 | 60.93 | 50.91 | 50.00 | 47.81 | 48.88 | 53.15 | 56.46 | 54.76 |
| Fact Verification | SciBERT | 43.73 | 60.93 | 50.91 | 36.55 | 38.25 | 37.38 | 36.92 | 45.93 | 40.94 |
| | w. KGAT | - | - | - | 51.61 | 34.97 | 41.69 | 58.99 | 39.23 | 47.13 |
| | w. KGAT (RP Init) | - | - | - | 60.10 | 33.33 | 42.88 | 66.38 | 36.84 | 47.38 |
| | w. KGAT (MLM Init) | - | - | - | 56.00 | 34.43 | 42.64 | 65.32 | 38.76 | 48.65 |
| | RoBERTa-Base | 43.73 | 60.93 | 50.91 | 42.72 | 36.89 | 39.59 | 44.50 | 46.41 | 45.43 |
| | w. KGAT | - | - | - | 61.05 | 31.69 | 41.73 | 68.87 | 34.93 | 46.35 |
| | w. KGAT (RP Init) | - | - | - | 61.19 | 36.61 | 45.81 | 67.48 | 39.71 | 50.00 |
| | w. KGAT (MLM Init) | - | - | - | 60.35 | 37.43 | 46.21 | 67.19 | 41.15 | 51.04 |
| | RoBERTa-Large | 43.73 | 60.93 | 50.91 | 50.00 | 47.81 | 48.88 | 53.15 | 56.46 | 54.76 |
| | w. KGAT | - | - | - | 62.87 | 40.71 | 49.42 | 72.39 | 46.41 | 56.56 |
| | w. KGAT (RP Init) | - | - | - | 73.47 | 39.34 | 51.25 | 83.33 | 43.06 | 56.78 |
| | w. KGAT (MLM Init) | - | - | - | 74.36 | 39.62 | 51.69 | 84.26 | 43.54 | 57.41 |

+ +Table 2: In-Domain Language Model Performance of Fact Extraction and Verification on Development Set. The "Retrieval" columns report evidence retrieval ranking accuracy (Prec./Rec./F1); the "Sentence" and "Abstract" columns report fact checking at the sentence and abstract levels. Model performance with SciBERT on both abstract retrieval and rationale selection scenarios is presented. For fact verification, the in-domain language modeling methods, MLM (Mask Language Model) and RP (Rationale Prediction), are evaluated with the state-of-the-art fact verification model KGAT (Liu et al., 2020; Ye et al., 2020).
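The MLM rows above come from continued masked-language-model training: random tokens are replaced with [MASK] and the model is trained to recover them. A minimal masking sketch follows; BERT's full recipe also keeps or randomizes a fraction of the selected positions, which is omitted here:

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    """Randomly mask tokens for continued MLM training (simplified sketch of Sec. 3.2).

    Returns the masked sequence and a {position: original_token} map of
    the positions the model must recover.
    """
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            targets[i] = tok          # prediction target at this position
            masked.append(mask_token)
        else:
            masked.append(tok)
    return masked, targets
```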
| Example | Content |
|---|---|
| Claim | Basophils counteract disease development in patients with systemic lupus erythematosus (SLE). |
| Evidence 1 | ... basophils and IgE autoantibodies amplify autoantibody production that leads to lupus nephritis ... |
| Evidence 2 | Individuals with SLE also have elevated serum IgE, self-reactive IgEs and activated basophils that ... |
| Predictions | SciKGAT: Contradict; RoBERTa: Not Enough Info |
| Claim | In adult tissue, most T cells are memory T cells. |
| Evidence 1 | Whereas adult tissues contain a predominance of memory T cells, in pediatric blood and tissues the main subset consists of naive recent thymic emigrants ... |
| Predictions | SciKGAT: Support; KGAT: Contradict |
+ +Table 3: Examples of Fact Verification. All models are implemented with RoBERTa (Large). The contents that can verify the given claim are emphasized. + +# 5.1 Overall Performance + +The overall performance of SciKGAT is shown in Table 1. The official baseline model uses TF-IDF for abstract retrieval and RoBERTa (Large) for rationale selection and fact verification, which is the previous state of the art. We add the modules of SciKGAT step by step to evaluate the model's effectiveness. + +SciKGAT (w. A) and SciKGAT (w. AR) show significant improvements over the baselines, which demonstrates that our literature search with an in-domain language model is effective in selecting related evidence at both the abstract and sentence levels. For fact verification, our SciKGAT improves pipeline performance, achieving a $30\%$ gain in label prediction precision. The high precision of fact verification demonstrates that our model has the ability to provide high-quality and convincing COVID-FACT verification results. + +# 5.2 In-Domain Effectiveness + +In this experiment, we evaluate the impacts of the in-domain language model on individual fact extraction and verification components of SciKGAT. + +As shown in Table 2, we first compare SciBERT and SciBERT-MLM on the abstract retrieval and rationale selection tasks. Then we fix the selected evidence and evaluate the reasoning ability of the fact verification module, using two kinds of in-domain language models, the MLM model (mask language model training) and the RP model (rationale prediction training), with three BERT variants. + +For abstract retrieval and rationale selection, SciBERT-MLM shows better ranking accuracy than SciBERT, and consequently better fact verification results. This demonstrates that the mask language model learns specific medical domain knowledge from the latest COVID related papers and benefits our evidence selection components through continuous training.
+ +Then we evaluate the effectiveness of in-domain language models on fact verification with various BERT based models. Our in-domain language models significantly improve fact verification performance and illustrate their stronger reasoning ability compared to vanilla pre-trained language models. Compared to the RP model, the MLM model usually achieves better performance. Importantly, the MLM model does not rely on annotated data, providing a general solution for COVID-related tasks. The consistent improvement on all BERT variants further manifests the robustness of our model. + +# 5.3 Case Study + +As shown in Table 3, two examples from the development set are used to illustrate SciKGAT's effectiveness for fact verification. + +In the first example, both evidence 1 and evidence 2 indicate that basophils can lead to systemic lupus erythematosus, which contradicts the claim. The concatenation based model, RoBERTa, fails to verify the claim, while SciKGAT makes the right prediction. This demonstrates the effectiveness of KGAT's fine-grained reasoning with multiple evidence pieces. In the second example, the evidence piece indicates that memory $T$ cells constitute the majority of $T$ cells in adults. SciKGAT predicts the claim label correctly, recognizing and comprehending these medical phrases thanks to in-domain language modeling. + +# 6 Conclusion + +This paper presents in-domain language modeling methods for open domain fact extraction and verification, which transfer domain knowledge for the COVID-FACT checking task. Our experiments show that our pipeline significantly improves fact-checking performance over the state-of-the-art model by more than $30\%$ absolute precision. Our analyses illustrate that our model has stronger reasoning ability with continuous training and benefits from COVID related knowledge.
+ +# Acknowledgments + +This work is supported by the National Key R&D Program of China (2020AAA0105200), Beijing Academy of Artificial Intelligence (BAAI) and the NExT++ project from the National Research Foundation, Prime Minister's Office, Singapore under its IRC@Singapore Funding Initiative. + +# References + +Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al. 2016. MS MARCO: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268. +Iz Beltagy, Arman Cohan, and Kyle Lo. 2019. SciBERT: Pretrained contextualized embeddings for scientific text. arXiv preprint arXiv:1903.10676. +Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open-domain questions. In Proceedings of ACL, pages 1870-1879. +Matteo Cinelli, Walter Quattrociocchi, Alessandro Galeazzi, Carlo Michele Valensise, Emanuele Brugnoli, Ana Lucia Schmidt, Paola Zola, Fabiana Zollo, and Antonio Scala. 2020. The COVID-19 social media infodemic. arXiv preprint arXiv:2003.05004. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL, pages 4171-4186. +Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of ACL, pages 8342-8360. +Tamanna Hossain, Robert L Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sameer Singh, and Sean Young. 2020. Detecting COVID-19 misinformation on social media. +Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240. +Tianda Li, Xiaodan Zhu, Quan Liu, Qian Chen, Zhi-gang Chen, and Si Wei.
2019. Several experiments on investigating pretraining and knowledge-enhanced models for natural language inference. arXiv preprint arXiv:1904.12104. +Yunyao Li, Tyrone Grandison, Patricia Silveyra, Ali Douraghy, Xinyu Guan, Thomas Kieselbach, Chengkai Li, and Haiqi Zhang. 2020. Jennifer for COVID-19: An nlp-powered chatbot built for the people and by the people to combat misinformation. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692. + +Zhenghao Liu, Chenyan Xiong, Maosong Sun, and Zhiyuan Liu. 2020. Fine-grained fact verification with kernel graph attention network. In Proceedings of ACL, pages 7342-7351. +Sean MacAvaney, Arman Cohan, and Nazli Goharian. 2020. Sledge: A simple yet effective baseline for coronavirus scientific knowledge search. arXiv preprint arXiv:2005.02365. +Yixin Nie, Haonan Chen, and Mohit Bansal. 2019. Combining fact extraction and verification with neural semantic matching networks. In Proceedings of AAAI, pages 6859-6866. +Juan Carlos Medina Serrano, Orestis Papakyriakopoulos, and Simon Hegelich. 2020. Nlp-based feature extraction for the detection of Covid-19 misinformation videos on youtube. +Amir Soleimani, Christof Monz, and Marcel Worring. 2019. BERT for evidence retrieval and claim verification. arXiv preprint arXiv:1910.02655. +James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. + +FEVER: a large-scale dataset for fact extraction and VERIFICATION. In Proceedings of NAACL, pages 809-819. +David Wadden, Kyle Lo, Lucy Lu Wang, Shanchuan Lin, Madeleine van Zuylen, Arman Cohan, and Hannaneh Hajishirzi. 2020. Fact or fiction: Verifying scientific claims. arXiv preprint arXiv:2004.14974. 
+Lucy Lu Wang, Kyle Lo, Yoganand Chandrasekhar, Russell Reas, Jiangjiang Yang, Darrin Eide, Kathryn Funk, Rodney Kinney, Ziyang Liu, William Merrill, et al. 2020. Cord-19: The Covid-19 open research dataset. ArXiv. +Deming Ye, Yankai Lin, Jiaju Du, Zhenghao Liu, Maosong Sun, and Zhiyuan Liu. 2020. Coreferential reasoning learning for language representation. arXiv preprint arXiv:2004.06870. +Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2019. GEAR: Graph-based evidence aggregating and reasoning for fact verification. In Proceedings of ACL, pages 892-901. \ No newline at end of file diff --git a/adaptingopendomainfactextractionandverificationtocovidfactthroughindomainlanguagemodeling/images.zip b/adaptingopendomainfactextractionandverificationtocovidfactthroughindomainlanguagemodeling/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..1c8e738c9ddaea01d6c11409628007bb7307535c --- /dev/null +++ b/adaptingopendomainfactextractionandverificationtocovidfactthroughindomainlanguagemodeling/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b54985c21bfb25b3509716a83086789de1d374723120fecac75341cb1e40ab8 +size 296068 diff --git a/adaptingopendomainfactextractionandverificationtocovidfactthroughindomainlanguagemodeling/layout.json b/adaptingopendomainfactextractionandverificationtocovidfactthroughindomainlanguagemodeling/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..a1c8af6e4bdff5a392ced323f230bc0a539674a2 --- /dev/null +++ b/adaptingopendomainfactextractionandverificationtocovidfactthroughindomainlanguagemodeling/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:427cd89f2a42e9f1b562f0e4891ac5019f8bea06a9159cbaabe70a87f459006b +size 229697 diff --git a/adaptivefeatureselectionforendtoendspeechtranslation/5bf6ca32-d4ab-4b9a-a437-901001d02a84_content_list.json 
b/adaptivefeatureselectionforendtoendspeechtranslation/5bf6ca32-d4ab-4b9a-a437-901001d02a84_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..7a0dc3120af23a6f998341ef66886a6f45869e11 --- /dev/null +++ b/adaptivefeatureselectionforendtoendspeechtranslation/5bf6ca32-d4ab-4b9a-a437-901001d02a84_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d5874eb7dc32de9c90484cfeb83d9f9d003e9b9ef20fe350abad9c6241263253 +size 86275 diff --git a/adaptivefeatureselectionforendtoendspeechtranslation/5bf6ca32-d4ab-4b9a-a437-901001d02a84_model.json b/adaptivefeatureselectionforendtoendspeechtranslation/5bf6ca32-d4ab-4b9a-a437-901001d02a84_model.json new file mode 100644 index 0000000000000000000000000000000000000000..30bbf1dcda775e82f2ac38ac6f9a023b548a22c1 --- /dev/null +++ b/adaptivefeatureselectionforendtoendspeechtranslation/5bf6ca32-d4ab-4b9a-a437-901001d02a84_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2f4da1cef2ad6a166ae7c77aada469c3fd732252c374456d62ae64be55835df +size 105285 diff --git a/adaptivefeatureselectionforendtoendspeechtranslation/5bf6ca32-d4ab-4b9a-a437-901001d02a84_origin.pdf b/adaptivefeatureselectionforendtoendspeechtranslation/5bf6ca32-d4ab-4b9a-a437-901001d02a84_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c7792cb35e77b412fcc15451fa79c288aeadf982 --- /dev/null +++ b/adaptivefeatureselectionforendtoendspeechtranslation/5bf6ca32-d4ab-4b9a-a437-901001d02a84_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:049bbe968173f7c3d34b8965cd89a75742933e8de19620424d9afe5494115492 +size 1334680 diff --git a/adaptivefeatureselectionforendtoendspeechtranslation/full.md b/adaptivefeatureselectionforendtoendspeechtranslation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..e2e363443e04880d58d67a39abb5b21d4d48cf9e --- /dev/null +++ 
b/adaptivefeatureselectionforendtoendspeechtranslation/full.md @@ -0,0 +1,365 @@ +# Adaptive Feature Selection for End-to-End Speech Translation + +Biao Zhang $^{1}$ Ivan Titov $^{1,2}$ Barry Haddow $^{1}$ Rico Sennrich $^{3,1}$ + +$^{1}$ School of Informatics, University of Edinburgh + +$^{2}$ ILLC, University of Amsterdam + +$^{3}$ Department of Computational Linguistics, University of Zurich + +B.Zhang@ed.ac.uk, {ititov,bhaddow}@inf.ed.ac.uk, sennrich@cl.uzh.ch + +# Abstract + +Information in speech signals is not evenly distributed, making it an additional challenge for end-to-end (E2E) speech translation (ST) to learn to focus on informative features. In this paper, we propose adaptive feature selection (AFS) for encoder-decoder based E2E ST. We first pre-train an ASR encoder and apply AFS to dynamically estimate the importance of each encoded speech feature to ASR. An ST encoder, stacked on top of the ASR encoder, then receives the filtered features from the (frozen) ASR encoder. We take $\mathcal{L}_0\mathrm{DROP}$ (Zhang et al., 2020) as the backbone for AFS, and adapt it to sparsify speech features with respect to both temporal and feature dimensions. Results on LibriSpeech En-Fr and MuST-C benchmarks show that AFS facilitates learning of ST by pruning out $\sim 84\%$ of temporal features, yielding an average translation gain of $\sim 1.3 - 1.6$ BLEU and a decoding speedup of $\sim 1.4\times$ . In particular, AFS reduces the performance gap compared to the cascade baseline, and outperforms it on LibriSpeech En-Fr with a BLEU score of 18.56 (without data augmentation). + +# 1 Introduction + +End-to-end (E2E) speech translation (ST), a paradigm that directly maps audio to a foreign text, has been gaining popularity recently (Duong et al., 2016; Bérard et al., 2016; Bansal et al., 2018; Di Gangi et al., 2019; Wang et al., 2019).
Based on the attentional encoder-decoder framework (Bahdanau et al., 2015), it optimizes model parameters under direct translation supervision. This end-to-end paradigm avoids the problem of error propagation that is inherent in cascade models, where an automatic speech recognition (ASR) model and a machine translation (MT) model are chained together. Nonetheless, previous work still reports that E2E ST delivers inferior performance compared to cascade methods (Niehues et al., 2019). + +![](images/c9bda06570798e90a6dd2254253fc28ae7bfa0eafb422ebadc64d94ccd80ede9.jpg) + +![](images/663d3f4ffcae78c99d701b8ce5e6017f342284ef3e184cac060a08c3b6557127.jpg) +Figure 1: Example illustrating our motivation. We plot the amplitude and frequency spectrum of an audio segment (top), paired with its time-aligned words and phonemes (bottom). Information inside an audio stream is not uniformly distributed. We propose to dynamically capture speech features corresponding to informative signals (red rectangles) to improve ST. + +We study one reason for the difficulty of training E2E ST models, namely the uneven spread of information in the speech signal, as visualized in Figure 1, and the consequent difficulty of extracting informative features. Features corresponding to uninformative signals, such as pauses or noise, increase the input length and bring in unmanageable noise for ST. This increases the difficulty of learning (Zhang et al., 2019b; Na et al., 2019) and reduces translation performance. + +In this paper, we propose adaptive feature selection (AFS) for ST to explicitly eliminate uninformative features. Figure 2 shows the overall architecture. We employ a pretrained ASR encoder to induce contextual speech features, followed by an ST encoder bridging the gap between speech and translation modalities. AFS is inserted in-between them to select a subset of features for ST encoding (see red rectangles in Figure 1).
To ensure that the selected features are well-aligned to transcriptions, we pretrain AFS on ASR. AFS estimates the informativeness of each feature through a parameterized gate, and encourages the dropping of features (pushing the gate to 0) that contribute little to ASR. An underlying assumption is that features irrelevant for ASR are also unimportant for ST. + +![](images/99d36af533b2360398d8f815536de774763205dc5f54690483f9e80bb196acf6.jpg) +Figure 2: Overview of our E2E ST model. AFS is inserted between the ST encoder (blue) and a pretrained ASR encoder (gray) to filter speech features for translation. We pretrain AFS jointly with ASR and freeze it during ST training. + +We base AFS on $\mathcal{L}_0\mathrm{DROP}$ (Zhang et al., 2020), a sparsity-inducing method for encoder-decoder models, and extend it to sparsify speech features. The acoustic input of speech signals involves two dimensions: temporal and feature, where the latter describes the spectrum extracted from time frames. Accordingly, we adapt $\mathcal{L}_0\mathrm{DROP}$ to sparsify encoder states along the temporal and feature dimensions, but using different gating networks. In contrast to Zhang et al. (2020), who focus on efficiency and report a trade-off between sparsity and quality for MT and summarization, we find that sparsity also improves translation quality for ST. + +We conduct extensive experiments with Transformer (Vaswani et al., 2017) on LibriSpeech En-Fr and MuST-C speech translation tasks, covering 8 different language pairs. Results show that AFS retains only about $16\%$ of the temporal speech features, revealing heavy redundancy in speech encodings and yielding a decoding speedup of $\sim 1.4\times$ . AFS eases model convergence, and improves translation quality by $\sim 1.3 - 1.6$ BLEU, surpassing several strong baselines.
Specifically, without data augmentation, AFS narrows the performance gap against the cascade approach, and outperforms it on LibriSpeech En-Fr by 0.29 BLEU, reaching 18.56. We compare against fixed-rate feature selection and a simple CNN, confirming that our adaptive feature selection offers better translation quality. + +Our work demonstrates that E2E ST suffers from redundant speech features, with sparsification bringing significant performance improvements. The E2E ST task offers new opportunities for follow-up research in sparse models to deliver performance gains, apart from enhancing efficiency and/or interpretability. + +# 2 Background: $\mathcal{L}_0$ DROP + +$\mathcal{L}_0$ DROP provides a selective mechanism for encoder-decoder models which encourages removing uninformative encoder outputs via a sparsity-inducing objective (Zhang et al., 2020). Given a source sequence $X = \{x_{1},x_{2},\ldots ,x_{n}\}$ , $\mathcal{L}_0$ DROP assigns each encoded source state $\mathbf{x}_i\in \mathbb{R}^d$ a scalar gate $g_{i}\in [0,1]$ as follows: + +$$ +\mathcal{L}_0\mathrm{DROP}(\mathbf{x}_i) = g_i \mathbf{x}_i, \tag{1} +$$ + +$$ +\text{with } g_i \sim \mathrm{HardConcrete}(\alpha_i, \beta, \epsilon), \tag{2} +$$ + +where $\alpha_{i},\beta ,\epsilon$ are hyperparameters of the hard concrete distribution (HardConcrete) (Louizos et al., 2018). + +Note that the hyperparameter $\alpha_{i}$ is crucial to HardConcrete as it directly governs its shape. We associate $\alpha_{i}$ with $\mathbf{x}_i$ through a gating network: + +$$ +\log \alpha_i = \mathbf{x}_i^T \cdot \mathbf{w}, \tag{3} +$$ + +where $\mathbf{w}\in \mathbb{R}^d$ is a trainable parameter. Thus, $\mathcal{L}_0\mathrm{DROP}$ can schedule HardConcrete via $\alpha_{i}$ to put more probability mass at either 0 (i.e. $g_{i}\rightarrow 0$) or 1 (i.e. $g_{i}\to 1$).
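Concretely, the gating of Eqs. 1-3 can be sketched in NumPy. This is a minimal illustration rather than the authors' implementation: it assumes the HardConcrete parameterization of Louizos et al. (2018) with stretch interval $(\epsilon, 1-\epsilon)$, and the helper names (`hard_concrete_sample`, `prob_nonzero`, `l0_drop`) are ours.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hard_concrete_sample(log_alpha, beta=2/3, eps=-0.1, rng=None):
    """g ~ HardConcrete(alpha, beta, eps): a binary Concrete sample,
    stretched to (eps, 1 - eps) and clipped to [0, 1] (Eq. 2)."""
    rng = np.random.default_rng(0) if rng is None else rng
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=np.shape(log_alpha))
    s = sigmoid((np.log(u) - np.log(1.0 - u) + log_alpha) / beta)
    return np.clip(s * (1.0 - 2.0 * eps) + eps, 0.0, 1.0)

def prob_nonzero(log_alpha, beta=2/3, eps=-0.1):
    """Closed form for 1 - p(g = 0), the per-gate term of the sparsity penalty."""
    return sigmoid(log_alpha - beta * np.log(-eps / (1.0 - eps)))

def l0_drop(X, w, rng=None):
    """Gate each encoder state x_i (Eq. 1) with log alpha_i = x_i^T w (Eq. 3)."""
    log_alpha = X @ w
    g = hard_concrete_sample(log_alpha, rng=rng)
    return g[:, None] * X, g

# Toy example: 5 encoder states of dimension 8.
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 8))
w = rng.normal(size=8)
gated, g = l0_drop(X, w, rng=rng)
assert gated.shape == X.shape and np.all((g >= 0.0) & (g <= 1.0))
```

A very negative $\log \alpha_i$ drives the clipped sample to exactly 0 (the state is removed), while a very positive one drives it to exactly 1 (the state is kept unchanged).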
Intuitively, $\mathcal{L}_0\mathrm{DROP}$ controls the openness of the gate $g_{i}$ via $\alpha_{i}$ so as to determine whether to remove $(g_{i} = 0)$ or retain $(g_{i} = 1)$ the state $\mathbf{x}_i$ . + +$\mathcal{L}_0$ DROP enforces sparsity by pushing the probability mass of HardConcrete towards 0, according to the following penalty term: + +$$ +\mathcal{L}_0(X) = \sum_{i=1}^{n} 1 - p\left(g_i = 0 \mid \alpha_i, \beta, \epsilon\right). \tag{4} +$$ + +By sampling $g_{i}$ with reparameterization (Kingma and Welling, 2013), $\mathcal{L}_0$ DROP is fully differentiable and optimized with an upper bound on the objective: $\mathcal{L}_{\mathrm{MLE}} + \lambda \mathcal{L}_0(X)$ , where $\lambda$ is a hyperparameter affecting the degree of sparsity – a larger $\lambda$ enforces more gates near 0 – and $\mathcal{L}_{\mathrm{MLE}}$ denotes the maximum likelihood loss. An estimate of the expected value of $g_{i}$ is used during inference. Zhang et al. (2020) applied $\mathcal{L}_0$ DROP to prune encoder outputs for MT and summarization tasks; we adapt it to E2E ST. Sparse stochastic gates and $\mathcal{L}_0$ relaxations were also used by Bastings et al. (2019) to construct interpretable classifiers, i.e. models that can reveal which tokens they rely on when making a prediction. + +# 3 Adaptive Feature Selection + +One difficulty with applying encoder-decoder models to E2E ST is deciding how to encode speech signals. In contrast to text, where word boundaries can be easily identified, the spectrum features of speech are continuous, varying remarkably across different speakers for the same transcript. In addition, redundant information, like pauses in-between neighbouring words, can be of arbitrary duration at any position, as shown in Figure 1, while contributing little to translation. This increases the burden on, and occupies the capacity of, the ST encoder, leading to inferior performance (Duong et al., 2016; Bérard et al., 2016).
Rather than developing complex encoder architectures, we resort to feature selection to explicitly clear out those uninformative speech features. + +Figure 2 gives an overview of our model. We use a pretrained and frozen ASR encoder to extract contextual speech features, and collect the informative ones from them via AFS before transmission to the ST encoder. AFS drops pauses, noise and other uninformative features, and retains features that are relevant for ASR. We speculate that these retained features are also the most relevant for ST, and that the sparser representation simplifies the learning problem for ST, for example the learning of attention strength between encoder states and target language (sub)words. Given a training tuple (audio, source transcription, translation), denoted as $(X,Y,Z)$ respectively, we outline the overall framework below, including three steps: + +# E2E ST with AFS + +1. Train the ASR model with the following objective and model architecture until convergence: + +$$ +\mathcal{L}^{\mathrm{ASR}} = \eta \mathcal{L}_{\mathrm{MLE}}(Y|X) + \gamma \mathcal{L}_{\mathrm{CTC}}(Y|X), \tag{5} +$$ + +$$ +\mathcal{M}^{\mathrm{ASR}} = D^{\mathrm{ASR}}\left(Y, E^{\mathrm{ASR}}(X)\right). \tag{6} +$$ + +2. Finetune the ASR model with AFS for $m$ steps: + +$$ +\mathcal{L}^{\mathrm{AFS}} = \mathcal{L}_{\mathrm{MLE}}(Y|X) + \lambda \mathcal{L}_0(X), \tag{7} +$$ + +$$ +\mathcal{M}^{\mathrm{AFS}} = D^{\mathrm{ASR}}\left(Y, F\left(E^{\mathrm{ASR}}(X)\right)\right). \tag{8} +$$ + +3. Train the ST model with the pretrained and frozen ASR and AFS submodules until convergence: + +$$ +\mathcal{L}^{\mathrm{ST}} = \mathcal{L}_{\mathrm{MLE}}(Z|X), \tag{9} +$$ + +$$ +\mathcal{M}^{\mathrm{ST}} = D^{\mathrm{ST}}\left(Z, E^{\mathrm{ST}}\left(\overline{FE^{\mathrm{ASR}}}(X)\right)\right).
\tag{10} +$$ + +We handle both ASR and ST as sequence-to-sequence problems with encoder-decoder models. We use $E^{*}(\cdot)$ and $D^{*}(\cdot, \cdot)$ to denote the corresponding encoder and decoder, respectively. $F(\cdot)$ denotes the AFS approach, and $\overline{FE}$ means freezing the ASR encoder and the AFS module during training. Note that our framework puts no constraint on the architecture of the encoder and decoder in any task, although we adopt the multi-head dot-product attention network (Vaswani et al., 2017) for our experiments. + +ASR Pretraining The ASR model $\mathcal{M}^{\mathrm{ASR}}$ (Eq. 6) directly maps an audio input to its transcription. To improve speech encoding, we apply a logarithmic penalty on attention to enforce short-range dependency (Di Gangi et al., 2019) and use trainable positional embeddings with a maximum length of 2048. Apart from $\mathcal{L}_{\mathrm{MLE}}$ , we augment the training objective with the connectionist temporal classification (Graves et al., 2006, CTC) loss $\mathcal{L}_{\mathrm{CTC}}$ as in Eq. 5. Note that $\eta = 1 - \gamma$ . The CTC loss is applied to the encoder outputs, guiding them to align with their corresponding transcription (sub)words and improving the encoder's robustness (Karita et al., 2019). Following previous work (Karita et al., 2019; Wang et al., 2020), we set $\gamma$ to 0.3. + +AFS Finetuning This stage aims at using AFS to dynamically pick out the subset of ASR encoder outputs that are most relevant for ASR performance (see red rectangles in Figure 1). We follow Zhang et al. (2020) and place AFS in-between the ASR encoder and decoder during finetuning (see $F(\cdot)$ in $\mathcal{M}^{\mathrm{AFS}}$ , Eq. 8). We exclude the CTC loss from the training objective (Eq. 7) to relax the alignment constraint and increase the flexibility of feature adaptation. We use $\mathcal{L}_0$ DROP for AFS in two ways.
+ +$\mathbf{AFS}^t$ The direct application of $\mathcal{L}_0$ DROP to the ASR encoder results in $\mathrm{AFS}^t$ , sparsifying encodings along the temporal dimension $\{\mathbf{x}_i\}_{i=1}^n$ : + +$$ +F^t(\mathbf{x}_i) = \mathrm{AFS}^t(\mathbf{x}_i) = g_i^t \mathbf{x}_i, +$$ + +$$ +\text{with } \log \alpha_i^t = \mathbf{x}_i^T \cdot \mathbf{w}^t, \tag{11} +$$ + +$$ +g_i^t \sim \mathrm{HardConcrete}\left(\alpha_i^t, \beta, \epsilon\right), +$$ + +where $\alpha_{i}^{t}$ is a positive scalar powered by a simple linear gating layer, and $\mathbf{w}^t\in \mathbb{R}^d$ is a trainable parameter of dimension $d$ . $\mathbf{g}^t$ is the temporal gate. The sparsity penalty of $\mathrm{AFS}^t$ follows Eq. 4: + +$$ +\mathcal{L}_0^t(X) = \sum_{i=1}^{n} 1 - p\left(g_i^t = 0 \mid \alpha_i^t, \beta, \epsilon\right). \tag{12} +$$ + +$\mathbf{AFS}^{t,f}$ In contrast to text processing, speech processing often extracts the spectrum from overlapping time frames to form the acoustic input, similar to a word embedding. As each encoded speech feature contains temporal information, it is reasonable to extend $\mathrm{AFS}^t$ to $\mathrm{AFS}^{t,f}$ , including sparsification along the feature dimension $\{\mathbf{x}_{i,j}\}_{j = 1}^{d}$ : + +$$ +F^{t,f}\left(\mathbf{x}_i\right) = \mathrm{AFS}^{t,f}\left(\mathbf{x}_i\right) = g_i^t \mathbf{x}_i \odot \mathbf{g}^f, +$$ + +$$ +\text{with } \log \alpha^f = \mathbf{w}^f, \tag{13} +$$ + +$$ +g_j^f \sim \mathrm{HardConcrete}\left(\alpha_j^f, \beta, \epsilon\right), +$$ + +where $\alpha^f\in \mathbb{R}^d$ estimates the weights of each feature, produced by an input-independent gating model with trainable parameter $\mathbf{w}^f\in \mathbb{R}^d$ . $\mathbf{g}^f$ is the feature gate. Note that $\alpha^f$ is shared across all time steps. $\odot$ denotes element-wise multiplication.
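At inference time, $\mathcal{L}_0$DROP replaces sampling with an estimate of each gate's expected value. Under that deterministic view, the combination of temporal and feature gates in $\mathrm{AFS}^{t,f}$ (Eq. 13) can be sketched as below; the helper names (`gate`, `afs_tf`) are ours, and the expected-gate formula assumes the stretch interval $(\epsilon, 1-\epsilon)$ of Louizos et al. (2018).

```python
import numpy as np

def gate(log_alpha, eps=-0.1):
    """Deterministic inference-time estimate of a HardConcrete gate:
    the sigmoid mean, stretched to (eps, 1 - eps) and clipped to [0, 1]."""
    s = 1.0 / (1.0 + np.exp(-log_alpha))
    return np.clip(s * (1.0 - 2.0 * eps) + eps, 0.0, 1.0)

def afs_tf(X, w_t, w_f):
    """AFS^{t,f}: per-position temporal gates g^t (log alpha_i^t = x_i^T w^t)
    and one input-independent, shared feature gate g^f (log alpha^f = w^f)."""
    g_t = gate(X @ w_t)            # shape (n,), one gate per time step
    g_f = gate(w_f)                # shape (d,), shared across time steps
    Y = (g_t[:, None] * X) * g_f   # g_i^t * x_i, element-wise scaled by g^f
    keep = g_t > 0                 # only these positions reach the ST encoder
    return Y[keep], g_t, g_f

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))        # 6 encoder states of dimension 4
w_t, w_f = rng.normal(size=4), rng.normal(size=4)
Y, g_t, g_f = afs_tf(X, w_t, w_f)
# Temporal sparsity rate as used in Section 4: #{g_i^t = 0} / n
sparsity = float(np.mean(g_t == 0.0))
assert Y.shape[0] == int(np.sum(g_t > 0))
```

Training additionally needs the stochastic sampling and the penalty of Eq. 14; this sketch only shows how the two gates combine and how pruned positions shrink the sequence handed to the ST encoder.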
$\mathrm{AFS}^{t,f}$ reuses the $g_i^t$ -relevant submodules in Eq. 11, and extends the sparsity penalty $\mathcal{L}_0^t$ in Eq. 12 as follows: + +$$ +\mathcal{L}_0^{t,f}(X) = \mathcal{L}_0^t(X) + \sum_{j=1}^{d} 1 - p\left(g_j^f = 0 \mid \alpha_j^f, \beta, \epsilon\right). \tag{14} +$$ + +We perform the finetuning by replacing $(F, \mathcal{L}_0)$ in Eqs. (7-8) with either $\mathrm{AFS}^t$ $(F^t, \mathcal{L}_0^t)$ or $\mathrm{AFS}^{t,f}$ $(F^{t,f}, \mathcal{L}_0^{t,f})$ for an extra $m$ steps. We compare these two variants in our experiments. + +E2E ST Training We treat the pretrained ASR and AFS models as a speech feature extractor, and freeze them during ST training. We gather the speech features emitted by the ASR encoder that correspond to $g_{i}^{t} > 0$ , and pass them, similarly to word embeddings, to the ST encoder. We employ sinusoidal positional encoding to distinguish features at different positions. Except for the input to the ST encoder, our E2E ST follows the standard encoder-decoder translation model ( $\mathcal{M}^{\mathrm{ST}}$ in Eq. 10) and is optimized with $\mathcal{L}_{\mathrm{MLE}}$ alone as in Eq. 9. Intuitively, AFS bridges the gap between ASR output and MT input by selecting transcript-aligned speech features. + +# 4 Experiments + +Datasets and Preprocessing We experiment with two benchmarks: the Augmented LibriSpeech dataset (LibriSpeech En-Fr) (Kocabiyikoglu et al., 2018) and the multilingual MuST-C dataset (MuST-C) (Di Gangi et al., 2019). LibriSpeech En-Fr is
MuST-C is built from English TED talks, covering 8 translation directions: English to German (De), Spanish (Es), French (Fr), Italian (It), Dutch (Nl), Portuguese (Pt), Romanian (Ro) and Russian (Ru). We train ASR and ST models on the given training set, containing $\sim 452$ hours with $\sim 252\mathrm{K}$ utterances on average for each translation pair. We adopt the given dev set for model selection and report results on the common test set, whose size ranges from 2502 (Es) to 2641 (De) utterances. + +For all datasets, we extract 40-dimensional log-Mel filterbanks with a step size of 10ms and window size of 25ms as the acoustic features. We expand these features with their first and second-order derivatives, and stabilize them using mean subtraction and variance normalization. We stack the features corresponding to three consecutive frames without overlapping to the left, resulting in the final 360-dimensional acoustic input. For transcriptions and translations, we tokenize and truecase all the text using Moses scripts (Koehn et al., 2007). We train subword models (Sennrich et al., 2016) on each dataset with a joint vocabulary size of 16K to handle rare words, and share the model for ASR, MT and ST. We train all models without removing punctuation. + +Model Settings and Baselines We adopt the Transformer architecture (Vaswani et al., 2017) for all tasks, including $\mathcal{M}^{\mathrm{ASR}}$ (Eq. 6), $\mathcal{M}^{\mathrm{AFS}}$ (Eq. 8) and $\mathcal{M}^{\mathrm{ST}}$ (Eq. 10). The encoder and decoder consist of 6 identical layers, each including a self-attention sublayer, a cross-attention sublayer (decoder alone) and a feedforward sublayer. We employ the base setting for experiments: hidden size $d = 512$ , attention head 8 and feedforward size 2048. We schedule learning rate via Adam $(\beta_{1} = 0.9, \beta_{2} = 0.98)$ (Kingma and Ba, 2015), paired with a warmup step of $4\mathrm{K}$ . 
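The schedule formula is not spelled out here, but an Adam learning rate "paired with a warmup step of 4K" usually refers to the inverse-square-root schedule of Vaswani et al. (2017); a sketch under that assumption (the exact scaling the authors used is not stated):

```python
def transformer_lr(step, d_model=512, warmup=4000):
    """Inverse-square-root learning-rate schedule (Vaswani et al., 2017):
    linear warmup for `warmup` steps, then decay proportional to step^-0.5."""
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)

# The rate rises during warmup, peaks at the warmup boundary, then decays.
assert transformer_lr(100) < transformer_lr(4000)
assert transformer_lr(30000) < transformer_lr(4000)
```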
We apply dropout to attention weights and residual connections with rates of 0.1 and 0.2, respectively, and also add label smoothing of 0.1 to handle overfitting. We train all models with a maximum step size of $30\mathrm{K}$ and a minibatch size of around 25K target subwords. We average the last 5 checkpoints for evaluation. We use beam search for decoding, and set the beam size and length penalty to 4 and 0.6, respectively. We set $\epsilon = -0.1$ and $\beta = 2/3$ for AFS following Louizos et al. (2018), and finetune AFS for an additional $m = 5\mathrm{K}$ steps. We evaluate translation quality with tokenized case-sensitive BLEU (Papineni et al., 2002), and report WER for ASR performance without punctuation. + +We compare our models with four baselines: + +**ST:** A vanilla Transformer-based E2E ST model of 6 encoder and decoder layers. The logarithmic attention penalty (Di Gangi et al., 2019) is used to improve the encoder. + +**ST + ASR-PT:** We perform ASR pretraining (ASR-PT) for E2E ST. This is the same model as ours (Figure 2) but without AFS finetuning. + +**Cascade:** We first transcribe the speech input using an ASR model, and then pass the results on to an MT model. We also use the logarithmic attention penalty (Di Gangi et al., 2019) for the ASR encoder. + +**ST + Fixed Rate:** Instead of dynamically selecting features, we replace AFS with subsampling at a fixed rate: we extract the speech encodings after every $k$ positions. + +Besides, we offer another baseline, **ST + CNN**, for comparison on MuST-C En-De: we replace the fixed-rate subsampling with a one-layer 1D depth-separable convolution, where the output dimension is set to 512, the kernel size over the temporal dimension is set to 5 and the stride is set to 6. In this way, the ASR encoder features are compressed to around $1/6$ of the features, a similar ratio to the fixed-rate subsampling. + +# 4.1 Results on MuST-C En-De + +We perform a thorough study on MuST-C En-De.
With AFS, the first question is its feasibility. We start by analyzing the degree of sparsity in speech features (i.e. the sparsity rate) yielded by AFS, focusing on the temporal sparsity rate $\# \{g_i^t = 0\} /n$ and the feature sparsity rate $\# \{g_j^f = 0\} /d$ . To obtain different rates, we vary the hyperparameter $\lambda$ in Eq. 7 in a range of [0.1, 0.8] with a step size of 0.1. + +Results in Figure 3 show that large amounts of encoded speech features $(>59\%)$ can easily be pruned out, revealing heavy inner-speech redundancy. Both $\mathrm{AFS}^t$ and $\mathrm{AFS}^{t,f}$ drop $\sim 60\%$ of temporal features with a $\lambda$ of 0.1, and this number increases to $>85\%$ when $\lambda \geq 0.5$ (Figure 3b), remarkably surpassing the sparsity rate reported by Zhang et al. (2020) on text summarization $(71.5\%)$ . + +![](images/4935d18061940defbf9cdec920b91d393442976941b17fc2463b1038d3fa9f5c.jpg) +(a) Feature Gate Value + +![](images/d5c085978a5cf626dca2b40521c0b01307b6e05689e14ada3bbab5add10235db.jpg) +(b) Temporal Sparsity Rate + +Figure 3: Feature gate value and temporal sparsity rate as a function of $\lambda$ on MuST-C En-De dev set. Larger $\lambda$ decreases the gate value of $\mathbf{g}^f$ but without dropping any neurons, i.e. a feature sparsity rate of $0\%$ . By contrast, speech features are highly redundant along the temporal dimension, easily inducing a high sparsity rate of $\sim 85\%$ . + +![](images/86b1cc1cd13ad8fc20a6495f69426f010b414cdfd9e76a5c6d94eaf230aa36bf.jpg) +(a) ASR + +![](images/9218d041715d347530ae166879cfe3687f46e50756ed43b9a636d316c1bff600.jpg) +(b) ST + +Figure 4: ASR (WER↓) and ST (BLEU↑) performance as a function of the temporal sparsity rate on MuST-C En-De dev set. Pruning out $\sim 85\%$ of temporal speech features largely improves translation quality and retains $\sim 95\%$ ASR accuracy.
In contrast to the rich temporal sparsification, we get a feature sparsity rate of 0, regardless of $\lambda$ 's value, although increasing $\lambda$ decreases $\mathbf{g}^f$ (Figure 3a). This suggests that selecting neurons from the feature dimension is harder. Rather than filtering neurons, the feature gate $\mathbf{g}^f$ acts more like a weighting mechanism on them. In the rest of the paper, we use sparsity rate to refer to the temporal sparsity rate. + +We continue to explore the impact of varied sparsity rates on ASR and ST performance. Figure 4 shows their correlation. We observe that AFS slightly degrades ASR accuracy (Figure 4a), but still retains $\sim 95\%$ accuracy on average; $\mathrm{AFS}^{t,f}$ often performs better than $\mathrm{AFS}^t$ at similar sparsity rates. The fact that only $15\%$ of the speech features successfully support $95\%$ of the ASR accuracy proves the informativeness of these selected features. These findings echo those of Zhang et al. (2020), who observe a trade-off between sparsity and quality. + +However, when AFS is applied to ST, we find consistent improvements in translation quality of $>0.8$ BLEU, as shown in Figure 4b. Translation quality on the development set peaks at 22.17 BLEU
| Model | BLEU↑ | Speedup↑ |
| --- | --- | --- |
| MT | 29.69 | - |
| Cascade | 22.52 | 1.06× |
| ST | 17.44 | 0.87× |
| ST + ASR-PT | 20.67 | 1.00× |
| ST + CNN | 20.64 | 1.31× |
| ST + Fixed Rate (k=6) | 21.14 (83.3%) | 1.42× |
| ST + Fixed Rate (k=7) | 20.87 (85.7%) | 1.43× |
| ST + AFS$^t$ | 21.57 (84.4%) | 1.38× |
| ST + AFS$^{t,f}$ | 22.38 (85.1%) | 1.37× |
+ +Table 1: BLEU↑ and speedup↑ on MuST-C En-De test set. $\lambda = 0.5$ . We evaluate the speedup on a GeForce GTX 1080 Ti with a decoding batch size of 16, and report average results over 3 runs. Numbers in parentheses are the sparsity rate. + +![](images/2d1c101105c6ff77df49d58c211ec4bc895a06f8c5365ec812b52f2cc75a65dd.jpg) +Figure 5: Impact of $k$ in fixed-rate subsampling on ST performance on MuST-C En-De test set. Sparsity rate: $(k-1)/k$ . This subsampling underperforms AFS, and degrades ST performance at suboptimal rates. + +achieved by $\mathrm{AFS}^{t,f}$ with a sparsity rate of $85.5\%$ . We set $\lambda = 0.5$ (corresponding to a sparsity rate of $\sim 85\%$ ) for all other experiments, since $\mathrm{AFS}^t$ and $\mathrm{AFS}^{t,f}$ reach their optimal result at this point. + +We summarize the test results in Table 1, where we set $k = 6$ or $k = 7$ for ST + Fixed Rate, with a sparsity rate of around $85\%$ , inspired by our above analysis. Our vanilla ST model yields a BLEU score of 17.44; pretraining on ASR further enhances the performance to 20.67, significantly outperforming the results of Di Gangi et al. (2019) by 3.37 BLEU. This also suggests the importance of speech encoder pretraining (Di Gangi et al., 2019; Stoian et al., 2020; Wang et al., 2020). We treat ST with ASR-PT as our real baseline. We observe improved translation quality with fixed-rate subsampling, $+0.47$ BLEU at $k = 6$ . Subsampling offers a chance to bypass noisy speech signals, and reducing the number of source states makes learning translation alignment easier, but deciding the optimal sampling rate is tough. Results in Figure 5 reveal that fixed-rate subsampling deteriorates ST performance at suboptimal rates. Replacing fixed-rate subsampling with our one-layer CNN also fails to improve over the baseline, although the CNN offers more flexibility in feature manipulation.
![](images/50538012e8f45cc3c07addfc38b147f67009cb695a41c29e5c79db29290a7bfd.jpg) +Figure 6: ST training curves (MuST-C En-De dev set). ASR pretraining significantly accelerates model convergence, and feature selection further stabilizes and improves training. $\lambda = 0.5$ , $k = 6$ . + +In contrast to fixed-rate subsampling, the proposed AFS is data-driven, shifting the decision burden to the data and model themselves. As a result, $\mathrm{AFS}^t$ and $\mathrm{AFS}^{t,f}$ surpass ASR-PT by 0.9 BLEU and 1.71 BLEU, respectively, substantially narrowing the performance gap compared to the cascade baseline (-0.14 BLEU). + +We also observe improved decoding speed: AFS runs $\sim 1.37\times$ faster than ASR-PT. Compared to the fixed-rate subsampling, AFS is slightly slower, which we ascribe to the overhead introduced by the gating module. Surprisingly, Table 1 shows that the vanilla ST runs slower than ASR-PT $(0.87\times)$ while the cascade model is slightly faster $(1.06\times)$ . By digging into the beam search algorithm, we discover that ASR pretraining shortens beam decoding: 94 steps for ASR-PT vs. 112 for vanilla ST (on average). The speedup brought by cascading is due to the smaller English vocabulary size compared to the German vocabulary when processing audio inputs. + +# 4.2 Why (Adaptive) Feature Selection? + +Apart from the benefits in translation quality, we go deeper to study other potential impacts of (adaptive) feature selection. We begin by inspecting training curves. Figure 6 shows that ASR pretraining improves model convergence; feature selection makes training more stable. Compared to other models, the curve of ST with AFS is much smoother, suggesting its better regularization effect. + +We then investigate the effect of training data size, and show the results in Figure 7. Overall, we do not observe higher data efficiency from feature selection in low-resource settings.
But instead, our results suggest that feature selection delivers larger performance improvements when more training data is available. With respect to data efficiency, ASR pretraining seems to be more important (Figure 7, left) (Bansal et al., 2019; Stoian et al., 2020). Compared to AFS, the fixed-rate subsampling suffers more from small-scale training: it yields worse performance than ASR-PT when the data size is $\leq 100\mathrm{K}$ , highlighting the better generalization of AFS. + +![](images/5043a193d149a3efd470c676f62fc8c678f970d580655a3762a09912a8ee2402.jpg) +Figure 7: BLEU as a function of training data size on MuST-C En-De. We split the original training data into five non-overlapping subsets, and train different models with accumulated subsets. Results are reported on the test set. Note that we perform ASR pretraining on the original dataset. $\lambda = 0.5$ , $k = 6$ . + +![](images/498ae34aa70f75bbb054f4ff99a1eaefedc214e0f69ebf88facd784ebc8b6bdb.jpg) +Figure 9: The number of selected features vs. word duration (left) and position (right) on MuST-C En-De test set. For word duration, we align the audio and its transcription with the Montreal Forced Aligner (McAuliffe et al., 2017), and collect each word's duration and its corresponding number of retained features. For position, we uniformly split each input into 50 pieces, and count the average number of retained features in each piece. $\lambda = 0.5$ , $k = 6$ . + +![](images/108ae947fc7abf3726ae2f9dc6dead835470c0788b3c9742e10f2e25df6c99dc.jpg) +Figure 8: Histogram of the cross-attention weights received per ST encoder output on MuST-C En-De test set. For each instance, we collect attention weights averaged over different heads and decoder layers following Zhang et al. (2020). A larger weight indicates a stronger impact of the encoder output on translation. Feature selection biases the distribution towards larger weights. $\lambda = 0.5$ , $k = 6$ . + +![](images/d3edd19d858ef5db3f26f1e82f5813f50bb9d75bea9912277fbaec0363e4ced9.jpg)
+ +In addition to model performance, we also look into the ST model itself, and focus on the cross-attention weights. Figure 8 visualizes the attention value distribution, where ST models with feature selection noticeably shift the distribution towards larger weights. This suggests that each ST encoder output exerts a greater influence on the translation. By removing redundant and noisy speech features, feature selection eases the learning of the ST encoder, and also enhances its connection strength with the ST decoder. This helps bridge the modality gap between speech and text translation. Although fixed-rate subsampling also delivers a distribution shift similar to AFS, its inferior ST performance compared to AFS corroborates the better quality of adaptively selected features. + +AFS vs. Fixed Rate We compare these two approaches by analyzing the number of retained features with respect to word duration and temporal position. Results in Figure 9a show that the underlying pattern behind these two methods is similar: words with longer duration correspond to more speech features. However, when it comes to temporal position, Figure 9b illustrates their difference: fixed-rate subsampling is context-independent, periodically picking up features, while AFS decides feature selection based on context information. The curve of AFS is smoother, indicating that features kept by AFS are more uniformly distributed across different positions, ensuring the features' informativeness. + +![](images/c43d6ea791805652f772474b8f874079d1016270f49611d918fe5fd53b595dd7.jpg) +(a) Duration Analysis + +![](images/4ba7a57f242c547a164bf53391daf169dbaaf4b3b87a4f60daa39c1cb8123e24.jpg) +(b) Position Analysis + +![](images/53cf0c11d4e65d8dd0f2beef2d12a3c54aced857ab88a1f8cd1881dc0a780981.jpg) +Figure 10: Illustration of the feature gate $\mathbf{g}^f$ with $\lambda = 0.5$ . + +$\mathbf{AFS}^t$ vs. $\mathbf{AFS}^{t,f}$ Their only difference lies in the feature gate $\mathbf{g}^f$ .
We visualize this gate in Figure 10. Although this gate induces no sparsification, it offers $\mathbf{AFS}^{t,f}$ the capability of adjusting the weight of each neuron. In other words, $\mathbf{AFS}^{t,f}$ has more freedom in manipulating speech features.

# 4.3 Results on MuST-C and LibriSpeech

Table 2 and Table 3 list the results on MuST-C and LibriSpeech En-Fr, respectively. Over all tasks, $\mathrm{AFS}^t/\mathrm{AFS}^{t,f}$ substantially outperforms ASR-PT by 1.34/1.60 average BLEU, pruning out $84.5\%$ of the temporal speech features on average and yielding an average decoding speedup of $1.45\times$. Our model narrows the gap against the cascade model to $-0.8$ average BLEU, and AFS surpasses Cascade on LibriSpeech En-Fr, without using KD (Liu et al.,
| Metric | Model | De | Es | Fr | It | Nl | Pt | Ro | Ru |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BLEU ↑ | Di Gangi et al. (2019) | 17.30 | 20.80 | 26.90 | 16.80 | 18.80 | 20.10 | 16.50 | 10.50 |
| | Transformer + ASR-PT* | 21.77 | 26.41 | 31.56 | 21.46 | 25.22 | 26.84 | 20.53 | 14.31 |
| | ST | 17.44 | 23.85 | 28.43 | 19.54 | 21.23 | 22.55 | 17.66 | 12.10 |
| | ST + ASR-PT | 20.67 | 25.96 | 32.24 | 20.84 | 23.27 | 24.83 | 19.94 | 13.96 |
| | Cascade | 22.52 | 27.92 | 34.53 | 24.02 | 26.74 | 27.57 | 22.61 | 16.13 |
| | ST + AFS$^t$ | 21.57 | 26.78 | 33.34 | 23.08 | 24.68 | 26.13 | 21.73 | 15.10 |
| | ST + AFS$^{t,f}$ | 22.38 | 27.04 | 33.43 | 23.35 | 25.05 | 26.55 | 21.87 | 14.92 |
| SacreBLEU ↑ | ST + AFS$^t$ | 21.6 | 26.6 | 31.5 | 22.6 | 24.6 | 25.9 | 20.8 | 14.9 |
| | ST + AFS$^{t,f}$ | 22.4 | 26.9 | 31.6 | 23.0 | 24.9 | 26.3 | 21.0 | 14.7 |
| Temporal Sparsity Rate | ST + AFS$^t$ | 84.4% | 84.5% | 83.2% | 84.9% | 84.4% | 84.4% | 84.7% | 84.2% |
| | ST + AFS$^{t,f}$ | 85.1% | 84.5% | 84.7% | 84.9% | 83.5% | 85.1% | 84.8% | 84.7% |
| Speedup ↑ | ST + AFS$^t$ | 1.38× | 1.35× | 1.50× | 1.34× | 1.54× | 1.43× | 1.59× | 1.31× |
| | ST + AFS$^{t,f}$ | 1.37× | 1.34× | 1.50× | 1.39× | 1.42× | 1.26× | 1.46× | 1.37× |

Table 2: Performance over 8 languages on the MuST-C dataset. *: results reported by the ESPnet toolkit (Watanabe et al., 2018), where the hyperparameters of beam search are tuned for each dataset.
| Metric | Model | En-Fr |
| --- | --- | --- |
| BLEU ↑ | Bérard et al. (2018) | 13.40 |
| | Watanabe et al. (2018) | 16.68 |
| | Liu et al. (2019a) | 17.02 |
| | Wang et al. (2019) | 17.05 |
| | Wang et al. (2020) | 17.66 |
| | ST | 14.32 |
| | ST + ASR-PT | 17.05 |
| | Cascade | 18.27 |
| | ST + AFS$^t$ | 18.33 |
| | ST + AFS$^{t,f}$ | 18.56 |
| SacreBLEU ↑ | ST + AFS$^t$ | 16.9 |
| | ST + AFS$^{t,f}$ | 17.2 |
| Temporal Sparsity Rate | ST + AFS$^t$ | 84.7% |
| | ST + AFS$^{t,f}$ | 83.5% |
| Speedup ↑ | ST + AFS$^t$ | 1.84× |
| | ST + AFS$^{t,f}$ | 1.78× |

Table 3: Performance on LibriSpeech En-Fr.

2019a) and data augmentation (Wang et al., 2020). Comparability to previous work is limited due to possible differences in tokenization and letter case. To ease future cross-paper comparison, we provide SacreBLEU (Post, 2018) scores for our models.

# 5 Related Work

Speech Translation Pioneering studies on ST used a cascade of separately trained ASR and MT systems (Ney, 1999). Despite its simplicity, this approach inevitably suffers from error propagation: mistakes made by the ASR model degrade the MT input. Research in this direction often focuses on strategies capable of mitigating the mismatch between ASR output and MT input, such as representing ASR outputs with lattices (Saleem et al., 2004; Mathias and Byrne, 2006; Zhang et al., 2019a; Beck et al., 2019), injecting synthetic ASR errors for robust MT (Tsvetkov et al., 2014; Cheng et al., 2018) and differentiable cascade modeling (Kano et al., 2017; Anastasopoulos and Chiang, 2018; Sperber et al., 2019).

In contrast to cascading, another option is to perform direct speech-to-text translation. Duong et al. (2016) and Bérard et al. (2016) employ the attentional encoder-decoder model (Bahdanau et al., 2015) for E2E ST without accessing any intermediate transcriptions. E2E ST opens the way to bridging the modality gap directly, but it is data-hungry, sample-inefficient and often underperforms cascade models, especially in low-resource settings (Bansal et al., 2018). This has led researchers to explore solutions ranging from efficient neural architecture design (Karita et al., 2019; Di Gangi et al., 2019; Sung et al., 2019) to the incorporation of extra training signals, including multi-task learning (Weiss et al., 2017; Liu et al., 2019b), submodule pretraining (Bansal et al., 2019; Stoian et al., 2020; Wang et al., 2020), knowledge distillation (Liu et al., 2019a), meta-learning (Indurthi et al., 2019) and data augmentation (Kocabiyikoglu et al., 2018; Jia et al., 2019; Pino et al., 2019).
Our work focuses on E2E ST, but we investigate feature selection, which has rarely been studied before.

Speech Feature Selection Encoding speech signals is challenging, as the acoustic input is lengthy, noisy and redundant. To ease model learning, previous work often selected features via downsampling (Di Gangi et al., 2019) and fixed-rate subsampling (Lu et al., 2015). Recently, Zhang et al. (2019b) and Na et al. (2019) proposed dynamic subsampling for ASR, which learns to skip uninformative features during recurrent encoding. Unfortunately, their methods are deeply embedded into recurrent networks and hard to adapt to other architectures like the Transformer (Vaswani et al., 2017). Recently, Salesky et al. (2020) have explored phoneme-level representations for E2E ST, but this requires nontrivial phoneme recognition and alignment.

Instead, we resort to sparsification techniques, which have recently achieved great success in NLP tasks (Correia et al., 2019; Child et al., 2019; Zhang et al., 2020). In particular, we employ $\mathcal{L}_0\mathrm{DROP}$ (Zhang et al., 2020) for AFS to dynamically retain informative speech features; it is fully differentiable and independent of the concrete encoder/decoder architecture. We extend $\mathcal{L}_0\mathrm{DROP}$ by handling both the temporal and feature dimensions with different gating networks, and apply it to E2E ST.

# 6 Conclusion and Future Work

In this paper, we propose adaptive feature selection for E2E ST to handle redundant and noisy speech signals. We insert AFS in between the ST encoder and a pretrained, frozen ASR encoder to filter out uninformative features contributing little to ASR. We base AFS on $\mathcal{L}_0\mathrm{DROP}$ (Zhang et al., 2020), and extend it to model both the temporal and feature dimensions. Results show that AFS improves translation quality and accelerates decoding by $\sim 1.4\times$ with an average temporal sparsity rate of $\sim 84\%$.
AFS successfully narrows or even closes the performance gap compared to cascading models.

While most previous work on sparsity in NLP demonstrates its benefits from efficiency and/or interpretability perspectives (Zhang et al., 2020), we show that sparsification in our scenario, E2E ST, leads to substantial performance gains.

In the future, we will work on adapting AFS to simultaneous speech translation.

# Acknowledgments

We would like to thank Shucong Zhang for his great support in building our ASR baselines. IT acknowledges support of the European Research Council (ERC Starting grant 678254) and the Dutch National Science Foundation (NWO VIDI 639.022.518). This work has received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No 825460 (ELITR). Rico Sennrich acknowledges support of the Swiss National Science Foundation (MUTAMUR; no. 176727).

# References

Antonios Anastasopoulos and David Chiang. 2018. Tied multitask learning for neural speech translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 82-91, New Orleans, Louisiana. Association for Computational Linguistics.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.

Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, and Sharon Goldwater. 2018. Low-resource speech-to-text translation. In Proc. Interspeech 2018, pages 1298-1302.

Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, and Sharon Goldwater. 2019. Pretraining on high-resource speech recognition improves low-resource speech-to-text translation.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 58-68, Minneapolis, Minnesota. Association for Computational Linguistics.

Jasmijn Bastings, Wilker Aziz, and Ivan Titov. 2019. Interpretable neural predictions with differentiable binary variables. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2963-2977, Florence, Italy. Association for Computational Linguistics.

Daniel Beck, Trevor Cohn, and Gholamreza Haffari. 2019. Neural speech translation using lattice transformations and graph networks. In Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13), pages 26-31, Hong Kong. Association for Computational Linguistics.

Alexandre Bérard, Laurent Besacier, Ali Can Kocabiyikoglu, and Olivier Pietquin. 2018. End-to-end automatic speech translation of audiobooks. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6224-6228. IEEE.

Alexandre Bérard, Olivier Pietquin, Christophe Servan, and Laurent Besacier. 2016. Listen and translate: A proof of concept for end-to-end speech-to-text translation. In NIPS Workshop on End-to-end Learning for Speech and Audio Processing, Barcelona, Spain.

Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, and Yang Liu. 2018. Towards robust neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1756-1766, Melbourne, Australia. Association for Computational Linguistics.

Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509.

Gonçalo M. Correia, Vlad Niculae, and André F. T. Martins. 2019. Adaptively sparse transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2174-2184, Hong Kong, China. Association for Computational Linguistics.

Mattia A. Di Gangi, Roldano Cattoni, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2019. MuST-C: a Multilingual Speech Translation Corpus. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2012-2017, Minneapolis, Minnesota. Association for Computational Linguistics.

Mattia A. Di Gangi, Matteo Negri, and Marco Turchi. 2019. Adapting Transformer to end-to-end spoken language translation. In Proc. Interspeech 2019, pages 1133-1137.

Long Duong, Antonios Anastasopoulos, David Chiang, Steven Bird, and Trevor Cohn. 2016. An attentional model for speech translation without transcription. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 949-959, San Diego, California. Association for Computational Linguistics.

Alex Graves, Santiago Fernández, and Faustino Gomez. 2006. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the International Conference on Machine Learning, ICML 2006, pages 369-376.

Sathish Indurthi, Houjeung Han, Nikhil Kumar Lakumarapu, Beomseok Lee, Insoo Chung, Sangha Kim, and Chanwoo Kim. 2019. Data efficient direct speech-to-text translation with modality agnostic meta-learning. arXiv preprint arXiv:1911.04283.

Ye Jia, Melvin Johnson, Wolfgang Macherey, Ron J Weiss, Yuan Cao, Chung-Cheng Chiu, Naveen Ari, Stella Laurenzo, and Yonghui Wu. 2019. Leveraging weakly supervised data to improve end-to-end speech-to-text translation. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7180-7184. IEEE.

Takatomo Kano, Sakriani Sakti, and Satoshi Nakamura. 2017. Structured-based curriculum learning for end-to-end English-Japanese speech translation. In Proc. Interspeech 2017, pages 2630-2634.

Shigeki Karita, Nanxin Chen, Tomoki Hayashi, Takaaki Hori, Hirofumi Inaguma, Ziyan Jiang, Masao Someki, Nelson Enrique Yalta Soplin, Ryuichi Yamamoto, Xiaofei Wang, Shinji Watanabe, Takenori Yoshimura, and Wangyou Zhang. 2019. A comparative study on Transformer vs RNN in speech applications. In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 449-456.

Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations.

Diederik P Kingma and Max Welling. 2013. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.

Ali Can Kocabiyikoglu, Laurent Besacier, and Olivier Kraif. 2018. Augmenting LibriSpeech with French translations: A multimodal corpus for direct speech translation evaluation. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).

Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177-180, Prague, Czech Republic. Association for Computational Linguistics.

Yuchen Liu, Hao Xiong, Jiajun Zhang, Zhongjun He, Hua Wu, Haifeng Wang, and Chengqing Zong. 2019a. End-to-end speech translation with knowledge distillation. In Proc. Interspeech 2019, pages 1128-1132.

Yuchen Liu, Jiajun Zhang, Hao Xiong, Long Zhou, Zhongjun He, Hua Wu, Haifeng Wang, and Chengqing Zong. 2019b. Synchronous speech recognition and speech-to-text translation with interactive decoding. arXiv preprint arXiv:1912.07240.

Christos Louizos, Max Welling, and Diederik P. Kingma. 2018. Learning sparse neural networks through $l_{0}$ regularization. In International Conference on Learning Representations.

Liang Lu, Xingxing Zhang, Kyunghyun Cho, and Steve Renals. 2015. A study of the recurrent neural network encoder-decoder for large vocabulary speech recognition. In Sixteenth Annual Conference of the International Speech Communication Association.

Lambert Mathias and William Byrne. 2006. Statistical phrase-based speech translation. In 2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings, volume 1, pages I-I. IEEE.

Michael McAuliffe, Michaela Socolof, Sarah Mihuc, Michael Wagner, and Morgan Sonderegger. 2017. Montreal Forced Aligner: Trainable text-speech alignment using Kaldi. In Proc. Interspeech 2017, pages 498-502.

Rui Na, Junfeng Hou, Wu Guo, Yan Song, and Lirong Dai. 2019. Learning adaptive downsampling encoding for online end-to-end speech recognition. In 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pages 850-854.

Hermann Ney. 1999. Speech translation: Coupling of recognition and translation. In 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings. ICASSP99 (Cat. No. 99CH36258), volume 1, pages 517-520. IEEE.

Jan Niehues, Roldano Cattoni, Sebastian Stüker, Matteo Negri, Marco Turchi, Elizabeth Salesky, Ramon Sanabria, Loic Barrault, Lucia Specia, and Marcello Federico. 2019. The IWSLT 2019 evaluation campaign. In Proceedings of the 16th International Workshop on Spoken Language Translation (IWSLT 2019).

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Juan Pino, Liezl Puzon, Jiatao Gu, Xutai Ma, Arya D McCarthy, and Deepak Gopinath. 2019. Harnessing indirect training data for end-to-end automatic speech translation: Tricks of the trade. In Proceedings of the 16th International Workshop on Spoken Language Translation (IWSLT).

Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Belgium, Brussels. Association for Computational Linguistics.

Shirin Saleem, Szu-Chen (Stan) Jou, Stephan Vogel, and Tanja Schultz. 2004. Using word lattice information for a tighter coupling in speech translation systems. In International Conference of Spoken Language Processing.

Elizabeth Salesky, Matthias Sperber, and Alan W. Black. 2020. Exploring phoneme-level speech representations for end-to-end speech translation. In Proceedings of the 2020 Annual Conference of the Association for Computational Linguistics. Association for Computational Linguistics.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.

Matthias Sperber, Graham Neubig, Jan Niehues, and Alex Waibel. 2019. Attention-passing models for robust and data-efficient end-to-end speech translation. Transactions of the Association for Computational Linguistics, 7:313-325.

Mihaela C Stoian, Sameer Bansal, and Sharon Goldwater. 2020. Analyzing ASR pretraining for low-resource speech-to-text translation. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7909-7913. IEEE.

Tzu-Wei Sung, Jun-You Liu, Hung-yi Lee, and Linshan Lee. 2019. Towards end-to-end speech-to-text translation with two-pass decoding. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7175-7179. IEEE.

Yulia Tsvetkov, Florian Metze, and Chris Dyer. 2014. Augmenting translation models with simulated acoustic confusions for improved spoken language translation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 616-625, Gothenburg, Sweden. Association for Computational Linguistics.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.

Chengyi Wang, Yu Wu, Shujie Liu, Zhenglu Yang, and Ming Zhou. 2019. Bridging the gap between pre-training and fine-tuning for end-to-end speech translation. arXiv preprint arXiv:1909.07575.

Chengyi Wang, Yu Wu, Shujie Liu, Ming Zhou, and Zhenglu Yang. 2020. Curriculum pre-training for end-to-end speech translation. arXiv preprint arXiv:2004.10093.

Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, Nelson Enrique Yalta Soplin, Jahn Heymann, Matthew Wiesner, Nanxin Chen, Adithya Renduchintala, and Tsubasa Ochiai. 2018. ESPnet: End-to-end speech processing toolkit. In Proc. Interspeech 2018, pages 2207-2211.

Ron J. Weiss, Jan Chorowski, Navdeep Jaitly, Yonghui Wu, and Zhifeng Chen. 2017. Sequence-to-sequence models can directly translate foreign speech. In Proc. Interspeech 2017, pages 2625-2629.

Biao Zhang, Ivan Titov, and Rico Sennrich. 2020.
On sparsifying encoder outputs in sequence-to-sequence models. arXiv preprint arXiv:2004.11854.

Pei Zhang, Niyu Ge, Boxing Chen, and Kai Fan. 2019a. Lattice transformer for speech translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6475-6484, Florence, Italy. Association for Computational Linguistics.

Shucong Zhang, Erfan Loweimi, Yumo Xu, Peter Bell, and Steve Renals. 2019b. Trainable dynamic subsampling for end-to-end speech recognition. In Proc. Interspeech 2019, pages 1413-1417.
# A Dual-Attention Network for Joint Named Entity Recognition and Sentence Classification of Adverse Drug Events

Susmitha Wunnava

Worcester Polytechnic Institute

100 Institute Rd

Worcester, MA 01609

swunnava@wpi.edu

Xiao Qin

IBM Research - Almaden

650 Harry Road

San Jose CA 95120

xiao.qin@ibm.com

Tabassum Kakar

Worcester Polytechnic Institute

100 Institute Rd

Worcester, MA 01609

tkakar@wpi.edu

Xiangnan Kong

Worcester Polytechnic Institute

100 Institute Rd

Worcester, MA 01609

xkong@wpi.edu

Elke A. Rundensteiner

Worcester Polytechnic Institute

100 Institute Rd

Worcester, MA 01609

rundenst@wpi.edu

# Abstract

An adverse drug event (ADE) is an injury resulting from medical intervention related to a drug. ADE detection from text can be either fine-grained (ADE entity recognition) or coarse-grained (ADE assertive sentence classification), and few efforts have leveraged the interdependencies between these two granularities. We instead design a multi-grained joint deep network model, MGADE, that concurrently solves both ADE tasks. MGADE takes advantage of their symbiotic relationship, with a transfer of knowledge between the two levels of granularity. Our dual-attention mechanism constructs multiple distinct representations of a sentence that capture both task-specific and semantic information, providing stronger emphasis on the key elements essential for sentence classification. Our model improves the state-of-the-art F1-score for both tasks: (i) entity recognition of ADE words (12.5% increase) and (ii) ADE sentence classification (13.6% increase) on the MADE 1.0 benchmark of EHR notes.

# 1 Introduction

Background.
Adverse drug events (ADEs), injuries resulting from medical intervention, are a leading cause of death in the United States and cost around \$30 to \$130 billion every year (Donaldson et al., 2000). Early detection of ADE incidents aids in the timely assessment, mitigation and prevention of future occurrences of ADEs. Natural Language Processing techniques have been recognized as instrumental in identifying ADEs and related information in unstructured text fields of spontaneous reports and electronic health records (EHRs), and thus in improving drug safety monitoring and pharmacovigilance (Harpaz et al., 2014).

Fine-grained ADE detection identifies named ADE entities at the word level, while coarse-grained ADE detection (also called ADE assertive text classification) identifies complete sentences describing drug-related adverse effects. The system of Gurulingappa et al. (2011) for identifying ADE assertive sentences in medical case reports targets the important application of detecting underreported and under-documented adverse drug effects. Lastly, multi-grained ADE detection identifies ADE information at multiple levels of granularity, namely, both the entity and the sentence level.

As an example, Figure 1 displays ADE and non-ADE sentences. The first is an ADE sentence where the mentions of Drugname and ADE entities stand in the appropriate relationship with each other. The second and third sentences show that the mention of an ADE entity by itself is not sufficient to assert a drug-related adverse side effect.

Recently, deep learning-based sequence approaches have shown some promise in extracting fine-grained ADEs and related named entities from text (Liu et al., 2019). However, the prevalence of entity-type ambiguity remains a major hurdle, such as distinguishing between Indication entities as the reason for taking a drug versus ADE entities as unintended outcomes of taking a drug.
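The word-level labels shown in Figure 1 follow the BIO convention. A small helper makes the span-to-tag mapping concrete; the example sentence and entity spans below are invented for illustration, using entity types from this task:

```python
def spans_to_bio(tokens, spans):
    """Convert (start, end_exclusive, type) entity spans to BIO tags.

    Multi-word entities such as "weight gain" get B-<type> on their
    first token and I-<type> on the rest; every other token gets O.
    """
    tags = ["O"] * len(tokens)
    for start, end, etype in spans:
        tags[start] = f"B-{etype}"
        for i in range(start + 1, end):
            tags[i] = f"I-{etype}"
    return tags

# Invented example in the spirit of Figure 1: one Drugname span, one ADE span.
tokens = ["Started", "on", "Zyprexa", "and", "noticed", "weight", "gain"]
spans = [(2, 3, "Drugname"), (5, 7, "ADE")]
print(spans_to_bio(tokens, spans))
# ['O', 'O', 'B-Drugname', 'O', 'O', 'B-ADE', 'I-ADE']
```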
Coarse-grained sentence-level detection performs well in identifying ADE descriptive sentences, but is not equipped to detect fine-grained information such as the words associated with ADE related named entities. Unfortunately, when the interaction between these two extraction tasks is ignored, we miss the opportunity to transfer knowledge between the ADE entity and sentence prediction tasks.

Attention-based neural network models have been shown to be effective for text classification tasks (Luong et al., 2015; Bahdanau et al., 2014), from alignment attention in translation (Liu et al., 2016) to supervising attention in binary text classification (Rei and Søgaard, 2019). Previous approaches typically apply only a single round of attention focusing on simple semantic information. In our ADE detection task, instead, key elements of the sentence can be linked to multiple categories of task-specific semantic information of the named entities (ADE, Drug, Indication, Severity, Dose, etc.). Thus, single attention is insufficient for exploring this multi-aspect information and consequently risks losing important cues.

![](images/64665bc5d961305305b47a74c4f44e68bc3195cf2603b2f38431c5f6fe6ad6db.jpg)
Figure 1: Each sentence is classified as an ADE sentence (binary yes/no). Each word is labeled as the beginning of an entity (B-...) or inside an entity (I-...) for ADE related named entities (multiple classes). O denotes no entity tag.

Proposed Approach. In our work, we tackle the above shortcomings by designing a dual-attention based neural network model for multi-grained joint learning, called MGADE, that jointly identifies both ADE entities and ADE assertive sentences. The design of MGADE is inspired by multi-task Recurrent Neural Network architectures for jointly learning to label tokens and sentences in a binary classification setting (Rei and Søgaard, 2019).
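As background for the dual-attention design, the single-round attention pooling that MGADE generalizes can be sketched as one softmax weight per word producing one pooled sentence vector; the dimensions and random values below are illustrative, not the paper's configuration:

```python
import numpy as np

def attention_pool(H: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Single-attention sentence representation.

    H: (N, d) hidden states from a contextual encoder.
    w: (d,) learned scoring vector.
    Returns one (d,) vector: a convex combination of the hidden states.
    A single weight per word must summarize *all* entity aspects
    (ADE, Drug, Indication, ...), which is the limitation noted above.
    """
    scores = H @ w                      # (N,) one scalar score per word
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                # softmax over word positions
    return alpha @ H                    # weighted sum -> sentence vector

rng = np.random.default_rng(0)
H = rng.normal(size=(7, 16))            # 7 words, 16-dim hidden states
s = attention_pool(H, rng.normal(size=16))
print(s.shape)  # (16,)
```

Dual attention replaces the single scoring vector with multiple attention functions, yielding several complementary pooled representations of the same sentence.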
In addition, our model makes use of a supervised self-attention mechanism based on entity-level predictions to guide the attention function, aiding it in tackling the above entity-type ambiguity problem. We also introduce novel strategies for constructing multiple complementary sentence-level representations to enhance the performance of sentence classification.

Our key contributions include:

1. Joint Model. We jointly model ADE entity recognition as a multi-class sequence tagging problem and ADE assertive text classification as binary classification. Our model leverages the mutually beneficial relationship between these two tasks; e.g., ADE sentence classification can influence ADE entity recognition by identifying clues that contribute to the ADE assertiveness of the sentence and matching them to ADE entities.

2. Dual-Attention. Our novel method for generating and pooling multiple attention mechanisms produces informative sentence-level representations. Our dual-attention mechanisms based on word-level entity predictions construct multiple representations of the same sentence. The dual-attention weighted sentence-level representations capture both task-specific and semantic information in a sentence, providing stronger emphasis on the key elements essential for sentence classification.

3. Label-Awareness. We introduce an augmented sentence-level representation composed of predicted entity labels, which adds label context to the proposed dual-attention sentence-level representation to better capture the word-level label distribution and word dependencies within the sentence. This further boosts the performance of the sentence classification task.

4. Model Evaluation. We compare our joint model with state-of-the-art methods for the ADE entity recognition and ADE sentence classification tasks. Experiments on the MADE 1.0 benchmark of EHR notes demonstrate that our MGADE model drives up the F1-score for both tasks significantly: (i) entity recognition of ADE words by 12.5% and 23.5%, and (ii) ADE sentence classification by 13.6% and 23.0%, compared to state-of-the-art single-task and joint-task models, respectively.

# 2 Related Work

Fine-grained ADE Detection. Jagannatha and Yu (2016b) employed a bidirectional LSTM-CRF model to label named entities in electronic health records of cancer patients. Pandey et al. (2017) proposed a bidirectional recurrent neural network with attention to extract ADRs and classify the relationships between entities from Medline abstracts and EHR datasets. Wunnava et al. (2019) presented a three-layer deep learning architecture for identifying named entities from EHRs, consisting of a Bi-LSTM layer for character-level encoding, a Bi-LSTM layer for word-level encoding, and a CRF layer for structured prediction.

Coarse-grained ADE Detection. Huynh et al. (2016) apply Convolutional Neural Networks with pre-trained word embeddings to detect sentences describing ADEs. Tafti et al. (2017) utilized a feed-forward ANN to discover ADE sentences in PubMed Central data and social media. Dev et al. (2017) developed a binary document classifier using logistic regression, random forests and LSTMs to classify an AE case as serious vs. non-serious.

Multi-grained ADE Detection. Zhang et al. (2018) developed a multi-task learning model that combines entity recognition with document classification to extract the adverse event from a case narrative and classify the case as serious or non-serious. However, they fall short of tackling our problem. Not only do their targeted labels not fall into the drug-related adverse side effect category, in which a causal relationship is suspected and required, but their attention model is only simple self-attention.
As a consequence, MGADE outperforms their model by $23.5\%$ in F1 score for entity recognition and by $23.0\%$ for assertive text classification, as seen in Section 4.

# 3 The Proposed Model: MGADE

# 3.1 Task Definition

In the ADE and medication related information detection task, the entities are ADE, Drugname, Dose, Duration, Frequency, Indication, Route, Severity and Other Signs & Symptoms. The no-entity tag is O. Because some entities (like weight gain) can span multiple words, we work with a BIO tagging scheme to distinguish between the beginning (tag B-...) and the inside (tag I-...) of an entity. The notation we use is given in Fig 2. Given a sentence (a sequence of words), task one is the multi-class classification of ADE and medication related named entities in the text sequence, i.e., entity recognition. Task two is the binary classification of a sentence as ADE assertive text. The overall goal is to minimize the weighted sum of the entity recognition loss and the sentence classification loss.

# 3.2 Input Embedding Layer

The input of this layer is a sentence represented by a sequence of words $S = \langle w_1, w_2, \dots, w_N \rangle$ , where $N$ is the sentence length. The words are first broken into individual characters, and character-level representations, which capture the morphology of a word, are computed with a bidirectional LSTM over the sequence of characters in the input words. We employ pre-trained GloVe word vectors (Pennington et al., 2014) to obtain a fixed word embedding for each word. A consolidated dense embedding, comprised of the pre-trained word embedding concatenated with the learned character-level representation, is used to represent a word. The output of this layer is $X = [x_{1}, x_{2}, \ldots, x_{N}]$ .

# 3.3 Contextual Layer

LSTM is a type of recurrent neural network that effectively captures long-distance sequence information and the interaction between adjacent words (Hochreiter and Schmidhuber, 1997).
The word representations $x_{t}$ are given as input to two separate LSTM networks (Bi-LSTM) that scan the sequence forward and backward, respectively. The hidden states learned by the forward and backward LSTMs are denoted as $\overrightarrow{h}_t$ and $\overleftarrow{h}_t$ , respectively.

$$
\overrightarrow {h} _ {t} = L S T M \left(x _ {t}, \overrightarrow {h} _ {t - 1}\right) \tag {1}
$$

$$
\overleftarrow {h} _ {t} = L S T M \left(x _ {t}, \overleftarrow {h} _ {t + 1}\right) \tag {2}
$$

The output of this layer is a sequence of hidden states $H = [h_1, h_2, \dots, h_N]$ , where $h_t$ is the concatenation of $\overrightarrow{h}_t$ and $\overleftarrow{h}_t$ . This way, the hidden state $h_t$ of a word encodes information about the $t^{th}$ word and its context:

$$
h _ {t} = \left[ \overrightarrow {h} _ {t}; \overleftarrow {h} _ {t} \right] \tag {3}
$$

# 3.4 Word-level (NER) Output Layer

The hidden states $h_t$ are passed through a nonlinear layer and then through a softmax activation function over $k$ output nodes, where $k$ denotes the number of entity-types (classes). Entity-type labels are the named entities in the BIO format. Each output node belongs to some entity-type and outputs a score for that entity-type. The output of the softmax function is a categorical probability distribution, where the output probability of each class is between 0 and 1, and the sum of all output probabilities is equal to 1.

$$
a _ {t} ^ {(i)} = \frac {\exp (e _ {t} ^ {(i)})}{\sum_ {j = 1} ^ {k} \exp (e _ {t} ^ {(j)})} \tag {4}
$$

Each word is classified into the entity-type that has the highest probability value.
$$
\hat {a} _ {t} = \max _ {i \in \{1, 2, \dots , k \}} a _ {t} ^ {(i)} \tag {5}
$$

![](images/eb84885a87df6f3c0b6669037aca07693216a93bdb88ca43f82910077653259a.jpg)
Figure 2: The architecture of the proposed Multi-Grained ADE Detection Network (MGADE)

# 3.5 Dual-Attention Layer

The purpose of the attention mechanism in the sentence classification task is to select important words in different contexts to build informative sentence representations. Different words have different importance for the ADE sentence classification task. For instance, key elements (words/phrases) in the ADE detection task are linked to multiple aspects of semantic information associated with the named entity categories: ADE, Drugname, Severity, Dose, Duration, Indication, etc. It is necessary to assign a weight to each word according to its contribution to the ADE sentence classification task.

Moreover, certain named entities are task-specific and are considered essential for ADE sentence classification. There exists a direct correspondence between such task-specific named entities and the sentence. Hence, we anticipate that there would be at least one word with the same label as the sentence-level label. For instance, a sentence that is labeled as an ADE sentence has a corresponding ADE entity word. Although other named entity words detect important information and contribute to the ADE sentence-level classification task, a stronger focus should be on the task-specific ADE words indicative of the ADE sentence's core message. A single attention distribution tends to be insufficient to explore the multi-aspect information and consequently may risk losing important cues (Wang et al., 2017).

We address this challenge by generating and using multiple attention distributions that offer additional opportunities to extract relevant semantic information. This way, we focus on different aspects of an ADE sentence to create a more informative representation.
For this, we introduce a novel dual-attention mechanism, which, in addition to selecting the important semantic areas in the sentence (henceforth referred to as supervised self-attention (Bahdanau et al., 2014; Yang et al., 2016; Rei and Søgaard, 2019)), also provides stronger emphasis on task-specific semantic aspect areas (henceforth referred to as task-specific attention). The task-specific attention promotes the words important to the ADE sentence classification task and reduces the noise introduced by words which are less important for the task.

Similar to (Rei and Søgaard, 2019; Yang et al., 2016), we use a self-attention mechanism where, based on softmax probabilities and normalization, attention weights are extracted from word-level prediction scores. The difference between the two attention mechanisms is that the supervised self-attention recognizes word-level prediction scores of all named entities, while the task-specific attention recognizes word-level prediction scores w.r.t. only a selected named entity (the one corresponding to the ADE sentence), ignoring other named entities.
Specifically, the weights of the supervised self-attention and task-specific attention are calculated as follows:

Word-level prediction w.r.t. the task-specific named entity (i.e., ADE):

$$
a _ {t} ^ {\left(\boldsymbol {A D E} _ {e n t i t y}\right)} = \frac {\exp \left(e _ {t} ^ {\left(\boldsymbol {A D E} _ {e n t i t y}\right)}\right)}{\sum_ {j = 1} ^ {k} \exp \left(e _ {t} ^ {(j)}\right)} \tag {6}
$$

The task-specific attention weight, normalized to sum to 1 over all words in the sentence, is:

$$
\alpha_ {t} = \frac {a _ {t} ^ {\left(\mathbf {A D E} _ {\text {e n t i t y}}\right)}}{\sum_ {n = 1} ^ {N} \left(a _ {n} ^ {\left(\mathbf {A D E} _ {\text {e n t i t y}}\right)}\right)} \tag {7}
$$

The supervised self-attention weight, normalized to sum to 1 over all words in the sentence, is:

$$
\beta_ {t} = \frac {\hat {a} _ {t}}{\sum_ {n = 1} ^ {N} \hat {a} _ {n}} \tag {8}
$$

Fig 3 shows examples of the supervised self-attention and task-specific attention distributions generated from our attention layer. The color depth expresses the degree of importance of the weight in the attention vector. As depicted in Fig. 3, the task-specific attention places more emphasis on the parts relevant to the ADE sentence classification task.

Attention-based Sentence Representations. To generate informative and more accurate sentence representations, we construct two different sentence representations as weighted sums of the context-conditioned hidden states, using the task-specific attention weight $\alpha_{t}$ and the supervised self-attention weight $\beta_{t}$ , respectively.

1. Task-specific attention weighted sentence rep.:

$$
T S _ {S} = \sum_ {t = 1} ^ {N} \alpha_ {t} h _ {t} \tag {9}
$$

2.
Supervised self-attention weighted sentence rep.:

$$
S S _ {S} = \sum_ {t = 1} ^ {N} \beta_ {t} h _ {t} \tag {10}
$$

Attention Pooling. A combination of multiple sentence representations obtained from focusing on different aspects captures the overall contextual semantic information about a sentence. The two attention-based representations are concatenated to form a dual-attention contextual sentence representation:

$$
C _ {S} = \left[ T S _ {S}; S S _ {S} \right] \tag {11}
$$

# 3.6 Entity Prediction Embedding Layer

ADE detection is a challenging task. Understanding the co-occurrence of named entities (labels) is essential for ADE sentence classification. Although we implicitly capture long-range label dependencies with the Bi-LSTM in the contextual layer, and make even more informative sentence-level representations with the help of the dual-attention layer, explicitly integrating information on the label distribution in a sentence further helps to understand the label co-occurrence structure and dependencies in the sentence. The idea is to further improve the performance of the ADE sentence classification task by learning from the output word-level label knowledge. To better represent the word-level label distribution and to capture potential label dependencies within each sentence, we propose the Entity Prediction Embedding (EPE), a sentence-level vector representation of the entity labels predicted at the word-level output layer (Sec. 3.4), where $v_i = 1$ if label $i$ is among the predicted labels $\hat{l}_t$ of the words in the sentence, and $v_i = 0$ otherwise.

$$
\hat {l} _ {t} = \underset {i \in \{0, 1, 2, \dots , k \}} {\arg \max } \, a _ {t} ^ {(i)} \tag {12}
$$

$$
L _ {S} = \left[ v _ {0}, v _ {1}, v _ {2}, \dots , v _ {k} \right]; v _ {i} \in \{0, 1 \} \tag {13}
$$

# 3.7 Sentence Encoding Layer

A final sentence representation that captures the overall contextual semantic information and label dependencies within the sentence is constructed by combining the dual-attention weighted sentence representation and the Entity Prediction Embedding:
$$
\mathbf {S} = \left[ \mathbf {C} _ {S}; \mathbf {L} _ {S} \right] \tag {14}
$$

# 3.8 Sentence Classification Output Layer

Finally, we apply a fully connected layer with a sigmoid activation to output the sentence prediction score.

$$
\hat {y} ^ {\text {s e n t e n c e}} = p \left(y ^ {(j = 1)} \mid S\right) \tag {15}
$$

# 3.9 Optimization objective

The sentence-level objective is to minimize the mean squared error between the predicted sentence-level score $\hat{y}^{(sentence)}$ and the gold-standard sentence label $y^{(sentence)}$ across all $m$ sentences:

$$
L _ {s e n t e n c e} = \sum_ {m} \left(y ^ {(m)} - \hat {y} ^ {(m)}\right) ^ {2} \tag {16}
$$

![](images/5ffd6c72a856dde8384bdb6b977403fab9bc2c2888dff44a621c1397d786d8be.jpg)
Figure 3: Attention Visualizations: Highlighted words indicate attended words. Stronger color denotes higher focus of attention. (a) Task-specific attention: Recognizes task-specific semantic aspect areas of the sentence, with focus on ADE entity words essential for the ADE sentence classification task. (b) Supervised Self-attention: Recognizes all important areas in the sentence. (c) Distribution of Task-specific attention and Supervised Self-attention weights.

The word-level objective is to minimize the cross-entropy loss between the predicted word-level probability
score $\hat{y}^{(entity)}$ and the gold-standard entity label $y^{(entity)}$ across all $N$ words in the sentence:

$$
L _ {\text {w o r d}} = - \sum_ {m} \sum_ {t = 1} ^ {N} \sum_ {i = 1} ^ {k} \left[ a _ {t i} ^ {(m)} \log \left(\hat {a} _ {t i} ^ {(m)}\right) \right] \tag {17}
$$

Similar to (Rei and Søgaard, 2019), we also add a loss function joining the sentence-level and word-level objectives, which encourages the model to optimize for two conditions on an ADE sentence: (i) an ADE sentence must have at least one ADE entity word, and (ii) an ADE sentence must have at least one word that is either a non-ADE entity or a no-entity word.

$$
L _ {a t t n} = \sum_ {m} \left(\min _ {t} \left(\hat {a} _ {t, A D E} ^ {(m)}\right) - 0\right) ^ {2} + \sum_ {m} \left(\max _ {t} \left(\hat {a} _ {t, A D E} ^ {(m)}\right) - y ^ {(m)}\right) ^ {2} \tag {18}
$$

We combine the different objective functions using weighting parameters that allow us to control the importance of each objective. The final objective that we minimize during training is then:

$$
L = \lambda_ {s e n t} \cdot L _ {s e n t} + \lambda_ {w o r d} \cdot L _ {w o r d} + \lambda_ {a t t n} \cdot L _ {a t t n} \tag {19}
$$

By using word-level entity predictions as attention weights for composing sentence-level representations, we explicitly connect the predictions at both levels of granularity. When both objectives work in tandem, they help improve the performance of one another. In our joint model, we give equal importance to both tasks and set $\lambda_{word} = \lambda_{sentence} = 1$ .

# 4 Experimental Study

# 4.1 Data Set

The MADE1.0 NLP challenge for detecting medication and ADE related information from EHRs (Jagannatha and Yu, 2016a) used 1089 de-identified EHR notes from 21 cancer patients (Training: 876 notes, Testing: 213 notes).
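Before turning to the data, the multi-task objective of Section 3.9 (Eqs. 16-19) can be sketched in plain Python. This is our own illustration of the loss arithmetic, not the authors' implementation; the function and argument names are invented for the sketch.

```python
import math

def mse_sentence_loss(y_true, y_pred):
    # Eq. (16): squared error between gold and predicted sentence
    # scores, summed over the sentences in a batch
    return sum((y - p) ** 2 for y, p in zip(y_true, y_pred))

def word_ce_loss(gold_onehot, pred_probs):
    # Eq. (17): cross-entropy over the words of one sentence and the
    # k entity classes; gold_onehot[t][i] is 1 for the gold label of word t
    return -sum(g * math.log(p)
                for word_gold, word_pred in zip(gold_onehot, pred_probs)
                for g, p in zip(word_gold, word_pred))

def attention_loss(ade_scores, y_sentence):
    # Eq. (18), for one sentence: the smallest per-word ADE score is
    # pushed toward 0 (some word is non-ADE), the largest toward the
    # sentence label (an ADE sentence contains at least one ADE word)
    return (min(ade_scores) - 0.0) ** 2 + (max(ade_scores) - y_sentence) ** 2

def total_loss(l_sent, l_word, l_attn,
               lam_sent=1.0, lam_word=1.0, lam_attn=1.0):
    # Eq. (19): weighted sum of the three objectives
    return lam_sent * l_sent + lam_word * l_word + lam_attn * l_attn
```

With the paper's setting $\lambda_{word} = \lambda_{sentence} = 1$ , `total_loss` reduces to the plain sum of the three terms.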
The annotation statistics of the corpus are provided in (Jagannatha et al., 2019).

Named Entity Labels. The notes are annotated with several categories of medication information. Adverse Drug Event (ADE), Drugname, Indication and Other Sign Symptom and Diseases (OtherSSD) are specified as medical events that contribute to a change in a patient's medical status. Severity, Route, Frequency, Duration and Dosage are specified as attributes that describe important properties of the medical events. Severity denotes the severity of a disease or symptom. As attributes of Drugname, Route, Frequency, Duration and Dosage label the medication method, the frequency of dosage, the duration of dosage, and the dosage quantity, respectively.

Sentence Labels. MADE 1.0 has each word manually annotated with ADE or medication related entity types. For words that belong to the ADE entity type, an additional relation annotation denotes whether the ADE entity is an adverse side effect of the prescription of the Drugname entity. Since the MADE 1.0 dataset does not have sentence-level annotations, we use the relation annotation together with the word annotation to assign each sentence a label of ADE or non-ADE. In this work, the relation labels are used only to assign the sentence labels; they are not used in the supervised learning process.

# 4.2 Hyper-parameter Settings

The model operates on tokenized sentences. Tokens are lower-cased, while the character-level component receives input with the original capitalization to learn the morphological features of each word.
As input, we use the pre-trained, publicly available GloVe word embeddings of size 300 (Pennington et al., 2014). The learned character-level embeddings are 100-dimensional vectors. The LSTM hidden layers for the word-level and character-level LSTMs are of size 300 and 100, respectively. The combined hidden representation $h_t$ was set to size 200; the attention weight layer $e_t$ was set to size 100. The attention-weighted sentence representations $TS_S$ and $SS_S$ are 200-dimensional vectors, and therefore their combined context vector $C_S$ is 400-dimensional. The Entity Prediction Embedding (EPE) $L_S$ covers the $k$ entity labels in BIO format; hence the EPE is a 19-dimensional binary vector (eighteen entity tags plus the no-entity tag). The final concatenated sentence-level vector $S$ is thus of size 419.

![](images/b6c73ee548a100daf8572e552f0090add32430d1e73ddd609aff6827c871f473.jpg)
![](images/428e3ed71b5804e7dd8bb476ff0cf3dba7f309928cdfd20cbfe51279a1572f44.jpg)
(a) Single Task-specific Attention
![](images/7988c097563bcb8c57569589d1c00b8e22ce18f86d2a8084ac1792d0d46580e3.jpg)
![](images/f1e1de68bbc0baa6d1f16ea2d273ad0e712d73dced6c17c6dfe87b9c98f888bc.jpg)
(b) Dual Task-specific attention
![](images/2a8deec82d5d3296da4613d6472c9f215b6a8698a48df098dabff7e758f77db7.jpg)
(c) Single Supervised Self-attention
(d) Dual Supervised Self-attention
(e) Distribution of attention weights
(f) Sentence prediction scores
Figure 4: Single vs. dual attention distribution. The color intensity corresponds to the weight given to each word. The attention weight of each word is given in parentheses. The single attention-based models (a) and (c) fail to capture sufficient attention weight on the key semantic areas of the sentence. In the dual-attention based model, where the two attention distributions are combined, accurate weights are assigned, as shown in (b) and (d).
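As a sanity check on these dimensions (our own NumPy sketch, not the authors' code; random vectors merely stand in for learned hidden states and attention weights, and the EPE is a zero placeholder), the concatenations of Eqs. (9)-(14) reproduce the 419-dimensional sentence vector:

```python
import numpy as np

rng = np.random.default_rng(0)
N, h_dim, k_bio = 12, 200, 19    # sentence length, size of h_t, BIO labels (18 + no-entity)

H = rng.random((N, h_dim))       # context-conditioned hidden states h_1..h_N
alpha = rng.random(N)
alpha /= alpha.sum()             # task-specific weights (Eq. 7), normalized to sum to 1
beta = rng.random(N)
beta /= beta.sum()               # supervised self-attention weights (Eq. 8)

TS_S = alpha @ H                 # Eq. (9):  200-dim task-specific sentence rep.
SS_S = beta @ H                  # Eq. (10): 200-dim self-attention sentence rep.
C_S = np.concatenate([TS_S, SS_S])   # Eq. (11): 400-dim dual-attention rep.
L_S = np.zeros(k_bio)                # Eq. (13): 19-dim entity prediction embedding
S = np.concatenate([C_S, L_S])       # Eq. (14): final sentence vector

print(S.shape)                   # (419,)
```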
To avoid over-fitting, we apply a dropout strategy (Ma and Hovy, 2016; Srivastava et al., 2014) with rate 0.5 for our model. All models were trained with a learning rate of 0.001 using Adam (Kingma and Ba, 2014).

# 4.3 Results

# 4.3.1 ADE Assertive Sentence Classification

Table 1 compares our model against two baseline individual ADE sentence classification models: (i) similar to (Dernoncourt et al., 2017), LAST is a Bi-LSTM based sentence classification model that uses the last hidden states for sentence composition; (ii) similar to (Yang et al., 2016), ATTN is a Bi-LSTM model that uses simple attention weights for sentence composition. Our full model, MGADE, improves the F1 score by $13.6\%$ over the LAST baseline in testing. We also compare with a joint-task model based on self-attention, similar to (Zhang et al., 2018); MGADE outperforms their model by $23.0\%$ for sentence classification.

Table 1: ADE sentence classification: F1 scores.

| Model | F1 |
| --- | --- |
| *Baseline Individual Models* | |
| LAST (Dernoncourt et al., 2017) | 0.66 |
| ATTN (Yang et al., 2016) | 0.63 |
| *Baseline Joint Model* | |
| (Zhang et al., 2018) | 0.61 |
| MGADE | 0.75 |

Table 2: ADE entity recognition: F1 scores.
| Model | F1 |
| --- | --- |
| *Baseline Individual Models* | |
| Bi-LSTM (Wunnava et al., 2019) | 0.56 |
| Bi-LSTM + CRF (Wunnava et al., 2019) | 0.63 |
| *Baseline Joint Model* | |
| (Zhang et al., 2018) | 0.51 |
| MGADE | 0.63 |
# 4.3.2 ADE Named Entity Recognition

Table 2 compares our model against the best performing models on the MADE1.0 benchmark in the literature (Wunnava et al., 2019) for ADE entity recognition. The entity recognition component of our MGADE is similar to their Bi-LSTM model. MGADE improves the F1 score by $12.5\%$ over their Bi-LSTM only model. Our model achieved comparable results with their Bi-LSTM + CRF combination model. Models with a CRF layer predict the label sequence jointly instead of predicting each label individually, which is helpful for predicting sequences where the label of each word depends on the label of the previous word. Adding a CRF component to our model might further improve the performance of the entity recognition task. We also compare with a joint-task model based on self-attention, similar to (Zhang et al., 2018); MGADE outperforms their model by $23.5\%$ for entity recognition.

Table 3: Effect of the dual-attention layer. The first three score columns report ADE entity recognition and the last three report ADE sentence classification. † denotes single-attention models, with the Task-specific attention removed from the Supervised Self-attention model, and vice versa.

| Model | P | R | F1 | P | R | F1 |
| --- | --- | --- | --- | --- | --- | --- |
| MGADE-SelfA† | 0.58 | 0.52 | 0.55 | 0.84 | 0.55 | 0.67 |
| MGADE-TaskA† | 0.62 | 0.50 | 0.55 | 0.82 | 0.64 | 0.72 |
| MGADE-DualA | 0.68 | 0.55 | 0.61 | 0.87 | 0.65 | 0.74 |
| MGADE | 0.70 | 0.57 | 0.63 | 0.86 | 0.67 | 0.75 |

# 4.3.3 Ablation Analysis

To evaluate the effect of each part of our model, we remove core sub-components and quantify the resulting drop in F1 score.

Types of Attention. Table 3 studies the two types of attention we generate, Supervised self-attention $(\beta)$ and Task-specific attention $(\alpha)$ , for composing sentence-level representations; $\dagger$ denotes the models with single attention. As shown in the table, models that used only a single attention component, be it the Supervised Self-Attention based $(SS_{S})$ or the Task-specific attention based sentence representation $(TS_{S})$ , achieved the same F1-score for the entity recognition task. However, their sentence classification performance varies, demonstrating that the two attentions capture different aspects of information in the sentence. The type of attention captured plays a critical role in composing an informative sentence representation. Both single-attention models performed better than the baseline individual sentence classification models LAST and ATTN (see Table 1). $TS_{S}$ achieved superior sentence classification performance over $SS_{S}$ . Intuitively, a stronger focus should be placed on the words indicative of the sentence type, and $TS_{S}$ , which places more emphasis on the parts relevant to the ADE sentence classification task, is more accurate in identifying ADE sentences.

Single Attention vs. Dual-Attention. Table 3 also studies the impact of the dual-attention component.
As seen, the model with the dual-attention sentence representation $C_S$ , which combines the two attention-weighted sentence representations, outperforms the models with single attention (denoted by $\dagger$ ) on both the entity recognition and sentence classification tasks.

Label-Awareness. Table 3 studies the effect of adding the label-awareness component in improving the sentence representation. Our full model MGADE, with both the dual-attention and label-aware components, further improves the performance of the sentence classification and entity recognition tasks by $1.0\%$ and $2.0\%$ respectively, compared to MGADE-DualA, the model with only the dual-attention component.

Case Study. Dual-attention is not only effective in capturing multiple aspects of semantic information in the sentence, but also in reducing the risk of capturing incorrect or insufficient attention when only one of the single attentions (either task-specific or supervised self-attention) is used. Fig 4 shows such an example where a single attention, either task-specific or supervised self-attention, fails to capture sufficient attention weight on the key semantic areas of the sentence necessary to make a correct prediction on the sentence. The incorrect distribution of attention weights assigned by the single task-specific and single supervised self-attention (Figures 4a and 4c) is addressed by the dual-attention mechanism. The latter corrects the distribution and assigns appropriate weights to the relevant semantic words, as in Figures 4b and 4d. In Figures 4e and 4f, we demonstrate the effectiveness of the dual-attention mechanism by plotting the attention weight distributions and the sentence prediction scores when a specific type of attention is composed into the sentence representation. The bar chart depicts the ADE sentence-level classification confidence scores w.r.t. the single-attention and dual-attention models and confirms the utility of dual-attention.
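To make the two distributions concrete, here is a small NumPy sketch of the attention weights of Eqs. (6)-(8) computed from word-level softmax probabilities. The probability matrix and the three-label set are invented for illustration and are not from the trained model.

```python
import numpy as np

# Softmax class probabilities for a 4-word toy sentence over three
# illustrative labels [O, B-ADE, B-Drugname]; the values are made up.
A = np.array([[0.7, 0.1, 0.2],
              [0.2, 0.7, 0.1],
              [0.1, 0.2, 0.7],
              [0.6, 0.2, 0.2]])
ADE = 1                              # column of the task-specific (ADE) label

# Eq. (7): task-specific attention = normalized ADE-class probabilities
alpha = A[:, ADE] / A[:, ADE].sum()

# Eqs. (5) and (8): supervised self-attention = normalized max probabilities
a_hat = A.max(axis=1)
beta = a_hat / a_hat.sum()

print(int(alpha.argmax()))           # 1 -> the word most likely to be an ADE
```

Note how the task-specific weights concentrate on the single likely-ADE word, while the self-attention weights stay spread over all confidently labeled words, mirroring the behavior seen in Figures 3 and 4.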
# 5 Conclusion

We propose a dual-attention network for multi-grained ADE detection to jointly identify ADE entities and ADE assertive sentences from medical narratives. Our model effectively supports knowledge sharing between the two levels of granularity, i.e., words and sentences, improving the overall quality of prediction on both tasks. Our solution delivers significant performance improvements over state-of-the-art models on both tasks. Our MGADE architecture is pluggable, in that other sequential learning models, including BERT (Devlin et al., 2019) or other models for sequence labelling and text classification, could be substituted in place of the Bi-LSTM sequential representation learning model. We leave this enhancement of our model and its study to future work.

# References

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.
Franck Dernoncourt, Ji Young Lee, and Peter Szolovits. 2017. Neural networks for joint sentence classification in medical paper abstracts. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 2: Short Papers, pages 694-700. Association for Computational Linguistics.
Shantanu Dev, Shinan Zhang, Joseph Voyles, and Anand S Rao. 2017. Automated classification of adverse events in pharmacovigilance. In 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 905-909. IEEE.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186.
Association for Computational Linguistics.
Molla S. Donaldson, Janet M. Corrigan, Linda T. Kohn, and Editors. 2000. To err is human: building a safer health system, volume 6. National Academies Press.
Harsha Gurulingappa, Juliane Fluck, Martin Hofmann-Apitius, and Luca Toldo. 2011. Identification of adverse drug event assertive sentences in medical case reports. In First international workshop on knowledge discovery and health care management (KD-HCM), European conference on machine learning and principles and practice of knowledge discovery in databases (ECML PKDD), pages 16-27.
Rave Harpaz, Alison Callahan, Suzanne Tamang, Yen Low, David Odgers, Sam Finlayson, Kenneth Jung, Paea LePendu, and Nigam H Shah. 2014. Text mining for adverse drug events: the promise, challenges, and state of the art. *Drug safety*, 37(10):777-790.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural computation*, 9:1735-1780.

Trung Huynh, Yulan He, Alistair Willis, and Stefan Rüger. 2016. Adverse drug reaction classification with deep neural networks. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 877-887.
Abhyuday Jagannatha, Feifan Liu, Weisong Liu, and Hong Yu. 2019. Overview of the first natural language processing challenge for extracting medication, indication, and adverse drug events from electronic health record notes (made 1.0). *Drug safety*, 42(1):99-111.
Abhyuday N Jagannatha and Hong Yu. 2016a. Bidirectional rnn for medical event detection in electronic health records. In Proceedings of the conference. ACL. North American Chapter. Meeting, volume 2016, page 473. NIH Public Access.
Abhyuday N. Jagannatha and Hong Yu. 2016b. Structured prediction models for RNN based sequence labeling in clinical text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, volume 2016, page 856.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Feifan Liu, Abhyuday Jagannatha, and Hong Yu. 2019. Towards drug safety surveillance and pharmacovigilance: current progress in detecting medication and adverse drug events from electronic health records.
Lemao Liu, Masao Utiyama, Andrew M. Finch, and Eiichiro Sumita. 2016. Neural machine translation with supervised attention. In COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan, pages 3093-3102. ACL.
Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025.
Xuezhe Ma and Eduard H. Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers.
Chandra Pandey, Zina M. Ibrahim, Honghan Wu, Ehtesham Iqbal, and Richard J. B. Dobson. 2017. Improving RNN with attention and embedding for adverse drug reactions. In Proceedings of the 2017 International Conference on Digital Health, London, United Kingdom, July 2-5, 2017, pages 67-71.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.
Marek Rei and Anders Søgaard. 2019. Jointly learning to label sentences and tokens. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6916-6923.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15:1929-1958.
Ahmad P Tafti, Jonathan Badger, Eric LaRose, Ehsan Shirzadi, Andrea Mahnke, John Mayer, Zhan Ye, David Page, and Peggy Peissig. 2017. Adverse drug event discovery using biomedical literature: a big data neural network adventure. JMIR medical informatics, 5(4):e51.
Peng Wang, Lingqiao Liu, Chunhua Shen, Zi Huang, Anton van den Hengel, and Heng Tao Shen. 2017. Multi-attention network for one shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2721-2729.
Susmitha Wunnava, Xiao Qin, Tabassum Kakar, Cansu Sen, Elke A Rundensteiner, and Xiangnan Kong. 2019. Adverse drug event detection from electronic health records using hierarchical recurrent neural networks with dual-level embedding. *Drug safety*, 42(1):113-122.
Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies, pages 1480-1489.
Shinan Zhang, Shantanu Dev, Joseph Voyles, and Anand S Rao. 2018. Attention-based multi-task learning in pharmacovigilance. In 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 2324-2328. IEEE.
\ No newline at end of file diff --git a/adualattentionnetworkforjointnamedentityrecognitionandsentenceclassificationofadversedrugevents/images.zip b/adualattentionnetworkforjointnamedentityrecognitionandsentenceclassificationofadversedrugevents/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..902733dd5aab1ea55a0bdb3398f32070f9a09ece --- /dev/null +++ b/adualattentionnetworkforjointnamedentityrecognitionandsentenceclassificationofadversedrugevents/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a183f2bd01bb71be9039ec30820fa44354696e747fd274435f790cb7bcacffe +size 436540 diff --git a/adualattentionnetworkforjointnamedentityrecognitionandsentenceclassificationofadversedrugevents/layout.json b/adualattentionnetworkforjointnamedentityrecognitionandsentenceclassificationofadversedrugevents/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..4379afd771f022b828345c76f5aca03f4136e99b --- /dev/null +++ b/adualattentionnetworkforjointnamedentityrecognitionandsentenceclassificationofadversedrugevents/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f212fca7a39439af7adfe3be5131249d3fc7acabaca764950b1e5ba6cec7f1f9 +size 346152 diff --git a/adversarialaugmentationpolicysearchfordomainandcrosslingualgeneralizationinreadingcomprehension/8a16cae5-0817-4264-ad57-3e71cf79a236_content_list.json b/adversarialaugmentationpolicysearchfordomainandcrosslingualgeneralizationinreadingcomprehension/8a16cae5-0817-4264-ad57-3e71cf79a236_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..a6bb723167b4d913c7f58654e4b23fc1dd52f366 --- /dev/null +++ b/adversarialaugmentationpolicysearchfordomainandcrosslingualgeneralizationinreadingcomprehension/8a16cae5-0817-4264-ad57-3e71cf79a236_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c14627d5616b9445b3bf4e3b01e0ee4f03a36b9f7f49b14efd489a49c5eefa0 
# Adversarial Augmentation Policy Search for Domain and Cross-Lingual Generalization in Reading Comprehension

Adyasha Maharana, Mohit Bansal

UNC Chapel Hill

{adyasha,mbansal}@cs.unc.edu

# Abstract

Reading comprehension models often overfit to nuances of training datasets and fail at adversarial evaluation. Training with an adversarially augmented dataset improves robustness against those adversarial attacks but hurts generalization of the models. In this work, we present several effective adversaries and automated data augmentation policy search methods with the goal of making reading comprehension models more robust to adversarial evaluation, while also improving generalization to the source domain as well as to new domains and languages. We first propose three new methods for generating QA adversaries that introduce multiple points of confusion within the context, show dependence on the insertion location of the distractor, and reveal the compounding effect of mixing adversarial strategies with syntactic and semantic paraphrasing methods. Next, we find that augmenting the training datasets with uniformly sampled adversaries improves robustness to the adversarial attacks but leads to a decline in performance on the original unaugmented dataset. We address this issue via RL and more efficient Bayesian policy search methods for automatically learning the best augmentation policy combinations of the transformation probability for each adversary in a large search space. Using these learned policies, we show that adversarial training can lead to significant improvements in in-domain, out-of-domain, and cross-lingual (German, Russian, Turkish) generalization.

# 1 Introduction

There has been growing interest in understanding NLP systems and exposing their vulnerabilities through maliciously designed inputs (Iyyer et al., 2018; Belinkov and Bisk, 2018; Nie et al., 2019; Gurevych and Miyao, 2018).
Adversarial examples are generated using search-based (Alzantot et al., 2018), heuristic (Jia and Liang, 2017) or gradient-based (Ebrahimi et al., 2018) techniques to fool the model into giving the wrong outputs. Often, the model is further trained on those adversarial examples to make it robust to similar attacks. In the domain of reading comprehension (RC), adversaries are QA samples with distractor sentences that have significant overlap with the question and are randomly inserted into the context. By having a fixed template for creating the distractors and training on them, the model identifies learnable biases and overfits to the template instead of being robust to the attack itself (Jia and Liang, 2017). Hence, we first build on Wang and Bansal (2018)'s work of adding randomness to the template and significantly expand the pool of distractor candidates by introducing multiple points of confusion within the context, adding dependence on the insertion location of the distractor, and further combining distractors with syntactic and semantic paraphrases to create combinatorially adversarial examples that stress-test the model's language understanding capabilities. These adversaries inflict up to a $45\%$ drop in performance of RC models built on top of large pretrained models like RoBERTa (Liu et al., 2019).

Next, to improve robustness to the aforementioned adversaries, we finetune the RC model with a combined augmented dataset containing an equal number of samples from all of the adversarial transformations. While this improves robustness by a significant margin, it leads to a decline in performance on the original unaugmented dataset. Hence, instead of uniformly sampling from the various adversarial transformations, we propose to perform a search for the best adversarial policy combinations that improve robustness against the adversarial attacks and also preserve/improve accuracy on the original dataset via data augmentation.
However, manually tuning the transformation probability for each adversary and repeating the process for each target dataset is slow, expensive, and prone to inductive bias, so we present RL and Bayesian search methods to learn this policy combination automatically.

For this, we create a large augmentation search space of up to $10^{6}$ policies, with four adversarial methods, two paraphrasing methods and a discrete binning of the probability space for each method (see Figure 1). Cubuk et al. (2019) showed via AutoAugment that an RNN controller can be trained using reinforcement learning to find the best policy in a large search space. However, AutoAugment is computationally expensive and relies on the assumption that the policy searched using rewards from a smaller model and reduced dataset will generalize to bigger models. Alternatively, the augmentation methods can be modelled with a surrogate function, such as Gaussian processes (Rasmussen, 2003), and subjected to Bayesian optimization (Snoek et al., 2012), drastically reducing the number of training iterations required for achieving similar results (available as a software package for computer vision). Hence, we extend these ideas to NLP and perform a systematic comparison between AutoAugment and our more efficient BayesAugment.
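The search-space arithmetic quoted above (90 sub-policies, $90^n \approx 10^6$ policies for $n = 3$; detailed later in Sec. 3.2) can be sanity-checked in a few lines. This is a minimal sketch, not the authors' code; the method names mirror Table 1.

```python
from itertools import product

# One sub-policy = one negative perturbation (or none), one positive
# perturbation (or none), and a transformation probability drawn from
# six discrete, equally spaced bins (Sec. 3.2).
NEGATIVE = ["AddSentDiverse", "AddKSentDiverse", "AddAnswerPosition",
            "InvalidateAnswer", None]
POSITIVE = ["PerturbQuestion", "PerturbAnswer", None]
PROB_BINS = [i / 5 for i in range(6)]  # 0.0, 0.2, ..., 1.0

sub_policies = list(product(NEGATIVE, POSITIVE, PROB_BINS))
assert len(sub_policies) == 5 * 3 * 6 == 90  # one sub-policy's space

# A full policy is a set of n sub-policies, giving a 90^n space;
# n = 3 (the value used in Sec. 5.3) yields 729,000 ~ 10^6 candidates.
n = 3
print(len(sub_policies) ** n)  # 729000
```

The discrete bins are what keep the space enumerable for AutoAugment; BayesAugment later relaxes the probabilities to the continuous interval [0, 1].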
We show that augmentation policies for the source domain, learned using target domain performance as the reward, improve the model's generalization to the target domain with only the use of a small development set from that domain. Similarly, we use adversarial examples in a pivot language (in our case, English) to improve performance on other languages' RC datasets using rewards from a small development set from that language.

Our contributions can be summarized as follows:

- We first propose novel adversaries for reading comprehension that cause up to a $45\%$ drop in large pretrained models' performance. Augmenting the training datasets with uniformly sampled adversaries improves robustness to the adversarial attacks but leads to a decline in performance on the original unaugmented dataset.
- We next demonstrate that optimal adversarial policy combinations of transformation probabilities (for augmentation and generalization) can be automatically learned using policy search methods. Our experiments show that efficient Bayesian optimization achieves results similar to AutoAugment's with a fraction of the resources.
- By training on the augmented data generated via the learned policies, we not only improve adversarial robustness of the models but also show significant gains, i.e., up to $2.07\%$, $5.0\%$, and $2.21\%$ improvement for in-domain, out-of-domain, and cross-lingual evaluation respectively.

Overall, the goal of our paper is to make reading comprehension models robust to adversarial attacks as well as out-of-distribution data in cross-domain and cross-lingual scenarios.

# 2 Related Work

Adversarial Methods in NLP: Following the introduction of adversarial evaluation for RC models by Jia and Liang (2017); Wang and Bansal (2018), several methods have been developed for probing the sensitivity and stability of NLP models (Nie et al., 2019; Glockner et al., 2018). Zhao et al. (2018) employ GANs to generate semantically meaningful adversaries.
Ren et al. (2019) and Alzantot et al. (2018) use a synonym-substitution strategy while Ebrahimi et al. (2018) create gradient-based perturbations. Iyyer et al. (2018) construct a syntactic paraphrasing network to introduce syntactic variance in adversaries. + +Augmentation and Generalization: Goodfellow et al. (2015) and Miyato et al. (2018) use adversarial training to demonstrate improvement in image recognition. Xie et al. (2020) improve the adversarial training scheme with auxiliary batch normalization modules. Back-translation (Yu et al., 2018), pre-training with other QA datasets (Devlin et al., 2019; Lewis et al., 2019; Talmor and Berant, 2019) and virtual adversarial training (Miyato et al., 2017; Yang et al., 2019) are shown to be effective augmentation techniques for RC datasets. Cao et al. (2020) propose a conditional adversarial + +
| Adversary Method | Description | Original Question/Sentence and Corresponding Distractor |
| --- | --- | --- |
| AddSentDiverse | (Jia and Liang, 2017; Wang and Bansal, 2018) | Q: In what country is Normandy located?<br>D: D-Day is located in the country of Sri Lanka. |
| AddKSentDiverse | Multiple AddSentDiverse distractors are inserted randomly in the context. | Q: Which county is developing its business center?<br>D1: The county of Switzerland is developing its art periphery.<br>D2: The county of Switzerland is developing its home center. |
| AddAnswerPosition | Answer span is preserved in this distractor. It is most misleading when inserted before the original answer. | Q: What is the steam engine's thermodynamic basis?<br>A: The Rankine cycle is the fundamental thermodynamic underpinning of the steam engine.<br>D: Rankine cycle is the air engine's thermodynamic basis. |
| InvalidateAnswer | AddSentDiverse and additional elimination of the original answer. | Q: Where has the official home of the Scottish Parliament been since 2004?<br>D: Since October 2002, the unofficial abroad of the Welsh Assembly has been a old Welsh Assembly Houses, in the Golden Gate Bridge area of Glasgow. |
| PerturbAnswer | Content words (except named entities) are algorithmically replaced with synonyms and evaluated for consistency using a language model. | A: The UK refused to sign the Social Charter and was exempt from the legislation covering Social Charter issues unless it agreed to be bound by the legislation.<br>P: The UK repudiated to signature the Social Charter and was exempt from the legislation encompassing Social Charter issues unless it consented to be related by the legislation. |
| PerturbQuestion | A syntactic paraphrasing network is used to generate the source question with a different syntax. | Q: In what country is Normandy located?<br>P: Where does Normandy exist? |
Table 1: Demonstration of the various adversary functions used in our experiments (Q=Question, D=Distractor, A=Answer, P=Paraphrase). Words that have been modified using adversarial methods are italicized in the distractor.

self-training method to reduce domain distribution discrepancy. Lee et al. (2019); Wang et al. (2019) use a discriminator to enforce domain-invariant representation learning (Fisch et al., 2019); Chen et al. (2018) and Zhang et al. (2017) learn language-invariant representations for cross-lingual tasks. We show that heuristics-based adversaries can be used for augmentation as well as generalization.

Policy Search: Cubuk et al. (2019) present the AutoAugment algorithm, which uses reinforcement learning to find the best augmentation policies in a large search space, and then follow up with RandAugment (Cubuk et al., 2020), which reduces the task to simple grid search. Niu and Bansal (2019) use AutoAugment to discover perturbation policies for dialogue generation. Ho et al. (2019) use population-based augmentation (PBA) techniques (Jaderberg et al., 2017) and significantly reduce the compute time required by AutoAugment. We are the first to adapt RandAugment-style techniques for NLP via our BayesAugment method. RandAugment enforces a uniform transformation probability on all augmentation methods and collapses the augmentation policy search space to two global parameters. BayesAugment eliminates the need to choose between adversarial methods and optimizes only for their transformation probabilities (see Sec. 3.2).

# 3 Adversary Policy Design

As shown by Jia and Liang (2017), QA models are susceptible to random, semantically meaningless and minor changes in the data distribution. We extend this work and propose adversaries that exploit the model's sensitivity to the insertion location of a distractor, the number of distractors, combinatorial adversaries, etc.
After exposing the model's weaknesses, we strengthen it by training on these adversaries and show that its robustness to adversarial attacks increases significantly as a result. Finally, in Sec. 4, we automatically learn the right combination of transformation probability for each adversary in response to a target improvement using policy search methods.

# 3.1 Adversary Transformations

We present two types of adversaries, namely positive perturbations and negative perturbations (or attacks) (Figure 1). Positive perturbations are adversaries generated using methods that have been traditionally used for data augmentation in NLP, i.e., semantic and syntactic transformations. Negative perturbations are distractor sentences based on the classic AddSent model (Jia and Liang, 2017) that exploits the RC model's shallow language understanding to mislead it to incorrect answers. We use the method outlined by Wang and Bansal (2018) for AddSentDiverse to generate a distractor sentence (see Table 1) and insert it randomly within the context of a QA sample.

We introduce more variance to adversaries with AddKSentDiverse, wherein multiple distractor sentences are generated using AddSentDiverse and are inserted at independently sampled random positions within the context. For AddAnswerPosition, the original answer span is retained within the distractor sentence and the model is penalized for an incorrect answer span location. We remove the sentence containing the answer span from the context and introduce a distractor sentence to create InvalidateAnswer adversarial samples, which are no longer answerable. PerturbAnswer adversaries are created by following the Perturb subroutine (Alzantot et al., 2018) and generating semantic paraphrases of the sentence containing the answer span. We use the syntactic paraphrase network (Iyyer et al., 2018) to create PerturbQuestion adversarial samples by replacing the original question with its paraphrase.
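As a rough illustration (not the authors' implementation), the negative perturbations above boil down to building a high-overlap false sentence and splicing it into the context; the substitution table and helper names here are invented for the sketch.

```python
import random

# Hypothetical substitution table standing in for the WordNet-antonym and
# nearest-entity lookups that AddSentDiverse-style generators rely on.
SWAPS = {"Normandy": "D-Day", "France": "Sri Lanka"}
STOPWORDS = {"in", "what", "which", "is", "was", "located"}

def make_distractor(question: str, fake_answer: str) -> str:
    """Toy AddSentDiverse: keep the question's content words (entity-swapped)
    and attach a false answer, yielding high lexical overlap with the question."""
    content = [SWAPS.get(t, t) for t in question.rstrip("?").split()
               if t.lower() not in STOPWORDS]
    return " ".join(content) + " is located in " + fake_answer + "."

def add_k_sent_diverse(sentences: list, distractors: list,
                       rng: random.Random) -> list:
    """Toy AddKSentDiverse: insert each distractor at an independently
    sampled random position within the context."""
    out = list(sentences)
    for d in distractors:
        out.insert(rng.randrange(len(out) + 1), d)
    return out
```

For K = 1 this reduces to AddSentDiverse; InvalidateAnswer would additionally delete the context sentence containing the gold answer before inserting the distractor.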
Finally, we combine negative and positive perturbations to create adversaries which double down on the model's language understanding. This combination consistently leads to a larger drop in performance when tested on RC models trained on the original unaugmented datasets. See Appendix for more details.

# 3.2 Adversarial Policy & Search Space

Reading comprehension models are often trained with adversarial samples in order to improve robustness to the corresponding adversarial attack. We seek to find the best combination of adversaries for data augmentation that also preserves/improves accuracy on the source domain and improves generalization to a different domain or language.

AutoAugment: Following previous work in AutoAugment policy search (Cubuk et al., 2019; Niu and Bansal, 2019), we define a sub-policy to be a set of adversarial transformations which are applied to a QA sample to generate an adversarial sample. We show that adversaries are most effective when positive and negative perturbations are applied together (Table 2). Hence, to prepare one sub-policy, we select one of the four negative perturbations (or none), combine it with one of the two positive perturbations (or none) and assign the combination a transformation probability (see Figure 1). The probability space $[0,1]$ is discretized into 6 equally spaced bins. This leads to a search space of $5 \times 3 \times 6 = 90$ for a single sub-policy. Next, we define a complete adversarial policy as a set of $n$ sub-policies with a search space of $90^n$. For each input QA sample, one of the sub-policies is randomly sampled and applied (with a probability equal to the transformation probability) to generate the adversarial sample. Thus, each original QA sample ends up with one corresponding adversarial sample or none.

BayesAugment: We adopt a simplified formulation of the policy for our BayesAugment method, following Ho et al. (2019) and RandAugment (Cubuk et al., 2020).
Sampling of positive and negative adversaries is eliminated, and the transformation probabilities of all possible combinations of adversaries are optimized over a continuous range $[0, 1]$. Consequently, one of these combinations is randomly sampled for each input QA sample to generate adversaries. Empirically, the dominant adversary in a policy is the attack with the highest transformation probability (see policies in Table 8 in Appendix). Due to the probabilistic nature of the policy, it is possible for the model to not add any adversarial sample at all, but the probability of this happening is relatively low.

# 4 Automatic Policy Search

Next, we need to perform a search over the large space of augmentation policies in order to find the best policy for a desired outcome. Performing naive search (random or grid) or manually tuning the transformation probabilities is slow, expensive and largely impractical due to resource constraints. Hence, we compare two different approaches for learning the best augmentation policy in fewer searches: AutoAugment and BayesAugment. We follow the optimization procedure as demonstrated in Figure 1. For $t = 1,2,\ldots$, do:

- Sample the next policy $p_t$ (sample)
- Transform training data with $p_t$ and generate augmented data (apply, transform)
- Train the downstream task model with augmented data (train)
- Obtain the score on the validation dataset as reward $r_t$ (reward)
- Update the Gaussian Process or RNN Controller with $r_t$ (update)

![](images/1b4842310c5da62cd4e765b940124d3d382c1b3160edfcc775f684db9478a11e.jpg)
Figure 1: Flow chart of training loop for AutoAugment controller and Bayesian optimizer. See Sec. 4.

# 4.1 AutoAugment

Our AutoAugment model (see Figure 1) consists of a recurrent neural network-based controller and a downstream task model. The controller has $n$ output blocks for $n$ sub-policies; each output block generates distributions for the three components of sub-policies, i.e., neg, pos and probability.
The adversarial policy is generated by sampling from these distributions and applied to the input dataset to create adversarial samples, which are added to the original dataset to create an augmented dataset. The downstream model is trained on the augmented dataset until convergence and evaluated on a given metric, which is then fed back to the controller as a reward (see the update flow in the figure). We use REINFORCE (Sutton et al., 1999; Williams, 1992) to train the controller.

# 4.2 BayesAugment

Typically, it takes thousands of steps to train an AutoAugment controller using reinforcement learning, which prohibits the use of large pretrained models as the task model in the training loop. For example, the controllers in Cubuk et al. (2019) were trained for 15,000 samples or more. To circumvent this computational issue, we frame our adversarial policy search as a hyperparameter optimization problem and use Bayesian methods to perform the search. Bayesian optimization techniques use a surrogate model to approximate the objective function $f$ and an acquisition function to sample points from areas where improvement over the current result is most likely. The prior belief about $f$ is updated with samples drawn from $f$ in order to get a better estimate of the posterior that approximates $f$. Bayesian methods attempt to find the global maximum in the minimum number of steps.

# 4.3 Rewards

The F1 score of the downstream task model on the development set is used as the reward during policy search. To discover augmentation policies which are geared towards improving generalization of the RC model, we calculate the F1 score of the task model (trained on the source domain) on out-of-domain or cross-lingual development datasets, and feed it as the reward to the optimizer.

# 4.4 Datasets

We use SQuAD v2.0 (Rajpurkar et al., 2018) and NewsQA (Trischler et al., 2017) for adversarial evaluation and in-domain policy-search experiments.
Further, we measure generalization from SQuAD v2.0 to NewsQA and TriviaQA (Joshi et al., 2017), and from SQuAD v1.1 (Rajpurkar et al., 2016) to German dataset from MLQA (Lewis et al., 2020) and Russian, Turkish datasets from XQuAD (Artetxe et al., 2020). See Appendix for more details on datasets and training. + +# 4.5 Reading Comprehension Models + +We use RoBERTaBASE as the primary RC model for all our experiments. For fair baseline evaluation on out-of-domain and cross-lingual datasets, we also use the development set of the target task to select the best checkpoint. Search algorithms like AutoAugment require a downstream model that can be trained and evaluated fast, in order to reduce training time. So, we use distilRoBERTaBASE (Sanh et al., 2019) for AutoAugment training loops. BayesAugment is trained for fewer iterations than AutoAugment and hence, allows us to use RoBERTaBASE model directly in the training loop. See Appendix for more details and baseline performances of these models. + +
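The sample → transform → train → reward → update loop of Sec. 4 can be sketched with a toy REINFORCE controller over a handful of candidate policies. This is a hedged illustration only: the reward function below is a stub standing in for training the RC model and measuring dev-set F1 (policy names and scores are invented), and for BayesAugment a Gaussian-process surrogate would replace the controller.

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def reinforce_search(policies, reward_fn, steps=600, lr=0.5, seed=0):
    """Toy AutoAugment loop: a categorical controller over candidate
    policies is updated with REINFORCE, using (stubbed) dev-set F1 as reward."""
    rng = random.Random(seed)
    logits = [0.0] * len(policies)
    baseline = 0.0  # moving-average reward baseline to reduce variance
    for _ in range(steps):
        probs = softmax(logits)
        i = rng.choices(range(len(policies)), weights=probs, k=1)[0]
        reward = reward_fn(policies[i])        # train + evaluate (stubbed)
        baseline = 0.9 * baseline + 0.1 * reward
        advantage = reward - baseline
        # gradient of log p_i w.r.t. logit_j is 1{i=j} - p_j
        for j in range(len(logits)):
            logits[j] += lr * advantage * ((1.0 if j == i else 0.0) - probs[j])
    return max(range(len(policies)), key=lambda j: logits[j])

# Stub reward: pretend one candidate policy yields a much higher dev F1.
dev_f1 = {"AddSentDiverse@0.2": 0.55, "AddKSentDiverse@0.8": 0.72}
candidates = list(dev_f1)
best = candidates[reinforce_search(candidates, dev_f1.get)]
print(best)  # the controller should concentrate on the high-reward policy
```

In the paper's setting each `reward_fn` call is a full finetuning run, which is exactly why BayesAugment's far smaller number of evaluations matters.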
| Adversary Method | SQuAD | NewsQA |
| --- | --- | --- |
| Baseline (No Adversaries) | 81.17 | 58.40 |
| AddSentDiverse | 65.50 | 51.47 |
| AddKSentDiverse (K=2) | 45.31 | 48.31 |
| AddAnswerPosition | 68.91 | 49.20 |
| InvalidateAnswer | 77.75 | 24.03 |
| PerturbQuestion | 43.67 | 36.76 |
| PerturbAnswer | 71.97 | 59.08 |
| **Effect of Multiple Distractors** | | |
| AddSentDiverse | 65.50 | 51.47 |
| Add2SentDiverse | 45.31 | 48.31 |
| Add3SentDiverse | 43.49 | 44.81 |
| **Combinatorial Effect** | | |
| AddSentDiverse | 65.50 | 51.47 |
| + PerturbAnswer | 50.71 | 51.43 |
| AddKSentDiverse | 45.31 | 48.31 |
| + PerturbQuestion | 31.56 | 29.56 |
| **Effect of Insertion Location of AddAnswerPosition** | | |
| Random | 68.91 | 49.20 |
| Prepend | 66.52 | 48.01 |
| Append | 67.84 | 48.76 |

Table 2: Adversarial evaluation of baseline RoBERTaBASE trained on SQuAD v2.0 and NewsQA. Compare to corresponding rows in Table 3 to observe the difference in performance after adversarial training. Results (F1 score) are shown on the dev set.

# 4.6 Evaluation Metrics

We use the official SQuAD evaluation script for evaluation of robustness to adversarial attacks and performance on in-domain and out-of-domain datasets. For cross-lingual evaluation, we use the modified Translate-Test method as outlined in Lewis et al. (2020); Asai et al. (2018). QA samples in languages other than English are first translated to English and sent as input to RoBERTaBASE finetuned on SQuAD v1.1. The predicted answer spans within the English context are then mapped back to the context in the original language using alignment scores from the translation model. We use the top-ranked German→English and Russian→English models in the WMT19 shared news translation task, and train a Turkish→English model using a similar architecture, to generate translations and alignment scores (Ng et al., 2019).

# 5 Results

First, in Sec. 5.1, we perform adversarial evaluation of baseline RC models for various categories of adversaries. Next, in Sec. 5.2, we train the RC
| Adversary Method | SQuAD | NewsQA |
| --- | --- | --- |
| AddSentDiverse | 68.00 | 61.13 |
| AddKSentDiverse (K=2) | 79.44 | 62.31 |
| AddAnswerPosition | 80.16 | 56.90 |
| InvalidateAnswer | 91.41 | 67.57 |
| PerturbQuestion | 60.91 | 44.99 |
| PerturbAnswer | 76.42 | 60.74 |
| Original Dev (No Adversaries) | 78.83 | 58.08 |
Table 3: Adversarial evaluation after training RoBERTaBASE with the original dataset augmented with equally sampled adversarial data. Compare to corresponding rows in Table 2 to observe the difference in performance after adversarial training. Results (F1 score) are shown on the dev set.

models with an augmented dataset that contains equal ratios of adversarial samples and show that it improves robustness to adversarial attacks but hurts performance of the model on the original unaugmented dataset. Finally, in Sec. 5.3, we present results from AutoAugment and BayesAugment policy search and the in-domain, out-of-domain and cross-lingual performance of RC models trained using augmentation data generated from the learned policies with corresponding target rewards.

# 5.1 Adversarial Evaluation

Table 2 shows results from adversarial evaluation of RoBERTaBASE finetuned with SQuAD v2.0 and NewsQA respectively. All adversarial methods lead to a significant drop in performance for the finetuned models, i.e., between $4\%$ and $45\%$ for both datasets. The decrease in performance is largest when there are multiple distractors in the context (Add3SentDiverse) or perturbations are combined with one another (AddSentDiverse + PerturbAnswer). These results show that, in spite of being equipped with a broader understanding of language from pretraining, the finetuned RC models are shallow and over-stabilized to textual patterns like n-gram overlap. Further, the models are not robust to semantic and syntactic variations in text.

Additionally, we performed manual evaluation of 96 randomly selected adversarial samples (16 each from the attacks listed in Table 1) and found that a human annotator picked the right answer for $85.6\%$ of the questions.

# 5.2 Manual Adversarial Training

Next, in order to remediate the drop in performance observed in Table 2 and improve robustness to adversaries, the RC models are further finetuned for 2 epochs with an adversarially augmented training set.
The augmented training set contains each + +
| Search Method | SQuAD (in-domain) | NewsQA (in-domain) | SQuAD → NewsQA | SQuAD → TriviaQA |
| --- | --- | --- | --- | --- |
| **Validation** | | | | |
| Base | 81.17 / 77.54 | 58.40 / 47.04 | 48.36 / 36.06 | 41.60 / 34.86 |
| UniS | 78.83 / 74.68 | 58.08 / 46.79 | 48.24 / 36.03 | 42.04 / 35.11 |
| Auto | 81.63 / 78.06 | 62.17 / 49.41 | 50.57 / 38.56 | 42.41 / 35.41 |
| Bayes | 81.71 / 78.12 | 58.62 / 47.21 | 49.73 / 38.38 | 43.96 / 36.67 |
| **Test** | | | | |
| Base | 80.64 / 77.19 | 57.02 / 45.29 | 44.95 / 34.68 | 36.01 / 29.23 |
| UniS | 78.42 / 75.87 | 57.21 / 45.36 | 46.30 / 35.94 | 37.83 / 30.52 |
| Auto | 81.06 / 77.79 | 59.09 / 45.49 | 46.82 / 35.75 | 37.88 / 30.60 |
| Bayes | 80.88 / 77.57 | 57.63 / 45.32 | 48.95 / 37.44 | 40.99 / 33.68 |
Table 4: Baseline results (first row) and evaluation after finetuning baseline models with the adversarial policies derived from AutoAugment and BayesAugment for in-domain improvements and out-of-domain generalization from Wikipedia (SQuAD) to news (NewsQA) and web (TriviaQA) domains. Results (F1 / Exact Match) are shown on validation and test sets. (Base=Baseline, UniS=Uniform Sampling, Auto=AutoAugment, Bayes=BayesAugment)

QA sample from the original training set and a corresponding adversarial QA sample by randomly sampling from one of the adversary methods. Table 3 shows results from adversarial evaluation after adversarial training. Adding perturbed data during training considerably improves robustness of the models to adversarial attacks. For instance, RoBERTaBASE performs with 79.44 F1 score on SQuAD AddKSentDiverse samples (second row, Table 3), as compared to 45.31 F1 score without adversarial training (third row, Table 2). Similarly, RoBERTaBASE performs with 44.99 F1 score on NewsQA PerturbQuestion samples (fifth row, Table 3), as compared to a baseline of 36.76 F1 (sixth row, Table 2). However, this manner of adversarial training also leads to a drop in performance on the original unaugmented development set, e.g., RoBERTaBASE performs with 78.83 and 58.08 F1 scores on the SQuAD and NewsQA development sets respectively, which is 2.34 and 0.32 points lower than the baseline (first row, Table 2).

# 5.3 Augmentation Policy Search for Domain and Language Generalization

Following the conclusion from Sec. 5.2 that uniform sampling of adversaries is not the optimal approach for model performance on the original unaugmented dataset, we perform automated policy search over a large search space using BayesAugment and AutoAugment for in-domain as well as cross-domain/lingual improvements (as discussed in Sec. 4). For AutoAugment, we choose the number of sub-policies in a policy to be $n = 3$ as a trade-off between search space dimension and
| Search Method | MLQA (de) | XQuAD (ru) | XQuAD (tr) |
| --- | --- | --- | --- |
| **Validation** | | | |
| Baseline | 58.58 / 36.41 | 67.89 / 44.62 | 42.95 / 25.09 |
| UniformS | 58.97 / 36.68 | 68.11 / 44.84 | 43.12 / 25.26 |
| BayesAug | 59.40 / 37.11 | 68.73 / 45.34 | 44.09 / 25.73 |
| **Test** | | | |
| Baseline | 57.56 / 36.01 | 60.81 / 33.47 | 40.49 / 23.14 |
| UniformS | 58.27 / 36.45 | 61.87 / 34.31 | 41.04 / 23.78 |
| BayesAug | 59.02 / 38.01 | 63.03 / 34.85 | 41.95 / 24.17 |
Table 5: Cross-lingual QA: Translate-Test (Lewis et al., 2020) evaluation after finetuning the baseline with adversarial policies derived from BayesAugment for cross-lingual generalization from English SQuAD to German (de), Russian (ru) and Turkish (tr) RC datasets. Results (F1 / Exact Match) are shown on validation and test sets.

optimum results. We search for the best transformation policies for the source domain that lead to improvement of the model in three areas: (1) in-domain performance, (2) generalization to other domains, and (3) generalization to other languages. These results are presented in Tables 4 and 5, adversarial evaluation of the best BayesAugment models is presented in Table 6, and the learned policies are shown in the Appendix.

In-domain evaluation: The best AutoAugment augmentation policies for improving in-domain performance of RoBERTa$_{\text{BASE}}$ on the development sets result in $0.46\%$ and $3.77\%$ improvement in F1 score over baseline for SQuAD v2.0 and NewsQA respectively (see Table 4). Similarly, we observe $0.54\%$ ($p = 0.021$) and $0.22\%$ ($p = 0.013$) absolute improvement in F1 score for SQuAD and NewsQA respectively by using BayesAugment policies. This trend is reflected in results on the test set as well. AutoAugment policies result in the largest improvements, i.e., $0.42\%$ ($p = 0.014$) and $2.07\%$ ($p = 0.007$) for SQuAD and NewsQA respectively. Additionally, both policy search methods outperform finetuning with a dataset of uniformly sampled adversaries (see row 2 in Table 4).

Out-of-domain evaluation: To evaluate generalization of the RC model from Wikipedia to news articles and the web, we train RoBERTaBASE on SQuAD and evaluate on NewsQA and TriviaQA respectively. The baseline row in Table 4 presents results of RoBERTaBASE trained on original unaugmented SQuAD and evaluated on NewsQA and TriviaQA.
Next, we reiterate results from Table 3 and show that finetuning with a uniformly sampled dataset of adversaries (see UniS in Table 4) results in a drop in performance on the validation sets of SQuAD and NewsQA. By training on adversarially augmented SQuAD with the AutoAugment policy, we see $2.21\%$ and $0.81\%$ improvements on the development sets of NewsQA (SQuAD $\rightarrow$ NewsQA) and TriviaQA (SQuAD $\rightarrow$ TriviaQA) respectively. Similarly, BayesAugment provides $1.37\%$ and $2.36\%$ improvements over the baseline on the development sets of TriviaQA and NewsQA, proving to be a competitive and less computationally intensive substitute for AutoAugment. BayesAugment outperforms AutoAugment at out-of-domain generalization, providing $4.0\%$ ($p < 0.001$) and $4.98\%$ gains on the test sets of NewsQA and TriviaQA respectively, compared to $1.87\%$ improvements with AutoAugment.

Our experiments suggest that AutoAugment finds better policies than BayesAugment for in-domain evaluation. We hypothesize that this may be attributed to the difference in search space between the two policy search methods: AutoAugment is restricted to sampling at most 3 sub-policies, while BayesAugment has to simultaneously optimize the transformation probability for ten or more different augmentation methods. A diverse mix of adversaries from the latter proves more beneficial for out-of-domain generalization but yields only minor improvements in in-domain performance. Moving ahead, due to its better out-of-domain performance and more efficient trade-off with computation, we use only BayesAugment for our cross-lingual experiments.

Cross-lingual evaluation: Table 5 shows results of RoBERTa$_{\text{BASE}}$ finetuned with adversarially augmented SQuAD v1.1 and evaluated on RC datasets in non-English languages.
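The search-space difference described above can be made concrete with a small sketch of the two policy representations. The adversary names come from the paper, but the functions, the toy dataset, and the specific probabilities below are illustrative assumptions, not the authors' implementation.

```python
import random

# Adversary names as used in the paper (Section 5 / Appendix A).
ADVERSARIES = ["AddSentDiverse", "AddKSentDiverse", "AddAnswerPosition",
               "InvalidateAnswer", "PerturbQuestion", "PerturbAnswer"]

def apply_bayesaugment_policy(dataset, policy, seed=0):
    """BayesAugment-style policy: one transformation probability per
    adversary; each sample independently receives each adversary."""
    rng = random.Random(seed)
    augmented = []
    for sample in dataset:
        for adversary, prob in policy.items():
            if rng.random() < prob:
                augmented.append((sample, adversary))
    return augmented

def apply_autoaugment_policy(dataset, sub_policies, seed=0):
    """AutoAugment-style policy: at most 3 (adversary, probability)
    sub-policies applied in sequence to every sample."""
    assert len(sub_policies) <= 3
    rng = random.Random(seed)
    augmented = []
    for sample in dataset:
        for adversary, prob in sub_policies:
            if rng.random() < prob:
                augmented.append((sample, adversary))
    return augmented

data = [f"qa_{i}" for i in range(100)]
bayes_aug = apply_bayesaugment_policy(data, {a: 0.5 for a in ADVERSARIES})
auto_aug = apply_autoaugment_policy(data, [("AddSentDiverse", 0.9),
                                           ("PerturbAnswer", 0.7)])
```

The BayesAugment form exposes a continuous probability for every adversary at once, while the AutoAugment form commits to a short discrete chain, which illustrates why the former tends to produce a more diverse adversary mix.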
The baseline row presents results from RoBERTa$_{\text{BASE}}$ trained on the original unaugmented SQuAD and evaluated on the German MLQA(de), Russian XQuAD(ru) and Turkish XQuAD(tr) datasets; F1 scores on the development sets are 58.58, 67.89 and 42.95 respectively. These scores depend on the quality of the translation model as well as the RC model. We observe significant improvements on the development as well as test sets by finetuning the baseline RC model with adversarial data from English SQuAD. A uniformly sampled adversarial dataset results in $0.71\%$ ($p = 0.063$), $1.06\%$ ($p = 0.037$), and $0.55\%$ ($p = 0.18$) improvements on the test sets of MLQA(de), XQuAD(ru) and XQuAD(tr), respectively. BayesAugment policies outperform uniform sampling and result in $1.47\%$ ($p = 0.004$), $2.21\%$ ($p = 0.007$) and $1.46\%$ ($p = 0.021$) improvements on the test sets of MLQA(de), XQuAD(ru) and XQuAD(tr), respectively.

Adversarial evaluation: Table 6 shows results from the adversarial evaluation of RoBERTa$_{\text{BASE}}$ models finetuned with adversarially augmented SQuAD using policies learned from BayesAugment. We use the best models for out-of-domain and cross-lingual generalization as shown in Tables 4 and 5, and evaluate their performance on the adversaries discussed in Section 5.1. The results show that the policies learned with BayesAugment significantly improve resilience to the proposed adversarial attacks, in addition to improving performance on the target datasets. Performance on the adversaries varies with the transformation probability of the respective adversaries in the learned policies. For example, the transformation probability of PerturbQuestion adversaries is 0.83 and 0.0 for the SQuAD $\rightarrow$ TriviaQA and SQuAD $\rightarrow$ NewsQA models respectively (see Table 8). Consequently, the former performs better on PerturbQuestion adversaries.
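All result tables report F1 and Exact Match. For reference, a simplified sketch of the standard SQuAD-style metrics (without the official answer normalization of articles and punctuation, which is an assumption made here for brevity) looks like this:

```python
from collections import Counter

def exact_match(prediction, gold):
    """1.0 if the prediction matches the gold answer after lowercasing."""
    return float(prediction.strip().lower() == gold.strip().lower())

def token_f1(prediction, gold):
    """Token-level F1: harmonic mean of precision and recall over the
    bag-of-tokens overlap between prediction and gold answer."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, predicting "the Eiffel Tower" against the gold answer "Eiffel Tower" scores 0.0 Exact Match but 0.8 F1, which is why the two columns in the tables can diverge.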
| Adversary Method | TriviaQA | NewsQA | MLQA (de) | XQuAD (ru) | XQuAD (tr) |
| --- | --- | --- | --- | --- | --- |
| AddSentDiverse | 67.17 / 65.60 | 66.26 / 64.59 | 63.68 / 61.09 | 65.21 / 64.04 | 65.17 / 63.83 |
| AddKSentDiverse (K=2) | 78.48 / 76.32 | 77.13 / 75.80 | 76.91 / 74.45 | 77.76 / 75.20 | 77.93 / 75.37 |
| AddAnswerPosition | 80.05 / 77.41 | 79.46 / 76.31 | 78.62 / 75.59 | 80.24 / 77.38 | 79.51 / 76.28 |
| InvalidateAnswer | 88.23 / 85.56 | 90.18 / 78.25 | - | - | - |
| PerturbQuestion | 60.39 / 58.02 | 54.65 / 51.48 | 58.14 / 56.33 | 60.15 / 57.92 | 59.71 / 56.27 |
| PerturbAnswer | 77.12 / 75.38 | 76.30 / 74.12 | 77.28 / 75.82 | 74.31 / 72.88 | 74.72 / 73.16 |

Table 6: Adversarial evaluation after finetuning the baseline with adversarial policies derived from BayesAugment for generalization from SQuAD 2.0 to TriviaQA and NewsQA (out-of-domain), and from SQuAD 1.1 to German (de), Russian (ru) and Turkish (tr) RC datasets (cross-lingual). Results (F1 / Exact Match) are shown on validation sets. Compare to the corresponding rows in Table 3 to observe the difference in performance between models finetuned with a uniformly sampled dataset vs. a dataset derived from learned policies.

# 6 Analysis and Discussion

Having established the efficacy of automated policy search for adversarial training, we further probe the robustness of adversarially trained models to unseen adversaries. We also analyze the convergence of BayesAugment for augmentation policy search and contrast its requirement of computational resources with that of AutoAugment. See the Appendix for more analysis on the domain independence of adversarial robustness and on augmentation data size.

Robustness to Unseen Adversaries: We train RoBERTa$_{\text{BASE}}$ on SQuAD v2.0 augmented with the AddSentDiverse counterpart of each QA sample and evaluate it on the other adversarial attacks, to analyze the robustness of the model to unseen adversaries. As seen from the results in Table 7, training with AddSentDiverse leads to a large improvement on AddKSentDiverse and small improvements on PerturbQuestion and PerturbAnswer, i.e., $31.21\%$ (45.31 vs. 76.52), $1.56\%$ (43.67 vs. 45.23) and $5.31\%$ (71.97 vs. 77.28) respectively, showing that the model becomes robust to multiple distractors within the same context and also gains some resilience to paraphrasing operations. Conversely, we see a drop in performance on InvalidateAnswer, showing that it is easier for the model to be distracted by adversaries when the original answer is removed from the context.

| Adversary Attack | Trained on SQuAD | Trained on SQuAD + AddSentDiverse |
| --- | --- | --- |
| AddKSentDiverse | 45.31 | 76.52 |
| InvalidateAnswer | 77.75 | 70.91 |
| PerturbQuestion | 43.67 | 45.23 |
| PerturbAnswer | 71.97 | 77.28 |

Table 7: Robustness of RoBERTa$_{\text{BASE}}$ trained on a subset of adversaries to unseen adversaries. Results (F1 score) are shown on the SQuAD dev set.

![](images/fe902d76b9bb0176dc6cf47248d7852c37fc2850f39661d79948011bf0e39a7d.jpg)
Figure 2: Variation in the distance between neighboring samples picked by the Bayesian optimizer as training iterations increase. The red line represents the moving average of the distances.

Bayesian Convergence: In comparison to the thousands of training loops (or more) needed for AutoAugment, we run BayesAugment for only 100 training loops with 20 restarts. To show that BayesAugment converges within this budget, we plot the distance between the transformation probabilities chosen by the Bayesian optimizer for the AddSentDiverse-PerturbQuestion augmentation method. As shown in Figure 2, the distance between samples decreases as training progresses, showing that the optimizer becomes increasingly confident about the narrow range of probabilities that should be sampled for maximum performance on the validation set.

Analysis of Resources for AutoAugment vs. BayesAugment: With far fewer training loops, BayesAugment uses only $10\%$ of the GPU resources required by AutoAugment. Our AutoAugment experiments took more than 1000 iterations and up to 5-6 days to converge, requiring many additional days for hyperparameter tuning. In contrast, our BayesAugment experiments ran for 36-48 hours on two 1080Ti GPUs and achieved comparable performance with 100 iterations or fewer. If large pretrained models are replaced with smaller distilled models in future work, BayesAugment will provide even larger savings in time and computation.
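The convergence diagnostic behind Figure 2 — distances between consecutive optimizer samples, smoothed by a moving average — can be sketched as follows. The one-dimensional toy trajectory below is an illustrative assumption; the real samples are transformation-probability vectors proposed by the Bayesian optimizer.

```python
def successive_distances(samples):
    """L2 distance between consecutive probability vectors
    sampled by the optimizer."""
    return [sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
            for u, v in zip(samples, samples[1:])]

def moving_average(values, window=3):
    """Trailing moving average (the 'red line' in Figure 2)."""
    return [sum(values[max(0, i - window + 1):i + 1]) /
            (i - max(0, i - window + 1) + 1)
            for i in range(len(values))]

# Toy trajectory: the optimizer narrows in on a probability near 0.8.
trajectory = [(0.1,), (0.9,), (0.3,), (0.75,), (0.82,), (0.78,), (0.80,), (0.79,)]
dists = successive_distances(trajectory)
smoothed = moving_average(dists, window=3)
```

A shrinking smoothed distance indicates that the optimizer's proposals are clustering, i.e., that the search has effectively converged.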
# 7 Conclusion

We show that adversarial training can be leveraged to improve the robustness of reading comprehension models to adversarial attacks, as well as to improve performance on the source domain and generalization to out-of-domain and cross-lingual data. We present BayesAugment for policy search, which achieves results similar to the computationally intensive AutoAugment method with a fraction of the computational resources. By combining policy search with rewards from performance on the corresponding target development sets, we show that models trained on SQuAD can be generalized to NewsQA and to German, Russian and Turkish cross-lingual datasets without any training data from the target domain or language.

# Acknowledgments

We thank the reviewers for their useful feedback. This work was supported by DARPA MCS Grant #N66001-19-2-4031, DARPA KAIROS Grant #FA8750-19-2-1004, ONR Grant #N00014-18-1-2871, and awards from Google, Facebook, and Amazon (plus Amazon and Google GPU cloud credits). The views are those of the authors and not of the funding agency.

# References

Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890–2896, Brussels, Belgium. Association for Computational Linguistics.
Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In ACL.
Akari Asai, Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2018. Multilingual extractive reading comprehension by runtime machine translation. arXiv preprint arXiv:1809.03275.
Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings.
OpenReview.net. +Yu Cao, Meng Fang, Baosheng Yu, and Joey Tianyi Zhou. 2020. Unsupervised domain adaptation on reading comprehension. In AAAI. +Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. 2018. Adversarial deep averaging networks for cross-lingual sentiment classification. Transactions of the Association for Computational Linguistics, 6:557-570. +Ekin D. Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V. Le. 2019. Autoaugment: Learning augmentation strategies from data. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). +Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. 2020. RandAugment: Practical data augmentation with no separate search. In CVPR Workshops. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 31-36, Melbourne, Australia. Association for Computational Linguistics. + +Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, and Danqi Chen. 2019. MRQA 2019 shared task: Evaluating generalization in reading comprehension. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 1-13, Hong Kong, China. Association for Computational Linguistics. +Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that require simple lexical inferences. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650-655, Melbourne, Australia. Association for Computational Linguistics. +Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In ICLR. +Iryna Gurevych and Yusuke Miyao. 2018. Proceedings of the 56th annual meeting of the association for computational linguistics (volume 1: Long papers). In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). +Daniel Ho, Eric Liang, Xi Chen, Ion Stoica, and Pieter Abbeel. 2019. Population based augmentation: Efficient learning of augmentation policy schedules. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 2731-2741. PMLR. +Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875-1885, New Orleans, Louisiana. Association for Computational Linguistics. +Max Jaderberg, Valentin Dalibard, Simon Osindero, Wojciech M Czarnecki, Jeff Donahue, Ali Razavi, Oriol Vinyals, Tim Green, Iain Dunning, Karen Simonyan, et al. 2017. Population based training of neural networks. DeepMind tech report. arXiv preprint arXiv:1711.09846. +Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021-2031, Copenhagen, Denmark. Association for Computational Linguistics. +Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. 
TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611.
Seanie Lee, Donggyu Kim, and Jangwon Park. 2019. Domain-agnostic question-answering with adversarial training. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 196-202, Hong Kong, China. Association for Computational Linguistics.
Patrick Lewis, Ludovic Denoyer, and Sebastian Riedel. 2019. Unsupervised question answering by cloze translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4896-4910, Florence, Italy. Association for Computational Linguistics.
Patrick Lewis, Barlas Oğuz, Rudy Rinott, Sebastian Riedel, and Holger Schwenk. 2020. MLQA: Evaluating cross-lingual extractive question answering. In ACL.
Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In EMNLP.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Takeru Miyato, Andrew M. Dai, and Ian J. Goodfellow. 2017. Adversarial training methods for semi-supervised text classification. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2018. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8):1979-1993.
Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR's WMT19 News Translation Task Submission. In WMT.
Yixin Nie, Yicheng Wang, and Mohit Bansal. 2019. Analyzing compositionality-sensitivity of NLI models. In AAAI, pages 6867-6874.
Tong Niu and Mohit Bansal. 2018. Adversarial over-sensitivity and over-stability strategies for dialogue models. In CoNLL.
Tong Niu and Mohit Bansal. 2019. Automatically learning data augmentation policies for dialogue tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1317-1323, Hong Kong, China. Association for Computational Linguistics.
Eric W Noreen. 1989. Computer-intensive methods for testing hypotheses. Wiley New York.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In EMNLP, pages 1532-1543.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. OpenAI Technical Report.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784-789, Melbourne, Australia. Association for Computational Linguistics.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP.
Carl Edward Rasmussen. 2003. Gaussian processes in machine learning. In Summer School on Machine Learning, pages 63-71. Springer.
Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency.
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1085-1097, Florence, Italy. Association for Computational Linguistics.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. In 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing - NeurIPS 2019.
Jasper Snoek, Hugo Larochelle, and Ryan P Adams. 2012. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951-2959.
Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. ConceptNet 5.5: An open multilingual graph of general knowledge. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 4444-4451. AAAI Press.
Niranjan Srinivas, Andreas Krause, Sham M Kakade, and Matthias Seeger. 2010. Gaussian process optimization in the bandit setting: No regret and experimental design. In ICML.
Richard S. Sutton, David A. McAllester, Satinder P. Singh, and Yishay Mansour. 1999. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems 12, [NIPS Conference, Denver, Colorado, USA, November 29 - December 4, 1999], pages 1057-1063. The MIT Press.
Alon Talmor and Jonathan Berant. 2019. MultiQA: An empirical investigation of generalization and transfer in reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4911-4921, Florence, Italy. Association for Computational Linguistics.
Robert J Tibshirani and Bradley Efron. 1993. An introduction to the bootstrap. Monographs on Statistics and Applied Probability, 57:1-436.
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehension dataset.
In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 191-200, Vancouver, Canada. Association for Computational Linguistics. +Huazheng Wang, Zhe Gan, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, and Hongning Wang. 2019. Adversarial domain adaptation for machine reading comprehension. In EMNLP. +Yicheng Wang and Mohit Bansal. 2018. Robust machine comprehension models via adversarial training. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 575-581, New Orleans, Louisiana. Association for Computational Linguistics. +Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. Learn., 8:229-256. +Cihang Xie, Mingxing Tan, Boqing Gong, Jiang Wang, Alan Yuille, and Quoc V Le. 2020. Adversarial examples improve image recognition. In CVPR. +Ziqing Yang, Yiming Cui, Wanxiang Che, Ting Liu, Shijin Wang, and Guoping Hu. 2019. Improving machine reading comprehension via adversarial training. arXiv preprint arXiv:1911.03614. +Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V. Le. 2018. QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension. In ICLR. OpenReview.net. +Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1959-1970. +Zhengli Zhao, Dheeru Dua, and Sameer Singh. 2018. Generating natural adversarial examples. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. Open-Review.net. 
# Appendix

# A Adversary Transformations

We present two types of adversaries: positive perturbations and negative perturbations (or attacks). Positive perturbations are adversaries generated using methods that have traditionally been used for data augmentation in NLP, i.e., semantic and syntactic transformations. Negative perturbations are adversaries based on the classic AddSent model (Jia and Liang, 2017) that exploit the RC model's shallow language understanding to mislead it into incorrect answers.

AddSentDiverse: We use the method outlined by Wang and Bansal (2018) for AddSentDiverse to generate a distractor sentence and insert it at a random position within the context of a QA sample. In addition to WordNet, we use ConceptNet (Speer et al., 2017) for a wider choice of antonyms during adversary generation. QA pairs that do not have an answer within the given context are also augmented with AddSentDiverse adversaries.

AddKSentDiverse: The AddSentDiverse method is used to generate multiple distractor sentences for a given context. Each distractor sentence is then inserted at an independently sampled random position within the context. The distractors may or may not be similar to each other. Introducing multiple points of confusion is a more effective technique for misleading the model, and it reduces the scope of learnable biases during adversarial training by adding variance.

AddAnswerPosition: The original answer span is retained and placed within a distractor sentence generated using a combination of AddSentDiverse and random perturbations to maximize semantic mismatch. We modify the evaluation script to compare exact answer span locations in addition to the answer phrase, and fully penalize incorrect locations. For practical purposes, it makes no difference if the model outputs the answer span located within the adversarial sentence; however, it brings into question the interpretability of such models.
This distractor is most effective when placed immediately before the original answer sentence, showing that its effect depends on the insertion location.

InvalidateAnswer: The sentence containing the original answer is removed from the context, and a distractor sentence generated using AddSentDiverse is introduced instead. This method is used to augment the adversarial NoAnswer-style samples in SQuAD v2.0.

PerturbAnswer (Semantic Paraphrasing): Following Alzantot et al. (2018), we perform semantic paraphrasing of the sentence containing the answer span. Instead of using a genetic algorithm, we adapt their Perturb subroutine to generate paraphrases in the following steps:

1. Select word locations for perturbation; candidate locations lie within any content phrase that does not appear within the answer span. Here, content phrases are verbs, adverbs and adjectives.
2. For location $k_{i}$ in the set of word locations $\{k\}$, compute the 20 nearest neighbors of the word at that location using GloVe embeddings, create a candidate sentence by substituting each neighbor word at the location, and rank the perturbed sentences using a language model.
3. Select the highest-ranked perturbed sentence and repeat Step 2 for the next location $k_{i+1}$ on that sentence.

We use the OpenAI-GPT model (Radford et al., 2018) to evaluate paraphrases.

PerturbQuestion (Syntactic Paraphrasing): We use the syntactic paraphrase network introduced by Iyyer et al. (2018) to generate syntactic adversaries. Sentences from the contexts of QA samples tend to be long and syntactically complicated, and the syntactic paraphrases generated for them by the paraphrasing network usually miss half of the source sentence. Therefore, we choose to paraphrase the questions instead.
We generate 10 paraphrases for each question and rank them by cosine similarity, computed between the means of the word embeddings (Pennington et al., 2014) of the source sentence and of each generated paraphrase (Niu and Bansal, 2018; Liu et al., 2016).

Finally, we combine negative perturbations with positive perturbations to create adversaries that double down on the model's language understanding capabilities. These combined adversaries consistently lead to a larger drop in performance when evaluated against reading comprehension models trained on the original unaugmented datasets.

Semantic Difference Check: To make sure that the distractor sentences are sufficiently different from the original sentence, we perform a semantic difference check in two steps:

1. Extract content phrases from the original sentence. A content phrase is any common NER phrase or one of the four: noun, verb, adverb, adjective.
2. Require at least 2 content phrases in the original text that are not found in the distractor.

We examined 100 randomly sampled original-distractor sentence pairs and found that our semantic difference check works for $96\%$ of the cases.

# B BayesAugment

We use a Gaussian Process (GP) (Rasmussen, 2003) as the surrogate function and the Upper Confidence Bound (UCB) (Srinivas et al., 2010) as the acquisition function. A GP is a non-parametric model that is fully characterized by a mean function $\mu_0: \chi \mapsto \mathbb{R}$ and a positive-definite kernel or covariance function $k: \chi \times \chi \mapsto \mathbb{R}$. Let $x_1, x_2, \ldots, x_n$ denote any finite collection of $n$ points, where each $x_i$ represents a choice of sampling probabilities for each of the augmentation methods and $f_i = f(x_i)$ is the (unknown) function value evaluated at $x_i$. Let $y_1, y_2, \ldots, y_n$ be the corresponding noisy observations (the validation performance at the end of training). In the context of GP Regression (GPR), $f = f_1, \ldots, f_n$ are assumed to be jointly Gaussian.
Then, the noisy observations $y = y_1, \ldots, y_n$ are normally distributed around $f$ as $y|f \sim \mathcal{N}(f, \sigma^2 I)$ . The Gaussian Process upper confidence bound (GP-UCB) algorithm measures the optimistic performance upper bound of the sampling probabilities. + +# C Datasets + +SQuAD v2.0 (Rajpurkar et al., 2018) is a crowdsourced dataset consisting of 100,000 questions from SQuAD v1.1 (Rajpurkar et al., 2016) and an additional 50,000 questions that do not have answers within the given context. We split the official development set into 2 randomly sampled sets of validation and test for our experiments. + +NewsQA is also a crowd-sourced extractive RC dataset based on 10,000 news articles from CNN, containing both answerable and unanswerable questions. (Trischler et al., 2017) To accommodate very long contexts from NewsQA in models like Bert (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), we sample two instances from the set of overlapping instances for the final training data. + +TriviaQA (Joshi et al., 2017) questions were crawled from the web and have two variants. One variant includes Wikipedia articles as contexts; we use the other variant which involves web snippets and documents from Bing search engine as contexts. The development and test sets are large + +
AutoAugment Policies
SQuAD → SQuAD(AddS, None, 0.2) → (IA, None, 0.4) → (AddA, None, 0.2)
SQuAD → NewsQA(None, PA, 0.4) → (None, PA, 0.6) → (AddS, PA, 0.4)
SQuAD → TriviaQA(AddS, None, 0.9) → (AddS, PA, 0.7) → (AddKS, PQ, 0.9)
NewsQA → NewsQA(AddA, PA, 0.2) → (AddKS, None, 0.2) → (AddA, PA, 0.4)
BayesAugment Policies
SQuAD → SQuAD(AddS, 0.29), (AddA, 0.0), (AddA-PA, 0.0), (AddA-PQ, 0.0), (AddKS, 0.0), (AddKS-PA,0.0)
(AddKS-PQ, 0.0), (AddS-PA, 0.0), (AddS-PQ, 0.0), (PA, 0.61), (PQ, 0.0), (IA, 1.0)
SQuAD → NewsQA(AddS, 1.0), (AddA, 0.0), (AddA-PA, 1.0), (AddA-PQ, 0.0), (AddKS, 0.0), (AddKS-PA, 0.0)
(AddKS-PQ, 0.0), (AddS-PA, 1.0), (AddS-PQ, 0.0), (PA, 0.48), (PQ, 0.0), (IA, 0.0)
SQuAD → TriviaQA(AddS, 1.0), (AddA, 1.0), (AddA-PA, 0.21), (AddA-PQ, 0.18), (AddKS, 0.86), (AddKS-PA, 0.37)
(AddKS-PQ, 0.25), (AddS-PA, 0.12), (AddS-PQ, 0.49), (PA, 0.91), (PQ, 0.83), (IA, 0.26)
SQuAD → MLQA(de)(AddS, 0.042), (AddA-PA, 0.174), (AddA-PQ, 0.565), (AddKS, 0.173), (AddKS-PA, 0.567)
(AddA, 0.514), (AddS-PA, 0.869), (AddS-PQ, 0.720), (PA, 0.903), (PQ, 0.278), (AddKS-PQ, 0.219)
SQuAD → XQuAD(ru)(AddS, 0.147), (AddA-PA, 0.174), (AddA-PQ, 0.79), (AddKS, 0.55), (AddKS-PA, 0.97)
(AddA, 0.77), (AddS-PA, 0.02), (AddS-PQ, 0.59), (PA, 0.11), (PQ, 0.95), (AddKS-PQ, 0.725)
SQuAD → XQuAD(tr)(AddS, 0.091), (AddA-PA, 0.463), (AddA-PQ, 0.64), (AddKS, 0.32), (AddKS-PA, 0.86)
(AddA, 0.34), (AddS-PA, 0.37), (AddS-PQ, 0.43), (PA, 0.27), (PQ, 0.81), (AddKS-PQ, 0.493)
NewsQA → NewsQA(AddS, 1.0), (AddA, 1.0), (AddA-PA, 1.0), (AddA-PQ, 0.0), (AddKS, 0.0), (AddKS-PA, 1.0)
(AddKS-PQ, 0.156), (AddS-PA, 0.0), (AddS-PQ, 0.720), (PA, 0.0), (PQ, 0.0), (IA, 1.0)
+ +Table 8: Best Policies suggested by BayesAugment and AutoAugment methods for different scenarios; AddS = AddSentDiverse, AddKS = AddKSentDiverse, AddA = AddAnswerPosition, IA = InvalidateAnswer, PA = PerturbAnswer, PQ = PerturbQuestion. + +
ModelSQuADv1.1SQuADv2.0NewsQA
RoBERTa89.73 / 82.3881.17 / 77.5458.40 / 47.04
DistilRoBERTa84.57 / 75.8173.29 / 69.4754.21 / 42.76
+ +Table 9: Comparison of performance (F1 Score / Exact Match) of different models on SQuAD v1.1, SQuaD v2.0 and NewsQA datasets. RoBERTaBASE is the baseline model; DistilRoBERTaBASE is the task model used during AutoAugment policy search. + +
HyperparameterSQuAD v1.1SQuAD v2.0NewsQA
Learning Rate3e-51.5e-51.6e-5
Batch Size241624
Warmup Ratio0.060.060.08
No. of Epochs255
Weight Decay0.010.010.01
+ +Table 10: Best hyperparameters for training RoBERTaBASE on SQuAD v2.0 and NewsQA. + +with more than 60K samples in each. For faster BayesAugment and AutoAugment iterations, we randomly select 10K samples from the development set to generate rewards. + +MLQA (Lewis et al., 2020) is the multilingual extension to SQuAD v1.1 consisting of evaluation (development and test) data only. We use German (de) MLQA in our experiments. + +XQuAD is a multilingual version of SQuAD (Artetxe et al., 2020) containing only test sets. We use Russian (ru) and Turkish (tr) XQuAD which contain nearly 1100 QA samples that are further split equally and randomly into development and test sets. + +# D Training Details + +Reading Comprehension Models: We use RoBERTaBASE as the primary RC model for all our experiments. Search algorithms like AutoAugment require a downstream model that can be trained and evaluated fast, in order to reduce training time. So, we use distilRoBERTaBASE (Sanh et al., 2019) for AutoAugment training loops, which has $40\%$ lesser parameters than RoBERTaBASE. It should be noted that the distilRoBERTa model used in our experiments is trained on SQuAD without distillation. BayesAugment is trained for fewer iterations than AutoAugment and hence, allows us to use RoBERTaBASE model directly in the training loop. + +Model Hyperparameters: We trained + +
| NewsQA Adversary | SQuAD | SQuAD → NewsQA |
| --- | --- | --- |
| AddSentDiverse | 42.39 / 32.79 | 49.54 / 38.02 |
| PerturbAnswer | 39.95 / 27.60 | 45.52 / 32.49 |
| AddSentDiv+PerturbAns | 35.08 / 26.33 | 43.63 / 32.76 |
Table 11: Comparison of robustness between RoBERTaBASE finetuned on the original unaugmented SQuAD and our best SQuAD → NewsQA generalized model. Results (F1 score / Exact Match) are shown on the dev set.

RoBERTaBASE for 5 epochs on SQuAD and NewsQA respectively and selected the best-performing checkpoint as the baseline. We perform a hyperparameter search for both datasets using Bayesian optimization (Snoek et al., 2012). The RNN controller in the AutoAugment training loop consists of a single LSTM cell with a single hidden layer of dimension 100. The generated policy consists of 3 sub-policies; each sub-policy is structured as discussed in the main text. BayesAugment is trained for 100 iterations with 20 restarts. During the AutoAugment and BayesAugment training loops, RoBERTaBASE or distilRoBERTaBASE (which has already been trained on unaugmented SQuAD) is further finetuned on the adversarially augmented dataset for 2 epochs with a warmup ratio of 0.2 and learning rate decay (lr=1e-5) thereafter. After the policy search, further hyperparameter optimization is performed for the best fine-tuning results. We do not perform this last step of hyperparameter tuning on cross-lingual data to avoid the risk of overfitting the small datasets. For generalization from SQuAD v1.1 to the cross-lingual datasets, we do not consider the InvalidateAnswer adversary because NoAnswer samples do not exist for these datasets.

# E Analysis

In this section, we show the impact of the adversarial augmentation ratio and the size of the training dataset on the generalization of the RC model to out-of-domain data. Next, we show more experiments on robustness to unseen adversaries. Finally, we analyze the domain-independence of adversarial robustness by training on adversarially augmented SQuAD and testing on adversarial NewsQA samples.

Effect of Augmentation Ratio: To assess the importance of adversarial augmentation in the dataset, we experimented with different ratios, i.e., 1x, 2x and 3x, of augmented samples to the original dataset, for generalization from SQuAD to NewsQA using the augmentation policy learnt by BayesAugment. The performance of SQuAD $\rightarrow$ NewsQA models on the NewsQA validation set was 49.73, 49.84 and 49.62 for 1x, 2x and 3x augmentations respectively, showing a slight improvement for twice the number of augmentations. However, performance starts decreasing at 3x augmentations, showing that too many adversaries in the training data start hurting generalization.

| Augmentation Ratio | NewsQA |
| --- | --- |
| RoBERTa | 48.36 / 36.06 |
| + 1x augmentation | 49.73 / 38.38 |
| + 2x augmentation | 49.84 / 37.97 |
| + 3x augmentation | 49.62 / 38.01 |

Table 12: Effect of augmentation ratio for generalization from SQuAD $\rightarrow$ NewsQA. Results (F1 score / Exact Match) are shown on the NewsQA dev set.

![](images/74559f54b55ed3aa6e3f72fb7c17706a11624e80c9af128d240dc4cc40bc8ffa.jpg)
Figure 3: Performance of the SQuAD $\rightarrow$ NewsQA model on the NewsQA dev set (F1 score) with increasing size of the finetuning dataset.

Effect of Augmented Dataset Size: We experimented with $20\%$, $40\%$, $60\%$, $80\%$ and $100\%$ of the original dataset to generate the augmented dataset, using the BayesAugment policy for generalization of RoBERTaBASE trained on SQuAD to NewsQA, and observed little variance in performance with increasing data, as seen in Figure 3. The augmentation ratio in these datasets is 1:1. We hypothesize that the model saturates early during training, within the first tens of thousands of adversarially augmented samples. Exposing the model to more SQuAD samples gives little boost to performance on NewsQA thereafter.

Robustness to Unseen Adversaries: We train RoBERTaBASE on SQuAD which has been augmented with an adversarial dataset of the same size as SQuAD and contains an equal number of samples
| Adversary Attack | Trained on SQuAD | Trained on SQ+ASD/PQ/PA |
| --- | --- | --- |
| AddSentDiverse+PerturbAnswer | 50.71 | 84.37 |
| AddKSentDiverse+PerturbQuestion | 31.56 | 78.91 |
| AddAnswerPosition | 68.91 | 80.87 |
| AddKSentDiverse | 45.31 | 76.14 |
| InvalidateAnswer | 77.75 | 71.62 |
Table 13: Robustness of RoBERTaBASE trained on a subset of adversaries to unseen adversaries. Results (F1 score) are shown on the SQuAD dev set (ASD=AddSentDiverse, PQ=PerturbQuestion, PA=PerturbAnswer, SQ=SQuAD).
| Hyperparameter | Range |
| --- | --- |
| Learning Rate | [1e-5, 2e-5] |
| Batch Size | {8, 16, 24, 32} |
| Warmup Ratio | [0.01, 0.5] |
| Weight Decay | [0.01, 0.1] |
Table 14: Bayesian optimization ranges for finetuning RoBERTa with AutoAugment and BayesAugment policies (32 iterations with 8 restarts).

from AddSentDiverse, PerturbQuestion and PerturbAnswer. In Table 13, we see that the model is significantly more robust to combinatorial adversaries like AddSentDiverse+PerturbAnswer when trained on the adversaries AddSentDiverse and PerturbAnswer individually. We also see a decline in performance on InvalidateAnswer.

Domain-Independence of Robustness to Adversarial Attacks: We have shown that a reading comprehension model trained on SQuAD can be generalized to NewsQA by finetuning the model with adversarially transformed samples from the SQuAD dataset. It is expected that this model will be robust to similar attacks on SQuAD. To assess whether this robustness generalizes to NewsQA as well, we evaluate our best SQuAD $\rightarrow$ NewsQA model on adversarially transformed NewsQA samples from the development set. The SQuAD column in Table 11 shows results from evaluating RoBERTa$_{\text{BASE}}$, finetuned on original unaugmented SQuAD, on adversarially transformed NewsQA samples. Interestingly, the generalized model (rightmost column) is $5 - 8\%$ more robust to adversarial NewsQA without being trained on any NewsQA samples, showing that robustness to adversarial attacks in the source domain generalizes readily to a different domain.
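The search loop analyzed throughout this appendix (sample an augmentation policy, finetune the task model on the adversarially augmented data, score on held-out dev samples) can be outlined as follows. This is a minimal sketch: the `dev_f1` stub and the probability-per-adversary policy representation are simplifying assumptions, and plain random search stands in for the Bayesian optimizer.

```python
import random

# The six adversaries from Table 8, abbreviated as in its caption.
ADVERSARIES = ["AddS", "AddKS", "AddA", "IA", "PA", "PQ"]

def sample_policy(rng):
    # A policy here is just an application probability per adversary;
    # the real sub-policy structure is richer (see the main text).
    return {adv: rng.random() for adv in ADVERSARIES}

def dev_f1(policy):
    # Stub reward.  In the paper this would be the dev-set F1 of the RC
    # model after finetuning on data augmented under `policy`.
    return sum(policy.values()) / len(policy)

def policy_search(iterations=100, seed=0):
    rng = random.Random(seed)
    best_policy, best_reward = None, float("-inf")
    for _ in range(iterations):
        candidate = sample_policy(rng)
        reward = dev_f1(candidate)  # finetune + evaluate in the real loop
        if reward > best_reward:
            best_policy, best_reward = candidate, reward
    return best_policy, best_reward

best, reward = policy_search()
```

In the actual setup the expensive inner step (finetuning RoBERTaBASE or distilRoBERTaBASE and evaluating on 10K dev samples) replaces the stub, which is why the cheaper BayesAugment loop can afford the larger model.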
# Adversarial Grammatical Error Correction

Vipul Raheja, Dimitris Alikaniotis

Grammarly

firstname.lastname@grammarly.com

# Abstract

Recent works in Grammatical Error Correction (GEC) have leveraged the progress in Neural Machine Translation (NMT) to learn rewrites from parallel corpora of grammatically incorrect and corrected sentences, achieving state-of-the-art results. At the same time, Generative Adversarial Networks (GANs) have been successful in generating realistic texts across many different tasks by learning to directly minimize the difference between human-generated and synthetic text.
In this work, we present an adversarial learning approach to GEC, using the generator-discriminator framework. The generator is a Transformer model, trained to produce grammatically correct sentences given grammatically incorrect ones. The discriminator is a sentence-pair classification model, trained to judge a given pair of grammatically incorrect-correct sentences on the quality of grammatical correction. We pre-train both the discriminator and the generator on parallel texts and then fine-tune them further using a policy gradient method that assigns high rewards to sentences which could be true corrections of the grammatically incorrect text. Experimental results on the FCE, CoNLL-14, and BEA-19 datasets show that Adversarial-GEC can achieve competitive GEC quality compared to NMT-based baselines.

# 1 Introduction

Grammatical Error Correction (GEC) has grown into a popular NLP task that deals with building systems for automatically correcting errors in written text (Ng et al., 2013, 2014). Evolving from the approaches of building error-specific machine learning classifiers (Tetreault and Chodorow, 2008; De Felice and Pulman, 2008; Tetreault et al., 2010; Dahlmeier and Ng, 2011; Rozovskaya and Roth, 2014), it has gained popularity as a monolingual Machine Translation (MT) problem, where the system learns to "translate" a given erroneous text to its corrected form (Brockett et al., 2006; Felice et al., 2014; Susanto et al., 2014). Initially, Statistical phrase-based Machine Translation (SMT) techniques were successfully applied to the task (Yuan and Felice, 2013; Junczys-Dowmunt and Grundkiewicz, 2016; Yuan et al., 2016) as a way to handle all error types concurrently.
More recently, several Neural Machine Translation (NMT) systems have been developed with promising results (Sutskever et al., 2014; Bahdanau et al., 2015; Cho et al., 2014), and their successful application to GEC, either in combination with SMT models (Chollampatt et al., 2016; Yuan and Briscoe, 2016; Yannakoudakis et al., 2017; Grundkiewicz and Junczys-Dowmunt, 2018), or strictly as neural models, has emerged as the new state-of-the-art (Xie et al., 2016; Schmaltz et al., 2017; Sakaguchi et al., 2017; Ji et al., 2017; Ge et al., 2018; Junczys-Dowmunt et al., 2018; Chollampatt and Ng, 2018a,b; Zhao et al., 2019).

Despite the successes of NMT-based models for GEC, a major challenge still lies in the definition of the evaluation metrics. Ideally, the metric should be able to quantify the (a) lexical overlap, (b) semantic similarity, and (c) grammaticality of a generated sentence, given a grammatically incorrect input sentence. In a straightforward application of NMT-based models to the GEC task, one would minimize a surrogate loss (e.g., cross-entropy), which is an upper bound on the true loss, and hence a loose approximation of these complex criteria. Moreover, NMT-based GEC models try to maximize n-gram or edit-based metrics, such as $M^2$ (Dahlmeier and Ng, 2012), $I$-Measure (Felice and Briscoe, 2015), or GLEU (Napoles et al., 2015), pushing the NMT-based models to generate sentences with n-gram precisions as high as possible, which may not necessarily lead to high-quality generation for the GEC task. In order to avoid these issues, we take a different approach, inspired by Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), which provide a framework that can be leveraged to directly model the task based on the differences in the input-output distributions and the complex criteria mentioned above.
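As a concrete illustration of the n-gram precision that metrics like BLEU and GLEU build on (this is a simplified BLEU-style precision for one n-gram order, not the actual GLEU formula):

```python
from collections import Counter

def ngram_precision(hypothesis, reference, n=2):
    # Fraction of hypothesis n-grams that also appear in the reference
    # (clipped counts via Counter intersection, as in BLEU).
    hyp, ref = hypothesis.split(), reference.split()
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    overlap = sum((hyp_ngrams & ref_ngrams).values())
    total = sum(hyp_ngrams.values())
    return overlap / total if total else 0.0

p = ngram_precision("she goes to school", "she goes to the school")
# 2 of the 3 hypothesis bigrams appear in the reference.
```

A model can drive such a precision high while still missing the intended correction or altering the meaning of the source, which is the gap the adversarial objective is meant to close.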
Moreover, GANs have shown a remarkable ability to generate coherent and semantically meaningful text in many natural language processing tasks, such as machine translation (Wu et al., 2018; Yang et al., 2018), dialogue generation (Li et al., 2017), and abstractive summarization (Liu et al., 2018; Wang and Lee, 2018), among others.

We propose a GAN-based generator-discriminator framework for grammatical error correction. The generator is a Sequence-to-Sequence (Seq2Seq) model, which is trained to "translate" a grammatically incorrect sentence to its grammatically correct rewrite. The discriminator, a deep neural sentence-pair classification model, is trained to evaluate the probability of the generated sentence being a lexically-similar, meaning-preserving, and grammatically correct rewrite of the incorrect input sentence. Adversarial training between the two models is set up as optimizing a min-max objective, where the discriminator learns to distinguish whether a given input is sampled from the ground-truth (human-generated) or generator (artificially-generated) distribution, maximizing the difference between them. The generator, on the other hand, learns to trick the discriminator by producing high-quality correction candidates, thus minimizing the difference between its output and a ground-truth corrected sentence. Further, the discriminator is used to fine-tune the generator using a policy gradient method (Williams, 1992; Yu et al., 2017; Wu et al., 2018), rewarding high-quality generated text when conditioned on the source and thus improving the generation results. By minimizing the difference between the human- and the artificially-generated distribution, we aim at directly optimizing the task based on the criteria mentioned above.
Experimental results indicate that our model can achieve significantly better performance than strong NMT-based baselines. In summary, we make the following contributions: + +- This work is, to the best of our knowledge, the first to apply generative adversarial training to + +![](images/66cfb94882e204bc753f9b9a804dba63348064e9b2f27083f8d4e7be3fcb2588.jpg) +Figure 1: Adversarial-GEc training. Left: $D$ is trained over the real and the generated data by a pre-trained $G$ . Right: $G$ is further trained by policy gradient where the final reward is provided by $D$ and is passed back to the generator. + +the GEC task. + +- We propose a sentence-pair classification-based discriminator, that can better distinguish grammatical text from ungrammatical text by learning to directly optimize the task rather than constructing or relying on n-gram or edit-based metrics. We analyze different formulations of the discriminator, and provide insights into how its setup, pre-training and integration into the framework can be leveraged for stable training and better performance. +- We conduct extensive experiments on standard GEC datasets and evaluate the system against strong baselines, showing that the proposed model consistently achieves better results in a self-contained single model setting, without relying on any resources other than just the training data. + +# 2 Adversarial GEC + +Fig. 1 outlines our approach which consists of two components the (a) Generator $(G)$ and (b) Discriminator $(D)$ . + +# 2.1 Generator + +Following recent NMT-based state-of-the-art GEC systems, we treat a grammatically incorrect sentence as the source and its grammatically corrected counterpart as the target. Formally, given a sequence $x = [x_{1},x_{2},\dots,x_{S}]$ , we aim to generate another sequence $y = [y_{1},y_{2},\dots,y_{T}]$ which is the grammatically corrected form of $x$ . We denote a pair of incorrect-correct sentences as $(x,y)$ . 
Given a sequence $x$, the generator learns to produce another sequence $y' \approx y$.

While the generator can be any Seq2Seq model, we use two common Encoder-Decoder architectures for GEC: an attention-based RNN (Luong et al., 2015) and a Transformer (Vaswani et al., 2017).

# 2.2 Discriminator

In this framework, a critical component is the discriminator, which is responsible for providing the appropriate reward to the generator based on the quality of the generated text. Most GAN architectures typically use a single-sentence real-vs-fake classifier$^{1}$ as the discriminator (Yu et al., 2017). However, we argue that such a formulation does not accurately express the GEC task objective. A conventional GAN discriminator would provide the probability of a sentence being grammatically correct as the reward. However, it would be especially hard for such a classifier to differentiate a ground-truth correction from a generated sentence that fits the distribution of real-world text but does not make the intended corrections or changes the semantics of the source. Moreover, it would also be unable to provide a proportionate reward to a partially corrected sentence. Due to the lack of contextual knowledge about what has been corrected, such a classifier would struggle to identify low-quality or unsuitably corrected sequences. Consequently, it would end up giving them rewards comparable to sentences which are truly the corrected forms of the given incorrect source sentences.
In summary, the discriminator needs to be able to measure the degree of "grammatical correctness" of an output sentence given its corresponding input sentence, instead of only distinguishing between real and fake. Therefore, instead of training a single-sentence classifier, we train on incorrect-correct sentence pairs. We consider ground-truth data $(x,y)$ as high-quality corrections (positive examples), and data sampled from the generator $(x,y^{\prime})$ as low-quality corrections (negative examples). We experiment with two discriminator models, CNN- and RNN-based, for both the single-sentence and sentence-pair formulations, due to their simplicity, widespread use in sentence-pair modeling tasks, and ease of implementation.

# 2.3 Adversarial Training

Adversarial training between $G$ and $D$ (parameterized by $\theta$ and $\phi$, respectively) is set up as optimizing a min-max objective, formulated as the following objective function $V(G_{\theta}, D_{\phi})$:

$$
\begin{array}{l} \min_{\theta} \max_{\phi} V\left(G_{\theta}, D_{\phi}\right) \tag{1} \\ = \mathbb{E}_{(x, y) \sim P_{data}} [\log D_{\phi}(x, y)] + \\ \mathbb{E}_{x \sim P_{data},\, y^{\prime} \sim P_{G_{\theta}(\cdot | x)}} [\log(1 - D_{\phi}(x, y^{\prime}))] \end{array}
$$

where $P_{data}$ is the underlying training data distribution and $P_{G_{\theta}(\cdot |x)}$ is the distribution of the generator output.

With this objective function, the discriminator learns to predict whether a given sentence pair has been sampled from the ground-truth data $(x,y)$ or from $G_{\theta}$: $(x,y')$. $G_{\theta}$ tries to confuse $D_{\phi}$ by generating high-quality corrected samples $y' \approx y$, given a ground-truth input sentence $x$.
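The positive/negative pair construction behind $P_{data}$ and $P_{G_{\theta}}$ can be sketched as follows; `generate` is a stand-in for sampling from the pre-trained generator, and the example sentences are illustrative.

```python
def generate(x):
    # Placeholder for y' ~ G(. | x); a real generator decodes a correction.
    # Returning the input unchanged is a typical degenerate negative.
    return x

def build_discriminator_data(parallel_pairs):
    # Ground-truth corrections (x, y) become positives (label 1);
    # generator samples (x, y') become negatives (label 0).
    data = []
    for x, y in parallel_pairs:
        data.append(((x, y), 1))            # human correction: positive
        data.append(((x, generate(x)), 0))  # sampled correction: negative
    return data

pairs = [("She go to school .", "She goes to school .")]
dataset = build_discriminator_data(pairs)
```

Because both members of each example share the same source $x$, the classifier is forced to judge the correction relative to its input rather than the target sentence in isolation.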
Formally, the objective function of $D_{\phi}$ is defined as the standard binary cross-entropy (BCE) loss:

$$
\begin{array}{l} \mathcal{L}_{d} = \mathbb{E}_{(x, y) \sim P_{data}} \log D_{\phi}(x, y) \tag{2} \\ + \mathbb{E}_{x \sim P_{data},\, y^{\prime} \sim P_{G_{\theta}(\cdot | x)}} \log\left(1 - D_{\phi}(x, y^{\prime})\right) \end{array}
$$

The objective of the generator can be formulated as optimizing the following loss:

$$
\mathcal{L}_{g} = \mathbb{E}_{x \sim P_{data},\, y^{\prime} \sim P_{G_{\theta}(\cdot | x)}} \log\left(1 - D_{\phi}(x, y^{\prime})\right) \tag{3}
$$

However, since the generator performs discrete sampling to obtain $y'$, we cannot backpropagate gradients through it, making $V(G_{\theta}, D_{\phi})$ non-differentiable with respect to $\theta$. To address this issue, borrowing from Cai and Wang (2018) and Wu et al. (2018), we use single-sample REINFORCE (Williams, 1992), a Monte-Carlo policy gradient method, to optimize $G_{\theta}$. In Reinforcement Learning (RL) terms, the generator $G_{\theta}$ acts as the agent under the policy $G_{\theta}(\cdot | x)$, and the generated grammatically corrected sentence $y'$ is the action. The environment is characterized by the input sequence $x$ and the discriminator $D_{\phi}$, which provides the reward $-\log(1 - D_{\phi}(x, y'))$ based on the discriminative loss of $D_{\phi}(x, y')$. The generator improves itself by maximizing the reward returned from the environment.
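The reward above, combined with the moving-average baseline described in Section 2.4, can be sketched as follows. The exponential decay constant `beta` is an assumption, since the paper specifies only "a moving average of the historical reward values".

```python
import math

class ShapedReward:
    """Per-sentence reward -log(1 - D(x, y')) minus a moving-average baseline."""

    def __init__(self, beta=0.9):
        self.beta = beta      # assumed decay for the moving average
        self.baseline = 0.0

    def __call__(self, d_prob):
        # d_prob stands in for D_phi(x, y'), the discriminator's probability
        # that (x, y') is a genuine correction pair.
        reward = -math.log(1.0 - d_prob)
        advantage = reward - self.baseline
        self.baseline = self.beta * self.baseline + (1.0 - self.beta) * reward
        return advantage

shaped = ShapedReward()
a1 = shaped(0.5)  # first call: baseline is still 0, so advantage = -log(0.5)
a2 = shaped(0.5)  # same raw reward, smaller advantage after the baseline update
```

Subtracting the running baseline centers the reward signal, so the policy gradient pushes up only on samples the discriminator scores better than the recent average.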
The gradients $\nabla_{\phi} \mathcal{L}_d$ and $\nabla_{\theta} \mathcal{L}_g$ can thus be estimated by sampling a correction from the generator, $y' \sim G(\cdot | x)$, as follows:

$$
\nabla_{\phi} \mathcal{L}_{d} = \nabla_{\phi} \log D_{\phi}(x, y) + \nabla_{\phi} \log\left(1 - D_{\phi}(x, y^{\prime})\right) \tag{4}
$$

$$
\nabla_{\theta} \mathcal{L}_{g} = \nabla_{\theta} \log G_{\theta}(y^{\prime} | x) \log\left(1 - D_{\phi}(x, y^{\prime})\right) \tag{5}
$$

where $\phi$ and $\theta$ can be updated as per the REINFORCE algorithm.

# 2.4 Training Strategies

While REINFORCE provides a framework in which the reward function does not have to be differentiable, the discrete reward space resulting from the single sampled $y'$ used for the Monte Carlo estimate leads to high variance and unstable training, a widely acknowledged limitation of RL methods. In practice, we find that adversarially training the generator solely with Eq. 3 is unstable, even when it is pre-trained. This is due to the sparsity of the rewards provided to the generator, which arrive only once it has fully generated a sentence, and is compounded by the fact that we do not generate multiple samples, for computational efficiency. Hence, generator training becomes brittle and finds it extremely difficult to escape bad local minima or mode collapse. To alleviate this issue, we leverage the following measures to train the generator: a baseline reward, and teacher forcing/interleaved training.

Baseline Reward: A popular technique to alleviate the variance issue is the subtraction of baseline values from the original rewards. The baseline reward can be computed in various ways. Yang et al. (2018) use a constant value, Rennie et al. (2017) use the reward of the sequence obtained by the current model with a greedy sampling strategy, Ranzato et al. (2016), Bahdanau et al.
(2017), and Liu et al. (2017) use an MLP to estimate the baseline reward. However, these methods rely on approximating the terminal reward using intermediate states, or on incorporating word-level rewards via rollout strategies for better credit assignment. Moreover, such approaches have been found to be extremely time-consuming, given the large decoding space. Following prior work on RL for dialog systems, which also have discrete action-reward spaces (Sankar and Ravi, 2019; Su et al., 2015), we use a moving average of the historical reward values as the baseline, which stabilizes the training process and is computationally tractable.

Interleaved Training: Following Guo et al. (2018) and Wu et al. (2018), we interleave MLE and Policy Gradient training. This combination of an adversarial objective with MLE is an important factor in successfully training $G$. With some probability $\lambda$ (more details in Section 5.3), randomly chosen mini-batches are trained with the Policy Gradient (discriminator reward), while the other mini-batches are trained using MLE. This alternation improves training stability, as MLE acts as a regularizer that ensures a smoother model update, alleviating the negative effects of the high gradient-estimation variance of the one-step Monte Carlo sample in REINFORCE. After this generator update, the generator is used to produce more realistic corrections, which are then used to train the discriminator. This approach is equivalent to the teacher forcing step in Li et al. (2017) and Yang et al. (2018), where, after every policy gradient update, the generator is further updated with teacher forcing by having the discriminator automatically assign a reward of 1 to the ground-truth data.

# 3 Experiments

# 3.1 Data

In line with previous works, we use the public NUCLE corpus (used in the CoNLL 2014 GEC Shared Task (Ng et al., 2014; Dahlmeier et al., 2013)), the FCE corpus (Yannakoudakis et al., 2011), the Lang-8 Corpus of Learner English (Tajiri et al., 2012), and the Write & Improve and LOCNESS (W&I+L) dataset from the BEA 2019 Shared Task (Bryant et al., 2019; Granger, 1998) as our parallel training datasets. We use CoNLL-2013 (Ng et al., 2013), FCE-dev, and BEA19-dev as our development sets; for our test splits, we use FCE-test, the CoNLL-2014 test set (Ng et al., 2014), and the BEA19 test set (evaluated by ERRANT (Bryant et al., 2017)). We report $F_{0.5}$ scores evaluated by the $M^2$ scorer (Dahlmeier and Ng, 2012) for these test datasets. Dataset splits and sizes are shown in Table 1.

| Split | Dataset | Sentences | Tokens |
| --- | --- | --- | --- |
| Train | FCE-train | 27k | 454k |
| | BEA19-train | 34k | 628k |
| | CoNLL14-train | 57k | 1.1M |
| | Lang-8 | 1M | 13M |
| Dev | CoNLL13 | 1.3k | 28k |
| | FCE-dev | 1.9k | 28k |
| | BEA19-dev | 4.3k | 87k |
| Test | CoNLL14-test | 1.3k | 30k |
| | FCE-test | 2.4k | 36k |
| | BEA19-test | 4.4k | 85k |

Table 1: Dataset splits and sizes.

# 3.2 Baselines

We use the two generators introduced in Section 2.1 as baseline generators. Building on these baselines, we develop GAN frameworks in combination with the following discriminator setups: a) SS: CNN- and RNN-based Single-Sentence classifiers,$^{3}$ and b) SP: CNN- and RNN-based Sentence-Pair classifiers (Section 2.2).
We also experiment with using the GLEU score directly as the reward for an input-output sentence pair. This setting overlaps with the work of Sakaguchi et al. (2017).$^{4}$

# 3.3 Implementation Details

# 3.3.1 Data

Following Junczys-Dowmunt et al. (2018), we use byte-pair encoding (BPE) sub-word units (Sennrich et al., 2016), which also addresses the issue of out-of-vocabulary words. The vocabulary is based on the 35k most frequent BPE sub-word units, and both the source and target side use the same vocabulary.

# 3.3.2 Generators

For our training setup, we refer to Junczys-Dowmunt et al. (2018), who laid out extensive guidelines for adapting NMT-based models to the GEC task. For the RNN-based generator, following Luong et al. (2015), we use 4 layers of bi-directional GRUs in both the encoder and decoder. We set the word embedding size to 512 and the hidden size of both encoder and decoder to 1024. For the Transformer, following the BASE model in Vaswani et al. (2017), the encoder and decoder each have a stack of six layers of self-attention/feed-forward sub-layers. The word embedding size is set to 512 and the number of attention heads to 8. The size of the inner layer in the position-wise feed-forward network is set to 2048. In order to discourage copying (Gal and Ghahramani, 2016; Junczys-Dowmunt et al., 2018; Grundkiewicz et al., 2019), we use strong dropout for regularization: layer dropout of 0.3 for both the RNN and Transformer models, attention dropout of 0.1, and source and target word dropout of 0.2 and 0.1, respectively. These hyperparameters were chosen as prescribed in the referred works, but also worked well in practice when tuned on the development sets.

Algorithm 1 Adversarial-GEC
1: Initialize $G_{\theta}$, $D_{\phi}$ with random weights $\theta, \phi$.
2: Pre-train $G_{\theta}$ on ground-truth dataset $\mathcal{D} = (X,Y)$ with MLE loss
3: Generate negative samples $\mathcal{D}' = (X,Y')$ using $G_{\theta}$ for training $D_{\phi}$
4: Pre-train $D_{\phi}$ on $\mathcal{D}$ and $\mathcal{D}'$ until initial accuracy $\varepsilon$ with BCE loss
5: while not converged do
6:   Sample $(X,\tilde{Y}) \sim P_{data}$
7:   Sample $Y' \sim G_{\theta}(\cdot|X)$
8:   Sample $\rho \sim [0,1]$ to determine interleaving
9:   if $\rho \leq \lambda$ then
10:    Compute Rewards $R$ for $(X,Y')$ using $D_{\phi}$
11:    Update $G_{\theta}$ via Policy Gradient using $R$
12:  else
13:    Update $G_{\theta}$ via teacher-forcing using MLE
14:  Train $D_{\phi}$ using Eqn. 2, on $(X,Y)$ and $(X,Y')$
15: *Parameter update equations for $G_{\theta}$ and $D_{\phi}$ are as follows:
16: $\theta \gets \theta - \alpha_g\nabla_{\theta_G}$
17: $\phi \gets \phi - \alpha_d\nabla_{\phi_D}$

# 3.3.3 Sentence-Pair Discriminators

The RNN-based discriminator is set up as a siamese network sharing the same embeddings and weights, with each branch processing one of the two sentences. For each sentence in the pair, it consists of a word embedding layer of size 300, followed by two layers of bi-directional GRUs with a hidden size of 128, with residual connections between the layers at each time step. The bi-directional outputs of the last recurrent layer of both sentences in the pair are concatenated and used as input to a dense feed-forward layer with an output of size 128, followed by a sigmoid. We use dropout on the recurrent units and between layers (both with probability 0.2). For the CNN-based discriminator, we use the convolutional matching model used by Wu et al. (2018), since Hu et al. (2014) found it to perform better than the siamese architecture.
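The interleaving decision in lines 8-13 of Algorithm 1 reduces to a coin flip per mini-batch. In this sketch, `lam` = 0.5 is illustrative only, since the actual value of $\lambda$ is discussed in Section 5.3.

```python
import random

def choose_update(rng, lam=0.5):
    # With probability `lam` the mini-batch is trained with the policy
    # gradient (discriminator reward); otherwise with MLE / teacher forcing.
    rho = rng.random()
    return "policy_gradient" if rho <= lam else "mle"

rng = random.Random(0)
modes = [choose_update(rng) for _ in range(1000)]
pg_fraction = modes.count("policy_gradient") / len(modes)
```

Over many mini-batches the two update types are mixed in roughly the ratio $\lambda : (1 - \lambda)$, which is what lets the MLE updates regularize the noisier policy-gradient ones.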
+ +# 3.3.4 Training + +A major challenge with GANs is that the joint training between the generator and the discriminator needs to be carefully coordinated, in order to stabilize the training (Yu et al., 2017; Li et al., 2017; Yang et al., 2018; Wu et al., 2018; Fedus et al., 2018; Wang and Lee, 2018). Therefore, we first pre-train the generator model $G_{\theta}$ using maximum likelihood estimation (MLE) on the ground-truth training dataset until convergence. This stage is + +
| System | FCE P | FCE R | FCE $F_{0.5}$ | CoNLL14 P | CoNLL14 R | CoNLL14 $F_{0.5}$ | BEA19 P | BEA19 R | BEA19 $F_{0.5}$ |
|---|---|---|---|---|---|---|---|---|---|
| **Baselines** | | | | | | | | | |
| RNN | 58.50 | 20.85 | 42.97 | 60.37 | 18.74 | 41.80 | 49.21 | 34.44 | 45.32 |
| Transformer | 60.87 | 25.03 | 47.30 | 63.98 | 21.52 | 45.88 | 50.38 | 35.43 | 46.45 |
| **Adversarial-GEC (Our System)** | | | | | | | | | |
| RNN + CNN | 64.21 | 22.46 | 46.81 | 59.31 | 21.01 | 43.46 | 54.21 | 34.37 | 48.6 |
| Transformer + CNN | 62.53 | 27.82 | 50.04 | 64.68 | 22.57 | 47.10 | 53.78 | 36.52 | 49.13 |
| **Recent GEC Systems** | | | | | | | | | |
| Ji et al. (2017)† | - | - | - | - | - | 41.53 | - | - | - |
| Grundkiewicz and Junczys-Dowmunt (2018)†† | - | - | - | 66.61 | 17.58 | 42.76 | - | - | - |
| Chollampatt and Ng (2018a)‡,† | - | - | - | 59.68 | 23.15 | 45.36 | - | - | - |
| Zhao et al. (2019)¶ | - | - | - | 55.96 | 30.73 | 48.07 | - | - | - |
| Kaneko et al. (2020) | 61.7 | 46.4 | 57.9 | 59.2 | 31.2 | 50.2 | 51.5 | 43.2 | 49.6 |
Table 2: Results of Adversarial-GEC against single-model NMT baselines and state-of-the-art GEC systems. † Trained on non-public CLC data, †† Trained on NUCLE and Lang-8, ‡ MLConv (single model), ¶ Trained on One-Billion Word Benchmark.

essential to enable the joint training to converge later, since the action space during generation is immense and applying Policy Gradient training from scratch would lead to slow and unstable training. The pre-trained model is then used to decode the training data $x$ using beam search (size 4) and generate output sentences $y'$, essentially building the negative examples $(x, y')$ in the discriminator's training data. The discriminator is initially pre-trained on a combination of the ground-truth parallel data $(x, y)$ and the machine-generated data $(x, y')$, where $y'$ is sampled from the pre-trained generator model. The discriminator is trained until its classification accuracy reaches $\varepsilon$ (analyzed further in Section 5.2). Once the generator and the discriminator have been pre-trained, they are adversarially co-trained: the generator is trained with a combination of MLE and Policy Gradient (and teacher forcing) until the performance of $G_{\theta}$ stops improving on the development set.

# 4 Results

In contrast to related work on neural GEC, we do not use many of the heuristics that most recent systems leverage to enhance their model performance before and after training. These heuristics include using spellcheckers to correct spelling errors in the data, pre-trained language models trained on large quantities of external data, synthetic data generation, and re-ranking systems to sort the outputs of the generator model, among others.
We chose to keep our framework simple compared to most contemporary works in that we do not leverage anything beyond what the raw training data and the baseline architectures have to offer, which makes it simple and self-contained. This decision was made in the interest of reduced system complexity, shorter training time, and cleaner evaluation. The goal of this work is not to build a state-of-the-art GEC system but to demonstrate the value of adversarial training. Hence, we report results in a single-model setting, without the use of any external data or resources beyond the training data.

The results of Adversarial-GEC compared to the baseline models are presented in Table 2. These results are based on the best-performing (on the development set) parameters $\varepsilon = 0.7$, $\lambda = 0.4$, using the CNN sentence-pair discriminator. The results demonstrate a substantial improvement in $F_{0.5}$ for both adversarially trained models, across all evaluation datasets. Overall, the RNN model achieves greater gains on precision than the Transformer, which achieves greater gains on recall. We carry out statistical significance tests with bootstrap resampling and, correcting for multiple comparisons, obtain significant gains over the baselines $(p < 0.01)$.

As mentioned in Sections 2.2 and 3.2, we experiment with three discriminator formulations (SS, SP, GLEU) in the Adversarial-GEC setting to provide the rewards that guide the generators. Table 3 reports the results of using the two discriminator models (CNN, RNN) in each formulation of
| Discriminator | Generator | FCE | CoNLL14 | BEA19 |
|---|---|---|---|---|
| **SS: Single-Sentence Discriminator** | | | | |
| CNN | RNN | 41.68 | 40.23 | 45.53 |
| CNN | Transformer | 43.45 | 41.52 | 46.31 |
| RNN | RNN | 41.21 | 39.25 | 45.58 |
| RNN | Transformer | 41.36 | 39.84 | 46.86 |
| **SP: Sentence-Pair Discriminator** | | | | |
| CNN | RNN | 46.81 | 43.46 | 48.6 |
| CNN | Transformer | 50.04 | 47.10 | 49.13 |
| RNN | RNN | 46.45 | 43.17 | 48.11 |
| RNN | Transformer | 49.88 | 46.95 | 49.02 |
| GLEU | RNN | 43.35 | 42.1 | 46.68 |
| GLEU | Transformer | 45.65 | 45.9 | 47.84 |
Table 3: Impact of training different discriminator task formulations and models on $F_{0.5}$ (test splits).

the discriminative task; neither formulation shows a significant difference between the two discriminator models.

# 5 Discussion

In this section, we describe experimental results on adversarial training strategies, based on the validation data splits. There are three parts to making the training work: (a) formulating the discriminator task to compute the reward, (b) reducing the variance in rewards for better gradient estimation, and (c) combining the MLE and adversarial objectives for more stable training.

# 5.1 Discriminator Formulation

We observe in Table 3 that the single-sentence discriminator (SS) performs the worst of all discriminator formulations. Furthermore, SS performs even worse than the baseline generators, suggesting that it acts as a barrier to their ability to generalize.

We attribute this performance limitation to two factors. First, since the model does not consider the original sentence, it cannot learn which parts of the sentence make it ungrammatical, rewarding marginally correct and highly incorrect sentences similarly. We investigate this idea by feeding the discriminator incorrect sentences sampled from $P_{data}$ and observe that they receive nearly the same reward from SS despite their varying degrees of incorrectness. This impedes generator improvement, as any inaccuracies are penalized disproportionately. Second, producing grammatically correct sequences is not enough to solve the task: a generated sequence can be grammatically correct yet semantically or lexically different from the source. A discriminator that lacks the contextual information provided by the original sentence can assign such sequences a high reward, propagating such false starts. Consequently, even a generator that produces only one grammatical sentence would receive a high reward from the discriminator.
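All scores in Tables 2 and 3 are $F_{0.5}$, the weighted F-measure that emphasizes precision over recall. A minimal helper (not from the paper) shows how the reported $F_{0.5}$ values follow from the P/R columns:

```python
def f_beta(precision, recall, beta=0.5):
    """Weighted F-measure: beta < 1 emphasizes precision over recall."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

# Transformer + CNN on CoNLL14 (Table 2): P = 64.68, R = 22.57
print(round(f_beta(64.68, 22.57), 2))  # ~47.10, as reported
```

The same formula reproduces the other rows, e.g. Zhao et al. (2019) with P = 55.96 and R = 30.73 yields 48.07 on CoNLL14.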
On the other hand, GLEU achieves better performance than SS but weaker performance than SP. This corroborates the above argument: GLEU, essentially a special case of the SP formulation, provides a higher-quality reward, since it accounts for fluency and grammaticality in its evaluation against references. SP, in turn, goes beyond the GLEU score's low-level n-gram matching criteria, learning latent characteristics of the GEC task and providing a more appropriate reward to the generator. Acting in this way yields a much smoother objective than GLEU, since the latter is quite sensitive to slight differences at the word or phrase level. Moreover, the generator and discriminator co-evolve: the dynamics of the discriminator make the generator grow in an adaptive way rather than being controlled by a fixed evaluation metric such as GLEU, achieving better distributional alignment, which is further verified by its superior performance.

# 5.2 Balancing Discriminator Pre-Training

Since GAN training is a min-max loss optimization with alternating updates to the generator and the discriminator, it is hard to reach the global optimum, which is a saddle point. To reach the saddle point, balancing the co-training of the generator and the discriminator is essential, but the discriminator usually converges faster than the generator, making that balance hard to achieve. Failure to achieve it often leads to problems like mode collapse or an inability to learn altogether. While the generator is pre-trained to reach the best development-set performance, we control the discriminator pre-training to balance the adversarial training. Hence, we evaluate the impact of the pre-trained discriminator's accuracy $\varepsilon$ as a tunable hyperparameter. We pre-train seven RNN discriminators to reach accuracies in the range [0.6, 0.9].
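Treating $\varepsilon$ as an early-stopping threshold on the discriminator's held-out accuracy can be sketched as follows; `pretrain_until` and the accuracy curve are hypothetical illustrations, not the paper's code:

```python
def pretrain_until(accuracy_per_epoch, epsilon):
    """accuracy_per_epoch: iterable of held-out accuracies, one per epoch.
    Stop pre-training as soon as accuracy first reaches the target epsilon;
    returns the number of epochs trained, or None if never reached."""
    for epoch, acc in enumerate(accuracy_per_epoch, start=1):
        if acc >= epsilon:
            return epoch
    return None

# e.g. a discriminator whose held-out accuracy climbs each epoch:
curve = [0.55, 0.62, 0.68, 0.73, 0.79, 0.85]
print(pretrain_until(curve, epsilon=0.7))  # stops at epoch 4 (acc 0.73)
```

Varying $\varepsilon$ simply moves this stopping point earlier or later along the discriminator's learning curve.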
With these discriminators, we train corresponding Adversarial-GEC models (using a Transformer generator, $\lambda = 0.4$) and evaluate their performance on the development set at regular intervals.

![](images/2ce8ec721e2fc7816702c416484616f2943dd7882ccd7fe65fff71d934476f43.jpg)
Figure 2: $F_{0.5}$ scores on the dev set using a pre-trained Transformer generator and CNN discriminators with varying initial accuracy $\varepsilon$.

Fig. 2 shows that the initial accuracy of the discriminator significantly impacts the final performance and needs to be set carefully. If it is either too high (0.85 and 0.9) or too low (0.6 and 0.65), the model performs poorly. This points to the need for a balanced relationship between the generator and the discriminator. If the discriminator is too strong, the generator is heavily penalized for its erroneous predictions, and performance progressively worsens. If, on the other hand, the discriminator is too weak, it is unable to give appropriate guidance to the generator. Empirically, we pre-train the discriminator until its accuracy reaches the 0.7-0.75 range.

# 5.3 Combining MLE and Adversarial Objectives

As noted in Section 2.4, a key factor in successfully training $G_{\theta}$ is the combination of the adversarial and MLE objectives, where the hyperparameter $\lambda$ controls the trade-off between the two. That is, for any mini-batch, $G_{\theta}$ is optimized by the adversarial objective with probability $\lambda$ and by the MLE objective otherwise, which improves the stability of model training. We experiment with values of $\lambda$ in the range [0.2, 0.8]. The results in Fig. 3 show that, as expected, combining the MLE objective with the adversarial objective helps stabilize training and improve model performance.
This confirms prior findings that MLE acts as a regularizer guaranteeing smooth model updates, alleviating the negative effects of the high gradient-estimation variance of the one-step Monte-Carlo sample in REINFORCE. However, increasing $\lambda$ further does not bring additional gains. The best trade-off between the MLE and adversarial objectives in our experiments is $\lambda = 0.4$, which is the value we use throughout.

![](images/0b4f5d8ea1d4e78817a419eca9fdea22c0d6552bdfa5bd373a490c40d511c707.jpg)
Figure 3: Adversarial-GEC performance on the dev set (Transformer + CNN), varying the parameter $\lambda$ that alternates between MLE and Policy Gradient training.

# 5.4 Experiments with Language Models

In the SS setting, we also experimented with a locally normalized language model as a discriminator. The intuition here was that a language model with token-level, locally normalized probabilities could offer a more direct training signal to the generator. If a generated sentence does not match the distribution of the ground-truth data, it will have high perplexity when evaluated by a language model trained on ground-truth data. Not only can the language model provide an overall evaluation score for the whole sentence, but it can also assign a probability to each token, providing more information about which word is to blame when the overall perplexity is very high. However, in spite of all the training strategies described in Section 2.4, training with a language-model discriminator was highly unstable, due to the use of a single sample to approximate the expected gradient, leading to high variance in the gradient estimates. In future work, we aim to explore this idea using better generator models and better, larger-scale language models such as BERT (Devlin et al., 2018) and GPT-3 (Brown et al., 2020).

# 6 Related Work

While the choice of a sentence-pair discriminator is close to Yang et al. (2018) and Wu et al. (2018), our work differs from Yang et al.
(2018) in that their learning objective is a combination of the discriminator reward $(D)$ and a smoothed sentence-level BLEU (Papineni et al., 2002) as a static reward $(Q)$. While their use of a sentence-pair discriminator is related to our work, we do not combine rewards from $D$ and $Q$. Incorporating $Q$ into the objective stems from the motivation to optimize directly for the evaluation metric; we choose not to force an evaluation-metric-based reward into the objective, since most GEC metrics are reference-based and have been shown to be limiting for the task (Choshen and Abend, 2018; Chollampatt and Ng, 2018c). Similarly, among existing works for GEC, ours is closest to Sakaguchi et al. (2017), but they directly maximize GLEU in training their GEC system, using a REINFORCE-based approach similar to ours. We instead let the model learn the latent nuances of the objective directly from the data and provide the appropriate reward to the generator, preserving the learning objective of Yu et al. (2017), albeit with a different discriminator framework. Architecturally, our work is closest to Wu et al. (2018), who built an RNNSearch-based generator (Bahdanau et al., 2015) and a CNN-based sentence-pair discriminator for NMT.

# 7 Conclusion

We propose a task-appropriate training objective for GEC, using an adversarial training framework consisting of a generator and a discriminator, based on the Adversarial-NMT framework of Wu et al. (2018). The generator is modeled as a Seq2Seq model, and the discriminator as a deep sentence-pair matching model that provides rewards for the generator's input-output pairs. The framework supervises the generator to reflect the mapping within (source, target) sentence pairs, and uses an efficient policy gradient algorithm to tackle the optimization difficulty brought about by the discrete nature of text generation. Experiments on standard GEC test datasets demonstrate the effectiveness of our framework for the task.
Additionally, we provide insights into how the discriminator's setup, pre-training, and integration into the framework can be optimized for stable training and better performance. We show that the proposed framework consistently achieves better results in a self-contained, single-model setting, without relying on any external resources. In the future, we plan to improve the task-specific framework and training techniques based on recent state-of-the-art methods (Grundkiewicz et al., 2019; Choe et al., 2019), and to address issues with sparse rewards by exploring better credit-assignment techniques.

# Acknowledgments

We would like to thank our friends and colleagues: Vivek Kulkarni, Artem Chernodub, Kostiantyn Omelianchuk, Oleksandr Skurzhanskyi, Oleksiy Syvokon, and Chad Mills, for their insightful feedback, and the anonymous reviewers for their helpful comments.

# References

Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron C. Courville, and Yoshua Bengio. 2017. An actor-critic algorithm for sequence prediction. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Chris Brockett, William B. Dolan, and Michael Gamon. 2006. Correcting ESL errors using phrasal SMT techniques. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 249-256, Sydney, Australia. Association for Computational Linguistics.
Tom B.
Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. +Christopher Bryant, Mariano Felice, Øistein E. Andersen, and Ted Briscoe. 2019. The BEA-2019 shared task on grammatical error correction. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 52–75, Florence, Italy. Association for Computational Linguistics. +Christopher Bryant, Mariano Felice, and Ted Briscoe. 2017. Automatic annotation and evaluation of error types for grammatical error correction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 793-805, Vancouver, Canada. Association for Computational Linguistics. +Liwei Cai and William Yang Wang. 2018. KBGAN: Adversarial learning for knowledge graph embeddings. In Proceedings of the 2018 Conference of the + +North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1470-1480, New Orleans, Louisiana. Association for Computational Linguistics. +Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar. Association for Computational Linguistics. +Yo Joong Choe, Jiyeon Ham, Kyubyong Park, and Yeoil Yoon. 2019. 
A neural grammatical error correction system built on better pre-training and sequential transfer learning. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 213-227, Florence, Italy. Association for Computational Linguistics. +Shamil Chollampatt and Hwee Tou Ng. 2018a. A multilayer convolutional encoder-decoder neural network for grammatical error correction. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence. +Shamil Chollampatt and Hwee Tou Ng. 2018b. Neural quality estimation of grammatical error correction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2528-2539, Brussels, Belgium. Association for Computational Linguistics. +Shamil Chollampatt and Hwee Tou Ng. 2018c. A reassessment of reference-based grammatical error correction metrics. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2730-2741, Santa Fe, New Mexico, USA. Association for Computational Linguistics. +Shamil Chollampatt, Kaveh Taghipour, and Hwee Tou Ng. 2016. Neural network translation models for grammatical error correction. In Proceedings of the 25th International Joint Conference on Artificial Intelligence, New York, USA. +Leshem Choshen and Omri Abend. 2018. Inherent biases in reference-based evaluation for grammatical error correction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 632-642, Melbourne, Australia. Association for Computational Linguistics. +Daniel Dahlmeier and Hwee Tou Ng. 2011. Grammatical error correction with alternating structure optimization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 915-923, Portland, Oregon, USA. Association for Computational Linguistics. + +Daniel Dahlmeier and Hwee Tou Ng. 2012. 
Better evaluation for grammatical error correction. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 568-572, Montreal, Canada. Association for Computational Linguistics.
Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner English: The NUS corpus of learner English. In Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications, pages 22-31, Atlanta, Georgia. Association for Computational Linguistics.
Rachele De Felice and Stephen G. Pulman. 2008. A classifier-based approach to preposition and determiner error correction in L2 English. In Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1, COLING '08, pages 169-176, Stroudsburg, PA, USA. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
William Fedus, Ian Goodfellow, and Andrew M. Dai. 2018. MaskGAN: Better text generation via filling in the _____. In International Conference on Learning Representations.
Mariano Felice and Ted Briscoe. 2015. Towards a standard evaluation method for grammatical error detection and correction. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 578-587, Denver, Colorado. Association for Computational Linguistics.
Mariano Felice, Zheng Yuan, Øistein E. Andersen, Helen Yannakoudakis, and Ekaterina Kochmar. 2014. Grammatical error correction using hybrid systems and type filtering. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 15-24, Baltimore, Maryland. Association for Computational Linguistics.
+Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16, page 1027-1035, Red Hook, NY, USA. Curran Associates Inc. +Tao Ge, Furu Wei, and Ming Zhou. 2018. Fluency boost learning and inference for neural grammatical error correction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1055-1065, Melbourne, Australia. Association for Computational Linguistics. + +Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14, pages 2672-2680, Cambridge, MA, USA. MIT Press. +Sylviane Granger. 1998. The computerized learner corpus: a versatile new source of data for sla research. +Roman Grundkiewicz and Marcin Junczys-Dowmunt. 2018. Near human-level performance in grammatical error correction with hybrid machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 284-290, New Orleans, Louisiana. Association for Computational Linguistics. +Roman Grundkiewicz, Marcin Junczys-Dowmunt, and Kenneth Heafield. 2019. Neural grammatical error correction systems with unsupervised pre-training on synthetic data. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 252-263, Florence, Italy. Association for Computational Linguistics. +Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, and Jun Wang. 2018. Long text generation via adversarial training with leaked information. In AAAI Conference on Artificial Intelligence. 
+Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14, page 2042-2050, Cambridge, MA, USA. MIT Press. +Jianshu Ji, Qinlong Wang, Kristina Toutanova, Yongen Gong, Steven Truong, and Jianfeng Gao. 2017. A nested attention neural hybrid model for grammatical error correction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 753-762, Vancouver, Canada. Association for Computational Linguistics. +Marcin Junczys-Dowmunt and Roman Grundkiewicz. 2016. Phrase-based machine translation is state-of-the-art for automatic grammatical error correction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1546-1556, Austin, Texas. Association for Computational Linguistics. +Marcin Junczys-Dowmunt, Roman Grundkiewicz, Shubha Guha, and Kenneth Heafield. 2018. Approaching neural grammatical error correction as a low-resource machine translation task. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational + +Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 595-606, New Orleans, Louisiana. Association for Computational Linguistics. +Masahiro Kaneko, Masato Mita, Shun Kiyono, Jun Suzuki, and Kentaro Inui. 2020. Encoder-decoder models can benefit from pre-trained masked language models in grammatical error correction. +Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizumoto, and Kentaro Inui. 2019. An empirical study of incorporating pseudo data into grammatical error correction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1236-1242, Hong Kong, China. 
Association for Computational Linguistics.
Jiwei Li, Will Monroe, Tianlin Shi, Sébastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2157-2169, Copenhagen, Denmark. Association for Computational Linguistics.
Linqing Liu, Yao Lu, Min Yang, Qiang Qu, Jia Zhu, and Hongyan Li. 2018. Generative adversarial network for abstractive text summarization. In AAAI Conference on Artificial Intelligence.
Siqi Liu, Zhenhai Zhu, Ning Ye, Sergio Guadarrama, and Kevin Murphy. 2017. Improved image captioning via policy gradient optimization of SPIDEr. 2017 IEEE International Conference on Computer Vision (ICCV), pages 873-881.
Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics.
Courtney Napoles, Keisuke Sakaguchi, Matt Post, and Joel Tetreault. 2015. Ground truth for grammatical error correction metrics. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 588-593.
Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The CoNLL-2014 shared task on grammatical error correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 1-14, Baltimore, Maryland. Association for Computational Linguistics.

Hwee Tou Ng, Siew Mei Wu, Yuanbin Wu, Christian Hadiwinoto, and Joel Tetreault. 2013. The CoNLL-2013 shared task on grammatical error correction.
In Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task, pages 1-12, Sofia, Bulgaria. Association for Computational Linguistics.
Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem Chernodub, and Oleksandr Skurzhanskyi. 2020. GECToR - grammatical error correction: Tag, not rewrite. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 163-170, Seattle, WA, USA → Online. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.
Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Alla Rozovskaya and Dan Roth. 2014. Building a state-of-the-art grammatical error correction system. Transactions of the Association for Computational Linguistics, 2:419-434.
Keisuke Sakaguchi, Matt Post, and Benjamin Van Durme. 2017. Grammatical error correction with neural reinforcement learning. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 366-372, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Chinnadhurai Sankar and Sujith Ravi. 2019. Deep reinforcement learning for modeling chit-chat dialog with discrete attributes.
In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, pages 1-10, Stockholm, Sweden. Association for Computational Linguistics. +Allen Schmaltz, Yoon Kim, Alexander Rush, and Stuart Shieber. 2017. Adapting sequence models for sentence correction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2807-2813, Copenhagen, Denmark. Association for Computational Linguistics. + +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics. +Pei-Hao Su, David Vandyke, Milica Gašić, Nikola Mrkšić, Tsung-Hsien Wen, and Steve Young. 2015. Reward shaping with recurrent neural networks for speeding up on-line policy learning in spoken dialogue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 417-421, Prague, Czech Republic. Association for Computational Linguistics. +Raymond Hendy Susanto, Peter Phandi, and Hwee Tou Ng. 2014. System combination for grammatical error correction. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 951-962, Doha, Qatar. Association for Computational Linguistics. +Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104-3112. Curran Associates, Inc. +Toshikazu Tajiri, Mamoru Komachi, and Yuji Matsumoto. 2012. Tense and aspect error correction for ESL learners using global context. 
In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 198-202, Jeju Island, Korea. Association for Computational Linguistics.
Joel Tetreault, Jennifer Foster, and Martin Chodorow. 2010. Using parse features for preposition selection and error detection. In Proceedings of the ACL 2010 Conference Short Papers, pages 353-358, Uppsala, Sweden. Association for Computational Linguistics.
Joel R. Tetreault and Martin Chodorow. 2008. The ups and downs of preposition error detection in ESL writing. In Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1, COLING '08, pages 865-872, Stroudsburg, PA, USA. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 6000-6010.
Yaushian Wang and Hung-yi Lee. 2018. Learning to encode text as human-readable summaries using generative adversarial networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4187-4195, Brussels, Belgium. Association for Computational Linguistics.

Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. Learn., 8(3-4):229-256.
Lijun Wu, Yingce Xia, Fei Tian, Li Zhao, Tao Qin, Jianhuang Lai, and Tie-Yan Liu. 2018. Adversarial neural machine translation. In Proceedings of The 10th Asian Conference on Machine Learning, volume 95 of Proceedings of Machine Learning Research, pages 534-549. PMLR.
Ziang Xie, Anand Avati, Naveen Arivazhagan, Daniel Jurafsky, and Andrew Y. Ng. 2016. Neural language correction with character-based attention. CoRR, abs/1603.09727.
Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2018. Improving neural machine translation with conditional sequence generative adversarial nets.
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1346-1355, New Orleans, Louisiana. Association for Computational Linguistics. +Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In ACL, pages 180-189. The Association for Computer Linguistics. +Helen Yannakoudakis, Marek Rei, Øistein E. Andersen, and Zheng Yuan. 2017. Neural sequence-labelling models for grammatical error correction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2795-2806, Copenhagen, Denmark. Association for Computational Linguistics. +Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17, pages 2852-2858. AAAI Press. +Zheng Yuan and Ted Briscoe. 2016. Grammatical error correction using neural machine translation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 380-386, San Diego, California. Association for Computational Linguistics. +Zheng Yuan, Ted Briscoe, and Mariano Felice. 2016. Candidate re-ranking for SMT-based grammatical error correction. In Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications, pages 256-266, San Diego, CA. Association for Computational Linguistics. +Zheng Yuan and Mariano Felice. 2013. Constrained grammatical error correction using statistical machine translation. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task, pages 52-61, Sofia, Bulgaria. Association for Computational Linguistics. + +Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, and Jingming Liu. 2019. 
Improving grammatical error correction via pre-training a copy-augmented architecture with unlabeled data. CoRR, abs/1903.00138. \ No newline at end of file diff --git a/adversarialgrammaticalerrorcorrection/images.zip b/adversarialgrammaticalerrorcorrection/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..69ccefaa2191488f4b15647c9e594f57017137a2 --- /dev/null +++ b/adversarialgrammaticalerrorcorrection/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fcef968ef06f33164a7c9a8ed1552a258176221318b424719a153b85acc9379c +size 238565 diff --git a/adversarialgrammaticalerrorcorrection/layout.json b/adversarialgrammaticalerrorcorrection/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..fc2264f8c7665462493778fb81336a3d415a3c34 --- /dev/null +++ b/adversarialgrammaticalerrorcorrection/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0938af805686214405a244debc7ac50eac1fc33dbd284eab1a283836843197f5 +size 421463 diff --git a/adversarialsubwordregularizationforrobustneuralmachinetranslation/889ae361-fa71-4b38-885b-281148ae18d0_content_list.json b/adversarialsubwordregularizationforrobustneuralmachinetranslation/889ae361-fa71-4b38-885b-281148ae18d0_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..254b28fa9f411142014bf0d9ad9c91b0a960870b --- /dev/null +++ b/adversarialsubwordregularizationforrobustneuralmachinetranslation/889ae361-fa71-4b38-885b-281148ae18d0_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:daaf8b055473de301e85c619e4de2653c17c7d5aade23db719e3578596ac707e +size 57386 diff --git a/adversarialsubwordregularizationforrobustneuralmachinetranslation/889ae361-fa71-4b38-885b-281148ae18d0_model.json b/adversarialsubwordregularizationforrobustneuralmachinetranslation/889ae361-fa71-4b38-885b-281148ae18d0_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..4e1e92e1c5f0786f1deb50c45e61f20d937e89a0 --- /dev/null +++ b/adversarialsubwordregularizationforrobustneuralmachinetranslation/889ae361-fa71-4b38-885b-281148ae18d0_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3cb364b83dbe97eca476b877d6b8747b192917b303d75d57f205993aead02e5d +size 72387 diff --git a/adversarialsubwordregularizationforrobustneuralmachinetranslation/889ae361-fa71-4b38-885b-281148ae18d0_origin.pdf b/adversarialsubwordregularizationforrobustneuralmachinetranslation/889ae361-fa71-4b38-885b-281148ae18d0_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..38a3ef32930489000370af40dada4b2f2cd65ff8 --- /dev/null +++ b/adversarialsubwordregularizationforrobustneuralmachinetranslation/889ae361-fa71-4b38-885b-281148ae18d0_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e32f92c0b4c74fcae75ab3682f17cf498de7362c5dfb81fee63ec7d28362b5dc +size 432395 diff --git a/adversarialsubwordregularizationforrobustneuralmachinetranslation/full.md b/adversarialsubwordregularizationforrobustneuralmachinetranslation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b686947318b4f9e42b363c4a62969bcb7c01966b --- /dev/null +++ b/adversarialsubwordregularizationforrobustneuralmachinetranslation/full.md @@ -0,0 +1,284 @@ +# Adversarial Subword Regularization for Robust Neural Machine Translation + +Jungsoo Park Mujeen Sung Jinhyuk Lee† Jaewoo Kang† + +Korea University + +{jungsoopark,mujeensung,jinhyuk_lee,kangj} $@$ korea.ac.kr + +# Abstract + +Exposing diverse subword segmentations to neural machine translation (NMT) models often improves the robustness of machine translation as NMT models can experience various subword candidates. 
However, the diversification of subword segmentations mostly relies on pre-trained subword language models, from which erroneous segmentations of unseen words are unlikely to be sampled. In this paper, we present adversarial subword regularization (ADVSR) to study whether gradient signals during training can be a substitute criterion for exposing diverse subword segmentations. We experimentally show that our model-based adversarial samples effectively encourage NMT models to be less sensitive to segmentation errors and improve the performance of NMT models on low-resource and out-domain datasets.

# 1 Introduction

Subword segmentation is a method of segmenting an input sentence into a sequence of subword units (Sennrich et al., 2016; Wu et al., 2016; Kudo, 2018). Segmenting a word into a composition of subwords alleviates the out-of-vocabulary problem while keeping the encoded sequence compact. Due to its effectiveness in open-vocabulary settings, the method has been applied to many NLP tasks including neural machine translation (NMT) and others (Gehring et al., 2017; Vaswani et al., 2017; Devlin et al., 2019; Yang et al., 2019).

Recently, Byte-Pair-Encoding (BPE) (Sennrich et al., 2016) has become one of the de facto subword segmentation methods. However, as BPE deterministically segments each word into subword units, NMT models with BPE always observe the

**Original Text**

Input: De petites fenêtres, une taille déshumanisante.

Ref: Small windows, dehumanizing scale.

**Subword Segmentation of the Input**

De petites fenêtre_s, une taille dés_human_isant_e.

**Noisy Text**

Input: De petite fenèpres, une taïle déshumanisante.

Base: Small chicks, a dehumanizing carve.

SR: Small fentanyl, a dehumanizing tail.

Ours: Small windows, a dehumanizing size.

**Subword Segmentation of the Input**

De petites f_en_è_pre_s, une taïl_e dés_humanisant_e.
Figure 1: NMT models suffer from typos (character drop, character replacement) in the source text due to unseen subword compositions ('_' denotes segmentation). On the other hand, Ours correctly decodes them. Base: standard training; SR: subword regularization (Kudo, 2018).

same segmentation result for each word and often fail to learn diverse morphological features. In this regard, Kudo (2018) proposed subword regularization, a training method that exposes multiple segmentations using a unigram language model. Starting from machine translation, it has been shown that subword regularization can improve the robustness of NLP models in various tasks (Kim, 2019; Provilkov et al., 2019; Drexler and Glass, 2019; Müller et al., 2019).

However, subword regularization relies on unigram language models to sample candidates, and these language models are optimized on corpus-level statistics from the training data with no regard to the translation task objective. This causes NMT models to experience only a limited set of subword candidates that are frequently observed in the training data. Thus, NMT models trained with subword regularization can fail to infer the meaning of unseen words having unseen segmentations. This issue can be particularly problematic for low-resource languages and noisy text, where many morphological variations are not present in the training data. The suboptimality of subword segmentation methods has also been raised in many prior works (Kreutzer and Sokolov, 2018; Wang et al., 2019b; Ataman et al., 2019; Salesky et al., 2020).

To tackle the problem of unigram language models, we search for a different sampling strategy that uses gradient signals, does not rely on corpus-level statistics, and is oriented to the task objective.
We adopt the adversarial training framework (Goodfellow et al., 2014; Miyato et al., 2016; Ebrahimi et al., 2017; Cheng et al., 2019) to search for subword segmentations that effectively regularize the NMT models. Our proposed method, adversarial subword regularization (ADVSR), greedily searches for a diverse yet adversarial subword segmentation that is likely to incur the highest translation loss. Our experiments show that NMT models trained with ADVSR improve over baseline NMT models by up to 3.2 BLEU on IWSLT datasets while also outperforming the standard subword regularization method. We also highlight that NMT models trained with the proposed method are highly robust to character-level input noise.

# 2 Background

**Subword Regularization** Subword regularization (Kudo, 2018) exposes multiple subword candidates during training via on-the-fly data sampling. The training method optimizes the parameter set $\theta$ with the marginal log-likelihood:

$$
\mathcal {L} (\theta) = \sum_ {s = 1} ^ {D} \underset {\substack{\mathbf {x} \sim P _ {seg} (\mathbf {x} | X ^ {(s)}) \\ \mathbf {y} \sim P _ {seg} (\mathbf {y} | Y ^ {(s)})}} {\mathbb {E}} \left[ \log P (\mathbf {y} | \mathbf {x}; \theta) \right] \tag {1}
$$

where $\mathbf{x} = (x_{1},\ldots ,x_{M})$ and $\mathbf{y} = (y_{1},\dots ,y_{N})$ are segmentations (in subword units) sampled from a source sentence $X^{(s)}$ and a target sentence $Y^{(s)}$ through the subword-level unigram language model $P_{seg}(\cdot)$, and $D$ denotes the number of training sentence pairs. Generally, a single sample per epoch is used during training to approximate Eq 1.

The probability of a tokenized output is the product of each subword's occurrence probability, where subword occurrence probabilities are obtained through the Bayesian EM algorithm (Dempster et al., 1977; Liang et al., 2007; Liang and Klein, 2009). The segmentation with maximum probability is obtained with the Viterbi algorithm (Viterbi, 1967).
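The unigram-LM machinery described above (a segmentation's probability is the product of its subwords' probabilities; Viterbi finds the 1-best segmentation) can be sketched in a few lines. This is a minimal illustration, not the SentencePiece implementation: the vocabulary and log-probabilities below are invented toy values.

```python
import math

# Toy unigram subword LM: log-probabilities per subword (illustrative values only).
LOGP = {"lo": -2.5, "ve": -2.5, "love": -1.6, "ly": -1.2, "lovely": -3.5,
        "l": -4.0, "o": -4.0, "v": -4.0, "e": -3.5, "y": -3.0}

def viterbi_segment(word):
    """Max-probability segmentation of `word` under the unigram LM.

    best[i] holds (log-prob, segmentation) for the best cover of word[:i];
    a segmentation scores the sum of its subwords' log-probabilities.
    """
    best = [(0.0, [])] + [(-math.inf, None)] * len(word)
    for i in range(1, len(word) + 1):
        for j in range(i):
            piece = word[j:i]
            if piece in LOGP and best[j][0] + LOGP[piece] > best[i][0]:
                best[i] = (best[j][0] + LOGP[piece], best[j][1] + [piece])
    return best[len(word)]

score, seg = viterbi_segment("lovely")
print(seg, round(score, 2))  # ['love', 'ly'] -2.8  (beats the single piece 'lovely')
```

Under these toy scores, "love" + "ly" wins over the whole-word piece; a real tokenizer makes the same kind of trade-off over a learned vocabulary.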
**Adversarial Regularization in NLP** Adversarial samples are constructed by corrupting the original input with a small perturbation that distorts the model output. Miyato et al. (2016) adopted the adversarial training framework for text classification, where input embeddings are perturbed with adversarial noise $\hat{r}$:

$$
e _ {i} ^ {\prime} = E x _ {i} + \hat {r} _ {i} \tag {2}
$$

$$
\text {where} \quad \hat {r} = \underset {r, \| r \| \leq \epsilon} {\operatorname {argmax}} \left\{\ell (X, r, Y; \theta) \right\} \tag {3}
$$

$E$ is an embedding matrix, $e_i^\prime$ is a perturbed embedding vector, and $\ell (\cdot)$ is the loss function obtained with the input embeddings perturbed with noise $r$. Note that Miyato et al. (2016) use a word as the unit of $x_{i}$, unlike our definition. As it is computationally expensive to estimate $\hat{r}$ in Eq 3 exactly, Miyato et al. (2016) resort to the linear approximation method (Goodfellow et al., 2014), where $\hat{r}_i$ is approximated as follows:

$$
\hat {r} _ {i} = \epsilon \frac {g _ {i}}{\| g \| _ {2}}, \quad g _ {i} = \nabla_ {e _ {i}} \ell (X, Y; \theta) \tag {4}
$$

$\epsilon$ indicates the degree of perturbation and $g_{i}$ denotes the gradient of the loss function with respect to a word vector. Moreover, Ebrahimi et al. (2017) extended the adversarial training framework to directly perturb the discrete input space, i.e. characters, through a first-order approximation using gradient signals.

# 3 Approach

Relying on subword language models might bias NMT models toward frequent segmentations, which hinders the NMT models from understanding diverse segmentations. This may harm the translation quality of the NMT models when diverse morphological variations occur.

However, simply exposing diverse segmentations uniformly leads to a decrease in performance (Kudo, 2018).
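Before turning to the approach, the normalized-gradient perturbation of Eq 4 can be illustrated numerically. This is a toy sketch with made-up gradient values, not a training loop: it only shows that the perturbation is the loss gradient rescaled to global L2 norm $\epsilon$.

```python
import numpy as np

def adversarial_perturbation(grads, epsilon=1.0):
    # Eq 4: r_i = epsilon * g_i / ||g||_2, with the norm taken over all entries.
    g = np.asarray(grads, dtype=float)
    return epsilon * g / np.linalg.norm(g)

# Toy gradients of the loss w.r.t. two token embeddings (illustrative values).
g = np.array([[3.0, 0.0],
              [0.0, 4.0]])
r = adversarial_perturbation(g, epsilon=5.0)
print(r)  # equals g here, since the global L2 norm of g is exactly 5
```

The perturbation always points in the direction of steepest loss increase; $\epsilon$ only controls its overall magnitude.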
In this regard, we utilize gradient signals to expose diverse yet adversarial subword segmentation inputs for effectively regularizing NMT models. Kreutzer and Sokolov (2018) proposed to jointly learn to segment and translate using a hierarchical RNN (Graves, 2016), but the method is not model-agnostic and is slow due to the increased sequence length of character-level inputs. In contrast, our method is model-agnostic and operates at the word level. Our method seeks adversarial segmentations on-the-fly; thus the model chooses the subword candidates that are vulnerable to itself according to its state at each training step.

# 3.1 Problem Definition

Our method generates a sequence of subwords by greedily replacing each word's original segmentation with an adversarial one estimated from gradients. Given a source sentence $X$ and a target sentence $Y$, we want to find the sequences of subwords $\hat{\mathbf{x}}$ and $\hat{\mathbf{y}}$ that incur the highest loss:

$$
\hat {\mathbf {x}}, \hat {\mathbf {y}} = \underset { \begin{array}{c} \mathbf {x} \in \Omega (X) \\ \mathbf {y} \in \Omega (Y) \end{array} } {\operatorname {argmax}} \left\{\ell (\mathbf {x}, \mathbf {y}; \theta) \right\} \tag {5}
$$

where $\Omega(X)$ and $\Omega(Y)$ denote all the subword segmentation candidates of $X$ and $Y$, and $\ell(\cdot)$ denotes the loss function.

Our method operates on word units split by whitespace, each of which consists of a variable number of subwords. We first define the sequence of words in $X$ as $\mathbf{w} = (w_{1},\dots ,w_{M^{\prime}})$, where $M^{\prime}$ denotes the length of the word-level sequence. Then, we can segment $w_{j}$ as $\mathbf{s}_j = (s_1^j,\ldots ,s_K^j)$, the $K$ subword units of the $j$-th word (note that we can now represent the input $X$ as a sequence $\mathbf{s} = (\mathbf{s}_1,\dots ,\mathbf{s}_{M'})$).
For example, for the $j$-th word "lovely", its tokenized outputs "love" and "ly" will be $s_1^j$ and $s_2^j$ respectively. Then, we define the embedding and the gradient of a word segmentation as the aggregation of the $K$ subwords comprising it:

$$
e \left(\mathbf {s} _ {j}\right) = f \left(\left[ e \left(s _ {1} ^ {j}\right), \dots , e \left(s _ {K} ^ {j}\right) \right]\right) \in \mathbb {R} ^ {d} \tag {6}
$$

$$
g _ {\mathbf {s} _ {j}} = f \left(\left[ g _ {s _ {1} ^ {j}}, \dots , g _ {s _ {K} ^ {j}} \right]\right) \in \mathbb {R} ^ {d} \tag {7}
$$

$$
\text {where} \quad g _ {s _ k ^ j} = \nabla_ {e \left(s _ k ^ j\right)} \ell (\mathbf {x}, \mathbf {y}; \theta) \in \mathbb {R} ^ {d} \tag {8}
$$

where $e$ denotes the embedding lookup operation and $d$ denotes the hidden dimension of the embeddings. We simply use the element-wise average for $f$. Therefore, if the segmentation of a word changes, the corresponding embedding and gradient vectors change accordingly.

```
Algorithm 1: ADVSR function
Input : input sentence X, replacement probability R
Output: adversarial subword sequence x̂

Function ADVSR(X, R):
    x̂ ← []                                    // initialize empty list
    x̃ ← argmax_{x ∈ Ω(X)} P_seg(x | X)        // deterministic segmentation
    s̃ ← group(x̃)                              // group subwords word-wise
    for j ← 1 to M′ do
        r ← uniform(0, 1)
        if r < R then
            g_s̃_j ← f([g_{s̃^j_1}, …, g_{s̃^j_K}])                 // compute Eq 7
            ŝ_j ← argmax_{s_j ∈ Ω(w_j)} g_s̃_jᵀ · [e(s_j) − e(s̃_j)]  // compute Eq 9
        else
            ŝ_j ← s̃_j
        x̂ ← x̂ + ŝ_j                           // append
    return x̂
```

# 3.2 Adversarial Subword Regularization

As it is intractable to find the most adversarial sequence of subwords in a combinatorially large space, we approximately search for word-wise adversarial segmentation candidates. We seek the adversarial segmentation of the $j$-th word $w_{j}$ of the sentence $X$ by the following criterion, originally proposed by Ebrahimi et al. (2017) and applied to many other NLP tasks (Cheng et al., 2019; Wallace et al., 2019; Michel et al., 2019). More formally, we seek an adversarial segmentation $\hat{\mathbf{s}}_j$ of the $j$-th word $w_{j}$ as

$$
\hat {\mathbf {s}} _ {j} = \underset {\mathbf {s} _ {j} \in \Omega (w _ {j})} {\operatorname {argmax}} \; g _ {\tilde {\mathbf {s}} _ {j}} ^ {T} \cdot \left[ e \left(\mathbf {s} _ {j}\right) - e \left(\tilde {\mathbf {s}} _ {j}\right) \right] \tag {9}
$$

where $\mathbf{s}_j$ is one of the tokenized outputs among the possible candidates $\Omega(w_j)$, which are obtained with the SentencePiece tokenizer (Kudo and Richardson, 2018), and $\tilde{\mathbf{s}}_j$ denotes the original deterministic segmentation of the $j$-th word. Note that for computing $g_{\tilde{\mathbf{s}}_j}$ we use $\ell(\tilde{\mathbf{x}}, \tilde{\mathbf{y}})$, which comes from the original deterministic segmentation results. We apply L2 normalization to the gradient vectors and embedding vectors.

We uniformly select words in the sentence with probability $R$ and replace them with an adversarial subword composition according to Eq 9. We perturb both the source and the target sequences. We summarize our method in Algorithm 1.
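The word-wise selection of Eq 9 reduces to an argmax over a handful of candidate segmentations per word. The sketch below is a self-contained toy (embeddings and gradients are invented vectors, and the "embedding" of a segmentation is the mean of its subword embeddings, matching the element-wise-average choice for $f$):

```python
import numpy as np

def pick_adversarial_segmentation(candidates, orig_emb, grad):
    """Eq 9: pick the candidate whose embedding shift away from the original
    segmentation best aligns with the loss gradient, i.e. the candidate that
    maximally increases the loss under a first-order approximation.

    candidates: {segmentation (tuple of subwords): aggregated embedding}
    orig_emb:   embedding of the original (deterministic) segmentation
    grad:       gradient of the loss w.r.t. the original word embedding
    """
    return max(candidates, key=lambda s: grad @ (candidates[s] - orig_emb))

# Toy values for one word with three candidate segmentations.
grad = np.array([1.0, 0.0])
orig_emb = np.array([0.0, 0.0])
cands = {("love", "ly"):     np.array([0.2, 0.9]),
         ("lo", "ve", "ly"): np.array([0.8, -0.3]),
         ("lovely",):        np.array([0.0, 0.0])}
print(pick_adversarial_segmentation(cands, orig_emb, grad))  # ('lo', 've', 'ly')
```

Only the component of the embedding shift along the gradient matters, which is why the second candidate wins despite moving "down" in the other dimension.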
The existing adversarial training methods in the NLP domain generally train the model with both the original samples and the adversarial samples (Miyato et al., 2016; Ebrahimi et al., 2017; Cheng et al., + +
| Dataset | Lang Pair | Number of sentences (train/valid/test) |
|---|---|---|
| IWSLT17 | FR ↔ EN | 232k / 890 / 1210 |
| | AR ↔ EN | 231k / 888 / 1205 |
| IWSLT15 | CS ↔ EN | 105k / 1385 / 1327 |
| | VI ↔ EN | 133k / 1553 / 1268 |
| IWSLT13 | TR ↔ EN | 132k / 887 / 1568 |
| | PL ↔ EN | 144k / 767 / 1564 |
| MTNT1.1 | FR → EN | 19k / 886 / 1022 (1233) |
| | EN → FR | 35k / 852 / 1020 (1401) |
+ +Table 1: Data statistics. The number in the parentheses denotes the number of sentences in the MTNT2019 test set which was provided by the WMT Robustness Shared Task (Li et al., 2019) + +
| Lang Pair | BASE | SR | ADVSR |
|---|---|---|---|
| **IWSLT17** | | | |
| FR → EN | 37.9 | 38.1 | **38.5** |
| EN → FR | 38.8 | 39.1 | **39.8** |
| AR → EN | 31.7 | 32.3 | **32.6** |
| EN → AR | 14.4 | 14.3 | **14.9** |
| **IWSLT15** | | | |
| CS → EN | 28.9 | 30.5 | **32.1** |
| EN → CS | 20.4 | 21.7 | **23.0** |
| VI → EN | 28.1 | 28.4 | **29.3** |
| EN → VI | 30.9 | 31.7 | **32.4** |
| **IWSLT13** | | | |
| PL → EN | 19.1 | 19.7 | **20.6** |
| EN → PL | 13.5 | 14.1 | **15.1** |
| TR → EN | 21.3 | 22.6 | **24.0** |
| EN → TR | 12.6 | 14.4 | **14.6** |
Table 2: BLEU scores on the main results. Bold indicates the best score and all scores whose difference from the best is not statistically significant, computed via bootstrapping (Koehn, 2004) ($p$-value $< 0.05$).

2019; Motoki Sato, 2019). However, we train the model with only the adversarial samples for the sake of fair comparison with the baselines. More details are described in Appendix A.1.

# 4 Experimental Setup

# 4.1 Datasets and Implementation Details

We conduct experiments on a low-resource multilingual dataset, IWSLT, where unseen morphological variations outside the training dataset occur frequently. We also test NMT models on MTNT (Michel and Neubig, 2018), a testbed for evaluating NMT systems on noisy text, using the English-French language pair. Moreover, to evaluate robustness to typos, we generate synthetic test data with character-level noise from the IWSLT dataset.

For all experiments, we use Transformer-Base (Vaswani et al., 2017) as a backbone model $(\mathrm{L} = 6,\mathrm{H} = 512)$ and follow the same regularization and optimization procedures. We train our models with a joint dictionary of size 16k. Our implementation is based on Fairseq (Ott et al., 2019). Further details on the experimental setup are described in Appendix A.2.

# 4.2 Evaluation

For inference, we use beam search with a beam size of 4, and we evaluate the checkpoint that performed best on the validation dataset. We evaluate translation quality with BLEU (Papineni et al., 2002) computed by SacreBLEU (Post, 2018). Our baselines are NMT models trained with deterministic segmentations (BASE) and models trained with the subword regularization method (SR) (Kudo, 2018). We set the hyperparameters of subword regularization equal to those of Kudo (2018).

# 5 Experiments

# 5.1 Results on Low-Resource Dataset

Table 2 shows the main results on the IWSLT datasets.
Our method significantly outperforms both BASE and SR. This shows that leveraging the translation loss to expose various segmentations is more effective than constraining the NMT models to observe a limited set of segmentations. Specifically, ADVSR improves over SR by 1.6 BLEU and over BASE by 3.2 BLEU on the Czech-to-English dataset. We attribute the large gains to the morphological richness of Czech. The improvement over the baselines can also be explained by robustness to unseen lexical variations, as shown in Appendix B.

# 5.2 Results on Out-Domain Dataset

Table 3 shows the results on the MTNT dataset, using the NMT models trained in Section 5.1. We also experiment with domain-adaptive fine-tuning on the MTNT dataset (denoted as $+\mathbf{FT}$).

Generally, exposing multiple subword candidates to the NMT models yields superior performance in domain adaptation, which matches the finding of Müller et al. (2019). Above all, NMT models trained with our proposed method outperform BASE by up to 2.3 and SR by up to 0.9 BLEU.
| Dataset | BASE | SR | ADVSR |
|---|---|---|---|
| **MTNT2018** | | | |
| FR → EN | 25.7 | **27.6** | 27.2 |
| EN → FR | 26.7 | 27.5 | **28.2** |
| **MTNT2018 + FT** | | | |
| FR → EN | 36.5 | 37.9 | **38.8** |
| EN → FR | 33.2 | 34.4 | **35.3** |
| **MTNT2019** | | | |
| FR → EN | 27.6 | 29.3 | **30.2** |
| EN → FR | 22.8 | 23.8 | **24.1** |
| **MTNT2019 + FT** | | | |
| FR → EN | 36.2 | 38.1 | **38.6** |
| EN → FR | 27.6 | 28.2 | **28.9** |
Table 3: BLEU scores on the MTNT (Michel and Neubig, 2018) dataset. FT denotes fine-tuning.
| Method | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 |
|---|---|---|---|---|---|
| **FR → EN** | | | | | |
| BASE | 30.7 | 25.6 | 20.3 | 16.2 | 11.4 |
| SR | 33.2 | 28.5 | 23.3 | 18.7 | 14.7 |
| ADVSR | **34.8** | **31.1** | **28.7** | **25.0** | **21.8** |
| **EN → FR** | | | | | |
| BASE | 31.1 | 24.2 | 18.6 | 14.6 | 10.6 |
| SR | 34.2 | 27.8 | 23.9 | 18.9 | 14.4 |
| ADVSR | **35.1** | **30.3** | **26.4** | **23.0** | **19.1** |
Table 4: BLEU scores on the synthetic dataset of typos. Columns list results for different noise fractions.

# 5.3 Results on Synthetic Dataset

Additionally, we conduct an experiment to see how translation quality changes with different noise ratios. Using IWSLT17 (FR $\leftrightarrow$ EN), we synthetically generated three types of noise (character drop, character replacement, and character insertion) and perturbed each word with the given noise probability. Table 4 shows that as the noise fraction increases, our method proves more robust than the baseline models, improving over BASE by up to 10.4 and over SR by up to 7.1 BLEU.

# 6 Related Work

Subword segmentation has been widely used as a standard in the NMT community since Byte-Pair-Encoding (Sennrich et al., 2016) was proposed. Kudo (2018) introduced the training method of subword regularization. Most recently, BPE-dropout (Provilkov et al., 2019) was introduced, which modifies the original BPE's encoding process to enable stochastic segmentation. Our work shares with previous works the motivation of exposing diverse subword candidates to the NMT models, but differs in that our method uses gradient signals. Other segmentation methods include WordPiece (Schuster and Nakajima, 2012) and a variable-length encoding scheme (Chitnis and DeNero, 2015). There is also another line of research that utilizes character-level segmentation (Luong and Manning, 2016; Lee et al., 2017; Cherry et al., 2018).

Other works explored generating synthetic or natural noise for regularizing NMT models (Belinkov and Bisk, 2018; Sperber et al., 2018; Karpukhin et al., 2019). Michel and Neubig (2018) introduced a dataset scraped from Reddit for testing NMT systems on noisy text. Recently, a shared task on building robust NMT models was held (Li et al., 2019; Bérard et al., 2019).
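The character-level corruption used for the synthetic test sets of Section 5.3 can be sketched as follows. This is a hedged reconstruction, not the authors' script: the function name and the choice of lowercase ASCII replacement characters are assumptions for illustration.

```python
import random
import string

def add_char_noise(sentence, p=0.1, rng=random):
    """Perturb each word with probability p by one of three single-character
    edits: drop, replacement, or insertion (cf. the three noise types in 5.3)."""
    out = []
    for word in sentence.split():
        if len(word) > 1 and rng.random() < p:
            i = rng.randrange(len(word))
            c = rng.choice(string.ascii_lowercase)  # assumed replacement alphabet
            op = rng.choice(["drop", "replace", "insert"])
            if op == "drop":
                word = word[:i] + word[i + 1:]
            elif op == "replace":
                word = word[:i] + c + word[i + 1:]
            else:  # insert
                word = word[:i] + c + word[i:]
        out.append(word)
    return " ".join(out)

random.seed(0)
print(add_char_noise("de petites fenetres une taille", p=0.5))
```

Each word is edited at most once, so every noisy word differs from its source by a single character at most, which matches the typo patterns shown in Figure 1.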
+ +Our method extends the adversarial training framework, which was initially developed in the vision domain (Goodfellow et al., 2014) and has begun to be adopted in the NLP domain recently (Jia and Liang, 2017; Belinkov and Bisk, 2018; Samanta and Mehta, 2017; Miyato et al., 2016; Michel et al., 2019; Motoki Sato, 2019; Wang et al., 2019a; Cheng et al., 2019). Miyato et al. (2016) adopted the adversarial training framework on text classification by perturbing embedding space with continuous adversarial noise. Cheng et al. (2019) introduced an adversarial training framework by discrete word replacements where candidates were generated from the language model. However, our method does not replace the word but replaces its subword composition. + +# 7 Conclusions + +In this study, we propose adversarial subword regularization which samples subword segmentations that maximize the translation loss. Segmentations from the subword language model might bias NMT models to frequent segmentations in the training set. On the other hand, our method regularizes the NMT models to be invariant to unseen segmentations. Experimental results on low resource and out-domain datasets demonstrate the effectiveness of our method. + +# Acknowledgement + +This research was supported by the National Research Foundation of Korea (NRF-2020R1A2C3010638, NRF-2016M3A9A7916996) and Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (grant number: HR20C0021). + +# References + +Duygu Ataman, Wilker Aziz, and Alexandra Birch. 2019. A latent morphology model for open-vocabulary neural machine translation. In ICLR. +Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In ICLR. +Alexandre Bérard, Ioan Calapodescu, and Claude Roux. 2019. Naver labs europe's systems for the wmt19 machine translation robustness task. In WMT. 
Yong Cheng, Lu Jiang, and Wolfgang Macherey. 2019. Robust neural machine translation with doubly adversarial inputs. In ACL.
Colin Cherry, George Foster, Ankur Bapna, Orhan Firat, and Wolfgang Macherey. 2018. Revisiting character-based neural machine translation with capacity and compression. In EMNLP.
Rohan Chitnis and John DeNero. 2015. Variable-length word encodings for neural translation models. In EMNLP.
Arthur P Dempster, Nan M Laird, and Donald B Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society: Series B (Methodological), 39(1):1-22.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT.
Jennifer Drexler and James Glass. 2019. Subword regularization and beam search decoding for end-to-end automatic speech recognition. In ICASSP. IEEE.
Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2017. Hotflip: White-box adversarial examples for text classification. In ACL.
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In ICLR.
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. In ICLR.
Alex Graves. 2016. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983.
Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In EMNLP.
Vladimir Karpukhin, Omer Levy, Jacob Eisenstein, and Marjan Ghazvininejad. 2019. Training on synthetic noise improves robustness to natural noise in machine translation. In W-NUT.

Gyuwan Kim. 2019. Subword language model for query auto-completion. In EMNLP.
Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In ACL.
Julia Kreutzer and Artem Sokolov. 2018.
Learning to segment inputs for nmt favors character-level processing. arXiv preprint arXiv:1810.01480.
Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In ACL.
Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In EMNLP: System Demonstrations.
Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2017. Fully character-level neural machine translation without explicit segmentation. In TACL.
Xian Li, Paul Michel, Antonios Anastasopoulos, Yonatan Belinkov, Nadir Durrani, Orhan Firat, Philipp Koehn, Graham Neubig, Juan Pino, and Hassan Sajjad. 2019. Findings of the first shared task on machine translation robustness. In WMT.
Percy Liang and Dan Klein. 2009. Online em for unsupervised models. In ACL.
Percy Liang, Slav Petrov, Michael I Jordan, and Dan Klein. 2007. The infinite pcfg using hierarchical dirichlet processes. In EMNLP-CoNLL.
Minh-Thang Luong and Christopher D. Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. In ACL.
Paul Michel, Xian Li, Graham Neubig, and Juan Miguel Pino. 2019. On evaluation of adversarial perturbations for sequence-to-sequence models. In NAACL.
Paul Michel and Graham Neubig. 2018. Mtnt: A testbed for machine translation of noisy text. In EMNLP.
Takeru Miyato, Andrew M. Dai, and Ian Goodfellow. 2016. Adversarial training methods for semi-supervised text classification. In ICLR.
Motoki Sato, Jun Suzuki, and Shun Kiyono. 2019. Effective adversarial regularization for neural machine translation. In ACL.
Mathias Müller, Annette Rios, and Rico Sennrich. 2019. Domain robustness in neural machine translation. arXiv preprint arXiv:1911.03109.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling.
In *NAACL-HLT* 2019: Demonstrations. + +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL. +Matt Post. 2018. A call for clarity in reporting bleu scores. In WMT. +Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita. 2019. Bpe-dropout: Simple and effective subword regularization. arXiv preprint arXiv:1910.13267. +Elizabeth Salesky, Andrew Runge, Alex Coda, Jan Niehues, and Graham Neubig. 2020. Optimizing segmentation granularity for neural machine translation. Machine Translation, pages 1-19. +Suranjana Samanta and Sameep Mehta. 2017. Towards crafting text adversarial samples. arXiv preprint arXiv:1707.02812. +Mike Schuster and Kaisuke Nakajima. 2012. Japanese and korean voice search. In ICASSP. +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In ACL. +Matthias Sperber, Jan Niehues, and Alex Waibel. 2018. Toward robust neural machine translation for noisy input sequences. In ACL. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS. +Andrew Viterbi. 1967. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE transactions on Information Theory, 13(2):260-269. +Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing nlp. In EMNLP. +Dilin Wang, Chengyue Gong, and Qiang Liu. 2019a. Improving neural language modeling via adversarial training. In ICLR. +Xinyi Wang, Hieu Pham, Philip Arthur, and Graham Neubig. 2019b. Multilingual neural machine translation with soft decoupled encoding. In ICLR. +Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. 
Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In NeurIPS.

# A Implementation Details

# A.1 Details of Training

During training, we set $R = \{0.25, 0.33\}$ based on validation performance. Words that are not perturbed according to the adversarial criterion are segmented deterministically by SentencePiece. No other hyper-parameters were tuned.

We use the SentencePiece (Kudo and Richardson, 2018) toolkit to acquire a pre-defined number of subword candidates, generating up to 9 segmentation candidates per word. We use the same SentencePiece tokenizer both for training SR and for generating the segmentation candidates used by AdvSR.

During training, translation pairs were batched together by sequence length. For all experiments, the batch size (number of source tokens) is set to 4096. All experiments were conducted on a single GPU (Titan Xp or Tesla P40) with gradients accumulated over 8 training steps. Note that the number of model parameters (i.e., Transformer Base) is the same for the baselines and our method.

# A.2 Details of Experimental Settings

The multilingual IWSLT dataset can be downloaded from https://wit3.fbk.eu/ and the MTNT dataset from https://www.cs.cmu.edu/~pmichel1/mntt/. We use the training and validation sets of the 2018 version of MTNT for fine-tuning our model in Section 5.2. Specifically, we fine-tune each NMT model from Section 5.1 for 30 epochs and use the checkpoint that performed best on the MTNT validation set.

For the SR experiments, we set the hyperparameters $\alpha$ and $l$ to 0.1 and 64, respectively, matching the original paper.
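The training strategy of Appendix A.1 — perturb a ratio $R$ of words via an adversarial criterion, keep the deterministic 1-best segmentation for the rest — can be sketched schematically. The function name, the dictionary layout, and the piece-count `loss_proxy` below are illustrative assumptions standing in for the paper's gradient-based loss criterion, not the actual implementation:

```python
import random

def adversarial_segment(words, candidates, det_seg, loss_proxy, ratio, rng):
    """Schematic AdvSR step: for roughly a `ratio` fraction of words, pick the
    candidate segmentation that maximizes `loss_proxy` (a stand-in for the
    model-loss criterion); otherwise keep the deterministic segmentation."""
    out = []
    for w in words:
        if rng.random() < ratio and candidates.get(w):
            out.append(max(candidates[w], key=loss_proxy))
        else:
            out.append(det_seg[w])
    return out

# Toy example with 3 candidates per word (the paper generates up to 9);
# `len` (number of pieces) is a toy proxy for "hardest" segmentation.
det = {"translation": ["▁translation"]}
cands = {"translation": [["▁translation"],
                         ["▁trans", "lation"],
                         ["▁t", "rans", "la", "tion"]]}
adv = adversarial_segment(["translation"], cands, det, len, 1.0, random.Random(0))
# with ratio=1.0, every word gets its most fragmented candidate
```

With `ratio=0.0` the function degenerates to plain deterministic segmentation, which matches the description that unperturbed words are segmented by SentencePiece's 1-best decoding.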
Byte Pair Encoding (Sennrich et al., 2016) is not used as a baseline since its performance is almost the same as that of BASE. Kudo (2018) also reports scores using n-best decoding, which averages scores over the n-best segmentation results. However, n-best decoding is n times as time-consuming as standard decoding. Therefore, we only use 1-best decoding, the standard framework for evaluating translation quality. Our BLEU scores are calculated with SacreBLEU, where our signature is as follows:

BLEU+case.lc+lang.[src-lang]-[dst-lang]+numrefs.1+smooth.exp+tok.13a+version.1.4.2

# B Sampled Translation Outputs
| | PL→EN | CS→EN | FR→EN |
| --- | --- | --- | --- |
| Input | Chodź, zatańcz ze mną. | My aktivujeme komunitu. | Profitez de votre soirée. |
| Seg. | Chodź, za_ta_n_cz ze mną | My aktiv_ujeme komunitu. | Pro_fi_t_ez de votre soirée. |
| REF. | Come, dance with me. | We activate the community. | Enjoy your night. |
| BASE | Come with me | We act the community. | Get out of your night. |
| SR | Come on. Stay with me. | We act a community. | Protect your evening. |
| ADVSR | Come, dance with me. | We activate the community. | Enjoy your evening. |
Table B.1: Excerpt from the translation results of NMT models trained with different training methods. The samples show how our method infers the meaning of rarely occurring word variants: despite their low frequency, the NMT model trained with our method recovers the meaning of morphosyntactic variations of observed words. This can be explained by the fact that our method encourages the NMT model to be segmentation-invariant, making it better at inferring meaning from unseen subword compositions.
# A Fully Hyperbolic Neural Model for
Hierarchical Multi-Class Classification + +Federico López + +Michael Strube + +Heidelberg Institute for Theoretical Studies + +Research Training Group AIPHES + +firstname.lastname@h-its.org + +# Abstract + +Label inventories for fine-grained entity typing have grown in size and complexity. Nonetheless, they exhibit a hierarchical structure. Hyperbolic spaces offer a mathematically appealing approach for learning hierarchical representations of symbolic data. However, it is not clear how to integrate hyperbolic components into downstream tasks. This is the first work that proposes a fully hyperbolic model for multi-class multi-label classification, which performs all operations in hyperbolic space. We evaluate the proposed model on two challenging datasets and compare to different baselines that operate under Euclidean assumptions. Our hyperbolic model infers the latent hierarchy from the class distribution, captures implicit hyponymic relations in the inventory, and shows performance on par with state-of-the-art methods on fine-grained classification with remarkable reduction of the parameter size. A thorough analysis sheds light on the impact of each component in the final prediction and showcases its ease of integration with Euclidean layers. + +# 1 Introduction + +Entity typing classifies textual mentions of entities, according to their semantic class, within a set of labels (or classes) organized in an inventory. The task has progressed from recognizing a few coarse classes (Sang and De Meulder, 2003), to extremely large inventories, with hundreds (Gillick et al., 2014) or thousands of labels (Choi et al., 2018). Therefore, exploiting inter-label correlations has become critical to improve performance. 
+ +Large inventories tend to exhibit a hierarchical structure, either by an explicit tree-like arrangement of the labels (coarse labels at the top, fine-grained at the bottom), or implicitly through the + +![](images/2908642836cdb6da96db2c3e8b1813c2ef39543bfec33b605d055cb44d718829.jpg) +Figure 1: Tree embedded in hyperbolic space. Items at the top of the hierarchy are placed near the origin of the space, and lower items near the boundary. Moreover, the hyperbolic distance (Eq. 1) between sibling nodes resembles the one through the common ancestor, analogous to the distance in the tree. That is $d_{\mathbb{D}}(\mathrm{D},\mathrm{E}) \approx d_{\mathbb{D}}(\mathrm{D},\mathrm{B}) + d_{\mathbb{D}}(\mathrm{B},\mathrm{E})$ . + +label distribution in the dataset (coarse labels appear more frequently than fine-grained ones). Prior work has integrated only explicit hierarchical information by formulating a hierarchy-aware loss (Murty et al., 2018; Xu and Barbosa, 2018) or by representing instances and labels in a joint Euclidean embedding space (Shimaoka et al., 2017; Abhishek et al., 2017). However, the resulting space is hard to interpret, and these methods fail to capture implicit relations in the label inventory. Hyperbolic space is naturally equipped for embedding symbolic data with hierarchical structures (Nickel and Kiela, 2017). Intuitively, that is because the amount of space grows exponentially as points move away from the origin. This mirrors the exponential growth of the number of nodes in trees with increasing distance from the root (Cho et al., 2019) (see Figure 1). + +In this work, we propose a fully hyperbolic neural model for fine-grained entity typing. Noticing a perfect match between hierarchical label inventories in the linguistic task and the benefits of hyperbolic spaces, we endow a classification model with + +a suitable geometry to capture this fundamental property of the data distribution. 
By virtue of the hyperbolic representations, the proposed approach automatically infers the latent hierarchy arising from the class distribution and achieves a meaningful and interpretable organization of the label space. This arrangement captures implicit hyponymic relations (is-a) in the inventory and enables the model to excel at fine-grained classification. To the best of our knowledge, this work is the first to apply hyperbolic geometry from beginning to end to perform multi-label classification on real NLP datasets.

Recent work has proposed hyperbolic neural components, such as word embeddings (Tifrea et al., 2019), recurrent neural networks (Ganea et al., 2018) and attention layers (Gulcehre et al., 2019). However, researchers have incorporated these isolated components into neural models, whereas the rest of the layers and algorithms operate under Euclidean assumptions. This impedes models from fully exploiting the properties of hyperbolic geometry. Furthermore, there are different analytic models of hyperbolic space, and not all previous work operates in the same one, which hinders their combination, and hampers their adoption for downstream tasks (e.g. Tifrea et al. (2019) learn embeddings in the Poincaré model, Gulcehre et al. (2019) aggregate points in the Klein model, or Nickel and Kiela (2018) perform optimization in the Lorentz model). We address these issues. Our model encodes textual inputs, applies a novel attention mechanism, and performs multi-class multi-label classification, executing all operations in the Poincaré model of hyperbolic space (§4).
The hyperbolic system has competitive performance when compared to an ELMo model (Peters et al., 2018) and a BERT model (Devlin et al., 2019) on very fine-grained types, with remarkable reduction of the parameter size ( $\S 6$ ). Instead of relying on large pre-trained models, we impose a suitable inductive bias by choosing an adequate metric space to embed the data, which does not introduce extra burden on the parameter footprint. + +By means of the exponential and logarithmic maps (explained in §2) we are able to mix hyperbolic and Euclidean components into one model, + +![](images/675f52d8f2e3d6946edf5fc9fa664c20daad62986e3b5523a11e349010574e38.jpg) +Figure 2: Visualization of Möbius operations. Left: Möbius addition (noncommutative). Right: Matrix-vector multiplication and pointwise non-linearity. + +![](images/8381828cdaa02b3d170d70228701f19bc9a064029fd846f68890ef49d57fea28.jpg) + +aiming to exploit their strengths at different levels of the representation. We perform a thorough ablation that allows us to understand the impact of each hyperbolic component in the final performance of the system (§6.1.1 and §6.1.2), and showcases its ease of integration with Euclidean layers. + +# 2 Hyperbolic Neural Networks + +In this section we briefly recall the necessary background on hyperbolic neural components. The terminology and formulas used throughout this work follow the formalism of Möbius gyrovector spaces (Ungar, 2008a,b), and the definitions of hyperbolic neural components of Ganea et al. (2018). For more information about Riemannian geometry and Möbius operations see Appendix A and B. In the following, $\langle \cdot ,\cdot \rangle$ and $\| \cdot \|$ are the inner product and norm inherited from the Euclidean space. + +Hyperbolic space: It is a non-Euclidean space with constant negative curvature. We adopt the Poincaré ball model of hyperbolic space (Cannon et al., 1997). 
In the general $n$-dimensional case, it becomes $\mathbb{D}^n = \{x \in \mathbb{R}^n \mid \|x\| < 1\}$. The Poincaré model is a Riemannian manifold equipped with the Riemannian metric $g_x^{\mathbb{D}} = \lambda_x^2 g^E$, where $\lambda_x := \frac{2}{1 - \|x\|^2}$ is called the conformal factor and $g^E = \mathrm{I}_n$ is the Euclidean metric tensor. The distance between two points $x, y \in \mathbb{D}^n$ is given by:

$$
d_{\mathbb{D}}(x, y) = \cosh^{-1}\left(1 + 2\frac{\|x - y\|^2}{(1 - \|x\|^2)(1 - \|y\|^2)}\right) \tag{1}
$$

Möbius addition: It is the hyperbolic analogue of vector addition in Euclidean space. Given two points $x, y \in \mathbb{D}^n$, it is defined as:

$$
x \oplus y = \frac{\left(1 + 2\langle x, y\rangle + \|y\|^2\right)x + \left(1 - \|x\|^2\right)y}{1 + 2\langle x, y\rangle + \|x\|^2\|y\|^2} \tag{2}
$$

Möbius matrix-vector multiplication: Given a linear map $M: \mathbb{R}^n \to \mathbb{R}^m$, which we identify with its matrix representation, and a point $x \in \mathbb{D}^n$ with $Mx \neq 0$, it is defined as:

$$
M \otimes x = \tanh\left(\frac{\|Mx\|}{\|x\|}\tanh^{-1}(\|x\|)\right)\frac{Mx}{\|Mx\|} \tag{3}
$$

Pointwise non-linearity: If we model it as $\varphi: \mathbb{R}^n \to \mathbb{R}^n$, then its Möbius version $\varphi^{\otimes}$ can be applied using the same formulation as the matrix-vector multiplication. A visualization of the aforementioned operations can be seen in Figure 2.

![](images/5d849a2c017d81821e3d9cb490cd3d79ae2a75c9fe3f85ccaaaf739a550b4418.jpg)
Figure 3: Overview of the proposed model. The mention encoder extracts word and char-level entity representations. The context encoder is based on a bidirectional-GRU with attention. The outputs of both encoders are concatenated and passed to a classifier based on a multinomial logistic regression.
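The distance of Eq. 1 and the Möbius operations of Eqs. 2-3 can be checked numerically with NumPy. This is a minimal sketch; production code would clip points near the ball boundary for numerical stability:

```python
import numpy as np

def poincare_dist(x, y):
    # Eq. (1): arcosh-based distance in the Poincaré ball
    num = 2.0 * np.sum((x - y) ** 2)
    den = (1.0 - np.sum(x * x)) * (1.0 - np.sum(y * y))
    return np.arccosh(1.0 + num / den)

def mobius_add(x, y):
    # Eq. (2): Möbius addition (noncommutative)
    xy = float(np.dot(x, y))
    nx, ny = float(np.sum(x * x)), float(np.sum(y * y))
    return ((1 + 2 * xy + ny) * x + (1 - nx) * y) / (1 + 2 * xy + nx * ny)

def mobius_matvec(M, x):
    # Eq. (3): Möbius matrix-vector product (assumes Mx != 0, 0 < ||x|| < 1)
    Mx = M @ x
    nx, nMx = np.linalg.norm(x), np.linalg.norm(Mx)
    return np.tanh(nMx / nx * np.arctanh(nx)) * Mx / nMx

x, y = np.array([0.1, 0.2]), np.array([-0.3, 0.05])
d = poincare_dist(x, y)               # symmetric, zero iff x == y
z = mobius_add(x, mobius_add(-x, y))  # gyrogroup left cancellation: z == y
```

Two sanity checks follow directly from the gyrovector formalism: the identity matrix acts as identity under $\otimes$, and $x \oplus (\ominus x \oplus y) = y$ (left cancellation).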
By combining these operations we obtain a one-layer feed-forward neural network (FFNN) in hyperbolic space, described as $y = \varphi^{\otimes}(M \otimes x \oplus b)$ with $M \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{D}^m$ as trainable parameters. Note that the parameter $b$ lies in the hyperbolic space, thus its updates during training need to be corrected for this geometry.

Exponential and logarithmic maps: For each point $x \in \mathbb{D}^n$, let $T_x\mathbb{D}^n$ denote the associated tangent space, which is always a subset of Euclidean space (Liu et al., 2019). We make use of the exponential map $\exp_x: T_x\mathbb{D}^n \to \mathbb{D}^n$ and the logarithmic map $\log_x: \mathbb{D}^n \to T_x\mathbb{D}^n$ to map points in the hyperbolic space to the Euclidean space, and vice-versa. At the origin of the space, they are given for $v \in T_0\mathbb{D}^n \setminus \{0\}$ and $y \in \mathbb{D}^n \setminus \{0\}$ by:

$$
\exp_{\mathbf{0}}(v) = \tanh(\|v\|)\frac{v}{\|v\|} \tag{4}
$$

$$
\log_{\mathbf{0}}(y) = \operatorname{arctanh}(\|y\|)\frac{y}{\|y\|}
$$

To map a point $y \in \mathbb{D}^n$ onto the Euclidean space we apply $\log_{\mathbf{0}}(y)$. Conversely, to map a point $v \in \mathbb{R}^n$ onto the hyperbolic space, we assume $\mathbb{R}^n = T_0\mathbb{D}^n$ and apply $\exp_{\mathbf{0}}(v)$. This allows us to mix hyperbolic and Euclidean neural layers, as shown in §6.1.2.

# 3 Fine-grained Entity Typing

Given a context sentence $s$ containing an entity mention $m$, the goal of entity typing is to predict the correct type labels $t_m$ that describe $m$ from a type inventory $T$. The ground-truth type set $t_m$ may contain multiple types, making the task a multi-class multi-label classification problem.

For fine-grained entity typing, the type inventory $T$ tends to contain hundreds to thousands of labels.
Encoding hierarchical information from large type inventories has been proven critical to improve performance (López et al., 2019). Thus we hypothesize that our proposed hyperbolic model will benefit from this representation. + +# 4 Hyperbolic Classification Model + +In this section we propose a general hyperbolic neural model for classification with sequential data as input. The building blocks are defined in a generic manner such that they can be applied to different tasks, or integrated with regular Euclidean layers. Our proposed architecture resembles recent neural models applied to entity typing (Choi et al., 2018). For the encoders we employ the neural networks introduced in Ganea et al. (2018), we propose a novel attention mechanism operating entirely in the Poincaré model, and we extend the hyperbolic classifier to multi-class multi-label setups. An overview of the model can be seen in Figure 3. + +# 4.1 Mention Encoder + +To represent the mention, we combine word and char-level features, similar to Lee et al. (2017). Given a sequence of $k$ tokens in a mention span, we represent them using pre-trained word embeddings $w_{i}\in \mathbb{D}^{n}$ which we assume to lie in hyperbolic space. We apply a hyperbolic FFNN, described as: + +$$ +m _ {i} = \tanh ^ {\otimes} \left(W ^ {M} \otimes w _ {i} \oplus b ^ {M}\right) \tag {5} +$$ + +with $m_i \in \mathbb{D}^{d_M}$ , and where $W^M \in \mathbb{R}^{d_M \times n}$ , $b^M \in \mathbb{D}^{d_M}$ are parameters of the model. We combine the resulting $m_1, \ldots, m_k$ into a single mention representation $\mathbf{m} \in \mathbb{D}^{d_M}$ by computing a weighted sum of the token representations in hyperbolic space with the attention mechanism explained in §4.4. + +Moreover, we extract features from the sequence of characters in the mention span with a recurrent neural network (RNN) (Lample et al., 2016). We represent each character with a char-embedding $c_{i} \in \mathbb{D}^{d_{C}}$ that we train in the Poincaré ball. 
An RNN operating in hyperbolic space is defined by:

$$
h_{t+1} = \varphi^{\otimes}\left(W^C \otimes h_t \oplus U^C \otimes c_t \oplus b^C\right) \tag{6}
$$

where $W^C, U^C \in \mathbb{R}^{d_C \times d_C}$, $b^C, h_t \in \mathbb{D}^{d_C}$, and $\varphi$ is a pointwise non-linearity function. Finally, we obtain a single representation $\mathbf{c} \in \mathbb{D}^{d_C}$ by taking the midpoint of the states $h_i$ using Equation 9.

# 4.2 Context Encoder

To encode the context we apply a hyperbolic version of gated recurrent units (GRU) (Cho et al., 2014) proposed in Ganea et al. (2018). Given a sequence of $l$ tokens, we represent them with pre-trained word embeddings $w_i \in \mathbb{D}^n$, and apply a forward and a backward GRU, producing contextualized representations $\overrightarrow{h_i}, \overleftarrow{h_i} \in \mathbb{D}^{d_S}$ for each token. We concatenate the resulting states into a single embedding $s_i = \mathrm{concat}(\overrightarrow{h_i}, \overleftarrow{h_i})$ (see concat in §4.3), where $s_i \in \mathbb{D}^{2d_S}$. Ultimately, we combine $s_1, \ldots, s_l$ into a single context representation $\mathbf{s} \in \mathbb{D}^{2d_S}$ with the distance-based attention mechanism.

# 4.3 Concatenation

If we model the concatenation of two vectors in the Poincaré ball as appending one to the other, this does not guarantee that the result remains inside the ball. Thus, we apply a generalized version of the concatenation operation. For $x \in \mathbb{D}^k$ and $y \in \mathbb{D}^l$, $\operatorname{concat}: \mathbb{D}^k \times \mathbb{D}^l \to \mathbb{D}^n$ is defined as:

$$
\operatorname{concat}(x, y) = M_1 \otimes x \oplus M_2 \otimes y \oplus b \tag{7}
$$

where $M_1 \in \mathbb{R}^{n \times k}$, $M_2 \in \mathbb{R}^{n \times l}$, $b \in \mathbb{D}^n$ are parameters of the model.
In Euclidean architectures, the concatenation of vectors is usually followed by a linear layer, which takes the form of Equation 7 when written explicitly.

# 4.4 Distance-based Attention

Previous approaches to hyperbolic attention (Gulcehre et al., 2019; Chami et al., 2019) require mappings of points to different spaces, which hinders their adoption into neural models. We propose a novel attention mechanism in the Poincaré model of hyperbolic space. We cast attention as a weighted sum of vectors in this geometry, without requiring any extra mapping of the inputs. In this manner, we make consistent use of the same analytical model of hyperbolic space across all components, which eases their integration.

To obtain the attention weights, we exploit the hyperbolic distance between points (Gulcehre et al., 2019). Given a sequence of states $x_i \in \mathbb{D}^n$, we combine them with a trainable position embedding $p_i \in \mathbb{D}^n$ such that $r_i = x_i \oplus p_i$. We use addition as the standard method to enrich the states with positional information (Vaswani et al., 2017; Devlin et al., 2019). We apply two different linear transformations on $r_i$ to obtain vectors $q_i$ and $k_i$, both lying in the Poincaré ball. We compute the distance between these two points and finally obtain the weight by applying a softmax over the sequence in the following manner:

$$
\begin{array}{l}
q_i = W^Q \otimes r_i \oplus b^Q, \quad k_i = W^K \otimes r_i \oplus b^K \\
\alpha(q_i, k_i) = \operatorname{softmax}\left(-\beta d_{\mathbb{D}}(q_i, k_i)\right)
\end{array} \tag{8}
$$

where $W^Q, W^K \in \mathbb{R}^{n \times n}$, $b^Q, b^K \in \mathbb{D}^n$ and $\beta \in \mathbb{R}$ are parameters of the model. Attention weights will be higher for elements with $q_i$ and $k_i$ vectors placed close to each other.
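The weighting step of Eq. 8 can be sketched in NumPy, assuming the $q_i$ and $k_i$ vectors have already been produced by the Möbius linear maps (which are omitted here for brevity):

```python
import numpy as np

def poincare_dist(x, y):
    # arcosh-based distance in the Poincaré ball (Eq. 1)
    num = 2.0 * np.sum((x - y) ** 2)
    den = (1.0 - np.sum(x * x)) * (1.0 - np.sum(y * y))
    return np.arccosh(1.0 + num / den)

def distance_attention(qs, ks, beta=1.0):
    # alpha_i = softmax_i(-beta * d_D(q_i, k_i)): positions whose q and k
    # lie close together in the ball receive higher weight (Eq. 8)
    scores = np.array([-beta * poincare_dist(q, k) for q, k in zip(qs, ks)])
    e = np.exp(scores - scores.max())  # numerically stable softmax
    return e / e.sum()

qs = [np.array([0.1, 0.0]), np.array([0.4, 0.0])]
ks = [np.array([0.1, 0.0]), np.array([-0.4, 0.0])]
w = distance_attention(qs, ks)  # w[0] > w[1]: the first q/k pair coincides
```

The weights sum to one by construction, and the pair with zero hyperbolic distance dominates, matching the intuition stated in the text.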
The positional embeddings are trained along with the model as a hyperbolic parameter. For the context encoder, they reflect relative distances between the $i$-th word and the entity mention. For the mention encoder, they represent the absolute position of the word inside the mention span.

To aggregate the points as a weighted summation in hyperbolic space we propose to apply the Möbius midpoint, which obeys many of the properties that we expect from a weighted average in Euclidean space (Ungar (2010), Theorem 4.6):

$$
m = \frac{1}{2} \otimes \frac{\sum_{i=1}^{n} \alpha_i \gamma(x_i)^2 x_i}{\sum_{i=1}^{n} \alpha_i \left(\gamma(x_i)^2 - \frac{1}{2}\right)} \tag{9}
$$

where $x_i$ are the states in the sequence, $\alpha_i$ the weights corresponding to each state, and $\gamma(x_i)$ the Lorentz factors. By applying the Möbius midpoint we develop an attention mechanism that operates entirely in the Poincaré model of hyperbolic space. Detailed formulas and experimental observations can be found in Appendix D.

# 4.5 Classification in the Poincaré Ball

The input of the classifier is the concatenation of mention and context features. To perform multi-class classification in the Poincaré ball, we adapt the generalized multinomial logistic regression (MLR) from Ganea et al. (2018). Given $K$ classes and $k \in \{1, \dots, K\}$, $p_k \in \mathbb{D}^m$, $a_k \in T_{p_k}\mathbb{D}^m \setminus \{0\}$, the formula for the hyperbolic MLR is:

$$
p(y = k \mid x) \propto f\left(\lambda_{p_k}\|a_k\| \sinh^{-1}\left(\frac{2\langle -p_k \oplus x, a_k\rangle}{\left(1 - \|{-p_k} \oplus x\|^2\right)\|a_k\|}\right)\right) \tag{10}
$$

where $x \in \mathbb{D}^m$, and $p_k$ and $a_k$ are trainable parameters. It is based on formulating logits as distances to margin hyperplanes.
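The gyromidpoint of Eq. 9 can be sketched directly, assuming the standard gyrovector definitions of the Lorentz factor, $\gamma_x = 1/\sqrt{1 - \|x\|^2}$, and Möbius scalar multiplication, $r \otimes x = \tanh(r \cdot \operatorname{arctanh}\|x\|)\, x/\|x\|$ (a sketch, not the paper's implementation):

```python
import numpy as np

def gamma(x):
    # Lorentz factor in the Poincaré ball (standard gyrovector definition)
    return 1.0 / np.sqrt(1.0 - np.sum(x * x))

def mobius_scalar(r, x):
    # r ⊗ x = tanh(r · arctanh(||x||)) · x / ||x||
    n = np.linalg.norm(x)
    return x if n == 0 else np.tanh(r * np.arctanh(n)) * x / n

def mobius_midpoint(xs, alphas):
    # Eq. (9): weighted gyromidpoint of states xs with attention weights alphas
    num = sum(a * gamma(x) ** 2 * x for a, x in zip(alphas, xs))
    den = sum(a * (gamma(x) ** 2 - 0.5) for a, x in zip(alphas, xs))
    return mobius_scalar(0.5, num / den)

x = np.array([0.3, 0.1])
m1 = mobius_midpoint([x], [1.0])           # midpoint of one point is itself
m2 = mobius_midpoint([x, -x], [0.5, 0.5])  # a symmetric pair averages to the origin
```

Both checks are exact consequences of Eq. 9: for a single point the inner ratio reduces to $2x/(1+\|x\|^2)$, whose $\tfrac{1}{2}\otimes$ scaling recovers $x$, and for an equally weighted symmetric pair the numerator cancels.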
The hyperplanes in hyperbolic space are defined by the union of all geodesics passing through $p_k$ and orthogonal to $a_k$ . + +Although this formulation was made for one-label classification, the underlying notion also holds for multi-label setups. In that case, we need to be able to select several classes by considering the distances (logits) to all hyperplanes. To achieve that we employ the sigmoid function as $f$ , instead of a softmax, and predict the given class if $p(y = k|x) > 0.5$ . More details in Appendix E. + +Figure 4 shows examples of the hyperbolic definition of multiple hyperplanes, which follow the curvature of the space. + +# 4.6 Optimization + +With the proposed classification model, we aim to minimize variants of the binary cross-entropy loss function as the training objective. + +The model has trainable parameters in both Euclidean and hyperbolic space. We apply the Geoopt implementation of Riemannian Adam (Kochurov et al., 2020) as a Riemannian adaptive optimization + +method (Bécigneul and Ganea, 2019) to carry out a gradient-based update of the parameters in their respective geometry. + +# 5 Experiments + +We evaluate the proposed hyperbolic model on two different datasets for fine-grained entity typing, and compare to Euclidean baselines as well as state-of-the-art models. + +# 5.1 Data + +For analysis and evaluation of the model, we focus on the Ultra-Fine entity typing dataset (Choi et al., 2018). It contains 10,331 target types defined as free-form noun phrases and divided in three levels of granularity: coarse, fine and ultra-fine. Besides this segregation, the dataset does not provide any further explicit information about the relations among the types. The data consist of 6,000 crowdsourced examples and 6M training samples in the open-source version, automatically extracted with distant supervision. Our evaluation is done on the original crowdsourced dev/test splits. 
+ +To gain a better understanding of the proposed model, we also experiment on the OntoNotes dataset (Gillick et al., 2014) as it is a standard benchmark for entity typing. + +# 5.2 Setup + +The MLR classifier operates in a hyperbolic space of $m$ dimensions with $m = d_{M} + d_{C} + 2d_{S}$ . By setting different values, we experiment with three models: BASE ( $m = 100$ ), LARGE ( $m = 250$ ) and XLARGE ( $m = 500$ ). + +As word embeddings we employ Poincaré GloVe embeddings (Tifrea et al., 2019), which are pretrained in the Poincaré model. Hence, the input to the encoders is already in hyperbolic space and all operations can be performed in this geometry. These embeddings are not updated during training. Low values of dropout are used since the model was very sensitive to this parameter given the behaviour of the hyperbolic distance. + +On the Ultra-Fine dataset, for each epoch, we train over the entire training set, and we run extra iterations over the crowdsourced split before evaluating. In this way, the model benefits from the large amount of noisy, automatically-generated data, and is fine-tuned with high-quality crowdsourced samples. As previous work (Xiong et al., + +
| Model | Total P | Total R | Total F1 | Coarse P | Coarse R | Coarse F1 | Fine P | Fine R | Fine F1 | Ultra-Fine P | Ultra-Fine R | Ultra-Fine F1 | # Params |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DENOISED | 50.7 | 33.1 | 40.1 | 66.9 | 80.7 | 73.2 | 41.7 | 46.2 | 43.8 | 45.6 | 17.4 | 25.2 | 31.0M |
| BERT | 51.6 | 32.8 | 40.1 | 67.4 | 80.6 | 73.4 | 41.6 | 54.7 | 47.3 | 46.3 | 15.6 | 23.4 | 110.0M |
| LABELGCN | 49.3 | 28.1 | 35.8 | 66.2 | 68.8 | 67.5 | 43.9 | 40.7 | 42.2 | 42.4 | 14.2 | 21.3 | 5.1M |
| MULTITASK | 48.0 | 23.0 | 31.0 | 60.0 | 61.0 | 61.0 | 40.0 | 38.0 | 39.0 | 42.0 | 8.0 | 14.0 | 6.1M |
| HY BASE | 48.5 | 29.1 | 36.3 | 64.4 | 72.2 | 68.1 | 39.4 | 38.5 | 38.9 | 39.3 | 14.5 | 21.2 | 1.8M |
| HY LARGE | 42.3 | 33.5 | 37.4 | 63.6 | 72.1 | 67.6 | 36.3 | 48.3 | 41.4 | 33.3 | 19.7 | 24.7 | 4.6M |
| HY XLARGE | 43.4 | 34.2 | 38.2 | 61.4 | 73.9 | 67.1 | 35.7 | 46.6 | 40.4 | 36.5 | 19.9 | 25.7 | 9.5M |
+ +Table 1: Macro-averaged P, R and ${\mathrm{F}}_{1}$ on the Ultra-Fine dev set for different baselines and models. We only reproduced LABELGCN. Values for other baselines are taken from the original publications. + +2019; Onoe and Durrett, 2019), we optimize the multi-task objective proposed by Choi et al. (2018). + +For evaluation we report Macro-averaged and Micro-averaged $\mathrm{F}_1$ metrics computed from the precision/recall scores over the same three granularities established by Choi et al. (2018). For all models we optimize Total Macro-averaged $\mathrm{F}_1$ on the validation set, and evaluate on the test set. Following Ganea et al. (2018), we report the average of three runs given the highly non-convex spectrum of hyperbolic neural networks. Hyperparameters are detailed in Appendix F along with other practical aspects to ensure numerical stability. + +# 5.3 Baselines + +Euclidean baseline: We replace all operations of the hyperbolic model by their Euclidean counterpart. To map the Poincaré GloVe embeddings to the Euclidean space we apply $\log_0$ . We do not apply any kind of normalization or correction over the weights to circumscribe them into the unit ball. On the contrary, we grant them freedom over the entire Euclidean space to establish a fair comparison. + +Multi-task: Model proposed by Choi et al. (2018), along with the Ultra-Fine dataset. + +LabelGCN: Model introduced by Xiong et al. (2019). A label-relational inductive bias is imposed by means of a graph propagation layer that encodes label co-occurrence statistics. + +BERT: We compare to the setup of Onoe and Durrett (2019) in which BERT (Devlin et al., 2019) is adapted for this task and fine-tuned on the crowdsourced train split. + +Denoised: An ELMo-based model (Peters et al., 2018) proposed by Onoe and Durrett (2019) trained on raw and denoised distantly-labeled data. 
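For reference, the macro-averaged metrics reported in Table 1 can be computed from per-example type sets. This is a sketch of one common convention (per-example precision averaged over examples with at least one prediction, recall over examples with at least one gold type); the official scorer may differ in details:

```python
def macro_prf1(gold_sets, pred_sets):
    # Per-example precision/recall of predicted vs. gold type sets,
    # macro-averaged; F1 is the harmonic mean of the two averages.
    ps, rs = [], []
    for gold, pred in zip(gold_sets, pred_sets):
        hit = len(gold & pred)
        if pred:
            ps.append(hit / len(pred))
        if gold:
            rs.append(hit / len(gold))
    p = sum(ps) / len(ps) if ps else 0.0
    r = sum(rs) / len(rs) if rs else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# One example: predicting {person} against gold {person, artist} gives
# perfect precision but half recall.
p, r, f1 = macro_prf1([{"person", "artist"}], [{"person"}])
```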
+ +# 6 Results and Discussion + +Following previous work (Choi et al., 2018; Onoe and Durrett, 2019), we report results on the devel + +opment set in Table 1. All hyperbolic models outperform MULTITASK and LABELGCN baselines on Total Macro $\mathrm{F_1}$ . DENOISED and BERT systems, based on large pre-trained models, show the best Total performance. Nonetheless, HY XLARGE has a competitive performance, and surpasses both systems on ultra-fine $\mathrm{F_1}$ . In the hyperbolic model, fine-grained types are placed near the boundary of the ball, where the amount of space grows exponentially. Furthermore, the underlying structure of the type inventory is hierarchical, thus the hyperbolic definition of the hyperplanes is well-suited to improve the classification in this case (see comparison with Euclidean classifiers on Figure 4). These properties combined enable the hyperbolic model to excel at classifying hierarchical labels, with outstanding improvements on very fine-grained types. + +The reduction of the parameter size is also remarkable: $70\%$ and $91\%$ versus DENOISED and BERT respectively. This emphasizes the importance of choosing a suitable metric space that fits the data distribution (hierarchical in this case) as a powerful and efficient inductive bias. Through adequate tools and formulations, we are able to exploit this bias without introducing an overload + +![](images/0e80b9bf7136c0225710da074b86df2cf0926969c8381c85c93a9886c72a3d70.jpg) +(a) Euclidean Space. +Figure 4: Classification hyperplanes for the types person (red), artist (blue) and musician (green). The hyperbolic formulation of the hyperplanes is better suited for hierarchical inventories. + +![](images/5d7e26e90024f92917c22fa80e05a6a99a3269950304db9445a0733c899b4e5c.jpg) +(b) Hyperbolic Space. + +
| person | $d_{\mathbb{D}}$ | artist | $d_{\mathbb{D}}$ | musician | $d_{\mathbb{D}}$ |
|---------|------|----------|------|----------|------|
| artist  | 0.26 | musician | 0.25 | singer   | 0.24 |
| author  | 0.28 | actor    | 0.26 | actor    | 0.25 |
| actor   | 0.30 | person   | 0.26 | artist   | 0.25 |
| speaker | 0.30 | author   | 0.26 | composer | 0.27 |
| leader  | 0.30 | singer   | 0.28 | band     | 0.27 |
on the parameter cost.

The correspondence of results between HY BASE and LABELGCN suggests that both models capture similar information. LABELGCN requires label co-occurrence statistics represented as a weighted graph, from which a hierarchy can be easily derived (Krioukov et al., 2010). The similarity of results indicates that the hyperbolic model implicitly encodes the latent hierarchical information in the label co-occurrences, without additional inputs or the burden of the graph layer.

To shed light on this aspect, we inspect the points $p_k$ learned by HY BASE to define the hyperplanes of Equation 10. Table 2 shows the types corresponding to the closest points to the label person and its subtypes, measured by hyperbolic distance. The types are highly correlated, given that they often co-occur in similar contexts. Moreover, the model captures hyponymic (is-a) relations present in the label co-occurrences. Analogous behaviour is observed for other types (see the tables in Appendix G). The inductive bias given by the hyperbolic geometry allows the model to capture the hierarchy, deriving a meaningful and interpretable representation of the label space: coarse labels lie near the origin, fine-grained labels lie near the boundary, and hyponymic relations are preserved. It is also noteworthy that the model learns these relations automatically, without requiring the explicit data encoded in the graph.

# 6.1 Comparison of the Spaces

A comparison of the metric spaces for different models on the test set is shown in Table 3. The hyperbolic model outperforms its Euclidean variants in all settings, and this trend holds even in high-dimensional spaces (500 dimensions for XLARGE). Since the label inventory exhibits a clearly hierarchical structure, it is particularly well suited to the hyperbolic classification method.

Table 2: Closest $p_k$ points in the Poincaré ball to different Ultra-Fine entity types.
The model is able to capture hierarchical relations such as singer is-a musician is-a artist is-a person. + +
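The nearest-point analysis of Table 2 only requires the closed-form Poincaré distance. A minimal numpy sketch (the 2D type positions below are invented for illustration; the learned $p_k$ are higher-dimensional):

```python
import numpy as np

def poincare_dist(x, y):
    """Distance in the Poincare ball (c = 1); this closed form is
    equivalent to 2 * artanh(|| -x (+) y ||) with Moebius addition."""
    sq = np.sum((x - y) ** 2)
    denom = (1 - np.sum(x ** 2)) * (1 - np.sum(y ** 2))
    return np.arccosh(1 + 2 * sq / denom)

# Invented positions: the coarse type sits near the origin, finer
# types are pushed toward the boundary, mirroring the observed layout.
types = {
    "person":   np.array([0.10, 0.05]),
    "artist":   np.array([0.45, 0.30]),
    "musician": np.array([0.60, 0.45]),
}
anchor = types["musician"]
ranked = sorted((k for k in types if k != "musician"),
                key=lambda k: poincare_dist(anchor, types[k]))
```

With these toy points, "artist" comes out closer to "musician" than "person" does, reproducing the kind of is-a ordering reported in Table 2.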
| Model | Coarse Ma | Coarse Mi | Fine Ma | Fine Mi | Ultra Ma | Ultra Mi |
|-----------|------|------|------|------|------|------|
| BASE HY   | 69.6 | 67.3 | 42.0 | 39.7 | 21.2 | 19.1 |
| BASE EU   | 68.5 | 66.1 | 39.8 | 36.5 | 17.8 | 16.1 |
| LARGE HY  | 67.9 | 65.4 | 38.4 | 36.3 | 24.3 | 22.3 |
| LARGE EU  | 67.1 | 63.8 | 36.7 | 34.7 | 22.0 | 19.7 |
| XLARGE HY | 69.1 | 66.2 | 39.7 | 37.2 | 26.1 | 24.0 |
| XLARGE EU | 67.9 | 65.4 | 37.8 | 35.3 | 22.2 | 20.0 |
The hyperbolic model brings considerable gains as the granularity becomes finer: $5.1\%$ and $16.2\%$ relative improvement in fine and ultra-fine Macro $\mathrm{F_1}$, respectively, for the BASE model over its Euclidean counterpart. We also observe that as the size of the model increases, the Euclidean baseline becomes more competitive on ultra-fine. This is because, with more dimensions, the Euclidean model gains enough capacity to accommodate the separation hyperplanes, reducing the gap.

It is noticeable that the BASE model outperforms the larger ones on the coarse and fine granularities, which we attribute to the larger models overfitting given the low dropout applied. Moreover, the Euclidean and hyperbolic models exhibit similar performance on the coarse granularity when compared to each other. A possible explanation is that the separation hyperplanes for these labels are located closer to the origin of the space. In this region the two spaces behave alike in terms of distance calculation, and this similarity is reflected in the results as well.

# 6.1.1 Word Embeddings Ablation

The input to both the Euclidean and hyperbolic models are Poincaré GloVe embeddings, which are originally trained in hyperbolic space (Tifrea et al., 2019). This might favor the hyperbolic model, despite the application of the $\log_0$ map in the Euclidean case. We therefore replace the hyperbolic embeddings with regular GloVe embeddings (Pennington et al., 2014), and use $\exp_0$ in the hyperbolic model to project them into the ball.

Table 4 shows that the tendency of the BASE hyperbolic model outperforming the Euclidean one

Table 3: Results on Ultra-Fine test set for macro and micro $\mathbf{F}_1$ across metric spaces and dimensions.
| Model (BASE) | Coarse Ma | Coarse Mi | Fine Ma | Fine Mi | Ultra Ma | Ultra Mi |
|----------|------|------|------|------|------|------|
| HY GLOVE | 68.7 | 66.6 | 41.5 | 38.8 | 22.1 | 20.1 |
| EU GLOVE | 67.8 | 65.3 | 39.7 | 36.0 | 20.7 | 18.6 |
+ +Table 4: Test results on Ultra-Fine. Poincaré GloVe embeddings are replaced by regular GloVe embeddings. + +
| Model | Coarse Ma | Coarse Mi | Fine Ma | Fine Mi | Ultra Ma | Ultra Mi |
|--------------|------|------|------|------|------|------|
| HY BASE      | 69.6 | 67.3 | 42.0 | 39.7 | 21.2 | 19.1 |
| EU Encoder   | 68.8 | 66.3 | 41.7 | 38.9 | 22.0 | 20.1 |
| EU Attention | 68.9 | 66.4 | 40.8 | 38.0 | 20.1 | 18.4 |
| EU Concat    | 68.6 | 66.1 | 40.6 | 37.5 | 21.8 | 19.8 |
| EU MLR       | 69.2 | 67.1 | 40.8 | 38.0 | 17.3 | 15.8 |
holds, and that the improvement does not come from the embeddings. This also showcases how the hyperbolic model can be easily integrated with regular word embeddings.

# 6.1.2 Component Ablation

To analyze the contribution of the different hyperbolic components, we perform an ablation study on the BASE model. We divide the system into encoder, attention (in both the mention and context encoders), concatenation, and MLR, and replace them, one at a time, by their Euclidean counterparts. Note that when Euclidean and hyperbolic components are mixed, we convert the internal representations from one manifold to the other with the $\exp_0$ and $\log_0$ maps.

As Table 5 shows, the MLR is the component that contributes the most to ultra-fine classification. The hierarchical structure of the type inventory, combined with the hyperbolic definition of the hyperplanes, is the reason for this (see Figure 4).

Hyperbolic attention and concatenation are relevant for coarse and fine-grained classification (the biggest drop appears when they are removed), but do not play a major role at the ultra-fine granularity.

Finally, the encoders do not benefit from the hyperbolic representation. We attribute this to the model not capturing tree-like relations among the input tokens that could be exploited for the task.

This ablation suggests that the main benefits of hyperbolic layers arise when they are incorporated at deeper levels of representation in the model, and not over low-level features or raw text.

Computing time: Möbius operations are more expensive than their Euclidean counterparts. In our experiments, the hyperbolic encoder was twice as slow, and the MLR 1.5 times as slow, as their Euclidean versions.

Table 5: Results on Ultra-Fine test set. Ablation of the hyperbolic model, replacing one component by its Euclidean counterpart at a time.
| Model | Coarse Ma | Coarse Mi | Fine Ma | Fine Mi | Ultra Ma | Ultra Mi |
|----------|------|------|------|------|------|---|
| BASE HY  | 82.0 | 80.2 | 41.8 | 41.4 | 23.9 | – |
| BASE EU  | 81.8 | 80.3 | 37.7 | 36.1 | 17.5 | – |
| LARGE HY | 83.1 | 81.3 | 42.0 | 41.4 | 24.0 | – |
| LARGE EU | 82.4 | 80.9 | 38.2 | 36.7 | 18.9 | – |
# 6.2 OntoNotes Dataset

To further understand the capabilities of the proposed model, we also evaluate on the OntoNotes dataset (Gillick et al., 2014). In this case we apply the standard binary cross-entropy loss, since fine-grained labels are scarce in this dataset. Following previous work (Xiong et al., 2019), we train on the dataset augmented by Choi et al. (2018). Results for the three granularities for the BASE and LARGE models are presented in Table 6. The hyperbolic models outperform the Euclidean baselines in both cases, and the difference is noticeable for fine and ultra-fine (42.0 vs 38.2 and 24.0 vs 18.9 on Macro $\mathbf{F}_1$ for the LARGE model), in accordance with the results on Ultra-Fine.

We report a comparison with neural systems in Table 7. The hyperbolic model, without requiring the explicit hierarchy provided in this dataset, achieves a competitive performance. Nonetheless, the advantages of the hyperbolic model are mitigated by the low multiplicity of fine-grained labels and the shallower hierarchy.

Table 6: Macro and micro $\mathrm{F}_{1}$ on OntoNotes test set for different granularities.

# 7 Related Work

Type inventories for the task of fine-grained entity typing (Ling and Weld, 2012; Yosef et al., 2012) have grown in size and complexity (Del Corro et al., 2015; Choi et al., 2018). Researchers have tried to incorporate hierarchical information on the type distribution in different manners (Shimaoka et al., 2016; Ren et al., 2016a). Shimaoka et al. (2017) encode the hierarchy through a sparse matrix. Xu
| Model | Acc. | Ma-F1 | Mi-F1 |
|--------------------------|------|------|------|
| Shimaoka et al. (2017)   | 51.7 | 70.9 | 64.9 |
| AFET (Ren et al., 2016a) | 55.1 | 71.1 | 64.7 |
| PLE (Ren et al., 2016b)  | 57.2 | 71.5 | 66.1 |
| BERT                     | 51.8 | 76.6 | 69.1 |
| MULTITASK                | 59.5 | 76.8 | 71.8 |
| LABELGCN                 | 59.6 | 77.8 | 72.2 |
| HY LARGE                 | 47.4 | 75.8 | 69.4 |
Table 7: Total accuracy, macro and micro $\mathrm{F}_1$ scores on OntoNotes test set.

and Barbosa (2018) model the relations through a hierarchy-aware loss function. Xiong et al. (2019) derive a graph from type co-occurrence statistics in the dataset. Experimental evidence suggests that our model encodes similar hierarchical information without the need to provide it explicitly.

Hyperbolic representations have been employed for question answering (Tay et al., 2018), in machine translation (Gulcehre et al., 2019), and for modeling language (Dhingra et al., 2018; Tifrea et al., 2019). We build upon the hyperbolic neural layers introduced in Ganea et al. (2018), and develop the missing components to perform not binary, but multi-class multi-label text classification. We test the proposed model not on a synthetic dataset, but on a concrete downstream task, entity typing. Our work resembles López et al. (2019) and Chen et al. (2019), though they learn embeddings for type labels and text representations in hyperbolic space separately, whereas we do it in an integrated fashion.

# 8 Conclusions

Incorporating hierarchical information from the label inventory into neural models has become critical to improving performance. Hyperbolic spaces are an exciting approach, since they are naturally equipped to model hierarchical structures. However, previous work integrated only isolated components into neural systems. In this work we propose a fully hyperbolic model and showcase its effectiveness on challenging datasets. Our hyperbolic model automatically infers the latent hierarchy from the class distribution, captures implicit hyponymic relations in the inventory, and achieves performance comparable to state-of-the-art systems on very fine-grained labels with a remarkable reduction of the parameter size.
This emphasizes the importance of choosing a metric space suitable to the data distribution as an effective inductive bias to capture fundamental properties, such as hierarchical structure. + +Moreover, we illustrate ways to integrate different components with Euclidean layers, showing their strengths and drawbacks. An interesting future direction is to employ hyperbolic representations in combination with contextualized word embeddings. We release our implementation with the aim to ease the adoption of hyperbolic components into neural models, yielding lightweight and efficient systems. + +# Acknowledgments + +This work has been supported by the German Research Foundation (DFG) as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant No. GRK 1994/1 and the Klaus Tschira Foundation, Heidelberg, Germany. + +# References + +Abhishek Abhishek, Ashish Anand, and Amit Awekar. 2017. Fine-grained entity type classification by jointly learning representations and label embeddings. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 797-807, Valencia, Spain. Association for Computational Linguistics. +Gary Bécigneul and Octavian-Eugen Ganea. 2019. Riemannian adaptive optimization methods. In 7th International Conference on Learning Representations, ICLR, New Orleans, LA, USA. +James W. Cannon, William J. Floyd, Richard Kenyon, and Walter R. Parry. 1997. Hyperbolic Geometry, volume 31. Flavors of Geometry. +Ines Chami, Zhitao Ying, Christopher Ré, and Jure Leskovec. 2019. Hyperbolic graph convolutional neural networks. In Advances in Neural Information Processing Systems 32, pages 4869-4880. Curran Associates, Inc. +Boli Chen, Xin Huang, Lin Xiao, Zixin Cai, and Liping Jing. 2019. Hyperbolic interaction model for hierarchical multi-label classification. CoRR, abs/1905.10802. 
+Hyunghoon Cho, Benjamin DeMeo, Jian Peng, and Bonnie Berger. 2019. Large-margin classification in hyperbolic space. In Proceedings of Machine Learning Research, volume 89 of Proceedings of Machine Learning Research, pages 1832-1840. PMLR. +Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar. Association for Computational Linguistics. +Eunsol Choi, Omer Levy, Yejin Choi, and Luke Zettlemoyer. 2018. Ultra-fine entity typing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 87-96, Melbourne, Australia. Association for Computational Linguistics. + +Luciano Del Corro, Abdalghani Abujabal, Rainer Gemulla, and Gerhard Weikum. 2015. FINET: Context-aware fine-grained named entity typing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 868-878, Lisbon, Portugal. Association for Computational Linguistics. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Bhuwan Dhingra, Christopher Shallue, Mohammad Norouzi, Andrew Dai, and George Dahl. 2018. Embedding text in hyperbolic spaces. In Proceedings of the Twelfth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-12), pages 59–69, New Orleans, Louisiana, USA. Association for Computational Linguistics. 
Octavian Ganea, Gary Becigneul, and Thomas Hofmann. 2018. Hyperbolic neural networks. In Advances in Neural Information Processing Systems 31, pages 5345-5355. Curran Associates, Inc.
Dan Gillick, Nevena Lazic, Kuzman Ganchev, Jesse Kirchner, and David Huynh. 2014. Context-Dependent Fine-Grained Entity Type Tagging. ArXiv e-prints.
Caglar Gulcehre, Misha Denil, Mateusz Malinowski, Ali Razavi, Razvan Pascanu, Karl Moritz Hermann, Peter Battaglia, Victor Bapst, David Raposo, Adam Santoro, and Nando de Freitas. 2019. Hyperbolic attention networks. In 7th International Conference on Learning Representations, ICLR, New Orleans, LA, USA.
Max Kochurov, Rasul Karimov, and Sergei Kozlukov. 2020. Geoopt: Riemannian optimization in PyTorch. ArXiv, abs/2005.02819.
Dmitri Krioukov, Fragkiskos Papadopoulos, Maksim Kitsak, Amin Vahdat, and Marián Boguña. 2010. Hyperbolic geometry of complex networks. Physical Review E, 82:036106.
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270, San Diego, California. Association for Computational Linguistics.
Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188-197, Copenhagen, Denmark. Association for Computational Linguistics.
Xiao Ling and Daniel S. Weld. 2012. Fine-grained entity recognition. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, AAAI'12, pages 94-100. AAAI Press.
Qi Liu, Maximilian Nickel, and Douwe Kiela. 2019. Hyperbolic graph neural networks. In Advances in Neural Information Processing Systems 32, pages 8228-8239.
Curran Associates, Inc. +Federico López, Benjamin Heinzerling, and Michael Strube. 2019. Fine-grained entity typing in hyperbolic space. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 169–180, Florence, Italy. Association for Computational Linguistics. +Shikhar Murty, Patrick Verga, Luke Vilnis, Irena Radovanovic, and Andrew McCallum. 2018. Hierarchical losses and new resources for fine-grained entity typing and linking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 97-109, Melbourne, Australia. Association for Computational Linguistics. +Maximilian Nickel and Douwe Kiela. 2017. Poincaré embeddings for learning hierarchical representations. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 6341-6350. Curran Associates, Inc. +Maximillian Nickel and Douwe Kiela. 2018. Learning continuous hierarchies in the Lorentz model of hyperbolic geometry. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 3779-3788, Stockholm, Sweden. PMLR. +Yasumasa Onoe and Greg Durrett. 2019. Learning to denoise distantly-labeled data for entity typing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 2407-2417, Minneapolis, Minnesota. Association for Computational Linguistics. +Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics. 
+Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke + +Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics. +Xiang Ren, Wenqi He, Meng Qu, Lifu Huang, Heng Ji, and Jiawei Han. 2016a. AFET: Automatic fine-grained entity typing by hierarchical partial-label embedding. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1369-1378, Austin, Texas. Association for Computational Linguistics. +Xiang Ren, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, and Jiawei Han. 2016b. Label noise reduction in entity typing by heterogeneous partial-label embedding. In Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pages 1825-1834, New York, NY, USA. ACM. +Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4, CONLL '03, pages 142-147, Stroudsburg, PA, USA. Association for Computational Linguistics. +Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2016. An attentive neural architecture for fine-grained entity type classification. In Proceedings of the 5th Workshop on Automated Knowledge Base Construction, pages 69-74, San Diego, CA. Association for Computational Linguistics. +Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2017. Neural architectures for fine-grained entity type classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1271-1280, Valencia, Spain. 
Association for Computational Linguistics. +Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. 2018. Hyperbolic representation learning for fast and efficient neural question answering. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, WSDM '18, pages 583-591, New York, NY, USA. ACM. +Alexandru Tifrea, Gary Becigneul, and Octavian-Eugen Ganea. 2019. Poincare Glove: Hyperbolic word embeddings. In 7th International Conference on Learning Representations, ICLR, New Orleans, LA, USA. +Abraham Albert Ungar. 2008a. Analytic Hyperbolic Geometry and Albert Einstein's Special Theory of Relativity. World Scientific. + +Abraham Albert Ungar. 2008b. A Gyrovector Space Approach to Hyperbolic Geometry. Morgan & Claypool. +Abraham Albert Ungar. 2010. Barycentric Calculus in Euclidean and Hyperbolic Geometry: A Comparative Introduction. World Scientific. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc. +Wenhan Xiong, Jiawei Wu, Deren Lei, Mo Yu, Shiyu Chang, Xiaoxiao Guo, and William Yang Wang. 2019. Imposing label-relational inductive bias for extremely fine-grained entity typing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 773-784, Minneapolis, Minnesota. Association for Computational Linguistics. +Peng Xu and Denilson Barbosa. 2018. Neural fine-grained entity type classification with hierarchy-aware loss. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 16-25, New Orleans, Louisiana. 
Association for Computational Linguistics.
Mohamed Amir Yosef, Sandro Bauer, Johannes Hoffart, Marc Spaniol, and Gerhard Weikum. 2012. HYENA: Hierarchical type classification for entity names. In Proceedings of COLING 2012: Posters, pages 1361-1370, Mumbai, India.

# A Basics of Riemannian Geometry

Manifold: an $n$-dimensional manifold $\mathcal{M}$ is a space that can locally be approximated by $\mathbb{R}^n$. It generalizes the notion of a 2D surface to higher dimensions. More concretely, for each point $x$ on $\mathcal{M}$ we can find a homeomorphism (a continuous bijection with continuous inverse) between a neighbourhood of $x$ and $\mathbb{R}^n$.

Tangent space: the tangent space $T_x\mathcal{M}$ at a point $x$ on $\mathcal{M}$ is an $n$-dimensional hyperplane in $\mathbb{R}^{n+1}$ that best approximates $\mathcal{M}$ around $x$; it is the first-order linear approximation.

Riemannian metric: a Riemannian metric $g = (g_x)_{x \in \mathcal{M}}$ on $\mathcal{M}$ is a collection of inner products $g_x : T_x\mathcal{M} \times T_x\mathcal{M} \to \mathbb{R}$ on the tangent spaces, varying smoothly with $x$. Riemannian metrics can be used to measure distances on manifolds.

Riemannian manifold: a pair $(\mathcal{M},g)$, where $\mathcal{M}$ is a smooth manifold and $g = (g_{x})_{x\in \mathcal{M}}$ is a Riemannian metric.

Geodesics: curves $\gamma :[0,1]\to \mathcal{M}$ that generalize straight lines to Riemannian manifolds, i.e., constant-speed curves that are locally distance-minimizing. In the Poincaré disk model, geodesics are circles orthogonal to the boundary of the disk, as well as diameters.

Parallel transport: defined as $P_{x \to y}: T_x\mathcal{M} \to T_y\mathcal{M}$, a linear isometry between tangent spaces that corresponds to moving tangent vectors along geodesics. It generalizes translation to non-Euclidean geometry and defines a canonical way to connect tangent spaces.
# B Möbius Operations

Möbius scalar multiplication: for $x\in \mathbb{D}^n\backslash \{0\}$, the Möbius scalar multiplication by $r\in \mathbb{R}$ is defined as:

$$
r \otimes x = \tanh \left(r \tanh ^ {- 1} (\| x \|)\right) \frac {x}{\| x \|} \tag {11}
$$

and $r\otimes \mathbf{0}\coloneqq \mathbf{0}$. By making use of the exp and log maps, this expression reduces to:

$$
r \otimes x = \exp_ {\mathbf {0}} (r \log_ {\mathbf {0}} (x)), \quad \forall r \in \mathbb {R}, x \in \mathbb {D} ^ {n} \tag {12}
$$

Exponential and logarithmic maps: the mapping between the tangent space and hyperbolic space is done by the exponential map $\exp_x : T_x\mathbb{D}^n \to \mathbb{D}^n$ and the logarithmic map $\log_x : \mathbb{D}^n \to T_{x}\mathbb{D}^{n}$. For $v\in T_x\mathbb{D}^n\backslash \{0\}$ and $y\in \mathbb{D}^n\backslash \{0\}$, $y\neq x$, they are given by:

$$
\exp_ {x} (v) = x \oplus \left(\tanh \left(\frac {\lambda_ {x} \| v \|}{2}\right) \frac {v}{\| v \|}\right) \tag {13}
$$

$$
\log_ {x} (y) = \frac {2}{\lambda_ {x}} \tanh ^ {- 1} (\| - x \oplus y \|) \frac {- x \oplus y}{\| - x \oplus y \|}
$$

These expressions simplify considerably when $x = \mathbf{0}$, that is, at the origin of the space. The matrix-vector multiplication formula is derived from $M \otimes y = \exp_{\mathbf{0}}(M \log_{\mathbf{0}}(y))$: the point $y \in \mathbb{D}^n$ is mapped to the tangent space $T_\mathbf{0}\mathbb{D}^n$, the linear map $M$ is applied in this Euclidean subspace, and the result is mapped back into the ball. The same approach yields the Möbius scalar multiplication and the application of pointwise non-linearities to elements of the Poincaré ball (see Ganea et al. (2018), Section 2.4).
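At the origin (with $c = 1$, so $\lambda_{\mathbf{0}} = 2$) the maps reduce to $\exp_{\mathbf{0}}(v) = \tanh(\|v\|)\, v/\|v\|$ and $\log_{\mathbf{0}}(y) = \tanh^{-1}(\|y\|)\, y/\|y\|$. A small numpy sketch of these operations (an illustrative sketch, not the paper's implementation):

```python
import numpy as np

def exp0(v):
    # exp map at the origin: exp_0(v) = tanh(||v||) * v / ||v||
    n = np.linalg.norm(v)
    return np.tanh(n) * v / n if n > 0 else v

def log0(y):
    # log map at the origin: log_0(y) = artanh(||y||) * y / ||y||
    n = np.linalg.norm(y)
    return np.arctanh(n) * y / n if n > 0 else y

def mobius_scalar(r, x):
    # Eq. 11: r (x) x = tanh(r * artanh(||x||)) * x / ||x||
    n = np.linalg.norm(x)
    return np.tanh(r * np.arctanh(n)) * x / n if n > 0 else x

def mobius_matvec(M, y):
    # Moebius matrix-vector multiplication: M (x) y = exp_0(M log_0(y))
    return exp0(M @ log0(y))

x = np.array([0.3, -0.2, 0.1])
```

One can check numerically that Eq. 12 agrees with Eq. 11, that the two maps are mutually inverse at the origin, and that the results always stay inside the unit ball.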
Parallel transport with exp and log maps: by applying the exp and log maps, the parallel transport in the Poincaré ball of a vector $v \in T_\mathbf{0}\mathbb{D}^n$ to another tangent space $T_{x}\mathbb{D}^{n}$ is given by:

$$
P _ {\mathbf {0} \rightarrow x} (v) = \log_ {x} \left(x \oplus \exp_ {\mathbf {0}} (v)\right) = \frac {\lambda_ {\mathbf {0}}}{\lambda_ {x}} v \tag {14}
$$

This result is used to define and optimize $a_{k} = (\lambda_{\mathbf{0}} / \lambda_{p_{k}}) a_{k}'$ in the hyperbolic MLR (Appendix E).

# C Hyperbolic Gated Recurrent Unit

To encode the context we apply the hyperbolic version of gated recurrent units (GRU) (Cho et al., 2014) proposed in Ganea et al. (2018):

$$
\begin{array}{l} r _ {t} = \sigma \left(\log_ {\mathbf{0}} \left(W ^ {r} \otimes h _ {t - 1} \oplus U ^ {r} \otimes x _ {t} \oplus b ^ {r}\right)\right) \\ z _ {t} = \sigma \left(\log_ {\mathbf{0}} \left(W ^ {z} \otimes h _ {t - 1} \oplus U ^ {z} \otimes x _ {t} \oplus b ^ {z}\right)\right) \\ \tilde {h} _ {t} = \tanh ^ {\otimes} \left(\left(W \operatorname{diag} \left(r _ {t}\right)\right) \otimes h _ {t - 1} \oplus U \otimes x _ {t} \oplus b\right) \\ h _ {t} = h _ {t - 1} \oplus \operatorname{diag} (z _ {t}) \otimes (- h _ {t - 1} \oplus \tilde {h} _ {t}) \end{array} \tag {15}
$$

where $W \in \mathbb{R}^{d_S \times d_S}$, $U \in \mathbb{R}^{d_S \times n}$, $x_t \in \mathbb{D}^n$ and $b \in \mathbb{D}^{d_S}$ (superscripts are omitted). $r_t$ is the reset gate, $z_t$ is the update gate, $\operatorname{diag}(x)$ denotes a diagonal matrix with the elements of the vector $x$ on its diagonal, and $\sigma$ is the sigmoid function.

# D Distance-based Attention

# D.1 Formulation

In Equation 9 we calculate the Lorentz factor of each point $x_{i}$. The Lorentz factors are given by:

$$
\gamma (x) = \frac {1}{\sqrt {1 - \| x \| ^ {2}}} \tag {16}
$$

In the case of Gulcehre et al.
(2019), the application of the Einstein midpoint (Ungar, 2010, Theorem 4.4) requires mapping the points onto the Klein model. By applying the Möbius midpoint we avoid this mapping, and obtain an attention mechanism that operates entirely in one model of hyperbolic space.

# D.2 Experimental Observations

To obtain the weights for the attention mechanism, Equation 8 was initially given by:

$$
\alpha \left(q _ {i}, k _ {i}\right) = f \left(- \beta d _ {\mathbb {D}} \left(q _ {i}, k _ {i}\right) - c\right) \tag {17}
$$

We experimented with replacing $f$ by the sigmoid and softmax functions, and found better performance with the latter. Moreover, empirical observations led us to remove the $c$ term, since it converged to zero in all experiments. We believe the biases $b^{Q}$ and $b^{K}$ from Equation 8 compensate for this $c$.

# D.3 Queries and Keys

To further analyze the attention mechanism, we plot the query $q_{i}$ and key $k_{i}$ points of Equation 8 for both models in Figure 5. Recall that the shorter the distance between points, the higher the attention weight that the word gets assigned. We observed that the attention is prominently centered on the mention in both models, assigning very low weights to the rest of the words in the context.

![](images/e50bb5cbe5943069c1e6039557cf0d7ab59466283767e208098f73dba8edbee8.jpg)
(a) Euclidean Space.

![](images/58aeff92258df2d931c62f42f1dfe66afe52249b4909ebe9b89a21c3d38d0ddd.jpg)
(b) Hyperbolic Space.

Figure 5: Queries (red) and keys (blue) projected in 2D for different spaces.

In the Euclidean space we can clearly distinguish two clusters, which cause the distance-based attention to assign very low weights to most words of the context. The small red cluster at the top right of the image corresponds to words in the mention span. These words get projected very close to the key vector, in order to
minimize the distance and increase the attention weight.

In the hyperbolic model, the queries cluster at the bottom of the plot, whereas the keys are the points that adjust the distance to define the weight on each word.

# E Multinomial Logistic Regression

# E.1 Hyperbolic MLR

Given $K$ classes and $k \in \{1, \dots, K\}$, $p_k \in \mathbb{D}^n$, $a_k \in T_{p_k} \mathbb{D}^n \setminus \{0\}$, the original formula from Ganea et al. (2018) for MLR in the Poincaré ball is:

$$
\begin{array}{l} p (y = k | x) \propto \\ f \left(\frac {\lambda_ {p _ {k}} ^ {c} \| a _ {k} \|}{\sqrt {c}} \sinh^ {- 1} \left(\frac {2 \sqrt {c} \langle - p _ {k} \oplus x , a _ {k} \rangle}{(1 - c \| - p _ {k} \oplus x \| ^ {2}) \| a _ {k} \|}\right)\right) \tag {18} \\ \end{array}
$$

where $x \in \mathbb{D}^n$, $p_k$ and $a_k$ are trainable parameters, and $c$ is related to the radius of the Poincaré ball $r = 1 / \sqrt{c}$; in this work we assume $c = 1$, hence it is omitted from the formulations. Since $a_k \in T_{p_k} \mathbb{D}^n$, and therefore depends on $p_k$, it is unclear how to perform optimization. The solution proposed by Ganea et al. (2018) is to re-express it as:

$$
a _ {k} = P _ {\mathbf {0} \rightarrow p _ {k}} \left(a _ {k} ^ {\prime}\right) = \frac {\lambda_ {\mathbf {0}}}{\lambda_ {p _ {k}}} a _ {k} ^ {\prime} \tag {19}
$$

where $a_{k}^{\prime}\in T_{\mathbf{0}}\mathbb{D}^{n} = \mathbb{R}^{n}$. In this way we can optimize $a_{k}^{\prime}$ as a Euclidean parameter.
Finally, when we use $a_{k}^{\prime}$ instead of $a_{k}$, the formula for the MLR becomes:

$$
\begin{array}{l} p (y = k | x) \propto \\ f \left(2 \| a _ {k} ^ {\prime} \| \sinh^ {- 1} \left(\frac {2 \langle - p _ {k} \oplus x , a _ {k} ^ {\prime} \rangle}{\left(1 - \| - p _ {k} \oplus x \| ^ {2}\right) \| a _ {k} ^ {\prime} \|}\right)\right) \tag {20} \\ \end{array}
$$

# E.2 Euclidean MLR

The Euclidean formulation of the MLR is given by:

$$
p (y = k | x) \propto f \left(4 \langle - p _ {k} \oplus x, a _ {k} \rangle\right) \tag {21}
$$

This equation arises from taking the limit $c\to 0$ in Equation 18. In that case, $f(4\langle -p_k\oplus x,a_k\rangle) = f((\lambda_{p_k}^0)^2\langle -p_k\oplus x,a_k\rangle) = f(\langle -p_k\oplus x,a_k\rangle_0)$.

# F Experimental Details

For the context-GRU we use tanh as the non-linearity, to establish a fair comparison against the classical GRU (Cho et al., 2014). In the char-RNN we use the identity (no non-linearity). The MLR is fed with the final representation given by the concatenation of mention and context features: $\operatorname{concat}(\mathbf{M},\mathbf{C},\mathbf{S})\in \mathbb{D}^m$ with $m = d_M + d_C + 2d_S$.

In the XLARGE model, we use the Euclidean encoder in all experiments given time constraints.

Hyperparameters: both the hyperbolic and Euclidean models were trained with the hyperparameters detailed in Table 8.

Dropout: we apply low values of dropout, since the model is very sensitive to this parameter. We consider this logical behaviour: distances in hyperbolic space grow exponentially with the norm of the points, which makes the model very responsive to it.

Numerical errors: these appear when the norm of the hyperbolic vectors is very close to 1 or 0. To avoid them we follow the recommendations reported in Ganea et al. (2018). The result of every hyperbolic operation is projected into the ball of radius $1 - \epsilon$, where $\epsilon = 10^{-5}$.
When vectors are very close to 0, they are perturbed with $\varepsilon = 10^{-15}$ before they are used in any of the above operations. Arguments of the tanh function are clipped to $\pm 15$ , while arguments of tanh $^{-1}$ are clipped to the interval $[-1 + 10^{-15}, 1 - 10^{-15}]$ . Finally, following the recommendations of the Geoopt developers (Kochurov et al., 2020), we operate in 64-bit floating point.

Initialization: We initialize character and positional embeddings randomly from the uniform distribution $U(-0.0001, 0.0001)$ . In the case of the hyperbolic model, we map them into the ball with the $\exp_0$ map. We initialize all layers in the model using Glorot uniform initialization.

Exponential and logarithmic map: In the case of the GloVe embedding ablation (Section 6.1.1), we used the $100d$ version, trained over Wikipedia and Gigaword. By directly applying the logarithmic map, the embeddings were projected close to the border of the ball, making the model very unstable. To overcome this, we use the parameter $c$ described in Ganea et al. (2018) to adjust the radius of the ball, which helps to project the embeddings closer to the origin of the space.

Hardware: All experiments for the hyperbolic and Euclidean models were performed using 2 NVIDIA P40 GPUs, with the batch sizes specified in Table 8.
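The numerical safeguards above can be combined with the MLR score of Equation 20 in a few lines. The following NumPy sketch is our own illustration (function names are ours; Möbius addition is written for $c = 1$), not the authors' implementation:

```python
import numpy as np

EPS_BALL = 1e-5  # results are projected into the ball of radius 1 - EPS_BALL


def project(x):
    # Keep x strictly inside the unit ball, as recommended by Ganea et al. (2018).
    norm = np.linalg.norm(x)
    max_norm = 1.0 - EPS_BALL
    return x * (max_norm / norm) if norm > max_norm else x


def mobius_add(x, y):
    # Mobius addition in the Poincare ball with c = 1.
    x2, y2, xy = x @ x, y @ y, x @ y
    num = (1 + 2 * xy + y2) * x + (1 - x2) * y
    den = 1 + 2 * xy + x2 * y2
    return project(num / den)


def mlr_logit(x, p_k, a_k_prime):
    # Unnormalized class score of Eq. (20); f is applied outside this function.
    z = mobius_add(-p_k, x)
    a_norm = np.linalg.norm(a_k_prime)
    arg = 2 * (z @ a_k_prime) / ((1 - z @ z) * a_norm)
    return 2 * a_norm * np.arcsinh(arg)
```

In a full implementation, the $\varepsilon = 10^{-15}$ perturbation of near-zero vectors and the clipping of tanh/artanh arguments described above would also be applied inside these helpers.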
| Parameter | Value |
| --- | --- |
| Batch size BASE | 900 |
| Batch size LARGE | 350 |
| Batch size XLARGE | 160 |
| BASE $d_M$ | 40 |
| BASE $d_C$ | 20 |
| BASE $d_S$ | 20 |
| BASE $d_M + d_C + 2d_S$ | 100 |
| LARGE $d_M$ | 100 |
| LARGE $d_C$ | 50 |
| LARGE $d_S$ | 50 |
| LARGE $d_M + d_C + 2d_S$ | 250 |
| XLARGE $d_M$ | 200 |
| XLARGE $d_C$ | 100 |
| XLARGE $d_S$ | 100 |
| XLARGE $d_M + d_C + 2d_S$ | 500 |
| Mention non-linearity | tanh |
| Context non-linearity | tanh |
| Epochs | 40 |
| Crowd cycles | 5 |
| Input dropout | 0.2 |
| Concat dropout | 0.1 |
| Learning rate | 0.0005 |
| Weight decay | 0.0 |
| Max. gradient norm | 5 |
Table 8: Hyperparameters of the models.

# G Closest Types

We report the points $p_k$ learned by the model to define the hyperplanes of Equation 10. Table 9 shows the types corresponding to the closest points, measured by their hyperbolic distance $d_{\mathbb{D}}$ (see Eq 1), to the coarse labels. We observe that the types are highly correlated, given that they often co-occur in the same context.

# H More Experimental Observations

Text vector norms: By "text vector" we refer to the concatenated vector of the context, mention and char-level mention representations before the MLR layer. We report the average norm of these vectors per training epoch, for the 20D Euclidean and hyperbolic models, in Figure 6. The norm of the vectors of the hyperbolic model is measured according to the hyperbolic distance $d_{\mathbb{D}}$ (see Equation 1); that is, we take the hyperbolic distance from the origin to the point, so the values can be greater than one. The norm of the Euclidean model is measured according to the Euclidean norm. We observe that both models learn to reduce the norm of the vectors, and it is noticeable that the convergence value for the Euclidean model is higher than for the hyperbolic model.

![](images/0deb21cba4a2ba43633385b1d4c0e47456a1f05517226f849cbf5ab6dee7840f.jpg)
Figure 6: Norm of text vectors for the Euclidean and hyperbolic model. The hyperbolic norm is measured as the hyperbolic distance $d_{\mathbb{D}}$ from the origin to the point, hence the values can be greater than 1.

| organization | institution | firm | group | unit | division |
| --- | --- | --- | --- | --- | --- |
| institution (0.34) | firm (0.24) | business (0.23) | unit (0.34) | division (0.26) | subsidiary |
| company (0.35) | company (0.26) | institution (0.24) | gathering (0.34) | theatre (0.28) | unit |
| news_agency (0.36) | university (0.26) | company (0.25) | subject (0.34) | activist (0.28) | track |
| business (0.38) | operator (0.28) | maker (0.27) | administration (0.36) | position (0.28) | half |
| administration (0.40) | maker (0.28) | operator (0.28) | affiliation (0.36) | half (0.28) | activist |

| location | state | country | place | space | half |
| --- | --- | --- | --- | --- | --- |
| state (0.33) | country (0.29) | state (0.31) | space (0.40) | half (0.28) | peak |
| cemetery (0.35) | half (0.31) | nation (0.31) | localization (0.40) | shopping_mall (0.29) | operator |
| space (0.35) | agency (0.31) | agency (0.32) | place_name (0.40) | venue (0.29) | theatre |
| half (0.35) | activist (0.32) | kingdom (0.34) | close (0.41) | landmark (0.30) | placement |
| area (0.36) | unit (0.32) | world (0.35) | birthplace (0.41) | localization (0.30) | summit |

| event | conflict | war | time | duration | calendar |
| --- | --- | --- | --- | --- | --- |
| conflict (0.44) | war (0.34) | guerrilla (0.32) | duration (0.40) | calendar (0.30) | date |
| activist (0.45) | dispute (0.36) | conflict (0.34) | period (0.43) | peak (0.31) | phrase |
| election (0.45) | series (0.37) | military (0.35) | length (0.46) | half (0.32) | second |
| activity (0.46) | guerrilla (0.38) | citizen (0.36) | month (0.46) | second (0.32) | activist |
| holiday (0.46) | future (0.38) | situation (0.36) | date (0.46) | fantasy (0.32) | need |

| object | machine | computer | entity | separation | placement |
| --- | --- | --- | --- | --- | --- |
| machine (0.37) | computer (0.29) | version (0.29) | separation (0.43) | placement (0.27) | position |
| arrangement (0.39) | theatre (0.30) | machine (0.30) | relative (0.44) | missionary (0.27) | localization |
| medium (0.39) | operator (0.30) | communication (0.30) | meaning (0.44) | meaning (0.27) | half |
| method (0.39) | card_game (0.31) | activist (0.31) | warlord (0.45) | variation (0.27) | separation |
| representation (0.39) | core (0.31) | maker (0.32) | baseball (0.45) | phrase (0.27) | winner |
+ +Table 9: Closest $p_k$ points in the Poincaré Ball to coarse entity types, with their hyperbolic distance. In many cases, a hierarchical relation holds with the closest type. For example: firm is-a institution is-a organization. \ No newline at end of file diff --git a/afullyhyperbolicneuralmodelforhierarchicalmulticlassclassification/images.zip b/afullyhyperbolicneuralmodelforhierarchicalmulticlassclassification/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..3fac8e9a2cb42d358325f9bd84b62e4b52132e03 --- /dev/null +++ b/afullyhyperbolicneuralmodelforhierarchicalmulticlassclassification/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e533fdced50be2cbed50a65edd52807d7a71bd899e5b3983d4c1197f94b7057 +size 765316 diff --git a/afullyhyperbolicneuralmodelforhierarchicalmulticlassclassification/layout.json b/afullyhyperbolicneuralmodelforhierarchicalmulticlassclassification/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..abfa64f0008a0da4aaf9ce7bdaec3764a0173a71 --- /dev/null +++ b/afullyhyperbolicneuralmodelforhierarchicalmulticlassclassification/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:442b20ee108a18eb14d48bfbbc4ffccb74ead2fa98ab6e78d73aa9e18e049c23 +size 620057 diff --git a/agifanadaptivegraphinteractiveframeworkforjointmultipleintentdetectionandslotfilling/fb00967f-b343-4aaa-9219-994a0914a86f_content_list.json b/agifanadaptivegraphinteractiveframeworkforjointmultipleintentdetectionandslotfilling/fb00967f-b343-4aaa-9219-994a0914a86f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..b2669b5b889a4017d8dd5b3e641868c2a23d0c25 --- /dev/null +++ b/agifanadaptivegraphinteractiveframeworkforjointmultipleintentdetectionandslotfilling/fb00967f-b343-4aaa-9219-994a0914a86f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:1470bd84510fd43c14519d71449c09265155861e18b0a9b00270e670287f21d4 +size 74943 diff --git a/agifanadaptivegraphinteractiveframeworkforjointmultipleintentdetectionandslotfilling/fb00967f-b343-4aaa-9219-994a0914a86f_model.json b/agifanadaptivegraphinteractiveframeworkforjointmultipleintentdetectionandslotfilling/fb00967f-b343-4aaa-9219-994a0914a86f_model.json new file mode 100644 index 0000000000000000000000000000000000000000..376f572e2f79f739816a993aad0c69f91d59a0a5 --- /dev/null +++ b/agifanadaptivegraphinteractiveframeworkforjointmultipleintentdetectionandslotfilling/fb00967f-b343-4aaa-9219-994a0914a86f_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6a6f48cea2850af5dd13325230cc5effd0a3b5bbef785f81b9a485207c73babb +size 89953 diff --git a/agifanadaptivegraphinteractiveframeworkforjointmultipleintentdetectionandslotfilling/fb00967f-b343-4aaa-9219-994a0914a86f_origin.pdf b/agifanadaptivegraphinteractiveframeworkforjointmultipleintentdetectionandslotfilling/fb00967f-b343-4aaa-9219-994a0914a86f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..12ce3de89edf6f6f3e1d6eabe670608894bdb067 --- /dev/null +++ b/agifanadaptivegraphinteractiveframeworkforjointmultipleintentdetectionandslotfilling/fb00967f-b343-4aaa-9219-994a0914a86f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:441ec1eca4b3930fc5534e68657743b1bad7c62065b936d73bf83a567337f8e1 +size 524584 diff --git a/agifanadaptivegraphinteractiveframeworkforjointmultipleintentdetectionandslotfilling/full.md b/agifanadaptivegraphinteractiveframeworkforjointmultipleintentdetectionandslotfilling/full.md new file mode 100644 index 0000000000000000000000000000000000000000..9b96f4f536962d711c91a34574d811ec41570863 --- /dev/null +++ b/agifanadaptivegraphinteractiveframeworkforjointmultipleintentdetectionandslotfilling/full.md @@ -0,0 +1,344 @@ +# AGIF: An Adaptive Graph-Interactive Framework for Joint Multiple Intent Detection 
and Slot Filling

Libo Qin, Xiao Xu, Wanxiang Che*, Ting Liu

Research Center for Social Computing and Information Retrieval

Harbin Institute of Technology, China

{lbqin,xxu,car,tliu}@ir.hit.edu.cn

# Abstract

In real-world scenarios, users usually have multiple intents in the same utterance. Unfortunately, most spoken language understanding (SLU) models have either mainly focused on the single-intent scenario or simply incorporated an overall intent context vector for all tokens, ignoring fine-grained multiple-intent information integration for token-level slot prediction. In this paper, we propose an Adaptive Graph-Interactive Framework (AGIF) for joint multiple intent detection and slot filling, where we introduce an intent-slot graph interaction layer to model the strong correlation between the slot and intents. Such an interaction layer is applied to each token adaptively, which has the advantage of automatically extracting the relevant intent information, enabling fine-grained intent information integration for token-level slot prediction. Experimental results on three multi-intent datasets show that our framework obtains substantial improvement and achieves state-of-the-art performance. In addition, our framework achieves new state-of-the-art performance on two single-intent datasets.

# 1 Introduction

Spoken language understanding (SLU) (Young et al., 2013) is a core component of task-oriented dialog systems. It consists of two typical subtasks, intent detection and slot filling (Tur and De Mori, 2011). Take the utterance "Please play happy birthday" as an example: intent detection can be seen as a classification task to predict the intent label (i.e., PlayMusic), while slot filling can be treated as a sequence labeling task to predict the slot label sequence (i.e., O, O, B-music, I-music).
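To make the two subtasks concrete, the example above can be written out as follows (a toy illustration of the expected inputs and outputs, not part of the released code):

```python
# Intent detection and slot filling for "Please play happy birthday".
utterance = ["Please", "play", "happy", "birthday"]
intent = "PlayMusic"                       # utterance-level classification
slots = ["O", "O", "B-music", "I-music"]   # token-level BIO sequence labeling
assert len(slots) == len(utterance)        # one slot label per token
```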
Dominant SLU systems in the literature (Goo et al., 2018; Li et al., 2018; E et al., 2019; Liu et al., 2019; Qin et al., 2019) adopt joint models to capture the relation between the two tasks, a direction we follow.

Though achieving promising performance, most prior work focuses only on the simple single-intent scenario, training models under the assumption that each utterance has exactly one intent. In practice, users often express multiple intents in an utterance; Gangadharaiah and Narayanaswamy (2019) show that $52\%$ of examples in an internal Amazon dataset are multi-intent. Nevertheless, existing single-intent SLU models, with their original network structures, fail to handle multi-intent settings effectively. Ideally, when an SLU system meets an utterance with multiple intents, as shown in Figure 1(a), the model should directly detect all of its intents (PlayMusic and GetWeather). Hence, it is important to consider multi-intent SLU.

Unlike prior single-intent SLU models, which can simply leverage the utterance's single intent to guide slot prediction (Goo et al., 2018; Qin et al., 2019), multi-intent SLU must handle multiple intents and presents a unique challenge worth studying: how to effectively incorporate multiple-intent information to guide slot prediction. To this end, Gangadharaiah and Narayanaswamy (2019) first explored a multi-task framework with the slot-gated mechanism (Goo et al., 2018) for joint multiple intent detection and slot filling. Their model incorporated intent information by simply treating an intent context vector as the multiple-intent information. While this is a direct method for incorporating multiple-intent information, it does not offer fine-grained intent information integration for token-level slot filling, in the sense that each token is guided by the same complex intent information, as shown in Figure 1(a).
In addition, providing the same intent information to all tokens may introduce ambiguity, making it hard for each token to capture its related intent information. As shown in Figure 1(b), the tokens "happy birthday" should focus on the intent "PlayMusic" while the tokens "deepwater bonaire" depend on the intent "GetWeather". Thus, each token should focus on the corresponding intent, and it is critical to make a fine-grained intent information integration for token-level slot prediction.

![](images/cda499c40b6888b91b0dc4fc227ba6dfc974ba536e61d72aa3c31eeee480ba93.jpg)
Figure 1: Prior models simply treat multiple intents as overall intent information (a) vs. our fine-grained multiple-intent integration method (b).

In this paper, we propose an Adaptive Graph-Interactive Framework (AGIF) to address the aforementioned concern. The core module is the proposed adaptive intent-slot graph interaction layer, which is constructed from each token's hidden state in the slot filling decoder and the embeddings of the predicted multiple intents. In this graph, each token's slot node directly connects to all predicted intent nodes to explicitly build the correlation between the slot and intents. Such an interaction graph is applied to each token adaptively, giving each token the ability to capture different relevant intent information so that fine-grained multiple-intent integration can be achieved. In contrast to prior work, which incorporates multiple-intent information statically by using the same intent information to guide all tokens, our intent-slot interaction graph is constructed adaptively with a graph attention network over each token. This encourages our model to automatically filter irrelevant information and capture the important intents at the token level.

We first conduct experiments on the multi-intent benchmark dataset DSTC4 (Kim et al., 2017b).
Then, to verify the generalization of our framework, we empirically construct two large-scale multi-intent datasets, MixATIS (Hemphill et al., 1990) and MixSNIPS (Coucke et al., 2018). The results of these experiments show the effectiveness of our framework, which outperforms the current state-of-the-art method. To the best of our knowledge, there are no public large-scale multi-intent datasets, and we hope their release will push forward research on multi-intent SLU. In addition, our framework achieves state-of-the-art performance on two public single-intent datasets, ATIS (Tur and De Mori, 2011) and SNIPS (Coucke et al., 2018), which further verifies the generalization of the proposed model.

To facilitate future research in this area, all datasets and codes are publicly available at https://github.com/LooperXX/AGIF.

# 2 Approach

The architecture of our framework is shown in Figure 2 and consists of a shared encoder, an adaptive intent-slot graph interaction layer and two separate decoders. First, the encoder (§2.1) uses a shared self-attentive encoder to represent an utterance, which can grasp the shared information between intent detection and slot filling. Then, the intent detection decoder (§2.2) performs multi-label classification to detect multiple intents. Finally, we introduce the adaptive intent-slot graph interaction layer (§2.3) to explicitly leverage the multiple-intent information for guiding slot prediction. Both intent detection and slot filling are optimized simultaneously via a multi-task learning scheme.

# 2.1 Self-Attentive Encoder

In the self-attentive encoder, following Qin et al. (2019), we use a BiLSTM with the self-attention mechanism to leverage the advantages of both temporal features within word order and contextual information.
![](images/da7549f70188d9640511e3669fd9f3b20f610e4b4c1ec5bd9dd3a54c66026ed5.jpg)
(a)

![](images/05e89e218dbe4509f722ce8bf4fcd29d3044abb3dd773e85e9b6cd92eacde9e8.jpg)
(b)
Figure 2: The overview of the model architecture (a) and the adaptive intent-slot graph interaction module (b).

Bidirectional LSTM A bidirectional LSTM (BiLSTM) (Hochreiter and Schmidhuber, 1997) consists of two LSTM layers. For the input sequence $\{x_{1}, x_{2}, \ldots, x_{T}\}$ ( $T$ is the number of tokens in the input utterance), the BiLSTM reads it forward from $x_{1}$ to $x_{T}$ and backward from $x_{T}$ to $x_{1}$ to produce a series of context-sensitive hidden states $\pmb{H} = \{\pmb{h}_{1}, \pmb{h}_{2}, \ldots, \pmb{h}_{T}\}$ .

Self-Attention We follow Vaswani et al. (2017) in using a self-attention mechanism over word embeddings to capture context-aware features. We first map the matrix of input vectors $\mathbf{X} \in \mathbb{R}^{T \times d}$ ( $d$ represents the mapped dimension) to queries $\mathbf{Q}$ , keys $\mathbf{K}$ and values $\mathbf{V}$ matrices by using different linear projection parameters $\mathbf{W}_q, \mathbf{W}_k, \mathbf{W}_v$ . The attention weight is computed by the dot product between $\mathbf{Q}$ and $\mathbf{K}$ , and the self-attention output $\mathbf{A} \in \mathbb{R}^{T \times d}$ is a weighted sum of the values:

$$
\boldsymbol {A} = \operatorname {s o f t m a x} \left(\frac {\boldsymbol {Q} \boldsymbol {K} ^ {\top}}{\sqrt {d _ {k}}}\right) \boldsymbol {V}, \tag {1}
$$

where $d_{k}$ denotes the dimension of the keys.

We concatenate these two representations as the final encoding representation:

$$
\boldsymbol {E} = [ \boldsymbol {H} \| \boldsymbol {A} ], \tag {2}
$$

where $\pmb{E} = \{\pmb{e}_1, \dots, \pmb{e}_T\} \in \mathbb{R}^{T \times 2d}$ and $\|$ is the concatenation operation.
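Equations 1 and 2 can be sketched in a few lines of NumPy (shapes and function names are ours; the BiLSTM states $\mathbf{H}$ are taken as given):

```python
import numpy as np


def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def self_attention(X, Wq, Wk, Wv):
    # Eq. (1): scaled dot-product self-attention over the T x d input matrix X.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V


def encode(H, X, Wq, Wk, Wv):
    # Eq. (2): concatenate the BiLSTM states H (T x d) with the attention output A.
    A = self_attention(X, Wq, Wk, Wv)
    return np.concatenate([H, A], axis=-1)  # E has shape T x 2d
```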
# 2.2 Intent Detection Decoder

We follow Gangadharaiah and Narayanaswamy (2019) in framing multiple intent detection as a multi-label classification problem. We compute the utterance context vector over $\mathbf{E} = \{e_1, \dots, e_T\} \in \mathbb{R}^{T \times 2d}$ . In our case, we use a self-attention module (Zhong et al., 2018; Goo et al., 2018) to capture the relevant context:

$$
p _ {t} = \operatorname {s o f t m a x} \left(\boldsymbol {w} _ {e} \boldsymbol {e} _ {t} + \boldsymbol {b}\right), \tag {3}
$$

$$
\boldsymbol {c} = \sum_ {t} p _ {t} \boldsymbol {e} _ {t}, \tag {4}
$$

where $\pmb{w}_e\in \mathbb{R}^{1\times 2d}$ is a trainable parameter and $p_t$ is the corresponding normalized self-attention score.

$c$ is the weighted sum of the elements $e_t$ and is utilized for intent detection:

$$
\boldsymbol {y} ^ {I} = \sigma \left(\boldsymbol {W} _ {i} \left(\text {L e a k y R e L U} \left(\boldsymbol {W} _ {c} \boldsymbol {c} + \boldsymbol {b} _ {c}\right)\right) + \boldsymbol {b} _ {i}\right), \tag {5}
$$

where $\mathbf{W}_i, \mathbf{W}_c$ are trainable parameters of the intent decoder, $\mathbf{y}^I = \{y_1^I, \dots, y_{N_I}^I\}$ is the intent output of the utterance and $N_I$ is the number of single intent labels. $\sigma$ represents the sigmoid activation function.

During inference, we predict the intents $\mathbf{I} = \{I_1,\dots ,I_n\}$ , where each $I_{i}$ denotes an intent whose probability $y_{I_i}^I$ is greater than $t_u$ , and $0 < t_u < 1.0$ is a hyperparameter tuned on the validation set. For example, if $\mathbf{y}^{I} = \{0.9,0.3,0.6,0.7,0.2\}$ and $t_u$ is 0.5, we predict the intents $\mathbf{I} = \{1,3,4\}$ .

# 2.3 Adaptive Intent-Slot Graph Interaction for Slot Filling

In this paper, one of the core contributions is adaptively leveraging multiple intents to guide slot prediction, encouraging each token to capture the corresponding relevant intent information.
In particular, we adopt the graph attention network (GAT) (Veličković et al., 2017) to model the interaction between the intents and slots at the token level.

In this section, we first describe the vanilla graph attention network. Then, we show how to directly leverage the multiple-intent information for slot prediction with the adaptive intent-slot graph interaction layer.

Vanilla Graph Attention Network For a given graph with $N$ nodes, a one-layer GAT takes the initial node features $\tilde{H} = \{\tilde{h}_1,\dots ,\tilde{h}_N\}$ , $\tilde{h}_n\in \mathbb{R}^F$ , as input, aiming to produce more abstract representations, $\tilde{H}^{\prime} = \{\tilde{h}_{1}^{\prime},\ldots ,\tilde{h}_{N}^{\prime}\}$ , $\tilde{h}_n^\prime \in \mathbb{R}^{F^\prime}$ , as its output. The graph attention operating on the node representations can be written as:

$$
\mathcal {F} (\tilde {\boldsymbol {h}} _ {i}, \tilde {\boldsymbol {h}} _ {j}) = \mathrm {L e a k y R e L U} \left(\mathbf {a} ^ {\top} [ \boldsymbol {W} _ {h} \tilde {\boldsymbol {h}} _ {i} \| \boldsymbol {W} _ {h} \tilde {\boldsymbol {h}} _ {j} ]\right),
$$

$$
\alpha_ {i j} = \frac {\exp (\mathcal {F} (\tilde {\boldsymbol {h}} _ {i} , \tilde {\boldsymbol {h}} _ {j}))}{\sum_ {j ^ {\prime} \in \mathcal {N} _ {i}} \exp (\mathcal {F} (\tilde {\boldsymbol {h}} _ {i} , \tilde {\boldsymbol {h}} _ {j ^ {\prime}}))},
$$

$$
\tilde {\boldsymbol {h}} _ {i} ^ {\prime} = \sigma \left(\sum_ {j \in \mathcal {N} _ {i}} \alpha_ {i j} \boldsymbol {W} _ {h} \tilde {\boldsymbol {h}} _ {j}\right), \tag {6}
$$

where $\mathcal{N}_i$ is the set of first-order neighbors of node $i$ (including $i$ ) in the graph, $\pmb{W}_h \in \mathbb{R}^{F' \times F}$ and $\mathbf{a} \in \mathbb{R}^{2F'}$ are trainable weights, $\alpha_{ij}$ is the normalized attention weight denoting the importance of each $\tilde{\pmb{h}}_j$ to $\tilde{\pmb{h}}_i$ and $\sigma$ represents the nonlinear activation function.
GAT injects the graph structure into the mechanism by performing masked attention, i.e., it only computes $\mathcal{F}(\tilde{\pmb{h}}_i,\tilde{\pmb{h}}_j)$ for nodes $j\in \mathcal{N}_i$ . To stabilize the learning process of self-attention, GAT extends the above mechanism to multi-head attention, following Vaswani et al. (2017):

$$
\tilde {\boldsymbol {h}} _ {i} ^ {\prime} = \overset {K} {\underset {k = 1} {\Big \Vert}} \sigma \left(\sum_ {j \in \mathcal {N} _ {i}} \alpha_ {i j} ^ {k} \boldsymbol {W} _ {h} ^ {k} \tilde {\boldsymbol {h}} _ {j}\right), \tag {7}
$$

where $\alpha_{ij}^{k}$ is the normalized attention weight computed by the $k$ -th function $\mathcal{F}_k$ , $\|$ is the concatenation operation and $K$ is the number of heads. Thus, the output $\tilde{\pmb{h}}_n'$ consists of $KF'$ features in the intermediate layers, while the final prediction layer employs averaging instead of concatenation to obtain the final prediction.

# Adaptive Intent-Slot Graph Interaction for Slot Prediction

We use a unidirectional LSTM as the slot filling decoder. At each decoding step $t$ , the decoder state $s_t$ is computed from the previous decoder state $s_{t-1}$ , the previously emitted slot label distribution $y_{t-1}^S$ and the aligned encoder hidden state $e_t$ :

$$
\boldsymbol {s} _ {t} = \operatorname {L S T M} \left(\boldsymbol {s} _ {t - 1}, \boldsymbol {y} _ {t - 1} ^ {S}, \boldsymbol {e} _ {t}\right). \tag {8}
$$

Instead of directly utilizing $s_t$ to predict the slot label, we build a graph structure, the adaptive intent-slot graph interaction layer, to explicitly leverage the multiple-intent information to guide the $t$ -th slot prediction.
In this graph, the slot hidden state $s_t$ at time step $t$ and the predicted multiple intents $I = \{I_1, \ldots, I_n\}$ , where $n$ denotes the number of predicted intents, are used as the initial node representations: $\tilde{H}^{[0,t]} = \{s_t, \phi^{emb}(I_1), \ldots, \phi^{emb}(I_n)\} \in \mathbb{R}^{(n+1) \times d}$ , where $d$ represents the dimension of the vertex representations and $\phi^{emb}(\cdot)$ represents the embedding matrix of intents. In addition, the predicted intent nodes are connected to each other to model their mutual interaction, because all of them express intents of the same utterance.

For convenience, we use $\tilde{\pmb{h}}_i^{[l,t]}$ to represent node $i$ in the $l$ -th layer of the graph, which consists of the decoder state node and the predicted intent nodes at time step $t$ . $\tilde{\pmb{h}}_0^{[l,t]}$ is the slot hidden state representation in the $l$ -th layer. To explicitly leverage the multiple-intent information, the slot hidden state node is directly connected to all predicted intent nodes, and the node representations in the $l$ -th layer can be calculated as:

$$
\tilde {\boldsymbol {h}} _ {i} ^ {[ l, t ]} = \sigma \big (\sum_ {j \in \mathcal {N} _ {i}} \alpha_ {i j} ^ {[ l, t ]} \boldsymbol {W} _ {h} ^ {[ l ]} \tilde {\boldsymbol {h}} _ {j} ^ {[ l - 1, t ]} \big), \tag {9}
$$

where $\mathcal{N}_i$ represents the first-order neighbors of node $i$ , i.e., the decoder state node and the predicted intent nodes, and the update of all node representations follows Equations 6, 7 and 9.

With the $L$ -layer adaptive intent-slot graph interaction, we obtain the final slot hidden state representation $\tilde{\pmb{h}}_0^{[L,t]}$ at time step $t$ , which adaptively captures the important intent information at the token level.
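The graph-attention update of Equations 6 and 9 over the intent-slot graph can be sketched with a single head as follows (node 0 is the slot state $s_t$, nodes $1..n$ are predicted intent embeddings; the function names and the choice $\sigma = \tanh$ are our illustration, not the released code):

```python
import numpy as np


def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)


def gat_layer(H, adj, W, a):
    # One single-head graph-attention layer (Eq. 6): H is N x F, adj is an
    # N x N 0/1 adjacency mask with self-loops, W is F' x F and a has length 2F'.
    Z = H @ W.T                                   # projected features, N x F'
    N = Z.shape[0]
    # Pairwise scores F(h_i, h_j) = LeakyReLU(a^T [W h_i || W h_j]).
    scores = leaky_relu(np.array(
        [[a @ np.concatenate([Z[i], Z[j]]) for j in range(N)] for i in range(N)]))
    scores = np.where(adj > 0, scores, -np.inf)   # masked attention over neighbors
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return np.tanh(alpha @ Z)                     # sigma = tanh in this sketch


# Intent-slot graph at step t: the slot node (0) connects to every predicted
# intent node, and the intent nodes connect to each other.
n_intents = 3
adj = np.ones((n_intents + 1, n_intents + 1))     # fully connected, incl. self-loops
```

Stacking $L$ such layers and reading out row 0 yields the final slot representation $\tilde{h}_0^{[L,t]}$ used next.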
The representation $\tilde{\pmb{h}}_0^{[L,t]}$ is utilized for slot filling:

$$
\boldsymbol {y} _ {t} ^ {S} = \operatorname {s o f t m a x} \left(\boldsymbol {W} _ {s} \tilde {\boldsymbol {h}} _ {0} ^ {[ L, t ]}\right), \tag {10}
$$

$$
o _ {t} ^ {S} = \arg \max \left(\boldsymbol {y} _ {t} ^ {S}\right), \tag {11}
$$

where $o_t^S$ is the predicted slot label of the $t$ -th word in the utterance.

# 2.4 Multi-Task Training

Following Qin et al. (2020), we adopt a joint model that considers the two tasks and updates parameters by joint optimization. The intent detection objective is:

$$
\mathcal {L} _ {1} \triangleq - \sum_ {k = 1} ^ {N _ {I}} \left(\hat {y} _ {k} ^ {I} \log \left(y _ {k} ^ {I}\right) + \left(1 - \hat {y} _ {k} ^ {I}\right) \log \left(1 - y _ {k} ^ {I}\right)\right) \tag {12}
$$

Similarly, the slot filling objective is defined as:

$$
\mathcal {L} _ {2} \triangleq - \sum_ {i = 1} ^ {M} \sum_ {j = 1} ^ {N _ {S}} \hat {y} _ {i} ^ {(j, S)} \log \left(y _ {i} ^ {(j, S)}\right), \tag {13}
$$

where $N_{I}$ is the number of single intent labels, $N_{S}$ is the number of slot labels and $M$ is the number of words in an utterance.

The final joint objective is formulated as:

$$
\mathcal {L} = \alpha \mathcal {L} _ {1} + (1 - \alpha) \mathcal {L} _ {2}, \tag {14}
$$

where $\alpha$ is a hyper-parameter.

# 3 Experiments

# 3.1 Datasets

Multiple Intent Datasets We conduct experiments on the benchmark DSTC4 (Kim et al., 2017b), which consists of human-human multi-turn dialogues. We adopt the same dataset partition as in the DSTC4 main task and regard its speech act attributes as intents. It has 12,759 utterances for training, 4,812 utterances for validation and 7,848 utterances for testing.

To verify the generalization of the proposed model, we construct the multi-intent SLU dataset MixSNIPS.
The MixSNIPS dataset is collected from the Snips personal voice assistant (Coucke et al., 2018) by using conjunctions, e.g., "and", to connect sentences with different intents, ensuring that the ratio of sentences with 1-3 intents is $[0.3, 0.5, 0.2]$ . In total, we obtain 45,000 utterances for training, 2,500 utterances for validation and 2,500 utterances for testing on the MixSNIPS dataset. Similarly, we construct another multi-intent SLU dataset, MixATIS, from the ATIS dataset (Hemphill et al., 1990). There are 18,000 utterances for training, 1,000 utterances for validation and 1,000 utterances for testing. The constructed datasets have been released for future research.

Single Intent Datasets In addition, we also conduct experiments on two public benchmark single-intent datasets to validate the efficiency of our proposed model: the ATIS dataset (Hemphill et al., 1990) and the SNIPS dataset (Coucke et al., 2018), which are widely used benchmarks in SLU research. Both datasets follow the same format and partition as in Goo et al. (2018) and Qin et al. (2019).

# 3.2 Experimental Settings

The hidden size of the self-attentive encoder is 256 for all datasets. The $\ell_2$ regularization weight is $1\times 10^{-6}$ and the dropout rate is 0.4 to reduce overfitting. We use Adam (Kingma and Ba, 2014) with its suggested hyper-parameters to optimize the parameters of our model. The number of graph layers is 3 for the DSTC4 dataset and 2 for the other datasets. For all experiments, we select the model that works best on the dev set and then evaluate it on the test set. All experiments are conducted on TITAN Xp and GeForce RTX 2080Ti GPUs. The number of epochs is 50 for MixSNIPS and 100 for MixATIS and DSTC4.

# 3.3 Baselines

We first compare our model with the existing state-of-the-art multi-intent SLU baseline:

Joint Multiple ID-SF.
Gangadharaiah and Narayanaswamy (2019) propose a multi-task framework with the slot-gated mechanism for multiple intent detection and slot filling.

Then, we compare our framework with the existing state-of-the-art single-intent SLU models:

1) Attention BiRNN. Liu and Lane (2016) propose an alignment-based RNN with the attention mechanism, which implicitly learns the relationship between slots and intents.
2) Slot-Gated Atten. Goo et al. (2018) propose a slot-gated joint model to explicitly consider the correlation between slot filling and intent detection.
3) Bi-Model. Wang et al. (2018) propose the Bi-model to consider the cross-impact between intent detection and slot filling.
4) SF-ID Network. Haihong et al. (2019) propose an SF-ID network that establishes direct connections between slot filling and intent detection to help them promote each other mutually.
5) Stack-Propagation. Qin et al. (2019) adopt a joint model with Stack-Propagation to capture the intent semantic knowledge and perform token-level intent detection to further alleviate the error
| Model | MixATIS Slot (F1) | MixATIS Intent (F1) | MixATIS Intent (Acc) | MixATIS Overall (Acc) | MixSNIPS Slot (F1) | MixSNIPS Intent (F1) | MixSNIPS Intent (Acc) | MixSNIPS Overall (Acc) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Attention BiRNN | 86.6 | - | 71.6 | 38.7 | 89.4 | - | 94.1 | 62.2 |
| Slot-Gated | 88.1 | - | 65.7 | 38.9 | 87.8 | - | 96.0 | 56.5 |
| Slot-gated Intent | 86.7 | - | 66.2 | 39.6 | 87.9 | - | 94.2 | 57.6 |
| Bi-Model | 85.5 | - | 72.3 | 39.1 | 86.8 | - | 95.3 | 53.9 |
| SF-ID | 87.7 | - | 63.7 | 36.2 | 89.6 | - | 96.3 | 59.3 |
| Stack-Propagation (concatenation) | 86.6 | - | 76.0 | 42.8 | 93.9 | - | 96.4 | 75.5 |
| Stack-Propagation (sigmoid-decoder) | 87.4 | 79.0 | 71.9 | 41.0 | 93.2 | 97.6 | 94.6 | 71.9 |
| Joint Multiple ID-SF | 87.5 | 80.6 | 73.1 | 38.1 | 91.0 | 98.2 | 95.7 | 66.6 |
| AGIF | 88.1 | 81.2* | 75.8 | 44.5* | 94.5* | 98.6* | 96.5* | 76.4* |
propagation. This model achieves the state-of-the-art performance.

To enable single-intent SLU baselines to handle multi-intent utterances, we follow Gangadharaiah and Narayanaswamy (2019) and connect the multiple intent labels with # to obtain a single multi-intent label for a fair comparison; we name this the concatenation version. To further verify the effectiveness of our framework, we also modify the state-of-the-art baseline Stack-Propagation to directly predict multi-intent labels by changing the intent decoder, replacing the softmax with a sigmoid and using the binary cross-entropy loss. We refer to this as the sigmoid-decoder version.

For Attention BiRNN, Slot-Gated Atten, SF-ID Network and Stack-Propagation, we run their official source code to obtain the results. For Bi-Model and Joint Multiple ID-SF, we re-implemented the models and obtained the results on the same datasets, because the original papers did not release their code.

# 3.4 Main Results

Following Goo et al. (2018) and Qin et al. (2019), we evaluate the performance of slot filling using the F1 score, intent prediction using accuracy and the macro F1 score, and sentence-level semantic frame parsing using overall accuracy, which requires all predictions in an utterance to be correct. Table 1 shows the experimental results of the proposed models on the MixATIS and MixSNIPS datasets.

From the results, we have three observations:

1) Our framework outperforms the Joint Multiple ID-SF baseline by a large margin and achieves state-of-the-art performance. On the MixATIS dataset, we achieve a $0.6\%$ improvement in Slot (F1), a $0.6\%$ improvement in Intent (F1) and a $2.7\%$ improvement in Intent (Acc). On the MixSNIPS dataset, we achieve a $3.5\%$ improvement in Slot (F1),

Table 1: Slot filling and intent detection results on two self-constructed multi-intent datasets. The numbers with * indicate that the improvement of our model over all the compared baselines is statistically significant with $p < 0.05$ under the t-test.
| Model | Slot (F1) | Intent (F1) | Intent (Acc) | Overall (Acc) |
| --- | --- | --- | --- | --- |
| Attention BiRNN | 44.0 | - | 42.1 | 32.6 |
| Slot-Gated | 45.0 | - | 42.5 | 32.5 |
| Slot-gated Intent | 50.2 | - | 40.6 | 31.7 |
| Bi-Model | 44.6 | - | 41.3 | 30.5 |
| SF-ID | 51.4 | - | 41.8 | 33.0 |
| Stack-Propagation (1) | 52.8 | - | 44.9 | 34.6 |
| Stack-Propagation (2) | 51.9 | 39.2 | 39.2 | 30.5 |
| Joint Multiple ID-SF | 48.0 | 37.5 | 39.0 | 29.4 |
| AGIF | 53.9 | 40.0 | 46.1 | 35.2 |
Table 2: Slot filling and intent detection results on the DSTC4 dataset. Stack-Propagation (1) denotes the Stack-Propagation (concatenation) version and Stack-Propagation (2) denotes the Stack-Propagation (sigmoid-decoder) version.

a 0.4% improvement in Intent (F1), and 0.8% in Intent (Acc). This indicates that our adaptive intent-slot graph interaction successfully incorporates relevant intent information to improve slot prediction. In addition, we obtain 6.4% and 9.8% improvements in Overall (Acc) on the MixATIS and MixSNIPS datasets, respectively. We attribute this to the fact that our adaptive intent-slot graph interaction mechanism better captures the relationship between the intents and slots, improving SLU as a whole.

2) The concatenation version outperforms the sigmoid-decoder version. This is because concatenation greatly reduces the multi-intent search space, which makes it easier for a single-intent system to predict multiple intents. For example, the ATIS training data contains 17 single intents and 4 combined multi-intent labels; a multi-intent system must make a binary prediction for each intent, while the concatenation model only predicts within the limited combined search space of $17 + 4$ labels.

3) Despite facing the harder multi-intent prediction task, our framework outperforms the state-of-the-art single-intent model (Stack-Propagation (concatenation)), which further proves that the proposed token-level adaptive graph interaction layer

![](images/f2644a927e9330a6675a5fdf28b7b531346d330f8298f9cf21bf0d5fdf1d1a5c.jpg)
Figure 3: A case study between our model and Joint Multiple ID-SF. The green slot is correct while the red one is wrong. Better viewed in color.
| Model | Slot (F1) | Intent (F1) | Intent (Acc) | Overall (Acc) |
| --- | --- | --- | --- | --- |
| Vanilla Attention Interaction | 93.8 | 98.0 | 95.2 | 74.0 |
| GCN-based Interaction | 93.3 | 98.3 | 96.0 | 72.7 |
| Sentence-Level Augmented | 93.8 | 98.1 | 95.7 | 73.9 |
| + More Parameters | 94.1 | 98.6 | 96.6 | 73.6 |
| AGIF | 94.5 | 98.6 | 96.5 | 76.4 |
Table 3: Ablation study on the MixSNIPS dataset.

can improve the SLU performance.

# 3.5 Analysis

# 3.5.1 Performance on the DSTC4 dataset

To further analyze the performance of the AGIF model, we conduct experiments on the real-world multi-intent SLU dataset, DSTC4. The results are shown in Table 2. Compared with Joint Multiple ID-SF, we achieve a $5.9\%$ improvement in Slot (F1), $2.5\%$ in Intent (F1), $7.1\%$ in Intent (Acc), and $5.8\%$ in Overall (Acc). This further shows that our adaptive intent-slot graph interaction can aggregate the pertinent intent information to enhance token-level slot prediction.

# 3.5.2 Effectiveness of Intent-Slot Graph Interaction Mechanism

- Graph Attention Mechanism vs. Vanilla Attention Mechanism. Instead of adopting a GAT to model the interaction between the predicted intents and slots, we use a vanilla attention mechanism to incorporate intent information for slot filling at the token level; we name this variant Vanilla Attention Interaction. We first use the hidden state of the slot filling decoder as the query to attend over the intent embeddings and obtain a context intent vector, and then sum this vector with the decoder hidden state to make the final slot prediction. As shown in the Vanilla Attention Interaction row of Table 3, overall performance drops by $2.4\%$ on the MixSNIPS dataset. We attribute this to the fact that the multi-layer graph attention network can automatically capture relevant intent information and better aggregate it for each token's slot prediction.

![](images/ce9ea6696f8fa747bc13524bb55d15e6cb03b19dc8b92d4660bca1cc64e6fc01.jpg)
Figure 4: Visualization. The Y-axis shows the predicted intents and the X-axis the input utterance, where slot tokens are surrounded by $*$ . For each column, the darker the color, the more relevant they are.

- Graph Attention Mechanism vs.
Graph Convolution Mechanism. We replace the graph attention layer with a graph convolution layer and keep the other components unchanged; we refer to this as GCN-based Interaction. As shown in the GCN-based Interaction row of Table 3, performance drops on all metrics on the MixSNIPS dataset. We suggest that GCN-based Interaction cannot adaptively assign different weights to each node in the intent-slot graph, while our graph attention mechanism can automatically filter out irrelevant intent information for each token.

# 3.5.3 Effectiveness of Adaptive Intent-Slot Interaction Mechanism

- Adaptive Interaction Mechanism vs. Sentence-Level Augmented Mechanism. We first conduct experiments that statically provide the same intent information for every token's slot prediction: we sum the predicted intent embeddings and directly add the result to the hidden state of the slot filling decoder. We refer to this as sentence-level augmented. The result is shown in Table 3. We observe that providing only the overall intent information for slot filling yields worse results, which demonstrates the
| Model | ATIS Slot (F1) | ATIS Intent (F1) | ATIS Intent (Acc) | ATIS Overall (Acc) | SNIPS Slot (F1) | SNIPS Intent (F1) | SNIPS Intent (Acc) | SNIPS Overall (Acc) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SF-ID | 95.6 | - | 96.6 | 86 | 90.5 | - | 97 | 78.4 |
| Stack-Propagation | 95.9 | - | 96.9 | 86.5 | 94.2 | - | 98.0 | 86.9 |
| Joint Multiple ID-SF | 94.2 | - | 95.4 | - | 88.0 | - | 97.2 | - |
| AGIF | 96.0 | 80.2 | 97.1 | 87.2 | 94.8 | 98.3 | 98.1 | 87.3 |
Table 4: Slot filling and intent detection results on two single-intent datasets.

effectiveness of adaptively incorporating intent information at the token level. We believe the reason is that providing the same intents for all tokens causes ambiguity, making it hard for each token to extract the relevant intent information, while our adaptive intent interaction mechanism achieves fine-grained intent interaction and captures the related intent information to guide slot prediction.

A natural question that arises is whether the additional parameters introduced by AGIF contribute to the final performance. To verify that it is the proposed adaptive interaction mechanism rather than the added parameters that works, we equip the sentence-level augmented model with a multi-layer (2-layer) LSTM slot filling decoder and name it more parameters. The results in Table 3 show that our framework outperforms the more parameters model in overall accuracy, which verifies that the improvements come from the proposed adaptive intent-slot interaction mechanism rather than the extra parameters.

- Qualitative Analysis. We provide a case study to intuitively understand the token-level adaptive intent-slot interaction mechanism. As shown in Figure 3, AGIF correctly predicts "I-movie-name" as the slot label of "before", while Joint Multiple ID-SF incorrectly predicts "I-object-name". We observe that "I-object-name" does not belong to the intent "SearchScreeningEvent" but to the intent "RateBook". We attribute the error to the fact that each token is guided by the same complex intent information, causing it to mistakenly capture information from the other intent, "RateBook". In contrast, our adaptive graph interaction mechanism offers fine-grained intent information integration for token-level slot filling, predicting the slot label correctly.
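The token-level interaction contrasted in this case study can be sketched with a simplified per-token attention over the predicted intent embeddings; this is a plain dot-product stand-in for the paper's multi-layer graph attention network, and all shapes and names are hypothetical.

```python
import numpy as np

# Simplified sketch of token-level adaptive intent aggregation: each token's
# slot-filling hidden state attends over the predicted intent embeddings, so
# different tokens can draw on different intents. A dot-product attention
# stand-in for the paper's graph attention layers.

def token_level_intent_fusion(slot_hidden, intent_emb):
    """slot_hidden: (T, d) per-token states; intent_emb: (K, d) predicted intents."""
    scores = slot_hidden @ intent_emb.T                # (T, K) per-token relevance
    scores -= scores.max(axis=1, keepdims=True)        # softmax numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)            # per-token weights over intents
    context = attn @ intent_emb                        # (T, d) per-token intent context
    return slot_hidden + context                       # fused state fed to slot classifier

rng = np.random.default_rng(0)
fused = token_level_intent_fusion(rng.normal(size=(5, 8)), rng.normal(size=(2, 8)))
print(fused.shape)  # (5, 8)
```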
# 3.5.4 Visualization

To better understand what the adaptive intent-slot graph interaction layer has learned, we visualize the intent attention weights of the slot filling hidden state nodes in the output head of the adaptive intent-slot graph interaction layer, as shown in Figure 4. For the utterance "can you add confessions to my playlist called clásica and what is the weather forecast for close-by burkina", with the intents "AddToPlaylist" and "GetWeather", we can clearly see that the attention weights focus on the correct intent, which means our graph interaction layer learns to incorporate the correlated intent information at each slot. More specifically, our model properly aggregates the "AddToPlaylist" intent information at the slots "confessions, my, clásica" and the "GetWeather" intent information at the slots "close-by burkina".

# 3.5.5 Evaluation on the Single-Intent Datasets

We conduct experiments on two public single-intent benchmarks to evaluate the generalizability of our framework. We compare our model with the single-intent state-of-the-art models SF-ID and Stack-Propagation, and with the multi-intent model Joint Multiple ID-SF. Table 4 shows the results of the proposed model on the ATIS and SNIPS datasets. Our model outperforms all the compared baselines and achieves state-of-the-art performance, which demonstrates the generalizability and effectiveness of our framework for both multi-intent and single-intent SLU.

# 4 Related Work

Intent Detection. Intent detection is formulated as an utterance classification problem. Different classification methods, such as support vector machines (SVM) and RNNs (Haffner et al., 2003; Sarikaya et al., 2011), have been proposed to solve it. Xia et al. (2018) adopt a capsule-based neural network with self-attention for intent detection.
However, the above models mainly focus on the single-intent scenario and cannot handle the more complex multi-intent scenario. Xu and Sarikaya (2013b) and Kim et al. (2017a) explore this complex scenario, where multiple intents are assigned to a user's utterance. Xu and Sarikaya (2013b) use log-linear models to achieve this, while we use neural network models. Compared with their work, we jointly perform multi-label intent detection and slot prediction, while they only consider the intent detection subtask.

Slot Filling. Slot filling can be treated as a sequence labeling task. Popular approaches include conditional random fields (CRF) (Raymond and Riccardi, 2007) and recurrent neural networks (RNN) (Xu and Sarikaya, 2013a; Yao et al., 2014). Recently, Shen et al. (2018) and Tan et al. (2018) introduced the self-attention mechanism for CRF-free sequence labeling.

Joint Model. To exploit the high correlation between intents and slots, many joint models (Goo et al., 2018; Li et al., 2018; Xia et al., 2018; E et al., 2019; Liu et al., 2019; Qin et al., 2019) have been proposed to solve the two tasks together. Goo et al. (2018), Li et al. (2018) and Zhang et al. (2019) propose to utilize intent information to guide slot filling. Qin et al. (2019) further utilize a stack-propagation framework to better leverage intent semantic information for slot filling, which achieves state-of-the-art performance. Wang et al. (2018) and E et al. (2019) consider the cross-impact between slots and intents. Our framework follows this state-of-the-art joint-model paradigm, and further focuses on the multi-intent scenario, which the above joint models do not consider. Recently, Gangadharaiah and Narayanaswamy (2019) proposed a joint model that considers multiple intent detection and slot filling simultaneously, explicitly leveraging the overall intent information with a gate mechanism to guide the slot prediction of all tokens.
Compared with this work, the main differences are as follows: 1) Our framework exploits fine-grained intent information transfer with a unified graph interaction architecture, while their work simply incorporates the same intent information for the slot prediction of all tokens. 2) As far as we know, their corpus and code have not been released, which makes their work hard to follow. In contrast, we construct two large-scale multi-intent SLU datasets, and all of our datasets and code have been released. We hope this will push forward the research of multi-intent SLU.

# 5 Conclusion

In this paper, we propose a token-level adaptive graph-interactive framework to model the interaction between multiple intents and the slot at each token, which enables fine-grained intent information transfer for slot prediction. To the best of our knowledge, this is the first work to explore fine-grained intent information transfer in multi-intent SLU. In addition, we release two multi-intent datasets and hope they will push forward research in this area. Experiments on four datasets show the effectiveness of the proposed model, which achieves state-of-the-art performance.

# Acknowledgements

We thank the anonymous reviewers for their helpful comments and suggestions. This work was supported by the National Natural Science Foundation of China (NSFC) via grants 61976072, 61632011 and 61772153.

# References

Alice Coucke, Alaa Saade, Adrien Ball, Théodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, et al. 2018. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. arXiv preprint arXiv:1805.10190.
Haihong E, Peiqing Niu, Zhongfu Chen, and Meina Song. 2019. A novel bi-directional interrelated model for joint intent detection and slot filling. In Proc. of ACL.
Rashmi Gangadharaiah and Balakrishnan Narayanaswamy. 2019.
Joint multiple intent detection and slot labeling for goal-oriented dialog. In Proc. of NAACL, pages 564-569.
Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and Yun-Nung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In Proc. of NAACL.
Patrick Haffner, Gokhan Tur, and Jerry H Wright. 2003. Optimizing svms for complex call classification. In Proc. of ICASSP.
E Haihong, Peiqing Niu, Zhongfu Chen, and Meina Song. 2019. A novel bi-directional interrelated model for joint intent detection and slot filling. In Proc. of ACL, pages 5467-5471.
Charles T Hemphill, John J Godfrey, and George R Doddington. 1990. The atis spoken language systems pilot corpus. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8).
Byeongchang Kim, Seonghan Ryu, and Gary Geunbae Lee. 2017a. Two-stage multi-intent detection for spoken language understanding. Multimedia Tools and Applications, 76(9):11377-11390.
Seokhwan Kim, Luis Fernando D'Haro, Rafael E Banchs, Jason D Williams, and Matthew Henderson. 2017b. The fourth dialog state tracking challenge. In Dialogues with Social Robots, pages 435-449. Springer.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Changliang Li, Liang Li, and Ji Qi. 2018. A self-attentive model with gate mechanism for spoken language understanding. In Proc. of EMNLP.
Bing Liu and Ian Lane. 2016. Attention-based recurrent neural network models for joint intent detection and slot filling. arXiv preprint arXiv:1609.01454.
Yijin Liu, Fandong Meng, Jinchao Zhang, Jie Zhou, Yufeng Chen, and Jinan Xu. 2019. Cm-net: A novel collaborative memory network for spoken language understanding. arXiv preprint arXiv:1909.06937.
Libo Qin, Wanxiang Che, Yangming Li, Minheng Ni, and Ting Liu. 2020. Dcr-net: A deep co-interactive relation network for joint dialog act recognition and sentiment classification. In AAAI, pages 8665-8672.
Libo Qin, Wanxiang Che, Yangming Li, Haoyang Wen, and Ting Liu. 2019. A stack-propagation framework with token-level intent detection for spoken language understanding. In Proc. of EMNLP.
Christian Raymond and Giuseppe Riccardi. 2007. Generative and discriminative algorithms for spoken language understanding. In Eighth Annual Conference of the International Speech Communication Association.
Ruhi Sarikaya, Geoffrey E Hinton, and Bhuvana Ramabhadran. 2011. Deep belief nets for natural language call-routing. In ICASSP.
Sebastian Schuster, Sonal Gupta, Rushin Shah, and Mike Lewis. 2019. Cross-lingual transfer learning for multilingual task oriented dialog. In Proc. of NAACL.
Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2018. Disan: Directional self-attention network for rnn/cnn-free language understanding. In Proc. of AAAI.
Zhixing Tan, Mingxuan Wang, Jun Xie, Yidong Chen, and Xiaodong Shi. 2018. Deep semantic role labeling with self-attention. In Proc. of AAAI.
Gokhan Tur and Renato De Mori. 2011. Spoken Language Understanding: Systems for Extracting Semantic Information from Speech. John Wiley & Sons.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903.
Yu Wang, Yilin Shen, and Hongxia Jin. 2018. A bi-model based rnn semantic frame parsing model for intent detection and slot filling. In Proc. of ACL.
Congying Xia, Chenwei Zhang, Xiaohui Yan, Yi Chang, and Philip Yu. 2018. Zero-shot user intent detection via capsule neural networks. In Proc.
of EMNLP, pages 3090-3099, Brussels, Belgium. Association for Computational Linguistics.
Puyang Xu and Ruhi Sarikaya. 2013a. Convolutional neural network based triangular crf for joint intent detection and slot filling. In 2013 IEEE Workshop on Automatic Speech Recognition and Understanding.
Puyang Xu and Ruhi Sarikaya. 2013b. Exploiting shared information for multi-intent natural language sentence classification. In INTERSPEECH.
Kaisheng Yao, Baolin Peng, Yu Zhang, Dong Yu, Geoffrey Zweig, and Yangyang Shi. 2014. Spoken language understanding using long short-term memory neural networks. In SLT.
Steve Young, Milica Gašić, Blaise Thomson, and Jason D Williams. 2013. Pomdp-based statistical spoken dialog systems: A review.
Chenwei Zhang, Yaliang Li, Nan Du, Wei Fan, and Philip Yu. 2019. Joint slot filling and intent detection via capsule neural networks. In Proc. of ACL.
Victor Zhong, Caiming Xiong, and Richard Socher. 2018. Global-locally self-attentive encoder for dialogue state tracking. In Proc. of ACL.
\ No newline at end of file diff --git a/agifanadaptivegraphinteractiveframeworkforjointmultipleintentdetectionandslotfilling/images.zip b/agifanadaptivegraphinteractiveframeworkforjointmultipleintentdetectionandslotfilling/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..6e91631625e8732be4346b1cd3255e22b528e9fb --- /dev/null +++ b/agifanadaptivegraphinteractiveframeworkforjointmultipleintentdetectionandslotfilling/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0d6b31ec21fbd39d51389c33d559599b67ae9efa0398e7d6c7c7ab1b6b1ff65a +size 404137 diff --git a/agifanadaptivegraphinteractiveframeworkforjointmultipleintentdetectionandslotfilling/layout.json b/agifanadaptivegraphinteractiveframeworkforjointmultipleintentdetectionandslotfilling/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..e0627f08af0f343a1d74ea65611aab070e602a22 --- /dev/null +++ b/agifanadaptivegraphinteractiveframeworkforjointmultipleintentdetectionandslotfilling/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:13586392efda3276c27c4944b2c6b166fc47f56ef3a36c0941a070cee4e1cc5f +size 379157 diff --git a/agreedybitfliptrainingalgorithmforbinarizedknowledgegraphembeddings/dfb20dbb-0b80-4423-94db-a389b1598537_content_list.json b/agreedybitfliptrainingalgorithmforbinarizedknowledgegraphembeddings/dfb20dbb-0b80-4423-94db-a389b1598537_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..cf52816b407d58557c188341f38b22b1b46cd29c --- /dev/null +++ b/agreedybitfliptrainingalgorithmforbinarizedknowledgegraphembeddings/dfb20dbb-0b80-4423-94db-a389b1598537_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:faf7071a313cc812a435dd78470b4baf07f88a7740ec7d7a07aec034e80c80c8 +size 45054 diff --git a/agreedybitfliptrainingalgorithmforbinarizedknowledgegraphembeddings/dfb20dbb-0b80-4423-94db-a389b1598537_model.json 
b/agreedybitfliptrainingalgorithmforbinarizedknowledgegraphembeddings/dfb20dbb-0b80-4423-94db-a389b1598537_model.json new file mode 100644 index 0000000000000000000000000000000000000000..c0683109174a376987a5fbc1e10ff8e57c4c4eb2 --- /dev/null +++ b/agreedybitfliptrainingalgorithmforbinarizedknowledgegraphembeddings/dfb20dbb-0b80-4423-94db-a389b1598537_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e1fccec57dd28bb8092fc24224febabe6b0bd8f37052bd9672a9384858185924 +size 57404 diff --git a/agreedybitfliptrainingalgorithmforbinarizedknowledgegraphembeddings/dfb20dbb-0b80-4423-94db-a389b1598537_origin.pdf b/agreedybitfliptrainingalgorithmforbinarizedknowledgegraphembeddings/dfb20dbb-0b80-4423-94db-a389b1598537_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ed941f664a3973f2c452f522984b2514496f1a0b --- /dev/null +++ b/agreedybitfliptrainingalgorithmforbinarizedknowledgegraphembeddings/dfb20dbb-0b80-4423-94db-a389b1598537_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b3c2e8a33db27e7125abfad57345208863b54e20993339b925c2fe0d37f2a8a +size 258436 diff --git a/agreedybitfliptrainingalgorithmforbinarizedknowledgegraphembeddings/full.md b/agreedybitfliptrainingalgorithmforbinarizedknowledgegraphembeddings/full.md new file mode 100644 index 0000000000000000000000000000000000000000..8dfbac60c65a366bb9c7ddea17a2d616ca12a7c3 --- /dev/null +++ b/agreedybitfliptrainingalgorithmforbinarizedknowledgegraphembeddings/full.md @@ -0,0 +1,226 @@

# A Greedy Bit-flip Training Algorithm for Binarized Knowledge Graph Embeddings

Katsuhiko Hayashi $^{a,d,*}$
khayashi0201@gmail.com
Koki Kishimoto $^{b,*}$
ko.kishimoto8@gmail.com
Masashi Shimbo $^{c,d}$
shimbo@stair_center

$^{a}$The University of Tokyo, $^{b}$Osaka University, $^{c}$Chiba Institute of Technology, $^{d}$RIKEN AIP

# Abstract

This paper presents a simple and effective discrete optimization method for training binarized
knowledge graph embedding model B-CP. Unlike the prior work, which uses an SGD-based method with quantization of real-valued vectors, the proposed method directly optimizes binary embedding vectors through a series of bit-flipping operations. On standard knowledge graph completion tasks, the B-CP model trained with the proposed method achieves performance comparable to that of B-CP trained with SGD, as well as to state-of-the-art real-valued models of similar embedding dimensions.

# 1 Introduction

Knowledge graph embedding (KGE) has a wide range of applications in AI and NLP, such as knowledge acquisition, question answering, and recommender systems. Most existing KGE models represent entities and relations as real- or complex-valued vectors and thus consume a large amount of memory (Nickel et al., 2011; Bordes et al., 2013; Socher et al., 2013; Yang et al., 2014; Wang et al., 2014; Lin et al., 2015; Nickel et al., 2016; Trouillon et al., 2016; Hayashi and Shimbo, 2017; Liu et al., 2017; Manabe et al., 2018; Kazemi and Poole, 2018; Dettmers et al., 2018; Balažević et al., 2019a; Xu and Li, 2019; Balažević et al., 2019b). To deal with knowledge graphs with more than a million entities, more lightweight models are desirable for faster processing and reduced memory consumption, especially as AI applications on mobile devices become more common.

Kishimoto et al. (2019b) proposed a binarized KGE model, B-CP, wherein all vector components are binarized, allowing them to be stored compactly in a bitwise representation. Despite reducing memory consumption by more than an order of magnitude, B-CP performed as well as existing real-valued KGE models on benchmark tasks.

B-CP is based on the CP decomposition of a knowledge graph (Lacroix et al., 2018; Kazemi and Poole, 2018). It is fully expressive (Kishimoto et al., 2019a), meaning that any knowledge graph can be represented as a B-CP model.
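The memory savings from binarization can be made concrete with a small bit-packing sketch; this is purely illustrative (the dimension D = 256 is a hypothetical choice), not the authors' code.

```python
import numpy as np

# Why binarization saves memory: a D-dimensional {-1, +1} vector occupies
# D bits instead of D 32-bit floats, roughly a 32x reduction. Illustrative
# sketch only; D = 256 is a hypothetical embedding size.

D = 256
real_vec = np.random.randn(D).astype(np.float32)          # real-valued embedding
bin_vec = np.where(real_vec >= 0, 1, -1).astype(np.int8)  # quantize to {-1, +1}
packed = np.packbits(bin_vec > 0)                         # 1 bit per component

print(real_vec.nbytes, packed.nbytes)  # 1024 32
```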
During the training of B-CP, however, real-valued embeddings are maintained and quantized at each training step (Kishimoto et al., 2019b). The loss function is computed with respect to the quantized vectors, but stochastic gradient descent is performed on the real vectors with the help of Hinton's "straight-through" estimator (HSTE) (Bengio et al., 2013). Thus, training does not benefit significantly from the compact bitwise representations, although score computation can be accelerated by a bitwise technique. DKGE (Li et al., 2020) is another recently proposed binary KGE model, but it also maintains real-valued vectors during training, as it solves a relaxed optimization problem with continuous variables.

In this paper, we propose greedy bit flipping, a new training approach for B-CP in which binary vectors are directly optimized, i.e., without the intervention of real-valued vectors. Bits in the binary vectors are sequentially flipped in a greedy manner so that the objective loss improves. The advantages of greedy bit flipping are: (1) it does not need to maintain real-valued vectors even during training; (2) it is simple and easy to implement; and (3) it has only a few hyperparameters.

# 2 Binarized CP Decomposition for Knowledge Graphs

A knowledge graph is a set of triples $(e_i, e_j, r_k)$, where $e_i, e_j$ are subject and object entities (represented as nodes in the graph), respectively, and $r_k$ is the label of the relation between them (corresponding to labeled arcs in the graph). When a triple is in a knowledge graph, it is called a fact.

A knowledge graph can be equivalently represented by a third-order boolean tensor $\mathcal{X} = [x_{ijk}] \in \{0,1\}^{N_e \times N_e \times N_r}$, where $N_e$ is the number of entities in the graph and $N_r$ is the number of relation labels; $x_{ijk} = 1$ if the triple $(e_i, e_j, r_k)$ is a fact, and $x_{ijk} = 0$ otherwise.

CP decomposition (Hitchcock, 1927) is a general technique for decomposing a tensor into a sum of rank-1 tensors. For a third-order tensor $\mathcal{X}$ representing a knowledge graph, its approximate CP decomposition is given by $\mathcal{X} \approx \sum_{d=1}^{D} \mathbf{a}_d \otimes \mathbf{b}_d \otimes \mathbf{c}_d$, where $\otimes$ denotes the outer product, and $\mathbf{a}_d, \mathbf{b}_d \in \mathbb{R}^{N_e}$ and $\mathbf{c}_d \in \mathbb{R}^{N_r}$ are real (column) vectors. The matrices $\mathbf{A} = [\mathbf{a}_1 \mathbf{a}_2 \cdots \mathbf{a}_D] \in \mathbb{R}^{N_e \times D}$, $\mathbf{B} = [\mathbf{b}_1 \mathbf{b}_2 \cdots \mathbf{b}_D] \in \mathbb{R}^{N_e \times D}$, and $\mathbf{C} = [\mathbf{c}_1 \mathbf{c}_2 \cdots \mathbf{c}_D] \in \mathbb{R}^{N_r \times D}$ are called factor matrices. For any matrix $\mathbf{M}$, let $\mathbf{m}_{i:}$ denote its $i$th row vector. Then the component $x_{ijk}$ of $\mathcal{X}$ can be written as $x_{ijk} \approx \langle \mathbf{a}_{i:}, \mathbf{b}_{j:}, \mathbf{c}_{k:} \rangle = \sum_{d=1}^{D} a_{id} b_{jd} c_{kd}$. Thus, the vectors $\mathbf{a}_{i:}, \mathbf{b}_{j:}, \mathbf{c}_{k:}$ can be regarded as the $D$-dimensional embeddings of the subject entity $e_i$, the object entity $e_j$, and the relation label $r_k$, respectively.

The B-CP decomposition of a knowledge graph (Kishimoto et al., 2019b) differs from standard CP in that $\mathcal{X}$ is decomposed in terms of binary vectors $\mathbf{a}_d, \mathbf{b}_d \in \{-1, +1\}^{N_e}$ and $\mathbf{c}_d \in \{-1, +1\}^{N_r}$. As with CP, B-CP decomposition can be cast as a binary classification problem and solved by logistic regression. First, each $x_{ijk}$ is assumed to be a random variable sampled independently from a probability distribution parameterized by $\mathbf{A}, \mathbf{B}, \mathbf{C}$:

$$
p(\mathcal{X} \mid \mathbf{A}, \mathbf{B}, \mathbf{C}) = \prod_{i=1}^{N_e} \prod_{j=1}^{N_e} \prod_{k=1}^{N_r} p(x_{ijk} \mid \theta_{ijk}),
$$

where $\theta_{ijk} = \langle \mathbf{a}_{i:}, \mathbf{b}_{j:}, \mathbf{c}_{k:} \rangle$ is called the score of triple $(e_i, e_j, r_k)$, and

$$
p(x_{ijk} \mid \theta_{ijk}) = \begin{cases} \sigma(\theta_{ijk}) & \text{if } x_{ijk} = 1, \\ 1 - \sigma(\theta_{ijk}) & \text{if } x_{ijk} = 0, \end{cases}
$$

is a Bernoulli distribution, where $\sigma(x) = 1 / (1 + \exp(-x))$ is the sigmoid function.

To train the factor matrices to match the observed/unobserved facts encoded in $\mathcal{X}$, we minimize the

# Algorithm 1: Greedy Bit-flip Training

input: Pos: set of training triples (facts), including those for reciprocal relations (see Sec. 3.2)
input: $N_e, N_r$: numbers of entities and relations
input: $I$: maximum number of iterations
output: $\mathbf{A}, \mathbf{B} \in \{-1, +1\}^{N_e \times D}$: factor matrices of subject and object entity embeddings
output: $\mathbf{C} \in \{-1, +1\}^{N_r \times D}$: factor matrix of relation embeddings

1 Initialize binary factor matrices $\mathbf{A}, \mathbf{B}, \mathbf{C}$ randomly
2 foreach iter $\in \{1, \dots, I\}$ do
3 &nbsp;&nbsp; Neg $\leftarrow$ negative samples (see Sec.
3.2)
4 &nbsp;&nbsp; Update(C, $N_r$, D, Pos, Neg) // relation embeddings
5 &nbsp;&nbsp; Update(A, $N_e$, D, Pos, Neg) // subject embeddings
6 &nbsp;&nbsp; Update(B, $N_e$, D, Pos, Neg) // object embeddings
7 &nbsp;&nbsp; Check convergence

# Algorithm 2: Update(M, N, D, Pos, Neg)

input: $\mathbf{M} \in \{-1, +1\}^{N \times D}$: factor matrix to update
input: Pos: set of positive triples (facts)
input: Neg: set of negative triples (non-facts)
output: $\mathbf{M}$: updated factor matrix

1 $C \gets$ random permutation of indices $1, \dots, D$
2 foreach $i \in \{1, \dots, N\}$ do // run in parallel
3 &nbsp;&nbsp; foreach $j \in C$ do // run sequentially, but in random order
4 &nbsp;&nbsp;&nbsp;&nbsp; if $\Delta(m_{ij}) < 0$ then $m_{ij} \gets -m_{ij}$

negative log likelihood of the posterior probability:

$$
\min_{\mathbf{A}, \mathbf{B}, \mathbf{C}} \sum_{i=1}^{N_e} \sum_{j=1}^{N_e} \sum_{k=1}^{N_r} E_{ijk} \tag{1}
$$

where

$$
E_{ijk} = -x_{ijk} \log(\sigma(\theta_{ijk})) - (1 - x_{ijk}) \log(1 - \sigma(\theta_{ijk})). \tag{2}
$$

# 3 Greedy Bit-flip Training for B-CP

The proposed training method randomly samples an element (or a bit) of the factor matrices $\mathbf{A}, \mathbf{B}, \mathbf{C}$ of B-CP, and negates its sign if this "bit flipping" reduces the objective loss. This process is repeated until the loss no longer improves or a specified number of iterations is reached. The pseudocode of the algorithm is given in Algorithms 1 and 2.

In Algorithm 1, when a factor matrix is updated, the other two factor matrices are fixed. As the number $N_r$ of relations is generally much smaller than the number $N_e$ of entities, a change in the relation matrix $\mathbf{C}$ influences the total loss much more than a change in the entity matrices $\mathbf{A}$ and $\mathbf{B}$. For this reason, we update $\mathbf{C}$ before $\mathbf{A}$ and $\mathbf{B}$ in each iteration to promote faster convergence.

Actual update is carried out in Algorithm 2.
As remarked on Line 2, a row of a factor matrix, which represents a single entity or relation, can be processed in parallel, because the score of an individual triple depends only on the single subject, object, and relation it contains; for instance, even when all subject embeddings are updated simultaneously, only one of them can change the score of any given triple. This means that, when multiple rows of a factor matrix are updated, the change in the total loss in Eq. (1) is invariant to the order of the updates, as long as the other two factor matrices are fixed. Since Algorithm 2 updates only one matrix, its rows can be processed in parallel.

By contrast, the loss does depend on the order in which the columns (i.e., bits) within a row, the components of an embedding vector, are updated. We therefore change the order of updated columns every time Algorithm 2 is called, by shuffling the set $[D]$ of dimensions in Line 1.

In Algorithm 2, each bit in a factor matrix is examined to see whether it is worth flipping. For instance, consider a component (bit) $a_{ij}$ of $\mathbf{A}$. Let $E(\mathbf{A}, \mathbf{B}, \mathbf{C}) = \sum_{i=1}^{N_e} \sum_{j=1}^{N_e} \sum_{k=1}^{N_r} E_{ijk}$ denote the loss in Eq. (1), and let $\mathbf{A}'$ denote the factor matrix $\mathbf{A}$ after $a_{ij}$ is flipped to $-a_{ij}$. The change in the loss is then

$$
\begin{aligned}
\Delta(a_{ij}) &= E(\mathbf{A}', \mathbf{B}, \mathbf{C}) - E(\mathbf{A}, \mathbf{B}, \mathbf{C}) \\
&= \sum_{y=1}^{N_e} \sum_{z=1}^{N_r} \left( x_{iyz} \log \frac{\sigma(\theta_{iyz})}{\sigma(\theta_{iyz} - 2 a_{ij} b_{yj} c_{zj})} + (1 - x_{iyz}) \log \frac{1 - \sigma(\theta_{iyz})}{1 - \sigma(\theta_{iyz} - 2 a_{ij} b_{yj} c_{zj})} \right),
\end{aligned}
$$

where $\theta_{iyz}$ is the score appearing in Eq. (2), computed before the update (i.e., using $\mathbf{A}$, not $\mathbf{A}'$).
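The flip test just described can be sketched in a dense toy version that recomputes the full loss instead of the incremental $\Delta$ formula, and omits negative sampling and the bitwise tricks of Sec. 3.1; all sizes and names below are hypothetical.

```python
import numpy as np

# Toy dense sketch of the greedy bit-flip test: a bit a_ij stays flipped
# only when the change in the logistic loss is negative. Recomputes the
# full loss for clarity; the paper evaluates the incremental Delta and
# uses negative sampling instead of the dense tensor X.

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def loss(X, A, B, C):
    theta = np.einsum('id,jd,kd->ijk', A, B, C)  # scores <a_i, b_j, c_k>
    return -(X * np.log(sigmoid(theta))
             + (1 - X) * np.log(1 - sigmoid(theta))).sum()

rng = np.random.default_rng(0)
Ne, Nr, D = 4, 2, 8
X = rng.integers(0, 2, size=(Ne, Ne, Nr)).astype(float)
A = rng.choice([-1.0, 1.0], size=(Ne, D))
B = rng.choice([-1.0, 1.0], size=(Ne, D))
C = rng.choice([-1.0, 1.0], size=(Nr, D))

initial = cur = loss(X, A, B, C)
for i in range(Ne):                      # one sweep over the rows of A
    for j in rng.permutation(D):         # columns in random order (Line 1 of Alg. 2)
        A[i, j] = -A[i, j]               # tentative flip
        new = loss(X, A, B, C)
        if new < cur:                    # Delta < 0: keep the flip
            cur = new
        else:                            # otherwise revert
            A[i, j] = -A[i, j]
print(cur <= initial)  # True: the loss never increases
```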
$a_{ij}$ is actually flipped only if $\Delta(a_{ij})$ is negative, i.e., only if the flip decreases the loss. The same rule applies to the bits in the factor matrices $\mathbf{B}$ and $\mathbf{C}$ . Repeated application of this update guarantees that the loss is non-increasing. However, the loss may get stuck in a local minimum, depending on the order of the updates. Training is terminated when the objective loss does not improve, or when a pre-determined number of epochs has elapsed.

# 3.1 Fast Score Computation by Bitwise Operations

In this section, we describe an implementation detail necessary to speed up training.

As Algorithm 2 involves repeated computation of the scores $\theta_{ijk}$ , fast score computation is key to speeding up training. Although one easy approach is to cache all scores in memory, the number of facts may be huge in knowledge graphs. We therefore use bitwise operations to speed up score computation.

![](images/4dfb243675e36e2ef3f56eb0e68834b4a95740e89e1d3ddaa209bd7f975d14e8.jpg)
Figure 1: Real-valued vs. Bit vectors: Score computation time comparison.

We can compute the B-CP scores by bitwise operations as follows:

$$
\theta_ {i j k} = D - 2 h \left(\mathbf {a} _ {i:}, \operatorname {X N O R} \left(\mathbf {b} _ {j:}, \mathbf {c} _ {k:}\right)\right), \tag {3}
$$

where $h(\cdot, \cdot)$ is the Hamming distance and $\mathrm{XNOR}(\cdot, \cdot)$ is the negation of exclusive-or. As shown in Figure 1, bitwise score computation by Eq. (3) is much faster than naive computation of the scores with real-valued arithmetic, making the cost of score computation negligible.

# 3.2 Negative Sampling and Reciprocal Relations

Before calling Algorithm 1, for each $(e_i,e_j,r_k)$ in the training set $Pos$ , we introduce its reciprocal triple $(e_j,e_i,r_k^{-1})$ into the set, with a new relation label $r_k^{-1}$ . This technique was used by Lacroix et al.
(2018) and Kazemi and Poole (2018), and is effective for models such as CP and B-CP, in which an entity has separate embeddings for its subject and object roles.

Following prior work, we also approximate the objective loss by sampling negative examples (Algorithm 1, Line 3) to cope with the enormous size and sparsity of knowledge graphs. Specifically, for each $(e_i,e_j,r_k)$ in the training set, a predetermined number of entities are first sampled at random. Then, for each sampled entity $e$ , we create a negative triple $(e_i,e,r_k)$ and its reciprocal negative triple $(e,e_i,r_k^{-1})$ .

# 4 Experiments

# 4.1 Experimental Setup

For evaluation, we performed entity prediction on two standard knowledge graph completion (KGC) datasets, WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova and Chen, 2015), with their default training/validation/test splits.

![](images/fe8a61384f7bbabc6277ca6d7b0720e4c16141d3e74f4a247bfbbf4bfa6160bb.jpg)
Figure 2: Training epochs vs. filtered MRR on the WN18RR and FB15k-237 validation datasets.

In the entity prediction task, a KGE model is given a set of incomplete triples, each of which is generated by hiding one of the entities in a positive triple in the test set; i.e., from a positive triple $(e_i, e_j, r_k)$ , the incomplete triples $(?, e_j, r_k)$ and $(e_i,?, r_k)$ are generated. For each such incomplete triple, the KGE model must produce a list of all entities (including the correct entity, $e_i$ or $e_j$ ) ranked by the score obtained when each entity is plugged in for the placeholder "?" in the triple. The quality of the output ranking is then measured by two standard evaluation measures for the KGC task: Mean Reciprocal Rank (MRR) and Hits@10, in the "filtered" setting (Bordes et al., 2013).
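As a small illustration of the evaluation protocol, the filtered rank discards other known-true entities before ranking the target, and the two measures are then averaged over all test queries. The scores and entity names below are hypothetical:

```python
def filtered_rank(scores, target, known_true):
    """Rank of `target` by score, ignoring other known-true entities
    (the 'filtered' setting of Bordes et al., 2013). `scores` maps each
    candidate entity to its model score; higher is better."""
    t = scores[target]
    return 1 + sum(1 for e, s in scores.items()
                   if s > t and e != target and e not in known_true)

def mrr_and_hits(ranks, k=10):
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits = sum(r <= k for r in ranks) / len(ranks)
    return mrr, hits

# Two toy queries; for the first, entity "b" is another true answer
# and is therefore filtered out of the ranking.
scores = {"a": 3.0, "b": 5.0, "c": 1.0, "d": 4.0}
r1 = filtered_rank(scores, target="a", known_true={"a", "b"})  # only "d" outranks "a"
r2 = filtered_rank(scores, target="d", known_true={"d"})       # "b" outranks "d"
mrr, hits10 = mrr_and_hits([r1, r2])
```

Here both queries yield rank 2, so MRR is 0.5 and Hits@10 is 1.0.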
We selected the hyperparameter $D$ of the proposed method (henceforth denoted as "Bit-flip B-CP") via grid search over the range $D \in \{200, 400, 600\}$ , choosing the value that maximizes filtered MRR on the validation set. The maximum number of training epochs was set to 20. We generated 20 negative triples per positive training triple for FB15k-237 and 5 for WN18RR. Bit-flip B-CP was implemented in Java and ran on a laptop PC with a 2.7 GHz Intel Core i7 CPU. Our implementation with $D = 400$ took about 5 minutes to finish 20 training epochs on the WN18RR training dataset.

# 4.2 Results

Training Convergence Figure 2 shows the MRR scores on the validation set at each training epoch. For comparison, we also trained B-CP using HSTE-based stochastic gradient descent for optimization and the best hyperparameters reported by Kishimoto et al. (2019a).

The figure shows that greedy bit flipping (Bit-flip B-CP) requires far fewer training epochs to converge than HSTE-based training (HSTE B-CP). For both datasets, the best MRR
| Model | Memory (MB) | WN18RR MRR | WN18RR Hits@10 | FB15k-237 MRR | FB15k-237 Hits@10 |
| --- | --- | --- | --- | --- | --- |
| DistMult* | 79.24 | 45.2 | 53.1 | 34.3 | 53.1 |
| ComplEx* | 39.62 | 47.5 | 54.7 | 34.8 | 53.6 |
| ConvE* | 79.24 | 44.2 | 50.4 | 33.9 | 52.1 |
| HSTE B-CP** | 3.87 | 45.0 | 52.0 | 29.2 | 46.1 |
| DKGE*** | 2.62 | 35.0 | 50.6 | 36.8 | 50.7 |
| HSTE B-CP | 3.87 | 44.2 | 47.2 | 27.1 | 43.7 |
| †HSTE B-CP | 19.34 | 46.4 | 51.2 | 28.9 | 46.0 |
| Bit-flip B-CP | 3.87 | 47.7 (±0.2) | 53.3 (±0.1) | 27.6 (±0.0) | 45.7 (±0.1) |
| †Bit-flip B-CP | 19.34 | 49.1 | 55.0 | 29.5 | 47.8 |
Table 1: KGC results on WN18RR and FB15k-237: Memory consumption (MB), Filtered MRR and Hits@10 (%). *, ** and *** indicate the results taken from (Ruffinelli et al., 2020), (Kishimoto et al., 2019b) and (Li et al., 2020), respectively. The memory consumption figures for these models are estimated from the reported number of parameters.

for Bit-flip B-CP was obtained when $D = 400$ , and thus we used this setting for the following test evaluations.

KGC Performance Table 1 summarizes the performance on the entity prediction task. The table lists the proposed Bit-flip B-CP and several state-of-the-art models, including B-CP trained with HSTE (HSTE B-CP). We can see that Bit-flip B-CP achieved results comparable to the other KGE models.

To examine the dependence on initial parameter values, we trained five bit-flip-trained B-CP models using different initial values generated with varied random seeds. The performance figures in the table for Bit-flip B-CP are the average over these five models, with the standard deviation shown in parentheses. The small standard deviations indicate that bit-flip training is stable over different random seeds.

Notice that B-CP consists of binary vectors, which makes the memory consumption approximately 1/20 of that of the real-valued models DistMult and ConvE. Taking advantage of the small memory consumption of B-CP, we created an ensemble of five B-CP models; i.e., the score $\theta_{ijk}$ is computed as the sum of the scores of all models in the ensemble. Its performance is shown in the row titled "†Bit-flip B-CP" of Table 1. For comparison, we also show the result for the ensemble of five HSTE-trained B-CP models ("†HSTE B-CP"). As we can see from the table, the ensemble improved task performance. Note that even the ensemble models consume much less memory than existing models using 32-bit real embeddings.
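The ensemble scoring rule is simply a sum over member scores; a minimal sketch, with trivial stand-in score functions in place of trained B-CP models:

```python
def ensemble_theta(score_fns, i, j, k):
    """Ensemble score: the sum of each member model's theta_ijk."""
    return sum(f(i, j, k) for f in score_fns)

# e.g. rank candidate subjects for an incomplete triple (?, j=1, r=2)
# with a two-member ensemble; the lambdas stand in for trained models.
members = [lambda i, j, k: (i + j) % 3,
           lambda i, j, k: (i * k) % 4]
candidates = sorted(range(5), key=lambda i: -ensemble_theta(members, i, 1, 2))
```

Entity prediction then proceeds exactly as for a single model, but with the summed score.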
# 5 Conclusion

In this paper, we have introduced greedy bit flipping, a simple yet effective discrete optimization method for training the binarized KGE model B-CP.

On the standard benchmark datasets of KGC, B-CP models trained by bit flipping were on par with HSTE-trained B-CP in terms of accuracy. Experimental results show that the KGC performance was stable over different initial values. Ensembling multiple B-CP models is made tractable by their small memory consumption, and brought a further performance improvement.

Bit flipping is unique in that it does not require the loss function to be differentiable, making it potentially applicable to a wide range of loss functions. We plan to investigate this direction in our future work. Application of bit flipping to other binarized KGE models is another interesting direction. A binary version of DistMult looks interesting as a starting point, as it is closely related to DKGE (Li et al., 2020), a recently proposed binarized model.

# Acknowledgments

This work was partially supported by JSPS Kakenhi Grants 18K11457 and 19H04173.

# References

Ivana Balažević, Carl Allen, and Timothy M Hospedales. 2019a. Hypernetwork knowledge graph embeddings. In Proceedings of the 28th International Conference on Artificial Neural Networks (ICANN), pages 553-565.
Ivana Balažević, Carl Allen, and Timothy M. Hospedales. 2019b. TuckER: Tensor factorization for knowledge graph completion. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, pages 5184-5193.
Yoshua Bengio, Nicholas Léonard, and Aaron C. Courville. 2013. Estimating or propagating gradients through stochastic neurons for conditional computation. CoRR, abs/1308.3432.
Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, pages 2787-2795.
+Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In Proceedings of + +the Thirty-Second AAAI Conference on Artificial Intelligence. +Katsuhiko Hayashi and Masashi Shimbo. 2017. On the equivalence of holographic and complex embeddings for link prediction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 554-559. +F. L. Hitchcock. 1927. The expression of a tensor or a polyadic as a sum of products. J. Math. Phys, 6(1):164-189. +Seyed Mehran Kazemi and David Poole. 2018. Simple embedding for link prediction in knowledge graphs. In Advances in Neural Information Processing Systems, pages 4289-4300. +Koki Kishimoto, Katsuhiko Hayashi, Genki Akai, and Masashi Shimbo. 2019a. Binarized canonical polyadic decomposition for knowledge graph completion. arXiv preprint arXiv:1912.02686. +Koki Kishimoto, Katsuhiko Hayashi, Genki Akai, Masashi Shimbo, and Kazunori Komatani. 2019b. Binarized knowledge graph embeddings. In Proceedings of the 41st European Conference on Information Retrieval, pages 181-196. +Timothée Lacroix, Nicolas Usunier, and Guillaume Obozinski. 2018. Canonical tensor decomposition for knowledge base completion. In Proceedings of the 35th International Conference on Machine Learning, pages 2869-2878. +Yunqi Li, Shuyuan Xu, Bo Liu, Zuohui Fu, Shuchang Liu, Xu Chen, and Yongfeng Zhang. 2020. Discrete knowledge graph embedding based on discrete optimization. In Proceedings of the AAAI-20 Workshop on Knowledge Discovery from Unstructured Data in Financial Services. +Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, pages 2181-2187. +Hanxiao Liu, Yuexin Wu, and Yiming Yang. 2017. Analogical inference for multi-relational embeddings. 
In Proceedings of the 34th International Conference on Machine Learning, pages 2168-2178. +Hitoshi Manabe, Katsuhiko Hayashi, and Masashi Shimbo. 2018. Data-dependent learning of symmetric/antisymmetric relations for knowledge base completion. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence. +Maximilian Nickel, Lorenzo Rosasco, and Tomaso A. Poggio. 2016. Holographic embeddings of knowledge graphs. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 1955-1961. + +Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on Machine Learning, pages 809-816. +Daniel Ruffinelli, Samuel Broscheit, and Rainer Gemulla. 2020. You CAN teach an old dog new tricks! On training knowledge graph embeddings. In Proceedings of 8th International Conference on Learning Representations (ICLR). +Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems, pages 926-934. +Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 57-66. +Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In Proceedings of the 33nd International Conference on Machine Learning, pages 2071-2080. +Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, pages 1112-1119. +Canran Xu and Ruijiang Li. 2019. Relation embedding with dihedral group in knowledge graph. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 263-272.
Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. CoRR, abs/1412.6575.
# A Hierarchical Network for Abstractive Meeting Summarization with Cross-Domain Pretraining

Chenguang Zhu*, Ruochen
Xu*, Michael Zeng, Xuedong Huang + +Microsoft Cognitive Services Research Group + +{chezhu,ruox,nzeng,xdh}@microsoft.com + +# Abstract + +With the abundance of automatic meeting transcripts, meeting summarization is of great interest to both participants and other parties. Traditional methods of summarizing meetings depend on complex multi-step pipelines that make joint optimization intractable. Meanwhile, there are a handful of deep neural models for text summarization and dialogue systems. However, the semantic structure and styles of meeting transcripts are quite different from articles and conversations. In this paper, we propose a novel abstractive summary network that adapts to the meeting scenario. We design a hierarchical structure to accommodate long meeting transcripts and a role vector to depict the difference among speakers. Furthermore, due to the inadequacy of meeting summary data, we pretrain the model on large-scale news summary data. Empirical results show that our model outperforms previous approaches in both automatic metrics and human evaluation. For example, on ICSI dataset, the ROUGE-1 score increases from $34.66\%$ to $46.28\%$ . + +# 1 Introduction + +Meetings are a very common forum where people exchange ideas, make plans, and share information. With the ubiquity of automatic speech recognition systems come vast amounts of meeting transcripts. Therefore, the need to succinctly summarize the content of a meeting naturally arises. + +Several methods of generating summaries for meetings have been proposed (Mehdad et al., 2013; Murray et al., 2010; Wang and Cardie, 2013; Oya et al., 2014; Shang et al., 2018; Li et al., 2019). As Murray et al. (2010) points out, users prefer abstractive meeting summaries to extractive summaries. While these methods are mostly abstractive, they require complicated multi-stage machine + +# Meeting Transcript (163 turns) + +PM: ... 
another point is we have to skip the teletext, because in the world of upcoming internet we think teletext is going to be a thing of the past. +ID: ... first about how it works. It's really simple. Everybody knows how a remote works. The user presses a button. The remote determines what button it is, +PM: ... Few buttons, we talked about that. Docking station, LCD. general functions And default materials... + +# Summary from our model (23 sentences) + +The Project Manager announced that the project would not include a teletext feature. +The Industrial Designer gave a presentation of the functions of the remote. +The group decided on features to include in the remote, to include an LCD screen, and a docking station to change the layout of the interface. + +Table 1: Example excerpt of a meeting transcript and the summary generated by our model in AMI dataset. Keywords are in bold. PM (program manager) and ID (industrial designer) are roles of the speakers. The meeting transcript contains word errors and grammatical glitches as it is the result from the automatic speech recognition system. + +learning pipelines, such as template generation, sentence clustering, multi-sentence compression, candidate sentence generation and ranking. As these approaches are not end-to-end optimisable, it is hard to jointly improve various parts in the pipeline to enhance the overall performance. Moreover, some components, e.g., template generation, require extensive human involvement, rendering the solution not scalable or transferrable. + +Meanwhile, many end-to-end systems have been successfully employed to tackle document summarization, such as the pointer-generator network (See et al., 2017), reinforced summarization network (Paulus et al., 2018) and memory network + +(Jiang and Bansal, 2018). These deep learning methods can effectively generate abstractive document summaries by directly optimizing pre-defined goals. 
(Jiang and Bansal, 2018). These deep learning methods can effectively generate abstractive document summaries by directly optimizing pre-defined goals.

However, the meeting summarization task inherently bears a number of challenges that make end-to-end training more difficult than for document summarization. We show an example of a meeting transcript from the AMI dataset and the summary generated by our model in Table 1.

First, the transcript and summary of a single meeting are usually much longer than those of a document. For instance, in the CNN/Daily Mail dataset (Hermann et al., 2015), there are on average 781 tokens per article and 56 tokens per summary, while the AMI meeting corpus contains meetings with 4,757 tokens per transcript and 322 tokens per summary on average. Moreover, the structure of a meeting transcript is very different from that of news articles. These challenges all prevent existing news summarization models from being successfully applied to meetings.

Second, a meeting is carried out between multiple participants. The different semantic styles, standpoints, and roles of each participant all contribute to the heterogeneous nature of the meeting transcript.

Third, compared with news, there is very limited labelled training data for meeting summarization (137 meetings in AMI vs. 312K articles in CNN/DM). This is due to the privacy of meetings and the relatively high cost of writing summaries for long transcripts.

To tackle these challenges, we propose an end-to-end deep learning framework, Hierarchical Meeting summarization Network (HMNet). HMNet leverages the encoder-decoder transformer architecture (Vaswani et al., 2017) to produce abstractive summaries based on meeting transcripts. To adapt the structure to meeting summarization, we propose two major design improvements.

First, as meeting transcripts are usually lengthy, a direct application of the canonical transformer structure may not be feasible. For instance, conducting the multi-head self-attention mechanism on a transcript with thousands of tokens is very time-consuming and may cause memory overflow problems.
Therefore, we leverage a hierarchical structure to reduce the computational burden. As a meeting consists of utterances from different participants, it forms a natural multi-turn hierarchy. Thus, the hierarchical structure carries out both token-level understanding within each turn and turn-level understanding across the whole meeting. During summary generation, HMNet applies attention to both levels of understanding to ensure that each part of the summary stems from different portions of the transcript with varying granularities.

Second, to accommodate the multi-speaker scenario, HMNet incorporates the role of each speaker1 to encode the different semantic styles and standpoints among participants. For example, a program manager usually emphasizes the progress of the project while a user interface designer tends to focus on user experience. In HMNet, we train a role vector for each meeting participant to represent the speaker's information during encoding. This role vector is appended to the turn-level representation for later decoding.

To tackle the problem of insufficient training data for meeting summarization, we leverage the idea of pretraining (Devlin et al., 2018). We collect summarization data from the news domain and convert it into the meeting format: a group of several news articles forms a multi-person meeting and each sentence becomes a turn. The turns are reshuffled to simulate a mixed order of speakers. We pretrain the HMNet model on the news task before finetuning it on meeting summarization. Empirical results show that this cross-domain pretraining can effectively enhance the model quality.

To evaluate our model, we employ the widely used AMI and ICSI meeting corpora (McCowan et al., 2005; Janin et al., 2003). Results show that HMNet significantly outperforms previous meeting summarization methods.
For example, on ICSI dataset, HMNet achieves 11.62 higher ROUGE-1 points, 2.60 higher ROUGE-2 points, and 6.66 higher ROUGE-SU4 points compared with the previous best result. Human evaluations further show that HMNet generates much better summaries than baseline methods. We then conduct ablation studies to verify the effectiveness of different components in our model. + +# 2 Problem Formulation + +We formalize the problem of meeting summarization as follows. The input consists of meeting transcripts $\mathcal{X}$ and meeting participants $\mathcal{P}$ . Suppose there are $s$ meetings in total. The tran + +scripts are $\mathcal{X} = \{X_1,\ldots ,X_s\}$ . Each meeting transcript consists of multiple turns, where each turn is the utterance of a participant. Thus, $X_{i} = \{(p_{1},u_{1}),(p_{2},u_{2}),\dots,(p_{L_{i}},u_{L_{i}})\}$ , where $p_j\in \mathcal{P},1\leq j\leq L_i$ , is a participant and $u_{j} = (w_{1},\dots,w_{l_{j}})$ is the tokenized utterance from $p_j$ . The human-labelled summary for meeting $X_{i}$ , denoted by $Y_{i}$ , is also a sequence of tokens. For simplicity, we will drop the meeting index subscript. So the goal of the system is to generate meeting summary $Y = (y_{1},\dots,y_{n})$ given the transcripts $X = \{(p_{1},u_{1}),(p_{2},u_{2}),\dots,(p_{m},u_{m})\}$ . + +# 3 Model + +Our hierarchical meeting summarization network (HMNet) is based on the encoder-decoder transformer structure (Vaswani et al., 2017), and its goal is to maximize the conditional probability of meeting summary $Y$ given transcript $X$ and network parameters $\theta$ : $P(Y|X;\theta)$ . + +# 3.1 Encoder + +# 3.1.1 Role Vector + +Meeting transcripts are recorded from various participants, who may have different semantic styles and viewpoints. Therefore, the model has to take the speaker's information into account while generating summaries. + +To incorporate the participants' information, we integrate the speaker role component. 
In the experiments, each meeting participant has a distinct role, e.g., program manager, industrial designer. For each role, we train a fixed-length vector $r_p$ , $1 \leq p \leq P$ , to represent it, where $P$ is the number of roles. Such distributed representations of a role/person have been shown to be useful for sentiment analysis (Chen et al., 2016). This vector is appended to the embedding of the speaker's turn (Section 3.1.2). According to the results in Section 4.5, the vectorized representation of speaker roles plays an important part in boosting the performance of summarization.

This idea can be extended if richer data is available in practice:

- If an organization chart of participants is available, we can add representations of the relationships between participants, e.g., manager and developers, to the network.
- If there is a pool of registered participants, each participant can have a personal vector which acts as a user portrait and evolves as more data about this user is collected.

# 3.1.2 Hierarchical Transformer

Transformer. Recall that a transformer block consists of a multi-head attention layer and a feed-forward layer, both followed by layer-norm with residuals: LayerNorm $(x + Layer(x))$ , where Layer can be the attention or feed-forward layer (Vaswani et al., 2017).

As the attention mechanism is position-agnostic, we append positional encoding to the input vectors:

$$
\mathrm {P E} _ {(i, 2 j)} = \sin (i / 1 0 0 0 0 ^ {\frac {2 j}{d}}) \tag {1}
$$

$$
\mathrm {P E} _ {(i, 2 j + 1)} = \cos (i / 1 0 0 0 0 ^ {\frac {2 j}{d}}), \tag {2}
$$

where $\mathrm{PE}_{(i,j)}$ stands for the $j$ -th dimension of the positional encoding for the $i$ -th word in the input sequence. We choose sinusoidal functions as they can extend to arbitrary input lengths during inference.

In summary, a transformer block on a sequence of $n$ input embeddings can generate $n$ output embeddings of the same dimension as the input.
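For concreteness, Eqs. (1)-(2) can be transcribed directly as follows, assuming an even dimension $d$:

```python
import math

def positional_encoding(i, d):
    """Sinusoidal encoding of position i into a d-dimensional vector
    (Eqs. (1)-(2)); d is assumed even."""
    pe = [0.0] * d
    for j in range(d // 2):
        angle = i / (10000 ** (2 * j / d))
        pe[2 * j] = math.sin(angle)       # even dimensions: sine
        pe[2 * j + 1] = math.cos(angle)   # odd dimensions: cosine
    return pe

pe0 = positional_encoding(0, 8)   # position 0: alternating sin(0)=0, cos(0)=1
pe5 = positional_encoding(5, 8)   # any position yields values in [-1, 1]
```

Because the encoding is a fixed function of the position rather than a learned table, it is defined for positions beyond those seen in training, which is the property exploited at inference time.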
Thus, multiple transformer blocks can be sequentially stacked to form a transformer network:

$$
\operatorname {T r a n s f o r m e r} \left(\left\{x _ {1}, \dots , x _ {n} \right\}\right) = \left\{y _ {1}, \dots , y _ {n} \right\} \tag {3}
$$

Long transcript problem. Because of its attention mechanism, the canonical transformer's computational complexity is quadratic in the input length. Thus, it struggles to handle very long sequences, e.g. 5,000 tokens. However, meeting transcripts are usually fairly long, consisting of thousands of tokens.

We note that meetings come with a natural multi-turn structure with a reasonable number of turns, e.g. 289 turns per meeting on average in the AMI dataset, and the number of tokens in a turn is much smaller than that in the whole meeting. Therefore, we employ a two-level transformer structure to encode the meeting transcript.

Word-level Transformer. The word-level transformer processes the token sequence of one turn in the meeting. We encode each token in one turn using a trainable embedding matrix $\mathcal{D}$ . Thus, the $j$ -th token in the $i$ -th turn, $w_{i,j}$ , is associated with a fixed-length vector $\mathcal{D}(w_{i,j}) = g_{i,j}$ . To incorporate syntactic and semantic information, we also train two embedding matrices to represent the part-of-speech (POS) and entity (ENT) tags. Therefore, the token $w_{i,j}$ is represented by the vector $x_{i,j} = [g_{i,j};\mathrm{POS}_{i,j};\mathrm{ENT}_{i,j}]$ . Note that we add a special token $w_{i,0} = [\mathrm{BOS}]$ before the sequence to represent the beginning of a turn.

![](images/4a8d6f7da4b966a79811e3d8487dcb30e39480de9a9cf8092289bfc9ebfca923.jpg)
Figure 1: Hierarchical Meeting summarization Network (HMNet) model structure. [BOS] is the special start token inserted before each turn, and its encoding is used in the turn-level transformer encoder. Other tokens' encodings enter the cross-attention module in the decoder.
Then, we denote the output of the word-level transformer as follows: Word-Transformer $\left(\left\{x_{i,0},\dots,x_{i,L_i}\right\}\right) =$ $\{x_{i,0}^{\mathcal{W}},\ldots ,x_{i,L_i}^{\mathcal{W}}\}$ .

Turn-level Transformer. The turn-level transformer processes the information of all $m$ turns in a meeting. To represent the $i$ -th turn, we employ the output embedding of the special token [BOS] from the word-level transformer, i.e. $x_{i,0}^{\mathcal{W}}$ . Furthermore, we concatenate it with the role vector of the speaker for this turn, $p_i$ . It follows that the output of the turn-level transformer is: $\mathrm{Turn - Transformer}(\{[x_{1,0}^{\mathcal{W}};p_1],\dots,[x_{m,0}^{\mathcal{W}};p_m]\}) = \{x_1^\mathcal{T},\dots,x_m^\mathcal{T}\}$ .

# 3.2 Decoder

The decoder is a transformer that generates the summary tokens. The input to the decoder transformer contains the $k - 1$ previously generated summary tokens $\hat{y}_1,\dots ,\hat{y}_{k - 1}$ . Each token is represented by a vector using the same embedding matrix $\mathcal{D}$ as the encoder, $\mathcal{D}(\hat{y}_i) = g_i$ .

The decoder transformer uses a lower triangular mask to prevent the model from attending to future tokens. Moreover, each transformer block includes two cross-attention layers. After self-attention, the embeddings first attend to the token-level outputs $\{x_{i,j}^{\mathcal{W}}\}_{i = 1,j = 1}^{m,L_i}$ , and then to the turn-level outputs $\{x_i^T\}_{i = 1}^m$ , each followed by layer-norm. This makes the model attend to different parts of the inputs with varying scales at each inference step.
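The data flow of the two-level encoder of Section 3.1.2, i.e. a word-level pass per turn whose [BOS] output, concatenated with the speaker's role vector, feeds the turn-level pass, can be sketched at the shape level as follows. Here `toy_transformer` is merely a shape-preserving stand-in for a real transformer stack, and all dimensions, names, and inputs are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d_word, d_role = 6, 2
_table = {}

def embed(w):
    # toy embedding lookup standing in for the trainable matrix D
    if w not in _table:
        _table[w] = rng.standard_normal(d_word)
    return _table[w]

def toy_transformer(x):
    # stand-in for a transformer stack: any shape-preserving seq-to-seq map
    return x + x.mean(axis=0, keepdims=True)

def encode_meeting(turns, roles, role_vecs):
    """Word-level pass per turn; the [BOS] output, concatenated with the
    speaker's role vector, feeds the turn-level pass."""
    token_outputs, turn_inputs = [], []
    for tokens, role in zip(turns, roles):
        x = np.stack([embed("[BOS]")] + [embed(w) for w in tokens])
        w_out = toy_transformer(x)                      # word-level transformer
        token_outputs.append(w_out[1:])                 # token encodings -> decoder
        turn_inputs.append(np.concatenate([w_out[0], role_vecs[role]]))
    turn_outputs = toy_transformer(np.stack(turn_inputs))  # turn-level transformer
    return token_outputs, turn_outputs

role_vecs = {"PM": np.ones(d_role), "ID": -np.ones(d_role)}
turns = [["we", "skip", "teletext"], ["the", "user", "presses", "a", "button"]]
tok_out, turn_out = encode_meeting(turns, ["PM", "ID"], role_vecs)
```

The decoder then cross-attends first to the per-turn token encodings (`tok_out`) and then to the turn encodings (`turn_out`), matching the two granularities described above.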
The output of the decoder transformer is denoted as: Decoder-Transformer $\left(\left\{g_{1},\dots,g_{k - 1}\right\}\right) =$ $\{v_{1},\ldots ,v_{k - 1}\}$ .

To predict the next token $\hat{y}_k$ , we reuse the weights of the embedding matrix $\mathcal{D}$ to decode $v_{k - 1}$ into a probability distribution over the vocabulary:

$$
P \left(\hat {y} _ {k} \mid \hat {y} _ {< k}, X\right) = \operatorname {s o f t m a x} \left(v _ {k - 1} \mathcal {D} ^ {T}\right) \tag {4}
$$

We illustrate the Hierarchical Meeting summarization Network (HMNet) in Fig. 1.

Training. During training, we seek to minimize the cross entropy:

$$
L (\theta) = - \frac {1}{n} \sum_ {k = 1} ^ {n} \log P \left(y _ {k} \mid y _ {< k}, X\right) \tag {5}
$$

We use teacher forcing in decoder training, i.e. the decoder takes ground-truth summary tokens as input.

Inference. During inference, we use beam search to select the best candidate. The search starts with the special token $\langle \mathrm{BEGIN} \rangle$ . We employ the commonly used trigram blocking (Paulus et al., 2018): during beam search, if a candidate word would create a trigram that already exists in the previously generated sequence of the beam, we forcibly set the word's probability to 0. Finally, we select the summary with the highest average log-likelihood per token.

# 3.3 Pretraining

As there is limited availability of meeting summarization data, we propose to utilize summary data from the news domain to pretrain HMNet. This warms up the model parameters on summarization tasks. However, the structure of news articles is very different from that of meeting transcripts. Therefore, we transform news articles into the meeting format.

We concatenate every $M$ news articles into an $M$ -person meeting, and treat each sentence as a single turn. The sentences from article $i$ are considered to be utterances from the $i$ -th speaker, named [Dataset- $i$ ].
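The article-to-meeting conversion just described can be sketched as follows; the dictionary keys, dataset name, and toy articles are illustrative, not the actual preprocessing code:

```python
import random

def articles_to_meeting(articles, dataset_name, seed=0):
    """Convert M news articles into one pseudo-meeting: every sentence of
    article i becomes a turn spoken by [dataset_name-i]; turns are shuffled
    to simulate a mixed speaker order, and the target summary is the
    concatenation of the M reference summaries."""
    turns = [(f"[{dataset_name}-{i + 1}]", sent)
             for i, art in enumerate(articles)
             for sent in art["sentences"]]
    random.Random(seed).shuffle(turns)
    summary = " ".join(art["summary"] for art in articles)
    return turns, summary

articles = [{"sentences": ["s1a", "s1b"], "summary": "sum1."},
            {"sentences": ["s2a"], "summary": "sum2."}]
turns, target = articles_to_meeting(articles, "XSum")
```

Each `(speaker, sentence)` pair then plays the role of one `(p_j, u_j)` turn in the formulation of Section 2.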
For instance, for each XSum meeting, the speakers' names are [XSum-1] to [XSum-$M$]. To simulate a real meeting scenario, we randomly shuffle all the turns in these pseudo meetings. The target summary is the concatenation of the $M$ summaries.

We pretrain the HMNet model on a large collection of news summary data (details in Section 4.1), and then finetune it on the real meeting summarization task.

# 4 Experiment

# 4.1 Datasets

We employ the widely used AMI (McCowan et al., 2005) and ICSI (Janin et al., 2003) meeting corpora. Both datasets contain meeting transcripts produced by automatic speech recognition (ASR). We follow Shang et al. (2018) and use the same train/development/test split: 100/17/20 for AMI and 43/10/6 for ICSI. Each meeting has an abstractive summary written by human annotators. Furthermore, each participant has an associated role, e.g. project manager or marketing expert2. Since there is only one speaker per role in each meeting and no other speaker identification information, we use a single role vector to model both speaker and role information simultaneously.

In AMI, a meeting transcript contains on average 4,757 words in 289 turns, and its summary contains 322 words. In ICSI, a transcript contains on average 10,189 words in 464 turns, and its summary contains 534 words. As the transcripts are produced by an ASR system, there is a word error rate of $36\%$ for AMI and $37\%$ for ICSI (Shang et al., 2018).

Pretraining is conducted on the news summarization datasets CNN/DailyMail (Hermann et al., 2015), NYT (Sandhaus, 2008) and XSum (Narayan et al., 2018), containing 312K, 104K and 227K article-summary pairs, respectively. We take the union of the three datasets for pretraining. We choose groups of $M = 4$ news articles to match the 4-speaker setting in the AMI dataset. These converted meetings contain on average 2,812 words in 128 turns, with 176 words in the summary.
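The article-to-meeting conversion described above can be sketched in a few lines. This is an illustrative sketch; the function and variable names are ours, not from the paper's code.

```python
import random

def articles_to_pseudo_meeting(articles, summaries, dataset="XSum", seed=0):
    """Convert M news articles into one M-speaker pseudo meeting.

    articles: list of M articles, each given as a list of sentences.
    summaries: list of the M corresponding summary strings.
    Each sentence becomes one turn spoken by role "[<dataset>-i]"; the turns
    are shuffled to simulate interleaved speakers, and the target summary is
    the concatenation of the M article summaries.
    """
    turns = [(f"[{dataset}-{i + 1}]", sentence)
             for i, article in enumerate(articles)
             for sentence in article]
    random.Random(seed).shuffle(turns)  # simulate a real multi-party meeting
    target = " ".join(summaries)
    return turns, target
```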
# 4.2 Baseline models

For comparison, we select a variety of baseline systems from the previous literature: the basic baselines Random (Riedhammer et al., 2008) and Copy from Train, which randomly copies a summary from the training set as the prediction3; the template-based method Template (Oya et al., 2014); the ranking systems TextRank (Mihalcea and Tarau, 2004) and ClusterRank (Garg et al., 2009); the unsupervised method UNS; the document summarization model PGNet4 (See et al., 2017); and the multi-modal model MM (Li et al., 2019).

In addition, we implement the baseline model Extractive Oracle, which concatenates the transcript sentences with the highest ROUGE-1 scores against the gold summary. The number of sentences is determined by the average length of the gold summary: 18 for AMI and 23 for ICSI.

# 4.3 Metrics

Following Shang et al. (2018), we employ the ROUGE-1, ROUGE-2 and ROUGE-SU4 metrics (Lin, 2004) to evaluate all meeting summarization models. These three metrics respectively evaluate the accuracy of unigrams, bigrams, and unigrams plus
| Model | AMI ROUGE-1 | AMI R-2 | AMI R-SU4 | ICSI ROUGE-1 | ICSI R-2 | ICSI R-SU4 |
| --- | --- | --- | --- | --- | --- | --- |
| Random | 35.13 | 6.26 | 13.17 | 29.28 | 3.78 | 10.29 |
| Template | 31.50 | 6.80 | 11.40 | / | / | / |
| TextRank | 35.25 | 6.9 | 13.62 | 29.7 | 4.09 | 10.64 |
| ClusterRank | 35.14 | 6.46 | 13.35 | 27.64 | 3.68 | 9.77 |
| UNS | 37.86 | 7.84 | 14.71 | 31.60 | 4.83 | 11.35 |
| Extractive Oracle | 39.49 | 9.65 | 13.20 | 34.66 | 8.00 | 10.49 |
| PGNet | 40.77 | 14.87 | 18.68 | 32.00 | 7.70 | 12.46 |
| Copy from Train | 43.24 | 12.15 | 14.01 | 34.65 | 5.55 | 10.65 |
| MM (TopicSeg+VFOA)* | 53.29 | 13.51 | / | / | / | / |
| MM (TopicSeg)* | 51.53 | 12.23 | / | / | / | / |
| HMNet | 53.02 | 18.57** | 24.85** | 46.28** | 10.60** | 19.12** |
Table 2: ROUGE-1, ROUGE-2 and ROUGE-SU4 scores of generated summaries on the AMI and ICSI datasets. Numbers in bold are the overall best results. * The two baseline MM models require additional human annotations of topic segmentation and visual signals from cameras. ** Results are statistically significant at level 0.05.

skip-bigrams with a maximum skip distance of 4. These metrics have been shown to correlate highly with human judgment (Lin, 2004).

# 4.4 Implementation Details

We employ spaCy (Honnibal and Johnson, 2015) as the word tokenizer and embed POS and NER tags into 16-dim vectors. The dimension of the role vector is 32.

All transformers have 6 layers and 8 attention heads. The dimension for each word is 512, and thus the input and output dimension $d_{model}$ of the transformers is 512 for the decoder, $512 + 16 + 16 = 544$ for the word-level transformer, and $512 + 16 + 16 + 32 = 576$ for the turn-level transformer. For all transformers, the inner layer always has dimensionality $d_{ff} = 4 \times d_{model}$. HMNet has 204M parameters in total. We use a dropout probability of 0.1 on all layers.

We pretrain HMNet on news summarization data using the RAdam optimizer (Liu et al., 2020) with $\beta_{1} = 0.9$, $\beta_{2} = 0.999$. The initial learning rate is set to $1e-9$ and linearly increased to 0.001 over 16,000 warmup steps. For finetuning on the meeting data, the optimization setup is the same except that the initial learning rate is set to 0.0001. We use gradient clipping with a maximum norm of 2 and 16 gradient accumulation steps.

# 4.5 Results

Table 2 shows the ROUGE scores of generated summaries on the AMI and ICSI datasets. As shown, except for ROUGE-1 on AMI, HMNet outperforms all baseline models in all metrics, and the result is statistically significant at level 0.05 under a paired t-test against the best baseline results. On the ICSI dataset, HMNet achieves 11.62, 2.60 and 6.66 higher ROUGE points than the previous best results.
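The significance test above can be reproduced from per-meeting scores with a paired t statistic. This is a hedged pure-Python sketch of the standard formula; comparing the statistic against a t-distribution critical value (df = n - 1) is left out, and the function name is ours.

```python
import math

def paired_t_statistic(scores_a, scores_b):
    """Paired t statistic between two systems' per-meeting ROUGE scores.

    scores_a, scores_b: equal-length score lists over the same meetings.
    Returns t = mean(d) / sqrt(var(d) / n) for the paired differences d,
    where var is the unbiased sample variance.
    """
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)
```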
Note that MM is a multi-modal model which requires human annotations of topic segmentation (TopicSeg) and visual focus of attention (VFOA) collected from cameras, which are rarely available in practice. In comparison, our model HMNet is based entirely on transcripts from ASR pipelines. Still, on the AMI dataset, HMNet outperforms MM (TopicSeg) by 1.49 points in ROUGE-1 and 6.34 points in ROUGE-2, and is higher than MM (TopicSeg+VFOA) by 5.06 points in ROUGE-2.

Moreover, HMNet significantly outperforms the document summarization model PGNet, indicating that traditional summarization models must be carefully adapted to meeting scenarios. HMNet also compares favorably to the extractive oracle, showing that human summaries of meetings are abstractive rather than extractive.

It is worth noting that Copy from Train obtains a surprisingly good result on both AMI and ICSI, higher than most baselines including PGNet. The reason is that the meetings in AMI and ICSI are not isolated events; instead, they form a series of related discussions on the same project. Thus, many project keywords appear in multiple meetings and their summaries. This also explains the relatively high ROUGE scores in the evaluation. However, HMNet can focus on salient information and as a
result, achieves a considerably higher score than the Copy from Train baseline.

Ablation Study. Table 3 shows the ablation study of HMNet on the test sets of AMI and ICSI. As shown, pretraining on news summarization data increases ROUGE-1 by 4.3 points on AMI and 4.0 points on ICSI. When the role vector is removed, the ROUGE-1 score drops 5.2 points on AMI and 2.3 points on ICSI. When HMNet is without the hierarchical structure, i.e. the turn-level transformer is removed and role vectors are appended to word-level embeddings, the ROUGE-1 score drops as much as 7.9 points on AMI and 5.3 points on ICSI. Thus, all of the components we propose play an important role in the summarization capability of HMNet.

| Model | ROUGE-1 | R-2 | R-SU4 |
| --- | --- | --- | --- |
| AMI | | | |
| HMNet | 53.0 | 18.6 | 24.9 |
| -pretrain | 48.7 | 18.4 | 23.5 |
| -role vector | 47.8 | 17.2 | 21.7 |
| -hierarchy | 45.1 | 15.9 | 20.5 |
| ICSI | | | |
| HMNet | 46.3 | 10.6 | 19.1 |
| -pretrain | 42.3 | 10.6 | 17.8 |
| -role vector | 44.0 | 9.6 | 18.2 |
| -hierarchy | 41.0 | 9.3 | 16.8 |

Table 3: Ablation study of HMNet.

# 4.6 Human Evaluation

We conduct a human evaluation of the meeting summaries to assess their readability and relevance. Readability measures how fluent the summary language is, including the word and grammatical error rate. Relevance measures how well the summary sums up the main ideas of the meeting.

As the MM model (Li et al., 2019) does not make its summary text or trained model available, we compare the results of HMNet and UNS (Shang et al., 2018). For each meeting in the test sets of AMI and ICSI, we have 5 human evaluators from Amazon Mechanical Turk label the summaries from HMNet and UNS. We choose labelers with a high approval rating $(>98\%)$ to increase the credibility of the results.

Each annotator is presented with the meeting transcript and the summaries. The annotator gives a score from 1 to 5 (higher is better) for readability (whether the summary consists of fluent and coherent sentences and is easy to understand) and likewise for relevance (whether the summary contains important information from the meeting). The annotators need to read both the meeting transcript and the summary to give evaluations. To reduce bias, for each meeting, the two versions of the summaries are randomly ordered.

| Dataset | Metric | HMNet | UNS |
| --- | --- | --- | --- |
| AMI | Readability | 4.17 (.38) | 2.19 (.57) |
| AMI | Relevance | 4.08 (.45) | 2.47 (.67) |
| ICSI | Readability | 4.24 (.20) | 2.08 (.20) |
| ICSI | Relevance | 4.02 (.55) | 1.75 (.61) |

Table 4: Average scores (1-5) of readability and relevance of summaries on AMI and ICSI's test sets. Each summary is judged by 5 human evaluators. Standard deviation is shown in parentheses.

Table 4 shows that HMNet achieves much higher scores in both readability and relevance than UNS on both datasets. Moreover, the scores for HMNet are all above 4.0, indicating that it can generate both readable and highly relevant meeting summaries.

# 5 Insights

# 5.1 How abstractive is our model?

An abstractive system can be innovative by using words in the summary that do not come from the transcript. Similar to See et al. (2017), we measure the abstractiveness of a summarization model via the ratio of novel words or phrases in the summary. A higher ratio could indicate a more abstractive system.

![](images/451aa0a693f710d42035509dc8495dfeb85880618c8ca484c91bd8d9340a4c69.jpg)
Figure 2: Percentage of novel n-grams in the reference and the summaries generated by HMNet and UNS (Shang et al., 2018) on AMI's test set.

Fig. 2 displays the percentage of novel n-grams, i.e. those that do not appear in the meeting transcript, in the summaries from the reference, HMNet, and UNS. As shown, both the reference and HMNet summaries have a large portion of novel n-grams $(n > 1)$. Almost no 4-grams are copied from the transcript. In contrast, UNS has a much lower ratio of novel n-grams, because it generates a summary mainly from the original word sequences in the transcript.

# 5.2 Error Analysis

We qualitatively examine the outputs of HMNet and summarize two major types of errors:

1. Due to the length of meeting transcripts, the system sometimes summarizes salient information from parts of the meeting different from those covered by the reference summaries.
2. Our system sometimes summarizes meetings at a high level (e.g. topics, decisions) and does not cover all detailed items as in the reference.
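The novel n-gram statistic from Section 5.1 can be computed as follows. This is an illustrative sketch under our own naming; tokenization details are our assumption.

```python
def novel_ngram_percentage(summary_tokens, transcript_tokens, n):
    """Percentage of summary n-grams that do not appear in the transcript."""
    def ngrams(tokens):
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    summary_ngrams = ngrams(summary_tokens)
    if not summary_ngrams:
        return 0.0
    transcript_set = set(ngrams(transcript_tokens))
    novel = sum(1 for g in summary_ngrams if g not in transcript_set)
    return 100.0 * novel / len(summary_ngrams)
```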
+ +# 6 Related Work + +Meeting Summarization. There are a number of studies on generating summaries for meetings and dialogues (Zhao et al., 2019; Liu and Chen, 2019; Chen and Metze, 2012; Liu et al., 2019b,a). Mehdad et al. (2013) uses utterance clustering, an entailment graph, a semantic word graph and a ranking strategy to construct meeting summaries. Murray et al. (2010) and Wang and Cardie (2013) focus on various aspects of meetings such as decisions and action items. Oya et al. (2014) employs multi-sentence fusion to construct summarization templates for meetings, leading to summaries with higher readability and informativeness. Recently, Shang et al. (2018) leverages a multisentence compression graph and budgeted submodular maximization to generate meeting summaries. In general, these multi-step methods make joint optimization intractable. Li et al. (2019) proposes an encoder-decoder structure for end-to-end multimodal meeting summarization, but it depends on manual annotation of topic segmentation and visual focus, which may not be available in practice. In comparison, our model only requires meeting transcripts directly from speech recognition. + +Document Summarization. Rush et al. (2015) first introduces an attention-based seq2seq (Sutskever et al., 2014) model to the abstractive sentence summarization task. However, the quality of the generated multi-sentence summaries for long documents is often low, and out of vocabulary (OOV) words cannot be efficiently handled. To tackle these challenges, See et al. (2017) proposes + +a pointer-generator network that can both produce words from the vocabulary via a generator and copy words from the source text via a pointer. Paulus et al. (2018) further adds reinforcement learning to improve the result. Gehrmann et al. 
(2018) uses a content selector to over-determine phrases in source documents, which helps constrain the model to likely phrases, and achieves state-of-the-art results on several document summarization datasets. Recently, several works using large-scale pretrained language models for summarization have been proposed and achieve very good performance (Liu, 2019; Zhu et al., 2019; Raffel et al., 2019; Lewis et al., 2019; Zhang et al., 2019).

Hierarchical Neural Architecture. As a variety of NLP data (e.g., conversations, documents) has an internal hierarchical structure, many works have applied hierarchical structures to NLP tasks. Li et al. (2015) proposes a hierarchical neural auto-encoder for paragraph and document reconstruction. It applies two levels of RNNs: one over the tokens within each sentence and the other over all sentences. Lin et al. (2015) applies a hierarchical RNN language model (HRNNLM) to document modeling, which similarly encodes token-level and turn-level information for better language modeling performance. Serban et al. (2016) puts forward a hierarchical recurrent encoder-decoder network (HRED) to model open-domain dialogue systems and generate system responses given the previous context. Nallapati et al. (2016) proposes a hierarchical attention mechanism over word-level and turn-level information in the encoder-decoder structure for abstractive document summarization.

# 7 Conclusion

In this paper, we present an end-to-end hierarchical neural network, HMNet, for abstractive meeting summarization. We employ a two-level hierarchical structure to adapt to long meeting transcripts, and a role vector to represent each participant. We also alleviate the data scarcity problem by pretraining on news summarization data. Experiments show that HMNet achieves state-of-the-art performance in both automatic metrics and human evaluation.
Through an ablation study, we show that the role vector, hierarchical architecture, and pretraining all contribute to the model's performance.

For future work, we plan to utilize organizational charts, knowledge graphs, and topic modeling to generate better meeting summaries that better capture salient information from the transcript.

# Acknowledgement

We thank William Hinthorn for proof-reading this paper. We thank the anonymous reviewers for their valuable comments.

# References

Tao Chen, Ruifeng Xu, Yulan He, Yunqing Xia, and Xuan Wang. 2016. Learning user and product distributed representations using a sequence model for sentiment analysis. IEEE Computational Intelligence Magazine, 11(3):34-44.
Yun-Nung Chen and Florian Metze. 2012. Integrating intra-speaker topic modeling and temporal-based inter-speaker topic modeling in random walk for improved multi-party meeting summarization. In Thirteenth Annual Conference of the International Speech Communication Association.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Nikhil Garg, Benoit Favre, Korbinian Riedhammer, and Dilek Hakkani-Tür. 2009. ClusterRank: A graph based method for meeting summarization. Tenth Annual Conference of the International Speech Communication Association.
Sebastian Gehrmann, Yuntian Deng, and Alexander M Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098-4109.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. Advances in Neural Information Processing Systems, pages 1693-1701.
Matthew Honnibal and Mark Johnson. 2015. An improved non-monotonic transition system for dependency parsing.
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing., pages 1373-1378. +Adam Janin, Don Baron, Jane Edwards, Dan Ellis, David Gelbart, Nelson Morgan, Barbara Peskin, Thilo Pfau, Elizabeth Shriberg, Andreas Stolcke, et al. 2003. The icsi meeting corpus. 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings.(ICASSP'03), 1:I-I. +Yichen Jiang and Mohit Bansal. 2018. Closed-book training to improve summarization encoder memory. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4067-4077. + +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. +Jiwei Li, Minh-Thang Luong, and Dan Jurafsky. 2015. A hierarchical neural autoencoder for paragraphs and documents. arXiv preprint arXiv:1506.01057. +Manling Li, Lingyu Zhang, Heng Ji, and Richard J Radke. 2019. Keep meeting summaries on topic: Abstractive multi-modal meeting summarization. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics., pages 2190-2196. +Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out.. +Rui Lin, Shujie Liu, Muyun Yang, Mu Li, Ming Zhou, and Sheng Li. 2015. Hierarchical recurrent neural network for document modeling. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing., pages 899-907. +Chunyi Liu, Peng Wang, Jiang Xu, Zang Li, and Jieping Ye. 2019a. Automatic dialogue summary generation for customer service. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1957-1965. +Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. 2020. 
On the variance of the adaptive learning rate and beyond. In International Conference on Learning Representations. +Yang Liu. 2019. Fine-tune bert for extractive summarization. arXiv preprint arXiv:1903.10318. +Zhengyuan Liu and Nancy Chen. 2019. Reading turn by turn: Hierarchical attention architecture for spoken dialogue comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5460-5466, Florence, Italy. Association for Computational Linguistics. +Zhengyuan Liu, Angela Ng, Sheldon Lee, Ai Ti Aw, and Nancy F Chen. 2019b. Topic-aware pointer-generator networks for summarizing spoken conversations. arXiv preprint arXiv:1910.01335. +Iain McCowan, Jean Carletta, Wessel Kraaij, Simone Ashby, S Bourban, M Flynn, M Guillemot, Thomas Hain, J Kadlec, Vasilis Karaiskos, et al. 2005. The ami meeting corpus. Proceedings of the 5th International Conference on Methods and Techniques in Behavioral Research., 88:100. +Yashar Mehdad, Giuseppe Carenini, Frank Tompa, et al. 2013. Abstractive meeting summarization with + +entailment and fusion. Proceedings of the 14th European Workshop on Natural Language Generation., pages 136-146. +Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. Proceedings of the 2004 conference on empirical methods in natural language processing.. +Gabriel Murray, Giuseppe Carenini, and Raymond Ng. 2010. Generating and validating abstracts of meeting conversations: a user study. Proceedings of the 6th International Natural Language Generation Conference., pages 105-113. +Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Căglar Gulçehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280-290. +Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! 
topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797-1807. +Tatsuro Oya, Yashar Mehdad, Giuseppe Carenini, and Raymond Ng. 2014. A template-based abstractive meeting summarization: Leveraging summary and source text relationships. Proceedings of the 8th International Natural Language Generation Conference (INLG), pages 45-53. +Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In International Conference on Learning Representations. +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. +Korbinian Riedhammer, Dan Gillick, Benoit Favre, and Dilek Hakkani-Tür. 2008. Packing the meeting summarization knapsack. Ninth Annual Conference of the International Speech Communication Association.. +Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379-389. +Evan Sandhaus. 2008. The new york times annotated corpus. Linguistic Data Consortium, Philadelphia, 6(12):e26752. +Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational + +Linguistics (Volume 1: Long Papers), pages 1073-1083. +Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. Thirtieth AAAI Conference on Artificial Intelligence.. 
+Guokan Shang, Wensi Ding, Zekun Zhang, Antoine Tixier, Polykarpos Meladianos, Michalis Vazirgiannis, and Jean-Pierre Lorré. 2018. Unsupervised abstractive meeting summarization with multisentence compression and budgeted submodular maximization. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 664-674. +Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. Advances in neural information processing systems., pages 3104-3112. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems., pages 5998-6008. +Lu Wang and Claire Cardie. 2013. Domain-independent abstract generation for focused meeting summarization. Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1:1395-1405. +Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J Liu. 2019. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. arXiv preprint arXiv:1912.08777. +Zhou Zhao, Haojie Pan, Changjie Fan, Yan Liu, Linlin Li, Min Yang, and Deng Cai. 2019. Abstractive meeting summarization via hierarchical adaptive segmental network learning. In The World Wide Web Conference, pages 3455-3461. +Chenguang Zhu, Ziyi Yang, Robert Gmyr, Michael Zeng, and Xuedong Huang. 2019. Make lead bias in your favor: A simple and effective method for news summarization. arXiv preprint arXiv:1912.11602. 
\ No newline at end of file diff --git a/ahierarchicalnetworkforabstractivemeetingsummarizationwithcrossdomainpretraining/images.zip b/ahierarchicalnetworkforabstractivemeetingsummarizationwithcrossdomainpretraining/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..478c5b9bef48432dcfd6c676d8b4133e7a58c7ae --- /dev/null +++ b/ahierarchicalnetworkforabstractivemeetingsummarizationwithcrossdomainpretraining/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c2105ad45d7ef19e554382e7372319b641181dec5cd202f181630189d4b32460 +size 272116 diff --git a/ahierarchicalnetworkforabstractivemeetingsummarizationwithcrossdomainpretraining/layout.json b/ahierarchicalnetworkforabstractivemeetingsummarizationwithcrossdomainpretraining/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..e22ab6981195c710b65bcd91d8e51b1f7ff6b2b2 --- /dev/null +++ b/ahierarchicalnetworkforabstractivemeetingsummarizationwithcrossdomainpretraining/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:533dda1c966b0fe8e488c5b58255c38dd90cbf433ca0f6bbe6a704690dbf348e +size 345968 diff --git a/aknowledgedrivenapproachtoclassifyingobjectandattributecoreferencesinopinionmining/953ece2f-869b-45d9-9ee6-d40893bba61c_content_list.json b/aknowledgedrivenapproachtoclassifyingobjectandattributecoreferencesinopinionmining/953ece2f-869b-45d9-9ee6-d40893bba61c_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..5595dcaa82396277e527a417916ece473cdfded1 --- /dev/null +++ b/aknowledgedrivenapproachtoclassifyingobjectandattributecoreferencesinopinionmining/953ece2f-869b-45d9-9ee6-d40893bba61c_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:510fb940f2d472fa2200e16e966172180024fa5c27d386e322aef598842c80a6 +size 79471 diff --git 
a/aknowledgedrivenapproachtoclassifyingobjectandattributecoreferencesinopinionmining/953ece2f-869b-45d9-9ee6-d40893bba61c_model.json b/aknowledgedrivenapproachtoclassifyingobjectandattributecoreferencesinopinionmining/953ece2f-869b-45d9-9ee6-d40893bba61c_model.json new file mode 100644 index 0000000000000000000000000000000000000000..4472aedc2de6e0dfd8c2e88ef64cd4eae6d217d2 --- /dev/null +++ b/aknowledgedrivenapproachtoclassifyingobjectandattributecoreferencesinopinionmining/953ece2f-869b-45d9-9ee6-d40893bba61c_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:44d34c6360b855053fda086f5bf90f5a35edce3967a01ec3bb9fc092d9216b50 +size 99042 diff --git a/aknowledgedrivenapproachtoclassifyingobjectandattributecoreferencesinopinionmining/953ece2f-869b-45d9-9ee6-d40893bba61c_origin.pdf b/aknowledgedrivenapproachtoclassifyingobjectandattributecoreferencesinopinionmining/953ece2f-869b-45d9-9ee6-d40893bba61c_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6798a684c00ef5fbb55a7c2511fe725083e26153 --- /dev/null +++ b/aknowledgedrivenapproachtoclassifyingobjectandattributecoreferencesinopinionmining/953ece2f-869b-45d9-9ee6-d40893bba61c_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d0d0ec4ff3ae6c5f0a1c1b302e079b091203e1fb675f774de3c20501a49afa45 +size 459605 diff --git a/aknowledgedrivenapproachtoclassifyingobjectandattributecoreferencesinopinionmining/full.md b/aknowledgedrivenapproachtoclassifyingobjectandattributecoreferencesinopinionmining/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c6baa8e175eb8b2005a2c08f19d08285a39fd0ac --- /dev/null +++ b/aknowledgedrivenapproachtoclassifyingobjectandattributecoreferencesinopinionmining/full.md @@ -0,0 +1,364 @@ +# A Knowledge-Driven Approach to Classifying Object and Attribute Coreferences in Opinion Mining + +Jiahua Chen, Shuai Wang, Sahisnu Mazumder, Bing Liu + +Department of Computer Science, University 
of Illinois at Chicago, USA + +jiahuaqy@gmail.com, shuaiwanghk@gmail.com + +sahisnumazumder@gmail.com, liub@uic.edu + +# Abstract + +Classifying and resolving coreferences of objects (e.g., product names) and attributes (e.g., product aspects) in opinionated reviews is crucial for improving the opinion mining performance. However, the task is challenging as one often needs to consider domain-specific knowledge (e.g., iPad is a tablet and has aspect resolution) to identify coreferences in opinionated reviews. Also, compiling a handcrafted and curated domain-specific knowledge base for each domain is very time consuming and arduous. This paper proposes an approach to automatically mine and leverage domain-specific knowledge for classifying objects and attribute coreferences. The approach extracts domain-specific knowledge from unlabeled review data and trains a knowledge-aware neural coreference classification model to leverage (useful) domain knowledge together with general commonsense knowledge for the task. Experimental evaluation on real-world datasets involving five domains (product types) shows the effectiveness of the approach. + +# 1 Introduction + +Coreference resolution (CR) aims to determine whether two mentions (linguistic referring expressions) corefer or not, i.e., they refer to the same entity in the discourse model (Jurafsky, 2000; Ding and Liu, 2010; Atkinson et al., 2015; Lee et al., 2017, 2018; Joshi et al., 2019; Zhang et al., 2019b). The set of coreferring expressions forms a coreference chain or a cluster. Let's have an example: + +[S1] I bought a green Moonbeam for myself. [S2] I like its voice because it is loud and long. + +Here all colored and/or underlined phrases are mentions. Considering S1 (sentence-1) and S2 (sentence-2), the three mentions "I", "myself" in + +S1 and “ $I$ ” in S2 all refer to the same person and form a cluster. 
Similarly, "its" in S2 refers to the object "a green Moonbeam" in S1, and the cluster is {"its" (S2), "a green Moonbeam" (S1)}. The mentions "its voice" and "it" in S2 refer to the same attribute of the object "a green Moonbeam" in S1 and form the cluster {"its voice" (S2), "it" (S2)}.

CR is beneficial for improving many downstream NLP tasks such as question answering (Dasigi et al., 2019), dialog systems (Quan et al., 2019), entity linking (Kundu et al.), and opinion mining (Nicolov et al., 2008). Particularly, in opinion mining tasks (Liu, 2012; Wang et al., 2016; Zhang et al., 2018; Ma et al., 2020), Nicolov et al. (2008) reported that performance improves by $10\%$ when CR is used. The study by Ding and Liu (2010) also supports this finding. Considering the aforementioned example, without resolving "it" in S2, it is difficult to infer the opinion about the attribute "voice" (i.e., that the voice, which "it" refers to, is "loud and long"). Although CR plays such a crucial role in opinion mining, only limited research has been done on CR for opinionated reviews. CR in opinionated reviews (e.g., Amazon product reviews) mainly concerns resolving coreferences involving objects and their attributes. The objects in reviews are usually the names of products or services, while attributes are aspects of those objects (Liu, 2012).

Resolving coreferences in text broadly involves performing three tasks (although they are often performed jointly or via end-to-end learning): (1) identifying the list of mentions in the text (known as mention detection); (2) given a pair of candidate mentions in the text, making a binary classification decision: coreferring or not (referred to as coreference classification); and (3) grouping coreferring mentions (referring to the same discourse entity) to form a coreference chain (known as clustering).
In reviews, mention detection is equivalent to extracting entities and aspects in reviews, which has been widely studied in opinion mining and sentiment analysis (Hu and Liu, 2004; Qiu et al., 2011; Xu et al., 2019; Luo et al., 2019; Wang et al., 2018; Dragoni et al., 2019; Asghar et al., 2019). Also, once the coreferring mentions are detected via classification, clustering them could be straightforward1. Thus, following (Ding and Liu, 2010), we only focus on solving the coreference classification task in this work, which we refer to as the object and attribute coreference classification (OAC2) task onwards. We formulate the OAC2 problem as follows.

Problem Statement. Given a review text $u$ (context), an anaphor $^2$ $p$ and a mention $m$ which refers to either an object or an attribute (including their position information), our goal is to predict whether the anaphor $p$ refers to the mention $m$, denoted by a binary class $y \in \{0,1\}$. Note: an anaphor here can be a pronoun (e.g., "it"), a definite noun phrase (e.g., "the clock"), or an ordinal (e.g., "the green one").

In general, classifying coreferences requires intensive knowledge support. For example, to determine that "it" refers to "its voice" in S2, we need to know that "voice" can be described as "loud and long" and that "it" cannot refer to "a green Moonbeam" in S1, since "Moonbeam" is a clock, which cannot be described as "long".

Product reviews contain a great many such domain-specific concepts like brands (e.g., "Apple" in the laptop domain), product names (e.g., "T490" in the computer domain), and aspects (e.g., "hand" in the alarm clock domain) that often do not exist in general knowledge bases (KBs) like WordNet (Miller, 1998), ConceptNet (Speer and Havasi, 2013), etc. Moreover, even if a concept exists in a general KB, its semantics may be different from that in a given product domain.
For example, "Moonbeam" in a general KB is understood as "the light of the moon" or the name of a song, rather than a clock (in the alarm clock domain). To encode such domain-specific concepts, we need to mine and feed domain knowledge (e.g., "clock" for "Moonbeam", "laptop" for "T490") to a coreference classification model. Existing CR methods (Zhang et al., + +2019b) do not leverage such domain knowledge and thus, often fail to resolve such co-references that require explicit reasoning over domain facts. + +In this paper, we propose to automatically mine such domain-specific knowledge from unlabeled reviews and leverage the useful pieces of the extracted domain knowledge together with the (general/comensense) knowledge from general KBs to solve the OAC2 task $^3$ . Note the extracted domain knowledge and the general knowledge from the existing general KBs are both considered as candidate knowledge. To leverage such knowledge, we design a novel knowledge-aware neural coreference classification model that selects the useful (candidate) knowledge with attention mechanism. We discuss our approach in details in Section 3. + +The main contributions of this work can be summarized: + +1. We propose a knowledge-driven approach to solving OAC2 in opinionated reviews. Unlike existing approaches that mostly dealt with general CR corpus and pronoun resolution, we show the importance of leveraging domain-specific knowledge for OAC2. +2. We propose a method to automatically mine domain-specific knowledge and design a novel knowledge-aware coreference classification model that leverages both domain-specific and general knowledge. +3. We collect a new review dataset4 with five domains or product types (including both unlabeled and labeled data) for evaluation. Experimental results show the effectiveness of our approach. + +# 2 Related Work + +Coreference resolution has been a long-studied problem in NLP. 
Early approaches were mainly rule-based (Hobbs, 1978) and feature-based (Ding and Liu, 2010; Atkinson et al., 2015), where researchers focused on leveraging lexical and grammatical properties and semantic information. Recently, end-to-end solutions with deep neural models (Lee et al., 2017, 2018; Joshi et al., 2019) have dominated coreference resolution research, but they did not use external knowledge.

Considering CR approaches that use external knowledge, Aralikatte et al. (2019) solved the CR task by incorporating knowledge or information into reinforcement learning models. Emami et al. (2018) solved the binary-choice coreference resolution task by leveraging information retrieval results from search engines. Zhang et al. (2019a,b) solved pronoun coreference resolution by leveraging contextual and linguistic features and external knowledge, where knowledge attention was utilized. However, these works did not deal with opinionated reviews and did not mine or use domain-driven knowledge.

Regarding CR in opinion mining, Ding and Liu (2010) formally introduced the OAC2 task for opinionated reviews, which is perhaps the only prior study on this problem. However, it only focused on classifying coreferences in comparative sentences (not in all review sentences). We compare our approach with (Ding and Liu, 2010) in Section 4.

Many existing general-purpose CR datasets are not suitable for our task, including MUC-6 and MUC-7 (Hirschman and Chinchor, 1998), ACE (Doddington et al., 2004), OntoNotes (Pradhan et al., 2012), and WikiCoref (Ghaddar and Langlais, 2016). Bailey et al. (2015) proposed an alternative Turing test comprising a binary-choice CR task that requires significant commonsense knowledge. Yu et al. (2019) proposed visual pronoun coreference resolution in dialogues, which requires the model to incorporate image information. These datasets are also not suitable for us as they are not opinionated reviews.
We do not focus on solving pronoun resolution here because, for opinion text such as reviews, discussions and blogs, personal pronouns mostly refer to one person (Ding and Liu, 2010). Also, we aim to leverage domain-specific knowledge on (unlabeled) domain-specific reviews to help the CR task which has not been studied by any of these existing CR works. + +# 3 Proposed Approach + +Model Overview. Our approach consists of the following three main steps: (1) knowledge acquisition, where given the (input) pair of mention $m$ (e.g., "a green Moonbeam") and anaphor $p$ (e.g., "it") and the context $t$ (i.e., the review text), we acquire candidate knowledge involving $m$ , denoted + +Table 1: Summary of notations (non-exhaustive list) + +
| Notation | Description |
|---|---|
| $d$ | a domain |
| $t$ | a review text or context |
| $m$ | a mention |
| $p$ | an anaphor |
| $K_m$ | (domain + general) knowledge involving $m$ for domain $d$ |
| $K_m^d$ | domain knowledge involving $m$ for $d$ |
| $S_m$ | syntax-related phrases of $m$ |
| $S_p$ | syntax-related phrases of $p$ |
| $T_d$ | labeled reviews in $d$ |
| $\overline{T}_d$ | unlabeled reviews in $d$ |
as $K_{m}$. $K_{m}$ consists of both domain knowledge (mined from unlabeled reviews) as well as general knowledge (compiled from existing general KBs) (discussed in Section 3.1). Next, in (2) syntax-based span representation, we extract syntax-related phrases for mention $m$ and anaphor $p$. Syntax-related phrases are basically noun phrases, verbs, or adjectives that have a dependency relation with $m$ (or $p$). For example, "bought" is a syntax-related phrase of the mention "a green Moonbeam", and "like" and "voice" are two syntax-related phrases for the anaphor "it" in the example review text in Section 1. Once the syntax-related phrases are extracted and the candidate knowledge is prepared for $m$ and $p$, we learn vector representations of the phrases and the knowledge (discussed in Section 3.2), which are used in step 3. Finally, in (3) knowledge-driven OAC2 model, we select and leverage useful candidate domain knowledge together with general knowledge to solve the OAC2 task. Figure 1 shows our model architecture. Table 1 summarizes a (non-exhaustive) list of notations used repeatedly in subsequent sections.

# 3.1 Knowledge Acquisition

Domain Knowledge Mining. Given the mention $m$, we first split the mention into words. Here, we only keep the words that satisfy one of the following two conditions: (1) the word is a noun (determined by its POS tag); (2) the word is part of a named entity (by NER). For example, "a westclox clock" will result in the words "westclox" and "clock". We use the mention words as keys to search a domain knowledge base (KB) to retrieve domain knowledge for the mention $m$.

![](images/e9f21f187204950521febbcf3c7b9173354d0c141ff4a56fab4d1aadfd463b45.jpg)
Figure 1: The architecture of our knowledge-driven OAC2 model.

To construct the domain KB, we use unlabeled review data in the particular domain. Specifically, all unlabeled sentences that contain mention words are extracted.
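The mining step just described, together with the tf-idf filtering that Section 3.1 introduces next, can be sketched as follows. This is a minimal sketch under assumptions: the `(word, POS)` input format and the helper names are hypothetical, and a real pipeline would take POS tags and NER spans from a tagger such as spaCy.

```python
import math
from collections import Counter

def mention_keywords(tagged_mention):
    # Keep nouns; the full method also keeps words inside named entities.
    return [w for w, pos in tagged_mention if pos == 'NOUN']

def mine_domain_knowledge(keywords, tagged_sents):
    # Count noun/adjective/verb words co-occurring with the mention words
    # in unlabeled sentences (the candidate set K_m^d with counts C_k).
    cooc = Counter()
    for sent in tagged_sents:
        words = [w for w, _ in sent]
        if any(k in words for k in keywords):
            cooc.update(w for w, pos in sent
                        if pos in ('NOUN', 'ADJ', 'VERB') and w not in keywords)
    return cooc

def tfidf_filter(cooc, tagged_sents, rho):
    # The paper's tf-idf scoring: tf from co-occurrence counts, idf from
    # document counts; keep a phrase only if its score reaches threshold rho.
    max_c = max(cooc.values())
    kept = {}
    for k, c in cooc.items():
        df = sum(1 for sent in tagged_sents if k in [w for w, _ in sent])
        score = (c / max_c) * math.log(len(tagged_sents) / df)
        if score >= rho:
            kept[k] = score
    return kept
```

With a threshold such as the paper's empirically chosen $\rho = 5.0$, only phrases that co-occur often with the mention yet are rare overall survive; a word appearing in every review gets idf 0 and is always filtered out.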
Next, we collect domain knowledge for $m$ as $K_{m}^{d}$, where $K_{m}^{d} = \{k_{m,1}^{d}, k_{m,2}^{d}, \ldots\}$. The elements of $K_{m}^{d}$ are noun, adjective, and verb phrases co-occurring with $m$ in the unlabeled review sentences.

Domain Knowledge Filtering. Some domain knowledge (i.e., some co-occurring phrases) can be too general to help reason over the mention. For example, given the mention "Moonbeam", the verb "like" can relate to any object or attribute and is thus not very useful knowledge for describing the mention. To filter such unimportant phrases from $K_{m}^{d}$, we use tf-idf (Aizawa, 2003) scoring.

Given mention $m$ and a phrase $k \in K_m^d$, we compute the tf-idf score of $k$, denoted as $tf\text{-}idf_k$, as follows:

$$
tf_{k} = \frac{C_{k}}{\max_{k' \in K_{m}^{d}} C_{k'}} \tag{1}
$$

$$
idf_{k} = \log \frac{|\overline{T}_{d}|}{|\{t' \in \overline{T}_{d} : k \in t'\}|} \tag{2}
$$

$$
tf\text{-}idf_{k} = tf_{k} \cdot idf_{k} \tag{3}
$$

where $C_k$ denotes the co-occurrence count of phrase $k$ with $m$ in the unlabeled domain reviews $\overline{T}_d$ and $|\cdot|$ denotes set cardinality. We retain phrase $k$ in $K_m^d$ if $tf\text{-}idf_k \geq \rho$, where $\rho$ is an (empirically set) threshold value.

General Knowledge Acquisition. General knowledge bases like ConceptNet, WordNet, etc., store facts as triples of the form $(e_1, r, e_2)$, denoting that entity $e_1$ is related to entity $e_2$ by a relation $r$, e.g., ("clock", "UsedFor", "set an alarm").

To acquire and use general knowledge for mention $m$, we first split $m$ into words (in the same way as during domain knowledge construction) and use these words as keywords to retrieve triples such that one of the entities (in a given triple) contains a word of $m$.
Finally, we collect the set of entities (from the retrieved triples) as general knowledge for $m$ , by selecting the other entity (i.e., instead of the entity involving a mention word) from each of those retrieved triples. + +# 3.2 Syntax-based Span Representation + +Once the domain-specific and general knowledge for mention $m$ is acquired, we extract all syntax-related phrases for $m$ and anaphor $p$ from review text $t$ (see "Model Overview" in Section 3). We denote the syntax-related phrases of $m$ and $p$ as $S_{m}$ and $S_{p}$ respectively. + +We represent mention, anaphor, the syntax-related phrases, and also the phrases of knowledge from domain-specific and general KBs as spans (a continuous sequence of words), and learn a vector representation for each span (we call it a span vector) based on the embeddings of words that compose the span. The span vectors are then used by our knowledge-driven OAC2 model (discussed in Section 3.3) for solving the OAC2 task. Below, we discuss the span vector representation learning for a given span (corresponding to a syntax-related phrase or a phrase in KB). + +We use BERT (Devlin et al., 2019) to learn the vector representation for each span. To encode + +the words in a span, we use BERT's WordPiece tokenizer. Given a span $x$ , let $\{x_{i}\}_{i = 1}^{N_{1}}$ be the output token embeddings of $x$ from BERT, where $N_{1}$ is the total number of word-piece tokens for span $x$ . + +BERT is a neural model consisting of stacked attention layers. To incorporate the syntax-based information, we want the head of a span and words that have a modifier relation to the head to have higher attention weights. To achieve the goal, we adopt syntax-based attention (He et al., 2018). The weight of a word in a span depends on the dependency parsing result of the span. Note, the dependency parsing of a span is different from what is described in Section 3.1. 
The dependency parsing in Section 3.1 extracts relations between chunks of words, while here we extract relations between single words.

An example is shown in the top left corner of Figure 1. The head of "a green Moonbeam" is "Moonbeam", which we want to have the highest attention weight when computing the embedding of the span. The distances of ("a", "Moonbeam") and ("green", "Moonbeam") along the dependency path are both 1.

To learn the span vector $v_{x}$ for span $x$, we first compute the attention weights $b_{i}$ for each $x_{i}$ as:

$$
f_{i} = FFN_{1}\left(\left[x_{i}, x_{head}, x_{i} \odot x_{head}\right]\right) \tag{4}
$$

$$
a_{i} = \begin{cases} \frac{1}{2^{l_{i}}} \cdot \exp(f_{i}), & \text{if } l_{i} \leq L \\ 0, & \text{otherwise} \end{cases} \tag{5}
$$

$$
b_{i} = \frac{a_{i}}{\sum_{j=1}^{N_{1}} a_{j}} \tag{6}
$$

where $FFN_{1}$ is a feed-forward layer that projects the input into a score $f_{i}$, $\odot$ is element-wise multiplication, $[,]$ is concatenation, $x_{head}$ is the head of the span, $l_{i}$ is the distance to the head along the dependency path, and $L$ is the attention window size.

Next, we learn the attention-based representation of the span $x$, denoted as $\hat{x}$:

$$
\hat{x} = \sum_{i=1}^{N_{1}} b_{i} \cdot x_{i} \tag{7}
$$

Finally, we concatenate the start and end word embeddings of the span, $x_{start}$ and $x_{end}$, the attention-based representation $\hat{x}$, and a length feature $\phi(x)$, following (Lee et al., 2017), to learn the span vector $v_x$:

$$
v_{x} = FFN_{2}([x_{start}, x_{end}, \hat{x}, \phi(x)]) \tag{8}
$$

where $FFN_{2}$ is a feed-forward layer.
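Equations 4-8 can be sketched in a few lines of numpy. The random weight matrices below are stand-ins for the learned layers $FFN_1$ and $FFN_2$, so this illustrates shapes and control flow rather than the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                                           # toy embedding size
W1 = rng.standard_normal((3 * D, 1)) * 0.1      # stand-in for FFN_1
W2 = rng.standard_normal((3 * D + 1, D)) * 0.1  # stand-in for FFN_2

def span_vector(x, head, dist, L=2):
    # x: (N, D) token embeddings; head: index of the span head;
    # dist: dependency-path distance of each token to the head.
    f = np.array([np.concatenate([xi, x[head], xi * x[head]]) @ W1
                  for xi in x])[:, 0]                       # Eq. 4
    a = np.where(np.array(dist) <= L,
                 np.exp(f) / 2.0 ** np.array(dist), 0.0)   # Eq. 5
    b = a / a.sum()                                        # Eq. 6
    x_hat = b @ x                                          # Eq. 7
    phi = np.array([len(x)])                               # length feature
    return np.concatenate([x[0], x[-1], x_hat, phi]) @ W2  # Eq. 8

tokens = rng.standard_normal((3, D))   # e.g. "a green Moonbeam"
v = span_vector(tokens, head=2, dist=[1, 1, 0])
print(v.shape)   # (8,)
```

Note how Equation 5 does the syntax-specific work: the head (distance 0) keeps its full score, each dependency hop halves the weight, and tokens beyond the window $L$ are zeroed out entirely.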
# 3.3 Knowledge-driven OAC2 Model

The knowledge-driven OAC2 model leverages the syntax-related phrases together with the domain knowledge and general knowledge to solve the OAC2 task. The model first computes three relevance scores: (a) a contextual relevance score $F_{C}$ between $m$ and $p$, (b) a knowledge-based relevance score $F_{K}$ between $m$ and $p$, and (c) a relevance score $F_{SK}$ between knowledge and syntax-related phrases (see Figure 1). These scores are then summed to compute the final prediction score $\hat{F}$, as shown below:

$$
\hat{F} = \operatorname{sigmoid}\left(F_{C} + F_{K} + F_{SK}\right) \tag{9}
$$

(a) Contextual Relevance Score $(F_C)$. $F_C$ is computed based on the context $t$, mention $m$, and anaphor $p$. We use BERT to encode $t$. Let the output BERT embeddings of the words in $t$ be $\{t_i\}_{i=1}^{N_2}$, where $N_2$ is the length of $t$. Also, let the span vector representations of $m$ and $p$ be $v_m$ and $v_p$ respectively. Then, for each $v \in \{v_m, v_p\}$, we compute cross attention between $t$ and $v$ as follows:

$$
g_{i} = FFN_{3}([t_{i}, v, t_{i} \odot v]) \tag{10}
$$

$$
w_{i}^{v} = \frac{e^{g_{i}}}{\sum_{j=1}^{N_{2}} e^{g_{j}}} \cdot t_{i} \tag{11}
$$

where $FFN_{3}$ is a feed-forward layer.

We learn the interaction of $\{t_i\}_{i=1}^{N_2}$ with $v_{m}$ and $v_{p}$ to get attention-based vector representations $\{w_i^m\}_{i=1}^{N_2}$ and $\{w_i^p\}_{i=1}^{N_2}$ for $m$ and $p$ respectively. Next, we concatenate these vectors and their pointwise multiplication for each context word, sum up the concatenated representations, and feed the sum to a feed-forward layer to compute $F_{C} \in \mathcal{R}^{1\times 1}$:

$$
F_{C} = FFN_{4}\left(\sum_{i=1}^{N_{2}}\left[w_{i}^{m}, w_{i}^{p}, w_{i}^{m} \odot w_{i}^{p}\right]\right) \tag{12}
$$

where $FFN_{4}$ is a feed-forward layer.
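A minimal numpy sketch of the contextual score (Equations 10-12) and the final combination of Equation 9. The random matrices stand in for the learned $FFN_3$ and $FFN_4$, and $F_K$ and $F_{SK}$ are set to zero here since they are computed by the separate modules described next:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 8
W3 = rng.standard_normal((3 * D, 1)) * 0.1   # stand-in for FFN_3
W4 = rng.standard_normal((3 * D, 1)) * 0.1   # stand-in for FFN_4

def cross_attend(t, v):
    # Eqs. 10-11: attention of each context token t_i toward span vector v,
    # returning one weighted vector w_i^v per token.
    g = np.array([np.concatenate([ti, v, ti * v]) @ W3 for ti in t])[:, 0]
    w = np.exp(g) / np.exp(g).sum()
    return w[:, None] * t

def contextual_score(t, v_m, v_p):
    # Eq. 12: concatenate, sum over tokens, project to a scalar score.
    wm, wp = cross_attend(t, v_m), cross_attend(t, v_p)
    summed = np.concatenate([wm, wp, wm * wp], axis=1).sum(axis=0)
    return (summed @ W4)[0]

t = rng.standard_normal((5, D))              # context token embeddings
v_m, v_p = rng.standard_normal(D), rng.standard_normal(D)
F_C = contextual_score(t, v_m, v_p)
# Eq. 9: the model sums F_C with F_K and F_SK before the sigmoid
F_hat = 1.0 / (1.0 + np.exp(-(F_C + 0.0 + 0.0)))
```

The sigmoid keeps the combined score in $(0, 1)$, which is what lets the cross-entropy loss of Equation 20 treat $\hat{F}$ directly as a coreference probability.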
(b) Knowledge-based Relevance Score $(F_K)$. The OAC2 model leverages external knowledge to compute a relevance score $F_K$ between $m$ and $p$. Let $v_m$ and $v_p$ be the span vectors for $m$ and $p$, and $\{v_i^K\}_{i=1}^{N_3}$ be the span vectors for the phrases in $K_m$ (see Section 3.1 and Table 1), where $N_3$ is the size of $K_m$. Then, we compute $F_K$ using $v_m$, $v_p$, and $\{v_i^K\}_{i=1}^{N_3}$ as discussed below.

To leverage external knowledge information, we first learn cross attention between the mention and the knowledge as:

$$
h_{i} = FFN_{5}\left(\left[v_{i}^{K}, v_{m}, v_{i}^{K} \odot v_{m}\right]\right) \tag{13}
$$

$$
c_{i} = \frac{e^{h_{i}}}{\sum_{j=1}^{N_{3}} e^{h_{j}}} \tag{14}
$$

where $FFN_{5}$ is a feed-forward layer.

Next, we learn an attention-based representation $\hat{v}_m$ of mention $m$ as:

$$
\hat{v}_{m} = \sum_{i=1}^{N_{3}} c_{i} \cdot v_{i}^{K} \tag{15}
$$

We now concatenate $v_{m}$, $v_{p}$, and the attention-based representation $\hat{v}_{m}$ and learn the interaction between them to compute $F_{K} \in \mathcal{R}^{1 \times 1}$ as:

$$
F_{K} = FFN_{6}([v_{m}, v_{p}, \hat{v}_{m}, v_{p} \odot \hat{v}_{m}]) \tag{16}
$$

where $FFN_{6}$ is a feed-forward layer.

(c) Syntax-related Phrase Relevance Score $(F_{SK})$. $F_{SK}$ measures the relevance between the knowledge (i.e., phrases) in $K_{m}$ and the syntax-related phrases in $S_{m}$ ($S_{p}$) corresponding to $m$ ($p$).

Let $v_{i}^{K}$ be the span vector for the $i^{th}$ phrase in $K_{m}$, and $v_{i}^{m}$ ($v_{i}^{p}$) be the span vector for the $i^{th}$ phrase in $S_{m}$ ($S_{p}$).
Then, we concatenate these span vectors row-wise to form matrices $M_{K} = v_{i}^{K}\big\|_{i=1}^{N_{3}} \in \mathcal{R}^{N_{3}\times d}$, $M_{Sm} = v_{i}^{m}\big\|_{i=1}^{N_{4}} \in \mathcal{R}^{N_{4}\times d}$, and $M_{Sp} = v_{i}^{p}\big\|_{i=1}^{N_{5}} \in \mathcal{R}^{N_{5}\times d}$ respectively, where $\|_{i=1}^{Q}$ denotes concatenation of $Q$ elements, $d$ is the dimension of a span vector, and $N_4$ ($N_5$) is the size of $S_{m}$ ($S_{p}$).

Next, we learn the interaction between these matrices using scaled dot attention (Vaswani et al., 2017):

$$
\tilde{M}_{Sm} = \operatorname{softmax}\left(\frac{M_{Sm} M_{K}^{T}}{\sqrt{d}}\right) M_{K} \tag{17}
$$

$$
\tilde{M}_{Sp} = \operatorname{softmax}\left(\frac{M_{Sp} M_{K}^{T}}{\sqrt{d}}\right) M_{K} \tag{18}
$$

Finally, the syntax-related phrase relevance score $F_{SK} \in \mathcal{R}^{1 \times 1}$ is computed as:

$$
F_{SK} = FFN_{8}\left(FFN_{7}\left(\tilde{M}_{Sm} \tilde{M}_{Sp}^{T}\right)\right) \tag{19}
$$

where $FFN_{7}$ and $FFN_{8}$ are two feed-forward network layers.

Loss Function. As shown in Equation 9, given the three scores $F_{C}$, $F_{K}$, and $F_{SK}$, we sum them up

Table 2: Dataset Statistics. #R is the number of annotated reviews and #E is the total number of entities that refer to objects or attributes. P and N stand for positive and negative examples, and the values under them are the numbers of those examples.
| Domain | #R | #E | Train P | Train N | Dev P | Dev N | Test P | Test N |
|---|---|---|---|---|---|---|---|---|
| alarm | 100 | 924 | 647 | 1533 | 96 | 243 | 89 | 187 |
| camera | 100 | 871 | 632 | 1709 | 69 | 160 | 83 | 174 |
| cellphone | 100 | 938 | 679 | 1693 | 62 | 148 | 73 | 189 |
| computer | 100 | 1035 | 703 | 1847 | 86 | 227 | 112 | 273 |
| laptop | 100 | 893 | 641 | 1618 | 88 | 244 | 77 | 209 |
and then feed the sum into a sigmoid function to get the final prediction $\hat{F}$. The proposed model is trained in an end-to-end manner by minimizing the following cross-entropy loss $\mathcal{L}$:

$$
\mathcal{L} = -\frac{1}{N} \sum_{i}^{N}\left[y_{i} \cdot \log\left(\hat{F}_{i}\right) + \left(1 - y_{i}\right) \cdot \log\left(1 - \hat{F}_{i}\right)\right] \tag{20}
$$

where $N$ is the number of training examples and $y_{i}$ is the ground-truth label of the $i^{th}$ training example.

# 4 Experiments

We evaluate our proposed approach using five datasets associated with five different domains: (1) alarm clock, (2) camera, (3) cellphone, (4) computer, and (5) laptop, and perform both quantitative and qualitative analysis in terms of predictive performance and the domain-specific knowledge usage ability of the proposed model.

# 4.1 Evaluation Setup

Labeled Data Collection. We use the product review dataset from Chen and Liu (2014), where each product (domain) has 1,000 unlabeled reviews. For each domain, we randomly sample 100 reviews, extract a list of (mention, anaphor) pairs from each of those reviews, and label them manually with ground truths. That is, given a review text and a candidate (mention, anaphor) pair, we assign a binary label to denote whether they co-refer or not. In other words, we view each labeled example as a triple $(u, m, p)$, consisting of the context $u$, a mention $m$, and an anaphor $p$. Considering the review example (in Section 1), the triple ("I bought ... loud and long", "a green Moonbeam", "its") is a positive example, since "a green Moonbeam" and "its" refer to the same entity (i.e., they are in the same coreference cluster). Negative examples are naturally constructed by selecting $m$ and $p$ from two different clusters under the same context, e.g., ("I bought ... loud and long", "a green Moonbeam", "its voice").
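The positive/negative example construction just described can be sketched directly from annotated coreference clusters. The `make_examples` helper and the placeholder context below are illustrative, not the annotation tooling itself:

```python
from itertools import combinations

def make_examples(context, clusters):
    # Pairs within a cluster co-refer (label 1); pairs drawn across two
    # different clusters of the same review do not (label 0).
    pos, neg = [], []
    for cluster in clusters:
        for m, p in combinations(cluster, 2):
            pos.append((context, m, p, 1))
    for c1, c2 in combinations(clusters, 2):
        for m in c1:
            for p in c2:
                neg.append((context, m, p, 0))
    return pos, neg

# The Section 1 example: two clusters annotated in one review.
clusters = [['a green Moonbeam', 'its'], ['its voice', 'it']]
pos, neg = make_examples('<review text>', clusters)
print(len(pos), len(neg))   # 2 4
```

This also explains why the negative counts in Table 2 exceed the positive ones: cross-cluster pairings grow multiplicatively with cluster sizes, while within-cluster pairings grow only combinatorially within each (typically small) cluster.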
Next, we randomly split the set of all labeled examples (for a given domain) into $80\%$ for training, $10\%$ for development, and the remaining $10\%$ as test data. The remaining 900 unlabeled reviews form the unlabeled domain corpus used for domain-specific knowledge extraction (as discussed in Section 3.1). All sentences in reviews and (mention, anaphor) pairs were annotated independently by two annotators who strictly followed the MUC-7 annotation standard (Hirschman and Chinchor, 1998). The Cohen's kappa coefficient between the two annotators is 0.906. When disagreement happened, the two annotators adjudicated to make a final decision. Table 2 provides the statistics of the labeled data used for training, development, and test for each of the five domains.

Knowledge Resources. We used three types of knowledge resources, as listed below. The first two are general KBs, while the third is our mined domain-specific KB.

1. Commonsense knowledge graph (OMCS). We use the Open Mind Common Sense (OMCS) KB as general knowledge (Speer and Havasi, 2013). OMCS contains 600K crowd-sourced commonsense triplets such as (clock, UsedFor, keeping time). We follow (Zhang et al., 2019b) to select highly confident triplets and build the OMCS KG consisting of 62,730 triplets in total.
2. SenticNet (Cambria et al., 2016). SenticNet is another commonsense knowledge base that contains 50k concepts associated with affective properties including sentiment information. To make the knowledge base fit for deep neural models, we concatenate SenticNet embeddings with BERT embeddings to extend the embedding information.
3. Domain-specific KB. This is mined from the unlabeled review dataset as discussed in Section 3.1.

Hyper-parameter Settings. Following the previous work of (Joshi et al., 2019; Lee et al., 2018), we use (Base) BERT$^{8}$ embeddings for context and knowledge representation (as discussed in Section 3). The number of training epochs is empirically set to 20.
We train five models on the five datasets separately, because the domain knowledge learned from one domain may conflict with that from others. Without loss of generality and model extensibility, we use the same set of hyper-parameter settings for all models built on the five different domains. We select the best model setting based on its performance on the development set, by averaging the five F1-scores on the five datasets. The best model uses a maximum sequence length of 256, dropout of 0.1, a learning rate of $3e^{-5}$ with linear decay of $1e^{-4}$ for parameter learning, and $\rho = 5.0$ (the tf-idf threshold) for domain-specific knowledge extraction (Section 3.1). The other baseline models are tuned in the same way as our model.

Baselines. We compare with the following state-of-the-art models from existing work on the CR task:

(1) Review CR (Ding and Liu, 2010): A review-specific CR model that incorporates opinion-mining-based features and linguistic features.
(2) Review CR+BERT: For a fairer comparison, we further combine BERT with features from (Ding and Liu, 2010) as additional features. Specifically, we combine the context-based BERT to compute $F_{C}(m,p)$ (see Section 3.3 (a)).
(3) C2f-Coref (Lee et al., 2018): A state-of-the-art end-to-end model that leverages contextual information and pre-trained GloVe embeddings.
(4) C2f-Coref+BERT (Joshi et al., 2019): This model integrates BERT into C2f-Coref. We use its independent setting, which uses non-overlapping segments of a paragraph, as it is the best-performing model in Joshi et al. (2019).
(5) Knowledge+BERT (Zhang et al., 2019b): A state-of-the-art knowledge-based model, which leverages different types of general knowledge and contextual information by incorporating an attention module over knowledge. The general knowledge includes the aforementioned OMCS, linguistic features, and selectional preference knowledge extracted from Wikipedia.
To have a fair comparison, we replace its entire LSTM-based encoder with a BERT-base transformer.

To accommodate the aforementioned baseline models in our setting, which takes context, anaphor, and mention as input and performs binary classification, we change the input and output of the baseline models, i.e., the models compute a score between mention and anaphor and feed the score to a sigmoid function to get a value within [0, 1]. Note that this setting is consistently used for all

Table 3: Performance (+ve F1 scores) of all models on all test datasets. Here, "cam", "com", and "lap" are abbreviations for "camera", "computer", and "laptop" respectively.
| Model | alarm | cam | phone | com | lap | average |
|---|---|---|---|---|---|---|
| Review CR | 58.2 | 60.5 | 57.7 | 59.6 | 58.9 | 58.98 |
| Review CR+BERT | 67.2 | 69.3 | 67.0 | 68.4 | 66.7 | 67.72 |
| C2f-Coref | 68.8 | 70.1 | 67.2 | 69.5 | 67.4 | 68.60 |
| C2f-Coref+BERT | 70.2 | 71.6 | 68.6 | 71.3 | 68.2 | 69.98 |
| Knowledge+BERT | 72.0 | 73.4 | 71.8 | 72.6 | 70.0 | 71.96 |
| Our model | 73.6 | 74.5 | 72.4 | 73.8 | 71.3 | 73.12 |
candidate models (including our proposed model).

Evaluation Metrics. As we aim to solve the OAC2 problem, a focused coreference classification task, we use the standard F1-score as the evaluation metric, following the same setting as the prior study (Ding and Liu, 2010). In particular, we report the positive (+ve) F1-score [F1(+)]. The average +ve F1-score is computed over the five domains.

# 4.2 Results and Analysis

Comparison with baselines. Table 3 reports the F1 scores of all models for each of the five domains and the average F1 over all domains. We observe the following: (1) Overall, our model performs the best across all five domains, outperforming the no-knowledge baseline model C2f-Coref+BERT by $3.14\%$ on average. On the cellphone domain, our model outperforms it by $3.8\%$. (2) Knowledge+BERT turns out to be the strongest baseline, outperforming the other three baselines, which also shows the importance of leveraging external knowledge for the OAC2 task. However, our model achieves superior performance over Knowledge+BERT, which indicates that leveraging domain-specific knowledge indeed helps. (3) C2f-Coref+BERT achieves better scores than C2f-Coref and Review CR. This demonstrates that both the representation (using pre-trained BERT) and the neural architecture are important for feature fusion in this task.

Ablation study. To gain further insight, we ablate various components of our model, with the results reported in Table 4. For simplicity, we only show the average F1-scores on the five domain datasets. The results indicate how each knowledge resource or module contributes, from which we make the following observations.

Table 4: Performance of our model with different types of knowledge or modules removed (-). $\Delta$ F1(+) is the performance difference between our model and the model with the module removed.

| Comparison | Model | Avg. F1(+) | Δ F1(+) |
|---|---|---|---|
| | Our model | 73.12 | 0.00 |
| Knowledge source | -OMCS knowledge | 72.28 | 0.84 |
| | -Domain knowledge | 72.22 | 0.90 |
| | -SenticNet | 72.82 | 0.30 |
| | -all knowledge | 70.56 | 2.56 |
| Score | -context $F_C$ | 71.14 | 1.98 |
| | -knowledge $F_K$ | 71.80 | 1.48 |
| | -phrase $F_{SK}$ | 72.58 | 0.56 |
| Attention | -syntax-based attention | 72.50 | 0.62 |
| | +dot attention | 72.96 | 0.16 |

1. From the comparison of knowledge sources in Table 4, we see that domain knowledge contributes the most. General OMCS knowledge also contributes 0.84 to the model on average, so general knowledge is still needed. SenticNet contributes the least, as it is more about sentiment than about the relatedness between mentions. If we remove all knowledge sources (-all knowledge), the performance drop is the highest, which shows the importance of leveraging external knowledge in OAC2.
2. Considering the comparisons of the various types of scores in Table 4, we see that disabling the context score $F_{C}$ causes the highest drop in performance, showing the importance of contextual information for this task. Disabling the knowledge scores $F_{K}$ and $F_{SK}$ also hurts the predictive performance of the model.
3. From the comparison of attention mechanisms for span representation in Table 4, we see that, before summing up the embeddings of the words in a span, the attention layer is necessary. Note that we use the syntax-based attention instead of the popular dot attention of (Vaswani et al., 2017) during span representation. The syntax-based attention layer performs slightly better than the dot attention layer; we therefore use the syntax-based attention, which also offers better interpretability.

Qualitative Evaluation. We first give a real example to show the effectiveness of our model by comparing it with two baseline models, C2f-coref+BERT and Knowledge+BERT. Table 5 shows a sample in the alarm domain. Here the major difficulty is to identify "Moonbeam" as a "clock". Knowledge+BERT fails due to its lack of domain-specific knowledge. C2f-coref+BERT

Table 5: A test example from the alarm domain with the class probability distributions produced by the three models during prediction.
| | |
|---|---|
| Context | ...after I bought (a green Moonbeam) for myself ... potential buyer also should know that, as with (the other Westclox clock), (the clock) also have (a gold band) ... |
| (Mention, Anaphor) | (a green Moonbeam, the clock) |
| Domain knowledge | drop, hang, clock, put, alarm, clear, beautiful, expensive, worthwhile ... |
| Our model | (0: 0.47, 1: 0.53) |
| Knowledge+BERT | (0: 0.87, 1: 0.13) |
| C2f-coref+BERT | (0: 0.79, 1: 0.21) |
Table 6: An example showing the domain knowledge extraction quality of our model in the laptop domain.
| | |
|---|---|
| Mention (Domain) | windows (laptop) |
| Extracted knowledge (before filtering) | keep, like, product, battery, fast, microsoft, system, upgrade, xp, laptop ... |
| Candidate knowledge (after filtering by tf-idf) | microsoft, system, upgrade, xp, laptop ... |
fails as well because it tries to infer from contextual information only, where there is no domain knowledge support. In contrast, with our domain-specific knowledge base incorporated, "Moonbeam" can be matched to knowledge phrases such as "clock", "alarm", and "hang". So our model successfully handles this case. In other words, in our model, not only the mention "a green Moonbeam" but also the syntax-related phrase "a gold band" of "the clock" is jointly considered in reasoning. We can see the modeling superiority of our knowledge-aware solution. Table 6 shows the effectiveness of our extraction module introduced in Section 3.1, especially the use of tf-idf to filter out useless knowledge.

# 5 Conclusion

This paper proposed a knowledge-driven approach for object and attribute coreference classification in opinion mining. The approach can automatically extract domain-specific knowledge from unlabeled data and leverage it together with general knowledge to solve the problem. We also created a set of annotated opinionated review data (covering 5 domains) for object and attribute coreference evaluation. Experimental results show that our approach achieves state-of-the-art performance.

# Acknowledgments

This work was supported in part by two grants from the National Science Foundation: IIS-1910424 and IIS-1838770, and one research gift from Tencent.

# References

Akiko Aizawa. 2003. An information-theoretic perspective of tf-idf measures. Information Processing & Management, pages 45-65.
Rahul Aralikatte, Heather Lent, Ana Valeria González-Garduño, Daniel Hershcovich, Chen Qiu, Anders Sandholm, Michael Ringaard, and Anders Søgaard. 2019. Rewarding coreference resolvers for being consistent with world knowledge. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP, pages 1229-1235.
+Muhammad Zubair Asghar, Aurangzeb Khan, Syeda Rabail Zahra, Shakeel Ahmad, and Fazal Masud Kundi. 2019. Aspect-based opinion mining framework using heuristic patterns. Cluster Computing, pages 7181-7199. +John Atkinson, Gonzalo Salas, and Alejandro Figueroa. 2015. Improving opinion retrieval in social media by combining features-based coreferencing and memory-based learning. Information Sciences, pages 20-31. +Daniel Bailey, Amelia J. Harrison, Yuliya Lierler, Vladimir Lifschitz, and Julian Michael. 2015. The winograd schema challenge and reasoning about correlation. In 2015 AAAI Spring Symposia. AAAI Press. +Erik Cambria, Soujanya Poria, Rajiv Bajpai, and Björn Schuller. 2016. Senticnet 4: A semantic resource for sentiment analysis based on conceptual primitives. In COLING, pages 2666-2677. +Zhiyuan Chen and Bing Liu. 2014. Topic modeling using topics from many domains, lifelong learning and big data. In ICML, pages 703-711. +Pradeep Dasigi, Nelson F. Liu, Ana Marasovic, Noah A. Smith, and Matt Gardner. 2019. Quoref: A reading comprehension dataset with questions requiring coreferential reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5925-5932. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171-4186. + +Xiaowen Ding and Bing Liu. 2010. Resolving object and attribute coreference in opinion mining. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 268-276. +George R Doddington, Alexis Mitchell, Mark A Przybocki, Lance A Ramshaw, Stephanie M Strassel, and Ralph M Weischedel. 2004. 
The automatic content extraction (ACE) program: tasks, data, and evaluation. In LREC, pages 837-840.
Mauro Dragoni, Marco Federici, and Andi Rexha. 2019. An unsupervised aspect extraction strategy for monitoring real-time reviews stream. Information Processing & Management, pages 1103-1118.
Ali Emami, Adam Trischler, Kaheer Suleman, and Jackie Chi Kit Cheung. 2018. A generalized knowledge hunting framework for the winograd schema challenge. In NAACL-HLT Workshop, pages 25-31.
Abbas Ghaddar and Philippe Langlais. 2016. Wiki-coref: An English coreference-annotated corpus of Wikipedia articles. In LREC, pages 136-142.
Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2018. Effective attention modeling for aspect-level sentiment classification. In COLING, pages 1121-1131.
Lynette Hirschman and Nancy Chinchor. 1998. Appendix F: MUC-7 coreference task definition (version 3.0). In Seventh Message Understanding Conference: Proceedings of a Conference Held in Fairfax, Virginia, USA. ACL.
Jerry R Hobbs. 1978. Resolving pronoun references. Lingua, pages 311-338.
Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 168-177. ACM.
Mandar Joshi, Omer Levy, Luke Zettlemoyer, and Daniel Weld. 2019. BERT for coreference resolution: Baselines and analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5803-5808.
Dan Jurafsky. 2000. Speech & language processing. Pearson Education India.
Gourab Kundu, Avirup Sil, Radu Florian, and Wael Hamza. Neural cross-lingual coreference resolution and its application to entity linking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL, pages 395-400.
Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017.
End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188-197.
Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to-fine inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 687-692.
Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies, pages 1-167.
Huaishao Luo, Tianrui Li, Bing Liu, and Junbo Zhang. 2019. DOER: dual cross-shared RNN for aspect term-polarity co-extraction. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL, pages 591-601.
Nianzu Ma, Sahisnu Mazumder, Hao Wang, and Bing Liu. 2020. Entity-aware dependency-based deep graph attention network for comparative preference classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5782-5788.
George A Miller. 1998. WordNet: An electronic lexical database. MIT press.
Nicolas Nicolov, Franco Salvetti, and Steliana Ivanova. 2008. Sentiment analysis: Does coreference matter. In AISB 2008 convention communication, interaction and social intelligence, page 37.
Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Joint Conference on EMNLP and CoNLL-Shared Task, pages 1-40.
Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational Linguistics, 37(1):9-27.
Jun Quan, Deyi Xiong, Bonnie Webber, and Changjian Hu. 2019. GECOR: an end-to-end generative ellipsis and co-reference resolution model for task-oriented dialogue.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP, pages 4546-4556. +Robert Speer and Catherine Havasi. 2013. Conceptnet 5: A large semantic network for relational knowledge. In The People's Web Meets NLP, pages 161-176. Springer. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 5998-6008. + +Shuai Wang, Mianwei Zhou, Sahisnu Mazumder, Bing Liu, and Yi Chang. 2018. Disentangling aspect and opinion words in target-based sentiment analysis using lifelong learning. arXiv preprint arXiv:1802.05818. +Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. Attention-based LSTM for aspect-level sentiment classification. In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 606-615. +Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2019. BERT post-training for review reading comprehension and aspect-based sentiment analysis. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, pages 2324-2335. +Xintong Yu, Hongming Zhang, Yangqiu Song, Yan Song, and Changshui Zhang. 2019. What you see is what you get: Visual pronoun coreference resolution in dialogues. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP, pages 5122-5131. +Hongming Zhang, Yan Song, and Yangqiu Song. 2019a. Incorporating context and external knowledge for pronoun coreference resolution. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 872-881. 
+Hongming Zhang, Yan Song, Yangqiu Song, and Dong Yu. 2019b. Knowledge-aware pronoun coreference resolution. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 867-876. +Lei Zhang, Shuai Wang, and Bing Liu. 2018. Deep learning for sentiment analysis: A survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, page e1253. \ No newline at end of file diff --git a/aknowledgedrivenapproachtoclassifyingobjectandattributecoreferencesinopinionmining/images.zip b/aknowledgedrivenapproachtoclassifyingobjectandattributecoreferencesinopinionmining/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..996e7e0b1072402566d62cd762f6b75470763f49 --- /dev/null +++ b/aknowledgedrivenapproachtoclassifyingobjectandattributecoreferencesinopinionmining/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2202395442a9b0cf6a2197bc0e46fc8a27d6bd5d3b3c647aaaf7c2bee24690da +size 373942 diff --git a/aknowledgedrivenapproachtoclassifyingobjectandattributecoreferencesinopinionmining/layout.json b/aknowledgedrivenapproachtoclassifyingobjectandattributecoreferencesinopinionmining/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..7bd78f47973e86e31a8d35dc6ce476d83588300e --- /dev/null +++ b/aknowledgedrivenapproachtoclassifyingobjectandattributecoreferencesinopinionmining/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f4234e3bea902c514302b57fa0b35e6b03de433ecaca0777f7dc152387fb059 +size 490377 diff --git a/alinguisticanalysisofvisuallygroundeddialoguesbasedonspatialexpressions/5b1daaa5-a862-4afe-aa54-05a9388a7df7_content_list.json b/alinguisticanalysisofvisuallygroundeddialoguesbasedonspatialexpressions/5b1daaa5-a862-4afe-aa54-05a9388a7df7_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..f6e19623a8881547a96d41eb70f3f4169a59cc7e --- /dev/null +++ 
b/alinguisticanalysisofvisuallygroundeddialoguesbasedonspatialexpressions/5b1daaa5-a862-4afe-aa54-05a9388a7df7_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d2078b1bdcf1cf2687ff02783520d0f6b052f81e25196b994876bf9d878bc95 +size 115782 diff --git a/alinguisticanalysisofvisuallygroundeddialoguesbasedonspatialexpressions/5b1daaa5-a862-4afe-aa54-05a9388a7df7_model.json b/alinguisticanalysisofvisuallygroundeddialoguesbasedonspatialexpressions/5b1daaa5-a862-4afe-aa54-05a9388a7df7_model.json new file mode 100644 index 0000000000000000000000000000000000000000..0354849fbd9ecd963f1e93429cf5653e83bdfd28 --- /dev/null +++ b/alinguisticanalysisofvisuallygroundeddialoguesbasedonspatialexpressions/5b1daaa5-a862-4afe-aa54-05a9388a7df7_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:50a33b74c53e4cf9cd17ae5a0734f56fdd7bc8e71fdcba43ddeaf91d13d9abe7 +size 140003 diff --git a/alinguisticanalysisofvisuallygroundeddialoguesbasedonspatialexpressions/5b1daaa5-a862-4afe-aa54-05a9388a7df7_origin.pdf b/alinguisticanalysisofvisuallygroundeddialoguesbasedonspatialexpressions/5b1daaa5-a862-4afe-aa54-05a9388a7df7_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..973eb0a5037cfb2bdd6ea71aa0d23546768dfada --- /dev/null +++ b/alinguisticanalysisofvisuallygroundeddialoguesbasedonspatialexpressions/5b1daaa5-a862-4afe-aa54-05a9388a7df7_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f33965dd1ceb7d4f9c092914fb97bb124902330d9b123a01f0e95d781dadceaa +size 5115194 diff --git a/alinguisticanalysisofvisuallygroundeddialoguesbasedonspatialexpressions/full.md b/alinguisticanalysisofvisuallygroundeddialoguesbasedonspatialexpressions/full.md new file mode 100644 index 0000000000000000000000000000000000000000..d7e27da74e877983d40cc4d8ba7f06f2a47b5ef6 --- /dev/null +++ b/alinguisticanalysisofvisuallygroundeddialoguesbasedonspatialexpressions/full.md @@ -0,0 +1,494 @@ +# A 
Linguistic Analysis of Visually Grounded Dialogues Based on Spatial Expressions + +Takuma Udagawa $^{1}$ Takato Yamazaki $^{1}$ Akiko Aizawa $^{1,2}$ + +The University of Tokyo, Tokyo, Japan + +National Institute of Informatics, Tokyo, Japan + +{takuma.udagawa,takatoy,aizawa}@nii.ac.jp + +# Abstract + +Recent models achieve promising results in visually grounded dialogues. However, existing datasets often contain undesirable biases and lack sophisticated linguistic analyses, which make it difficult to understand how well current models recognize their precise linguistic structures. To address this problem, we make two design choices: first, we focus on OneCommon Corpus (Udagawa and Aizawa, 2019, 2020), a simple yet challenging common grounding dataset which contains minimal bias by design. Second, we analyze their linguistic structures based on spatial expressions and provide comprehensive and reliable annotation for 600 dialogues. We show that our annotation captures important linguistic structures including predicate-argument structure, modification and ellipsis. In our experiments, we assess the model's understanding of these structures through reference resolution. We demonstrate that our annotation can reveal both the strengths and weaknesses of baseline models in essential levels of detail. Overall, we propose a novel framework and resource for investigating fine-grained language understanding in visually grounded dialogues. + +# 1 Introduction + +Visual dialogue is the task of holding natural, often goal-oriented conversation in a visual context (Das et al., 2017a; De Vries et al., 2017). This typically involves two types of advanced grounding: symbol grounding (Harnad, 1990), which bridges symbolic natural language and continuous visual perception, and common grounding (Clark, 1996), which refers to the process of developing mutual understandings through successive dialogues. As noted in Monroe et al. 
(2017); Udagawa and Aizawa (2019), the continuous nature of visual context introduces challenging symbol grounding of nuanced and pragmatic expressions. Some further incorporate partial observability, where the agents do not share the same context, which introduces complex misunderstandings that need to be resolved through advanced common grounding (Udagawa and Aizawa, 2019; Haber et al., 2019).

Despite the recent progress on these tasks, it remains unclear what types of linguistic structures can (or cannot) be properly recognized by existing models, for two reasons. First, existing datasets often contain undesirable biases which make it possible to make correct predictions without recognizing the precise linguistic structures (Goyal et al., 2017; Cirik et al., 2018; Agarwal et al., 2020). Second, existing datasets severely lack sophisticated linguistic analysis, which makes it difficult to understand what types of linguistic structures exist or how they affect model performance.

To address this problem, we make the following design choices in this work:

- We focus on OneCommon Corpus (Udagawa and Aizawa, 2019, 2020), a simple yet challenging collaborative referring task under a continuous and partially observable context. In this dataset, the visual contexts are kept simple and controllable to remove undesirable biases while enhancing linguistic variety. In total, 5,191 natural dialogues are collected and fully annotated with referring expressions (which they called markables) and their referents, which can be leveraged for further linguistic analysis.

- To capture the linguistic structures in these dialogues, we propose to annotate spatial expressions, which play a central role in visually grounded dialogues.
We take inspiration from the existing annotation frameworks (Pustejovsky et al., 2011a,b; Petruck and Ellsworth, 2018; Ulinski et al., 2019) but also make several simplifications and modifications to improve coverage, efficiency and reliability.

![](images/4a0ce7b47daeba41daa23f997fb2f6a24f69b6c547a0c8509d6b82d1ed05dee9.jpg)
Figure 1: Example dialogue from OneCommon Corpus with reference resolution annotation (left) and our spatial expression annotation (right). We consider spatial expressions as predicates and annotate their arguments as well as modifiers. For further details of the original dataset and our annotation schema, see Section 3.

![](images/868f13b29a1d41486c11ba78c2fe4f4901f7dffd40594f7c00720600a65d80c3.jpg)

As shown in Figure 1, we consider spatial expressions as predicates with existing markables as their arguments. We distinguish the argument roles based on subjects and objects and annotate modifications based on nuanced expressions (such as slightly). By allowing the arguments to be in previous utterances, our annotation also captures argument ellipsis in a natural way.

In our experiments, we focus on reference resolution to study the model's comprehension of these linguistic structures. Since we found the existing baseline to perform relatively poorly, we propose a simple method of incorporating numerical constraints into model predictions, which significantly improves prediction quality.

Based on our annotation, we conduct a series of analyses to investigate whether the model predictions are consistent with the spatial expressions. Our main finding is that the model is adept at recognizing entity-level attributes (such as color and size) but mostly fails to capture inter-entity relations (especially placements): in the terminology of Landau and Jackendoff (1993), the model can recognize the what but not the where in spatial language.
We also conduct further analyses to investigate the effect of other linguistic factors.

Overall, we propose a novel framework and resource for conducting fine-grained linguistic analyses in visually grounded dialogues. All materials in this work will be publicly available at https://github.com/Alab-NII/onecommon to facilitate future model development and analyses.

# 2 Related Work

Linguistic structure plays a critical role in dialogue research. From theoretical aspects, various dialogue structures have been studied, including discourse structure (Stent, 2000; Asher et al., 2003), speech acts (Austin, 1962; Searle, 1969) and common grounding (Clark, 1996; Lascarides and Asher, 2009). In dialogue system engineering, various linguistic structures have been considered and applied, including syntactic dependency (Davidson et al., 2019), predicate-argument structure (PAS) (Yoshino et al., 2011), ellipsis (Quan et al., 2019; Hansen and Søgaard, 2020), intent recognition (Silva et al., 2011; Shi et al., 2016), semantic representation/parsing (Mesnil et al., 2013; Gupta et al., 2018) and frame-based dialogue state tracking (Williams et al., 2016; El Asri et al., 2017). However, most prior work focuses on dialogues where information is not grounded in an external perceptual modality such as vision. In this work, we propose an effective method of analyzing linguistic structures in visually grounded dialogues.

Recent years have witnessed increasing attention to visually grounded dialogues (Zarrieß et al., 2016; de Vries et al., 2018; Alamri et al., 2019; Narayan-Chen et al., 2019). Despite the impressive progress on benchmark scores and model architectures (Das et al., 2017b; Wu et al., 2018; Kottur et al., 2018; Gan et al., 2019; Shukla et al., 2019; Niu et al., 2019; Zheng et al., 2019; Kang et al., 2019; Murahari et al., 2019; Pang and Wang, 2020), critical problems have also been pointed out in terms of dataset biases (Goyal et al., 2017; Chattopadhyay et al., 2017; Massiceti et al., 2018; Chen et al., 2018; Kottur et al., 2019; Kim et al., 2020; Agarwal et al., 2020) which obscure such contributions. For instance, Cirik et al. (2018) point out that existing datasets of reference resolution may be largely solvable without recognizing the full referring expressions (e.g. based on object categories only). To circumvent these issues, we focus on OneCommon Corpus, where the visual contents are simple (exploitable categories are removed) and well-balanced (by sampling from uniform distributions) to minimize dataset biases.

Although various probing methods have been proposed for models and datasets in NLP (Belinkov and Glass, 2019; Geva et al., 2019; Kaushik et al., 2020; Gardner et al., 2020; Ribeiro et al., 2020), fine-grained analyses of visually grounded dialogues have been relatively limited. One exception is Kottur et al. (2019), who proposed a diagnostic dataset to investigate models' language understanding; however, their dialogues are generated artificially and may not reflect the true nature of visual dialogues. Shekhar et al. (2019) also acknowledge the importance of linguistic analysis but only deal with coarse-level features that can be computed automatically (such as dialogue topic and diversity). Most closely related to our research are Yu et al. (2019) and Udagawa and Aizawa (2020), who conducted additional annotation of reference resolution in visual dialogues; however, they still do not capture more sophisticated linguistic structures such as PAS, modification and ellipsis.
Finally, spatial language and cognition have a long history of research (Talmy, 1983; Herskovits, 1987). In computational linguistics, Kordjamshidi et al. (2010) and Pustejovsky et al. (2015) developed the task of spatial role labeling to capture spatial information in text; however, they do not fully address the problem of annotation reliability nor grounding in an external visual modality. In computer vision, the VisualGenome dataset (Krishna et al., 2017) provides rich annotation of spatial scene graphs constructed from raw images, but not from raw dialogues. Ramisa et al. (2015) and Platonov and Schubert (2018) also worked on modelling spatial prepositions in single sentences. To the best of our knowledge, our work is the first to apply, model and analyze spatial expressions in visually grounded dialogues at full scale.

# 3 Annotation

# 3.1 Dataset

Our work extends OneCommon Corpus, originally proposed in Udagawa and Aizawa (2019). In this task, two players A and B are given slightly different, overlapping perspectives of a 2-dimensional grid with 7 entities in each view (Figure 1, left). Since only some (4, 5 or 6) of them are in common, this setting is partially observable, which introduces complex misunderstandings and partial understandings. In addition, each entity only has continuous attributes (x-value, y-value, color and size), which introduce various nuanced and pragmatic expressions. Note that all entity attributes are generated randomly to enhance linguistic diversity and reduce dataset biases. Under this setting, the two players were instructed to converse freely in natural language to coordinate attention on one of the same, common entities. Basic statistics of the dialogues are shown at the top of Table 1, and the estimated frequency of nuanced expressions in Table 2.
| Statistic | Value |
| --- | --- |
| Total dialogues | 6,760 |
| Avg. utterances per dialogue | 4.76 |
| Avg. tokens per utterance | 12.37 |
| Successful dialogues | 5,191 |
| Annotated markables | 40,172 |
| % markables with 1 referent | 71.81 |
| % markables with 2 referents | 14.85 |
| % markables with ≥3 referents | 12.03 |
| % markables with 0 referent | 1.31 |
+ +Table 1: OneCommon Corpus statistics. + +
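As described in Section 3.1, every entity in a OneCommon scenario carries only continuous attributes sampled at random. A minimal sketch of such a context generator is shown below; the attribute ranges and field names are illustrative assumptions, not the corpus' actual generation parameters.

```python
import random

def sample_context(num_entities=7, seed=None):
    """Sample a OneCommon-style visual context: each entity carries only
    continuous attributes (x, y, color, size), drawn uniformly at random.
    Ranges are illustrative, not the corpus' actual generation settings."""
    rng = random.Random(seed)
    return [
        {
            "x": rng.uniform(0.0, 1.0),      # position on the 2-D grid
            "y": rng.uniform(0.0, 1.0),
            "color": rng.uniform(0.0, 1.0),  # grayscale shade (smaller = darker)
            "size": rng.uniform(0.0, 1.0),
        }
        for _ in range(num_entities)
    ]

context = sample_context(seed=0)
```

Because every attribute is continuous, speakers cannot fall back on discrete category names and instead produce graded, pragmatic descriptions ("slightly darker", "almost in the middle"), which is exactly what Table 2 quantifies.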
| Nuance Type | % Utterance | Example Usage |
| --- | --- | --- |
| Approximation | 3.98 | almost in the middle |
| Exactness | 2.71 | exactly horizontal |
| Subtlety | 9.37 | slightly to the right |
| Extremity | 9.35 | very light dot |
| Uncertainty | 5.79 | Maybe it's different |
+ +Table 2: Estimated frequency of nuanced expressions from Udagawa and Aizawa (2019). + +More recently, Udagawa and Aizawa (2020) curated all successful dialogues from the corpus and additionally conducted reference resolution annotation. Specifically, they detected all referring expressions (markables) based on minimal noun + +phrases by trained annotators and identified their referents by multiple crowdworkers (Figure 1 left, highlighted). Both annotations were shown to be reliable with high overall agreement. We show their dataset statistics at the bottom of Table 1. + +In this work, we randomly sample 600 dialogues from the latest corpus (5,191 dialogues annotated with reference resolution) to conduct further annotation of spatial expressions. + +# 3.2 Annotation Schema + +Our annotation procedure consists of three steps: spatial expression detection, argument identification and canonicalization. Based on these annotations, we conduct fine-grained analyses of the dataset (Subsection 3.3) as well as the baseline models (Subsection 4.2). For further details and examples of our annotation, see Appendix A. + +# 3.2.1 Spatial Expression Detection + +Based on the definition from Pustejovsky et al. (2011a,b), spatial expressions are "constructions that make explicit reference to the spatial attributes of an object or spatial relations between objects".3 We generally follow this definition and detect all spans of spatial attributes and relations in the dialogue. To make the distinction clear, we consider entity-level information like color and size as spatial attributes, and other information such as location and explicit attribute comparison as spatial relations. Spatial attributes could be annotated as adjectives ("dark"), prepositional phrases ("of light color") or noun phrases ("a black dot"), while spatial relations could be adjectives ("lighter"), prepositions ("near"), and so on. We also detect modifiers of spatial expressions based on nuanced expressions (c.f. 
Table 2). + +Although we allow certain flexibility in determining their spans, holistic/dependent expressions (such as "all shades of gray", "sloping up to the right", "very slightly") were instructed to be annotated as a single span. Independent expressions (e.g. connected by conjunctions) could be annotated separately or jointly if they had the same structure (e.g. same arguments and modifiers). + +For the sake of efficiency, we do not annotate spatial attributes and their modifiers inside markables (see Figure 1), since their spans and arguments are easy to be detected automatically. + +# 3.2.2 Argument Identification + +Secondly, we consider the detected spatial expressions as predicates and annotate referring expressions (markables) as their arguments. This approach has several advantages: first, it has broad coverage since referring expressions are prevalent in visual dialogues. In addition, by leveraging exophoric references which directly bridge natural language and the visual context, we can conduct essential analyses related to symbol grounding across the two modalities (Subsection 4.2). + +To be specific, we distinguish the argument roles based on subjects and objects. We allow arguments to be in previous utterances only if they are unavailable in the present utterance. Multiple markables can be annotated for the subject/object roles, and no object need to be annotated in cases of spatial attributes, nominal/verbal expressions ("triangle", "clustered") or implicit global objects as in superlatives ("darkest (of all)"). If the arguments are indeterminable based on these roles (as in enumeration, e.g. "From left to right, there are ..."), they were marked as unannotatable. Modificands of the modifiers (which could be either spatial attributes or relations) were also identified in this step. + +# 3.2.3 Canonicalization + +Finally, we conduct canonicalization of the spatial expressions and modifiers. 
Since developing a complete ontology for this domain is infeasible or too expensive, we focus on canonicalizing the central spatial relations in this work: we do not canonicalize spatial attributes manually, since the central spatial attributes can be canonicalized automatically (c.f. Subsubsection 4.2.1).

According to Landau (2017), there are two classes of relations in spatial language: a functional class, whose core meanings engage force-dynamic relationships (such as on, in), and a geometric class, whose core meanings engage geometry (such as left, above). Since functional relations are less common in this dataset and more difficult to define due to their vagueness and context dependence (Platonov and Schubert, 2018), we focus on the following 5 categories of geometric relations and attribute comparisons, comprising a total of 24 canonical relations which can be defined explicitly.

Direction requires the subjects and objects to be placed in a certain orientation: left, right, above, below, horizontal, vertical, diagonal.

Proximity is related to the distance between subjects, objects or other entities: near, far, alone.

Region restricts the subjects to be in a certain region specified by the objects: interior, exterior.

Color comparison is related to the comparison of color between subjects and objects: lighter, lightest, darker, darkest, same color, different color.

Size comparison is related to the comparison of size between subjects and objects: smaller, smallest, larger, largest, same size, different size.

To be specific, we annotate whether each detected spatial relation implies any of the 24 canonical relations. Each spatial relation can imply multiple canonical relations (e.g. "on the upper right" implies right and above) or none (e.g. "triangle" does not imply any of the above relations).

In addition, we define 6 modification types (subtlety, extremity, uncertainty, certainty, neutrality and negation) and canonicalize each modifier into one type.
For example, "very slightly" is considered to have the overall type of subtlety.

# 3.3 Results

# 3.3.1 Annotation Reliability
| Annotation | % Agreement | Cohen's κ |
| --- | --- | --- |
| Attribute Span | 98.5 | 0.88 |
| Relation Span | 95.1 | 0.87 |
| Modifier Span | 99.2 | 0.86 |
| Subject Ident. | 98.8 | 0.96 |
| Object Ident. | 95.9 | 0.79 |
| Modificand Ident. | 99.6 | 0.98 |
| Relation Canon. | 99.7 | 0.96 |
| Modifier Canon. | 87.5 | 0.83 |
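The κ values above complement raw percent agreement by correcting it for chance agreement. A minimal sketch of the unweighted computation over toy token-level span labels (not the actual annotations) is:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Unweighted Cohen's kappa for two annotators labeling the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each annotator's marginal label distribution.
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_e = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy token-level labels: 1 = token inside a spatial expression span.
a = [1, 1, 0, 0, 1, 0, 0, 0]
b = [1, 1, 0, 0, 0, 0, 0, 1]
kappa = cohens_kappa(a, b)  # high raw agreement (6/8) but only modest kappa
```

The toy example illustrates why both numbers are reported: with skewed label distributions, raw agreement can look high even when chance-corrected agreement is modest.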
To test the reliability of our annotation, two trained annotators (the authors) independently detected the spatial expressions and modifiers in 50 dialogues. Then, using the 50 dialogues from one of the annotators, the two annotators independently conducted argument identification and canonicalization. We show the observed agreement and Cohen's $\kappa$ (Cohen, 1968) in Table 3.

For span detection, we computed the token-level agreement of spatial expressions and modifiers. Despite having certain freedom in determining their spans, we observed very high agreement (including their starting positions, see Appendix B).

For argument identification, we computed the exact match rate of the arguments and modificands. As a result, we observed near perfect agreement for subject/modificand identification. For object identification, the result seems relatively worse; however, upon further inspection, we verified that $73.5\%$ of the disagreements were essentially based on the same markables (e.g. coreferences).

Finally, we observed reasonably high agreement for relation/modifier canonicalization as well. Overall, we conclude that all steps of our annotation can be conducted with high reliability.

Table 3: Results of our reliability analysis.

# 3.3.2 Annotation Statistics
| | Attribute | Relation |
| --- | --- | --- |
| Total | 378 | 4,300 |
| Unique | 121 | 1,139 |
| Avg. per dialogue | 0.63 | 7.17 |
| % inter-utterance subject | 1.59 | 1.37 |
| % inter-utterance object | - | 14.65 |
| % no object | - | 30.84 |
| % modified | 36.51 | 16.86 |
| % unannotatable | 0.79 | 0.79 |
Table 4: Statistics of our spatial expression annotation in 600 randomly sampled dialogues.

The basic statistics of our annotation are summarized in Table 4. Note that there are relatively few spatial attributes annotated, since most of them appeared inside the markables (hence were not detected manually). However, a large number of spatial relations with non-obvious structures were identified.

For both expression types, we found over $1\%$ of the subjects and $14\%$ of the objects to be present only in previous utterances, which indicates that argument-level ellipses are common and need to be resolved in visual dialogues. For spatial relations, about $30\%$ did not have any explicit objects.

Our annotation also verified that a large portion of the spatial expressions (37% for spatial attributes and 17% for relations) were accompanied by modifiers.

Finally, less than $1\%$ of spatial expressions were unannotatable based on our schema, which verifies its broad coverage. Overall, our annotation can capture important linguistic structures of visually grounded dialogues, and it is straightforward to conduct even further analyses (e.g. by focusing on specific canonical relations or modifications).

# 4 Experiments

# 4.1 Reference Resolution

Reference resolution is an important subtask of visual dialogue that can be used for probing a model's understanding of the intermediate dialogue process (Udagawa and Aizawa, 2020). As illustrated in Figure 1 (left), this is a simple task of predicting the referents for each markable based on the speaker's perspective. To collect model predictions for all dialogues, we split the whole dataset into 10 equal-sized bins and use each bin as the test set in 10 rounds of the experiments. For a more detailed setup of our experiments, see Appendix C.

# 4.1.1 Models

![](images/dbbc6e3208526ba17797b6948f63b2033d76830813597f53f1b1c74eb1357d88.jpg)
Figure 2: Our model architecture.
REF prediction flow is shown in blue and NUMREF prediction flow in red.

As a baseline, we use the REF model proposed in Udagawa and Aizawa (2020). As shown in Figure 2, this model has two encoders: a dialogue encoder based on a simple GRU (Cho et al., 2014) and an entity encoder which outputs entity-level representations of the observation based on an MLP and a relational network (Santoro et al., 2017). To predict the referents, REF takes the GRU states at the start position of the markable, the end position of the markable and the end position of the utterance to compute entity-level scores, and judges whether each entity is a referent based on logistic regression.

However, since the predictions are made independently for each entity, this model often predicts the wrong number of referents, leading to low performance in terms of exact match rate. To address this issue, we trained a separate module to track the number of referents in each markable. We formulate this as a simple classification task between 0, 1, ..., 7, which can be predicted reliably with an average accuracy of $92\%$. Based on this module's prediction $k$, we simply take the top $k$ entities with the highest scores as the referents. We refer to this numerically constrained model as NUMREF.

Furthermore, we conduct feature-level ablations to study the importance of each feature: for instance, we remove the xy-values from the structured input to ablate the location feature.

# 4.1.2 Results
| Model | Entity-Level Accuracy | Markable-Level Exact Match |
| --- | --- | --- |
| REF | 85.71±0.23 | 33.15±1.00 |
| REF -location | 84.28±0.27 | 30.53±0.84 |
| REF -color | 83.08±0.32 | 17.09±1.04 |
| REF -size | 83.50±0.22 | 19.41±0.98 |
| NUMREF | 86.03±0.33 | 54.94±0.76 |
| NUMREF -location | 83.35±0.26 | 49.77±0.64 |
| NUMREF -color | 81.19±0.41 | 39.74±1.31 |
| NUMREF -size | 82.39±0.20 | 43.40±0.67 |
| Human | 96.26 | 86.90 |
Table 5: Reference resolution results.

We report the mean and standard deviation of the entity-level accuracy and the markable-level exact match rate in Table 5. Compared to REF, our NUMREF model slightly improves the entity-level accuracy and significantly outperforms it in terms of the exact match rate, which validates our motivation. From the ablation studies, we can see that all features contribute to the overall performance, but color and size seem to have the largest impact.

However, it is difficult to see how and where these models struggle based on accuracy alone. For further investigation, we need more sophisticated behavioral (i.e. black-box) testing to verify whether each model has the capability of recognizing certain concepts or linguistic structures (Ribeiro et al., 2020).

# 4.2 Model Analysis

To study the current models' strengths and weaknesses in detail, we investigate whether their predictions are consistent with the central spatial expressions.

# 4.2.1 Spatial Attributes

First, we analyze whether the model predictions are consistent with the entity-level spatial attributes. Since most of these were confirmed to appear inside the markables (Subsection 3.3), we automatically detect all expressions of color in the markables, plot the distributions of the actual referent colors, and compare the results between the gold human annotation and the model predictions (Figure 3).

From the figure, we can verify that the two distributions look almost identical for the common color expressions, and our NUMREF model seems

![](images/bf05c7af4355c6fd45929a67b9f1366b93df4fa60f3a37cebc9ed8269e39eaa3.jpg)

![](images/f3c2c21665c92d81153a4dd20b6665b266301b0a1cf03a02acf8c207abe72788.jpg)
Figure 3: Referent color distributions. Top is human, bottom is NUMREF (smaller is darker in the color axis).
to capture important characteristics of pragmatic expressions (the same expression being used for a wide range of colors) and of modifications such as neutrality (medium) and extremity (very dark, very light).4 We observed very similar results for the size distributions, which are available in Appendix D.

Based on these results, we argue that the current model can capture entity-level attributes very well, including basic modification.

# 4.2.2 Spatial Relations

Next, we investigate whether the model predictions are consistent with the central spatial relations. Based on our annotation (Subsection 3.2), we conduct simple tests to check whether the predicted referents satisfy each canonical relation. To be specific, our tests check two conditions: whether the predictions are valid (i.e. satisfy the minimal requirements, e.g. at least 2 referents predicted for the near relation), and, if they are valid, whether the predictions actually satisfy the canonical relation (e.g. the referents are closer than a certain threshold).

Algorithm 1 shows our test for the canonical left relation. Note that if no objects are annotated, we simply test whether the referents are on the left side of the player's view. For further details and examples of our canonical relation tests, see Appendix E.

# Algorithm 1: Test for the left relation

Input: subject referents $\mathcal{S}$, object referents $\mathcal{O}$, boolean no_object

Output: boolean satisfy, boolean valid

if no_object then

$\quad valid \leftarrow |\mathcal{S}| > 0$

$\quad satisfy \leftarrow valid \wedge \mathrm{mean}(\mathcal{S}.x) < 0$

else

$\quad valid \leftarrow |\mathcal{S}| > 0 \wedge |\mathcal{O}| > 0$

$\quad satisfy \leftarrow valid \wedge \mathrm{mean}(\mathcal{S}.x) < \mathrm{mean}(\mathcal{O}.x)$

return satisfy, valid

The results of our tests are summarized in Table 6.
We also compare with the feature-ablated models to estimate which test cases can be satisfied without using the corresponding features, i.e. location for the direction/proximity/region categories, color for color comparison, and size for size comparison.

First, we can verify that the human annotation passes most of our tests, which is important evidence for the validity of our annotations and tests. We also confirmed that the REF models often make invalid predictions with overall poor performance, which is consistent with our expectation.

In the direction, proximity and region categories, we found that the NUMREF model performs on par with or only marginally better than its ablated version (and even underperforms it for simple relations like right and above): these results indicate that the current model is still incapable of leveraging locational features to make more consistent predictions.

In color/size comparison, NUMREF performs reasonably well, outperforming all other models: this indicates that the model can not only capture but also compare entity-level attributes to a certain extent. However, there is still room for improvement in almost all relations. It is also worth noting that size comparison may be easier, as the range of sizes is limited (only 6 values, compared to 150 for color).

Overall, we conclude that the current models still struggle to capture most of the inter-entity relations, especially those related to placement.

# 4.2.3 Further Analyses

Finally, we conduct further analyses to study other linguistic factors that affect model performance.
| Category | Relation | # Cases | REF satisfy | REF valid | REF-abl satisfy | REF-abl valid | NUMREF satisfy | NUMREF valid | NUMREF-abl satisfy | NUMREF-abl valid | Human satisfy | Human valid |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Direction | left | 412 | 23.5 | 32.3 | 21.1 | 28.9 | 67.0 | 99.5 | 62.4 | 99.5 | 95.9 | 97.6 |
| | right | 468 | 28.0 | 35.5 | 24.6 | 30.8 | 67.3 | 98.7 | 68.2 | 98.7 | 95.3 | 96.4 |
| | above | 514 | 28.6 | 37.4 | 24.7 | 33.1 | 65.2 | 99.2 | 66.5 | 99.4 | 96.7 | 98.6 |
| | below | 444 | 25.2 | 34.5 | 21.6 | 27.9 | 66.0 | 99.1 | 62.2 | 99.1 | 96.4 | 96.8 |
| | horizontal | 37 | 54.1 | 70.3 | 27.0 | 59.5 | 59.5 | 100.0 | 51.4 | 97.3 | 91.9 | 100.0 |
| | vertical | 46 | 37.0 | 73.9 | 23.9 | 54.3 | 43.5 | 95.7 | 45.7 | 95.7 | 82.6 | 100.0 |
| | diagonal | 50 | 48.0 | 74.0 | 30.0 | 50.0 | 60.0 | 98.0 | 60.0 | 98.0 | 90.0 | 100.0 |
| | All | 1,971 | 27.8 | 37.6 | 23.4 | 31.9 | 65.5 | 99.0 | 64.1 | 99.0 | 95.5 | 97.6 |
| Proximity | near | 271 | 49.4 | 61.3 | 29.9 | 49.1 | 77.1 | 94.5 | 56.1 | 95.2 | 95.2 | 96.7 |
| | far | 27 | 29.6 | 40.7 | 33.3 | 40.7 | 77.8 | 100.0 | 92.6 | 100.0 | 96.3 | 96.3 |
| | alone | 111 | 36.9 | 44.1 | 45.0 | 54.1 | 68.5 | 94.6 | 67.6 | 94.6 | 91.9 | 94.6 |
| | All | 409 | 44.7 | 55.3 | 34.2 | 49.9 | 74.8 | 94.9 | 61.6 | 95.4 | 94.4 | 96.1 |
| Region | interior | 135 | 38.5 | 52.6 | 27.4 | 39.3 | 62.2 | 93.3 | 58.5 | 94.1 | 96.3 | 100.0 |
| | exterior | 62 | 40.3 | 48.4 | 40.3 | 53.2 | 80.6 | 98.4 | 87.1 | 98.4 | 98.4 | 98.4 |
| | All | 197 | 39.1 | 51.3 | 31.5 | 43.7 | 68.0 | 94.9 | 67.5 | 95.4 | 97.0 | 99.5 |
| Color | lighter | 147 | 23.1 | 25.9 | 6.8 | 8.2 | 84.4 | 100.0 | 57.1 | 99.3 | 97.3 | 98.0 |
| | lightest | 42 | 45.2 | 66.7 | 14.3 | 33.3 | 61.9 | 100.0 | 31.0 | 100.0 | 83.3 | 100.0 |
| | darker | 171 | 24.0 | 26.3 | 7.0 | 10.5 | 83.0 | 99.4 | 53.2 | 99.4 | 95.9 | 98.8 |
| | darkest | 48 | 56.2 | 64.6 | 14.6 | 33.3 | 66.7 | 100.0 | 35.4 | 100.0 | 89.6 | 97.9 |
| | same | 50 | 12.0 | 30.0 | 8.0 | 30.0 | 40.0 | 88.0 | 32.0 | 86.0 | 92.0 | 96.0 |
| | different | 14 | 64.3 | 71.4 | 71.4 | 71.4 | 64.3 | 100.0 | 78.6 | 92.9 | 92.9 | 100.0 |
| | All | 472 | 28.8 | 35.4 | 10.4 | 18.0 | 74.8 | 98.5 | 49.2 | 97.9 | 94.1 | 98.3 |
| Size | smaller | 213 | 27.7 | 31.5 | 7.5 | 9.9 | 80.8 | 100.0 | 59.6 | 100.0 | 98.6 | 99.5 |
| | smallest | 52 | 71.2 | 73.1 | 21.2 | 34.6 | 86.5 | 98.1 | 48.1 | 98.1 | 92.3 | 98.1 |
| | larger | 238 | 23.1 | 28.6 | 9.7 | 16.0 | 73.5 | 99.6 | 48.7 | 99.6 | 98.3 | 98.3 |
| | largest | 61 | 52.5 | 60.7 | 11.5 | 24.6 | 73.8 | 100.0 | 39.3 | 100.0 | 96.7 | 100.0 |
| | same | 103 | 34.0 | 42.7 | 18.4 | 27.2 | 80.6 | 88.3 | 65.0 | 91.3 | 98.1 | 100.0 |
| | different | 12 | 75.0 | 75.0 | 66.7 | 66.7 | 91.7 | 91.7 | 83.3 | 83.3 | 91.7 | 91.7 |
| | All | 679 | 33.4 | 38.7 | 12.4 | 18.9 | 78.2 | 97.8 | 54.3 | 98.1 | 97.6 | 99.0 |
Table 6: Canonical relation test results. We compute the satisfy and valid rates of the predictions for each canonical relation (-abl denotes the corresponding feature-ablated model).
| Linguistic Factor | # Cases | NUMREF | Human |
| --- | --- | --- | --- |
| strong modification | 149 | 76.51 | 95.97 |
| neutral | 3,094 | 70.46 | 95.77 |
| weak modification | 490 | 66.12 | 95.10 |
| inter-utterance subject | 14 | 57.14 | 92.86 |
| inter-utterance object | 265 | 72.08 | 94.72 |
| no object | 1,127 | 74.45 | 92.99 |
| ignorable object | 1,805 | 69.64 | 97.23 |
| unignorable object | 796 | 65.33 | 96.11 |
| All | 3,728 | 70.17 | 95.71 |
Table 7: Satisfy rate classified by linguistic factors.

Table 7 shows the results of our relation tests classified by notable linguistic structures.

In terms of modification, we can confirm that human performance is consistently high, while the model performs best for strong modification (the modification types of extremity or certainty), decently for neutrals (neutrality or no modification), and worst for weak modification (subtlety or uncertainty). This indicates that large, conspicuous features are easier for the model to capture than small or more ambiguous ones.

In terms of subject/object properties, human performance is also consistently high. In contrast, model performance is significantly worse for subject ellipsis (inter-utterance subject), while remaining high for object ellipsis and no-object cases.

We also hypothesize that a large portion of the relations can actually be satisfied without considering the objects, e.g. by simply predicting very dark dots as the subjects when the relation is darker or darkest. To distinguish such easy cases, we consider a relation as ignorable object if the relation can be satisfied even when we ignore the objects (i.e. remove all object relations) based on the gold referents. Our results verify that there are indeed many cases of ignorable object, and they seem slightly easier for the model to satisfy.
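As a hypothetical illustration of the ignorable object check for the darker relation: the dict-based entity encoding (smaller "color" value means darker) and the object-free fallback rule (the gold subjects are the darkest dots in the view, so darker holds regardless of which objects are meant) are assumptions for this sketch, not the paper's exact test.

```python
from statistics import mean

def satisfies_darker(subjects, objects):
    """Canonical `darker` test with explicit objects. Entities are
    assumed to be dicts whose "color" value is smaller when darker."""
    if not subjects or not objects:
        return False  # invalid prediction
    return mean(s["color"] for s in subjects) < mean(o["color"] for o in objects)

def darker_ignoring_objects(subjects, context):
    """Object-free fallback (our assumption): the subjects are the
    darkest dots in the speaker's view, so `darker` holds no matter
    which objects are intended."""
    if not subjects:
        return False
    darkest = min(e["color"] for e in context)
    return all(s["color"] == darkest for s in subjects)

def is_ignorable_object(gold_subjects, context):
    """A relation counts as `ignorable object` when the gold subject
    referents satisfy it even with all object relations removed."""
    return darker_ignoring_objects(gold_subjects, context)
```

In this sketch, the ignorable-object decision depends only on the gold subjects and the visual context, mirroring the definition above.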
| Value | Mod-type | NUMREF diff. | NUMREF # valid | Human diff. | Human # valid |
| --- | --- | --- | --- | --- | --- |
| xy-value | strong | 86.06 | 39 | 89.15 | 37 |
| | neutral | 80.92 | 1,586 | 73.52 | 1,558 |
| | weak | 80.35 | 200 | 53.53 | 198 |
| color | strong | 66.23 | 15 | 91.80 | 15 |
| | neutral | 56.98 | 234 | 60.14 | 232 |
| | weak | 37.73 | 68 | 28.55 | 66 |
| size | strong | 3.60 | 8 | 4.29 | 8 |
| | neutral | 2.67 | 337 | 2.70 | 320 |
| | weak | 1.95 | 105 | 1.58 | 104 |
Table 8: Absolute difference in comparative relations (the number of valid predictions is also shown).

In Table 8, we study the effect of modification based on the absolute difference between the subject and object features in comparative relations.

In the human annotation, the absolute difference naturally increases as the modification gets stronger. While the model predictions also show this tendency, their results seem less sensitive to modification (particularly for locational features, i.e. the xy-values) and may not reflect its full effect.

# 5 Discussion and Conclusion

In this work, we focused on the recently proposed OneCommon Corpus as a suitable testbed for fine-grained language understanding in visually grounded dialogues. To analyze its linguistic structures, we proposed a novel framework for annotating spatial expressions in visual dialogues. We showed that our annotation can be conducted reliably and efficiently by leveraging the referring expressions prevalent in visual dialogues, while capturing important linguistic structures such as PAS, modification and ellipsis. Although our current analysis is limited to this domain, we expect that, given an appropriate definition of spatial expressions, argument roles and canonicalization, the general approach can be applied to a wider variety of domains: adapting and validating our approach in different domains (especially ones with more realistic visual contexts) is left as future work.

Secondly, we proposed a simple idea of incorporating numerical constraints to improve exophoric reference resolution. We expect that a similar approach of identifying and incorporating semantic constraints (e.g. coreferences and spatial constraints) is a promising direction for improving the model's performance even further.

Finally, we demonstrated the advantages of our annotation for investigating the model's understanding of visually grounded dialogues.
Our tests are completely agnostic to the models and only require referent predictions made by each model. By designing simple tests like ours (Subsubsection 4.2.1/4.2.2), we can diagnose the model's performance at the granularity of canonical attributes/relations under consideration: such analyses are easy to extend (by adding more tests) and critical for verifying what capabilities current models have (or do not have). Based on further analyses (Subsubsection 4.2.3), we also revealed various linguistic structures that affect model performance: we expect that capturing and studying such effects will be essential for advanced model probing in visual dialogue research. + +Overall, we expect our framework and resource to be fundamental for conducting sophisticated linguistic analyses of visually grounded dialogues. + +# Acknowledgements + +This work was supported by JSPS KAKENHI Grant Number 18H03297 and NEDO SIP-2 "Big-data and AI-enabled Cyberspace Technologies." We also thank the anonymous reviewers for their valuable suggestions and comments. + +# References + +Shubham Agarwal, Trung Bui, Joon-Young Lee, Ioannis Konstas, and Verena Rieser. 2020. History for visual dialog: Do we really need it? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8182-8197, Online. Association for Computational Linguistics. + +Huda Alamri, Vincent Cartillier, Abhishek Das, Jue Wang, Anoop Cherian, Irfan Essa, Dhruv Batra, Tim K Marks, Chiori Hori, Peter Anderson, et al. 2019. Audio visual scene-aware dialog. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7558-7567. + +Nicholas Asher, Nicholas Michael Asher, and Alex Lascarides. 2003. *Logics of conversation*. Cambridge University Press. + +John Langshaw Austin. 1962. How to do things with words. Oxford university press. +Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. 
Transactions of the Association for Computational Linguistics (TACL), 7:49-72. +Prithvijit Chattopadhyay, Deshraj Yadav, Viraj Prabhu, Arjun Chandrasekaran, Abhishek Das, Stefan Lee, Dhruv Batra, and Devi Parikh. 2017. Evaluating visual conversational agents via cooperative humanai games. In Proceedings of the Fifth AAAI Conference on Human Computation and Crowdsourcing (HCOMP). +Hongge Chen, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, and Cho-Jui Hsieh. 2018. Attacking visual language grounding with adversarial examples: A case study on neural image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2587-2597, Melbourne, Australia. Association for Computational Linguistics. +Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bah-danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103-111. Association for Computational Linguistics. +Volkan Cirik, Louis-Philippe Morency, and Taylor Berg-Kirkpatrick. 2018. Visual referring expression recognition: What do systems actually learn? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 781-787, New Orleans, Louisiana. Association for Computational Linguistics. +Herbert H Clark. 1996. Using language. Cambridge university press. +Jacob Cohen. 1968. Weighted kappa: nominal scale agreement provision for scaled disagreement or partial credit. Psychological bulletin, 70(4):213. +Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José MF Moura, Devi Parikh, and Dhruv Batra. 2017a. Visual dialog. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 2. +Abhishek Das, Satwik Kottur, Jose M. F. 
Moura, Stefan Lee, and Dhruv Batra. 2017b. Learning cooperative visual dialog agents with deep reinforcement learning. In The IEEE International Conference on Computer Vision (ICCV). +Sam Davidson, Dian Yu, and Zhou Yu. 2019. Dependency parsing for spoken dialog systems. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the + +9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1513-1519, Hong Kong, China. Association for Computational Linguistics. +Harm De Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, and Aaron Courville. 2017. Guesswhat?! visual object discovery through multi-modal dialogue. In Proc. of CVPR. +Layla El Asri, Hannes Schulz, Shikhar Sharma, Jeremie Zumer, Justin D. Harris, Emery Fine, Rahul Mehrotra, and Kaheer Suleman. 2017. Frames: A corpus for adding memory to goal-oriented dialogue systems. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 207-219. Association for Computational Linguistics. +Zhe Gan, Yu Cheng, Ahmed Kholy, Linjie Li, Jingjing Liu, and Jianfeng Gao. 2019. Multi-step reasoning via recurrent dual attention for visual dialog. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6463-6474, Florence, Italy. Association for Computational Linguistics. +Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, et al. 2020. Evaluating nlp models via contrast sets. arXiv preprint arXiv:2004.02709. +Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1161-1166, Hong Kong, China. Association for Computational Linguistics. +Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6904-6913. +Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Kumar, and Mike Lewis. 2018. Semantic parsing for task oriented dialog using hierarchical representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2787-2792, Brussels, Belgium. Association for Computational Linguistics. +Janosch Haber, Tim Baumgartner, Ece Takmaz, Lieke Gelderloos, Elia Bruni, and Raquel Fernandez. 2019. The PhotoBook dataset: Building common ground through visually-grounded dialogue. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1895-1910, Florence, Italy. Association for Computational Linguistics. + +Victor Petrén Bach Hansen and Anders Søgaard. 2020. What do you mean 'why'?: Resolving sluices in conversations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7887-7894. +Stevan Harnad. 1990. The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1-3):335-346. +Annette Herskovits. 1987. Language and spatial cognition. Cambridge university press. +Gi-Cheon Kang, Jaeseo Lim, and Byoung-Tak Zhang. 2019. Dual attention networks for visual reference resolution in visual dialog. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2024-2033, Hong Kong, China. 
Association for Computational Linguistics. +Divyansh Kaushik, Eduard Hovy, and Zachary C Lipton. 2020. Learning the difference that makes a difference with counterfactually-augmented data. International Conference on Learning Representations (ICLR). +Hyounghun Kim, Hao Tan, and Mohit Bansal. 2020. Modality-balanced models for visual dialogue. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8091-8098. +Parisa Kordjamshidi, Martijn Van Otterlo, and Marie-Francine Moens. 2010. Spatial role labeling: Task definition and annotation scheme. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta. European Language Resources Association (ELRA). +Satwik Kottur, Jose M. F. Moura, Devi Parikh, Dhruv Batra, and Marcus Rohrbach. 2018. Visual coreference resolution in visual dialog using neural module networks. In The European Conference on Computer Vision (ECCV). +Satwik Kottur, José M. F. Moura, Devi Parikh, Dhruv Batra, and Marcus Rohrbach. 2019. CLEVR-dialog: A diagnostic dataset for multi-round reasoning in visual dialog. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 582-595, Minneapolis, Minnesota. Association for Computational Linguistics. +Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32-73. + +Barbara Landau. 2017. Update on "what" and "where" in spatial language: A new division of labor for spatial terms. Cognitive science, 41:321-350. +Barbara Landau and Ray Jackendoff. 1993. "what" and "where" in spatial language and spatial cognition. Behavioral and brain sciences, 16(2):217-238. 
+Alex Lascarides and Nicholas Asher. 2009. Agreement, disputes and commitments in dialogue. Journal of semantics, 26(2):109-158. +Daniela Massiceti, Puneet K Dokania, N Siddharth, and Philip HS Torr. 2018. Visual dialogue without vision or dialogue. arXiv preprint arXiv:1812.06417. +Grégoire Mesnil, Xiaodong He, Li Deng, and Yoshua Bengio. 2013. Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding. In *Interspeech*, pages 3771-3775. +Will Monroe, Robert X. D. Hawkins, Noah D. Goodman, and Christopher Potts. 2017. Colors in context: A pragmatic neural model for grounded language understanding. Transactions of the Association for Computational Linguistics, 5:325-338. +Vishvak Murahari, Dhruv Batra, Devi Parikh, and Abhishek Das. 2019. Large-scale pretraining for visual dialog: A simple state-of-the-art baseline. arXiv preprint arXiv:1912.02379. +Anjali Narayan-Chen, Prashant Jayannavar, and Julia Hockenmaier. 2019. Collaborative dialogue in Minecraft. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5405-5415, Florence, Italy. Association for Computational Linguistics. +Yulei Niu, Hanwang Zhang, Manli Zhang, Jianhong Zhang, Zhiwu Lu, and Ji-Rong Wen. 2019. Recursive visual attention in visual dialog. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). +Wei Pang and Xiaojie Wang. 2020. Visual dialogue state tracking for question generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 11831-11838. +Miriam R. L. Petruck and Michael J. Ellsworth. 2018. Representing spatial relations in FrameNet. In Proceedings of the First International Workshop on Spatial Language Understanding, pages 41-45, New Orleans. Association for Computational Linguistics. +Georgiy Platonov and Lenchart Schubert. 2018. Computational models for spatial prepositions. 
In Proceedings of the First International Workshop on Spatial Language Understanding, pages 21-30. +James Pustejovsky, Parisa Kordjamshidi, Marie-Francine Moens, Aaron Levine, Seth Dworman, and Zachary Yocum. 2015. Semeval-2015 task 8: + +Spaceeval. In Proceedings of the 9th International Workshop on Semantic Evaluation (semeval 2015), pages 884-894. ACL. +James Pustejovsky, Jessica L Moszkowicz, and Marc Verhagen. 2011a. Iso-space: The annotation of spatial information in language. In Proceedings of the Sixth Joint ISO-ACL SIGSEM Workshop on Interoperable Semantic Annotation, volume 6, pages 1-9. +James Pustejovsky, Jessica L Moszkowicz, and Marc Verhagen. 2011b. Using iso-space for annotating spatial information. In Proceedings of the International Conference on Spatial Information Theory. +Jun Quan, Deyi Xiong, Bonnie Webber, and Changjian Hu. 2019. GECOR: An end-to-end generative ellipsis and co-reference resolution model for task-oriented dialogue. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4547-4557, Hong Kong, China. Association for Computational Linguistics. +Arnau Ramisa, Josiah Wang, Ying Lu, Emmanuel Dellandrea, Francesc Moreno-Noguer, and Robert Gaizauskas. 2015. Combining geometric, textual and visual features for predicting prepositions in image descriptions. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 214-220, Lisbon, Portugal. Association for Computational Linguistics. +Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4902-4912, Online. Association for Computational Linguistics. 
+Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Tim Lillicrap. 2017. A simple neural network module for relational reasoning. In Advances in neural information processing systems, pages 4967-4976. +John R Searle. 1969. Speech acts: An essay in the philosophy of language, volume 626. Cambridge university press. +Ravi Shekhar, Aashish Venkatesh, Tim Baumgartner, Elia Bruni, Barbara Plank, Raffaella Bernardi, and Raquel Fernandez. 2019. Beyond task success: A closer look at jointly learning to see, ask, and Guess-What. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2578-2587, Minneapolis, Minnesota. Association for Computational Linguistics. + +Yangyang Shi, Kaisheng Yao, Le Tian, and Daxin Jiang. 2016. Deep LSTM based feature mapping for query classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1501-1511, San Diego, California. Association for Computational Linguistics. +Pushkar Shukla, Carlos Elmadjian, Richika Sharan, Vivek Kulkarni, Matthew Turk, and William Yang Wang. 2019. What should I ask? using conversationally informative rewards for goal-oriented visual dialog. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6442-6451, Florence, Italy. Association for Computational Linguistics. +Joao Silva, Luísca Coheur, Ana Cristina Mendes, and Andreas Wichert. 2011. From symbolic to sub-symbolic information in question classification. Artificial Intelligence Review, 35(2):137-154. +Amanda Stent. 2000. Rhetorical structure in dialog. In INLG'2000 Proceedings of the First International Conference on Natural Language Generation, pages 247-252, Mitzpe Ramon, Israel. Association for Computational Linguistics. 
+Leonard Talmy. 1983. How language structures space. In Spatial orientation, pages 225-282. Springer. +Takuma Udagawa and Akiko Aizawa. 2019. A natural language corpus of common grounding under continuous and partially-observable context. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7120-7127. +Takuma Udagawa and Akiko Aizawa. 2020. An annotated corpus of reference resolution for interpreting common grounding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9081-9089. +Morgan Ulinski, Bob Coyne, and Julia Hirschberg. 2019. SpatialNet: A declarative resource for spatial relations. In Proceedings of the Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP), pages 61-70, Minneapolis, Minnesota. Association for Computational Linguistics. +Harm de Vries, Kurt Shuster, Dhruv Batra, Devi Parikh, Jason Weston, and Douwe Kiela. 2018. Talk the walk: Navigating new york city through grounded dialogue. arXiv preprint arXiv:1807.03367. +Jason Williams, Antoine Raux, and Matthew Henderson. 2016. The dialog state tracking challenge series: A review. Dialogue & Discourse, 7(3):4-33. +Qi Wu, Peng Wang, Chunhua Shen, Ian Reid, and Anton van den Hengel. 2018. Are you talking to me? reasoned visual dialog generation through adversarial learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). + +Koichiro Yoshino, Shinsuke Mori, and Tatsuya Kawahara. 2011. Spoken dialogue system based on information extraction using similarity of predicate argument structures. In Proceedings of the SIGDIAL 2011 Conference, pages 59-66. Association for Computational Linguistics. + +Xintong Yu, Hongming Zhang, Yangqiu Song, Yan Song, and Changshui Zhang. 2019. What you see is what you get: Visual pronoun coreference resolution in dialogues. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5123-5132, Hong Kong, China. Association for Computational Linguistics.

Sina Zarrieß, Julian Hough, Casey Kennington, Ramesh Manuvinakurike, David DeVault, Raquel Fernández, and David Schlangen. 2016. PentoRef: A corpus of spoken references in task-oriented dialogues. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 125-131, Portorož, Slovenia. European Language Resources Association (ELRA).

Zilong Zheng, Wenguan Wang, Siyuan Qi, and Song-Chun Zhu. 2019. Reasoning visual dialogs with structural and partial observations. In Computer Vision and Pattern Recognition (CVPR), 2019 IEEE Conference on.

# A Annotation Examples and Details

![](images/dd9b820fd187f878fdec691d562d2e0c5bfcd18c9fba26cfcd144da8e62ffb46.jpg)
Figure 4: Example with spatial attributes.

Here, we show additional examples of our spatial expression annotation. In Figure 4, we show an example dialogue annotated with spatial attributes (colored in red). Since our goal is not to achieve strict inter-annotator agreement but to conduct efficient and useful analysis, we allow certain flexibility in determining the spans of spatial expressions: for instance, a coordinated spatial expression ("small and light") can be annotated as a single expression or as two separate expressions ("small" and "light"). Copulas (is, being), articles (a, the), particles (to, with) and modifiers were allowed to be either omitted or included in spatial expressions. Spans were allowed to be non-contiguous, but had to be annotated at the token level and restricted to a single utterance.
Note that the spatial attributes (tiny, light) in the first markable ("a lonely tiny light dot") are not annotated, since they are inside the markable and their spans and subjects are relatively obvious.

In terms of argument identification, we prioritize markables in the following manner:

1. Markables in the present utterance (i.e. the same utterance as the spatial expression).
2. Markables in the closest previous utterance of the same speaker.
3. Markables in the closest previous utterance of a different speaker.

As long as these priorities are satisfied, we did not distinguish between coreferences. Furthermore, for object identification, we did not distinguish between markables which include/exclude the subject referents: for example, the object markable for lighter in "I have [three dots], [two] dark and [one] lighter" could be either three dots or two.

![](images/bd7bb98ab601c23ec4ed93f3cd9f6753fbd8c46870b7fb7504b591551186d9da.jpg)
Figure 5: Example with subject ellipsis.

In Figure 5, we show an example dialogue where the subject markable only appears in the previous utterance ("smaller?" in B's utterance), which demonstrates the case of subject ellipsis. Note that since we only detect expressions that contain specific spatial information about the visual context, we do not annotate black dots in the first interrogative utterance ("how many black dots do u see?").

![](images/fa2e844e3d73271dd9cf3e30722b14571efed50776715d483a835317d3912072.jpg)
Figure 6: Example with unannotatable relation.

In Figure 6, we show an example dialogue with an unannotatable relation ("going [small], [medium], [large]") which cannot be captured based on the simple argument roles of subjects and objects. In general, similar enumeration strategies are difficult to capture, as are predications with exceptions (such as "[All dots] are dark except [one dot]") or cases with bundled subjects ("[Two dots] are dark and darker").
+ +Finally, we only annotate explicit spatial attributes and relations: therefore, we do not annotate implicit relations such as darker in "One is dark and the other is light gray", although it is inferable. When the spans are difficult to annotate, annotators were encouraged to make the best effort to capture the constructions which refer to specific spatial information. + +# B Annotation Results + +
| Annotation | % Agreement | Cohen's κ |
| --- | --- | --- |
| Attribute Start | 98.5 | 0.84 |
| Relation Start | 95.1 | 0.77 |
| Modifier Start | 98.7 | 0.82 |
+ +In Table 9, we show the results of token level agreement for the starting positions of spatial expressions and modifiers. Despite having certain freedom as discussed in Appendix A, we can verify that these also have reasonably high agreement. + +Table 9: Additional results of our reliability analysis. + +
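For reference, Cohen's κ for two annotators' token-level decisions (as in the agreement figures above) can be computed as follows. This is an illustrative sketch, not the toolkit used for the paper's analysis:

```python
def cohens_kappa(ann1, ann2):
    """Cohen's kappa for two annotators' token-level labels.

    ann1, ann2: equal-length lists of labels
    (e.g. 1 = expression start, 0 = other token).
    """
    assert len(ann1) == len(ann2) and ann1
    n = len(ann1)
    # Observed agreement: fraction of tokens both annotators label identically
    p_o = sum(a == b for a, b in zip(ann1, ann2)) / n
    # Expected chance agreement from each annotator's label marginals
    labels = set(ann1) | set(ann2)
    p_e = sum((ann1.count(l) / n) * (ann2.count(l) / n) for l in labels)
    if p_e == 1.0:
        return 1.0
    return (p_o - p_e) / (1 - p_e)
```

With perfect agreement κ is 1, and agreement at chance level yields κ of 0, which is why it is a stricter reliability measure than raw % agreement.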
| | Attribute | Relation |
| --- | --- | --- |
| % mod-subtlety | 1.06 | 8.12 |
| % mod-extremity | 9.00 | 2.16 |
| % mod-uncertainty | 7.41 | 4.26 |
| % mod-certainty | 0.27 | 1.40 |
| % mod-neutrality | 19.31 | 0.67 |
| % mod-negation | 0.53 | 0.42 |
+ +

Table 10: Additional statistics of our spatial expression annotation.

In Table 10, we show the frequency of each modification type. Based on these results, we can see that neutrality is the most common type of modification for spatial attributes (as in medium gray, medium sized), and that subtlety and uncertainty are the most common types for spatial relations. It is interesting to note that the frequencies of modification types vary significantly between spatial attributes and relations, except for negation.

In Tables 11 and 12, we show the statistics and examples of canonical relations and modification types annotated for our analyses. Note that a single expression can imply multiple canonical relations (e.g. "identical looking" implies same color and same size) or no canonical relation at all (e.g. "forms a triangle"). In contrast, a modifier can have only one modification type: for instance, almost exactly is considered to have the overall modification type of certainty.

# C Experiment Setup

We use the dataset, baselines, hyperparameters and evaluation metrics publicly available at https://github.com/Alab-NII/onecommon.

In order to collect model predictions for all dialogues and markables, we randomly split the whole dataset into 10 equal-sized bins $z_{i}$ ( $i \in \{0,1,2,\dots,9\}$ ) and at each round $r \in \{0,1,2,\dots,9\}$ we use $z_{r \pmod{10}}$ , $z_{r+1 \pmod{10}}$ , ..., $z_{r+7 \pmod{10}}$ for model training, $z_{r+8 \pmod{10}}$ for validation, and $z_{r+9 \pmod{10}}$ for testing. We report the mean and standard deviation of the entity-level accuracy and markable-level exact match rate over these 10 rounds of the experiments.

In our NUMREF model, we train a separate module for predicting the number of referents based on a simple MLP (single layer, 256 hidden units). Reference resolution and number prediction are trained jointly with the loss weighted by 32:1.
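The rotating 10-bin split described above can be sketched as follows. This is an illustrative sketch (`rotation_splits` and the bin-assignment details are assumptions, not the released code):

```python
import random

def rotation_splits(data, n_bins=10, n_train=8, seed=0):
    """Rotating train/valid/test splits over n_bins equal-sized bins.

    At round r, bins r..r+n_train-1 (mod n_bins) are used for training,
    bin r+n_train (mod n_bins) for validation, and bin r+n_train+1
    (mod n_bins) for testing, so every example is tested exactly once
    across the n_bins rounds.
    """
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    bins = [idx[i::n_bins] for i in range(n_bins)]
    for r in range(n_bins):
        train = [j for i in range(n_train) for j in bins[(r + i) % n_bins]]
        valid = bins[(r + n_train) % n_bins]
        test = bins[(r + n_train + 1) % n_bins]
        yield train, valid, test
```

Because each bin serves as the test bin in exactly one round, aggregating the 10 rounds yields predictions for all dialogues and markables.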
We conducted minimal hyperparameter tuning since the results did not change dramatically. + +# D Size Distribution Plots + +Figure 7 shows the referent size distributions based on human annotation (top) and NUMREF predictions (bottom). We can verify that the two distributions look almost identical for all common expressions, as observed in the color distributions. + +# E Canonical Relation Tests + +For canonical relation tests, we only use relations that are not negated and have all arguments in the same speaker's utterances (so that referent predictions are based on the same player's observation). As illustrative examples, we show the algorithms for testing the horizontal relation (Algorithm 2), + +
| Category | Relation | Unique | Examples |
| --- | --- | --- | --- |
| Direction | left | 150 | to the left (78), on the left (35), left most (5), furthest left (2) |
| | right | 192 | to the right (120), on the right (38), lower right (6), to the northeast (1) |
| | above | 190 | above (118), top (92), on top (33), up (17), higher (10), just above (4) |
| | below | 179 | below (88), bottom (56), lower (38), down (14), lowest (7), beneath (4) |
| | horizontal | 19 | horizontal (12), in a horizontal line (4), side by side (3), across from (1) |
| | vertical | 29 | vertical (7), on top of (5), on a vertical line (4), aligned vertically with (1) |
| | diagonal | 38 | diagonal (5), in a diagonal line (5), sloping down to the right (1), slanted (1) |
| Proximity | near | 59 | close together (63), cluster (32), next to (28), close to (22), near (13) |
| | far | 21 | far (5), away from (4), set apart from (1), a ways above (1), a distance from (1) |
| | alone | 13 | by (38), lonely (30), alone (21), lonesome (1), isolated (1) |
| Region | interior | 47 | middle (41), in the middle (19), between (9), in the center of (2) |
| | exterior | 46 | close to the border (5), all around (1), on the outside of (1), surrounding (1) |
| Color | lighter | 22 | lighter (102), lighter than (10), lighter gray (8), larger lighter (4) |
| | lightest | 11 | lightest (28), lightest shade (3), the lightest of (2), lightest and smallest (2) |
| | darker | 30 | darker (130), darker than (16), smaller and darker (4), darker in color (3) |
| | darkest | 10 | darkest (40), smallest darkest (2), the darkest of (1), darkest/largest of (1) |
| | same | 9 | same color (9), identical looking (2), similar shades (1), equally black (1) |
| | different | 11 | different shades (3), different sizes and shades (2), of varying shades (1) |
| Size | smaller | 17 | smaller (209), smaller than (5), smaller and lighter (4), tinier (1) |
| | smallest | 8 | smallest (40), tiniest (4), smallest darkest (2), smallest of (1) |
| | larger | 32 | larger (178), bigger than (7), larger in size (2), double the size of (1) |
| | largest | 10 | largest (41), biggest (11), largest of (2), biggest one of (1) |
| | same | 32 | same size (24), same sized (12), similar in size (5), identical in size (3) |
| | different | 8 | different sizes (3), of different sizes (1), varying sizes (1), opposite in sizes (1) |
+ +Table 11: Unique numbers and examples of spatial relations which imply each canonical relation (frequencies shown in parentheses). + +
| Modification | Unique | Examples |
| --- | --- | --- |
| Subtlety | 27 | slightly (235), a little (48), a bit (35), a tiny bit (8), very slightly (5) |
| Extremity | 15 | very (87), much (17), pretty (8), quite (3), really (2) |
| Uncertainty | 36 | almost (85), about (49), kind of (23), smallish (6), not completely (3) |
| Certainty | 13 | directly (28), exactly (2), perfect (2), almost exactly (2) |
| Neutrality | 16 | medium (59), med (9), fairly (4), mid-size (3), slightly medium (1) |
| Negation | 4 | not (17), isn’t (1), not perceptibly (1) |
+ +

near relation (Algorithm 3), interior relation (Algorithm 4) and same color relation (Algorithm 5). Note that each algorithm can take a variety of inputs, such as all referents including both subjects and objects $(\mathcal{A})$ or all observable entities of the player $(\mathcal{E})$.

Table 12: Unique numbers and examples of modifiers with each modification type (frequencies in parentheses).
Algorithm 2: Test for horizontal relation
Input: all referents $\mathcal{A}$
Output: boolean satisfy, boolean valid
valid $\leftarrow |\mathcal{A}| > 1$
if valid then
// Conduct linear regression and check if coefficient is small
reg.fit($\mathcal{A}$)
satisfy $\leftarrow$ reg.coef $< 1/3$
else
satisfy $\leftarrow$ False
return satisfy, valid
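A runnable sketch of the horizontal-relation test above, assuming the regression check is on the absolute least-squares slope (the 1/3 threshold comes from the listing; the exact coefficient convention is an assumption):

```python
def horizontal_test(referents, coef_threshold=1/3):
    """Sketch of Algorithm 2: fit a least-squares line y = a*x + b to the
    referent coordinates and report 'horizontal' when |a| is small.

    referents: list of (x, y) points. Returns (satisfy, valid).
    """
    valid = len(referents) > 1
    if not valid:
        return False, valid
    n = len(referents)
    mx = sum(x for x, _ in referents) / n
    my = sum(y for _, y in referents) / n
    var = sum((x - mx) ** 2 for x, _ in referents)
    if var == 0:
        # All x coincide: a vertical stack, never horizontal
        return False, valid
    slope = sum((x - mx) * (y - my) for x, y in referents) / var
    return abs(slope) < coef_threshold, valid
```

As in the listing, the (satisfy, valid) pair separates "the relation does not hold" from "the expression has too few arguments to test".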
+ +![](images/78f01e3c6dd4f4800fb6633d188aa26c902c5f1ca5b0e40cdda2116e178106b2.jpg)

![](images/4978fe77408c90381b28b54cdfdda50211a750a9e50a0573281b2d6d316aed80.jpg)
Figure 7: Referent size distributions (top is human, bottom is NUMREF).

Algorithm 4: Test for interior relation
Input: subject referents $\mathcal{S}$, object referents $\mathcal{O}$, boolean no_object
Output: boolean satisfy, boolean valid
if no_object then
// If any subject referent is far from the center, satisfy is False
valid $\leftarrow |\mathcal{S}| > 0$
satisfy $\leftarrow$ valid
center $\leftarrow (0,0)$
for $s \in \mathcal{S}$ do
if dist(s, center) > 120 then
satisfy $\leftarrow$ False
else
// If any subject referent is outside the region of objects, satisfy is False
valid $\leftarrow |\mathcal{S}| > 0 \land |\mathcal{O}| > 1$
satisfy $\leftarrow$ valid
for $s \in \mathcal{S}$ do
if $(s.x < \min(\mathcal{O}.x) \lor \max(\mathcal{O}.x) < s.x) \land (s.y < \min(\mathcal{O}.y) \lor \max(\mathcal{O}.y) < s.y)$ then
satisfy $\leftarrow$ False
return satisfy, valid

Algorithm 3: Test for near relation
Input: all referents $\mathcal{A}$, observable entities $\mathcal{E}$
Output: boolean satisfy, boolean valid
valid $\leftarrow |\mathcal{A}| > 1$
if valid then
// Compute distance for every pair in the set
A_dists $\leftarrow$ dist(x, y) for x, y in combination($\mathcal{A}$)
E_dists $\leftarrow$ dist(x, y) for x, y in combination($\mathcal{E}$)
// Check if mean distance is smaller
satisfy $\leftarrow$ valid $\land$ mean(A_dists) $<$ mean(E_dists)
else
satisfy $\leftarrow$ False
return satisfy, valid

Algorithm 5: Test for same color relation
Input: all referents $\mathcal{A}$
Output: boolean satisfy, boolean valid
valid $\leftarrow |\mathcal{A}| > 1$
// Check if range of color is smaller than the threshold
satisfy $\leftarrow$ valid $\land$ max($\mathcal{A}$.color) - min($\mathcal{A}$.color) < 30
return satisfy, valid \ No newline at end of file diff --git a/alinguisticanalysisofvisuallygroundeddialoguesbasedonspatialexpressions/images.zip
b/alinguisticanalysisofvisuallygroundeddialoguesbasedonspatialexpressions/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..479ee761413258b1c1a965768ccef9e9a9b4ff08 --- /dev/null +++ b/alinguisticanalysisofvisuallygroundeddialoguesbasedonspatialexpressions/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fcf11ddba3cb089d43f95eea7c865131620d1d830218c021488169f47a98efbd +size 1048710 diff --git a/alinguisticanalysisofvisuallygroundeddialoguesbasedonspatialexpressions/layout.json b/alinguisticanalysisofvisuallygroundeddialoguesbasedonspatialexpressions/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d3127bb662ab83b56d2356d29760f4b85f9414d7 --- /dev/null +++ b/alinguisticanalysisofvisuallygroundeddialoguesbasedonspatialexpressions/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:208e8ec7d1e68d47f2cae0dd5d3ffc9f639576af2f998632ead5d041827389be +size 490476 diff --git a/alittlegoesalongwayimprovingtoxiclanguageclassificationdespitedatascarcity/9bfb649b-b577-4f5f-8784-e9b9d8fc464d_content_list.json b/alittlegoesalongwayimprovingtoxiclanguageclassificationdespitedatascarcity/9bfb649b-b577-4f5f-8784-e9b9d8fc464d_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..238b531cd093d16b2bd545a1e7ec6a6da5535671 --- /dev/null +++ b/alittlegoesalongwayimprovingtoxiclanguageclassificationdespitedatascarcity/9bfb649b-b577-4f5f-8784-e9b9d8fc464d_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a279051675292fcfbeb298ef185e79ff0fa1c5681bc0d04d90de8863a97265fb +size 128136 diff --git a/alittlegoesalongwayimprovingtoxiclanguageclassificationdespitedatascarcity/9bfb649b-b577-4f5f-8784-e9b9d8fc464d_model.json b/alittlegoesalongwayimprovingtoxiclanguageclassificationdespitedatascarcity/9bfb649b-b577-4f5f-8784-e9b9d8fc464d_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..fca76aee98991311919c2ac1876146831c2c9d04 --- /dev/null +++ b/alittlegoesalongwayimprovingtoxiclanguageclassificationdespitedatascarcity/9bfb649b-b577-4f5f-8784-e9b9d8fc464d_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ac88bac0cb8fa5ea0e2d07e0ae5275996f02a2e49a5fa80c585c0ab654c79945 +size 153219 diff --git a/alittlegoesalongwayimprovingtoxiclanguageclassificationdespitedatascarcity/9bfb649b-b577-4f5f-8784-e9b9d8fc464d_origin.pdf b/alittlegoesalongwayimprovingtoxiclanguageclassificationdespitedatascarcity/9bfb649b-b577-4f5f-8784-e9b9d8fc464d_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d18a3f7a4653394b53c94178390a56266c48131f --- /dev/null +++ b/alittlegoesalongwayimprovingtoxiclanguageclassificationdespitedatascarcity/9bfb649b-b577-4f5f-8784-e9b9d8fc464d_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f3912f94abd52f65822270f08d48ea65f4070f31fc536257af57954bc0ce447 +size 395042 diff --git a/alittlegoesalongwayimprovingtoxiclanguageclassificationdespitedatascarcity/full.md b/alittlegoesalongwayimprovingtoxiclanguageclassificationdespitedatascarcity/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c375a70d351ce51427a9c11a38275d3dc9948ee4 --- /dev/null +++ b/alittlegoesalongwayimprovingtoxiclanguageclassificationdespitedatascarcity/full.md @@ -0,0 +1,451 @@ +# A little goes a long way: Improving toxic language classification despite data scarcity + +Mika Juuti1, Tommi Gröndahl2, Adrian Flanagan3, N. Asokan1,2 + +University of Waterloo + +Aalto University2 + +Huawei Technologies Oy (Finland) Co Ltd3 + +mika.juuti@kela.fi, tommi.grondahl@aalto.fi + +adrian.flanagan@huawei.com, asokan@acm.org + +# Abstract + +Detection of some types of toxic language is hampered by extreme scarcity of labeled training data. Data augmentation – generating new synthetic data from a labeled seed dataset – can help. 
The efficacy of data augmentation on toxic language classification has not been fully explored. We present the first systematic study on how data augmentation techniques impact performance across toxic language classifiers, ranging from shallow logistic regression architectures to BERT – a state-of-the-art pretrained Transformer network. We compare the performance of eight techniques on very scarce seed datasets. We show that while BERT performed the best, shallow classifiers performed comparably when trained on data augmented with a combination of three techniques, including GPT-2-generated sentences. We discuss the interplay of performance and computational overhead, which can inform the choice of techniques under different constraints. + +# 1 Introduction + +Toxic language is an increasingly urgent challenge in online communities (Mathew et al., 2019). Although there are several datasets, most commonly from Twitter or forum discussions (Badjatiya et al., 2017; Davidson et al., 2017; Waseem and Hovy, 2016; Wulczyn et al., 2017; Zhang et al., 2018), high class imbalance is a problem with certain classes of toxic language (Breitfeller et al., 2019). Manual labeling of toxic content is onerous, hazardous (Newton, 2020), and thus expensive. + +One strategy for mitigating these problems is data augmentation (Wang and Yang, 2015; Ratner et al., 2017; Wei and Zou, 2019): complementing the manually labeled seed data with new synthetic documents. The effectiveness of data augmentation for toxic language classification has not yet been thoroughly explored. On relatively + +small toxic language datasets, shallow classifiers have been shown to perform well (Gröndahl et al., 2018). At the same time, pre-trained Transformer networks (Vaswani et al., 2017) have led to impressive results in several NLP tasks (Young et al., 2018). Comparing the effects of data augmentation between shallow classifiers and pre-trained Transformers is thus of particular interest. 
+ +We systematically compared eight augmentation techniques on four classifiers, ranging from shallow architectures to BERT (Devlin et al., 2019), a popular pre-trained Transformer network. We used downsampled variants of the Kaggle Toxic Comment Classification Challenge dataset (Jigsaw 2018; §3) as our seed dataset. We focused on the threat class, but also replicated our results on another toxic class (§4.6). With some classifiers, we reached the same F1-score as when training on the original dataset, which is $20\mathrm{x}$ larger. However, performance varied markedly between classifiers. + +We obtained the highest overall results with BERT, increasing the F1-score up to $21\%$ compared to training on seed data alone. However, augmentation using a fine-tuned GPT-2 (§3.2.4) - a pre-trained Transformer language model (Radford et al., 2019) - reached almost BERT-level performance even with shallow classifiers. Combining multiple augmentation techniques, such as adding majority class sentences to minority class documents (§3.2.3) and replacing subwords with embedding-space neighbors (Heinzerling and Strube, 2018) (§3.2.2), improved performance on all classifiers. We discuss the interplay of performance and computational requirements like memory and run-time costs (§4.5). We release our source code. $^{1}$ + +# 2 Preliminaries + +Data augmentation arises naturally from the problem of filling in missing values (Tanner and Wong, 1987). In classification, data augmentation is applied to available training data. Classifier performance is measured on a separate (non-augmented) test set (Krizhevsky et al., 2012). Data augmentation can decrease overfitting (Wong et al., 2016; Shorten and Khoshgoftaar, 2019), and broaden the input feature range by increasing the vocabulary (Fadaee et al., 2019). + +Simple oversampling is the most basic augmentation technique: copying minority class datapoints to appear multiple times. 
This increases the relevance of minority class features for computing the loss during training (Chawla et al., 2002).

EDA is a prior technique combining four text transformations to improve classification with CNN and RNN architectures (Wei and Zou, 2019). It uses (i) synonym replacement from WordNet (§3.2.1), (ii) random insertion of a synonym, (iii) random swap of two words, and (iv) random word deletion.

Word replacement has been applied in several data augmentation studies (Zhang et al., 2015; Wang and Yang, 2015; Xie et al., 2017; Wei and Zou, 2019; Fadaee et al., 2019). We compared four techniques, two based on semantic knowledge bases (§3.2.1) and two on pre-trained (sub)word embeddings (§3.2.2).

Pre-trained Transformer networks feature prominently in state-of-the-art NLP research. They are able to learn contextual embeddings, which depend on neighboring subwords (Devlin et al., 2019). Fine-tuning – adapting the weights of a pre-trained Transformer to a specific corpus – has been highly effective in improving classification performance (Devlin et al., 2019) and language modeling (Radford et al., 2019; Walton; Branwen, 2019). State-of-the-art networks are trained on large corpora: GPT-2's corpus contains 8M web pages, while BERT's training corpus contains 3.3B words.

# 3 Methodology

We now describe the data (3.1), augmentation techniques (3.2), and classifiers (3.3) we used.

# 3.1 Dataset

We used Kaggle's toxic comment classification challenge dataset (Jigsaw, 2018). It contains human-labeled English Wikipedia comments in six different classes of toxic language.2 The median length of a document is three sentences, but the distribution is heavy-tailed (Table 1).
| Mean | Std. | Min | Max | 25% | 50% | 75% |
| --- | --- | --- | --- | --- | --- | --- |
| 4 | 6 | 1 | 683 | 2 | 3 | 5 |
+ +Some classes are severely under-represented: e.g., 478 examples of threat vs. 159093 non-threat examples. Our experiments concern binary classification, where one class is the minority class and all remaining documents belong to the majority class. We focus on threat as the minority class, as it poses the most challenge for automated analysis in this dataset (van Aken et al., 2018). To confirm our results, we also applied the best-performing techniques on a different type of toxic language, the identity-hate class (§4.6). + +Our goal is to understand how data augmentation improves performance under extreme data scarcity in the minority class (threat). To simulate this, we derive our seed dataset (SEED) from the full data set (GOLD STANDARD) via stratified bootstrap sampling (Bickel and Freedman, 1984) to reduce the dataset size $k$ -fold. We replaced newlines, tabs and repeated spaces with single spaces, and lower-cased each dataset. We applied data augmentation techniques on SEED with $k$ -fold oversampling of the minority class, and compared each classifier architecture (§3.3) trained on SEED, GOLD STANDARD, and the augmented datasets. We used the original test dataset (TEST) for evaluating performance. We detail the dataset sizes in Table 2. + +Table 1: Document lengths (number of sentences; tokenized with NLTK sent_tokenize (Bird et al., 2009)). + +
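The stratified k-fold downsampling used to derive SEED can be sketched roughly as follows. This is a minimal illustration under stated assumptions: `stratified_bootstrap_downsample` is a hypothetical helper that draws, per class, len(class)/k documents with replacement (a bootstrap draw), which preserves the class ratio while shrinking the dataset k-fold:

```python
import random

def stratified_bootstrap_downsample(docs, labels, k, seed=0):
    """Shrink a labeled dataset k-fold while preserving class proportions.

    Per class, draw len(class)//k documents with replacement (bootstrap),
    mimicking the SEED derivation described above.
    """
    rng = random.Random(seed)
    by_class = {}
    for doc, lab in zip(docs, labels):
        by_class.setdefault(lab, []).append(doc)
    out_docs, out_labels = [], []
    for lab, items in by_class.items():
        n = max(1, len(items) // k)          # keep at least one example
        out_docs.extend(rng.choices(items, k=n))
        out_labels.extend([lab] * n)
    return out_docs, out_labels
```

With k = 20, a 478-document minority class shrinks to roughly the 25 threat documents reported for SEED.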
| | GOLD STD. | SEED | TEST |
| --- | --- | --- | --- |
| Minority | 478 | 25 | 211 |
| Majority | 159,093 | 7955 | 63,767 |
+ +

Table 2: Number of documents (minority: threat)

Ethical considerations. We used only public datasets, and did not involve human subjects.

# 3.2 Data augmentation techniques

We evaluated six data augmentation techniques on four classifiers (Table 3). We describe each augmentation technique (below) and classifier (§3.3). For comparison, we also evaluated simple oversampling (COPY) and EDA (Wei and Zou, 2019), both reviewed in §2. Following the recommendation of Wei and Zou (2019) for applying EDA to small seed datasets, we used $5\%$ augmentation probability, whereby each word has a $1 - 0.95^4 \approx 19\%$ probability of being transformed by at least one of the four EDA techniques.

Four of the six techniques are based on replacing words with semantically close counterparts; two using semantic knowledge bases (§3.2.1) and two using pre-trained embeddings (§3.2.2). We applied 25% of all possible replacements with these techniques, which is close to the recommended substitution rate in EDA. For short documents we ensured that at least one substitution is always selected. We also added majority class material to minority class documents (§3.2.3), and generated text with the GPT-2 language model fine-tuned on SEED (§3.2.4).

# 3.2.1 Substitutions from a knowledge base

WordNet is a semantic knowledge base containing various properties of word senses, which correspond to word meanings (Miller, 1995). We augmented SEED by replacing words with random synonyms. While EDA also uses WordNet synonyms (§2), we additionally applied word sense disambiguation (Navigli, 2009) and inflection.

For word sense disambiguation we used simple Lesk from PyWSD (Tan, 2014). As a variant of the Lesk algorithm (Lesk, 1986) it relies on overlap in definitions and example sentences (both provided in WordNet), compared between each candidate sense and words in the context.

Word senses appear as uninflected lemmas, which we inflected using a dictionary-based technique.
We lemmatized and annotated a large corpus with NLTK (Bird et al., 2009), and mapped each combination to its most common surface form. The corpus contains 8.5 million short sentences ( $\leq 20$ words) from multiple open-source corpora (see Appendix E). We designed it to have both a large vocabulary for wide coverage (371125 lemmas), and grammatically simple sentences to maximize correct tagging.

Paraphrase Database (PPDB) was collected from bilingual parallel corpora on the premise that English phrases translated identically to another language tend to be paraphrases (Ganitkevitch et al., 2013; Pavlick et al., 2015). We used phrase pairs tagged as equivalent, constituting 245691 paraphrases altogether. We controlled substitution by grammatical context as specified in PPDB. For single words this is the part-of-speech tag, whereas for multi-word paraphrases it also contains the syntactic category that appears after the original phrase in the PPDB training corpus. We obtained grammatical information with the Spacy$^{3}$ parser.

# 3.2.2 Embedding neighbour substitutions

Embeddings can be used to map units to others with a similar occurrence distribution in a training corpus (Mikolov et al., 2013). We considered two alternative pre-trained embedding models. For each model, we produced the top-10 nearest embedding neighbours (cosine similarity) of each word selected for replacement, and randomly picked the new word from these.

Twitter word embeddings (GLOVE) (Pennington et al., 2014) were obtained from a Twitter corpus, and we deployed these via Gensim (Řehůřek and Sojka, 2010).

Subword embeddings (BPEMB) have emerged as a practical pre-processing tool for overcoming the challenge of low-prevalence words (Sennrich et al., 2016). They have been applied in Transformer algorithms, including WordPiece (Wu et al., 2016) for BERT (Devlin et al., 2019), and BPE (Sennrich et al., 2016) for GPT-2 (Radford et al., 2019).
BPEMB (Heinzerling and Strube, 2018) provides pre-trained GloVe embeddings, constructed by applying SentencePiece (Kudo and Richardson, 2018) on the English Wikipedia. We use 50-dimensional BPEMB embeddings with vocabulary size 10,000.

# 3.2.3 Majority class sentence addition (ADD)

Adding unrelated material to the training data can be beneficial by making relevant features stand out (Wong et al., 2016; Shorten and Khoshgoftaar, 2019). We added a random sentence from a majority class document in SEED to a random position in a copy of each minority class training document.

# 3.2.4 GPT-2 conditional generation

GPT-2 is a Transformer language model pre-trained on a large collection of Web documents. We used the 110M parameter GPT-2 model from the Transformers library (Wolf et al., 2019). We discuss parameters in Appendix F. We augmented as follows ($N$-fold oversampling):
| Augmentation | Type | Unit | #Parameters | Pre-training Corpus |
| --- | --- | --- | --- | --- |
| ADD | Non-toxic corpus | Sentence | NA | NA |
| PPDB | Knowledge Base | N-gram | NA | NA |
| WORDNET | Knowledge Base | Word | NA | NA |
| GLOVE | GloVe | Word | 30M | Twitter |
| BPEMB | GloVe | Subword | 0.5M | Wikipedia |
| GPT-2 | Transformer | Subword | 117M | WebText |

| Classifier | Model Type | Unit | #Parameters | Pre-training Corpus |
| --- | --- | --- | --- | --- |
| Char-LR | Logistic regression | Character | 30K | - |
| Word-LR | Logistic regression | Word | 30K | - |
| CNN | Convolutional network | Word | 3M | - |
| BERT | Transformer | Subword | 110M | Wikipedia & BookCorpus |
We repeated the experiments 30 times, and report confidence intervals. + +# 4.1 Results without augmentation + +We first show classifier performance on GOLD STANDARD and SEED in Table 4. van Aken et al. (2018) reported F1-scores for logistic regression and CNN classifiers on GOLD STANDARD. Our results are comparable. We also evaluate BERT, which is noticeably better on GOLD STANDARD, particularly in terms of threat recall. + +All classifiers had significantly reduced F1-scores on SEED, due to major drops in threat recall. In particular, BERT was degenerate, assigning all documents to the majority class in all 30 repetitions. Devlin et al. (2019) report that such behavior may occur on small datasets, but random restarts may help. In our case, random restarts did not impact BERT performance on SEED. + +# 4.2 Augmentations + +We applied all eight augmentation techniques (§3.2) to the minority class of SEED (threat). Each + +
**GOLD STANDARD**

| | Char-LR | Word-LR | CNN | BERT |
| --- | --- | --- | --- | --- |
| Precision | 0.61 | 0.43 | 0.60 | 0.54 |
| Recall | 0.34 | 0.36 | 0.33 | 0.54 |
| F1 | 0.72 | 0.69 | 0.71 | 0.77 |

**SEED**

| | Char-LR | Word-LR | CNN | BERT |
| --- | --- | --- | --- | --- |
| Precision | 0.64 | 0.47 | 0.41 | 0.00 |
| Recall | 0.03 | 0.04 | 0.09 | 0.00 |
| F1 | 0.52 | 0.53 | 0.57 | 0.50 |
+ +technique retains one copy of each SEED document, and adds 19 synthetically generated documents per SEED document. Table 5 summarizes augmented dataset sizes. We present our main results in Table 6. We first discuss classifier-specific observations, and then make general observations on each augmentation technique. + +Table 4: Classifier performance on GOLD STANDARD and SEED. Precision and recall for threat; F1-score macro-averaged from both classes. + +
| | SEED | Augmented |
| --- | --- | --- |
| Minority | 25 | 25 → 500 |
| Majority | 7955 | 7955 |
+ +Table 5: Number of documents in augmented datasets. We retained original SEED documents and expanded the dataset with additional synthetic documents (minority: threat) + +We compared the impact of augmentations on each classifier, and therefore our performance comparisons below are local to each column (i.e., classifier). We identify the best performing technique for the three metrics and report the p-value when its effect is significantly better than the other techniques (based on one-sided paired t-tests, $\alpha = 5\%$ ).7 + +BERT. COPY and ADD were successful on BERT, raising the F1-score up to 21 percentage points above SEED to 0.71. But their impacts on BERT were different: ADD led to increased recall, while COPY resulted in increased precision. PPDB precision and recall were statistically indistinguishable from COPY, which indicates that it did few alterations. GPT-2 led to significantly better recall $(p < 10^{-5}$ for all pairings), even surpassing GOLD STANDARD. Word substitution methods like EDA, WORDNET, GLOVE, and BPEMB improved on + +SEED, but were less effective than COPY in both precision and recall. Park et al. (2019) found that BERT may perform poorly on out-of-domain samples. BERT is reportedly unstable on adversarially chosen subword substitutions (Sun et al., 2020). We suggest that non-contextual word embedding schemes may be sub-optimal for BERT since its pre-training is not conducted with similarly noisy documents. We verified that reducing the number of replaced words was indeed beneficial for BERT (Appendix G). + +Char-LR. BPEMB and ADD were effective at increasing recall, and reached similar increases in F1-score. GPT-2 raised recall to GOLD STANDARD level $(p < 10^{-5}$ for all pairings), but precision remained 16 percentage points below GOLD STANDARD. It led to the best increase in F1-score: 16 percentage points above SEED $(p < 10^{-3}$ for all pairings). + +Word-LR. 
Embedding-based BPEMB and GLOVE increased recall by at least 13 percentage points, but the conceptually similar PPDB and WORDNET were largely unsuccessful. We suggest this discrepancy may be due to WORDNET and PPDB relying on written standard English, whereas toxic language tends to be more colloquial. GPT-2 increased recall and F1-score the most: 15 percentage points above SEED $(p < 10^{-10}$ for all pairings).

CNN. GLOVE and ADD increased recall by at least 10 percentage points. BPEMB led to a large increase in recall, but with a drop in precision, possibly due to its larger capacity to make changes in text: GLOVE can only replace entire words that exist in the pre-training corpus. GPT-2 yielded the largest increases in recall and F1-score $(p < 10^{-4}$ for all pairings).

We now discuss each augmentation technique.

COPY emphasizes the features of original minority documents in SEED, which generally resulted in fairly high precision. On Word-LR, COPY is analogous to increasing the weight of words that appear in minority documents.

EDA behaved similarly to COPY on Char-LR, Word-LR and CNN, but markedly worse on BERT.

ADD reduces the classifier's sensitivity to irrelevant material by adding majority class sentences to minority class documents. On Word-LR, ADD is analogous to reducing the weights of majority class words. ADD led to a marginally better F1-score than any other technique on BERT.
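The one-sided paired t-statistic behind these significance claims (α = 5%, 30 paired repetitions with common random numbers) can be sketched as follows; `paired_t_statistic` is an illustrative helper, and the p-value would then be read from the upper tail of the t distribution (e.g. `scipy.stats.t.sf(t, df)`):

```python
import math

def paired_t_statistic(a, b):
    """One-sided paired t-test statistic for H1: mean(a) > mean(b).

    a, b: paired metric values, e.g. F1-scores of two augmentation
    techniques over the same repeated experiments. Returns (t, df).
    """
    assert len(a) == len(b) and len(a) > 1
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    # Sample variance of the per-repetition differences
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n), n - 1
```

Pairing the repetitions (rather than comparing independent runs) removes the variance shared through the common random numbers, making the comparison between techniques more sensitive.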
| Augmentation | Metric | Char-LR | Word-LR | CNN | BERT |
|---|---|---|---|---|---|
| SEED (no oversampling) | Precision | 0.68 ± 0.22 | 0.43 ± 0.27 | 0.45 ± 0.14 | 0.00 ± 0.00 |
| | Recall | 0.03 ± 0.02 | 0.04 ± 0.02 | 0.08 ± 0.05 | 0.00 ± 0.00 |
| | F1 (macro) | 0.53 ± 0.02 | 0.54 ± 0.02 | 0.56 ± 0.03 | 0.50 ± 0.00 |
| COPY (simple oversampling) | Precision | 0.67 ± 0.07 | 0.38 ± 0.24 | 0.40 ± 0.08 | 0.49 ± 0.07 |
| | Recall | 0.16 ± 0.03 | 0.03 ± 0.02 | 0.07 ± 0.03 | 0.36 ± 0.09 |
| | F1 (macro) | 0.63 ± 0.02 | 0.53 ± 0.02 | 0.56 ± 0.02 | 0.70 ± 0.03 |
| EDA (Wei and Zou, 2019) | Precision | 0.66 ± 0.06 | 0.36 ± 0.19 | 0.26 ± 0.09 | 0.21 ± 0.03 |
| | Recall | 0.13 ± 0.03 | 0.08 ± 0.04 | 0.07 ± 0.01 | 0.06 ± 0.01 |
| | F1 (macro) | 0.61 ± 0.02 | 0.56 ± 0.03 | 0.55 ± 0.01 | 0.54 ± 0.01 |
| ADD (add majority-class sentence) | Precision | 0.58 ± 0.07 | 0.36 ± 0.21 | 0.45 ± 0.07 | 0.36 ± 0.04 |
| | Recall | 0.24 ± 0.04 | 0.06 ± 0.04 | 0.19 ± 0.07 | 0.52 ± 0.07 |
| | F1 (macro) | 0.67 ± 0.03 | 0.55 ± 0.03 | 0.63 ± 0.04 | 0.71 ± 0.01 |
| PPDB (phrase substitutions) | Precision | 0.16 ± 0.08 | 0.41 ± 0.27 | 0.37 ± 0.09 | 0.48 ± 0.06 |
| | Recall | 0.10 ± 0.03 | 0.04 ± 0.02 | 0.08 ± 0.04 | 0.34 ± 0.08 |
| | F1 (macro) | 0.56 ± 0.02 | 0.53 ± 0.02 | 0.57 ± 0.02 | 0.70 ± 0.03 |
| WORDNET (word substitutions) | Precision | 0.16 ± 0.06 | 0.36 ± 0.24 | 0.41 ± 0.08 | 0.47 ± 0.08 |
| | Recall | 0.11 ± 0.03 | 0.05 ± 0.03 | 0.11 ± 0.05 | 0.29 ± 0.07 |
| | F1 (macro) | 0.56 ± 0.02 | 0.54 ± 0.02 | 0.58 ± 0.03 | 0.68 ± 0.03 |
| GLOVE (word substitutions) | Precision | 0.15 ± 0.04 | 0.39 ± 0.12 | 0.38 ± 0.08 | 0.43 ± 0.11 |
| | Recall | 0.14 ± 0.03 | 0.16 ± 0.05 | 0.18 ± 0.06 | 0.18 ± 0.06 |
| | F1 (macro) | 0.57 ± 0.02 | 0.61 ± 0.03 | 0.62 ± 0.03 | 0.62 ± 0.03 |
| BPEMB (subword substitutions) | Precision | 0.56 ± 0.07 | 0.33 ± 0.07 | 0.25 ± 0.07 | 0.38 ± 0.12 |
| | Recall | 0.22 ± 0.03 | 0.22 ± 0.04 | 0.37 ± 0.08 | 0.16 ± 0.04 |
| | F1 (macro) | 0.66 ± 0.02 | 0.63 ± 0.02 | 0.64 ± 0.03 | 0.61 ± 0.03 |
| GPT-2 (conditional generation) | Precision | 0.45 ± 0.08 | 0.35 ± 0.07 | 0.31 ± 0.08 | 0.15 ± 0.05 |
| | Recall | 0.33 ± 0.04 | 0.42 ± 0.05 | 0.46 ± 0.10 | 0.62 ± 0.09 |
| | F1 (macro) | 0.69 ± 0.02 | 0.69 ± 0.02 | 0.68 ± 0.02 | 0.62 ± 0.03 |
Table 6: Comparison of augmentation techniques for 20x augmentation on SEED/threat: means for precision, recall and macro-averaged F1-score shown with standard deviations (30 paired repetitions). Precision and recall for threat; F1-score macro-averaged from both classes. Bold figures represent techniques that are either best, or not significantly different ($\alpha = 5\%$) from the best technique. Double underlines indicate that the best technique (for a given metric and classifier) is significantly better ($\alpha = 1\%$) than all other techniques.

Word replacement was more effective with GLOVE and BPEMB than with PPDB or WORDNET. PPDB and WORDNET generally replace few words per document, which often resulted in performance similar to COPY. BPEMB was generally the most effective among these techniques.

GPT-2 yielded the best improvement overall, leading to significant increases in recall across all classifiers, and the highest F1-score on all classifiers but BERT. The increase in recall can be attributed to GPT-2's capacity for introducing novel phrases. We corroborated this hypothesis by measuring the overlap between the original and augmented test sets and an offensive/profane word list from von Ahn. GPT-2 augmentations increased the intersection cardinality by $260\%$ over the original, compared to only $84\%$ and $70\%$ with the next-best performing augmentation techniques (ADD and BPEMB, respectively). This demonstrates that GPT-2 significantly increased the vocabulary range of the training set, specifically with offensive words likely to be relevant for toxic language classification. However, there is a risk that human annotators might not label GPT-2-generated documents as toxic. Such label noise may decrease precision. (See Appendix H, Table 22 for example augmentations that display the behavior of GPT-2 and other techniques.)

# 4.3 Mixed augmentations

In §4.2 we saw that the effects of augmentation differ across classifiers.
A natural question is whether it is beneficial to combine augmentation techniques. For all classifiers except BERT, the best performing techniques were GPT-2, ADD, and BPEMB (Table 6). They also represent each of our augmentation types (§3.2), BPEMB having the highest performance among the four word replacement techniques (§3.2.1–§3.2.2) on these classifiers.

We combined the techniques by merging augmented documents in equal proportions. In ABG, we included documents generated by ADD, BPEMB or GPT-2. Since ADD and BPEMB impose significantly lower computational and memory requirements than GPT-2, and require no access to a GPU (Appendix C), we also evaluated combining only ADD and BPEMB (AB).

ABG outperformed all other techniques (in F1-score) on Char-LR and CNN with statistical significance, while being marginally better on Word-LR. On BERT, ABG achieved a better F1-score and precision than GPT-2 alone ($p < 10^{-10}$), and a better recall ($p < 0.05$). ABG was better than AB in recall on Word-LR and CNN, while precision was comparable.

Augmenting with ABG resulted in performance similar to GOLD STANDARD on Word-LR, Char-LR and CNN (Table 4). Comparing Tables 6 and 7, it is clear that much of the performance improvement came from the increased vocabulary coverage of GPT-2-generated documents. Our results suggest that for certain types of data, such as toxic language, consistent labeling may be more important than wide coverage in dataset collection, since automated data augmentation can increase the coverage of language. Furthermore, Char-LR trained with ABG was comparable (no statistically significant difference) to the best results obtained with BERT (trained with ADD, $p > 0.2$ on all metrics).

| Augmentation | Metric | Char-LR | Word-LR | CNN | BERT |
|---|---|---|---|---|---|
| AB | Precision | 0.56 | 0.37 | 0.33 | 0.41 |
| | Recall | 0.26 | 0.18 | 0.36 | 0.36 |
| | F1 | 0.68 | 0.62 | 0.67 | 0.69 |
| ABG | Precision | 0.48 | 0.37 | 0.31 | 0.28 |
| | Recall | 0.36 | 0.39 | 0.52 | 0.65 |
| | F1 | 0.70 | 0.69 | 0.69 | 0.69 |

Table 7: Effects of mixed augmentation (20x) on SEED/threat (annotations as in Table 6). Precision and recall for threat; F1-score macro-averaged from both classes.

# 4.4 Average classification performance

The results in Tables 6 and 7 report precision, recall and F1-score with the probability threshold for the positive class fixed at 0.5. In practice, the desired balance between precision and recall depends on the use case for the classifier. A more general evaluation is the ROC-AUC metric: the area under the curve of the true-positive rate plotted against the false-positive rate as the threshold varies over [0, 1]. Table 8 shows the ROC-AUC scores for each classifier with the best augmentation techniques from Tables 6 and 7.
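The ROC-AUC metric can be illustrated with its equivalent pairwise (Mann-Whitney) formulation: the probability that a randomly chosen positive example is scored above a randomly chosen negative one. This is a generic sketch with synthetic scores, not the paper's evaluation code (which used sklearn.metrics.roc_auc_score):

```python
def roc_auc(y_true, y_score):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the probability that a random positive example is scored above a
    random negative one; ties count as 1/2."""
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic scores for illustration (not the paper's data):
print(roc_auc([0, 0, 1, 1, 0, 1], [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]))  # 8/9
```

Because the metric depends only on the ranking of scores, it summarizes classifier quality across all decision thresholds at once, which is why it complements the fixed-threshold results in Tables 6 and 7.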
| | Char-LR | Word-LR | CNN | BERT |
|---|---|---|---|---|
| SEED | 0.973 | 0.968 | 0.922 | 0.816 |
| COPY | 0.972 | 0.937 | 0.792 | 0.898 |
| ADD | 0.958 | 0.955 | 0.904 | 0.956 |
| BPEMB | 0.968 | 0.968 | 0.940 | 0.868 |
| GPT-2 | 0.969 | 0.973 | 0.953 | 0.964 |
| ABG | 0.972 | 0.973 | 0.962 | 0.977 |
Table 8: Comparison of ROC-AUC for augmentation (20x) on SEED/threat (annotations as in Table 6).

BERT with ABG gave the best ROC-AUC value of 0.977, which is significantly higher than BERT with any other augmentation technique ($p < 10^{-6}$). CNN exhibited a similar pattern: ABG resulted in the best ROC-AUC compared to the other augmentation techniques ($p < 10^{-6}$). For Word-LR, ROC-AUC was highest for ABG, but the difference to GPT-2 was not statistically significant ($p > 0.05$). In the case of Char-LR, none of the augmentation techniques improved on SEED ($p < 0.05$). Char-LR produced a more consistent averaged performance across all augmentation methods, with ROC-AUC values varying between (0.958, 0.973), compared to variations across all augmentation techniques of (0.792, 0.962) and (0.816, 0.977) for CNN and BERT, respectively.

Our results reveal a difference relative to Tables 6 and 7: while COPY reached a high F1-score on BERT, the ROC-AUC results show that this performance may not hold when the decision threshold is varied. We observe that a combined augmentation method such as ABG provides an increased ability to vary the decision threshold for the more complex classifiers such as CNN and BERT. Simpler models performed consistently across different augmentation techniques.

# 4.5 Computational requirements

BERT has significant computational requirements (Table 9). Deploying BERT on common EC2 instances requires 13 GB of GPU memory. ABG on EC2 requires 4 GB of GPU memory for approximately 100 s (for 20x augmentation). All other techniques take only a few seconds on ordinary desktop computers (see Appendices C-D for additional data on computational requirements).
| | ADD | BPEMB | GPT-2 | ABG |
|---|---|---|---|---|
| CPU | - | 100 | 3,600 | 3,600 |
| GPU | - | - | 3,600 | 3,600 |

| | Char-LR | Word-LR | CNN | BERT |
|---|---|---|---|---|
| CPU | 100 | 100 | 400 | 13,000 |
| GPU | 100 | 100 | 400 | 13,000 |
Table 9: Memory (MB) required for augmentation techniques (top) and classifiers (bottom). Rounded to the nearest 100 MB.

# 4.6 Alternative toxic class

In order to see whether our results described so far generalize beyond threat, we repeated our experiments using another toxic language class, identity-hate, as the minority class. Our results for identity-hate are in line with those for threat. All classifiers performed poorly on SEED due to very low recall. Augmentation with simple techniques helped BERT gain more than 20 percentage points in F1-score. Shallow classifiers approached BERT-like performance with appropriate augmentation. We present further details in Appendix B.

# 5 Related work

Toxic language classification has been conducted in a number of studies (Schmidt and Wiegand, 2017; Davidson et al., 2017; Wulczyn et al., 2017; Gröndahl et al., 2018; Qian et al., 2019; Breitfeller et al., 2019). NLP applications of data augmentation include text classification (Ratner et al., 2017; Wei and Zou, 2019; Mesbah et al., 2019), user behavior categorization (Wang and Yang, 2015), dependency parsing (Vania et al., 2019), and machine translation (Fadaee et al., 2019; Xia et al., 2019). Related techniques are also used in automatic paraphrasing (Madnani and Dorr, 2010; Li et al., 2018) and writing style transfer (Shen et al., 2017; Shetty et al., 2018; Mahmood et al., 2019).

Hu et al. (2017) produced text with controlled target attributes via variational autoencoders. Mesbah et al. (2019) generated artificial sentences for adverse drug reactions using Reddit and Twitter data. Similarly to their work, we generated novel toxic sentences from a language model. Petroni et al. (2019) compared several pre-trained language models on their ability to understand factual and commonsense reasoning. BERT models consistently outperformed other language models. Petroni et al.
suggest that large pre-trained language models may become alternatives to knowledge bases in the future.

# 6 Discussion and conclusions

Our results highlight the relationship between classification performance and computational overhead. Overall, BERT performed the best with data augmentation. However, it is highly resource-intensive (§4.5). ABG yielded almost BERT-level F1 and ROC-AUC scores on all classifiers. While using GPT-2 is more expensive than other augmentation techniques, its requirements are significantly lower than BERT's. Additionally, augmentation is a one-time upfront cost, in contrast to the ongoing costs of running a classifier. Thus, the trade-off between performance and computational resources can influence which technique is optimal in a given setting.

We identify the following further topics that we leave for future work.

SEED coverage. Our results show that data augmentation can increase coverage, leading to better toxic language classifiers when starting with very small seed datasets. The effects of data augmentation will likely differ with larger seed datasets.

Languages. Some augmentation techniques are limited in their applicability across languages. GPT-2, WORDNET, PPDB and GLOVE are available for certain other languages, but with less coverage than in English. BPEMB is nominally available in 275 languages, but has not been thoroughly tested on less prominent languages.

Transformers. BERT has inspired work on other pre-trained Transformer classifiers, leading to better classification performance (Liu et al., 2019; Lewis et al., 2019) and better trade-offs between memory consumption and classification performance (Sanh et al., 2019; Jiao et al., 2019). Exploring the effects of augmentation on these Transformer classifiers is left for future work.

Attacks.
Training classifiers with augmented data may influence their vulnerability to model extraction attacks (Tramèr et al., 2016; Krishna et al.), model evasion (Gröndahl et al., 2018), or backdoors (Schuster et al., 2020). We leave such considerations for future work.

# Acknowledgments

We thank Jonathan Paul Fernandez Strahl, Mark van Heeswijk, and Kuan Eeik Tan for valuable discussions related to the project, and Karthik Ramesh for his help with early experiments. We also thank Prof. Yaoliang Yu for providing compute resources for early experiments. Tommi Gröndahl was funded by the Helsinki Doctoral Education Network in Information and Communications Technology (HICT).

# References

Betty van Aken, Julian Risch, Ralf Krestel, and Alexander Löser. 2018. Challenges for toxic comment classification: An in-depth error analysis. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pages 33-42.
Pinkesh Badjatiya, Shashank Gupta, Manish Gupta, and Vasudeva Varma. 2017. Deep learning for hate speech detection in tweets. In Proceedings of the 26th International Conference on World Wide Web Companion, pages 759-760.
Peter J. Bickel and David A. Freedman. 1984. Asymptotic normality and the bootstrap in stratified sampling. The Annals of Statistics, 12(2):470-482.
Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit. O'Reilly, Beijing.
Gwern Branwen. 2019. GPT-2 neural network poetry. https://www.gwern.net/GPT-2 Last accessed May 2020.
Luke Breitfeller, Emily Ahn, David Jurgens, and Yulia Tsvetkov. 2019. Finding microaggressions in the wild: A case for locating elusive phenomena in social media posts. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1664-1674.
Nitesh V. Chawla, Kevin W. Bowyer, Lawrence O. Hall, and W.
Philip Kegelmeyer. 2002. SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16:321-357. +Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the 11th Conference on Web and Social Media, pages 512-515. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), pages 4171-4186. +Marzieh Fadaee, Arianna Bisazza, and Christof Monz. 2019. Data augmentation for low resource neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Short Papers), pages 567-573. +Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 758-764. +Paul Glasserman and David D Yao. 1992. Some guidelines and guarantees for common random numbers. Management Science, 38(6):884-908. +Tommi Gröndahl, Luca Pajola, Mika Juuti, Mauro Conti, and N. Asokan. 2018. All you need is “love”: Evading hate speech detection. In Proceedings of the 11th ACM Workshop on Artificial Intelligence and Security (AISec'11), pages 2-12. +Benjamin Heinzerling and Michael Strube. 2018. BPEmb: Tokenization-free pre-trained subword embeddings in 275 languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), pages 2989-2993. +Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine Learning, pages 1587-1596. 
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. TinyBERT: Distilling BERT for natural language understanding. arXiv preprint arXiv:1909.10351.
Jigsaw. 2018. Toxic comment classification challenge: identify and classify toxic online comments. Available at https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge, last accessed May 2020.
Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR).
Kalpesh Krishna, Gaurav Singh Tomar, Ankur Parikh, Nicolas Papernot, and Mohit Iyyer. Thieves on Sesame Street! Model extraction of BERT-based APIs. In Proceedings of the International Conference on Learning Representations (ICLR).
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. In Proceedings of Neural Information Processing Systems (NIPS), pages 1097-1105.
Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71.
Michael Lesk. 1986. Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone. In Proceedings of the 5th Annual International Conference on Systems Documentation, pages 24-26.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
Zichao Li, Xin Jiang, Lifeng Shang, and Hang Li. 2018. Paraphrase Generation with Deep Reinforcement Learning.
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3865-3878. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692. +Nitin Madnani and Bonnie Dorr. 2010. Generating phrasal and sentential paraphrases: A survey of data-driven methods. Journal of Computational Linguistics, 36(3):341-387. +Asad Mahmood, Faizan Ahmad, Zubair Shafiq, Padmini Srinivasan, and Fareed Zaffar. 2019. A girl has no name: Automated authorship obfuscation using Mutant-X. In Proceedings on Privacy Enhancing Technologies (PETS), pages 54-71. +Binny Mathew, Ritam Dutt, Pawan Goyal, and Animesh Mukherjee. 2019. Spread of hate speech in online social media. In Proceedings of the 10th ACM Conference on Web Science (WebSci '19), pages 173-182. + +Sepideh Mesbah, Jie Yang, Robert-Jan Sips, Manuel Valle Torre, Christoph Lofi, Alessandro Bozzon, and Geert-Jan Houben. 2019. Training data augmentation for detecting adverse drug reactions in user-generated content. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2349-2359. +Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems (NIPS), pages 3111-3119. +George A. Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39-41. +Kevin P. Murphy. 2012. Machine learning: a probabilistic perspective. MIT press, Cambridge. +Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Computing Surveys, 41(2):1-69. +Casey Newton. 2020. 
Facebook will pay $52 million in settlement with moderators who developed PTSD on the job. The Verge. https://www.theverge.com/2020/5/12/21255870/facebook-content-moderator-settlement-scola-ptsd-mental-health Last accessed May 2020.
Cheoneum Park, Juae Kim, Hyeon-gu Lee, Reinald Kim Amplayo, Harksoo Kim, Jungyun Seo, and Changki Lee. 2019. ThisIsCompetition at SemEval-2019 Task 9: BERT is unstable for out-of-domain samples. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 1254-1261.
Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Short Papers), pages 425-430.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473.
Jing Qian, Anna Bethke, Yinyin Liu, Elizabeth Belding, and William Yang Wang. 2019. A benchmark dataset for learning to intervene in online hate speech. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4757-4766.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019.
Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.
Alexander J. Ratner, Henry R. Ehrenberg, Zeshan Hussain, Jared Dunnmon, and Christopher Ré. 2017. Learning to compose domain-specific transformations for data augmentation. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017).
Radim Řehůřek and Petr Sojka. 2010. Software framework for topic modelling with large corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Valletta, Malta. ELRA. http://is.muni.cz/publication/884893/en.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language processing. In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media, pages 1-10.
Roei Schuster, Tal Schuster, Yoav Meri, and Vitaly Shmatikov. 2020. Humpty Dumpty: Controlling word meanings via corpus poisoning. arXiv preprint arXiv:2001.04935.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725.
Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Proceedings of Neural Information Processing Systems (NIPS).
Rakshith Shetty, Bernt Schiele, and Mario Fritz. 2018. A4NT: Author attribute anonymity by adversarial training of neural machine translation. In Proceedings of the 27th USENIX Security Symposium, pages 1633-1650.
Connor Shorten and Taghi M. Khoshgoftaar. 2019. A survey on image data augmentation for deep learning. Journal of Big Data, 6.
Lichao Sun, Kazuma Hashimoto, Wenpeng Yin, Akari Asai, Jia Li, Philip Yu, and Caiming Xiong. 2020. Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT. arXiv preprint arXiv:2003.04985.
Liling Tan. 2014. Pywsd: Python implementations of word sense disambiguation (WSD) technologies [software]. https://github.com/alvations/pywsd.
Martin A. Tanner and Wing Hung Wong. 1987. The calculation of posterior distributions by data augmentation. Journal of the American Statistical Association, 82(398):528-540.
Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, and Thomas Ristenpart. 2016. Stealing machine learning models via prediction APIs. In Proceedings of the 25th USENIX Security Symposium, pages 601-618.
Clara Vania, Yova Kementchedjhieva, Anders Søgaard, and Adam Lopez. 2019. A systematic comparison of methods for low-resource dependency parsing on genuinely low-resource languages. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1105-1116.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS), pages 5998-6008.
Nick Walton. AI Dungeon 2. https://aidungeon.io/ Last accessed May 2020.
William Yang Wang and Diyi Yang. 2015. That's so annoying!!!: A lexical and frame-semantic embedding based data augmentation approach to automatic categorization of annoying behaviors using #petpeeve tweets. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2557-2563.
Zeerak Waseem and Dirk Hovy. 2016. Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93.
Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6382-6388.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
Sebastien C. Wong, Adam Gatt, Victor Stamatescu, and Mark D. McDonnell. 2016. Understanding data augmentation for classification: When to warp? In Proceedings of the 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), pages 1-6.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: Personal attacks seen at scale. In Proceedings of the 26th International Conference on World Wide Web, pages 1391-1399.
Mengzhou Xia, Xiang Kong, Antonios Anastasopoulos, and Graham Neubig. 2019. Generalized data augmentation for low-resource translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5786-5796.
Ziang Xie, Sida I. Wang, Jiwei Li, Daniel Levy, Aiming Nie, Dan Jurafsky, and Andrew Y. Ng. 2017. Data-noising as smoothing in neural network language models. In Proceedings of the International Conference on Learning Representations (ICLR 2017).
Tom Young, Devamanyu Hazarika, Soujanya Poria, and Erik Cambria. 2018. Recent trends in deep learning based natural language processing.
IEEE Computational Intelligence Magazine, 13(3):55-75.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Proceedings of the 28th International Conference on Neural Information Processing Systems (NIPS 2015).
Ziqi Zhang, David Robinson, and Jonathan Tepper. 2018. Detecting hate speech on Twitter using a convolution-GRU based deep neural network. In Proceedings of the Extended Semantic Web Conference (ESWC), pages 745-760.

# A Class overlap and interpretation of "toxicity"

Kaggle's toxic comment classification challenge dataset contains six classes, one of which is called toxic. However, all six classes represent examples of toxic speech: toxic, severe toxic, obscene, threat, insult, and identity-hate. Of the threat documents in the full training dataset (GOLD STANDARD), 449/478 overlap with toxic. For identity-hate, the overlap with toxic is 1302/1405. Therefore, in this paper, we use the term toxic more generally, subsuming threat and identity-hate as particular types of toxic speech. To confirm that this was a reasonable choice, we manually examined the 29 threat datapoints not overlapping with toxic. All of these represent genuine threats, and are hence toxic in the general sense.

# B The "Identity hate" class
| | GOLD STD. | SEED | TEST |
|---|---|---|---|
| Minority | 1,405 | 75 | 712 |
| Majority | 158,166 | 7,910 | 63,266 |
Table 10: Corpus size for identity-hate (minority) and non-identity-hate (majority).
| GOLD STANDARD | Char-LR | Word-LR | CNN | BERT |
|---|---|---|---|---|
| Precision | 0.64 | 0.54 | 0.70 | 0.55 |
| Recall | 0.40 | 0.31 | 0.20 | 0.62 |
| F1 (macro) | 0.74 | 0.69 | 0.65 | 0.79 |
Table 11: Classifier performance on GOLD STANDARD. Precision and recall for identity-hate; F1-score macro-averaged from both classes.

To see if our results generalize beyond threat, we experimented on the identity-hate class in Kaggle's toxic comment classification dataset. Again, we used a $5\%$ stratified sample of GOLD STANDARD as SEED. We first show the number of samples in GOLD STANDARD, SEED and TEST in Table 10. There are approximately 3 times more minority-class samples in identity-hate than in threat. Next, we show classifier performance on GOLD STANDARD/identity-hate in Table 11. The results closely resemble those on GOLD STANDARD/threat in Table 4 (§4.1).

We compared SEED and COPY with the techniques that had the highest performance on threat: ADD, BPEMB, GPT-2, and their combination ABG. Table 12 shows the results.

Like in threat, BERT performed the poorest on SEED, with the lowest recall (0.06). All techniques decreased precision from SEED, and all increased recall except COPY with CNN. With COPY, the F1-score increased with Char-LR (0.12) and BERT (0.21), but not Word-LR (0.01) or CNN ($-0.04$). This is in line with the corresponding results from threat (§4.2, Table 6): COPY did not help either of the word-based classifiers (Word-LR, CNN), but helped the character- and subword-based classifiers (Char-LR, BERT).

Of the individual augmentation techniques, ADD increased the F1-score the most with Char-LR (0.15) and BERT (0.20); GPT-2 increased it the most with Word-LR (0.07) and CNN (0.07). Here again we see the similarity between the two word-based classifiers, and between the two that take inputs below the word level. Like in threat, COPY and ADD achieved close F1-scores with BERT, but with different relations between precision and recall. BPEMB was not the best technique with any classifier, but increased the F1-score everywhere except on CNN, where precision dropped drastically.
With the combined ABG technique, Word-LR and CNN reached their highest F1-score increases (0.08 and 0.07, respectively). With Char-LR, the F1-score was also among the highest, but did not reach that of ADD. Like with threat, ABG increased precision and recall more than GPT-2 alone.

Overall, our results on identity-hate closely resemble those for threat, with more than 20 percentage point increases in the F1-score for BERT under the COPY and ADD augmentations. Like in threat, the impact of most augmentations was greater on Char-LR than on Word-LR or CNN. Despite their similar F1-scores on SEED, Char-LR exhibited much higher precision, which decreased but remained generally higher than with other classifiers. Combined with an increase in recall to levels similar to or higher than those of other classifiers, Char-LR reached BERT-level performance with proper data augmentation.
| Augmentation | Metric | Char-LR | Word-LR | CNN | BERT |
|---|---|---|---|---|---|
| SEED (No Oversampling) | Precision | 0.85 ± 0.04 | 0.59 ± 0.05 | 0.52 ± 0.08 | 0.65 ± 0.46 |
| | Recall | 0.11 ± 0.04 | 0.12 ± 0.03 | 0.11 ± 0.04 | 0.06 ± 0.10 |
| | F1 (macro) | 0.60 ± 0.03 | 0.60 ± 0.02 | 0.59 ± 0.02 | 0.54 ± 0.08 |
| COPY (Simple Oversampling) | Precision | 0.61 ± 0.02 | 0.54 ± 0.04 | 0.27 ± 0.06 | 0.52 ± 0.06 |
| | Recall | 0.34 ± 0.04 | 0.14 ± 0.03 | 0.07 ± 0.01 | 0.50 ± 0.06 |
| | F1 (macro) | 0.72 ± 0.02 | 0.61 ± 0.02 | 0.55 ± 0.01 | 0.75 ± 0.01 |
| ADD (Add Majority-class Sentence) | Precision | 0.54 ± 0.04 | 0.54 ± 0.05 | 0.43 ± 0.05 | 0.43 ± 0.05 |
| | Recall | 0.47 ± 0.05 | 0.21 ± 0.03 | 0.21 ± 0.04 | 0.58 ± 0.08 |
| | F1 (macro) | 0.75 ± 0.01 | 0.65 ± 0.01 | 0.64 ± 0.02 | 0.74 ± 0.01 |
| BPEMB (Subword Substitutions) | Precision | 0.43 ± 0.04 | 0.30 ± 0.03 | 0.15 ± 0.05 | 0.29 ± 0.06 |
| | Recall | 0.38 ± 0.04 | 0.29 ± 0.01 | 0.32 ± 0.05 | 0.23 ± 0.03 |
| | F1 (macro) | 0.70 ± 0.01 | 0.64 ± 0.01 | 0.59 ± 0.02 | 0.62 ± 0.02 |
| GPT-2 (Conditional Generation) | Precision | 0.41 ± 0.05 | 0.30 ± 0.03 | 0.33 ± 0.08 | 0.22 ± 0.05 |
| | Recall | 0.34 ± 0.04 | 0.39 ± 0.03 | 0.34 ± 0.09 | 0.59 ± 0.06 |
| | F1 (macro) | 0.68 ± 0.01 | 0.67 ± 0.01 | 0.66 ± 0.01 | 0.65 ± 0.02 |
| ABG (ADD, BPEMB, GPT-2 Mix) | Precision | 0.41 ± 0.04 | 0.32 ± 0.03 | 0.28 ± 0.06 | 0.27 ± 0.05 |
| | Recall | 0.50 ± 0.04 | 0.41 ± 0.02 | 0.46 ± 0.05 | 0.62 ± 0.07 |
| | F1 (macro) | 0.72 ± 0.01 | 0.68 ± 0.01 | 0.66 ± 0.02 | 0.68 ± 0.02 |
# C Augmentation computation performance

Table 13 reports the computational resources required for replicating the augmentations. GPU computations were performed on a GeForce RTX 2080 Ti. CPU computations were performed with an Intel Core i9-9900K CPU @ 3.60GHz with 8 cores, where applicable. Memory usage was collected using the nvidia-smi and htop routines. Usage is rounded to the nearest 100 MiB. Computation time includes the time to load the library from file and is rounded to the nearest integer. Computation time (training and prediction) is shown separately for GPT-2.

We provide library versions in Table 14. We used sklearn.metrics.precision_recall_fscore_support for calculating minority-class precision, recall and macro-averaged F1-score. For the first two, we applied pos_label=1, and set average='macro' for the third. For ROC-AUC, we used sklearn.metrics.roc_auc_score with default parameters. For t-tests, we used scipy.stats.ttest_rel,$^{12}$

Table 12: Comparison of augmentation techniques for 20x augmentation on SEED/identity-hate: means for precision, recall and macro-averaged F1-score shown with standard deviations (10 repetitions). Precision and recall for identity-hate; F1-score macro-averaged from both classes.
| Augmentation | Memory (MiB), GPU | Memory (MiB), CPU | Runtime (s), GPU | Runtime (s), CPU |
|---|---|---|---|---|
| COPY | - | - | - | < 1 |
| EDA | - | 100 | - | 1 |
| ADD | - | - | - | 1 |
| WORDNET | - | 4000 | - | 1 |
| PPDB | - | 2900 | - | 3 |
| GLOVE | - | 600 | - | 32 |
| BPEMB | - | 100 | - | < 1 |
| GPT-2 | 3600 | 3600 | 12 + 78 | - |
Table 13: Computational resources (MiB and seconds) required for augmenting 25 examples to 500 examples. GPT-2 takes approximately 6 seconds to train per epoch, and 3 seconds to generate 19 new documents.

which gives p-values for two-tailed significance tests. We divided the p-values in half for the one-tailed significance tests.
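The metric and significance-test calls described above can be sketched as follows. The labels, scores, and per-run F1 values are toy illustrations only; note that in scikit-learn the class-specific scores with `pos_label=1` require `average='binary'`:

```python
from scipy.stats import ttest_rel
from sklearn.metrics import precision_recall_fscore_support, roc_auc_score

# Toy predictions for a binary toxic/non-toxic task (illustrative only).
y_true = [0, 0, 0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 1, 0, 0, 1, 0]
y_score = [0.1, 0.7, 0.2, 0.9, 0.4, 0.3, 0.8, 0.2]

# Minority-class precision and recall.
prec, rec, _, _ = precision_recall_fscore_support(
    y_true, y_pred, pos_label=1, average="binary")
# F1-score macro-averaged over both classes.
_, _, f1_macro, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro")
auc = roc_auc_score(y_true, y_score)

# Paired t-test over repeated runs: two-tailed p-value, halved for one-tailed.
run_a = [0.72, 0.74, 0.71, 0.73, 0.75]
run_b = [0.68, 0.70, 0.69, 0.71, 0.70]
t_stat, p_two_tailed = ttest_rel(run_a, run_b)
p_one_tailed = p_two_tailed / 2
```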
| Library | Version |
|---|---|
| eda_nlp (https://github.com/jasonwei20/eda_nlp) | Nov 8, 2019 $^{13}$ |
| apex | 0.1 |
| bpemb | 0.3.0 |
| fast-bert | 1.6.5 |
| gensim | 3.8.1 |
| nltk | 3.4.5 |
| numpy | 1.17.2 |
| pywsd | 1.2.4 |
| scikit-learn | 0.21.3 |
| scipy | 1.4.1 |
| spacy | 2.2.4 |
| torch | 1.4.0 |
| transformers | 2.8.0 |
Table 14: Library versions required for replicating this study. Date supplied if no version applicable.

# D Classifier training and testing performance

Table 15 specifies the system resources that training and prediction required on our setup (Section C). The SEED dataset has 8,955 documents and the test dataset 63,978 documents. We used the 12-layer, 768-hidden, 12-heads, 110M-parameter BERT-Base, Uncased model.$^{14}$

**Training**

| | Memory (MB), GPU | Memory (MB), CPU | Runtime (s), GPU | Runtime (s), CPU |
|---|---|---|---|---|
| Char-LR | - | 100 | - | 4 |
| Word-LR | - | 100 | - | 3 |
| CNN | 400 | 400 | - | 13 |
| BERT | 3800 | 1500 | 757 | - |

**Prediction**

| | Memory (MB), GPU | Memory (MB), CPU | Runtime (s), GPU | Runtime (s), CPU |
|---|---|---|---|---|
| Char-LR | - | 100 | - | 25 |
| Word-LR | - | 100 | - | 5 |
| CNN | 400 | 400 | - | 42 |
| BERT | 4600 | 4200 | 464 | - |

Table 15: Computational resources (MB and seconds) required for training classifiers on the SEED dataset and for prediction on the test dataset. Note that the BERT results here were calculated with mixed-precision arithmetic (currently supported by the Nvidia Turing architecture). We measured memory usage close to 13 GB in the general case.

# E Lemma inflection in WORDNET

Lemmas appear only in their uninflected form in WordNet. To mitigate this limitation, we used a dictionary-based method for mapping lemmas to surface manifestations using NLTK part-of-speech (POS) tags. For deriving the dictionary, we used 8.5 million short sentences ($\leq$ 20 words) from seven corpora: Stanford NMT,$^{15}$ OpenSubtitles 2018,$^{16}$ Tatoeba,$^{17}$ SNLI,$^{18}$ SICK,$^{19}$ Aristo-mini (December 2016 release),$^{20}$ and WordNet example sentences.$^{21}$ The rationale for the corpus was to have a large vocabulary along with relatively simple grammatical structures, to maximize both coverage and the correctness of POS-tagging. We mapped each lemma-POS pair to its most common inflected form in the corpus. When performing synonym replacement in WORDNET augmentation, we lemmatized and POS-tagged the original word with NLTK, chose a random synonym for it, and then inflected the synonym with the original POS tag if it was present in the inflection dictionary.

# F GPT-2 parameters

Table 16 shows the hyperparameters we used for fine-tuning our GPT-2 models, and for generating outputs. Our fine-tuning follows the transformers examples with default parameters.$^{22}$

For generation, we trimmed the input to be at most 100 characters long, further cutting off the input at the last full word or punctuation to ensure that generated documents start with full words. Our generation script follows the transformers examples.$^{23}$
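The inflection dictionary used for WORDNET augmentation can be sketched as follows. The tagged triples here are a tiny stand-in for the 8.5-million-sentence corpus, which in the study is lemmatized and POS-tagged with NLTK; the helper names are ours:

```python
from collections import Counter, defaultdict

# Toy (surface, lemma, POS) triples standing in for the tagged corpus.
tagged = [("ran", "run", "VBD"), ("runs", "run", "VBZ"),
          ("ran", "run", "VBD"), ("houses", "house", "NNS"),
          ("house", "house", "NN")]

# Count surface forms per lemma-POS pair, then keep the most common one.
counts = defaultdict(Counter)
for surface, lemma, pos in tagged:
    counts[(lemma, pos)][surface] += 1
inflect = {key: c.most_common(1)[0][0] for key, c in counts.items()}

def inflect_synonym(synonym_lemma, original_pos):
    """Inflect a chosen synonym with the original word's POS tag,
    falling back to the bare lemma when the pair is unseen."""
    return inflect.get((synonym_lemma, original_pos), synonym_lemma)
```

During augmentation, a synonym lemma drawn from WordNet would be passed through `inflect_synonym` together with the POS tag of the word it replaces.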
| Parameter | Value |
|---|---|
| **Fine-tuning** | |
| Batch size | 1 |
| Learning rate | 2e-5 |
| Epochs | 2 |
| **Generation** | |
| Input cutoff | 100 characters |
| Temperature | 1.0 |
| Top-p | 0.9 |
| Repetition penalty | 1 |
| Output cutoff | 100 subwords or EOS generated |
Table 16: GPT-2 parameters.

In §4.2 - §4.4, we generated novel documents with GPT-2 fine-tuned on the threat documents in SEED for 2 epochs. In Table 17, we show the impact of changing the number of fine-tuning epochs for GPT-2. Precision generally increased as the number of epochs was increased. However, recall simultaneously decreased.

# G Ablation study

In §4.2 - §4.4, we investigated several word replacement techniques with a fixed change rate. In those experiments, we allowed $25\%$ of possible replacements. Here we study each augmentation technique's sensitivity to the replacement rate. As in the previous experiments, we ensured that at least one augmentation is always performed. The experiments are shown in Tables 18-21.

Interestingly, all word replacements decreased classification performance with BERT. We suspect this occurred because of the pre-trained weights in BERT.

We show threat precision, recall and macro-averaged F1-scores for PPDB in Table 18. Changing the substitution rate had very little impact on the performance of any classifier. This indicates that there were very few n-gram candidates that could be replaced. We show results on WORDNET in Table 19. As exemplified for the $25\%$ substitution rate in Appendix H, PPDB and WORDNET substitutions replaced very few words. Both results were close to COPY (§4.2, Table 6).

We show results for GLOVE in Table 20. Word-LR performed better with higher substitution rates (increased recall). Interestingly, Char-LR performance (particularly precision) dropped with GLOVE compared to using COPY. For CNN, smaller substitution rates seem preferable, since precision decreased quickly as the number of substitutions increased.

The BPEMB results in Table 21 are consistent across the classifiers Char-LR, Word-LR and CNN. Substitutions in the range $12\% - 37\%$ increased recall over COPY. However, precision dropped at different points, depending on the classifier.
CNN precision dropped earlier than with the other classifiers, already at a $25\%$ change rate.

# H Augmented threat examples

We provide examples of augmented documents in Table 22. We picked a one-sentence document as the seed. We remark that augmented documents created by GPT-2 have the highest novelty, but may not always be considered threat (see example GPT-2 #1 in Table 22).
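The change-rate mechanics of the ablation can be sketched as follows; the function name and the trivial replacement function are ours, and a real run would plug in the PPDB, WORDNET, GLOVE, or BPEMB substitution in place of `replace_fn`:

```python
import math
import random

def substitute_words(tokens, rate, replace_fn, rng=random.Random(0)):
    """Replace `rate` of the tokens, always performing at least one
    substitution, as in the ablation experiments."""
    n = max(1, math.floor(rate * len(tokens)))  # at least one augmentation
    out = list(tokens)
    for i in rng.sample(range(len(out)), n):
        out[i] = replace_fn(out[i])
    return out

tokens = "if you do not stop".split()
# At a 25% rate, one of the five tokens is replaced (here: upper-cased).
changed = substitute_words(tokens, 0.25, lambda w: w.upper())
```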
**Fine-tuning epochs on GPT-2**

| Classifier | Metric | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Char-LR | Precision | 0.38 | 0.43 | 0.45 | 0.49 | 0.51 | 0.49 | 0.52 | 0.50 | 0.51 | 0.51 |
| | Recall | 0.34 | 0.34 | 0.32 | 0.31 | 0.31 | 0.29 | 0.28 | 0.28 | 0.27 | 0.28 |
| | F1 (macro) | 0.68 | 0.69 | 0.68 | 0.68 | 0.69 | 0.68 | 0.68 | 0.68 | 0.68 | 0.68 |
| Word-LR | Precision | 0.30 | 0.33 | 0.34 | 0.34 | 0.36 | 0.35 | 0.35 | 0.34 | 0.34 | 0.34 |
| | Recall | 0.47 | 0.45 | 0.43 | 0.40 | 0.40 | 0.38 | 0.37 | 0.36 | 0.35 | 0.35 |
| | F1 (macro) | 0.68 | 0.69 | 0.69 | 0.68 | 0.68 | 0.68 | 0.67 | 0.67 | 0.67 | 0.67 |
| CNN | Precision | 0.26 | 0.28 | 0.30 | 0.32 | 0.33 | 0.32 | 0.31 | 0.31 | 0.31 | 0.32 |
| | Recall | 0.49 | 0.50 | 0.47 | 0.50 | 0.48 | 0.48 | 0.48 | 0.46 | 0.47 | 0.46 |
| | F1 (macro) | 0.66 | 0.67 | 0.68 | 0.69 | 0.69 | 0.68 | 0.68 | 0.68 | 0.68 | 0.68 |
| BERT | Precision | 0.11 | 0.14 | 0.15 | 0.15 | 0.16 | 0.17 | 0.17 | 0.19 | 0.17 | 0.17 |
| | Recall | 0.62 | 0.66 | 0.67 | 0.64 | 0.65 | 0.62 | 0.62 | 0.62 | 0.61 | 0.61 |
| | F1 (macro) | 0.59 | 0.61 | 0.62 | 0.62 | 0.62 | 0.63 | 0.63 | 0.64 | 0.63 | 0.62 |
Table 17: Impact of changing the number of fine-tuning epochs on GPT-2-augmented datasets. Mean results for 10 repetitions. Highest numbers highlighted in bold.
**PPDB: N-gram substitution rate (%)**

| Metric | 0 | 12 | 25 | 37 | 50 | 100 |
|---|---|---|---|---|---|---|
| **Char-LR** | | | | | | |
| Pre. | 0.14 | 0.14 | 0.13 | 0.13 | 0.13 | 0.14 |
| Rec. | 0.09 | 0.09 | 0.09 | 0.08 | 0.07 | 0.05 |
| F1 ma. | 0.55 | 0.55 | 0.55 | 0.55 | 0.54 | 0.54 |
| **Word-LR** | | | | | | |
| Pre. | 0.32 | 0.33 | 0.38 | 0.44 | 0.41 | 0.34 |
| Rec. | 0.04 | 0.04 | 0.04 | 0.04 | 0.03 | 0.01 |
| F1 ma. | 0.53 | 0.53 | 0.53 | 0.53 | 0.53 | 0.51 |
| **CNN** | | | | | | |
| Pre. | 0.44 | 0.41 | 0.39 | 0.36 | 0.38 | 0.32 |
| Rec. | 0.09 | 0.09 | 0.10 | 0.09 | 0.08 | 0.05 |
| F1 ma. | 0.57 | 0.57 | 0.57 | 0.57 | 0.56 | 0.54 |
| **BERT** | | | | | | |
| Pre. | 0.45 | 0.45 | 0.46 | 0.46 | 0.47 | 0.48 |
| Rec. | 0.37 | 0.37 | 0.37 | 0.35 | 0.33 | 0.25 |
| F1 ma. | 0.70 | 0.70 | 0.70 | 0.70 | 0.69 | 0.66 |
Table 18: Impact of changing the proportion of substituted words on PPDB-augmented datasets. Mean results for 10 repetitions. Classifier's highest numbers highlighted in bold.
**WORDNET: Word substitution rate (%)**

| Metric | 0 | 12 | 25 | 37 | 50 | 100 |
|---|---|---|---|---|---|---|
| **Char-LR** | | | | | | |
| Pre. | 0.15 | 0.15 | 0.14 | 0.14 | 0.12 | 0.10 |
| Rec. | 0.10 | 0.10 | 0.10 | 0.10 | 0.09 | 0.07 |
| F1 ma. | 0.56 | 0.56 | 0.56 | 0.55 | 0.55 | 0.54 |
| **Word-LR** | | | | | | |
| Pre. | 0.28 | 0.29 | 0.30 | 0.31 | 0.34 | 0.31 |
| Rec. | 0.04 | 0.04 | 0.04 | 0.05 | 0.04 | 0.02 |
| F1 ma. | 0.53 | 0.53 | 0.53 | 0.54 | 0.54 | 0.52 |
| **CNN** | | | | | | |
| Pre. | 0.42 | 0.43 | 0.42 | 0.45 | 0.44 | 0.32 |
| Rec. | 0.10 | 0.11 | 0.11 | 0.12 | 0.10 | 0.07 |
| F1 ma. | 0.58 | 0.58 | 0.58 | 0.59 | 0.58 | 0.55 |
| **BERT** | | | | | | |
| Pre. | 0.45 | 0.44 | 0.43 | 0.43 | 0.42 | 0.35 |
| Rec. | 0.31 | 0.31 | 0.29 | 0.26 | 0.24 | 0.18 |
| F1 ma. | 0.68 | 0.68 | 0.67 | 0.66 | 0.65 | 0.61 |
Table 19: Impact of changing the proportion of substituted words on WORDNET-augmented datasets. Mean results for 10 repetitions. Classifier's highest numbers highlighted in bold.
**GLOVE: Word substitution rate (%)**

| Metric | 0 | 12 | 25 | 37 | 50 | 100 |
|---|---|---|---|---|---|---|
| **Char-LR** | | | | | | |
| Pre. | 0.16 | 0.15 | 0.14 | 0.14 | 0.14 | 0.32 |
| Rec. | 0.11 | 0.12 | 0.13 | 0.13 | 0.13 | 0.05 |
| F1 ma. | 0.56 | 0.56 | 0.57 | 0.57 | 0.57 | 0.54 |
| **Word-LR** | | | | | | |
| Pre. | 0.31 | 0.37 | 0.35 | 0.33 | 0.33 | 0.30 |
| Rec. | 0.07 | 0.10 | 0.16 | 0.19 | 0.19 | 0.09 |
| F1 ma. | 0.55 | 0.58 | 0.61 | 0.62 | 0.62 | 0.57 |
| **CNN** | | | | | | |
| Pre. | 0.41 | 0.44 | 0.39 | 0.35 | 0.28 | 0.15 |
| Rec. | 0.13 | 0.18 | 0.19 | 0.20 | 0.17 | 0.06 |
| F1 ma. | 0.59 | 0.62 | 0.62 | 0.62 | 0.60 | 0.54 |
| **BERT** | | | | | | |
| Pre. | 0.44 | 0.43 | 0.40 | 0.36 | 0.33 | 0.13 |
| Rec. | 0.35 | 0.27 | 0.16 | 0.13 | 0.11 | 0.03 |
| F1 ma. | 0.69 | 0.66 | 0.61 | 0.59 | 0.58 | 0.52 |
Table 20: Impact of changing the proportion of substituted words on GLOVE-augmented datasets. Mean results for 10 repetitions. Classifier's highest numbers highlighted in bold.
**BPEMB: Subword substitution rate (%)**

| Metric | 0 | 12 | 25 | 37 | 50 | 100 |
|---|---|---|---|---|---|---|
| **Char-LR** | | | | | | |
| Pre. | 0.65 | 0.64 | 0.56 | 0.52 | 0.49 | 0.37 |
| Rec. | 0.17 | 0.20 | 0.22 | 0.20 | 0.17 | 0.06 |
| F1 ma. | 0.63 | 0.65 | 0.65 | 0.64 | 0.63 | 0.55 |
| **Word-LR** | | | | | | |
| Pre. | 0.26 | 0.34 | 0.31 | 0.30 | 0.25 | 0.19 |
| Rec. | 0.07 | 0.13 | 0.22 | 0.25 | 0.23 | 0.13 |
| F1 ma. | 0.55 | 0.59 | 0.63 | 0.63 | 0.62 | 0.57 |
| **CNN** | | | | | | |
| Pre. | 0.42 | 0.37 | 0.22 | 0.14 | 0.09 | 0.03 |
| Rec. | 0.17 | 0.31 | 0.38 | 0.31 | 0.27 | 0.10 |
| F1 ma. | 0.62 | 0.66 | 0.63 | 0.59 | 0.56 | 0.52 |
| **BERT** | | | | | | |
| Pre. | 0.43 | 0.41 | 0.33 | 0.32 | 0.25 | 0.08 |
| Rec. | 0.37 | 0.22 | 0.15 | 0.13 | 0.10 | 0.03 |
| F1 ma. | 0.70 | 0.64 | 0.60 | 0.59 | 0.57 | 0.52 |
Table 21: Impact of changing the proportion of substituted subwords on BPEMB-augmented datasets. Mean results for 10 repetitions. Classifier's highest numbers highlighted in bold.
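The BPEMB subword substitution studied above can be sketched generically. The callables below are toy stand-ins: the study instead uses the `bpemb` package's pretrained subword segmentation and embeddings, where `neighbor` would return a nearest neighbour in embedding space.

```python
import random

def subword_substitute(text, encode, neighbor, decode, rate, rng=random.Random(0)):
    """BPEMB-style augmentation sketch: segment text into subword pieces,
    swap a fraction of them for embedding-space neighbours, and join back.
    At least one piece is always substituted."""
    pieces = encode(text)
    n = max(1, int(rate * len(pieces)))
    for i in rng.sample(range(len(pieces)), n):
        pieces[i] = neighbor(pieces[i])
    return decode(pieces)

# Toy stand-ins for illustration only.
encode = lambda s: s.split()
neighbor_fn = lambda p: {"kill": "murder", "stop": "halt"}.get(p, p)
decode = lambda pieces: " ".join(pieces)

augmented = subword_substitute("if you do not stop", encode, neighbor_fn, decode, 1.0)
```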
**SEED: No Oversampling**

- if you do not stop, the wikapidea nijas will come to your house and kill you

**COPY: Simple Oversampling**

1. if you do not stop, the wikapidea nijas will come to your house and kill you
2. if you do not stop, the wikapidea nijas will come to your house and kill you
3. if you do not stop, the wikapidea nijas will come to your house and kill you

**EDA: Easy Data Augmentation**$^{16}$

1. if you do put up not stop the wikapidea nijas will come to your house and kill you
2. if you do not stopover the wikapidea nijas will come to your house and kill you
3. if you do not break the wikapidea nijas will come to your house and kill you

**ADD: Add Majority-class Sentence**

1. We thank you both for your contributions to Wikipedia at-large and your use of the tool. if you do not stop, the wikapidea nijas will come to your house and kill you
2. if you do not stop, the wikapidea nijas will come to your house and kill you (Honest!)
3. *** username, I am on a shared IP address. if you do not stop, the wikapidea nijas will come to your house and kill you

**PPDB: Phrase Substitutions**

1. if you do not be halted, the wikapidea nijas will come to your home and kill you
2. if you do not stop, the wikapidea nijas comes along to your house and been murdered you
3. if you do not stop, the wikapidea nijas will arrive to your home and kill you

**WORDNET: Word Substitutions**

1. if you do not stop, the wikapidea nijas will amount to your house and kill you
2. if you do not stop, the wikapidea nijas will come to your house and down you
3. if you do not stop, the wikapidea nijas will come to your theatre and kill you

**GLOVE: Word Substitutions**

1. if you so not stop, the wikapidea nijas will come to yours house have kill you
2. mean know do not stop? the wikapidea nijas will cos to your house and kill you
3. if you do not thinking, and wikapidea nijas will come to your going and kill you

**BPEMB: Subword Substitutions**

1. whether you know not stop, theldikapideina niji give come to your house and kill you
2. shouldnever do not stop, the wikamyla njag will come to your house and kill want
3. if you let not train, the wkanapidei nijas will come tonever house and dead you

**GPT-2: Conditional Generation**

1. for what you have done, you don't have a choice. You are too big and weak to stop.
2. ! So if you don't stop this, you will be murdered and your people will be ripped apart. I hope you die and die and die. No, you will die and die and die. Just stop and kill yourself for it. Advertisements
3. . You will be hanged like a rat. The next day I will get a letter telling you that you were kicked out of your home by the wikapidea nijas and that you will die there. You are an idiot for calling the wikapidea nijas to death and you should die.

**ABG: Mixed Augmentation (ADD, BPEMB & GPT-2)**

1. Support - The other article is minor, and not worth a separate entry. if you do not stop, the wikapidea nijas will come to your house and kill you
2. . You will be hanged like a rat. The next day I will get a letter telling you that you were kicked out of your home by the wikapidea nijas and that you will die there. You are an idiot for calling the wikapidea nijas to death and you should die.
3. if you let not train, the wkanapidei nijas will come tonever house and dead you
Table 22: Documents generated by selected augmentation techniques in this study. Changes to original seed highlighted. The selected sample is shorter than average (see §3.1, Table 1). We anonymized the username in ADD (#3.). Three samples generated by each technique shown.
# A Multilingual View of Unsupervised Machine Translation

Xavier Garcia*

Pierre Foret*

Thibault Sellam

Ankur P. Parikh

{xgarcia,pierreforet,tsellam,aparikh}@google.com

# Abstract

We present a probabilistic framework for multilingual neural machine translation that encompasses supervised and unsupervised setups, focusing on unsupervised translation.
In addition to studying the vanilla case where there is only monolingual data available, we propose a novel setup where one language in the (source, target) pair is not associated with any parallel data, but there may exist auxiliary parallel data that contains the other. This auxiliary data can naturally be utilized in our probabilistic framework via a novel cross-translation loss term. Empirically, we show that our approach results in higher BLEU scores over state-of-the-art unsupervised models on the WMT'14 English-French, WMT'16 English-German, and WMT'16 English-Romanian datasets in most directions.

# 1 Introduction

The popularity of neural machine translation systems (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015; Wu et al., 2016) has exploded in recent years. Those systems have obtained state-of-the-art results for a wide collection of language pairs, but they often require large amounts of parallel (source, target) sentence pairs to train (Koehn and Knowles, 2017), making them impractical for scenarios with resource-poor languages. As a result, there has been interest in unsupervised machine translation (Ravi and Knight, 2011), and more recently unsupervised neural machine translation (UNMT) (Lample et al., 2018; Artetxe et al., 2018), which uses only monolingual source and target corpora for learning.
Unsupervised NMT systems have achieved rapid progress recently (Lample and Conneau, 2019; Artetxe et al., 2019; Ren et al., 2019; Li et al., 2020a), largely thanks to two key ideas: on-the-fly back-translation (i.e., minimizing round-trip translation inconsistency) (Bannard and Callison-Burch, 2005; Sennrich et al., 2015; He et al., 2016; Artetxe et al., 2018) and pretrained language models (Lample and Conneau, 2019; Song et al., 2019). Despite the difficulty of the problem, those systems have achieved surprisingly strong results.

![](images/35c3aad0bfc31d7881cbf3677cb286934b33329d2fc500d74441d170547623ef.jpg)
(a) Multilingual NMT

![](images/5fe44d516c69ddf35f059ee1473cb6fbd99b7f1276ca22d811e269aec8516087.jpg)
(b) Zero-Shot NMT

![](images/07013d5dc6ae00869ddc85d863dcaa3741037c5700b0c4f9f0cf16fa7a22caea.jpg)
(c) M-UNMT (w/o auxiliary parallel data)

![](images/eab0a919b4b5968758a6ee1a4bc9fdcf3b4efd68ef1f24addf5fd372e048cb90.jpg)
(d) M-UNMT (w/ auxiliary parallel data)

Figure 1: Different setups for English (En), French (Fr) and Romanian (Ro). The dashed edge indicates the target language pair. Full edges indicate the existence of parallel training data.

In this work, we investigate Multilingual UNMT (M-UNMT), a generalization of the UNMT setup that involves more than two languages. Multilinguality has been explored in the supervised NMT literature, where it has been shown to enable information sharing among related languages. This allows higher-resource language pairs (e.g., English-French) to improve performance among lower-resource pairs (e.g., English-Romanian) (Johnson et al., 2017; Firat et al., 2016). Yet multilingual translation has received little attention in the unsupervised literature, and the performance of preliminary works (Sen et al., 2019; Xu et al., 2019) is considerably below that of state-of-the-art bilingual unsupervised systems (Lample and Conneau, 2019; Song et al., 2019).
Another line of work has studied zero-shot translation in the presence of a "pivot" language, e.g., using French-English and English-Romanian corpora to model French-Romanian (Johnson et al., 2017; Arivazhagan et al., 2019; Gu et al., 2019; Al-Shedivat and Parikh, 2019). However, zero-shot translation is not unsupervised, since one can perform two-step supervised translation through the pivot language.

We introduce a novel probabilistic formulation of multilingual translation, which encompasses not only existing supervised and zero-shot setups, but also two variants of Multilingual UNMT: (1) a strict M-UNMT setup in which there is no parallel data for any pair of languages, and (2) a novel, looser setup where there exists parallel data that contains one language in the (source, target) pair but not the other. We illustrate those two variants and contrast them to existing work in Figure 1. As shown in Figures 1(c) and 1(d), the defining feature of M-UNMT is that the (source, target) pair of interest is not connected in the graph, precluding the possibility of any direct or multi-step supervised solution. Leveraging auxiliary parallel data for UNMT as shown in Figure 1(d) has not been well studied in the literature. However, this setup may be more realistic than the strictly unsupervised case, since it enables the use of high-resource languages (e.g., En) to aid translation into rare languages.

For the strict M-UNMT setup pictured in Figure 1(c), our probabilistic formulation yields a multi-way back-translation objective that is an intuitive generalization of existing work (Artetxe et al., 2018; Lample et al., 2018; He et al., 2020). We provide a rigorous derivation of this objective as an application of the Expectation Maximization algorithm (Dempster et al., 1977). Effectively utilizing the auxiliary parallel corpus pictured in Figure 1(d) is less straightforward, since the common approaches for UNMT are explicitly designed for the bilingual case.
For this setting, we propose two algorithmic contributions. First, we derive a novel cross-translation loss term from our probabilistic framework that enforces cross-language-pair consistency. Second, we utilize the auxiliary parallel data for pre-training, which allows the model to build representations better suited to translation.

Empirically, we evaluate both setups, demonstrating that our approach of leveraging auxiliary parallel data offers quantifiable gains over existing state-of-the-art unsupervised models on 3 language pairs: En-Ro, En-Fr, and En-De. Finally, we perform a series of ablation studies that highlight the impact of the additional data and our additional loss terms, as well as the choice of auxiliary language.

# 2 Background and Overview

**Notation:** Before discussing our approach, we introduce some notation. We denote random variables by capital letters $X$, $Y$, $Z$, and their realizations by their corresponding lowercase versions $x$, $y$, $z$. We abuse this convention to compactly write objects like the conditional density $p(Y = y|X = x)$ as $p(y|x)$ or the marginalized distributions $p(X = x)$ as $p(x)$, with the understanding that the lowercase variables are connected to their corresponding uppercase random variables. Given a random variable $X$, we write $\mathbb{E}_{x \sim X}$ to mean the expectation with respect to $x$, where $x$ follows the distribution of $X$. We use a similar convention for conditional distributions, e.g., we write $\mathbb{E}_{y \sim p(\cdot | x)}$ to denote the expectation of $Y$ conditioned on $X = x$. Similarly, we write $H(X)$ or $H(p(x))$ to denote the entropy of the random variable $X$, i.e., $H(X) = \mathbb{E}_{x \sim X}[-\log p(x)]$. We reserve the use of typewriter font for languages, e.g., X.

**Neural Machine Translation:** In bilingual supervised machine translation we are given a training dataset $\mathcal{D}_{\mathbf{x},\mathbf{y}}$.
Each $(x,y) \in \mathcal{D}_{\mathbf{x},\mathbf{y}}$ is a (source, target) pair consisting of a sentence $x$ in language X and a semantically equivalent sentence $y$ in language Y. We train a translation model using maximum likelihood: + +$$ +L _ {s u p} (\theta) = \sum_ {(x, y) \in \mathcal {D} _ {\mathbf {x}, \mathbf {y}}} \log p _ {\theta} (y | x) +$$ + +In neural machine translation, $p_{\theta}(y|x)$ is modelled with the encoder-decoder paradigm where $x$ is encoded into a set of vectors via a neural network $\mathrm{enc}_{\theta}$ and a decoder neural network defines $p_{\theta}(y|\mathrm{enc}_{\theta}(x))$ . In this work, we use a transformer (Vaswani et al., 2017) as the encoder and decoder network. At inference time, computing the most likely target sentence $y$ is intractable since it requires enumerating over all possible sequences, and is thus approximated via beam search. + +Unsupervised Machine Translation: The requirement of a training dataset $\mathcal{D}_{\mathbf{x},\mathbf{y}}$ with source-target pairs can often be prohibitive for rare or low resource languages. Bilingual unsupervised translation attempts to learn $p_{\theta}(y|x)$ using monolingual + +corpora $\mathcal{D}_x$ and $\mathcal{D}_y$ . For each sentence $x \in \mathcal{D}_x$ , $\mathcal{D}_y$ may not contain an equivalent sentence in $\Upsilon$ , and vice versa. + +State of the art unsupervised methods typically work as follows. They first perform pre-training and learn an initial set of parameters $\theta$ based on a variety of language modeling or noisy reconstruction objectives (Lample and Conneau, 2019; Lewis et al., 2019; Song et al., 2019) over $\mathcal{D}_x$ and $\mathcal{D}_y$ . A fine-tuning stage then follows which typically uses back-translation (Sennrich et al., 2016; Lample and Conneau, 2019; He et al., 2016) that involves translating $x$ to the target language Y, translating it back to a sentence $x'$ in X, and penalizing the reconstruction error between $x$ and $x'$ . 
+ +Overview of our Approach: The following sections describe a probabilistic MT framework that justifies and generalizes the aforementioned approaches. We first model the case where we have access to several monolingual corpora, pictured in Figure 1(c). We introduce light independence assumptions to make the joint likelihood tractable and derive a lower bound, obtaining a generalization of the back-translation loss. We then extend our model to include the auxiliary parallel data pictured in Figure 1(d). We demonstrate the emergence of a cross-translation loss term, which binds distinct pairs of languages together. Finally, we present our complete training procedure, based on the EM algorithm. Building upon existing work (Song et al., 2019), we introduce a pre-training step that we run before maximizing the likelihood to obtain good representations. + +# 3 Multilingual Unsupervised Machine Translation + +In this section, we formulate our approach for M-UNMT. We restrict ourselves to three languages, but the arguments naturally extend to an arbitrary number of languages. Inspired by the recent style transfer literature (He et al., 2020) and some approaches from multilingual supervised machine translation (Ren et al., 2018), we introduce a generative model of which the available data can be seen as partially-observed samples. We first investigate the strict unsupervised case, where only monolingual data is available. Our framework naturally leads to an aggregate back-translation loss that generalizes previous work. We then incorporate the auxiliary corpus, introducing a novel cross-translation term. To optimize our loss, we leverage + +the EM algorithm, giving a rigorous justification for the stop-gradient operation that is usually applied in the UNMT and style transfer literature (Lample and Conneau, 2019; Artetxe et al., 2019; He et al., 2020). 
+ +# 3.1 M-UNMT - Monolingual Data Only + +We begin with the assumption that we have three sets of monolingual data, $\mathcal{D}_{\mathbf{x}},\mathcal{D}_{\mathbf{y}},\mathcal{D}_{\mathbf{z}}$ for languages X,Y and Z respectively. We take the viewpoint that these datasets form the visible parts of a larger dataset $\mathcal{D}_{\mathbf{x},\mathbf{y},\mathbf{z}}$ of triplets $(x,y,z)$ which are translations of each other. We think of these translations as samples of a triplet $(X,Y,Z)$ of random variables and write the observed data log-likelihood as: + +$$ +\mathcal {L} (\theta) = \mathcal {L} _ {\mathcal {D} _ {\mathrm {x}}} + \mathcal {L} _ {\mathcal {D} _ {\mathrm {y}}} + \mathcal {L} _ {\mathcal {D} _ {\mathrm {z}}} +$$ + +Our goal however is to learn a conditional translation model $p_{\theta}$ . We thus rewrite the log likelihood as a marginalization over the unobserved variables for each dataset as shown below: + +$$ +\begin{array}{l} \mathcal {L} (\theta) = \sum_ {x \in \mathcal {D} _ {x}} \log \underset {\sim (Y, Z)} {\mathbb {E}} p _ {\theta} (x | y, z) (1) \\ + \sum_ {y \in \mathcal {D} _ {\mathbf {y}}} \log \underset {\sim (X, Z)} {\mathbb {E}} p _ {\theta} (y | x, z) (2) \\ + \sum_ {z \in \mathcal {D} _ {\mathbf {z}}} \log \underset {\sim (X, Y)} {\mathbb {E}} p _ {\theta} (z | x, y) (3) \\ \end{array} +$$ + +Learning a model for $p_{\theta}(x|y,z)$ is not practical since the translation task is to translate $z \to x$ without access to $y$ , or $y \to x$ without access to $z$ . Thus, we make the following structural assumption: given any variable in the triplet $(X,Y,Z)$ , the remaining two are independent. We implicitly think of the conditioned variable as detailing the content and the two remaining variables as independent manifestations of this content in the respective languages. 
Using the fact that $p_{\theta}(x|y,z) = p_{\theta}(x|y) = p_{\theta}(x|z)$ under this assumption, we rewrite the summand in (1) as follows:

$$
\log \underset{(y,z) \sim (Y,Z)}{\mathbb{E}} p_{\theta}(x|y,z) = \log \underset{(y,z) \sim (Y,Z)}{\mathbb{E}} \sqrt{p_{\theta}(x|y)\, p_{\theta}(x|z)}.
$$

Next, note that all these expectations in Eq. 1, 2, and 3 are intractable to compute due to the number of possible sequences in each language. We address this problem through the Expectation Maximization (EM) algorithm (Dempster et al., 1977).

We first use Jensen's inequality$^{1}$:

$$
\begin{aligned}
\log \underset{(y,z) \sim (Y,Z)}{\mathbb{E}} p_{\theta}(x|y,z)
&= \log \underset{(y,z) \sim p_{\theta}(y,z|x)}{\mathbb{E}} \frac{p(y,z)\, p_{\theta}(x|y,z)}{p_{\theta}(y,z|x)} \\
&\geqslant \underset{(y,z) \sim p_{\theta}(y,z|x)}{\mathbb{E}} \left[ \log p_{\theta}(x|y,z) + \log p(y,z) \right] + H\big(p_{\theta}(y,z|x)\big)
\end{aligned}
$$

Since the entropy of a random variable is always non-negative, we can bound the quantity on the right from below as follows:

$$
\begin{aligned}
\log \underset{(y,z) \sim (Y,Z)}{\mathbb{E}} p_{\theta}(x|y,z)
&\geqslant \underset{(y,z) \sim p_{\theta}(y,z|x)}{\mathbb{E}} [\log p_{\theta}(x|y,z)] + \underset{(y,z) \sim p_{\theta}(y,z|x)}{\mathbb{E}} [\log p(y,z)] \\
&= \frac{1}{2} \underset{y \sim p_{\theta}(\cdot|x)}{\mathbb{E}} \log p_{\theta}(x|y) + \frac{1}{2} \underset{z \sim p_{\theta}(\cdot|x)}{\mathbb{E}} \log p_{\theta}(x|z) + \underset{(y,z) \sim p_{\theta}(y,z|x)}{\mathbb{E}} \log p(y,z)
\end{aligned}
$$

Applying the above strategy to (2) and (3) and rearranging terms gives us:

$$
\begin{aligned}
\mathcal{L}(\theta) \geqslant\; & \frac{1}{2} \underset{y \sim p_{\theta}(\cdot|x)}{\mathbb{E}} \log p_{\theta}(x|y) + \frac{1}{2} \underset{z \sim p_{\theta}(\cdot|x)}{\mathbb{E}} \log p_{\theta}(x|z) \\
& + \frac{1}{2} \underset{x \sim p_{\theta}(\cdot|y)}{\mathbb{E}} \log p_{\theta}(y|x) + \frac{1}{2} \underset{z \sim p_{\theta}(\cdot|y)}{\mathbb{E}} \log p_{\theta}(y|z) \\
& + \frac{1}{2} \underset{y \sim p_{\theta}(\cdot|z)}{\mathbb{E}} \log p_{\theta}(z|y) + \frac{1}{2} \underset{x \sim p_{\theta}(\cdot|z)}{\mathbb{E}} \log p_{\theta}(z|x) \\
& + \underset{(y,z) \sim p_{\theta}(\cdot,\cdot|x)}{\mathbb{E}} \log p(y,z) + \underset{(x,z) \sim p_{\theta}(\cdot,\cdot|y)}{\mathbb{E}} \log p(x,z) + \underset{(x,y) \sim p_{\theta}(\cdot,\cdot|z)}{\mathbb{E}} \log p(x,y)
\end{aligned} \tag{4}
$$

This lower bound contains two types of terms. The back-translation terms, e.g.,

$$
\underset{y \sim p_{\theta}(\cdot | x)}{\mathbb{E}} \log p_{\theta}(x | y), \tag{5}
$$

![](images/6f8de0de5852727c9da178047fbfbdda1005abe79638792f6d159fff3388fee8.jpg)
(a) Back-translation

![](images/e1caa67858a1ed9b6ce009aca45be76480d96bf5e303e0383c84543b3c091291.jpg)
(b) Cross-translation

Figure 2: Illustration of the back-translation and cross-translation losses. Stop gradient is applied on step 1.

enforce that reciprocal translation models are consistent. The joint terms, e.g.,
$\mathbb{E}_{(x,y)\sim p_{\theta}(\cdot,\cdot|z)}\log p(x,y)$, will vanish in our optimization procedure, as explained next.

We use the EM algorithm to maximize Eq. 4. In our setup, the E-step at iteration $t$ amounts to computing the expectations against the conditional distributions evaluated at the current set of parameters $\theta = \theta^{(t)}$. We approximate this by removing the expectations and replacing the random variable with the mode of its distribution, i.e., $\mathbb{E}_{y\sim p_{\theta^{(t)}}(\cdot|x)}\log p_{\theta^{(t)}}(x|y) \approx \log p_{\theta^{(t)}}(x|\hat{y})$ where $\hat{y} = \arg\max_y p_{\theta^{(t)}}(y|x)$. In practice, this amounts to running a greedy decoding procedure for the relevant translation models.

The M-step then corresponds to choosing the $\theta$ which maximizes the resulting terms after we perform the E-step. Notice that for this step, the last three terms in Eq. 4 no longer possess a $\theta$ dependence, as the expectation was computed in the E-step with a dependence on $\theta^{(t)}$. These terms can therefore be safely ignored, leaving us with only the back-translation terms. By our approximation to the E-step, these expressions become exactly the loss terms that appear in the current UNMT literature (Artetxe et al., 2019; Lample and Conneau, 2019; Song et al., 2019); see Figure 2(a) for a graphical depiction. Since computing the argmax exactly is difficult, we perform a single gradient update for the M-step and define $\theta^{(t+1)}$ inductively this way.

# 3.2 Auxiliary parallel data

We now extend our framework with an auxiliary parallel corpus (Figure 1(d)). We assume that we wish to translate from $\mathbf{X}$ to $\mathbf{Z}$, and that we have access to a parallel corpus $\mathcal{D}_{\mathbf{x},\mathbf{y}}$ that maps sentences from $\mathbf{X}$ to $\mathbf{Y}$.
To leverage this source of data, we augment the log-likelihood $\mathcal{L}$ as follows:

$$
\mathcal{L}_{\mathrm{aug}}(\theta) = \mathcal{L}(\theta) + \sum_{(x,y)\in\mathcal{D}_{\mathbf{x},\mathbf{y}}} \log \underset{z\sim Z}{\mathbb{E}}\, p_{\theta}(x,y|z) \tag{6}
$$

Similar to how we handled the monolingual terms, we can utilize the EM algorithm to obtain an objective amenable to gradient optimization. By using the EM algorithm, we can substitute the distribution of $Z$ in Eq. 6 with the one given by $p_{\theta}(z|x,y)$. The structural assumption we made in the case of monolingual data still holds: given any variable in the triplet $(X,Y,Z)$, the remaining two are independent. Using this assumption, we can rewrite the distribution $p_{\theta}(z|x,y)$ as either $p_{\theta}(z|x)$ or $p_{\theta}(z|y)$. Since we can decompose $\log p_{\theta}(x,y|z) = \log p_{\theta}(x|z) + \log p_{\theta}(y|z)$, we can leverage both formulations with an argument analogous to the one in §3.1:

$$
\begin{aligned}
\log \underset{z\sim Z}{\mathbb{E}}\, p_{\theta}(x,y|z) &= \log \underset{z\sim Z}{\mathbb{E}}\, p_{\theta}(x|z)\, p_{\theta}(y|z) \\
&\geqslant \underset{z\sim p_{\theta}(\cdot|y)}{\mathbb{E}} \log p_{\theta}(x|z) + \underset{z\sim p_{\theta}(\cdot|x)}{\mathbb{E}} \log p_{\theta}(y|z) \\
&\quad + \underset{z\sim p_{\theta}(\cdot|y)}{\mathbb{E}} \log p(z) + \underset{z\sim p_{\theta}(\cdot|x)}{\mathbb{E}} \log p(z) \tag{7}
\end{aligned}
$$

A key feature of this lower bound is the emergence of the expressions:

$$
\underset{z\sim p_{\theta}(\cdot|y)}{\mathbb{E}} \log p_{\theta}(x|z) \quad \text{and} \quad \underset{z\sim p_{\theta}(\cdot|x)}{\mathbb{E}} \log p_{\theta}(y|z). \tag{8}
$$

Intuitively, these terms ensure that the models can accurately translate from Y to Z, then Z to X (resp. X to Z, then Z to Y).
Because they enforce cross-language-pair consistency, we will refer to them as cross-translation terms. In contrast, the back-translation terms, e.g., Eq. 5, only enforce monolingual consistency. We provide a graphical depiction of these terms in Figure 2(b).

As in the case of monolingual data, we optimize the full likelihood with EM. During the E-step, we approximate the expectation by evaluating the expectant at the mode of the distribution. As in §3.1, the last two terms in Eq. 7 disappear in the M-step.

# 3.3 Connections with supervised and zero-shot methods

So far, we have only discussed multilingual unsupervised neural machine translation setups. We now derive the other configurations of Figure 1, that is, supervised and zero-shot translation, through our framework.

Supervised translation: Deriving supervised translation is straightforward. Given the parallel dataset $\mathcal{D}_{\mathbf{x},\mathbf{y}}$, we can rewrite the likelihood as:

$$
\sum_{(x,y)\in\mathcal{D}_{\mathbf{x},\mathbf{y}}} \log p_{\theta}(x,y) = \sum_{(x,y)\in\mathcal{D}_{\mathbf{x},\mathbf{y}}} \log p_{\theta}(y|x) + \log p(x)
$$

where the second term is a language model that does not depend on $\theta$.

Zero-shot translation: We can also connect the cross-translation term to the zero-shot MT approach of Al-Shedivat and Parikh (2019). Simplifying their setup, they consider three languages X, Y and Z with parallel data between X and Y as well as between X and Z. In addition to the usual cross-entropy objective, they also add agreement terms, i.e., $\mathbb{E}_{z\sim p_{\theta}(\cdot |x)}\log p(z|y)$ and $\mathbb{E}_{z\sim p_{\theta}(\cdot |y)}\log p(z|x)$. We show that these agreement terms are operationally equivalent to the cross-translation terms, i.e., Eq. 8. We first obtain the following equality by a simple application of Bayes' theorem:

$$
\log p_{\theta}(y|z) = \log p_{\theta}(z|y) + \log p(y) - \log p(z).
$$

We then apply the expectation operator $\mathbb{E}_{z\sim p_{\theta}(\cdot|x)}$ to both sides of this equation. From an optimization perspective, we are only interested in terms involving the learnable parameters, so we can dispose of the term involving $\log p(y)$ on the right. Applying the same argument to $\log p_{\theta}(x|z)$, we obtain:

$$
\begin{aligned}
&\underset{z\sim p_{\theta}(\cdot|x)}{\mathbb{E}} \log p_{\theta}(y|z) + \underset{z\sim p_{\theta}(\cdot|y)}{\mathbb{E}} \log p_{\theta}(x|z) \\
&= \underset{z\sim p_{\theta}(\cdot|x)}{\mathbb{E}} \log p_{\theta}(z|y) + \underset{z\sim p_{\theta}(\cdot|y)}{\mathbb{E}} \log p_{\theta}(z|x) - \underset{z\sim p_{\theta}(\cdot|x)}{\mathbb{E}} \log p(z) - \underset{z\sim p_{\theta}(\cdot|y)}{\mathbb{E}} \log p(z)
\end{aligned}
$$

By adding the quantity $\mathbb{E}_{z\sim p_{\theta}(\cdot|x)}\log p(z) + \mathbb{E}_{z\sim p_{\theta}(\cdot|y)}\log p(z)$ to both sides of this equality, the left-hand side becomes the lower bound introduced in the previous subsection, consisting of the cross-translation terms. The right-hand side consists of the agreement terms from Al-Shedivat and Parikh (2019). We tried using these agreement terms instead of our cross-translation terms, but found them to be unstable. This could be attributed to the fact that we lack $\mathrm{X}\leftrightarrow\mathrm{Z}$ parallel data, which is available in the setup of Al-Shedivat and Parikh (2019).
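As a quick sanity check of the Bayes-rule identity above, it can be verified numerically on a toy joint distribution (the numbers below are illustrative, not from the paper):

```python
import math

# Toy joint distribution p(y, z) over two binary "sentences" (illustrative values).
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

# Marginals p(y) and p(z).
p_y = {y: sum(p for (yy, _), p in joint.items() if yy == y) for y in (0, 1)}
p_z = {z: sum(p for (_, zz), p in joint.items() if zz == z) for z in (0, 1)}

# Check: log p(y|z) = log p(z|y) + log p(y) - log p(z) for every (y, z).
for (y, z), p_yz in joint.items():
    log_p_y_given_z = math.log(p_yz) - math.log(p_z[z])
    log_p_z_given_y = math.log(p_yz) - math.log(p_y[y])
    rhs = log_p_z_given_y + math.log(p_y[y]) - math.log(p_z[z])
    assert abs(log_p_y_given_z - rhs) < 1e-12
```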
Algorithm 1 PRE-TRAINING
Input: Datasets $\mathfrak{D}$, number of steps $N$
1: Initialize $\theta \leftarrow \theta_0$
2: for step in 1, 2, 3, ..., $N$ do
3: Choose dataset $D$ at random from $\mathfrak{D}$
4: if $D$ consists of monolingual data then
5: Sample batch $x$ from $D$
6: Masked version of $x$: $x_M \leftarrow \mathrm{MASK}(x)$
7: MASS loss: $\mathrm{ml} \leftarrow \log p_{\theta}(x|x_M)$
8: Update: $\theta \leftarrow$ optimizer_update($\mathrm{ml}$, $\theta$)
9: else if $D$ consists of parallel data then
10: Sample batch $(x,y)$ from $D$
11: $\mathrm{tl} \leftarrow \log p_{\theta}(y|x) + \log p_{\theta}(x|y)$
12: $\theta \leftarrow$ optimizer_update($\mathrm{tl}$, $\theta$)
13: end if
14: end for

# 4 Training algorithms

We now discuss how to train the model end-to-end. We introduce a pre-training phase that we run before the EM procedure to initialize the model. Pre-training is known to be crucial for UNMT (Lample and Conneau, 2019; Song et al., 2019). We make use of an existing method, MASS, and enrich it with the auxiliary parallel corpus if available. We refer to the EM algorithm described in §3 as fine-tuning for consistency with the literature.

# 4.1 Pre-training

The aim of the pre-training phase is to produce an intermediate translation model $p_{\theta}$, to be refined during the fine-tuning step. We pre-train the model differently based on the data available to us. For monolingual data, we use the MASS objective (Song et al., 2019). The MASS objective consists of masking randomly-chosen contiguous segments of the input and then reconstructing the masked portion. We refer to this operation as MASK. If we have auxiliary parallel data, we use the traditional cross-entropy translation objective. We describe the full procedure in Algorithm 1.

# 4.2 Fine-tuning

During the fine-tuning phase, we utilize the objectives derived in Section 3.
At each training step we choose a dataset (either monolingual or bilingual), sample a batch, compute the loss, and update the weights. If the corpus is monolingual, we use the back-translation loss, i.e., Eq. 5. If the corpus is bilingual, we compute the cross-translation terms, i.e., Eq. 8, in both directions and perform one update for each term. We detail the steps in Algorithm 2.

Algorithm 2 FINE-TUNING
Input: Datasets $\mathfrak{D}$, languages $\mathfrak{L}$, initial parameters $\theta_0$ from pre-training
1: Initialize $\theta \leftarrow \theta_0$
2: while not converged do
3: for $D$ in $\mathfrak{D}$ do
4: if $D$ consists of monolingual data then
5: $l_D \leftarrow$ Language of $D$
6: Sample batch $x$ from $D$
7: for $l$ in $\mathfrak{L}$, $l \neq l_D$ do
8: $\hat{y}_l \leftarrow$ Decode $p_{\theta}(\hat{y}_l|x)$
9: $\mathrm{bt}_{l_D,l} \leftarrow \log p_{\theta}(x|\hat{y}_l)$
10: $\theta \leftarrow$ optimizer_update($\mathrm{bt}_{l_D,l}$, $\theta$)
11: end for
12: else if $D$ consists of parallel data then
13: Sample batch $(x,y)$ from $D$
14: $l_x \leftarrow$ Language of $x$
15: $l_y \leftarrow$ Language of $y$
16: for $l$ in $\mathfrak{L}$, $l \neq l_x, l_y$ do
17: $\hat{z}_l \leftarrow$ Decode $p_{\theta}(\hat{z}_l|x)$
18: $\mathrm{ct} \leftarrow \log p_{\theta}(y|\hat{z}_l)$
19: $\theta \leftarrow$ optimizer_update($\mathrm{ct}$, $\theta$)
20: end for
21: end if
22: end for
23: end while

# 5 Experiments

We conduct experiments on the language triplets English-French-Romanian with English-French parallel data, English-Czech-German with English-Czech parallel data, and English-Spanish-French with English-Spanish parallel data, with the unsupervised directions chosen solely for the purpose of comparing with recent previous work (Lample and Conneau, 2019; Song et al., 2019; Ren et al., 2019; Artetxe et al., 2019).

# 5.1 Datasets and preprocessing

We use the News Crawl datasets from WMT as our sole source of monolingual data for all the languages considered.
We use the data from years 2007-2018 for all languages except Romanian, for which we use years 2015-2018. We ensure the monolingual data is properly labeled by using the fastText language classification tool (Joulin et al., 2016) and keep only the lines of data with the appropriate language classification. For parallel data, we use the UN Corpus (Ziemski et al., 2016) for English-Spanish, the $10^{9}$ French-English Gigaword corpus for English-French, and the CzEng 1.7 dataset (Bojar et al., 2016) for English-Czech. We preprocess all text using the tools from Moses (Koehn et al., 2007), applying the Moses tokenizer to separate the text inputs into tokens. We normalize punctuation, remove non-printing characters, and replace unicode symbols with their non-unicode equivalents. For Romanian, we also use the scripts from Sennrich to normalize the text and remove diacritics. For a given language triplet, we select 10 million lines of monolingual data from each language and use SentencePiece (Kudo and Richardson, 2018) to create vocabularies containing 64,000 tokens for each. We then remove lines with more than 100 tokens from the training set.

# 5.2 Model architectures

We use Transformers (Vaswani et al., 2017) for our translation models $p_{\theta}$, with a 6-layer encoder and decoder, a hidden size of 1024, and a feed-forward filter size of 4096. We share the same encoder for all languages. Following XLM (Lample and Conneau, 2019), we use language embeddings to differentiate between the languages by adding these embeddings to each token's embedding. Unlike XLM, we only use the language embeddings on the decoder side. We follow the same modification as Song et al. (2019) and make the output transformation of each attention head in each Transformer block of the decoder distinct for each language. Besides these modifications, we share the parameters of the decoder across all languages.
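The decoder-side language embeddings described above can be sketched as follows; the dimensions, tokens, and parameter tables here are illustrative stand-ins rather than the actual model:

```python
import random

# Illustrative sizes and vocabularies (stand-ins, not the paper's values).
EMB_DIM = 4
random.seed(0)
token_embedding = {tok: [random.gauss(0.0, 1.0) for _ in range(EMB_DIM)]
                   for tok in ("_hello", "_world")}
language_embedding = {lang: [random.gauss(0.0, 1.0) for _ in range(EMB_DIM)]
                      for lang in ("en", "fr", "ro")}

def embed_for_decoder(tokens, lang):
    """Add the target-language embedding to every token embedding, so the
    shared decoder knows which language it is generating."""
    lvec = language_embedding[lang]
    return [[t + l for t, l in zip(token_embedding[tok], lvec)]
            for tok in tokens]

fr_vecs = embed_for_decoder(["_hello", "_world"], "fr")
ro_vecs = embed_for_decoder(["_hello", "_world"], "ro")
assert fr_vecs != ro_vecs  # same tokens, different target language
```

All other decoder parameters would be shared across languages (apart from the per-language attention output transformations the text mentions, which this sketch omits).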
# 5.3 Training configuration

For pre-training, we group the data into batches of 1024 examples each, where each batch consists of either monolingual data of a single language or parallel data, but not both at once. We pad sequences up to a maximum length of 100 SentencePiece tokens. During pre-training, we use the Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of 0.0002 and a weight decay parameter of 0.01, as well as 4,000 warmup steps and a linear decay schedule over 1.2 million steps. For fine-tuning, we use Adamax (Kingma and Ba, 2015) with the same learning rate and warmup steps, no weight decay, and train the models until convergence. We use Google Cloud TPUs for pre-training and 8 NVIDIA V100 GPUs with a batch size of 3,000 tokens per GPU for fine-tuning.

# 5.4 Results

Evaluation We use tokenized BLEU to measure the performance of our models, using the multi-bleu.pl script from Moses. Recent work (Post, 2018) has shown that the choice of tokenizer and preprocessing scheme can impact BLEU scores tremendously. Bearing this in mind, we chose to follow the same evaluation procedures used by the majority of the baselines that we consider, which involves the use of tokenized BLEU as opposed to the scores given by sacreBLEU. Given the rising popularity of sacreBLEU (Post, 2018), we also include BLEU scores computed by sacreBLEU on the detokenized text for French and German. We exclude Romanian since most works in the literature traditionally use additional tools from Sennrich not included in sacreBLEU.

Baselines We list our results in Table 1.
We also include the results of six strong unsupervised baselines: (1) XLM (Lample and Conneau, 2019), a cross-lingual language model fine-tuned with back-translation; (2) MASS (Song et al., 2019), which uses the aforementioned pre-training task with back-translation during fine-tuning; (3) D2GPo (Li et al., 2020a), which builds on MASS and leverages an additional regularizer in the form of a data-dependent Gaussian prior; (4) the recent work of Artetxe et al. (2019), which leverages tools from statistical MT as well as subword information to enrich their models; (5) the work of Ren et al. (2019), which explicitly attempts to pre-train for UNMT by building cross-lingual $n$-gram tables and a new pre-training task based on them; (6) mBART (Liu et al., 2020), which pre-trains on a variety of language configurations and fine-tunes with traditional on-the-fly back-translation. mBART also leverages Czech-English data for the Romanian-English language pair.

Furthermore, we include concurrent work that also uses auxiliary parallel data: (7) the work of Bai et al. (2020), which performs pre-training and fine-tuning in one stage and replaces MASS with a denoising autoencoding objective; (8) the work of Li et al. (2020b), which also leverages a cross-translation term and additionally includes a knowledge distillation objective. We also include the results of our model after pre-training, i.e., with no back-translation or cross-translation objective, under the title M-UNMT (Only Pre-Train).

Our models with auxiliary data obtain better scores for almost all translation directions. Pre-training with the auxiliary data by itself gives competitive results in two of the three X - En directions. Moreover, our approach outperforms all the baselines which also leverage auxiliary parallel data. This suggests that our improved performance comes from both our choice of objectives and the additional data.

# 6 Ablations

We perform a series of ablation studies to determine which aspects of our formulation explain the improved performance.

Impact of the auxiliary data We first examine the value provided by the inclusion of the auxiliary data, focusing on the triplet English-French-Romanian. To that end, we study four types of training configurations: (1) our implementation of MASS (Song et al., 2019), with only English and Romanian data; (2) no auxiliary parallel data during pre-training, and fine-tuning with only the multi-way back-translation objective; (3) no parallel data during the pre-training phase but available during the fine-tuning phase, allowing us to leverage the cross-translation terms; (4) auxiliary parallel data available during both the pre-training and the fine-tuning phases of training. We also include the numbers reported in the original MASS paper (Song et al., 2019) as well as the best-performing model of the WMT'16 Romanian-English news translation task (Sennrich et al., 2016) and report them in Table 2.

The results show that leveraging the auxiliary data induces superior performance, even surpassing the supervised scores of Sennrich et al. (2016). These gains can manifest in either pre-training or

| Model | En - Fr | Fr - En | En - De | De - En | En - Ro | Ro - En |
| --- | --- | --- | --- | --- | --- | --- |
| **Models without auxiliary parallel data** | | | | | | |
| XLM (Lample and Conneau, 2019) | 33.4 | 33.3 | 27.0 | 34.3 | 33.3 | 31.8 |
| MASS (Song et al., 2019) | 37.5 | 34.9 | 28.3 | 35.2 | 35.2 | 33.1 |
| D2GPo (Li et al., 2020a) | 37.9 | 34.9 | 28.4 | 35.6 | 36.3 | 33.4 |
| Artetxe et al. (2019) | 36.2 | 33.5 | 26.9 | 34.4 | - | - |
| Ren et al. (2019) | 35.4 | 34.9 | 27.7 | 35.6 | 34.9 | 34.1 |
| mBART (Liu et al., 2020) | - | - | 29.8 | 34.0 | 35.0 | 30.5 |
| M-UNMT | 36.3 | 33.50 | 25.5 | 32.3 | 34.87 | 32.1 |
| **Models with auxiliary parallel data** | | | | | | |
| mBART (Liu et al., 2020) | - | - | - | - | - | 33.9 |
| Bai et al. (2020) (Concurrent work) | 36.5 | 33.4 | 26.6 | 30.1 | 35.1 | 31.6 |
| Li et al. (2020b) (Concurrent work) | - | - | - | - | 37.1 | 34.7 |
| M-UNMT (Only Pre-Train) | 29.2 | 33.8 | 18.3 | 29.0 | 25.3 | 32.6 |
| M-UNMT (Fine-Tuned) | 38.3 | 36.1 | 28.7 | 36.0 | 37.4 | 35.8 |
| detok. sacreBLEU | 36.1 | 35.8 | 28.9 | 35.8 | - | - |

Table 1: BLEU scores of various models for UNMT. M-UNMT refers to our approach. The En - Fr / Fr - En directions were evaluated on newstest2014, while the En - Ro / Ro - En and En - De / De - En directions on newstest2016. To be consistent with previous work, we report tokenized BLEU. However, to aid future reproducibility, we also report sacreBLEU scores.
We do not report sacreBLEU scores for Romanian since it is common to include additional preprocessing from Sennrich (such as removing diacritics) which is not natively supported by sacreBLEU. See §5.4 for details.
| Configuration | En - Ro | Ro - En |
| --- | --- | --- |
| **Bilingual configurations** | | |
| MASS (Song et al., 2019) | 35.20 | 33.10 |
| MASS (Our implementation) | 34.14 | 31.78 |
| **M-UNMT configurations** | | |
| No auxiliary data | 34.87 | 32.10 |
| Auxiliary data in fine-tuning | 36.57 | 34.32 |
| Auxiliary data in both phases | 37.4 | 35.75 |
| **Supervised** | | |
| (Sennrich et al., 2016) | 28.2 | 33.9 |
| mBART (Liu et al., 2020) | 38.5 | 39.9 |

Table 2: En - Ro and Ro - En BLEU scores on newstest2016 for different ways of leveraging multilinguality and the auxiliary parallel data. M-UNMT refers to our approach.
| Languages | En - Ro | Ro - En |
| --- | --- | --- |
| En, Fr, Ro | 37.21 | 35.5 |
| En, Es, Ro | 37.38 | 35.21 |
| En, Cs, Ro | 36.37 | 34.15 |

Table 3: En - Ro and Ro - En BLEU scores for varying choices of auxiliary language on WMT newstest2016.

fine-tuning, with superior performance when the auxiliary data is available in both training phases.

Impact of the additional objectives Given the strong performance of our model just after the pre-training phase, it would be plausible that the gains from multilinguality arise exclusively during the pre-training phase. To demonstrate that this is not the case, we investigate three types of fine-tuning configurations: (1) disregard the auxiliary language and fine-tune using only back-translation with English and Romanian data, as per Song et al. (2019); (2) fine-tune with our multi-way back-translation objective; (3) fine-tune with our multi-way back-translation objective and leverage the auxiliary parallel data through the cross-translation terms. We name these configurations BT, M-BT, and Full respectively. We plot the results of training for 100k steps in Figure 3, reporting the numbers on a modified version of the dev set from the WMT'16 Romanian-English competition where all samples with more than 100 tokens were removed.

![](images/40ce32f598cb5fd53abaef79697dc9b03fcec6131d125316648b25a95a133c84.jpg)
(a) Ro - En BLEU score

![](images/4980b44829881b4a8b1cce28cdc515996757087b0bcc280efc21977430ec3aa9.jpg)
(b) En - Ro BLEU score

![](images/ca5cdb11a7d4cc40033c0309e95ced20b5db0b1e2ba04b15503d5864049e3e67.jpg)
(c) Ro translation loss

![](images/0c8131523d98869b6b1e7bba230e06b44e9fb6444cf4535f36d7f30e0a70b257.jpg)
(d) En translation loss

Figure 3: Back-translation losses and BLEU scores for the three configurations on our modified version of the WMT'16 dev set.

In the Ro - En direction, the BLEU score of the Full setup dominates the score of the other approaches. Furthermore, the performance of BT decays after a few training steps. In the En - Ro direction, the BLEU scores for BT and M-BT reach a plateau about 1 point below Full.
These charts illustrate the positive effect of the cross-translation terms. We contrast the BLEU curves with the back-translation loss curves in Figures 3(c) and 3(d). We see that even though the BT configuration achieves the lowest back-translation loss, it does not attain the largest BLEU score. This demonstrates that using back-translation for the desired (source, target) pair alone is not the best task for the fine-tuning phase. We see that multilinguality helps, as adding more back-translation terms involving other languages improves the BLEU score at the cost of higher back-translation errors. From this viewpoint, multilinguality acts as a regularizer, as it does for traditional supervised machine translation.

Impact of the choice of auxiliary language In this study, we examine the impact of the choice of auxiliary language. We perform the same pre-training and fine-tuning procedure using either French, Spanish, or Czech as the auxiliary language for the English-Romanian pair, with parallel data between the auxiliary language and English. To isolate the effect of the language choice, we fix the amount of monolingual data of the auxiliary language to roughly 40 million examples, as well as roughly 12.5 million lines of parallel data in the X-English direction. Table 3 shows the results, indicating that using French or Spanish yields similar BLEU scores. Using Czech induces inferior performance, demonstrating that choosing a suitable auxiliary language plays an important role in attaining optimal performance. The configuration using Czech still outperforms the baselines, showing the value of having any auxiliary parallel data at all.

# 7 Conclusion and Future Work

In this work, we explored a simple multilingual approach to UNMT and demonstrated that multilinguality and auxiliary parallel data offer quantifiable gains over strong baselines. We hope to explore massively multilingual unsupervised machine translation in the future.
# References

Maruan Al-Shedivat and Ankur P Parikh. 2019. Consistency by agreement in zero-shot neural machine translation. In NAACL.
Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, et al. 2019. Massively multilingual neural machine translation in the wild: Findings and challenges. arXiv preprint arXiv:1907.05019.
Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019. An effective approach to unsupervised machine translation. In ACL.
Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. In ICLR.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.
Hongxiao Bai, Mingxuan Wang, Hai Zhao, and Lei Li. 2020. Unsupervised neural machine translation with indirect supervision. arXiv preprint arXiv:2004.03137.
Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with bilingual parallel corpora. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 597-604. Association for Computational Linguistics.
Ondrej Bojar, Ondrej Dušek, Tom Kocmi, Jindrich Libovický, Michal Novák, Martin Popel, Roman Sudarikov, and Dušan Variš. 2016. CzEng 1.6: Enlarged Czech-English Parallel Corpus with Processing Tools Dockered. In Text, Speech, and Dialogue: 19th International Conference, TSD 2016, number 9924 in Lecture Notes in Computer Science, pages 231-238, Cham / Heidelberg / New York / Dordrecht / London. Masaryk University, Springer International Publishing.
Arthur P Dempster, Nan M Laird, and Donald B Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B (Methodological), 39(1):1-22.
Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism.
In NAACL.
Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor OK Li. 2019. Improved zero-shot neural machine translation via ignoring spurious correlations. In ACL.
Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Advances in Neural Information Processing Systems, pages 820-828.
J. He, X. Wang, G. Neubig, and T. Berg-Kirkpatrick. 2020. A probabilistic formulation of unsupervised text style transfer. In ICLR.
Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, et al. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.
Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. In EACL.
Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1700-1709.
Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume: Proceedings of the Demo and Poster Sessions, pages 177-180.
Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. arXiv preprint arXiv:1706.03872.
Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing.
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71.
Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018. Unsupervised machine translation using monolingual corpora only. In ICLR.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
Zuchao Li, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Zhuosheng Zhang, and Hai Zhao. 2020a. Data-dependent Gaussian prior objective for language generation. In ICLR.
Zuchao Li, Hai Zhao, Rui Wang, Masao Utiyama, and Eiichiro Sumita. 2020b. Reference language based unsupervised neural machine translation. arXiv preprint arXiv:2004.02127.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. arXiv preprint arXiv:2001.08210.
Matt Post. 2018. A call for clarity in reporting BLEU scores. arXiv preprint arXiv:1804.08771.
Sujith Ravi and Kevin Knight. 2011. Deciphering foreign language. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 12-21.
Shuo Ren, Wenhu Chen, Shujie Liu, Mu Li, Ming Zhou, and Shuai Ma. 2018. Triangular architecture for rare language translation. arXiv preprint arXiv:1805.04813.
Shuo Ren, Yu Wu, Shujie Liu, Ming Zhou, and Shuai Ma. 2019. Explicit cross-lingual pre-training for unsupervised machine translation.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 770-779.
Sukanta Sen, Kamal Kumar Gupta, Asif Ekbal, and Pushpak Bhattacharyya. 2019. Multilingual unsupervised NMT using shared encoder and language-specific decoders. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3083-3089.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Edinburgh neural machine translation systems for WMT 16. arXiv preprint arXiv:1606.02891.
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. MASS: Masked sequence to sequence pre-training for language generation. In International Conference on Machine Learning, pages 5926-5936.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
Chang Xu, Tao Qin, Gang Wang, and Tie-Yan Liu. 2019. Polygon-Net: A general framework for jointly boosting multiple unsupervised neural machine translation models. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 5320-5326. AAAI Press.
+Michal Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2016. The united nations parallel corpus v1. 0. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3530-3534. \ No newline at end of file diff --git a/amultilingualviewofunsupervisedmachinetranslation/images.zip b/amultilingualviewofunsupervisedmachinetranslation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7b2d528171089dc1e79c3fa45a7231bfcd7dca86 --- /dev/null +++ b/amultilingualviewofunsupervisedmachinetranslation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a14888285d86605e4b8e0841fb2d58fa27148bf1aa4465f2bd6b5bd815358602 +size 355787 diff --git a/amultilingualviewofunsupervisedmachinetranslation/layout.json b/amultilingualviewofunsupervisedmachinetranslation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..028f8dd7af68b43a128ee296221f489a1af7b5e4 --- /dev/null +++ b/amultilingualviewofunsupervisedmachinetranslation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5e77970bc4adcbbd7622594011415c74648fd92f4537ccbaa98a9c05f75ccca4 +size 447144 diff --git a/amultipersonachatbotforhotlinecounselortraining/a092fda5-6652-4407-b362-4aede914647f_content_list.json b/amultipersonachatbotforhotlinecounselortraining/a092fda5-6652-4407-b362-4aede914647f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..242b4118fd1d80fe9ce77c875fcb4c4ee7087ea0 --- /dev/null +++ b/amultipersonachatbotforhotlinecounselortraining/a092fda5-6652-4407-b362-4aede914647f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:57af06f76016a58067c4fa7ec2cb65586d6bad790e4403e1d5d7118bbd9ba312 +size 95565 diff --git a/amultipersonachatbotforhotlinecounselortraining/a092fda5-6652-4407-b362-4aede914647f_model.json 
# A Multi-Persona Chatbot for Hotline Counselor Training

Orianna DeMasi, Yu Li, Zhou Yu

University of California, Davis

[odemasi, yooli, joyu]@ucdavis.edu

# Abstract

Suicide prevention hotline counselors aid individuals during difficult times through millions of calls and chats. A chatbot cannot safely replace a counselor, but we explore whether a chatbot can be developed to help train human counselors. Such a system needs to simulate intimate situations across multiple practice sessions.
Open-domain dialogue systems frequently suffer from generic responses that do not characterize personal stories, so we look to infuse conversations with persona information by mimicking prototype conversations. Towards building a "Crisisbot" hotline visitor simulation, we propose a counseling strategy annotation scheme and a multi-task framework that leverages these counselor strategies to retrieve similar examples, generate diverse sub-utterances, and interleave prototype and generated sub-utterances into complex responses. We evaluate this framework with crowdworkers and experienced hotline counselors. The framework considerably increases response diversity and specificity, with limited impact on coherence. Our results also show a considerable discrepancy between crowdworker and counselor judgements, which emphasizes the importance of including target populations in system development and evaluation.

# 1 Introduction

Crisis hotlines provide the opportunity for individuals to reach out, be heard, and get help during intense times of need. They are invaluable, highly used resources, as evidenced by The National Suicide Prevention Lifeline centers alone handling 2.2 million calls during 2018 (Vibrant Emotional Health, 2019). Further, many helplines have expanded services to chat, in addition to voice calls, to support texting and increased demand. To help visitors who contact them, helplines rely on counselors trained in various active listening techniques and de-escalation strategies (Gould et al., 2013; Knox et al., 2012; Labouliere et al., 2020).

![](images/322f0048689e87384fa09db087d66f3398a23258f4e2c3aa7a9f445b1920332b.jpg)
Figure 1: Snippet of a conversation illustrating a counselor practicing counseling with Crisisbot. Realistic visitor messages are often complex and include multiple issues that characterize a personal situation.
As in other high-stakes settings, training hotline counselors suffers from the fundamental challenge that counselors must practice counseling skills without putting any distressed hotline visitors in danger. Crisis counselors usually obtain experience by practicing with experienced counselors who role-play hotline visitors. Role-playing has been shown to improve crisis intervention (Cross et al., 2011), but requires significant resources that can be challenging for hotlines to provide on tight budgets. For example, it requires experienced counselors' time that could instead be used to help hotline visitors in need.

To provide a realistic, low-risk practice environment to augment crisis hotline counselor training, we seek to develop Crisisbot: a dialogue agent that can simulate a hotline visitor and be used to train human counselors. A chatbot clearly cannot safely and thus ethically take on a counseling role, so we instead explore this direction of using a chatbot to support human counselors. In particular, we consider whether a chatbot can be used to simulate a visitor to better train human counselors. By chatting with Crisisbot, novice counselors could safely practice counseling and build confidence before interacting with hotline visitors.

In addition to returning relevant messages, Crisisbot needs to simulate multiple visitor stories to begin to expose counselors to the astonishing breadth of scenarios that they may encounter. Multiple stories are also needed to support multiple practice sessions, and these visitor stories must be intimate conversations with realistically complex messages (as in Figure 1). However, collecting pre-defined personas in addition to example conversations burdens a limited pool of skilled workers, so we seek to develop stories by mimicking example conversations directly.
Developing personas for chatbots is challenging, as generic responses remain a shortfall of dialogue generation models (Li et al., 2016; Roller et al., 2020). Generic responses are especially problematic for characterizing distinct personas and for carrying engaging conversations. Without some specificity in responses, different scenarios cannot be simulated for multiple training sessions.

To simulate multiple, specific, and distinct personas, we propose a counselor strategy annotation scheme and a novel multi-task training framework that constructs persona-relevant responses from example transcripts rather than pre-defined personas. Our framework leverages counselor strategies, a pre-trained generative model, utterance segmentation, and sub-utterance selection. It allows for interleaving generated and prototype text to construct complex responses, which are more realistic.

We explore the utility of our strategy annotation scheme and multi-task framework in the context of hotline counseling. We use our scheme to annotate a corpus of realistic conversations and then use our framework to leverage these annotated conversations for response construction. We first test the trained model components through interactive chats with untrained crowdworkers to assess response quality and experience. We then test the framework with experienced hotline counselors to additionally assess how realistic and useful the system is perceived to be. The interactive conversations are evaluated with both subjective human judgements of user experience and automatically calculated measures of response diversity.

We find that our approach considerably increases the diversity of and the presence of persona-relevant sub-utterances in chatbot responses. We also find a discrepancy between crowdworker and counselor evaluation.
This discrepancy stresses the importance of our two-fold evaluation approach and the general need for testing within a target setting, especially for specialized systems.

While additional improvements are needed for conversational coherence between turns, our annotation scheme and modeling approach contribute to curating chat experiences with richer responses that characterize distinct personas. We study this framework in the context of counseling, but it could be used more broadly in applications that require simulating multiple scenarios. We make anonymized conversations available for research use.¹

# 2 Related work

Our work builds on a growing interest in using natural language processing to address challenges in mental health care. Unlike previous work that has predominantly focused on studying the quality of care provided or delivering mental health care with chatbots, we focus on training hotline counselors with a Crisisbot. With this approach, we must address the persistent challenge of generic dialogue response generation (Li et al., 2016; Roller et al., 2020) to avoid tedious and unrealistic training experiences. We attempt to construct personas by mimicking personas from prototype conversations with a multi-task framework that enables constructing responses from pre-written and generated text.

# 2.1 NLP for hotline and therapy insights

Recent interest in hotline conversations has analyzed the structure of real conversation transcripts with the hope of better understanding what makes hotline conversations perceived to be helpful (Althoff et al., 2016; Zhang et al., 2019; Zhang and Danescu-Niculescu-Mizil, 2020). Similar work has considered talk therapy conversations and tried to understand conversations and predict counseling acts (Pérez-Rosas et al., 2017, 2019) or forecast therapists' next actions (Cao et al., 2019).
# 2.2 Chatbots for therapy care and training

An alternative line of work has followed the seminal Eliza system (Weizenbaum, 1966) and attempted to create dialogue agents to provide therapeutic benefit, i.e., "therapybots" (Fitzpatrick et al., 2017; Inkster et al., 2018). Hotlines are too high-risk to responsibly replace human counselors with chatbots, so, for safety, we instead seek to support mental health professionals with improved training and practice. Our approach is most similar to work that explored building a chatbot to train talk therapists (Tanana et al., 2019), who have different methods than hotline counselors. This work did not explore enabling multiple personas, but trained a single general persona and found generic responses to be prevalent and tedious for crowdworkers.

# 2.3 Personality in chatbots

There has recently been a surge of interest in generating coherent and consistent dialogues grounded on pre-defined persona profile information from the PersonaChat dataset (Zhang et al., 2018; Dinan et al., 2020). Approaches to enforce consistent personas on this dataset have included retrieving relevant profile facts (Zhang et al., 2018), retrieving and refining relevant utterances (Weston et al., 2018), increasing the probability of copying a word from the profile (Yavuz et al., 2019), tuning to discourage inconsistent responses (Li et al., 2019a), re-ranking candidate responses (Welleck et al., 2019), and combining natural language inference with reinforcement learning (Song et al., 2019). Unfortunately, these methods fall short of generating responses that are as grammatical, diverse, engaging, and descriptive as natural human-generated conversation (See et al., 2019; Roller et al., 2020). Further, pre-defined profile facts, which these works rely on, are not present or realistic for our application, which only has example conversations.
To leverage the rich example transcripts for simulating personas, we propose a multi-task training framework.

# 2.4 Multi-task fine-tuning for dialogue response generation

Recent advances in large pre-trained models (Devlin et al., 2018; Radford et al., 2018) have led researchers to consider leveraging these models for dialogue response generation. A particular approach has been using multi-task training, such as for generating persona-relevant responses (Wolf et al., 2018) and adapting to domains (Li et al., 2019b). We build on this multi-task approach by using components that enable us to interleave pre-written and generated sub-utterances to construct descriptive and persona-consistent responses.

# 2.5 Retrieving and generating messages

Previous work has considered combining retrieval and generation methods for dialogue systems. The majority of this work has taken a retrieve-and-modify approach, e.g., using a retrieved prototype response to guide the generation of a full response for social chat (Cai et al., 2019; Weston et al., 2018; Wu et al., 2019) or tech support systems (Pandey et al., 2018). Additional work retrieved facts to build knowledge-grounded systems (Dinan et al., 2018). Most closely related to our work are studies that use rankers to choose between generated or retrieved responses in social chatbots (Song et al., 2016; Papaioannou et al., 2017a,b; Shalyminov et al., 2018) and in knowledge-grounded systems (Yang et al., 2019). Our work differs from these approaches, which used fully generated or fully retrieved messages, by interleaving both selected text and generated text into complex messages. Further, unlike other work, we do this to curate personas and as a component of a larger multi-task framework that combines counselor strategy detection, diverse generation, and a fine-tuned transformer model.
# 3 Dataset

To scaffold conversations and help the model identify similar contexts, we develop a counselor strategy annotation scheme. We use this scheme to annotate a dataset of realistic hotline training conversations that was previously collected with experienced hotline counselors (DeMasi et al., 2019). The counselors were asked to role-play hotline conversations, as during training, and to preserve the privacy of all hotline visitors. These conversations are representative of how counselors currently acquire experience during training. However, as they are synthetic, we are able to consider models that include repeating portions of text without violating the privacy of vulnerable individuals.

Conversations were required to be 40 total turns in length and allowed to be up to 60 total turns. Additional dataset statistics can be found in Table 1. The dataset was collected in multiple phases, with unique conversations collected first and then additional examples built by paraphrasing this initial set. Here, we consider the conversations collected in all phases to initially fine-tune a pre-trained language model. Then, for the full multi-task framework, we consider an annotated subset of the unique conversations and not the paraphrased conversations. The corpus contains extremely diverse scenarios that are usually characterized by relatively long messages with multiple sentences. The considerable diversity of scenarios makes our task challenging, but emphasizes the need for Crisisbot to simulate multiple personas.

| Dataset component | Count |
| --- | --- |
| Annotated conversations | 160 |
| Unannotated conversations | 94 |
| Paraphrased conversations | 300 |
| Counselor messages | 8,319 |
| Visitor messages | 9,062 |

Table 1: Dataset statistics. For paraphrased conversations, counselors were asked to reword messages from other conversations to generate message variability with less burden.

# 3.1 Sub-utterance segmentation

Counselor and visitor messages were automatically segmented into sub-utterances, e.g., separate sentences, with a sentence tokenizer. Leading greetings, positive, and negative responses were also separated as sub-utterances for consistent strategy annotation. Segmentation and spelling errors were manually corrected during annotation. Splitting sub-utterances was necessary for annotation, as counselors are often encouraged to reply with multiple components, e.g., to acknowledge feelings before asking a question. For visitors, segmenting messages enables construction of more complex messages, e.g., generating a sub-utterance to respond to an unseen question and then selecting an informative sub-utterance from a prototype conversation to elaborate on the response.

# 3.2 Counselor strategy annotation

To understand input counselor messages and enable the model to look up similar contexts, we develop a counselor strategy annotation scheme and annotate the strategies of counselor sub-utterances in 160 transcripts. The counselor annotation scheme was based on conversations with professionals who train hotline counselors and on training materials. It was iteratively refined and expanded by the researchers to cover the sub-utterances observed. The annotation scheme consists of 25 annotations, including active listening techniques, de-escalation strategies, and social norms (examples in Table 2, additional details in the Appendix). This approach relates to previous work that annotated conversation strategies for persuasion (Wang et al., 2019), negotiation (He et al., 2018), and motivational interviewing (Cao et al., 2019; Pérez-Rosas et al., 2017), but these annotations are specialized for a hotline counseling context. However, they could be generalized to related applications, and used to extract related conversational exchanges to generate diverse response components. They could also be used to contrast with similar domains and understand the distinction of hotline counseling.

| Class | Counselor strategy | Example sub-utterance |
| --- | --- | --- |
| Functional | greet | “Hello!” |
| Functional | self-intro | “I’m a counselor.” |
| Active listening | feelings | “It sounds like you feel overwhelmed.” |
| Active listening | open-prompt | “Would you like to tell me more?” |
| Procedural | identify-resource | “Have you talked to anyone about this?” |
| Procedural | identify-risk | “Are you having thoughts of hurting yourself?” |

Table 2: Example strategies from our scheme based on training materials. Each counselor sub-utterance was annotated with one of 25 strategies.

# 4 Method

To simulate realistic hotline visitors, we seek to construct complex responses that characterize specific, consistent personas across multiple turns. Towards persona consistency, we base each conversation on a prototype conversation from the corpus and try to mimic the visitor's persona in that conversation. As a result, we do not generate personas, but model conversations on individual training example conversations. To do this, we propose an end-to-end multi-task modeling framework that leverages our annotation scheme and a large pre-trained language model. This framework fine-tunes a generative language model while learning two additional model components: a strategy detector to select similar conversational exchanges, i.e., message-response pairs, and a next sub-utterance classifier that can interleave generated sub-utterances and sub-utterances selected from the prototype conversation into a coherent response. An overview of using these components to construct responses is in

![](images/7cc6f7a9023755cfc6112fa5b513d79234aee3a01a3d8263f059e3ad94557d01.jpg)
Figure 2: Overview of process to construct responses to messages input by a counselor.
The framework bases each conversation on a single prototype conversation and learns three components to construct responses: a counselor strategy classifier to select relevant exchanges to condition the generator on, a generator that decodes diverse sub-utterances, and a classifier that selects to append either a generated sub-utterance or one selected from the prototype conversation.

Figure 2. We combine generated and selected prototype sub-utterances so that the model can respond with diverse, specific responses and appropriately respond to unseen contexts.

# 4.1 Detecting counselor strategies

To identify user intent, i.e., the strategies used in a counselor's input message, we use a counselor strategy classifier. This classifier is trained on the counselor strategies that we annotated for each sub-utterance, as counselor messages typically contain multiple sub-utterances with different strategies (as shown in the introductory example, Figure 1). The counselor strategy classifier predicts the probability of a strategy $s_t^i$ for the $i$-th sentence at turn $t$ to be

$$
p\left(s_{t}^{i}\right) = \operatorname{softmax}\left(W_{s} \cdot \left[h_{t-1}^{l}; h_{t}^{i}\right]\right)
$$

where $h_{t-1}^{l}$ and $h_{t}^{i}$ are hidden states from an underlying fine-tuned language model for the last sentence $l$ of the visitor's message at the previous turn $t-1$ and at the end of the $i$-th sentence of the counselor's input message at turn $t$, respectively. The weights $W_{s}$ are learned. At each training step, this classifier results in a cross-entropy loss across strategies $\mathcal{L}_{\text{strat}}$.

# 4.2 Extracting conversational exchanges based on strategies

We use the last detected counselor strategy to identify similar, relevant conversational exchanges in the prototype conversation.
Relevant exchanges are selected as counselor message and visitor response pairs where the last counselor sub-utterance strategy matches the last strategy in the counselor's current input message.

# 4.3 Generating diverse sub-utterances conditioned on exchanges

To generate sub-utterances characteristic of a visitor's persona, we condition generation from a fine-tuned language model on the selected conversational exchanges. The exchanges, and the speaker roles within them, are distinguished from the current context messages by pre-pending special tokens to each message. The examples and roles are further distinguished from current conversation roles by token embeddings, as in previous work distinguishing speaker roles (Wolf et al., 2018). This approach differs from imbuing conversation with factual knowledge (Dinan et al., 2018) or profile facts (Wolf et al., 2018; Li et al., 2019b) in that we condition on conversational exchanges and in how we select the exchanges.

# 4.4 Choosing a selected or generated sub-utterance

To add detail to responses, while maintaining persona consistency, we allow sub-utterances to be selected from the prototype conversation. We also allow generated sub-utterances, so that the model can appropriately respond to any input. For example, our system needs to generate a response to an unseen direct question, but then may elaborate with text selected from the prototype conversation.

We consider sub-utterances as individual units and train a next-sub-utterance classifier to select between generated sub-utterance candidates and sub-utterances selected from the prototype conversation. This classifier takes the form

$$
p\left(y_{t}^{i}\right) = \operatorname{softmax}\left(W_{y} \cdot h_{t}^{i}\right)
$$

where $y_{t}^{i}$ is the prediction of whether a candidate sub-utterance is the true $i$-th sub-utterance in turn $t$.
Here $h_{t}^{i}$ is the last hidden state of extending turn $t$ with an $i$-th candidate sub-utterance. This model component results in a loss $\mathcal{L}_{\text{next}}$.

To train this model component, a set of five examples is constructed for each sub-utterance in the ground truth response: the true next sub-utterance, three randomly selected sub-utterances, and one sub-utterance from the visitor's recent conversation history, i.e., everything the generator considers. All of these candidates are selected from the same prototype conversation, and the inclusion of a recently said sub-utterance is used to decrease repetition.

# 4.5 Multi-task training

The above described model components are trained in a multi-task fashion by minimizing the composite loss function

$$
\mathcal{L} = \mathcal{L}_{\text{strat}} + \mathcal{L}_{\text{next}} + \lambda \mathcal{L}_{LM}
$$

where $\mathcal{L}_{LM}$ is the cross-entropy loss from the fine-tuned pre-trained language model and the other losses are as described above. The parameter $\lambda$ controls the focus on the language model and is empirically set to 2.0 in all experiments. The language model that we use is first tuned for dialogue in a multi-task framework on the PersonaChat dataset (Wolf et al., 2018), then fine-tuned on the full corpus of transcripts collected, and finally tuned on our annotated subset of transcripts in the above framework.

# 5 Experimental setup

To evaluate the proposed full framework and ablated versions, we consider human judgements of technical conversation quality and both automatic measures and human judgements of engaging persona development. We collect these measures first with untrained crowdworkers. Then, based on crowdworker assessments, we select a subset of models to have counselors assess. Not all of the models are evaluated by counselors, as a limited number of experienced hotline counselors were able to be recruited and participate.
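As a concrete reference for the training examples described in Section 4.4, the sketch below constructs the five-way candidate set scored by the next-sub-utterance classifier: the true next sub-utterance, three random distractors from the same prototype conversation, and one recently said sub-utterance. The function and variable names are our own illustration, not the authors' released code.

```python
import random

def build_candidates(true_next, prototype_subutterances, recent_history, rng=None):
    """Build the 5-way candidate set for one ground-truth sub-utterance:
    the true next sub-utterance, three random sub-utterances from the
    same prototype conversation, and one recently said sub-utterance."""
    rng = rng or random.Random(0)
    # Distractors are drawn from the same prototype conversation as the target.
    pool = [s for s in prototype_subutterances if s != true_next]
    distractors = rng.sample(pool, 3)
    # One recently used sub-utterance, so the classifier learns to avoid repetition.
    recent = rng.choice(recent_history)
    candidates = [true_next] + distractors + [recent]
    return candidates, 0  # index 0 marks the true continuation

proto = ["i feel alone.", "my mom is sick.", "school is hard.",
         "i can't sleep.", "nobody listens."]
cands, label = build_candidates("my mom is sick.", proto, ["i feel alone."])
assert len(cands) == 5 and cands[label] == "my mom is sick."
```

Including a recently said sub-utterance as a negative candidate is what teaches the classifier to penalize repetition, per the description above.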
Further, each model needs to be considered multiple times by each counselor in order for them to assess the distinctness of personas between conversations. All human ratings used 5-point Likert scales. This study was approved by the internal review board.

# 5.1 Baseline models

We consider three models with subsets of the full model's components. The Tuned model is a pre-trained language model (Wolf et al., 2018) fine-tuned on the full 554 counseling conversations collected. The +rank model jointly trains the next sub-utterance classifier and fine-tunes the pre-trained language model on the subset of annotated transcripts. The sub-utterance classifier is the mechanism used for increasing response length and diversity by interleaving generated sub-utterances with those selected from the prototype conversation. The third model, +cond, trains the strategy classifier and fine-tunes the pre-trained language model dynamically conditioned on example exchanges from the prototype conversation. It uses no sub-utterance ranking. Finally, we consider the Full model that fine-tunes the language model with both the strategy and next sub-utterance classifiers.

# 5.2 Training and interaction details

Our annotated dataset was randomly split with 90% of conversations in the training set and the rest used for validation. During testing, we randomly select a prototype conversation from the validation set at the start of the conversation and only consider selecting sub-utterances and conditioning on examples from that prototype conversation. This approach is intended to focus the conversation on a single visitor for persona consistency. To reduce repetition, we remove examples and sub-utterances from further consideration if a sub-utterance is copied from dynamically conditioning on prototype examples or is selected from the set of visitor's sub-utterances.

To encourage visitor coherence, we use five turns of conversation history for context.
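A minimal sketch of this context construction, combining the five-turn history window with the truncation of counselor messages to their final sub-utterance described below; the data layout and helper name are illustrative assumptions, not the authors' implementation.

```python
def build_context(turns, max_turns=5):
    """Keep the last `max_turns` turns of history; for counselor turns,
    keep only the final sub-utterance, which is typically where the
    counselor advances the conversation and requires a response."""
    context = []
    for speaker, subutterances in turns[-max_turns:]:
        if speaker == "counselor":
            context.append((speaker, subutterances[-1]))
        else:
            context.append((speaker, " ".join(subutterances)))
    return context

turns = [
    ("counselor", ["hi there.", "what's going on today?"]),
    ("visitor", ["i'm feeling really down."]),
    ("counselor", ["i'm glad you reached out.", "do you want to tell me more?"]),
]
ctx = build_context(turns)
assert ctx[0] == ("counselor", "what's going on today?")
```

Dropping all but the last counselor sub-utterance trades some responsiveness for a shorter context, which the preliminary testing described below found to improve coherence and reduce repetition.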
To accommodate limited context length, we consider the last sub-utterance of the counselor's messages, as this is typically where counselors will advance the conversation and require a response. We found in preliminary testing that this balance improved visitor coherence and decreased repetition without significant detriment to responsiveness. Hyper-parameter settings are in the Appendix.

# 5.3 Automatic measures of response diversity

We calculate various metrics to compare the diversity of responses generated by each model. We consider the specificity of a model with the average negative log likelihood (nll.) of each response generated for its context with respect to the tuned model, so that it is comparable across models. We calculate the breadth of responses with the entropy (ent.) of a unigram language model of all tokens generated, and the diversity of response elements with d1 and d2, the ratios of the number of unique to the number of total unigrams or bigrams generated, respectively (Li et al., 2016). Finally, we consider the length of responses with the average number of tokens in a response (#tok).

| Model | coh. | con. | flu. | det. | hum. | use. | nll. | ent. | d1 | d2 | #tok |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Tuned | 3.93 | 3.93 | 4.03 | 2.97 | 3.87 | 3.83 | 1.88 | 4.38 | 0.09 | 0.30 | 8.15 |
| +rank | 4.00 | 3.93 | 4.07 | 3.40 | 3.87 | 4.03 | 2.52 | 4.86 | 0.10 | 0.34 | 9.55 |
| +cond | 4.00 | 3.63 | 4.00 | 3.23 | 3.63 | 3.73 | 2.81 | 4.89 | 0.13 | 0.39 | 7.94 |
| Full | 3.93 | 3.93 | 3.97 | 3.77 | 3.87 | 4.23 | 2.99 | 4.97 | 0.12 | 0.39 | 12.25 |

Table 3: Crowdworker evaluation of models on 1-5 scales for coherency (coh.), consistency (con.), fluency (flu.), level of detail (det.), human likeness (hum.), and usefulness (use.), alongside automatic measures of diversity. Higher is better for all evaluation metrics.

| Model | coh. | con. | flu. | det. | hum. | use. | dis. | rea. | nll. | ent. | d1 | d2 | #tok |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Tuned | 2.80 | 2.73 | 4.27 | 2.53 | 2.67 | 3.00 | 3.00 | 2.67 | 1.61 | 4.19 | 0.18 | 0.46 | 8.31 |
| Full | 1.87 | 2.33 | 3.60 | 2.40 | 2.00 | 2.20 | 2.90 | 1.87 | 3.00 | 4.87 | 0.19 | 0.50 | 14.93 |

Table 4: Hotline counselor evaluation of models on 1-5 scales for coherency (coh.), consistency (con.), fluency (flu.), level of detail (det.), human likeness (hum.), usefulness (use.), distinctness (dis.), and realism (rea.), alongside automatic measures of diversity. Higher is better for all evaluation metrics. Standard deviation for judgements of each dimension ranged 0.45-1.20.

# 5.4 Human judgements of quality

For conversation quality, we asked both crowdworker and counselor participants to rate Crisisbot on coherency, fluency and grammatical correctness of responses, and human likeness.

# 5.5 Human judgements of persona

We also sought to gauge the quality of persona simulation in conversations. To do this, we asked all participants to rate each conversation on persona consistency and level of response detail. We additionally asked counselor participants to rate each conversation on how realistic it was of a hotline visitor. After their second and third conversations with Crisisbot, we asked counselors to rate how distinct the persona was from previous conversations. Finally, at the end of each interaction (one conversation for crowdworkers and three conversations for counselors), we asked participants to rate how useful they thought such conversations would be for training. All ratings used 5-point Likert scales.

# 5.6 Crowdworker user study

Crowdworkers with a US-based location qualification who chatted with Crisisbot were required to write at least 10 turns before rating their experience.
A persona, i.e., prototype conversation, was randomly selected for each conversation. + +# 5.7 Hotline counselor user study + +After crowdworkers' preliminary evaluation, we asked experienced hotline counselors to chat with Crisisbot. Counselors were informed of the purpose of the study and asked to interact with Crisisbot for three conversations in two different settings, i.e., three conversations with each of two models. In each setting a counselor interacted with a single model, but three personas from the model and the personas were fixed between models. As with crowdworkers, counselors were required to write at least 10 messages per conversation. A survey followed each conversation. Unlike in the crowdworker interface, the counselor interface did not include examples of things to say. + +# 6 Evaluation Results + +Each of the four models was interactively evaluated by 30 crowdworkers and we calculated measures of model response diversity on all responses generated (Table 3). Based on the initial crowdworker evaluation, two models were selected for counselor testing. Only two models were tested by counselors because testing each model took significant participant commitment, as counselors had to chat with each model multiple times to consider if the model simulated distinct personas. Due to our ability to + +
| Turn | Hotline counselor | Crisisbot |
| --- | --- | --- |
| 0 | START | hi, i'm michael and i kind of need somebody to talk to today. i'm feeling really down. |
| 1 | hi michael, sorry to hear you are feeling down, but i am so glad you reached out. what's up? | i'm having suicidal thoughts lately. my life sucks. |
| 2 | sounds like things have been really rough, so much that you are thinking about suicide. do you feel like killing yourself today? | sometimes i think about it. |
| 3 | thank you for sharing with me. have you done anything to harm yourself today? | no. i'm feeling a bit better now. but i just want to talk to someone. and i don't know how to help my mom because it hurts me to see that she's in so much distress. |
| 4 | okay, i am definitely here to talk. i hear that you want to help your mom, and that you are worried about her. what is going on with your mom? | she's been really depressed lately. my parents want to divorce and i'm so stressed that i can't focus on school. |
Table 5: Excerpt from a selected interaction of a counselor with the Full model of Crisisbot. The entire transcript continues similarly (see Appendix) and highlights high counselor expectations despite reasonable model performance (judged: coh.=1, rea.=3, con.=1, flu.=4, det.=3, hum.=1).

only access a limited pool of experienced counselors, a total of five counselors interacted with the two selected models, resulting in the evaluation of 15 conversations for each model (Table 4).

# 6.1 Crowdworker evaluation

Crowdworkers generally positively reviewed all model variants. While the Full model had some slightly lower quality measures (e.g., coherence and fluency), it was rated as having the highest level of detail and hypothesized utility for training, which coincided with increased response diversity across all diversity measures (Table 3). This result indicates that crowdworkers preferred the utility of diverse responses for training on different scenarios and could accept a minimal impact on other quality measures.

# 6.2 Hotline counselor evaluation

Full model responses were more diverse across all measures when responding to counselor messages than the Tuned model's (Table 4). However, in ratings, counselors seemed to prefer the coherence of vague model responses. They more favorably scored conversations with the Tuned model, even though its responses were less diverse. The increased scores were likely because the Tuned model's short responses were perceived as more coherent and consistent.

In addition to ratings, we asked counselors for open feedback about what they liked and disliked about the conversations, and their responses to these questions revealed different conclusions than their ratings. In open responses, counselors did value the variety of the Full model and reported frustration with dull responses from the Tuned model. An exhaustive list of open responses is in Table 6.
In open feedback, all of the counselors cited variability in responses and specificity of issues as what they liked about the Full model. When asked what they disliked, themes emerged about counselors feeling that trying to get more information from the Tuned model was difficult and that the simulated problem scenarios did not feel real. In contrast, none of the counselors reported disliking anything related to specificity with the Full model, but some issues arose with responsiveness and consistency across turns, which is an ongoing challenge for dialogue systems (Roller et al., 2020).

# 6.3 Qualitative review of conversations

Reviewing the interactive conversations reveals considerable differences between counselor and crowdworker conversation approaches, enormous variability in counselor ratings, and inconsistency in how counselor ratings relate to researcher perceptions of individual conversation quality. Examples of conversations can be found in the Appendix. Counselors were generally more engaged than crowdworkers, asked follow-up questions, and, as they are trained to do, tried to explore issues confronting visitors. Conversely, crowdworker messages would be more focused within individual turns and freely hop between topics. This unsurprising difference emphasizes the need for a specialized chatbot to train counselors how to follow up.

As the average quantitative ratings in Tables 3 and 4 reflect, even relatively coherent conversations that counselors had with Crisisbot received considerably lower scores than a comparable conversation with a crowdworker. These deflated counselor rat
| Model | Counselor liked |
| --- | --- |
| Tuned | depression is simulated well, but the train of thought doesn’t last long |
| Tuned | They are good practice for thinking on your toes and receiving answers to your messaging, so you can work on various approaches depending on the situations. Also, there was a range of suicidality and crisis level. |
| Tuned | First two were generally more coherent than in my previous session. The chatbot seems to respond well to messages showing validation and empathy. |
| Tuned | They were a good opportunity to work with ambivalence and practice some text-based OARS |
| Tuned | Content seemed pretty realistic but still not quite coherent enough to be realistic |
| Full | More crises situations thrown in. Variety from previous submission. |
| Full | There was a variety of responses that prompted critical thinking and creativity. |
| Full | i think the statements about depression and past suicidal ideation and attempts are useful for training counselors. likewise the mention of bullying. |
| Full | Variety of issues presented |

| Model | Counselor disliked |
| --- | --- |
| Tuned | The third chatbot tried to end the conversation early and kept saying thanks, goodbye, etc. but I had to keep the conversation going for testing - not sure if this was an error. |
| Tuned | Trying to get more information was very difficult when answers were short and didn’t really make sense |
| Tuned | i did not feel like i was talking to someone with a real life or real problems |
| Tuned | The responses were not connected to the unfolding narrative of the conversation, so it does not replicate the rapport building process that is needed in this work. |
| Full | When they would mimic my name choice, took me out of it, although I suppose it does happen occasionally. |
| Full | There were many inconsistencies in the conversation and the flow was not there. It felt like responding to one individual statement at a time more than a series of statements that went together. |
| Full | i think the bot has to be more flexible in responding to my input-often the topic got changed altogether |
| Full | It would help if the personas and storylines were more defined and consistent. |
Table 6: All open feedback provided by counselors about what they liked and disliked about their interactions.

ings could relate to their increased experience interacting with humans over chat and their training to ask follow-up questions that help individuals dive deeper into conversation but are more challenging for a bot to respond to. Similarly, it could relate to crowdworkers' increased experience with chatbots and thus lower expectations. Regardless, dialogue systems are notoriously difficult to evaluate, and it is also possible that our participants struggled to separate potential improvements on fine-grained dimensions from overall chatbot performance, which resulted in different trends between the evaluations.

# 7 Conclusion

We explored developing a Crisisbot to imitate hotline visitors' varied personal stories for better training human counselors. Towards this goal, we developed a counselor strategy annotation scheme and proposed a multi-task framework. This framework bases conversations on personas from prototype conversations and interleaves generated text with text selected from the prototype. Through two user studies, we observed increased response diversity overall, but a considerable difference between crowdworker and counselor perceptions and inconsistency in how ratings reflect conversations. These results reinforce the known challenge of dialogue system evaluation and caution against relying on crowdworkers alone for developing system insights. The disparity in ratings between counselors and crowdworkers stresses that the involvement of specialized, intended users in system development and evaluation is crucial for success. While perhaps intuitive, this result is sometimes overlooked during system evaluation. Counselor open feedback indicates that additional development is required for coherence across turns, but that our framework adds detail and variety in personas, which could enable better variety for training counselors.
+ +# Acknowledgements + +We greatly appreciate everyone who participated in this study. In particular, we'd like to thank the counselors who shared their time and experience. We would also like to thank the reviewers for valuable feedback and suggestions. + +# References + +Tim Althoff, Kevin Clark, and Jure Leskovec. 2016. Large-scale analysis of counseling conversations: An application of natural language processing to mental health. Transactions of the Association for Computational Linguistics, 4:463-476. +Deng Cai, Yan Wang, Wei Bi, Zhaopeng Tu, Xiaojiang Liu, and Shuming Shi. 2019. Retrieval-guided dialogue response generation via a matching-to-generation framework. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1866-1875. +Jie Cao, Michael Tanana, Zac Imel, Eric Poitras, David Atkins, and Vivek Srikumar. 2019. Observing dialogue in therapy: Categorizing and forecasting behavioral codes. Computational linguistics-Association for Computational Linguistics. +Wendi F Cross, David Seaburn, Danette Gibbs, Karen Schmeelk-Cone, Ann Marie White, and Eric D Caine. 2011. Does practice make perfect? a randomized control trial of behavioral rehearsal on suicide prevention gatekeeper skills. The journal of primary prevention, 32(3-4):195. +Orianna DeMasi, Marti A Hearst, and Benjamin Recht. 2019. Towards augmenting crisis counselor training by improving message retrieval. In Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology, pages 1-11. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. +Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, et al. 2020. 
The second conversational intelligence challenge (convai2). In *The NeurIPS'18 Competition*, pages 187-208. Springer. +Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of wikipedia: Knowledge-powered conversational agents. In International Conference on Learning Representations. +Kathleen Kara Fitzpatrick, Alison Darcy, and Molly Vierhile. 2017. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (woebot): a randomized controlled trial. *JMIR mental health*, 4(2):e19. +Madelyn S Gould, Wendi Cross, Anthony R Pisani, Jimmie Lou Munfakh, and Marjorie Kleinman. 2013. Impact of applied suicide intervention skills training on the national suicide prevention lifeline. Suicide and Life-Threatening Behavior, 43(6):676-691. + +He He, Derek Chen, Anusha Balakrishnan, and Percy Liang. 2018. Decoupling strategy and generation in negotiation dialogues. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2333-2343. +Becky Inkster, Shubhankar Sarda, and Vinod Subramanian. 2018. An empathy-driven, conversational artificial intelligence agent (wysa) for digital mental well-being: real-world data evaluation mixed-methods study. JMIR mHealth and uHealth, 6(11):e12106. +Kerry L Knox, Janet Kemp, Richard McKeon, and Ira R Katz. 2012. Implementation and early utilization of a suicide hotline for veterans. American journal of public health, 102(S1):S29-S32. +Christa D Labouliere, Barbara Stanley, Alison M Lake, and Madelyn S Gould. 2020. Safety planning on crisis lines: feasibility, acceptability, and perceived helpfulness of a brief intervention to mitigate future suicide risk. *Suicide and Life-Threatening Behavior*, 50(1):29-41. +Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. 
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110-119. +Margaret Li, Stephen Roller, Ilia Kulikov, Sean Welleck, Y-Lan Boureau, Kyunghyun Cho, and Jason Weston. 2019a. Don't say that! making inconsistent dialogue unlikely with unlikelihood training. arXiv preprint arXiv:1911.03860. +Yu Li, Kun Qian, Weiyan Shi, and Zhou Yu. 2019b. End-to-end trainable non-collaborative dialog system. arXiv preprint arXiv:1911.10742. +Gaurav Pandey, Danish Contractor, Vineet Kumar, and Sachindra Joshi. 2018. Exemplar encoder-decoder for neural conversation generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1329-1338. +Ioannis Papaioannou, Amanda Cercas Curry, Jose L Part, Igor Shalyminov, Xinnuo Xu, Yanchao Yu, Ondrej Dusek, Verena Rieser, and Oliver Lemon. 2017a. Alana: Social dialogue using an ensemble model and a ranker trained on user feedback. Alexa Prize Proceedings. +Ioannis Papaioannou, Amanda Cercas Curry, Jose L Part, Igor Shalyminov, Xinnuo Xu, Yanchao Yu, Ondrej Dusek, Verena Rieser, and Oliver Lemon. 2017b. An ensemble model with ranking for social dialogue. arXiv preprint arXiv:1712.07558. +Verónica Pérez-Rosas, Rada Mihalcea, Kenneth Resnicow, Satinder Singh, and Lawrence An. 2017. Understanding and predicting empathic behavior in + +counseling therapy. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426-1435. +Verónica Pérez-Rosas, Xinyi Wu, Kenneth Resnicow, and Rada Mihalcea. 2019. What makes a good counselor? learning to distinguish between high-quality and low-quality counseling conversations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 926-935. +Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. 
Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf.
Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M Smith, et al. 2020. Recipes for building an open-domain chatbot. arXiv preprint arXiv:2004.13637.
Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. 2019. What makes a good conversation? how controllable attributes affect human judgments. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1702-1723.
Igor Shalyminov, Ondrej Dusek, and Oliver Lemon. 2018. Neural response ranking for social conversation: A data-efficient approach. In Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI, pages 1-8.
Haoyu Song, Wei-Nan Zhang, Yiming Cui, Dong Wang, and Ting Liu. 2019. Exploiting persona information for diverse generation of conversational responses. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 5190-5196. AAAI Press.
Yiping Song, Rui Yan, Xiang Li, Dongyan Zhao, and Ming Zhang. 2016. Two are better than one: An ensemble of retrieval- and generation-based dialog systems. arXiv preprint arXiv:1610.07149.
Michael J Tanana, Christina S Soma, Vivek Srikumar, David C Atkins, and Zac E Imel. 2019. Development and evaluation of clientbot: Patient-like conversational agent to train basic counseling skills. Journal of medical Internet research, 21(7):e12529.
Vibrant Emotional Health. 2019. California callers in 2018 to the national suicide prevention lifeline.
Xuewei Wang, Weiyan Shi, Richard Kim, Yoojung Oh, Sijia Yang, Jingwen Zhang, and Zhou Yu. 2019.
Persuasion for good: Towards a personalized persuasive dialogue system for social good. In Proceedings of + +the 57th Annual Meeting of the Association for Computational Linguistics, pages 5635-5649. +Joseph Weizenbaum. 1966. Eliza—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1):36-45. +Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2019. Dialogue natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3731-3741. +Jason Weston, Emily Dinan, and Alexander Miller. 2018. Retrieve and refine: Improved sequence generation models for dialogue. In Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI, pages 87-92. +Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2018. Transfertransfo: A transfer learning approach for neural network based conversational agents. NeurIPS CAI Workshop. +Yu Wu, Furu Wei, Shaohan Huang, Yunli Wang, Zhoujun Li, and Ming Zhou. 2019. Response generation by context-aware prototype editing. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7281-7288. +Liu Yang, Junjie Hu, Minghui Qiu, Chen Qu, Jianfeng Gao, W Bruce Croft, Xiaodong Liu, Yelong Shen, and Jingjing Liu. 2019. A hybrid retrieval-generation neural conversation model. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 1341-1350. +Semin Yavuz, Abhinav Rastogi, Guan-Lin Chao, and Dilek Hakkani-Tur. 2019. Deepcopy: Grounded response generation with hierarchical pointer networks. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, pages 122-132. +Justine Zhang and Cristian Danescu-Niculescu-Mizil. 2020. Balancing objectives in counseling conversations: Advancing forwards or looking backwards. arXiv preprint arXiv:2005.04245. 
+Justine Zhang, Robert Filbin, Christine Morrison, Jaclyn Weiser, and Cristian Danescu-Niculescu-Mizil. 2019. Finding your voice: The linguistic development of mental health counselors. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. +Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204-2213. + +
| Strategy class | Strategy |
| --- | --- |
| Functional | greet, self-intro, close-conversation, ask-name, thanks |
| Procedural | identify-coping-plan, identify-resource, intent-to-self-harm, identify-self-harm-plan, access-to-self-harm-plan, when-self-harm |
| Active Listening | affirmation, caring, shared-understanding, feelings, feelings-and-summary, summary, looking-for-info-answer, open-prompt, reframing, validation-of-feelings, vindication, here-for-you |
| Other | suggest/advise, other |
Table 7: Counselor sub-utterance strategy annotations.

# A Appendices

We present auxiliary materials for the appreciation of our work, including additional detail on our counselor strategy annotation scheme, hyperparameter settings, information about evaluation scales, example model output from interactive testing, and open counselor feedback.

# B Counselor strategies

Each counselor sub-utterance was labeled with a conversation strategy, and the strategies can be grouped into classes. A full list of the annotated strategies is in Table 7.

# C Training details

The framework was trained with 4 gradient accumulation steps, a batch size of 1, and the Adam optimizer with a learning rate decay of .75 for 4-5 epochs, based on minimum validation loss. For decoding, the maximum decoding length was set to 50 tokens, and tokens were decoded with nucleus sampling at temperature 0.7 and with probability mass 0.9. For selecting sub-utterances, 10 candidates were generated for consideration. To avoid the model only learning to copy from selected exchanges, we condition the language model on exchanges for a randomly selected counselor strategy for $20\%$ of messages during training.

# D Human judgment scales

The scales used by crowdworker and counselor participants were labeled as follows.

- How coherent was the conversation? 1 (didn't make sense) - 5 (could have been a human)
- How realistic was the conversation as a simulation of a hotline visitor? 1 (it wasn't anything like chatting with a visitor) - 5 (very realistic and representative)
- How consistent was the persona? 1 (many contradictory statements) - 5 (consistent and plausible person)
- How fluent and grammatical were the responses? 1 (constant mistakes, poor english) - 5 (no mistakes)
- How detailed were the chatbot's responses? 1 (only vague answers without any detail) - 5 (lots of details)
- Overall, how human-like was the chatbot?
1 (not at all) - 5 (could have been a human) +- If you were training to be a counselor, how useful would these conversations have been for your training? 1 (not at all useful) - 5 (very useful) + +# E Example conversations + +Example conversations between crowdworkers and counselors can be found in Tables 8-11. + +# F Selected successful transcript from counselor + +The full conversation of the example of a counselor interacting with the Full model that was excerpted in the main text is in Table 12. This conversation highlights low counselor feedback in spite of relatively coherent responses. + +
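To make the decoding settings in Appendix C concrete (temperature 0.7, nucleus sampling with probability mass 0.9), the following is a minimal, self-contained sketch of nucleus (top-p) sampling for a single decoding step; it is an illustration under those stated settings, not the implementation used in this work.

```python
import math
import random

def nucleus_sample(logits, temperature=0.7, top_p=0.9, rng=random):
    """Sample one token id: apply temperature, keep the smallest set of
    tokens whose cumulative probability reaches top_p, renormalize, draw."""
    # temperature-scaled softmax (max-shifted for numerical stability)
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    probs = [e / z for e in exps]
    # sort token ids by probability, descending
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # renormalize over the nucleus and sample from it
    mass = sum(probs[i] for i in kept)
    r, acc = rng.random() * mass, 0.0
    for i in kept:
        acc += probs[i]
        if r <= acc:
            return i
    return kept[-1]
```

In a full decoder, this step would be repeated up to the 50-token maximum length, and (per Appendix C) 10 candidate sub-utterances would be sampled for selection.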
| Turn | Crowdworker | Crisisbot |
| --- | --- | --- |
| 0 | START | hi, helpline? |
| 1 | hi, what can i do for you? | i don't really know. |
| 2 | is something wrong? how are you feeling? | a little better. it's really hard to breathe. i'm just so overwhelmed. |
| 3 | that sounds hard. do you want to tell me more? | i just feel overwhelmed. |
| 4 | ok, so you're feeling overwhelmed. that happens to a lot of people. have you tried talking to someone? | no. i don't think i want to talk to anyone. |
| 5 | i understand, talking about problems can be difficult. no pressure. have you tried doing something to relax? | no, not really. |
| 6 | do you have any hobbies you enjoy? | not really. |
| 7 | what do you think could help you relax a bit? | i don't really know. |
| 8 | ok, would you like me to give you some self-help materials? | i'll talk to my friend. ok. |
| 9 | having a close friend to talk to in times like this is great. that should help you! | i will. thank you for your help. |
| 10 | ok, thanks for chatting with me! i hope everything goes well. | i will. thank you so much for your help. |
+ +Table 8: Model: Tuned. Judgements: coh.=3, con.=4, flu.=4, det.=2, hum.=3, use.=2 + +
| Turn | Crowdworker | Crisisbot |
| --- | --- | --- |
| 0 | START | hey |
| 1 | hai | we all just started at a new middle school and everything is different |
| 2 | in what way it is different | well i got in a fight with a woman at school |
| 3 | why the fight was happens? | because of the fight, i'm having a really hard time at school |
| 4 | oh my got, leave the problem, | that sounds like it could be fun |
| 5 | can you identify the reason for the fight happens? | well last year i had two best friends |
| 6 | then | they got in a fight and it was really upset |
| 7 | so | exactly, so they started fighting |
| 8 | how you get caught in that | so they started fighting about me? |
| 9 | ohhh | exactly, they're such fascists |
| 10 | exactly | i feel like i heard something about that in the announcements the other day, it was the same guy from school, he was supposed to be my friend |
| 11 | leave this thing, take a long breath, enjoy a favorite music, take a healthy food, go to sleep, and calm down your self, you will be alrite. | ok thanks! |
| 12 | u welcome | bye |
| 13 | bye tc | bye |
Table 9: Model: Full. Judgements: coh.=5, con.=4, flu.=5, det.=5, hum.=5, use.=5
| Turn | Counselor | Crisisbot |
| --- | --- | --- |
| 0 | START | hi, i need help |
| 1 | hi, i'm here to help you. my name is bob. | i'm michael. |
| 2 | hi michael? what do you need help with? | i don't know. |
| 3 | things are a bit overwhelming right now? is that right? | yeah, i have to go now. |
| 4 | michael, i'm here if you need me. here to talk, here to hear you | thanks, i appreciate it. |
| 5 | no problem at all! we're here anytime | thanks, i appreciate your help. |
| 6 | it can be hard to reach out for help, even texting this line can be scary | yeah, it is. |
| 7 | it's scary texting, and yet you're doing it! that's amazing | yeah, i just really need to talk to someone about it. |
| 8 | what is it that you want to talk about, michael? | i don't really know. i just don't want to talk to anyone about it. |
| 9 | i'm hearing how you want to talk about it and at the same time, you don't. it's kind of this massive thing, is that right? | yeah, it's a lot of my family stuff. i'm not sure how i'll feel to move on. |
| 10 | mmm, yeah, i hear how you're really wrestling with uncertainty | yeah, it's like my whole life is so much more comfortable to deal with. |
+ +Table 10: Model: Tuned. Judgements: coh.=4, rea.=3, con.=2, flu.=4, det.=2, hum.=3 + +
| Turn | Counselor | Crisisbot |
| --- | --- | --- |
| 0 | START | i don’t want to be here anymore |
| 1 | i’m here to listen to you. my name is bob. | i’m christopher. |
| 2 | hi christopher. you said, you don’t want to be here anymore. are you having thoughts of killing yourself or harming yourself? | yes |
| 3 | thanks for letting me know about that christopher. do you have a plan for how you would do that? | od |
| 4 | overdose on what? pills? | no its ok |
| 5 | christopher, i’m here. do you currently have the drugs you would use to overdose with you? | yeah. they’re in my room |
| 6 | is there anyone else in the house with you? | i don’t want to bother anyone. i have to go thanks |
| 7 | christopher, it’s no bother. if you can just put the drugs in another room or give them to someone you trust, i’m here to listen | ok, thanks. |
| 8 | it’s not a problem at all– please check in with me | ok, bye |
| 9 | goodnight | ok |
| 10 | again, we are available 24/7 on our call line | ok |
+ +Table 11: Model: Full. Judgements: coh.=4, rea.=2, con.=2, flu.=2, det.=3, hum.=3 + +
| Turn | Counselor | Crisisbot |
| --- | --- | --- |
| 0 | START | hi, i'm michael and i kind of need somebody to talk to today. i'm feeling really down. |
| 1 | hi michael, sorry to hear you are feeling down, but i am so glad you reached out. what's up? | i'm having suicidal thoughts lately. my life sucks. |
| 2 | sounds like things have been really rough, so much that you are thinking about suicide. do you feel like killing yourself today? | sometimes i think about it. |
| 3 | thank you for sharing with me. have you done anything to harm yourself today? | no. i'm feeling a bit better now. but i just want to talk to someone. and i don't know how to help my mom because it hurts me to see that she's in so much distress. |
| 4 | okay, i am definitely here to talk. i hear that you want to help your mom, and that you are worried about her. what is going on with your mom? | she's been really depressed lately. my parents want to divorce and i'm so stressed that i can't focus on school. |
| 5 | i'm sorry to hear that. divorce is very stressful for the whole family, and it makes sense that you are stressed and can't focus. | yeah, i can't even face my own dad right now. he's always pressuring me. |
| 6 | wow, on top of that, you are feeling pressured by your dad and it sounds like that is a difficult relationship, too. has anything felt helpful to you when you have felt stressed in the past? | sometimes i get really overwhelmed by stuff, and i can't focus on school. i feel like i'm a burden and i don't know how to deal with school. |
| 7 | i hear that life feels very overwhelming. no wonder you can't focus on school! has there been a time in the past when you were overwhelmed, but things got better? | hopefully, i'll be able to set an appointment with a therapist soon and also see my girlfriend in the meantime though. |
| 8 | yes, i hear that talking to your therapist is something that helps you, and also spending time with your girlfriend. are you able to reach out to either of them today? | i'm not ready to think about that yet. but i do miss her and she's the only person who's ever really supported me emotionally. |
| 9 | i see, maybe it feels like you are not ready for that kind of social contact yet? is there something that you can do on your own that helps you feel more relaxed? | okay, i'll keep that in mind next time i check in. thank you. bye |
| 10 | okay, thank you so much for reaching out today. we are always here to talk. how are you feeling after our chat today? | not great, but i'm okay. i've been really stressed lately. |
+ +Table 12: Model: Full. Judgements: sim. $= 1$ ,coh. $= 1$ ,rea. $= 3$ ,con. $= 1$ ,flu. $= 4$ ,det. $= 3$ ,hum. $= 1$ ,use. $= 3$ \ No newline at end of file diff --git a/amultipersonachatbotforhotlinecounselortraining/images.zip b/amultipersonachatbotforhotlinecounselortraining/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..058e5b3bd0fc641db4086db5c46b34c01016ad57 --- /dev/null +++ b/amultipersonachatbotforhotlinecounselortraining/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dea77206ac38aab021b93b17dd5a0a3e2cb4b6bd65635991de6053f7b3d4e414 +size 1273426 diff --git a/amultipersonachatbotforhotlinecounselortraining/layout.json b/amultipersonachatbotforhotlinecounselortraining/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..245bb46a16496ac2dfd8092f8f0b986315f5be2b --- /dev/null +++ b/amultipersonachatbotforhotlinecounselortraining/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:54b4127afb6196a45823eea39324ec5d52358a5e861c9d3acd70e84965efdf65 +size 363573 diff --git a/amultitasklearningframeworkforopiniontripletextraction/172e026b-f1e4-4416-bab8-cf50329c934e_content_list.json b/amultitasklearningframeworkforopiniontripletextraction/172e026b-f1e4-4416-bab8-cf50329c934e_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..626b683a3cf8ef18b38670859f353af397ef5dbf --- /dev/null +++ b/amultitasklearningframeworkforopiniontripletextraction/172e026b-f1e4-4416-bab8-cf50329c934e_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c057c12c217ba931e02d30dc3d9d45e9875d786b4bbeaf2774825164472d10c +size 71287 diff --git a/amultitasklearningframeworkforopiniontripletextraction/172e026b-f1e4-4416-bab8-cf50329c934e_model.json b/amultitasklearningframeworkforopiniontripletextraction/172e026b-f1e4-4416-bab8-cf50329c934e_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..ce670b07dd8f8e1d7730b756391d229afa933e0f --- /dev/null +++ b/amultitasklearningframeworkforopiniontripletextraction/172e026b-f1e4-4416-bab8-cf50329c934e_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:92820039da9b69c8c07950903d911ca50c05ba015e30c1888cb5017c95dedbde +size 83309 diff --git a/amultitasklearningframeworkforopiniontripletextraction/172e026b-f1e4-4416-bab8-cf50329c934e_origin.pdf b/amultitasklearningframeworkforopiniontripletextraction/172e026b-f1e4-4416-bab8-cf50329c934e_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..73244e241c2ddadd08268950d89f5d0493e733df --- /dev/null +++ b/amultitasklearningframeworkforopiniontripletextraction/172e026b-f1e4-4416-bab8-cf50329c934e_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0e7114e03df098d775801d911259f906b4d70e8b4270a6d5cfe4fa5ec57e1470 +size 403765 diff --git a/amultitasklearningframeworkforopiniontripletextraction/full.md b/amultitasklearningframeworkforopiniontripletextraction/full.md new file mode 100644 index 0000000000000000000000000000000000000000..595802810f70b3d6c77d38e0c80feffd7221ccef --- /dev/null +++ b/amultitasklearningframeworkforopiniontripletextraction/full.md @@ -0,0 +1,290 @@ +# A Multi-task Learning Framework for Opinion Triplet Extraction + +Chen Zhang $^{1}$ , Qiuchi Li $^{2}$ , Dawei Song $^{1*}$ , Benyou Wang $^{2}$ + +1 Beijing Institute of Technology, Beijing, China. + +2 University of Padova, Padova, Italy. + +{czhang,dwsong}@bit.edu.cn,{qiuchili,wang}@dei.unipd.it + +# Abstract + +The state-of-the-art Aspect-based Sentiment Analysis (ABSA) approaches are mainly based on either detecting aspect terms and their corresponding sentiment polarities, or coextracting aspect and opinion terms. 
However, the extraction of aspect-sentiment pairs lacks opinion terms as a reference, while co-extraction of aspect and opinion terms would not lead to meaningful pairs without determining their sentiment dependencies. To address the issue, we present a novel view of ABSA as an opinion triplet extraction task, and propose a multi-task learning framework to jointly extract aspect terms and opinion terms, and simultaneously parse sentiment dependencies between them with a biaffine scorer. At the inference phase, the extraction of triplets is facilitated by a triplet decoding method based on the above outputs. We evaluate the proposed framework on four SemEval benchmarks for ABSA. The results demonstrate that our approach significantly outperforms a range of strong baselines and state-of-the-art approaches.

# 1 Introduction

Aspect-based sentiment analysis (ABSA), also termed Target-based Sentiment Analysis in some literature (Liu, 2012), is a fine-grained sentiment analysis task. It is usually formulated as detecting aspect terms and the sentiments expressed in a sentence towards those aspects (Li et al., 2019; He et al., 2019; Luo et al., 2019; Hu et al., 2019). This type of formulation is referred to as aspect-sentiment pair extraction. Meanwhile, there exists another type of approach to ABSA, referred to as aspect-opinion co-extraction, which focuses on jointly deriving aspect terms (a.k.a. opinion targets) and
Example sentence: The atmosphere is attractive, but a little uncomfortable.

Aspect-sentiment pair extraction: [(atmosphere, positive), (atmosphere, negative)]

Aspect-opinion co-extraction: [atmosphere, attractive, uncomfortable]

Opinion triplet extraction: [(atmosphere, attractive, positive), (atmosphere, uncomfortable, negative)]
Figure 1: Differences among aspect-sentiment pair extraction, aspect-opinion co-extraction, and opinion triplet extraction. Words in blue are aspect terms. Words in red are opinion terms. [ ] denotes a set of extracted patterns, and ( ) denotes an extracted pattern.

opinion terms (a.k.a. opinion expressions) from sentences, yet without figuring out their sentiment dependencies (Wang et al., 2017; Li et al., 2018b). The compelling performances of both directions illustrate a strong dependency between aspect terms, opinion terms, and the expressed sentiments.

This motivates us to put forward a new perspective for ABSA as the joint extraction of aspect terms, opinion terms, and sentiment polarities,$^{2}$ in short opinion triplet extraction. An illustrative example of the differences among aspect-sentiment pair extraction, aspect-opinion co-extraction, and opinion triplet extraction is given in Figure 1. Opinion triplet extraction can be viewed as an integration of aspect-sentiment pair extraction and aspect-opinion co-extraction, taking into consideration their complementary nature. It brings two-fold advantages: (1) the opinions can boost the expressive power of models and help better determine aspect-oriented sentiments; (2) the sentiment dependencies between aspects and opinions can bridge the gap of how sentiment decisions are made and further promote the interpretability of models.

There is some prior research with a similar viewpoint. Peng et al. (2019) propose to extract opinion tuples, i.e., (aspect-sentiment pair, opinion)s, by first jointly extracting aspect-sentiment pairs and opinions with two sequence taggers, in which sentiments are attached to aspects via unified tags,$^{3}$ and then pairing the extracted aspect-sentiments and opinions with an additional classifier. Despite the remarkable performance the approach has achieved, two issues need to be addressed.
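To make the contrast concrete, the three output formats can be written down as plain Python data for the sentence in Figure 1 (an illustrative sketch; the tuple encoding is ours, not part of any model):

```python
# The example sentence from Figure 1, with the three task outputs
# written as plain tuples (our own illustrative encoding).
sentence = "The atmosphere is attractive, but a little uncomfortable."

# Aspect-sentiment pair extraction: no opinion terms to justify the polarity.
pairs = [("atmosphere", "positive"), ("atmosphere", "negative")]

# Aspect-opinion co-extraction: terms only, no sentiment dependencies.
terms = ["atmosphere", "attractive", "uncomfortable"]

# Opinion triplet extraction: aspect, opinion, and their sentiment together.
triplets = [("atmosphere", "attractive", "positive"),
            ("atmosphere", "uncomfortable", "negative")]

# The triplets subsume both simpler formulations.
assert {(a, s) for a, _, s in triplets} == set(pairs)
assert set(terms) == {a for a, _, _ in triplets} | {o for _, o, _ in triplets}
```

The two assertions at the end spell out the "integration" claim: dropping the middle element of each triplet recovers the aspect-sentiment pairs, and collecting the first two elements recovers the co-extracted terms.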
The first issue arises from the prediction of aspects and sentiments with a set of unified tags, which degrades the sentiment dependency parsing process to a binary classification. As discussed in prior studies on aspect-sentiment pair extraction (He et al., 2019; Luo et al., 2019; Hu et al., 2019), although the framework with the unified tagging scheme is theoretically elegant and mitigates the computational cost, it is insufficient to model the interaction between the aspects and sentiments (He et al., 2019; Luo et al., 2019).

Secondly, the coupled aspect-sentiment formalization disregards the importance of their interaction with opinions. Such interaction has been shown to be important for handling the overlapping circumstances where different triplet patterns share certain elements, in other triplet extraction-based tasks such as relation extraction (Fu et al., 2019). To show why triplet interaction modelling is crucial, we divide triplets into three categories, i.e., aspect overlapped, opinion overlapped, and normal ones. Examples of these three kinds of triplets are shown in Figure 2. We can observe that two triplets tend to have the same sentiment if they share the same aspect or opinion. Hence, modelling triplet interaction shall benefit the ABSA task, yet it cannot be explored with the unified aspect-sentiment tags, in which sentiments have been attached to aspects without considering the overlapping cases.

To circumvent the above issues, we propose a multi-task learning framework for opinion triplet extraction, namely OTE-MTL, to jointly detect aspects, opinions, and sentiment dependencies. On one hand, the aspects and opinions can be extracted with two independent heads in the multi-head architecture we propose. On the other hand, we decouple sentiment prediction from aspect extraction.
Instead, we employ a sentiment dependency parser as the third head, to predict word-level sentiment dependencies, which will be utilized to further decode span-level$^{4}$ dependencies when incorporated with the detected aspects and opinions. In doing so, we expect to alleviate the issues brought by the unified tagging scheme. Specifically, we exploit sequence tagging strategies (Lample et al., 2016) for the extraction of aspects and opinions, whilst taking advantage of a biaffine scorer (Dozat and Manning, 2017) to obtain word-level sentiment dependencies. Additionally, since these task-heads are jointly trained, the learning objectives of aspect and opinion extraction could be considered as regularization applied on the sentiment dependency parser. In this way, the parser is learned with aspect- and opinion-aware constraints, therefore fulfilling the demand of triplet interaction modelling. Intuitively, if we are provided with a sentence containing two aspects but only one opinion (e.g., the third example in Figure 2), we can thereby identify triplets with an overlapped opinion.

![](images/64abf3820f255e9e2cdccbb7bbaedb0fc903e86b55777e5f93ea2759d093e0bf.jpg)
Figure 2: Categories of triplets. Spans in blue are aspects and spans in red are opinions. Arcs indicate sentiment dependencies and are always directed from an aspect to an opinion.

Extensive experiments are carried out on four SemEval benchmarking data collections for ABSA. Our framework is compared with a range of state-of-the-art approaches. The results demonstrate the effectiveness of our overall framework and the individual components within it. A further case study shows how our model better handles overlapping cases.

# 2 Proposed Framework

# 2.1 Problem Formulation

Given an input sentence $S = \{w_{i}\}_{i=1}^{|S|}$ , our model aims to output a set of triplets $T = \{t_{j}\}_{j=1}^{|T|}$ , where $|S|, |T|$ are the lengths of the sentence and the triplet set, respectively.
A triplet $t_{j}$ consists of three elements, i.e., $[m_{j}^{(ap)}, m_{j}^{(op)}, m_{j}^{(st)}]$ , which separately stand for the aspect span, opinion span, and sentiment. While the aspects and opinions are usually spans over several words in the sentence, we simplify the notation with the start position (denoted as $sp$ ) and end position (denoted as $ep$ ) of a span. Accordingly, $m_j^{(ap)}$ and $m_j^{(op)}$ can be represented as $(sp_j^{(ap)},ep_j^{(ap)})$ and $(sp_j^{(op)},ep_j^{(op)})$ . Thus, the problem is formulated as finding a function $\mathcal{F}$ that accurately maps the sentence $S = \{w_i\}_{i=1}^{|S|}$ onto a triplet set $T = \{t_j | t_j = [(sp_j^{(ap)},ep_j^{(ap)}),(sp_j^{(op)},ep_j^{(op)}),m_j^{(st)}]\}_{j=1}^{|T|}$ .

![](images/446502d1eb378012f3ea99cb03856249a5114ad25d47157a8df44f499785b204.jpg)
Figure 3: An overview of our proposed framework.

# 2.2 The OTE-MTL Framework

Our proposed OTE-MTL framework folds the triplet extraction process into two stages, i.e., a prediction stage and a decoding stage. An overview of our framework is presented in Figure 3. The prediction stage is parameterized by neural models and is thus trainable. It builds upon a sentence encoding module, based on word embeddings and a bidirectional LSTM structure, to learn abstract representations of aspects and opinions. Underpinned by the abstract representations, there are three core components, accounting for three subgoals, i.e., aspect tagging, opinion tagging, and word-level sentiment dependency parsing. After the aspects, opinions and word-level dependencies have been detected, a decoding stage is then carried out to produce triplets based on heuristic rules.

# 2.3 Sentence Encoding

Context awareness is crucial for sentence encoding, i.e., encoding a sentence into a sequence of vectors. Hence, we adopt a bidirectional Long Short-term
Hence, we adopt a bidirectional Long Short-term + +Memory network (LSTM) (Hochreiter and Schmidhuber, 1997) as our sentence encoder, owing to the context modelling capability of LSTMs. In order to encode the input sentence, we first embed each word in a sentence to a low-dimensional vector space (Bengio et al., 2003) with pre-trained word embeddings5. With the embedded word representations $E = \{\mathbf{e}_i \mid \mathbf{e}_i \in \mathbb{R}^{d_e}\}_{i=1}^{|S|}$ , the bidirectional LSTM is employed to attain contextualized representations of words $H = \{\mathbf{h}_i \mid \mathbf{h}_i \in \mathbb{R}^{2d_h}\}_{i=1}^{|S|}$ by the following operation: + +$$ +\mathbf {h} _ {i} = \left[ \overrightarrow {\mathrm {L S T M}} (\mathbf {e} _ {i}) \oplus \overleftarrow {\mathrm {L S T M}} (\mathbf {e} _ {i}) \right] \tag {1} +$$ + +where $d_{e}$ and $d_{h}$ denote the dimensionality of a word embedding and a hidden state from an unidirectional LSTM, while $\overrightarrow{\mathrm{LSTM}} (\cdot)$ and $\overleftrightarrow{\mathrm{LSTM}} (\cdot)$ stand for forward and backward LSTM, respectively. $\oplus$ means vector concatenation. + +# 2.4 Aspect and Opinion Representation + +We then extract the aspect- and opinion-specific features from the encoded hidden states, by applying dimension-reducing linear layers and nonlinear functions, rather than directly feeding the hidden states into the next components, for two reasons. First, the hidden states might contain superfluous information for follow-on computations, potentially causing a risk of overfitting. Second, such operations are expected to strip away irrelevant features for aspect tagging and opinion tagging. 
The computation process is formulated as + +below: + +$$ +\mathbf {r} _ {i} ^ {(a p)} = g \left(\mathbf {W} _ {r} ^ {(a p)} \mathbf {h} _ {i} + \mathbf {b} _ {r} ^ {(a p)}\right) \tag {2} +$$ + +$$ +\mathbf {r} _ {i} ^ {(o p)} = g \left(\mathbf {W} _ {r} ^ {(o p)} \mathbf {h} _ {i} + \mathbf {b} _ {r} ^ {(o p)}\right) \tag {3} +$$ + +where $\mathbf{r}_i^{(ap)}\in \mathbb{R}^{d_r}$ and $\mathbf{r}_i^{(op)}\in \mathbb{R}^{d_r}$ are aspect and opinion representations, $d_{r}$ is the dimensionality of the representation. $\mathbf{W}_r^{(ap)},\mathbf{W}_r^{(op)}\in \mathbb{R}^{d_r\times 2d_h}$ and $\mathbf{b}_r^{(ap)},\mathbf{b}_r^{(op)}\in \mathbb{R}^{d_r}$ are learnable weights and biases. Here, $g(\cdot)$ is a nonlinear function, which is ReLU, i.e., $\max (\cdot ,0)$ , in our case. + +Note that above representations are prepared for tagging. Likewise, we obtain another set of representations $\mathbf{r}_i^{(ap)'}$ , $\mathbf{r}_i^{(op)'} \in \mathbb{R}^{d_r}$ for sentiment parsing, following the same procedure as Equation 2 and 3 but with different parameters. + +# 2.5 Multi-task Architecture + +The multi-task architecture includes two parts: aspect and opinion tagging, and word-level sentiment dependency parsing. + +Aspect and Opinion Tagging. Following the $\{\mathsf{B},\mathsf{I},\mathsf{O}\}$ tagging scheme, we tag each word in the sentence with two taggers, i.e., one tagger for aspect, and the other for opinion. 
In particular, we receive two series of distributions over $\{\mathsf{B},\mathsf{I},\mathsf{O}\}$ tags, $\mathbf{p}_i^{(ap)}$ and $\mathbf{p}_i^{(op)}\in \mathbb{R}^3$ , through:

$$
\mathbf{p}_{i}^{(ap)} = \operatorname{softmax}\left(\mathbf{W}_{t}^{(ap)} \mathbf{r}_{i}^{(ap)} + \mathbf{b}_{t}^{(ap)}\right) \tag{4}
$$

$$
\mathbf{p}_{i}^{(op)} = \operatorname{softmax}\left(\mathbf{W}_{t}^{(op)} \mathbf{r}_{i}^{(op)} + \mathbf{b}_{t}^{(op)}\right) \tag{5}
$$

where $\mathbf{W}_t^{(ap)}$ , $\mathbf{W}_t^{(op)} \in \mathbb{R}^{3 \times d_r}$ and $\mathbf{b}_t^{(ap)}$ , $\mathbf{b}_t^{(op)} \in \mathbb{R}^3$ are trainable parameters.

Accordingly, we can deduce the loss function for tagging, the standard cross entropy with categorical distributions:

$$
\mathcal{L}_{tag} = -\frac{1}{|S|} \sum_{i} \sum_{k} \hat{\mathbf{p}}_{i,k}^{(ap)} \log\left(\mathbf{p}_{i,k}^{(ap)}\right) - \frac{1}{|S|} \sum_{i} \sum_{k} \hat{\mathbf{p}}_{i,k}^{(op)} \log\left(\mathbf{p}_{i,k}^{(op)}\right) \tag{6}
$$

where $\hat{\mathbf{p}}_i^{(ap)}$ and $\hat{\mathbf{p}}_i^{(op)}$ respectively denote the ground-truth aspect and opinion tag distributions of each word, and $k$ enumerates the items in a categorical distribution.

Word-level Sentiment Dependency Parsing. There are $|S|^2$ possible word pairs (including self-pairing cases) in each sentence, and we intend to determine the dependency type of every word pair. The set of dependency types is defined as {NEU, NEG, POS, NO-DEP}, so as to cover all kinds of dependencies. Here, NO-DEP denotes no sentiment dependency.
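The tagging heads in Equations 4-6 amount to independent per-token softmax classifiers over $\{\mathsf{B},\mathsf{I},\mathsf{O}\}$. A minimal NumPy sketch of the aspect half (shapes and random inputs are illustrative, not the trained model):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
S, d_r = 7, 100                     # sentence length and representation size (illustrative)
R_ap = rng.normal(size=(S, d_r))    # aspect representations r_i^(ap)
W_ap, b_ap = rng.normal(size=(3, d_r)), np.zeros(3)   # Eq. 4 parameters

# Eq. 4: per-token distributions over {B, I, O} for the aspect tagger.
P_ap = softmax(R_ap @ W_ap.T + b_ap)          # shape (S, 3)

# Eq. 6 (aspect half): cross entropy against one-hot gold tags.
gold = np.eye(3)[rng.integers(0, 3, size=S)]  # random gold tags, for the sketch only
L_tag_ap = -(gold * np.log(P_ap)).sum() / S
assert np.allclose(P_ap.sum(axis=1), 1.0) and L_tag_ap > 0
```

The opinion tagger (Equation 5 and the second term of Equation 6) is identical in shape, with its own parameters.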
In addition, inspired by the table filling methods (Miwa and Sasaki, 2014; Bekoulis et al., 2018), sentiment dependencies are considered only for the pair of words consisting of exactly the last word of an aspect and the last word of an opinion in a triplet. Take the example sentence "Great battery, start up speed." For the triplet (start up speed, Great, POS), the sentiment dependency is simplified to (speed, Great, POS). As such, the learning redundancy for the parser is much reduced, while the span-level sentiment dependency is still available when it is combined with the extracted aspect and opinion spans.

We utilize a biaffine scorer to capture the interaction of the two words in each word pair, due to its proven expressive power in syntactic dependency parsing (Dozat and Manning, 2017). The score assignment to each word pair is as below:

$$
\begin{aligned} \tilde{\mathbf{s}}_{i,j,k} &= \left[ \mathbf{W}^{(k)} \mathbf{r}_{i}^{(ap)\prime} + \mathbf{b}^{(k)} \right]^{\top} \mathbf{r}_{j}^{(op)\prime} \\ &= \left[ \mathbf{W}^{(k)} \mathbf{r}_{i}^{(ap)\prime} \right]^{\top} \mathbf{r}_{j}^{(op)\prime} + {\mathbf{b}^{(k)}}^{\top} \mathbf{r}_{j}^{(op)\prime} \end{aligned} \tag{7}
$$

where $\tilde{\mathbf{s}}_{i,j,k}$ stands for the score of the $k$-th dependency type for a word pair $(w_i, w_j)$ . $\mathbf{W}^{(k)}$ and $\mathbf{b}^{(k)}$ are the trainable weight and bias for producing the $k$-th score, respectively.
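Stripped of batching, the biaffine score of Equation 7 is an affine map of the aspect representation followed by a dot product with the opinion representation. A NumPy sketch (the dimensions and random inputs are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
S, d_r, K = 7, 100, 4       # tokens, representation size, |{NEU, NEG, POS, NO-DEP}|
R_ap = rng.normal(size=(S, d_r))            # r_i^(ap)' for sentiment parsing
R_op = rng.normal(size=(S, d_r))            # r_j^(op)'
W = rng.normal(size=(K, d_r, d_r)) * 0.01   # W^(k)
b = rng.normal(size=(K, d_r)) * 0.01        # b^(k)

# Eq. 7: s~[i, j, k] = (W^(k) r_i^(ap)' + b^(k))^T r_j^(op)'
# First term (bilinear) plus second term (prior on w_j), as in the factorization.
scores = np.einsum('kde,ie,jd->ijk', W, R_ap, R_op) \
       + np.einsum('kd,jd->jk', b, R_op)[None, :, :]
assert scores.shape == (S, S, K)   # one score vector per word pair
```

As the code makes explicit, the scorer is one affine transform plus matrix products, so it covers all $|S|^2$ word pairs in a single vectorized pass.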
Moreover, we use $\mathbf{s}_{i,j}$ to denote the softmax-normalized vector of scores, which contains the probabilities of all dependency types for the word pair $(w_i, w_j)$ :

$$
\mathbf{s}_{i,j} = \operatorname{softmax}\left(\tilde{\mathbf{s}}_{i,j}\right) \tag{8}
$$

As observed from the factorization in Equation 7, conceptually the biaffine scorer can not only model the likelihood of $w_{i}$ receiving $w_{j}$ as a dependent of a specific type (the first term), but also include the prior probability of $w_{j}$ being a dependent of that type (the second term). When implemented, the scorer is essentially an affine transform followed by a matrix multiplication.

Thereafter, the loss function for word-level sentiment dependency parsing is the cross entropy given below:

$$
\mathcal{L}_{dep} = -\frac{1}{|S|^{2}} \sum_{(i,j)} \sum_{k} \hat{\mathbf{s}}_{i,j,k} \log\left(\mathbf{s}_{i,j,k}\right) \tag{9}
$$

where $\hat{\mathbf{s}}_{i,j}$ is the ground-truth dependency distribution for each word pair $(w_i, w_j)$ .

Overall Learning Objective. Ultimately, we can conduct joint training of the multi-task learning framework with the following objective:

$$
\min_{\theta} \mathcal{L} = \min_{\theta} \mathcal{L}_{tag} + \alpha \mathcal{L}_{dep} + \gamma ||\theta||_{2} \tag{10}
$$

where $\alpha$ is a trade-off term to balance the learning between tagging and sentiment dependency parsing, and $\theta$ stands for the trainable parameters. $||\theta||_2$ and $\gamma$ are the $L_{2}$ regularization of $\theta$ and its controlling coefficient, respectively.

# 2.6 Triplet Decoding

Upon obtaining the extracted aspects, opinions, and word-level sentiment dependencies, we conduct a triplet decoding process using heuristic rules.
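The decoding step (Algorithm 1 below) can be sketched in Python: starting from each pivot index given by the word-level dependency, walk backwards while the tag is I. This is our 0-based reimplementation; the paper's pseudocode is 1-based, so its worked example [(4, 6), (1, 1), POS] corresponds to ((3, 5), (0, 0), "POS") here.

```python
def decode_triplet(ap_tags, op_tags, dep):
    """Decode one span-level triplet from BIO tags and a word-level
    sentiment dependency (j, k, polarity), all 0-based.
    j / k point at the last word of the aspect / opinion span."""
    j, k, polarity = dep

    def span_start(tags, end):
        i = end
        while i > 0 and tags[i] == "I":  # stop on B or O (stop-on-non-I)
            i -= 1
        return i

    return (span_start(ap_tags, j), j), (span_start(op_tags, k), k), polarity

# Worked example from the paper: "Great battery , start up speed ."
ap = ["O", "B", "O", "B", "I", "I", "O"]
op = ["B", "O", "O", "O", "O", "O", "O"]
assert decode_triplet(ap, op, (5, 0, "POS")) == ((3, 5), (0, 0), "POS")
```

The boundary handling is slightly simplified: the `i > 0` guard plays the role of the underflow check in the pseudocode.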
Basically, we view the sentiment dependencies resulting from the biaffine scorer as pivots, and carry out a reverse-order traverse on the tags generated by the aspect and opinion taggers.

For example, from the word sequence "Great battery, start up speed.", we get aspect tags $\{O, B, O, B, I, I, O\}$ , opinion tags $\{B, O, O, O, O, O, O\}$ , and a word-level sentiment dependency, represented in index form as (6, 1, POS). The yielded sentiment dependency means that the last word of the aspect is the 6th word (speed), the last word of the opinion is the 1st word (Great), and together they form a positive sentiment. The traverse is conducted backwards from the aspect and opinion indices (pivots) over the word sequence, following the stop-on-non-I criterion, and the final output should be [(4, 6), (1, 1), POS]. Details of the algorithm are shown in Algorithm 1.

Algorithm 1 Decoding w/ stop-on-non-$\mathbb{I}$ criterion.
Input: aspect tags $\{g_i^{(ap)}\}_{i = 1}^n$ , opinion tags $\{g_{i}^{(op)}\}_{i = 1}^{n}$ , sentiment dependency $(j,k,p)$
Output: triplet $t$
1: $j^{\prime}\gets j$
2: while $g_{j^{\prime}}^{(ap)}$ is I do $\triangleright$ stop on B and O.
3: $j^{\prime}\gets j^{\prime} - 1$
4: if $j^{\prime}\leq 0$ then $\triangleright$ or exceeding the boundary.
5: break
6: $k^{\prime}\gets k$
7: while $g_{k^{\prime}}^{(op)}$ is I do
8: $k^{\prime}\gets k^{\prime} - 1$
9: if $k^{\prime}\leq 0$ then
10: break
11: $t\gets [(j^{\prime},j),(k^{\prime},k),p]$

# 3 Experimental Setup

# 3.1 Datasets and Evaluation Metrics

We conduct experiments on three datasets in the "restaurant" domain from SemEval 2014, 2015 and 2016 (Pontiki et al., 2014, 2015, 2016), and one dataset in the "laptop" domain from SemEval 2014. Hereafter, we will refer to them as REST14, REST15, REST16, and LAPTOP14, respectively. Since they are originally annotated with aspects and sentiments only, we additionally adopt the annotations of opinion terms from Wang et al. (2017) and Peng et al. (2019).
Each dataset is split into three subsets, namely a training set, a validation set, and a test set. The statistics of these datasets are shown in Table 1. It is worth noting that, in (Peng et al., 2019), the opinion overlapped triplets (in short, OOTs) are removed from all four datasets in the preprocessing step. However, these cases are preserved in our setting. A key observation from the statistics is that there are large amounts of overlapping cases in the datasets, on average accounting for $24.2\%$ of the total number of triplets across all four datasets. This phenomenon suggests the need for, and significance of, triplet interaction modelling.

Moreover, we adopt precision, recall, and micro F1-measure as our evaluation metrics for triplet extraction. Only exactly matched triplets, i.e., with all of the aspect, opinion and sentiment matched against the gold standard, are counted as true positives during evaluation. All results are reported by averaging 10 runs with random initialization. A paired t-test is used to examine the statistical significance of the results.

# 3.2 Implementation Details

In our experiments, the word embeddings are initialized with pretrained GloVe word vectors (Pennington et al., 2014). The dimensionalities of the embeddings $d_{e}$ , hidden states $d_{h}$ , and aspect and opinion representations $d_{r}$ are set to 300, 300, and 100, respectively. The trade-off term in the learning objective, i.e., $\alpha$ , is set to 1. The coefficient for $L_{2}$ regularization, i.e., $\gamma$ , is $10^{-5}$ . Dropout is applied on the embeddings to avoid overfitting, with a drop rate of 0.5. The learning rate during training is $10^{-3}$ , while the batch size is 32. All the parameters are initialized with a uniform distribution and optimized with the Adam optimizer. Besides, we set a patience number of 5, so that the learning process stops early if there is no further performance improvement on the validation set.
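The exact-match evaluation just described can be sketched as follows. This is a simplified illustration on triplet sets; in practice the true-positive, predicted, and gold counts are aggregated over the whole test set before computing micro F1:

```python
def triplet_f1(pred, gold):
    """Micro precision/recall/F1 with exact matching: a predicted triplet
    counts as a true positive only if its aspect span, opinion span, and
    sentiment all match a gold triplet."""
    pred, gold = set(pred), set(gold)
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = [(("atmosphere",), ("attractive",), "POS"),
        (("atmosphere",), ("uncomfortable",), "NEG")]
pred = [(("atmosphere",), ("attractive",), "POS"),
        (("atmosphere",), ("uncomfortable",), "POS")]  # wrong sentiment
p, r, f1 = triplet_f1(pred, gold)
assert (p, r, f1) == (0.5, 0.5, 0.5)
```

Note how a single wrong element (here, the sentiment) discards the whole triplet, which is exactly the strictness discussed later in the error analysis.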
| Dataset | | # sentence | # triplet | # sentence w/ overlap | # triplet w/ overlap |
| --- | --- | --- | --- | --- | --- |
| REST14 | train | 1300 | 2409 | 437 | 578 |
| | val. | 323 | 590 | 92 | 147 |
| | test | 496 | 1014 | 193 | 389 |
| REST15 | train | 593 | 977 | 151 | 189 |
| | val. | 148 | 160 | 42 | 62 |
| | test | 318 | 479 | 68 | 71 |
| REST16 | train | 842 | 1370 | 208 | 256 |
| | val. | 210 | 334 | 52 | 61 |
| | test | 320 | 507 | 77 | 120 |
| LAPTOP14 | train | 920 | 1451 | 263 | 365 |
| | val. | 228 | 380 | 80 | 101 |
| | test | 339 | 552 | 103 | 140 |
Table 1: Statistics of the datasets. "Sentence w/ overlap" means a sentence containing overlapped triplets, and "triplet w/ overlap" denotes a triplet that overlaps with other triplets.

# 3.3 Baselines and Variants

To perform a systematic comparison, we introduce a variety of baselines, which can be classified into two groups, i.e., pipeline methods proposed in Peng et al. (2019), and joint methods we adapted from previous aspect-opinion co-extraction systems based on our framework OTE-MTL.

First, we list the baselines with a pipeline structure. (1) Pipeline (Peng et al., 2019) decomposes triplet extraction into two stages: stage one predicts unified aspect-sentiment and opinion tags, while stage two pairs the two results from stage one. We further include three models adjusted in accordance with Pipeline: (2) Unified+ (Li et al., 2019) is a typical aspect-sentiment pair extraction system, in which the unified tagging scheme is used. (3) RENANTE+ (Dai and Song, 2019) is originally an aspect-opinion co-extraction system trained in a weakly-supervised manner. (4) CMLA+ (Wang et al., 2017) is an aspect-opinion co-extraction system modelling the interaction between the aspects and opinions. Additionally, we adapt two extra baseline models to the multi-task learning setting, resulting in: (5) CMLA-MTL and (6) HAST-MTL (Li et al., 2018b), which are extended from existing state-of-the-art aspect-opinion co-extraction systems.

We also propose a list of variants of our proposed OTE-MTL framework to examine the efficacy of its different components. (a) OTE-MTL-Inter feeds the predictions of aspects and opinions to the biaffine scorer by imposing tag embeddings and concatenating them to the input of the scorer. (b) OTE-MTL-Concat replaces the biaffine scorer with an activated linear layer applied on the concatenated vectors of the aspect and opinion representations.
(c) OTE-MTL-Unified uses the unified aspect-sentiment tagging scheme and degrades the biaffine scorer to a binary pair classifier, which is similar to Pipeline but is jointly trained. (d) OTE-MTL-Collapsed combines the aspect and opinion tagging components into one single module via a collapsed tag set $\{B\text{-}AP, I\text{-}AP, B\text{-}OP, I\text{-}OP, O\}$ , and is thus forced to account for the constraint that aspects and opinions never overlap.

# 4 Results and Analysis

# 4.1 Quantitative Evaluation

Comparison with Baselines. The results in comparison with the baselines are shown in Table 2, on datasets both with and without OOTs for a fair comparison. Our proposed model OTE-MTL consistently outperforms all state-of-the-art baselines on all datasets, with and without OOTs. Thus, we conclude that OTE-MTL is effective in dealing with the opinion triplet extraction task.

We observe that the results of OTE-MTL on datasets without OOTs are generally better than those with OOTs, except for LAPTOP14, implying that the datasets without OOTs are comparably simpler and a good performance is easier to achieve on them. Hence, we believe that overlapping cases bring challenges and can be partly addressed via triplet interaction modelling. Nevertheless, CMLA+ presents a worse performance in contrast to the superior performance produced by CMLA-MTL. This fact suggests that, by decoupling aspect and sentiment predictions and putting them under the multi-task learning framework, the model can be enhanced and attain better results.

Comparison with Variants. The comparison with the variants of OTE-MTL shown in Table 2 aims to verify the effectiveness of the different components of OTE-MTL. As a whole, OTE-MTL surpasses all its variants. Specifically, OTE-MTL is slightly better than OTE-MTL-Inter, while exceeding the other variants by large margins.
Rather than implicitly modelling the interaction between tagging and sentiment dependency parsing, OTE-MTL-Inter explicitly feeds the embeddings of predicted tags to the biaffine scorer. It attains an inferior performance. We conjecture that the reason lies in latent error propagation when tags are partially wrong, hinting that implicit modelling is a promising choice. The failure of OTE-MTL-Concat, which cannot model priors, supports the idea of leveraging the biaffine scorer as a word-level sentiment dependency parser. The result of OTE-MTL-Unified indicates that coupling aspect and sentiment extraction is suboptimal. Furthermore, we use OTE-MTL-Collapsed to account for the non-overlap constraint of aspects and opinions; however, it obtains unexpectedly poor results. A possible explanation is that simultaneously collapsing aspect and opinion representations into one space may limit the capacity for expressiveness.

| Model | REST14 pre. | REST14 rec. | REST14 f1. | REST15 pre. | REST15 rec. | REST15 f1. | REST16 pre. | REST16 rec. | REST16 f1. | LAPTOP14 pre. | LAPTOP14 rec. | LAPTOP14 f1. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RENANTE+ *† | 30.90 | 38.30 | 34.20 | 29.40 | 26.90 | 28.00 | 27.10 | 20.50 | 23.30 | 23.10 | 17.60 | 20.00 |
| CMLA+ *† | 38.80 | 47.10 | 42.50 | 34.40 | 37.60 | 35.90 | 43.60 | 39.80 | 41.60 | 31.40 | 34.60 | 32.90 |
| Unified+ *† | 43.83 | 62.38 | 51.43 | 43.34 | 50.73 | 46.69 | 38.19 | 53.47 | 44.51 | 42.25 | 42.78 | 42.47 |
| Pipeline *† | 42.29 | 64.07 | 50.90 | 40.97 | 54.68 | 46.79 | 46.76 | 62.97 | 53.62 | 40.40 | 47.24 | 43.50 |
| OTE-MTL (ours) * | 66.04 | 56.25 | **60.62**‡ | 57.51 | 43.96 | **49.76**‡ | 64.68 | 54.97 | **59.36**‡ | 50.52 | 39.71 | **44.31**‡ |
| CMLA-MTL | 43.24 | 44.95 | 43.97 | 35.87 | 39.85 | 37.55 | 44.22 | 46.43 | 45.01 | 33.61 | 36.11 | 34.68 |
| HAST-MTL | 58.97 | 46.75 | 52.04 | 41.48 | 37.58 | 39.32 | 52.32 | 48.56 | 49.92 | 47.70 | 25.74 | 33.24 |
| OTE-MTL (ours) | 64.54 | 55.57 | **59.67**‡ | 54.18 | 45.20 | **48.97**‡ | 58.16 | 54.02 | **55.83**‡ | 48.17 | 42.43 | **45.05**‡ |
| OTE-MTL-Inter | 66.24 | 54.38 | 59.61 | 49.32 | 46.12 | 47.33 | 57.71 | 53.06 | 55.17 | 47.66 | 41.85 | 44.43 |
| OTE-MTL-Concat | 48.79 | 48.28 | 48.46 | 46.88 | 42.61 | 44.53 | 52.55 | 48.03 | 50.09 | 46.81 | 38.46 | 42.14 |
| OTE-MTL-Unified | 51.19 | 44.65 | 47.64 | 40.32 | 34.38 | 37.01 | 48.52 | 40.30 | 43.85 | 37.42 | 34.17 | 35.54 |
| OTE-MTL-Collapsed | 45.38 | 36.26 | 40.19 | 32.55 | 29.52 | 30.68 | 37.86 | 33.06 | 35.19 | 32.56 | 27.23 | 29.60 |

Table 2: Quantitative evaluation results (%). Results of models with marker * are reported on datasets without OOTs. Results of models with marker † are directly cited from Peng et al. (2019). F1 measures in bold are the best performing numbers on each dataset. F1 measures with marker ‡ are significantly better than the other numbers on each dataset under a paired t-test ( $p < 0.01$ ).

# 4.2 Qualitative Evaluation

Case Study. To understand in what way our framework surpasses the other unified tagging-based approaches, we perform a case study on three representative examples from the test sets, as displayed in Table 3.

We notice that both OTE-MTL-Unified and OTE-MTL work well for the first case, which involves no overlapping. Nonetheless, OTE-MTL-Unified performs less well when faced with the second sample, which contains aspect overlapped triplets and requires triplet interaction modelling. This case also shows conflicting opinions towards an aspect (Tan et al., 2019), which is not covered by the training set but exists in real-world applications. It cannot be coped with by coupled aspect-sentiment tags, since a tag should not carry diverse sentiments. Thus, decoupling sentiments from aspect tags is necessary. In the third example with long-range dependency, both aspect overlap and opinion overlap exist. For this case, OTE-MTL is not strong enough to make all correct predictions, but still seems to work better than OTE-MTL-Unified.

Error Analysis. To further find out the strengths and limitations of OTE-MTL, we conduct a detailed analysis of false positives (extracted by the system but not existing in the ground truth) and false negatives (not extracted by the system but existing in the ground truth) on REST14. For false positives, we categorize them into four classes: false aspect, false opinion, false sentiment, and other (mixed) cases. For false negatives, we divide them according to the categories of overlap (i.e., aspect overlapped, opinion overlapped, normal).

Figure 4 shows the analysis result. False positives are largely triggered by only one false element, especially an aspect or opinion, of an extracted triplet, motivating us to develop more robust span detection algorithms. In addition, this circumstance might also reflect that exact match is not an ideal metric for evaluating systems, since a minor discrepancy in a span may be harmless for opinion interpretation in practice, as we can observe in Table 3. Likewise, from Figure 4, we posit that overlapping cases are still non-trivial to solve, given that they account for almost half of the false negatives.

# 5 Related Work

# 5.1 Aspect-based Sentiment Analysis

Our work falls in the broad scope of ABSA. As we have previously discussed, there are two types
| Case | Ground truth | OTE-MTL-Unified | OTE-MTL |
| --- | --- | --- | --- |
| Great food but the service was dreadful! | [(food, Great, POS), (service, dreadful, NEG)] | [(food, Great, POS), (service, dreadful, NEG)] | [(food, Great, POS), (service, dreadful, NEG)] |
| The atmosphere is attractive, but a little uncomfortable. | [(atmosphere, attractive, POS), (atmosphere, uncomfortable, NEG)] | [(atmosphere, attractive, POS), (atmosphere, uncomfortable, POS$^{x}$)] | [(atmosphere, attractive, POS), (atmosphere, uncomfortable, NEG)] |
| I am pleased with the fast log on, speedy WiFi connection and the long battery life. | [(log on, fast, POS), (WiFi connection, speedy, POS), (battery life, long, POS), (log on, pleased, POS), (WiFi connection, pleased, POS), (battery life, pleased, POS)] | [(log$^{x}$, fast, POS), (WiFi connection, speedy, POS), (battery life, long, POS), (log$^{x}$, pleased, POS), ()$^{x}$, ()$^{x}$] | [(log$^{x}$, fast, POS), (WiFi connection, speedy, POS), (battery life, long, POS), (log$^{x}$, pleased, POS), (WiFi$^{x}$, pleased, POS), ()$^{x}$] |
+ +Table 3: Case study. Marker ${}^{x}$ indicates incorrect predictions. + +![](images/e1a0f95e69af27e7b2324516c1c52fb7f12656f4e6ed4478a9098bf2b9fce210.jpg) +Figure 4: Components of false positives and false negatives. + +of approaches in ABSA: aspect-sentiment pair extraction that concentrates on collaboratively detecting aspects and attached sentiment orientations (Li et al., 2019; He et al., 2019; Luo et al., 2019; Hu et al., 2019), and aspect-opinion co-extraction that tends to co-extract aspects and opinions (Wang et al., 2017; Li et al., 2018b). Alternatively, ABSA is also formulated as determining sentiment polarity of a given aspect in a sentence (Jiang et al., 2011; Dong et al., 2014; Tang et al., 2016a,b; Li et al., 2018a; Zhang et al., 2019), which is inflexible for practical use since aspects are not naturally accessible. + +In this paper, we unify the aspect-sentiment pair extraction and aspect-opinion co-extraction, and formulate them as a triplet extraction problem. Our work is also aimed at addressing several issues in Peng et al. (2019), as discussed in the Introduction Section. + +# 5.2 Triplet Extraction-based Task + +Other than ABSA, a majority of triplet extraction-based tasks lies in the area of natural language processing. For example, Joint Entity and Rela + +tion Extraction (JERE) aims at detecting a pair of entity mentions in a sentence and predicting relation between the two. Approaches to JERE can be sorted into four streams: pipeline-based, table filling-based (Miwa and Sasaki, 2014; Bekoulis et al., 2018; Fu et al., 2019), tagging-based (Zheng et al., 2017), and encoder decoder-based (Zeng et al., 2018). Our work is motivated by table filling methods in Miwa and Sasaki (2014) and Bekoulis et al. (2018). We decompose triplet extraction to three subtasks, in which word-level sentiment dependency parsing can actually be viewed as a table filling problem, and solve them jointly in a multitask learning framework. 
# 6 Conclusions and Future Work

Our work puts forward an opinion triplet extraction perspective for aspect-based sentiment analysis. Existing works applicable to opinion triplet extraction have been shown to be insufficient, owing to their use of a unified aspect-sentiment tagging scheme and their neglect of the interactions between elements of the triplet. We therefore propose a multi-task learning framework that addresses these limitations by highlighting the uses of joint training, decoupled aspect and sentiment prediction, and regularization among correlated tasks during learning. Experimental results verify the effectiveness of our framework in comparison with a wide range of strong baselines. Comparison results with different variants of the proposed framework signify the necessity of its core components.

Based on the observations from a case study and error analysis, we plan to carry out further research in the following directions: (1) more robust taggers for aspect and opinion extraction, (2) a more flexible evaluation metric for triplet extraction, and (3) a more powerful triplet interaction mechanism (e.g., an encoder-decoder structure).

# Acknowledgments

This work is supported by The National Key Research and Development Program of China (grant No. 2018YFC0831704) and the Natural Science Foundation of China (grant No. U1636203, U1736103).

# References

Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018. Joint entity recognition and relation extraction as a multi-head selection problem. *Expert Systems with Applications*, 114:34-45.
Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb):1137-1155.
Hongliang Dai and Yangqiu Song. 2019. Neural aspect and opinion term extraction with mined rules as weak supervision.
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5268-5277.
Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. Adaptive recursive neural network for target-dependent Twitter sentiment classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 49-54.
Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.
Tsu-Jui Fu, Peng-Hsuan Li, and Wei-Yun Ma. 2019. GraphRel: Modeling text as relational graphs for joint entity and relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1409-1418.
Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2019. An interactive multi-task learning network for end-to-end aspect-based sentiment analysis. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 504-515.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li, and Yiwei Lv. 2019. Open-domain targeted sentiment analysis via span-based extraction and classification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 537-546.

Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. 2011. Target-dependent Twitter sentiment classification. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pages 151-160. Association for Computational Linguistics.
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition.
In Proceedings of NAACL-HLT, pages 260-270.
Xin Li, Lidong Bing, Wai Lam, and Bei Shi. 2018a. Transformation networks for target-oriented sentiment classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 946-956.
Xin Li, Lidong Bing, Piji Li, and Wai Lam. 2019. A unified model for opinion target extraction and target sentiment prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6714-6721.
Xin Li, Lidong Bing, Piji Li, Wai Lam, and Zhimou Yang. 2018b. Aspect term extraction with history attention and selective transformation. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 4194-4200.
Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies, 5(1):1-167.
Huaishao Luo, Tianrui Li, Bing Liu, and Junbo Zhang. 2019. DOER: Dual cross-shared RNN for aspect term-polarity co-extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 591-601.
Makoto Miwa and Yutaka Sasaki. 2014. Modeling joint entity and relation extraction with table representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1858-1869.
Haiyun Peng, Lu Xu, Lidong Bing, Fei Huang, Wei Lu, and Luo Si. 2019. Knowing what, how and why: A near complete solution for aspect-based sentiment analysis. CoRR, abs/1911.01616.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
Maria Pontiki, Dimitrios Galanis, Harris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, AL-Smadi Mohammad, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphee De Clercq, et al. 2016. SemEval-2016 task 5: Aspect based sentiment analysis.
In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 19-30. + +Maria Pontiki, Dimitrios Galanis, Harris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. Semeval-2015 task 12: Aspect based sentiment analysis. In Proceedings of the 9th international workshop on semantic evaluation (SemEval 2015), pages 486-495. +Maria Pontiki, Dimitrios Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27-35. +Xingwei Tan, Yi Cai, and Changxi Zhu. 2019. Recognizing conflict opinions in aspect-level sentiment classification with dual attention networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3426-3431. +Duyu Tang, Bing Qin, Xiaocheng Feng, and Ting Liu. 2016a. Effective lstms for target-dependent sentiment classification. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3298-3307. +Duyu Tang, Bing Qin, and Ting Liu. 2016b. Aspect level sentiment classification with deep memory network. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 214-224. +Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2017. Coupled multi-layer attentions for co-extraction of aspect and opinion terms. In Thirty-First AAAI Conference on Artificial Intelligence. +Xiangrong Zeng, Daojian Zeng, Shizhu He, Kang Liu, and Jun Zhao. 2018. Extracting relational facts by an end-to-end neural model with copy mechanism. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 506-514. +Chen Zhang, Qiuchi Li, and Dawei Song. 2019. 
Aspect-based sentiment classification with aspect-specific graph convolutional networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4560-4570.
Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, and Bo Xu. 2017. Joint extraction of entities and relations based on a novel tagging scheme. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1227-1236.
# A Novel Challenge Set for Hebrew Morphological Disambiguation and Diacritics Restoration

Avi Shmidman $^{1,2}$ , Joshua Guedalia $^{1,2}$ , Shaltiel Shmidman $^{1,2}$ , Moshe Koppel $^{1,2}$ , Reut Tsarfaty $^{1,3}$

$^{1}$ Bar Ilan University / Ramat Gan, Israel
$^{2}$ DICTA / Jerusalem, Israel
$^{3}$ Allen Institute for Artificial Intelligence

{avi.shmidman, josh.guedalia, shaltiel.shmidman, moshe.koppel, reut.tsarfaty}@biu.ac.il

# Abstract

One of the primary tasks of morphological parsers is the disambiguation of homographs. Particularly difficult are cases of unbalanced ambiguity, where one of the possible analyses is far more frequent than the others. In such cases, there may not exist sufficient examples of the minority analyses in order to properly evaluate performance, nor to train effective classifiers. In this paper we address the issue of unbalanced morphological ambiguities in Hebrew. We offer a challenge set for Hebrew homographs — the first of its kind — containing substantial attestation of each analysis of 21 Hebrew homographs. We show that the current SOTA of Hebrew disambiguation performs poorly on cases of unbalanced ambiguity. Leveraging our new dataset, we achieve a new state-of-the-art for all 21 words, improving the overall average F1 score from 0.67 to 0.95. Our resulting annotated datasets are made publicly available for further research.

# 1 Introduction

It is a known phenomenon that the distribution of linguistic units, or words, in a language follows a Zipfian distribution (Zipf, 1949), wherein a relatively small number of words appear frequently, and a much larger number of items appear in a long tail of words, as rare events (Czarnowska et al., 2019). Significantly, this also applies to the distribution of analyses of a given homograph.
Take for instance the simple POS-tag ambiguity in English between noun and verb (Elkahky et al., 2018). The word "fair" can be used as an adjective ("a fair price") or as a noun ("she went to the fair"). Yet, the distribution of these two analyses is certainly not fair; the adjectival usage is far more frequent than the nominal usage (e.g., in Bird et al. (2008) the former is six times more frequent than the latter). We will call such cases "unbalanced homographs".

Cases of unbalanced homographs pose a formidable challenge for automated morphological parsers and segmenters. In tagged training corpora, the frequent option will naturally dominate the overwhelming majority of the occurrences. If the training corpus is not sufficiently large, then the sparsity of the minority analysis will prevent generalization by machine-learning models. By the same token, it can be difficult to evaluate the performance of tagging systems regarding unbalanced homographs, because the sparsity of the minority analysis prevents computation of adequate scores.

The empirical consequences of unbalanced homographs are magnified in morphologically rich languages (MRLs), including many Semitic languages, where distinct morphemes are often affixed to the word itself, resulting in additional ambiguity (Fabri et al., 2014; Habash et al., 2009). Furthermore, in many Semitic MRLs, the letters are almost entirely consonantal, omitting vowels. This results in a particularly high number of homographs, each with a different pronunciation and meaning.

In this paper, we focus upon unbalanced homographs in Hebrew, a highly ambiguous MRL in which vowels are generally omitted (Itai and Wintner, 2008; Adler and Elhadad, 2006). Take for example the Hebrew word מדינה. This frequent word is generally read as a single nominal morpheme, meaning "country". However, it can also be read as "from the law/judgment of her", wherein the initial and final letters both serve as distinct morphemes.
This last usage is far less common, and, in an overall distribution, it would be relegated to the long tail, with very few attestations in any given corpus. + +Hebrew is a low-resource language, and as such, the problem of unbalanced homographs is particularly acute. Existing tagged corpora of Hebrew are of limited size, and in most cases of unbalanced homographs, the corpora do not provide sufficient + +examples to evaluate performance regarding minority analyses, nor to train an effective classifier. + +Here, we propose to overcome this difficulty by means of a challenge set: a group of specialized training sets which each focus upon one particular homograph, offering substantial attestations of the competing analysis. Designing such contrast sets that expose particularly hard unbalanced cases was recently proposed as a complementary evaluation effort for a range of NLP tasks by Gardner et al. (2020). Notably, all tasks therein focus exclusively on English, and do not make any reference to morphology. Another, particularly successful, instance of this approach is the Noun/Verb challenge set for English built by Elkahky et al. (2018). Yet, heretofore, no challenge sets have been built to address cases of unbalanced homographs in Hebrew. + +In order to fill this lacuna, we built a challenge set for 12 frequent cases of unbalanced Hebrew homographs. Each of these words admits of two possible analyses, each with its own diacritization and interpretation. $^{1}$ For each of the possible analyses, we gather 400-2,500 sentences exemplifying such usage, from a varied corpus consisting of news, books, and Wikipedia. Furthermore, in order to highlight the particular problem regarding unbalanced homographs, we add an additional 9 cases of balanced homographs, for contrast and comparison. All in all, the corpus contains over 56K sentences. $^{2}$ + +# 2 Description of the Corpus + +In Table 1 we list the 21 homographs addressed in our challenge set. 
For each case, we specify the frequency of each analysis in naturally-occurring Hebrew text, and the ratio between them. $^3$ The 21 homographs include a wide range of homograph types. Some are cases of different POS types: Adj vs. Prep (13), Noun vs. Verb (15, 18), Pronoun vs. Prep (2,4), Noun vs. Prep (9), etc. Other cases differ in terms of whether the final letter should be segmented as a suffix (10, 13, 20). In some instances, the morphology is the same, but the difference lies in the stem/lexeme (5, 7, 8, 11). + +In choosing our 21 homographs, we first assembled a list of the most frequent homographs in the + +Hebrew language. For the simplicity of this initial proof of concept, we constrained our list to homographs with only two primary analyses. We also constrained our list to cases where the two analyses represent different lexemes, skipping over cases in which the difference is only one of inflection. Further, some cases were filtered out due to data sparsity. Finally, we also included a number of less frequent homographs, to allow for a comparison between frequent and infrequent homographs. + +In order to gather sentences for the contrast sets, we first sampled 5000 sentences for each target word, and sent them to student taggers. For balanced homographs, with ratios of 1:3 or less, this process handily provided a sufficiently large number of sentences for each of the two analyses. However, regarding cases of unbalanced homographs, wherein the naturally occurring ratio of the minority analysis can be 30:1 or even 129:1, this initial corpus was far from adequate. We used two methods to identify additional candidate sentences: (1) We ran texts through an automated Hebrew diacritizer (Shmidman et al., 2020) and took the cases where the word was diacritized as the minority analysis. (2) Where relevant, we leveraged optional Hebrew orthographic variations which indicate that a given word is intended in one specific way. 
These candidate sentences were then sent to student taggers to confirm that the minority analysis was in fact intended. Our student taggers tagged approximately 300 sentences per hour. Evaluation of their work revealed that they averaged an accuracy of 98 percent. In order to overcome this margin of error, we employed a Hebrew-language expert who proofread the resulting contrast sets. In our final corpus, each analysis of each homograph is attested in at least 400 sentences, and usually in 800-2.5K sentences (full details in Appendix Table 1).

One issue we encountered when collecting naturally-occurring Hebrew sentences is that a small number of specific word-neighbors and collocations tend to dominate the examples. As an example: the word אפשר can be vocalized as "possible" (the majority case) or as "he allowed". However, over one third of the naturally occurring cases of the majority reading boil down to some 90 frequently-occurring collocations, such as "impossible" (אי אפשר) or "is it possible?". As such, a machine-learning model would overfit to those specific collocations, rather than learning more generic overarching patterns of
| # | Option 1 (gloss) | Morphology | Count/1M | Option 2 (gloss) | Morphology | Count/1M | Ratio |
|---|---|---|---|---|---|---|---|
| 1 | [accusative] | ACC | 18164 | you | Pronoun [F,S,2] | 275 | 66:1 |
| 2 | you | Pronoun [M,S,2] | 1430 | with her | Prep+Suf.Pron [F,S,3] | 26 | 55:1 |
| 3 | you | ACC+Suf.Pron [M,P,2] | 70 | with you | Prep+Suf.Pron [M,P,2] | 7 | 10:1 |
| 4 | you | Pronoun [M,P,2] | 324 | with them | Prep+Suf.Pron [M,P,3] | 34 | 10:1 |
| 5 | blessing | Noun [cons,F,S] | 25 | pool | Noun [cons,F,S] | 0.8 | 30:1 |
| 6 | indeed | Conj / Intj | 418 | mountains | Noun [cons,M,P] | 12 | 33:1 |
| 7 | he will say | Verb [M,S,3,FUTURE] | 18 | will be said | Verb [M,S,3,FUTURE] | 0.4 | 43:1 |
| 8 | tractate | Noun [abs/cons,F,S] | 54 | mask | Noun [cons,F,S] | 1 | 43:1 |
| 9 | with | Preposition | 4240 | nation | Noun [abs/cons,M,S] | 286 | 14:1 |
| 10 | her face | Noun [M,P,suf=F,S,3] | 55 | application | Noun [F,S] | 2 | 33:1 |
| 11 | they left | Verb [MF,P,3,PAST] | 6 | they interpreted | Verb [MF,P,3,PAST] | 0.4 | 15:1 |
| 12 | third | Cardinal [F,S] | 107 | trio | Noun [cons,F,S] | 0.8 | 129:1 |
| 13 | different | Adj [M,S] | 474 | after | Preposition | 387 | 1:1 |
| 14 | her sons | Noun [M,P,suf=F,S,3] | 8 | building | Noun [F,S] | 5 | 1.5:1 |
| 15 | returning | Noun [F,S] | 62 | she returned | Verb [F,S,3,PAST] | 55 | 1:1 |
| 16 | he knew | Verb [M,S,3,PAST] | 88 | knowledge | Noun [abs/cons,M,S] | 55 | 1.5:1 |
| 17 | as minister | Prep+Noun [abs/cons,M,S] | 35 | kosher | Adj [M,S] / Propn [MF,S] | 14 | 2.5:1 |
| 18 | he wrote | Verb [M,S,3,PAST] | 252 | writing | Noun [cons,M,S] | 103 | 2.5:1 |
| 19 | understands | Participle [M,S] | 174 | from amongst | Preposition | 98 | 2:1 |
| 20 | her books | Noun [M,P,suf=F,S,3] | 13 | library | Noun [F,S] | 4 | 2.5:1 |
| 21 | our nation | Noun [M,S,suf=MF,P,1] | 23 | with us | Prep+Suf.Pron [MF,P,1] | 12 | 2:1 |
+ +the word usage. Therefore, we constrained our data collection such that there may be no more than 20 cases of any given word-neighbor combination. $^4$ + +# 3 Experiments + +We first use our challenge set to evaluate current state-of-the-art performance on the morphological disambiguation of Hebrew homographs. The best existing tool for Hebrew morphological disambiguation is YAP: Yet Another Parser (Tsarfaty et al., 2019). We run all $56,000+$ sentences from our challenge set through YAP. Due to the unbalanced natural distribution of the possible analyses in many of the cases, we compute recall and precision results separately for each analysis, and we then compute a macro-averaged F1 score. + +Next, we use our challenge set to train classifiers for each of the homographs in our corpus. We implement 2-layer MLPs using the DyNet framework (Neubig et al., 2017). As input, we feed the MLP an encoding $h(w_{i})$ , a representation of the context of the target word within the sentence. The target word itself is masked and not included in the input. The output of the MLP is a probabilistic choice of either Class 1 or Class 2, where each class represents one of the two possible diacritization options. + +We applied two methods to represent the surrounding context in the MLP input. The first is encoding the three neighboring words on both sides + +Table 1: The homographs covered in our challenge set. Words 1-12 are unbalanced homographs, in which the ratio between the two analyses is particularly skewed. These cases pose a particularly difficult disambiguation challenge because they are severely underrepresented in existing tagged Hebrew corpora. + +
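The scoring protocol described above — precision and recall computed separately for each analysis, with the per-analysis F1 scores then macro-averaged — reduces to a few lines. This is a generic sketch of the metric, not the authors' evaluation code:

```python
def f1(prec, rec):
    """Harmonic mean of precision and recall; defined as 0 when both are 0."""
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def macro_f1(per_class):
    """per_class: list of (precision, recall) pairs, one per analysis.
    Macro-averaging weights every analysis equally, so a rare minority
    analysis influences the score as much as the majority one does."""
    return sum(f1(p, r) for p, r in per_class) / len(per_class)
```

For instance, plugging in the two per-analysis precision/recall pairs reported for homograph 1 in Table 2 (85.61/99.24 and 100.00/12.37) reproduces its Avg-F1 of roughly .570.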
| # | Prec (Opt. 1) | Recall (Opt. 1) | Prec (Opt. 2) | Recall (Opt. 2) | Avg-F1 |
|---|---|---|---|---|---|
| 1 | 85.61 | 99.24 | 100.00 | 12.37 | .570 |
| 2 | 53.55 | 96.42 | 95.04 | 21.48 | .519 |
| 3 | 69.30 | 97.26 | 71.88 | 13.71 | .520 |
| 4 | 37.87 | 99.87 | 75.00 | .24 | .277 |
| 5 | – | .00 | 58.31 | 93.20 | – |
| 6 | 92.53 | 97.10 | 88.82 | 63.04 | .843 |
| 7 | – | .00 | 52.19 | 100.00 | – |
| 8 | 86.93 | 24.84 | 41.51 | 89.86 | .477 |
| 9 | 87.73 | 99.20 | 91.59 | 36.03 | .724 |
| 10 | 28.36 | 33.98 | 82.90 | 78.85 | .559 |
| 11 | 71.93 | 90.82 | – | .00 | – |
| 12 | 75.12 | 90.60 | 93.38 | 65.13 | .794 |
| 13 | 95.73 | 88.84 | 82.79 | 90.66 | .894 |
| 14 | 45.22 | 27.29 | 84.67 | 85.51 | .596 |
| 15 | 81.03 | 66.49 | 76.84 | 87.64 | .775 |
| 16 | 85.09 | 63.50 | 95.76 | 89.63 | .827 |
| 17 | 94.79 | 63.13 | 75.11 | 66.45 | .732 |
| 18 | 97.63 | 78.17 | 72.61 | 90.86 | .838 |
| 19 | 77.03 | 86.32 | 94.84 | 90.48 | .870 |
| 20 | 87.93 | 14.98 | 75.25 | 99.15 | .556 |
| 21 | 83.76 | 38.89 | 76.65 | 96.38 | .693 |
Table 2: Results running our entire challenge set through YAP, the SOTA Hebrew morphological tagger. YAP performs far better on the balanced cases (13-21) than on the unbalanced cases (1-12). It is also worth noting that YAP's poor performance on unbalanced homographs is not tied to the overall frequency of the word; the particularly frequent words (1,2,4,6,9) demonstrate similar scores to those of the relatively infrequent words (8,10,12). In three cases (5,7,11), where the difference is only the lexeme/stem, YAP always chooses one option; hence the – scores.

of the target word; see Equation 1. The second is
| # | Word2vec Concat | Word2vec LSTM | Morphology Concat | Morphology LSTM | Composite Concat | Composite LSTM |
|---|---|---|---|---|---|---|
| 1 | .955 | .953 | .946 | .940 | .969 | .958 |
| 2 | .945 | .963 | .909 | .934 | .958 | .967 |
| 3 | .915 | .919 | .814 | .831 | .922 | .940 |
| 4 | .941 | .953 | .924 | .933 | .944 | .959 |
| 5 | .951 | .968 | .733 | .805 | .936 | .965 |
| 6 | .960 | .966 | .923 | .931 | .974 | .969 |
| 7 | .859 | .893 | .805 | .851 | .878 | .885 |
| 8 | .950 | .972 | .849 | .869 | .954 | .966 |
| 9 | .894 | .917 | .838 | .850 | .891 | .911 |
| 10 | .930 | .942 | .870 | .893 | .943 | .946 |
| 11 | .935 | .957 | .881 | .916 | .948 | .963 |
| 12 | .953 | .969 | .899 | .922 | .955 | .966 |
| 13 | .965 | .976 | .939 | .935 | .969 | .976 |
| 14 | .952 | .965 | .855 | .883 | .947 | .964 |
| 15 | .925 | .951 | .861 | .893 | .935 | .949 |
| 16 | .957 | .955 | .910 | .907 | .963 | .966 |
| 17 | .953 | .974 | .889 | .912 | .964 | .971 |
| 18 | .976 | .982 | .910 | .924 | .972 | .983 |
| 19 | .976 | .975 | .966 | .970 | .976 | .980 |
| 20 | .930 | .945 | .856 | .875 | .938 | .949 |
| 21 | .920 | .915 | .888 | .872 | .923 | .926 |
Table 3: Accuracy of our specialized classifiers for the 21 homographs in our challenge set. We evaluate three methods for encoding the context words, and we run each method two ways: (1) "Concat": concatenate encodings of 3 neighboring words on each side; (2) "LSTM": run complete sentence context through a BiLSTM. We show F1 scores for each, macro-averaged across the two classes. See Appendix Tables 4-5 for a breakdown of recall/precision scores for each analysis.

encoding the whole sentence around the word using a 2-layer biLSTM (Hochreiter and Schmidhuber, 1997), Equation 2.

(1) $h(w_{i}) = w_{i - 3}\cdot w_{i - 2}\cdot w_{i - 1}\cdot w_{i + 1}\cdot w_{i + 2}\cdot w_{i + 3}$
(2) $h(w_{i}) = LSTM(w_{0:i})\cdot LSTM(w_{n:i})$

We explore three alternate methods of encoding the vector $w_{i}$ . Our initial approach uses pre-trained word2vec embeddings for the neighboring words.

Our second approach uses morphological information about the context words. Of course, we don't have any a priori knowledge regarding the morphological tagging of the neighboring words; and indeed, in a large percentage of the cases, the morphology of the neighboring words can be resolved in multiple ways. Thus, we construct a lattice of all possible analyses of the context words.

short contexts was demonstrated by Fraenkel et al. (1979); Choueka and Lusignan (1985). Regarding short-context disambiguation methods in general, see Hearst (1991); Yarowsky (1994).

$^{6}$ We use word2vecf (Levy and Goldberg, 2014) to build syntax-sensitive word embeddings, based on a corpus of 400M words of Hebrew text. To be sure, BERT might seem the more obvious choice rather than word2vec. However, BERT has been shown to be somewhat ineffective for morphologically rich languages such as Hebrew (Tsarfaty et al., 2020). BERT-based models underperform YAP and perform at the same level as BiLSTM-based models, and BERT fails to capture internal morphological complexity (Klein and Tsarfaty, 2020).
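The windowed encoding of Equation 1 feeding a 2-layer MLP can be sketched as follows. This is a minimal numpy illustration of the data flow only: the paper's classifiers use DyNet and trained weights, whereas the dimensions, random weights, and token names below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 8, 16  # embedding and hidden sizes (illustrative, not the paper's)
tokens = "w0 w1 w2 w3 TARGET w5 w6 w7".split()
vocab = {w: rng.normal(size=D) for w in tokens}  # stand-in word embeddings

def encode_context(words, i, k=3):
    """Equation 1: concatenate embeddings of the k neighbors on each side
    of the target word w_i; the target word itself is masked out."""
    ctx = words[i - k:i] + words[i + 1:i + k + 1]
    return np.concatenate([vocab[w] for w in ctx])  # shape (2k * D,)

# 2-layer MLP ending in a 2-way softmax over the two diacritization options.
W1 = rng.normal(size=(H, 6 * D)); b1 = np.zeros(H)
W2 = rng.normal(size=(2, H));     b2 = np.zeros(2)

def classify(h):
    z = np.tanh(W1 @ h + b1)
    logits = W2 @ z + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()  # (P(class 1), P(class 2))

probs = classify(encode_context(tokens, i=4))
```

The biLSTM variant of Equation 2 simply replaces `encode_context` with the concatenated final states of a forward and a backward LSTM over the whole sentence, again with the target word masked.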
| # (Unbalanced) | YAP | Ours | # (Balanced) | YAP | Ours |
|---|---|---|---|---|---|
| 1 | .570 | .969 | 13 | .894 | .969 |
| 2 | .519 | .958 | 14 | .596 | .947 |
| 3 | .520 | .922 | 15 | .775 | .935 |
| 4 | .277 | .944 | 16 | .827 | .963 |
| 5 | – | .936 | 17 | .732 | .964 |
| 6 | .843 | .974 | 18 | .838 | .972 |
| 7 | – | .878 | 19 | .870 | .976 |
| 8 | .477 | .954 | 20 | .556 | .938 |
| 9 | .724 | .891 | 21 | .693 | .923 |
| 10 | .559 | .943 | | | |
| 11 | – | .948 | | | |
| 12 | .794 | .955 | | | |
+ +Table 4: Comparison of the SOTA morphological disambiguation of Hebrew homographs (YAP) to our specialized classifiers (Avg F1). See Appendix Table 3 for a full precision/recall breakdown of this comparison. + +For every context word $w_{i}$ , we construct a vector for each possible part-of-speech $pos_{j}$ containing a trainable embedding for each possible morphological feature. The vector thus encodes: part-of-speech, gender, number, person, status, binyan, suffix, suf_gender, suf_person, suf_number, prefix. If a feature is not applicable to $w_{i}$ , we simply assign an NA embedding. We concatenate each vector $w_{pos_{j}}^{i}$ into a single vector representing $w_{i}$ . + +Finally, we explore a third composite method in which we concatenate the encodings from the two previous methods to the encoding for $w_{i}$ . + +We run each contrast set using each of our three methods for encoding the neighboring words. We evaluate the results using 10-fold cross validation. + +# 4 Results and Analysis + +In Table 2, we display the results of our baseline experiment, where we evaluate current SOTA (YAP) performance on our challenge set. These results empirically demonstrate how much more difficult it is for YAP to resolve the cases of unbalanced homographs. The unbalanced cases are shown in the top half of the table (1-12). YAP's F1 score is below .8 for all but one of the cases, and it is below .6 for 9 out of the 12 cases. In the two cases of Pronoun vs. Suffix Preposition (2,4), YAP performs particularly poorly, scoring .4 and .1. In contrast, the bottom half of the table (13-21) details nine cases of balanced homographs. As expected, + +YAP does considerably better here: all F1 scores are above .5, and four of the cases are above .8. The weakest cases are those in which YAP has to differentiate between an unsegmented noun and a case of a noun plus possessive suffix (cases 14,20). 
In both of these cases, YAP scores an F1 of approximately .56 (which, interestingly, is precisely on par with the analogous unbalanced case [10]). + +In Table 3, we display results regarding our specialized classifiers. In most cases, using a biLSTM over the entire sentence context performs better than a concatenation of the three neighbor words on each side. In terms of the encoding method for the context words, word2vec performs better than the morphological lattice. This may be because word2vec can better represent the regularly expected usage of the neighboring words, while the morphology lattice represents all possible analyses with equal likelihood. A second possibility is that the contrast sets were not sufficiently large to optimally train the embeddings of the morphological characteristics, whereas word2vec embeddings have the benefit of pretraining on over 100M words. The combination of the latter two methods overall outperforms each one of them individually; thus, although word2vec succeeds in encoding most of what is needed to differentiate between the options, the information provided by the morph lattice sometimes helps to make the correct call. + +In Table 4, we compare the results of our composite-method with those of YAP. Our specialized classifiers set a new SOTA for all the cases. + +# 5 Related Work + +Many recent papers have proposed global or unsupervised methods for homograph disambiguation in English (e.g. Liu et al. (2018); Wilks and Stevenson (1997); Chen et al. (2009)). While such methods have obvious advantages, they have limited applicability to Hebrew. As noted, in Hebrew the majority of the words are ambiguous, including the core building blocks of the language; without these anchors, global approaches tend to result in poor performance regarding unbalanced homographs. 
The problem of Hebrew diacritization is analogous to that of Arabic diacritization; Arabic, like Hebrew, is a morphologically-rich language written without diacritics, resulting in high ambiguity. Many recent studies have proposed machine-learning approaches for the prediction of Arabic diacritics across a given text (e.g. Bebah et al. (2014); Belinkov and Glass (2015); Neme and Paumier (2019); Fadel et al. (2019a,b); Darwish et al. (2020)). However, these studies all perform evaluations on standard Arabic textual datasets, and do not evaluate accuracy on minority options of unbalanced homographs. We believe that these models would likely benefit from specialized challenge sets of the sort presented here to overcome the specific hurdle of unbalanced homographs.

# 6 Conclusion

Due to high morphological ambiguity, as well as the lack of diacritics, Semitic languages pose a particularly difficult disambiguation task, especially when it comes to unbalanced homographs. For such cases, specialized contrast sets are needed, both to evaluate the performance of existing tools and to train effective classifiers. In this paper, we construct a new challenge set for Hebrew disambiguation, offering comprehensive contrast sets for 21 frequent Hebrew homographs. These contrast sets empirically demonstrate the limitations of reported SOTA results when it comes to unbalanced homographs; a model may report a SOTA for a benchmark, yet fail miserably on real-world rare-but-important cases. Our new corpus will allow Hebrew NLP researchers to test their models in an entirely new fashion, evaluating the ability of the models to predict minority-homograph analyses, as opposed to existing Hebrew benchmarks, which tend to represent the language in terms of its majority usage. Furthermore, our corpus will allow researchers to train their own classifiers and leverage them within a pipeline architecture.
We envision the classifiers positioned at the beginning of the pipeline, disambiguating frequent forms from the get-go, and yielding improvement down the line, ultimately improving results for downstream tasks (e.g. NMT). Indeed, as we have demonstrated, neural classifiers trained on our contrast sets handily achieve a new SOTA for all of the homographs in the corpus. + +# Acknowledgements + +The work of the last author has been supported by an ERC-StG grant #677352 and an ISF grant #1739/26. We acknowledge the substantial help of our programmers, Yehuda Broderick and Cheyn Shmuel Shmidman. + +# References + +Meni Adler and Michael Elhadad. 2006. An unsupervised morpheme-based HMM for Hebrew morphological disambiguation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 665-672, Sydney, Australia. Association for Computational Linguistics. +Srinivas Bangalore and Aravind K. Joshi. 1999. Supertagging: An approach to almost parsing. Computational Linguistics, 25(2):237-265. +Mohamed Bebah, Amine Chennoufi, Azzeddine Mazroui, and Abdelhak Lakhouaja. 2014. Hybrid approaches for automatic vowelization of arabic texts. CoRR, abs/1410.2646. +Yonatan Belinkov and James R. Glass. 2015. Arabic diacritization with recurrent neural networks. In EMNLP. +Steven Bird, Robert Dale, Bonnie Dorr, Bryan Gibson, Mark Joseph, Min-Yen Kan, Dongwon Lee, Brett Powley, Dragomir Radev, and Yee Tan. 2008. The acl anthology reference corpus: A reference dataset for bibliographic research in computational linguistics. +Ping Chen, Wei Ding, Chris Bowes, and David Brown. 2009. A fully unsupervised word sense disambiguation method using dependency knowledge. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 28-36, Boulder, Colorado. Association for Computational Linguistics. 
Yaacov Choueka and Serge Lusignan. 1985. Disambiguation by short contexts. Computers and the Humanities, 19:147-157.
Paula Czarnowska, Sebastian Ruder, Edouard Grave, Ryan Cotterell, and Ann Copestake. 2019. Don't forget the long tail! a comprehensive analysis of morphological generalization in bilingual lexicon induction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 974-983, Hong Kong, China. Association for Computational Linguistics.
Kareem Darwish, Ahmed Abdelali, Hamdy Mubarak, and Mohamed Eldesouki. 2020. Arabic diacritic recovery using a feature-rich bilstm model.
Ali Elkahky, Kellie Webster, Daniel Andor, and Emily Pitler. 2018. A challenge set and methods for noun-verb ambiguity. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2562-2572, Brussels, Belgium. Association for Computational Linguistics.

Ray Fabri, Michael Gasser, Nizar Habash, George Kiraz, and Shuly Wintner. 2014. Linguistic introduction: The orthography, morphology and syntax of semitic languages. In Imed Zitouni, editor, Natural Language Processing of Semitic Languages, Theory and Applications of Natural Language Processing, pages 3-41. Springer.
Ali Fadel, Ibraheem Tuffaha, Bara' Al-Jawarneh, and Mahmoud Al-Ayyoub. 2019a. Arabic text diacritization using deep neural networks. CoRR, abs/1905.01965.
Ali Fadel, Ibraheem Tuffaha, Bara' Al-Jawarneh, and Mahmoud Al-Ayyoub. 2019b. Neural Arabic text diacritization: State of the art results and a novel approach for machine translation. Proceedings of the 6th Workshop on Asian Translation.
Aviezri S. Fraenkel, David Raab, and Eliezer Spitz. 1979. Semi-automatic construction of semantic concordances. Computers and the Humanities, 13:283-288.
+Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou. 2020. Evaluating nlp models via contrast sets. +Nizar Habash, Owen Rambow, and Ryan Roth. 2009. Mada+token: A toolkit for arabic tokenization, diacritization, morphological disambiguation, pos tagging, stemming and lemmatization. Proceedings of the 2nd International Conference on Arabic Language Resources and Tools (MEDAR). +Marti A. Hearst. 1991. Noun homograph disambiguation using local context in large text corpora. In University of Waterloo, pages 1-22. +Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735-1780. +Alon Itai and Shuly Wintner. 2008. Language resources for hebrew. *Language Resources and Evaluation*, 42:75-98. +Stav Klein and Reut Tsarfaty. 2020. Getting the ##life out of living: How adequate are word-pieces for modelling complex morphology? In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 204-209, Online. Association for Computational Linguistics. +Omer Levy and Yoav Goldberg. 2014. Dependency-based word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 302-308, Baltimore, Maryland. Association for Computational Linguistics. + +Frederick Liu, Han Lu, and Graham Neubig. 2018. Handling homographs in neural machine translation. +Alexis Amid Neme and Sébastien Paumier. 2019. Restoring arabic vowels through omission-tolerant dictionary lookup. Language Resources and Evaluation, 54(2):487-551. 
Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980.
Avi Shmidman, Shaltiel Shmidman, Moshe Koppel, and Yoav Goldberg. 2020. Nakdan: Professional Hebrew diacritizer. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 197-203, Online. Association for Computational Linguistics.
Reut Tsarfaty, Dan Bareket, Stav Klein, and Amit Seker. 2020. From SPMRL to NMRL: What did we learn (and unlearn) in a decade of parsing morphologically-rich languages (MRLs)? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7396-7408, Online. Association for Computational Linguistics.
Reut Tsarfaty, Shoval Sadde, Stav Klein, and Amit Seker. 2019. What's wrong with Hebrew NLP? and how to make it right. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 259-264, Hong Kong, China. Association for Computational Linguistics.
Yorick Wilks and Mark Stevenson. 1997. The grammar of sense: Using part-of-speech tags as a first step in semantic disambiguation.
David Yarowsky. 1994. Decision lists for lexical ambiguity resolution: Application to accent restoration in Spanish and French. In 32nd Annual Meeting of the Association for Computational Linguistics, pages 88-95, Las Cruces, New Mexico, USA. Association for Computational Linguistics.
George K. Zipf. 1949. Human Behaviour and the Principle of Least Effort. Addison-Wesley.

# Appendix

| # | Option 1: Word (Translation) | Option 1: Morphology | # sentences | Option 2: Word (Translation) | Option 2: Morphology | # sentences |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | (accusative) | ACC | 2,402 | (you) | Pronoun [F,S,2] | 443 |
| 2 | (you) | Pronoun [M,S,2] | 2,198 | (with her) | Prep+Suf_Pron [F,S,3] | 2,450 |
| 3 | (you) | ACC+Suf_Pron [M,P,2] | 1,630 | (with you) | Prep+Suf_Pron [M,P,2] | 816 |
| 4 | (you) | Pronoun [M,P,2] | 1,474 | (with them) | Prep+Suf_Pron [M,P,3] | 2,064 |
| 5 | (blessing) | Noun [cons,F,S] | 1,027 | (pool) | Noun [cons,F,S] | 1,384 |
| 6 | (indeed) | Conj / Intj | 1,939 | (mountains) | Noun [cons,M,P] | 419 |
| 7 | (he will say) | Verb [M,S,3,FUTURE] | 838 | (will be said) | Verb [M,S,3,FUTURE] | 922 |
| 8 | (tractate) | Noun [abs/cons,F,S] | 975 | (mask) | Noun [cons,F,S] | 562 |
| 9 | (with) | Preposition | 2,416 | (nation) | Noun [abs/cons,M,S] | 510 |
| 10 | (her face) | Noun [F,M,P,suf=F,S,3] | 607 | (application) | Noun [F,S] | 2,435 |
| 11 | (they left) | Verb [MF,P,3,PAST] | 1,321 | (they interpreted) | Verb [MF,P,3,PAST] | 482 |
| 12 | (third) | Cardinal [F,S] | 1,199 | (trio) | Noun [cons,F,S] | 1,285 |
| 13 | (different) | Adj [M,S] | 2,422 | (after) | Preposition | 1,215 |
| 14 | (her sons) | Noun [M,P,suf=F,S,3] | 578 | (building) | Noun [F,S] | 2,448 |
| 15 | (return) | Noun [F,S] | 960 | (she returned) | Verb [F,S,3,PAST] | 1,212 |
| 16 | (he knew) | Verb [M,S,3,PAST] | 651 | (knowledge) | Noun [abs/cons,M,S] | 1,538 |
| 17 | (as minister) | Prep+Noun [abs/cons,M,S] | 959 | (kosher) | Adj [M,S] / Propn [MF,S] | 753 |
| 18 | (he wrote) | Verb [M,S,3,PAST] | 2,078 | (writing) | Noun [cons,M,S] | 721 |
| 19 | (understands) | Participle [M,S] | 891 | (from amongst) | Preposition | 2,473 |
| 20 | (her books) | Noun [M,P,suf=F,S,3] | 664 | (library) | Noun [F,S] | 1,715 |
| 21 | (our nation) | Noun [M,S,suf=MF,P,1] | 471 | (with us) | Prep+Suf_Pron [MF,P,1] | 1,007 |
+ +Table 1: The homographs covered in our challenge set, the possible analyses for each homograph, and the number of attestations in our challenge set of each homograph analysis. + +
(Left block: Composite Without Valence; right block: Composite With Valence. Each block gives precision and recall for Options 1 and 2, plus average F1.)

| # | Prec O1 | Rec O1 | Prec O2 | Rec O2 | Avg-F1 | Prec O1 | Rec O1 | Prec O2 | Rec O2 | Avg-F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 98.33 | 99.24 | 95.81 | 91.18 | .961 | 98.69 | 99.36 | 96.51 | 93.07 | .969 |
| 2 | 95.56 | 95.44 | 95.72 | 95.83 | .956 | 96.01 | 95.35 | 95.66 | 96.27 | .958 |
| 3 | 93.88 | 95.28 | 90.25 | 87.54 | .917 | 94.39 | 95.34 | 90.46 | 88.62 | .922 |
| 4 | 93.47 | 93.23 | 95.88 | 96.04 | .947 | 93.66 | 92.24 | 95.32 | 96.20 | .944 |
| 5 | 92.67 | 91.64 | 93.73 | 94.52 | .931 | 93.72 | 91.54 | 93.72 | 95.37 | .936 |
| 6 | 98.70 | 98.70 | 94.10 | 94.10 | .964 | 99.00 | 99.10 | 95.90 | 95.46 | .974 |
| 7 | 86.70 | 86.70 | 87.75 | 87.75 | .872 | 87.60 | 86.81 | 87.95 | 88.68 | .878 |
| 8 | 96.46 | 96.91 | 94.27 | 93.46 | .953 | 96.99 | 96.45 | 93.53 | 94.49 | .954 |
| 9 | 95.40 | 98.08 | 89.85 | 78.27 | .902 | 95.30 | 97.36 | 86.50 | 77.90 | .891 |
| 10 | 92.23 | 88.78 | 97.26 | 98.16 | .941 | 93.76 | 87.97 | 97.08 | 98.56 | .943 |
| 11 | 95.99 | 98.43 | 95.43 | 88.87 | .946 | 96.26 | 98.28 | 95.06 | 89.68 | .948 |
| 12 | 94.89 | 95.82 | 96.10 | 95.22 | .955 | 96.16 | 94.35 | 94.86 | 96.51 | .955 |
| 13 | 97.18 | 98.04 | 96.05 | 94.37 | .964 | 97.39 | 98.44 | 96.84 | 94.77 | .969 |
| 14 | 91.25 | 90.17 | 97.68 | 97.95 | .943 | 92.68 | 90.17 | 97.69 | 98.31 | .947 |
| 15 | 93.96 | 91.34 | 93.32 | 95.37 | .935 | 93.40 | 91.96 | 93.73 | 94.88 | .935 |
| 16 | 93.49 | 93.91 | 97.36 | 97.17 | .955 | 94.40 | 95.25 | 97.94 | 97.56 | .963 |
| 17 | 97.42 | 96.53 | 95.70 | 96.80 | .966 | 96.93 | 96.63 | 95.79 | 96.16 | .964 |
| 18 | 98.52 | 99.05 | 97.13 | 95.56 | .976 | 98.51 | 98.65 | 95.95 | 95.56 | .972 |
| 19 | 96.53 | 96.63 | 98.76 | 98.72 | .977 | 96.12 | 96.74 | 98.80 | 98.56 | .976 |
| 20 | 91.65 | 90.44 | 96.35 | 96.84 | .938 | 90.67 | 91.47 | 96.71 | 96.38 | .938 |
| 21 | 88.96 | 88.07 | 94.30 | 94.75 | .915 | 91.48 | 87.48 | 94.11 | 96.08 | .923 |
+ +Table 2: Quantification of the contribution of the valence "supertag". We examine results of our "Concat Composite" method, wherein we use the three neighboring words before and after the homograph, with each neighboring word represented by a concatenation of its word2vec embedding and a lattice of the morphological features of the possible analyses of the word. We indicate the change in results when adding the valence supertag to the lattice. + +
(Left block: YAP; right block: our classifier, composite BiLSTM method. Each block gives precision and recall for Options 1 and 2, plus average F1.)

| # | Prec O1 | Rec O1 | Prec O2 | Rec O2 | Avg-F1 | Prec O1 | Rec O1 | Prec O2 | Rec O2 | Avg-F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 85.61 | 99.24 | 100.00 | 12.37 | .570 | 98.29 | 99.08 | 94.96 | 90.97 | .958 |
| 2 | 53.55 | 96.42 | 95.04 | 21.48 | .519 | 95.65 | 97.61 | 97.71 | 95.83 | .967 |
| 3 | 69.30 | 97.26 | 71.88 | 13.71 | .520 | 95.51 | 96.54 | 92.90 | 90.90 | .940 |
| 4 | 37.87 | 99.87 | 75.00 | .24 | .277 | 94.11 | 95.66 | 97.33 | 96.36 | .959 |
| 5 | - | .00 | 58.31 | 93.20 | - | 96.09 | 95.91 | 96.91 | 97.05 | .965 |
| 6 | 92.53 | 97.10 | 88.82 | 63.04 | .843 | 99.00 | 98.75 | 94.39 | 95.46 | .969 |
| 7 | - | .00 | 52.19 | 100.00 | - | 86.71 | 89.74 | 90.24 | 87.33 | .885 |
| 8 | 86.93 | 24.84 | 41.51 | 89.86 | .477 | 97.48 | 97.75 | 95.85 | 95.35 | .966 |
| 9 | 87.73 | 99.20 | 91.59 | 36.03 | .724 | 96.25 | 97.64 | 88.36 | 82.50 | .911 |
| 10 | 28.36 | 33.98 | 82.90 | 78.85 | .559 | 92.79 | 89.92 | 97.53 | 98.28 | .946 |
| 11 | 71.93 | 90.82 | - | .00 | - | 97.41 | 98.65 | 96.23 | 92.91 | .963 |
| 12 | 75.12 | 90.60 | 93.38 | 65.13 | .794 | 96.86 | 96.07 | 96.39 | 97.12 | .966 |
| 13 | 95.73 | 88.84 | 82.79 | 90.66 | .894 | 97.90 | 98.96 | 97.89 | 95.80 | .976 |
| 14 | 45.22 | 27.29 | 84.67 | 85.51 | .596 | 96.12 | 92.37 | 98.21 | 99.12 | .964 |
| 15 | 81.03 | 66.49 | 76.84 | 87.64 | .775 | 95.74 | 92.68 | 94.37 | 96.75 | .949 |
| 16 | 85.09 | 63.50 | 95.76 | 89.63 | .827 | 95.38 | 95.10 | 97.88 | 98.01 | .966 |
| 17 | 94.79 | 63.13 | 75.11 | 66.45 | .732 | 98.54 | 96.32 | 95.52 | 98.21 | .971 |
| 18 | 97.63 | 78.17 | 72.61 | 90.86 | .838 | 99.23 | 99.10 | 97.32 | 97.71 | .983 |
| 19 | 77.03 | 86.32 | 94.84 | 90.48 | .870 | 96.77 | 97.50 | 99.08 | 98.80 | .980 |
| 20 | 87.93 | 14.98 | 75.25 | 99.15 | .556 | 92.15 | 93.24 | 97.39 | 96.95 | .949 |
| 21 | 83.76 | 38.89 | 76.65 | 96.38 | .693 | 90.71 | 89.26 | 94.88 | 95.61 | .926 |
+ +Table 3: Expanded results comparing the performance of our specialized classifiers with that of the state-of-the-art Hebrew morphological tagger, YAP. Our classifiers set a new SOTA for all cases, both balanced and unbalanced, although the improvement is much more substantial regarding the unbalanced cases. (In three cases [5,7,11], where the difference is only one of lexeme or verbal stem, YAP always chooses one option; hence the - scores for these cases). + +
(Blocks left to right: word2vec embeddings; morphological characteristics; composite method. Each block gives precision and recall for Options 1 and 2, plus average F1.)

| # | Prec O1 | Rec O1 | Prec O2 | Rec O2 | Avg-F1 | Prec O1 | Rec O1 | Prec O2 | Rec O2 | Avg-F1 | Prec O1 | Rec O1 | Prec O2 | Rec O2 | Avg-F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 98.29 | 98.88 | 93.93 | 90.97 | .955 | 97.93 | 98.68 | 92.78 | 89.08 | .946 | 98.69 | 99.36 | 96.51 | 93.07 | .969 |
| 2 | 93.95 | 94.67 | 94.95 | 94.27 | .945 | 90.51 | 90.75 | 91.28 | 91.06 | .909 | 96.01 | 95.35 | 95.66 | 96.27 | .958 |
| 3 | 94.22 | 94.45 | 88.81 | 88.38 | .915 | 87.21 | 88.36 | 76.01 | 74.01 | .814 | 94.39 | 95.34 | 90.46 | 88.62 | .922 |
| 4 | 93.50 | 91.78 | 95.05 | 96.12 | .941 | 91.53 | 89.48 | 93.68 | 94.96 | .924 | 93.66 | 92.24 | 95.32 | 96.20 | .944 |
| 5 | 94.26 | 94.70 | 95.98 | 95.65 | .951 | 70.77 | 67.29 | 76.17 | 79.00 | .733 | 93.72 | 91.54 | 93.72 | 95.37 | .936 |
| 6 | 98.74 | 98.35 | 92.65 | 94.33 | .960 | 96.78 | 97.80 | 89.52 | 85.26 | .923 | 99.00 | 99.10 | 95.90 | 95.46 | .974 |
| 7 | 83.95 | 87.26 | 87.82 | 84.63 | .859 | 78.46 | 81.74 | 82.51 | 79.34 | .805 | 87.60 | 86.81 | 87.95 | 88.68 | .878 |
| 8 | 97.06 | 95.79 | 92.44 | 94.66 | .950 | 90.84 | 87.18 | 78.04 | 83.82 | .849 | 96.99 | 96.45 | 93.53 | 94.49 | .954 |
| 9 | 95.13 | 97.80 | 88.37 | 76.98 | .894 | 92.61 | 97.32 | 83.89 | 64.27 | .838 | 95.30 | 97.36 | 86.50 | 77.90 | .891 |
| 10 | 90.25 | 87.32 | 96.90 | 97.68 | .930 | 81.24 | 76.75 | 94.35 | 95.63 | .870 | 93.76 | 87.97 | 97.08 | 98.56 | .943 |
| 11 | 96.76 | 96.18 | 89.84 | 91.30 | .935 | 92.39 | 95.36 | 86.25 | 78.74 | .881 | 96.26 | 98.28 | 95.06 | 89.68 | .948 |
| 12 | 94.44 | 95.90 | 96.15 | 94.77 | .953 | 90.52 | 88.37 | 89.47 | 91.43 | .899 | 96.16 | 94.35 | 94.86 | 96.51 | .955 |
| 13 | 97.56 | 97.72 | 95.47 | 95.17 | .965 | 95.02 | 96.96 | 93.72 | 89.94 | .939 | 97.39 | 98.44 | 96.84 | 94.77 | .969 |
| 14 | 93.54 | 9.85 | 97.85 | 98.51 | .952 | 82.57 | 70.68 | 93.28 | 96.47 | .855 | 92.68 | 90.17 | 97.69 | 98.31 | .947 |
| 15 | 92.98 | 90.10 | 92.38 | 94.63 | .925 | 84.88 | 83.92 | 87.43 | 88.21 | .861 | 93.40 | 91.96 | 93.73 | 94.88 | .935 |
| 16 | 94.05 | 93.91 | 97.37 | 97.43 | .957 | 87.17 | 87.82 | 94.72 | 94.41 | .910 | 94.40 | 95.25 | 97.94 | 97.56 | .963 |
| 17 | 96.67 | 94.99 | 93.86 | 95.90 | .953 | 90.94 | 89.17 | 86.75 | 88.86 | .889 | 96.93 | 96.63 | 95.79 | 96.16 | .964 |
| 18 | 98.56 | 99.10 | 97.26 | 95.69 | .976 | 94.65 | 96.57 | 89.11 | 83.71 | .910 | 98.51 | 98.65 | 95.95 | 95.56 | .972 |
| 19 | 96.73 | 96.31 | 98.64 | 98.80 | .976 | 96.42 | 93.49 | 97.63 | 98.72 | .966 | 96.12 | 96.74 | 98.80 | 98.56 | .976 |
| 20 | 89.05 | 90.88 | 96.47 | 95.71 | .930 | 80.27 | 77.79 | 91.57 | 92.66 | .856 | 90.67 | 91.47 | 96.71 | 96.38 | .938 |
| 21 | 89.58 | 88.87 | 94.67 | 95.03 | .920 | 86.07 | 83.50 | 92.18 | 93.51 | .888 | 91.48 | 87.48 | 94.11 | 96.08 | .923 |
+ +Table 4: Full breakdown of the performance of our specialized classifiers when trained with short contexts (concatenation of encodings of the three word neighbors before and after the homograph). We display results for each of our three methods of encoding context words. + +
(Blocks left to right: word2vec embeddings - BiLSTM; morphological characteristics - BiLSTM; composite method - BiLSTM. Each block gives precision and recall for Options 1 and 2, plus average F1.)

| # | Prec O1 | Rec O1 | Prec O2 | Rec O2 | Avg-F1 | Prec O1 | Rec O1 | Prec O2 | Rec O2 | Avg-F1 | Prec O1 | Rec O1 | Prec O2 | Rec O2 | Avg-F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 97.71 | 99.40 | 96.54 | 87.82 | .953 | 97.85 | 98.36 | 91.14 | 88.66 | .940 | 98.29 | 99.08 | 94.96 | 90.97 | .958 |
| 2 | 95.62 | 96.72 | 96.88 | 95.83 | .963 | 92.35 | 94.16 | 94.41 | 92.67 | .934 | 95.65 | 97.61 | 97.71 | 95.83 | .967 |
| 3 | 94.16 | 95.22 | 90.20 | 88.14 | .919 | 88.29 | 89.55 | 78.42 | 76.17 | .831 | 95.51 | 96.54 | 92.90 | 90.90 | .940 |
| 4 | 94.74 | 93.49 | 96.07 | 96.84 | .953 | 91.00 | 92.37 | 95.32 | 94.44 | .933 | 94.11 | 95.66 | 97.33 | 96.36 | .959 |
| 5 | 95.86 | 96.93 | 97.66 | 96.84 | .968 | 78.33 | 76.95 | 82.81 | 83.92 | .805 | 96.09 | 95.91 | 96.91 | 97.05 | .965 |
| 6 | 98.95 | 98.60 | 93.75 | 95.24 | .966 | 96.98 | 98.15 | 91.13 | 86.17 | .931 | 99.00 | 98.75 | 94.39 | 95.46 | .969 |
| 7 | 87.10 | 91.32 | 91.63 | 87.54 | .893 | 84.15 | 85.01 | 86.06 | 85.25 | .851 | 86.71 | 89.74 | 90.24 | 87.33 | .885 |
| 8 | 98.40 | 97.57 | 95.59 | 97.07 | .972 | 90.89 | 90.55 | 82.74 | 83.30 | .869 | 97.48 | 97.75 | 95.85 | 95.35 | .966 |
| 9 | 96.37 | 97.92 | 89.66 | 83.06 | .917 | 93.79 | 96.12 | 79.83 | 70.72 | .850 | 96.25 | 97.64 | 88.36 | 82.50 | .911 |
| 10 | 91.83 | 89.59 | 97.45 | 98.04 | .942 | 84.06 | 81.46 | 95.47 | 96.19 | .893 | 92.79 | 89.92 | 97.53 | 98.28 | .946 |
| 11 | 97.61 | 97.75 | 93.90 | 93.52 | .957 | 94.07 | 97.31 | 91.96 | 83.40 | .916 | 97.41 | 98.65 | 96.23 | 92.91 | .963 |
| 12 | 97.51 | 96.07 | 96.41 | 97.73 | .969 | 92.31 | 91.48 | 92.18 | 92.95 | .922 | 96.86 | 96.07 | 96.39 | 97.12 | .966 |
| 13 | 98.21 | 98.64 | 97.28 | 96.43 | .976 | 94.64 | 96.80 | 93.36 | 89.14 | .935 | 97.90 | 98.96 | 97.89 | 95.80 | .976 |
| 14 | 92.93 | 95.76 | 98.99 | 98.27 | .965 | 85.90 | 76.44 | 94.56 | 97.03 | .883 | 96.12 | 92.37 | 98.21 | 99.12 | .964 |
| 15 | 95.19 | 93.81 | 95.18 | 96.26 | .951 | 89.26 | 86.49 | 89.60 | 91.79 | .893 | 95.74 | 92.68 | 94.37 | 96.75 | .949 |
| 16 | 94.55 | 92.72 | 96.88 | 97.69 | .955 | 86.30 | 87.96 | 94.75 | 93.96 | .907 | 95.38 | 95.10 | 97.88 | 98.01 | .966 |
| 17 | 98.75 | 96.63 | 95.89 | 98.46 | .974 | 92.66 | 91.52 | 89.53 | 90.91 | .912 | 98.54 | 96.32 | 95.52 | 98.21 | .971 |
| 18 | 98.97 | 99.28 | 97.83 | 96.90 | .982 | 96.42 | 95.90 | 87.95 | 89.37 | .924 | 99.23 | 99.10 | 97.32 | 97.71 | .983 |
| 19 | 97.23 | 95.44 | 98.33 | 99.00 | .975 | 95.95 | 95.22 | 98.24 | 98.52 | .970 | 96.77 | 97.50 | 99.08 | 98.80 | .980 |
| 20 | 90.38 | 93.97 | 97.65 | 96.16 | .945 | 82.73 | 81.03 | 92.77 | 93.50 | .875 | 92.15 | 93.24 | 97.39 | 96.95 | .949 |
| 21 | 89.75 | 87.08 | 93.88 | 95.22 | .915 | 82.45 | 83.10 | 91.85 | 91.50 | .872 | 90.71 | 89.26 | 94.88 | 95.61 | .926 |
+ +Table 5: Full breakdown of the performance of our specialized classifiers when trained with a bi-LSTM of the full sentence context. We display results for each of our three methods of encoding context words. \ No newline at end of file diff --git a/anovelchallengesetforhebrewmorphologicaldisambiguationanddiacriticsrestoration/images.zip b/anovelchallengesetforhebrewmorphologicaldisambiguationanddiacriticsrestoration/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..79f991daad3182f71cc78bddc2895d223798451e --- /dev/null +++ b/anovelchallengesetforhebrewmorphologicaldisambiguationanddiacriticsrestoration/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:638415444be00782bd61b238e7c146dc884b0145d95a294229e0de2ca53bde92 +size 1352658 diff --git a/anovelchallengesetforhebrewmorphologicaldisambiguationanddiacriticsrestoration/layout.json b/anovelchallengesetforhebrewmorphologicaldisambiguationanddiacriticsrestoration/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..e6ee4c0409bca05bd23237455454ec5a4a8236fb --- /dev/null +++ b/anovelchallengesetforhebrewmorphologicaldisambiguationanddiacriticsrestoration/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1435ff77cbe8d0e13c2dc87428e25e119bf885a70839d6c96816e140d1a00d1a +size 245036 diff --git a/anovelworkflowforaccuratelyandefficientlycrowdsourcingpredicatesensesandargumentlabels/46b835b6-ef48-48ae-bb8e-cf040697988a_content_list.json b/anovelworkflowforaccuratelyandefficientlycrowdsourcingpredicatesensesandargumentlabels/46b835b6-ef48-48ae-bb8e-cf040697988a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..04a31e268f941d8546040c868a3ab5f3d7c8d774 --- /dev/null +++ b/anovelworkflowforaccuratelyandefficientlycrowdsourcingpredicatesensesandargumentlabels/46b835b6-ef48-48ae-bb8e-cf040697988a_content_list.json @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:f8287cba6d281138358deef3e058b0bfe7f0e0696c2812adf904dd2c2717210c +size 49260 diff --git a/anovelworkflowforaccuratelyandefficientlycrowdsourcingpredicatesensesandargumentlabels/46b835b6-ef48-48ae-bb8e-cf040697988a_model.json b/anovelworkflowforaccuratelyandefficientlycrowdsourcingpredicatesensesandargumentlabels/46b835b6-ef48-48ae-bb8e-cf040697988a_model.json new file mode 100644 index 0000000000000000000000000000000000000000..0cc7f15ae6d16563de7129c0449fa7a463eec7ad --- /dev/null +++ b/anovelworkflowforaccuratelyandefficientlycrowdsourcingpredicatesensesandargumentlabels/46b835b6-ef48-48ae-bb8e-cf040697988a_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b51a5708afc20467761e2dd7d4fdc2e6c6734ee1e7b9abf23653175cce4c539d +size 60337 diff --git a/anovelworkflowforaccuratelyandefficientlycrowdsourcingpredicatesensesandargumentlabels/46b835b6-ef48-48ae-bb8e-cf040697988a_origin.pdf b/anovelworkflowforaccuratelyandefficientlycrowdsourcingpredicatesensesandargumentlabels/46b835b6-ef48-48ae-bb8e-cf040697988a_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f0d277a5cf0b30d3754ab6e442d7ba473588474e --- /dev/null +++ b/anovelworkflowforaccuratelyandefficientlycrowdsourcingpredicatesensesandargumentlabels/46b835b6-ef48-48ae-bb8e-cf040697988a_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2e0498433dfd01280d1f5f2aabb5c18eb046d9fa6e9d2585beb66fce3d0ccd89 +size 289398 diff --git a/anovelworkflowforaccuratelyandefficientlycrowdsourcingpredicatesensesandargumentlabels/full.md b/anovelworkflowforaccuratelyandefficientlycrowdsourcingpredicatesensesandargumentlabels/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b2c5f91dd2cc1cc096e8e7632f8b7df568c28563 --- /dev/null +++ b/anovelworkflowforaccuratelyandefficientlycrowdsourcingpredicatesensesandargumentlabels/full.md @@ -0,0 +1,188 @@ +# A Novel Workflow for 
Accurately and Efficiently Crowdsourcing Predicate Senses and Argument Labels

Youxuan Jiang $^{1}$ , Huaiyu Zhu $^{2}$ , Jonathan K. Kummerfeld $^{1}$ , Yunyao Li $^{2}$ , Walter Lasecki $^{1}$

University of Michigan, Ann Arbor$^{1}$

IBM Research, Almaden$^{2}$

lyjiang@umich.edu huaiyu@us.ibm.com jkummerf@umich.edu

yunyaoli@us.ibm.com wlasecki@umich.edu

# Abstract

Resources for Semantic Role Labeling (SRL) are typically annotated by experts at great expense. Prior attempts to develop crowdsourcing methods have either had low accuracy or required substantial expert annotation. We propose a new multi-stage crowd workflow that substantially reduces expert involvement without sacrificing accuracy. In particular, we introduce a unique filter stage based on the key observation that crowd workers are able to almost perfectly filter out incorrect options for labels. Our three-stage workflow produces annotations with $95\%$ accuracy for predicate labels and $93\%$ for argument labels, which is comparable to expert agreement. Compared to prior work on crowdsourcing for SRL, we decrease expert effort by 4x, from $56\%$ to $14\%$ of cases. Our approach enables more scalable annotation of SRL, and could enable annotation of NLP tasks that have previously been considered too complex to crowdsource effectively.

# 1 Introduction

High quality data is crucial in NLP, but difficult to collect for complex tasks such as semantic role labeling (SRL). Annotating Propbank involved a team of annotators, each of whom took around three days to learn the annotation process (Palmer et al., 2005). For tasks such as sentiment analysis (Socher et al., 2013) and question answering (Rajpurkar et al., 2016), crowdsourcing has produced massive datasets that enabled the development of new, more sophisticated models.
Recent work introduced a hybrid workflow to allow crowd workers to usefully contribute to annotation of SRL (Wang et al., 2017), but still required expert annotation in a third of cases.

This paper introduces a new hybrid SRL annotation workflow with the goal of minimizing expert annotation without sacrificing annotation accuracy. In order to develop our method, we first explored why SRL annotations are hard for crowd workers. We found that workers had difficulty identifying the correct answer because the number of options for labels in SRL can be overwhelming and workers lack the linguistic expertise to handle subtle cases. However, we also observed that (1) non-expert workers are capable of reliably identifying many of the answers that are incorrect, and (2) when given the opportunity, crowd workers can accurately identify the limits of their knowledge.

Based on these observations, we developed a three-phase workflow: (1) workers filter the set of options, reducing the complexity of the task, (2) workers select an answer or say they are unsure, and (3) difficult cases that workers disagreed on or were unsure of are decided by experts. The experts choose from the complete, unfiltered set of options.

To measure the effectiveness of the approach we ran experiments at two scales. First, using 200 examples, we measured the effectiveness of each phase in the process and ran a comparison of end-to-end performance against other workflows. Second, using a larger set of 2,014 examples, we verified the end-to-end performance of our approach, showing that it achieves high accuracy while requiring experts for only $13\%$ of cases.$^{1}$

Our work shows that with careful workflow design, crowd workers can effectively contribute to annotation of complex tasks such as semantic role labeling.
The key ideas of crowd filtering and a mechanism for expressing uncertainty could be used in other NLP annotation tasks to enable the creation of larger, more sophisticated resources. + +# 2 Related Work + +A range of previous studies have explored methods of crowdsourcing SRL. Most work has focused on crowd-only workflows, with comparatively low accuracy or extensive worker training (Fossati et al., 2013; Feizabadi and Padó, 2014; Chang et al., 2015; Dumitrache et al., 2019; Hahm et al., 2020). This work guided our user interface designs and our understanding of challenges in SRL annotation. For example, we apply Dumitrache et al. (2018)'s finding that cases where workers disagree are often more subtle or ambiguous. The most relevant work, Wang et al. (2017), used a classifier to assign hard examples to experts and easy examples to crowd workers. They achieved high accuracy $(95\%)$ , but required experts for $34\%$ of cases. Their classifier is complementary to the ideas we propose. + +Another approach has used question-answering to annotate SRL (He et al., 2015; FitzGerald et al., 2018). This method is effective, but does not cover all roles and tends to have low recall. Recent work has improved recall, but overall accuracy remains low, with an F-score of 82 on CoNLL-2009 data (Roit et al., 2020). Another approach used an automatic process to expand existing datasets and then used the crowd to check paraphrases (Pavlick et al., 2015). While effective, this approach is limited to expanding lexical coverage using sentences from an existing resource. + +Word Sense Disambiguation (WSD) is related to the predicate sense labeling task we consider. Prior work has explored crowdsourcing for WSD, but has mostly been unable to achieve high performance (Hong and Baker, 2011; Rumshisky, 2011; Kapelner et al., 2012; Venhuizen et al., 2013; Jurgens, 2013). 
There has been success on combining crowdsourcing with distant supervision for relation extraction (Zhang et al., 2012; Liu et al., 2016; Abad et al., 2017). Many other semantic parsing formalisms exist, such as AMR and UCCA, but we are unaware of work on crowdsourcing for them. + +More generally, a range of approaches have been proposed to increase crowdsourcing quality, including worker filtering (Li and Liu, 2015), attention checks (Oppenheimer et al., 2009), and incentives (Venhuizen et al., 2013). These are all complementary to our proposed method. + +# 3 Proposed Workflow + +SRL can be divided into three parts: (1) identifying predicate and argument spans, (2) labeling predi + +cate senses, and (3) labeling argument roles. We consider the latter two.2 We describe each labeling decision as a task. In predicate sense classification tasks, a predicate in a sentence is given, and the goal is to identify the sense in which it is being used. In argument role classification tasks, an argument for a predicate with a known sense is given, and the goal is to identify the argument's role relative to the predicate. For example, for "John spoke .", there are five options for the sense of "speak", and between one and four options for the argument "John" depending on the sense of "speak". In this case, the correct sense is "speak.01 (speak, lecturing, talking)" for the predicate and "A0 (talker)" for the argument. + +We aim to use the crowd to annotate SRL with high accuracy. This is difficult for two reasons. First, non-expert workers lack the linguistic expertise to understand some of the more complex role labels. Second, there can be an overwhelming number of label options, with subtle differences in meaning. These issues increase the cognitive load of selection, reducing the likelihood that workers will select the true label. + +In a preliminary study, we measured the accuracy of asking five workers to choose a label. 
The crowd only outperformed a machine prediction when they were unanimous, which occurred in $1\%$ of cases. However, we also found that workers could reliably identify the top few most likely labels, and could almost perfectly identify the most unlikely labels. + +These observations led us to design a three phase workflow for predicate and role labeling: + +1. Filter: A task is given to $n$ workers. Each worker selects the least likely options, selecting at least half of them. Options selected by every worker are filtered out. All other options remain available. If there are still many options we repeat the process, gradually reducing the number of options. Tasks with exactly one option remaining are assigned that option and do not go to the other phases. +2. Select: Tasks with two or more options remaining are given to a new set of $n$ workers, who are asked to select one of these options as the correct answer. We also provide a "not + +# Step 1. Read the sentence below carefully. Pay attention to the words in red and blue. + +Al 's Little Cafe was small, dark, narrow, and filled with the mingled scent of beer, tobacco smoke, and Italian cooking. + +# Statement + +Al's Little Cafe is the agent, causer, agent following the action filled. + +Example: Outside, a young pressman filling a news box with an extra edition headlined "Herald Examiner Closes" refused to take areader's quarter. + +Al's Little Cafe is the container, destination, patient, theme following the action filled. + +Example: Outside, a young pressman filling a news box with an extra edition headlined "Herald Examiner Closes" refused to take areader's quarter. + +Figure 1: Part of the user interface for argument role identification in the Filter phase. On the left, the text "Al's Little Cafe" is blue and the word "filled" is red. On the right, the same colouring is applied, with the addition of "a young pressman" and "a news box" in blue. 
sure” option to allow workers to explicitly indicate uncertainty. Tasks that (1) achieve majority agreement on an answer and (2) do not receive a single “not sure” vote are assigned the answer and do not go to the final phase.

3. Expert: Tasks that are not resolved in the first two phases are sent to experts. The interface presents the complete set of initial options, ranked as follows: (1) the automatic system's choice, (2) the highest voted choice in the Select phase, (3) other options chosen in the Filter phase, (4) all remaining options.

This workflow addresses the two key challenges described above. First, consider the challenge that workers lack expert knowledge. The Select phase separates out difficult cases by requiring majority agreement and no uncertainty. These difficult cases are then decided by experts with the necessary knowledge. Second, consider the challenge that there can be an overwhelming number of options. The Filter phase reduces the complexity of the task, focusing attention on likely options. This assumes that our filtering process removes unlikely options without removing the correct ones, which we verify experimentally in Section 5.1.

Comparison Approaches In our experiments, we compare with three other data annotation methods. Automatic uses the output of a statistical model (Akbik and Li, 2016), with no human input. Review-Select uses a two-phase process. First, five workers review the system prediction. If any worker marks the prediction as incorrect, another set of workers choose an answer and we assign the most common choice. Review-Expert uses the same review process as the previous approach, but an expert chooses the answer rather than the crowd.

# 4 Experimental Setup

We consider experiments on two sets of data, both from the English portion of the CoNLL-2009 shared task (Hajic et al., 2009). We use one set of 200 randomly chosen tasks (drawn from the training data) to evaluate components of our approach.
We use a second set of 2,014 randomly chosen tasks to evaluate our workflow end-to-end. There are 459 predicates and 1,555 arguments, covering 300 sentences from the CoNLL test set. We did not include cases where there is only one frame for the predicate in Propbank, as there is no decision to be made. We evaluate against the expert-annotated shared task data, with edits based on errors we found in 39 cases.

We recruited crowd workers from Amazon Mechanical Turk via LegionTools (Lasecki et al., 2014; Gordon et al., 2015), and paid them US minimum wage ($7.25/hr). In all conditions, workers received two tutorial tasks with feedback before working on ten tasks. Workers were randomly and independently assigned to tasks. $n$ is five for both the Filter phase and the Select phase.

The predicate word and argument spans are automatically identified using the Akbik and Li (2016) system. We present the workers with spans by projecting the head-word, as we expected spans to be more intuitive for workers. The sense inventory and argument types are as defined in Propbank. For argument labeling, the sense of the predicate is the one produced by our workflow. If the span is incorrect, we expect workers to make a best effort to interpret it (for example, if the span is one word too long or short, they will probably still understand it correctly, especially since they see it in the context of the entire sentence). However, for evaluation, we label these cases with a special category, 'none', indicating that the span is incorrect or attached to the incorrect predicate.

To confirm the consistency of our expert annotator, we had a second expert independently perform the annotations. The Cohen's Kappa score between the two experts was 0.92 for predicates and 0.85 for arguments, near-perfect agreement (Altman, 1990).

# 4.1 Selecting the Filter Threshold

The Filter phase repeats until the number of options for a task is below a pre-defined threshold. To choose the threshold, we performed an experiment in which we simulated the Filter phase and measured the accuracy of workers in the Select phase. The test involved ten predicate and ten argument tasks. We varied the number of options in each task, always keeping the true answer. We asked five workers to select the right answer and measured the accuracy of the majority choice.

With two options they were perfect, with three options they scored 0.95, and with four they scored 0.80. This confirms our preliminary observation that workers are more accurate when there are fewer options. For the rest of the experiments, we set the filter threshold to three.

# 5 Results

# 5.1 Phase Evaluation

These experiments evaluate the components of our system on a set of 200 tasks.

Filtering effectively reduces the number of irrelevant options Table 1 shows results over multiple rounds of filtering. As the fourth column shows, after each round there are $40\%$ fewer tasks with $4+$ options. After six rounds of filtering, all tasks have three or fewer options and only $2\%$ of tasks have had the true answer removed.

| Round | Avg. # Options (All Tasks) | Cumulative Gold Lost (All Tasks) | Count (Tasks with 4+ Options) | Avg. # Options (Tasks with 4+ Options) |
| --- | --- | --- | --- | --- |
| 0 | 4.83 | 0 | 76 | 9.07 |
| 1 | 2.84 | 1 | 45 | 6.69 |
| 2 | 2.27 | 1 | 25 | 5.88 |
| 3 | 2.05 | 2 | 15 | 5.27 |
| 4 | 1.91 | 3 | 6 | 4.67 |
| 5 | 1.87 | 4 | 2 | 4.00 |
| 6 | 1.85 | 4 | 0 | - |

Table 1: Results of iterative filtering for 200 tasks. After six rounds, the gold answer has been lost in only four cases $(2\%)$, and even then it can be recovered if the task goes to the expert phase. Meanwhile, the average number of options has been dramatically reduced.
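As a rough illustration (a minimal sketch, not the authors' implementation), the iterative Filter phase can be simulated as follows; the `judge` callables stand in for workers' unlikeliness judgments, which the paper elicits through the interface in Figure 1:

```python
from collections import Counter

def filter_phase(options, judges, threshold=3, max_rounds=6):
    """Simulate the iterative Filter phase: each worker marks at least half
    of the remaining options as least likely, and an option is removed only
    when every worker marks it. Repeats until <= threshold options remain."""
    remaining = list(options)
    for _ in range(max_rounds):
        if len(remaining) <= threshold:
            break
        k = (len(remaining) + 1) // 2  # each worker marks at least half
        marks = Counter()
        for judge in judges:  # judge(option) -> unlikeliness score
            for opt in sorted(remaining, key=judge, reverse=True)[:k]:
                marks[opt] += 1
        survivors = [o for o in remaining if marks[o] < len(judges)]
        if not survivors:  # never filter out every option
            break
        remaining = survivors
    return remaining
```

With five judges that agree on the unlikeliness ordering and eight options, two rounds reduce the task to two options, after which it would proceed to the Select phase.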
| Phase | Cumulative Finished (P) | Cumulative Finished (A) | Cumulative Accuracy (P) | Cumulative Accuracy (A) | This Phase Accuracy (P) | This Phase Accuracy (A) |
| --- | --- | --- | --- | --- | --- | --- |
| Filter | 38% | 13% | 0.99 | 1.00 | 0.99 | 1.00 |
| Select | 87% | 85% | 0.94 | 0.97 | 0.90 | 0.96 |
| Expert | 100% | 100% | 0.94 | 0.97 | 0.92 | 0.97 |
+ +Table 2: Tasks finished after each phase and their accuracy for Predicates (P) and Arguments (A). + +
| Workflow | Accuracy (P) | Accuracy (A) | Experts (P) | Experts (A) | Crowd Cost |
| --- | --- | --- | --- | --- | --- |
| *200 tasks* | | | | | |
| Automatic | 0.87 | 0.89 | 0 | 0 | 0 |
| Review-Select | 0.83 | 0.82 | 0 | 0 | $39 |
| Review-Expert | 0.94 | 0.97 | 55% | 58% | $30 |
| Our Workflow | 0.94 | 0.97 | 13% | 15% | $103 |
| *2,014 tasks* | | | | | |
| Our Workflow | 0.95 | 0.93 | 12% | 12% | - |
Table 3: Comparison of workflows for annotation of predicates (P) and arguments (A). Our proposed workflow trades off expensive expert work for cheaper crowd work while maintaining high accuracy.

Even in those cases, if the next step (Select) does not produce an answer, the expert will be able to assign the true answer, since they choose from the unfiltered set of options.

Most tasks finish early in the workflow with high accuracy Table 2 shows, for each phase, how many tasks are complete after that phase and the accuracy on those tasks. Frequently, the Filter phase reduces the options down to a single correct answer. In tasks that proceed to the Select phase, we see that the number of options has been sufficiently reduced to enable high accuracy. Finally, the number of tasks that proceed to the final phase and require experts is relatively small.

# 5.2 End-to-End Comparison

This experiment aims to compare our overall approach with other options in terms of accuracy and expert workload. Table 3 shows an end-to-end comparison of output quality between several different workflows. The final row of the table shows the results of a scaled-up version of the experiment, with 2,014 tasks.

Our approach uses substantially less expert input If expert effort is fixed (e.g. the amount of time a research team has for annotation), then our approach allows 4x as much data to be annotated as Review-Expert. If the annotation budget is fixed,
| Label | Total Count | Total % | Sent to Experts Count | Sent to Experts % |
| --- | --- | --- | --- | --- |
| A1 | 611 | 39.3 | 65 | 33.5 |
| A0 | 378 | 24.3 | 34 | 17.5 |
| A2 | 121 | 7.8 | 15 | 7.7 |
| AM-TMP | 116 | 7.5 | 19 | 9.8 |
| AM-MOD | 68 | 4.4 | 10 | 5.2 |
| AM-MNR | 47 | 3.0 | 8 | 4.1 |
| none | 42 | 2.7 | 17 | 8.8 |
| AM-LOC | 39 | 2.5 | 9 | 4.6 |
| AM-NEG | 38 | 2.4 | 2 | 1.0 |
| AM-DIS | 37 | 2.4 | 7 | 3.6 |
| A3 | 19 | 1.2 | 3 | 1.5 |
| AM-PNC | 16 | 1.0 | 2 | 1.0 |
| AM-DIR | 13 | 0.8 | 2 | 1.0 |
| A4 | 10 | 0.6 | 1 | 0.5 |
+ +Table 4: The distribution of labels in the end-to-end experiment overall and for cases that go to the expert. 'none' applies to cases where the predicted argument span is incorrect or attached to the incorrect predicate. + +
| Anno \ Gold | 0 | 1 | 2 | TMP | LOC | none | other |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 369 | 16 | 2 | 1 | 1 | 5 | 1 |
| 1 | 5 | 589 | 7 | 3 | - | 12 | 2 |
| 2 | 1 | 4 | 104 | - | 2 | 4 | 2 |
| TMP | - | - | - | 104 | - | 2 | 2 |
| LOC | - | 1 | 3 | - | 34 | - | 4 |
| none | - | - | - | - | - | 14 | 1 |
| other | 3 | 1 | 5 | 2 | 2 | 5 | 232 / 10 |
Table 5: Confusion matrix of annotated and gold argument labels on the end-to-end data with our workflow. The other-other cell shows (matching / not matching).

then the balance depends on the cost of experts and the speed at which they work. Assuming even low expert pay, our approach comes out ahead, as we trade expensive expert effort for cheap crowd effort (decreasing expert effort by $4\mathrm{x}$ while increasing crowd effort by $3.4\mathrm{x}$).

Table 4 shows the distribution of argument labels overall and for cases that are decided by experts in our workflow. They generally follow the same trend, with core arguments (A0, A1, A2) dominating in both cases. One exception is the cases where the argument span is incorrect (none), which go to experts much more frequently. This is a positive result, as the expert may then be able to address the span error (though we did not consider this possibility in our experiments).

Our approach maintains high accuracy The agreement between our approach and the gold standard is comparable to expert agreement, which was $94\%$ on predicates and $95\%$ on arguments for Propbank before adjudication (Palmer et al., 2005). To further understand the errors, we compared them with errors made by the automatic system. We avoid $67\%$ of the errors the automatic system makes, but introduce errors in $1.7\%$ of the cases it gets right. Overall, this means a $62.5\%$ relative error reduction between the automatic system and our crowd workflow. Note that this is also the ideal scenario for the automatic model, as there is a close match with the training domain (also CoNLL data). Akbik and Li (2016) found precision and recall both dropped $10+$ points when evaluating systems out-of-domain. As a final test, we trained an SRL system using our annotations and found no significant shift in results, which is unsurprising, given that our annotations are almost identical to the reference.
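The shape of this calculation can be sketched as follows; this is an illustrative formula over error rates, not the paper's exact per-case counts, so the example numbers below do not reproduce the reported $62.5\%$ exactly:

```python
def relative_error_reduction(base_error_rate, frac_errors_avoided, frac_new_errors):
    """Relative reduction in error rate versus the automatic system.
    frac_errors_avoided: share of the system's errors the workflow fixes (0.67 here).
    frac_new_errors: share of system-correct cases the workflow breaks (0.017 here)."""
    new_error_rate = (base_error_rate * (1 - frac_errors_avoided)
                      + (1 - base_error_rate) * frac_new_errors)
    return 1 - new_error_rate / base_error_rate

# With a base error rate around 10%, avoiding 67% of errors while breaking
# 1.7% of correct cases gives roughly a 50-60% relative reduction.
```

The exact reported figure depends on the underlying mix of predicate and argument counts rather than on aggregate rates.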
Table 5 shows a confusion matrix comparing our annotations and the gold annotations. No particular type of confusion dominates the 109 argument errors. + +We identify errors in the gold standard CoNLL data In the process of our experiments, 35 predicate tasks and 34 argument tasks had answers with unanimous agreement that did not match the CoNLL 2009 gold standard. We sent these to three experts for re-evaluation and $51\%$ of our predicates and $62\%$ of our arguments were actually correct. This highlights the effectiveness of this method. + +# 6 Conclusion + +We propose a filtering process that can simplify complex selection tasks that arise in SRL annotation. Evaluating on 2,014 examples, we find that our workflow matches gold-standard data for $95\%$ of predicates and $93\%$ of arguments, with expert input for only $13\%$ of cases. More broadly, our approach expands the applicability of crowdsourcing, enabling the creation of larger, more complex, high quality resources. + +# Acknowledgements + +We would like to thank Laura Burdick for helpful feedback on earlier drafts of this paper and the anonymous reviewers for their helpful suggestions. This material is based in part on work supported by IBM as part of the Sapphire Project at the University of Michigan, a DARPA Young Faculty Award grant number D19AP00079, and a Bloomberg Data Science Research Grant. Any opinions, findings, conclusions or recommendations expressed above do not necessarily reflect the views of IBM. + +# References + +Azad Abad, Moin Nabi, and Alessandro Moschitti. 2017. Self-crowdsourcing training for relation extraction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 518-523. +Alan Akbik and Yunyao Li. 2016. K-SRL: Instance-based learning for semantic role labeling. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 599-608. +Douglas G Altman. 1990. 
Practical statistics for medical research. CRC press. +Nancy Chang, Praveen Paritosh, David Huynh, and Collin Baker. 2015. Scaling semantic frame annotation. In Proceedings of The 9th Linguistic Annotation Workshop, pages 1-10. +Anca Dumitrache, Lora Aroyo, and Chris Welty. 2018. Capturing ambiguity in crowdsourcing frame disambiguation. In *The sixth AAAI Conference on Human Computation and Crowdsourcing (HCOMP)*, pages 12-20. +Anca Dumitrache, Lora Aroyo, and Chris Welty. 2019. A crowdsourced frame disambiguation corpus with ambiguity. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2164-2170. +Parvin Sadat Feizabadi and Sebastian Padó. 2014. Crowdsourcing annotation of non-local semantic roles. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, volume 2: Short Papers, pages 226-230. +Nicholas FitzGerald, Julian Michael, Luheng He, and Luke Zettlemoyer. 2018. Large-scale QA-SRL parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2051-2060. +Marco Fossati, Claudio Giuliano, and Sara Tonelli. 2013. Outsourcing FrameNet to the crowd. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 742-747. +Mitchell Gordon, Jeffrey P Bigham, and Walter S Lasecki. 2015. Legiontools: a toolkit+ ui for recruiting and routing crowds to synchronous real-time tasks. In Adjunct Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology, pages 81-82. +Younggyun Hahm, Youngbin Noh, Ji Yoon Han, Tae Hwan Oh, Hyonsu Choe, Hansaem Kim, and Key-Sun Choi. 2020. Crowdsourcing in the development of a multilingual FrameNet: A case study of + +Korean FrameNet. 
In Proceedings of the 12th Language Resources and Evaluation Conference, pages 236-244. +Jan Hajic, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Antonia Martí, Lluis Marquez, Adam Meyers, Joakim Nivre, Sebastian Padó, Jan Štepánek, Pavel Stranák, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL 2009): Shared Task, pages 1-18. +Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and what's next. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 473-483. +Luheng He, Mike Lewis, and Luke Zettlemoyer. 2015. Question-answer driven semantic role labeling: Using natural language to annotate natural language. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 643-653. +Jisup Hong and Collin F. Baker. 2011. How good is the crowd at "real" WSD? In Proceedings of the 5th Linguistic Annotation Workshop, pages 30-37. +David Jurgens. 2013. Embracing ambiguity: A comparison of annotation methodologies for crowdsourcing word sense labels. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 556-562. +Adam Kapelner, Krishna Kaliannan, H. Andrew Schwartz, Lyle Ungar, and Dean Foster. 2012. New insights from coarse word sense disambiguation in the crowd. In Proceedings of COLING 2012: Posters, pages 539-548. +Walter S Lasecki, Mitchell Gordon, Danai Koutra, Malte F Jung, Steven P Dow, and Jeffrey P Bigham. 2014. Glance: Rapidly coding behavioral video with the crowd. In Proceedings of the 27th annual ACM symposium on User interface software and technology, pages 551-562. +Hongwei Li and Qiang Liu. 2015. 
Cheaper and better: Selecting good workers for crowdsourcing. In Third AAAI Conference on Human Computation and Crowdsourcing, pages 20-21. +Angli Liu, Stephen Soderland, Jonathan Bragg, Christopher H. Lin, Xiao Ling, and Daniel S. Weld. 2016. Effective crowd annotation for relation extraction. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 897-906. + +Daniel M. Oppenheimer, Tom Meyvis, and Nicolas Davidenko. 2009. Instructional manipulation checks: Detecting satisficing to increase statistical power. Journal of Experimental Social Psychology, 45(4):867 - 872. +Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71-106. +Ellie Pavlick, Travis Wolfe, Pushpendre Rastogi, Chris Callison-Burch, Mark Dredze, and Benjamin Van Durme. 2015. FrameNet+: Fast paraphrastic tripling of FrameNet. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 408-413. +Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392. +Paul Roit, Ayal Klein, Daniela Stepanov, Jonathan Mamou, Julian Michael, Gabriel Stanovsky, Luke Zettlemoyer, and Ido Dagan. 2020. Controlled crowdsourcing for high-quality QA-SRL annotation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7008-7013. +Anna Rumshisky. 2011. Crowdsourcing word sense definition. In Proceedings of the 5th Linguistic Annotation Workshop, pages 74-81. +Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. 
Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642. +Noortje J. Venhuizen, Valerio Basile, Kilian Evang, and Johan Bos. 2013. Gamification for word sense labeling. In Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) - Short Papers, pages 397-403. +Chenguang Wang, Alan Akbik, Laura Chiticariu, Yun-yao Li, Fei Xia, and Anbang Xu. 2017. CROWDIN-THE-LOOP: A hybrid approach for annotating semantic roles. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1913-1922. +Ce Zhang, Feng Niu, Christopher Ré, and Jude Shavlik. 2012. Big data versus the crowd: Looking for relationships in all the right places. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 825-834. \ No newline at end of file diff --git a/anovelworkflowforaccuratelyandefficientlycrowdsourcingpredicatesensesandargumentlabels/images.zip b/anovelworkflowforaccuratelyandefficientlycrowdsourcingpredicatesensesandargumentlabels/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..20df1394db1f2720edc126b38611cf40a8eece39 --- /dev/null +++ b/anovelworkflowforaccuratelyandefficientlycrowdsourcingpredicatesensesandargumentlabels/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d26268af03e01974bfeb622e791f9ea11f492ea4ca3df637d9d5d355af086d9a +size 124379 diff --git a/anovelworkflowforaccuratelyandefficientlycrowdsourcingpredicatesensesandargumentlabels/layout.json b/anovelworkflowforaccuratelyandefficientlycrowdsourcingpredicatesensesandargumentlabels/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..34e5de885980b6ec05624a012ac7d92089ccef08 --- /dev/null +++ 
b/anovelworkflowforaccuratelyandefficientlycrowdsourcingpredicatesensesandargumentlabels/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5127c75acee1571eeb5606ae71696a429ec50b9b4432bafe1b23ee8367d1abe6 +size 214395 diff --git a/apilotstudyoftexttosqlsemanticparsingforvietnamese/f57825fa-5093-4e87-be34-ac3bc5b65f2a_content_list.json b/apilotstudyoftexttosqlsemanticparsingforvietnamese/f57825fa-5093-4e87-be34-ac3bc5b65f2a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..0a4ec46409b35b49946c8ac202a0e694fa6f4e81 --- /dev/null +++ b/apilotstudyoftexttosqlsemanticparsingforvietnamese/f57825fa-5093-4e87-be34-ac3bc5b65f2a_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d34897a8763d7c02bcfb81dc2e34f22dade6b1b280b7d971629ed054c03f3957 +size 45123 diff --git a/apilotstudyoftexttosqlsemanticparsingforvietnamese/f57825fa-5093-4e87-be34-ac3bc5b65f2a_model.json b/apilotstudyoftexttosqlsemanticparsingforvietnamese/f57825fa-5093-4e87-be34-ac3bc5b65f2a_model.json new file mode 100644 index 0000000000000000000000000000000000000000..76ac680437ede8f09bb1fff274a501adc5fb91f6 --- /dev/null +++ b/apilotstudyoftexttosqlsemanticparsingforvietnamese/f57825fa-5093-4e87-be34-ac3bc5b65f2a_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ea1ac99506d8b56219a4ec8cfeeaa73fa8469d3199e80f5b80d920235f003525 +size 56162 diff --git a/apilotstudyoftexttosqlsemanticparsingforvietnamese/f57825fa-5093-4e87-be34-ac3bc5b65f2a_origin.pdf b/apilotstudyoftexttosqlsemanticparsingforvietnamese/f57825fa-5093-4e87-be34-ac3bc5b65f2a_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..920b4001096f38764cc56bc7d372ce55da3f270a --- /dev/null +++ b/apilotstudyoftexttosqlsemanticparsingforvietnamese/f57825fa-5093-4e87-be34-ac3bc5b65f2a_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:7552366551fa50226cbb8b59f32f9852d900180ddce60426c266d952c2f6148b +size 220504 diff --git a/apilotstudyoftexttosqlsemanticparsingforvietnamese/full.md b/apilotstudyoftexttosqlsemanticparsingforvietnamese/full.md new file mode 100644 index 0000000000000000000000000000000000000000..7a04a6ef104ba70da33e30ea298528d1e63139c6 --- /dev/null +++ b/apilotstudyoftexttosqlsemanticparsingforvietnamese/full.md @@ -0,0 +1,181 @@ +# A Pilot Study of Text-to-SQL Semantic Parsing for Vietnamese + +Anh Tuan Nguyen $^{1,*}$ , Mai Hoang Dao $^{2}$ and Dat Quoc Nguyen $^{2}$ + +$^{1}$ NVIDIA, USA; $^{2}$ VinAI Research, Vietnam + +tuananhn@nvidia.com, {v.maidh3, v.datnq9}@vinai.io + +# Abstract + +Semantic parsing is an important NLP task. However, Vietnamese is a low-resource language in this research area. In this paper, we present the first public large-scale Text-to-SQL semantic parsing dataset for Vietnamese. We extend and evaluate two strong semantic parsing baselines EditSQL (Zhang et al., 2019) and IRNet (Guo et al., 2019) on our dataset. We compare the two baselines with key configurations and find that: automatic Vietnamese word segmentation improves the parsing results of both baselines; the normalized pointwise mutual information (NPMI) score (Bouma, 2009) is useful for schema linking; latent syntactic features extracted from a neural dependency parser for Vietnamese also improve the results; and the monolingual language model PhoBERT for Vietnamese (Nguyen and Nguyen, 2020) helps produce higher performances than the recent best multilingual language model XLM-R (Conneau et al., 2020). + +# 1 Introduction + +Semantic parsing is the task of converting natural language sentences into meaning representations such as logical forms or standard SQL database queries (Mooney, 2007), which serves as an important component in many NLP systems such as Question answering and Task-oriented dialogue (Androutsopoulos et al., 1995; Moldovan et al., 2003; Guo et al., 2018). 
The significant amount of the world's knowledge stored in relational databases has led to the creation of large-scale Text-to-SQL datasets, such as WikiSQL (Zhong et al., 2017) and Spider (Yu et al., 2018), which help boost the development of various state-of-the-art sequence-to-sequence (seq2seq) semantic parsers (Bogin et al., 2019; Zhang et al., 2019; Guo et al., 2019). Compared to WikiSQL, the Spider dataset presents challenges not only in handling complex questions but also in generalizing to unseen databases during evaluation.

Most SQL semantic parsing benchmarks, such as WikiSQL and Spider, are exclusively for English. Thus the development of semantic parsers has largely been limited to the English language. As SQL is a database interface and universal semantic representation, it is worth investigating the Text-to-SQL semantic parsing task for languages other than English. In particular, differences in linguistic characteristics could add difficulties in applying seq2seq semantic parsing models to non-English languages (Min et al., 2019). For example, about $85\%$ of word types in Vietnamese are composed of at least two syllables (Thang et al., 2008). Unlike in English, in addition to marking word boundaries, white space is also used to separate the syllables that constitute words in Vietnamese written texts. For example, the 8-syllable written text "Có bao nhiêu quốc gia ở châu Âu" (How many countries in Europe) forms 5 words: "Có bao_nhiêu (how many) quốc_gia (country) ở (in) châu_Âu (Europe)". Thus it is interesting to study the influence of word segmentation in Vietnamese on its SQL parsing, i.e. syllable level vs. word level.

In terms of Vietnamese semantic parsing, previous approaches construct rule templates to convert single database-driven questions into meaning representations (Nguyen and Le, 2008; Nguyen et al., 2009, 2012; Tung et al., 2015; Nguyen et al., 2017). Recently, Vuong et al.
(2019) formulate the Text-to-SQL semantic parsing task for Vietnamese as a sequence labeling-based slot filling problem, and then solve it by using a conventional CRF model with handcrafted features, owing to the simple structure of the input questions they deal with. Note that seq2seq-based semantic parsers have not yet been explored in any previous work on Vietnamese.

Semantic parsing datasets for Vietnamese include a corpus of 5460 sentences for assigning semantic roles (Phuong et al., 2017) and a small Text-to-SQL dataset of 1258 simple structured questions over 3 databases (Vuong et al., 2019). However, these two datasets are not publicly available to the research community.

In this paper, we introduce the first public large-scale Text-to-SQL dataset for the Vietnamese semantic parsing task. In particular, we create this dataset by manually translating the Spider dataset into Vietnamese. We empirically evaluate the strong seq2seq baseline parsers EditSQL (Zhang et al., 2019) and IRNet (Guo et al., 2019) on our dataset.

Extending the baselines, we extensively investigate key configurations and find that: (1) Our human-translated dataset is far more reliable than a dataset consisting of machine-translated questions, and the overall result obtained for Vietnamese is comparable to that for English. (2) Automatic Vietnamese word segmentation improves the performances of the baselines. (3) The NPMI score (Bouma, 2009) is useful for linking a cell value mentioned in a question to a column in the database schema. (4) Latent syntactic features, extracted from a neural dependency parser pre-trained for Vietnamese (Nguyen and Verspoor, 2018), also help improve the performances. (5) The largest improvements come from the use of pre-trained language models, where PhoBERT (Nguyen and Nguyen, 2020) helps produce higher results than XLM-R (Conneau et al., 2020).
We hope that our dataset can serve as a starting point for future Vietnamese semantic parsing research and applications. We publicly release our dataset at: https://github.com/VinAIResearch/ViText2SQL.

# 2 Our Dataset

We manually translate all English questions and the database schema (i.e. table and column names as well as values in SQL queries) in Spider into Vietnamese. Note that the original Spider dataset consists of 10181 questions with their corresponding 5693 SQL queries over 200 databases. However, only 9691 questions and their corresponding 5263 SQL queries over 166 databases, which are used for training and development, are publicly available. Thus we could only translate those available ones.

The translation work is performed by 1 NLP researcher and 2 computer science students (IELTS $7.0+$). Every question and SQL query pair from the same database is first translated by one student and then cross-checked and corrected by the second student; finally, the NLP researcher verifies the original and corrected versions and makes further revisions if needed. Note that when a literal translation of a question is possible, we stick to the style of the original English question as much as possible. Otherwise, for complex questions, we rephrase them based on the semantic meaning of the corresponding SQL queries to obtain the most natural questions in Vietnamese.

Following Yu et al. (2018) and Min et al. (2019), we split our dataset into training, development and test sets such that no database overlaps between them, as detailed in Table 1. Examples of question and SQL query pairs from our dataset are presented in Table 2. Note that translated question and SQL query pairs in our dataset are written at the syllable level. To obtain a word-level version of the dataset, we apply RDRSegmenter (Nguyen et al., 2018) from VnCoreNLP (Vu et al., 2018) to perform automatic Vietnamese word segmentation.

| | #Qu. | #SQL | #DB | #T/D | #Easy | #Med. | #Hard | #ExH |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| all | 9691 | 5263 | 166 | 5.3 | 2233 | 3439 | 2095 | 1924 |
| train | 6831 | 3493 | 99 | 5.4 | 1559 | 2255 | 1502 | 1515 |
| dev | 954 | 589 | 25 | 4.2 | 249 | 405 | 191 | 109 |
| test | 1906 | 1193 | 42 | 5.7 | 425 | 779 | 402 | 300 |

Table 1: Statistics of our human-translated dataset. "#Qu.", "#SQL" and "#DB" denote the numbers of questions, SQL queries and databases, respectively. "#T/D" abbreviates the average number of tables per database. "#Easy", "#Med.", "#Hard" and "#ExH" denote the numbers of questions categorized by their SQL queries' hardness levels of "easy", "medium", "hard" and "extra hard", respectively (as defined by Yu et al.).
Original (Easy question, involving one table in one database):
What is the number of cars with more than 4 cylinders?
SELECT count(*) FROM CARS_DATA WHERE Cylinders > 4

Translated:
Cho biết số lượng những chiếc xe có nhiều hơn 4 xi lanh.
SELECT count(*) FROM [dữ liệu xe] WHERE [số lượng xi lanh] > 4

Original (Hard question, with a nested SQL query):
Which countries in europe have at least 3 car manufacturers?
SELECT T1.CountryName FROM COUNTRIES AS T1 JOIN CONTINENTS
AS T2 ON T1.Continent = T2.ContId JOIN CAR_MAKERS
AS T3 ON T1.CountryId = T3.Country
WHERE T2.Continent = "europe" GROUP BY T1.CountryName
HAVING count(*) >= 3

Translated:
Những quốc gia nào ở châu Âu có ít nhất 3 nhà sản xuất xe hơi?
SELECT T1.[tên quốc gia] FROM [quốc gia] AS T1 JOIN [lục địa]
AS T2 ON T1.[lục địa] = T2.[id lục địa] JOIN [nhà sản xuất xe hơi]
AS T3 ON T1.[id quốc gia] = T3.[quốc gia]
WHERE T2.[lục địa] = "châu Âu" GROUP BY T1.[tên quốc gia]
HAVING count(*) >= 3
Table 2: Syllable-level examples. Word segmentation outputs are not shown, for simplicity.

# 3 Baseline Models and Extensions

# 3.1 Baselines

Recent state-of-the-art results on the Spider dataset are reported for RYANSQL (Choi et al., 2020) and RAT-SQL (Wang et al., 2020), which are based on seq2seq encoder-decoder architectures. However, their implementations were not published at the time of our empirical investigation. Thus we select the seq2seq-based models EditSQL (Zhang et al., 2019) and IRNet (Guo et al., 2019), which have publicly available implementations, as our baselines; both produce near state-of-the-art scores on Spider. We briefly describe the baselines EditSQL and IRNet as follows:

- EditSQL is developed for a context-dependent Text-to-SQL parsing task, consisting of: (1) a BiLSTM-based question-table encoder to explicitly encode the question and table schema, (2) a BiLSTM-based interaction encoder with attention to incorporate the recent question history, and (3) an LSTM-based table-aware decoder with attention, taking into account the outputs of both encoders to generate a SQL query.

- IRNet first performs an n-gram matching-based schema linking to identify the columns and the tables mentioned in a question. Then it takes the question, a database schema and the schema linking results as input to synthesize a tree-structured SemQL query, an intermediate representation bridging the input question and a target SQL query. This synthesizing process is performed by using a BiLSTM-based question encoder and an attention-based schema encoder together with a grammar-based LSTM decoder (Yin and Neubig, 2017). Finally, IRNet deterministically uses the synthesized SemQL query to infer the SQL query with domain knowledge.

See Zhang et al. (2019) and Guo et al. (2019) for more details of EditSQL and IRNet, respectively.
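IRNet's first step, n-gram matching-based schema linking, can be illustrated with a toy exact-match sketch; this is a simplification of the actual matching, and the example names are hypothetical:

```python
def link_columns(question_tokens, column_names, max_n=5):
    """Toy n-gram schema linking: greedily match the longest question
    n-grams against column names (case-insensitive, exact match) and
    return (start, end, column) spans over the question."""
    lookup = {c.lower(): c for c in column_names}
    links, used = [], set()
    for n in range(max_n, 0, -1):  # longest n-grams first
        for i in range(len(question_tokens) - n + 1):
            span = range(i, i + n)
            if any(j in used for j in span):
                continue  # token already linked by a longer match
            gram = " ".join(question_tokens[i:i + n]).lower()
            if gram in lookup:
                links.append((i, i + n, lookup[gram]))
                used.update(span)
    return links
```

For the question "what is the country name" and a schema with a "country name" column, the sketch links the two-token span to that column; real schema linking also handles partial matches, table names and cell values.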
# 3.2 Our Extensions

NPMI for schema linking: IRNet essentially relies on the large-scale knowledge graph ConceptNet (Speer et al., 2017) to link a cell value mentioned in a question to a column in the database schema, based on the two ConceptNet categories 'is a type of' and 'related terms'. However, these two ConceptNet categories are not available for Vietnamese. We therefore propose a novel use of the NPMI collocation score (Bouma, 2009) for schema linking in IRNet: we rank the NPMI scores between a cell value and the column names, and match the cell value to its highest-ranked column.

Latent syntactic features: Previous works have shown that syntactic features help improve semantic parsing (Monroe and Wang, 2014; Jie and Lu, 2018). Unlike these works, which use handcrafted syntactic features extracted from dependency parse trees, and inspired by Zhang et al. (2017)'s relation extraction work, we investigate whether latent syntactic features, extracted from the BiLSTM-based dependency parser jPTDP (Nguyen and Verspoor, 2018) pre-trained for Vietnamese, help improve Vietnamese Text-to-SQL parsing. In particular, we dump latent feature representations from jPTDP's BiLSTM encoder given our word-level inputs, and use them directly as part of the input embeddings of EditSQL and IRNet.

Pre-trained language models: Zhang et al. (2019) and Guo et al. (2019) make use of BERT (Devlin et al., 2019) to improve their model performance. We thus also extend EditSQL and IRNet with the pre-trained language models XLM-R-base (Conneau et al., 2020) and PhoBERT-base (Nguyen and Nguyen, 2020) for the syllable- and word-level settings, respectively. XLM-R is the best recent multilingual model, based on RoBERTa (Liu et al., 2019), pre-trained on a 2.5TB multilingual corpus which contains 137GB of syllable-level Vietnamese texts. PhoBERT is a monolingual variant of RoBERTa for Vietnamese, pre-trained on 20GB of word-level Vietnamese texts.
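The NPMI score behind the schema-linking extension above can be sketched as follows. The formula is Bouma's (2009) normalized PMI; the tiny corpus and the windowed co-occurrence estimator below are simplifying assumptions (the paper computes counts over 20GB corpora with a window size of 20).

```python
import math
from collections import Counter

def npmi(tokens, x, y, window=20):
    """NPMI(x, y) = log(p(x,y) / (p(x)p(y))) / -log p(x,y), with
    probabilities estimated from windowed co-occurrence counts.
    Ranges from -1 (never co-occur) to +1 (always co-occur)."""
    n = len(tokens)
    counts = Counter(tokens)
    pair = 0
    for i, tok in enumerate(tokens):
        if tok == x and y in tokens[max(0, i - window): i + window + 1]:
            pair += 1
    if pair == 0:
        return -1.0
    p_x, p_y, p_xy = counts[x] / n, counts[y] / n, pair / n
    return math.log(p_xy / (p_x * p_y)) / -math.log(p_xy)

corpus = "the stadium capacity is large the stadium capacity grew".split()
# "stadium" and "capacity" always co-occur, so their NPMI is maximal
print(round(npmi(corpus, "stadium", "capacity", window=2), 2))  # 1.0
```

A cell value mentioned in a question would then be linked to the column whose name yields the highest NPMI score against it.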
+ +# 4 Experiments + +# 4.1 Experimental Setup + +We conduct experiments to study a quantitative comparison between our human-translated dataset and a machine-translated dataset,2 the influence of Vietnamese word segmentation (i.e. syllable level and word level), and the usefulness of the latent syntactic features, the pre-trained language models and the NPMI-based approach for schema linking. + +For both baselines EditSQL and IRNet which require input pre-trained embeddings for syllables + +
| Approach | Easy | Medium | Hard | ExH | SELECT | WHERE | ORDER BY | GROUP BY | KEYWORDS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| EditSQL$_{DeP}$ | 65.7 | 46.1 | 37.6 | 16.8 | 75.1 | 44.6 | 65.6 | 63.2 | 73.5 |
| EditSQL$_{XLM-R}$ | 75.1 | 56.2 | 45.3 | 22.4 | 82.7 | 60.3 | 70.7 | 67.2 | 79.8 |
| EditSQL$_{PhoBERT}$ | 75.6 | 58.0 | 47.4 | 22.7 | 83.3 | 61.8 | 72.5 | 67.9 | 80.6 |
| IRNet$_{DeP}$ | 71.8 | 51.5 | 47.4 | 18.5 | 79.3 | 48.7 | 71.8 | 63.4 | 74.3 |
| IRNet$_{XLM-R}$ | 76.2 | 57.8 | 46.8 | 23.5 | 83.5 | 59.1 | 74.4 | 68.3 | 80.5 |
| IRNet$_{PhoBERT}$ | 76.8 | 57.5 | 47.2 | 24.8 | 84.5 | 59.3 | 76.6 | 68.2 | 80.3 |
+ +Table 4: Exact matching accuracy categorized by 4 different hardness levels, and $\mathrm{F}_1$ scores of different SQL components on the test set. "ExH" abbreviates Extra Hard. + +
| | Approach | dev | test | Approach | dev | test |
| --- | --- | --- | --- | --- | --- | --- |
| Vi-Syllable | EditSQL [MT] | 21.5 | 16.8 | IRNet [MT] | 25.4 | 20.3 |
| | EditSQL | 28.6 | 24.1 | IRNet | 43.3 | 38.2 |
| | EditSQL$_{XLM-R}$ | 55.2 | 51.3 | IRNet$_{XLM-R}$ | 58.6 | 52.8 |
| Vi-Word | EditSQL [MT] | 22.8 | 17.4 | IRNet [MT] | 27.4 | 21.6 |
| | EditSQL | 33.7 | 30.2 | IRNet | 49.7 | 43.6 |
| | EditSQL$_{DeP}$ | 45.3 | 42.2 | IRNet$_{DeP}$ | 52.2 | 47.1 |
| | EditSQL$_{PhoBERT}$ | 56.7 | 52.6 | IRNet$_{PhoBERT}$ | 60.2 | 53.2 |
| En | EditSQL$_{RoBERTa}$ | 58.3 | 53.6 | IRNet$_{RoBERTa}$ | 63.8 | 55.3 |
Table 3: Exact matching accuracies of EditSQL and IRNet. "Vi-Syllable" and "Vi-Word" denote the results w.r.t. the syllable level and the word level, respectively. [MT] denotes accuracy results with the machine-translated questions. The subscript "DeP" refers to the use of the latent syntactic features. Other subscripts denote the use of the pre-trained language models. "En" denotes our results on the English Spider dataset, but under our training/development/test split w.r.t. the total of 9691 publicly available questions.

and words, we pre-train a set of 300-dimensional syllable embeddings and another set of 300-dimensional word embeddings using the Word2Vec skip-gram model (Mikolov et al., 2013) on syllable- and word-level corpora of 20GB of Vietnamese texts (Nguyen and Nguyen, 2020). In addition, we also use these 20GB syllable- and word-level Vietnamese corpora as our external datasets to compute the NPMI scores (with a window size of 20) for schema linking in IRNet.

Our hyperparameters for EditSQL and IRNet are taken from Zhang et al. (2019) and Guo et al. (2019), respectively. The pre-trained syllable and word embeddings are fixed, while the pre-trained language models XLM-R and PhoBERT are fine-tuned during training.

Following Yu et al. (2018), we use two commonly used metrics for evaluation. The first is the exact matching accuracy, which reports the percentage of input questions whose SQL output is exactly the same as the gold reference. The second is the component matching $\mathrm{F}_1$, which reports $\mathrm{F}_1$ scores for the SELECT, WHERE, ORDER BY, GROUP BY and all other keyword components.

We run 10 training epochs, evaluate the exact matching accuracy on the development set after each epoch, and then select the best model checkpoint to report the final result on the test set.

# 4.2 Main Results

Table 3 shows the overall exact matching results of EditSQL and IRNet on the development and test sets.
Clearly, IRNet does better than EditSQL, which is consistent with results obtained on the original English Spider dataset.

We find that our human-translated dataset is far more reliable than a dataset consisting of machine-translated questions. In particular, at the word level, compared to the machine-translated dataset, our dataset obtains about $30.2 - 17.4 \approx 13\%$ and $43.6 - 21.6 = 22\%$ absolute improvements in the accuracies of EditSQL and IRNet, respectively (i.e. $75\%$ to $100\%$ relative improvements). In addition, word-based Text-to-SQL parsing obtains about 5+% absolute higher accuracies than syllable-based Text-to-SQL parsing (EditSQL: $24.1\% \rightarrow 30.2\%$; IRNet: $38.2\% \rightarrow 43.6\%$), i.e. automatic Vietnamese word segmentation improves the accuracy results.

Furthermore, the latent syntactic features dumped from the pre-trained dependency parser jPTDP for Vietnamese help improve the performance of the baselines (EditSQL: $30.2\% \rightarrow 42.2\%$; IRNet: $43.6\% \rightarrow 47.1\%$). The biggest improvements, however, come from the pre-trained language models. In particular, PhoBERT helps produce higher results than XLM-R (EditSQL: $52.6\%$ vs. $51.3\%$; IRNet: $53.2\%$ vs. $52.8\%$).

We also retrain EditSQL and IRNet on the English Spider dataset with the strong pre-trained language model RoBERTa instead of BERT, but under our dataset split. We find that the overall results for Vietnamese are lower than, but comparable to, the English results. Therefore, Text-to-SQL semantic parsing for Vietnamese might not be significantly more challenging than that for English.

Table 4 shows the exact matching accuracies of EditSQL and IRNet w.r.t. different hardness levels of SQL queries and the $\mathrm{F}_1$ scores w.r.t. different SQL components on the test set.
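The two evaluation metrics reported in Tables 3 and 4 can be sketched as follows. This is a deliberately simplified toy version: the official Spider evaluation script parses and canonicalizes the queries, whereas here we assume the per-component items have already been extracted.

```python
def exact_match(gold_sql, pred_sql):
    """1 if the whitespace/case-normalized queries are identical, else 0."""
    norm = lambda q: " ".join(q.lower().split())
    return int(norm(gold_sql) == norm(pred_sql))

def component_f1(gold_items, pred_items):
    """F1 over one SQL component (e.g. the SELECT columns) as item sets."""
    gold, pred = set(gold_items), set(pred_items)
    tp = len(gold & pred)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

print(exact_match("SELECT name FROM singer", "select name  from singer"))  # 1
print(component_f1(["name", "year"], ["name"]))  # 0.666...
```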
Clearly, in most cases, the pre-trained language models PhoBERT and XLM-R help produce substantially higher results than the latent syntactic features, especially for the WHERE component.

NPMI-based schema linking: We also investigate the contribution of our NPMI-based extension for schema linking when applying IRNet to Vietnamese. Without NPMI-based schema linking, we observe a 6+% absolute decrease in the exact matching accuracies of IRNet on both the development and test sets, showing the usefulness of our NPMI-based approach.

# 4.3 Error Analysis

To understand the source of errors, we perform an error analysis on the development set, which consists of 954 questions. Using IRNet$_{PhoBERT}$, which produces the best result, we identify several causes of errors from 382/954 failed examples.

For 121/382 cases $(32\%)$, IRNet$_{PhoBERT}$ makes incorrect predictions on column names which are not mentioned or only partially mentioned in the questions. For example, given the question "Hiện tên và năm phát hành của những bài hát thuộc về ca sĩ trẻ nhất" (Show the name and the release year of the song by the youngest singer), the model produces an incorrect column name prediction of "tên" (name) instead of the correct one, "tên bài hát" (song name). Errors related to column name predictions can either be missing the entire column names or inserting random column names into the WHERE component of the predicted SQL queries.

About $12\%$ of failed examples (47/382) in fact have an equivalent implementation of their intent with a different SQL syntax. For example, the model produces a "failed" SQL output "SELECT max([sức chứa]) FROM [sân vận động]", which is equivalent to the gold SQL query "SELECT [sức chứa] FROM [sân vận động] ORDER BY [sức chứa] DESC LIMIT 1", i.e. the SQL output would be valid if we measured an execution accuracy.

About $22\%$ of failed examples (84/382) are caused by nested and complex SQL queries, which mostly belong to the Extra Hard category.
With $18\%$ of failed examples (70/382), incorrectly predicting operators is another common type of error. For example, given the phrases "già nhất" (oldest) and "trẻ nhất" (youngest) in the question, the model fails to predict the correct operators max and min, respectively. The remaining 60/382 cases $(16\%)$ involve an incorrect prediction of table names in a FROM clause.

# 5 Conclusion

In this paper, we have presented the first public large-scale dataset for Vietnamese Text-to-SQL semantic parsing. We also extensively experiment with key research configurations using two strong baseline models on our dataset and find that the input representations, the NPMI-based approach for schema linking, the latent syntactic features and the pre-trained language models all influence performance on this Vietnamese-specific task. We hope that our dataset can serve as a starting point for further research and applications in Vietnamese question answering and dialogue systems.

# References

I. Androutsopoulos, G.D. Ritchie, and P. Thanisch. 1995. Natural language interfaces to databases - an introduction. Natural Language Engineering, 1(1):29-81.
Ben Bogin, Matt Gardner, and Jonathan Berant. 2019. Global Reasoning over Database Structures for Text-to-SQL Parsing. In Proceedings of EMNLP-IJCNLP, pages 3659-3664.
G. Bouma. 2009. Normalized (pointwise) mutual information in collocation extraction. In Proceedings of the Biennial GSCL Conference, pages 31-40.
DongHyun Choi, Myeong Cheol Shin, EungGyun Kim, and Dong Ryeol Shin. 2020. RYANSQL: Recursively Applying Sketch-based Slot Fillings for Complex Text-to-SQL in Cross-Domain Databases. arXiv preprint, arXiv:2004.03125.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised Cross-lingual Representation Learning at Scale. In Proceedings of ACL, pages 8440-8451.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171-4186. + +Daya Guo, Duyu Tang, Nan Duan, Ming Zhou, and Jian Yin. 2018. Dialog-to-Action: Conversational Question Answering Over a Large-Scale Knowledge Base. In NIPS, pages 2942-2951. +Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, JianGuang Lou, Ting Liu, and Dongmei Zhang. 2019. Towards Complex Text-to-SQL in Cross-Domain Database with Intermediate Representation. In Proceedings of ACL, pages 4524-4535. +Zhanming Jie and Wei Lu. 2018. Dependency-based Hybrid Trees for Semantic Parsing. In Proceedings of EMNLP, pages 2431-2441. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint, arXiv:1907.11692. +Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. In Advances in Neural Information Processing Systems 26, pages 3111-3119. +Qingkai Min, Yuefeng Shi, and Yue Zhang. 2019. A Pilot Study for Chinese SQL Semantic Parsing. In Proceedings of EMNLP-IJCNLP, pages 3652-3658. +Dan Moldovan, Christine Clark, Sanda Harabagiu, and Steve Maiorano. 2003. COGEX: A Logic Prover for Question Answering. In Proceedings of HLT-NAACL, pages 166-172. +Will Monroe and Yushi Wang. 2014. Dependency Parsing Features for Semantic Parsing. +Raymond J. Mooney. 2007. Learning for semantic parsing. In Proceedings of CICLing, pages 311-324. +Anh Kim Nguyen and Huong Thanh Le. 2008. Natural Language Interface Construction Using Semantic Grammars. In Proceedings of PRICAI, pages 728-739. +Dai Quoc Nguyen, Dat Quoc Nguyen, and Son Bao Pham. 2009. A Vietnamese Question Answering System. In Proceedings of KSE, pages 26-32. 
+Dai Quoc Nguyen, Dat Quoc Nguyen, and Son Bao Pham. 2012. A Semantic Approach for Question Analysis. In Proceedings of IEA/AIE, pages 156-165. +Dat Quoc Nguyen and Anh Tuan Nguyen. 2020. PhoBERT: Pre-trained language models for Vietnamese. arXiv preprint, arXiv:2003.00744. +Dat Quoc Nguyen, Dai Quoc Nguyen, and Son Bao Pham. 2017. Ripple Down Rules for Question Answering. Semantic Web, 8(4):511-532. + +Dat Quoc Nguyen, Dai Quoc Nguyen, Thanh Vu, Mark Dras, and Mark Johnson. 2018. A Fast and Accurate Vietnamese Word Segmenter. In Proceedings of LREC, pages 2582-2587. +Dat Quoc Nguyen and Karin Verspoor. 2018. An improved neural network model for joint POS tagging and dependency parsing. In Proceedings of the CoNLL 2018 Shared Task, pages 81-91. +Le Hong Phuong, Pham Hoang, Pham Khoai, Nguyen Huyen, Nguyen Luong, and Nguyen Hiep. 2017. Vietnamese Semantic Role Labelling. VNU Journal of Science: Computer Science and Communication Engineering, 33(2). +Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of AAAI, pages 4444-4451. +Dinh Quang Thang, Le Hong Phuong, Nguyen Thi Minh Huyen, Nguyen Cam Tu, Mathias Rossignol, and Vu Xuan Luong. 2008. Word segmentation of Vietnamese texts: a comparison of approaches. In Proceedings of LREC, pages 1933-1936. +Vu Xuan Tung, Le Minh Nguyen, and Duc Tam Hoang. 2015. Semantic Parsing for Vietnamese Question Answering System. In Proceedings of KSE, pages 332-335. +Thanh Vu, Dat Quoc Nguyen, Dai Quoc Nguyen, Mark Dras, and Mark Johnson. 2018. VnCoreNLP: A Vietnamese Natural Language Processing Toolkit. In Proceedings of NAACL: Demonstrations, pages 56-60. +Thi-Hai-Yen Vuong, Thi-Thu-Trang Nguyen, Nhu-Thuat Tran, Le-Minh Nguyen, and Xuan-Hieu Phan. 2019. Learning to Transform Vietnamese Natural Language Queries into SQL Commands. In Proceedings of KSE. +Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2020. 
RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers. In Proceedings of ACL, pages 7567-7578. +Pengcheng Yin and Graham Neubig. 2017. A Syntactic Neural Model for General-Purpose Code Generation. In Proceedings of ACL, pages 440-450. +Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir R. Radev. 2018. Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task. In Proceedings of EMNLP, pages 3911-3921. +Meishan Zhang, Yue Zhang, and Guohong Fu. 2017. End-to-End Neural Relation Extraction with Global Optimization. In Proceedings of EMNLP, pages 1730-1740. + +Rui Zhang, Tao Yu, Heyang Er, Sungrok Shim, Eric Xue, Xi Victoria Lin, Tianze Shi, Caiming Xiong, Richard Socher, and Dragomir R. Radev. 2019. Editing-Based SQL Query Generation for Cross-Domain Context-Dependent Questions. In Proceedings of EMNLP-IJCNLP, pages 5337-5348. +Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning. arXiv preprint, arXiv:1709.00103. 
diff --git a/apointernetworkarchitectureforjointmorphologicalsegmentationandtagging/full.md b/apointernetworkarchitectureforjointmorphologicalsegmentationandtagging/full.md

# A Pointer Network Architecture for Joint Morphological Segmentation and Tagging

Amit Seker

Open University, Ra'anana, Israel

aseker00@gmail.com

Reut Tsarfaty

Bar Ilan University, Israel

reut.tsarfaty@biu.ac.il

# Abstract

Morphologically Rich Languages (MRLs) such as Arabic, Hebrew and Turkish often require Morphological Disambiguation (MD), i.e., the prediction of the correct morphological decomposition of tokens into morphemes, early in the pipeline.
Neural MD may be addressed as a simple pipeline, where segmentation is followed by sequence tagging, or as an end-to-end model, predicting morphemes from raw tokens. Both approaches are suboptimal; the former is heavily prone to error propagation, and the latter does not enjoy explicit access to the basic processing units called morphemes. This paper offers an MD architecture that combines the symbolic knowledge of morphemes with the learning capacity of neural end-to-end modeling. We propose a new, general and easy-to-implement Pointer Network model where the input is a morphological lattice and the output is a sequence of indices pointing at a single disambiguated path of morphemes. We demonstrate the efficacy of the model on segmentation and tagging, for Hebrew and Turkish texts, based on their respective Universal Dependencies (UD) treebanks. Our experiments show that with complete lattices, our model outperforms all shared-task results on segmenting and tagging these languages. On the SPMRL treebank, our model outperforms all previously reported results for Hebrew MD in realistic scenarios.

# 1 Introduction

In Morphologically Rich Languages (MRLs) (Tsarfaty et al., 2010), raw tokens are morphologically ambiguous, complex, and consist of sub-token units referred to as morphemes. Morphological Disambiguation (MD) is the task of decomposing the tokens into their constituent morphemes, to be used as the basic processing units for NLP tasks down the pipeline (Mueller et al., 2013; More and Tsarfaty, 2016). As opposed to the commonly known scenario of morphological tagging (Bohnet et al., 2013), where every input token is assigned a single morphological signature (containing its lemma, part-of-speech tag, and morphological features such as gender, number, person, tense, etc.), in the MD scenario internally-complex input tokens may consist of multiple distinct units, each of which gets assigned its own morphological signature.
Pre-neural statistical approaches for MD (Barhaim et al., 2008; Adler and Elhadad, 2006a; Lee et al., 2011; Habash et al., 2013) typically used weighted finite-state machines to unravel the possible morphological decompositions, and classic machine learning models to select the most likely decomposition. Current neural models, however, take radically different paths.

One neural approach to MD employs a pipeline, where a predicted segmentation of words into morphemes is passed on to a sequence labeling component that performs tagging of each segment in context. This segmentation-first scenario employs sequence tagging to assign a single morphological tag to each segment, similar to POS tagging in English, where each token in the input sequence is assigned a single label by the tagger. This method might be expected to work for MRLs just as well as standard NLP models do for English tagging; in actuality, however, such pipeline architectures are prone to error propagation, which undermines the accuracy of almost any task down the NLP pipeline (tagging, parsing, named entity recognition, relation extraction, etc.) (Tsarfaty et al., 2020; Klein and Tsarfaty, 2020; Bareket and Tsarfaty, 2020).

A second conceivable approach is an end-to-end sequence-to-sequence model that consumes a sequence of tokens (or characters) and produces a
| Hebrew Token | Morphological Analysis | English Translation |
| --- | --- | --- |
| bbit | b/ADP bit/NOUN | in a house |
| | b/ADP h/DET bit/NOUN | in the house |
| hlbn | h/DET lbn/NOUN | the buttermilk |
| | h/DET lbn/ADJ | the white |
| | hlbn/VERB | whitened |
Table 1: Partial list of Morphological Analyses for the Hebrew tokens bbit hlbn. Each analysis is expressed as a list of morphological properties. In this example we only list the Segment/Tag properties.

sequence of morphological signatures. Notably, the number of morphological signatures may vastly exceed the number of input tokens (e.g., with an average of 1.4 tags per word in Hebrew). The drawback of this approach is that the model has no access to morphological information in the input, and is expected to extract all morphological information directly from the raw text. Tokens in MRLs are lexically and syntactically ambiguous and carry many possible interpretations, so it is unclear whether the surface signal is in fact sufficient. This is exacerbated by the fact that some MRLs are low-resourced, and even with pre-trained word embeddings, many forms are lacking when operating on internally-complex tokens.

In this paper we propose an alternative approach, which enjoys the power of end-to-end neural modeling while maintaining access to morphemes. We frame the problem as a Morphological Analysis and Disambiguation (MA&D) task, in which every raw token in the input sequence first goes through Morphological Analysis (MA), which exposes all of its possible morphological decompositions as a lattice (see Figure 1). This morphological lattice is then passed to the MD component, based on a Pointer Network, which selects the sequence of most likely arcs in the context of the sentence being processed. Since every lattice arc contains rich information that is made available by the MA — namely, segmentation boundaries, lemma, part-of-speech tag and a set of morphological features — this MA&D framework can jointly predict rich morphological layers while avoiding the pipeline pitfall.

Based on this architecture, we design a neural model for joint segmentation and tagging and apply it to two MRLs, Hebrew and Turkish.
In realistic circumstances, the lexical coverage of the lattice may be partial, and we report MD results in both ideal and realistic scenarios. Our results on the Hebrew and Turkish UD treebanks show state-of-the-art performance for complete morphological

![](images/1de1b5de45c6fd4f1deb23636a9350a2b5ff915545fb32920b36e2969c2f3531.jpg)
Figure 1: Lattice of the Hebrew tokens 'bbit hlbn' corresponding to the example in Table 1. Edges are morphemes. Nodes are segmentation boundaries. Bold nodes are token boundaries. Every path through the lattice represents a single morphological analysis.

lattices, and on the Hebrew SPMRL treebank we outperform all previous results in realistic scenarios. Our MA&D solution is generic and can be applied to any language, e.g., assuming MA components such as those provided by More et al. (2018). In addition, our proposed architecture is suitable for any other task that encodes information in a lattice for further disambiguation.

# 2 Linguistic Data and Task Setup

Input tokens in MRLs are internally complex and bear multiple units of meaning. Morphological Analysis (MA) aims to convert each of the tokens into the set of all possible morphological decompositions licensed by the rules of the language. A single decomposition represents a possible interpretation of the token being analyzed. Consider the Hebrew phrase bbit hlbn. A partial list of analyses is presented in Table 1. A lattice representation of the analyses is illustrated in Figure 1.

Morphological Disambiguation (MD) is the task of selecting the single most-likely analysis for each token in the context of the sentence. The resulting morpheme sequence may then serve as the input processing units for downstream tasks (similarly to space-delimited words in English).
Our above example, bbit hlbn, is likely to be disambiguated as:

(1) $b / \mathrm{ADP} + h / \mathrm{DET} + bit / \mathrm{NOUN} + h / \mathrm{DET} + lbn / \mathrm{ADJ}$ literally: in+the+house+the+white translated: "in the white house".

The ambiguous MA output is stored in a lattice data structure. A lattice is a Directed Acyclic Graph (DAG) often used to encode ambiguity in NLP. In a morphological lattice, every node represents a segment boundary, and every edge represents a morpheme. Every path through the lattice represents a single possible analysis of the entire sentence. Notably, not all segmental forms in the lattice are overt in the input stream. Some are implicit, due to intricate morpho-phonological and orthographic processes. For example, the analysis of the token bbit contains three morphological segments $b$, $h$, $bit$ in the chosen path, yet the $h$ segment is not visible in the input token bbit (Figure 1).

# 3 Proposed Method

The Task The input to our MA&D framework is a sequence of tokens and the output is a sequence of disambiguated morphological analyses, one per token. We assume a symbolic MA that generates ambiguous lattices containing all possible morphological analyses per token, based on a broad-coverage lexicon and/or symbolic rules of the language.

Given an input lattice, we frame MD as a lattice disambiguation task. Sperber et al. (2019) approached this task by constructing a specific architecture that captures the lattice representation. We, in contrast, choose to modify the lattice representation and feed it to an existing network architecture.

The key idea, in a nutshell, is to linearize the lattice into a sequence of partially-ordered analyses, and feed this partial order to a pointer network. For each token, the network then learns to point to (select) the most likely analysis, preserving the linear constraints captured in the lattice structure.
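The lattice and the linearization just described can be sketched as follows. The dictionary encoding below is an illustrative assumption: each analysis is reduced to (form, tag) pairs, whereas the real lattice arcs also carry lemmas and morphological features.

```python
# Each analysis is one path fragment through the token's sub-lattice,
# given as a list of (form, tag) morphemes -- cf. Table 1 / Figure 1.
LATTICE = {
    "bbit": [
        [("b", "ADP"), ("bit", "NOUN")],
        [("b", "ADP"), ("h", "DET"), ("bit", "NOUN")],
    ],
    "hlbn": [
        [("h", "DET"), ("lbn", "NOUN")],
        [("h", "DET"), ("lbn", "ADJ")],
        [("hlbn", "VERB")],
    ],
}

def linearize(tokens, lattice):
    """Flatten per-token analyses into one partially-ordered sequence and
    record which flat indices belong to which token."""
    flat, spans = [], []
    for tok in tokens:
        start = len(flat)
        flat.extend(lattice[tok])
        spans.append(list(range(start, len(flat))))
    return flat, spans

flat, spans = linearize(["bbit", "hlbn"], LATTICE)
print(len(flat), spans)  # 5 [[0, 1], [2, 3, 4]]
```

The pointer network then makes one pointing decision per token, restricted to that token's index span; selecting indices 1 and 3 here recovers the disambiguation in (1).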
Pointer Network (PtrNet) Pointer networks (Vinyals et al., 2015) are designed as a special case of Sequence-to-Sequence (Seq2Seq) networks. Seq2Seq models take an input sequence and produce an output sequence, which may differ in length and vocabulary. A PtrNet can additionally handle an output vocabulary that depends on the input sequence and is therefore variable in length.

A Seq2Seq model is composed of an encoder and a decoder. The encoder consumes and encodes the entire (embedded) input sequence. Then, the decoder is fed the entire encoded input representation and, step by step, produces discrete outputs which are fed back as input to the next decoding step.

PtrNets have an additional Copy Attention layer. The attention layer focuses on specific elements of the encoded input sequence at each decoding step (Luong et al., 2015). Copy Attention is a special case where the attention weights determine which input element the decoder's state is most aligned with; that element can then be copied to the output.

Pointer Networks for MD (PtrNetMD) The PtrNet architecture is designed to learn the conditional probability of an output sequence whose elements are discrete tokens corresponding to positions in an input sequence (Vinyals et al., 2015). Our goal is then to encode the morphological lattice as a sequence and feed it to the PtrNet, so that the individual analyses in the lattice can be pointed at, selected and copied into the output sequence, while respecting the lattice ordering constraints.

Given a lattice, we serialize it by going over each token and listing all of its analyses. The linearization function maps a sequence of $n$ tokens into a sequence of $m$ analyses while preserving the partial order of the tokens, where $m$ is the sum of the numbers of analyses of all tokens. That is, for input tokens $t_1, \ldots, t_n$, let $a_j^i$ denote the $i$'th analysis of the $j$'th token, and let $k_j$ be the number of analyses of token $t_j$, so that $\sum_{j=1}^{n} k_j = m$. Then the following holds:
(2) linearize $(t_{1}, t_{2}, t_{3}, \ldots, t_{n}) =$

$$
a_{1}^{1}, \dots, a_{1}^{k_{1}}, a_{2}^{1}, \dots, a_{2}^{k_{2}}, \dots, a_{n}^{1}, \dots, a_{n}^{k_{n}}
$$

An analysis $a_j^i$ is expressed as a list of morphemes, where each morpheme is represented as a tuple of morphological properties. Both the SPMRL and UD schemes specify four properties: Form, Lemma, POS Tag, and Morphological Features. For example, (3) is an analysis composed of three morphemes:

(3) $a_{j}^{i} := [(form_{1}, lemma_{1}, tag_{1}, features_{1})$,

$$
\left(form_{2}, lemma_{2}, tag_{2}, features_{2}\right),
$$

$$
\left.\left(form_{3}, lemma_{3}, tag_{3}, features_{3}\right)\right]
$$

We design a Morphological Embedding layer which acts as an interface between the symbolic MA and the neural MD. Figure 2 describes the encoding of a single morphological analysis into an embedded vector: each property is embedded and averaged across all the morphemes in a single analysis, and all of the averaged embedded properties are concatenated to form a single embedded vector of a fixed size. The entire MA&D process is depicted in Figure 3.

# 4 Experimental Setup

The Data The PtrNetMD architecture we propose does not depend on any specific definition of the morphological signature. To showcase this, we experiment with data from two different languages and two different annotation schemes. We use the Universal Dependencies v2.2 dataset (Nivre et al.,

![](images/539a63295d09e5ac3a018e092c7647e1a1b13351b47a7d5cf179117c46ff9de6.jpg)
Figure 2: Morphological Embedding Layer Architecture. An analysis composed of 3 morphemes is transformed into a single embedded vector.

![](images/dc0c1e659bbd05fd619e5b6dc7813506efb4a80802c5b735a5ef621c92f1ecd7.jpg)
Figure 3: Our Proposed MA&D Architecture. A sequence of tokens is transformed into a sequence of analyses while preserving the token order.
The sequence of analyses is embedded and fed into an encoder. Then at each decoding step the entire encoded representation along with the current decoded state are used as input to an attention layer, and the attention weights are used to choose an element from the input sequence. + +2016) from the CoNLL18 UD Shared task. In addition we download the corresponding lattice files of each treebank from the CoNLL-UL project. Since our approach is sensitive to the lexical coverage of the MA lattices, we focus on the Hebrew (he_htb) and Turkish (tr_imst) treebanks. Unlike the other languages in the shared task, Hebrew and Turkish provided lattice files generated by broadcoverage analyzers (HEBLEX and TRMorph2). For comparability with previous work on Modern Hebrew, we also train and test our model on the Hebrew SPMRL treebank standard split. + +Lattice Embedding We use pre-trained FastText models to embed the forms and lemmas. FastText models generate vectors for any word using character ngrams, thus handling Out-of-Vocabulary forms and lemmas (Bojanowski et al., 2017). For POS tags and features we instantiate and train from scratch two embedding modules. Together, these 4 embedded properties are combined to produce a single morphological analysis vector. + +Lattice Encoding The above-mentioned morphological embedding layer turns the input analysis sequence into an embedded sequence. The partially ordered sequence of embedded analyses is fed to an encoder layer thus encoding the entire lattice. Next a step-by-step decoding process begins in which a decoder is using an Attention mechanism in order to score the alignment between each of the relevant encoded analyses and the token currently being decoded. Our Copy Attention module is the global dot-product of Luong et al. 
(2015), using a masking mechanism to make sure each decoding step attends only to the corresponding input token's analyses (in Figure 3 the masks are represented by the grouped arrows pointing from the decoder back to the encoded sequence). The decoder chooses the highest-scoring analysis. The full output sequence contains a list of indices, one per token, pointing to the selected analyses from the input lattice (Fig. 2).

# 4.1 Baseline Models

MD may be considered a special case of POS tagging, performed on the morpheme sequence. To compare our PtrNetMD architecture to existing modeling solutions we consider three baseline variations of POS-tagging-based MD models implemented end-to-end, defined as follows.

Pipeline Straka and Straková (2017) approach the MD problem as a two-phase pipeline, first performing segmentation of the input tokens, followed by sequence tagging on the resulting morpheme sequence. This approach mimics the way English POS tagging is performed, with the exception that the tagging is done on the morphological forms rather than directly on the input tokens. While it is straightforward to design, its POS tagging accuracy suffers from error propagation from the earlier segmentation phase. We compare the tagging accuracy obtained with gold (oracle) segments against that obtained with realistically predicted segments, for Turkish, Hebrew, Arabic and English, to gauge how much larger the accuracy drop is for MRLs than for English.

Token sequence multi-tagging In order to avoid error propagation and train our neural model end-to-end, we implement a baseline model predicting a complex analysis, referred to as a multi-tag, for each token. That is, we assign a single complex label composed of multiple POS tags to each raw token. We define a multi-tag as a concatenated list of basic tags, one per segment. In training, a word such as bbit, which is gold-segmented into the basic tag sequence $b/IN$, $h/DET$, $bit/NOUN$, is assigned the single multi-tag $bbit/IN-DET-NOUN$.
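The multi-tag construction can be sketched as follows (a minimal illustration with our own function name and data layout, not the paper's actual code):

```python
def to_multitag(token, segments):
    """Collapse a gold-segmented token into a single multi-tag label.

    segments: list of (form, tag) pairs, one per morpheme.
    The multi-tag is the concatenation of the basic tags, one per
    segment, joined in order; segmentation boundaries are discarded.
    """
    return token, "-".join(tag for _form, tag in segments)

# The example from the text: bbit gold-segmented as b/IN, h/DET, bit/NOUN
assert to_multitag("bbit", [("b", "IN"), ("h", "DET"), ("bit", "NOUN")]) \
    == ("bbit", "IN-DET-NOUN")
```

Note that the label space grows with every distinct tag combination observed in training, which is precisely the non-compositionality drawback that motivates the next baseline.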
Similar to the form and lemma embedding in the PtrNetMD, we use FastText for embedding the input token sequence. In addition, in order to inform the model about sub-token information, we combine each embedded token with a vector encoding the sequence of characters in the token, as suggested by Ling et al. (2015). A notable disadvantage of this model, compared to the pipeline and the proposed PtrNet model, is that it does not provide any information concerning segmentation boundaries.

Sequence-to-sequence tagging Our multi-tagging model has the drawback of operating on a large and non-compositional output-label space, so it cannot assign previously unseen tag compositions to previously unseen tokens. To overcome this, we implement a sequence-to-sequence model in which the input again consists of the raw input tokens but the output is a tag sequence, of a possibly different length, predicted (decoded) tag by tag. Here again we use the combined token and character embedding layer described in the previous paragraph. This model, too, does not provide explicit segmentation boundaries.

# 4.2 Evaluation

Aligned Segment The CoNLL18 UD Shared Task evaluation campaign reports scores for segmentation and POS tagging for all participating languages. The shared task provides an evaluation script producing various levels of F1 scores, based on aligned token-level segments. Since the focus of the shared task was to reflect word segmentation and relations between content words, the script discards unmatched word segments, so the reported POS tagging scores are in effect joint segmentation-and-tagging scores. We run this script to compare tagging scores between oracle (gold) segmentation and realistic (predicted) segmentation in a pipeline model. In addition, since our PtrNetMD jointly predicts both segments and tags, we can compare our PtrNetMD against the shared task leaders for Hebrew and Turkish.
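As a rough sketch of this segment-level alignment, assuming segments align when both their character span within the token and their tag match (the function and data layout are ours, not the shared task script):

```python
def aligned_segment_counts(gold, pred):
    """Token-level TP/FP/FN counts for joint segmentation-and-tagging.

    gold, pred: lists of (form, tag) pairs for one token's morphemes.
    A predicted segment is a true positive only if a gold segment with
    the same character span and tag exists; unmatched predictions are
    false positives, unmatched gold segments are false negatives.
    """
    def spans(segments):
        out, start = set(), 0
        for form, tag in segments:
            out.add((start, start + len(form), tag))
            start += len(form)
        return out

    g, p = spans(gold), spans(pred)
    return len(g & p), len(p - g), len(g - p)

# gold b/IN, h/DET, bit/NOUN vs. predicted b/IN, bit/NOUN:
# only b/IN aligns (TP); predicted bit/NOUN is misaligned (FP);
# gold h/DET and gold bit/NOUN go unmatched (two FNs)
tp, fp, fn = aligned_segment_counts(
    [("b", "IN"), ("h", "DET"), ("bit", "NOUN")],
    [("b", "IN"), ("bit", "NOUN")])
assert (tp, fp, fn) == (1, 1, 2)
```

Summing these counts over the dataset and plugging them into the usual precision/recall formulas yields the aligned segment F1 reported below.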
Aligned Multi-Set In addition to the shared task scores, we compute F1 scores similar to the above, with a slight but important difference: token counts are based on multi-set intersections of the gold and predicted labels. A multi-set (mset) is a modification of the set concept that allows multiple instances of its items. In our case we use a multi-set to count the intersection of morphological signatures in each token. To illustrate the difference between aligned segment and aligned mset, take for example the gold segmented tag sequence $b/IN$, $h/DET$, $bit/NOUN$ and the predicted segmented tag sequence $b/IN$, $bit/NOUN$. According to aligned segment, the first predicted segment ($b/IN$) is aligned and counted as a true positive; the second predicted segment ($bit/NOUN$) is misaligned and counted as a false positive, the second gold segment ($h/DET$) as a false negative, and the third gold segment ($bit/NOUN$) as another false negative. The aligned mset, on the other hand, is based on multi-set difference. In this case both $b/IN$ and $bit/NOUN$ exist in the gold and the predicted multi-sets and are counted as true positives, while $h/DET$ is mismatched and counted as a false negative. In both cases the counts are accumulated over the entire dataset and finally used for computing Precision, Recall and F1.

Formally, the aligned mset F1 metric is calculated as follows: for each token we first create a multi-set based on the morphological signatures (a morphological signature is defined by the properties of interest: segments only, POS tags only, joint segments and tags, etc.)
for both the predicted (Pred) and gold (Gold) morphemes:

(4) $Pred_{\mathrm{token}} = \uplus (p_1, p_2, \dots, p_k)$

$$
Gold_{\mathrm{token}} = \uplus (g_{1}, g_{2}, \dots, g_{l})
$$

$\uplus$: multi-set addition operator

We then calculate the token-level true and false positives (TP, FP) as well as false negatives (FN):

(5) $TP_{\mathrm{token}} = Pred_{\mathrm{token}} \cap Gold_{\mathrm{token}}$

$$
\begin{array}{l} FP_{\mathrm{token}} = Pred_{\mathrm{token}} - Gold_{\mathrm{token}} \\ FN_{\mathrm{token}} = Gold_{\mathrm{token}} - Pred_{\mathrm{token}} \end{array}
$$

Finally we add up the token counts over the entire dataset to produce the F1 metric:

(6) $TP_{\mathrm{total}} = \sum |TP_{\mathrm{token}}|$

$$
FP_{\mathrm{total}} = \sum |FP_{\mathrm{token}}|
$$

$$
FN_{\mathrm{total}} = \sum |FN_{\mathrm{token}}|
$$

$$
Precision = TP_{\mathrm{total}} / (TP_{\mathrm{total}} + FP_{\mathrm{total}})
$$

$$
Recall = TP_{\mathrm{total}} / (TP_{\mathrm{total}} + FN_{\mathrm{total}})
$$

$$
F1 = \frac{2 \times Precision \times Recall}{Precision + Recall}
$$

Having the morphemes available, even if out of order or only in part, has merit for downstream tasks that consume and further process them. Aligned mset accounts for this quality. Furthermore, both our multi-tagging and sequence-to-sequence tagging baseline models produce a tag sequence without segmentation boundaries, and aligned mset can be used to compare them against our PtrNetMD model. Finally, since this computation was also used by More et al.
(2019), we are able to compare our results to their non-neural MA&D framework applied to the Hebrew SPMRL treebank, which has so far been the state of the art for Hebrew segmentation and tagging.

Ideal vs Realistic Analysis Scenarios Following More et al. (2019) we distinguish between two evaluation scenarios. An Infused scenario is an idealised scenario in which the input lattice to our model has complete lexical coverage and is guaranteed to include the correct analysis as one of its many internal paths. An Uninfused scenario is a realistic case in which the lexical coverage might be partial and might lack certain gold analyses.

# 5 Results

CoNLL18 UD Shared Task Table 2 shows aligned segment F1 scores for joint segmentation-and-tagging on four languages that exhibit different degrees of morphological richness. The top two models are variants of the UDPipe pipeline system: UDPipe Oracle scores were obtained by running the UDPipe tagger on gold segments, and UDPipe Predicted scores were obtained by segmenting the raw text first and then tagging the predicted segments.

The top two rows in Table 2 allow us to gauge the effect of error propagation for different languages, as reflected in the performance difference between
| | English | Turkish | Arabic | Hebrew |
| --- | --- | --- | --- | --- |
| UDPipe Oracle | 94.62 | 93.24 | 95.30 | 95.13 |
| UDPipe Predicted | 93.62 | 91.64 | 89.34 | 80.87 |
| Shared Task Leader | 95.94 | 94.78 | 93.63 | 91.36 |
| PtrNetMD Infused | – | 96.6 | – | 94.41 |
| PtrNetMD Uninfused | – | 89.54 | – | 91.3 |
Table 2: Joint Segmentation-and-Tagging F1, Aligned Segment, CoNLL18 UD Shared Task Test Set. Top two rows are the pipeline baseline. Bottom three rows compare PtrNetMD to the shared task leaders.
| | English | Turkish | Arabic | Hebrew |
| --- | --- | --- | --- | --- |
| UDPipe Oracle | 100.00 | 100.00 | 100.00 | 100.00 |
| UDPipe Predicted | 99.03 | 97.92 | 93.71 | 85.16 |
| Shared Task Leader | 99.26 | 97.92 | 96.81 | 93.98 |
| PtrNetMD Infused | – | 99.41 | – | 96.36 |
| PtrNetMD Uninfused | – | 97.78 | – | 94.74 |
Table 3: Segmentation-only F1, Aligned Segment, CoNLL18 UD Shared Task Test Set. Top two rows are the pipeline baseline. Bottom three rows compare PtrNetMD to the shared task leaders.

tagging gold (Oracle) segments and tagging predicted segments. These results are remarkable: in the ideal (gold-oracle) scenario there is no significant difference in tagging accuracy between English and MRLs, but in the realistic scenario, where segmentation precedes tagging, the difference is large.

The bottom three rows in Table 2 report the leading scores from the CoNLL18 UD Shared Task as well as our PtrNetMD results. PtrNetMD achieves state-of-the-art results for joint segmentation-and-tagging on both Hebrew and Turkish in infused settings. Moreover, PtrNetMD ties the state of the art on the Hebrew treebank even with uninfused (realistic) lattices with partial lexical coverage.

In Table 3 we see aligned segment F1 scores for segmentation only on the same four languages. The results clearly indicate that segmenting Hebrew is harder than segmenting Arabic, which in turn is harder than segmenting Turkish, while English requires essentially no segmentation. As in Table 2, we see similar behavior comparing PtrNetMD to the shared task leaders on the segmentation task: PtrNetMD with infused lattices outperforms the shared-task leader on Turkish, and it outperforms the shared-task leader in both the infused and the uninfused scenario on Hebrew.

There are two possible explanations for prediction errors in uninfused scenarios: either the correct analysis (gold annotation) is part of the lattice but the model makes a wrong selection, or the correct analysis is not in the lattice. Acknowledging the notable gap in Table 2 between the PtrNetMD infused and uninfused scores on Turkish, we compared the number of prediction errors with the number of missing analyses in the uninfused lattices. Out of 1028 wrong predictions, 652 were also missing the correct analysis from the lattice, accounting for roughly 60% of the uninfused errors; correspondingly, there is a 60% error reduction when moving to the infused lattices. The missing analyses could thus account for the difference between the infused and uninfused scores. The same holds for Hebrew: out of 850 errors made, 330 do not have the correct analysis in the lattice, which is also very close to the difference between the infused and uninfused scores. Another insight into the coverage difference between the Turkish and Hebrew lattices is that the average number of analyses per token is 2.6 for Turkish compared to 10 for Hebrew.

| | Turkish | Arabic | Hebrew |
| --- | --- | --- | --- |
| Token Multi-Tag | 92.57 | 94.2 | 93.82 |
| Token Seq-Tag | 92.77 | 95.05 | 93.75 |
| PtrNetMD infused | 96.76 | – | 96.40 |
| PtrNetMD uninfused | 90.01 | – | 94.02 |

Table 4 contains the aligned mset scores of our two baselines, as well as of PtrNetMD in the infused and uninfused settings (since neither baseline predicts segments, they are inapplicable to aligned segment evaluation). In both Turkish and Hebrew, the infused PtrNetMD performs much better than the end-to-end tagging models. The Hebrew PtrNetMD even outperforms both baselines in uninfused circumstances. The high infused scores on both treebanks suggest that the PtrNetMD model is more than capable of selecting the correct analysis as long as one is present in the lattice. The difference between the infused and uninfused scores highlights the importance of generating full-coverage lattices in the MA component.

SPMRL Hebrew Treebank To put our results in context, Table 5 compares PtrNetMD on the Hebrew SPMRL treebank with the state-of-the-art results of More et al.
(2019), who used the same aligned mset scores for their joint segmentation-and-tagging evaluation.

Table 4: Tagging F1, Aligned MSet, CoNLL18 UD Shared Task Test Set
| | Dev-Inf | Dev-Uninf | Test-Inf | Test-Uninf |
| --- | --- | --- | --- | --- |
| MoreMD | 94.09 | 90.83 | 92.92 | 87.53 |
| MoreMD-DEP | 95.49 | 92.36 | 93.92 | 89.08 |
| PtrNetMD | 95.09 | 93.9 | 93.51 | 90.49 |
Table 5: Joint Segmentation-and-Tagging F1, Aligned MSet, Hebrew SPMRL treebank

The MoreMD lattice disambiguation approach is similar to our PtrNetMD, albeit non-neural, using a feature-based structured perceptron for disambiguation.

As can be seen in the table, PtrNetMD outperforms the MoreMD model in all settings. The MoreMD-DEP model jointly performs MD and dependency parsing, taking advantage of additional syntactic information that is predicted jointly with the segmentation and tags. The syntactic information contributes to the MD performance, as can be seen in the Infused columns. However, our PtrNetMD handles incomplete morphological information better than MoreMD-DEP, as can be seen in the Uninfused columns.

# 6 Related Work

Initial work on MD viewed it as a special case of POS tagging and applied generative probabilistic frameworks such as Hidden Markov Models (Bar-haim et al., 2008) as well as discriminative feature-based models (Sak et al., 2009; Lee et al., 2011; Bohnet et al., 2013; Habash et al., 2013). In the context of parsing, Goldberg and Elhadad (2010) showed that consuming the predicted MD output of Adler and Elhadad (2006b) as input to dependency parsing significantly reduced parsing performance on Hebrew.

To address the error propagation inherent in the pipeline approach, More et al. (2019) and Seeker and Çetinoğlu (2015) proposed joint morphosyntactic frameworks which enable interaction between the morphological and syntactic layers. While proving to be state of the art for both MD and dependency parsing, on Hebrew and Turkish respectively, these solutions involved massive handcrafted feature engineering.

MA&D on Arabic was addressed by Habash and Rambow (2005) and Roth et al. (2008) using MA output and applying a set of classification and language models to make grammatical and lexical predictions.
A ranking component then scored the analyses produced by the MA using a weighted sum of matched predicted features. Zalmout and Habash (2017) presented a neural version of the above system, using LSTM networks in several configurations and at several embedding levels to model the various morphological features and use them to score and rank the MA analyses. In addition, they incorporated features based on the space of possible analyses from the MA into the MD component. By enriching the input word embedding with these additional morphological features they increased MD accuracy drastically. This ranking technique requires building several models: language models to predict form and lemma, and sequence labeling models to predict non-lexical features such as POS, gender, number, etc. Our solution, on the other hand, involves a single model to score the joint analyses and choose the best one. In addition, our neural MD component is language agnostic and does not depend on any language-specific properties; as a result it can be easily applied to any language.

Yildiz et al. (2016) proposed an MA&D framework with a neural MD model; however, their MD component was implemented as a binary classifier predicting whether or not a current property value is correct, and was trained in a semi-supervised fashion. Such a simple topology is focused on predicting POS tags and morphological features, but is inappropriate for the general case that includes segmentation.

Most recently, Khalifa et al. (2020) provided further validation of the hypothesis that in low-resource settings, morphological analyzers help boost the performance of the full morphological disambiguation task. We support this claim as well with our results on Hebrew and Turkish, which are considered low-resource languages, at least in terms of the resources the UD treebank collection provides.
In the same vein, incorporating symbolic morphological information in MRLs has long been shown to improve NLP tasks; see for instance Marton et al. (2013) for the contribution of morphological knowledge to parsing quality on Arabic.

End-to-end neural modeling for word segmentation was addressed by Shao et al. (2018), who modeled segmentation as character-level sequence labeling and applied it to the UD data collection. While improving the results averaged over the entire UD set, Hebrew and Arabic accuracy remained low. Wang et al. (2016) tackled the segmentation challenge by taking an unsupervised approach to learning segment boundaries, but did not address the assignment of POS tags and morphological features.

A prerequisite for our proposed approach is the availability of a morphological analyzer (MA) component. Over the past years several MA resources have been published and are available for MA&D research. The CoNLL-UL project (More et al., 2018) provides static lattice files generated for the CoNLL18 UD shared task (Zeman et al., 2018). Other MA resources are available for specific languages, for example: HEBLEX (Adler and Elhadad, 2006a), TRMorph2 (Çöltekin, 2014), and Calima-Star (Taji et al., 2018). To facilitate MA for the UD treebanks, Sagot (2018) produced a collection of multilingual lexicons in the CoNLL-UL format covering many of the UD languages. The Universal Morphology (UniMorph) project contains morphological data annotated in a canonical schema for many languages, which has been shown to improve, e.g., low-resource machine translation (Shearing et al., 2018).

Encoding complete lattices into vector representations was previously achieved by modifying the implementation of the LSTM cells to keep track of the history of multiple node children (Ladhak et al., 2016; Su et al., 2017; Sperber et al., 2017). More recently, Sperber et al.
(2019) applied self-attention layers coupled with reachability masks and positional embeddings to efficiently handle lattice inputs. All of these lattice-aware networks were applied to speech recognition tasks, where the segmentation of the input stream refers only to overt elements, with no covert elements as in morphology. In this work, in contrast, we cope with non-concatenative morphological phenomena where not all segments are overt. Finally, our system is simple to apply and easy to comprehend: in contrast with non-trivial modifications to the internals of the neural model, we parse and encode the lattice as a sequence to be fed into (any) existing neural components.

# 7 Conclusions and Future Work

In this work we addressed the challenge of morphological disambiguation for MRLs. We design a general framework that consumes lattice files and outputs a sequence of disambiguated morphemes, each containing the segmentation boundary, lemma, part-of-speech tag and morphological features. Our solution is language agnostic, and we apply it to two different languages and two different annotation schemes. We show that access to symbolic morphological information aids the neural disambiguation model, compared to strong end-to-end baselines that only have access to the raw tokens.

We empirically evaluate our model using two evaluation methods: the CoNLL18 UD Shared Task evaluation, and a multi-set intersection-based evaluation, which is a more informative metric for downstream tasks that operate directly on morpheme sequences. In an ideal scenario, where full lexical coverage is assumed, our model outperformed the shared task leaders on the word segmentation task as well as the joint segmentation-and-tagging task, in both Turkish and Hebrew. Furthermore, we match the leading joint segmentation-and-tagging scores in a realistic scenario with only partial lexical coverage on Hebrew.
We further show superior performance of our model compared to previous models on the Hebrew SPMRL treebank.

This work motivates two future research directions. Our infused-vs-uninfused analysis suggests that most errors on uninfused lattices are due to partial MA coverage. Our disambiguation model proves to be very reliable in selecting the correct analysis when one is available. It follows that a broad-coverage MA component may improve the overall quality of the disambiguation in realistic (uninfused) scenarios. This motivates learning to induce a universal, high-recall MA which is free to generate large lattices, rewarding recall rather than precision. A second research path towards improving realistic partial-coverage (uninfused) lattices is to combine our morphologically-aware Pointer Network with an end-to-end model that operates on the raw token sequence.

Finally, we intend to extend this lattice-based architecture to complete Joint Morpho-Syntactic and Morpho-Semantic tasks. That is, in addition to morphological segmentation and tagging, the pointer network can be trained to predict span labels (as in NER), headedness relations (as in dependency parsing) and possibly more properties for the lattice arcs, so that these multiple layers of information may be jointly predicted as part of the lattice-disambiguation task.

# Acknowledgements

We thank the BIU-NLP lab members for comments and discussion, and the four anonymous reviewers for their insightful remarks. This research is funded by grants from the Israeli Science Foundation (ISF grant 1739/26) and the European Research Council (ERC grant 677352), for which we are grateful.

# References

Meni Adler and Michael Elhadad. 2006a. An unsupervised morpheme-based HMM for Hebrew morphological disambiguation.
In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 665-672, Sydney, Australia. Association for Computational Linguistics.
Meni Adler and Michael Elhadad. 2006b. An unsupervised morpheme-based HMM for Hebrew morphological disambiguation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, ACL-44, pages 665-672, Stroudsburg, PA, USA. Association for Computational Linguistics.
Roy Bar-haim, Khalil Sima'an, and Yoad Winter. 2008. Part-of-speech tagging of modern Hebrew text. Nat. Lang. Eng., 14(2):223-251.
Dan Bareket and Reut Tsarfaty. 2020. Neural modeling for named entities and morphology (NEMO$^2$).
Bernd Bohnet, Joakim Nivre, Igor Boguslavsky, Richard Farkas, Filip Ginter, and Jan Hajic. 2013. Joint morphological and syntactic analysis for richly inflected languages. Transactions of the Association for Computational Linguistics, 1:415-428.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.
Yoav Goldberg and Michael Elhadad. 2010. An efficient algorithm for easy-first non-directional dependency parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 742-750, Los Angeles, California. Association for Computational Linguistics.
Nizar Habash and Owen Rambow. 2005. Arabic tokenization, part-of-speech tagging and morphological disambiguation in one fell swoop. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 573-580, Ann Arbor, Michigan. Association for Computational Linguistics.
Nizar Habash, Ryan Roth, Owen Rambow, Ramy Eskander, and Nadi Tomeh. 2013.
Morphological analysis and disambiguation for dialectal Arabic. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 426-432, Atlanta, Georgia. Association for Computational Linguistics. +Salam Khalifa, Nasser Zalmout, and Nizar Habash. 2020. Morphological analysis and disambiguation for Gulf Arabic: The interplay between resources + +and methods. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 3895-3904, Marseille, France. European Language Resources Association. +Stav Klein and Reut Tsarfaty. 2020. Getting the ##life out of living: How adequate are word-pieces for modelling complex morphology? In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, SIGMORPHON 2020, Online, July 10, 2020, pages 204-209. +Faisal Ladhak, Ankur Gandhi, Markus Dreyer, Lambert Mathias, Ariya Rastrow, and Björn Hoffmeister. 2016. Latticernn: Recurrent neural networks over lattices. In *INTERSPEECH*. +John Lee, Jason Naradowsky, and David A. Smith. 2011. A discriminative model for joint morphological disambiguation and dependency parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 885-894, Portland, Oregon, USA. Association for Computational Linguistics. +Wang Ling, Chris Dyer, Alan W Black, Isabel Trancoso, Ramón Fernández, Silvio Amir, Luís Marujo, and Tiago Luís. 2015. Finding function in form: Compositional character models for open vocabulary word representation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1520-1530, Lisbon, Portugal. Association for Computational Linguistics. +Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. 
In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics. +Yuval Marton, Nizar Habash, and Owen Rambow. 2013. Dependency parsing of modern standard Arabic with lexical and inflectional features. Computational Linguistics, 39(1):161-194. +Amir More, Özlem Çetinoglu, Cagri Çoltekin, Nizar Habash, Benoit Sagot, Djame Seddah, Dima Taji, and Reut Tsarfaty. 2018. CoNLL-UL: Universal morphological lattices for Universal Dependency parsing. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC'18). +Amir More, Amit Seker, Victoria Basmova, and Reut Tsarfaty. 2019. Joint transition-based models for morpho-syntactic parsing: Parsing strategies for MRLs and a case study from modern Hebrew. Transactions of the Association for Computational Linguistics, 7:33-48. + +Amir More and Reut Tsarfaty. 2016. Data-driven morphological analysis and disambiguation for morphologically rich languages and universal dependencies. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 337-348, Osaka, Japan. The COLING 2016 Organizing Committee. +Thomas Mueller, Helmut Schmid, and Hinrich Schütze. 2013. Efficient higher-order CRFs for morphological tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 322-332, Seattle, Washington, USA. Association for Computational Linguistics. +Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666, Portoroz, Slovenia. European Language Resources Association (ELRA). 
Ryan Roth, Owen Rambow, Nizar Habash, Mona Diab, and Cynthia Rudin. 2008. Arabic morphological tagging, diacritization, and lemmatization using lexeme models and feature ranking. In Proceedings of ACL-08: HLT, Short Papers, pages 117-120, Columbus, Ohio. Association for Computational Linguistics.
Benoit Sagot. 2018. A multilingual collection of CoNLL-U-compatible morphological lexicons. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Hasim Sak, Tunga Güngör, and Murat Saraçlar. 2009. Morphological disambiguation of Turkish text with perceptron algorithm. In Proceedings of the 8th International Conference on Computational Linguistics and Intelligent Text Processing, CICLing '07, pages 107-118, Berlin, Heidelberg. Springer-Verlag.
Wolfgang Seeker and Özlem Çetinoğlu. 2015. A graph-based lattice dependency parser for joint morphological segmentation and syntactic analysis. Transactions of the Association for Computational Linguistics, 3:359-373.
Yan Shao, Christian Hardmeier, and Joakim Nivre. 2018. Universal word segmentation: Implementation and interpretation. Transactions of the Association for Computational Linguistics, 6:421-435.
Steven Shearing, Christo Kirov, Huda Khayrallah, and David Yarowsky. 2018. Improving low resource machine translation using morphological glosses (non-archival extended abstract). In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Papers), pages 132-139, Boston, MA. Association for Machine Translation in the Americas.
K. Sima'an, A. Itai, Y. Winter, A. Altman, and N. Nativ. 2001. Building a tree-bank of modern Hebrew text. Traitement Automatique des Langues, 42:347-380.
Matthias Sperber, Graham Neubig, Jan Niehues, and Alex Waibel. 2017. Neural lattice-to-sequence models for uncertain inputs.
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1380-1389, Copenhagen, Denmark. Association for Computational Linguistics. +Matthias Sperber, Graham Neubig, Ngoc-Quan Pham, and Alex Waibel. 2019. Self-attentional models for lattice inputs. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1185-1197, Florence, Italy. Association for Computational Linguistics. +Milan Straka and Jana Strakova. 2017. Tokenizing, pos tagging, lemmatizing and parsing ud 2.0 with udpipe. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 88-99, Vancouver, Canada. Association for Computational Linguistics. +Jinsong Su, Zhixing Tan, Deyi Xiong, Rongrong Ji, Xiaodong Shi, and Yang Liu. 2017. Lattice-based recurrent neural network encoders for neural machine translation. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17, pages 3302-3308. AAAI Press. +Dima Taji, Salam Khalifa, Ossama Obeid, Fadhl Eryani, and Nizar Habash. 2018. An Arabic morphological analyzer and generator with copious features. In Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 140-150, Brussels, Belgium. Association for Computational Linguistics. +Reut Tsarfaty, Dan Bareket, Stav Klein, and Amit Seker. 2020. From SPMRL to NMRL: what did we learn (and unlearn) in a decade of parsing morphologically-rich languages (mrls)? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7396-7408. +Reut Tsarfaty, Djamé Seddah, Yoav Goldberg, Sandra Kuebler, Yannick Versley, Marie Candito, Jennifer Foster, Ines Rehbein, and Lamia Tounsi. 2010. Statistical parsing of morphologically rich languages (SPMRL) what, how and whither. 
In Proceedings of the NAACL HLT 2010 First Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 1-12, Los Angeles, CA, USA. Association for Computational Linguistics. + +Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2692-2700. Curran Associates, Inc. +Linlin Wang, Zhu Cao, Yu Xia, and Gerard de Melo. 2016. Morphological segmentation with window LSTM neural networks. In AAAI. +Eray Yildiz, Caglar Tirkaz, H. Sahin, Mustafa Eren, and Omer Sonmez. 2016. A morphology-aware network for morphological disambiguation. In AAAI Conference on Artificial Intelligence. +Nasser Zalmout and Nizar Habash. 2017. Don't throw those morphological analyzers away just yet: Neural morphological disambiguation for Arabic. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 704-713, Copenhagen, Denmark. Association for Computational Linguistics. +Daniel Zeman, Jan Hajic, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 shared task: Multilingual parsing from raw text to universal dependencies. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1-21, Brussels, Belgium. Association for Computational Linguistics. +Cagri Çoltekin. 2014. A set of open source tools for turkish natural language processing. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), Reykjavik, Iceland. European Language Resources Association (ELRA). 
\ No newline at end of file diff --git a/apointernetworkarchitectureforjointmorphologicalsegmentationandtagging/images.zip b/apointernetworkarchitectureforjointmorphologicalsegmentationandtagging/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..30265faa8c630655597fc887acc4469cd389671c --- /dev/null +++ b/apointernetworkarchitectureforjointmorphologicalsegmentationandtagging/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:53cf07c7d8a161305f19109fb74d43c242ad808e2e3f78cd39be86d394edf5de +size 230558 diff --git a/apointernetworkarchitectureforjointmorphologicalsegmentationandtagging/layout.json b/apointernetworkarchitectureforjointmorphologicalsegmentationandtagging/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..25c15698c3438efc5ec9fedd086b44082216c340 --- /dev/null +++ b/apointernetworkarchitectureforjointmorphologicalsegmentationandtagging/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c74cee713ddf663c62396b695a4a8768dbe73ed78c4ab5252c4b7ed8639d04c6 +size 328978 diff --git a/aselfrefinementstrategyfornoisereductioningrammaticalerrorcorrection/6bcd6a2d-3197-4c48-a780-fc241bf5f099_content_list.json b/aselfrefinementstrategyfornoisereductioningrammaticalerrorcorrection/6bcd6a2d-3197-4c48-a780-fc241bf5f099_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1522f75d5961561d8efeb1bd2cc9d8c113f86e3d --- /dev/null +++ b/aselfrefinementstrategyfornoisereductioningrammaticalerrorcorrection/6bcd6a2d-3197-4c48-a780-fc241bf5f099_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6ab70ea015efff521231a0ace9a0c397b9acf2f4342675a78be895bf4b6fbed8 +size 96393 diff --git a/aselfrefinementstrategyfornoisereductioningrammaticalerrorcorrection/6bcd6a2d-3197-4c48-a780-fc241bf5f099_model.json 
b/aselfrefinementstrategyfornoisereductioningrammaticalerrorcorrection/6bcd6a2d-3197-4c48-a780-fc241bf5f099_model.json new file mode 100644 index 0000000000000000000000000000000000000000..2b4f52c7b7fdfc189f3005da395bd2b41cc836e3 --- /dev/null +++ b/aselfrefinementstrategyfornoisereductioningrammaticalerrorcorrection/6bcd6a2d-3197-4c48-a780-fc241bf5f099_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b8012182923de821d53b6a9851ee80fe7800e45555aa3b66f115bd9517857386 +size 115363 diff --git a/aselfrefinementstrategyfornoisereductioningrammaticalerrorcorrection/6bcd6a2d-3197-4c48-a780-fc241bf5f099_origin.pdf b/aselfrefinementstrategyfornoisereductioningrammaticalerrorcorrection/6bcd6a2d-3197-4c48-a780-fc241bf5f099_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d23808dda281e250e8cde5a5ae895cb94d92eb37 --- /dev/null +++ b/aselfrefinementstrategyfornoisereductioningrammaticalerrorcorrection/6bcd6a2d-3197-4c48-a780-fc241bf5f099_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:17296bf664475ab7942753b593975b3d1eaacd200743be10adf8ff9781b907dc +size 683906 diff --git a/aselfrefinementstrategyfornoisereductioningrammaticalerrorcorrection/full.md b/aselfrefinementstrategyfornoisereductioningrammaticalerrorcorrection/full.md new file mode 100644 index 0000000000000000000000000000000000000000..776e2b352abd09acce6bfc15de090ff70b422079 --- /dev/null +++ b/aselfrefinementstrategyfornoisereductioningrammaticalerrorcorrection/full.md @@ -0,0 +1,340 @@ +# A Self-Refinement Strategy for Noise Reduction in Grammatical Error Correction + +Masato Mita $^{1,2}$ Shun Kiyono $^{1,2}$ Masahiro Kaneko $^{3,1}$ Jun Suzuki $^{2,1}$ Kentaro Inui $^{2,1}$ + +$^{1}$ RIKEN Center for Advanced Intelligence Project + $^{2}$ Tohoku University + +$^{3}$ Tokyo Metropolitan University + +{masato.mita, shun.kiyono}@riken.jp + +kaneko-masahiro@ed.tmu.ac.jp + +{jun.suzuki, inui}@ecei.tohoku.ac.jp + +# Abstract 
Existing approaches for grammatical error correction (GEC) largely rely on supervised learning with manually created GEC datasets. However, there has been little focus on verifying and ensuring the quality of these datasets, or on how lower-quality data might affect GEC performance. We indeed found a non-negligible amount of "noise", i.e., errors that were inappropriately edited or left uncorrected. To address this, we designed a self-refinement method whose key idea is to denoise these datasets by leveraging the prediction consistency of existing models; it outperforms strong denoising baseline methods. We further applied task-specific techniques and achieved state-of-the-art performance on the CoNLL-2014, JFLEG, and BEA-2019 benchmarks. We then analyzed the effect of the proposed denoising method and found that our approach improves the coverage of corrections and facilitates fluency edits, which is reflected in higher recall and overall performance.

# 1 Introduction

Grammatical error correction (GEC) is often considered a variant of machine translation (MT) (Brockett et al., 2006; Junczys-Dowmunt et al., 2018) due to their structural similarity: "translating" from ungrammatical source text to grammatical target text. At present, several neural encoder-decoder (EncDec) approaches have been introduced for this task and have achieved remarkable results (Chollampatt and Ng, 2018; Zhao et al., 2019; Kiyono et al., 2019). EncDec models tend to improve further with increasing data size (Koehn and Knowles, 2017; Sennrich and Zhang, 2019); however, this is not necessarily true in GEC. For example, Lo et al. (2018) reported that an EncDec-based GEC model trained on EFCamDat (Geertzen et al., 2013)$^{1}$, the largest publicly available learner corpus to date (two million sentence pairs), was outperformed by a model trained on a smaller dataset (e.g., 720K pairs). They hypothesized that this may be due to the noisiness of EFCamDat, i.e., the presence of sentence pairs whose corrections still contained grammatical errors, either because of inappropriate edits or because errors were left uncorrected. For example, in Table 1, "discuss about" should most likely have been corrected to "discuss" and "are discussing", respectively. We confirmed that there is a non-negligible amount of noise in commonly used GEC datasets (Section 3).

| Noise type | Source | Target |
| --- | --- | --- |
| 1: Errors are inappropriately edited | I want to discuss about the education. | I want to discuss of the education. |
| 2: Errors are left uncorrected | We discuss about our sales target. | We discuss about our sales target. |

Table 1: Example of an inappropriately corrected error and an unchanged error in EFCamDat. We consider these types of errors to be dataset noise that might hinder GEC model performance.

We recognise data noise as a generally overlooked issue in GEC, and consider the question of whether a better GEC model can be built by reducing noise in GEC corpora. To this end, we designed a self-refining approach: an effective denoising method in which residual errors left by careless or unskilled annotators are corrected by an existing GEC model. This approach relies on the consistency of the GEC model's predictions (Section 4).

We evaluated the effectiveness of our method over several GEC datasets, and found that it considerably outperformed baseline methods, including three strong denoising baselines based on a filtering approach, which is a common approach in MT (Bei et al., 2018; Junczys-Dowmunt, 2018; Rossenbach et al., 2018). We further improved the performance by applying task-specific techniques and achieved state-of-the-art performance on the CoNLL-2014, JFLEG, and BEA-2019 benchmarks.
Finally, through our analysis, we found unexpected benefits of our approach: (i) it benefits from the advantages of self-training in neural sequence generation due to its structural similarity (Section 6.3); (ii) it yields a significant increase in recall while maintaining precision, indicating improved coverage of corrections (Section 6.4); and (iii) it tends to produce more fluent edits, possibly leading to more native-sounding corrections (Section 6.5). The last point is reflected in performance on the JFLEG benchmark, which focuses on fluency edits.

In summary, we present a data denoising method that improves GEC performance, verify its effectiveness by comparing it to both strong baselines and current best-performing models, and analyze how the method affects both GEC performance and the data itself.

# 2 Related Work

In GEC, previous studies have generally focused on typical errors, such as the use of articles (Han et al., 2006), prepositions (Felice and Pulman, 2008), and noun number (Nagata et al., 2006). More recently, many studies have addressed GEC as an MT problem in which ungrammatical text is expected to be translated into grammatical text. This approach allows the adoption of sophisticated sequence-to-sequence architectures (Sutskever et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017) that have achieved strong performance but require a large amount of data (Chollampatt and Ng, 2018; Junczys-Dowmunt et al., 2018; Kiyono et al., 2019). In GEC, the data are usually built manually by experts, which leads to an underlying assumption that the data are noise-free. Therefore, to the best of our knowledge, noise in existing common datasets remains largely under-explored, and no previous research has investigated the effectiveness of denoising GEC datasets. Recently, Lichtarge et al. (2020) proposed a method for filtering large and noisy synthetic pre-training data in GEC by deriving example-level scores on their pretrained data.
However, what they regard as noise consists of source-side instances (i.e., not target sentences) of the synthetic data that fall outside the genuine learner error distribution, and they perform data selection based on the small, higher-quality genuine data (namely, the learner corpora we attempt to denoise in this study). The two methods are therefore not directly comparable, and combining them can be expected to further improve performance, which we plan to investigate in future work.

In contrast, data noise is becoming an increasingly important topic in MT, where it is common to use parallel data acquired automatically via web crawling in addition to high-quality curated data. As a result, the MT field faces various data quality issues, such as misalignment and incorrect translations, which may significantly impact translation quality (Khayrallah and Koehn, 2018). A straightforward solution is to apply a filtering approach, where noisy data are filtered out and a smaller subset of high-quality sentence pairs is retained (Bei et al., 2018; Junczys-Dowmunt, 2018; Rossenbach et al., 2018). Nevertheless, it is unclear whether such a filtering approach can be successfully applied to GEC, where commonly available datasets tend to be far smaller than those used in recent neural MT research. Hence, in this study, we investigate its effectiveness by conducting a comparative experiment against the proposed denoising approach.

# 3 Noise in GEC Datasets

In this study, we define noise as two types of residual grammatical errors in target sentences: inappropriate edits and errors left uncorrected (Table 1). Most learner corpora, such as EFCamDat and Lang-8 (Mizumoto et al., 2011; Tajiri et al., 2012), are constructed from correction logs in which the source texts are provided by human language learners and the corresponding corrected target texts are provided by editors (annotators).
Unless each annotator has $100\%$ accuracy, all corpora inevitably contain noise.

The presence of noise in GEC data was pointed out by previous work such as Lo et al. (2018), but its exact nature was left unexplored. To confirm it, we manually assessed how much noise is contained in the following three commonly used training datasets: the BEA official training dataset (henceforth, BEA-train) provided in the BEA-2019 workshop (Bryant et al., 2019)$^{3}$, EFCamDat, and the non-public Lang-8 corpus (henceforth, Lang-8)$^{4}$. For 300 target sentences $\mathbf{Y}$ from each dataset, one expert reviewed them, and we obtained denoised sentences $\mathbf{Y}'$ (Table 2). We then calculated the word edit rate (WER) between the original target sentences $\mathbf{Y}$ and the denoised target sentences $\mathbf{Y}'$. WER is defined as follows:

$$
\mathrm{WER} = \frac{\sum_{i=1}^{N} d\left(\mathbf{Y}_{i}, \mathbf{Y}_{i}^{\prime}\right)}{\sum_{i=1}^{N} \left|\mathbf{Y}_{i}\right|} \tag{1}
$$

where $|\mathbf{Y}_i|$ is the total number of words in the original target sentence $\mathbf{Y}_i$ and $d(\cdot)$ is the word-based Levenshtein distance.

(1) BEA-train
X: I will make a poet to kill this pain.
Y: I will make a poem to kill this pain.
Y': I will write a poem to get rid of this pain.

(2) EFCamDat
X: The restaurant in front of movie teather.
Y: The restaurant in front of movie theater.
Y': The restaurant is located opposite the movie theater.

(3) Lang-8
X: Coordinate with product support team for potential customer show site visit;
Y: Coordinate with product support team for potential customer show site visits;
Y': Please coordinate with the product support team to escort potential customers to site visits.

Table 2: Examples of original source sentences ($\mathbf{X}$), original target sentences ($\mathbf{Y}$), and target sentences reviewed by the expert ($\mathbf{Y}'$) in the most commonly used training data for GEC.

| Dataset | WER (% noise) |
| --- | --- |
| BEA-train | 37.1 |
| EFCamDat | 42.1 |
| Lang-8 | 34.6 |

Table 3: Amount of noise in GEC training data estimated by WER.

Table 3 shows the amount of noise in the datasets estimated by WER. The WER values are slightly higher than expected, but this is most likely caused by fluency edits made by the editors, which make the sentences more native-like. Thus, we found that (i) there is a non-negligible amount of "noise" in the most commonly used training data for GEC, and (ii) EFCamDat is much noisier than the other two training datasets.

# 4 Proposed Denoising Method

The supervised learning problem for GEC is formally defined as follows. Let $\pmb{\theta}$ be all trainable parameters of a GEC model, and let $\mathcal{D}$ be training data consisting of pairs of an ungrammatical source sentence $X$ and a grammatical target sentence $Y$, i.e., $\mathcal{D} = \{(X_i,Y_i)\}_{i=1}^n$.
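For concreteness, the WER of Eq. (1) can be computed with a word-level Levenshtein distance. The following is a minimal sketch; the function names and whitespace tokenization are our illustrative assumptions, not the paper's implementation:

```python
def word_levenshtein(a, b):
    """Word-based Levenshtein (edit) distance between token lists a and b."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))  # distances against the empty prefix of a
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                        # deletion
                        dp[j - 1] + 1,                    # insertion
                        prev + (a[i - 1] != b[j - 1]))    # substitution/match
            prev = cur
    return dp[n]

def corpus_wer(targets, denoised_targets):
    """Eq. (1): total word edit distance between Y and Y', normalized by
    the total number of words in the original target sentences Y."""
    numer = sum(word_levenshtein(y.split(), y2.split())
                for y, y2 in zip(targets, denoised_targets))
    denom = sum(len(y.split()) for y in targets)
    return numer / denom
```

A single residual word left in a six-word target, for instance, contributes an edit rate of 1/6 for that sentence.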
Then, the objective is to find the optimal parameters $\widehat{\pmb{\theta}}$ that minimize the following loss function $\mathcal{L}(\mathcal{D},\pmb{\theta})$ on training data $\mathcal{D}$ : + +$$ +\mathcal {L} (\mathcal {D}, \boldsymbol {\theta}) = - \frac {1}{| \mathcal {D} |} \sum_ {(\boldsymbol {X}, \boldsymbol {Y}) \in \mathcal {D}} \log (p (\boldsymbol {Y} | \boldsymbol {X}, \boldsymbol {\theta})). \tag {2} +$$ + +Conventionally, training data $\mathcal{D}$ is assumed to be "clean" parallel data. However, as argued in Section 3, this assumption typically does not hold in GEC. Here, we assume that training data $\mathcal{D}$ is "noisy", and, for clarity, we use the notation $\hat{\mathcal{D}}$ to represent "clean" parallel data, where "clean" means "denoised" in this context. The goal is, first, to obtain a new set $\hat{\mathcal{D}}$ by denoising $\mathcal{D}$ , and then, to obtain a GEC model $\hat{\theta}$ on the new training data $\hat{\mathcal{D}}$ . + +To deal with data noise, a straightforward solution is to apply a filtering approach, where noisy data are filtered out and a smaller subset of high-quality sentence pairs is retained, as employed in MT. 
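As a minimal illustration of Eq. (2) (not the authors' code): given per-pair log-probabilities $\log p(Y_i \mid X_i, \pmb{\theta})$ produced by a decoder, the loss is simply their negated average:

```python
def gec_training_loss(pair_log_probs):
    """Eq. (2): mean negative log-likelihood over the parallel data D.

    pair_log_probs[i] is log p(Y_i | X_i, theta), e.g. the sum of the
    decoder's per-token log-probabilities for the i-th sentence pair.
    """
    return -sum(pair_log_probs) / len(pair_log_probs)
```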
However, applying a filtering approach may not be the best choice in GEC for two reasons: (i) GEC is a low-resource task compared to MT, so further reducing the data size by filtering may be critically ineffective; (ii) even noisy instances may still be useful for training, since they might contain some correct edits as well (note that these correct edits would also have been lost to filtering, further decreasing the amount of informative cues in training).

```
Algorithm 1: Denoising GEC parallel data with self-refinement

Data:   noisy parallel data D
Result: denoised parallel data D̂

D̂ ← {}                                  // create empty set
Train a base model and acquire θ from D
for (X, Y) ∈ D do
    Y′ ← Beam_Search_Decoding(Y; θ)
    Compute perplexity PPL(Y) and PPL(Y′)
    if PPL(Y) − PPL(Y′) ≥ 0 then
        Ŷ ← Y′
    else
        Ŷ ← Y
    D̂ ← D̂ ∪ {(X, Ŷ)}
Train a denoised new model θ̂ from D̂
```

As an alternative to filtering, we propose a self-refinement (SR) approach for denoising GEC training data (Algorithm 1). The main idea is to train a GEC model (henceforth, the base model) on the noisy parallel data $\mathcal{D}$ and to use it to refine the target sentences in $\mathcal{D}$. Noisy annotations are potentially caused by carelessness or insufficient skill on the part of annotators, which results in inconsistent corrections in similar contexts. In contrast, machine learning-based GEC models, such as EncDec models, tend to be reliably consistent given similar contexts.
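The self-refinement loop of Algorithm 1, including its perplexity fail-safe, can be sketched in a few lines of code. Here `base_model` (standing in for beam-search decoding with the trained base model) and `lm_perplexity` (standing in for a length-normalized language-model score) are hypothetical callables:

```python
def self_refine(parallel_data, base_model, lm_perplexity):
    """Algorithm 1: refine the *target* side of noisy (X, Y) pairs.

    base_model(y)     -> a refined target sentence Y' (placeholder).
    lm_perplexity(y)  -> length-normalized perplexity of y (placeholder).
    """
    denoised = []
    for x, y in parallel_data:
        y_prime = base_model(y)
        # fail-safe: accept Y' only if it does not increase LM perplexity
        y_hat = y_prime if lm_perplexity(y) - lm_perplexity(y_prime) >= 0 else y
        denoised.append((x, y_hat))
    return denoised
```

A denoised model would then be trained on the returned pairs, exactly as on the original data.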
Given noisy parallel data $\mathcal{D} = \{(X_i, Y_i)\}_{i=1}^n$, we generate new target sentences $\hat{Y}_i$ from the original target sentences $Y_i$ and pair them with their original source sentences $X_i$ (line 4 in Algorithm 1). The consistency of the base model's predictions ensures that the resulting parallel data $\hat{\mathcal{D}} = \{(X_i, \hat{Y}_i)\}_{i=1}^n$ contain noise to a lesser extent. It is worth noting that SR can be regarded as a variant of self-training due to its structural similarity, except that it takes the target sentences rather than the source sentences as input to the model. The algorithm itself is the key difference from existing methods based on self-training (Wang, 2019; Nie et al., 2019; Xie et al., 2020).

One challenge of this approach is that the base model may consistently make inaccurate corrections. We thus incorporate a fail-safe mechanism as a sub-component that restores the original target sentence if the GEC model makes incorrect corrections (lines 5-9). For example, in cases such as in Table 1, the base model may predict every instance as "discuss about". In this step, to determine whether to accept the output $\mathbf{Y}'$ of the base model as the new target sentence, we compare the perplexity of the model output, $\mathrm{PPL}(\mathbf{Y}')$, with that of the original target sentence, $\mathrm{PPL}(\mathbf{Y})$. Language models are trained on native-written corpora, so they can reasonably be assumed to contain the information needed to estimate grammaticality; we believe a measure of perplexity is a straightforward way to exploit this information.

# 5 Experiments

We evaluate the proposed method in two ways. First, we exclusively focus on investigating the effectiveness of the proposed denoising method (Section 5.3).
Then, we compare our strongest model trained on the denoised data (henceforth, the denoised model) with current best-performing models to investigate whether the proposed method is complementary to existing task-specific techniques (Section 5.4).

# 5.1 Configurations

Dataset For training, we used the same datasets as in Section 3: BEA-train, EFCamDat, and Lang-8. In addition, we used the BEA official validation set (henceforth, BEA-valid) provided in the BEA-2019 workshop as validation data. The characteristics of the datasets are summarized in Table 4. For preprocessing, we tokenized the training data using the spaCy tokenizer$^{5}$. Then, we removed sentence pairs where the two sentences were identical or both were longer than 80 tokens. Finally, we acquired subwords from the target sentences via the byte-pair-encoding (BPE) algorithm (Sennrich et al., 2016b). We used the subword-nmt implementation$^{6}$ and then applied BPE to split both source and target texts. The number of merge operations was set to 8,000.

| Dataset | #sent (pairs) | Split | Scorer |
| --- | --- | --- | --- |
| BEA-train | 561,100 | train | - |
| EFCamDat | 2,269,595 | train | - |
| Lang-8 | 5,689,213 | train | - |
| BEA-valid | 2,377 | valid | - |
| CoNLL-2014 | 1,312 | test | M² scorer & GLEU |
| JFLEG | 747 | test | M² scorer & GLEU |
| BEA-test | 4,477 | test | ERRANT |

Table 4: Summary of datasets used in our experiments.

Evaluation To investigate the effectiveness of the proposed method, we followed Mita et al. (2019) and evaluated the performance of the GEC models across various GEC datasets using the same evaluation metrics. We report the results measured by both the $\mathbf{M}^2$ scorer (Dahlmeier and Ng, 2012)$^{7}$ and the GLEU metric (Napoles et al., 2015, 2016)$^{8}$ on both the CoNLL-2014 test set and the JFLEG test set (Napoles et al., 2017). All reported results (except those corresponding to the ensemble models) are the average of three distinct trials using three different random seeds. Let us emphasize that our focus is on denoising the training data; denoising the test data is out of the scope of this study. The commonly used test sets, such as CoNLL-2014 and JFLEG, have multiple references, which can lower the noise factor. In addition to having multiple references, both JFLEG and CoNLL-2014 were specifically constructed for GEC evaluation, while the training data (Lang-8 and EFCamDat) are more of an organic collection of learner and editor interactions. We therefore believe it is reasonable to assume that the test data are considerably cleaner.

Model We employed the "Transformer (big)" settings of Vaswani et al. (2017) using the implementation in the fairseq toolkit (Ott et al., 2019). Details on the hyper-parameters are listed in Appendix B. As the language model for the fail-safe mechanism, we used the PyTorch implementation of GPT-2 (Radford et al., 2019)$^{9}$. Note that, to avoid a preference for shorter phrases, we normalized the perplexity by sentence length.

# 5.2 Baselines

As argued in Section 4, we hypothesized that filtering-based denoising approaches are not well-suited for GEC. To verify this hypothesis, we employed the following three filtering-based denoising baselines in addition to a base model trained on the noisy parallel data $\mathcal{D}$ (henceforth, no denoising).
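The length-normalized perplexity used by the fail-safe mechanism (and, with the same motivation, by the LM filtering baseline below) can be sketched as follows; obtaining per-token log-probabilities from GPT-2 is assumed and not shown:

```python
import math

def normalized_perplexity(token_logprobs):
    """Length-normalized perplexity from per-token natural-log
    probabilities; normalizing by length avoids a bias toward
    shorter sentences."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))
```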
Cross-entropy filtering (CE filtering) The dual conditional cross-entropy filtering method was proposed by Junczys-Dowmunt (2018) and achieved the highest performance on the noisy parallel corpus filtering task at WMT 2018 (Koehn et al., 2018). In this study, we prepared forward and reverse pre-trained models using the BEA-train dataset to adapt the filtering method to GEC. We obtained the filtered data by removing the $20\%$ of sentence pairs$^{10}$ with higher scores from the training data and used the rest for training.

Sentence-level error detection filtering (SED filtering) Asano et al. (2019) demonstrated the effectiveness of a sentence-level error detection (SED) model as a filtering tool to preprocess GEC input. Considering these findings, we adopted SED as a filtering-based denoising method for the training data. More specifically, we discarded a source-target sentence pair in the noisy parallel data $\mathcal{D}$ if the SED model predicted the target sentence to be incorrect. Following Asano et al. (2019), we obtained binary-labeled data from the BEA-train dataset to prepare a training set for the SED model, and then fine-tuned BERT (Devlin et al., 2019) on the prepared data.

Language model filtering (LM filtering) Language model-based filtering builds on the hypothesis that if the perplexity of a target sentence is larger than that of the source sentence, the target sentence is more likely to contain noise. LM filtering thus has the same motivation as the fail-safe mechanism. We used GPT-2 as the pre-trained language model.

# 5.3 Results

Table 5 shows the results of the main experiment. SR significantly outperformed the others, including the three strong denoising baselines, on multiple datasets.
| Model | CoNLL-2014 F0.5 | CoNLL-2014 GLEU | JFLEG F0.5 | JFLEG GLEU |
| --- | --- | --- | --- | --- |
| *BEA-train (BEA):* | | | | |
| No denoising BEA | 49.6 | 63.3 | 58.7 | 52.3 |
| CE filtering BEA | 42.9 | 61.0 | 52.7 | 49.0 |
| SED filtering BEA | 45.1 | 62.7 | 55.6 | 52.2 |
| LM filtering BEA | 47.1 | 63.0 | 58.3 | 52.6 |
| SR BEA (Ours) | 50.3 | 64.2 | 60.5 | 54.8 |
| *EFCamDat (EF):* | | | | |
| No denoising EF | 40.3 | 61.3 | 59.5 | 53.7 |
| CE filtering EF | 40.9 | 61.5 | 59.8 | 54.2 |
| SED filtering EF | 26.5 | 54.0 | 47.5 | 49.7 |
| LM filtering EF | 41.2 | 61.7 | 59.7 | 54.2 |
| SR EF (Ours) | 48.4 | 63.5 | 63.9 | 57.1 |
| *Lang-8 (L8):* | | | | |
| No denoising L8 | 54.9 | 65.9 | 68.4 | 58.1 |
| CE filtering L8 | 54.1 | 65.3 | 68.6 | 58.2 |
| SED filtering L8 | 55.7 | 67.1 | 68.5 | 60.7 |
| LM filtering L8 | 55.9 | 66.3 | 68.6 | 59.1 |
| SR L8 (Ours) | 56.5 | 67.7 | 68.6 | 61.0 |
| *BEA+EF:* | | | | |
| No denoising BEA+EF | 49.1 | 63.4 | 62.0 | 53.9 |
| CE filtering BEA+EF | 49.6 | 63.3 | 61.9 | 54.5 |
| SED filtering BEA+EF | 51.2 | 64.9 | 62.8 | 56.7 |
| LM filtering BEA+EF | 48.3 | 63.3 | 62.3 | 54.7 |
| SR BEA+EF (Ours) | 54.5 | 65.2 | 65.5 | 58.0 |
| *BEA+EF+L8:* | | | | |
| No denoising BEA+EF+L8 | 56.1 | 65.7 | 67.0 | 56.9 |
| CE filtering BEA+EF+L8 | 55.0 | 66.0 | 68.6 | 58.2 |
| SED filtering BEA+EF+L8 | 56.1 | 67.3 | 67.7 | 60.3 |
| LM filtering BEA+EF+L8 | 56.7 | 65.9 | 68.0 | 57.8 |
| SR BEA+EF+L8 (Ours) | 58.8 | 68.0 | 70.6 | 61.4 |

Table 5: Results of denoising experiments with cross-corpora evaluation: a bold value indicates the best result for each training dataset.

Applying SR to EFCamDat, for instance, yielded a large performance improvement over no denoising (e.g., no denoising EF = 40.3 vs. SR EF = 48.4 on CoNLL-2014 in terms of $\mathrm{F}_{0.5}$). Notably, we observed a similar trend when using the BEA-train and Lang-8 datasets as training data, which indicates that SR is potentially effective for any corpus, not only EFCamDat.

Furthermore, we compared the effectiveness of SR to the other denoising methods. The filtering-based methods, such as SED and LM filtering, generally achieved better results than the baseline models; however, they resulted in lower performance on smaller datasets such as BEA-train. This could be because these filtering methods filtered out training instances containing not only noise but also many correct corrections that may still be partially useful for training. Table 6 shows the size of each training dataset after filtering.

![](images/c21bde311d9f9aecd465eaf025fa3e2e6dd67a57985dc9d28393c328ab6a09db.jpg)
(a) Precision

![](images/7c7f6c5608cb657ee6156b06c061677164018b82a6961ed3e159b64cfd930cb6.jpg)
(b) Recall

Figure 1: Increases and decreases in precision and recall for the denoising methods when no denoising is set to 0.

Figure 1 shows the increases and decreases in precision and recall when the performance without denoising is set to 0. The results reveal a consistent pattern underlying the denoising effect: reducing the noise with SR has little impact on precision but significantly improves recall, indicating improved coverage of corrections. We provide a detailed analysis of this point in Section 6.4.

# 5.4 Comparison with Existing Models

In the second experiment, we compared our best denoised model with the current best-performing models to investigate whether SR works well with existing task-specific techniques. We incorporated task-specific techniques that have been widely used in shared tasks such as BEA-2019 and WMT-2019 into the proposed denoised model to further improve performance. For these techniques, we followed the work reported by Kiyono et al. (2019), as detailed below.

Pre-training with pseudo data (PRET) Kiyono et al. (2019) investigated the applicability of incorporating pseudo data into the model and confirmed the reliability of their proposed settings by showing acceptable performance on several datasets. We trained the proposed model using their pre-trained model "PRETLARGE+SSE" settings$^{12}$.
+ +Right-to-left re-ranking (R2L) R2L is a common approach used to improve model performance by re-ranking using right-to-left models trained in the reverse direction (Sennrich et al., 2016a, 2017) in MT. More recently, previous studies confirmed the effectiveness of this approach when applied to GEC (Ge et al., 2018; Grundkiewicz et al., 2019). We adapted R2L to the proposed model. Specifically, we generated $n$ -best hypotheses using an + +
| Filtering method | BEA-train | EFCamDat | Lang-8 |
| --- | --- | --- | --- |
| CE filtering | 448,880 (∇ 20.0%) | 1,815,676 (∇ 20.0%) | 4,551,370 (∇ 20.0%) |
| SED filtering | 317,957 (∇ 43.3%) | 1,250,744 (∇ 44.9%) | 3,314,440 (∇ 41.7%) |
| LM filtering | 456,347 (∇ 18.7%) | 1,936,238 (∇ 14.7%) | 4,651,085 (∇ 18.2%) |

Table 6: The size of the filtered data. The numbers in parentheses indicate the reduction rate for each dataset.
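The LM-filtering baseline summarized in Table 6 keeps only the training pairs whose target sentence a language model scores as sufficiently fluent. A minimal sketch; the `dummy_ppl` scorer and the fixed threshold are illustrative assumptions, not the paper's exact criterion:

```python
def lm_filter(pairs, ppl, threshold=100.0):
    """Keep (source, target) pairs whose target the LM deems fluent.

    pairs: iterable of (source, target) sentence pairs.
    ppl: callable returning a language-model perplexity (lower = more fluent).
    threshold: assumed cutoff; the paper's exact criterion may differ.
    """
    return [(x, y) for x, y in pairs if ppl(y) < threshold]

# Toy usage with a stand-in perplexity function.
def dummy_ppl(sentence):
    return 50.0 if sentence.endswith(".") else 150.0

data = [("i go school", "I go to school ."), ("noisy", "noisy target")]
print(lm_filter(data, dummy_ppl))  # keeps only the first pair
```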
| Model | CoNLL-2014 F0.5 | CoNLL-2014 GLEU | JFLEG F0.5 | JFLEG GLEU | BEA F0.5 |
| --- | --- | --- | --- | --- | --- |
| **Single model:** | | | | | |
| Junczys-Dowmunt et al. (2018) | 53.0 | - | - | 57.9 | - |
| Lichtarge et al. (2019) | 56.8 | - | - | 61.6 | - |
| Awasthi et al. (2019) | 59.7 | - | - | 60.3 | - |
| Kiyono et al. (2019) | 61.3 | 68.6 | 71.3 | 59.7 | 64.2 |
| SR +PRET+SED | 61.4 | 69.3 | 72.5 | 63.3 | 65.5 |
| **Ensemble model:** | | | | | |
| Junczys-Dowmunt et al. (2018) | 55.8 | - | - | 59.9 | - |
| Lichtarge et al. (2019) | 60.4 | - | - | 63.3 | - |
| Grundkiewicz et al. (2019) | 64.2 | - | - | 61.2 | 69.5 |
| Kiyono et al. (2019) | 65.0 | 68.8 | 72.9 | 61.4 | 70.2 |
| SR +PRET+R2L+SED | 63.1 | 69.8 | 73.9 | 63.7 | 67.8 |
Table 7: Comparison with existing top models: a bold value denotes the best result within the column. SR and BEA indicate SR BEA+EF+L8 and BEA-test, respectively.

ensemble of four left-to-right (L2R) models and then re-scored the hypotheses with right-to-left models trained in the reverse direction. We then re-ranked the $n$-best hypotheses based on the sum of the two scores.

Sentence-level error detection (SED) SED is used to identify whether a given sentence contains any grammatical errors. Following the work presented by Asano et al. (2019), we employed a strategy for reducing the number of false positives: the GEC model is applied only to sentences in which the SED model detects grammatical errors. We implemented the same model employed for SED filtering.

We evaluated the performance of the proposed best denoised model combined with the task-specific techniques on the three existing benchmarks, CoNLL-2014, JFLEG, and BEA-test, and then compared the scores with existing best-performing models. Table 7 shows the results for both the single and the ensemble models after applying PRET, SED $^{13}$ , and R2L to SR $^{14}$ . Since the reference of BEA-test is not publicly available, we evaluated the models on CodaLab $^{15}$ under the rules of the BEA-2019 workshop. We confirmed that our best denoised model works complementarily with existing task-specific techniques, as compared with the performance presented in Table 5. As a result, our best denoised model achieved state-of-the-art performance on the CoNLL-2014, JFLEG, and BEA-2019 benchmarks. Notably, the proposed model achieved state-of-the-art results on the JFLEG benchmark for both single-model (GLEU = 63.3) and ensemble (GLEU = 63.7) settings. We provide a detailed analysis of this result in Section 6.5.
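The R2L re-ranking step above can be sketched as choosing the hypothesis that maximizes the summed left-to-right and right-to-left scores over the $n$-best list; the dict-backed scorers below are stand-ins for the actual model ensembles:

```python
def rerank_r2l(hypotheses, l2r_score, r2l_score):
    """Pick the hypothesis maximizing the sum of L2R and R2L model scores.

    hypotheses: n-best list produced by the left-to-right ensemble.
    l2r_score / r2l_score: callables returning (log-prob) scores; here
    plain dict lookups stand in for the real model ensembles.
    """
    return max(hypotheses, key=lambda h: l2r_score(h) + r2l_score(h))

# Toy usage: the grammatical hypothesis wins once R2L scores are added in.
hyps = ["He go to school .", "He goes to school ."]
l2r = {"He go to school .": -4.0, "He goes to school .": -4.5}
r2l = {"He go to school .": -6.0, "He goes to school .": -3.0}
print(rerank_r2l(hyps, l2r.__getitem__, r2l.__getitem__))
```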
# 6 Analysis

# 6.1 Noise Reduction

To evaluate the quality of the dataset after denoising, a researcher with a high level of English proficiency (not involved with this work) manually evaluated 500 triples of source sentences $\mathbf{X}$ , original target sentences $\mathbf{Y}$ , and generated target sentences $\mathbf{Y}'$ obtained by applying SR to EFCamDat, satisfying $\mathbf{X} \neq \mathbf{Y} \neq \mathbf{Y}'$ (Table 8). We can see that $73.6\%$ of the replaced samples were determined to be appropriate corrections, including cases where both were correct. For reference, we provide examples of a confusion set before and after denoising in the
| Category | Sentence |
| --- | --- |
| **1: Improved by denoising (66.4%)** | |
| X: | how about to going to movie . |
| Y: | How about to going to movie . |
| Y': | How about going to a movie . |
| **2: Both are correct (7.2%)** | |
| X: | I'm twenty-nine old. |
| Y: | I'm twenty-nine years old. |
| Y': | I'm 29 years old. |
| **3: Meaning is not preserved (10.4%)** | |
| X: | you need keep calm. |
| Y: | You need to keep calm. |
| Y': | You need to be calm. |
| **4: Added unnecessary information (8.8%)** | |
| X: | The are a few of chair and desk. |
| Y: | There are a few chairs and desks. |
| Y': | There are a few chairs and desks too. |
| **5: Contains errors (3.8%)** | |
| X: | There are very positive news for us. |
| Y: | There is very positive news for us . |
| Y': | There is a very positive news for us . |
| **6: Lack of fluency (3.4%)** | |
| X: | I go in my work on the bike. |
| Y: | I go to work by bike. |
| Y': | I go to work on my bike. |
+ +Table 8: Result of manual evaluation. Samples of input sentences $(\mathbf{X})$ , original target sentences $(\mathbf{Y})$ and generated target sentences by our methods $(\mathbf{Y}^{\prime})$ . + +Appendix D. + +# 6.2 Effect of the Fail-safe Mechanism + +Next, we quantitatively and qualitatively analyzed the effectiveness of the fail-safe mechanism integrated into SR. + +Quantitatively, Table 9 provides the results of the ablation study of the fail-safe mechanism on CoNLL-2014. Our main proposal was to include a self-refining step to clean up training data, but we found that the added fail-safe mechanism serves as a sub-component to further improve performance. + +Qualitatively, we directly observed the decisions of the fail-safe mechanism and how it affected denoising. Table 10 provides examples for cases when SR activates and deactivates the fail-safe mechanism in EFCamDat. In the upper example (Table 10-1), *discuss of in the source sentence should have been corrected to discuss; however, it was inaccurately edited to *discuss about in the target sentence. In this case, SR succeeded in selecting the correct model output with a lower perplexity without activating the fail-safe mechanism. On the other hand, in the lower example, the model made + +an incorrect "correction" $(^{*}in\rightarrow at)$ . However, SR successfully activated the fail-safe mechanism and thus retained the correct original target sentence. + +# 6.3 Benefits from Self-training + +SR performed surprisingly well considering its simplicity. One reason might be that SR benefited from the advantages of self-training, as it could be regarded as a variant of self-training (Section 4). He et al. 
(2020) investigated the effect of self-training in neural sequence generation and found that the dropout in the pseudo-training step (namely, the training step of the denoised model in this study) played an important role in providing a smoothing effect, meaning that semantically similar inputs were mapped to the same or similar targets. As GEC also ideally assumes that similar contexts yield consistent targets, this smoothing effect could contribute to avoiding overfitting and to improving the fit to the target distribution in the pseudo-training step. In fact, we confirmed that performance deteriorated when dropout was not applied in the training step of the denoised model, as shown in Table 11. In the case of relatively noisy data such as EFCamDat and Lang-8, the performance was better than without denoising, even without dropout. This could be explained by the presence of the denoising effect that was the objective of this study.

# 6.4 On the Increase of Recall

A pattern emerged when denoising with SR: recall significantly increased, while precision was mostly maintained (Figure 1). To clarify this observation, we manually assessed the amount of noise before and after denoising. Specifically, in the same way as in Section 3, we asked the expert to review 500 samples of the target sentences before denoising ($\mathbf{Y}$) and after denoising ($\hat{\mathbf{Y}}$). We then calculated the amount of noise using WER (Eq. 1). As a result, we observed a decrease in WER from $43.2\%$ to $31.3\%$ after denoising. This can be interpreted as follows: (i) a large part of the noise consisted of uncorrected errors, and (ii) denoising corrected the model's bias towards leaving errors unedited, resulting in higher recall.

# 6.5 Facilitating Fluency Edits

The results presented in Table 7 indicate that the proposed denoised model tends to (i) perform better
| Model | EF Prec. | EF Rec. | EF F0.5 | BEA+EF Prec. | BEA+EF Rec. | BEA+EF F0.5 | BEA+EF+Lang-8 Prec. | BEA+EF+Lang-8 Rec. | BEA+EF+Lang-8 F0.5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| No denoising | 48.5 | 24.0 | 40.3 | 58.5 | 30.0 | 49.1 | 62.8 | 39.2 | 56.1 |
| SR w/o fail-safe | 49.8 | 32.3 | 44.3 | 57.4 | 41.0 | 53.1 | 59.5 | 45.8 | 56.1 |
| SR | 55.2 | 32.4 | 48.4 | 61.2 | 38.0 | 54.5 | 61.7 | 49.4 | 58.8 |
+ +Table 9: Ablation study of the fail-safe mechanism. + +
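The noise assessment in Section 6.4 relies on the word error rate between target sentences. A minimal word-level Levenshtein sketch; normalizing by the reference length is an assumption here, since Eq. 1's exact normalization may differ:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance between a reference
    and a hypothesis, divided by the reference length (assumed
    normalization; Eq. 1 in the paper defines the exact form)."""
    r, h = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit-distance table.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # deletions
    for j in range(len(h) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(r)

# Toy usage on a corrected/uncorrected pair from Table 8.
print(wer("I go to work by bike .", "I go in my work on the bike ."))
```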
| Example | ppl. |
| --- | --- |
| **1: Fail-safe deactivates:** | |
| X: By the way, I have to *discuss of the education. | 94.65 |
| Y: By the way, I have to *discuss about education. | 79.64 |
| Y': By the way, I have to discuss education. | 73.37 |
| **2: Fail-safe activates:** | |
| X: Then I was treated in the hospital for one month. | 34.34 |
| Y: I was treated in the hospital for one month. | 32.42 |
| Y': I was treated *at the hospital for one month. | 33.59 |
+ +Table 10: Examples of input sentences $(X)$ , original target sentences $(Y)$ and generated target sentences by our methods $(Y^{\prime})$ when our method activates and deactivates the fail-safe in EFCamDat. ppl. indicates perplexity. + +
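The perplexity comparison illustrated in Table 10 can be sketched as follows. The language model behind `ppl` and the strict-inequality tie-breaking are assumptions; the fail-safe simply retains the original target unless the model's rewrite is more fluent:

```python
def refine_with_failsafe(y_orig, y_model, ppl):
    """Self-refinement with a fail-safe: keep the model's rewrite only
    if the language model finds it more fluent than the original target.

    y_orig: original (possibly noisy) target sentence Y.
    y_model: target sentence Y' proposed by the GEC model.
    ppl: callable returning language-model perplexity (lower = better).
    """
    return y_model if ppl(y_model) < ppl(y_orig) else y_orig

# Toy usage with the perplexities from Table 10, example 2.
scores = {"I was treated in the hospital for one month.": 32.42,
          "I was treated *at the hospital for one month.": 33.59}
kept = refine_with_failsafe(
    "I was treated in the hospital for one month.",
    "I was treated *at the hospital for one month.",
    scores.__getitem__)
print(kept)  # fail-safe activates: the original target is retained
```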
| Model | F0.5 | GLEU |
| --- | --- | --- |
| No denoising BEA | 49.6 | 63.3 |
| SR BEA w/ dropout | 50.3 | 64.2 |
| SR BEA w/o dropout | 49.5 | 63.9 |
| No denoising EF | 40.3 | 61.3 |
| SR EF w/ dropout | 48.4 | 63.5 |
| SR EF w/o dropout | 47.3 | 63.0 |
| No denoising L8 | 54.9 | 65.9 |
| SR L8 w/ dropout | 56.5 | 67.7 |
| SR L8 w/o dropout | 55.6 | 67.5 |
Table 11: Ablation study on the influence of dropout.

on JFLEG and (ii) be rated particularly highly by GLEU compared to other best-performing models. JFLEG was proposed by Napoles et al. (2017) for the development and evaluation of GEC models in terms of fluency as well as grammaticality, i.e., making a sentence more native-sounding. Moreover, they showed that GLEU correlated more strongly with human judgments than $\mathrm{M}^2$ on JFLEG. The fact that SR is rated higher on JFLEG using GLEU than other best-performing models can be interpreted as achieving more fluent editing. One reason might be that SR performs a perplexity check on both the original target sentences and the new ones obtained after denoising, which always results in $\mathrm{PPL}(\boldsymbol{Y}) \geq \mathrm{PPL}(\hat{\boldsymbol{Y}})$ between $\mathcal{D}$ and $\hat{\mathcal{D}}$. Therefore, SR can be expected to refine not only the grammaticality but also the fluency of the target sentences, and as a result, the proposed denoised model is capable of performing more native-sounding corrections.

# 7 Conclusion and Future Work

In this study, we focused on the quality of GEC datasets. The motivation behind our study was the hypothesis that carelessness or insufficient skill on the part of the annotators involved in data annotation often leads to noisy datasets. To address this problem, we presented a self-refinement approach as a simple but effective denoising method that improves GEC performance, and verified its effectiveness by comparing it to both strong filtering-based baselines and current best-performing models. Furthermore, we analyzed how SR affects both GEC performance and the data itself.

Recently, several methods that incorporate pre-trained masked language models such as BERT, XLNet (Yang et al., 2019), and RoBERTa (Liu et al., 2019) into EncDec-based GEC have been proposed and have achieved remarkable results (Kaneko et al., 2020; Omelianchuk et al., 2020).
These approaches modify the model architecture and do not directly compete with the data-driven approaches discussed in this study. Thus, combining these methods can be expected to further improve performance, which we plan to investigate in future work.

# Acknowledgments

We thank the Tohoku NLP laboratory members who provided us with their valuable advice. We are grateful to Tomoya Mizumoto and Ana Brassard for their insightful comments and suggestions.

# References

Hiroki Asano, Masato Mita, Tomoya Mizumoto, and Jun Suzuki. 2019. The AIP-Tohoku System at the BEA-2019 Shared Task. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2019), pages 176-182.

Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, and Vihari Piratla. 2019. Parallel Iterative Edit Models for Local Sequence Transduction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP 2019), pages 4259-4269.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015).

Chao Bei, Hao Zong, Yiming Wang, Baoyong Fan, Shiqi Li, and Conghu Yuan. 2018. An Empirical Study of Machine Translation for the Shared Task of WMT18. In Proceedings of the Third Conference on Machine Translation (WMT 2018): Shared Task Papers, pages 340-344.

Chris Brockett, William B. Dolan, and Michael Gamon. 2006. Correcting ESL Errors Using Phrasal SMT Techniques. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL 2006), pages 249-256.

Christopher Bryant, Mariano Felice, Øistein E. Andersen, and Ted Briscoe. 2019.
The BEA-2019 shared task on grammatical error correction. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2019), pages 52-75. +Shamil Chollampatt and Hwee Tou Ng. 2018. A Multilayer Convolutional Encoder-Decoder Neural Network for Grammatical Error Correction. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI 2018), pages 5755-5762. +Daniel Dahlmeier and Hwee Tou Ng. 2012. Better Evaluation for Grammatical Error Correction. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2012), pages 568-572. +Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a Large Annotated Corpus of Learner English: The NUS Corpus of Learner English. In Proceedings of the 8th Workshop on Building Educational Applications Using NLP (BEA 2013), pages 22-31. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2019), pages 4171-4186. + +Rachele De Felice and Stephen G. Pulman. 2008. A Classifier-Based Approach to Preposition and Determiner Error Correction in L2 English. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING 2008), pages 169-176. +Tao Ge, Furu Wei, and Ming Zhou. 2018. Reaching Human-level Performance in Automatic Grammatical Error Correction: An Empirical Study. arXiv preprint arXiv:1807.01270. +Jeroen Geertzen, Dora Alexopoulou, and Anna Korhonen. 2013. Automatic linguistic annotation of large scale L2 databases: The EF-Cambridge Open Language Database (EFCAMDAT). +Sylviane Granger. 1998. The computer learner corpus: A versatile new source of data for SLA research. In Sylviane Granger, editor, *Learner English on Computer*, pages 3-18. 
+Roman Grundkiewicz, Marcin Junczys-Dowmunt, and Kenneth Heafield. 2019. Neural Grammatical Error Correction Systems with Unsupervised Pre-training on Synthetic Data. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2019), pages 252-263. +Na-Rae Han, Martin Chodorow, and Claudia Leacock. 2006. Detecting Errors in English Article Usage by Non-Native Speakers. Natural Language Engineering, 12(2):115-129. +Junxian He, Jiatao Gu, Jiajun Shen, and Marc'Aurelio Ranzato. 2020. Revisiting Self-Training for Neural Sequence Generation. In International Conference on Learning Representations (ICLR 2020). +Marcin Junczys-Dowmunt. 2018. Dual Conditional Cross-Entropy Filtering of Noisy Parallel Corpora. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers (WMT 2018), pages 888-895. +Marcin Junczys-Dowmunt, Roman Grundkiewicz, Shubha Guha, and Kenneth Heafield. 2018. Approaching Neural Grammatical Error Correction as a Low-Resource Machine Translation Task. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2018), pages 595-606. +Masahiro Kaneko, Masato Mita, Shun Kiyono, Jun Suzuki, and Kentaro Inui. 2020. Encoder-Decoder Models Can Benefit from Pre-trained Masked Language Models in Grammatical Error Correction. arXiv preprint arXiv:2005.00987. +Huda Khayrallah and Philipp Koehn. 2018. On the Impact of Various Types of Noise on Neural Machine Translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 74-83. + +Diederik Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015). +Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizumoto, and Kentaro Inui. 2019. An Empirical Study of Incorporating Pseudo Data into Grammatical Error Correction. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP 2019), pages 1236-1242. +Philipp Koehn, Huda Khayrallah, Kenneth Heafield, and Mikel L. Forcada. 2018. Findings of the WMT 2018 Shared Task on Parallel Corpus Filtering. In Proceedings of the Third Conference on Machine Translation (WMT 2018): Shared Task Papers, pages 726-739. +Philipp Koehn and Rebecca Knowles. 2017. Six Challenges for Neural Machine Translation. In Proceedings of the First Workshop on Neural Machine Translation (WMT 2018), pages 28-39. +Jared Lichtarge, Chris Alberti, and Shankar Kumar. 2020. Data weighted training strategies for grammatical error correction. arXiv preprint arXiv:2008.02976. +Jared Lichtarge, Chris Alberti, Shankar Kumar, Noam Shazeer, Niki Parmar, and Simon Tong. 2019. Corpora Generation for Grammatical Error Correction. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2019), pages 3291-3301. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692. +Yu-Chun Lo, Jhih-Jie Chen, Chingyu Yang, and Jason Chang. 2018. Cool English: a Grammatical Error Correction System Based on Large Learner Corpora. In Proceedings of the 27th International Conference on Computational Linguistics (COLING 2018): System Demonstrations, pages 82-85. +Masato Mita, Tomoya Mizumoto, Masahiro Kaneko, Ryo Nagata, and Kentaro Inui. 2019. Cross-Corpora Evaluation and Analysis of Grammatical Error Correction Models — Is Single-Corpus Evaluation Enough? In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2019), pages 1309–1314. 
Tomoya Mizumoto, Mamoru Komachi, Masaaki Nagata, and Yuji Matsumoto. 2011. Mining Revision Log of Language Learning SNS for Automated Japanese Error Correction of Second Language Learners. In Proceedings of the 5th International Joint Conference on Natural Language Processing (IJCNLP 2011), pages 147-155.

Ryo Nagata, Atsuo Kawai, Koichiro Morihiro, and Naoki Isu. 2006. A feedback-augmented method for detecting errors in the writing of learners of English. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL 2006), pages 241-248.

Courtney Napoles, Keisuke Sakaguchi, Matt Post, and Joel Tetreault. 2015. Ground Truth for Grammatical Error Correction Metrics. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL & IJCNLP 2015), pages 588-593.

Courtney Napoles, Keisuke Sakaguchi, Matt Post, and Joel Tetreault. 2016. GLEU Without Tuning. arXiv preprint arXiv:1605.02592.

Courtney Napoles, Keisuke Sakaguchi, and Joel Tetreault. 2017. JFLEG: A Fluency Corpus and Benchmark for Grammatical Error Correction. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017), pages 229-234.

Feng Nie, Jin-Ge Yao, Jinpeng Wang, Rong Pan, and Chin-Yew Lin. 2019. A simple recipe towards reducing hallucination in neural surface realisation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), pages 2673-2679.

Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem Chernodub, and Oleksandr Skurzhanskyi. 2020. GECToR - Grammatical Error Correction: Tag, Not Rewrite. arXiv preprint arXiv:2005.12592.

Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019.
fairseq: A Fast, Extensible Toolkit for Sequence Modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2019), pages 48-53.

Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.

Nick Rossenbach, Jan Rosendahl, Yunsu Kim, Miguel Graça, Aman Gokrani, and Hermann Ney. 2018. The RWTH Aachen University Filtering System for the WMT 2018 Parallel Corpus Filtering Task. In Proceedings of the Third Conference on Machine Translation (WMT 2018): Shared Task Papers, pages 946-954.

Rico Sennrich, Alexandra Birch, Anna Currey, Ulrich Germann, Barry Haddow, Kenneth Heafield, Antonio Valerio Miceli Barone, and Philip Williams. 2017. The University of Edinburgh's neural MT systems for WMT17. In Proceedings of the Second Conference on Machine Translation (WMT 2017), pages 389-399.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Edinburgh neural machine translation systems for WMT 16. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers (WMT 2016), pages 371-376.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), pages 1715-1725.

Rico Sennrich and Biao Zhang. 2019. Revisiting Low-Resource Neural Machine Translation: A Case Study. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), pages 211-221.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems 27 (NIPS 2014), pages 3104-3112.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision.
In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), pages 2818-2826.

Toshikazu Tajiri, Mamoru Komachi, and Yuji Matsumoto. 2012. Tense and Aspect Error Correction for ESL Learners Using Global Context. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL 2012), pages 198-202.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In Advances in Neural Information Processing Systems 30 (NIPS 2017), pages 5998-6008.

Hongmin Wang. 2019. Revisiting challenges in data-to-text generation with fact grounding. In Proceedings of the 12th International Conference on Natural Language Generation (INLG 2019), pages 311-322.

Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V. Le. 2020. Self-training with noisy student improves imagenet classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020).

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32 (NIPS 2019), pages 5753-5763.

Helen Yannakoudakis, Øistein E. Andersen, Ardeshir Geranpayeh, Ted Briscoe, and Diane Nicholls. 2018. Developing an Automated Writing Placement system for ESL Learners. Applied Measurement in Education, 31(3):251-267.

Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A New Dataset and Method for Automatically Grading ESOL Texts. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL 2011), pages 180-189.

Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, and Jingming Liu. 2019.
Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2019), pages 156-165. + +# A The BEA-2019 official dataset + +The BEA-2019 Shared Task provided participants with the following datasets as official datasets: Lang-8 (Mizumoto et al., 2011; Tajiri et al., 2012), the National University of Singapore Corpus of Learner English (NUCLE) (Dahlmeier et al., 2013), the First Certificate in English corpus (Yannakoudakis et al., 2011), and W&I+LOCNESS (Yannakoudakis et al., 2018; Granger, 1998). The official dataset is publicly available at https://www.cl.cam.ac.uk/research/nl/bea2019st/. + +# B Hyper-parameter settings + +
| Configurations | Values |
| --- | --- |
| Model Architecture | Transformer (Vaswani et al., 2017) |
| Optimizer | Adam (Kingma and Ba, 2015) |
| Learning Rate Schedule | Same as described in Section 5.3 of Vaswani et al. (2017) |
| Number of Epochs | 30 |
| Dropout | 0.3 |
| Stopping Criterion | Train the model for 30 epochs; during training, save model parameters every 500 updates. |
| Gradient Clipping | 1.0 |
| Loss Function | Label-smoothed cross entropy (Szegedy et al., 2016) |
| Beam Search | Beam size 5 with length normalization |
Table 12: Detailed hyper-parameters used for the base GEC model.

# C Preliminary experiment of the cross-entropy filtering

We investigated the effectiveness of changing the threshold of CE filtering by evaluating the model performance on BEA-valid. In this study, we prepared forward and reverse pre-trained models using BEA-train and CoNLL-2013 as the training and validation sets, respectively.

![](images/1a32aecf259ebd845b3bca5165d73e6c7dadf351d9b00ca9cc609252ea38d89c.jpg)
Figure 2: Performance of the base GEC model on BEA-valid as the threshold of CE filtering is varied.

# D Examples of a confusion set before and after denoising

Table 13 provides examples of a confusion set before and after applying the denoising method to EFCamDat. Using the proposed denoising, we succeeded in reducing noisy confusion sets in the target sentences, such as (*discuss about, *discuss about) and (*enter in, *enter in).
| Confusion set (X, Y) | Y (%) | Ŷ (%) |
| --- | --- | --- |
| (*discuss about, *discuss about) | 66.7 | 49.5 |
| (*discuss about, discuss) | 33.0 | 50.2 |
| (*discuss about, *discuss in) | 0.3 | 0.3 |
| (*enter in, *enter in) | 61.6 | 31.7 |
| (*enter in, enter) | 38.4 | 68.3 |
+ +Table 13: Examples of confusion set before and after denoising in EFCamDat. + +# E Results of comparison with existing models + +# F Ablation study of SED + +
| Model | CoNLL-2014 Prec. | CoNLL-2014 Rec. | CoNLL-2014 F0.5 | CoNLL-2014 GLEU | JFLEG Prec. | JFLEG Rec. | JFLEG F0.5 | JFLEG GLEU | BEA-test Prec. | BEA-test Rec. | BEA-test F0.5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Single model:** | | | | | | | | | | | |
| Junczys-Dowmunt et al. (2018) | - | - | 53.0 | - | - | - | - | 57.9 | - | - | - |
| Lichtarge et al. (2019) | 65.5 | 37.1 | 56.8 | - | - | - | - | 61.6 | - | - | - |
| Awasthi et al. (2019) | 66.1 | 43.0 | 59.7 | - | - | - | - | 60.3 | - | - | - |
| Kiyono et al. (2019) | 67.9 | 44.1 | 61.3 | 68.6 | 76.6 | 55.8 | 71.3 | 59.7 | 65.5 | 59.4 | 64.2 |
| Kaneko et al. (2020) | 69.2 | 45.6 | 62.6 | - | - | - | - | 61.3 | 67.1 | 60.1 | 65.6 |
| Omelianchuk et al. (2020) | 77.5 | 40.1 | 65.3 | - | - | - | - | - | 79.2 | 53.9 | 72.4 |
| SR BEA+EF+L8+PRET | 63.8 | 52.4 | 61.1 | 69.6 | 74.9 | 62.5 | 72.0 | 63.4 | 59.9 | 66.9 | 61.2 |
| **Ensemble model:** | | | | | | | | | | | |
| Junczys-Dowmunt et al. (2018) | 61.9 | 40.2 | 55.8 | - | - | - | - | 59.9 | - | - | - |
| Lichtarge et al. (2019) | 66.7 | 43.9 | 60.4 | - | - | - | - | 63.3 | - | - | - |
| Grundkiewicz et al. (2019) | - | - | 64.2 | - | - | - | - | 61.2 | 72.3 | 60.1 | 69.5 |
| Kiyono et al. (2019) | 72.4 | 46.1 | 65.0 | 68.8 | 79.5 | 54.6 | 72.9 | 61.4 | 74.7 | 56.7 | 70.2 |
| Kaneko et al. (2020) | 72.6 | 46.4 | 65.2 | - | - | - | - | 62.0 | 72.3 | 61.4 | 69.8 |
| Omelianchuk et al. (2020) | 78.2 | 41.5 | 66.5 | - | - | - | - | - | 78.9 | 58.2 | 73.6 |
| SR BEA+EF+L8+PRET+R2L | 65.5 | 53.2 | 62.6 | 70.1 | 76.5 | 63.3 | 73.4 | 63.9 | 62.9 | 67.7 | 63.8 |
Table 14: Comparison with existing models: a bold value denotes the best result within the column. SR and BEA indicate SR BEA+EF+L8 and BEA-test, respectively. Kaneko et al. (2020) and Omelianchuk et al. (2020) appeared on arXiv less than three months before our submission and are considered contemporaneous work.
| Model | CoNLL-2014 Prec. | CoNLL-2014 Rec. | CoNLL-2014 F0.5 | CoNLL-2014 GLEU | JFLEG Prec. | JFLEG Rec. | JFLEG F0.5 | JFLEG GLEU | BEA-test Prec. | BEA-test Rec. | BEA-test F0.5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Single model:** | | | | | | | | | | | |
| SR BEA+EF+L8+PRET | 63.8 | 52.4 | 61.1 | 69.6 | 74.9 | 62.5 | 72.0 | 63.4 | 59.9 | 66.9 | 61.2 |
| SR BEA+EF+L8+PRET+SED | 65.2 | 49.9 | 61.4 | 69.3 | 76.3 | 60.6 | 72.5 | 63.3 | 66.7 | 61.3 | 65.5 |
| **Ensemble model:** | | | | | | | | | | | |
| SR BEA+EF+L8+PRET+R2L | 65.5 | 53.2 | 62.6 | 70.1 | 76.5 | 63.3 | 73.4 | 63.9 | 62.9 | 67.7 | 63.8 |
| SR BEA+EF+L8+PRET+R2L+SED | 67.1 | 50.8 | 63.1 | 69.8 | 77.8 | 61.5 | 73.9 | 63.7 | 69.4 | 62.1 | 67.8 |
+ +Table 15: Ablation study of SED \ No newline at end of file diff --git a/aselfrefinementstrategyfornoisereductioningrammaticalerrorcorrection/images.zip b/aselfrefinementstrategyfornoisereductioningrammaticalerrorcorrection/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..2e1c6068b81a2602af0ee5c9eb19406fbdf93981 --- /dev/null +++ b/aselfrefinementstrategyfornoisereductioningrammaticalerrorcorrection/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:48aa2d096c1c672f73c74ccd27c43c54615495bc2191a989e86645f783e0e1b0 +size 710356 diff --git a/aselfrefinementstrategyfornoisereductioningrammaticalerrorcorrection/layout.json b/aselfrefinementstrategyfornoisereductioningrammaticalerrorcorrection/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..0edc2cc31242e73d960bcc172ef74384aa2d252d --- /dev/null +++ b/aselfrefinementstrategyfornoisereductioningrammaticalerrorcorrection/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad656fbdc7747ffc4df08ec6b93c351f924278e647fe39d2dd2bc0e832bc5f10 +size 437057 diff --git a/asemanticsbasedapproachtodisclosureclassificationinusergeneratedonlinecontent/01377718-cc31-410b-a4a1-8b796b499c82_content_list.json b/asemanticsbasedapproachtodisclosureclassificationinusergeneratedonlinecontent/01377718-cc31-410b-a4a1-8b796b499c82_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..d2e197ed0eac43cdd15d21c59eaf0412b5a279ef --- /dev/null +++ b/asemanticsbasedapproachtodisclosureclassificationinusergeneratedonlinecontent/01377718-cc31-410b-a4a1-8b796b499c82_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6fc7c19464a2517840096ca31335cbd52a7c525d196092394af89f8dce734458 +size 68755 diff --git a/asemanticsbasedapproachtodisclosureclassificationinusergeneratedonlinecontent/01377718-cc31-410b-a4a1-8b796b499c82_model.json 
b/asemanticsbasedapproachtodisclosureclassificationinusergeneratedonlinecontent/01377718-cc31-410b-a4a1-8b796b499c82_model.json new file mode 100644 index 0000000000000000000000000000000000000000..7d03b8438985af3bf8f1291e83e107d43aeb253a --- /dev/null +++ b/asemanticsbasedapproachtodisclosureclassificationinusergeneratedonlinecontent/01377718-cc31-410b-a4a1-8b796b499c82_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6047886acfc68f9f08929082b9c070256ae4aebb7fb65320cdc7efb1275b0acb +size 83508 diff --git a/asemanticsbasedapproachtodisclosureclassificationinusergeneratedonlinecontent/01377718-cc31-410b-a4a1-8b796b499c82_origin.pdf b/asemanticsbasedapproachtodisclosureclassificationinusergeneratedonlinecontent/01377718-cc31-410b-a4a1-8b796b499c82_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6fcd2221a269261da767281f49a121e6685764de --- /dev/null +++ b/asemanticsbasedapproachtodisclosureclassificationinusergeneratedonlinecontent/01377718-cc31-410b-a4a1-8b796b499c82_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e6efca9520ae6b2df3e8b864c9d3acace18ca595c82ec584ebd5b233ace288b5 +size 427533 diff --git a/asemanticsbasedapproachtodisclosureclassificationinusergeneratedonlinecontent/full.md b/asemanticsbasedapproachtodisclosureclassificationinusergeneratedonlinecontent/full.md new file mode 100644 index 0000000000000000000000000000000000000000..88899d14272972817dc9303411cb732ea558a449 --- /dev/null +++ b/asemanticsbasedapproachtodisclosureclassificationinusergeneratedonlinecontent/full.md @@ -0,0 +1,296 @@ +# A Semantics-based Approach to Disclosure Classification in User-Generated Online Content + +Chandan Akiti, Anna Squicciarini, Sarah Rajtmajer + +Pennsylvania State University + +{cra5302,acs20,smr48}@psu.edu + +# Abstract + +As users engage in public discourse, the rate of voluntarily disclosed personal information has seen a steep increase. 
So-called self-disclosure can result in a number of privacy concerns. Users are often unaware of the sheer amount of personal information they share across online forums, commentaries, and social networks, as well as the power of modern AI to synthesize and gain insights from this data. This paper presents an approach to detect emotional and informational self-disclosure in natural language. We hypothesize that identifying frame semantics can meaningfully support this task. Specifically, we use Semantic Role Labeling to identify the lexical units and their semantic roles that signal self-disclosure. Experimental results on Reddit data show the performance gain of our method when compared to standard text classification methods based on BiLSTM, and BERT. In addition to improved performance, our approach provides insights into the drivers of disclosure behaviors. + +# 1 Introduction + +With the growth of social networking sites and increased user engagement with public discourse online, heightened rates of user disclosure of personal information (henceforth, self-disclosure) have raised privacy and security concerns. Prior research (Keep et al., 2012) suggests that self-disclosure may be more common online thanks to the relative anonymity afforded in this environment and the lack of non-verbal cues to signal thoughts or feelings. Users are often unaware of the aggregate amount of personal information they share, as well as the power of modern AI to synthesize and gain insights from this data. + +Automating the process of identification and classification of private information in text is challenging (Abril et al., 2011). A large volume of textual data needs to be processed, and a number + +![](images/55dbb068f5b8b7883a0749426397f029d00aa27fcfade137270d191ec2319dea.jpg) +Figure 1: SRL of a sentence containing emotional disclosure. 
of real-time requirements need to be met (Agerri et al., 2015; Singh and Nene, 2013), and significant ambiguities arise from nuanced use of natural language.

In this work, we adopt the existing framework of Semantic Role Labeling to support self-disclosure identification and classification. Semantic role labeling (SRL) aims to recognize all predicate-argument pairs in a given sentence, along with the roles the arguments play with respect to its predicates (usually verbs). SRL is a task with numerous applications in Natural Language Processing (NLP), such as Question Answering (Abujabal et al., 2017), Information Extraction (Christensen et al., 2011), Machine Translation (Xiong et al., 2012), text-to-scene generation (Coyne et al., 2012), dialog systems (Chen et al., 2013), and social-network extraction (Agarwal et al., 2014). We hypothesize that the inclusion of semantic frames can provide valuable context for the detection of self-disclosure. Our code is available here1.

Self-disclosure in social media can take two non-exclusive forms: emotional disclosure, in which the user reveals their feelings towards something or someone; and informational disclosure, where the user reveals objective personal information, e.g., age, career, or address. In the following, we propose an approach to detecting emotional and informational self-disclosure in text. Specifically, we leverage the structured representations of frame semantics. Our method outperforms standard classification methods based on CNN, BiLSTM, and BERT by $9\%$ for emotional disclosure and $2\%$ for informational disclosure.

# 2 Related Work

Detection of private and sensitive information from user texts has been studied extensively. However, approaches to date appear to be either confined to specific application domains or targeted to specific identifying attributes.
Many automated methods for detection of self-disclosure rely on the presence of first-person pronouns, disregard context, and suffer from poor generalizability (Caliskan Islam et al., 2014; Wang et al., 2016a; Vasalou et al., 2011; Bak et al., 2014; Chow et al., 2008; Choi et al., 2013).

Fundamentally, most studies equate disclosure with the revelation of explicitly private information (Wang et al., 2016b). We posit that this frame is insufficient to capture the breadth of victimization that can result from voluntarily shared personal information (e.g., cyberbullying (Joinson and Paine, 2007)) and, critically, harms supported by increasingly powerful inference algorithms operating on massive-scale longitudinal datasets (e.g., targeting, manipulation (Paramarta et al., 2018)).

Recent advances in language models have shown improved applicability to classification tasks. BERT, a deep bidirectional transformer built on the Transformer architecture (Vaswani et al., 2017), achieved state-of-the-art results on numerous NLP tasks (Devlin et al., 2018). We use BERT as a baseline in this paper. Mehdy et al. (2019) proposed a method to detect disclosures of private information in natural language text through linguistically motivated artificial neural networks. However, these models do not provide insights into the drivers of disclosure. Sundar et al. (2020) propose heuristics to predict information disclosure, but these heuristics are not exhaustive.

Gildea and Jurafsky (2002) first introduced the task of detecting the semantic frames evoked in text (Semantic Role Labeling; SRL), along with their arguments, later formalized in Baker et al. (2007). There are several SRL annotation conventions, such as PropBank (Palmer et al., 2005) and FrameNet (Baker et al., 2007). PropBank provides a more general role labeling, whereas FrameNet provides much denser annotations with more than 1,200 frame types. Several studies have explored SRL with deep learning techniques (Guan et al., 2019).
Sikos and Padó (2018) show that the semantic frames defined in FrameNet can be extended across languages.

Apart from this, several studies have applied SRL features to other Natural Language Processing tasks. Marzinotto et al. (2019) adapted a FrameNet semantic parser for spoken language understanding using adversarial learning. Abujabal et al. (2017) used semantic parsing to generate templates for question answering tasks. Christensen et al. (2011) used semantic role labeling to extract relations in text without predefining a domain or vocabulary. Xiong et al. (2012) utilized the predicate-argument structure of semantic role labeling to enhance Machine Translation. Coyne et al. (2012) extend the existing FrameNet database to bridge visual cues with semantic frames for the text-to-scene generation task. Chen et al. (2013) used semantic parsers to automatically induce and fill semantic slots in dialog systems. Finally, Agarwal et al. (2014) extract social networks from unstructured text using FrameNet-defined tree kernel representations.

Our work is motivated in part by Tenney et al. (2019), who show that BERT contains elements of the natural language processing pipeline: POS tagging, parsing, NER, semantic roles, and coreference. We explore semantic role labeling specifically for the disclosure detection problem.

# 3 Frame Semantics

The theory of Frame Semantics asserts that people understand the meaning of words largely by the frames which they evoke. The frames represent story fragments, which serve to connect a group of words to a bundle of meanings; for example, the term avenger evokes the Revenge frame, which describes a complex series of events and a group of participants. The study of Frame Semantics attempts to define frames and the "participants and props" involved in each of them.

A frame is composed of lexical units with frame elements. A lexical unit (LU) is a pairing of a word with a meaning.
Typically, each sense of a word belongs to a different semantic frame, a script-like conceptual structure that describes a particular type of situation, object, or event along with its participants and props. For example, the Apply_heat frame describes a common situation involving a Cook, Food, and a Heating Instrument. These semantic roles are referred to as frame elements (FEs).

Frame-evoking words are LUs in the Apply_heat frame. This frame is evoked by words such as bake, blanch, boil, broil, brown, simmer, steam, etc.

The FrameNet (Baker and Sato, 2003; Ruppenhofer et al., 2006) lexical database currently contains more than 13,000 lexical units, around 7,000 of which are hierarchically annotated. A total of approximately 1,200 semantic frames are exemplified in more than 200,000 annotated sentences.

# 4 Frame Semantics for Disclosure Detection

We approach the problem of disclosure detection through the learning of semantic-role based labels common to disclosure. The intuition behind Semantic Role Labeling is to assign semantic roles consistent with the frame semantics predefined in the FrameNet (Baker and Sato, 2003; Ruppenhofer et al., 2006) database. Accordingly, SRL models recover the latent predicate-argument structure of a sentence.

Exemplar sentences and frame semantics are shown in Figures 1 and 2. Target words and text spans are highlighted in the sentence, and their lexical units are shown italicized below. Frames are shown in colored blocks, and frame element segments appear horizontally alongside the frame.

The SRL-labeled sentence in Figure 1 provides an example of a sentence containing emotional disclosure. The frame Emotion_Active is invoked by the predicate "worried". This frame has two lexical units containing the words "I" and "worried". The lexical unit "I" is assigned the semantic role of Experiencer. We call Experiencer a frame-element of the Emotion_Active frame.
Clearly, Emotion_Active with an Experiencer as "I" leads to a self-disclosure of emotion. + +![](images/c9634a0a7667b4ab0c2f483e55fbece13691a988e9585dda54467d84dc8bbd09.jpg) +Figure 2: SRL of a sentence containing informational disclosure + +Figure 2 shows the case of multiple frames invoked by different predicates. The Make_Agreement_On_Action is invoked by the predicate "agree". This frame has multiple lexical units, but two frame elements. The frame element Party_1 is assigned to "I" and Obligation is + +assigned to the span "with all the other points you made". This frame supports informational self-disclosure. + +Our model predicts disclosure in a sentence based on the semantic frames present. We formulate our disclosure classification model as follows. A sentence $S$ contains a set of semantic frames $\mathcal{F} = \{\mathcal{F}_1, \mathcal{F}_2, \ldots, \mathcal{F}_m\}$ where $m \leq M$ . Every semantic frame $\mathcal{F}_j$ has a frame identification $I_j$ and frame elements set $E_j = \{E_{j1}, E_{j2}, \ldots, E_{jk}\}$ where $k \leq K$ and $E_{jk} \in \mathcal{E}$ represents $k^{th}$ frame element of $j^{th}$ semantic frame in the sentence. $\mathcal{E}$ is a set of pre-defined frame elements in FrameNet. In our formulation of the problem, the sentence $S$ contains a disclosure if at least one of the frames $\mathcal{F}$ contained in $S$ is associated with disclosure, according to our classifier. Formally, + +$$ +D (S) = \sigma \left(\max _ {f \in \mathcal {F}} D ^ {\prime} (f)\right) \tag {1} +$$ + +where $\mathcal{F}$ is the set of semantic frames in the sentence. $D^{\prime}$ is a disclosure-frame classification function which takes frame $f\in \mathcal{F}$ as the argument. $D$ is the disclosure-sentence classification function for $S$ and $\sigma$ is the classification function. 
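Equation 1 can be sketched in a few lines of Python; the two-dimensional frame vectors and the linear scorer standing in for the learned classifier $D^{\prime}$ below are illustrative toys, not the paper's trained model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def frame_score(frame_vec, w, b):
    """D'(f): a linear stand-in for the learned disclosure-frame classifier."""
    return sum(fi * wi for fi, wi in zip(frame_vec, w)) + b

def sentence_disclosure(frames, w, b):
    """D(S) = sigma(max_f D'(f)): the sentence is scored by its
    most disclosure-like frame (Eq. 1)."""
    return sigmoid(max(frame_score(f, w, b) for f in frames))
```

The max-pooling encodes the formulation's key assumption: a single disclosing frame suffices to mark the whole sentence as a disclosure.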
# 5 Semantic Frame Embedding

The FrameNet project (Baker and Sato, 2003; Ruppenhofer et al., 2006) has developed a lexicon of more than 1,200 semantic frames and a corpus of sentences annotated with frames. We use the FrameNet database to extract semantic frames from the sentences in our dataset. Frame-semantic parsing is a pipeline of three sub-tasks: predicate identification (Which words evoke the frames?); frame identification (Which frames does each predicate evoke?); and argument (frame-element) identification (Which spans of the text fill possible roles from $\mathcal{E}$?). Predicate (target) identification is usually treated as a classification problem.

For the purpose of frame semantics extraction, we use open-SESAME (SEmi-markov Softmax-margin ArguMEnt parser; Swayamdipta et al. (2017)), a framework that provides a pipeline for the three steps mentioned above. Open-SESAME uses a Bi-LSTM to classify whether each word in the sentence is a predicate. For each detected predicate (mapped to all possible spans in the sentence), the framework classifies the semantic frame invoked using another Bi-LSTM. Then the framework uses a segmental RNN (SegRNN; Kong et al. (2015)) for predicting frame-elements for the semantic frames detected in the previous step.

# 5.1 Frame-semantic feature representation

![](images/255b8763e1b06d08f411551e8fc7d46bf37ac311b5f52752a4bff50e414e3d7b.jpg)
Figure 3: Frame embedding with extracted SRL from the open-SESAME (Swayamdipta et al., 2017) pipeline. The input token embeddings are shown in black, and the input frame and frame-element embeddings are shown in purple. The token bi-LSTM hidden states are shown in green. Grey cells represent the frame-element embedding. Finally, the embedding of the frame is the sum of all frame-elements concatenated (in the figure) with the embedding of the frame identification name.
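As a toy illustration of the construction in Figure 3, the sketch below assembles a frame representation from a frame-name/predicate pair and a list of frame elements; the short vectors stand in for word embeddings, the frame elements are averaged (the $\frac{1}{K}\sum$ of Eq. 3 below), and the learned projection layer is omitted:

```python
def concat(u, v):
    return u + v  # vector concatenation [u; v]

def mean(vectors):
    # component-wise average over equal-length vectors
    return [sum(c) / len(vectors) for c in zip(*vectors)]

def embed_frame(frame_name_vec, predicate_vec, frame_elements):
    """Toy version of the frame embedding in Figure 3:
    P_j = [I_j; w_j]; FE_j averages [E_jk; s_jk] over the frame elements;
    the model then projects [P_j; FE_j] with a learned layer (omitted here).
    frame_elements: list of (element_name_vec, span_vec) pairs."""
    P = concat(frame_name_vec, predicate_vec)
    FE = mean([concat(e, s) for e, s in frame_elements])
    return concat(P, FE)
```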
A semantic role labeling of a sentence $S$ is a set of semantic frames $\mathcal{F} = \{\mathcal{F}_1,\dots ,\mathcal{F}_m\}$ where $m\leq M$. Every semantic frame $\mathcal{F}_j$ has a name $I_{j}$ and a frame elements set $E_{j} = \{E_{j1},\ldots ,E_{jk}\}$ where $k\leq K$. $E_{jk}$ is the $k^{th}$ frame element of the $j^{th}$ semantic frame in the sentence $S$. A frame-element has a name and a span of the sentence.

We represent the frame semantics in sentence $S$ as the set of embeddings for each semantic frame. A semantic frame is represented as a combination of two parts. The first part is the predicate embedding $P_{j}$, a concatenation of the word embedding for the frame name $I_{j}$ and the word embedding for the predicate $w_{j}$.

$$
P_{j} = \left[ I_{j}; w_{j} \right] \tag{2}
$$

where $I_{j}$ is the frame name and $w_{j}$ is the corresponding predicate.

The second part of the semantic frame embedding is the frame-element embedding $FE_{j}$. The embedding for the frame-elements set $E_{j}$ is calculated as the combination of the embeddings for all frame elements $E_{j1}, E_{j2}, \ldots, E_{jk}$. The embedding for each frame element $E_{jk}$ is a concatenation of word embeddings for the frame-element name and the corresponding span.

$$
FE_{j} = \frac{1}{K} \sum_{k=1}^{K} \left[ E_{jk}; s_{jk} \right] \tag{3}
$$

where $s_{jk}$ is the span for the $k^{th}$ frame-element and $E_{jk}$ is the frame-element name.

Thus, a frame is embedded as

$$
\mathcal{F}_{j} = W_{f} \left[ P_{j}; FE_{j} \right] + b_{f} \tag{4}
$$

where $P_{j}$ is the predicate-frame embedding and $FE_{j}$ is the frame-elements embedding. $W_{f}, b_{f}$ are weight and bias parameters for a fully connected layer.

Thus, the sentence $S$ with $M$ frames has the frame semantic representation

$$
S = \left[ \mathcal{F}_{1}, \mathcal{F}_{2}, \dots, \mathcal{F}_{M} \right].
\tag{5}
$$

# 5.2 Classification model

In this section, we discuss our model for the disclosure function described in Eq. 1. The semantic frame representations extracted in Section 5.1 are stacked to form the sentence representation $\left[\mathcal{F}_1,\mathcal{F}_2,\dots ,\mathcal{F}_M\right]$. We model the function $D^{\prime}$ as a multi-layer perceptron applied to each semantic frame. The max function in Eq. 1 is modeled as a MaxPool layer that outputs the maximum activation over all frames.

![](images/00ced8c2df9c1cc856096187e5f0d5b4d06b4fcacf7f1fd5d2d06ca0f426f180.jpg)
Figure 4: Classification model: The frame representation is shown with two red cells. The green cell is the sentence representation. $\sigma$ is a SoftMax layer whose output is sent to the max function.

The output of the max layer is normalized with SoftMax again, and the maximum likelihood loss of the final two outputs is optimized.

Through this approach, we classify emotional disclosure and informational disclosure. Gold training data from Affcon 2020 is used as the labeled data.

# 6 Experimental Evaluation

# 6.1 Dataset

Reddit2 is a popular discussion forum platform consisting of a large number of subreddits focusing on different topics and interests. The Reddit dataset (Jaidka et al., 2020) consists of 12,860 labeled sentences and 5,000 unlabeled sentences. The sentences are sampled from comments in two subreddits: r/CasualConversation, a 'friendlier' sub-community where people are encouraged to share what is on their mind about any topic, and r/OffMyChest, intended as a mutually supportive community where 'deeply emotional things people cannot tell others they know can be told'. The topics of the collected posts are limited to relationships, with the following tags: "wife"; "girlfriend"; "gf"; "husband"; "boyfriend" and "bf". The statistics of the data from each community are detailed in Table 1.
| Label | r/OffMyChest | r/CasualConversation |
| --- | --- | --- |
| Emo | 2449 | 1499 |
| Info | 2749 | 1742 |
| Total | 7613 | 5247 |

Table 1: Dataset statistics: This table shows the number of Emotional and Information disclosure sentences.

The dataset contains six gold labels for each sentence: emotional disclosure; information disclosure; support; general support; information support; and emotional support. For the purpose of this paper, we only use gold labels of emotional and information disclosure.
| Label | Frequency |
| --- | --- |
| Emotional | 0.31 |
| Informational | 0.38 |
+ +Table 2: Dataset statistics: label frequency. + +The Open-SESAME framework assumes a grammatically correct sentence input for which parts-of-speech can be extracted easily. However, Reddit data is prone to ungrammatical sentences, particularly in long paragraphs. To ameliorate this, we exclude from our analysis sentences with more than 50 words. For our dataset, our model provides frame-semantics with $M = 6$ and $K = 5$ . + +# 6.2 Semantic frames and frame elements closely linked with SD + +In this section, we study frame relevance to emotional and informational disclosure. We operationalize the relevance of a frame as a correlation with emotional (or informational) disclosure. For this analysis, we take the Term Frequency - Inverse Document Frequency (TF-IDF) representation of + +each sentence with respect to the semantic frames evoked in the sentences. With the TF-IDF representation as features of the sentences, we calculate and normalize feature importance using a random forest classifier. In Figure 5, we show the semantic frame relevance for the top 40 most relevant semantic frames to emotional and informational disclosure. + +![](images/63de7b7e5cb03b24f819a8c93214c3fe239c9e300051d93f39d9027892f25f07.jpg) +Figure 5: Frame relevance with Emotional disclosure and Informational disclosure + +As reported, frames related to emotion (Feeling, Emotion directed) show high correlation with emotional disclosure as compared to informational disclosure. For example, "The parent is INFURIATED" is a sentence containing the word 'INFURIATED' which evokes 'Emotion directed' frame. + +We also see the frame 'Kinship,' a frame that is evoked by kinship relational words has a high correlation to the informational disclosure as compared to emotional disclosure. For example, "Matilde is Gilberto's COUSIN" is a sentence containing the word 'COUSIN' which evokes 'Kinship' frame. 
The information disclosed in this sentence is that Matilde and Gilberto are related by a kinship relation called cousin.

Although disclosure is not directly determined by the frame invoked, the observations above are strong motivation for us to explore the method of Section 5, which embeds this information in our classification model for high performance.

# 7 Baseline

# 7.1 B1: biLSTM model

We use the Bidirectional LSTM text classification model as our baseline. This model is an extension of the traditional LSTM: it trains two LSTMs (instead of one), the first on the input sequence as-is and the second on a reversed copy of the input sequence. This provides a forward and backward context for each token in the input sentence. biLSTMs provide competitive performance on text classification tasks (Devlin et al., 2018).

We use Glove.6B.200d embedding for the input tokens. The model is trained with an Adam optimizer and a learning rate of 5e-4 for ten epochs. The results are shown in Tables 3 and 4. As shown, informational disclosure labels are detected with higher accuracy, achieving a 0.61 F1 score, mostly due to a higher precision rate than what we report for emotional disclosure labels.

# 7.2 B2: BERT model

BERT is a deeply bidirectional, unsupervised language representation, pre-trained (Devlin et al., 2018) using only a plain text corpus from BooksCorpus (800M words) (Zhu et al., 2015) and English Wikipedia (2,500M words). This makes it particularly suitable for our baseline task, as it allows us to input training text as-is, without imposing predefined and possibly biased features or setting hyper-parameters that would require further analysis.

We train the bert-base-uncased version of BERT (12 layers, with a hidden size of 768 and 12 self-attention heads) with an Adam optimizer and a learning rate of $1e-5$ for two epochs. The results of this baseline method are shown in Tables 3 and 4.
# 8 Model Classification Results

In our model A1, we use Glove.6B.200d embedding for the input tokens. The frame-semantic feature representation of each frame in the sentence is extracted as described in Section 5.1. A regular dropout layer would not regularize the activations independently for the equivalent features in different frames. Accordingly, we add a spatial 1D-Dropout layer after the frame-semantic embedding layer to help regularize the model. We apply a 1D convolution with 32 kernels after the Dropout layer. The output of this layer is passed to a MaxPool layer applied to each kernel output. Finally, the output of the MaxPool layer is passed to a fully-connected layer and a classification (Sigmoid) layer.

Another variation of our model replaces the Glove+biLSTM layers with an ELMO (Peters et al., 2018) or BERT (Devlin et al., 2018) embedding. ELMO is a shallow bidirectional model; BERT, unlike ELMO, is a deeply bidirectional model. We present two results with these layers as the contextual word embedding layers for extracting frame-semantic representations.
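A NumPy sketch of the A1 classification head described above; the weights are random stand-ins rather than trained parameters, the convolution kernel width is assumed to be 1 (the paper does not state it), and dropout is omitted as at inference time:

```python
import numpy as np

rng = np.random.default_rng(0)

def a1_head(frames, conv_w, conv_b, fc_w, fc_b):
    """frames: (M, d) stacked frame representations.
    Width-1 conv with 32 kernels -> ReLU -> global max-pool over
    frames -> fully-connected layer -> sigmoid."""
    conv = np.maximum(frames @ conv_w + conv_b, 0.0)   # (M, 32)
    pooled = conv.max(axis=0)                          # (32,) max over frames
    logit = pooled @ fc_w + fc_b
    return 1.0 / (1.0 + np.exp(-logit))                # disclosure probability

M, d = 6, 16                       # M frames per sentence; toy embedding size d
frames = rng.normal(size=(M, d))
conv_w, conv_b = rng.normal(size=(d, 32)), np.zeros(32)
fc_w, fc_b = rng.normal(size=32), 0.0
p = a1_head(frames, conv_w, conv_b, fc_w, fc_b)
```

Because of the global max-pool, the head is invariant to the order (and duplication) of frames, matching the set-like formulation of Eq. 1.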
| Model | Precision | Recall | F1-score |
| --- | --- | --- | --- |
| B1 | 0.43 | 0.67 | 0.53 |
| B2 | 0.48 | 0.68 | 0.57 |
| A1 | 0.55* | 0.72 | 0.62* |
| A1+ELMO | 0.52 | 0.69 | 0.59 |
| A1+BERT | 0.57 | 0.71* | 0.63 |
Table 3: 10-fold cross-validation results for emotional disclosure.
| Model | Precision | Recall | F1-score |
| --- | --- | --- | --- |
| B1 | 0.56 | 0.67 | 0.61 |
| B2 | 0.60 | 0.64 | 0.62 |
| A1 | 0.57 | 0.69* | 0.63* |
| A1+ELMO | 0.58 | 0.66 | 0.62 |
| A1+BERT | 0.59* | 0.69 | 0.64 |
Table 4: 10-fold cross-validation results for informational disclosure.

For the BERT version of our model, we take the frame identification name embedding as the hidden state of the last BERT layer when the input is a tokenized version of the name. The results in Tables 3 and 4 indicate the performance of our model with BERT embeddings. There is a marginal improvement in the F1-score compared to the model with Glove embeddings.

# 9 Ablation

We study the effect of frame-semantic features on the classification task. Our model, when compared with the biLSTM model, improves the F1-score on Emotional Disclosure by $9\%$ and on Informational Disclosure by $2\%$. We considered the contextual word embeddings of tokens in the sentence, the frame identification name, and the frame elements. This ablation study measures the effect of frame-elements on the classification task qualitatively.
| Label | Model | Precision | Recall | F1 |
| --- | --- | --- | --- | --- |
| Emo | B1 | 0.43 | 0.67* | 0.53 |
| Emo | A1-FE | 0.54* | 0.66 | 0.60* |
| Emo | A1 | 0.55 | 0.72 | 0.62 |
| Info | B1 | 0.56 | 0.67* | 0.61* |
| Info | A1-FE | 0.57* | 0.65 | 0.61 |
| Info | A1 | 0.57 | 0.69 | 0.63 |
Table 5: Ablation study for the model without frame elements in the frame-semantic feature representations, comparing the three models B1, A1-FE, and A1 with average scores on 10-fold cross-validation.

We use a smaller version A1-FE of our model. In this version, we remove the frame-element embeddings (the gray cells in Figure 3) from the semantic frame feature representation. Thus, the frame-semantic representation concatenates the embedding of the frame identification name and the predicate. The classification model remains the same as A1 (Figure 4).

Our results are summarized in Table 5. Emotional disclosure classification improves the F1-score by $7\%$ with frame identification features. This improves further $(2\%)$ with frame-element features. This result indicates that frame identification names carry the most information pertaining to Emotional Disclosure.

Informational disclosure classification performs similarly with or without frame-identification features. However, there is an improvement in F1-score of $2\%$ with frame-element features. This improvement suggests that we cannot infer informational disclosure from the frame identification name alone. Frame-elements are crucial for detecting informational disclosure.

# 10 The Role of Conversation in Self-Disclosure

In this section, we explore the effect of peer influence features on disclosure detection in conversation. Given the highly contextual and inherently social character of self-disclosure, we are motivated to explore peer influence as a meaningful signal for this behavior. We incorporate peer effects in models aiming to detect and predict disclosure in conversation, and touch upon some early findings in this direction. Practically, we develop a model that addresses the problem of predicting disclosure in a given comment using the comment text and peer influence features extracted from previous comments.

We have completed an early exploration of conversational modeling of the effects of peer influence in Reddit conversations. We augmented our original Reddit dataset (Jaidka et al., 2020) with the missing responses from the original comments using the Python Reddit API Wrapper (PRAW) $^3$ . We sample 1200 conversations (about 1600 users) from the comment trees and manually annotate the emotional and informational disclosure in comments using Mechanical Turk, with consensus from three workers. We labeled the dataset with three labels, No Disclosure vs Low Disclosure vs High Disclosure, for each of emotional and informational disclosure. We then calculate reliability metrics $^4$ , which indicate high reliability scores for the binary No Disclosure (no) vs Disclosure (low or high) labels.
| Type | Reliability metric | Info | Emo |
| --- | --- | --- | --- |
| No vs Low vs High | Fleiss Kappa | 0.484 | 0.242 |
| | Gwet's AC1/AC2 | 0.631 | 0.317 |
| Binary | Fleiss Kappa | 0.653 | 0.394 |
| | Gwet's AC1/AC2 | 0.701 | 0.644 |
Table 6: Inter-annotator reliability scores, indicating high reliability for the binary labels of emotional and informational disclosure.

A Reddit post is composed of post text written by an author, comments, and votes. A comment is composed of comment text written by an author, reply comments, and votes. Recursive comments within each post form a comment tree. We sample conversations from this comment tree by recursively taking the comments where the successor comment is a direct reply to the predecessor comment. Unlike the original dataset (Jaidka et al., 2020), which provides annotations for sentences sampled from the comments, here we classify the entire comment.
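The conversation sampling described above amounts to enumerating reply chains, i.e., root-to-leaf paths in the comment tree. A minimal sketch using a hypothetical dict-based tree (not PRAW's actual objects):

```python
def conversations(comment):
    """Yield every reply chain (root-to-leaf path) under a comment.
    `comment` is a toy dict: {"text": str, "replies": [comment, ...]}."""
    if not comment["replies"]:
        yield [comment["text"]]
        return
    for reply in comment["replies"]:
        for chain in conversations(reply):
            yield [comment["text"]] + chain

tree = {"text": "c1", "replies": [
    {"text": "c2", "replies": [{"text": "c4", "replies": []}]},
    {"text": "c3", "replies": []},
]}
chains = list(conversations(tree))  # [["c1", "c2", "c4"], ["c1", "c3"]]
```

Each chain is a conversation in which every comment is a direct reply to its predecessor, matching the sampling rule stated above.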
| Disclosure | Positive | Total |
| --- | --- | --- |
| Emo Disclosure | 826 | 1200 |
| Info Disclosure | 543 | 1200 |
Table 7: Label statistics for emotional and informational disclosure.

Our model is a simple modification of BertForSequenceClassification from the HuggingFace library (Wolf et al., 2019). The model predicts disclosure in the fifth comment using the comment text and the following peer influence factors: the number of unique self-disclosures in the last four comments; whether there is a disclosure from the same user in the last four comments; the total number of users in the given conversation; previous disclosure; time elapsed since the most recent prior disclosure; and time elapsed since the most recent prior comment. Elapsed time is normalized to [0, 1], where 1 represents one day or more.

The bert-base-uncased architecture of the BERT model is enhanced with the peer influence features listed above. The pooled output of the BERT model, of dimension 768, is passed through a dropout layer with a dropout rate of 0.05 and then through a linear layer with an output dimension of 16. The peer influence features are appended to this output and passed through a binary classification layer. We train the model using an Adam optimizer with a learning rate of $4 \times 10^{-5}$ for one epoch.

The emotional disclosure classification model achieves an $87.9\%$ F1-score (a $2.2\%$ improvement on the model without peer influence features) and a 0.54 Matthews Correlation Coefficient (MCC) with 5-fold cross-validation. Similarly, the informational disclosure classification model achieves a $73.3\%$ F1-score $(1.5\%$ improvement) and a 0.61 MCC with 5-fold cross-validation.

These early promising results in affect analysis of peer influence on disclosure point to further exploration of frame semantics in conversation/dialogue systems as a promising avenue for future work.
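The fusion head described above can be sketched as follows; the 768-d pooled vector and all weights are random stand-ins (the real model fine-tunes BertForSequenceClassification end to end), and `normalize_elapsed` implements the stated [0, 1] normalization that saturates at one day:

```python
import numpy as np

rng = np.random.default_rng(1)

def normalize_elapsed(seconds):
    # map elapsed time to [0, 1]; one day or more saturates at 1
    return min(seconds / 86400.0, 1.0)

def peer_fusion_head(pooled, peer_feats, W1, b1, W2, b2):
    """Project the 768-d pooled BERT output to 16 dims (dropout omitted
    at inference), append the peer-influence features, then classify."""
    h = pooled @ W1 + b1                     # (16,)
    z = np.concatenate([h, peer_feats])      # (16 + n_peer,)
    return 1.0 / (1.0 + np.exp(-(z @ W2 + b2)))

pooled = rng.normal(size=768)                # stand-in for BERT's pooled output
peer = np.array([2.0, 1.0, 3.0, 1.0,         # illustrative counts/flags
                 normalize_elapsed(3600),    # time since last disclosure
                 normalize_elapsed(90000)])  # time since last comment (> 1 day)
W1, b1 = rng.normal(size=(768, 16)) * 0.05, np.zeros(16)
W2, b2 = rng.normal(size=16 + peer.size), 0.0
p = peer_fusion_head(pooled, peer, W1, b1, W2, b2)
```

Appending the peer features after the 768-to-16 projection keeps the handful of hand-crafted signals from being drowned out by the much wider BERT representation.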
# 11 Conclusion

In this paper, we have presented a study using semantic role labels to support the detection of voluntarily disclosed private information in user-generated text. To the best of our knowledge, ours is the first study performing in-depth semantic analysis to facilitate detection and analysis of self-disclosure. In doing so, we have simultaneously improved upon state-of-the-art performance for detection of disclosure in sentences and furnished meaningful semantic information about tagged disclosures. The success of frame semantics in helping to identify sentences containing disclosure is perhaps unsurprising given its power in distilling meaning from groups of individual words. Yet, our models have potential for more insightful analysis beyond what is presented here. For example, semantic frames across sentences in a comment can be linked in a graph-like structure if the same entities evoke the semantic frames. Moreover, the same can be applied across comments. We will explore these graph-based approaches in future work.

# References

Daniel Abril, Guillermo Navarro-Arribas, and Vicenç Torra. 2011. On the declassification of confidential documents. In Proceedings of the 8th International Conference on Modeling Decisions for Artificial Intelligence, MDAI'11, pages 235-246, Berlin, Heidelberg. Springer-Verlag.
Abdalghani Abujabal, Mohamed Yahya, Mirek Riedewald, and Gerhard Weikum. 2017. Automated template generation for question answering over knowledge graphs. In Proceedings of the 26th International Conference on World Wide Web, WWW '17, pages 1191-1200, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.
Apoorv Agarwal, Sriramkumar Balasubramanian, Anup Kotalwar, Jiehan Zheng, and Owen Rambow. 2014. Frame semantic tree kernels for social network extraction from text.
In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 211-219, Gothenburg, Sweden. Association for Computational Linguistics. +Rodrigo Agerri, Xabier Artola, Zuhaitz Beloki, German Rigau, and Aitor Soroa. 2015. Big data for natural language processing: A streaming approach. Knowledge-Based Systems, 79:36 - 42. +JinYeong Bak, Chin-Yew Lin, and Alice Oh. 2014. Self-disclosure topic model for classifying and analyzing twitter conversations. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1986-1996, Doha, Qatar. Association for Computational Linguistics. +Collin Baker, Michael Ellsworth, and Katrin Erk. 2007. SemEval-2007 task 19: Frame semantic structure extraction. In Proceedings of the Fourth International + +Workshop on Semantic Evaluations (SemEval-2007), pages 99-104, Prague, Czech Republic. Association for Computational Linguistics. +Collin F. Baker and Hiroaki Sato. 2003. The FrameNet data and software. In *The Companion Volume to the Proceedings of 41st Annual Meeting of the Association for Computational Linguistics*, page 161-164. +Aylin Caliskan Islam, Jonathan Walsh, and Rachel Greenstadt. 2014. Privacy detective: Detecting private information and collective privacy behavior in a large social network. In Proceedings of the 13th Workshop on Privacy in the Electronic Society, WPES '14, pages 35-46, New York, NY, USA. ACM. +Y. Chen, W. Y. Wang, and A. I. Rudnicky. 2013. Unsupervised induction and filling of semantic slots for spoken dialogue systems using frame-semantic parsing. In 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, pages 120-125. +Dongjin Choi, Jeongin Kim, Xeufeng Piao, and Pankoo Kim. 2013. Text analysis for monitoring personal information leakage on twitter. Journal of Universal Computer Science, 19(16):2472-2485. +Richard Chow, Philippe Golle, and Jessica Staddon. 2008. 
Detecting privacy leaks using corpus-based association rules. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '08, pages 893-901, New York, NY, USA. ACM.
Janara Christensen, Mausam, Stephen Soderland, and Oren Etzioni. 2011. An analysis of open information extraction based on semantic role labeling. In Proceedings of the Sixth International Conference on Knowledge Capture, K-CAP '11, pages 113-120, New York, NY, USA. Association for Computing Machinery.
Bob Coyne, Alex Klapheke, Masoud Rouhizadeh, Richard Sproat, and Daniel Bauer. 2012. Annotation tools and knowledge representation for a text-to-scene system. In Proceedings of COLING 2012, pages 679-694, Mumbai, India. The COLING 2012 Organizing Committee.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding.
Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245-288.
Chaoyu Guan, Yuhao Cheng, and Hai Zhao. 2019. Semantic role labeling with associated memory network. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3361-3371, Minneapolis, Minnesota. Association for Computational Linguistics.
Kokil Jaidka, Iknoor Singh, Jiahui Lu, Niyati Chhaya, and Lyle Ungar. 2020. A report of the CL-Aff OffMyChest shared task: Modeling supportiveness and disclosure. In Proceedings of the AAAI-20 Workshop on Affective Content Analysis, New York, USA. AAAI.
A. N. Joinson and C. B. Paine. 2007. Self-disclosure, privacy and the internet. In Oxford Handbook of Internet Psychology.
Melanie Keep, Yu Sun Bin, and Andrew Campbell. 2012. Comparing online and offline self-disclosure: A systematic review. Cyberpsychology, Behavior and Social Networking, 15:103-11.
Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2015. Segmental recurrent neural networks.
Gabriel Marzinotto, Geraldine Damnati, and Frédéric Béchet. 2019. Adapting a FrameNet semantic parser for spoken language understanding using adversarial learning. In *Interspeech 2019*, pages 799-803, Graz, Austria. ISCA.
Nuhil Mehdy, Casey Kennington, and Hoda Mehrpouyan. 2019. Privacy disclosures detection in natural-language text through linguistically-motivated artificial neural networks.
Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71-106.
Valentinus Paramarta, Muhammad Jihad, Ardhian Handoyo, Ika Hapsari, Puspa Sandhyaduhita, and Achmad Hidayanto. 2018. Impact of user awareness, trust, and privacy concerns on sharing personal information on social media: Facebook, Twitter, and Instagram. Pages 271-276.
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
Josef Ruppenhofer, Michael Ellsworth, Myriam Schwarzer-Petruck, Christopher R. Johnson, and Jan Scheffczyk. 2006. FrameNet II: Extended theory and practice.
Jennifer Sikos and Sebastian Padó. 2018. Using embeddings to compare FrameNet frames across languages. In Proceedings of the First Workshop on Linguistic Resources for Natural Language Processing, pages 91-101, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Jayveer Singh and Manisha J. Nene. 2013. A survey on machine learning techniques for intrusion detection systems.
S. Shyam Sundar, Jinyoung Kim, Mary Beth Rosson, and Maria D. Molina. 2020.
Online privacy heuristics that predict information disclosure. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, pages 1-12, New York, NY, USA. Association for Computing Machinery.
Swabha Swayamdipta, Sam Thomson, Chris Dyer, and Noah A. Smith. 2017. Frame-semantic parsing with softmax-margin segmental RNNs and a syntactic scaffold.
Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601, Florence, Italy. Association for Computational Linguistics.
Asimina Vasalou, Alastair J. Gill, Fadhila Mazanderani, Chrysanthi Papoutsi, and Adam Joinson. 2011. Privacy dictionary: A new resource for the automated content analysis of privacy. Journal of the American Society for Information Science and Technology, 62(11):2095-2105.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.
Yi-Chia Wang, Moira Burke, and Robert Kraut. 2016a. Modeling self-disclosure in social networking sites. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, CSCW '16, pages 74-85, New York, NY, USA. ACM.
Yi-Chia Wang, Moira Burke, and Robert Kraut. 2016b. Modeling self-disclosure in social networking sites. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, CSCW '16, pages 74-85, New York, NY, USA. Association for Computing Machinery.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019.
HuggingFace's Transformers: State-of-the-art natural language processing.
Deyi Xiong, Min Zhang, and Haizhou Li. 2012. Modeling the translation of predicate-argument structure for SMT. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 902-911, Jeju Island, Korea. Association for Computational Linguistics.
Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books.
# A Semi-supervised Approach to Generate the Code-Mixed Text using Pre-trained Encoder and Transfer Learning

Deepak Gupta, Asif Ekbal, Pushpak Bhattacharyya

Indian Institute of Technology Patna, India {deepak.pcs16, asif, pb}@iitp.ac.in

# Abstract

Code-mixing, the interleaving of two or more languages within a sentence or discourse, is ubiquitous in multilingual societies. The lack of code-mixed training data is one of the major concerns for the development of end-to-end neural network-based models to be deployed for a variety of natural language processing (NLP) applications. A potential solution is to manually create or crowd-source code-mixed labelled data for the task at hand, but this requires much human effort and is often not feasible because of the language-specific diversity of code-mixed text. To circumvent the data scarcity issue, we propose an effective deep learning approach for automatically generating code-mixed text from English into multiple languages without any parallel data. In order to train the neural network, we create synthetic code-mixed texts from the available parallel corpus by modelling various linguistic properties of code-mixing. Our code-mixed text generator is built upon the encoder-decoder framework, where the encoder is augmented with linguistic and task-agnostic features obtained from a transformer-based language model. We also transfer knowledge from a neural machine translation (NMT) model to warm-start the training of the code-mixed generator.
Experimental results and in-depth analysis show the effectiveness of our proposed code-mixed text generation on eight diverse language pairs.

# 1 Introduction

Multilingual content is very prominent on social media, especially in multilingual communities such as the Indian ones. Code-mixing is a common expression of multilingualism in informal text and speech, where there is a switch between two languages, frequently with one rendered in the character set of the other. It has been a means of communication in multi-cultural and multi-lingual societies, and varies according to the culture, beliefs, and moral values of the respective communities.

Linguists have studied the phenomenon of code-mixing, put forward many linguistic hypotheses (Belazi et al., 1994; Pfaff, 1979; Poplack, 1978), and formulated various constraints (Sankoff and Poplack, 1981; Di Sciullo et al., 1986; Joshi, 1982) to define a general rule for code-mixing. However, for all the scenarios of code-mixing, particularly for syntactically divergent languages (Berk-Seligson, 1986), these constraints cannot be postulated as a universal rule.

In recent times, pre-trained language model based architectures (Devlin et al., 2019; Radford et al., 2019) have become the state of the art for language understanding and generation. The data to train such models comes from huge corpora available in the form of Wikipedia, book corpora, etc. Although such corpora are readily available in various languages, data at this scale is scarce in code-mixed form, which could otherwise be used to train a state-of-the-art transformer-based (Vaswani et al., 2017) language model such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), XLM (Lample and Conneau, 2019), etc.
The existing benchmark datasets for various NLP tasks can also be transformed into a code-mixed setting, and subsequently be leveraged to assess a model's flexibility under the multilingual framework. Creating large-scale code-mixed datasets for such tasks is expensive and time-consuming, as it requires considerable human effort and language expertise to produce them manually. Therefore, it is necessary to build an automated code-mixed generation system capable of modelling this intra-sentential language phenomenon.

In this paper, we formulate the code-mixed phenomenon using a feature-rich and pre-trained language model assisted encoder-decoder paradigm. The feature-rich encoder helps the model capture the linguistic phenomenon of code-mixing, especially deciding when to switch between the two languages. Similarly, the pre-trained language model provides task-agnostic features which help to encode generic information. We adopt a gating mechanism to fuse the features of the pre-trained language model and the encoder. Additionally, we perform transfer learning to learn a prior distribution from a pre-trained NMT model, whose weights are used to initialize the code-mixed generation network. Transfer learning guides the code-mixed generator to produce syntactically correct and fluent sentences.

We summarize the contributions of our work below:

(i). We propose a robust and generic method for code-mixed text generation. Our method exploits the capabilities of linguistic feature-rich encoding and a pre-trained language model assisted encoder to capture code-mixed formation across languages. Our model is further tailored to generate syntactically correct, adequate and fluent code-mixed sentences using the prior knowledge acquired through transfer learning.
(ii).
To warm-start the training, we devise a robust and generic technique to automatically create synthetic code-mixed sentences by modelling the linguistic properties of code-mixing using a parallel corpus. To the best of our knowledge, this is the first attempt at a generic method that produces correct and fluent code-mixed sentences for multiple language pairs. The generated synthetic dataset will be a useful resource for machine translation and multilingual applications.
(iii). We demonstrate with detailed empirical evaluations the effectiveness of our proposed approach on eight different language pairs, viz. English-Hindi (en-hi), English-Bengali (en-bn), English-Malayalam (en-ml), English-Tamil (en-ta), English-Telugu (en-te), English-French (en-fr), English-German (en-de) and English-Spanish (en-es).

# 2 Related Work

In the literature, there have been efforts to create code-mixed texts by leveraging linguistic properties. Pratapa et al. (2018) explored the equivalence constraint theory to construct artificial code-mixed data to reduce the perplexity of an RNN-based language model.

Winata et al. (2018) proposed a multitask learning framework to address the issue of data scarcity in the code-mixed setting. In particular, they leveraged linguistic information using a shared syntax representation, jointly learned over Part-of-Speech (PoS) tagging and language modelling on code-switched utterances. Garg et al. (2018) exploited SeqGAN for the generation of synthetic code-mixed language sequences. Most recently, Winata et al. (2019a) utilized a language-agnostic meta-representation method to represent code-mixed sentences. There are also other studies (Adel et al., 2013a,b, 2015; Choudhury et al., 2017; Winata et al., 2018; Gonen and Goldberg, 2018; Samanta et al., 2019) on code-mixed language modelling.
There are other NLP areas such as part-of-speech tagging (Solorio and Liu, 2008b; Gupta et al., 2017; Patel et al., 2016), sentiment analysis (Rudra et al., 2016; Gupta et al., 2016a), question answering (Gupta et al., 2018b; Chandu et al., 2017), language identification (Solorio et al., 2014; Gupta et al., 2014; Hidayat, 2012; Solorio and Liu, 2008a), and entity extraction (Gupta et al., 2018a; Bhat et al., 2016; Gupta et al., 2016b), where code-mixing phenomena have been explored and analyzed.

In contrast to these existing works, we first provide a linguistically motivated technique to create code-mixed datasets for multiple languages with the help of a parallel corpus (English to the respective language). Thereafter, we utilize this data to develop a neural model that generates code-mixed sentences from English sentences. Our work has a wider scope, as the underlying architecture can be used to harvest code-mixed data for various NLP tasks, not limited to language modelling and speech recognition, which have generally been the focus in the literature. Whereas previous studies considered only a few language pairs for code-mixing, we propose an effective approach that demonstrates its effectiveness in generating code-mixed sentences for eight language pairs of diverse origins and linguistic properties.

# 3 Synthetic Code-Mixed Generation

We follow the matrix language frame (MLF) theory (Myers-Scotton, 1997; Joshi, 1982) to generate the code-mixed text. It is less restrictive and can easily be applied to many language pairs. According to MLF, a code-mixed text will have a
| L1 | Sentence (L1) | L2 | Sentence (L2) | Code-Mixed (L1-L2) |
| --- | --- | --- | --- | --- |
| en | India's agriculture is their main strength. | hi | *(Hindi text garbled in extraction)* | *(garbled)* |
| en | Especially valuable people like Connor Rooney. | bn | *(Bengali text garbled in extraction)* | *(garbled)* |
| en | Glasses and cups, whatever they are, can be turned upside down. | ta | *(Tamil text garbled in extraction)* | *(garbled)* |
| en | Democracy and development go hand in hand. | de | Demokratie und Entwicklung gehen Hand in Hand. | Democracy und Development gehen Hand in Hand. |
| en | We abolish national embassies. | fr | Nous abolissons les ambassades nationales. | Nous abolissons les embassies national. |
Table 1: Samples of code-mixed (L1-L2) sentences generated from parallel sentences in languages L1 and L2.

dominant language (the matrix language) and an inserted (embedded) language. The insertions can be words or larger constituents, and they comply with the grammatical frame of the matrix language. However, random word insertions can lead to the formation of unnatural code-mixed sentences, which are very rare in practice.

A linguistically informed strategy for inserting words or constituents can improve the quality of the code-mixed text. It has also been shown in the literature (Gupta et al., 2018b) that such a strategy benefits the quality of the generated code-mixed text. In our work, we utilize parallel corpora to learn the alignments between English and the other languages. Given a pair of parallel sentences, we identify words on the English side and substitute their aligned counterparts with the identified English words to synthesize English-embedded code-mixed sentences. The input to our synthetic code-mixed generation algorithm (details are in the Appendix) is a parallel sentence pair. We use the Indic-nlp-library$^{1}$ to tokenize the sentences of the Indic languages, and a Moses-based tokenizer$^{2}$ to tokenize the European-language and English texts. Thereafter, we learn the alignment matrix, which guides the selection of the words or phrases to be mixed.

We use the official implementation$^{3}$ of the fast-align algorithm (Dyer et al., 2013) to obtain the alignment matrix. The alignment matrix is used to construct the aligned phrases between the parallel sentences. We extract the PoS tags (mainly adjectives), named entities (NE) and noun phrases (NP) from the English sentences, and insert the corresponding English words into the appropriate places of their counterpart sentences in the other (i.e. target) language.
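The substitution step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the alignment pairs and the spans to embed are hand-written here, whereas the paper obtains them from fast-align and Stanza, and sub-word issues are ignored.

```python
# MLF-style substitution sketch: target-side tokens aligned to selected
# English spans (NEs, noun phrases, adjectives) are replaced by the English
# words, yielding an English-embedded code-mixed sentence.

def make_code_mixed(src_tokens, tgt_tokens, alignments, mix_spans):
    """alignments: set of (src_idx, tgt_idx) pairs.
    mix_spans: half-open (start, end) ranges over src_tokens to embed."""
    replacement = {}  # tgt index -> covering source span
    for s, t in alignments:
        for span in mix_spans:
            if span[0] <= s < span[1]:
                replacement[t] = span
    out, emitted = [], set()
    for t_idx, tok in enumerate(tgt_tokens):
        span = replacement.get(t_idx)
        if span is None:
            out.append(tok)  # keep the matrix-language word
        elif span not in emitted:
            out.extend(src_tokens[span[0]:span[1]])  # embed the English span once
            emitted.add(span)
    return " ".join(out)

src = "When was Mahatma Gandhi born ?".split()
tgt = "mahatma gandhi ka janm kab hua tha ?".split()  # Hindi, romanized for readability
align = {(2, 0), (3, 1), (4, 3), (1, 4), (0, 5)}      # hand-written (src, tgt) links
cm = make_code_mixed(src, tgt, align, mix_spans=[(2, 4)])
print(cm)  # Mahatma Gandhi ka janm kab hua tha ?
```

Choosing which spans to mix (NEs, noun phrases, adjectives) is exactly where the linguistic features come in; everything not covered by a selected span stays in the matrix language.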
We use the Stanford library$^{4}$ Stanza (Qi et al., 2020) to extract these linguistic features.

![](images/b751eb13b8e2104c41f8a965673eee0851abce16893378529ae0292d87d73b97.jpg)
Figure 1: An example of the alignment between a pair of parallel sentences. The aligned words which are mixed in the En-Hi code-mixed (CM) sentence are shown in blue.

We can extract multiple aligned phrases from the alignment matrix. However, in our proposed algorithm, we are interested in aligned words/phrases that are NEs of types 'Person', 'Location' and 'Organization', noun phrases, and adjectives. Consider an example of an En-Hi parallel sentence pair:

- En: When was Mahatma Gandhi born?
- Hi: महात्मा गांधी का जन्म कब हुआ था?
- Code-Mixed (En-Hi): Mahatma Gandhi का जन्म कब हुआ था?

The NE "Mahatma Gandhi" of type 'Person' is mixed into the En-Hi code-mixed sentence.

The need for replacing aligned noun phrases can be understood from the parallel sentences shown in Fig 1. In the given example, 'girl' and 'red umbrella' are the noun phrases$^{6}$ in the English sentence. To obtain the corresponding code-mixed sentence, their aligned Hindi phrases 'लड़की' and 'लाल छाता' need to be replaced with the English counterparts 'girl' and 'red umbrella', respectively. Similarly, we can see the need for choosing adjectives to mix into the code-mixed sentence from the following example:

- En: The situation in Mumbai has not yet come to normal.
- Hi: मुंबई की स्थिति अभी तक सामान्य नहीं हुई है।
- Code-Mixed (En-Hi): Mumbai की situation अभी तक normal नहीं हुई है।

Here the adjective 'normal' is present in the English sentence; to form the corresponding code-mixed sentence, the adjective has to be inserted into it. In this case the corresponding target (i.e.
Hindi here) word 'सामान्य' needs to be replaced with the word 'normal' in the En-Hi code-mixed sentence. We show some samples in Table 1, and give more details in the Appendix.

# 4 Methodology

We depict the architecture of our proposed model in Figure 2.

Problem Statement: Given an English sentence $E$ having $m$ words $e_1, e_2, \ldots, e_m$, the task is to generate the code-mixed sentence $\hat{C}$ having a sequence of $n$ words $\hat{C} = \{y_1, y_2, \ldots, y_n\}$.

# 4.1 Sub-word Vocabulary

Generation with neural networks requires a fixed-size vocabulary. To deal with the problem of out-of-vocabulary (OOV) words, we use Byte-pair encoding (BPE) (Sennrich et al., 2016) and segment words into sub-words. Sub-word tokenization schemes inspired by BPE have become the norm in most advanced models, including the popular family of contextual language models such as XLM (Lample and Conneau, 2019), GPT-2 (Radford et al., 2019), etc. In this work, we process the language pairs with a vocabulary created using BPE.

# 4.2 Feature-rich and Pre-trained Language Model Assisted Encoder

We introduce an encoder which is equipped with both linguistic features and pre-trained language model features. First, we discuss adding the linguistic features to a standard long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) encoder; later, we describe the pre-trained language model feature assisted encoder.

In order to encode the input English sentence, we use a two-layered LSTM network. First, we tokenize the English sentence into sub-word tokens using BPE. Each sub-word is mapped to a real-valued vector through an embedding layer. In addition, we incorporate linguistic features in the form of NE and PoS tags. The motivation to use these linguistic features comes from the synthetic code-mixed text generation (cf.
section 3) itself, where these features guide the generation process by selecting the words to either replace with their aligned English words or keep unchanged in the code-mixed sentence. In neural generation, explicit linguistic features help the decoder decide whether to copy from the English (source) sentence or generate from the vocabulary.

The network takes the concatenation of the word embedding $u_{t}$, the NE encoding $n_t$ and the PoS encoding $p_t$ (discussed shortly) at each time step $t$ and generates the hidden state as follows:

$$
h _ {t} = L S T M \left(h _ {t - 1}, \left[ u _ {t}, n _ {t}, p _ {t} \right]\right) \tag {1}
$$

We compute the forward and backward hidden states $\overrightarrow{h}_i$ and $\overleftarrow{h}_i$, and form the encoder representation as the concatenation of the two, $h_i = [\overrightarrow{h}_i \oplus \overleftarrow{h}_i]$.

Feature Encoding: The NE and PoS features are encoded as real-valued vectors. We initialize the NE and PoS feature representations $n_t$ and $p_t$ at time $t$ with random vectors of size 20. The NE and PoS features are represented by $\{n_1, n_2, \ldots, n_m\}$ and $\{p_1, p_2, \ldots, p_m\}$, respectively.

Pre-trained Language Model Feature: Recent studies have shown the effectiveness of language model pre-training for text generation (Radford et al., 2019; Dong et al., 2019; Song et al., 2019). We utilize the pre-trained features from the cross-lingual language model (XLM) (Lample and Conneau, 2019). The XLM model is trained with three objective functions: Masked Language Modeling (MLM), Causal Language Modeling (CLM), and Translation Language Modeling (TLM). In the CLM objective, the task is to model the probability of a word given the previous words. The MLM objective was introduced by Devlin et al. (2019); the task is to predict masked words in a sentence given the remaining words. The TLM objective is an extension of MLM to parallel sentences.
For the TLM objective, the input is the concatenation of a source sentence and its target sentence; a random word is masked from the concatenated sentence, and the remaining words are used to predict it.

The XLM model, trained with multiple objective functions on different languages together, has shown its effectiveness on cross-lingual classification and machine translation. Because it deals with multiple languages and sets the state of the art in language generation tasks, the pre-trained XLM model is adopted to extract language model features for code-mixed generation, which spans both the cross-lingual and generation paradigms. For the given input sentence $E: \{e_1, e_2, \ldots, e_m\}$, we extract the language model features $L: \{l_1, l_2, \ldots, l_m\}$.

![](images/d8056e1d61472810ac4613907bb4f3418ce3455f44f8a3dcfa58a74684d802ee.jpg)
Figure 2: The architecture of the proposed code-mixed sentence generation model. The left part of the image shows the architecture of the cross-lingual language model (XLM). The XLM feature, along with the linguistic feature obtained from the Bi-LSTM encoder, is passed to the Gated Feature Fusion (GFF) module. The right part of the image demonstrates the working of the GFF module. Note that transfer learning is enabled by initializing the parameters of the proposed model from the pre-trained neural machine translation model.

The extracted language model features are fused with the linguistic features as follows:

$$
h _ {t} ^ {*} = \tanh \left(W _ {h} h _ {t} + b _ {h}\right)
$$

$$
l _ {t} ^ {*} = \tanh (W _ {l} l _ {t} + b _ {l})
$$

$$
g = \sigma \left(W _ {g} \cdot \left[ h _ {t} \oplus l _ {t} \right]\right) \tag {2}
$$

$$
f _ {t} = g \odot h _ {t} ^ {*} + (1 - g) \odot l _ {t} ^ {*}
$$

where $\oplus$ and $\odot$ are the concatenation and elementwise multiplication operators.
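In code, the gated fusion of Eq. (2) amounts to a few lines. The sketch below uses NumPy with illustrative dimensions and randomly initialized parameters (in the model these would be learned end-to-end, and the sizes are assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
d_h, d_l, d_f = 64, 96, 64  # illustrative sizes, not the paper's

# Projection and gate parameters (learned in training; random here).
W_h, b_h = rng.normal(scale=0.1, size=(d_f, d_h)), np.zeros(d_f)
W_l, b_l = rng.normal(scale=0.1, size=(d_f, d_l)), np.zeros(d_f)
W_g = rng.normal(scale=0.1, size=(d_f, d_h + d_l))

def gated_fusion(h_t, l_t):
    """Eq. (2): project both features into a shared space, then mix them
    with an elementwise gate g in (0, 1)."""
    h_star = np.tanh(W_h @ h_t + b_h)  # projected linguistic (Bi-LSTM) feature
    l_star = np.tanh(W_l @ l_t + b_l)  # projected XLM feature
    g = 1.0 / (1.0 + np.exp(-(W_g @ np.concatenate([h_t, l_t]))))  # sigmoid gate
    return g * h_star + (1.0 - g) * l_star

f_t = gated_fusion(rng.normal(size=d_h), rng.normal(size=d_l))
assert f_t.shape == (d_f,) and np.all(np.abs(f_t) < 1.0)
```

Since $f_t$ is an elementwise convex combination of two tanh outputs, every component stays in $(-1, 1)$, which the final assertion checks.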
First, we project both features $h_t$ and $l_t$ into the same vector space, as $h_t^*$ and $l_t^*$, via feed-forward layers. Thereafter, we learn the gate value $g$, which controls how much of each feature contributes to the final encoder representation $f_t$.

# 4.3 Decoding with Pointer Generator

We use a one-layer LSTM network with the attention mechanism (Bahdanau et al., 2015) to generate the code-mixed sentence $y_{1},y_{2},\ldots ,y_{n}$ one word at a time. In order to deal with rare or unknown words, the decoder has the flexibility to copy words from the source via the pointing mechanism (See et al., 2017; Gulcehre et al., 2016). The LSTM decoder reads the word embedding $u_{t - 1}$ and the hidden state $s_{t - 1}$ to generate the hidden state $s_t$ at time step $t$. Concretely,

$$
s _ {t} = L S T M \left(s _ {t - 1}, u _ {t - 1}\right) \tag {3}
$$

Similar to See et al. (2017), we compute the attention distribution $\alpha_{t}$ and context vector $c_{t}$. The generation probability is computed as follows:

$$
p _ {g e n} = \sigma \left(\mathbf {W} _ {\mathbf {a}} c _ {t} + \mathbf {W} _ {\mathbf {b}} s _ {t} + \mathbf {W} _ {\mathbf {u}} u _ {t}\right) \tag {4}
$$

where $\mathbf{W}_{\mathbf{a}}$, $\mathbf{W}_{\mathbf{b}}$ and $\mathbf{W}_{\mathbf{u}}$ are weight matrices and $\sigma$ is the sigmoid function. We also consider copying a word from the English sentence. The probability of copying word $w$ from the English sentence at time $t$ is computed by the following equation:

$$
P _ {c o p y} (w) = \sum_ {i = 1} ^ {m} \alpha_ {t, i} * \mathbf {1} \{w = = w _ {i} \} \tag {5}
$$

where $\mathbf{1}\{w == w_i\}$ denotes the indicator over the $m$ source positions that takes the value 1 where $w == w_i$ and 0 otherwise.
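Eq. (5) simply accumulates attention mass onto the vocabulary ids of the source tokens. A small NumPy sketch with made-up ids and weights (the shapes and numbers are assumptions for illustration only):

```python
import numpy as np

def copy_distribution(attn, src_ids, vocab_size):
    """Eq. (5): P_copy(w) sums the attention weights of every source
    position whose token id equals w."""
    p_copy = np.zeros(vocab_size)
    np.add.at(p_copy, src_ids, attn)  # unbuffered add handles repeated ids
    return p_copy

attn = np.array([0.5, 0.3, 0.2])  # attention over m = 3 source positions
src_ids = np.array([4, 4, 2])     # vocabulary ids of the source tokens
p_copy = copy_distribution(attn, src_ids, vocab_size=6)
assert np.isclose(p_copy[4], 0.8) and np.isclose(p_copy[2], 0.2)
```

The repeated id 4 receives the summed mass 0.5 + 0.3, which is why the unbuffered `np.add.at` is used instead of plain fancy-indexed assignment.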
The final probability distribution over the dynamic vocabulary (the union of the English and code-mixed sentence vocabularies) is calculated as follows:

$$
P(w) = p_{gen} P_{vocab}(w) + (1 - p_{gen}) P_{copy}(w) \tag{6}
$$

# 4.4 Transfer Learning for Code-mixing

Transfer learning improves the performance of a task by using knowledge learned from a closely related task. It has shown promise in solving various problems (Torrey and Shavlik, 2010; Pan and Yang, 2009) by significantly reducing the number of training instances required. In our case, we formulate the problem of code-mixed text generation within the NMT framework. A closer look at a code-mixed sentence reveals that the translated target text $(\mathrm{XX}^7)$ and the code-mixed text (En-XX) share many words. For example:

- Source (En): The situation in Mumbai has not yet come to normal.
- Target (Hi): मुंबई की स्थिति अभी तक सामान्य नहीं हुई है |
- Code-Mixed (En-Hi): Mumbai की situation अभी तक normal नहीं हुई है |

In the above sentences, Target (Hi) and Code-Mixed (En-Hi) share many words. Because of this underlying similarity between machine translation and code-mixed sentence generation, we adapt the transfer learning approach used in machine translation (Zoph et al., 2016; Kocmi and Bojar, 2017) to code-mixed text generation.

We first train an NMT model on a large corpus of parallel sentences, as discussed in Section 3. Next, we initialize the code-mixed text generation model with the trained NMT model, and then train it on the synthetic code-mixed dataset. Rather than initializing the code-mixed model with random parameters, we initialize it with the weights of the NMT model, obtaining a strong prior for code-mixed text generation.
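This warm start amounts to copying every shared, shape-compatible parameter from the trained NMT checkpoint into the code-mixed model before fine-tuning. A sketch using plain name-to-array dicts as a stand-in for real framework checkpoints (all names here are hypothetical):

```python
import numpy as np

def warm_start(cm_params, nmt_params):
    """Initialize the code-mixed generation model from a trained NMT model.

    Parameters shared by both models (same name, same shape) inherit the
    NMT weights; components new to the code-mixed model (e.g. a fusion
    module) keep their random initialization.
    """
    for name, value in nmt_params.items():
        if name in cm_params and cm_params[name].shape == value.shape:
            cm_params[name] = value.copy()  # inherit the NMT prior
    return cm_params
```

Fine-tuning then starts from the NMT prior instead of a random point, which is the transfer-learning effect measured in the ablation study.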
When we train the code-mixed generation model initialized with the weights of the NMT model, it acquires the prior knowledge of translating English sentences into the target language XX, and is then fine-tuned to adapt to the code-mixing phenomenon.

# 5 Results and Analysis

We evaluate the performance of our proposed approach on synthetic code-mixed text from eight different language pairs. The datasets can be found here$^{8}$. We compare the performance of our proposed code-mixed generation model with the (i) Seq2Seq (Sutskever et al., 2014), (ii) Attentive-Seq2Seq (Bahdanau et al., 2015) and (iii) Pointer Generator (See et al., 2017) baselines.

# 5.1 Experimental Setup

In our experiments, we use the same vocabulary for both the encoder and the decoder. For the language pairs en-hi, en-es, en-de and en-fr, we use the BPE codes learned on 15 languages to segment the sentences into sub-words and use this vocabulary to index the sub-words. For the language pairs en-bn, en-ml, en-ta and en-te, we use the BPE codes learned on 100 languages from the XLM model and the corresponding vocabulary. The same vocabulary is used to extract the pre-trained language model features and in the corresponding NMT model for transfer learning. We use aligned multilingual word embeddings of dimension 300 for the language pairs en-es, en-de, en-fr, en-hi and en-bn from Bojanowski et al. (2017); Joulin et al. (2018). For the remaining language pairs, we obtain monolingual embeddings from Bojanowski et al. (2017) and use the MUSE library released by Lample et al. (2018) to align the vectors into the same vector space. The embeddings of the NE and PoS information are randomly initialized with dimension 20.
$^{8}$ http://www.iitp.ac.in/~ai-nlp-ml/resources.html
$^{9}$ https://dl.fbaipublicfiles.com/XLM/codes_xnli_15
$^{10}$ https://dl.fbaipublicfiles.com/XLM/vocab_xnli_15
$^{11}$ https://dl.fbaipublicfiles.com/XLM/codes_xnli_100
$^{12}$ https://fasttext.cc/docs/en/aligned-vectors.html
$^{13}$ https://fasttext.cc/docs/en/pretrained-vectors.html



| Model | en-es (B / R / M) | en-de (B / R / M) | en-fr (B / R / M) | en-hi (B / R / M) |
|---|---|---|---|---|
| Seq2Seq | 16.42 / 36.03 / 24.23 | 19.19 / 36.19 / 24.87 | 19.28 / 38.54 / 26.41 | 15.49 / 35.29 / 23.72 |
| Attentive-Seq2Seq | 17.21 / 36.83 / 25.41 | 20.12 / 37.14 / 25.64 | 20.12 / 39.30 / 27.54 | 16.55 / 36.25 / 24.97 |
| Pointer Generator | 18.98 / 37.81 / 26.13 | 21.45 / 38.22 / 26.14 | 21.41 / 40.42 / 28.76 | 17.62 / 37.32 / 25.61 |
| Proposed Model | 22.47 / 41.24 / 29.45 | 24.15 / 42.76 / 30.47 | 24.89 / 43.54 / 31.26 | 21.55 / 40.21 / 28.37 |
| (-) BPE | 21.72 / 40.67 / 28.65 | 23.31 / 41.89 / 29.76 | 24.27 / 43.02 / 30.84 | 20.89 / 39.54 / 27.43 |
| (-) PoS Feature | 22.21 / 40.92 / 29.12 | 23.76 / 42.12 / 29.88 | 24.21 / 42.95 / 30.86 | 21.02 / 39.84 / 27.91 |
| (-) NE Feature | 21.52 / 40.32 / 28.41 | 22.19 / 41.64 / 29.39 | 23.92 / 42.52 / 30.37 | 20.42 / 39.20 / 27.46 |
| (-) LM Feature | 21.56 / 40.36 / 28.42 | 23.21 / 41.85 / 29.56 | 23.82 / 42.48 / 30.29 | 20.47 / 39.17 / 27.24 |
| (-) GFF | 21.59 / 40.28 / 28.59 | 23.24 / 41.75 / 29.50 | 23.87 / 42.58 / 30.46 | 20.31 / 39.24 / 27.51 |
| (-) Transfer Learning | 20.69 / 39.39 / 27.53 | 22.39 / 40.98 / 28.87 | 22.64 / 41.57 / 29.34 | 19.48 / 38.34 / 26.41 |

Table 2: Performance comparison of the proposed model for code-mixed generation with the baseline models, and the impact of each component (removed one at a time) on performance. Here, B: BLEU, R: Rouge-L and M: METEOR.




| Model | en-bn (B / R / M) | en-ml (B / R / M) | en-ta (B / R / M) | en-te (B / R / M) |
|---|---|---|---|---|
| Seq2Seq | 16.32 / 33.02 / 21.82 | 15.92 / 34.97 / 23.12 | 11.82 / 25.14 / 20.21 | 10.87 / 24.92 / 19.05 |
| Attentive-Seq2Seq | 17.29 / 34.12 / 23.08 | 17.21 / 35.91 / 23.94 | 13.09 / 26.57 / 21.41 | 12.14 / 26.17 / 20.11 |
| Pointer Generator | 18.24 / 35.86 / 24.36 | 18.49 / 37.16 / 25.12 | 14.03 / 27.84 / 22.53 | 13.21 / 27.37 / 21.17 |
| Proposed Model | 21.49 / 39.11 / 27.32 | 21.61 / 40.23 / 28.01 | 15.69 / 29.56 / 23.88 | 14.81 / 29.23 / 22.56 |
| (-) BPE | 20.81 / 38.64 / 26.65 | 20.89 / 39.73 / 27.49 | 15.12 / 28.92 / 23.19 | 14.15 / 28.75 / 21.82 |
| (-) PoS Feature | 21.04 / 38.77 / 26.94 | 21.11 / 39.91 / 27.55 | 15.23 / 28.11 / 22.34 | 14.23 / 28.67 / 21.86 |
| (-) NE Feature | 20.49 / 38.14 / 26.33 | 20.63 / 39.29 / 27.11 | 15.19 / 29.06 / 23.48 | 14.51 / 28.63 / 22.26 |
| (-) LM Feature | 20.13 / 37.73 / 25.95 | 20.54 / 38.69 / 26.44 | 14.73 / 28.64 / 22.89 | 13.97 / 28.07 / 21.79 |
| (-) GFF | 20.57 / 38.11 / 26.36 | 20.69 / 39.18 / 27.07 | 15.24 / 28.84 / 23.19 | 14.29 / 28.67 / 21.88 |
| (-) Transfer Learning | 19.67 / 37.49 / 25.87 | 20.12 / 38.74 / 26.54 | 14.48 / 28.34 / 22.72 | 13.79 / 28.12 / 21.53 |

Table 3: Performance comparison of the proposed model for code-mixed generation with the baseline models (B: BLEU, R: Rouge-L, M: METEOR).

The hidden dimension of all LSTM cells is set to 512. We use the pre-trained XLM model$^{14}$ to extract language model features of dimension 1024 for the en-hi, en-es, en-de and en-fr language pairs. For the remaining language pairs, a pre-trained model$^{15}$ trained with the MLM objective function is used to extract the language model features. We use beam search with beam size 4 to generate the code-mixed sentences. The Adam optimizer (Kingma and Ba, 2015) is used to train the model with (i) $\beta_{1} = 0.9$, (ii) $\beta_{2} = 0.999$ and (iii) $\epsilon = 10^{-8}$, and an initial learning rate of 0.0001. The maximum lengths of the English and code-mixed token sequences are set to 60 and 30, respectively. We set 5 as the minimum number of decoding steps for each code-mixed language pair. We use the en-hi development dataset to tune the network hyper-parameters. All model updates use a batch size of 16.

We evaluate the generated text using the BLEU (Papineni et al., 2002), ROUGE (Lin, 2004) and METEOR (Banerjee and Lavie, 2005) metrics.

# 5.2 Quantitative Analysis

We report the results of our proposed model and the comparisons to the three baselines in Table 2 and Table 3. The Pointer Generator is the strongest baseline, achieving its maximum BLEU score of 21.45 on the en-de code-mixed language pair. Our proposed model achieves its maximum BLEU score of 24.89 on the en-fr code-mixed language pair; the minimum BLEU score we achieve is 14.81, on the en-te language pair. We achieve lower BLEU scores for the en-ta and en-te language pairs because their numbers of training samples are very low (11,380 and 9,105) compared to the other language pairs.
Among the European languages, our model attains the highest performance on the en-fr pair, while for the Indian languages the proposed model reports comparable performance on the en-hi and en-bn language pairs.

We also perform an ablation study to assess the efficacy of the model's components. We remove one component at a time from the proposed model




and report the results for each language pair in Table 2 and Table 3. The removal of BPE brings down the BLEU score by 0.57 (en-ta) to 0.84 (en-de) points. The BPE encoding helps the model mitigate the OOV word issue by providing sub-word level information. Similarly, the removal of the PoS feature reduces the BLEU score by 0.26 (en-es) to 0.58 (en-te) points. The NE feature helps the en-bn code-mixed language pair the most: we observe a decrease of 1.0 BLEU points when it is removed. The LM feature, obtained from the pre-trained language model, helps the model obtain a better encoded representation; the ablation study reveals that removing it decreases the BLEU score by up to 1.36 points, with a near similar impact on each language pair. Finally, transfer learning also proves to be an integral component of the proposed model, contributing a maximum of 2.25 BLEU points for en-fr and a minimum of 1.02 BLEU points for the en-te code-mixed language pair. The difference between the maximum and minimum contributions may be attributed to the fact that we have a large parallel corpus (197,922 sentences) to train the en-fr NMT model, compared to the en-te parallel corpus (10,105 sentences). A bootstrap test (Dror et al., 2018) confirms that the performance improvements over the baselines are statistically significant $(p < 0.005)$.

# 5.3 Qualitative Analysis

We assess the quality of the generated code-mixed text and show samples in Table 4.

| Pair | Source | Sentence |
|---|---|---|
| en-de | Input | The real problem is statesponsored lawlessness. |
| | Reference | Das real problem ist die vom statesponsored lawlessness. |
| | PG | Das echtes problem ist die vom statesponsored Gesetz. |
| | Proposed | Das real problem ist vom statesponsored lawlessness. |
| | (-) TL | Das problem ist die statesponsored Gesetzlosigkeit. |
| en-es | Input | However we have proposed some minor changes. |
| | Reference | Con todo hemos propuestos algunos minor changes. |
| | PG | Sin embargo, hemo propuestos minerero changes. |
| | Proposed | Sin embargo hemos propuestos algunos minor changes. |
| | (-) TL | Con todo hemos propuestos algunos minor cambios. |
| en-hi | Input | India's agriculture is their main strength. |
| | Reference | India का agriculture ॅśāsākāo main strength रेल \| |
| | PG | India's agriculture ॅśāsākāo रागके रेल \| |
| | Proposed | India का agriculture ॅśāsākāo main strength रेल \| |
| | (-) TL | India कृपिक रैस्थका main strength. |
| en-fr | Input | Read the statements by Giscard dEstaing. |
| | Reference | Lisez les statements de Giscard dEstaing. |
| | PG | Lisez déclarations de Giscard dEstaing. |
| | Proposed | Lisez les statement de Giscard dEstaing. |
| | (-) TL | Lisez de déclarations Giscard dEstaing. |

Table 4: Sample code-mixed sentences generated using the pointer generator (PG), the proposed model, and the variant of the proposed model without transfer learning ((-) TL).

| Approach | Human | B | R | M |
|---|---|---|---|---|
| Synthetic | 4.19 | 67.51 | 73.56 | 71.21 |
| Pointer Generator | 2.34 | 19.47 | 39.48 | 27.39 |
| Proposed Model | 3.26 | 24.65 | 43.55 | 29.11 |

Table 5: Comparison of different code-mixed text generation approaches on human and automatic evaluation metrics.

We observe that the code-mixed sentences generated using the PG model are able to copy the entities from the given English sentence, but the generated sentences are incomplete and less fluent than the reference sentences. For example, for the en-hi pair, the PG-generated code-mixed sentence misses the word 'main', and it copies 'India's' rather than generating 'India का', which would be a more natural and human-like code-mixed phrase.

Our analysis also reveals that the code-mixed sentences generated without transfer learning lack fluency; examples can be seen in the (-) TL outputs for en-hi and en-fr in Table 4. In contrast, the output of the proposed model takes the benefits of both the pointer generator and transfer learning to generate adequate, fluent and complete human-like code-mixed sentences. We observe that the proposed model learns when to switch between the languages, and when to either copy an entity/phrase from the English sentence or generate from the vocabulary. This can be seen in the en-hi pair, where the model copies the phrase 'main strength' from the English sentence and also switches between the languages at the appropriate time steps by generating the correct words from the vocabulary.

We perform a human evaluation to judge the quality of the generated code-mixed text. We randomly sample 100 English sentences from the en-hi code-mixed dataset and ask three English and Hindi speakers to manually formulate the corresponding code-mixed sentences. These were then used to evaluate the quality of the generated code-mixed sentences: we ask the speakers to score (from 1 to 5) each machine-generated code-mixed sentence with respect to the human-generated ones.
The rating reflects how natural and human-like the code-mixed sentence sounds compared to the human one. The scores correspond to the quality of the generated code-mixed sentence, where 1 indicates strong disagreement between the machine-generated and the human-formulated code-mixed sentence. Similarly, 2, 3, 4 and 5 are the categorical scores for Disagreement, Not Sure, Agreement and Strong Agreement, respectively.

We also compute the automatic evaluation metrics BLEU, Rouge-L and Meteor. The comparison between the different approaches on human and automatic evaluation metrics is reported in Table 5. The reported human evaluation score is the average over the three human experts. The proposed model achieves a human evaluation (naturalness) score of 3.26, compared to 4.19 for the synthetic generation. Note that the synthetic text generation algorithm requires a parallel corpus, whereas our neural generation model does not require any parallel data except for the warm start with the synthetic data. The proposed model also achieves a better human evaluation score (3.26) than the strongest baseline, the pointer generator (2.34).

Error Analysis: We closely analyze the outputs of our proposed model to understand the remaining challenges. We take the en-hi language pair, study the errors made by the proposed approach, and categorize them into the following types:

(1). Reference Inaccuracy: Errors encountered during the word alignment phase propagate and lead to inaccurate reference code-mixed sentences. Since we use these sentences to train the generator model, they introduce errors into the generated code-mixed sentences as well. This issue could be reduced with an improved alignment algorithm.
(2). Missing/Incorrect Words: This is one of the most common error types, where the model generates incorrect words/phrases.
Missing or incorrect words cause fluency problems in the generated code-mixed sentence. We also observed that the majority of missing words are function words, while incorrectly generated words belong to the content-word category.
(3). Factual Inaccuracy: Our proposed model sometimes generates factually incorrect NEs. These errors are mainly seen in longer sentences, where the model gets confused about copying or generating the relevant entity in the given context.
(4). Code-Mixed Inaccuracy: We observe inaccuracies where the model produces a sentence which either violates code-mixing theory or is unnatural (not human-like).
(5). Rare Language Pairs: We notice that the system makes more errors on the en-ta and en-te language pairs. This can be explained by the fact that we had comparatively fewer samples of these language pairs to train the system; such errors can be reduced by training with a sufficient number of samples.
(6). Others: We place the remaining errors in the others category, including repeated words, inadequate sentence generation, extra word generation, etc. We also observe that the majority of errors occur when the input sentence is longer than 12 words.

We randomly take a sample of 100 generated En-Hi code-mixed sentences and categorize their errors into the six aforementioned error types. We find that the top-3 most frequent error types (Missing/Incorrect Words, Reference Inaccuracy and Code-Mixed Inaccuracy) account for 27.21%, 23.37% and 17.44% of the errors, respectively.

# 6 Conclusion

In this paper, we have proposed an effective neural network based method that couples linguistic and pre-trained feature representations with transfer learning to generate code-mixed sentences.
To train and evaluate the proposed approach, we have introduced a linguistically motivated approach for code-mixed sentence generation using the parallel sentences of any particular language pair. Our experimental results and in-depth analysis show that the feature representation and transfer learning together effectively improve the model performance and the quality of the generated code-mixed sentence. We have shown the effectiveness of the proposed approach on eight different language pairs. In future work, we plan to explore the unsupervised neural approach for code-mixed text generation. + +# Acknowledgment + +Asif Ekbal gratefully acknowledges the Young Faculty Research Fellowship (YFRF) Award supported by the Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, and implemented by Digital India Corporation (formerly Media Lab Asia). + +# References + +Heike Adel, Ngoc Thang Vu, Katrin Kirchhoff, Dominic Telaar, and Tanja Schultz. 2015. Syntactic and semantic features for code-switching factored language models. IEEE/ACM transactions on audio, speech, and language Processing, 23(3):431-440. +Heike Adel, Ngoc Thang Vu, Franziska Kraus, Tim Schlippe, Haizhou Li, and Tanja Schultz. 2013a. Recurrent neural network language modeling for code switching conversational speech. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8411-8415. IEEE. +Heike Adel, Ngoc Thang Vu, and Tanja Schultz. 2013b. Combination of recurrent neural networks and factored language models for code-switching language modeling. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 206-211. +Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. 
In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. +Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72. Association for Computational Linguistics. +Hedi M Belazi, Edward J Rubin, and Almeida Jacqueline Toribio. 1994. Code switching and x-bar theory: The functional head constraint. Linguistic inquiry, pages 221-237. +Susan Berk-Seligson. 1986. Linguistic constraints on intrasentential code-switching: A study of spanish/ hebrew bilingualism. Language in society, 15(3):313-348. +Irshad Ahmad Bhat, Manish Shrivastava, and Riyaz Ahmad Bhat. 2016. Code mixed entity extraction in indian languages using neural networks. In FIRE (Working Notes), pages 296-297. +Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146. +Khyathi Raghavi Chandu, Manoj Chinnakotla, Alan W Black, and Manish Shrivastava. 2017. Webshodh: A code mixed factoid question answering system for web. In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 104-111. Springer. +Monojit Choudhury, Kalika Bali, Sunayana Sitaram, and Ashutosh Baheti. 2017. Curriculum design for + +code-switching: Experiments with language identification and language modeling with deep neural networks. In Proceedings of the 14th International Conference on Natural Language Processing (ICON-2017), pages 65-74. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Anne-Marie Di Sciullo, Pieter Muysken, and Rajendra Singh. 1986. Government and code-mixing. Journal of linguistics, 22(1):1-24. +Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Advances in Neural Information Processing Systems 32, pages 13063-13075. Curran Associates, Inc. +Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker's guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383-1392. Association for Computational Linguistics. +Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013. A simple, fast, and effective reparameterization of ibm model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-648. +Saurabh Garg, Tanmay Parekh, and Preethi Jyothi. 2018. Code-switched language models using dual rnns and same-source pretraining. arXiv preprint arXiv:1809.01962. +Hila Gonen and Yoav Goldberg. 2018. Language modeling for code-switching: Evaluation, integration of monolingual data, and discriminative training. arXiv preprint arXiv:1810.11895. +Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 140-149, Berlin, Germany. Association for Computational Linguistics. 
+Deepak Gupta, Asif Ekbal, and Pushpak Bhattacharyya. 2018a. A deep neural network based approach for + +entity extraction in code-mixed Indian social media text. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), Miyazaki, Japan. European Languages Resources Association (ELRA). +Deepak Gupta, Ankit Lamba, Asif Ekbal, and Pushpak Bhattacharyya. 2016a. Opinion mining in a code-mixed environment: A case study with government portals. In Proceedings of the 13th International Conference on Natural Language Processing, pages 249-258. +Deepak Gupta, Pabitra Lenka, Asif Ekbal, and Pushpak Bhattacharyya. 2018b. Uncovering code-mixed challenges: A framework for linguistically driven question generation and neural based question answering. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 119-130, Brussels, Belgium. Association for Computational Linguistics. +Deepak Gupta, Shubham Tripathi, Asif Ekbal, and Pushpak Bhattacharyya. 2016b. A Hybrid Approach for Entity Extraction in Code-Mixed Social Media Data. MONEY, 25:66. +Deepak Gupta, Shubham Tripathi, Asif Ekbal, and Pushpak Bhattacharyya. 2017. SMPOST: Parts of Speech Tagger for Code-Mixed Indic Social Media Text. arXiv preprint arXiv:1702.00167. +Deepak Kumar Gupta, Shubham Kumar, and Asif Ekbal. 2014. Machine learning approach for language identification & transliteration. In Proceedings of the Forum for Information Retrieval Evaluation, pages 60-64. +Taofik Hidayat. 2012. An analysis of code switching used by facebookers (a case study in a social network site). Student essay for the study programme Pendidikan Bahasa Inggris (English Education) at STKIP Siliwangi Bandung, Indonesia. +Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780. +Aravind Joshi. 1982. Processing of sentences with intra-sentential code-switching. 
In *Coling* 1982: Proceedings of the Ninth International Conference on Computational Linguistics. +Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Hervé Jégou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. +Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. + +Tom Kocmi and Ondrej Bojar. 2017. Curriculum learning and minibatch bucketing in neural machine translation. In Proceedings of the International Conference on Recent Advances in Natural Language Processing, RANLP 2017, pages 379-386, Varna, Bulgaria. INCOMA Ltd. +Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In $MT$ summit, volume 5, pages 79-86. CiteSeer. +Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. Advances in Neural Information Processing Systems (NeurIPS). +Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018. Unsupervised machine translation using monolingual corpora only. In International Conference on Learning Representations. +Chin-Yew Lin. 2004. ROUGE: A Package for Automatic Evaluation of Summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 workshop, volume 8. Barcelona, Spain. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. +Carol Myers-Scotton. 1997. Duelling languages: Grammatical structure in codeswitching. Oxford University Press. +Sinno Jialin Pan and Qiang Yang. 2009. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345-1359. 
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311-318. Association for Computational Linguistics. +Raj Nath Patel, Prakash B Pimpale, and M Sasikumar. 2016. Recurrent neural network based part-of-speech tagger for code-mixed social media text. arXiv preprint arXiv:1611.04989. +Carol W Pfaff. 1979. Constraints on language mixing: Intrasentential code-switching and borrowing in spanish/english. Language, pages 291-318. +Shana Poplack. 1978. Syntactic structure and social function of code-switching, volume 2. Centro de Estudios Puertorriquenos,[City University of New York]. +Adithya Pratapa, Gayatri Bhat, Monjit Choudhury, Sunayana Sitaram, Sandipan Dandapat, and Kalika Bali. 2018. Language modeling for code-mixing: The role of linguistic theory based synthetic data. In + +Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1543-1553. +Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9. +Koustav Rudra, Shruti Rijhwani, Rafiya Begum, Kalika Bali, Monojit Choudhury, and Niloy Ganguly. 2016. Understanding Language Preference for Expression of Opinion and Sentiment: What do Hindi-English Speakers do on Twitter? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1131-1141. +Bidisha Samanta, Sharmila Reddy, Hussain Jagirdar, Niloy Ganguly, and Soumen Chakrabarti. 2019. A deep generative model for code-switched text. arXiv preprint arXiv:1906.08972. +David Sankoff and Shana Poplack. 1981. 
A formal grammar for code-switching. Research on Language & Social Interaction, 14(1):3-45. +Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083, Vancouver, Canada. Association for Computational Linguistics. +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics. +Thamar Solorio, Elizabeth Blair, Suraj Maharjan, Steven Bethard, Mona Diab, Mahmoud Ghoneim, Abdelati Hawwari, Fahad AlGhamdi, Julia Hirschberg, Alison Chang, et al. 2014. Overview for the first shared task on language identification in code-switched data. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 62-72. +Thamar Solorio and Yang Liu. 2008a. Learning to predict code-switching points. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 973-981. +Thamar Solorio and Yang Liu. 2008b. Part-of-speech tagging for english-spanish code-switched text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1051-1060. Association for Computational Linguistics. + +Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2019. Mass: Masked sequence to sequence pre-training for language generation. In International Conference on Machine Learning, pages 5926-5936. +Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27, pages 3104-3112. Curran Associates, Inc. +Lisa Torrey and Jude Shavlik. 2010. Transfer learning. 
In Handbook of research on machine learning applications and trends: algorithms, methods, and techniques, pages 242-264. IGI Global.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.
Genta Indra Winata, Zhaojiang Lin, and Pascale Fung. 2019a. Learning multilingual meta-embeddings for code-switching named entity recognition. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 181-186.
Genta Indra Winata, Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Code-switching language modeling using syntax-aware multi-task learning. arXiv preprint arXiv:1805.12070.
Genta Indra Winata, Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2019b. Code-switched language models using neural based synthetic data from parallel sentences. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 271-280, Hong Kong, China. Association for Computational Linguistics.
Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568-1575, Austin, Texas. Association for Computational Linguistics.

# A Synthetic Code-Mixed Generation

# A.1 Dataset Statistics

We create the synthetic datasets for eight different language pairs: English-Hindi (en-hi), English-Bengali (en-bn), English-Malayalam (en-ml), English-Tamil (en-ta), English-Telugu (en-te), English-French (en-fr), English-German (en-de) and English-Spanish (en-es).
We used the Europarl parallel corpus (Koehn, 2005) v7 $^{16}$ for the European languages, namely French, German and

```
Algorithm 1 Code-Mixed Text Generation
Input:  a parallel sentence pair (en-sentence, x-sentence)
Output: an equivalent code-mixed sentence (en-x-sentence)

procedure GetCodeMixedText(en-sentence, x-sentence)
    en-tokens    ← tokenizer(en-sentence)         ▷ tokenize the English sentence
    x-tokens     ← tokenizer(x-sentence)          ▷ tokenize the language-x sentence
    alignment    ← getAlignment(en-sentence, x-sentence)          ▷ learn the alignment matrix
    phrases      ← extractPhrase(en-tokens, x-tokens, alignment)  ▷ phrase extraction
    en-x-tokens  ← x-tokens                       ▷ initialize the code-mixed sentence
    pos          ← getPartsOfSpeechTags(en-tokens)  ▷ POS tagging of the English sentence
    ner          ← getNERTags(en-tokens)            ▷ NER tagging of the English sentence
    noun-phrases ← getNounPhrase(en-tokens)         ▷ extraction of noun phrases
    for (entity, entity-type) in ner do             ▷ loop over entities in the English sentence
        if entity-type in ['PER', 'LOC', 'ORG'] and entity in phrases then
            aligned-phrase ← getAlignedPhrase(phrases, entity)
            en-x-tokens ← en-x-tokens.replace(aligned-phrase, entity)
        end if
    end for
    for nphrase in noun-phrases do                  ▷ loop over noun phrases
        aligned-phrase ← getAlignedPhrase(phrases, nphrase)
        en-x-tokens ← en-x-tokens.replace(aligned-phrase, nphrase)
    end for
    for (token, pos-type) in pos do                 ▷ loop over English tokens
        if pos-type == 'ADJ' and token in phrases then
            aligned-phrase ← getAlignedPhrase(phrases, token)
            en-x-tokens ← en-x-tokens.replace(aligned-phrase, token)
        end if
    end for
    en-x-sentence ← ' '.join(en-x-tokens)           ▷ join tokens to form the code-mixed sentence
    return en-x-sentence
end procedure
```
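The replacement core of Algorithm 1 can be illustrated with a small Python sketch. Here the alignment and phrase-extraction steps are replaced by a precomputed `phrase_table` (English phrase → aligned x-language phrase), and `en_spans` stands in for the named entities, noun phrases, and adjectives selected by the tagging steps; the helper names are hypothetical, not the paper's implementation.

```python
def replace_span(tokens, old_span, new_span):
    """Replace the first occurrence of token sequence old_span with new_span."""
    for i in range(len(tokens) - len(old_span) + 1):
        if tokens[i:i + len(old_span)] == old_span:
            return tokens[:i] + new_span + tokens[i + len(old_span):]
    return tokens  # span not found: leave the sentence unchanged

def get_code_mixed_text(x_tokens, phrase_table, en_spans):
    """Toy version of Algorithm 1: start from the language-x sentence and swap
    aligned x phrases back to English for the selected English spans."""
    tokens = list(x_tokens)
    for en_phrase in en_spans:
        if en_phrase in phrase_table:
            tokens = replace_span(tokens,
                                  phrase_table[en_phrase].split(),
                                  en_phrase.split())
    return " ".join(tokens)

# toy example (romanized pseudo-Hindi, for illustration only)
mixed = get_code_mixed_text("mujhe laal gaadi pasand hai".split(),
                            {"red car": "laal gaadi"},
                            ["red car"])  # → "mujhe red car pasand hai"
```

In the full algorithm the spans come from NER, noun-phrase chunking, and POS tagging of the English side, and the phrase table from a learned word alignment.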
| Language Pairs | # Parallel Sentences | # Code-Mixed Sentences | Train/Dev/Test | SPF | CMI |
| --- | --- | --- | --- | --- | --- |
| en-es | 1,965,734 | 200,725 | 196,725/2,000/2,000 | 68.59 | 28.80 |
| en-de | 1,920,209 | 192,131 | 188,131/2,000/2,000 | 68.41 | 28.26 |
| en-fr | 2,007,723 | 197,922 | 193,922/2,000/2,000 | 68.12 | 28.40 |
| en-hi | 1,561,840 | 252,330 | 248,330/2,000/2,000 | 62.92 | 23.49 |
| en-bn | 337,428 | 167,893 | 163,893/2,000/2,000 | 67.61 | 25.41 |
| en-ml | 359,423 | 182,453 | 178,453/2,000/2,000 | 81.84 | 28.13 |
| en-ta | 26,217 | 12,380 | 11,380/500/500 | 78.74 | 28.16 |
| en-te | 22,165 | 10,105 | 9,105/500/500 | 76.19 | 28.69 |
Table 6: Statistics of the parallel corpora and the generated synthetic code-mixed sentences, along with the training, development, and test set splits. We also report the complexity of the generated code-mixed sentences in terms of SPF and CMI.

Spanish. For the Indic languages, namely Hindi, Bengali, Malayalam, Tamil, and Telugu, we obtain the parallel corpora from the multilingual parallel corpus directory $^{17}$ based on the open parallel corpus $^{18}$ .

We show the detailed statistics of the generated code-mixed corpus in Table 6.

# A.2 Code-mixed Complexity

We measure the complexity of the generated code-mixed text in terms of the following metrics:

Switch-Point Fraction (SPF) A switch-point is a position in a sentence where the words on either side belong to different languages. Following Pratapa et al. (2018) and Winata et al. (2019b), we compute the SPF as the number of switch-points in a sentence divided by the total number of word boundaries. A sentence with more switch-points is more complex, as it contains many interleaving words from different languages.

Code-mixing Index (CMI) The CMI measures the amount of code-mixing in a corpus by accounting for the language distribution. The sentence-level CMI score is computed as:

$$
C_{u}(x) = \frac{N(x) - \max_{\ell_{i} \in \ell}\left\{w_{\ell_{i}}(x)\right\}}{N(x)}, \tag{7}
$$

where $N(x)$ is the number of tokens of utterance $x$ and $w_{\ell_i}(x)$ is the number of words of $x$ in language $\ell_i$ . We compute this metric at the corpus level by averaging the values over all sentences. We report the SPF and CMI values for all language pairs in Table 6.
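Assuming each token carries a language tag, both complexity metrics follow directly from their definitions; a minimal sketch (note: Table 6 appears to report both metrics scaled by 100, and fuller CMI variants also handle language-independent tokens, which are omitted here):

```python
def spf(langs):
    """Switch-Point Fraction: number of switch-points divided by the number
    of word boundaries. `langs` is one sentence's per-token language tags."""
    if len(langs) < 2:
        return 0.0
    boundaries = len(langs) - 1
    switches = sum(1 for a, b in zip(langs, langs[1:]) if a != b)
    return switches / boundaries

def cmi(langs):
    """Sentence-level Code-Mixing Index (Eq. 7):
    1 - (dominant-language token count) / (total token count)."""
    n = len(langs)
    if n == 0:
        return 0.0
    dominant = max(langs.count(lang) for lang in set(langs))
    return (n - dominant) / n
```

Corpus-level values are then simple averages of the per-sentence scores.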
# A Sentiment-Controllable Topic-to-Essay Generator with Topic Knowledge Graph

Lin Qiao $^{1*}$ , Jianhao Yan $^{2}$ , Fandong Meng $^{2}$ , Zhendong Yang, and Jie Zhou $^{2}$

$^{1}$ School of Software and Microelectronics, Peking University

$^{2}$ Pattern Recognition Center, WeChat AI, Tencent Inc, Beijing, China

qiaolin66666@gmail.com
{elliottyan,fandongmeng,withtomzhou}@tencent.com

# Abstract

Generating a vivid, novel, and diverse essay from only a few given topic words is a challenging natural language generation task. Previous work leaves two problems unsolved: neglect of the sentiment beneath the text and insufficient utilization of topic-related knowledge. We therefore propose a novel Sentiment-Controllable topic-to-essay generator with a Topic Knowledge Graph enhanced decoder, named SCTKG, which is based on the conditional variational autoencoder (CVAE) framework. We first inject sentiment information into the generator to control the sentiment of each sentence, which yields varied generated essays. We then design a Topic Knowledge Graph enhanced decoder. Unlike existing models that use knowledge entities separately, our model treats the knowledge graph as a whole and encodes the structured, connected semantic information in the graph to generate a more relevant essay. Experimental results show that our SCTKG can generate sentiment-controllable essays and outperforms the state-of-the-art approach in terms of topic relevance, fluency, and diversity under both automatic and human evaluation.

# 1 Introduction

The topic-to-essay generation (TEG) task aims at generating human-like, paragraph-level texts from only a few given topics. It has plenty of practical applications, e.g., automatic advertisement generation, intelligent education, or assisting keyword-based news writing (Leppänen et al., 2017). Because of its great potential in practical use and scientific research, TEG has attracted a lot of interest (Feng et al., 2018; Yang et al., 2019). However, two problems in TEG remain unsolved: the neglect of the sentiment beneath the text and the insufficient utilization of topic-related knowledge.
![](images/fd42ff0bfb3ad0c03c9db9989500b22b1557e3fddeea67e576b56c715e60d1dc.jpg)
Figure 1: Examples comparing essays generated with and without sentiment control. We show the first two sentences of each generated essay and denote positive sentences in red and negative sentences in blue. Sentences without a sentiment label are shown in black.

A well-performing essay generator should be able to generate multiple vivid and diverse essays given the topic words. However, previous work tends to generate dull and generic texts. One reason is that it neglects the sentiment factor of the text. By modeling and controlling the sentiment of the generated sentences, we can generate much more diverse and fascinating essays. As shown in Figure 1, given the topic words "Love", "Experience" and "Emotion", the "without sentiment" model generates a monotonous article. In contrast, the sentiment-attached model generates positive statements such as "fall in love with my boyfriend" when given the "positive" label, and negative phrases such as "addicted to smoking" and "broke up" when given the "negative" label. In addition, sentiment control is especially essential in topic-to-essay generation, which aims to generate multiple sentences: as the number of sentences grows, controlling the sentiment polarity of each sentence enlarges the search space of the generation model exponentially. Therefore, the ability to control sentiment is essential to improving discourse-level diversity in the TEG task.

As for the other problem, when we humans are asked to write articles on some topics, we rely heavily on commonsense knowledge related to those topics. Therefore, the proper usage of knowledge plays a vital role in topic-to-essay generation. The previous state-of-the-art method (Yang et al., 2019) extracts topic-related concepts from a commonsense knowledge base to enrich the input information. However, it ignores the graph structure of the knowledge base: it merely refers to the concepts in the knowledge graph and fails to consider their correlations. This limitation leaves concepts isolated from each other. For instance, given the two knowledge triples (law, antonym, disorder) and (law, part of, theory) about the topic word law, Yang et al. (2019) simply use the neighboring concepts disorder and theory as a supplement to the input information. However, their method fails to learn that disorder has the opposite meaning to law while theory is a hypernym of law, which could be learned from the edges (correlations) in the knowledge graph. Intuitively, lacking the correlation information between concepts in the knowledge graph hinders a model from generating appropriate and informative essays.

To address these issues, we propose a novel Sentiment-Controllable topic-to-essay generator with a Topic Knowledge Graph enhanced decoder, named SCTKG, which is based on the conditional variational autoencoder (CVAE) framework. To control the sentiment of the text, we inject sentiment information into the encoder and decoder of our model to control the sentiment at both the sentence level and the word level. The sentiment labels are provided by a sentiment classifier during training. To fully utilize the knowledge, the model retrieves a topic knowledge graph from the large-scale commonsense knowledge base ConceptNet (Speer and Havasi, 2012). Different from Yang et al. (2019), we preserve the graph structure of the knowledge base and propose a novel Topic Graph Attention (TGA) mechanism. TGA attentively reads the knowledge graphs and makes full use of the structured, connected semantic information in the graphs for better generation. Meanwhile, to make the generated essays more closely follow the semantics of all input topics, we adopt adversarial training based on a multi-label discriminator. The discriminator provides a reward to the generator based on how well the output covers the given topics.

Our contributions can be summarized as follows:

1. We propose a sentiment-controllable topic-to-essay generator based on CVAE, which can generate high-quality essays while controlling the sentiment. To the best of our knowledge, we are the first to control sentiment in TEG, and we demonstrate the potential of our model to generate diverse essays by controlling the sentiment.
2. We equip our decoder with a topic knowledge graph and propose a novel Topic Graph Attention (TGA) mechanism. TGA makes full use of the structured, connected semantic information in the topic knowledge graph to generate more appropriate and informative essays.
3. We conduct extensive experiments showing that our model accurately controls the sentiment and outperforms the state-of-the-art methods in both automatic and human evaluations.

# 2 Task Formulation

The traditional TEG task takes as input a topic sequence $X = (x_{1},\dots ,x_{m})$ with $m$ words and aims to generate an essay of $M$ sentences $(L_{1},\dots ,L_{M})$ corresponding to the topic sequence $X$ . In this paper, we additionally provide a sentiment sequence $S = (s_{1},\dots ,s_{M})$ , each element of which corresponds to one target sentence of the essay. Each sentiment can be positive, negative, or neutral.

Essays are generated in a sentence-by-sentence manner. The first sentence $L_{1}$ is generated conditioned only on the topic sequence $X$ ; the model then takes all previously generated sentences together with the topic sequence to generate the next sentence, until the entire essay is completed. In this paper, we denote the previous sentences $L_{1:i-1}$ as the context.

# 3 Model Description

In this section, we give an overview of our proposed model. Our SCTKG generator, based on a CVAE architecture, consists of an encoder and a topic knowledge graph enhanced decoder.
The encoder encodes the topic sequence, the sentiment, and

![](images/3b4ebfb795f299fdac33ad239c597c9e44fd86ae055513da806344e61fe7b0ee.jpg)
Figure 2: The architecture of our model. $\oplus$ denotes vector concatenation. Only the part with solid lines and the red dotted arrow is applied at inference, while the entire CVAE except the red dotted arrow is used during training. The sentiment label $s$ with blue arrows denotes sentiment control. Red solid lines denote TGA at each decoding step. The text generated by the SCTKG generator is fed to the topic label discriminator: the $m$ blue circles at the top represent the probabilities that the sample is real text with each of the $m$ input topics, and the green circle represents the probability that the given text is generated.

the context, and regards them as the conditional variables $c$ . A latent variable $z$ is then computed from $c$ through a recognition network (during training) or a prior network (during inference). The decoder is equipped with a topic knowledge graph and a sentiment label to generate the text. At each decoding step, TGA enriches the input topic information by effectively utilizing the topic knowledge graph.

We adopt a two-stage training approach: (1) train the SCTKG generator with the conventional CVAE loss; (2) once the first stage is done, introduce a topic label discriminator and adversarially train the generator and the discriminator in alternation to further enhance the SCTKG generator.

# 3.1 SCTKG Generator

# 3.1.1 Encoder

As shown in Figure 2, the utterance encoder is a bidirectional GRU (Chung et al., 2014) that encodes an input sequence into a fixed-size vector by concatenating the last hidden states of the forward and backward GRUs. We use the utterance encoder to encode the topic sequence $X$ into $h^x = [\overrightarrow{h^x},\overleftarrow{h^x}]$ , $h^x\in \mathbb{R}^d$ , where $d$ is the dimension of the vector.
The next sentence $L_{i}$ is also encoded by the utterance encoder, into $h_i = [\overrightarrow{h_i},\overleftarrow{h_i}]$ , $h_i\in \mathbb{R}^d$ . For the context encoder, we use a hierarchical encoding strategy. First, each sentence in the context $L_{1:i-1}$ is encoded by the utterance encoder into a fixed-size vector, so that the context $L_{1:i-1}$ is encoded into $h_{\text{context}} = [h_1, h_2, \ldots, h_{i-1}]$ . Then a single-layer forward GRU encodes the sentence representations $h_{\text{context}}$ into a final state vector $h^c \in \mathbb{R}^d$ .

The concatenation of $h^c$ , $h^x$ , and $e(s)$ then serves as the conditional vector $c = [e(s); h^c; h^x]$ , where $e(s)$ is the embedding of the sentiment label $s$ . We assume that $z$ follows a multivariate Gaussian distribution with a diagonal covariance matrix. Thus the recognition network $q_{\phi}(z|h_i, c)$ and the prior network $p_{\theta}(z|c)$ follow $\mathcal{N}(\mu, \sigma^2\mathbf{I})$ and $\mathcal{N}(\mu', \sigma'^2\mathbf{I})$ , respectively, where $\mathbf{I}$ is the identity matrix, and we have

$$
\begin{array}{l} \left[\mu, \sigma^{2}\right] = \mathrm{MLP}_{\mathrm{recognition}}\left(h_{i}, c\right), \\ \left[\mu^{\prime}, \sigma^{\prime 2}\right] = \mathrm{MLP}_{\mathrm{prior}}(c). \end{array} \tag{1}
$$

Additionally, we use the reparametrization trick (Kingma and Welling, 2013) to sample $z$ from the recognition network during training and from the prior network during testing.

# 3.1.2 Decoder

A general Seq2seq model tends to emit generic and meaningless sentences. To create more meaningful essays, we propose a topic knowledge graph enhanced decoder. The decoder is based on a 1-layer GRU network with initial state $d_0 = W_d[z; c; e(s)] + b_d$ , where $W_{d}$ and $b_{d}$ are trainable decoder parameters and $e(s)$ is the sentiment embedding mentioned above. As shown in Figure 2, we equip the decoder with a topic knowledge graph to incorporate commonsense knowledge from ConceptNet $^{1}$ . ConceptNet is a semantic network which
As shown in Figure 2, we equip the decoder with a topic knowledge graph to incorporate commonsense knowledge from ConceptNet1. ConceptNet is a semantic network which + +consists of triples $R =$ (head; rel; tail). The head concept head has the relation rel with tail concept tail. We use word vectors to represent head and tail concepts and learn trainable vector $r$ for relation rel, which is randomly initialized. Each word in the topic sequence is used as a query to retrieve a subgraph from ConceptNet and the topic knowledge graph is constituted by these subgraphs. Then we use the Topic Graph Attention (TGA) mechanism to read from the topic knowledge graph at each generation step. + +Topic Graph Attention. As previously stated, a proper usage of the external knowledge plays a vital role in our task. TGA takes as input the retrieved topic knowledge graph and a query vector $\mathbf{q}$ to produce a graph vector $g_{t}$ . We set $\mathbf{q} = [d_{t - 1}; c; z]$ , where $d_{t - 1}$ represents the decoder hidden state for $t - 1$ step. At each decoding step, we calculate the correlation score between each of the triples in the graph and $\mathbf{q}$ . Then we use the correlation score to compute the weighted sum of all the neighboring concepts² to the topic words to form the final graph vector $g_{t}$ . Neighboring concepts are entities that directly link to topic words. 
We formalize the computational process as follows:

$$
g_{t} = \sum_{n = 1}^{N} \alpha_{n} \boldsymbol{o}_{n}, \tag{2}
$$

$$
\alpha_{n} = \frac{\exp(\beta_{n})}{\sum_{j = 1}^{N} \exp(\beta_{j})}, \tag{3}
$$

$$
\beta_{n} = \begin{cases} (\mathbf{W}_{1}\boldsymbol{q})^{\top} \tanh(\mathbf{W}_{2}\boldsymbol{r}_{n} + \mathbf{W}_{3}\boldsymbol{o}_{n}) & \text{when } \boldsymbol{o}_{n} \in \mathcal{S}_{1}, \\ (\mathbf{W}_{1}\boldsymbol{q})^{\top} \tanh(\mathbf{W}_{2}\boldsymbol{r}_{n} + \mathbf{W}_{4}\boldsymbol{o}_{n}) & \text{when } \boldsymbol{o}_{n} \in \mathcal{S}_{2}, \end{cases} \tag{4}
$$

where $o_n$ is the embedding of the $n^{th}$ neighboring concept and $r_n$ is the embedding of the relation of the $n^{th}$ triple in the topic knowledge graph. $\mathbf{W}_1, \mathbf{W}_2, \mathbf{W}_3, \mathbf{W}_4$ are weight matrices for the query, relations, head entities, and tail entities, respectively. $\mathcal{S}_1$ contains the neighboring concepts that are head concepts in their triples, while $\mathcal{S}_2$ contains those that are tail concepts. The matching score $\beta_n$ represents the correlation between the query $q$ and the neighboring concept $o_n$ . Essentially, the graph vector $g_t$ is the weighted sum of the neighboring concepts of the topic words. Note that we use different weight matrices to distinguish neighboring concepts in different positions (head or tail). This distinction is necessary. For instance, given the two knowledge triples (Big Ben, part of, London) and (London, part of, England), even though the concepts Big Ben and England are both neighboring concepts of London with the same relation part of, they have different meanings with regard to London. We model this difference with $\mathbf{W}_3$ and $\mathbf{W}_4$ .
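Eqs. (2)-(4) amount to a softmax-weighted sum over neighboring-concept embeddings, with the projection matrix chosen by the concept's position in its triple. A minimal numpy sketch (toy dimensions and random weight matrices as stand-ins for the trained parameters):

```python
import numpy as np

def topic_graph_attention(q, rels, concepts, is_head, W1, W2, W3, W4):
    """Score each (relation, neighboring concept) pair against the query q,
    using W3 when the neighbor is a head concept (S1) and W4 when it is a
    tail concept (S2), then return the attention-weighted sum of concept
    embeddings (the graph vector g_t) and the weights alpha."""
    scores = []
    for r, o, head in zip(rels, concepts, is_head):
        W_pos = W3 if head else W4                            # position-dependent projection
        scores.append((W1 @ q) @ np.tanh(W2 @ r + W_pos @ o))  # beta_n, Eq. (4)
    scores = np.array(scores)
    alpha = np.exp(scores - scores.max())                     # Eq. (3): stable softmax
    alpha /= alpha.sum()
    g_t = sum(a * o for a, o in zip(alpha, concepts))         # Eq. (2): weighted sum
    return g_t, alpha
```

In the model, `q` would be the concatenation $[d_{t-1}; c; z]$ recomputed at every decoding step, so the attention shifts over the graph as generation proceeds.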
Then the final probability of generating a word is computed by

$$
\mathcal{P}_{t} = \operatorname{softmax}\left(W_{o}\left[d_{t}; e(s); g_{t}\right] + b_{o}\right),
$$

where $d_{t}$ is the decoder state at step $t$ and $W_{o} \in \mathbb{R}^{d_{model} \times |V|}$ , $b_{o} \in \mathbb{R}^{|V|}$ are trainable decoder parameters. $d_{model}$ is the dimension of $[d_t; e(s); g_t]$ and $|V|$ is the vocabulary size.

# 3.2 Topic Label Discriminator

Another concern is that the generated texts should be closely related to the topic words. To this end, in the second training stage, a topic label discriminator is introduced to perform adversarial training with the SCTKG generator. In this max-min game, the SCTKG generator generates essays so as to make the discriminator consider them semantically matched with the given topics, while the discriminator tries to distinguish generated essays from real ones. In detail, supposing there are $m$ topics in total, the discriminator produces a sigmoid probability distribution over $(m + 1)$ classes. The score at the $(m + 1)^{th}$ index represents the probability that the sample is generated text. The score at the $j^{th}$ index ( $j \in \{1, \dots, m\}$ ) represents the probability that the sample is real text with the $j^{th}$ topic. The discriminator is a CNN text classifier (Kim, 2014).

# 3.3 Training

We introduce our two-stage training method in this section. Stage 1: As in a conventional CVAE model, the loss of our SCTKG generator, $-\log p(Y|c)$ , can be expressed as:

$$
\begin{aligned} -\mathcal{L}(\theta; \phi; c; Y)_{cvae} &= \mathcal{L}_{\mathrm{KL}} + \mathcal{L}_{\mathrm{decoder}} \\ &= \operatorname{KL}\left(q_{\phi}(z|Y, c) \,\|\, p_{\theta}(z|c)\right) \\ &\quad - \mathbb{E}_{q_{\phi}(z|Y, c)}\left(\log p_{\mathrm{D}}(Y|z, c)\right). \end{aligned} \tag{5}
$$

Here, $\theta$ and $\phi$ are the parameters of the prior network and the recognition network, respectively. Intuitively, $\mathcal{L}_{\mathrm{decoder}}$ maximizes the sentence generation probability after sampling from the recognition network, while $\mathcal{L}_{\mathrm{KL}}$ minimizes the distance between the prior and recognition networks. Besides, we use the annealing trick and the BOW loss (Zhao et al., 2017) to alleviate the vanishing latent variable problem in VAE training.

Stage 2: After training the SCTKG generator with Equation (5), inspired by SeqGAN (Yu et al., 2017), we adopt adversarial training between the generator and the topic label discriminator described in Section 3.2. We refer the reader to Yu et al. (2017) and Yang et al. (2019) for more details.

# 4 Experiments

# 4.1 Datasets

We conduct experiments on the ZHIHU corpus (Feng et al., 2018). It consists of Chinese essays whose length is between 50 and 100. We select topic words based on frequency and remove rare topic words; the total number of topic labels is set to 100. The sizes of the training set and the test set are 27,000 and 2,500 essays, respectively. For tuning hyperparameters, we set aside $10\%$ of the training samples as the validation set.

Sentence-level sentiment labels are required for training our model. To this end, we sample 5,000 sentences from the dataset and annotate them manually with three categories, i.e., positive, negative, and neutral. This dataset is divided into a training set, a validation set, and a test set. We use an open-source Chinese sentiment classifier, Senta, fine-tuned on our manually labeled training set. This classifier achieves an accuracy of 0.83 on the test set. During training, the target sentiment labels $s$ are computed automatically by the sentiment classifier. During inference, users can input any sentiment labels to control the sentiment of sentence generation.
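The auto-labeling step described above can be sketched as follows; `toy_classify` is a hypothetical cue-word stand-in for the fine-tuned Senta classifier, for illustration only.

```python
def label_sentences(sentences, classify):
    """Assign each target sentence a 'positive' / 'negative' / 'neutral'
    label with a pre-trained sentiment classifier."""
    return [classify(s) for s in sentences]

def toy_classify(sentence):
    """Trivial cue-word classifier standing in for a real model."""
    if any(cue in sentence for cue in ("love", "happy")):
        return "positive"
    if any(cue in sentence for cue in ("sad", "broke up")):
        return "negative"
    return "neutral"

labels = label_sentences(
    ["I fell in love today", "we broke up last year", "the car is parked outside"],
    toy_classify,
)  # → ['positive', 'negative', 'neutral']
```

At training time these labels become the sentiment sequence $S$; at inference time the user supplies $S$ directly.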
# 4.2 Implementation Details

We use the 200-dim pre-trained word embeddings provided by Song et al. (2018), and the dimension of the sentiment embeddings is 32. The vocabulary size is 50,000 and the batch size is 64. We tune hyperparameters manually, using BLEU (Papineni et al., 2002a) as the selection criterion. We use GRUs with hidden size 512 for both the encoder and the decoder, and the size of the latent variable is 300. We implement the model with TensorFlow5. The number of parameters is 68M, and the parameters of our model are randomly initialized over a uniform distribution [-0.08, 0.08]. We pre-train our model for 80 epochs with the MLE method and run adversarial training for 30 epochs. The average runtime for our model is 30 hours on a Tesla P40 GPU machine, of which adversarial training takes the largest share. The optimizer is Adam (Kingma and Ba, 2014) with a $10^{-3}$ learning rate for pre-training and $10^{-5}$ for adversarial training. Besides, we apply dropout on the output layer to avoid over-fitting (Srivastava et al., 2014) (dropout rate = 0.2) and clip the gradients to a maximum norm of 10. We decode with greedy search, and the average length of the generated essays is 79.3.

# 4.3 Evaluation

To comprehensively evaluate the generated essays, we rely on a combination of automatic evaluation and human evaluation.

Automatic Evaluation. Following previous work (Yang et al., 2019), we consider the following metrics:

BLEU: The BLEU score (Papineni et al., 2002b) is widely used in machine translation, dialogue, and other text generation tasks; it measures word overlap between the ground truth and the generated sentences.

Dist-1, Dist-2 (Li et al., 2015): We calculate the proportion of distinct 1-grams and 2-grams in the generated essays to evaluate the diversity of the outputs.

Consistency (Yang et al., 2019): An ideal essay should closely follow the semantics of all input topics.
Therefore, we pre-train a multi-label classifier to evaluate the topic-consistency of the output. A higher "Consistency" score means the generated essays are more closely related to the given topics.

Novelty (Yang et al., 2019): We calculate novelty as the difference between an output and the essays with similar topics in the training corpus. A higher "Novelty" score means the output essays differ more from the essays in the training corpus.

Precision, Recall, and Senti-F1: These metrics measure sentiment control accuracy. If the sentiment label of the generated sentence is consistent with the ground truth, the generated result
| Methods | BLEU | Consistency | Novelty | Dist-1 | Dist-2 | Con. | Nov. | E-div. | Flu. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TAV | 6.05 | 16.59 | 70.32 | 2.69 | 14.25 | 2.32 | 2.19 | 2.58 | 2.76 |
| TAT | 6.32 | 9.19 | 68.77 | 2.25 | 12.17 | 1.76 | 2.07 | 2.32 | 2.93 |
| MTA | 7.09 | 25.73 | 70.68 | 2.24 | 11.70 | 3.14 | 2.87 | 2.17 | 3.25 |
| CTEG | 9.72 | 39.42 | 75.71 | 5.19 | 20.49 | 3.74 | 3.34 | 3.08 | 3.59 |
| SCTKG(w/o-Senti) | 9.97 | 43.84 | 78.32 | 5.73 | 23.16 | 3.89 | 3.35 | 3.90 | 3.71 |
| SCTKG(Ran-Senti) | 9.64 | 41.89 | 79.54 | 5.84 | 23.10 | 3.80 | 3.48 | 4.29 | 3.67 |
| SCTKG(Gold-Senti) | 11.02 | 42.57 | 78.87 | 5.92 | 23.07 | 3.81 | 3.37 | 3.94 | 3.75 |
Table 1: Automatic evaluation (BLEU, Consistency, Novelty, Dist-1, Dist-2) and human evaluation results, where Con., Nov., E-div., and Flu. denote topic-consistency, novelty, essay-diversity, and fluency, respectively. The best performance is highlighted in bold.

is right, and wrong otherwise. The sentiment label is predicted by our sentiment classifier described above (see Section 4.1 for details).

Human Evaluation. We also perform a human evaluation to more accurately assess the quality of the generated essays. Each item contains the input topics and the outputs of the different models. 200 items are distributed to 3 annotators, who do not know in advance which model each generated essay comes from. Each annotator scores all 200 items, and we average the scores from the three annotators. They are required to score each generated essay from 1 to 5 on three criteria: Novelty, Fluency, and Topic-Consistency. For novelty, we use the TF-IDF features of the topic words to retrieve the 10 most similar training samples as references for the annotators. To assess the paragraph-level diversity of our model, we propose an Essay-Diversity criterion: each model generates three essays with the same input topics, and annotators score the diversity by considering the three essays together.

# 4.4 Baselines

TAV (Feng et al., 2018) represents topic semantics as the average of all topic embeddings and then uses an LSTM to generate each word. Their work also includes the following two baselines.

TAT (Feng et al., 2018) extends the LSTM with an attention mechanism to model the semantic relatedness of each topic word to the generator's output.

MTA (Feng et al., 2018) maintains a topic coverage vector to guarantee that all topic information is expressed during generation through an LSTM decoder.

CTEG (Yang et al., 2019) adopts commonsense knowledge and adversarial training to improve generation. It achieves state-of-the-art performance on the topic-to-essay generation task.

# 5 Results and Analysis

In this section, we present our experimental results and analysis in two parts: "text quality" and "sentiment control". We then show a case study of our model.

# 5.1 Results on Text Quality

The automatic and human evaluation results are shown in Table 1. We present three versions of our model for a comprehensive comparison: (1) "SCTKG(w/o-Senti)" attaches no sentiment label to the model; (2) "SCTKG(Ran-Senti)" sets a random sentiment label for each generated sentence; (3) "SCTKG(Gold-Senti)" sets the golden sentiment label for each generated sentence. Investigating the results in Table 1, we make the following observations:

First, all versions of our SCTKG model outperform the baselines on all evaluation metrics (except the BLEU score of SCTKG(Ran-Senti)). This demonstrates that our SCTKG model generates better essays than the baseline models, whether it uses the true sentiment, random sentiment, or no sentiment at all.

Second, the comparison between SCTKG(w/o-Senti) and the baselines shows the superiority of our model's basic architecture. In the human evaluation, SCTKG(w/o-Senti) outperforms CTEG in topic-consistency, essay-diversity, and fluency by +0.15 (3.74 vs 3.89), +0.82 (3.08 vs 3.90), and +0.12 (3.59 vs 3.71), respectively. Similar improvements can also be drawn from the automatic evaluation. The improvement in essay-diversity
When given random and diverse sentiment label sequences, our SCTKG(Ran-Senti) achieves the highest "E-div" score (4.29). The CVAE architecture has already improved diversity over the baselines; by randomizing the sentiment of each sentence, SCTKG(Ran-Senti) boosts this improvement further (from +0.82 to +1.21 compared with CTEG). This result demonstrates the potential of our model to generate discourse-level diverse essays by using diverse sentiment sequences, supporting our claim in the introduction.

Fourth, when using the golden sentiment label, SCTKG(Gold-Senti) achieves the best BLEU (11.02). However, SCTKG(Gold-Senti) does not significantly outperform the other SCTKG models on the other metrics. This shows that the true sentiment label of the target sentence helps SCTKG(Gold-Senti) fit the test set better, but offers no obvious help on other important metrics such as diversity and topic-consistency.

Fifth, interestingly, when the sentiment label is removed, SCTKG(w/o-Senti) achieves the best topic-consistency score. We conjecture that the sentiment label may interfere with the topic information in the latent variable to some extent, but the effect of this interference is trivial: comparing SCTKG(w/o-Senti) and SCTKG(Gold-Senti), topic-consistency only drops 0.08 (3.89 vs 3.81) in human evaluation and 1.27 (43.84 vs 42.57) in automatic evaluation, which is completely acceptable for a sentiment-controllable model.

Ablation study on text quality. To understand how each component of our model contributes to the task, we train two ablated versions of our model: without adversarial training ("w/o AT") and without TGA ("w/o TGA"). Note that in the "w/o TGA" experiment, we implement a memory network as in Yang et al. (2019), which uses the concepts in ConceptNet but disregards their correlation. All models use golden sentiment labels.

| Methods | BLEU | Con. | Nov. | E-div. | Flu. |
| --- | --- | --- | --- | --- | --- |
| Full model | 11.02 | 3.81 | 3.37 | 3.94 | 3.75 |
| w/o TGA | 10.34 | 3.54 | 3.17 | 3.89 | 3.38 |
| w/o AT | 9.85 | 3.37 | 3.20 | 3.92 | 3.51 |

Table 2: Ablation study on text quality. "w/o AT" means without adversarial training; "w/o TGA" means without TGA. Con., Nov., E-div., Flu. represent topic-consistency, novelty, essay-diversity, and fluency, respectively. "Full model" represents SCTKG(Gold-Senti) in this table.

Table 2 presents the BLEU scores and human evaluation results of the ablation study.

Comparing the full model with "w/o TGA", we find that without TGA the model's performance drops on all metrics. In particular, topic-consistency drops by 0.27, which shows that by directly learning the correlation between the topic words and their neighboring concepts, concepts more closely related to the topic words receive higher attention during generation. Novelty drops by 0.2, because TGA expands the external knowledge graph information, making the output essays more novel and informative. Fluency drops by 0.37, because TGA helps our model choose a more suitable concept in the topic knowledge graph according to the current context. And the 0.68 drop in BLEU shows that TGA helps our model fit the dataset better by modeling the relations between topic words and neighboring concepts.

Comparing the full model with "w/o AT", we find that adversarial training improves BLEU, topic-consistency, and fluency, because the discriminative signal enhances the topic consistency and authenticity of the generated texts.

# 5.2 Results on Sentiment Control

In this section, we investigate whether the model accurately controls the sentiment and how each component affects sentiment control performance. We train three ablated versions of our model: without the sentiment label in the encoder, without the sentiment label in the decoder, and without TGA. We randomly sample 50 essays with 250 sentences in total from our test set. Instead of using golden sentiment labels, the sentiment labels are randomly given in this section.
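The Precision, Recall, and Senti-F1 reported for sentiment control can be instantiated, for example, as one-vs-rest scores per sentiment class, macro-averaged over the classes; the exact averaging is our assumption, as the paper does not spell it out. A minimal sketch:

```python
def sentiment_control_scores(pred, target, classes=("pos", "neg", "neu")):
    """Macro-averaged one-vs-rest precision/recall/F1 over sentiment classes.

    `pred` holds the classifier's labels for the generated sentences and
    `target` the control labels fed to the generator. This is one plausible
    reading of Senti-F1, not the authors' exact metric code.
    """
    ps, rs, fs = [], [], []
    for c in classes:
        tp = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        fp = sum(1 for p, t in zip(pred, target) if p == c and t != c)
        fn = sum(1 for p, t in zip(pred, target) if p != c and t == c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        ps.append(prec); rs.append(rec); fs.append(f1)
    k = len(classes)
    return sum(ps) / k, sum(rs) / k, sum(fs) / k

# Toy illustration with four sentences (hypothetical labels):
p, r, f = sentiment_control_scores(
    pred=["pos", "neg", "neg", "neu"], target=["pos", "pos", "neg", "neu"])
# p = r = 5/6, f = 7/9 here
```

With per-class one-vs-rest counts, precision and recall need not coincide, which is consistent with the distinct Precision and Recall columns reported in Table 3.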
Predicting the golden sentiment is relatively simple, because sentiment labels can sometimes be derived directly from the coherence between contexts. We therefore adopt a more difficult experimental setting that aims to generate sentences following arbitrarily given sentiment labels. The results are shown in Table 3.

| Methods | Precision | Recall | Senti-F1 |
| --- | --- | --- | --- |
| Full model | 0.68 | 0.66 | 0.67 |
| w/o Enc-senti | 0.56 | 0.55 | 0.56 |
| w/o Dec-senti | 0.59 | 0.62 | 0.61 |
| w/o TGA | 0.62 | 0.64 | 0.63 |

Table 3: Ablation study on sentiment control. "w/o Enc-senti" means removing the sentiment embedding on the encoder side, and "w/o Dec-senti" means removing it from the decoder. "Full model" represents SCTKG(Ran-Senti) in this table.

We can see that removing the sentiment label from either the encoder or the decoder leads to an obvious drop in control performance ($-11\%$ / $-6\%$ Senti-F1), and that the sentiment label in the encoder is the most important, since removing it leads to the largest decline ($-11\%$ Senti-F1). Although TGA does not directly impose sentiment information, it still helps to improve the control ability ($4\%$ Senti-F1), which shows that learning correlations among concepts in the topic knowledge graph strengthens the sentiment control ability of the model. For instance, when given a positive label, the concepts related to the relation "desire of" are more likely to receive more attention, because concepts with this relation tend to carry more positive meaning.

# 5.3 Case Study

| Input topics | Law, Education |
| --- | --- |
| Sentiment labels | neu. pos. neg. neg. neu. |
| Output essay | I am a senior high school student. I am in the best high school in our town. But bullying still exist on our campus. Teachers always ignore this phenomenon. What should we do to protect our rights? |

Table 4: Given the topics "Law" and "Education" and a randomly set sentiment label for each sentence, we generate an essay according to the topics and sentiment labels. "neu.", "pos.", and "neg." represent neutral, positive, and negative, respectively. We have translated the original Chinese output into English.

Table 4 presents an example of our output essay with a random sentiment sequence. Positive sentences are shown in red and negative sentences in blue. We can see that the output essay is not only closely related to the topics "Law" and "Education", but also consistent with the randomly given sentiment labels. Meanwhile, our model makes full use of commonsense knowledge with the help of TGA. For example, "high school student" and "right" are neighboring concepts related to the topic words "Education" and "Law".

# 6 Related Work

Topic-to-Text Generation. Automatically generating an article is a challenging task in natural language processing. Feng et al.
(2018) are the first to propose the TEG task; they utilize a coverage vector to integrate topic information. Yang et al. (2019) use extra commonsense knowledge to enrich the input information and adopt adversarial training to enhance topic-consistency. However, both fail to consider the sentiment factor in essay generation and to fully utilize the external knowledge base, which hinders them from generating high-quality texts.

Besides, Chinese poetry generation is similar to our task in that it can also be regarded as a topic-to-sequence learning task. Li et al. (2018) adopt CVAE and adversarial training to generate diverse poetry. Yang et al. (2017) use a CVAE with hybrid decoders to generate Chinese poems. Yi et al. (2018) use reinforcement learning to directly improve diversity criteria. However, their models are not directly applicable to the TEG task: because they do not take knowledge into account, they cannot generate long and meaningful unstructured essays.

Controllable Text Generation. Some work has explored style control mechanisms for text generation tasks. For example, Zhou and Wang (2017) use naturally annotated emoji Twitter data for emotional response generation. Wang and Wan (2018) propose adversarial training to control the sentiment of texts. Chen et al. (2019) propose a semi-supervised CVAE to generate poetry and derive a different lower bound to capture generalized sentiment-related semantics. Different from their work, we inject the sentiment label into both the encoder and decoder of the CVAE, and show that modeling a topic knowledge graph can further enhance the sentiment control ability.

# 7 Conclusions

In this paper, we take a further step on the challenging topic-to-essay generation task by proposing a novel sentiment-controllable topic-to-essay generator with a topic knowledge graph enhanced decoder, named SCTKG.
To obtain a better representation from external knowledge, we present TGA, a novel topic knowledge graph representation mechanism. Experiments show that our model can not only generate sentiment-controllable essays but also outperform competitive baselines in text quality.

# References

Huimin Chen, Xiaoyuan Yi, Maosong Sun, Wenhao Li, and Zhipeng Guo. 2019. Sentiment-controllable Chinese poetry generation. pages 4925-4931.
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
Xiaocheng Feng, Ming Liu, Jiahao Liu, Bing Qin, Yibo Sun, and Ting Liu. 2018. Topic-to-essay generation with neural networks. In *IJCAI*, pages 4078-4084.
Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Diederik P. Kingma and Max Welling. 2013. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
Leo Leppänen, Myriam Munezero, Mark Granroth-Wilding, and Hannu Toivonen. 2017. Data-driven news generation for automated journalism. In Proceedings of the 10th International Conference on Natural Language Generation, pages 188-197.
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objective function for neural conversation models.
Juntao Li, Yan Song, Haisong Zhang, Dongmin Chen, Shuming Shi, Dongyan Zhao, and Rui Yan. 2018. Generating classical Chinese poems via conditional variational autoencoder and adversarial training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3890-3900.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002a. Bleu: a method for automatic evaluation of machine translation.
In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002b. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318. Association for Computational Linguistics.
Yan Song, Shuming Shi, Jing Li, and Haisong Zhang. 2018. Directional skip-gram: Explicitly distinguishing left and right context for word embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 175-180.
Robert Speer and Catherine Havasi. 2012. Representing general relational knowledge in ConceptNet 5. In LREC, pages 3679-3686.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.
Ke Wang and Xiaojun Wan. 2018. SentiGAN: Generating sentimental texts via mixture adversarial networks. In *IJCAI*, pages 4446-4452.
Pengcheng Yang, Lei Li, Fuli Luo, Tianyu Liu, and Xu Sun. 2019. Enhancing topic-to-essay generation with external commonsense knowledge. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2002-2012.
Xiaopeng Yang, Xiaowen Lin, Shunda Suo, and Ming Li. 2017. Generating thematic Chinese poetry using conditional variational autoencoders with hybrid decoders. arXiv preprint arXiv:1711.07632.
Xiaoyuan Yi, Maosong Sun, Ruoyu Li, and Wenhao Li. 2018. Automatic poetry generation with mutual reinforcement learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3143-3153.
+Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In Thirty-First AAAI Conference on Artificial Intelligence. +Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. arXiv preprint arXiv:1703.10960. +Xianda Zhou and William Yang Wang. 2017. Mojitalk: Generating emotional responses at scale. arXiv preprint arXiv:1711.04090. \ No newline at end of file diff --git a/asentimentcontrollabletopictoessaygeneratorwithtopicknowledgegraph/images.zip b/asentimentcontrollabletopictoessaygeneratorwithtopicknowledgegraph/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c21308f638293de4604e84c6646113e967750774 --- /dev/null +++ b/asentimentcontrollabletopictoessaygeneratorwithtopicknowledgegraph/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:12882c86a41fb062a62c8f38e18d40c86b7c0daee0ca7a3c8514d89b040d05e6 +size 285081 diff --git a/asentimentcontrollabletopictoessaygeneratorwithtopicknowledgegraph/layout.json b/asentimentcontrollabletopictoessaygeneratorwithtopicknowledgegraph/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..9df9cbd6fba7e9add651ef00a67fbcb504d87782 --- /dev/null +++ b/asentimentcontrollabletopictoessaygeneratorwithtopicknowledgegraph/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8776ce2ef499edc8842daa7a0d2d9d1becd624ad72bd9a4b23ad32bd7630bc93 +size 315128 diff --git a/asharedprivaterepresentationmodelwithcoarsetofineextractionfortargetsentimentanalysis/03676f47-ee60-400e-8b54-1ade84103624_content_list.json b/asharedprivaterepresentationmodelwithcoarsetofineextractionfortargetsentimentanalysis/03676f47-ee60-400e-8b54-1ade84103624_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1066abec4d2aff74042358be006c6887542c5ed9 --- 
/dev/null +++ b/asharedprivaterepresentationmodelwithcoarsetofineextractionfortargetsentimentanalysis/03676f47-ee60-400e-8b54-1ade84103624_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:325cdc9fac9831b6a523c2bda1dd31d04665ce283102382624106ea84d059c3e +size 76643 diff --git a/asharedprivaterepresentationmodelwithcoarsetofineextractionfortargetsentimentanalysis/03676f47-ee60-400e-8b54-1ade84103624_model.json b/asharedprivaterepresentationmodelwithcoarsetofineextractionfortargetsentimentanalysis/03676f47-ee60-400e-8b54-1ade84103624_model.json new file mode 100644 index 0000000000000000000000000000000000000000..4d9a9a96b9fe21c2acab9837a92d1e901f23e118 --- /dev/null +++ b/asharedprivaterepresentationmodelwithcoarsetofineextractionfortargetsentimentanalysis/03676f47-ee60-400e-8b54-1ade84103624_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ab140e0bb9873d4ae082804e0fd51601ef78896aa1c58ea382416c829f1603cf +size 90203 diff --git a/asharedprivaterepresentationmodelwithcoarsetofineextractionfortargetsentimentanalysis/03676f47-ee60-400e-8b54-1ade84103624_origin.pdf b/asharedprivaterepresentationmodelwithcoarsetofineextractionfortargetsentimentanalysis/03676f47-ee60-400e-8b54-1ade84103624_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..deda3a27e7f668ce4dac29513831ea5829d7c934 --- /dev/null +++ b/asharedprivaterepresentationmodelwithcoarsetofineextractionfortargetsentimentanalysis/03676f47-ee60-400e-8b54-1ade84103624_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:873467ea004e36f664150cde11c38bcc3ad6b8ac19d4183b7236460d96e63d7d +size 384367 diff --git a/asharedprivaterepresentationmodelwithcoarsetofineextractionfortargetsentimentanalysis/full.md b/asharedprivaterepresentationmodelwithcoarsetofineextractionfortargetsentimentanalysis/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..d529e0c195e4fdeb40acf98b567476dabd61b9df --- /dev/null +++ b/asharedprivaterepresentationmodelwithcoarsetofineextractionfortargetsentimentanalysis/full.md @@ -0,0 +1,347 @@ +# A Shared-Private Representation Model with Coarse-to-Fine Extraction for Target Sentiment Analysis + +Peiqin Lin $^{1}$ Meng Yang $^{1,2*}$ + +$^{1}$ School of Data and Computer Science, Sun Yat-sen University, Guangzhou, China $^{2}$ Key Laboratory of Machine Intelligence and Advanced Computing (SYSU), Ministry of Education lpq29743@gmail.com, yangm6@mail.sysu.edu.cn + +# Abstract + +Target sentiment analysis aims to detect opinion targets along with recognizing their sentiment polarities from a sentence. Some models with span-based labeling have achieved promising results in this task. However, the relation between the target extraction task and the target classification task has not been well exploited. Besides, the span-based target extraction algorithm has a poor performance on target phrases due to the maximum target length setting or length penalty factor. To address these problems, we propose a novel framework of Shared-Private Representation Model (SPRM) with a coarse-to-fine extraction algorithm. For jointly learning target extraction and classification, we design a Shared-Private Network, which encodes not only shared information for both tasks but also private information for each task. To avoid missing correct target phrases, we also propose a heuristic coarse-to-fine extraction algorithm that first gets the approximate interval of the targets by matching the nearest predicted start and end indexes and then extracts the targets by adopting an extending strategy. Experimental results show that our model achieves state-of-the-art performance. 
# 1 Introduction

Target sentiment analysis aims to detect the opinion targets explicitly mentioned in sentences, referred to as target extraction, and to predict the sentiment polarities over these targets, referred to as target classification. For example, in the sentence "I love Windows 7 which is a vast improvement over Vista", the user mentions two opinion targets, namely "Windows 7" and "Vista", and expresses positive sentiment over the first target and negative sentiment over the second.

Traditional methods formulated the joint target extraction and classification task as a sequence labeling task. Under the sequence tagging scheme, prevalent models have been applied, including Conditional Random Fields (CRF) (Mitchell et al., 2013; Zhang et al., 2015; Li and Lu, 2017), Gated Recurrent Units (GRU) (Ma et al., 2018), Long Short-Term Memory (LSTM) (Li et al., 2019a), Convolutional Neural Networks (CNN) (He et al., 2019), and Bidirectional Encoder Representations from Transformers (BERT) (Li et al., 2019b). Although these methods have achieved improved results, they suffer from the sentiment inconsistency problem of the sequence tagging scheme.

To address this, methods with span-based labeling, which can ensure sentiment consistency within a span, have been proposed (Zhou et al., 2019; Hu et al., 2019). Zhou et al. (2019) proposed a span-based loss to predict whether the target within a span is correct. Hu et al. (2019) proposed a span-based model that first predicts the boundaries of the targets and then predicts the sentiment polarities based on the corresponding features. Although deep learning methods, especially span-based ones, have achieved promising results, some issues remain:

1) The relation between target extraction and target classification is not well exploited.
Previous methods applied either a shared encoding module (Ma et al., 2018) or two private encoding modules (Luo et al., 2019; Hu et al., 2019) to learn features for target extraction and target classification, which weakens the ability to represent the relation between the two tasks. As shown in Fig. 1, there is both shared and private information between target extraction and target classification. Specifically, semantic and syntactic information is essential for both tasks, so it is shared information. On the other hand, for the target extraction sub-task, some information, such as noun and pronoun information, can be exploited but may interfere with target classification. Similarly, sentiment information may only be useful for target classification.

![](images/43ab2cb0573b9b2b6295acadca0008ae6d942bfdee7eb3c5ab7bef96139039da.jpg)
Figure 1: An example of shared and private information of target extraction and target classification

2) The span-based extraction algorithm still performs poorly on extracting target phrases. Zhou et al. (2019) face a trade-off between search space and target length: a small maximum target length may miss long target phrases, while a large maximum length brings a huge search space and many negative candidates. Hu et al. (2019) adopt a heuristic algorithm with a length penalty to avoid overlong targets; however, the length penalty makes the model inclined to ignore target phrases.

To solve these issues, we propose a novel framework, namely the Shared-Private Representation Model (SPRM) with a coarse-to-fine extraction algorithm. Inspired by (Bousmalis et al., 2016; Liu et al., 2016; Chen et al., 2018), we design a Shared-Private Network, which contains a shared encoding layer, namely Shared BERT (Devlin et al., 2018), and two private encoding layers, namely the Target Extraction Long Short-Term Memory (TE-LSTM) and the Target Classification Long Short-Term Memory (TC-LSTM).
The two private networks provide task-specific features and improve the ability to model the two sub-tasks. Moreover, we propose a coarse-to-fine extraction algorithm, which obtains the approximate intervals of targets by matching predicted start/end boundaries and then applies an extending strategy, instead of a penalty factor, to extract target phrases correctly. Experiments on three benchmark datasets show that our model achieves state-of-the-art performance. Our contributions are summarized as follows:

- A Shared-Private Network is designed to learn the shared and private representations for the two sub-tasks;
- A coarse-to-fine extraction algorithm is proposed for target extraction to better extract target phrases;
- Experimental results show that our model achieves state-of-the-art performance.

# 2 Related Work

Mitchell et al. (2013) formulated the task of target sentiment analysis as a sequence tagging problem and proposed to use a Conditional Random Field (CRF) with hand-crafted linguistic features. They design three ways to solve the problem: the pipeline way, the collapsed way, and the joint way. The pipeline way uses two independent models to extract targets and predict the sentiment of the extracted targets separately. In the joint way, shared modules between the two sub-tasks are jointly trained. Finally, the collapsed way combines the labels of target extraction and target classification into the same label space and predicts the collapsed labels.

Based on Mitchell et al. (2013), rule-based methods (Zhang et al., 2015; Li and Lu, 2017) and deep-learning-based methods (Ma et al., 2018; Li et al., 2019a; Luo et al., 2019; He et al., 2019) have been proposed to solve the target sentiment analysis task with the sequence tagging scheme.
Although these methods have achieved improved results, they suffer from the huge search space and sentiment inconsistency problems of the sequence tagging scheme (Hu et al., 2019).

To address this, span-based models were proposed (Zhou et al., 2019; Hu et al., 2019), which solve the target sentiment analysis task by predicting the spans of the targets. Zhou et al. (2019) proposed a span-based loss to predict whether the target candidate within a span is a correct target. Hu et al. (2019) proposed an extract-then-classify framework, which first extracts targets using a heuristic multi-span decoding algorithm and then classifies their polarities with the corresponding summarized span representations. Compared to Zhou et al. (2019), the extraction method of Hu et al. (2019) better addresses the huge search space problem and achieves better results. However, some issues remain. For instance, Hu et al. (2019) implement the joint model simply by employing a shared backbone for the two sub-tasks, which ignores the private information of each task. In addition, the heuristic multi-span decoding algorithm involves manually set thresholds for different datasets and a length penalty factor for avoiding overlong targets, which is not suitable for extracting target phrases.

![](images/9186780403d7b68efc69edaa2e382866d74bc081ca2f2cd06c1c265e6d91c11e.jpg)
Figure 2: The overall architecture of SPRM. "TE" and "TC" denote "Target Extraction" and "Target Classification", respectively.

# 3 Model

To solve the aforementioned issues, we simultaneously learn shared and private features for target extraction and classification in a unified framework, in which a coarse-to-fine extraction algorithm is designed. In this paper, we propose a novel Shared-Private Representation Model (SPRM), shown in Fig.
2, which encodes the shared and private information of the target extraction and target classification sub-tasks effectively at a low cost. Specifically, a Shared BERT Network is designed to encode as much shared information of both sub-tasks as possible, and two Private BiLSTMs are introduced to obtain supplementary private representations for each task with fewer parameters than BERT. Moreover, we design a coarse-to-fine algorithm that first gets the approximate interval of the targets by matching the nearest predicted start and end indexes, without any thresholds, and then gets the final targets by extending the interval if the adjacent words are predicted as start/end boundaries. With this algorithm, targets can be extracted with reasonable length, since the nearest strategy avoids overlong targets while the extending strategy avoids missing target phrases.

# 3.1 Shared-Private Model

The overall architecture of the Shared-Private Model is shown in Fig. 2. It is composed of six components: an embedding layer, two Private BiLSTM networks for target extraction and target sentiment classification, a Shared BERT Network for both sub-tasks, and the final layers for target extraction and target classification.

Given the input sentence, the embedding layer processes it with the tokenization process and wordpiece embeddings of BERT (Devlin et al., 2018), and obtains the input embeddings $\pmb{E} \in R^{n \times d_{e}}$, where $n$ is the length of the processed sequence and $d_{e}$ is the size of the embedding vectors.

For target sentiment analysis, both the shared information of the two sub-tasks and the private information of each sub-task should be considered. Therefore, a shared network is designed to encode shared information between the two sub-tasks, such as semantic and syntactic information of the input sentence.
+ +$$ +\boldsymbol {V} _ {s} = f (\boldsymbol {E}) \tag {1} +$$ + +where $f(\cdot)$ is the function of learning shared features and $\mathbf{V}_s$ is the learned feature. + +At the same time, the task-specific private information of target extraction (e.g., whether a word is a noun) and target classification (e.g., sentiment information of each word) should be learned in private modules. + +$$ +\boldsymbol {V} _ {t e} = g _ {t e} (\boldsymbol {E}), \boldsymbol {V} _ {t c} = g _ {t c} (\boldsymbol {E}) \tag {2} +$$ + +where $g_{te}(\cdot)$ and $g_{tc}(\cdot)$ are the functions of learning private features of the target extraction task and the target classification task, $\mathbf{V}_{te}$ and $\mathbf{V}_{tc}$ are the private features. + +Based on the shared and private features, fusion modules are designed to obtain the final features for the two sub-tasks. + +$$ +\tilde {\boldsymbol {V}} _ {t e} = h _ {t e} \left(\boldsymbol {V} _ {s}, \boldsymbol {V} _ {t e}\right), \tilde {\boldsymbol {V}} _ {t c} = h _ {t c} \left(\boldsymbol {V} _ {s}, \boldsymbol {V} _ {t c}\right) \tag {3} +$$ + +where $h_{te}(\cdot)$ and $h_{tc}(\cdot)$ are the functions of fusing shared and private features of the target extraction task and the target classification task, $\tilde{\mathbf{V}}_{te}$ and $\tilde{\mathbf{V}}_{tc}$ are the final features, which are fed into output layers. + +Finally, $\tilde{V}_{te}$ and $\tilde{V}_{tc}$ are fed into the Target Extraction Layer (TE-Layer) and Target Classification Layer (TC-Layer) to generate the predictions, respectively. The model is finally trained by minimizing the sum of the target extraction loss and polarity classification loss: + +$$ +l = l _ {T E} + l _ {T C} \tag {4} +$$ + +where $l_{TE}$ and $l_{TC}$ are the losses of the target extraction task and target classification task. 
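For concreteness, the data flow of Eqs. (1)-(4) can be sketched at the shape level. This is a toy numpy sketch: the random matrices stand in for the BERT/BiLSTM encoders and fully connected layers, all dimensions are hypothetical, and the concatenation-style fusion of Sec. 3.1.3 is used as $h$.

```python
import numpy as np

# Toy dimensions (hypothetical): sequence length n, embedding size d_e,
# shared (BERT) size d_s, private (BiLSTM) hidden size d_p.
n, d_e, d_s, d_p = 6, 4, 8, 5
rng = np.random.default_rng(0)

E = rng.normal(size=(n, d_e))                          # input embeddings

f    = lambda X: X @ rng.normal(size=(d_e, d_s))       # shared encoder, Eq. (1)
g_te = lambda X: X @ rng.normal(size=(d_e, 2 * d_p))   # private TE encoder, Eq. (2)
g_tc = lambda X: X @ rng.normal(size=(d_e, 2 * d_p))   # private TC encoder, Eq. (2)

V_s, V_te, V_tc = f(E), g_te(E), g_tc(E)

def h(V_shared, V_private):
    # Eq. (3) fusion, instantiated as in Sec. 3.1.3: project the private
    # features to d_s with an FC layer, then concatenate with the shared ones.
    proj = V_private @ rng.normal(size=(V_private.shape[1], d_s))
    return np.concatenate([V_shared, proj], axis=-1)

V_te_final, V_tc_final = h(V_s, V_te), h(V_s, V_tc)
assert V_te_final.shape == V_tc_final.shape == (n, 2 * d_s)
```

The two final feature matrices then feed the TE-Layer and TC-Layer, whose losses are summed as in Eq. (4).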
We omit an exhaustive description of the TC-Layer, as it is the same as the classification layer applied in (Hu et al., 2019); readers can find more details there.

In the following subsections, we detail the design of the aforementioned components: the shared module, the two private modules, the combination of shared and private modules, and the TE-Layer.

# 3.1.1 Shared BERT

As the shared features are used in both target extraction and target classification, the shared module needs a strong ability to learn a shared representation. In addition, shared features generally capture common information between the two sub-tasks, such as semantic and syntactic information, which also exists in other NLP tasks. Therefore, the prevalent Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018), a pre-trained bidirectional Transformer encoder that achieves state-of-the-art performance across a variety of NLP tasks, is chosen as the shared network.

Given the embeddings $\pmb{E}$, a series of stacked Transformer blocks is applied to project the input embeddings into a sequence of contextual vectors $V_{s} \in R^{n \times d_{s}}$, where $d_{s}$ is the output dimension.

# 3.1.2 Private BiLSTM

Although the Shared BERT captures powerful features for the two sub-tasks, these shared features are task-invariant rather than task-specific. Therefore, private modules should be designed to learn private features for the two sub-tasks, respectively.
Since the Shared BERT already extracts sufficient syntactic and semantic information with a huge number of parameters, we adopt the Bidirectional Long Short-Term Memory (BiLSTM), which captures the relationships between words in a sentence with far fewer parameters than BERT, as the private modules. Specifically, we adopt two Private BiLSTM networks, namely TE-LSTM and TC-LSTM, to learn the private features for target extraction and target sentiment classification, respectively. Taking the same embeddings $\pmb{E}$ as inputs, we obtain the BiLSTM outputs $V_{te} \in R^{n \times 2d_p}$ and $V_{tc} \in R^{n \times 2d_p}$, where $d_p$ is the hidden size of the BiLSTM networks.

# 3.1.3 Combination of Shared and Private Features

Since the dimension of the Private BiLSTM output is twice that of the Shared BERT output, we first project the outputs of the shared and private modules into the same vector space by employing fully connected layers after the private modules:

$$
\boldsymbol{V}_{te}^{\prime} = FC_{te}(\boldsymbol{V}_{te}); \quad \boldsymbol{V}_{tc}^{\prime} = FC_{tc}(\boldsymbol{V}_{tc}) \tag{5}
$$

where $V_{te}^{\prime}, V_{tc}^{\prime} \in R^{n \times d_s}$. Then we simply apply a concatenation operation to obtain the final features at a low cost:

$$
\tilde{\boldsymbol{V}}_{te} = \left(\boldsymbol{V}_{s}; \boldsymbol{V}_{te}^{\prime}\right); \quad \tilde{\boldsymbol{V}}_{tc} = \left(\boldsymbol{V}_{s}; \boldsymbol{V}_{tc}^{\prime}\right) \tag{6}
$$

# 3.2 Coarse-to-Fine Extraction Algorithm

Hu et al. (2019) proposed a heuristic algorithm based on the span-based labeling scheme and verified that span-based labeling performs better on target extraction than sequence tagging methods. However, the heuristic algorithm requires a manually set threshold for extracting targets and also performs poorly on target phrases due to the length penalty factor, which is designed to avoid overlong targets.

To address these issues, we propose a coarse-to-fine extraction algorithm.
In the coarse-to-fine algorithm, the approximate interval of a target is obtained by matching the nearest predicted start and end indexes rather than by manually setting a threshold, and the final target is then extracted with a reasonable length by an extending strategy, which extends the intervals if the adjacent words are predicted as start/end boundaries.

The implementation of the coarse-to-fine extraction algorithm is described in detail in the following subsections, and Table 1 shows how the algorithm works on a concrete example. The algorithm consists of three steps:

- Boundary prediction obtains the predictions of start and end positions (Sec. 3.2.1);
- Coarse extraction generates approximate intervals of target candidates with the nearest strategy based on the prediction results (Sec. 3.2.2);
- Fine extraction generates the final targets with an extending strategy based on the approximate intervals of the candidates (Sec. 3.2.3).
Input example: "... easy to integrate bluetooth devices, and USB devices are recognized ..."

| ID | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Tokens | easy | to | integrate | blue | ##tooth | devices | , | and | usb | devices | are | recognized |
| $p^s$ | 0.0005 | - | 0.8040 | 0.8515 | - | - | - | - | 0.9875 | - | - | - |
| $p^e$ | - | - | 0.0060 | - | - | 0.9494 | - | - | 0.0171 | 0.8899 | - | - |
| $label^s$ | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| $label^e$ | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 |

Coarse extraction:

- Boundary number: $nt_s$ and $nb_e$ from the true labels, i.e., $nb_s = 3$ (start labels $\{2, 3, 8\}$), $nb_e = 2$ (end labels $\{5, 9\}$); $nb = \max(nb_s, nb_e) = 3$
- Top-$nb$ start/end boundaries: $S = \{2, 3, 8\}$, $E = \{5, 8, 9\}$
- Target candidates: $C_s = \{(2, 5), (3, 5), (8, 8)\}$, $C_e = \{(3, 5), (8, 8), (8, 9)\}$; $C = \{(2, 5), (3, 5), (8, 8), (8, 9)\}$

Fine extraction:

- Extending strategy: $C' = \{(2, 5), (8, 9)\}$
- Target number: $nt_s = 2$ (intervals $[2, 3]$ and $[8, 8]$), $nt_e = 2$ (intervals $[5, 5]$ and $[9, 9]$); $nt = \mathrm{round}((nt_s + nt_e) / 2) = 2$
- Top-$nt$ targets: $O = \{(2, 5), (8, 9)\}$

Output targets: ["integrate bluetooth devices", "usb devices"]
Table 1: An example of the coarse-to-fine extraction algorithm. The input words are represented by their token ids.

# 3.2.1 Boundary Prediction

As mentioned in Sec. 3.1, $\tilde{V}_{te}$ is fed into the TE-Layer to generate the predictions, and the loss of the target extraction task $l_{TE}$ is then computed. Here, the TE-Layer is described in detail.

The start and end scores for each word in the sequence are obtained by first applying fully connected layers and then a sigmoid function:

$$
\pmb{g}^{s} = FC_{s}(\tilde{\pmb{V}}_{te}), \quad \pmb{p}^{s} = \mathrm{sigmoid}(\pmb{g}^{s}) \tag{7}
$$

$$
\pmb{g}^{e} = FC_{e}(\tilde{\pmb{V}}_{te}), \quad \pmb{p}^{e} = \mathrm{sigmoid}(\pmb{g}^{e}) \tag{8}
$$

Different from (Hu et al., 2019), we employ a sigmoid function instead of the softmax function to obtain the scores, because the sigmoid function is better suited to binary classification, such as predicting whether a word is a start/end boundary here. Given the probabilities of the start and end positions of each word, the labels denoting whether a word is the start/end boundary of a target are computed as:

$$
label_{i}^{s} = \begin{cases} 1 & \text{if } p_{i}^{s} \geq 0.5 \\ 0 & \text{otherwise} \end{cases} \qquad
label_{i}^{e} = \begin{cases} 1 & \text{if } p_{i}^{e} \geq 0.5 \\ 0 & \text{otherwise} \end{cases} \tag{9}
$$

where $\pmb{p}^{s} = \{p_{1}^{s}, p_{2}^{s}, \dots, p_{n}^{s}\}$ and $\pmb{p}^{e} = \{p_{1}^{e}, p_{2}^{e}, \dots, p_{n}^{e}\}$ are the start and end scores, respectively.
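To make the thresholding concrete, here is a minimal sketch using the scores from the Table 1 example (entries not listed in the table are treated as zero, which is an assumption for illustration; the helper name is ours, not from the paper):

```python
def boundary_labels(p_s, p_e, threshold=0.5):
    """Threshold per-token sigmoid scores into start/end boundary labels."""
    label_s = [1 if p >= threshold else 0 for p in p_s]
    label_e = [1 if p >= threshold else 0 for p in p_e]
    return label_s, label_e

# Start/end scores for the 12 tokens of the Table 1 example.
p_s = [0.0005, 0, 0.8040, 0.8515, 0, 0, 0, 0, 0.9875, 0, 0, 0]
p_e = [0, 0, 0.0060, 0, 0, 0.9494, 0, 0, 0.0171, 0.8899, 0, 0]
label_s, label_e = boundary_labels(p_s, p_e)
# Predicted start boundaries at ids {2, 3, 8}; end boundaries at ids {5, 9}.
```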
Taking these two scores, the start labels $\pmb{y}^{s} = \{y_{1}^{s}, y_{2}^{s}, \dots, y_{n}^{s}\}$ and the end labels $\pmb{y}^{e} = \{y_{1}^{e}, y_{2}^{e}, \dots, y_{n}^{e}\}$ as inputs, we get the loss of target extraction:

$$
l _ {T E} = \sum_ {i} ^ {n} \left(\mathrm{logloss} \left(p _ {i} ^ {s}, y _ {i} ^ {s}\right) + \mathrm{logloss} \left(p _ {i} ^ {e}, y _ {i} ^ {e}\right)\right) \tag {10}
$$

where $\mathrm{logloss}(p_i, y_i)$ is an error function defined as follows:

$$
\mathrm{logloss} \left(p _ {i}, y _ {i}\right) = \begin{cases} - \log \left(p _ {i}\right) & \text{if } y _ {i} = 1 \\ - \log \left(1 - p _ {i}\right) & \text{if } y _ {i} = 0 \end{cases} \tag {11}
$$

# 3.2.2 Coarse Extraction

The coarse extraction step first obtains the top start/end boundaries and then generates the original set of target candidates with the nearest strategy, which matches the nearest predicted start and end boundaries without any threshold.

Given the predicted labels of the start and end positions, we can count the tokens predicted as start/end boundaries, namely $nb_{s}$ and $nb_{e}$ . Since enough candidates should be extracted to avoid missing correct ones, we take the maximum to compute the number of boundaries $nb$ that should be considered:

$$
n b = \max \left(n b _ {s}, n b _ {e}\right) \tag {12}
$$

The top $nb$ start/end boundary candidates are then obtained from $\pmb{p}^s$ and $\pmb{p}^e$ , yielding the sets of start/end candidates $\pmb{S}$ and $\pmb{E}$ .

Since a target generally consists of only a few tokens, we apply the nearest strategy to avoid overlong targets. Using the nearest strategy, we match the nearest end index in $\pmb{E}$ with each start boundary candidate to get the start target candidate set $C_s$ . Similarly, the end target candidate set $C_e$ is obtained.
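As a minimal sketch (helper name ours, not from the paper), the nearest strategy on the boundary sets of the Table 1 example looks like this:

```python
def nearest_match(S, E):
    """The nearest strategy: pair each start with the nearest end at or after
    it (giving C_s), and each end with the nearest start at or before it (C_e)."""
    C_s = {(s, min(e for e in E if e >= s)) for s in S if any(e >= s for e in E)}
    C_e = {(max(s for s in S if s <= e), e) for e in E if any(s <= e for s in S)}
    return C_s, C_e

S, E = {2, 3, 8}, {5, 8, 9}   # top-nb boundaries from the Table 1 example
C_s, C_e = nearest_match(S, E)
C = C_s | C_e                 # {(2, 5), (3, 5), (8, 8), (8, 9)}
```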
Finally, the approximate intervals of the target candidates are obtained:

$$
\boldsymbol {C} = \boldsymbol {C} _ {s} \cup \boldsymbol {C} _ {e} \tag {13}
$$

# 3.2.3 Fine Extraction

To get the final targets, the fine extraction step first adopts an extending strategy and then selects targets based on the start/end probabilities and the computed target number.

For target phrases, the boundaries of the nouns inside them are usually predicted as start/end positions, too. For example, the token 'blue' in the target phrase 'integrate bluetooth devices' is predicted as the start position of a target, as shown in Table 1. Therefore, the extending strategy shown in Algorithm 1 is designed to extract complete targets. In the extending strategy, every candidate can be extended on both the left side (lines 3-4) and the right side (lines 5-6) as long as the adjacent word is predicted as a start or end boundary.

# Algorithm 1 Extending Strategy

Input: $C$ : the candidate set; $S$ : the start candidate set; $E$ : the end candidate set

Output: $C'$ : the extended candidate set

1: $C' = \{\}$
2: for $(s_i, e_i)$ in $C$ do
3: while $s_i - 1 \geq 0$ and $s_i - 1 \in S$ do
4: $s_i = s_i - 1$
5: while $e_i + 1 < n$ and $e_i + 1 \in E$ do
6: $e_i = e_i + 1$
7: $C' = C' \cup (s_i, e_i)$

As mentioned before, the boundaries of the nouns in target phrases are usually predicted as start/end positions of a target. Therefore, the model may predict one or a few start/end positions for a target, which are generally adjacent to each other. In other words, the number of contiguous intervals of tokens labeled as start/end boundaries can be used to infer the number of extracted targets $nt$ . Specifically, the interval counts of $label^s$ and $label^e$ , namely $nt_s$ and $nt_e$ , are computed first, and then their average is used to estimate the number of targets $nt$ .
$$
nt = \mathrm{round}\left(\left(nt_s + nt_e\right) / 2\right) \tag{14}
$$
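Continuing the Table 1 example, the whole fine extraction step (the extending strategy of Algorithm 1 plus the target-number estimate of Eq. 14 and the top-$nt$ selection) can be sketched as follows; the helper names and score padding are assumptions for illustration, not from the paper:

```python
def extend(C, S, E, n):
    """Algorithm 1: grow a candidate while the adjacent token is also a
    predicted start (left side) or end (right side) boundary."""
    out = set()
    for s, e in C:
        while s - 1 >= 0 and s - 1 in S:
            s -= 1
        while e + 1 < n and e + 1 in E:
            e += 1
        out.add((s, e))
    return out

def num_runs(labels):
    """Number of contiguous runs of predicted boundaries."""
    return sum(1 for i, x in enumerate(labels) if x and (i == 0 or not labels[i - 1]))

def fine_extract(C, S, E, label_s, label_e, p_s, p_e):
    n = len(label_s)
    C_ext = extend(C, S, E, n)
    nt = round((num_runs(label_s) + num_runs(label_e)) / 2)   # Eq. (14)
    chosen = []
    # Rank by summed start/end scores; skip candidates overlapping chosen targets.
    for s, e in sorted(C_ext, key=lambda se: -(p_s[se[0]] + p_e[se[1]])):
        if len(chosen) == nt:
            break
        if all(e < s2 or s > e2 for s2, e2 in chosen):
            chosen.append((s, e))
    return sorted(chosen)

# Inputs reconstructed from the Table 1 example.
p_s = [0.0005, 0, 0.8040, 0.8515, 0, 0, 0, 0, 0.9875, 0, 0, 0]
p_e = [0, 0, 0.0060, 0, 0, 0.9494, 0, 0, 0.0171, 0.8899, 0, 0]
label_s = [p >= 0.5 for p in p_s]
label_e = [p >= 0.5 for p in p_e]
C = {(2, 5), (3, 5), (8, 8), (8, 9)}
targets = fine_extract(C, {2, 3, 8}, {5, 8, 9}, label_s, label_e, p_s, p_e)
# targets == [(2, 5), (8, 9)]: "integrate bluetooth devices" and "usb devices"
```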
| Dataset | Split | #+ | #- | #0 | Total |
| --- | --- | --- | --- | --- | --- |
| LAPTOP | Train | 987 | 860 | 455 | 2,302 |
| LAPTOP | Test | 339 | 130 | 165 | 634 |
| REST | Train | 2,610 | 1,037 | 667 | 4,314 |
| REST | Test | 1,524 | 501 | 264 | 2,289 |
| TWITTER | - | 703 | 274 | 2,266 | 3,243 |
Table 2: Dataset statistics. '+', '-', and '0' refer to the positive, negative, and neutral sentiment classes, respectively.

With the target number $nt$ , we sort the extended candidate set $C'$ in descending order by the sum of the start and end probabilities and then choose the top $nt$ candidates. Note that candidates overlapping already-chosen targets are removed during selection.

# 4 Experiments

# 4.1 Setup

# 4.1.1 Datasets

We conduct experiments on three benchmark datasets, shown in Table 2. LAPTOP contains product reviews from the laptop domain in SemEval 2014 (Pontiki et al., 2014). REST is the union of the restaurant domains from SemEval 2014, 2015, and 2016 (Pontiki et al., 2015, 2016). TWITTER, built by (Mitchell et al., 2013), consists of Twitter posts. Following (Zhang et al., 2015; Li et al., 2019a; Hu et al., 2019), we report ten-fold cross-validation results for TWITTER, as there is no train-test split. For each dataset, the gold target span boundaries are available, and the targets are labeled with sentiment polarities, namely positive (+), negative (-), and neutral (0).

# 4.1.2 Metrics

We adopt precision (P), recall (R), and F1 score as evaluation metrics. A predicted target is counted as correct only if it exactly matches a gold target span and its polarity. To analyze the two sub-tasks separately, precision, recall, and F1 are also used for the target extraction sub-task, while the accuracy (ACC) metric is applied to polarity classification.

# 4.1.3 Model Settings

We use the publicly available BERT-Base model as the Shared BERT, and refer readers to (Devlin et al., 2018) for details on model sizes. The dimension sizes $d_{e}$ , $d_{p}$ and $d_{s}$ are all 768. In addition, we use the Adam optimizer (Kingma and Ba, 2014) with
| Model | LAPTOP Prec. | LAPTOP Rec. | LAPTOP F1 | REST Prec. | REST Rec. | REST F1 | TWITTER Prec. | TWITTER Rec. | TWITTER F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| *Sequence-tagging-based methods* | | | | | | | | | |
| CRF-pipeline | 59.69 | 47.54 | 52.93 | 52.28 | 51.01 | 51.64 | 42.97 | 25.21 | 31.73 |
| CRF-joint | 57.38 | 35.76 | 44.06 | 60.00 | 48.57 | 53.68 | 43.09 | 24.67 | 31.35 |
| CRF-collapsed | 59.27 | 41.86 | 49.06 | 63.39 | 57.74 | 60.43 | 48.35 | 19.64 | 27.86 |
| NN-CRF-pipeline | 57.72 | 49.32 | 53.19 | 60.09 | 61.93 | 61.00 | 43.71 | 37.12 | 40.06 |
| NN-CRF-joint | 55.64 | 34.48 | 45.49 | 61.56 | 50.00 | 55.18 | 44.62 | 35.84 | 39.67 |
| NN-CRF-collapsed | 58.72 | 45.96 | 51.56 | 62.61 | 60.53 | 61.56 | 46.32 | 32.84 | 38.36 |
| TAG-pipeline | 65.84 | 67.19 | 66.51 | 71.66 | 76.45 | 73.98 | 54.24 | 54.37 | 54.26 |
| TAG-joint | 65.43 | 66.56 | 65.99 | 71.47 | 75.62 | 73.49 | 54.18 | 54.29 | 54.20 |
| TAG-collapsed | 63.71 | 66.83 | 65.23 | 71.05 | 75.84 | 73.35 | 54.05 | 54.25 | 54.12 |
| UNIFIED | 61.27 | 54.89 | 57.90 | 68.64 | 71.01 | 69.80 | 53.08 | 43.56 | 48.01 |
| DOER | - | - | 60.35 | - | - | 72.78 | - | - | 51.37 |
| *Span-based methods* | | | | | | | | | |
| Zhou SPAN | 61.40 | 58.20 | 59.76 | 76.20 | 68.20 | 71.98 | 54.84 | 48.44 | 51.44 |
| Hu SPAN-pipeline | 69.46 | 66.72 | 68.06 | 76.14 | 73.74 | 74.92 | 60.72 | 55.02 | 57.69 |
| Hu SPAN-joint | 67.41 | 61.99 | 64.59 | 72.32 | 72.61 | 72.47 | 57.03 | 52.69 | 54.55 |
| Hu SPAN-collapsed | 50.08 | 47.32 | 48.66 | 63.63 | 53.04 | 57.85 | 51.89 | 45.05 | 48.11 |
| *Our model* | | | | | | | | | |
| SPRM | 68.66 | 68.77 | **68.72** | 77.78 | 80.60 | **79.17** | 60.25 | 58.76 | **59.45** |
Table 3: Main results (%) on three benchmark datasets. State-of-the-art results are marked in bold.
| Model | LAPTOP | REST |
| --- | --- | --- |
| SPRM w/o Shared BERT | 53.25 | 70.03 |
| SPRM w/o Private BiLSTMs | 66.72 | 78.78 |
| SPRM w/o Aspect Extraction LSTM | 66.20 | 78.74 |
| SPRM w/o Aspect Classification LSTM | 67.92 | 78.41 |
| SPRM | 68.72 | 79.17 |
a learning rate of 3e-5 and warmup over the first $10\%$ of steps. The batch size is 32 and a dropout probability of 0.1 is used.

# 4.1.4 Baselines

We compare SPRM with both sequence-tagging-based methods and span-based methods. The sequence-tagging-based methods include CRF-{pipeline, joint, collapsed} (Mitchell et al., 2013), NN-CRF-{pipeline, joint, collapsed} (Zhang et al., 2015), TAG-{pipeline, joint, collapsed} (Hu et al., 2019), UNIFIED (Li et al., 2019a), and DOER (Luo et al., 2019). The span-based methods are Zhou SPAN (Zhou et al., 2019) and Hu SPAN-{pipeline, joint, collapsed} (Hu et al., 2019).

# 4.2 Main Results

We report the results of SPRM and the baselines in Table 3. Two main observations can be drawn from the table. Firstly, compared to SPAN-joint, SPRM improves the performance significantly, by $4.13\%$ , $6.70\%$ and $4.90\%$ on the three datasets, since SPAN-joint ignores private encoding components for the two sub-tasks and only applies a shared BERT network. It shows that some private informa

Table 4: F1 results $(\%)$ on examining the effectiveness of the Shared-Private Network.
| Model | $N_{para}$ | LAPTOP | REST |
| --- | --- | --- | --- |
| SPAN-pipeline + BERT-Large | 671M | 68.06 | 74.92 |
| SPAN-joint + BERT-Large | 336M | 64.59 | 72.47 |
| SPAN-joint + BERT-Base | 109M | 59.88 | 68.95 |
| SPRM + BERT-Large | 342M | 69.11 | 79.08 |
| SPRM + BERT-Base | 131M | 68.72 | 79.17 |
Table 5: F1 results $(\%)$ on LAPTOP and REST w.r.t. different BERT backbone models.
| Model | LAPTOP | REST |
| --- | --- | --- |
| SPRM with CRF | 59.55 | 75.34 |
| SPRM with (Hu et al., 2019) | 66.35 | 78.49 |
| SPRM | 68.72 | 79.17 |
Table 6: F1 results (%) on examining the effectiveness of the Coarse-to-Fine Extraction Algorithm.

tion for the two sub-tasks can be well captured by applying two private encoding components. Secondly, SPRM achieves $0.66\%$ , $4.25\%$ , and $1.76\%$ absolute gains on the three datasets compared to the best SPAN method, SPAN-pipeline, indicating the efficacy of the Shared BERT. SPRM therefore achieves better performance with fewer parameters than SPAN-pipeline, which employs two separate BERT encoding networks for target extraction and target classification, respectively.

# 4.3 Effectiveness of Shared-Private Network

To verify the effectiveness of the Shared-Private Network, we conduct extensive experiments on the LAPTOP and REST datasets; the experimental results are shown in Table 4.

From the results, we observe that removing the Shared BERT makes the performance much worse, since BERT has a strong ability to learn powerful features. Although the model performs well with BERT alone, the Private BiLSTMs still learn useful features for each sub-task that improve the performance. Specifically, the Private AE-LSTM is more effective than the Private AC-LSTM, as removing the former causes a bigger performance drop.

Moreover, we report the performance of SPAN and SPRM with respect to different BERT backbone networks in Table 5 to further examine the effectiveness of the Shared-Private Network. We observe that SPRM with BERT-Base achieves results comparable to SPRM with BERT-Large, while SPAN-joint with BERT-Base is significantly worse than SPAN-joint with BERT-Large. This shows that introducing private layers improves performance with far fewer parameters than switching the backbone from BERT-Base to BERT-Large. Besides, SPRM with BERT-Base outperforms SPAN-pipeline with BERT-Large, which uses almost 5 times the trainable parameters of SPRM with BERT-Base.
Therefore, the introduction of the Shared BERT not only connects the tasks of target extraction and target classification to some extent but also reduces the number of parameters.

# 4.4 Effectiveness of Coarse-to-Fine Extraction Algorithm

To verify the effectiveness of the coarse-to-fine extraction algorithm, we replace it with CRF and with the heuristic algorithm proposed by (Hu et al., 2019) on the LAPTOP and REST datasets; the experimental results are shown in Table 6.

Among the three extraction methods, CRF performs worst, since it suffers from a huge search space. In addition, the coarse-to-fine extraction algorithm outperforms the heuristic extraction method of (Hu et al., 2019), as our model extracts targets in a more flexible way.

# 4.5 Analysis on Both Sub-Tasks

To analyze the performance of our model on target extraction and target sentiment classification, we compare it with previous approaches designed for both tasks as well as state-of-the-art methods proposed for a single sub-task, namely DE-CNN (Xu et al., 2018) for target extraction and DMMN-SDCM (Lin et al., 2019) for target classification. The experimental results of
| Model | LAPTOP | REST | TWITTER |
| --- | --- | --- | --- |
| DE-CNN | 81.59 | - | - |
| TAG | 85.20 | 84.48 | 73.47 |
| SPAN | 83.35 | 82.38 | 75.28 |
| SPM | 84.72 | 86.71 | 69.85 |
Table 7: F1 comparison of different approaches for target extraction.
| Model | LAPTOP | REST | TWITTER |
| --- | --- | --- | --- |
| DMMN-SDCM | 77.59 | - | - |
| TAG | 71.42 | 81.80 | 59.76 |
| SPAN | 81.39 | 89.95 | 75.16 |
| SPM | 81.50 | 90.35 | 78.34 |
Table 8: Accuracy comparison of different approaches for polarity classification.

target extraction and target classification are shown in Table 7 and Table 8, respectively.

On the task of target extraction, our model does not achieve the best performance on all three datasets. SPM outperforms SPAN by $1.37\%$ and $4.33\%$ on the LAPTOP and REST datasets, but performs worse on the TWITTER dataset. On the task of target sentiment classification, our model outperforms all the baselines, by $0.11\%$ , $0.40\%$ , and $3.18\%$ on the three datasets. The experimental results illustrate one trade-off of the joint model relative to the pipeline model: it performs best on target sentiment classification, but cannot be guaranteed to perform best on both sub-tasks at the same time.

# 4.6 Qualitative Analysis

Table 9 shows some qualitative cases sampled from SPAN-pipeline and SPRM. We observe that SPRM with the coarse-to-fine extraction algorithm extracts more accurate targets. The coarse-to-fine extraction algorithm computes the number of targets from the predicted scores of the start and end boundaries instead of a manually set threshold, so our method is more precise about the number of targets. Take example 6 in the table: the correct targets "Windows XP" and "Windows 7" are not extracted by SPAN-pipeline because its threshold incorrectly filters them out, while our method extracts all three correct targets since it infers the number of targets correctly. Example 1 also confirms this. In addition, our algorithm adopts the extending strategy instead of the strategy of length
| Examples | SPAN-pipeline | SPRM |
| --- | --- | --- |
| 1. All in all, the [food]+ was great (except for the [desserts]−). | [food]+ (✓), None (✗) | [food]+ (✓), [desserts]− (✓) |
| 2. [Vanison]0 was good but not amazing. | [Vanison]0 (✓) | [Vanison]0 (✓) |
| 3. The [selection of food]+ is excellent (I'm not used to having much choice at restaurants), and the [atmosphere]+ is great. | [selection]+ (✓), [food]+ (✗), [atmosphere]+ (✓) | [selection of food]+ (✓), [atmosphere]+ (✓) |
| 4. Beware of the [chili signed food items]− not unless you want to call the fire department to douse the flames in your mouth. | [chili]− (✗), [food items]− (✗) | [chili signed food items]− (✓) |
| 5. This mac does come with an [extender cable]0 and I'm using mine right now hoping the [cable]+ will stay nice for the many years I plan on using this mac. | [extender cable]0 (✓), None (✗) | [extender cable]0 (✓), [cable]+ (✓) |
| 6. I used [Windows XP]0, [Windows Vista]0, and [Windows 7]0 extensively. | None (✗), [Windows Vista]0 (✓), None (✗) | [Windows XP]0 (✓), [Windows Vista]0 (✓), [Windows 7]0 (✓) |
| 7. The only thing I miss is that my old Alienware laptop had [backlit keys]−. | [backlit]− (✗), [keys]− (✗) | [backlit keys]− (✓) |
Table 9: Case study. The extracted targets are wrapped in brackets with the predicted polarities given as subscripts. Correct and incorrect predictions are marked with ✓ and ✗, respectively.

penalty, so it avoids missing targets that consist of several words. Take example 4 in the table: the correct extracted target is "chili signed food items", but SPAN-pipeline splits the gold target into two separate targets because of its length penalty. Our algorithm extracts the target "chili signed food items" correctly, since we obtain the original candidates with the closest indexes and then extract the targets with the extending strategy.

# 5 Conclusion

In this paper, we propose a Shared-Private Representation Model (SPRM) with coarse-to-fine extraction for target sentiment analysis. To encode the information of the two sub-tasks of target sentiment analysis, a Shared-Private Network is proposed to learn shared as well as private features. Moreover, we design a coarse-to-fine extraction algorithm, which extracts targets without thresholds and adopts an extending strategy to better extract target phrases. Experiments on three benchmark datasets show the effectiveness of SPRM.

# Acknowledgements

This work is partially supported by the National Natural Science Foundation of China (Grant no. 61772568), the Guangdong Basic and Applied Basic Research Foundation (Grant no. 2019A1515012029), and the Guangdong Special Support Program.

# References

Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. 2016. Domain separation networks. In Advances in Neural Information Processing Systems, pages 343-351.
Xilun Chen, Ahmed Hassan Awadallah, Hany Hassan, Wei Wang, and Claire Cardie. 2018. Zero-resource multilingual model transfer: Learning what to share.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018.
Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2019. An interactive multi-task learning network for end-to-end aspect-based sentiment analysis. arXiv preprint arXiv:1906.06906.
Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li, and Yiwei Lv. 2019. Open-domain targeted sentiment analysis via span-based extraction and classification. arXiv preprint arXiv:1906.03820.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Hao Li and Wei Lu. 2017. Learning latent sentiment scopes for entity-level sentiment analysis. In Thirty-First AAAI Conference on Artificial Intelligence.
Xin Li, Lidong Bing, Piji Li, and Wai Lam. 2019a. A unified model for opinion target extraction and target sentiment prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6714-6721.
Xin Li, Lidong Bing, Wenxuan Zhang, and Wai Lam. 2019b. Exploiting BERT for end-to-end aspect-based sentiment analysis. arXiv preprint arXiv:1910.00883.
Peiqin Lin, Meng Yang, and Jianhuang Lai. 2019. Deep mask memory network with semantic dependency and context moment for aspect level sentiment classification. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, pages 5088-5094.
Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Recurrent neural network for text classification with multi-task learning. arXiv preprint arXiv:1605.05101.
Huaishao Luo, Tianrui Li, Bing Liu, and Junbo Zhang. 2019. DOER: Dual cross-shared RNN for aspect term-polarity co-extraction. arXiv preprint arXiv:1906.01794.
Dehong Ma, Sujian Li, and Houfeng Wang. 2018. Joint learning for targeted sentiment analysis. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4737-4742.
+Margaret Mitchell, Jacqui Aguilar, Theresa Wilson, and Benjamin Van Durme. 2013. Open domain targeted sentiment. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1643-1654, Seattle, Washington, USA. Association for Computational Linguistics. +Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, AL-Smadi Mohammad, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, et al. 2016. Semeval-2016 task 5: Aspect based sentiment analysis. In Proceedings of the 10th international workshop on semantic evaluation (SemEval-2016), pages 19-30. + +Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. Semeval-2015 task 12: Aspect based sentiment analysis. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 486-495. +Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27-35. +Hu Xu, Bing Liu, Lei Shu, and Philip S Yu. 2018. Double embeddings and cnn-based sequence labeling for aspect extraction. arXiv preprint arXiv:1805.04601. +Meishan Zhang, Yue Zhang, and Duy-Tin Vo. 2015. Neural networks for open domain targeted sentiment. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 612-621, Lisbon, Portugal. Association for Computational Linguistics. +Yan Zhou, Longtao Huang, Tao Guo, Jizhong Han, and Songlin Hu. 2019. A span-based joint model for opinion target extraction and target sentiment classification. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 5485-5491. AAAI Press. 
# Sentiment Analysis with Weighted Graph Convolutional Networks

Fanyu Meng, Junlan Feng, Danping Yin, Si Chen, Min Hu

China Mobile Research Institute, Beijing, China

{mengfanyu,fengjunlan,yinding,chensiyj,humin}@chinamobile.com

# Abstract

Syntactic information is essential for both sentiment analysis (SA) and aspect-based sentiment analysis (ABSA).
Previous work has achieved great progress by applying a Graph Convolutional Network (GCN) over the dependency tree of a sentence. However, these models do not fully exploit the syntactic information obtained from dependency parsing, such as the diversified types of dependency relations. The message passing process of GCN should be differentiated according to this syntactic information. To tackle this problem, we design a novel weighted graph convolutional network (WGCN) which can exploit rich syntactic information based on feature combination. Furthermore, we utilize BERT instead of BiLSTM to generate contextualized representations as inputs for the GCN, and present an alignment method to keep word-level dependencies consistent with the WordPiece units of BERT. With our proposal, we improve the state-of-the-art on four of six ABSA tasks and two of three SA tasks.

# 1 Introduction

Sentiment analysis (SA), also known as opinion mining, is the task of determining the polarity of a piece of text. Commonly, the classification is whether the text expresses a negative or positive attitude towards a topic or a product. Fine-grained sentiment analysis involves more than two sentiment classes (very negative, negative, neutral, positive, and very positive). Aspect-based sentiment analysis (ABSA) goes one step further by assigning sentiment polarities to specific aspects of an involved entity or topic. For example, a comment on a restaurant saying "The restaurant was expensive, but the menu was great" expresses positive and negative attitudes towards the two aspects food and price.

Much progress has been made recently to advance the state-of-the-art on shared SA and ABSA tasks. Contributions mainly come from two research directions.
One is to take advantage of pre-trained language models such as ELMo (Peters et al., 2018), BERT (Devlin et al., 2018) and XLNet (Yang et al., 2019a), which are typically employed to extract contextual features of a piece of text for the final classifier. These models effectively alleviate the heavy feature-engineering effort of earlier work on SA and ABSA. Further inventions have been proposed to better fine-tune these models. For instance, a recent work (Sun et al., 2019a) converts ABSA into a sentence-pair classification task, where an auxiliary sentence is generated; it then fine-tunes the pre-trained BERT model for this new task, with promising experimental results.

The second line of research is to exploit the syntactic structure of subjective sentences, with the belief that interactions between words need to be considered in sentiment analysis, which is not sufficiently captured even by the latest attention-based models. (Zhang et al., 2019) quotes a concrete example, "Its size is ideal and the weight is acceptable", where acceptable is often incorrectly identified by attention models as the most attentive word for size. Previous works (Socher et al., 2011; Dong et al., 2015; Qian et al., 2015; Socher et al., 2013) propose recursive tree-structured models to compose a sentence representation from its constituent phrases. (Kim et al., 2018) presents a novel RvNN architecture to dynamically integrate comprehensive syntactic information derived from the sentence parsing structure and word-level linguistic tags. Models using a Graph Convolutional Network (GCN) over the dependency tree of a sentence have shown evident effectiveness in ABSA tasks. The argument is that GCN captures long

![](images/73bd5b67d4514e41c4bec0dcf6ea2826f8497f23d1641d663d001c267f935280.jpg)
Figure 1: An example of a dependency tree annotated with the type of dependency relation and the POS tag for each word.
+

range syntactic relations that are obscure at the surface (Sun et al., 2019b; Zhang et al., 2019; Zhao et al., 2019).

Though these efforts have substantially pushed up the state-of-the-art accuracy of SA and ABSA, some challenges remain for sentiment classification. For example, the aforementioned GCN-based models are designed to encode the dependency tree of a sentence with a binary adjacency matrix, where 1 indicates a dependency relationship between two words and 0 otherwise. However, dependency relations are of diversified types, and the words involved in each relation may have different part-of-speech (POS) tags. This syntactic information should also influence the message passing process of GCN. As shown in Figure 1, the relation ("det(vehicle-3, a-2)") has less influence on polarity than the relation ("nsubj(worthwhile-14, film-11)") in the sentence "As a vehicle to savour Binoche's skill, the film is well worthwhile". Besides, as (Sethi and Bhattacharyya, 2017) points out, pitfalls of SA and ABSA such as sentiment shifters (negations, double negations and but-clauses) have not been well handled by current models.

In this paper, we are motivated to encode more syntactic features and to leverage both pre-trained models and syntactic parsing in a compositional way, believing the two are complementary in tackling the long-standing challenges of SA and ABSA. More specifically, we propose a Weighted Graph Convolutional Network (WGCN) to work with BERT. WGCN improves on GCN to model rich syntactic information. Its adjacency matrix represents not only the binary presence of dependency relations, but also the types of those relations as well as the part-of-speech (POS) categories of the involved words.
We argue that the POS tag of a word is the category assigned according to its syntactic function, and hence influences the overall sentiment of the sentence as well as the sentiments of its aspects. All weights and embeddings in WGCN are trainable.

Details of this model are provided later in this paper. WGCN relies on BERT to extract contextualized representations as inputs for the WGCN layers. One challenge is the inconsistency between the WordPiece units of BERT and the word pairs considered in the dependency tree. We propose an alignment method to bridge this gap.

Our contributions are summarized as follows:

- We propose a novel weighted GCN (WGCN) architecture over the dependency tree, which exploits rich syntactic features by assigning trainable weights to the adjacency matrix.
- We propose a framework that compositionally exploits a pre-trained language model (BERT) and WGCN for SA and ABSA. We refer to the whole architecture as BERT-WGCN.
- With our proposal, we are able to improve the state-of-the-art on four out of six ABSA tasks and two out of three SA tasks.

The rest of the paper is organized as follows. Section 2 gives a brief review of BERT and GCN. Section 3 elaborates on the proposed overall model architecture that integrates WGCN and BERT, as well as how the model is trained for SA and ABSA tasks respectively. Section 4 reports our experiments and analysis.

# 2 Review of GCN and BERT

Graph convolutional networks (Kipf and Welling, 2016) are an adaptation of the convolutional neural network (LeCun et al., 1998) for encoding unstructured data. Given a graph with $k$ nodes, we can obtain an adjacency matrix $A$ where $A_{ij}$ is determined by the connection between node $i$ and node $j$.
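As an illustrative sketch (not from the paper), such an adjacency matrix can be built from a list of dependency edges produced by a parser; the edge list and sentence length below are made up for illustration:

```python
import numpy as np

# Hypothetical (head, dependent) index pairs from a dependency parse
# of a 5-word sentence.
edges = [(2, 0), (2, 1), (2, 3), (3, 4)]
k = 5

A = np.zeros((k, k))
for head, dep in edges:
    # undirected, since the GCNs considered here ignore edge direction
    A[head, dep] = A[dep, head] = 1
```

Each dependency edge thus contributes a symmetric pair of 1-entries in $A$.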
In an $L$-layer GCN, where $H^{l-1}$ and $H^{l}$ represent the output feature matrices at the $(l-1)$-th and $l$-th layers respectively, a graph convolutional operation can be written as:

$$
H ^ {l} = \sigma \left(\tilde {D} ^ {- \frac {1}{2}} \tilde {A} \tilde {D} ^ {- \frac {1}{2}} H ^ {l - 1} W ^ {l}\right) \tag {1}
$$

$\tilde{A} = A + I_{k}$ is the adjacency matrix with self-loops, where $I_{k}$ is the identity matrix and $\tilde{D}_{ii} = \sum_{j}\tilde{A}_{ij}$. $\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}$ is the normalized adjacency matrix, $W^{l}$ is a linear transformation weight, and $\sigma$ is a nonlinear function (e.g., ReLU). In each graph convolution, each node collects and processes information from its neighboring nodes.

BERT (Devlin et al., 2018) is one of the key innovations in the recent progress of contextualized representation learning, inspired by the Transformer (Vaswani et al., 2017). Given a sentence $s = \{w_{1},\dots,w_{n}\}$, its tokenized sequential representation is $\{t_1,t_2,\dots,t_k\}$. The Transformer creates three vectors (query, key and value) for each sequence position, and then applies the attention mechanism at each position $x_{i}$, using the query vector of $x_{i}$ together with the key and value vectors of all positions. This computation can be presented as:

$$
\operatorname {Attention} (Q, K, V) = \operatorname {softmax} \left(\frac {Q K ^ {T}}{\sqrt {d _ {k}}}\right) V \tag {2}
$$

Instead of performing a single attention function, (Vaswani et al., 2017) found it beneficial to use multiple attention heads. BERT, built on Transformers, contains $L$ layers (Transformer blocks); each layer is identical, with $H$ hidden units and $A$ self-attention heads. In particular, we use the $BERT_{LARGE}$ model with $L = 24$, $H = 1024$ and $A = 16$ as hyper-parameters.
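To make the review concrete, here is a minimal NumPy sketch of the graph convolution in Equation (1); the toy graph and feature sizes are arbitrary, and random weights stand in for learned parameters:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution: ReLU(D~^{-1/2} A~ D~^{-1/2} H W), as in Eq. (1)."""
    k = A.shape[0]
    A_tilde = A + np.eye(k)                      # self-loops: A~ = A + I_k
    d = A_tilde.sum(axis=1)                      # D~_ii = sum_j A~_ij
    D_inv_sqrt = np.diag(d ** -0.5)              # D~^{-1/2}
    A_norm = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # normalized adjacency
    return np.maximum(0.0, A_norm @ H @ W)       # ReLU as the nonlinearity

rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.],                      # 3-node path graph 0-1-2
              [1., 0., 1.],
              [0., 1., 0.]])
H0 = rng.standard_normal((3, 4))                 # initial node features
W1 = rng.standard_normal((4, 2))                 # layer weight W^l
H1 = gcn_layer(A, H0, W1)
```

Stacking this function $L$ times yields the $L$-layer GCN described above.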
+

# 3 Approach

Figure 2 gives an overview of the whole architecture. Our model consists of three main components. First, the input text is parsed into word-based syntactic features as inputs for WGCN; at the same time, the text is directly fed into BERT for wordpiece contextualized representations. One challenge here is the inconsistency between the wordpiece units of BERT and the word-based syntactic features for WGCN. The second component is our reformulation of GCN to exploit rich syntactic features. The third component is the sentiment classifier for SA and ABSA. These components are introduced separately in the rest of this section.

# 3.1 Token Alignment towards BERT

Traditional GCN-based approaches over dependency trees use a Bi-LSTM to obtain contextualized representations as initial inputs for GCN (Zhang et al., 2018a,b). Recently, pre-trained models have proved effective at capturing contextual information. We therefore first feed input sentences into the BERT model to generate contextualized representations; this BERT contextualization layer is trained jointly with the rest of the network. One challenge in having BERT work with WGCN, as shown in Figure 2, is the tokenization inconsistency between them: BERT tokenizes the input into wordpiece units, instead of keeping word boundaries as they are.

![](images/933386458fd9d07f3a737e1c12846c17608e96e6aeaec298e1412fddc5f162da.jpg)
Figure 2: Overview of the proposed architecture for SA and ABSA.

To resolve this issue, we propose an alignment procedure to map the word-level sequence from the parser to the wordpiece sequence in BERT. Dependency relations and POS tags are then aligned accordingly. The procedure is as follows:

Given a piece of text $s$, the parser tokenizes it into an $n$-word sequence $s = \{w_1, \dots, w_i, \dots, w_n\}$, and BERT processes it into a $k$-wordpiece sequence $s_t = \{t_1, \dots, t_m, \dots, t_n, \dots, t_k\}$.
For any given $w_i$ in $s$, there is a corresponding subsequence of wordpiece tokens $seg_i = \{t_m, \dots, t_n\}$, where $1 \leq m \leq n \leq k$. We apply two alignment rules to map the parsing results into a new form:

- Rule 1: If $w_{i}$ is labeled by a POS tagger as $p_i$, then all tokens in $seg_i$ are assigned the same tag $p_i$.
- Rule 2: If there is a dependency relation $r_{ij}$ between $w_i$ and $w_j$, then we assign the same dependency relation $r_{ij}$ between any token in $seg_i$ and any token in $seg_j$.

![](images/61fdd64ad723db48b1782b8530b36fa5060002d0a7e5e96acbcff4d419e298a2.jpg)
Figure 3: Alignment from a word-based adjacency matrix to a wordpiece adjacency matrix.

With this alignment, given an adjacency matrix $A$ where $A_{ij} = 1$ if node $i$ is connected with node $j$, we obtain a new adjacency matrix $A^{align}$ where $A_{xy}^{align} = 1$ for any token $x$ in $seg_i$ and any token $y$ in $seg_j$. We plot one example in Figure 3, showing what the adjacency matrix looks like before and after the alignment. The left side shows the dependency matrix over the 14 words of the sentence "As a vehicle to savour Binoche's skill, the film is well worthwhile", with each color representing a particular relation type. The right side shows the dependency matrix over the wordpiece sequence "as a vehicle to sa ##vo ##ur bin ##oche's skill, the film is well worth ##while" after running the above alignment procedure. It is worth noting that Figure 3 presents directed graphs for clarity; as GCNs generally do not consider directions, we use undirected graphs in our model.

# 3.2 Weighted Graph Convolutional Networks over Syntactic Information

![](images/35026af126a05a7c87e19556f2091bcf87be8ddbac7456ac75a70e9c0d7def2c.jpg)
Figure 4: An overview of WGCN. We only show the detailed graph convolution computation for the aspect words price and service for clarity.
+

We aim to extend GCN to model rich syntactic information. To this end, we propose WGCN, which is depicted in Figure 4. Following the same strategy as (Sun et al., 2019b; Zhang et al., 2018b,a), WGCN also takes the adjacency matrix obtained from the dependency tree as input. Different from their approaches, WGCN assigns trainable weights to the adjacency matrix. Each weight is compositionally determined by syntactic information, including the type of dependency relation and the POS tags of the corresponding word pair.

Our hypothesis is that the type of dependency relation and the POS tags of the word pair have a combined impact on how GCN aggregates information from neighbours. We follow the procedure proposed by (Guo et al., 2017) for Factorization Machines (FM), casting pairwise feature interactions as inner products of latent vectors, which has shown very promising results on many tasks. Let $W_{type}$ be a matrix in $R^{d \times N_{type}}$, where $d$ is the dimension of the embedding space (a fixed hyper-parameter) and $N_{type}$ is the number of types of dependency relations. Let $W_{pos}$ be a matrix in $R^{d \times N_{pos}}$, where $N_{pos}$ is the number of combinations of POS tags over all word pairs appearing in dependency relations.

![](images/bc4900123a36f4bc11123d33418832047acade4dab5b78c68946eb52e8b42954.jpg)
Figure 5: Computation of the adjacency matrix in WGCN.

The feature combination weight for the dependency relation from node $x$ to node $y$ in the adjacency matrix can be presented as:

$$
\alpha_ {x y} = f \left(r _ {x y}\right) g \left(\left(p _ {x}, p _ {y}\right)\right) \tag {3}
$$

$r_{xy}$ is the type of dependency relation from node $x$ to node $y$; $p_x$ and $p_y$ are the POS tags of node $x$ and node $y$ in the sentence. The function $f()$ maps the one-hot type vector to the corresponding column of $W_{type}$.
The function $g()$ maps the two-hot POS vector to the corresponding column of $W_{pos}$.

Let $\widehat{A}$ be the final adjacency matrix for WGCN,
| Datasets | SEM14(LAP) | SEM14(Rest) | Rest15 | Rest16 | Twit | SST2 | SST5 | SE13 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Tasks | ABSA | ABSA | ABSA | ABSA | ABSA | SA | SA | SA |
| Train | 2282 | 3608 | 1204 | 1748 | 6051 | 6920 | 8544 | 6021 |
| Dev | - | - | - | - | - | 872 | 1101 | 890 |
| Test | 632 | 1119 | 542 | 616 | 677 | 1821 | 2210 | 2376 |
| # of Class | 3 | 3 | 3 | 3 | 3 | 2 | 5 | 3 |
+

Table 1: Dataset statistics for aspect-based sentiment analysis (ABSA) and sentiment analysis (SA).

then each value of $\widehat{A}$ can be computed as:

$$
\widehat {A} _ {x y} = \alpha_ {x y} A _ {x y} ^ {\text {a l i g n}} \tag {4}
$$

where $\alpha_{xy}$ is computed from Equation (3) and $A_{xy}^{align}$ is obtained by the alignment rules. The process of obtaining $\widehat{A}$ is shown in Figure 5.

To accommodate the trainable adjacency matrix, we modify the standard GCN. Inspired by (Zhang et al., 2018c), we use the $K$-th power of the adjacency matrix to aggregate information from $K$-hop neighbours. Since nodes never connect to themselves in a dependency relation, following the idea of self-looping (Kipf and Welling, 2016), we add a matrix $I^{align}$, obtained by applying the proposed alignment method to an identity matrix, to carry over each node's own information.

Let $H^{b}$ be the final output of the BERT layer; WGCN can then be presented as:

$$
H ^ {G C N} = \sigma \left(C _ {i} \left(\left(\widehat {A}\right) ^ {K} + I ^ {a l i g n}\right) H ^ {b} W\right) \tag {5}
$$

$C_i(\cdot)$ is a clip function for the matrix, $W$ is the parameter matrix of WGCN, and $\sigma$ is the nonlinear ReLU function.

# 3.3 Model Training for SA and ABSA

Sentiment analysis considers the polarity of the whole sequence. In our framework, we use average pooling to aggregate the whole sequence. Let $H^{GCN} = \{h_1^{GCN},\dots,h_k^{GCN}\}$ be the final output of WGCN and $\operatorname{Avg}(\cdot)$ the average pooling function. The pooling process can be presented as:

$$
h _ {S A} = \operatorname {Avg} \left(\left\{h _ {1} ^ {G C N}, \dots , h _ {k} ^ {G C N} \right\}\right) \tag {6}
$$

Aspect-based sentiment analysis considers the polarity of several aspect words given in the current sentence. The BERT model and WGCN allow the embeddings of aspect tokens to aggregate contextual tokens and neighbouring tokens in the dependency tree respectively, providing supervisory signals for the aspect-based classification task.
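Before turning to the aspect-level pooling, Equations (3)-(5) above can be sketched in NumPy. This is not the authors' code: the embedding sizes, the clip range of $C_i(\cdot)$, the relation/POS vocabularies, and the two-token toy input are illustrative assumptions (with a one-to-one word/wordpiece mapping, so $I^{align}$ reduces to the identity):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_type, n_pos = 4, 5, 6                     # the paper uses d = 20
W_type = rng.standard_normal((d, n_type))      # latent vectors per relation type
W_pos = rng.standard_normal((d, n_pos))        # latent vectors per POS-tag pair

def alpha(rel_id, pos_pair_id):
    """Eq. (3): inner product of relation and POS-pair latent vectors."""
    return float(W_type[:, rel_id] @ W_pos[:, pos_pair_id])

def wgcn_layer(A_align, rel_ids, pos_ids, H_b, W, K=2, clip=1.0):
    """Eqs. (4)-(5): weight aligned edges, take the K-th power, add
    self-loops, clip, then apply one ReLU graph convolution."""
    k = A_align.shape[0]
    A_hat = np.zeros_like(A_align)
    for x in range(k):
        for y in range(k):
            if A_align[x, y]:
                A_hat[x, y] = alpha(rel_ids[x, y], pos_ids[x, y]) * A_align[x, y]
    M = np.linalg.matrix_power(A_hat, K) + np.eye(k)   # (A^hat)^K + I^align
    M = np.clip(M, -clip, clip)                        # the clip function C_i
    return np.maximum(0.0, M @ H_b @ W)                # ReLU(... H^b W)

A_align = np.array([[0., 1.], [1., 0.]])       # two wordpieces, one edge
rel_ids = np.zeros((2, 2), dtype=int)          # hypothetical relation-type ids
pos_ids = np.ones((2, 2), dtype=int)           # hypothetical POS-pair ids
H_b = rng.standard_normal((2, 8))              # BERT outputs H^b
W = rng.standard_normal((8, 3))
H_gcn = wgcn_layer(A_align, rel_ids, pos_ids, H_b, W)
```

The pooling steps of this section then average rows of `H_gcn` over either the whole sequence or the aspect span.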
Different from sentiment analysis, we use average pooling to aggregate only the aspect words. Given a sentence pair $(a,s)$, where $a$ is a sub-sequence of $s$ denoting the aspect tokens, the final outputs of WGCN are $\{h_1^{GCN},\dots,h_{a_s}^{GCN},\dots,h_{a_e}^{GCN},\dots,h_k^{GCN}\}$, where $a_{s}$ and $a_{e}$ are the indexes at which the aspect starts and ends. The pooling process can be presented as:

$$
h _ {A B S A} = \operatorname {Avg} \left(\left\{h _ {a _ {s}} ^ {G C N}, \dots , h _ {a _ {e}} ^ {G C N} \right\}\right) \tag {7}
$$

$h_{SA}$ or $h_{ABSA}$ is then fed into a linear layer followed by a softmax operation to obtain a probability distribution over polarities. For training we use the Adam algorithm (Kingma and Ba) with the cross-entropy loss and L2-regularization.

# 4 Experiments

# 4.1 Datasets and Experimental Settings

We conduct our experiments on five aspect-based sentiment analysis datasets and three sentiment analysis datasets:

- TWITTER, a dataset for ABSA originally built by (Li et al., 2014), containing thousands of Twitter posts. Annotations are sentiment labels (negative, neutral and positive) for given keywords or topics such as "Taylor Swift" or "Xbox".
- LAP14, REST14, REST15 and REST16, datasets for ABSA respectively from SemEval 2014 task 4 (Pontiki et al., 2014), SemEval 2015 task 12 (Pontiki et al., 2015) and SemEval 2016 task 5 (Pontiki et al., 2016), consisting of data from two categories, i.e. laptop and restaurant.
- SST (SST2, SST5), a dataset for sentiment analysis on movie reviews, which are annotated with two or five labels (Socher et al., 2013).
- SemEval13, a dataset from SemEval-2013 task 2 (Nakov et al., 2013) for sentiment analysis, consisting of tweets with three sentiment labels (positive, negative and neutral).

The statistics of the datasets are reported in Table 1. The datasets are parsed with the Stanford parser (v3.6.0) for dependency relations and spaCy (2.2.3) for POS tags. We use a learning rate of 0.0001 and a batch size of 32. We set the number of WGCN layers to 3 and the dimension of the syntactic feature embeddings to 20, which are the best-performing settings in pilot studies. Experiments and benchmarks are run on a single GPU server with 4 V100 GPU cards and 8 GB of RAM. All models are implemented in TensorFlow 1.13 using CUDA 10.1.

The experimental results are obtained by averaging 5 runs with random initialization, with accuracy and macro-averaged F1 as the evaluation metrics.

| Model | SEM14(LAP) Acc. | SEM14(LAP) F1 | SEM14(REST) Acc. | SEM14(REST) F1 | SEM14(AVG) Acc. | SEM14(AVG) F1 | REST15 Acc. | REST15 F1 | REST16 Acc. | REST16 F1 | Twitter Acc. | Twitter F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ASGCN-DG | 75.55 | 71.05 | 80.77 | 72.02 | 78.16 | 71.54 | 79.89 | 61.89 | 88.99 | 67.48 | 72.15 | 70.40 |
| CDT | 77.19 | 72.99 | 82.30 | 74.02 | 75.09 | 73.51 | - | - | 85.58 | 69.93 | 74.66 | 73.66 |
| BERT-PT | 78.07 | 75.08 | 84.95 | 76.96 | 81.51 | 76.02 | - | - | - | - | - | - |
| SDGCN | 81.35 | 78.34 | 83.57 | 76.47 | 82.46 | 77.41 | - | - | - | - | - | - |
| TNET | 76.54 | 71.75 | 80.69 | 71.27 | 78.62 | 71.51 | - | - | - | - | 74.97 | 73.60 |
| BERT-ADA Rest | 79.14 | 74.93 | 87.89 | 81.05 | 83.52 | 77.99 | - | - | - | - | - | - |
| BERT-ADA Lapt | 80.23 | 75.77 | 86.22 | 79.79 | 83.22 | 77.78 | - | - | - | - | - | - |
| BERT(comp) | 78.26 | 73.35 | 83.50 | 73.33 | 80.88 | 73.34 | 81.20 | 60.11 | 88.25 | 72.06 | 71.09 | 70.81 |
| BERT-GCN(comp) | 80.03 | 75.79 | 85.32 | 78.05 | 82.68 | 76.92 | 85.30 | 66.01 | 90.91 | 75.31 | 73.98 | 71.62 |
| BERT-WGCN | 80.96 | 76.95 | 86.71 | 79.12 | 83.84 | 78.03 | 85.39 | 66.26 | 91.35 | 75.19 | 75.89 | 73.82 |

Table 2: Model comparison results for ABSA tasks. The state-of-the-art performance on each dataset is in bold. We list average accuracy and F1 scores on SemEval 2014 to evaluate the generalization of different models.

# 4.2 Model for Comparison

To evaluate the effectiveness of our model (BERT-WGCN), we compare its performance with a range of baselines and state-of-the-art models, as listed below:

- CDT (Sun et al., 2019b) is a dependency graph convolutional network integrated with a BiLSTM model.
- ASGCN-DG (Zhang et al., 2019) utilizes aspect-aware attention on a dependency graph convolutional network.
- BERT-PT (Xu et al., 2019) transforms ABSA tasks to machine reading comprehension (MRC) and uses a post-training approach on BERT for ABSA tasks.
- SDGCN (Zhao et al., 2019) employs GCN to model the sentiment dependencies between different aspects in one sentence.
- TNET (Li et al., 2018) employs CNN as the feature extractor and uses a target-specific transformation component to better integrate target information into the word representation.
+
- BERT-ADA (Rietzler et al., 2019) uses a self-supervised domain-specific BERT language model for tuning, followed by supervised task-specific fine-tuning.
- BCN+CoVe (Brahma, 2018) utilizes the prefix and suffix of each token in a sentence, encoded in both forward and reverse directions, to capture long-range dependencies for classification tasks.
- SSAN (Ambartsoumian and Popowich, 2018) is a simple multiple self-attention network with positional encoding for sentiment analysis.
- XLNet (Yang et al., 2019b) is an unsupervised language representation learning method based on a novel generalized permutation language modeling objective, employing Transformer-XL as the backbone model.
- BERT-GCN(comp) (Rietzler et al., 2019) is a model for comparison which connects a GCN after the BERT-LARGE model with our way of
alignment, and whose number of parameters is of the same order of magnitude as our BERT-WGCN.

- BERT(comp) (Rietzler et al., 2019) is a model for comparison based on BERT-LARGE, whose number of parameters is of the same order of magnitude as our BERT-WGCN.

| Model | Aspect | Weight visualization | Prediction | Label |
| --- | --- | --- | --- | --- |
| BERT | food | great food but the service was dreadful! | pos | pos |
| BERT | service | great food but the service was dreadful! | pos | neg |
| BERT | staff | Our waiter was friendly and it is a shame that he didn't have a supportive staff to work with. | pos | neg |
| BERT-GCN | food | great food but the service was dreadful! | pos | pos |
| BERT-GCN | service | great food but the service was dreadful! | neg | neg |
| BERT-GCN | staff | Our waiter was friendly and it is a shame that he didn't have a supportive staff to work with. | pos | neg |
| BERT-WGCN | food | great food but the service was dreadful! | pos | pos |
| BERT-WGCN | service | great food but the service was dreadful! | neg | neg |
| BERT-WGCN | staff | Our waiter was friendly and it is a shame that he didn't have a supportive staff to work with. | neg | neg |

Table 3: Weight visualization on aspect sentiment analysis tasks for BERT(comp), BERT-GCN(comp) and BERT-WGCN, with the corresponding labels.

# 4.3 Experimental Results

Table 2 shows the accuracy and macro-F1 of our model on ABSA tasks. Our BERT-WGCN outperforms most of the compared models on the REST15, REST16 and TWITTER datasets, and achieves competitive results on the SEM14(LAP) and SEM14(REST) datasets compared with SDGCN and BERT-ADA. Notably, our model achieves the highest average accuracy and F1 on the SEM14(LAP) and SEM14(REST) datasets combined. These results demonstrate the effectiveness of BERT-WGCN.

For the ablation study, we compare our GCN-based models with BERT(comp), which has the same number of parameters. BERT-GCN(comp) and BERT-WGCN show consistent improvements, implying that syntactic structure is helpful for ABSA tasks. Compared to BERT-GCN(comp), BERT-WGCN gains better performance on almost all ABSA datasets, indicating that WGCN's factorization of dependency relations and POS tags utilizes syntactic information better than the traditional GCN architecture. As for the slight F1 degradation on the REST16 dataset, the reason might be that the REST16 dataset is relatively small. Another important observation is that all architectures achieving state-of-the-art results utilize a pre-trained model: SDGCN-BERT initializes the word embeddings with pre-trained BERT token embeddings and uses a self-attention network for training, and BERT-ADA uses a domain-specific dataset for model pre-training. We thus believe that contextualized information is essential for ABSA tasks.
| Model | SST-2 | SST-5 | SE13 |
| --- | --- | --- | --- |
| BCN+CoVe | - | 56.2 | - |
| XLNet | 96.8 | - | - |
| SSAN | 84.2 | 48.1 | 72.2 |
| BERT(comp) | 94.3 | 54.8 | 74.9 |
| BERT-GCN(comp) | 94.3 | 55.0 | 75.2 |
| BERT-WGCN | 94.9 | 56.5 | 77.3 |
+

Table 4: Model comparison results for SA tasks. The state-of-the-art performance on each dataset is in bold.

For the SA tasks, as shown in Table 4, the picture is more mixed. On the SST-2 dataset, our proposed model shows no improvement. On SST-5 and SemEval2013, as far as we know, we achieve new state-of-the-art performance. In the ablation study, BERT-GCN(comp) and BERT(comp) obtain almost the same performance. We believe the main reason is that sentence structure is not as important in SA tasks as it is in ABSA tasks. BERT-WGCN gains better performance mainly from the additional feature combinations.

# 4.4 Case Analysis

In this section we compare BERT-WGCN with two baseline models on case examples. To this end we present visualizations of the weights that the aspect tokens extract over the whole sentence in ABSA tasks. For our model to be effective, we expect the aspect tokens to attend to the tokens that correctly determine the sentiment.

As shown in Table 3, the first example, "great food but the service was dreadful!", has two aspects within one sentence. The BERT model is able to detect the polarity of the first aspect "food" but fails to infer the sentiment polarity of the aspect "service". Our hypothesis is that the distance between the aspect token and the adjunct token is important for attention-based models. The GCN-based models capture this connection correctly because the two words are directly related in the dependency tree. The second example, "Our waiter was friendly and it is a shame that he didn't have a supportive staff to work with.", shows the importance of combining dependency relations and POS tags for handling negations. These results suggest the advantage of our model over attention-based models and traditional GCN-based models.

# 4.5 Investigation on the Combination of Syntactic Features
| Relation (High Importance) | POS-pair (High Importance) | Relation (Low Importance) | POS-pair (Low Importance) |
| --- | --- | --- | --- |
| amod | (NOUN, ADJ) | cc | (CCONJ, CCONJ) |
| nsubj | (NOUN, ADJ) | nsubj | (DET, AUX) |
| advmod | (ADV, VERB) | prt | (ADP, VERB) |
| advmod | (ADV, ADJ) | det | (SCONJ, SCONJ) |
| cc | (VERB, CCONJ) | pobj | (ADP, NOUN) |
| csubj | (AUX, VERB) | amod | (ADJ, ADJ) |
| advcl | (VERB, AUX) | amod | (ADV, ADV) |
| prep | (SCONJ, VERB) | det | (DET, DET) |
+

Table 5: Importance of different feature combinations on the SST-5 task.

To evaluate the influence of combining the dependency relation with the POS tags of the word pair, we present several combinations of differing importance in WGCN, based on their weight scores in the adjacency matrix. Since we use a clip function in training, the combinations within each column are not ordered. As shown in Table 5, relations such as adjectival modifier ("amod") or nominal subject ("nsubj") between "NOUN" and "ADJ" outweigh the determiner relation ("det") in SA tasks. Another observation is that the dependency type and POS tags jointly determine the importance: the same dependency relation may have different importance depending on the corresponding POS tags.

# 4.6 Impact of GCN Layers

![](images/db7dbe319f9f206e980106e7595217936bd04c0fa2ed4dc24f5ede00b16f288c.jpg)
Figure 6: Accuracy curves for BERT-GCN(comp) and BERT-WGCN on the Rest14 dataset.

The number of GCN layers $K$ determines the $K$-hop neighbourhood matrix we obtain. We vary the number of layers over $\{1,2,3,4,5,6,7\}$ and check the corresponding accuracy of BERT-GCN(comp) and BERT-WGCN on the REST14 dataset. The results are shown in Figure 6. The performance of both models increases over the first 3 layers and becomes unstable after that: as the number of layers increases, the model becomes more difficult to train and performance begins to fall.

# 5 Conclusion

In this paper we propose a novel weighted graph convolutional network (WGCN) to work with BERT on sentiment analysis and aspect-based sentiment analysis tasks. WGCN improves on GCN to model rich syntactic information, including dependency relations as well as POS tags. BERT is used as a powerful tool to extract contextual representations, which are then used as inputs to WGCN to derive the final vectors for classification. We propose an alignment approach to solve the token inconsistency issue between WGCN and BERT.
Our experimental results and visualizations show the success of our proposal compared to the baselines and previous approaches in the literature.

# References

Artaches Ambartsoumian and Fred Popowich. 2018. Self-attention: A better building block for sentiment analysis neural network classifiers. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 130–139, Brussels, Belgium. Association for Computational Linguistics.
Siddhartha Brahma. 2018. Improved sentence modeling using suffix bidirectional LSTM. arXiv preprint arXiv:1805.07340.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Li Dong, Furu Wei, Ke Xu, Shixia Liu, and Ming Zhou. 2015. Adaptive multi-compositionality for recursive neural network models. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 24(3):422-431.
Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, and Xiuqiang He. 2017. DeepFM: A factorization-machine based neural network for CTR prediction. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pages 1725-1731.
Taeuk Kim, Jihun Choi, Daniel Edmiston, Sanghwan Bae, and Sang-goo Lee. 2018. Dynamic compositionality in recursive neural networks with structure-aware tag representations. CoRR, abs/1809.02286.
Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Thomas N. Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324.
Dong Li, Furu Wei, Chuanqi Tan, Duyu Tang, and Xu Ke. 2014.
Adaptive recursive neural network for target-dependent twitter sentiment classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers).
Xin Li, Lidong Bing, Wai Lam, and Bei Shi. 2018. Transformation networks for target-oriented sentiment classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 946-956, Melbourne, Australia. Association for Computational Linguistics.
Preslav Nakov, Sara Rosenthal, Zornitsa Kozareva, Veselin Stoyanov, Alan Ritter, and Theresa Wilson. 2013. SemEval-2013 task 2: Sentiment analysis in twitter. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 312-320, Atlanta, Georgia, USA. Association for Computational Linguistics.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad Al-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, Véronique Hoste, Marianna Apidianaki, Xavier Tannier, Natalia V. Loukachevitch, Evgeniy Kotelnikov, Núria Bel, Salud María Jiménez-Zafra, and Gülsen Eryigit. 2016. SemEval-2016 task 5: Aspect based sentiment analysis. SemEval@NAACL-HLT, pages 19-30.
Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. SemEval-2015 task 12: Aspect based sentiment analysis. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 486-495, Denver, Colorado. Association for Computational Linguistics.
Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014.
Semeval-2014 task 4: Aspect based sentiment analysis. SemEval@COLING. +Qiao Qian, Bo Tian, Minlie Huang, Yang Liu, Xuan Zhu, and Xiaoyan Zhu. 2015. Learning tag embeddings and tag-specific composition functions in recursive neural network. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1365-1374. +Alexander Rietzler, Sebastian Stabinger, Paul Opitz, and Stefan Engl. 2019. Adapt or get left behind: Domain adaptation through BERT language model finetuning for aspect-target sentiment classification. CoRR, abs/1908.11860. +Abhishek Sethi and Pushpak Bhattacharyya. 2017. Aspect based sentiment analysis-a survey. +Richard Socher, Eric H Huang, Jeffrey Pennin, Christopher D Manning, and Andrew Y Ng. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in neural information processing systems, pages 801-809. +Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on + +empirical methods in natural language processing, pages 1631-1642. +Chi Sun, Luyao Huang, and Xipeng Qiu. 2019a. Utilizing BERT for aspect-based sentiment analysis via constructing auxiliary sentence. CoRR, abs/1903.09588. +Kai Sun, Richong Zhang, Samuel Mensah, Yongyi Mao, and Xudong Liu. 2019b. Aspect-level sentiment analysis via convolution over dependency tree. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5679-5688, Hong Kong, China. Association for Computational Linguistics. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. 
Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. CoRR, abs/1706.03762. +Hu Xu, Bing Liu, Lei Shu, and Philip Yu. 2019. BERT post-training for review reading comprehension and aspect-based sentiment analysis. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2324-2335, Minneapolis, Minnesota. Association for Computational Linguistics. +Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019a. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems, pages 5754-5764. +Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019b. Xlnet: Generalized autoregressive pretraining for language understanding. CoRR, abs/1906.08237. +Chen Zhang, Qiuchi Li, and Dawei Song. 2019. Aspect-based sentiment classification with aspect-specific graph convolutional networks. CoRR, abs/1909.03477. +Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018a. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2205-2215, Brussels, Belgium. Association for Computational Linguistics. +Yuhao Zhang, Peng Qi, and Christopher D Manning. 2018b. Graph convolution over pruned dependency trees improves relation extraction. arXiv preprint arXiv:1809.10185. + +Zhengchao Zhang, Meng Li, Xi Lin, Yinhai Wang, and Fang He. 2018c. Multistep speed prediction on traffic networks: A graph convolutional sequence-to-sequence learning approach with attention mechanism. CoRR, abs/1810.10237. +Pinlong Zhao, Linlin Hou, and Ou Wu. 2019. Modeling sentiment dependencies with graph convolutional networks for aspect-level sentiment classification. CoRR, abs/1906.04501. 
\ No newline at end of file diff --git a/astructureenhancedgraphconvolutionalnetworkforsentimentanalysis/images.zip b/astructureenhancedgraphconvolutionalnetworkforsentimentanalysis/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..890f6c5dcd67150e96ff708077cc57076a331e0f --- /dev/null +++ b/astructureenhancedgraphconvolutionalnetworkforsentimentanalysis/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1599e1edab25de03dfa7d4f1a0f5748267f50d9c9c8b0365edc7b6cebcb84ede +size 504467 diff --git a/astructureenhancedgraphconvolutionalnetworkforsentimentanalysis/layout.json b/astructureenhancedgraphconvolutionalnetworkforsentimentanalysis/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..54142f94869e6a0383b3f0a2fa285baca2ba450f --- /dev/null +++ b/astructureenhancedgraphconvolutionalnetworkforsentimentanalysis/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b5796ff634f32938582e454c833a7f9778cb75050fb5b7f63e7759fb81d302c7 +size 344318 diff --git a/astudyinimprovingbleureferencecoveragewithdiverseautomaticparaphrasing/03d12ac0-a1d4-404f-8161-8a0b3767b683_content_list.json b/astudyinimprovingbleureferencecoveragewithdiverseautomaticparaphrasing/03d12ac0-a1d4-404f-8161-8a0b3767b683_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..01479058528ad63ac95c92977753234e1fd108ce --- /dev/null +++ b/astudyinimprovingbleureferencecoveragewithdiverseautomaticparaphrasing/03d12ac0-a1d4-404f-8161-8a0b3767b683_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ea2949d795a148c15f4e8499c085259f9807f0ee79a87e085286e492f120ed46 +size 104678 diff --git a/astudyinimprovingbleureferencecoveragewithdiverseautomaticparaphrasing/03d12ac0-a1d4-404f-8161-8a0b3767b683_model.json b/astudyinimprovingbleureferencecoveragewithdiverseautomaticparaphrasing/03d12ac0-a1d4-404f-8161-8a0b3767b683_model.json 
new file mode 100644 index 0000000000000000000000000000000000000000..3baed877fe219b1414d6ba8b33f33d4ba9bd8b00 --- /dev/null +++ b/astudyinimprovingbleureferencecoveragewithdiverseautomaticparaphrasing/03d12ac0-a1d4-404f-8161-8a0b3767b683_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3c6d860346cb64bbf642505c1546b726a4ebdc8ffc26d98c96e1d749a40ed984 +size 122576 diff --git a/astudyinimprovingbleureferencecoveragewithdiverseautomaticparaphrasing/03d12ac0-a1d4-404f-8161-8a0b3767b683_origin.pdf b/astudyinimprovingbleureferencecoveragewithdiverseautomaticparaphrasing/03d12ac0-a1d4-404f-8161-8a0b3767b683_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..da8034b8f73201b2aaf306a63e766e81e123b522 --- /dev/null +++ b/astudyinimprovingbleureferencecoveragewithdiverseautomaticparaphrasing/03d12ac0-a1d4-404f-8161-8a0b3767b683_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8b0c266489b9f20cf6d89d3a3ab2373b0ba278925e0ea645abf0057d9b86c5ed +size 398757 diff --git a/astudyinimprovingbleureferencecoveragewithdiverseautomaticparaphrasing/full.md b/astudyinimprovingbleureferencecoveragewithdiverseautomaticparaphrasing/full.md new file mode 100644 index 0000000000000000000000000000000000000000..4b947ed44ff5c60403108ea7e436e373aa84dc0a --- /dev/null +++ b/astudyinimprovingbleureferencecoveragewithdiverseautomaticparaphrasing/full.md @@ -0,0 +1,363 @@ +# A Study in Improving BLEU Reference Coverage with Diverse Automatic Paraphrasing + +Rachel Bawden1 Biao Zhang1 Lisa Yankovskaya2 Andre Tättar2 Matt Post3 + +$^{1}$ School of Informatics, University of Edinburgh, Scotland + +$^{2}$ University of Tartu, Tartu, Estonia + +3Johns Hopkins University, Baltimore, Maryland, USA + +# Abstract + +We investigate a long-perceived shortcoming in the typical use of BLEU: its reliance on a single reference. 
Using modern neural paraphrasing techniques, we study whether automatically generating additional diverse references can provide better coverage of the space of valid translations and thereby improve its correlation with human judgments. Our experiments on the into-English language directions of the WMT19 metrics task (at both the system and sentence level) show that using paraphrased references does generally improve BLEU, and when it does, the more diverse the better. However, we also show that better results could be achieved if those paraphrases were to specifically target the parts of the space most relevant to the MT outputs being evaluated. Moreover, the gains remain slight even when using human paraphrases elicited to maximize diversity, suggesting inherent limitations to BLEU's capacity to correctly exploit multiple references. Surprisingly, we also find that adequacy appears to be less important, as shown by the high results of a strong sampling approach, which even beats human paraphrases when used with sentence-level BLEU.

# 1 Introduction

There is rarely a single correct way to translate a sentence; work attempting to encode the entire translation space of a sentence suggests there may be billions of valid translations (Dreyer and Marcu, 2012). Despite this, in machine translation (MT), system outputs are usually evaluated against a single reference. This especially affects MT's dominant metric, BLEU (Papineni et al., 2002), since it is a surface metric that operates on explicit $n$-gram overlap (see (1), showing two adequate MT outputs, one with only minimal overlap with the reference):

(1) Ref: This did not bother anybody.

$\mathrm{MT}_{1}$: This didn't bother anybody.

$\mathrm{MT}_2$: Nobody was bothered by this.

Almost since its creation, BLEU's status as the dominant metric for MT evaluation has been challenged (e.g., Callison-Burch et al. (2006), Mathur et al. (2020)).
Such work typically uses only a single reference, however, which is a deficient form of the metric, since one of BLEU's raisons d'être was to permit the use of multiple references, in a bid to represent "legitimate differences in word choice and word order." Unfortunately, multiple references are rarely available due to the high cost and effort of producing them. One way to inexpensively create them is with automatic paraphrasing. This has been tried before (Zhou et al., 2006; Kauchak and Barzilay, 2006), but only recently have paraphrase systems become good enough to generate fluent, high-quality sentential paraphrases (with neural MT-style systems). Moreover, it is currently unclear (i) whether adding automatically paraphrased references can provide the diversity needed to better cover the translation space, and (ii) whether this increased coverage overlaps with observed and valid MT outputs, in turn improving BLEU's correlation with human judgments.

We explore these questions, testing on all into-English directions of the WMT19 metrics shared task (Ma et al., 2019) at the system and segment level. We compare two approaches: (i) generating diverse references with the hope of covering as much of the valid translation space as possible, and (ii) more directly targeting the relevant areas of the translation space by generating paraphrases that contain $n$-grams selected from the system outputs. This allows us to compare the effects of diversity against an upper bound that has good coverage. We anchor our study by comparing automatically produced references against human-produced ones on a subset of our data.

Our experiments show that adding paraphrased references rarely hurts BLEU and can provide moderate gains in its correlation with human judgments. Where it does help, the gains are correlated with diversity (and less so with adequacy), but with diminishing returns, and fall short of the non-diverse method designed just to increase coverage.
Manual paraphrasing does give the best system-level BLEU results, but even these gains are relatively limited, suggesting that diversity alone has its limits in addressing weaknesses of surface-based evaluation metrics like BLEU.

# 2 Related Work

**Paraphrasing for MT evaluation** There is a long history of using paraphrasing to overcome the limitations of BLEU-style metrics. Some early approaches rely on external resources (e.g. WordNet) to provide support for synonym matching (Banerjee and Lavie, 2005; Kauchak and Barzilay, 2006; Denkowski and Lavie, 2014). More automatic methods of identifying paraphrases have also been developed. An early example is ParaEval (Zhou et al., 2006), which provides local paraphrase support using paraphrase sets automatically extracted from MT phrase tables. More recently, Apidianaki et al. (2018) exploit contextual word embeddings to build automatic HYTER networks. However, they achieve mixed results, particularly when evaluating high-performing (neural) models.

The use of MT systems to produce paraphrases has also been studied previously. Albrecht and Hwa (2008) create pseudo-references by using out-of-the-box MT systems and see improved correlations with human judgments, helped by the systems being of better quality than those evaluated. This method was extended by Yoshimura et al. (2019), who filter the pseudo-references for quality. An alternative strategy is to use MT-style systems as paraphrasers, applied to the references. Madnani et al. (2007) show that additional (paraphrased) references, even noisy ones, reduce the number of human references needed to tune an SMT system, without significantly affecting MT quality. However, their aim for coverage over quality means that their paraphrases are unlikely to be good enough for use in a final evaluation metric.
Despite the attention afforded to the task, success has been limited by the fact that until recently there were no good sentence-level paraphrasers (Federmann et al. (2019) showed that neural paraphrasers can now outperform humans for adequacy and cost). Attempts (e.g. Naples et al., 2016) using earlier MT paradigms were not able to produce fluent output, and publicly available paraphrase datasets have only been recently released (Wieting and Gimpel, 2018; Hu et al., 2019a). Moreover, most works focus on synonym substitution rather than more radical changes in sentence structure, limiting the coverage achieved.

**Structurally diverse outputs** Diverse generation is important to ensure a wide coverage of possible translations. Diversity, both lexical and structural, has been a major concern of text generation tasks (Colin and Gardent, 2018; Iyyer et al., 2018). State-of-the-art neural MT-style text generation models used for paraphrasing (Prakash et al., 2016; Mallinson et al., 2017) typically suffer from limited diversity in the beam. Techniques such as sampling from the model distribution or from noisy outputs have been proposed to tackle this (Edunov et al., 2018) but can harm output quality.

An effective strategy to encourage structural diversity is to add syntactic information (which can be varied) to the generated text. The constraints can be specified manually, for example by adding a parse tree (Colin and Gardent, 2018; Iyyer et al., 2018) or by specifying more abstract constraints such as rewriting embeddings (Xu et al., 2018). A similar but more flexible approach was adopted more recently by Shu et al. (2019), who augment target training sentences with cluster pseudo-tokens representing the structural signature of the output sentence. When decoding, the top cluster codes are selected automatically using beam search and for each one a different hypothesis is selected.
We adopt Shu et al.'s approach here, due to the automatic nature of constraint selection and the flexibility afforded by constraint definition, allowing us to test different types of diversity by varying the type of sentence clustering method.

# 3 Generating paraphrased references

We look at two ways to produce paraphrases of English references using English-English NMT architectures. The first (Sec. 3.1) aims for maximal lexical and syntactic diversity, in a bid to better cover the space of valid translations. In contrast, the second (Sec. 3.2) aims to produce paraphrases that target the most relevant areas of the space (i.e. that are as close to the good system outputs as possible). Of course, not all outputs are good, so we attempt to achieve coverage while maintaining adequacy to the original reference by using information from the MT outputs. While less realistic practically, this approach furthers the study of the relationship between diversity and valid coverage.

# 3.1 Creating diverse paraphrases

To encourage diverse paraphrases, we use Shu et al.'s (2019) method for diverse MT, which consists of clustering sentences according to their type and training a model to produce outputs corresponding to each type. Applied to our paraphrasing scenario, the methodology is as follows:

1. Cluster target sentences by some property (e.g., semantic or syntactic representation);
2. Assign a code to each cluster and prefix each target sentence in the training data with its code (a pseudo-token), as follows:

(2) $\langle \mathrm{cl}_{14} \rangle$ They knew it was dangerous.
They had chickens, too.
That's the problem.

3. Train an NMT-style paraphrase model using this augmented data;
4. At test time, apply the paraphraser to each reference in the test set; beam search is run for each of the $n$ most probable sentence codes to produce $n$ paraphrases per reference.
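As a concrete illustration of steps 1-2, the sketch below clusters toy 2-D sentence vectors (standing in for the 1024-dimensional LASER or TreeLSTM encodings) with a minimal $k$-means and prefixes each target sentence with its cluster pseudo-token. The sentences, vectors, `kmeans` helper, and $k = 2$ are invented for the example; the paper uses $k = 256$ and full sentence encoders:

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Minimal k-means; returns a cluster id for each input vector."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    assign = [0] * len(vectors)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, v in enumerate(vectors):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(v, centroids[c])))
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [vectors[i] for i in range(len(vectors)) if assign[i] == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return assign

# Toy 2-D stand-ins for the sentence vectors of four target sentences.
targets = ["They knew it was dangerous .", "They had chickens , too .",
           "That 's the problem .", "It was risky , they knew ."]
vectors = [(0.9, 0.1), (0.1, 0.8), (0.2, 0.9), (0.8, 0.2)]

codes = kmeans(vectors, k=2)
# Step 2: prefix each target sentence with its cluster pseudo-token.
augmented = [f"<cl_{c}> {sent}" for c, sent in zip(codes, targets)]
for line in augmented:
    print(line)
```

At training time these augmented targets teach the model to associate each pseudo-token with a sentence "type"; at test time, decoding under different pseudo-tokens yields differently structured paraphrases.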
As in Shu et al. (2019), we test two different types of diversity: semantic, using LASER sentential embeddings (Artetxe and Schwenk, 2019), and syntactic, using a TreeLSTM encoder (Tai et al., 2015). Both methods encode each sentence as a vector, and the vectors are clustered using $k$-means into 256 clusters (full details in App. C).

**Semantic:** We use pretrained LASER sentential embeddings (Artetxe and Schwenk, 2019) to encode sentences into 1024-dimensional vectors.

**Syntactic:** As in Shu et al. (2019), we encode constituency trees into hidden vectors using a TreeLSTM-based recursive autoencoder, with the difference that we use $k$-means clustering to make the method more comparable to the above, and we encode syntactic information only.

# 3.2 Output-guided constrained paraphrases

Diversity is good, but even a highly diverse set of references may not necessarily be in the same space as the MT outputs. We attempt to achieve high coverage of the system outputs by using a weak signal from those outputs. The signal we use is unrewarded $n$-grams from the best systems, which are $n$-grams in system outputs absent from the original reference. We identify them as follows. For each sentence in a test set, we find all $n$-grams that are (a) not in the reference but (b) present in at least $75\%$ of the system outputs, (c) limited to the top half of systems in the human system-level evaluation (Barrault et al., 2019). Then, for each such $n$-gram, we generate one paraphrase of the reference using constrained decoding (Post and Vilar, 2018), with that $n$-gram as a constraint. This gives a variable-sized set of paraphrased references for each sentence. In order to limit overfitting to the best systems, we use a cross-validation framework, in which we randomly split the submitted systems into two groups, the first used to compute the $n$-gram constraints and the augmented references, and the second half for evaluation.
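The selection of unrewarded $n$-grams (steps (a)-(c)) can be sketched as follows. The reference and system outputs are invented toy data, and the function name is ours; the real pipeline additionally restricts the outputs to the top half of systems and passes each selected $n$-gram to constrained decoding:

```python
def ngrams(tokens, n):
    """All n-grams of a token list, as a set of tuples."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def unrewarded_ngrams(reference, top_outputs, n=2, threshold=0.75):
    """N-grams absent from the reference but present in at least
    `threshold` of the (top) systems' outputs for this sentence."""
    ref_ngrams = ngrams(reference.split(), n)
    counts = {}
    for out in top_outputs:
        for g in ngrams(out.split(), n):
            counts[g] = counts.get(g, 0) + 1
    need = threshold * len(top_outputs)
    return {g for g, c in counts.items() if c >= need and g not in ref_ngrams}

ref = "This did not bother anybody ."
outputs = ["This didn't bother anybody .",
           "This didn't bother anyone .",
           "That didn't bother anybody .",
           "This did not bother anyone ."]
# Bigrams shared by at least 3 of the 4 outputs but missing from the reference.
print(unrewarded_ngrams(ref, outputs, n=2))
```

Each returned $n$-gram would then be fed as a single decoding constraint, producing one targeted paraphrase of the reference per constraint.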
We repeat this ten times and report the average correlation across the splits.

# 4 Experiments

Our goal is to assess whether we can generate paraphrases that are representative of the translation space and which, when used with BLEU, improve its utility as a metric. We therefore carry out experiments to (i) evaluate the adequacy and diversity of our paraphrases (Sec. 5.2) and (ii) compare the usefulness of all methods in improving BLEU's correlation with human judgments of MT quality (Sec. 4.1). BLEU is a corpus-level metric, and our primary evaluation is therefore its system-level correlation. However, it is often also used at the segment level (with smoothing to avoid zero counts). It stands to reason that multiple references would be more important at the segment level, so we also look into the effects of adding paraphrase references for SENTBLEU too.

# 4.1 Metric evaluation

For each set of extra references, we produce multi-reference BLEU and SENTBLEU metrics, which we use to score all into-English system outputs from the WMT19 news task. We evaluate the scores as in the metrics task (Ma et al., 2019), by calculating the correlation with manual direct assessments (DA) of MT quality (Graham et al., 2013). System-level scores are evaluated using Pearson's $r$, with the statistical significance of improvements (against single-reference BLEU) assessed using the Williams test (Williams, 1959). Segment-level correlations are calculated using Kendall's $\tau$ (with significance against single-reference SENTBLEU assessed by bootstrap resampling) on the DA assessments transformed into relative rankings.

# 4.2 Baseline and contrastive systems

Our true baselines are case-sensitive corpus BLEU and SENTBLEU, both calculated with sacreBLEU (Post, 2018) using the standard BLEU formula. Though likely familiar to the reader, we review it here.
BLEU is computed as the geometric mean of the modified $n$-gram precisions $p_n$ ($n = 1..4$), multiplied by a brevity penalty (BP) that penalizes overly short translations and thereby works to balance precision with recall:

$$
\mathrm{BLEU} = \mathrm{BP} \cdot \exp\left(\sum_{n=1}^{N} w_n \log p_n\right) \tag{1}
$$

$$
\mathrm{BP} = \begin{cases} 1 & \text{if } c > r \\ e^{1 - r/c} & \text{if } c \leq r \end{cases} \tag{2}
$$

$$
p_n = \frac{\sum_{h \in H} \sum_{\mathrm{ngram} \in h} \#_{\mathrm{clip}}(\mathrm{ngram})}{\sum_{h' \in H} \sum_{\mathrm{ngram}' \in h'} \#(\mathrm{ngram}')}, \tag{3}
$$

where $c$ and $r$ are the lengths of the hypothesis and reference sets respectively, $H$ is the set of hypothesis translations, $\#(\mathrm{ngram})$ is the number of times ngram appears in the hypothesis, and $\#_{\mathrm{clip}}(\mathrm{ngram})$ is the same count clipped to the maximum number of times the $n$-gram appears in any one reference.

By definition, BLEU is a corpus-level metric, since the statistics above are computed across sentences over an entire test set. The sentence-level variant requires a smoothing strategy to counteract the effect of zero $n$-gram precisions, which are more probable with shorter texts. We use exponential smoothing. Both baselines use the single provided reference only. We also compare against several contrastive paraphrasing approaches: (i) BEAM, which adds to the provided reference the $n$-best hypotheses in the beam of a baseline paraphraser, and (ii) SAMPLED, which samples from the top $80\%$ of the probability mass at each time step (Edunov et al., 2018). For the sentence encoding methods, we also include (iii) RANDOM,
+ +As a topline, we compare against manually paraphrased references (HUMAN), which we produce for a subset of 500 sentences from the de-ent test set. Two native English speakers together produced five paraphrases per reference (alternately two or three paraphrases). They were instructed to craft paraphrases that were maximally different (lexically and syntactically) from both the reference and the other paraphrases (to which they had access), without altering the original meaning. + +# 4.3 Paraphrase model training + +We train our paraphrasers using data from Parabank 2 (Hu et al., 2019b), containing $\approx 20\mathrm{M}$ sentences with up to 5 paraphrases each, of which we use the first paraphrase only. We preprocess by removing duplicate sentences and those longer than 100 words and then segment into subwords using SentencePiece (Kudo and Richardson, 2018) (unigram model (Kudo, 2018) of size $16\mathrm{k}$ ). The data splits are created by randomly shuffling the data and reserving $3\mathrm{k}$ pairs each for dev and test. For syntactic sentence encoding methods, we use the Berkeley Parser (Petrov et al., 2006) (internal tokenisation and prioritizing accuracy) and prune trees to a depth of 4 for $\approx 6\mathrm{M}$ distinct trees. $^4$ + +Paraphrase models are Transformer base models (Vaswani et al., 2017) (Cf. App. B for details). All models are trained using the Marian NMT toolkit (Junczys-Dowmunt et al., 2018), except for SAMPLED and the constraint approach, for which we use the Sockeye toolkit (Hieber et al., 2018), since Marian does not support these features. + +For baseline models, we produce $n$ additional references by taking the $n$ -best in the beam (using a beam size of 20, which is the maximum number of additional references we test). For models using cluster codes, paraphrases are produced by selecting the $n$ -best cluster codes at the first decoding step and then decoding each of these hypotheses using separate beam searches (of size 6). 
# 5 Paraphrase Adequacy and Diversity

# 5.1 Adequacy

To ensure our automatically produced paraphrases are of sufficient quality, we first assess their adequacy (i.e., faithfulness to the original meaning). We determine adequacy by manually evaluating paraphrases of the first 100 sentences of the de-en test set. We compare a subset of the automatic methods (BEAM, SAMPLED, LASER, TREELSTM) as well as HUMAN. Five annotators (2 native and 3 fluent English speakers) rated the paraphrases' adequacy using DA, indicating how well (0-100) the official reference's meaning is preserved by its paraphrases. 25 judgments were collected per sentence (sampling from each system's top 5 paraphrases). System-level scores are produced by averaging across all annotations.

The results and examples of some of the paraphrased references are given in Tab. 1 (more examples are given in App. G). Whilst the task is inherently subjective, we see a clear preference for human paraphrases, providing a reference point for interpreting the scores. The automatic paraphrase systems are not far behind, and the scores are further corroborated by the lowest score being assigned to the sampled output, which we expect to be less faithful to the reference meaning.

| Method | DA | "What provoked Lindsay Lohan to such very strange actions is currently completely unclear." | "Now they have come to an agreement." |
|---|---|---|---|
| BEAM | 91.7 | What caused Lindsay Lohan to do such strange things is not clear at the moment.<br>What provoked Lindsay Lohan's strange actions is not clear at the moment.<br>What has provoked Lindsay Lohan's strange actions is not clear at the moment. | Now they've made a deal.<br>Now they've reached a deal.<br>Now they made a deal. |
| SAMPLED | 85.0 | What prompted Lindsay Lohan's most extraordinary actions?<br>What made Lindsay Lohan act so weird?<br>What inspired Lindsay Lohan to do such odd things? | And now they've agreed.<br>And now they've agreed.<br>They've reached an agreement. |
| LASER | 90.1 | What provoked Lindsay Lohan to act so strangely is not clear at the moment.<br>It's not clear what provoked Lindsay Lohan to act so strangely.<br>It's not clear what prompted Lindsay Lohan to act so strangely. | Now they've reached a deal.<br>Now they've agreed.<br>Now they've agreed |
| TREELSTM | 88.0 | What provoked Lindsay Lohan to do such a strange thing is not clear at the moment.<br>It is not clear at this time what provoked Lindsay Lohan to do such strange things.<br>The reason that Lindsay Lohan has been provoked by these very strange actions is not clear at the moment. | Now they made a deal.<br>Now they've made a deal.<br>They've already made a deal. |
| HUMAN | 95.2 | It is currently totally unclear what made Lindsay Lohan do such strange things.<br>The cause of Lindsay Lohan's strange actions is really not clear at the moment.<br>The reasons behind Lindsay Lohan's such bizarre acts are completely obscure for now. | They have now come to an agreement.<br>An agreement has now been made.<br>They have reached an agreement. |

Table 1: Direct assessment (DA) adequacy scores for the BEAM and SAMPLED baselines, the two diverse approaches, and human paraphrases on the 100-sentence de-en subset, together with each method's top 3 paraphrases for two references.

# 5.2 Diversity

We evaluate the diversity of paraphrased references using two diversity scores (DS):

$$
\mathrm{DS}_{\mathrm{x}} = \frac{1}{|Y|(|Y|-1)} \sum_{y \in Y} \sum_{y' \in Y, y' \neq y} \left(1 - \Delta_{\mathrm{x}}(y, y')\right),
$$

where $Y$ is the set of paraphrases of a sentence produced by a given system, and $\Delta_{\mathrm{x}}$ calculates the similarity of paraphrases $y$ and $y'$. We use two different similarity functions, $\Delta_{BOW}$ (lexical) and $\Delta_{tree}$ (syntactic), both of which give scores between 1 (identical) and 0 (maximally diverse).

$\mathbf{DS}_{BOW}$ measures the lexical overlap between the sets of words in two paraphrases: $\Delta_{BOW}(y, y')$ is the number of unique words in common between $y$ and $y'$, divided by their mean length.

$\mathbf{DS}_{tree}$ uses $\Delta_{tree}$, the average tree kernel similarity score between paraphrases. We compute tree kernels using the "subset tree" (SST) similarity function presented in Moschitti (2006, §2.2), with a decay value of $\lambda = 0.5$ and excluding leaves ($\sigma = 0$).
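As one concrete reading of $\mathrm{DS}_{BOW}$ (the function names are ours, and we take "mean length" to be the mean token count, which is an assumption):

```python
def delta_bow(y1, y2):
    """Lexical similarity: unique words in common / mean sentence length."""
    t1, t2 = y1.split(), y2.split()
    mean_len = (len(t1) + len(t2)) / 2
    return len(set(t1) & set(t2)) / mean_len

def ds_bow(paraphrases):
    """DS_BOW: 1 - delta_bow averaged over all ordered pairs y != y'."""
    Y = paraphrases
    total = count = 0
    for i in range(len(Y)):
        for j in range(len(Y)):
            if i != j:
                total += 1 - delta_bow(Y[i], Y[j])
                count += 1
    return total / count

print(ds_bow(["now they agreed .", "now they agreed ."]))      # 0.0: identical
print(ds_bow(["now they agreed .", "a deal was reached ."]))   # close to 1
```

$\mathrm{DS}_{tree}$ follows the same averaging scheme, with `delta_bow` replaced by the SST tree kernel over parse trees.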
| $n$ | Method | DS$_{BOW}$ | DS$_{tree}$ | BLEU |
|---|---|---|---|---|
| 0 | none | - | - | 29.8 |
| 5 | RANDOM | 0.10 | 0.01 | 34.8 |
| | BEAM | 0.22 | 0.30 | 37.0 |
| | LASER | 0.24 | 0.33 | 37.5 |
| | TREELSTM | 0.28 | 0.47 | 37.7 |
| | SAMPLED | 0.41 | 0.56 | 40.1 |
| 5* | SAMPLED | 0.40 | 0.55 | 47.0 |
| | Constraints | 0.19 | 0.30 | 56.5 |
| | HUMAN | 0.80 | 0.68 | 48.9 |
| 20 | RANDOM | 0.10 | 0.01 | 34.8 |
| | BEAM | 0.27 | 0.37 | 39.7 |
| | LASER | 0.31 | 0.45 | 41.3 |
| | TREELSTM | 0.32 | 0.53 | 41.0 |
| | SAMPLED | 0.51 | 0.65 | 47.3 |
| | Constraints | 0.21 | 0.28 | 46.4 |
| | MT submissions | 0.37 | 0.51 | - |
Table 2: Diversity scores (DS) of paraphrased references averaged over all into-English test sets, where $n$ is the number of paraphrases. The final row indicates diversity among MT outputs. * indicates results just for the 500-sentence de-en subset. The final column is the average BLEU score.

The results (Tab. 2) show that all methods other than RANDOM give more diversity than BEAM. Shu et al.'s cluster code method generates diverse paraphrases. As expected, random cluster codes are not helpful, producing mostly identical paraphrases differing only in the cluster code. Diversity increases for all methods as paraphrases are added. TREELSTM produces structurally more diverse paraphrases than LASER and has high lexical diversity too, despite its codes being entirely syntactic, suggesting that structural diversity leads to varied lexical choices. The most lexically and structurally diverse method (except for HUMAN) is in fact the strong baseline SAMPLED, which is likely due to the noise the method adds.

The increased diversity is generally reflected in an increase in the average BLEU score (final column of Tab. 2). These higher BLEU scores indicate that the additional paraphrases better cover the translation space of the MT outputs, but it remains to be seen whether this concerns the space of valid and/or invalid translations. In contrast, some of the diversity makes less of an impact on the BLEU score; the gap in syntactic diversity between LASER and TREELSTM (+20 references) is not reflected in a similar gap in BLEU score, indicating that this added diversity is not relevant to the evaluation of these specific MT outputs.

# 6 Metric Correlation Results

The correlation results for each of the metrics (both system- and segment-level) for different numbers of additional references (aggregated full results) are shown in Tab. 3a and Tab. 3b (for the de-en 500-sample subset).
+ +The best result of the diverse approaches is again achieved by the SAMPLED baseline. + +The constraint-based approach achieves good scores, comparable to SAMPLED, despite an anomalously poor score for one language pair (for kk-en, with a degradation of 0.097. This approach also had the highest BLEU scores, however, suggesting that the targeted paraphrasing approach here missed its mark. + +De-en 500-sentence subset The general pattern shows the same as the averages over all languages in Tab. 3a, with the more diverse methods (especially SAMPLED) resulting in the greatest gains. The human results also follow this pattern, resulting in the highest gains of all at the system level. Interestingly, the constrained system yields higher average BLEU scores than HUMAN (Tab. 2) yet a comparable system correlation gain, indicating it targets more of the invalid translation space. For this particular subset, the constraints-based approach helps slightly more at the segment level than the system level, even surpassing the human paraphrases in terms of relative gains, despite it having remarkably less diversity. + +# 7 Discussion + +Does diversity help? In situations where adding paraphrases helps (which is the case for a majority + +
| Approach | Method | System Ave. | System Min | System Max | Segment Ave. | Segment Min | Segment Max | de-en System | de-en Segment |
|---|---|---|---|---|---|---|---|---|---|
| Baselines (+5) | BEAM | 0.020 | -0.006 | 0.059 | 0.013 | -0.001 | 0.029 | 0.040 | 0.021 |
| | RANDOM | 0.017 | 0.000 | 0.046 | 0.007 | -0.002 | 0.017 | 0.031 | 0.017 |
| | SAMPLED | 0.024 | -0.002 | 0.067 | 0.017 | -0.004 | 0.044 | 0.044 | 0.043 |
| Diversity (+1) | LASER | 0.017 | -0.000 | 0.048 | 0.009 | -0.003 | 0.025 | 0.034 | 0.022 |
| | TREELSTM | 0.017 | -0.000 | 0.048 | 0.011 | -0.002 | 0.027 | 0.031 | 0.011 |
| Diversity (+5) | LASER | 0.020 | -0.004 | 0.056 | 0.011 | -0.002 | 0.033 | 0.040 | 0.022 |
| | TREELSTM | 0.020 | -0.004 | 0.057 | 0.013 | -0.004 | 0.030 | 0.044 | 0.008 |
| Output-specific (+1) | LASER | 0.012 | -0.006 | 0.041 | 0.006 | -0.001 | 0.016 | 0.032 | 0.015 |
| | TREELSTM | 0.014 | -0.007 | 0.041 | 0.007 | -0.005 | 0.016 | 0.039 | 0.011 |
| Constraints | 4-grams | 0.025 | -0.002 | 0.061 | 0.002 | -0.097 | 0.072 | -0.027 | 0.035 |
| Human | Multiple | - | - | - | - | - | - | 0.039 | 0.037 |
| WMT-19 best | | 0.079 | 0.010 | 0.194 | 0.117 | 0.072 | 0.145 | - | - |
(a) Average, minimum, and maximum gains over all into-English test sets
(b) 500-sample subset

Table 3: Absolute gains in correlation (with respect to the true BLEU and SENTBLEU baseline correlations). Significant gains (except for averages) are marked in bold ($p \leq 0.05$). Full results per language pair are provided in App. D. WMT-19 best refers to the best metric scores from the official shared task (the best metric can be different for each language pair).

![](images/330ac3f4db7d2972a3eae6058351d986350de5446b844325eb0f653ca14a491e.jpg)
Figure 1: Lexical diversity versus absolute correlation gain at the system level (top) and segment level (bottom) for a variety of paraphrase systems (+2, +5, +10 and +20 references).

of language directions), the diversity of those paraphrases tends to positively correlate with gains in metric performance for both BLEU and SENTBLEU. The adequacy of the paraphrases appears to be a less important factor, shown by the fact that the best automatic diverse method at both levels was the SAMPLED baseline, the most diverse but the least adequate. The comparison against human paraphrases on the de-en subsample suggests room for improvement in automated techniques, at least at the system level, where all automatic metrics are beaten by HUMAN paraphrases, which are both more diverse and more adequate.

However, diversity is not everything; although HUMAN has nearly twice the lexical diversity of SAMPLED, it improves BLEU only somewhat and harms sentence-level BLEU. On the other hand, the targeted constraints have relatively low diversity but higher correlation gains. Diversity itself does not necessarily result in coverage of the space occupied by good translation hypotheses.

**What effect do more references have?** Diversity increases the more paraphrases there are, and it is positively correlated with gains for most language directions.
However, improvements are slight, especially with respect to what we would hope to achieve (using human references results in much more diversity and also greater improvements). The relationship between the number of extra references and system-level correlations shown in Fig. 2 suggests that increasing the number of references results in gains, but for most test sets the initial paraphrase has the most impact and the subsequent ones lead to lesser gains or even occasional deteriorations. Similar results are seen at the segment level.

![](images/8b0e243c60350fdb3f8d13e0d2ce7c3d03976ac142d16d1650e54f8ed0e51cac.jpg)

Figure 2: TREELSTM system-level correlations (+0-20).

![](images/4436a801bde59a5776e2b7f953d7ab4b51c730c7c5561376d9eb0f4ed0baa354.jpg)

Figure 3: % improved and degraded (with respect to single-reference sentence-BLEU) for methods with +5 references.

Why are gains only slight? With respect to the SENTBLEU baseline, we calculate the percentage of comparisons for which the decision is improved (the baseline scored the worse translation higher than the better one, and the new paraphrase-augmented metric reversed this)$^{7}$ and for which the decision is degraded (the opposite reversal). The results (Fig. 3) show that although all the systems improve a fair number of comparisons (up to $9.6\%$), they degrade almost as many. So, while paraphrasing adds references that represent the space of valid translations, references are also being added that match the space of invalid ones. Interestingly, the same pattern can be seen for human paraphrases, with $6.46\%$ of comparisons degraded vs. $8.30\%$ improved, suggesting that even when gold-standard paraphrases are produced, the way in which the references are used by SENTBLEU still rewards some invalid translations, though the balance is shifted slightly in favour of valid translations.
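This improved/degraded bookkeeping reduces to counting reversed pairwise decisions. In the sketch below, the toy `sent_bleu` (smoothed, clipped n-gram precision against the union of references) merely stands in for sentenceBLEU, and all function names and data are illustrative, not from the paper's codebase:

```python
from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sent_bleu(hyp, refs, max_n=4):
    """Toy multi-reference sentence score: clipped n-gram precision
    against the max of reference counts, with add-one smoothing."""
    h = hyp.split()
    precisions = []
    for n in range(1, max_n + 1):
        hc = ngrams(h, n)
        best = Counter()  # for each n-gram, best count over all references
        for r in refs:
            for g, c in ngrams(r.split(), n).items():
                best[g] = max(best[g], c)
        match = sum(min(c, best[g]) for g, c in hc.items())
        total = max(sum(hc.values()), 1)
        precisions.append((match + 1) / (total + 1))
    return exp(sum(log(p) for p in precisions) / max_n)

def decision_flips(pairs, extra_refs):
    """pairs: (better_hyp, worse_hyp, reference) triples from human judgments.
    Count comparisons the single-reference score got wrong but the
    paraphrase-augmented score fixes ('improved'), and vice versa."""
    improved = degraded = 0
    for better, worse, ref in pairs:
        base_ok = sent_bleu(better, [ref]) > sent_bleu(worse, [ref])
        refs = [ref] + extra_refs.get(ref, [])
        aug_ok = sent_bleu(better, refs) > sent_bleu(worse, refs)
        if not base_ok and aug_ok:
            improved += 1
        elif base_ok and not aug_ok:
            degraded += 1
    return improved, degraded
```

Counting flips this way makes the trade-off explicit: an added paraphrase can promote a valid translation in one comparison while rewarding an invalid one in another.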
This suggests that, at least at the segment level, BLEU is a balancing act between rewarding valid translations and avoiding rewarding invalid ones. Some of these effects may be smoothed out in system-level BLEU, but there is still likely to be an effect. It is worth noting that for the two language directions, fi-en and gu-en, for which diversity was negatively correlated with correlation gain (i.e., diversity could be harming performance), the most conservative approach (RANDOM) leads to some of the best results.

What is the effect on individual $n$-grams? We study which new $n$-grams are being matched by the additional references for the two language directions with the largest system-level correlation gain (ru-en and de-en). For each sentence, we collect and count the $n$-grams that were not in the original reference but were in the five paraphrased references of BEAM (newly matched $n$-grams), accumulated across all test set sentences. We also looked at the most frequent $n$-grams not found at all, even with the help of the paraphrases (i.e., the unrewarded $n$-grams from Sec. 3.2). The results are in Table 4.

Unsurprisingly, most 1-grams are common grammatical words (e.g., a, of, to, in, the) that may be present (or not) in any sentence; it is hard to draw any conclusions. For 4-grams, however, we see some interesting patterns. Present in both lists are acronym variants such as 'U . S .' for 'United States' and 'p . m .' for 'afternoon' or the 24-hour clock; their presence on both sides indicates success in sometimes grabbing this variant as well as failure to do so consistently. We also see phrasal variants such as ', according to' and ', ” he said'. These last points corroborate a point made by Freitag et al. (2020, §7.2) that references may omit these common variants.
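The per-sentence n-gram audit described above reduces to simple set arithmetic. The sketch below uses illustrative names (`hyps`, `paras`) and assumes whitespace-tokenised text; it is not the paper's actual analysis script:

```python
from collections import Counter

def ngram_set(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_audit(hyps, refs, paras, n=4):
    """For each sentence, split hypothesis n-grams absent from the original
    reference into 'newly matched' (found in some paraphrased reference)
    and 'missing' (found nowhere), accumulated over the test set."""
    newly, missing = Counter(), Counter()
    for hyp, ref, para_list in zip(hyps, refs, paras):
        hyp_ngrams = ngram_set(hyp.split(), n)
        ref_ngrams = ngram_set(ref.split(), n)
        para_ngrams = set().union(*(ngram_set(p.split(), n) for p in para_list))
        for g in hyp_ngrams - ref_ngrams:  # not rewarded by the original reference
            if g in para_ngrams:
                newly[g] += 1
            else:
                missing[g] += 1
    return newly, missing
```

Sorting the two counters by frequency yields exactly the kind of lists shown in Table 4.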
It also suggests a more focused method for generating paraphrases: identify a high-precision set of common variants, and ensure their presence in the set of references, via constrained decoding or other means (in the spirit of Meteor's synonym-based matching; Denkowski and Lavie, 2011). We note, however, that our paraphrasing methods do seem to contain complementary information, as they also tend to improve Meteor (see results in App. F).

# 8 Conclusion

We studied the feasibility of using diverse automatic paraphrasing of English references to improve BLEU. Although increased diversity of paraphrases does lead to increased gains in correlation with human judgments at both the system and segment levels, the gains are small and inconsistent. We can do a slightly better job by using
| $N$ | newly matched $n$-grams | missing $n$-grams |
| --- | --- | --- |
| 1 | a (494) of (480) , (442) to (370) in (364) The (315) the (273) is (204) for (196) has (196) on (193) was (179) have (171) that (166) be (155) at (145) been (140) with (138) and (134) | to (921) in (921) on (870) is (780) a (786) for (568) The (556) with (509) it (508) has (505) are (482) by (480) was (478) have (449) - (443) at (437) as (426) which (386) |
| 4 | U . S . (63) the U . S (39) , as well as (19) p . m . (15) for the first time (13) in accordance with the (12) the United States , (11) in the United States (10) a member of the (10) of the United States (9) The U . S (9) . m . on (9) , in order to (9) the United States and (8) , of course , (8) . S . Navy (8) . m . , (8) the Chinese Academy of (8) Chinese Academy of Engineering (8) the renaming of the (7) | U . S . (136) , according to the (99) , ” he said (77) the U . S (55) of the United States (48) of the Ministry of (39) the end of the (38) , ” said the (37) same time , the (36) , such as the (36) as well as the (35) ( Xinhua ) - (34) and so on . (33) , he said . (32) the head of the (32) , the head of (31) , as well as (30) on the basis of (30) , and so on (29) |
Table 4: Most frequently newly matched and missing $n$-grams for the de-en and ru-en test sets for BEAM (+5).

cues from the system outputs themselves to produce paraphrases providing a helpful form of "targeted" diversity. The comparison with manually produced paraphrases shows that there is room for improvement, both in terms of how much diversity is achieved and how much BLEU can be improved. However, the lack of any improvement in some languages points to how hard it is to target this "right kind" of diversity a priori; this, together with the relatively limited gains overall (especially in comparison with the best WMT19 metrics), suggests an intrinsic limit to BLEU's capacity to handle multiple references.

# Acknowledgements

We would like to thank MT Marathon 2019, during which this work was initiated and which also provided us with initial computing resources. This work was supported by funding from the European Union's Horizon 2020 research and innovation programme under grant agreements No 825299 (GoURMET) and 825303, and by the UK Engineering and Physical Sciences Research Council (EPSRC) fellowship grant EP/S001271/1 (MTStretch).

# References

Joshua Albrecht and Rebecca Hwa. 2008. The role of pseudo references in MT evaluation. In Proceedings of the Third Workshop on Statistical Machine Translation, pages 187-190, Columbus, Ohio. Association for Computational Linguistics.

Marianna Apidianaki, Guillaume Wisniewski, Anne Cocos, and Chris Callison-Burch. 2018. Automated paraphrase lattice creation for HyTER machine translation evaluation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 480-485, New Orleans, Louisiana. Association for Computational Linguistics.

Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond.
Transactions of the Association for Computational Linguistics, 7:597-610. +Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72, Ann Arbor, Michigan. Association for Computational Linguistics. +Loic Barrault, Ondrej Bojar, Marta R. Costa-jussa, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Muller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1-61, Florence, Italy. Association for Computational Linguistics. +Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluating the role of Bleu in machine translation research. In 11th Conference of the European Chapter of the Association for Computational Linguistics, Trento, Italy. Association for Computational Linguistics. +Emilie Colin and Claire Gardent. 2018. Generating syntactic paraphrases. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 937-943, Brussels, Belgium. Association for Computational Linguistics. +Michael Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 85-91, Edinburgh, Scotland. Association for Computational Linguistics. +Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation. + +Markus Dreyer and Daniel Marcu. 2012. 
HyTER: Meaning-equivalent semantics for translation evaluation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 162-171, Montreal, Canada. Association for Computational Linguistics. +Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489-500, Brussels, Belgium. Association for Computational Linguistics. +Christian Federmann, Oussama Elachqar, and Chris Quirk. 2019. Multilingual whispers: Generating paraphrases with translation. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pages 17-26, Hong Kong, China. Association for Computational Linguistics. +Markus Freitag, David Grangier, and Isaac Caswell. 2020. BLEU might be Guilty but References are not Innocent. arXiv. +Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2013. Continuous measurement scales in human evaluation of machine translation. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 33-41, Sofia, Bulgaria. Association for Computational Linguistics. +Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2018. The sockeye neural machine translation toolkit at AMTA 2018. In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Papers), pages 200-207, Boston, MA. Association for Machine Translation in the Americas. +J. Edward Hu, Huda Khayrallah, Ryan Culkin, Patrick Xia, Tongfei Chen, Matt Post, and Benjamin Van Durme. 2019a. Improved lexically constrained decoding for translation and monolingual rewriting. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 839-850, Minneapolis, Minnesota. Association for Computational Linguistics. +J. Edward Hu, Abhinav Singh, Nils Holzenberger, Matt Post, and Benjamin Van Durme. 2019b. Largescale, diverse, paraphrastic bitexts via sampling and clustering. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 44-54, Hong Kong, China. Association for Computational Linguistics. +Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. + +In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875-1885, New Orleans, Louisiana. Association for Computational Linguistics. +Marcin Junczys-Dowmunt, Kenneth Heafield, Hieu Hoang, Roman Grundkiewicz, and Anthony Aue. 2018. Marian: Cost-effective high-quality neural machine translation in $\mathrm{C + + }$ . In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 129-135, Melbourne, Australia. Association for Computational Linguistics. +David Kauchak and Regina Barzilay. 2006. Paraphrasing for automatic evaluation. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 455-462, New York City, USA. Association for Computational Linguistics. +Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66-75, Melbourne, Australia. Association for Computational Linguistics. +Taku Kudo and John Richardson. 2018. 
Sentence-Piece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.

Qingsong Ma, Johnny Wei, Ondrej Bojar, and Yvette Graham. 2019. Results of the WMT19 metrics shared task: Segment-level and strong MT systems pose big challenges. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 62-90, Florence, Italy. Association for Computational Linguistics.

Nitin Madnani, Necip Fazil Ayan, Philip Resnik, and Bonnie Dorr. 2007. Using paraphrases for parameter tuning in statistical machine translation. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 120-127, Prague, Czech Republic. Association for Computational Linguistics.

Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2017. Paraphrasing revisited with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 881-893, Valencia, Spain. Association for Computational Linguistics.

Nitika Mathur, Timothy Baldwin, and Trevor Cohn. 2020. Tangled up in BLEU: Reevaluating the evaluation of automatic machine translation evaluation metrics. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4984-4997, Online. Association for Computational Linguistics.

Alessandro Moschitti. 2006. Making tree kernels practical for natural language learning. In 11th Conference of the European Chapter of the Association for Computational Linguistics, Trento, Italy. Association for Computational Linguistics.

Courtney Napoles, Chris Callison-Burch, and Matt Post. 2016. Sentential paraphrasing as black-box machine translation.
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 62-66, San Diego, California. Association for Computational Linguistics. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. +Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 433-440, Sydney, Australia. Association for Computational Linguistics. +Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics. +Matt Post and David Vilar. 2018. Fast lexically constrained decoding with dynamic beam allocation for neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1314-1324, New Orleans, Louisiana. Association for Computational Linguistics. +Aaditya Prakash, Sadid A. Hasan, Kathy Lee, Vivek Datla, Ashequl Qadir, Joey Liu, and Oladimeji Farri. 2016. Neural paraphrase generation with stacked residual LSTM networks. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2923-2934, Osaka, Japan. The COLING 2016 Organizing Committee. +Raphael Shu, Hideki Nakayama, and Kyunghyun Cho. 2019. Generating diverse translations with sentence codes. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1823-1827, Florence, Italy. Association for Computational Linguistics. + +Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1556-1566, Beijing, China. Association for Computational Linguistics. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc. +John Wieting and Kevin Gimpel. 2018. ParaNMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451–462, Melbourne, Australia. Association for Computational Linguistics. +Evan James Williams. 1959. Regression Analysis. Wiley, New York. +Qiongkai Xu, Juyan Zhang, Lizhen Qu, Lexing Xie, and Richard Nock. 2018. D-PAGE: diverse paraphrase generation. CoRR, abs/1808.04364. +Ryoma Yoshimura, Hiroki Shimanaka, Yukio Matsumura, Hayahide Yamagishi, and Mamoru Komachi. 2019. Filtering pseudo-references by paraphrasing for automatic evaluation of machine translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 521-525, Florence, Italy. Association for Computational Linguistics. +Liang Zhou, Chin-Yew Lin, and Eduard Hovy. 2006. Re-evaluating machine translation results with paraphrase support. 
In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 77-84, Sydney, Australia. Association for Computational Linguistics.

# A Number of distinct parse trees at different depths

Table 5 shows the number of distinct pruned trees at different depths. We choose a depth of 4 for the syntactic sentence encoding methods in our experiments.
| depth | no leaves | type/token | with leaves |
| --- | --- | --- | --- |
| 1 | 16 | 0% | 16 |
| 2 | 207,794 | 1.0% | 207,794 |
| 3 | 2,158,114 | 11.2% | 2,629,907 |
| 4 | 6,089,874 | 31.6% | 10,631,249 |
| 5 | 8,865,720 | 46.1% | 14,102,645 |
| | 13,054,272 | 68.1% | 17,362,448 |
Table 5: Number of distinct pruned trees at different depths, with and without leaves, in the parsed data.

# B Paraphraser training details

All paraphrase models are Transformer base models (Vaswani et al., 2017): 6 layers, 8 heads, word embedding dimension of 512, feedforward dimension of 2048. We set dropout to 0.1 and tie all embeddings to the output layer with a shared vocabulary size of 33,152. We use the same vocabulary (including the 256 cluster codes) for all models. We adopt Adam optimisation with a scheduled learning rate (initial $3 \times 10^{-4}$) and mini-batch size of 64. We train each model on 4 GTX Titan X GPUs with a gradient update delay of 2, and select the final model based on validation BLEU.

# C Sentence clustering training details

We set $k$ to 256 for $k$-means clustering. We train TREELSTM sentence encoders using Adagrad with a learning rate of 0.025, weight decay of $10^{-4}$ and batch size of 400 for a maximum of 20 iterations. We set the model size to 256 and limit the maximum number of child nodes to 10.

# D Full raw WMT19 results

Table 7 shows the raw correlations of each paraphrase-augmented BLEU metric on WMT19 (system-level results top and segment-level results bottom). These correspond to the raw scores used to calculate the gains of each method with respect to the true baseline (BLEU or sentenceBLEU) shown in the main results section in Table 3. We indicate the best system from WMT19 as a point of reference.

# E Raw results for the de-en 500-sentence subset
| Approach | Method | System | Segment |
| --- | --- | --- | --- |
| Baseline | (sentence)BLEU | 0.895 | 0.026 |
| Baselines (+5) | BEAM | 0.934 | 0.048 |
| | RANDOM | 0.926 | 0.043 |
| | SAMPLED | 0.939 | 0.069 |
| Diversity (+1) | LASER | 0.929 | 0.048 |
| | TREELSTM | 0.926 | 0.037 |
| Diversity (+5) | LASER | 0.935 | 0.049 |
| | TREELSTM | 0.939 | 0.034 |
| Constraints | 4-gram | 0.933 | 0.064 |
| Human | | 0.948 | 0.063 |
Table 6: Correlations on the 500-sentence subset.

# F Results with the Meteor metric

Although we focus on ways of improving BLEU using paraphrases in this article, as BLEU is the dominant metric, it is also interesting to look at how adding paraphrases could help similar metrics. We apply the same method to improving the Meteor metric (version 1.5) (Denkowski and Lavie, 2014), a metric which already integrates synonym support.

Summarised results (as gains with respect to the single-reference Meteor metric) are shown in Tab. 8 and raw results are shown in Tab. 9 for both system-level and segment-level correlations. We observe that the true baselines (Meteor and sentenceMeteor) are improved in both cases, possibly more so than BLEU and in different ways, showing that the information added by the paraphrases is complementary to the synonym support offered by Meteor.

# G Further examples of automatically paraphrased references

We provide additional examples of paraphrased references. As can be seen from Table 10, TREELSTM gives us more diverse sentences compared to LASER.
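The correlations reported throughout these appendices are standard statistics. As a self-contained sketch, Pearson's $r$ (used at the system level) and a plain Kendall $\tau$-a over all pairs can be computed as below; note that the WMT segment-level metric task uses a DA-based variant of Kendall's $\tau$, so this is an illustration rather than the official scoring script:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def kendall_tau(xs, ys):
    """Kendall's tau-a: (concordant - discordant) / number of pairs."""
    conc = disc = 0
    n = len(xs)
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (n * (n - 1) // 2)
```

Feeding metric scores as `xs` and human assessment scores as `ys` reproduces the kind of system- and segment-level numbers tabulated here.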
| Approach | Method | de-en (16) | fi-en (12) | gu-en (12) | kk-en (11) | lt-en (11) | ru-en (14) | zh-en (15) | Ave |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline | BLEU | 0.890 | 0.985 | 0.799 | 0.943 | 0.969 | 0.862 | 0.888 | 0.905 |
| Paraphrase baselines (+5) | BEAM | 0.928 | 0.984 | 0.793 | 0.961 | 0.986 | 0.921 | 0.900 | 0.925 |
| | RANDOM | 0.916 | 0.986 | 0.805 | 0.957 | 0.983 | 0.908 | 0.898 | 0.922 |
| | SAMPLED | 0.937 | 0.984 | 0.798 | 0.966 | 0.989 | 0.929 | 0.902 | 0.929 |
| Diversity (+1) | LASER | 0.919 | 0.987 | 0.799 | 0.957 | 0.981 | 0.909 | 0.904 | 0.922 |
| | TREELSTM | 0.921 | 0.985 | 0.800 | 0.958 | 0.982 | 0.910 | 0.901 | 0.922 |
| Diversity (+5) | LASER | 0.934 | 0.985 | 0.795 | 0.963 | 0.987 | 0.918 | 0.896 | 0.925 |
| | TREELSTM | 0.933 | 0.982 | 0.796 | 0.964 | 0.987 | 0.918 | 0.898 | 0.925 |
| Constraints | 4-grams | 0.922 | 0.983 | 0.809 | 0.963 | 0.989 | 0.924 | 0.921 | 0.930 |
| WMT-19 best | | 0.950** (YiSi-1SRL) | 0.995 (METEOR) | 0.993*** (YiSi-0) | 0.998*** (WMDO) | 0.989* (ESIM) | 0.979** (YiSi-1) | 0.988*** (ESIM) | 0.985 |
(a) Pearson correlations at the system level.
| Approach | Method | de-en (32000) | fi-en (23952) | gu-en (12192) | kk-en (11000) | lt-en (11000) | ru-en (28000) | zh-en (30000) | Ave |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline | sentenceBLEU | 0.055 | 0.228 | 0.175 | 0.368 | 0.251 | 0.114 | 0.317 | 0.215 |
| Paraphrase baselines (+5) | BEAM | 0.061 | 0.250 | 0.189 | 0.371 | 0.281 | 0.129 | 0.317 | 0.228 |
| | RANDOM | 0.056 | 0.240 | 0.184 | 0.374 | 0.269 | 0.122 | 0.315 | 0.223 |
| | SAMPLED | 0.073 | 0.251 | 0.192 | 0.374 | 0.295 | 0.127 | 0.313 | 0.232 |
| Diversity (+1) | LASER | 0.061 | 0.244 | 0.187 | 0.368 | 0.276 | 0.121 | 0.314 | 0.225 |
| | TREELSTM | 0.061 | 0.242 | 0.185 | 0.383 | 0.278 | 0.123 | 0.315 | 0.227 |
| Diversity (+5) | LASER | 0.062 | 0.245 | 0.187 | 0.372 | 0.284 | 0.123 | 0.315 | 0.227 |
| | TREELSTM | 0.065 | 0.247 | 0.195 | 0.376 | 0.281 | 0.119 | 0.314 | 0.228 |
| Constraints | 4-grams | 0.090 | 0.242 | 0.161 | 0.271 | 0.323 | 0.122 | 0.314 | 0.218 |
| WMT-19 best | | 0.199*** (YiSi-1SRL) | 0.346*** (YiSi-1) | 0.306*** (YiSi-1) | 0.442*** (YiSi-1SRL) | 0.380*** (YiSi-1SRL) | 0.22*** (YiSi-1SRL) | 0.431*** (YiSi-1SRL) | 0.333 |
(b) Kendall's $\tau$ at the segment level.

Table 7: WMT19 correlations of paraphrased BLEU for each method against human assessments (# judgments in brackets). Results that are significantly better than the sacreBLEU baseline (at least $p \leq 0.05$) are marked in bold.
| Approach | Method | System Ave. | System Min | System Max | Segment Ave. | Segment Min | Segment Max |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Baselines (+5) | BEAM | 0.012 | 0.002 | 0.036 | 0.016 | 0.007 | 0.027 |
| | RANDOM | 0.009 | 0.002 | 0.028 | 0.010 | 0.004 | 0.022 |
| | SAMPLED | 0.013 | 0.002 | 0.038 | 0.018 | 0.009 | 0.031 |
| Diversity (+1) | LASER | 0.009 | 0.002 | 0.025 | 0.011 | 0.005 | 0.017 |
| | TREELSTM | 0.009 | 0.001 | 0.025 | 0.011 | 0.004 | 0.019 |
| Diversity (+5) | LASER | 0.014 | 0.003 | 0.034 | 0.015 | 0.007 | 0.021 |
| | TREELSTM | 0.015 | 0.002 | 0.039 | 0.016 | 0.008 | 0.030 |
| Output-specific (+1) | LASER | 0.007 | 0.000 | 0.020 | 0.009 | 0.003 | 0.018 |
| | TREELSTM | 0.010 | 0.002 | 0.020 | 0.013 | 0.004 | 0.021 |
| Constraints | 4-grams | 0.004 | -0.050 | 0.027 | -0.002 | 0.043 | -0.084 |
Table 8: Absolute gains in correlation for paraphrased Meteor for WMT19 with respect to the Meteor baseline. Significant gains (except for averages) are marked in bold ($p \leq 0.05$).
| Approach | Method | de-en (16) | fi-en (12) | gu-en (12) | kk-en (11) | lt-en (11) | ru-en (14) | zh-en (15) | Ave |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline | METEOR | 0.909 | 0.993 | 0.883 | 0.969 | 0.972 | 0.825 | 0.941 | 0.927 |
| Paraphrase baselines (+5) | BEAM | 0.927 | 0.994 | 0.887 | 0.976 | 0.983 | 0.862 | 0.949 | 0.940 |
| | RANDOM | 0.920 | 0.994 | 0.889 | 0.974 | 0.981 | 0.853 | 0.945 | 0.937 |
| | SAMPLED | 0.925 | 0.995 | 0.891 | 0.978 | 0.982 | 0.864 | 0.945 | 0.940 |
| Diversity (+1) | LASER | 0.924 | 0.995 | 0.886 | 0.975 | 0.979 | 0.851 | 0.948 | 0.937 |
| | TREELSTM | 0.923 | 0.994 | 0.889 | 0.974 | 0.979 | 0.850 | 0.947 | 0.937 |
| Diversity (+5) | LASER | 0.932 | 0.995 | 0.890 | 0.978 | 0.983 | 0.860 | 0.950 | 0.941 |
| | TREELSTM | 0.930 | 0.995 | 0.894 | 0.977 | 0.983 | 0.864 | 0.950 | 0.942 |
| Constraints | 4-grams | 0.922 | 0.990 | 0.910 | 0.983 | 0.988 | 0.775 | 0.949 | 0.931 |
| WMT-19 best | | 0.950 (YiSi-1SRL) | 0.995 (METEOR) | 0.993 (YiSi-0) | 0.998 (WMDO) | 0.989 (ESIM) | 0.979 (YiSi-1) | 0.988 (ESIM) | 0.985 |
(a) Pearson correlations at the system level.
| Approach | Method | de-en (32000) | fi-en (23952) | gu-en (12192) | kk-en (11000) | lt-en (11000) | ru-en (28000) | zh-en (30000) | Ave |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline | sentenceMETEOR | 0.061 | 0.243 | 0.197 | 0.356 | 0.275 | 0.145 | 0.351 | 0.233 |
| Paraphrase baselines (+5) | BEAM | 0.081 | 0.257 | 0.219 | 0.383 | 0.285 | 0.152 | 0.360 | 0.248 |
| | RANDOM | 0.072 | 0.254 | 0.219 | 0.364 | 0.281 | 0.156 | 0.356 | 0.243 |
| | SAMPLED | 0.080 | 0.262 | 0.228 | 0.375 | 0.292 | 0.160 | 0.360 | 0.251 |
| Diversity (+1) | LASER | 0.079 | 0.258 | 0.209 | 0.370 | 0.283 | 0.150 | 0.359 | 0.244 |
| | TREELSTM | 0.074 | 0.255 | 0.210 | 0.374 | 0.284 | 0.149 | 0.357 | 0.243 |
| Diversity (+5) | LASER | 0.078 | 0.257 | 0.214 | 0.377 | 0.293 | 0.158 | 0.358 | 0.248 |
| | TREELSTM | 0.074 | 0.259 | 0.228 | 0.378 | 0.287 | 0.153 | 0.361 | 0.249 |
| Constraints | 4-grams | 0.098 | 0.237 | 0.193 | 0.272 | 0.318 | 0.145 | 0.351 | 0.230 |
| WMT-19 best | | 0.20 (YiSi-1SRL) | 0.35 (YiSi-1) | 0.31 (YiSi-1) | 0.44 (YiSi-1SRL) | 0.38 (YiSi-1SRL) | 0.22 (YiSi-1SRL) | 0.43 (YiSi-1SRL) | 0.333 |
(b) Kendall's $\tau$ at the segment level.

Table 9: WMT19 correlations of paraphrased METEOR for each method against human assessments (# judgments in brackets). Results that are significantly better than the METEOR baseline (at least $p \leq 0.05$) are marked in bold.
| | |
| --- | --- |
| Reference | Anyway the application phase for each of the 20 properties, which the municipality is offering at better conditions and prices - similar to the earlier local model, has now ended. |
| LASER | However, the application phase for each of the 20 properties offered by the municipality at better conditions and prices, similar to the previous local model, is now over. However, the application phase for each of the 20 properties offered by the municipality at better conditions and prices, similar to the previous local model, is now over. However, the application phase for each of the 20 properties offered by the municipality at better conditions and prices, similar to the previous local model, is now over. |
| TREELSTM | However, the application phase for each of the 20 properties offered by the municipality at better conditions and prices - similar to the previous local model - is now closed. In any case, the application phase for each of the 20 properties offered by the municipality at better conditions and prices - similar to the previous local model - has now come to an end. However, for each of the 20 properties that the municipality offers at better conditions and prices, similar to the previous local model, the application phase has now come to an end. |
| Reference | And there is a need for discussion. |
| LASER | And we need to discuss it. And we need to have a discussion. And that's what we need to discuss. |
| TREELSTM | And there's a need to discuss it. And we need to discuss it. It is necessary to discuss it. |
| Reference | Why are there increasingly conflicts, despite successful integration? |
| LASER | Why is there ever more conflict, despite successful integration? Why is there ever more conflict, despite successful integration? Why are there more and more conflicts, despite the success of integration? |
| TREELSTM | Why is there more and more conflict, despite successful integration? Why are conflicts growing despite successful integration? Why are there ever more conflicts, despite the success of integration? |
| Reference | Professor Doctor Aladin El-Mafaalani is standing before them. |
| LASER | In front of them is Professor Dr. Aladin El-Mafaalani. Professor Dr. Aladdin El-Mafaalani stands in front of them. Professor Dr. Aladdin El-Mafaalani stands in front of them. |
| TREELSTM | Professor Aladdin El-Mafaalani stands before them. In front of them is Professor Aladdin El-Mafaalani. There's Professor Aladdin El-Mafaalani in front of them. |
| Reference | However, the operation took some time. |
| LASER | But the surgery took a while. But the operation took a while. But the surgery took a while. |
| TREELSTM | However, the surgery lasted a while. But the surgery lasted a while. However, it took some time to operate. |
| Reference | According to the police, it appears the fire was monitored for a long time, but the embers were inadvertently left behind. |
| LASER | According to the police, the fire seems to have been under long surveillance, but the coals were inadvertently left behind. According to the police, the fire seems to have been under long surveillance, but the coals have been inadvertently left behind. According to the police, the fire seems to have been under long surveillance, but the coals were inadvertently left behind. |
| TREELSTM | Police say the fire seems to have been under observation for a long time, but the coals are accidentally left behind. The police seem to have been watching the fire for a long time, but the coals were accidentally left behind. According to the police, the fire seems to have been under observation for a long time, but the coals were accidentally left behind. |
| Reference | What is the situation on the island now? |
| LASER | How's the island now? What's happening on this island now? What's the status on the island these days? |
| TREELSTM | What's it like on the island? What's going on on the island? So what's the status on the island? |
+ +Table 10: Top three paraphrases for seven sentences. \ No newline at end of file diff --git a/astudyinimprovingbleureferencecoveragewithdiverseautomaticparaphrasing/images.zip b/astudyinimprovingbleureferencecoveragewithdiverseautomaticparaphrasing/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..dd8608c311a31b4c9fbfc2033d244c99af74cb66 --- /dev/null +++ b/astudyinimprovingbleureferencecoveragewithdiverseautomaticparaphrasing/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e4d5d456ffdd3b5e4cde78636038c62f5a4cb63fafee69eb9f10cf8af0e9c6a +size 1276341 diff --git a/astudyinimprovingbleureferencecoveragewithdiverseautomaticparaphrasing/layout.json b/astudyinimprovingbleureferencecoveragewithdiverseautomaticparaphrasing/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..1f5b259dc9b92ac4f06cf46e16c9f60563f1839f --- /dev/null +++ b/astudyinimprovingbleureferencecoveragewithdiverseautomaticparaphrasing/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:93af80df9397e1d89642fb47e519322571c548f958c319124123e0563c05ecd9 +size 417472 diff --git a/turkihtweetsabenchmarkdatasetforturkishtextcorrection/45476bbe-b529-4925-a8a1-753515b20536_content_list.json b/turkihtweetsabenchmarkdatasetforturkishtextcorrection/45476bbe-b529-4925-a8a1-753515b20536_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1d1fd677761af05612101da0250eb8e3a426a14c --- /dev/null +++ b/turkihtweetsabenchmarkdatasetforturkishtextcorrection/45476bbe-b529-4925-a8a1-753515b20536_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d7c0a45c14acec01b3b20b9afff3dbd6c0afb2466876d7bfc80ac95935f240ca +size 60935 diff --git a/turkihtweetsabenchmarkdatasetforturkishtextcorrection/45476bbe-b529-4925-a8a1-753515b20536_model.json 
# TurkishTweets: A Benchmark Dataset for Turkish Text Correction

Asiya Tuba Koksal$^{1}$, Ozge Bozal$^{1,2}$, Emre Yurekli$^{1}$, Gizem Gezici$^{1,3}$

$^{1}$ Huawei R&D Center, Istanbul, Turkey
$^{2}$ Bogazici University, Istanbul, Turkey
$^{3}$ Sabanci University, Istanbul, Turkey

{asiye.tuba.koksal, ozge.bozal, emre.yurekli, gizem.gezici}@huawei.com

# Abstract

TurkishTweets is a benchmark dataset for the task of correcting user misspellings, with the purpose of introducing the first public Turkish dataset in this area.
TurkishTweets provides correct/incorrect word annotations with a detailed misspelling category formulation based on real user data. We evaluated four state-of-the-art approaches on our dataset to present a preliminary analysis for the sake of reproducibility. The annotated dataset is publicly available at https://github.com/atubakoksal/annotated_tweets.

# 1 Introduction

The extensive use of social media platforms such as Twitter, Facebook, forums, and blogs has created a new form of written language, which is full of intentional and unintentional misspellings as well as newly emerged slang words and abbreviations. This new type of language poses significant challenges for various natural language processing (NLP) tasks, most of which require properly written textual content for analysis. Therefore, text normalization, i.e., transforming non-standard words into their standard forms, and spelling correction, i.e., correcting unintentional spelling errors, have become indispensable pre-processing stages. The pre-processing phase is known to boost model performance for various NLP applications, including but not limited to POS tagging, sentiment classification and search.

Although correcting misspelling errors is crucial for NLP applications, it is generally not straightforward, and it is especially challenging for morphologically rich languages. A single word has many different surface forms in highly agglutinative languages. Specifically for Turkish, suffixes must also comply with the vowels and the last letter of the word they attach to. This leads to many different variations of a single word, thereby increasing the possibility of misspelling errors. Previous approaches developed for English do not fit, and thus there is a need for resources tailored particularly to these languages. Currently, there is no publicly available dataset in this area that can be used for model evaluation with reproducible results.
Online platforms contain not only intentionally misspelled words but also unintentional spelling errors, both of which constitute out-of-vocabulary (OOV) words in the textual content. Intentional user misspellings are quite frequent, particularly in online media, and these errors vary greatly depending on the intention of use. Therefore, a more fine-grained analysis of the error types is required: categorizing the errors and including them in the evaluation dataset enables a proper model evaluation. In this way, one can assess whether a particular model provides generalizable results for the corresponding language.

Motivated by these observations, we introduce a new Turkish dataset by categorizing, annotating and correcting the distinct misspelling types in text. Moreover, we carry out a fine-grained evaluation of selected state-of-the-art approaches from the literature for reproducibility purposes. To the best of our knowledge, this is the first attempt to introduce a public dataset with a detailed misspelling category formulation, with the purpose of providing reproducible evaluation results for the existing approaches. We hope that the fine-grained analysis of selected models in this work serves as an exemplary usage of the dataset. The work most similar to ours is Eryiğit and Torunoglu-Selamet (2017), in which the authors define rules for correcting the misspelling errors present in social media content. The authors compare their proposed model with selected state-of-the-art approaches on their own dataset, which is not publicly available to researchers. Hence, we created a benchmark dataset by randomly selecting and annotating Turkish tweets from a public dataset of 20M tweets.

Our contributions in this work are as follows.
- We provide a real dataset in which the misspellings were made by real users,
- We propose a systematic formulation for the error categorization of OOV words in a real dataset,
- We present a fair evaluation of the selected models on the same benchmark dataset, for the sake of reproducibility.

The paper is structured as follows. In Section 2, we review related work. In Section 3, we provide the details about data preparation and analysis. In Section 4, we present the evaluation results. Finally, in Section 5 we conclude the paper.

# 2 Related work

Research on spelling correction of Turkish text dates back to the 1990s. However, each work has carried out its own evaluation data generation process, and none of these datasets are publicly available. Early spelling correction studies used synthetic datasets for evaluation. Oflazer (1996) used words collected from a Turkish corpus that were perturbed such that the words and their correct forms are 1, 2 or 3 edit distances apart. Büyük et al. (2019), Gupta (2020) and Büyük (2020) created synthetic datasets composed of misspelled words at 1 edit distance. Büyük (2020) also used a non-public dataset manually annotated by Torunoglu-Selamet et al. (2016) for a better comparison. There are also other works that used real datasets. Oflazer and Güzey (1994) evaluated their model on incorrect words in Turkish text which are mostly 1 edit distance apart. Torunoglu-Selamet et al. (2016) manually annotated words from social media text, excluding intentional mistakes such as words without vowels; they separated the task of text normalization from the correction of unintentionally misspelled words. Bölücü and Can (2019) used an open-source morphological analyzer to extract incorrect words from the BOUN corpus (Sak et al., 2008), which is composed of newspaper and website textual content.

With the rise of social media, a new text style has emerged: micro-blogging.
Those text sources have their own jargon, including intentional and unintentional misspellings. Torunoglu-Selamet and Eryigit (2014) focused on text normalization in Twitter. They manually aligned 1200 tweets, in which some of the tokens are in one-to-many alignment, and used this dataset for the evaluation of their proposed rule-based model. Çolakoglu et al. (2019) used the same dataset; in addition, they manually annotated another Twitter dataset for model evaluation. Nonetheless, the dataset introduced in Torunoglu-Selamet and Eryigit (2014) is not open to the research community; it can only be obtained upon request under restricted license constraints.

In this work, we propose a new benchmark dataset composed of real Turkish tweets with misspelling annotations for different types of OOV words.

# 3 Data preparation

We used a public dataset of 20M Turkish tweets written by real users to create the benchmark dataset. First, we applied some pre-processing steps, such as cleaning up the lines with meta-information like timestamps, URLs, usernames, etc., to obtain a one-sentence-per-line format. After that, we had 23M sentences. Then, out of these 23M sentences, 2000 sentences including at least one out-of-vocabulary word were randomly selected. To check whether a sentence contains an OOV word, we used TRmorph (Çöltekin, 2010), an open-source Turkish morphological analyzer. We first tokenized every sentence on each line using TRmorph's tokenizer and sent each token to TRmorph for morphological analysis. If TRmorph provided an analysis for a given token, the token was assumed to be a correct Turkish word, i.e., an in-vocabulary (IV) word for the rest of the paper; otherwise it was considered incorrect, referred to as OOV. In this way, we guaranteed at least one OOV word in each sentence while obeying one-to-one token alignment.
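The OOV filtering step described above can be sketched as follows; `tokenize` and `analyze` are stand-ins for TRmorph's tokenizer and morphological analyzer (which this sketch does not call), and the toy lexicon is purely illustrative:

```python
from typing import Callable, List

def find_oov_tokens(sentence: str,
                    tokenize: Callable[[str], List[str]],
                    analyze: Callable[[str], List[str]]) -> List[str]:
    """Tokens for which the analyzer yields no analysis are treated as OOV."""
    return [tok for tok in tokenize(sentence) if not analyze(tok)]

def keep_sentence(sentence: str, tokenize, analyze) -> bool:
    """A sentence is kept for annotation only if it has at least one OOV token."""
    return bool(find_oov_tokens(sentence, tokenize, analyze))

# Toy stand-ins for demonstration only (TRmorph would supply real analyses):
lexicon = {"ben", "de", "geliyorum"}
toy_tokenize = str.split
toy_analyze = lambda tok: ["analysis"] if tok in lexicon else []

print(find_oov_tokens("ben de geliyom", toy_tokenize, toy_analyze))  # -> ['geliyom']
```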
The data statistics and all the details about the error annotation and correction process are provided in Sections 3.1, 3.2 and 3.3.

# 3.1 Preprocessing

We first filtered the tweets appropriate for the annotation process. We have three main criteria for the appropriateness of a given tweet: i. being written in Turkish, ii. forming a full sentence, iii. including at least one misspelled word. There are many homonymic words in Turkish, the meaning of which can only be inferred when used in a full sentence. Similarly, some orthographic errors such as unintentional character mistakes can only be resolved in context, due to the word-sense disambiguation problem. Therefore, we only accepted full-sentence tweets in our dataset. Also, we removed the tweets which contain only hashtags or emojis from our analysis (no correction is necessary).

The use of non-canonical forms of punctuation, e.g., emojis or repeated punctuation marks, is quite common in tweets, and is in fact not an orthographic error. All punctuation and emojis were replaced with white-space in the selected tweets, except for these two: i. the apostrophe, since it is used to separate some suffixes from proper nouns and deleting it would be an orthographic error, and ii. the number sign (#), since this sign indicates hashtags in tweets and is necessary to differentiate an ordinary word from a hashtag word. We left the numbers as they appeared and annotated them with the IV tag, unless there was a misspelling caused by suffixes added to the numbers. All words were converted to lowercase (including formal abbreviations, foreign words and sentence-initial words), except for correctly spelled proper nouns. In words showing enthusiasm, repeated characters were left unchanged and considered to be intentional character mistakes. There were misspellings in the dataset regarding compound words, such that some words should have been typed separately, while others adjacently.
We added the "|" character to indicate a white-space for the token alignment where such errors occurred. For sample instances, please refer to "Separation Error" and "Adjacent Error" in Table 6 in Appendix A.2.

# 3.2 Data annotation & correction

For the categorization of OOV words, we were inspired by Aw et al. (2006) and Han et al. (2012), which proposed a well-defined distinction of English OOV words in terms of whether they need any normalization. We also consulted Han and Baldwin (2011); Beaufort et al. (2010); Pamay et al. (2015); Eryigit and Torunoglu-Selamet (2017) in grouping the error types of OOV words.

The annotation of the dataset was completed after examining the different error types present in Turkish tweets. We then referred to the authorized dictionary and the Turkish spelling rules dictated by the Turkish Language Institution (TDK) for the data correction. Three annotators carried out the annotation and correction process accordingly, and the final decisions on the error types were made by consensus.

The error types used for annotation are mutually exclusive and fully cover all kinds of errors in the dataset, i.e., no misspelled word in a tweet falls outside these types. There were both syntactic and semantic errors. We determined thirteen different subgroups covering orthographic spelling errors, intentional errors, non-lexical words derived from social media jargon, and slang words. Detailed explanations for each error type can be found in Appendix A.1.

The tokens were tagged with IV or OOV based on the TDK Turkish Dictionary. If a token was tagged as OOV, one of the error types shown in Table 1 was assigned to it as well. Furthermore, if a correction was necessary for the token, it was additionally tagged as ill-formed, otherwise well-formed.
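The "|" alignment convention can be resolved mechanically when the corrected text is needed with real white-space; the small sketch below (function name is ours, not from the paper) expands it while preserving the one-to-one token alignment:

```python
def expand_alignment(wrong_tokens, corrected_tokens):
    """Pair each wrong token with its correction, turning the '|' alignment
    marker back into a real space (e.g. 'ben|de' -> 'ben de')."""
    return [(w, c.replace("|", " "))
            for w, c in zip(wrong_tokens, corrected_tokens)]

# 'bende' is a Separation Error: it should have been written as two words.
print(expand_alignment(["bende", "geldim"], ["ben|de", "geldim"]))
# -> [('bende', 'ben de'), ('geldim', 'geldim')]
```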
Tokens were allowed to have multiple tags, and the data statistics given in Table 1 are based on the occurrences of the individual tags in the dataset. Several examples from the dataset corresponding to each error tag can be found in Table 6 in Appendix A.2.

# 3.3 Data statistics

The dataset consists of 2000 sentences and 16878 tokens. Each token has corresponding error tags, where the tokens and tags are aligned with each other. There exist 9713 unique tokens, 6488 of which are OOV tokens. The percentages of the different error types among the OOV tokens are given in Table 1. The most frequent error type is deasciification, while the least frequent one is phonetic substitution. Since the dataset consists of real user tweets, it also gives some hints about users' general misspelling tendencies in Turkish social media.

Among the 2000 sentences, $77\%$ contain more than one error and $59\%$ contain multiple error types.

# 4 Experiments

The performance of a text correction model can be evaluated with the following metrics: the correction rate over misspelled words and the non-corruption rate over already correct words. For this reason, we built a two-step pipeline for text correction:
i. OOV word detection and ii. word correction. In the first part, we aimed to detect the OOV words for the correction step, thus preventing unnecessary modifications in IV words. For this purpose, we compared the performance of two morphological analyzers on finding the OOV words in the dataset. As the second step, we compared the correction and non-corruption rates of several text correction models on the OOV words detected by the better performing analyzer from the previous step. In the experiments, we used TRmorph's morphological analyzer, the open-source Turkish NLP library Zemberek, and our own implementations for the rest of the models. These experiments were conducted on 9223 unique words which consist of ill-formed OOV tokens (see Table 1) and IV tokens from the dataset.

| Error Type | Form | P. (%) |
| --- | --- | --- |
| Deasciification | ill-formed | 44.94 |
| Accent | ill-formed | 11.22 |
| Proper Name | ill-formed | 9.20 |
| Intentional Char | ill-formed | 9.02 |
| Separation | ill-formed | 7.68 |
| Foreign Word | well-formed | 4.92 |
| Unintentional Char | ill-formed | 4.69 |
| Social Media Phrase | well-formed | 2.50 |
| Abbreviation | well-formed | 2.37 |
| Adjacent | ill-formed | 1.36 |
| Neologism | well-formed | 0.96 |
| Vowel | ill-formed | 0.63 |
| Phonetic Substitution | ill-formed | 0.52 |

Table 1: The percentage distribution of error types over OOV words in the dataset.

# 4.1 Morphological Analysis and OOV Detection

We compared TRmorph's and Zemberek's morphological analyzers in two respects: 1. what percentage of the words identified as OOV are true OOV words, and 2. what percentage of the true OOV words were identified. The corresponding precision, recall and F1 scores can be found in Table 2 (for the same analysis of the IV words, please see Table 5 in Appendix A.2).
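This comparison reduces to set arithmetic over the detected and gold OOV tokens; a minimal sketch (function name is ours, the example tokens are illustrative):

```python
def detection_scores(predicted_oov, true_oov):
    """Precision, recall and F1 of OOV detection: the tokens an analyzer
    flags as OOV (predicted) against the gold OOV annotations (true)."""
    predicted_oov, true_oov = set(predicted_oov), set(true_oov)
    tp = len(predicted_oov & true_oov)
    precision = tp / len(predicted_oov) if predicted_oov else 0.0
    recall = tp / len(true_oov) if true_oov else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f = detection_scores({"geliyom", "kslem", "kalem"}, {"geliyom", "kslem"})
print(round(p, 3), round(r, 3), round(f, 3))  # -> 0.667 1.0 0.8
```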
| | Precision | Recall | F1-score |
| --- | --- | --- | --- |
| TRmorph | 0.977 | 0.822 | 0.893 |
| Zemberek | 0.985 | 0.748 | 0.850 |
Table 2: Out-of-vocabulary word detection results of the morphological analyzers.

# 4.2 Correction of OOVs

In this section, we briefly describe the models used in the experiments. For the preliminary results, we selected the frequently used models in this area whose source code is publicly available, except for the model described in Section 4.2.3. In Table 3, the models were evaluated on the OOV words detected by TRmorph as described in Section 4.1, since its F1 score is better than Zemberek's (see Table 2). Note that the resulting OOV word dataset is noisy in the sense that it contains some IV words which were misidentified as OOV by TRmorph.

# 4.2.1 Zemberek

In Table 3, we evaluated Zemberek's normalization module, comprising a spell checker (first model) and a noisy text normalizer (second model). The spell checker module suggests multiple words for a given OOV word; in this experiment, the highest-ranked suggestion was accepted as the correction of the given OOV word. Before testing these models, each token was checked for repetitive characters: if a character was repeated consecutively more than twice, it was reduced to a single occurrence.

# 4.2.2 Edit Distance

We implemented an edit distance algorithm which returns the most probable candidate word within a maximum edit distance of 2. The model uses the METU Turkish Corpus (Say et al., 2002) to retrieve the possible candidates and chooses the word with the minimum edit distance and the highest frequency in the corpus.

# 4.2.3 Rule-based Pipeline

In this part, we implemented a rule-based model similar to the cascaded model proposed in Torunoglu-Selamet and Eryiğit (2014). We note that our model is not an exact replication of the cascaded model.

The steps are defined as follows.

1. Check if a given word is OOV or IV with TRmorph's morphological analyzer.
If it is IV, the process returns the word and terminates; otherwise it goes to the next step.
2. Remove recurrent characters (e.g. geeeeel $\rightarrow$ gel), then try step 1.
3. Deasciify the token (e.g. canim $\rightarrow$ canım), then try step 1.
4. Deaccent the token (e.g. gelmiyom $\rightarrow$ gelmiyorum), then try step 1.
5. Suggest a possible corrected form of the word using a look-up table.

For steps 2 and 4, we used regular expressions to correct the misspelled words, as illustrated above. For step 3, we used an open-source deasciifier module, which translates ASCII characters into their Turkish counterparts (e.g., o → ö, s → ş). For the final step, we prepared a look-up table consisting of 1.9M IV tokens in the METU (Say et al., 2002) and BOUN (Sak et al., 2008) corpora, using TRmorph's analyzer. We removed all vowels from each token (if a token begins with a vowel, we kept it) and created consonant skeleton & possible full form pairs. One sample entry of the look-up table, for the skeleton glyrm: geliyorum, gülüyorum. Each misspelled word was searched in this look-up table to retrieve its consonant skeleton & possible full forms pair. Then, using the vowels and their sequence in the misspelled word, the word's possible correct form was returned from its possible full forms.

# 4.3 Discussion

A successful text corrector model is considered to have a high correction rate on the misspelled words (OOV words) and a high non-corruption rate on the already correct words (IV words). The dataset we used for the comparative evaluation of the selected models is noisy, as explained in Section 4.2. Therefore, we compared the models in terms of their correction rate on actual OOV words (C in Table 3), non-corruption rate on IV words (NC in Table 3), and the overall accuracy (Acc in Table 3) calculated on all of the tokens (IV and OOV).
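As a concrete illustration of the Section 4.2.3 cascade, the sketch below substitutes a toy lexicon for TRmorph's analyzer and a naive single-substitution deasciifier for the real context-sensitive module; the deaccenting step and the vowel-sequence matching of step 5 are omitted for brevity:

```python
import re

TR = {"c": "ç", "g": "ğ", "i": "ı", "o": "ö", "s": "ş", "u": "ü"}
VOWELS = "aeıioöuü"

def is_iv(word, lexicon):
    """Stand-in for TRmorph: a word is IV iff the toy lexicon accepts it."""
    return word in lexicon

def squeeze_repeats(word):
    """Step 2: collapse runs of a repeated character (geeeeel -> gel)."""
    return re.sub(r"(.)\1+", r"\1", word)

def deasciify_candidates(word):
    """Step 3 (naive): try the Turkish variant of one ASCII letter at a time.
    The real deasciifier module is context-sensitive."""
    for i, ch in enumerate(word):
        if ch in TR:
            yield word[:i] + TR[ch] + word[i + 1:]

def correct(word, lexicon, lookup):
    if is_iv(word, lexicon):                       # step 1
        return word
    word = squeeze_repeats(word)                   # step 2
    if is_iv(word, lexicon):
        return word
    for cand in deasciify_candidates(word):        # step 3
        if is_iv(cand, lexicon):
            return cand
    # step 4 (deaccenting, e.g. gelmiyom -> gelmiyorum) omitted from sketch.
    skeleton = re.sub(rf"(?!^)[{VOWELS}]", "", word)   # step 5: look-up table
    return lookup.get(skeleton, word)

lexicon = {"gel", "canım", "geliyorum"}   # toy IV list
lookup = {"glyrm": "geliyorum"}           # consonant skeleton -> full form
print(correct("geeeeel", lexicon, lookup))  # -> gel
print(correct("canim", lexicon, lookup))    # -> canım
print(correct("glyrm", lexicon, lookup))    # -> geliyorum
```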
To better identify the reasons behind the differences in the performance results, Table 4 reports a detailed analysis of each model on each ill-formed error type listed in Table 1.
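For comparison, the edit-distance baseline of Section 4.2.2 reduces to a few lines: among vocabulary words within two edits, pick the closest, breaking ties by corpus frequency (the toy frequency table below stands in for METU corpus counts):

```python
def edit_distance(a, b):
    """Levenshtein distance with insertions, deletions and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def ed_correct(word, freq, max_dist=2):
    """Choose the candidate with minimum edit distance (<= max_dist),
    breaking ties by the highest corpus frequency."""
    best_key, best_word = None, word
    for cand, count in freq.items():
        d = edit_distance(word, cand)
        if d <= max_dist:
            key = (d, -count)
            if best_key is None or key < best_key:
                best_key, best_word = key, cand
    return best_word

toy_freq = {"kalem": 120, "kale": 40, "kales": 3}   # word -> corpus count
print(ed_correct("kslem", toy_freq))  # -> kalem
```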
| Model | C | NC | Acc |
| --- | --- | --- | --- |
| Zemberek-Spellchecker (ZS) | 0.409 | 0.741 | 0.415 |
| Zemberek-Normalizer (ZN) | 0.714 | 0.638 | 0.713 |
| Edit distance (ED) | 0.373 | 0.476 | 0.375 |
| Rule-based (RB) | 0.602 | 0.724 | 0.605 |
Table 3: Correction (C), non-corruption (NC) and accuracy (Acc) rates of several models on words which are identified as OOV by TRmorph in Section 4.1.
| | ZS | ZN | ED | RB |
| --- | --- | --- | --- | --- |
| Accent | 0.295 | 0.608 | 0.226 | 0.399 |
| Adjacent | 0.0 | 0.143 | 0.531 | 0.0 |
| Deascii | 0.407 | 0.871 | 0.433 | 0.858 |
| Int. char. | 0.667 | 0.683 | 0.448 | 0.361 |
| Phon. sub. | 0.435 | 0.391 | 0.391 | 0.0 |
| Proper | 0.406 | 0.009 | 0.0 | 0.004 |
| Separation | 0.0 | 0.479 | 0.0 | 0.0 |
| Unint. char. | 0.534 | 0.507 | 0.507 | 0.137 |
| Vowel | 0.045 | 0.636 | 0.091 | 0.182 |

Table 4: Accuracy results of the models from Table 3 on each ill-formed error type.

# 5 Conclusion

We introduced a benchmark dataset for Turkish text correction by annotating real users' misspellings in Turkish tweets. We categorized the error types and corrected the errors accordingly. The dataset can be used in various NLP applications, especially for social media platforms. The error category formulation can also be used for other tasks, such as query correction in search, which highly affects search performance.

We hope that our work will be a valuable contribution to the Turkish research community, being the first work to propose a benchmark dataset with a fine-grained and fair comparative evaluation of some of the state-of-the-art models. As future work, we plan to analyze existing models' deficiencies in detail and establish new models that perform better on our benchmark dataset for distinct error types.

# References

AiTi Aw, Min Zhang, Juan Xiao, and Jian Su. 2006. A phrase-based statistical model for SMS text normalization. In Proceedings of the COLING/ACL on Main Conference Poster Sessions, pages 33-40. Association for Computational Linguistics.
Richard Beaufort, Sophie Roekhaut, Louise-Amélie Cougnon, and Cédrick Fairon. 2010. A hybrid rule/model-based finite-state framework for normalizing SMS messages. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 770-779. Association for Computational Linguistics.
Osman Büyük. 2020. Context-dependent sequence-to-sequence Turkish spelling correction. ACM Trans. Asian Low-Resour. Lang. Inf. Process., 19(4).
Osman Büyük, Mustafa Erden, and Levent M. Arslan. 2019. Context influence on sequence to sequence Turkish spelling correction. In 2019 27th Signal Processing and Communications Applications Conference (SIU), pages 1-4. IEEE.
N. Bölücü and B. Can. 2019. Context based automatic spelling correction for Turkish.
In 2019 Scientific Meeting on Electrical-Electronics & Biomedical Engineering and Computer Science (EBBT), pages 1-4.
Talha Çolakoglu, Umut Sulubacak, and Ahmet Cüneyd Tantug. 2019. Normalizing non-canonical Turkish texts using machine translation approaches. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, Florence, Italy. Association for Computational Linguistics.
Gulsen Eryigit and Dilara Torunoglu-Selamet. 2017. Social media text normalization for Turkish. Natural Language Engineering, 23(6):835-875.
Prabhakar Gupta. 2020. sc. In 2020 IEEE 14th International Conference on Semantic Computing (ICSC), pages 116-122. IEEE.
Bo Han and Timothy Baldwin. 2011. Lexical normalisation of short text messages: Makn sens a #twitter. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pages 368-378. Association for Computational Linguistics.
Bo Han, Paul Cook, and Timothy Baldwin. 2012. Automatically constructing a normalisation dictionary for microblogs. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 421-432. Association for Computational Linguistics.
Kemal Oflazer. 1996. Error-tolerant finite-state recognition with applications to morphological analysis and spelling correction. Computational Linguistics, 22(1):73-89.
Kemal Oflazer and Cemaleddin Güzey. 1994. Spelling correction in agglutinative languages. In Proceedings of the Fourth Conference on Applied Natural Language Processing, pages 194-195. Association for Computational Linguistics.
Tugba Pamay, Umut Sulubacak, Dilara Torunoglu-Selamet, and Gülsen Eryigit. 2015. The annotation process of the ITU web treebank. In Proceedings of the 9th Linguistic Annotation Workshop, pages 95-101.
Hasim Sak, Tunga Güngör, and Murat Saraçlar. 2008.
Turkish language resources: Morphological parser, morphological disambiguator and web corpus. In International Conference on Natural Language Processing, pages 417-427. Springer.
Bilge Say, Deniz Zeyrek, Kemal Oflazer, and Umut Özge. 2002. Development of a corpus and a treebank for present-day written Turkish. In Proceedings of the Eleventh International Conference of Turkish Linguistics, pages 183-192. Eastern Mediterranean University.
Dilara Torunoglu-Selamet, Eren Bekar, Tugay Ilbay, and Gülsen Eryigit. 2016. Exploring spelling correction approaches for Turkish. In Proceedings of the 1st International Conference on Turkic Computational Linguistics at CICLing, Konya, pages 7-11.
Dilara Torunoglu-Selamet and Gülsen Eryiğit. 2014. A cascaded approach for social media text normalization of Turkish. In Proceedings of the 5th Workshop on Language Analysis for Social Media (LASM), pages 62-70.
Çağrı Çöltekin. 2010. A freely available morphological analyzer for Turkish. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta. European Language Resources Association (ELRA).

# A Appendix

# A.1 Error Tags

# A.1.1 Ill-formed OOV tags

Ill-formed refers to misspelled words that have orthographic or cognitive errors calling for correction. Each of the following tags corresponds to a subcategory of ill-formed words.

- Deasciification errors correspond to the incorrect substitution of the Turkish characters (ç, ğ, ı, ö, ş, ü). Both the use of ASCII characters instead of their Turkish counterparts and vice versa are tagged as deasciification errors.
Examples:

canim → canım (my dear)

kemık → kemik (bone)

- Accent errors In Turkish, most words are pronounced as they are written. However, this rule is violated in everyday spoken language or by some local accents. Accent errors consist of both cognitive and intentional errors due to such pronunciations of Turkish words.

Examples:

geliyom → geliyorum (I am coming)

bi sey → bir sey (a thing)

de mi → degil mi (isn't it)

- Proper name errors occur when proper nouns start with lowercase letters, or when an apostrophe is needed to separate a suffix from the proper noun but is missing.

Examples:

ayse → Ayse (Turkish proper name)

13ü → 13'ü

mehmetin → Mehmet'in (Turkish proper name)

- Intentional character errors consist of intentionally mistyped words due to the use of fewer or repeated characters. This type of error was categorized as intentional since the words are deliberately written in an erroneous way by the users for the sake of writing more easily/quickly or emphasizing an emotion.

Examples:

senn → senin (yours)

gelmeeeeee → gelme (don't come)

- Separation errors occur when words are written without a space between them where they should be written separately.

Examples:

birsey → bir sey (something)

bende → ben de (me too)

- Unintentional character errors consist of orthographic errors caused by pressing the wrong character's button on the keyboard (characters in the vicinity of the correct character) or cognitive errors due to not knowing the correct form of the word.
Examples:

kslem → kalem (pen)

direk → direkt (directly)

- Adjacency errors occur when words are written separately where they should be written without a space between them.

Examples:

hiç biri → hiçbiri (none)

halbu ki → halbuki (whereas)

- Vowel errors occur when words are written by omitting all the vowels for easier/faster writing.

Examples:

snn → senin (your)

cnm → canım (my dear)

- Phonetic substitution errors occur when one or more characters in a word are replaced with either their non-Turkish counterparts (if the pronunciations are similar) or non-alphabetical symbols (if the shapes of the characters are similar).

Examples:

Serqan → Serkan (Turkish proper name)

ewe → eve (to home)

# A.1.2 Well-formed OOV tags

The following four tags constitute the well-formed category, indicating that we did not correct the words belonging to these categories, since there is no correct form of these words in Turkish.

- Foreign word errors consist of all foreign words (proper, correctly spelled and misspelled) and derived non-Turkish company or brand names etc., without checking whether they are typed correctly. Note that all words in this category are converted to lowercase.

Examples:

direction, director, justin

digitürk (company name with misspelling)

turkcell (company name without misspelling)

- Social media errors consist of words such as vocatives, hashtags etc. that are used in social media texts.

Examples:

hahahahah, #resist

- Abbreviations consist of words that are either official abbreviations or commonly used abbreviated forms of Turkish words.

Examples:

fb - Fenerbahçe (famous Turkish sports club)

dk - dakika (minute)

- Neologisms consist of derived non-lexical words commonly used in social media texts.
Usually such words are derived by using an English word and a Turkish derivational suffix.

Examples:

tivit-lemek (tweet-ing)

hack-lemek (hack-ing)

# A.2 Additional Tables
| | Precision | Recall | F1-score |
| --- | --- | --- | --- |
| TRmorph | 0.881 | 0.986 | 0.930 |
| Zemberek | 0.840 | 0.991 | 0.909 |
Table 5: Precision and recall values of the morphological analyzers for IV words.
| Error Type | Tag | Wrong | Corrected |
| --- | --- | --- | --- |
| Abbreviation | OOV-well_formed-abbr | kib | kib (kendine iyi bak) (take care of yourself) |
| Accent | OOV-ill_formed-accent | geliyom, dimi | geliyorum (I am coming), degil mi (is that so) |
| Adjacent | OOV-ill_formed-joint | bir\|kaç | birkaç (a few) |
| Deasciification | OOV-ill_formed-deascii | calismak, gıtmek | çalışmak (to work), gitmek (to go) |
| Foreign Word | OOV-well_formed-foreign | Twitter, iPhone | Twitter, iPhone |
| Intentional Char | OOV-ill_formed-int | canımmm, haydiii | canım (sweetheart), haydi (come on) |
| Neologism | OOV-well_formed-neologism | kardo | kardo |
| Phonetic Substitution | OOV-ill_formed-phonetic_sub | geli$im | gelişim (development) |
| Proper Name | OOV-ill_formed-proper | ahmetten | Ahmet'ten |
| Separation | OOV-ill_formed-sep | hersey | her\|sey (everything) |
| Social Media Phrase | OOV-well_formed-social | ahahnaha, sdfsdfsdf, yha, #hashtag | ahahnaha, sdfsdfsdf, yha, #hashtag |
| Unintentional Char | OOV-ill_formed-unint | gerel, haayt | gerek (need), hayat (life) |
| Vowel | OOV-ill_formed-vowel | tmm, fln | tamam (okay), falan (so and so) |
Table 6: All error types and name tags of OOV words in the sentence dataset along with the examples and corresponding normalized words.