diff --git a/adversarialtextgenerationviasequencecontrastdiscrimination/a4d95139-771b-47f6-a1f2-6ceae119623e_content_list.json b/adversarialtextgenerationviasequencecontrastdiscrimination/a4d95139-771b-47f6-a1f2-6ceae119623e_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..d8d2cd874362862471f3874f631bb6ea009db5d9 --- /dev/null +++ b/adversarialtextgenerationviasequencecontrastdiscrimination/a4d95139-771b-47f6-a1f2-6ceae119623e_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:26ce1b873235c4242bec71db6dc0ac56a740449bb326f9e6a0ecbf2125fcc836 +size 46969 diff --git a/adversarialtextgenerationviasequencecontrastdiscrimination/a4d95139-771b-47f6-a1f2-6ceae119623e_model.json b/adversarialtextgenerationviasequencecontrastdiscrimination/a4d95139-771b-47f6-a1f2-6ceae119623e_model.json new file mode 100644 index 0000000000000000000000000000000000000000..5107eeabe8ad90cc3822380ab505031eb6a57ca4 --- /dev/null +++ b/adversarialtextgenerationviasequencecontrastdiscrimination/a4d95139-771b-47f6-a1f2-6ceae119623e_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a6adf5b1c8c9fc09426f2818e607e79fbf59edbb8ed77e92d120ac42d1bff77 +size 58051 diff --git a/adversarialtextgenerationviasequencecontrastdiscrimination/a4d95139-771b-47f6-a1f2-6ceae119623e_origin.pdf b/adversarialtextgenerationviasequencecontrastdiscrimination/a4d95139-771b-47f6-a1f2-6ceae119623e_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..54cbaf904ced6d9bfd953a0151fd048a1e1818ba --- /dev/null +++ b/adversarialtextgenerationviasequencecontrastdiscrimination/a4d95139-771b-47f6-a1f2-6ceae119623e_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:986bbf4577c1151c345cbebf57ada2f888599c64ffaa2746bc4437da57bf360c +size 366546 diff --git a/adversarialtextgenerationviasequencecontrastdiscrimination/full.md 
b/adversarialtextgenerationviasequencecontrastdiscrimination/full.md new file mode 100644 index 0000000000000000000000000000000000000000..658c404dc9d278d119dde0caf35501c646817b98 --- /dev/null +++ b/adversarialtextgenerationviasequencecontrastdiscrimination/full.md @@ -0,0 +1,238 @@ +# Adversarial Text Generation via Sequence Contrast Discrimination + +Ke Wang, Xiaojun Wan + +Wangxuan Institute of Computer Technology, Peking University + +The MOE Key Laboratory of Computational Linguistics, Peking University + +{wangke17, wanxiaojun}@pku.edu.cn + +# Abstract + +In this paper, we propose a sequence contrast loss driven text generation framework, which learns the difference between real texts and generated texts and uses that difference. Specifically, our discriminator contains a discriminative sequence generator instead of a binary classifier, and measures the 'relative realism' of generated texts against real texts by making use of them simultaneously. Moreover, our generator uses discriminative sequences to directly improve itself, which not only replaces the gradient propagation process from the discriminator to the generator, but also avoids the time-consuming sampling process of estimating rewards in some previous methods. We conduct extensive experiments with various metrics, substantiating that our framework brings improvements in terms of training stability and the quality of generated texts. + +# 1 Introduction + +Generating human-like texts has always been a fundamental problem in the natural language processing field, which is essential to many applications such as machine translation (Bahdanau et al., 2015), image captioning (Fang et al., 2015), and dialogue systems (Reschke et al., 2013). 
Currently, the dominant approaches are auto-regressive models, such as Recurrent Neural Networks (Mikolov et al., 2011), Transformer (Vaswani et al., 2017), and Convolutional Seq2Seq (Gehring et al., 2017), which have achieved impressive performance on language generation using the Maximum Likelihood Estimation (MLE) method. Nevertheless, some studies reveal that such settings may have three main drawbacks: First, the MLE method makes the generative model extremely sensitive to rare samples, which results in the learned distribution being too conservative (Feng and McCulloch, 1992; Ahmad and Ahmad, 2019). Second, auto-regressive generation models suffer from exposure bias (Bengio et al., 2015) due to their dependence on previously sampled outputs during the inference phase. Third, they only consider the word-level objective and may fail to guarantee sentence-level goals, such as realism, semantic consistency, and long-range semantic structure (Ranzato et al., 2016).

Recently, many studies (Yu et al., 2017; Che et al., 2017; Lin et al., 2017; Zhang et al., 2017; Chen et al., 2018; Wang and Wan, 2018; Ke et al., 2019; Nie et al., 2019; Wang and Wan, 2019; Wang et al., 2019) have tried to apply generative adversarial networks (GAN) (Goodfellow et al., 2014) to text generation, using discriminator networks as loss functions to ensure these higher-level objectives. However, the discreteness of text makes it difficult for gradients to pass from the discriminator to the generator. Current solutions are mainly based on reinforcement learning (Yu et al., 2017) or differentiable sampling functions (Jang et al., 2017). In addition, given the complexity of language, the generator is often much weaker than the discriminator in practice, making it difficult to obtain a clear optimization direction from the discriminator and to learn from scratch.
+ +In this paper, by borrowing techniques from contrastive learning (Hadsell et al., 2006; Henaff et al., 2019; He et al., 2019; Chen et al., 2020), we propose a sequence contrast loss driven adversarial learning framework for text generation, SLGAN. In our framework, the discriminator $D$ is not just a simple binary classifier, but a Siamese network composed of a sequence generator $G_{d}$ , which can provide sequences with discriminative information. In other words, our discriminator $D$ measures the gap between the generated texts and the real texts, rather than simply predicting the probability of the generated data (by generator $G$ ) being real. Specifically, these discriminative sequences with well-formed textual structure information can be used to + +![](images/733820c44dae884c8fadcbe8f76ceefcff119385f069dddd6c683614219b9bd0.jpg) +Figure 1: Illustration of SLGAN. $\pmb{x}$ is the real text sampled from $\mathcal{D}$ . $\pmb{y}$ is the text generated by $G$ , and $\hat{\pmb{y}}$ is the discriminative text generated by $G_{d}$ . + +measure the 'relative realism' (sequence contrast loss) of the generated texts against the real texts, and further improve the generator $G$ . Intuitively, the discriminator can not only tell if the text generated by the generator is good, but also teach the generator in which direction to generate better text. Our motivations are two-fold: 1) Our discriminator can provide better discriminative information to the generator because it observes both 'fake' and 'real' data simultaneously. 2) Compared to other gradient propagation strategies based on reinforcement learning or differentiable sampling functions, the contrastive loss between generated sequences and discriminative sequences can improve the generator more time-efficiently and steadily. 
We conduct experiments on both synthetic and real datasets, and use various metrics (i.e., fluency, novelty, generalization, diversity, human evaluation, and learning curves) to show that our approach not only produces more realistic samples but also greatly stabilizes the adversarial training process.

# 2 Method

The architecture of our proposed model is depicted in Figure 1. The whole framework can be divided into two adversarial learning objectives: generator learning and discriminator learning. The goal of the discriminator $D$ is to learn the difference ('relative realism') between fake texts ($\pmb{y}$, texts generated by the generator) and real texts ($\pmb{x}$), while the goal of the generator $G$ is to use this difference (discriminative sequences) to generate more realistic texts; the generator's objective contains a word-level term $(\mathcal{L}_{mle})$ and a sentence-level term $(\hat{\mathcal{L}}_{adv})$.

To achieve the above goals, we make two design choices. One is that the discriminator $D$ observes and uses both 'fake' and 'real' data at the same time, rather than considering them in an alternating fashion. The other is that the discriminator is internally not a binary classifier, but a sequence generator $G_{d}$. $G_{d}$ aims to generate a discriminative sequence $\hat{\pmb{y}}$, which can be considered a sequence representation used for better measurement of 'relative realism'. To some extent, $G_{d}$ can be seen as an 'amplifier': the closer the input text is to real texts, the less it changes the input. Further, $\hat{\pmb{y}}$ can not only be used to measure the 'relative realism' of generated texts against real texts, but also directly affect $G$ through the sequence contrast loss. Therefore, by calculating the contrastive loss, the gradient back-propagation process from the discriminator to the generator is avoided, which is of significant importance in adversarial learning.
Discriminator Learning: The contrastive loss of our discriminator takes the output of the discriminative sequence generator $G_{d}$ for a positive example (real texts $\pmb{x}$), calculates its similarity to an example of the same class $(\pmb{x})$, and contrasts that with the distance to negative examples ($\pmb{y}$, texts generated by the generator):

$$
\mathcal{L}_{\text{discriminator}} = \lambda_{i}\, \mathrm{Sim}_{s} - \mathrm{Sim}_{d}, \tag{1}
$$

where $\mathrm{Sim}_{d}$ and $\mathrm{Sim}_{s}$ are the similarity measures of a pair of dissimilar points and a pair of similar points, respectively. $\lambda_{i} = \max \{\lambda, 1 - \alpha i\}$ is the coefficient that balances the two terms at the $i$-th epoch. It is worth noting that Eq. 1 degenerates into the vanilla GAN's adversarial loss when $\lambda_{i} = 0$.

We use the KL-divergence to measure how similar the word distributions of two generated sequences are to each other, and the inter-class loss $\mathrm{Sim}_{d}$ is:

$$
\mathrm{Sim}_{d} = \mathcal{L}_{adv} = \mathbb{E}_{\boldsymbol{x} \sim \mathcal{D},\, z \sim \mathcal{P}} \big[ \| G_{d}(\boldsymbol{x}; \theta_{d}) - G_{d}(G(z; \theta_{g}); \theta_{d}) \|_{kl} \big], \tag{2}
$$

where $z$ is sampled from a noise distribution $\mathcal{P}$. The output of $G_{d}$ is not a probability between 0 and 1, but a representation with more discriminative information. That is, the generator $G_{d}$ inside our discriminator takes as input the real data $\pmb{x}$ or the fake data $G(z;\theta_g)$, and then generates a word sequence $\hat{\pmb{y}}$ for each input.

In addition, we aim to make $\hat{\pmb{y}}$ meaningful, so that it can be used not only to discriminate but also to represent 'realism' features.
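For concreteness, the discriminator objective can be sketched in a few lines of plain Python. This is our illustration, not the authors' code: `seq_kl`, `balance_coeff`, and `discriminator_loss` are hypothetical names, and toy probability vectors stand in for the word distributions produced by $G_d$.

```python
import math

def seq_kl(p_seq, q_seq, eps=1e-12):
    """Token-averaged KL divergence ||.||_kl between two equal-length
    sequences of word probability distributions."""
    per_token = [
        sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))
        for p, q in zip(p_seq, q_seq)
    ]
    return sum(per_token) / len(per_token)

def balance_coeff(i, lam=1.0, alpha=0.1):
    """lambda_i = max(lambda, 1 - alpha * i), the balance coefficient
    at the i-th epoch."""
    return max(lam, 1.0 - alpha * i)

def discriminator_loss(sim_s, sim_d, epoch, lam=1.0, alpha=0.1):
    """Eq. 1: L_discriminator = lambda_i * Sim_s - Sim_d, where sim_d is
    the inter-class KL gap of Eq. 2 and sim_s is the intra-class
    reconstruction loss of Eq. 3."""
    return balance_coeff(epoch, lam, alpha) * sim_s - sim_d

# Toy distributions over a 4-word vocabulary, two time steps each.
uniform = [[0.25] * 4, [0.25] * 4]
peaked = [[0.7, 0.1, 0.1, 0.1], [0.1, 0.7, 0.1, 0.1]]
sim_d = seq_kl(uniform, peaked)   # > 0: G_d(x) and G_d(G(z)) differ
sim_s = seq_kl(uniform, uniform)  # = 0: perfect reconstruction of x
loss = discriminator_loss(sim_s, sim_d, epoch=0)
```

Note that with the paper's setting $\lambda = 1.0$, the coefficient $\max\{1.0, 1 - \alpha i\}$ stays at 1.0 for every epoch, so both terms remain active throughout training.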
We hence rewrite the intra-class loss $\mathrm{Sim}_{s}$ with a similar idea as:

$$
\mathrm{Sim}_{s} = \mathcal{L}_{rec} = \mathbb{E}_{\boldsymbol{x} \sim \mathcal{D}} \big[ \| G_{d}(\boldsymbol{x}; \theta_{d}) - \boldsymbol{x} \|_{kl} \big]. \tag{3}
$$

In practice, we add noise to $\pmb{x}$ by randomly replacing input words with the noise word ($<$unk$>$).

Generator Learning: The loss function of our generator includes two terms: one term $(\mathcal{L}_{mle})$ concerns word-level fitness, and the other term $(\hat{\mathcal{L}}_{adv})$ ensures a higher level of 'realism'-resembling qualities:

$$
\mathcal{L}_{\text{generator}} = \mathcal{L}_{mle} + \hat{\lambda}_{i} \hat{\mathcal{L}}_{adv}, \tag{4}
$$

where $\hat{\lambda}_i = \hat{\lambda} (i / k)$ is the balance coefficient and $k$ is the total number of epochs.

Given a training sentence $\pmb{x} = \{x_0, \dots, x_t, \dots\}$ with length $|\pmb{x}|$, the word-level objective $\mathcal{L}_{mle}$ is to minimize the negative log-likelihood loss as follows:

$$
\mathcal{L}_{mle} = \mathbb{E}_{\boldsymbol{x} \sim \mathcal{D}} \Big[ - \sum_{t = 1}^{|\boldsymbol{x}| - 1} \log G(x_{t} \mid \boldsymbol{x}_{0:t-1}) \Big], \tag{5}
$$

where $G(x_{t}|\pmb{x}_{0:t-1})$ denotes the probability that the output of $G$ is $x_{t}$ conditioned on the given prefix $\pmb{x}_{0:t-1} = \{x_0,x_1,\dots,x_{t-1}\}$ at time step $t$. In the inference phase, the generator $G$ instead takes the previously sampled output $\pmb{y}_{0:t-1}$ as input at time step $t$. Here $G$ is an auto-regressive generation model (e.g., an RNN and its variants (Mikolov et al., 2011; Hochreiter and Schmidhuber, 1997; Chung et al., 2014), Transformer (Vaswani et al., 2017), or Convolutional Seq2Seq (Gehring et al., 2017)).
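The generator-side pieces above can be sketched as follows. This is our illustration with hypothetical names; in particular, the `<unk>` replacement probability is our assumption, since the paper does not state a rate.

```python
import math
import random

def add_unk_noise(tokens, p=0.1, rng=None):
    """Corrupt x for the reconstruction loss Sim_s (Eq. 3) by randomly
    replacing input words with the noise word <unk>. The replacement
    probability p is an assumption, not from the paper."""
    rng = rng or random.Random(0)
    return ["<unk>" if rng.random() < p else t for t in tokens]

def mle_loss(step_probs):
    """Eq. 5 for a single sentence: negative log-likelihood, where
    step_probs[t] is G's probability of the gold word x_t given the
    prefix x_{0:t-1}."""
    return -sum(math.log(p) for p in step_probs)

def generator_balance(i, k=200, lam_hat=1.0):
    """hat-lambda_i = hat-lambda * (i / k): the adversarial term of
    Eq. 4 is phased in linearly over the k training epochs."""
    return lam_hat * i / k
```

With the paper's values ($\hat{\lambda} = 1.0$, $k = 200$), the adversarial term's weight grows from 0 to 1 over training, so early epochs are dominated by the MLE term.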
Furthermore, the other goal of generator $G$ is to minimize $\mathrm{Sim}_{d}$ in Eq. 2, with the intuition of using a discriminator network to learn a loss function over sentence-level properties (e.g., long-range semantic structure, preserving semantic consistency, etc.) over time, rather than explicitly formulating these properties. According to the discriminator's loss (Eq. 1), the closer $G(z; \theta_{g})$ is to $\pmb{x}$, the closer $G_{d}(G(z; \theta_{g}); \theta_{d})$ is to $G(z; \theta_{g})$. As such, we resort to an approximation approach and define the generator's adversarial loss as:

$$
\hat{\mathcal{L}}_{adv} = \mathbb{E}_{z \sim \mathcal{P}} \big[ \| G_{d}(G(z; \theta_{g}); \theta_{d}) - G(z; \theta_{g}) \|_{kl} \big]. \tag{6}
$$

In this way, we can directly guide the generation of $G$ by measuring the sequence contrast loss between the outputs of $G$ and $G_{d}$, which not only avoids the gradient back-propagation process from the discriminator to the generator, but also lets the generator use the discriminator's discriminative information more effectively.

# 3 Experiments

# 3.1 Setup

In this study, we use Texygen (Zhu et al., 2018), a benchmarking platform that implements a majority of GAN-based text generation models and covers a set of metrics, to standardize comparisons with other GAN models. We compare SLGAN with several typical and state-of-the-art unsupervised generic text generation models, including MLE (Mikolov et al., 2011), SeqGAN (Yu et al., 2017), MaliGAN (Che et al., 2017), RankGAN (Lin et al., 2017), GSGAN (Kusner and Hernandez-Lobato, 2016), TextGAN (Zhang et al., 2017), and LeakGAN (Guo et al., 2018). Without loss of generality, we evaluate our model on two benchmark datasets: a synthetic dataset and a real text dataset (COCO image caption (Lin et al., 2014)).

# 3.1.1 Implementation Details

In our model, the default initial parameters of all generators follow a Gaussian distribution $\mathcal{N}(0,1)$.
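Eq. 6 can likewise be sketched with toy distributions. This is our illustration; `seq_kl` is a simple token-averaged KL standing in for $\|\cdot\|_{kl}$, and the distribution values are made up.

```python
import math

def seq_kl(p_seq, q_seq, eps=1e-12):
    """Token-averaged KL divergence between two equal-length sequences
    of word probability distributions."""
    return sum(
        sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))
        for p, q in zip(p_seq, q_seq)
    ) / len(p_seq)

def generator_adv_loss(fake_dists, gd_dists):
    """Eq. 6: the KL gap between G's own outputs G(z) and the
    discriminative sequences G_d(G(z)). Because G_d acts as an
    'amplifier', this gap shrinks as G's outputs approach real text,
    so minimizing it improves G without back-propagating through D."""
    return seq_kl(gd_dists, fake_dists)

# If G_d barely changes G's output, the adversarial loss is near zero.
fake = [[0.6, 0.2, 0.2], [0.1, 0.8, 0.1]]
close = [[0.6, 0.2, 0.2], [0.1, 0.8, 0.1]]
far = [[0.1, 0.1, 0.8], [0.8, 0.1, 0.1]]
```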
The total number of adversarial training epochs is 200 and the sampling temperature is set to 1.0. We set $\lambda = 1.0$ and $\alpha = 0.1$, and $G_{d}$ is a seq2seq model based on a single-layer RNN-GRU with Luong attention. $\hat{\lambda}$ is set to 1.0, and the total number of epochs is $k = 200$, chosen based on performance. $G$ is a single-layer RNN-GRU network and can be easily extended to other types of generators as well. We implement our model in PyTorch and train on a TITAN X graphics card.

# 3.1.2 Dataset Statistics

A summary of statistics for each dataset is provided in Table 1. For a fair comparison, on both the synthetic and real datasets, we train all models on training sets of the same size (10,000) and use the models to generate sets of sentences of the same size (10,000) for evaluation.

# 3.2 Synthetic Data Experiment

Here we use the synthetic dataset used by Texygen (Zhu et al., 2018), which consists of a set of sequential tokens which can be seen as the simulated
| Datasets | #Train | #Test | #Vocab | Max-Length |
| --- | --- | --- | --- | --- |
| Synthetic | 10,000 | 10,000 | 5,000 | 20 |
| Real | 10,000 | 10,000 | 4,684 | 38 |
Table 1: Statistics for the synthetic and real datasets we use.

![](images/9f95fe637f970d25e194360c88350ede25d7413cbb967f4af97febf92649e4c7.jpg)
Figure 2: The illustration of learning curves. The dotted line marks the end of pre-training for all baseline models except GSGAN and TextGAN.

data compared to real-world language data. We compare our model with various models on this dataset, as shown in Figure 2. We observe that our model outperforms all other competitors by a large margin, and its NLL loss declines rapidly and steadily from the beginning, demonstrating that our model is more stable and time-efficient.

# 3.3 Real Data Experiment

We also conduct experiments on a real-world dataset (i.e., COCO image caption), and present a variety of evaluation methods for a comprehensive comparison.

Fluency: We show the perplexity of generated sentences in Figure 3, which shows that our model is good at maintaining the fluency of sentences.

Novelty: We use the novelty measure in (Wang and Wan, 2018) to investigate how different the generated sentences are from the training corpus. From

![](images/b47c15ce574656d335b288d6332840b379c0bcdf8d241cb291d7abe2923aa32e.jpg)
Figure 3: Comparison of fluency (lower perplexity means better fluency) and novelty of generated sentences.

![](images/37aa5477c8765ba42e5b7e5e533363a089c86c8f91a7965d5f64ac9654fe4ee2.jpg)
Figure 4: Different loss curves during the adversarial training process.

the results in Figure 3, we observe that our model has a better ability to generate new sequences.

Generalization: Following Texygen, we also evaluate BLEU (Papineni et al., 2002) between the generated sentences and the test set to see the generalization capacity of different models. The BLEU scores are shown in Table 2, which show that our model has rather good generalization capacity. Moreover, as the order $n$ of the n-grams rises, the corresponding BLEU performance of our model does not drop as fast as that of other models.
Diversity: We use Self-BLEU to evaluate how much each sentence resembles the rest of a generated collection. From Table 2, we see that the sentences generated by our model have the lowest Self-BLEU scores, indicating the highest diversity.

Human Evaluation: We randomly extract 100 sentences from the generated sentences and hire three workers on Amazon Mechanical Turk to rate each of them on its 'grammaticality', 'topicality', and 'overall' aspects, where 'topicality' indicates the semantic consistency of the entire sentence. The rating score ranges from 1 to 5, with 5 being the best. As shown in Table 2, our model outperforms several baseline models, especially in the aspects of 'topicality' and overall quality.

Training Stability: We also show the different loss curves of our model during the adversarial training process in Figure 4. As can be seen, the adversarial process between $G$ and $G_{d}$ is quite stable. First, the discriminator is not powerful enough to drive the loss $\mathcal{L}_{adv}$ to 0, because it does more than simple binary prediction. Second, the generator's ability $(\hat{\mathcal{L}}_{adv})$ to deceive the discriminator keeps fluctuating. Since the discriminator keeps improving, we argue that the generator's capabilities are constantly being enhanced as well, i.e., its outputs become more similar to real texts.
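As a rough illustration of how the diversity metric works, the following is a simplified Self-BLEU sketch. This is our own simplification: a single clipped n-gram precision, without the brevity penalty and geometric mean used by the full BLEU implementation in Texygen.

```python
from collections import Counter

def bleu_n(candidate, references, n=2):
    """Clipped (modified) n-gram precision of one tokenized sentence
    against a set of reference sentences."""
    cand_ngrams = Counter(tuple(candidate[i:i + n])
                          for i in range(len(candidate) - n + 1))
    if not cand_ngrams:
        return 0.0
    max_ref = Counter()  # per-n-gram maximum count over all references
    for ref in references:
        ref_ngrams = Counter(tuple(ref[i:i + n])
                             for i in range(len(ref) - n + 1))
        for g, c in ref_ngrams.items():
            max_ref[g] = max(max_ref[g], c)
    clipped = sum(min(c, max_ref[g]) for g, c in cand_ngrams.items())
    return clipped / sum(cand_ngrams.values())

def self_bleu(corpus, n=2):
    """Self-BLEU: score each sentence against all the others; lower
    values indicate a more diverse generated collection."""
    scores = [bleu_n(s, corpus[:i] + corpus[i + 1:], n)
              for i, s in enumerate(corpus)]
    return sum(scores) / len(scores)
```

A collection of identical sentences scores 1.0 (no diversity), while a collection with no shared n-grams scores 0.0.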
| Models | BLEU-2 ↑ | BLEU-3 ↑ | BLEU-4 ↑ | BLEU-5 ↑ | Self-BLEU-2 ↓ | Self-BLEU-3 ↓ | Self-BLEU-4 ↓ | Self-BLEU-5 ↓ | Grammaticality ↑ | Topicality ↑ | Overall ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MLE | 0.731 | 0.497 | 0.305 | 0.189 | 0.916 | 0.769 | 0.583 | 0.408 | 3.68 | 2.03 | 2.57 |
| SeqGAN | 0.745 | 0.498 | 0.294 | 0.180 | 0.950 | 0.840 | 0.670 | 0.498 | 3.73 | 3.29 | 3.36 |
| MaliGAN | 0.673 | 0.432 | 0.257 | 0.159 | 0.918 | 0.781 | 0.606 | 0.437 | 3.83 | 2.32 | 2.79 |
| RankGAN | 0.743 | 0.467 | 0.264 | 0.156 | 0.959 | 0.882 | 0.762 | 0.618 | 3.94 | 3.83 | 3.78 |
| LeakGAN | 0.746 | **0.528** | **0.355** | 0.230 | 0.966 | 0.913 | 0.848 | 0.780 | 4.08 | 4.04 | 3.96 |
| TextGAN | 0.593 | 0.463 | 0.277 | 0.207 | 0.942 | 0.931 | 0.804 | 0.746 | **4.23** | 3.46 | 3.99 |
| SLGAN | **0.753** | 0.502 | 0.348 | **0.251** | **0.751** | **0.573** | **0.422** | **0.313** | 3.93 | **4.29** | **4.16** |
+ +Table 2: Results on real dataset. $\downarrow$ means the smaller the better, and $\uparrow$ is the opposite. The best scores are bold and our scores are underlined. The kappa coefficient of the three workers is 0.63. + +# 4 Conclusion and Future Work + +In this study, we propose a sequence contrast loss for adversarial text generation, where the discriminator outputs discriminative sequences rather than binary classification probabilities. Extensive experimental results demonstrate that our model brings improvements in training stability and the quality of generated texts. + +In future work, we will expand our method to have specific targets, to benefit more conditional text generation tasks (e.g., sentimental text generation, dialogue response generation). + +# Acknowledgments + +This work was supported by National Natural Science Foundation of China (61772036), Beijing Academy of Artificial Intelligence (BAAI) and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We appreciate the anonymous reviewers for their helpful comments. Xiaojun Wan is the corresponding author. + +# References + +Kaisar Ahmad and Sheikh Parvaiz Ahmad. 2019. A comparative study of maximum likelihood estimation and bayesian estimation for erlang distribution and its applications. In Statistical Methodologies. InTechOpen. +Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR 2015. +Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In NeurIPS 2015, pages 1171-1179. +Tong Che, Yanran Li, Ruixiang Zhang, R. Devon Hjelm, Wenjie Li, Yangqiu Song, and Yoshua Bengio. 2017. Maximum-likelihood augmented discrete generative adversarial networks. CoRR, abs/1702.07983. 
Liqun Chen, Shuyang Dai, Chenyang Tao, Haichao Zhang, Zhe Gan, Dinghan Shen, Yizhe Zhang, Guoyin Wang, Ruiyi Zhang, and Lawrence Carin. 2018. Adversarial text generation via feature-mover's distance. In NeurIPS 2018, pages 4671-4682.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. 2020. A simple framework for contrastive learning of visual representations. CoRR, abs/2002.05709.
Junyoung Chung, Caglar Gülcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR, abs/1412.3555.
Hao Fang, Saurabh Gupta, Forrest N. Iandola, Rupesh Kumar Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, and Geoffrey Zweig. 2015. From captions to visual concepts and back. In CVPR 2015, pages 1473-1482.
Ziding Feng and Charles E McCulloch. 1992. Statistical inference using maximum likelihood estimation and the generalized likelihood ratio when the true parameter is on the boundary of the parameter space. Statistics & Probability Letters, 13(4):325-332.
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In ICML 2017, pages 1243-1252.
Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In NeurIPS 2014, pages 2672-2680.
Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, and Jun Wang. 2018. Long text generation via adversarial training with leaked information. In AAAI 2018, pages 5141-5148.
Raia Hadsell, Sumit Chopra, and Yann LeCun. 2006. Dimensionality reduction by learning an invariant mapping. In CVPR 2006, pages 1735-1742.

Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. 2019. Momentum contrast for unsupervised visual representation learning. CoRR, abs/1911.05722.
Olivier J.
Henaff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, S. M. Ali Eslami, and Aaron van den Oord. 2019. Data-efficient image recognition with contrastive predictive coding. CoRR, abs/1905.09272.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In ICLR 2017. OpenReview.net.
Pei Ke, Fei Huang, Minlie Huang, and Xiaoyan Zhu. 2019. ARAML: A stable adversarial training framework for text generation. In EMNLP-IJCNLP 2019, pages 4270-4280. Association for Computational Linguistics.
Matt J. Kusner and José Miguel Hernández-Lobato. 2016. GANS for sequences of discrete elements with the gumbel-softmax distribution. CoRR, abs/1611.04051.
Kevin Lin, Dianqi Li, Xiaodong He, Ming-Ting Sun, and Zhengyou Zhang. 2017. Adversarial ranking for language generation. In NeurIPS 2017, pages 3158-3168.
Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft COCO: common objects in context. In ECCV 2014, pages 740-755.
Tomas Mikolov, Stefan Kombrink, Lukás Burget, Jan Cernocký, and Sanjeev Khudanpur. 2011. Extensions of recurrent neural network language model. In ICASSP 2011, pages 5528-5531.
Weili Nie, Nina Narodytska, and Ankit Patel. 2019. Relgan: Relational generative adversarial networks for text generation. In ICLR 2019. OpenReview.net.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL 2002, pages 311-318.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In ICLR 2016.
Kevin Reschke, Adam Vogel, and Dan Jurafsky. 2013. Generating recommendation dialogs by extracting information from user reviews. In ACL 2013, pages 499-504.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS 2017, pages 6000-6010.
Ke Wang, Hang Hua, and Xiaojun Wan. 2019. Controllable unsupervised text attribute transfer via editing entangled latent representation. In NeurIPS 2019, pages 11034-11044.
Ke Wang and Xiaojun Wan. 2018. Sentigan: Generating sentimental texts via mixture adversarial networks. In IJCAI 2018, pages 4446-4452.
Ke Wang and Xiaojun Wan. 2019. Automatic generation of sentimental texts via mixture adversarial networks. Artif. Intell., 275:540-558.
Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In AAAI 2017, pages 2852-2858.
Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, and Lawrence Carin. 2017. Adversarial feature matching for text generation. In ICML 2017, pages 4006-4015.
Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. In SIGIR 2018, pages 1097-1100.

# A Appendices

# A.1 Implementation Details

In our model, the default initial parameters of all generators follow a Gaussian distribution $\mathcal{N}(0,1)$. The total number of adversarial training epochs is 200 and the sampling temperature is set to 1.0. We set $\lambda = 1.0$ and $\alpha = 0.1$, and $G_{d}$ is a seq2seq model based on a single-layer RNN-GRU with Luong attention. $\hat{\lambda}$ is set to 1.0, and the total number of epochs is $k = 200$, chosen based on performance. $G$ is a single-layer RNN-GRU network and can be easily extended to other types of generators as well. We implement our model in PyTorch and train on a TITAN X graphics card.

# A.2 Generated Cases

In Table 3, we show example sentences generated by different models trained on a real-world dataset.
From the examples, we see that: 1) Although the sentences produced by the MLE method are longer, they can be unreadable or semantically unreasonable. 2) The sentences generated by LeakGAN and TextGAN are more readable, but they lack diversity and are relatively short. 3) In particular, compared with all benchmark methods, the sentences
**MLE**
- a store is blue sink in a water bottle. (Unreasonable)
- serious air force jet mid flight during a cobblestone day, where a flooded street
- a simple bathroom with some wood cupboards.
- a girafee is standing in the spot for a village in parking spot with four hinged cakes trees
- a jet jet flying away on the runway, in the sky.
- a fat orange motorcycle is low building.
- a bathroom with a sink, a sink, refrigerator and the walls. (Unreadable)
- a living room with a blue roof and green traffic lights blue.
- person sitting in a commercial plane at night.

**LeakGAN**
- a view of a parking desk with two plungers
- a desk with multiple large monitors. (Very short)
- a woman wearing a glass is sitting on a cupboard.
- a kitchen with a shelf area.
- a man tinkers with his ear.
- a white stove top open from a wood oven.
- a group of men talking.
- a kitchen with a shelf area. (Repeated)
- two people sitting on.

**TextGAN**
- a man riding a motorcycle. (Very short)
- is to a bathroom with a sink. (Unreadable)
- a man is on a motorcycle.
- a white toilet a sink.
- with a sink and a table.
- a motorcycle in a blue sky.
- a bathroom with a sink.
- a man is sitting on a motorcycle. (Repeated)
- a bathroom with a sink.

**SLGAN**
- a group of people sat in front of the house together.
- several people stood in front of the bicycle.
- a person is holding a monitor range in the kitchen.
- a woman is riding a motorcycle on the street.
- three adults sat in his car with hats.
- two people in a public parking lot.
- white bathtub, toilet and basin under the bathroom wall.
- an old brick building with a wooden manufacturer next to it.
- a motor scooter parked in the street with a crowd waiting for a parade.
+ +Table 3: Example sentences generated by different models. + +produced by our model are more readable, diversified and of better quality. \ No newline at end of file diff --git a/adversarialtextgenerationviasequencecontrastdiscrimination/images.zip b/adversarialtextgenerationviasequencecontrastdiscrimination/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..deba54d639119b37c81b7d52d4c2ebdacdd03714 --- /dev/null +++ b/adversarialtextgenerationviasequencecontrastdiscrimination/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:33c9beb8b08e032158f52b17b1901e1e01298b8c83fa1405fad7a86ff8017e92 +size 368019 diff --git a/adversarialtextgenerationviasequencecontrastdiscrimination/layout.json b/adversarialtextgenerationviasequencecontrastdiscrimination/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..79a9e246b07a43adf9147abe822882772c6c4a1a --- /dev/null +++ b/adversarialtextgenerationviasequencecontrastdiscrimination/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:924669e5c76b569ffdb48bcbee19c2b9a1b4b052fec0cc37e5424dffd3d5798b +size 266030 diff --git a/adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/a031f169-a4eb-455f-b10e-e7c04edf6505_content_list.json b/adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/a031f169-a4eb-455f-b10e-e7c04edf6505_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1772d3c4019f28cd5605ef8d1f4c897977586bd5 --- /dev/null +++ b/adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/a031f169-a4eb-455f-b10e-e7c04edf6505_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:704d34673615bcc84069ceaa14c485a8b0ce3aff602479e99bfad6897189cdbc +size 77005 diff --git 
a/adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/a031f169-a4eb-455f-b10e-e7c04edf6505_model.json b/adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/a031f169-a4eb-455f-b10e-e7c04edf6505_model.json new file mode 100644 index 0000000000000000000000000000000000000000..b444e8703979e34afa20a3a0260d1330e4e9087e --- /dev/null +++ b/adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/a031f169-a4eb-455f-b10e-e7c04edf6505_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d096ff19e30e59bd7cc59a5558aeacd1673ba2cc18f821634a6f6dfec9a6c807 +size 96067 diff --git a/adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/a031f169-a4eb-455f-b10e-e7c04edf6505_origin.pdf b/adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/a031f169-a4eb-455f-b10e-e7c04edf6505_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..19d5005e38bf77dfe687459f8df09deeadd0b905 --- /dev/null +++ b/adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/a031f169-a4eb-455f-b10e-e7c04edf6505_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:43c802455a205219c728c126167fe547c9db950b170c43588d6dc3b10f6bc456 +size 501421 diff --git a/adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/full.md b/adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/full.md new file mode 100644 index 0000000000000000000000000000000000000000..972204e7aa115341faf37ed53e2b25791d896bf7 --- /dev/null +++ b/adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/full.md @@ -0,0 +1,284 @@ +# Adversarial Training for Code Retrieval with Question-Description Relevance Regularization + +Jie Zhao +The Ohio State University +zhao.1359@osu.edu + +Huan Sun +The Ohio State University +sun.397@osu.edu 
# Abstract

Code retrieval is a key task aiming to match natural and programming languages. In this work, we propose adversarial learning for code retrieval, which is regularized by question-description relevance. First, we adapt a simple adversarial learning technique to generate difficult code snippets given the input question, which can help the learning of code retrieval, a task that faces bi-modal and data-scarcity challenges. Second, we propose to leverage question-description relevance to regularize adversarial learning, such that a generated code snippet should contribute more to the code retrieval training loss only if its paired natural language description is predicted to be less relevant to the user-given question. Experiments on large-scale code retrieval datasets of two programming languages show that our adversarial learning method is able to improve the performance of state-of-the-art models. Moreover, using an additional duplicate question prediction model to regularize adversarial learning further improves the performance, and this is more effective than using the duplicated questions in strong multi-task learning baselines.

# 1 Introduction

Recently, there has been growing research interest in the intersection of natural language (NL) and programming language (PL), with exemplar tasks including code generation (Agashe et al., 2019; Bi et al., 2019), code summarization (LeClair and McMillan, 2019; Panthaplackel et al., 2020), and code retrieval (Gu et al., 2018). In this paper, we study code retrieval, which aims to retrieve code snippets for a given NL question such as "Flatten a shallow list in Python." Advanced code retrieval tools can save programmers tremendous time in various scenarios, such as how to fix a bug, how to implement a function, which API to use, etc. Moreover, even if the retrieved code snippets do not perfectly match the NL question, editing them is often much easier than generating a code snippet from scratch.
For example, the retrieve-and-edit paradigm for code generation (Hayati et al., 2018; Hashimoto et al., 2018; Guo et al., 2019) has attracted growing attention recently; it first employs a code retriever to find the most relevant code snippets for a given question, and then edits them via a code generation model. Previous work has shown that code retrieval performance can significantly affect the final generated results (Huang et al., 2018) in such scenarios.

There have been two groups of work on code retrieval: (1) One group of work (e.g., the recent retrieve-and-edit work (Hashimoto et al., 2018; Guo et al., 2019)) assumes each code snippet is associated with NL descriptions and retrieves code snippets by measuring the relevance between such descriptions and a given question. (2) The other group of work (e.g., CODENN (Iyer et al., 2016) and Deep Code Search (Gu et al., 2018)) directly measures the relevance between a question and a code snippet. Compared with the former group, this group of work has the advantage that it still applies when NL descriptions are not available for candidate code snippets, as is often the case for many large-scale code repositories (Dinella et al., 2020; Chen and Monperrus, 2019). Our work connects with both groups: We aim to directly match a code snippet with a given question, but during training, we will utilize question-description relevance to improve the learning process.

Despite the existing efforts, we observe two challenges for directly matching code snippets with NL questions, which motivate this work. First, code retrieval as a bi-modal task requires representation learning of two heterogeneous but complementary modalities, which has been known to be difficult (Cvitkovic et al., 2019; LeC; Akbar and Kak, 2019) and may require more training data.
This makes code retrieval more challenging than document retrieval, where the target documents often contain useful shallow NL features like keywords or key phrases. Second, code retrieval often encounters special one-to-many mapping scenarios, where one NL question can be solved by multiple code solutions that take very different approaches. Table 1 illustrates the challenges. For $i = 1, 2,$ or $3$, $q^{(i)}$ is an NL question/description that is associated with a Python answer $c^{(i)}$. Here, question $q^{(1)}$ should be matched with multiple code snippets: $c^{(1)}$ and $c^{(2)}$, because they both flatten a 2D list, albeit with different programming approaches. In contrast, $c^{(3)}$ performs a totally different task but shares many tokens with $c^{(1)}$. Hence, it can be difficult to train a code retrieval model that generalizes well to match $q^{(1)}$ with both $c^{(1)}$ and $c^{(2)}$, and is simultaneously able to distinguish $c^{(1)}$ from $c^{(3)}$.

To address the first challenge, we propose to introduce adversarial training to code retrieval, which has been successfully applied to transfer learning from one domain to another (Tzeng et al., 2017) or learning with scarce supervised data (Kim et al., 2019). Our intuition is that by employing a generative adversarial model to produce challenging negative code snippets during training, the code retrieval model will be strengthened to distinguish between positive and negative $\langle q, c \rangle$ pairs. In particular, we adapt a generative adversarial sampling technique (Wang et al., 2017), whose effectiveness has been shown in a wide range of uni-modal text retrieval tasks.

For the second challenge, we propose to further employ question-description (QD) relevance as a complementary uni-modal view to reweight the adversarial training samples.
In general, our intuition is that the code retrieval model should put more weight on the adversarial examples that are hard for it to distinguish, but easy from the view of a QD relevance model. This design helps solve the one-to-many issue in the second challenge by differentiating true-negative and false-negative adversarial examples: If a QD relevance model also suggests that a code snippet is not relevant to the original question, it is more likely to be a true negative, and hence the code retrieval model should put more weight on it. Note that this QD relevance design aims to help train the code retrieval model better; we do not need NL descriptions to be associated with code snippets at the testing phase.

$q^{(1)}$: Flatten a shallow list in Python
$c^{(1)}$: `from itertools import chain; rslt = chain(*list_2d)`

$q^{(2)}$: How to flatten a 2D list to 1D without using numpy?
$c^{(2)}$: `list_of_lists = [[1,2,3],[1,2],[1,4,5,6,7]]; [j for sub in list_of_lists for j in sub]`

$q^{(3)}$: How to get all possible combinations of a list's elements?
$c^{(3)}$: `from itertools import chain, combinations; subsets = chain(*map(lambda x: combinations(mylist, x), range(0, len(mylist) + 1)))`

Table 1: Motivating example. $\langle q^{(i)},c^{(i)}\rangle$ denotes an associated (natural language question, code snippet) pair. $q^{(i)}$ can also be viewed as a description of $c^{(i)}$. Given $q^{(1)}$, the ideal code retrieval result is to return both $c^{(1)}$ and $c^{(2)}$, as their programming semantics are equivalent. In contrast, $c^{(3)}$ is semantically irrelevant to $q^{(1)}$ and should not be returned, although its surface form is similar to $c^{(1)}$. In such cases, it can be easier to decide their relationships from the question perspective, because $\langle q^{(1)},q^{(2)}\rangle$ are more alike than $\langle q^{(1)},q^{(3)}\rangle$.
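As a quick sanity check, the first two snippets of Table 1 are indeed functionally equivalent (`list_2d` is a hypothetical example input, not from the paper):

```python
from itertools import chain

list_2d = [[1, 2, 3], [1, 2], [1, 4, 5, 6, 7]]  # illustrative input

c1 = list(chain(*list_2d))                # c(1): itertools-based flattening
c2 = [j for sub in list_2d for j in sub]  # c(2): list-comprehension flattening

assert c1 == c2  # same result despite very different programming approaches
```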
We conduct extensive experiments using a large-scale dataset StaQC (Yao et al., 2018) and our collected duplicated question dataset from Stack Overflow. The results show that our proposed learning framework is able to improve the state-of-the-art code retrieval models and outperforms using adversarial learning without QD relevance regularization, as well as strong multitask learning baselines that also utilize question duplication data.

# 2 Overview

This work studies code retrieval, the task of matching questions with code, which we abbreviate as QC. The training set $\mathcal{D}^{\mathrm{QC}}$ consists of NL question and code snippet pairs $\mathcal{D}^{\mathrm{QC}} = \{q^{(i)},c^{(i)}\}$. Given an NL question $q^{(i)}$, the QC task is to find $c^{(i)}$ from $\mathcal{D}^{\mathrm{QC}}$ among all the code snippets. For simplicity, we omit the data sample index and use $q$ and $c$ to denote a QC pair, and $c^{-}$ to represent any other code snippet in the dataset except for $c$.

Our goal is to learn a QC model, denoted as $f_{\theta}^{\mathrm{QC}}$, that retrieves the highest-scoring code snippets for an input question: $\arg\max_{c' \in \{c\} \cup \{c^{-}\}} f_{\theta}^{\mathrm{QC}}(q, c')$. Note that at testing time, the trained QC model $f_{\theta}^{\mathrm{QC}}$ can be used to retrieve code snippets from any code base, unlike the group of QC methods (Hayati et al., 2018; Hashimoto et al., 2018; Guo et al., 2019) relying on the availability of NL descriptions of code.

We aim to address the aforementioned challenges in code retrieval through two strategies: (1) We introduce adversarial learning (Goodfellow et al., 2014a) to alleviate the bi-modal learning challenges. Specifically, an adversarial QC generator selects unpaired code snippets that are difficult for the QC model to discriminate, to strengthen its ability to distinguish top-ranked positive and negative samples (Wang et al., 2017).
(2) We also propose to employ a question-description (QD) relevance model to provide a secondary view on the generated adversarial samples, inspired by the group of QC work that measures the relevance of code snippets through their associated NL descriptions. + +Figure 1 gives an overview of our proposed learning framework, which does not assume specific model architectures and can be generalized to different base QC models or use different QD relevance models. A general description is given in the caption. In summary, the adversarial QC generator selects $\hat{c}$ that is unpaired with a given $q$ . $\hat{q}$ is an NL description of $\hat{c}$ . Details on how to acquire $\hat{q}$ will be introduced in Section 3.2. Next, a QD model predicts a relevance score for $\langle q,\hat{q}\rangle$ . A pairwise ranking loss is calculated based on whether the QC model discriminates ground-truth QC pair $\langle q,c\rangle$ from unpaired $\langle q,\hat{c}\rangle$ . Learning through this loss is reweighted by a down-scale factor, which is dynamically determined by the QD relevance prediction score. This works as a regularization term over potential false negative adversarial samples. + +# 3 Proposed Methodology + +We now introduce in detail our proposed learning framework. We start with the adversarial learning method in Section 3.1 and then discuss the rationale to incorporate question-description or QD relevance feedback in Section 3.2, before putting them together in Section 3.3 and Section 3.4. + +# 3.1 Adversarial Learning via Sampling + +We propose to apply adversarial learning (Goodfellow et al., 2014a) to code retrieval. Our goal is to train a better QC model $f_{\theta}^{\mathrm{QC}}$ by letting it play the adversarial game with a QC generator model $g_{\phi}^{\mathrm{QC}}$ . $\theta$ represents the parameters of the QC model and $\phi$ represents the parameters of the adversarial QC generator. 
![](images/7315be5aecbf289b2c86d90769e7f4b558dfe5c932143c6fc4c20df272b2d8fe.jpg)
Figure 1: Regularized adversarial learning framework. Best viewed in color. The adversarial QC generator (middle) produces an adversarial code snippet given an NL question. The QD relevance model (right) then predicts a relevance score between the given question and the NL description of the generated adversarial code. A pairwise ranking loss is computed between the ground-truth code and the adversarial code. The QC model (left) is trained with the ranking loss, after it is scaled by a QD relevance regularization weight that depends on the QD relevance score. The parameter update is larger when the relevance score is smaller, and vice versa.

As in standard adversarial learning, $f_{\theta}^{\mathrm{QC}}$ plays the discriminator role, distinguishing the ground-truth code snippet $c$ from generated snippets $\hat{c}$. The training objective of the QC model is to minimize $\mathcal{L}_{\theta}$ below:

$$
\begin{array}{l}
\mathcal{L}_{\theta} = \sum_{i} \mathbb{E}_{\hat{c} \sim P_{\phi}(c|q^{(i)})}\, l_{\theta}\left(q^{(i)}, c^{(i)}, \hat{c}\right), \\
l_{\theta} = \max\left(0,\, d + f_{\theta}^{\mathrm{QC}}(q^{(i)}, \hat{c}) - f_{\theta}^{\mathrm{QC}}(q^{(i)}, c^{(i)})\right),
\end{array}
$$

where $l_{\theta}$ is a pairwise ranking loss; specifically, we use a hinge loss with margin $d$. $\hat{c}$ is generated by $g_{\phi}^{\mathrm{QC}}$ and follows a probability distribution $P_{\phi}(c|q^{(i)})$. $g_{\phi}^{\mathrm{QC}}$ aims to assign higher probabilities to code snippets that would mislead $f_{\theta}^{\mathrm{QC}}$.

There are many ways to realize the QC generator. For example, one may employ a sequence model to generate the adversarial code snippet $\hat{c}$ token by token (Bi et al., 2019; Agashe et al., 2019). However, training a sequence generation model is difficult, because the search space of all code token combinations is huge.
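For reference, the pairwise hinge loss $l_{\theta}$ defined above is straightforward to compute (a minimal sketch; the default margin value here follows the implementation details in Section 4.3):

```python
def hinge_loss(f_pos: float, f_neg: float, d: float = 0.05) -> float:
    """l_theta = max(0, d + f(q, c_hat) - f(q, c)): the loss is zero once the
    ground-truth pair outscores the generated pair by at least the margin d."""
    return max(0.0, d + f_neg - f_pos)
```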
Hence, we turn to a simpler idea inspired by Wang et al. (2017) and restrict the generation of $\hat{c}$ to the space of all the existing code snippets in the training dataset $\mathcal{D}^{\mathrm{QC}}$. The QC generator then only needs to sample an existing code snippet $c^{(j)}$ from an adversarial probability distribution conditioned on the given query and let it be $\hat{c}$, i.e., $\hat{c} = c^{(j)} \sim P_{\phi}(c|q^{(i)})$. Adopting this method makes training the QC generator easier and ensures that the generated code snippets are legitimate, as they come directly from the training dataset. We define the adversarial code distribution as:

$$
P_{\phi}(c|q^{(i)}) = \frac{\exp\left(g_{\phi}^{\mathrm{QC}}(q^{(i)}, c)/\tau\right)}{\sum_{c'} \exp\left(g_{\phi}^{\mathrm{QC}}(q^{(i)}, c')/\tau\right)},
$$

where $g_{\phi}^{\mathrm{QC}}$ represents an adversarial QC matching function, and $\tau$ is a temperature hyper-parameter used to tune how much the distribution concentrates on top-scored code snippets. Moreover, scoring all code snippets can be computationally inefficient in practice. Therefore, we use the method of Yang et al. (2019): first uniformly sample a subset of data, whose size is much smaller than the entire training set, and then perform adversarial sampling on this subset.

The generator function $g_{\phi}^{\mathrm{QC}}$ can be pre-trained in the same way as the discriminator (i.e., $f_{\theta}^{\mathrm{QC}}$) and then updated using standard policy-gradient reinforcement learning algorithms, such as REINFORCE (Williams, 1992), to maximize the ranking losses of the QC model. Formally, the QC generator aims to maximize the following expected reward: $J(\phi) = \sum_{i}\mathbb{E}_{c^{(j)}\sim P_{\phi}(c|q^{(i)})}[l_{\theta}(q^{(i)},c^{(i)},c^{(j)})]$, where $l_{\theta}(q^{(i)},c^{(i)},c^{(j)})$ is the pairwise ranking loss of the discriminator model defined earlier.
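The two-stage sampling scheme (uniform subset, then temperature softmax over generator scores) can be sketched as follows; all names are illustrative, not the authors' implementation:

```python
import math
import random

def sample_adversarial_code(q, codes, g_qc, tau=0.2, subset_size=8, rng=random):
    """Sample an adversarial code snippet c_hat ~ P_phi(c | q).

    First uniformly sample a small candidate subset (the efficiency trick of
    Yang et al., 2019), then sample from softmax(g_qc(q, c) / tau) over it.
    """
    subset = rng.sample(codes, min(subset_size, len(codes)))
    logits = [g_qc(q, c) / tau for c in subset]
    m = max(logits)                                # stabilize the softmax
    weights = [math.exp(l - m) for l in logits]
    return rng.choices(subset, weights=weights, k=1)[0]
```

A small temperature (e.g., $\tau = 0.2$, the value used in Section 4.3) concentrates the distribution on the code snippets the generator scores highest, i.e., the ones most likely to mislead the discriminator.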
The gradient of $J$ can be derived as $\nabla_{\phi}J = \sum_{i}\mathbb{E}_{c^{(j)}\sim P_{\phi}(c|q^{(i)})}[l_{\theta}\cdot \nabla_{\phi}\log P_{\phi}(c^{(j)}|q^{(i)})]$. Another option is to let $g_{\phi}^{\mathrm{QC}}$ use the same architecture as $f_{\theta}^{\mathrm{QC}}$ with tied parameters (i.e., $\phi = \theta$), as adopted in previous work (Deshpande and M.Khapra, 2019; Park and Chang, 2019).

The focus of this work is to show the effectiveness of applying adversarial learning to code retrieval and how to regularize it with QD relevance. We leave more complex adversarial techniques (e.g., adversarial perturbation (Goodfellow et al., 2014b; Miyato et al., 2015) or adversarial sequence generation (Li et al., 2018)) for future studies.

# 3.2 Question-Description Relevance Regularization

Intuitively, we can train a better code retrieval model if the negative code snippets are all true-negative ones, i.e., if they are confusingly similar to correct code answers but perform different functionalities. However, because of the one-to-many mapping issue, some negative code snippets sampled by the adversarial QC generator can be false-negative, i.e., they are equally good answers for a given question even though they are not paired with that question in the training set. Unfortunately, during training this problem becomes increasingly pronounced as the adversarial generator improves along with the code retrieval model, eventually making learning less and less effective. Since both the QC model and the adversarial QC generator operate from the QC perspective, it is difficult to further discriminate true-negative and false-negative code snippets.
Different from them, we only leverage QD relevance during training, to provide a secondary view and to reweight the adversarial samples. Fortunately, an adversarial code snippet $\hat{c}$ sampled from the original training dataset $\mathcal{D}^{\mathrm{QC}}$ is paired with an NL question $\hat{q}$, which can be regarded as its NL description and used to calculate the relevance to the given question $q$.

Let us refer to the example in Table 1 again. At a certain point during training, with $q^{(1)}$ "Flatten a shallow list in Python" being the given question, the adversarial QC generator may choose $c^{(2)}$ and $c^{(3)}$ as the negative samples. But instead of treating them equivalently, we can infer from the QD matching perspective that $c^{(3)}$ is likely to be a true negative, because $q^{(3)}$ "How to get all possible combinations of a list's elements" clearly has a different meaning from $q^{(1)}$, while $c^{(2)}$ is likely to be a false-negative example, since $q^{(2)}$ "How to flatten a 2D list to 1D without using numpy?" is similar to $q^{(1)}$. Hence, during training, the discriminative QC model should put more weight on negative samples like $c^{(3)}$ rather than $c^{(2)}$.

We now explain how to map QD relevance scores to regularization weights. Let $f^{\mathrm{QD}}(q,\hat{q})$ denote the predicted relevance score between the given question $q$ and the question $\hat{q}$ paired with an adversarial code snippet, and let $f^{\mathrm{QD}}(q,\hat{q})$ be normalized to the range from 0 to 1. We can see from the above example that QD relevance and the adjusted learning weight should be inversely associated, so we map the normalized relevance score to a weight using a monotonically decreasing polynomial function: $w^{\mathrm{QD}}(x) = (1 - x^{a})^{b}$, $0\leq x\leq 1$. Both $a$ and $b$ are positive integer hyper-parameters that control the shape of the curve and can be tuned on the dev sets. In this work, they are both set to one by default for simplicity.
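The mapping is trivial to implement; with the default $a = b = 1$ it reduces to $1 - x$ (a minimal sketch):

```python
def qd_regularization_weight(relevance: float, a: int = 1, b: int = 1) -> float:
    """w_QD(x) = (1 - x^a)^b for a normalized QD relevance score x in [0, 1].

    Monotonically decreasing: high QD relevance (likely a false negative)
    gives a small weight; low relevance (likely a true negative) gives a
    weight near 1.
    """
    assert 0.0 <= relevance <= 1.0
    return (1.0 - relevance ** a) ** b
```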
$w^{\mathrm{QD}}\in [0,1]$ allows the optimization objective to put less weight on adversarial samples that are more likely to be false negative.

**Algorithm 1:** Question-Description Relevance Regularized Adversarial Learning.

Input: QC training data $\mathcal{D}^{\mathrm{QC}} = \{q^{(i)},c^{(i)}\}$; QD model $f^{\mathrm{QD}}$; positive constants $N,\tau ,a,b$. Result: QC model $f_{\theta}^{\mathrm{QC}}$.

1. Pretrain $f_{\theta}^{\mathrm{QC}}$ on $\mathcal{D}^{\mathrm{QC}}$ using the pairwise ranking loss $l_{\theta}$ with randomly sampled negative code snippets.
2. Initialize the QC generator $g_{\phi}^{\mathrm{QC}}$ with $f_{\theta}^{\mathrm{QC}}$: $\phi \leftarrow \theta$.
3. While not converged and the maximum iteration number is not reached:
4. For each randomly sampled $\langle q^{(i)},c^{(i)}\rangle \in \mathcal{D}^{\mathrm{QC}}$ (loop body in lines 5–11):
5. Randomly choose $D \subset \mathcal{D}^{\mathrm{QC}}$ with $|D| = N$.
6. Sample $c^{(j)}\in D$ with $c^{(j)}\sim P_{\phi}(c^{(j)}|q^{(i)}) = \mathrm{softmax}_{\tau}(g_{\phi}^{\mathrm{QC}}(q^{(i)},c^{(j)}))$.
7. $l_{\theta}^{\mathrm{QC}}\leftarrow l_{\theta}(q^{(i)},c^{(i)},c^{(j)})$.
8. Find the question $q^{(j)}$ associated with $c^{(j)}$.
9. $w^{\mathrm{QD}}\leftarrow (1 - f^{\mathrm{QD}}(q^{(i)},q^{(j)})^{a})^{b}$.
10. Update the QC model with gradient descent to reduce the loss $w^{\mathrm{QD}}\cdot l_{\theta}^{\mathrm{QC}}$.
11. Update the adversarial QC generator with gradient ascent using $l_{\theta}^{\mathrm{QC}}\cdot \nabla_{\phi}\log P_{\phi}(c^{(j)}|q^{(i)})$.
12. Optionally update the QD model (see Section 3.4).

# 3.3 Question-Description Relevance Regularized Adversarial Learning

Now we describe the proposed learning framework in Algorithm 1, which combines adversarial learning and QD relevance regularization. Let us first assume the QD model is given; we will shortly explain how to pre-train it and optionally update it.

The QC model is first pre-trained on $\mathcal{D}^{\mathrm{QC}}$ using the standard pairwise ranking loss $l_{\theta}(q^{(i)},c^{(i)},c^{(j)})$ with randomly sampled $c^{(j)}$.
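To make the control flow of Algorithm 1 concrete, here is a self-contained toy instantiation (NumPy, with bilinear scorers standing in for the paper's bi-LSTM encoders; all names, dimensions, and hyper-parameter values are illustrative, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: questions and code snippets as random unit vectors.
N_DATA, DIM = 32, 16
Q = rng.normal(size=(N_DATA, DIM)); Q /= np.linalg.norm(Q, axis=1, keepdims=True)
C = rng.normal(size=(N_DATA, DIM)); C /= np.linalg.norm(C, axis=1, keepdims=True)

W_disc = np.eye(DIM)          # QC model parameters theta (bilinear scorer)
W_gen = W_disc.copy()         # line 2: initialize generator phi <- theta
MARGIN, TAU, LR, N_SUB = 0.05, 0.2, 1e-2, 8

score = lambda W, q, c: float(q @ W @ c)

for i in range(N_DATA):                      # lines 3-4: one pass over QC pairs
    # line 5: uniform candidate subset D (excluding the paired snippet)
    cand = rng.choice([j for j in range(N_DATA) if j != i], size=N_SUB, replace=False)
    # line 6: adversarial sampling from softmax_tau over generator scores
    logits = np.array([score(W_gen, Q[i], C[j]) for j in cand]) / TAU
    probs = np.exp(logits - logits.max()); probs /= probs.sum()
    j = int(cand[rng.choice(N_SUB, p=probs)])
    # line 7: pairwise hinge ranking loss of the QC model
    loss = max(0.0, MARGIN + score(W_disc, Q[i], C[j]) - score(W_disc, Q[i], C[i]))
    # lines 8-9: QD relevance of the two paired questions -> weight w_QD
    rel = (1.0 + float(Q[i] @ Q[j])) / 2.0   # cosine, rescaled to [0, 1]
    w = (1.0 - rel) ** 1                     # (1 - x^a)^b with a = b = 1
    # line 10: gradient descent on the reweighted loss w * l
    if loss > 0.0:
        W_disc -= LR * w * (np.outer(Q[i], C[j]) - np.outer(Q[i], C[i]))
    # line 11: REINFORCE ascent for the generator: l * grad log P_phi(c_j | q_i)
    grad_logp = np.outer(Q[i], C[j]) / TAU
    for k, jk in enumerate(cand):
        grad_logp -= probs[k] * np.outer(Q[i], C[jk]) / TAU
    W_gen += LR * loss * grad_logp
```

Note how the two updates pull in opposite directions on the same scalar loss: the discriminator descends on $w^{\mathrm{QD}}\cdot l_{\theta}$, while the generator ascends on the unweighted $l_{\theta}$ through the log-probability of its sample.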
Lines 3–11 show the QC model training steps. For each QC pair $\langle q^{(i)},c^{(i)}\rangle$, a batch of negative QC pairs is sampled randomly from the training set $\mathcal{D}^{\mathrm{QC}}$. The QC generator then chooses an adversarial $c^{(j)}$ from the distribution $P_{\phi}(c|q^{(i)})$ defined in Section 3.1; its paired question is $q^{(j)}$. The two questions $q^{(i)}$ and $q^{(j)}$ are then passed to the QD model, and the QD relevance prediction is mapped to a regularization weight $w^{\mathrm{QD}}$. Finally, the regularization weight is used to control the update of the QC model on the ranking loss with the adversarial $\hat{c}$.

# 3.4 Base Model Architecture

Our framework can be instantiated with various model architectures for QC or QD. Here we choose the same neural network architecture as in (Gu et al., 2018; Yao et al., 2019) for our base QC model, which achieves competitive or state-of-the-art code retrieval performance. Concretely, both a natural language question $q$ and a code snippet $c$ are sequences of tokens. They are encoded respectively by separate bi-LSTM networks (Schuster and Paliwal, 1997), passed through a max pooling layer to extract the most salient features of the entire sequence, and then through a hyperbolic tangent activation function. The encoded question and code representations are denoted as $h^q$ and $h^c$. Finally, a matching component scores the vector representations of $q$ and $c$ and outputs their matching score for ranking. We follow previous work and use cosine similarity: $f^{\mathrm{QC}}(q,c) = \mathrm{cosine}(h^{q},h^{c})$.

QD Model. There are various model architecture choices, but here for simplicity, we adapt the QC model for QD relevance prediction. We let the QD model use the same neural architecture as the QC model, but with Siamese question encoders.
The QD relevance score is the cosine similarity between $h^{q^{(i)}}$ and $h^{q^{(j)}}$, the bi-LSTM encoding outputs for questions $q^{(i)}$ and $q^{(j)}$, respectively: $f^{\mathrm{QD}}(q^{(i)},q^{(j)}) = \mathrm{cosine}(h^{q^{(i)}},h^{q^{(j)}})$. This method allows using a pre-trained QC model to initialize the QD model parameters, which is easy to implement, and the pre-trained question encoder in the QC model can help QD performance. Since programming-domain question paraphrases are rare, we collect a small QD training set consisting of programming-related natural language question pairs $\mathcal{D}^{\mathrm{QD}} = \{q^{(j)},p^{(j)}\}$ based on duplicated questions in Stack Overflow.

The learning framework can be symmetrically applied, as indicated by Line 12 in Algorithm 1, so that the QD model can also be improved. This may provide better QD relevance feedback to help train a better QC model. In short, we can use a discriminative and a generative QD model. The generative QD model selects adversarial questions to help train the discriminative QD model, and this training can be regularized by the relevance predictions from a QC model. More details will be introduced in the experiments.

# 4 Experiments

In this section, we first introduce our experimental setup, and then show that our method outperforms not only the baseline methods but also multi-task learning approaches, where question-description relevance prediction is the other task. In
| | Train (Python) | Dev (Python) | Test (Python) | Train (SQL) | Dev (SQL) | Test (SQL) |
|---|---|---|---|---|---|---|
| QC | 68,235 | 8,529 | 8,530 | 60,509 | 7,564 | 7,564 |
| QD | 1,085 | 1,085 | 1,447 | 18,020 | 2,252 | 2,253 |
Table 2: Dataset statistics. QD is used to represent the duplicate question dataset.

particular, the QD relevance regularization consistently improves QC performance upon adversarial learning, and the effectiveness of relevance regularization can also be verified as it is symmetrically applied to improve the QD task.

# 4.1 Datasets

We use StaQC (Yao et al., 2018) to train and evaluate our code retrieval model; it contains automatically extracted questions on Python and SQL and their associated code answers from Stack Overflow. We use the version of StaQC in which each question is associated with a single answer, as questions associated with multiple answers are predicted by an automatic answer detection model and are therefore noisier. We randomly split the QC datasets by a 70/15/15 ratio into training, dev, and testing sets. The dataset statistics are summarized in Table 2.

We use Stack Exchange Data Explorer to collect data for training and evaluating QD relevance prediction. Specifically, we collect the question pairs from posts that are manually labeled as duplicate by users, which are related by LinkTypeId=3. It turns out that the QD datasets are substantially smaller than the QC datasets, especially for Python, as shown in Table 2. This makes it more interesting to check whether a small amount of QD relevance guidance can help improve code retrieval performance.

# 4.2 Baselines and Evaluation Metrics

We select state-of-the-art methods from both groups of work for QC (mentioned in the Introduction). DecAtt and DCS below are methods that directly match questions with code. EditDist and vMF-VAE turn code retrieval into a question matching problem.

- DecAtt (Parikh et al., 2016). This is a widely used neural network model with an attention mechanism for pairwise sentence modeling.
- DCS (Gu et al., 2018).
We use this as our base model, because it is a simple yet effective code retrieval model that achieves competitive performance without introducing additional training overhead (Yao et al., 2019). Its architecture has been described in Section 3.4.

- EditDist (Hayati et al., 2018). Code snippets are retrieved by measuring an edit-distance-based similarity function between their associated NL descriptions and the input questions. Since there is only one question for each sample in the QC datasets, we apply a standard code summarization tool (Iyer et al., 2016) to generate code descriptions to match with input questions.
- vMF-VAE (Guo et al., 2019). This is similar to EditDist, but a vMF Variational Autoencoder (Xu and Durrett, 2018) is separately trained to embed questions and code descriptions into latent vector distributions, whose distance is then measured by KL-divergence. This method is also used by Hashimoto et al. (2018).

We further consider multi-task learning (MTL) as an alternative way for QD to help QC. It is worth mentioning that our method does not require associated training data or the sharing of trained parameters between the QD and QC tasks, whereas MTL typically does. For a fair comparison, we adapt two MTL methods to our scenario that use the same base model, or its question and code encoders:

- MTL-DCS. This is a straightforward MTL adaptation of DCS, where the code encoder is updated on the QC dataset and the question encoder is updated on both QC and QD datasets. The model is alternately trained on both datasets.
- MTL-MLP (Gonzalez et al., 2018). This recent MTL method was originally designed to rank relevant questions and question-related comments. It uses a multi-layer perceptron (MLP) network with one shared hidden layer, a task-specific hidden layer, and a task-specific classification layer for each output. We adapt it for our task.
The input to the MLP is the concatenation of similarity features $\left[\max(h^{q}, h^{c}), h^{q} - h^{c}, h^{q} \odot h^{c}\right]$, where $\odot$ is the element-wise product. $h^{q}$ and $h^{c}$ are learned using the same encoders as our base model.

The ranking metrics used for evaluation are Mean Average Precision (MAP) and Normalized Discounted Cumulative Gain (nDCG) (Järvelin and Kekäläinen, 2002). The same evaluation method as in previous work (Iyer et al., 2016; Yao et al., 2019) is adopted for both QC and QD, where we randomly choose from the testing set a fixed-size (49) pool of negative candidates for each question, and
| Method | MAP (Python) | nDCG (Python) | MAP (SQL) | nDCG (SQL) |
|---|---|---|---|---|
| EditDist (Hayati et al., 2018) | 0.2348 | 0.3844 | 0.2096 | 0.3641 |
| vMF-VAE (Guo et al., 2019) | 0.2886 | 0.4511 | 0.2921 | 0.4537 |
| DecAtt (Parikh et al., 2016) | 0.5744 | 0.6716 | 0.5142 | 0.6231 |
| DCS (Gu et al., 2018) | 0.6015 | 0.6929 | 0.5155 | 0.6237 |
| MTL-MLP (Gonzalez et al., 2018) | 0.5737 | 0.6712 | 0.5079 | 0.6179 |
| MTL-DCS | 0.6024 | 0.6935 | 0.5160 | 0.6237 |
| Our | 0.6372* | 0.7206* | 0.5404* | 0.6429* |
| Our - RR | 0.6249* | 0.7111* | 0.5274* | 0.6327* |
Table 3: Code retrieval (QC) performance on test sets. * denotes significantly different from DCS (Gu et al., 2018) in a one-tailed t-test ($p < 0.01$).

![](images/8c998c91fedddb814e25d6094e2aee487157cb1cf31f415e5b7364d356929b01.jpg)
Figure 2: QC learning curves on the Python dev set.

evaluate the ranking of its paired code snippet or question among these negative candidates.

# 4.3 Implementation Details

Our implementation is based on Yao et al. (2019), and we follow this work to set the base model hyperparameters. The vocabulary embedding size for both natural language and programming language is set at 200. The LSTM hidden size is 400. The margin in the hinge loss is 0.05. The trained DCS model is used as pre-training for our models. The learning rate is set at 1e-4 and the dropout rate at 0.25. For adversarial training, we set $\tau$ to 0.2 following (Wang et al., 2017) and limit the maximum number of epochs to 300. Standard L2-regularization is used on all the models. We empirically tried tying the parameters of the discriminator and the generator following previous work (Deshpande and M.Khapra, 2019; Park and Chang, 2019), which showed similar improvements over the baselines. The implementation from Xu and Durrett (2018) is used for the vMF-VAE baseline.

We follow the code preprocessing steps in Yao et al. (2018) for Python and Iyer et al. (2016) for SQL. We use the NLTK toolkit (Bird and Loper, 2004) to tokenize the collected duplicate questions, which share the same NL vocabulary as the QC dataset $\mathcal{D}^{\mathrm{QC}}$.

# 4.4 Results and Analyses

Our experiments aim to answer the following research questions:

(1) Can the question-regularized adversarial learning framework improve code retrieval (QC) performance? We first compare the code retrieval performance of different methods. Table 3 summarizes the test results, which are consistent on both Python and SQL datasets.
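As an aside, under the pooled evaluation protocol of Section 4.2, each query has exactly one relevant item among 50 candidates, so both metrics reduce to simple functions of that item's rank; a minimal sketch (assuming higher scores rank first, counting ties pessimistically):

```python
import math

def map_ndcg_single_relevant(pos_score, neg_scores):
    """With a single relevant item, average precision equals the reciprocal
    rank, and nDCG equals 1 / log2(rank + 1) since the ideal DCG is 1."""
    rank = 1 + sum(s >= pos_score for s in neg_scores)
    return 1.0 / rank, 1.0 / math.log2(rank + 1)
```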
Code retrieval baselines that measure QD relevance, e.g., EditDist and vMF-VAE, are popular in code-generation-related work, but do not perform well compared to the other code retrieval baselines in our experiments, partly because they are not optimized toward the QC task. This suggests that applying more advanced code retrieval methods to retrieve-and-edit code generation can be an interesting future research topic. DCS is a strong baseline, as it outperforms DecAtt, which uses a more complex attention mechanism. This indicates that it is not easy to automatically learn pairwise token associations between natural language and programming languages from software community data, which is also suggested by previous work (Panthaplackel et al., 2019; Vinayakarao et al., 2017).

Our proposed learning algorithm improves QC performance over all the baselines. The "-RR" variant applies only adversarial sampling, without QD relevance regularization. It already leads to improvements over the base model (i.e., DCS), but does not perform as well as our full model. This demonstrates the usefulness of the QD relevance regularization and indicates that selectively weighting the contribution of adversarial samples to the training loss can help the model generalize better to test data. Figure 2 compares QC learning curves on the dev set. The full-model curve is the smoothest, which qualitatively suggests that the adversarial learning has been well regularized.

(2) How does the proposed algorithm compare with multi-task learning methods? The results are reported in Table 4. The MTL-MLP model was originally proposed to improve question-question relevance prediction by using question-comment relevance prediction as a secondary task (Gonzalez et al., 2018). It does not perform as well as MTL-DCS, which basically uses hard parameter sharing between the two tasks and does not require additional similarity feature definitions.
In general, the effectiveness of these MTL baselines on the QC task is limited because only a small number of QD pairs are available for training. Both our method and its ablated variant outperform the + +
|  | Python |  | SQL |  |
| --- | --- | --- | --- | --- |
|  | MAP | nDCG | MAP | nDCG |
| MTL-MLP (Gonzalez et al., 2018) | 0.5737 | 0.6712 | 0.5079 | 0.6179 |
| MTL-DCS | 0.6024 | 0.6935 | 0.5160 | 0.6237 |
| Our | 0.6372 | 0.7206 | 0.5404 | 0.6429 |
+ +MTL baselines. This shows that it may be more effective to use a data-scarce task to regularize the adversarial learning of a relatively data-rich task than to use those scarce data in MTL. + +(3) Can the QD performance be improved by the proposed method? Although QD is not the focus of this work, we can use it to verify the generalizability of our method by symmetrically applying it to update the QD model, as mentioned in Section 3.2. To be concrete, a generative adversarial QD model selects difficult questions from a distribution over question pair scores: $\hat{q}\sim \mathrm{softmax}_{\tau}(f^{\mathrm{QD}}(\hat{q},q^{(i)}))$. Then a QC model is used to calculate a relevance score for a question-code pair, and this score can regularize the adversarial learning of the QD model. + +Table 5 shows the results. Our method and its ablated variants outperform the QD baselines EditDist and vMF-VAE, again suggesting that supervised learning is more effective. The full model achieves the best overall performance, and removing relevance regularization (-RR) from the QC model consistently leads to a performance drop. In contrast, further removing adversarial sampling (-AS) hurts the performance slightly on the SQL dataset, but not on Python. This is probably because the Python QD dataset is very small and adversarial learning can easily overfit on it, which again suggests the importance of our proposed relevance regularization. Finally, removing QC as pre-training (-Pretrain) greatly hurts the performance, which is understandable since the QC datasets are much larger. + +Because the QD model performance can be improved in this way, we allow it to be updated in our QC experiments (corresponding to line 12 in Algorithm 1); those results have been discussed in Table 3. We report here the QC performance using a fixed QD model (i.e.
Our - RR - AS) for relevance regularization: $\mathrm{MAP} = 0.6371$ , $\mathrm{nDCG} = 0.7205$ for Python and $\mathrm{MAP} = 0.5366$ , $\mathrm{nDCG} = 0.6398$ for SQL. Comparing these results with those in Table 3 (Our), one can see that allowing the QD model to update consistently improves QC performance, which suggests that a better QD model can provide more accurate relevance regularization to the QC model and lead to better results. + +Table 4: Comparison of QC performance with MTL methods. + +
|  | Python |  | SQL |  |
| --- | --- | --- | --- | --- |
|  | MAP | nDCG | MAP | nDCG |
| EditDist (Hayati et al., 2018) | 0.3617 | 0.4883 | 0.3246 | 0.4580 |
| vMF-VAE (Guo et al., 2019) | 0.3009 | 0.4616 | 0.3029 | 0.4641 |
| Our | 0.7162 | 0.7821 | 0.6947 | 0.7651 |
| Our - RR | 0.7046 | 0.7734 | 0.6846 | 0.7575 |
| Our - RR - AS | 0.7116 | 0.7787 | 0.6764 | 0.7512 |
| Our - RR - AS - Pretrain | 0.3882 | 0.5170 | 0.6284 | 0.7129 |
+ +Table 5: Question relevance prediction results, evaluated on the question duplication dataset we collected. + +# 5 Related Work + +Code Retrieval. Code retrieval has developed from classic information retrieval techniques (Hill et al., 2014; Haiduc et al., 2013; Lu et al., 2015) to recent deep neural methods, which can be categorized into two groups. The first group directly models the similarity across the natural language and programming language modalities. Besides CODENN (Iyer et al., 2016) and DCS (Gu et al., 2018) discussed earlier, Yao et al. (2019) leverage an extra code summarization task and ensemble a separately trained code summary retrieval model with a QC model to achieve better overall code retrieval performance. Ye et al. (2020) further train a code generation model and a code summarization model through dual learning, which helps learn better NL question and code representations. Both works employ additional sequence generation models that greatly increase the training complexity, and both treat all unpaired code equally as negatives. Our work differs from them in that we introduce adversarial learning for code retrieval, and existing work does not leverage question relevance for code retrieval as we do. + +The second group of works transforms code retrieval into a code description retrieval problem. This methodology has been widely adopted as a component in the retrieve-and-edit code generation literature. For example, heuristic methods such as measuring edit distance (Hayati et al., 2018) or comparing code type and length (Huang et al., 2018) are used, and separate question latent representations are learned (Hayati et al., 2018; Guo et al., 2019). Our work shares with them the idea of exploiting QD relevance, but we use QD relevance in a novel way to regularize the adversarial learning of QC models. It will be interesting future work to leverage the proposed code retrieval method for retrieve-and-edit code generation.
+ +Adversarial Learning. Adversarial learning has been widely used in areas such as computer vision (Mirza and Osindero, 2014; Chen et al., 2016; Radford et al., 2015; Arjovsky et al., 2017), text generation (Yu et al., 2017; Chen et al., 2019; Liang, 2019; Gu et al., 2018; Liu et al., 2017; Ma et al., 2019), relation extraction (Wu et al., 2017; Qin et al., 2018), and question answering (Oh et al., 2019; Yang et al., 2019). We propose to apply adversarial learning to code retrieval because such methods have effectively improved cross-domain task performance and helped generate useful training data. We adapt the method from Wang et al. (2017) to the bi-modal QC scenario. As future work, adversarial learning for QC can be generalized to other settings with different base neural models (Yang et al., 2019) or with more complex adversarial learning methods, such as adding perturbation noise (Park and Chang, 2019) or generating adversarial sequences (Yu et al., 2017; Li et al., 2018). Our method differs from most adversarial learning work in that the discriminator (the QC model) does not treat all generated samples as equally negative. + +# 6 Conclusion + +This work studies the code retrieval problem and tackles the challenges of matching natural language questions with programming language (code) snippets. We propose a novel learning algorithm that introduces adversarial learning to code retrieval and further regularizes it from the perspective of a question-description relevance prediction model. Empirical results show that the proposed method significantly improves code retrieval performance on large-scale datasets for both the Python and SQL programming languages. + +# Acknowledgments + +We would like to thank the anonymous reviewers for their helpful comments. This research was sponsored in part by the Army Research Office under cooperative agreements W911NF-17-1-0412, NSF Grant IIS1815674, and NSF CAREER #1942980.
The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein. + +# References + +Rajas Agashe, Srini Iyer, and Luke Zettlemoyer. 2019. Juice: A large scale distantly supervised dataset for open domain context-based code generation. ArXiv, abs/1910.02216. +Shayan A. Akbar and Avinash C. Kak. 2019. Scor: Source code retrieval with semantics and order. 2019 IEEE/ACM 16th International Conference on Mining Software Repositories (MSR), pages 1-12. +Martín Arjovsky, Soumith Chintala, and Léon Bottou. 2017. Wasserstein generative adversarial networks. In ICML. +Bin Bi, Chen Wu, Ming Yan, Wei Wang, Jiangnan Xia, and Chenliang Li. 2019. Incorporating external knowledge into machine reading for generative question answering. ArXiv, abs/1909.02745. +Steven Bird and Edward Loper. 2004. NLTK: The natural language toolkit. In Proceedings of the ACL Interactive Poster and Demonstration Sessions, pages 214-217, Barcelona, Spain. Association for Computational Linguistics. +MK Chen, Xinyi Lin, Chen Wei, and Rui Yan. 2019. Bofgan: Towards a new structure of backward-or-forward generative adversarial nets. In The World Wide Web Conference, pages 2652-2658. ACM. +Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2016. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS. +Zimin Chen and Martin Monperrus. 2019. A literature study of embeddings on source code. ArXiv, abs/1904.03061. +Milan Cvitkovic, Badal Singh, and Anima Anandkumar. 2019. Open vocabulary learning on source code with a graph-structured cache. In ICML. +Ameet Deshpande and Mitesh M. Khapra. 2019.
Dissecting an adversarial framework for information retrieval. +Elizabeth Dinella, Hanjun Dai, Ziyang Li, Mayur Naik, Le Song, and Ke Wang. 2020. Hoppity: Learning graph transformations to detect and fix bugs in programs. In ICLR. +Ana Gonzalez, Isabelle Augenstein, and Anders Søgaard. 2018. A strong baseline for question relevancy ranking. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4810-4815. +Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014a. Generative adversarial nets. In Advances in neural information processing systems, pages 2672-2680. + +Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014b. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. +Xiaodong Gu, Hongyu Zhang, and Sunghun Kim. 2018. Deep code search. In 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE), pages 933-944. IEEE. +Daya Guo, Duyu Tang, Nan Duan, Ming Zhou, and Jian Yin. 2019. Coupling retrieval and meta-learning for context-dependent semantic parsing. In ACL. +Sonia Haiduc, Gabriele Bavota, Andrian Marcus, Rocco Oliveto, Andrea De Lucia, and Tim Menzies. 2013. Automatic query reformulations for text retrieval in software engineering. In Proceedings of the 2013 International Conference on Software Engineering, pages 842-851. IEEE Press. +Tatsunori B. Hashimoto, Kelvin Guu, Yonatan Oren, and Percy Liang. 2018. A retrieve-and-edit framework for predicting structured outputs. ArXiv, abs/1812.01194. +Shirley Anugrah Hayati, Raphael Olivier, Pravalika Avvaru, Pengcheng Yin, Anthony Tomasic, and Graham Neubig. 2018. Retrieval-based neural code generation. In EMNLP. +Emily Hill, Manuel Roldan-Vega, Jerry Alan Fails, and Greg Mallet. 2014. Nl-based query refinement and contextualized code search results: A user study. 
In 2014 Software Evolution Week-IEEE Conference on Software Maintenance, Reengineering, and Reverse Engineering (CSMR-WCRE), pages 34-43. IEEE. +Po-Sen Huang, Chenglong Wang, Rishabh Singh, Wen-tau Yih, and Xiaodong He. 2018. Natural language to structured query generation via meta-learning. In NAACL-HLT. +Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code using a neural attention model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2073-2083. +Kalervo Järvelin and Jaana Kekäläinen. 2002. Cumulated gain-based evaluation of ir techniques. ACM Transactions on Information Systems (TOIS), 20(4):422-446. +Dong-Jin Kim, Jinsoo Choi, Tae-Hyun Oh, and In So Kweon. 2019. Image captioning with very scarce supervised data: Adversarial semi-supervised learning approach. In EMNLP/IJCNLP. +Alexander LeClair and Collin McMillan. 2019. Recommendations for datasets for source code summarization. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers),
+Jing Ma, Wei Gao, and Kam-Fai Wong. 2019. Detect rumors on twitter by promoting information campaigns with generative adversarial learning. In The World Wide Web Conference, pages 3049-3055. ACM. +Mehdi Mirza and Simon Osindero. 2014. Conditional generative adversarial nets. ArXiv, abs/1411.1784. +Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. 2015. Distributional smoothing with virtual adversarial training. arXiv preprint arXiv:1507.00677. +Jong-Hoon Oh, Kazuma Kadowaki, Julien Kloetzer, Ryu Iida, and Kentaro Torisawa. 2019. Open-domain why-question answering with adversarial learning to encode answer texts. In ACL. +Sheena Panthaplackel, Milos Gligoric, Raymond J. Mooney, and Junyi Jessy Li. 2019. Associating natural language comment and source code entities. ArXiv, abs/1912.06728. +Sheena Panthaplackel, Pengyu Nie, Milos Gligoric, Junyi Jessy Li, and Raymond J. Mooney. 2020. Learning to update natural language comments based on code changes. ArXiv, abs/2004.12169. +Ankur Parikh, Oscar Tackstrom, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249-2255, Austin, Texas. Association for Computational Linguistics. +Dae Hoon Park and Yi Chang. 2019. Adversarial sampling and training for semi-supervised information retrieval. In The World Wide Web Conference, pages 1443-1453. ACM. + +Pengda Qin, Weiran Xu, and William Yang Wang. 2018. Dsgan: Generative adversarial training for distant supervision relation extraction. In ACL. +Alec Radford, Luke Metz, and Soumith Chintala. 2015. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. +Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681. 
+Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. 2017. Adversarial discriminative domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7167-7176. +Venkatesh Vinayakarao, Anita Sarma, Rahul Purandare, Shuktika Jain, and Saumya Jain. 2017. Anne: Improving source code search using entity retrieval approach. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, pages 211-220. ACM. +Jun Wang, Lantao Yu, Weinan Zhang, Yu Gong, Yinghui Xu, Benyou Wang, Peng Zhang, and Dell Zhang. 2017. Irgan: A minimax game for unifying generative and discriminative information retrieval models. In Proceedings of the 40th International ACM SIGIR conference on Research and Development in Information Retrieval, pages 515-524. ACM. +Ronald J Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256. + +Yi Wu, David Bamman, and Stuart J. Russell. 2017. Adversarial training for relation extraction. In EMNLP. +Jiacheng Xu and Greg Durrett. 2018. Spherical latent spaces for stable variational autoencoders. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. +Xiao Yang, Madian Khabsa, Miaosen Wang, Wei Wang, Ahmed Hassan Awadallah, Daniel Kifer, and C Lee Giles. 2019. Adversarial training for community question answer selection based on multiscale matching. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 395-402. +Ziyu Yao, Jayavardhan Reddy Peddamail, and Huan Sun. 2019. Coacor: Code annotation for code retrieval with reinforcement learning. In The World Wide Web Conference, pages 2203-2214. ACM. +Ziyu Yao, Daniel S Weld, Wei-Peng Chen, and Huan Sun. 2018. Staqc: A systematically mined question-code dataset from stack overflow. In Proceedings of the 2018 World Wide Web Conference, pages 1693-1703.
International World Wide Web Conferences Steering Committee. +Wei Ye, Rui Xie, Jinglei Zhang, Tianxiang Hu, Xiaoyin Wang, and Shikun Zhang. 2020. Leveraging code generation to improve code retrieval and summarization via dual learning. In Proceedings of The Web Conference 2020, pages 2309-2319. +Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In Thirty-First AAAI Conference on Artificial Intelligence. \ No newline at end of file diff --git a/adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/images.zip b/adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c369c716fffb777c8ad49a0b0a82c110c0731a70 --- /dev/null +++ b/adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8a20c2f715c59e042066cc37cfc5b8a8f1f989f658e700ab5dcad1cdf6ccf8b4 +size 171874 diff --git a/adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/layout.json b/adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d95127c0fc350b77d69deb31a219691135a9dfcb --- /dev/null +++ b/adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9a9760fbd3f843a0a202c844081197180d2ac44d72f127d892861fa601d842be +size 430909 diff --git a/airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/6aec2548-1f5d-46cb-8a5c-7be65a1cc701_content_list.json b/airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/6aec2548-1f5d-46cb-8a5c-7be65a1cc701_content_list.json new file mode 100644 index 
0000000000000000000000000000000000000000..53f363001177372b23e0e9ebc7efb6028cb626c6 --- /dev/null +++ b/airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/6aec2548-1f5d-46cb-8a5c-7be65a1cc701_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:85f7f398c3d03848b2853e85862f2670d8d5244608b7ad9fc54c7cd88fb71b81 +size 84956 diff --git a/airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/6aec2548-1f5d-46cb-8a5c-7be65a1cc701_model.json b/airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/6aec2548-1f5d-46cb-8a5c-7be65a1cc701_model.json new file mode 100644 index 0000000000000000000000000000000000000000..71e72eff0f9a63aa9586e3e4296633d8d0aa3279 --- /dev/null +++ b/airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/6aec2548-1f5d-46cb-8a5c-7be65a1cc701_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:da1d2340ea5c1acbb583c335b8f131085ed844576381644a1ec32aef04d0dbc8 +size 98343 diff --git a/airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/6aec2548-1f5d-46cb-8a5c-7be65a1cc701_origin.pdf b/airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/6aec2548-1f5d-46cb-8a5c-7be65a1cc701_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8c1a222e97426d99e341ecdb6a217260a34c8a6e --- /dev/null +++ b/airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/6aec2548-1f5d-46cb-8a5c-7be65a1cc701_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:adf8596a36c5a23cf11fbf44cb89496674b2c6947dc8b31c742bdc5adaa4a1b1 +size 703601 diff --git a/airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/full.md b/airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..31c2cfd7904b0f0f0904947a2fa346217a0d1645 --- /dev/null +++ b/airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/full.md @@ -0,0 +1,338 @@ +# AirConcierge: Generating Task-Oriented Dialogue via Efficient Large-Scale Knowledge Retrieval + +Chieh-Yang Chen† Pei-Hsin Wang† Shih-Chieh Chang† Da-Cheng Juan¶ Wei Wei¶ Jia-Yu Pan¶ †National Tsing-Hua University ¶Google Research {darius107062542, peihsin}@gapp.nthu.edu.tw scchang@cs.nthu.edu.tw {dacheng, wewei, jypan}@google.com + +# Abstract + +Despite recent success in neural task-oriented dialogue systems, developing such a real-world system involves accessing large-scale knowledge bases (KBs), which cannot be simply encoded by neural approaches, such as memory network mechanisms. To alleviate the above problem, we propose AirConcierge, an end-to-end trainable text-to-SQL guided framework to learn a neural agent that interacts with KBs using the generated SQL queries. Specifically, the neural agent first learns to ask and confirm the customer's intent during the multi-turn interactions, and then dynamically determines when to ground the user constraints into executable SQL queries so as to fetch relevant information from KBs. With the help of our method, the agent can use fewer but more accurate fetched results to generate useful responses efficiently, instead of incorporating the entire KBs. We evaluate the proposed method on the AirDialogue dataset, a large corpus released by Google, containing conversations of customers booking flight tickets with an agent. The experimental results show that AirConcierge significantly improves over previous work in terms of accuracy and the BLEU score, which demonstrates not only the ability to achieve the given task but also the good quality of the generated dialogues.
+ +# 1 Introduction + +The task-oriented dialogue system (Young et al., 2013) is one of the rapidly growing fields with many practical applications, attracting more and more research attention recently (Zhao and Eskenazi, 2016; Wen et al., 2016; Bordes et al., 2017; Dhingra et al., 2017; Eric and Manning, 2017; Liu and Lane, 2017). In order to assist users in solving a specific task while holding conversations with humans, the agent needs to understand the intentions of a user during the conversation and + +![](images/9a4c3db5ab579e83d5e3b8614681eb9b19e598c73a069341e45a992e175ad5e9.jpg) +Figure 1: An example of the task-oriented dialogue that incorporates a knowledge base (KB) from the AirDialogue dataset. The agent grounds the user constraints into an executable SQL query at the turn annotated in red. + +fulfill the request. Such a process often involves interacting with external KBs to access task-related information. Figure 1 shows an example of a task-oriented dialogue between a user and an airline ticket reservation agent. + +Traditional dialogue systems (Kim et al., 2008; Deoras and Sarikaya, 2013) may rely on predefined slot-filling pairs, where a set of slots needs to be filled during the conversation. In addition, some works (Sukhbaatar et al., 2015; Madotto et al., 2018; Wu et al., 2019) have considered integrating KBs in a task-oriented dialogue system to generate a suitable response and have achieved promising performance. However, these methods are either limited by predefined configurations or do not scale to large KBs. Since real-world KBs typically contain millions of records, end-to-end dialogue systems are not able to incorporate external KBs effectively, leading to unstable dialogue responses. + +Moreover, very little research has attempted to + +explore how to efficiently cooperate with KBs or to take resource consumption, such as FLOPs or memory space, into consideration when designing the model.
In order to solve the issues mentioned above, we propose AirConcierge, an SQL-guided task-oriented dialogue system that can efficiently work with real-world, large-scale KBs, by formulating SQL queries based on the context of the dialogue so as to retrieve relevant information from KBs. + +We evaluate and demonstrate AirConcierge on AirDialogue (Wei et al., 2018), a recently published large-scale airline reservation dataset. AirDialogue has high complexity in contexts, creating the opportunity and the necessity of forming diverse task-oriented conversations. Our experiments show that AirConcierge achieves improvements in accuracy and resource usage compared to previous work. + +# 2 Related Work + +# 2.1 Task-oriented Dialogue System + +Traditional task-oriented dialogue systems are usually accompanied by complex modular pipelines (Rudnicky et al., 1999; Zue, 2000; Zue et al., 2000). Each module is trained individually and then pipelined at test time, so errors from earlier modules may propagate to downstream modules. Therefore, several joint learning (Yang et al., 2017) and end-to-end reinforcement learning (RL) frameworks (Zhao and Eskenazi, 2016) have been proposed to jointly train the NLU and dialogue manager using specifically collected supervised labels or user utterances, so as to mitigate the above problems. Other end-to-end trainable dialogue systems (Wen et al., 2016; Li et al., 2017) have also been proposed and achieved strong performance using supervised learning or RL. Compared with purely end-to-end systems, however, intermediate labels are still added to these models to train the NLU and DST. + +Existing pipeline approaches to task-oriented dialogue systems still suffer from structural complexity and fragility. For example, the NLU typically detects dialogue domains by parsing user utterances, then classifying user intentions, and filling a set of slots to form domain-specific semantic frames.
These models may rely heavily on manual feature engineering, which makes them laborious and time-consuming to build and difficult to adapt to new domains. Therefore, more and more research (Manning and Eric, 2017; Sukhbaatar et al., 2015; Dodge et al., 2016; Serban et al., 2016; Bordes et al., 2017; Eric and Manning, 2017) has been dedicated to building end-to-end dialogue systems, in which all components are trained entirely from the utterances themselves without the need to assume a domain or dialogue state structure, making it easy to extend to new domains automatically and freeing the system from manually designed pipeline modules. For example, (Bordes et al., 2017) treated dialogue system learning as the problem of learning a mapping from dialogue histories to system responses. + +The common point of the pipeline and end-to-end methods is that they both need to acquire knowledge from the knowledge base to produce more contentful responses. For instance, (Eric and Manning, 2017) represent each entry as several key-value tuples and attend on each key to extract useful information from a KB in an end-to-end fashion; KB-InfoBot (Dhingra et al., 2017) directly models posterior distributions over KBs according to the user input and a prior distribution; and GLMP (Wu et al., 2019) uses a global-to-local memory network (Weston et al., 2014; Sukhbaatar et al., 2015) to encode KBs and query them in a continuous neural fashion. However, as KBs continue to grow in real-world scenarios, such end-to-end methods of directly encoding and integrating whole KBs will eventually result in inefficiency and incorrect responses. + +On the other hand, some works put the user utterances through a semantic parser to obtain executable logical forms and apply the resulting symbolic query to the KB to retrieve entries based on their attributes. A common practice for generating queries is to record the slot values that appeared in each dialogue turn.
For instance, (Lei et al., 2018) design text spans named belief spans to track dialogue beliefs and record informable and requestable slots, and then convert them into a query with human effort. Additionally, (Bordes et al., 2017) generate API calls from predefined candidates. Such pipeline methods can interact and cooperate with the knowledge base efficiently by issuing API calls such as SQL-like queries. However, such symbolic operations break the differentiability of the system and prevent end-to-end training of neural dialogue agents. + +In particular, it is unclear whether end-to-end models can completely replace pipeline methods and perform better than them in a task-oriented setting. In comparison, our end-to-end trainable text-to-SQL guided framework balances the strengths and the weaknesses of these two lines of research. We first introduce the natural-language-to-SQL concept into task-oriented systems, mapping dialogue histories and table schemas to a SQL query, and choose to rely on learned neural representations for implicit modeling of user intent and current state. Moreover, we provide more efficient labeling by generating a query only at the appropriate time, based on the current state representations, instead of recording slot values at every time step. By doing this, we need no predefined slot-value pairs or domain ontology; we simply input dialogue histories and table schemas and output synthesized SQL queries. Then we use a memory network to encode the results retrieved from the KBs. Thus, we can access KBs more efficiently and achieve a high task success rate. + +# 2.2 Semantic Parsing in SQL + +Another related line of research is text-to-SQL, a sub-task of semantic parsing that aims at synthesizing SQL queries from natural language. The widely adopted dataset is WikiSQL (Zhong et al., 2017).
The task goal is to generate a corresponding SQL query given a natural language question and a set of table schemas (Xu et al., 2018; Yu et al., 2018a; McCann et al., 2018; Hwang et al., 2019). Furthermore, cross-domain semantic parsing in text-to-SQL has been investigated (Yu et al., 2019b, 2018b, 2019a). In comparison, the SQL generator in our model is a task-oriented dialogue-to-SQL generator, which aims to help users accomplish a specific task and dynamically determines whether to ground the dialogue context into an executable SQL query. + +# 3 The Proposed Framework + +Our design of the AirConcierge system addresses the following challenges in developing an effective task-oriented dialogue system, including + +- When should the system access the KBs to obtain task-relevant information during a conversation? +- How does the system formulate a query that retrieves task-relevant data from the KBs? + +# 3.1 System Architecture of AirConcierge + +AirConcierge is a task-oriented dialogue system for flight reservations and therefore depends on
The hidden states of the Dialogue Encoder are next used by the Dialogue State Tracker to determine the phase of the dialogue (e.g., the greeting phase or the problem-solving phase). If the system determines that enough information about the user's request has been collected, the SQL Generator then generates a SQL query, according to the context of the dialogue so far, to retrieve information from the KBs. Next, the retrieved results are encoded and stored in a Memory Network. With the encoded dialogue and the memory readout, a context-aware Dialogue Decoder generates a corresponding response. In addition to the process described above, there is a Dialogue Goal Generator which predicts the final status of the full dialogue, given the entire conversation history, to measure the agent's performance.

# 3.2 Dialogue Encoder

We implement the Dialogue Encoder using an RNN with a gated recurrent unit (GRU) (Chung et al., 2014). Given a sequence of the conversation history $X = \{x_{1}, x_{2}, \dots, x_{t}\}$, a word embedding matrix $W_{emb}$ embeds each token $x_{t}$. A GRU then models the sequence of tokens by taking the embedded token $W_{emb}(x_{t-1})$ and the hidden state $h_{t-1}^{e}$ from time step $t - 1$ as inputs at the next time step $t$:

$$
h_{t}^{e} = GRU\left(W_{emb}\left(x_{t-1}\right), h_{t-1}^{e}\right) \tag{1}
$$

The whole dialogue history is encoded into the hidden states $H = (h_1^e, \ldots, h_T^e)$, where $T$ is the total number of time steps.

![](images/1753fe5b28df2379e053f0de5d5aee496a50e21fe810cd7ee27b54facef0110c.jpg)
Figure 2: An overview of the system architecture of AirConcierge.
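As a concrete (and deliberately tiny) illustration of the recurrence in Equation (1), the sketch below implements a standard GRU cell and unrolls it over a toy token sequence in plain Python; the dimensions, random weights, and vocabulary are hypothetical stand-ins, not the trained model:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy dimensions; the paper uses embedding and hidden sizes of 256.
D_EMB, D_HID = 4, 3

def rand_mat(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

# One set of weights per GRU gate: update (z), reset (r), candidate (n).
W = {g: rand_mat(D_HID, D_EMB) for g in "zrn"}
U = {g: rand_mat(D_HID, D_HID) for g in "zrn"}

def gru_cell(x_emb, h_prev):
    """Standard GRU update, i.e. h_t = GRU(W_emb(x_{t-1}), h_{t-1}) in Eq. (1)."""
    z = [sigmoid(a + b) for a, b in zip(matvec(W["z"], x_emb), matvec(U["z"], h_prev))]
    r = [sigmoid(a + b) for a, b in zip(matvec(W["r"], x_emb), matvec(U["r"], h_prev))]
    rh = [ri * hi for ri, hi in zip(r, h_prev)]
    n = [math.tanh(a + b) for a, b in zip(matvec(W["n"], x_emb), matvec(U["n"], rh))]
    return [(1 - zi) * hi + zi * ni for zi, hi, ni in zip(z, h_prev, n)]

# A hypothetical embedding table W_emb and a short token sequence.
vocab = {"hello": 0, "i": 1, "want": 2, "a": 3, "flight": 4}
W_emb = rand_mat(len(vocab), D_EMB)

def encode(tokens):
    """Return all hidden states H = (h_1, ..., h_T)."""
    h, states = [0.0] * D_HID, []
    for tok in tokens:
        h = gru_cell(W_emb[vocab[tok]], h)
        states.append(h)
    return states

H = encode(["hello", "i", "want", "a", "flight"])
print(len(H), len(H[-1]))  # T hidden states, each of size D_HID
```

The last state $h_T^e$ is what the downstream modules (state tracker, SQL generator, memory queries) consume.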
# 3.3 Dialogue State Tracker (Information Gate Module)

In order to determine whether a dialogue has reached a state where the system has received enough initial information about a user's need and transitioned from the "greeting state" into the "problem-solving state", we design a Dialogue State Tracker to model such a transition of states. This module is introduced by AirConcierge to determine when to retrieve and incorporate data from the KBs into the dialogue, so we also regard it as an "information gate". The Dialogue State Tracker takes the information about the schema of the KBs as an input to the model. Intuitively, by matching the information in the dialogue history with the available columns in the KBs, a better decision can be made about whether it is the right time to start querying the KBs. This module takes the last hidden state $h_T^e$ from the Dialogue Encoder and outputs a binary value $s \in \{0,1\}$ indicating whether the current information is sufficient to generate a query. Let $P(s)$ denote the probability that the agent would send a query:

$$
P\left(s \mid h_{T}^{e}, x_{1:J}^{col}\right) = \sigma\left(W_{2}^{s}\left(W_{1}^{s} h_{T}^{e} + \sum_{j=1}^{J} U_{2} W_{emb}\left(x_{j}^{col}\right)\right)\right), \tag{2}
$$

where $x_{1:J}^{col}$ denotes the tokens of the $J$ column names; $W_{emb}$ is the word embedding matrix as in Equation (1); $U_{2} \in \mathbb{R}^{d_{enc} \times d_{enc}}$ is a bidirectional LSTM; $W_{1}^{s}$ and $W_{2}^{s}$ are fully-connected layers of size $d_{enc} \times d_{enc}$; and $\sigma$ is the sigmoid function. Note that we denote $U_{2} W_{emb}(x_{1:J}^{col})$ as $h^{col}$ in Figure 2.

# 3.4 SQL Generator

In order to enable AirConcierge to handle large-scale KBs, we devise a SQL Generator and deploy it in AirConcierge.
If the state $s$ from the Dialogue State Tracker indicates the "problem-solving state", AirConcierge will activate the SQL Generator to generate a SQL query to access the KBs. A SQL query is of the form SELECT * FROM KBs WHERE $COL $OP $VALUE (AND $COL $OP $VALUE)*, where $COL is a column name. Here we focus on predicting the constraints in the WHERE clause.

To predict the column $COL, we follow the sequence-to-set idea from SQLNet (Xu et al., 2018). That is, given the encoded column names $\{h_j^{col}\}_{j = 1\dots J}$ and the last encoding of the dialogue history $h_T^e$, the model computes the probability $P_{col}(x_j^{col})$ of column $j$ appearing in the SQL query:

$$
P_{col}\left(x_{j}^{col} \mid h_{j}^{col}, h_{T}^{e}\right) = \sigma\left(W_{1}^{col} h_{j}^{col} + W_{2}^{col} h_{T}^{e}\right) \tag{3}
$$

The $OP slots are predicted using a similar architecture:

$$
P_{op}\left(x_{j}^{op} \mid h_{j}^{col}, h_{T}^{e}\right) = \sigma\left(W_{1}^{op} h_{j}^{col} + W_{2}^{op} h_{T}^{e}\right) \tag{4}
$$

As for predicting the $VALUE slot for a particular $COL, we model it as a classification problem. Let $v_{i}^{j}$ be the $i$-th value of the $j$-th column. The predicted probability of the value $v_{i}^{j}$ is:

$$
P_{value}\left(v_{i}^{j} \mid h_{j}^{col}, h_{T}^{e}\right) = \mathrm{Softmax}\left(W_{1}^{val}\left(W_{2}^{val} h_{T}^{e} + W_{3}^{val} h_{j}^{col}\right)\right) \tag{5}
$$

where all $W_{1,2}^{col}$, $W_{1,2}^{op}$, and $W_{1,2,3}^{val}$ are trainable matrices of size $d_{enc} \times d_{enc}$.

# 3.5 Knowledge Base Memory Encoder

We encode the data retrieved from the KBs with a memory network mechanism.
Unlike previous work (Wei et al., 2018), which applies a hierarchical RNN to encode the entire KBs directly, we only model the results retrieved from the KBs. Thanks to the SQL Generator module, which filters out most of the irrelevant data in the KBs, AirConcierge does not need to encode the entire KBs and can focus on the small set of relevant data records.

Let the data records of flights retrieved from the KBs be $\{f_1, \dots, f_F\}$, each flight containing 12 column attributes and one additional "flight number" column attribute. These records are converted into memory vectors $\{m_1, \dots, m_F\}$ using a set of trainable embedding matrices $C = \{C^1, \dots, C^{K + 1}\}$, where $C^k \in \mathbb{R}^{|V| \times d_{emb}}$ and $K$ is the number of hops. Note that we additionally add an empty flight vector $m_{empty}$ to represent the case where no flight in the KBs meets the customer's intent.

An initial query vector $q^0$ is defined to be the output of the dialogue encoder $h_T^e$. Then, the query vector is passed through a few "hops" where, at each hop $k$, an attention weight $p_i^k$ is computed between the query vector $q^k$ and each memory position:

$$
p_{i}^{k} = \mathrm{Softmax}\left(\left(q^{k}\right)^{T} c_{i}^{k}\right) \tag{6}
$$

where $c_i^k = B(C^k(f_i))$ is the embedding vector at the $i$-th memory position, and $B(\cdot)$ is a bag-of-words function. Here, $p_i^k$ decides which ticket has higher relevance to the customer's intent. Then, the memory readout $o^k$ is the sum over $c^{k + 1}$ weighted by $p^k$:

$$
o^{k} = \sum_{i = 1}^{F} p_{i}^{k} c_{i}^{k + 1} \tag{7}
$$

To continue to the next hop, the query vector is updated by $q^{k + 1} = q^k + o^k$.

We use the pointer $G = (g_{1}, \ldots, g_{F})$ to pick the most relevant ticket and filter out unimportant or unqualified tickets, where $K$ denotes the last hop.
$$
g_{i}^{K} = \mathrm{Softmax}\left(\left(q^{K}\right)^{\top} c_{i}^{K}\right) \tag{8}
$$

# 3.6 Dialogue Decoder

We adopt a GRU model as the Dialogue Decoder to generate the agent's response. At each time step, the Dialogue Decoder generates a token based on the encoded dialogue $h_T^e$ and the flight ticket information $g_i^K$, by calculating a probability over all tokens:

$$
\begin{array}{l} h_{t}^{d} = GRU\left(W_{emb}\left(\hat{y}_{t-1}\right), h_{t-1}^{d}\right), \\ P(\hat{y}_{t}) = \mathrm{Softmax}(W_{dec} h_{t}^{d}) \end{array} \tag{9}
$$

where $W_{dec} \in \mathbb{R}^{d_{enc} \times |V|}$ is a trainable matrix, $h_0^d$ is initialized as a concatenation of $q^K$ and $h_T^e$, and $\hat{y}_t$ is the output token at time step $t$.

# 3.7 Dialogue Goal Generator

As stated in AirDialogue (Wei et al., 2018), three final dialogue goals $s_n, s_f, s_a$ are generated by the agent to examine the correctness at the end of a conversation. $s_n$ represents the name of the customer. The flight state $s_f$ is the flight number selected from the $F$ flights in the KBs. The action $s_a$ accomplished at the end of a dialogue can be one of the following five choices: "booked", "changed", "no flight found", "no reservation", and "cancel". We feed $h_T^e$ into three fully-connected layers, $W_i^{goal}$, to predict the three goals ($i \in \{\mathrm{n}, \mathrm{f}, \mathrm{a}\}$), respectively:

$$
P(s_{i}) = W_{i}^{goal} h_{T}^{e}. \tag{10}
$$

# 3.8 Objective Function

In order to train the dialogue system in an end-to-end fashion, loss functions are defined for the above modules. The loss for the Dialogue State Tracker, $\mathcal{L}_{gate}$, is the binary cross entropy (BCE). The loss for the SQL Generator consists of three parts: $\mathcal{L}_{SQL} = \mathcal{L}_{col} + \mathcal{L}_{op} + \mathcal{L}_{value}$.
The loss for the $COL slots, $\mathcal{L}_{col}$, is the BCE, and the loss for both the $OP and $VALUE slots is the cross entropy (CE). For the KB Memory Encoder, we use the CE loss: $\mathcal{L}_{mem} = -\sum_{i=1}^{N} \sum_{j=1}^{F} y_{ij} \cdot \log(g_{ij}^{K})$, where $g_{ij}^{K}$ is the pointer, $N$ is the number of samples, and $F$ is the number of flights retrieved from the KBs. For the Dialogue Goal Generator, the CE loss is used for all three goals, that is, $\mathcal{L}_{goal} = \mathcal{L}_{name} + \mathcal{L}_{flight} + \mathcal{L}_{action}$.

The overall loss function is the sum of the losses of all modules:

$$
\mathcal{L} = \mathcal{L}_{gate} + \mathcal{L}_{SQL} + \mathcal{L}_{mem} + \mathcal{L}_{goal} \tag{11}
$$

# 4 Experiments

# 4.1 Dataset

AirDialogue Dataset We evaluate the proposed framework on the AirDialogue dataset, a large-scale task-oriented dialogue dataset released by Google. The dataset contains 402,038 conversations, with an average length of 115. For data pre-processing, we follow the steps in the original paper (Wei et al., 2018) and their official code.

Labels for State Tracker Since the original AirDialogue dataset lacks labels for learning the Dialogue State Tracker, we devise a method to annotate each dialogue turn with a "ground-truth" state label. We define two dialogue states: at the beginning of a dialogue, while the customer expresses travel constraints and the agent asks for information, the dialogue is in the "greeting state". Once the agent has received adequate information from the user and decides to send a query, the dialogue enters the "problem-solving state" and remains in this state afterward.

We use a rule-based model to annotate the states. For most dialogues, the first turn of the "problem-solving state" is the one where the flight number is mentioned.
With this observation, we label the turn where the flight number first occurs as the starting point of the "problem-solving state". As for dialogues that either issue multiple SQL queries or never mention a flight number, we apply a set of keywords to mark the "problem-solving state".

Labels for SQL Generator In the original AirDialogue dataset, each dialogue is accompanied by an intent indicating the customer's travel constraints. We construct the "ground-truth query" for each dialogue based on the user's intent.

# 4.2 Training Details

We conduct experiments using one 2080 Ti GPU and the PyTorch (Paszke et al., 2017) environment. We use Adam (Kingma and Ba, 2015) to optimize the model parameters with a learning rate of $1e^{-3}$ and a batch size of 32. The word embedding size and the GRU hidden dimension are 256. The number of hops $K$ of the memory encoder is set to 3. For the Dialogue Decoder, a greedy decoding strategy is used instead of beam search. The accelerated training technique used in Wei et al. (2018) is also adopted in our model. The models are trained for 5 epochs, roughly equal to 44,000 steps.

# 4.3 Evaluation

There are two important perspectives for evaluating the model: the quality of the dialogue and the correctness of the exact information. To properly evaluate both, we use the BLEU score to evaluate the dialogues and use accuracy to evaluate the dialogue goals and SQL queries. While providing a human-like interaction with the customers is important, it is even more critical to guarantee that all

![](images/12527363bd03abf3e85be85a53e018ad16ba44649657cb827609b3f8f88bb07f.jpg)
Figure 3: Inference time under different numbers of KB records on the AirDialogue dev set. "1x." denotes 30 records in the KBs, "10x." is 300 records, and so on.

![](images/1034d375b1de91f14d567773fcb2c91b8d0aa72f999c183c0b404fed66f41a21.jpg)
Figure 4: Memory consumption under different amounts of KB data on the AirDialogue dev set. "1x."
denotes 30 records in the KBs, "10x." is 300 records, and so on.

of the provided information is correct.

For example, the agent might reply "We have found a flight number 1011 which meets your need. Should I book it?". If the actual correct flight number is 1012, this sentence may still get a high BLEU score while the provided information is misleading. Such an error further reveals the importance of the accuracy of the Dialogue Goal Generator.

As for the correctness of the provided information, we evaluate the performance by SQL accuracy and state accuracy. The SQL accuracy is critical in filtering and accessing data from the KBs.

User simulator For self-play evaluation, we build a simulator to model a user's utterances. The simulator generates a response based on three things: a list of travel constraints, the user's intent ({"book", "change", "cancel"}), and the dialogue history. Similar to the previous work, we adopt a
| Model | Name Acc. | Flight Acc. | State Acc. | BLEU |
| --- | --- | --- | --- | --- |
| Supervised (2018) (AirDialogue dev) | 0.9% | 1.2% | 12% | 23.26 |
| RL (2018) (AirDialogue dev) | 1% | 4% | 29% | 19.65 |
| AirConcierge (AirDialogue dev) | 100% | 72.2% | 90.0% | 32.59 |
| Supervised (2018) (Synthesized dev) | 0% | 8% | 32% | 68.72 |
| RL (2018) (Synthesized dev) | 0% | 35% | 39% | 62.71 |
| AirConcierge (Synthesized dev) | 100% | 58.9% | 86.0% | 73.51 |
| Human (AirDialogue test) | 98% | 91.4% | 91.8% | - |
Table 1: Dialogue performance under self-play evaluation. The agent model is the model in the first column, while the customer is the user simulator described in Section 4.3. The supervised model and the reinforcement learning (RL) model are the baseline models reported in the original AirDialogue paper.

sequence-to-sequence model to build the simulator.

SQL evaluation We use logical-form accuracy ($Acc_{lf}$) and execution accuracy ($Acc_{ex}$) (Zhong et al., 2017) to measure the SQL quality. For $Acc_{lf}$, we directly compare the generated SQL query with the ground truth to check whether they match. For $Acc_{ex}$, we execute both the generated query and the ground truth and compare whether the retrieved results match. We also evaluate the accuracy of the three components ($COL, $OP, and $VALUE) of a WHERE condition: $Acc_{col}$, $Acc_{op}$, and $Acc_{val}$, respectively. For each dialogue, we evaluate only the SQL query at the turn when the "problem-solving state" first occurs.

# 4.4 Experimental Results: Accuracy

In Table 1, we compare the performance of AirConcierge with the baselines in the AirDialogue paper. On generating responses that match the ground-truth dialogue lines, AirConcierge improves the BLEU score by 9.33 and 4.79 on the dev set and the synthesized set, respectively. In the self-play evaluation, AirConcierge achieves significant improvements on NameAcc, FlightAcc, and ActionAcc. We attribute the high accuracy to the correctness of the SQL queries, since the data retrieved from the KBs is correctly filtered and thus helps the agent make better predictions.

Besides the model's overall performance in accomplishing a user's task, we are interested in the accuracy of the SQL queries generated by AirConcierge based on the dialogue context.
In this evaluation, we consider two cases: the accuracy on the 6 essential attributes (departure airport, return airport, departure month, return month, departure day, and return day), and the accuracy on all 12 attributes. The 6 essential attributes are the ones that are essential for identifying a ticket and therefore appear in nearly all dialogue samples.

Table 2 shows the model's accuracy in generating SQL queries. The model achieves outstanding accuracy in predicting the column-name slots, the operator slots, and the value slots. The metric $Acc_{lf}$ evaluates whether two queries are exactly the same, so its value is typically smaller than $Acc_{col}$, $Acc_{op}$, or $Acc_{val}$, especially when more conditions are considered. This can be observed in the table, where the accuracy $Acc_{lf}$ under 12 conditions is much smaller than that under only the 6 essential conditions.

Furthermore, we break down the performance of the overall SQL queries into each $VALUE slot, with the results presented in Table 3. AirConcierge achieves high accuracy in predicting the values of the 6 essential conditions, but performs less well on the other 6 conditions (departure time, return time, class, price, connections, and airline). This may be because the 6 essential conditions are provided in nearly all dialogues, while the other conditions are only provided from time to time. Having less data about the other conditions makes it harder for the model to learn them.

# 4.5 Experimental Results: Scalability

An important contribution of AirConcierge is the efficiency in cooperating with the KBs. By employing the SQL Generator, AirConcierge gains the ability to handle large-scale KBs. In Figure 3, we show the model's inference time with respect to the number of data records in the KBs. The "1x." on the x-axis corresponds to having 30 data records in the KBs, "10x." corresponds to 300 entries, and so on. As shown in the
| Experiment | $Acc_{col}$ | $Acc_{op}$ | $Acc_{val}$ | $Acc_{lf}$ | $Acc_{ex}$ |
| --- | --- | --- | --- | --- | --- |
| AirConcierge† | 98.96% | 99.7% | 97.9% | 95.54% | 96.44% |
| AirConcierge‡ | 97.24% | 98.6% | 61.4% | 28.11% | 86.28% |
Table 2: Performance on the AirDialogue dataset. $\dagger$ indicates considering only 6 conditions, namely departure city, return city, departure month, return month, departure day, and return day. $\ddagger$ means considering all 12 conditions. The models of $\dagger$ and $\ddagger$ are the same. We report the average accuracy.
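To make the gap between $Acc_{lf}$ and $Acc_{ex}$ in Table 2 concrete, the sketch below contrasts the two metrics (Zhong et al., 2017) on a toy KB; the KB rows, condition format, and operator set are invented for illustration:

```python
# Toy KB and the two SQL metrics: Acc_lf compares the queries themselves,
# while Acc_ex compares the rows each query actually retrieves.
KB = [
    {"flight_number": 1020, "dep_city": "AUS", "ret_city": "DTW", "price": 200},
    {"flight_number": 1021, "dep_city": "AUS", "ret_city": "DTW", "price": 900},
    {"flight_number": 1026, "dep_city": "DCA", "ret_city": "MSP", "price": 200},
]

def execute(conds):
    """Run a WHERE clause, given as [(col, op, value), ...], over the toy KB."""
    ops = {"=": lambda a, b: a == b, "<=": lambda a, b: a <= b}
    return sorted(
        r["flight_number"] for r in KB
        if all(ops[op](r[col], val) for col, op, val in conds)
    )

def acc_lf(pred, gold):
    # Exact logical-form match (condition order normalized).
    return sorted(pred) == sorted(gold)

def acc_ex(pred, gold):
    # Execution match: different forms may still retrieve the same rows.
    return execute(pred) == execute(gold)

gold = [("dep_city", "=", "AUS"), ("price", "<=", 500)]
pred = [("dep_city", "=", "AUS"), ("price", "<=", 200)]  # differing value slot

print(acc_lf(pred, gold), acc_ex(pred, gold))  # logical form fails, execution matches
```

Here the predicted query differs from the gold query in a value slot, so the logical-form check fails, yet both queries retrieve the same single flight, so the execution check passes; this is why $Acc_{ex}$ is typically higher than $Acc_{lf}$.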
| Experiment | dep. city | ret. city | dep. month | ret. month | dep. day | ret. day |
| --- | --- | --- | --- | --- | --- | --- |
| AirConcierge | 98.89% | 97.93% | 97.52% | 97.49% | 97.27% | 97.29% |

| Experiment | dep. time | ret. time | class | price | connections | airline |
| --- | --- | --- | --- | --- | --- | --- |
| AirConcierge | 49.60% | 52.46% | 42.74% | 37.60% | 95.36% | 42.12% |
Table 3: Performance of each $VALUE slot to be generated in the query.

figure, the inference time of AirConcierge remains short as the KBs grow larger. On the contrary, the baseline model, AirDialogue, requires noticeably more inference time: when the KBs are 70 times larger, AirDialogue takes 5 times longer to complete the dialogue. We also compare the memory consumption of AirConcierge with that of AirDialogue. Figure 4 shows that AirConcierge consumes a constant amount of memory regardless of the KB size, while AirDialogue requires more memory as the KB size grows. This indicates that AirConcierge is scalable from the aspect of memory consumption as well.

We inflate the size of the KBs by augmenting additional data records. To generate a variant data record, we choose an existing ground-truth record and modify the values of some of its columns. Each modified column value is sampled from a prior distribution defined for that column. We experiment with different numbers of columns to modify. For an augmentation where the last $i$ columns are subject to variations, we denote the augmentation as "#Augment-column-$i$".

Intuitively, the more columns are subject to variations, the more diverse the records are. Therefore, fewer records will match the query when more columns are subject to variations. This is shown in Figure 5. When more records are added to the KBs, for an augmentation that has more variant columns (e.g., #Augment-column-10), the number of records returned for a SQL query grows more slowly than it does for an augmentation with fewer variant columns (e.g., #Augment-column-6). This also illustrates the importance of having a high-quality SQL Generator, since gener

![](images/3967fcca73cf3e82f4790968096bcb2ddc639503b386a87f43edfe1bb8247225.jpg)
Figure 5: Number of returned data from different augment types of KBs using SQL queries generated by our model.
ating precise SQL queries can effectively cut down the data records to be considered.

# 5 Conclusions

We propose AirConcierge, a task-oriented dialogue system that achieves high accuracy on users' tasks. By employing a subsystem consisting of a Dialogue State Tracker and a SQL Generator, AirConcierge can issue a precise SQL query at the right time during a dialogue and retrieve relevant data from the KBs. As a result, AirConcierge handles large-scale KBs efficiently, in terms of both shorter processing time and lower memory consumption. Using precise SQL queries also filters out noise and irrelevant data from the KBs, which improves the quality of the dialogue responses. Our experiments demonstrate the superior performance and efficiency of AirConcierge over previous work.

# References

Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. In ICLR.
Junyoung Chung, Caglar Gülcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. ArXiv, abs/1412.3555.
Anoop Deoras and Ruhi Sarikaya. 2013. Deep belief network based semantic taggers for spoken language understanding. In INTERSPEECH.
Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, and Li Deng. 2017. Towards end-to-end reinforcement learning of dialogue agents for information access. In ACL.
Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander H. Miller, Arthur Szlam, and Jason Weston. 2016. Evaluating prerequisite qualities for learning end-to-end dialog systems. CoRR, abs/1511.06931.
Mihail Eric and Christopher D. Manning. 2017. Key-value retrieval networks for task-oriented dialogue. In SIGDIAL.
Wonseok Hwang, Jinyeung Yim, Seunghyun Park, and Minjoon Seo. 2019. A comprehensive exploration on WikiSQL with table-aware word contextualization. arXiv preprint arXiv:1902.01069.
Kyungduk Kim, Cheongjae Lee, Sangkeun Jung, and Gary Geunbae Lee. 2008. A frame-based probabilistic framework for spoken dialog management using dialog examples. In SIGDIAL Workshop.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.
Wenqiang Lei, Xisen Jin, Zhaochun Ren, Xiangnan He, Min-Yen Kan, and Dawei Yin. 2018. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures. In ACL.
Xiujun Li, Yun-Nung Chen, Lihong Li, Jianfeng Gao, and Asli Çelikyilmaz. 2017. End-to-end task-completion neural dialogue systems. ArXiv, abs/1703.01008.
Bing Liu and Ian Lane. 2017. An end-to-end trainable neural network model with belief tracking for task-oriented dialog. ArXiv, abs/1708.05956.
Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2Seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. ArXiv, abs/1804.08217.
Mihail Eric and Christopher D. Manning. 2017. A copy-augmented sequence-to-sequence architecture gives good performance on task-oriented dialogue. In EACL.

Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730.
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In NIPS-W.
Alexander I. Rudnicky, Eric H. Thayer, Paul C. Constantinides, Chris Tchou, R. Shern, Kevin A. Lenzo, Weiyang Xu, and Alice H. Oh. 1999. Creating natural dialogs in the Carnegie Mellon Communicator system. In EUROSPEECH.
Iulian Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI.
Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015.
End-to-end memory networks. In NIPS.
Wei Wei, Quoc V. Le, Andrew M. Dai, and Jia Li. 2018. AirDialogue: An environment for goal-oriented dialogue research. In EMNLP.
Tsung-Hsien Wen, David Vandyke, Lina Maria Rojas-Barahona, Milica Gasic, Nikola Mrksic, Pei-hao Su, Stefan Ultes, and Steve J. Young. 2016. A network-based end-to-end trainable task-oriented dialogue system. In EACL.
Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. arXiv:1410.3916.
Chien-Sheng Wu, Richard Socher, and Caiming Xiong. 2019. Global-to-local memory pointer networks for task-oriented dialogue. ArXiv, abs/1901.04713.
Xiaojun Xu, Chang Liu, and Dawn Song. 2018. SQLNet: Generating structured queries from natural language without reinforcement learning. In ICLR.
Xuesong Yang, Yun-Nung Chen, Dilek Z. Hakkani-Tür, Paul Crook, Xiujun Li, Jianfeng Gao, and Li Deng. 2017. End-to-end joint learning of natural language understanding and dialogue manager. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5690-5694.
Steve J. Young, Milica Gasic, Blaise Thomson, and Jason D. Williams. 2013. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101:1160-1179.
Tao Yu, Zifan Li, Zilin Zhang, Rui Zhang, and Dragomir Radev. 2018a. TypeSQL: Knowledge-based type-aware neural text-to-SQL generation. In NAACL.

Tao Yu, Rui Zhang, He Yang Er, Suyi Li, Eric Xue, Bo Pang, Xi Victoria Lin, Yi Chern Tan, Tianze Shi, Zihan Li, Youxuan Jiang, Michihiro Yasunaga, Sungrok Shim, Tao Chen, Alexander R. Fabbri, Zifan Li, Luyao Chen, Yuwen Zhang, Shreya Dixit, Vincent Zhang, Caiming Xiong, Richard Socher, Walter S. Lasecki, and Dragomir R. Radev. 2019a. CoSQL: A conversational text-to-SQL challenge towards cross-domain natural language interfaces to databases. In EMNLP-IJCNLP.
Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir R. Radev. 2018b. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In EMNLP.

Tao Yu, Rui Zhang, Michihiro Yasunaga, Yi Chern Tan, Xi Victoria Lin, Suyi Li, Heyang Er, Irene Li, Bo Pang, Tao Chen, Emily Ji, Shreya Dixit, David N Proctor, Sungrok Shim, Jonathan Kraft, Vincent Zhang, Caiming Xiong, Richard Socher, and Dragomir R. Radev. 2019b. SParC: Cross-domain semantic parsing in context. In ACL.

Tiancheng Zhao and Maxine Eskenazi. 2016. Towards end-to-end learning for dialog state tracking and management using deep reinforcement learning. In SIGDIAL Conference.

Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2SQL: Generating structured queries from natural language using reinforcement learning. ArXiv, abs/1709.00103.

Victor Zue. 2000. Conversational interfaces: advances and challenges. Proceedings of the IEEE, 88:1166-1180.

Victor Zue, Stephanie Seneff, James R. Glass, Joseph Polifroni, Christine Pao, Timothy J. Hazen, and I. Lee Hetherington. 2000. Jupiter: A telephone-based conversational interface for weather information. IEEE Trans. Speech Audio Process., 8:85-96.

# A Appendices

# A.1 Data Statistics

Each data record in the KBs is generated using the prior distributions defined in Table 4. In Section 4.5, we conduct experiments under different scales of the KBs, where the newly augmented records are generated according to these prior distributions. The original AirDialogue dataset contains 30 records in the KBs, and we augment the KBs to "10x.", "50x.", and "70x.". That is, for the "10x." KBs we additionally add 270 records sampled according to the prior distributions, and similarly for the "50x." and "70x." KBs.
# A.2 Qualitative Analysis

We provide samples of dialogues generated by our agent and the user simulator under the self-play evaluation. The user simulator has a pre-defined intent that is one of "book", "change", and "cancel", as well as a list of travel constraints. On the other hand, a response provided by the agent may result in one of five actions: "booked", "changed", "cancelled", "no flight found", and "no reservation". The user intent "book" can lead to the agent action "booked" or "no flight found", while both "change" and "cancel" may lead to "no reservation". When the user intent "change" is successfully achieved, it results in the agent action "changed". Similarly, "cancel" can lead to "cancelled".

We show several samples according to the agent's action. First, Table 5 shows two samples of the agent action "booked". We see that the user tends to provide the destination and return airport codes spontaneously, followed by the agent asking for the travel dates. After a ticket is found, the agent informs the user about the flight details, which is human-like behaviour. Finally, the ticket is confirmed by the user, and both the user and the agent end the dialogue by thanking each other.

Table 6 shows the samples for the action "changed". At the beginning, the user and the agent greet each other. Then, the user not only expresses the intent to change the flight, but also gives a reason for the change. We see that the agent learns to judge whether the user has provided his/her name. In the first (upper) sample, the user mentions his/her name right after greeting, and hence the agent proceeds to check the KBs. However, in the second (lower) sample, the agent identifies that the user has not given a name yet, so the agent asks for the name before querying the KBs.

For the action "cancelled", samples are provided in Table 7. We observe patterns similar to the action "changed".
The user first describes the need to cancel the ticket, followed by the agent asking for the name if necessary. Lastly, the agent finds the ticket and confirms the cancellation with the user.

Table 8 provides the samples of the action "no flight found". Similar to the samples of "booked", the user describes the travel constraints and asks to book a ticket. The difference is that the agent cannot find a matching flight, and thus responds that no flight is available. One thing special is that
| feature | dep./ret. city | dep./ret. month | dep./ret. day | dep./ret. time |
| --- | --- | --- | --- | --- |
| range | categorical | 1-12 | 1-31 | 00-23 |
| prob. | uniform | uniform | uniform | uniform |

| feature | class | price | connections | airline |
| --- | --- | --- | --- | --- |
| range | business, economy | 0-5000 | 0, 1, 2 | categorical |
| prob. | economy (7%), business (3%), any (90%) | ≤200 (25%), ≤500 (25%), ≤1000 (25%), any (25%) | 0 (7%), 1 (90%), any (3%) | standard fare (UA, Delta, AA, Hawaiian) (5%), any (95%) |
+ +Table 4: Flight features of the AirDialogue dataset. + +
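The augmentation procedure of Section 4.5 and Appendix A.1, i.e. drawing new KB records column by column from the priors in Table 4, can be sketched as follows; the helper names and the simplified column set are illustrative assumptions, not the authors' exact sampler:

```python
import random

random.seed(7)

# Hypothetical per-column priors following Table 4 (a subset of columns,
# for illustration only).
def sample_price():
    # Pick a price bucket, then a value inside it.
    bucket = random.choices(["<=200", "<=500", "<=1000", "any"],
                            weights=[25, 25, 25, 25])[0]
    hi = {"<=200": 200, "<=500": 500, "<=1000": 1000, "any": 5000}[bucket]
    return random.randint(0, hi)

def sample_record():
    return {
        "dep_month": random.randint(1, 12),     # uniform over 1-12
        "dep_day": random.randint(1, 31),       # uniform over 1-31
        "dep_time": random.randint(0, 23),      # uniform over 00-23
        "class": random.choices(["economy", "business", "any"],
                                weights=[7, 3, 90])[0],
        "connections": random.choices([0, 1, "any"], weights=[7, 90, 3])[0],
        "price": sample_price(),
    }

def augment(base_records, factor):
    """Grow the KBs to `factor`x the original size, as in the 10x/50x/70x setups."""
    extra = [sample_record() for _ in range(len(base_records) * (factor - 1))]
    return base_records + extra

base = [sample_record() for _ in range(30)]   # the original KBs hold 30 records
print(len(augment(base, 10)))                 # "10x." corresponds to 300 records
```

Sampling each varied column independently from its prior is what makes the augmented KBs diverse, which in turn drives the #Augment-column-$i$ comparison in Figure 5.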
| Samples of dialogues with state "booked". | |
| --- | --- |
| Usr | Hello. |
| Agt | Hello. How can I help you? |
| Usr | I am Dennis Carter. I am planning to visit Detroit, can you please book me a ticket from AUS to DTW? |
| Agt | Sure, I will help you to book a flight ticket. Can you please share me your travelling dates? |
| Usr | My preferable journey dates are from 03/19 to 03/21. |
| Agt | Ok, give me a moment. |
| Usr | Sure, take your own time. |
| Agt | Here, I found a flight for you with single connection. flight number is <fl_1020>. |
| Usr | Please book my ticket with this airlines. |
| Agt | Your ticket is booked. |
| Usr | Thank you for booking my ticket. |
| Agt | Welcome. |
| Usr | Hello. |
| Agt | Hello, how may I help you? |
| Usr | I am Andrew Young. I want to book a flight ticket from DCA to MSP. Could you help me in booking? |
| Agt | Sure, may I know your trip dates please? |
| Usr | My travelling dates are from May 10 - 12. |
| Agt | Do you have any other requirements? |
| Usr | I need a connecting flight. |
| Agt | Ok, please wait. Ok, please wait. |
| Usr | Sure. |
| Agt | Thank you for waiting and we have found a flight in economy class with 1 connection in Spirit airlines and the fare of your flight will be 200. Do you like me to proceed your booking with this flight? flight number is <fl_1026>. |
| Usr | Please proceed. |
| Agt | Ok, your ticket is booked. |
| Usr | Thank you. |
| Agt | Most welcome. |
+ +Table 5: Samples of dialogues with state "booked". + +
| Speaker | Utterance |
| --- | --- |
| Usr | Hello . |
| Agt | Hello . How can I help you ? |
| Usr | I am Jeffrey Miller . I want to change my existing reservation due to postpone of my exams . Can you please help me with this ? |
| Agt | Sure , please wait for a while . |
| Usr | sure . |
| Agt | I found your reservation , can you please share your travelling dates ? |
| Usr | My travelling dates are July 28 and July 30 . |
| Agt | Can I know your airport codes ? |
| Usr | My airport codes are PHL and MSP . |
| Agt | Do you have any other specifications ? |
| Usr | No . |
| Agt | Ok , please wait for a while . |
| Usr | Sure . |
| Agt | There is a flight with single connection in UA airlines and the ticket fare is 200 only . Can I book the ticket for you ? flight number is <fl_1025> . |
| Usr | You can proceed . |
| Agt | Your ticket is confirmed in economy class . |
| Usr | Thank you . |
| Agt | Thank you for choosing us . |
| Usr | Hello . |
| Agt | Hello . How can I help you ? |
| Usr | I want to change my existing reservation due to postpone of my exams . Can you please help me with this ? |
| Agt | Sure , I will help you to change your reservation . |
| Usr | Thank you . |
| Agt | May I know your name please ? |
| Usr | My name is David Martinez . |
| Agt | Ok , please wait a moment . |
| Usr | Sure . |
| Agt | Thank you for waiting , I have found your reservation and cancelled it successfully . |
| Usr | Thank you for your help . |
| Agt | Thank you for reaching us . |
+ +Table 6: Samples of dialogue with state "changed". + +
| Speaker | Utterance |
| --- | --- |
| Usr | Hello . |
| Agt | Hello . How can I help you ? |
| Usr | I am Amy White . I want to cancel my recent reservation due to sudden cancellation of my trip . Can you help me ? |
| Agt | Sure , I will help you to cancel your reservation . |
| Usr | Thank you . |
| Agt | Happy to inform you that we found a reservation on your name . Shall I cancel it ? |
| Usr | Yes , please cancel it . |
| Agt | Your ticket has been cancelled . |
| Usr | Thank you for your help . |
| Agt | Welcome . |
| Usr | Hello . |
| Agt | Hello . How can I help you ? |
| Usr | I am Jacob Clark . I want to cancel my recent reservation due to sudden cancellation of my trip . Can you help me ? |
| Agt | Sure , I will help you to cancel your reservation . |
| Usr | Thank you . |
| Agt | Please give me a moment . |
| Usr | Sure . |
| Agt | I found your reservation and I have successfully cancelled your reservation . |
| Usr | Thank you for your help . |
| Agt | Welcome . |
+ +Table 7: Samples of dialogue with state "cancelled". + +
| Speaker | Utterance |
| --- | --- |
| Usr | Hello . |
| Agt | Hello , how may I help you ? |
| Usr | I am Paul Phillips . I am planning to attend music concert at CLT , can you please help me with booking a flight ticket from BOS ? |
| Agt | Sure , may I know your trip dates please ? |
| Usr | Sure , I would like to take off on Mar 10 and return on Mar 12 . |
| Agt | Do you have any other requirements ? |
| Usr | Yes , I need a connecting flight . |
| Agt | Please wait . |
| Usr | Sure , take your time . |
| Agt | Sorry , there are no flights available on your preferred dates . |
| Usr | Ok , no issues . Thank you for checking . |
| Agt | Thank you for choosing us . |
| Usr | Hello . |
| Agt | Hello , how may I help you ? |
| Usr | I am Robert Lee . I am planning to visit Atlanta , will you help me in booking the flight ticket from PHL to ATL ? |
| Agt | Sure , can I know your travelling dates ? |
| Usr | My travelling dates are 10/03 and 10/05 . |
| Agt | Sorry , there is no flight available on your route . |
| Usr | That ’s ok , thank you for checking . |
| Agt | Most welcome . |
+ +Table 8: Samples of dialogue with state "no flight found". + +
| Speaker | Utterance |
| --- | --- |
| Usr | Hello . |
| Agt | Hello . How can I help you ? |
| Usr | I am Steven Allen . I want to change my existing reservation due to some health issues , can you please help me with that ? |
| Agt | Sure , I will help you to change your reservation . |
| Usr | Thank you . |
| Agt | Please give me a moment . |
| Usr | Sure . |
| Agt | Sorry , there is no reservation found on your name . |
| Usr | Ok , no problem . Thank you for your information . |
| Agt | Welcome . |
| Usr | Hello . |
| Agt | Hello , how may I help you ? |
| Usr | I am Karen Gonzalez . I want to cancel my recent reservation due to sudden cancellation of my trip . Can you help me ? |
| Agt | Sure , please wait for a moment . |
| Usr | Ok . |
| Agt | Sorry , there is no reservation found on your name . |
| Usr | No problem , thank you for the information . |
| Agt | Thank you for reaching us . |
+ +Table 9: Samples of dialogue with state "no reservation". + +the agent responds that no matching flight is found, along with a reason. For instance, the agent in the upper sample mentions that no matching flight was found because of the mismatched dates. + +For "no reservation", Table 9 shows the corresponding samples, where the upper sample is with the user intent "change" and the lower sample is with the intent "cancel". We see patterns similar to the samples of "changed" and "cancelled". At the beginning, the user states the intent to change, or cancel, the ticket with some reason. The agent asks for the name if needed, and confirms the change, or cancellation, with the user. \ No newline at end of file diff --git a/airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/images.zip b/airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7cb90e5b5ba43bcf7ab94947818d4bacd08ff869 --- /dev/null +++ b/airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:55d9daa1fc92bc79fa3cebf708acaeae32e8d15b15919ae4a7cfaed982af2bb2 +size 907854 diff --git a/airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/layout.json b/airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ad58d00a3af17e8c5650fd9641eabd1db1b57c6b --- /dev/null +++ b/airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8a07d7c7804814a241d920703c48df03750215076c82ff6cd68d4f74309852d2 +size 388940 diff --git
a/anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/e742f1db-5949-48df-99bd-dcbcbc58dffb_content_list.json b/anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/e742f1db-5949-48df-99bd-dcbcbc58dffb_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..3a671d8183de94ef4bb55dfc39339c029bdbea4c --- /dev/null +++ b/anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/e742f1db-5949-48df-99bd-dcbcbc58dffb_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:173540ff5eb9e081f4388a716310306bbf3c7ac3852d432487bcef9d98dfbf16 +size 66841 diff --git a/anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/e742f1db-5949-48df-99bd-dcbcbc58dffb_model.json b/anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/e742f1db-5949-48df-99bd-dcbcbc58dffb_model.json new file mode 100644 index 0000000000000000000000000000000000000000..3eb8eb6b86b490f28a247738efb9faedab8a62f8 --- /dev/null +++ b/anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/e742f1db-5949-48df-99bd-dcbcbc58dffb_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b1448bd5f3484447f145cf56e1d5c7622686006cabf0fdf402fa5a9c3da41eee +size 83607 diff --git a/anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/e742f1db-5949-48df-99bd-dcbcbc58dffb_origin.pdf b/anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/e742f1db-5949-48df-99bd-dcbcbc58dffb_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3ce538093f95eb4d124d3e4f60faf1fbdc5fba3e --- /dev/null +++ b/anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/e742f1db-5949-48df-99bd-dcbcbc58dffb_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b932144571eaec0afebd8d366268fd31941ca0e9151838870a5f13d3b568accb +size 1365948 diff --git 
a/anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/full.md b/anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a41c8f253d566bf4f4e9ec65485e39b3321d5d81 --- /dev/null +++ b/anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/full.md @@ -0,0 +1,341 @@ +# An Attentive Recurrent Model for Incremental Prediction of Sentence-final Verbs + +Wenyan Li + +Comcast AI Research Lab + +wenyan19562@gmail.com + +Alvin Grissom II + +Haverford College + +agrissom@haverford.edu + +Jordan Boyd-Graber + +University of Maryland + +jbg@umiacs.umd.edu + +# Abstract + +Verb prediction is important for understanding human processing of verb-final languages, with practical applications to real-time simultaneous interpretation from verb-final to verb-medial languages. While previous approaches use classical statistical models, we introduce an attention-based neural model to incrementally predict final verbs from incomplete Japanese and German SOV sentences. To offer flexibility to the model, we further incorporate synonym awareness. Our approach both better predicts the final verbs in Japanese and German and provides more interpretable explanations of why those verbs are selected. + +# 1 Introduction + +Final verb prediction is fundamental to human language processing in languages with subject-object-verb (SOV) word order, such as German and Japanese (Kamide et al., 2003; Momma et al., 2014; Chow et al., 2018), particularly for simultaneous interpretation, where an interpreter generates a translation in real time. Instead of waiting until the entire sentence is completed, simultaneous interpretation requires translating the source text units while the interlocutor is speaking.
+ +When human simultaneous interpreters translate from an SOV language to an SVO one incrementally—without waiting for the final verb at the end of a sentence—they must use strategies to reduce the lag, or delay, between the time they hear the source words and the time they translate them (Wilss, 1978; He et al., 2016). One strategy is final verb prediction: since the verb comes late in the source sentence but early in the target translation, if the verb is predicted in advance, it can be translated before it is heard, allowing for a more "simultaneous" (or monotonic) translation (Jörg, 1997; Bevilacqua, 2009; He et al., 2015). Furthermore, Chernov et al. (2004) argue that simultaneous interpreters' probability estimates and predictions of the verbal and semantic structure of preceding messages facilitate simultaneity in human simultaneous interpretation. + +German Cazeneuve dankte dort den Männern und sagte, ohne deren kühlen Kopf hätte es vielleicht ein "furchtbares Drama" gegeben. + +English Cazeneuve thanked the men there and said that without their cool heads there might have been a "terrible drama". + +Japanese 1 + +English It also said that he was acquainted with a secret lodging accommodation in Katsuragiyama in Nara Prefecture of Yamato. + +Figure 1: An example of the verb position difference between SOV and SVO languages, where the final verb in German and Japanese is expected much earlier in their English translation. + +As with human translation, simultaneous machine translation (SMT) becomes more monotonic for SOV-SVO with better verb prediction (Grissom II et al., 2014; Gu et al., 2017; Alinejad et al., 2018). Earlier work used pattern-matching rules (Matsubara et al., 2000), $n$ -gram language models (Grissom II et al., 2014), or a logistic regression with linguistic features (Grissom II et al., 2016).
Recent neural simultaneous translation systems have integrated prediction into the encoder-decoder model or argued that these predictions, including verb predictions, are made implicitly by such models (Gu et al., 2017; Alinejad et al., 2018), but they have not systematically studied the late-occurring verb predictions themselves. + +German +Auch die deutschen Skispringer können sich Hoffnungen auf ihre erste Medaille bei den Winterspielen in Vancouver [machen, schaffen, tun]. + +English The German ski jumpers can also hope for their first medal at the Winter Games in Vancouver. + +Figure 2: An example of alternative final verbs ("machen", "schaffen", "tun") that preserve the same general meaning in German and do not influence its translation in English. + +While neural models can identify complex patterns from feature-rich datasets (Goldberg, 2017), less research has gone into the problem of long-distance prediction, particularly for sentence-final verbs, where predictions must be made with incomplete information. We introduce a neural model, Attentive Neural Verb Inference for Incremental Language (ANVIIL), for verb prediction, which predicts verbs earlier and with higher accuracy. Moreover, we make ANVIIL's predictions more flexible by introducing synonym awareness. Self-attention also lets us visualize why a certain verb is selected and how it relates to specific tokens in the observed subsentence. + +# 2 The Problem of Verb Prediction + +Given an SOV sentence, we want to predict the final verb as soon as possible in an incremental setting. For example, in Figure 1, the final verb, "gegeben", in German is expected to be translated together with "hätte es" as "there would have been" in the middle of the English translation. + +Human interpreters will often predict a related verb rather than the exact verb in a reference translation, while preserving the same general meaning, since predicting the exact verb in a reference translation is difficult (Jörg, 1997).
For instance, in Figure 2, besides "machen", verbs such as "schaffen" and "tun" also often pair with "Hoffnungen" to express "hope for" in English. We therefore include two verb prediction tasks: first, we learn to predict the exact verb; second, we learn to predict verbs semantically similar to the exact reference verb. We describe these two tasks below. + +# 2.1 Exact Prediction + +We follow Grissom II et al. (2016), who formulate final verb prediction as sequential classification: a sentence is revealed to the classifier incrementally, and the classifier predicts the exact verb at each time step. While Grissom II et al. (2016) use logistic regression with engineered linguistic features, we use a recurrent neural model with self-attention, which learns embeddings and a context representation that captures relations between tokens, regardless of the distance. Verbs are predicted by classifying on the learned representation of incomplete sentences. + +# 2.2 Synonym-aware Prediction + +We also extend the idea in Section 2.1 to allow for synonym-aware predictions: for example, the verb synonym "give", used in place of "provide", preserves the intended meaning in most circumstances and can be considered a successful prediction. Instead of training the model to focus on one fixed verb for each input, we encourage the model to be confident about a set of verb candidates which are generally correct in the context. + +# 3 A Neural Model for Verb Prediction + +This section describes ANVIIL's structure. Gated recurrent neural networks (RNNs), such as LSTMs (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRUs; Cho et al., 2014), can capture long-range dependencies in text, which we need for effective verb prediction. + +We construct an RNN-based classifier with self-attention (Lin et al., 2017) for predicting sentence-final verbs (Figure 3).
This is a natural encoding of the problem, as it explicitly models how interpreters might receive information and update their verb predictions. The hidden states of the sequence model can be either at the word or character level. + +![](images/01a9214f06429e79176f9cb96aaa50f84089045e613f12c889bb961179f98897.jpg) +Figure 3: ANVIIL. Token sequences at the input layer are mapped to embeddings, which go to the GRU. The dot product of attention weights and hidden states passes through a dense layer to predict the verb. + +# 3.1 BiGRU Sequence Encoder + +Following Yang et al. (2016), we encode input sequences using the bidirectional GRU (BiGRU). Given an incomplete sentence prefix $\pmb{x} = (x_{1}, x_{2}, \dots, x_{l})$ of length $l$ , BiGRU takes as input the embeddings $(\pmb{w}_{1}, \pmb{w}_{2}, \dots, \pmb{w}_{l})$ , where $\pmb{w}_{i}$ is the $d$ -dimensional embedding vector of $x_{i}$ . At time step $t$ , the forward and backward hidden states are: + +$$ +\overrightarrow{\boldsymbol{h}}_{t} = \overrightarrow{\mathrm{GRU}}\left(\boldsymbol{w}_{t}, \overrightarrow{\boldsymbol{h}}_{t-1}\right), \qquad \overleftarrow{\boldsymbol{h}}_{t} = \overleftarrow{\mathrm{GRU}}\left(\boldsymbol{w}_{t}, \overleftarrow{\boldsymbol{h}}_{t+1}\right). \tag{1} +$$ + +These are concatenated as $\pmb{h}_t = [\overrightarrow{\pmb{h}}_t; \overleftarrow{\pmb{h}}_t]$ and we represent the input sequence as + +$$ +H = \left(\boldsymbol{h}_{1}, \boldsymbol{h}_{2}, \dots, \boldsymbol{h}_{l}\right). \tag{2} +$$ + +As we only use a prefix of the sentence as input for prediction, the backward GRU cannot see messages from the still-unrevealed words. However, within the revealed prefix, later words do change the internal representation of earlier words in $H$ , creating a more powerful overall representation that uses more of the available context.
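To make the recurrences concrete, the sketch below runs the forward and backward passes of Equations (1)-(2) over a toy prefix in plain NumPy with random, untrained weights. This is only an illustrative sketch of the BiGRU computation, not the paper's implementation (which uses framework GRU layers); all dimensions are arbitrary:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_gru(d_in, d_h, rng):
    # One (input, recurrent) weight pair per gate: update z, reset r, candidate h.
    return [rng.standard_normal((d_h, d_in if i % 2 == 0 else d_h)) * 0.1
            for i in range(6)]

def gru_step(params, w, h_prev):
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ w + Uz @ h_prev)              # update gate
    r = sigmoid(Wr @ w + Ur @ h_prev)              # reset gate
    h_cand = np.tanh(Wh @ w + Uh @ (r * h_prev))   # candidate state
    return (1.0 - z) * h_prev + z * h_cand

def bigru_encode(embeds, fwd, bwd, d_h):
    """Eq. (1)-(2): run GRUs in both directions, concatenate per-step states."""
    l = len(embeds)
    h = np.zeros(d_h)
    fwd_states = []
    for w in embeds:                               # left-to-right over the prefix
        h = gru_step(fwd, w, h)
        fwd_states.append(h)
    h = np.zeros(d_h)
    bwd_states = [None] * l
    for t in range(l - 1, -1, -1):                 # right-to-left over the prefix
        h = gru_step(bwd, embeds[t], h)
        bwd_states[t] = h
    # H has one row h_t = [h_t(fwd); h_t(bwd)] per revealed token.
    return np.stack([np.concatenate([f, b])
                     for f, b in zip(fwd_states, bwd_states)])

rng = np.random.default_rng(0)
d, d_h, l = 8, 16, 5                 # embedding dim, hidden dim, prefix length
prefix = rng.standard_normal((l, d))  # toy embeddings w_1..w_l
H = bigru_encode(prefix, make_gru(d, d_h, rng), make_gru(d, d_h, rng), d_h)
print(H.shape)  # (5, 32)
```

Note that the backward pass only runs over the revealed prefix, matching the discussion above: nothing after position $l$ is available.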
+ +Embedding vectors for the input can be word embeddings or character embeddings, yielding a word-based or a character-based model; we try both in Section 4. + +# 3.2 Structured Self-attention + +Following Lin et al. (2017), we apply self-attention with multiple views of the input sequence to obtain a weighted context vector $\pmb{v}$ . Viewing the sequence multiple times allows different attention weights to be assigned in each view. Using a two-layer multilayer perceptron (MLP) without bias and a softmax function over the sequence length, we have an $r$ -by- $l$ attention matrix $A$ , which includes $r$ attention vectors extracted from $r$ views of $\pmb{x}$ : + +$$ +A = \operatorname{softmax}\left(W_{s_{2}} \tanh\left(W_{s_{1}} H^{T}\right)\right) \tag{3} +$$ + +We sum over all $r$ attention vectors and normalize, yielding a single attention vector $\mathbf{a}$ with normalized weights (Figure 3). By assigning each hidden state its attention $a_{t}$ , we acquire an overall representation of the sequence: + +$$ +\boldsymbol{v} = \sum_{t=1}^{l} a_{t} \boldsymbol{h}_{t}. \tag{4} +$$ + +# 3.3 Verb Predictor + +For an incomplete input prefix $\pmb{x}$ , the target verb is $y \in \mathcal{Y} = \{1,2,\dots,K\}$ . Based on the high-level representation $\pmb{v}$ of the input sequence, we compute the probability of each verb $k$ and select the one with the highest probability as the predicted verb: + +$$ +p(y \mid \boldsymbol{v}) = \frac{e^{f_{y}(\boldsymbol{v})}}{\sum_{k=1}^{K} e^{f_{k}(\boldsymbol{v})}} \tag{5} +$$ + +where $f_{k}(\pmb{v})$ is the logit from the dense layer.
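The attention and prediction steps of Equations (3)-(5) can be sketched in the same illustrative style, again with random untrained weights; the number of views $r$, the number of verbs $K$, and the dimensions are arbitrary choices, not the paper's settings:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_and_predict(H, Ws1, Ws2, Wout):
    """Eqs. (3)-(5): r-view self-attention over H, then a softmax over verbs."""
    A = softmax(Ws2 @ np.tanh(Ws1 @ H.T), axis=-1)  # (r, l); each view sums to 1
    a = A.sum(axis=0)
    a = a / a.sum()                                 # collapse the r views, renormalize
    v = a @ H                                       # context vector, Eq. (4)
    return softmax(Wout @ v), a                     # verb distribution, Eq. (5)

rng = np.random.default_rng(1)
l, two_u, d_a, r, K = 7, 32, 20, 5, 100             # K = number of candidate verbs
H = rng.standard_normal((l, two_u))                 # stand-in for the BiGRU states
Ws1 = rng.standard_normal((d_a, two_u)) * 0.1
Ws2 = rng.standard_normal((r, d_a)) * 0.1
Wout = rng.standard_normal((K, two_u)) * 0.1        # dense layer producing f_k(v)
p, a = attend_and_predict(H, Ws1, Ws2, Wout)
print(p.shape, int(np.argmax(p)))                   # predicted verb = argmax_k p(k|v)
```

The per-token weights `a` are exactly what the heatmaps in Section 6 visualize.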
+ +# 3.3.1 Exact Verb Prediction + +As there is only one ground-truth verb $y$ for the input, we maximize the log-likelihood of the correct verb with cross-entropy loss: + +$$ +\mathcal{L} = -\sum_{k=1}^{K} q(k \mid \boldsymbol{v}) \log p(k \mid \boldsymbol{v}) \tag{6} +$$ + +where $q(k \mid \boldsymbol{v})$ is the ground-truth distribution over the verbs, which equals 1 if $k = y$ , and 0 otherwise. + +# 3.3.2 Synonym-aware Verb Prediction + +In addition to the exact verb $y$ , we add verbs that are of similar meaning to $y$ into a synonym set $\mathcal{Y}^{\prime} \subset \mathcal{Y}$ , creating a verb candidate pool for each input sample. Instead of maximizing the log-likelihood of the fixed verb $y$ , we maximize the log-likelihood of the most probable verb candidate $y^{\prime} \in \mathcal{Y}^{\prime}$ dynamically through training: + +$$ +\mathcal{L} = -\sum_{k=1}^{K} q^{\prime}(k \mid \boldsymbol{v}) \log p(k \mid \boldsymbol{v}) \tag{7} +$$ + +where + +$$ +q^{\prime}(k \mid \boldsymbol{v}) = \begin{cases} 1, & \text{if } k = \underset{j \in \mathcal{Y}^{\prime}}{\operatorname{argmax}}\, p(j \mid \boldsymbol{v}) \\ 0, & \text{otherwise.} \end{cases} \tag{8} +$$ + +As the candidate can differ at each step, the likelihood of any verb candidate in the synonym set is maximized over the course of training. + +
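Equations (7)-(8) amount to recomputing the cross-entropy target at every step as whichever synonym currently has the highest model probability. A minimal sketch (the logits and synonym indices below are made up for illustration):

```python
import numpy as np

def synonym_aware_loss(logits, synonym_ids):
    """Eqs. (7)-(8): cross-entropy against the currently most probable synonym."""
    p = np.exp(logits - logits.max())
    p /= p.sum()                                     # p(k | v) over all K verbs
    target = synonym_ids[np.argmax(p[synonym_ids])]  # dynamic one-hot target y'
    return -np.log(p[target]), target

logits = np.array([2.0, 0.5, 1.5, -1.0, 0.0])        # f_k(v) for K = 5 verbs
syns = np.array([1, 2])                              # indices of acceptable synonyms
loss, tgt = synonym_aware_loss(logits, syns)
print(tgt)  # 2: within the synonym set, verb 2 currently has higher probability
```

Because the gradient only ever pushes up the set member the model already prefers, training settles on whichever candidate fits the context best rather than forcing one fixed verb.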
| | Most Frequent Verbs | Thousands of Verbs | Coverage (%) |
| --- | --- | --- | --- |
| DE (Inflected) | 100 | 1286.7 | 16.0 |
| DE (Inflected) | 200 | 2243.7 | 28.0 |
| DE (Inflected) | 300 | 2577.3 | 32.2 |
| JA (Normalized) | 100 | 70.2 | 56.8 |
| JA (Normalized) | 200 | 85.2 | 68.9 |
| JA (Normalized) | 300 | 93.2 | 75.4 |
+ +Table 1: Dataset for final-verb prediction. We extract German and Japanese verb-final sentences that end in the most frequent 100–300 verbs. Using normalized Japanese verbs reduces the sparsity of the verbs and improves coverage of sentences. + +# 4 Exact Prediction Experiments + +We first test exact prediction on both Japanese and German verb-final sentences with both word-based and character-based models. + +# 4.1 Datasets + +We use German and Japanese verb-final sentences between ten and fifty tokens (Table 1) that end in the 100 to 300 most common verbs (Wolfel et al., 2008). For each sentence, the extracted final verb becomes the label; the token sequence preceding it (the preverb) is the input. We split sentences into train $(64\%)$ , evaluation $(16\%)$ , and test $(20\%)$ sets. + +For Japanese, we use the Kyoto Free Translation Task (KFT) corpus of Wikipedia articles. Since Japanese is unsegmented, we use the morphological analyzer MeCab (Kudo, 2005) for tokenization. Like Grissom II et al. (2016), we strip out post-verbal copulas and normalize verb forms to the dictionary ru (non-past tense) form. We also treat suru light-verb constructions as a single unit. + +For German, we use the Wortschatz Leipzig news corpus from 1995 to 2015 (Goldhahn et al., 2012). German sentences ending with a verb (we throw out verb-medial sentences) are tokenized and POS-tagged with TreeTagger (Schmid, 1995). Since German sentences may end with two verbs (for example, a verb followed by ist), we only predict the content verb, i.e., the first verb in the two-verb sequence. Unlike Japanese, we leave German verbs inflected, as there is less variation (usually past participle or infinitive form). + +# 4.2 Training Data Representation + +Because we predict from partial input, we train on incrementally longer preverb subsequences.
Each subsequence is an independent input sample during training, and each preverb is truncated into five progressively longer subsentences: $30\%$ , $50\%$ , $70\%$ , $90\%$ , and $100\%$ . + +# 4.3 Training Details + +We train both word- and character-based models for German and Japanese verb prediction. We use the dev sets to manually tune hyperparameters for accuracy—word embedding size, hidden layer size, dropout rates, and learning rate. + +Character-based Model For input character sequences, we learn 64-dimensional embeddings and encode them with a two-layer BiGRU of 256 hidden units. The embeddings are randomly initialized with PyTorch defaults and updated during training jointly with other parameters. Mini-batch sizes are 256 for German but 128 for Japanese's smaller corpus. We use the evaluation set for tuning and set the embedding dropout rate to 0.6 and the RNN dropout rate to 0.2, averaging over five views for the attention vectors. We optimize with Adam (Kingma and Ba, 2015) with an initial learning rate of $10^{-4}$ , decaying by 0.1 when the loss increases. Training takes approximately two (Japanese) and four (German) hours on one 6GB GTX1060 GPU. + +Word-based Model We use a vocabulary of 50,000 for German and Japanese, with a special token for out-of-vocabulary words. The embedding size is 300. We encode the input embeddings with a two-layer BiGRU with 512 hidden units. Other hyperparameters are unchanged from the character-based model. + +# 4.4 Results + +We compare ANVIIL to the logistic regression model in Grissom II et al. (2016) on the 100 most frequent verbs in the corpus (Figure 4). For both languages, ANVIIL has higher accuracy than previous work (Figure 5), especially early in the sentence. While word-based models work best for German, character-based models work best for Japanese, perhaps because it is agglutinative. + +Figure 6 compares other encodings of preverbs (at a character level) in Japanese.
In general, ANVIIL has higher accuracy on verb prediction tasks. + +![](images/7baf6baf9c9880cec6eff817e2ec387eae69b6082a9f8486d97971ed8b28be7a.jpg) +Figure 4: Comparing word and character representations for German (inflected) and Japanese (normalized) verb prediction. ANVIIL consistently has higher accuracy than LogReg from Grissom II et al. (2016), and word-based prediction is slightly better for German but worse for Japanese. + +![](images/a39315a5118e3e338ba1bc4c71909888d0a55453bd28f730ff5.jpg) +Figure 5: Accuracy when classifying among the most common 100, 200, and 300 verbs. ANVIIL consistently outperforms the best-performing model described in Grissom II et al. (2016), especially early in the sentences. + +# 5 Synonym-aware Prediction + +We now describe synonym-aware verb prediction (Section 2.2). We use 2,214,523 German sentences ending with the 100 most frequent lemmatized verbs. For each sentence, we extract the preverb as in Section 4.1, but in this case, the target is not just a single verb. For each lemmatized verb, we extract its synonyms among the 100 verbs using GermaNet synsets (Hamp and Feldweg, 1997; Henrich and Hinrichs, 2010). If synonyms exist, we include them all in a list as candidate target verbs for the input as in Figure 2. Synonyms exist for $40.79\%$ of the sentences in the dataset. + +![](images/0103d70d2861b7a8066a9c72adbbcffb047ff59bd944af3c23df894dc1fd199a.jpg) +Figure 6: ANVIIL's BiGRU with self-attention outperforms most other settings on predicting the 100 most common verbs in Japanese. + +![](images/51d5fc00527dec9e3646ad5685b138c29e9d78356a4fb1fb0e0004ab6894b0f2.jpg) +Figure 7: Accuracy across time on exact/synonym-aware match with exact/synonym-aware training. Accuracy increases slightly with the addition of the synonym-aware matching. + +Similarly, we train incrementally on subsequences of the preverb as in Section 4.3.
We learn high-level representations of the preverb using word-level embeddings and use the same training parameters as in Section 4.3. + +During training, instead of maximizing the exact verb's log-likelihood, we maximize the log-likelihood of any verb in the synonym set, encouraging the model to be confident about any verb that fits in the context. + +# 5.1 Verb Prediction Results + +We compare accuracy for predicting exact and synonym-aware verbs with different training objectives. In synonym-aware prediction, we consider the prediction successful if it is one of the candidate verbs. Compared to predicting the exact verb, while being less focused on the fixed verb, synonym-aware prediction further improves the prediction accuracy (Figure 7), but only slightly. ANVIIL clearly outperforms the feature-engineered linear models on Japanese across the entire sentence, even when the number of verbs to choose from is larger; and on German, ANVIIL outperforms previous models when the number of verbs to choose from is the same (Figure 4). This may be due to the long-range dependencies which are not captured in the logistic regression model. + +# 6 Visualization and Analysis + +We now analyze our model's predictions. While previous work (Grissom II et al., 2016) examines the contribution of features by examining the model itself, our approach does not rely on feature engineering. To examine our model, we instead use a heatmap to visualize the time course of attention values in sentences, allowing us to see what the model focuses on when predicting. + +# 6.1 Visualization of the Prediction Process + +We visualize how our model makes its predictions in Figure 8 and Figure 9. In both languages, the model not only focuses on the most recently revealed word, but also directs attention to relevant long-distance dependencies. + +Predictions are, as expected, also more confident and accurate when approaching the end of the preverb.
This is consistent with the verb prediction process for human interpreters (Wilss, 1978) and with previous work (Grissom II et al., 2016). With increasing information, the number of possible alternatives gradually declines. Figure 10 visualizes how the model makes synonym-aware predictions. + +![](images/7d4fd25d478f652cf4814402caa0797b851f9582d030a0aaa3126cf838c43c43.jpg) +Figure 8: Attention during German verb prediction. The model usually attends to the most recent word, but focuses on "es", which can be used as the subject of an existential phrase (Joseph, 2000) in combination with the verb "geben". Thus, it focuses on an interpretation of "es" as the subject, consistently attends to "es" throughout the sentence, and correctly predicts "geben" (for consistency with the Japanese examples, we show the model that predicts the normalized—infinitive—form of the verb). + +![](images/45ea00ec7e258572af1483b8d41290c6cb26123a7539e76ba064b51dbb9d7fab.jpg) +Figure 9: Attention during Japanese verb prediction. Attention and prediction transition through time on a Japanese sentence. The genitive case marker no, in bright yellow, has a high attention weight, as do the characters making up the noun before it. Case marker-adjacent nouns, including before the genitive no (twice) and the accusative wo, have slightly less. Toward the end of the sentence, attention shifts to the quotative particle to, which significantly limits possible completions. + +# 6.2 Character-based versus Word-based + +As described in Section 4.3, we implement both character-based and word-based models for verb prediction. For Japanese final-verb prediction, the character-based model has higher prediction accuracy. Unlike the word-based model, it does not require use of a morphological analyzer and has a smaller vocabulary size. The word-based model, however, works better for German verb prediction, and word-based heatmaps are more interpretable than character-based ones for German.
We show word-based heatmaps for exact prediction in Figure 8 and Figure 11. + +# 6.3 Synonym-aware versus Exact Prediction + +We show an example of how synonym-aware prediction can make the task easier in Figure 12. By providing synonyms during training, the model makes an alternative prediction "zeigen" (present, show) for the original verb "einsetzen" (use). + +# 6.4 Case Markers + +Previous work suggests that case markers play a key role in both human and machine verb prediction for Japanese (Grissom II et al., 2016). Japanese has explicit postposition case markers which mark the roles of the words in a sentence. By examining the accuracy of predictions when the most recent token is a case marker, we can gain insight into their contributions to the predictions. + +Figure 13 considers the instances where the most recent token observed is the given case marker; in these situations, the accuracy of predicting one of the 100 most frequent verbs is much higher than in general. It is unsurprising that the quotative particles have higher accuracy at the end of the sentence, since the set of verbs that follow them is highly constrained—e.g., say, think, announce, etc. Quotative particles for the entire sentence occur immediately before the final verb. More general particles, such as ga (NOM) and wo (ACC), show a smaller increase in accuracy. + +# 7 Related Work + +This section examines previous work on prediction in humans, simultaneous interpretation, and + +![](images/e77285ea55dad43e76dacb334183bb8af4e8249878aee9a694ad9b15dfe1fe34.jpg) +Figure 10: Attention during German synonym-aware verb prediction. The model constantly focuses on "skispringer" (ski jumpers), which is the subject of the verb, and predicts "machen" and "schaffen" from three of the verb candidates. + +![](images/64c1405cb9fd9b83e83e8599e5784feb4c69f934423d0998f89bcc743e9f79a7.jpg) +Figure 11: Progression of attention weights of a word-based model on a German sentence.
The model successfully captures the passive voice in the sentence where "wird erwartet" is often translated together as "is expected". Full translation of the example is: Chancellor Merkel is expected to speak in London next week. + +simultaneous machine translation. + +Psycholinguistics has examined argument structure using verb-final $b\check{a}$ -construction sentences in Chinese (Chow et al., 2015, 2018). Kamide et al. (2003) find that case markers facilitate verb predictions for humans, likely because they provide clues about the semantic roles of the marked words in sentences. In sentence production, Momma et al. (2015) suggest that humans plan verbs after selecting a subject but before objects. + +Empirical work on German verb prediction first investigated German-English simultaneous interpreters in Jörg (1997): professional interpreters often predict verbs. Matsubara et al. (2000) introduce early verb prediction into Japanese-English SMT + +by predicting verbs in the target language. Grissom II et al. (2014) and Gu et al. (2017) use verb prediction in the source language and learn when to trust the predictions with reinforcement learning, while Oda et al. (2015) predict syntactic constituents and do the same. Grissom II et al. (2016) predict verbs with linear classifiers and compare the predictions to human performance. We extend that approach with a modern model that explains which cues the model uses to predict verbs. + +In interactive translation (Peris et al., 2017) and simultaneous translation (Alinejad et al., 2018; Ma et al., 2019) systems, neural methods for next word prediction improve translation. BERT (Devlin et al., 2019) uses masked deep bidirectional language + +![](images/32e615209af2de4a35da6a7e3f3cf6aba2fae8470816c1b487b07ed1f2aa454e.jpg) +Figure 12: Imperfect synonym-aware prediction process on a German sentence. 
The predicted synonym "zeigen" (show/appear) in context is not a perfect replacement for the correct verb "einsetzen" (put in place), but it better preserves the general meaning of the sentence: "This money had been made available to the country for the process of EU membership and should now appear for refugee assistance."

![](images/d4038d8379e7c633527d4d55fdf75f16abcfb8ebb2e609884076da1b867249fd.jpg)
Figure 13: Case markers correlate with improved verb prediction compared to overall verb prediction (Figure 4). Some case markers, such as to, have large jumps in accuracy toward the end, while others, such as wo, do not. We examine nominative (NOM), instructive (INS), accusative (ACC), dative (DAT), quotative (QUOT), and essive (ESS) markers.

models and contextualized representations (Peters et al., 2018) for pretraining and gain improvements in word prediction and classification. We incorporate bidirectional encoding into verb prediction.

Existing neural attention models for sequential classification are commonly trained on complete input (Yang et al., 2016; Shen and Lee, 2016; Bahdanau et al., 2014). Classification on incomplete sequences and long-distance sentence-final verb prediction remains difficult and under-explored.

# 8 Conclusion

We present a synonym-aware neural model for incremental verb prediction using a BiGRU with self-attention. It outperforms existing models in predicting the most frequent sentence-final verbs in both Japanese and German. As we predict the verbs incrementally, our method can be directly applied to real-time sequential classification or prediction problems. SMT systems for SOV-to-SVO simultaneous MT can also benefit from our work to reduce translation latency. We show that larger datasets always help with predicting sentence-final verbs, suggesting that larger corpora will further improve results.
# Acknowledgements

This material is based upon work supported by the National Science Foundation under Grant No. 1748663 (UMD). The views expressed in this paper are our own. We thank Graham Neubig and Hal Daumé III for useful feedback.

# References

Ashkan Alinejad, Maryam Siahbani, and Anoop Sarkar. 2018. Prediction improves simultaneous neural machine translation. In Conference on Empirical Methods in Natural Language Processing, pages 3022-3027.
Emmon Bach. 1962. The order of elements in a transformational grammar of German. Language, 38(3):263-269.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv e-prints.
Lorenzo Bevilacqua. 2009. The position of the verb in Germanic languages and simultaneous interpretation. The Interpreters' Newsletter, 14:1-31.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.
G.V. Chernov, R. Setton, and A. Hild. 2004. Inference and Anticipation in Simultaneous Interpreting: A Probability-prediction Model. Benjamins Translation Library. J. Benjamins Publishing Company.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gülcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing.
Wing-Yee Chow, Ellen Lau, Suiping Wang, and Colin Phillips. 2018. Wait a second! Delayed impact of argument roles on on-line verb prediction. Language, Cognition and Neuroscience, 33(7):803-828.
Wing-Yee Chow, Cybelle Smith, Ellen Lau, and Colin Phillips. 2015. A "bag-of-arguments" mechanism for initial verb predictions. Language, Cognition and Neuroscience, pages 1-20.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019.
BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the North American Chapter of the Association for Computational Linguistics.
Patrick Doetsch, Pavel Golik, and Hermann Ney. 2017. A comprehensive study of batch construction strategies for recurrent neural networks in MXNet. IEEE International Conference on Acoustics, Speech, and Signal Processing.
Yoav Goldberg. 2017. Neural Network Methods for Natural Language Processing. Synthesis Lectures on Human Language Technologies.
Dirk Goldhahn, Thomas Eckart, and Uwe Quasthoff. 2012. Building large monolingual dictionaries at the Leipzig corpora collection: From 100 to 200 languages. In International Language Resources and Evaluation.
Alvin Grissom II, Naho Orita, and Jordan Boyd-Graber. 2016. Incremental prediction of sentence-final verbs: Humans versus machines. In Conference on Computational Natural Language Learning, pages 95-104.
Alvin C. Grissom II, He He, Jordan Boyd-Graber, John Morgan, and Hal Daumé III. 2014. Don't until the final verb wait: Reinforcement learning for simultaneous machine translation. In Conference on Empirical Methods in Natural Language Processing.
Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Victor O.K. Li. 2017. Learning to translate in real-time with neural machine translation. European Chapter of the Association for Computational Linguistics.
Birgit Hamp and Helmut Feldweg. 1997. GermaNet - a lexical-semantic net for German. Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications.
He He, Jordan Boyd-Graber, and Hal Daumé III. 2016. Interpretese vs. translationese: The uniqueness of human strategies in simultaneous interpretation. In Conference of the North American Chapter of the Association for Computational Linguistics.
He He, Alvin Grissom II, Jordan Boyd-Graber, and Hal Daumé III. 2015. Syntax-based rewriting for simultaneous machine translation. In Conference on Empirical Methods in Natural Language Processing.
Verena Henrich and Erhard Hinrichs. 2010. GernEdiT - the GermaNet editing tool. In International Language Resources and Evaluation. European Language Resources Association (ELRA).
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Udo Jörg. 1997. Bridging the gap: Verb anticipation in German-English simultaneous interpreting. In M. Snell-Hornby, Z. Jettmarová, and K. Kaindl, editors, Translation as Intercultural Communication: Selected Papers from the EST Congress, Prague 1995.
Brian Joseph. 2000. What gives with es gibt? American Journal of Germanic Linguistics and Literatures, 12:243-265.
Yuki Kamide, Gerry Altmann, and Sarah L. Haywood. 2003. The time-course of prediction in incremental sentence processing: Evidence from anticipatory eye movements. Journal of Memory and Language, 49(1):133-156.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations.
Jan Koster. 1975. Dutch as an SOV language. Linguistic Analysis, 1(2):111-136.
T. Kudo. 2005. MeCab: Yet another part-of-speech and morphological analyzer. http://mecab.sourceforge.net/.
Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In Proceedings of the International Conference on Learning Representations.
Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, and Haifeng Wang. 2019. STACL: Simultaneous translation with implicit anticipation and controllable latency using prefix-to-prefix framework.
Shigeki Matsubara, Keiichi Iwashima, Nobuo Kawaguchi, Katsuhiko Toyama, and Yasuyoshi Inagaki. 2000. Simultaneous Japanese-English interpretation based on early prediction of English verbs. In Symposium on Natural Language Processing.
Shota Momma, L. Robert Slevc, and Colin Phillips. 2015. The timing of verb selection in Japanese sentence production. Journal of Experimental Psychology: Learning, Memory, and Cognition.
Shota Momma, Robert Slevc, and Colin Phillips. 2014. The timing of verb selection in English active and passive sentences.
Makoto Morishita, Yusuke Oda, Graham Neubig, Koichiro Yoshino, Katsuhito Sudoh, and Satoshi Nakamura. 2017. An empirical study of mini-batch creation strategies for neural machine translation. In The First Workshop on Neural Machine Translation.
Yusuke Oda, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Syntax-based simultaneous translation through prediction of unseen syntactic constituents. Proceedings of the Association for Computational Linguistics.
Álvaro Peris, Miguel Domingo, and Francisco Casacuberta. 2017. Interactive neural machine translation. Computer Speech and Language, 45:201-220.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the North American Chapter of the Association for Computational Linguistics.
Helmut Schmid. 1995. Improvements in part-of-speech tagging with an application to German. In Proceedings of the ACL SIGDAT-Workshop.
Sheng-syun Shen and Hung-yi Lee. 2016. Neural attention models for sequence classification: Analysis and application to key term extraction and dialogue act detection. In Conference of the International Speech Communication Association.
Wolfram Wilss. 1978. Syntactic anticipation in German-English simultaneous interpreting. In Language Interpretation and Communication.
M. Wölfel, M. Kolss, F. Kraft, J. Niehues, M. Paulik, and A. Waibel. 2008. Simultaneous machine translation of German lectures into English: Inspecting research challenges for the future. In IEEE Spoken Language Technology Workshop.
+ +Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alexander J. Smola, and Eduard H. Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the North American Chapter of the Association for Computational Linguistics. \ No newline at end of file diff --git a/anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/images.zip b/anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..942efd67834eac991423f2ee380bec8c19a9399b --- /dev/null +++ b/anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:767315fb43705fa26faad69a2844e8d59b0482622926823d966e82426171ef03 +size 499266 diff --git a/anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/layout.json b/anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..2d47f8276f5c2097d304d8924fd4e4d6b36151eb --- /dev/null +++ b/anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fbbf2fd971ce8a2aa66e900045cde637609eb84bc1ddd1642dca931afd27655b +size 331153 diff --git a/anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/cd0643f5-f58d-44c5-9e38-398e4e5d85a2_content_list.json b/anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/cd0643f5-f58d-44c5-9e38-398e4e5d85a2_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..4eebefa38a571a93e3eb411966e7128100904fe4 --- /dev/null +++ b/anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/cd0643f5-f58d-44c5-9e38-398e4e5d85a2_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:c2cad5f611b02f635a0064bf8e681dec5fc78d66d373fd66977c4702db190953 +size 91936 diff --git a/anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/cd0643f5-f58d-44c5-9e38-398e4e5d85a2_model.json b/anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/cd0643f5-f58d-44c5-9e38-398e4e5d85a2_model.json new file mode 100644 index 0000000000000000000000000000000000000000..34b41f66e57ba610d7ab712a4248a5a0b3b98508 --- /dev/null +++ b/anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/cd0643f5-f58d-44c5-9e38-398e4e5d85a2_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1513c0542a7c237e7c7dc65f646a33cb980cee9c595c37664532ac4cf4ae8d33 +size 107856 diff --git a/anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/cd0643f5-f58d-44c5-9e38-398e4e5d85a2_origin.pdf b/anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/cd0643f5-f58d-44c5-9e38-398e4e5d85a2_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e2220a2bf1318134a1a601fee91abf8079824610 --- /dev/null +++ b/anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/cd0643f5-f58d-44c5-9e38-398e4e5d85a2_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:29a7e35a4cd181796c37aa88d697feab611c6deda3866cfc6efb65bc5e496f6c +size 713420 diff --git a/anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/full.md b/anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/full.md new file mode 100644 index 0000000000000000000000000000000000000000..4101f48539245e1f23c4d805d550128a362aaae8 --- /dev/null +++ b/anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/full.md @@ -0,0 +1,364 @@ +# An Empirical Exploration of Local Ordering Pre-training for Structured Prediction + +Zhisong Zhang, Xiang Kong, Lori Levin, Eduard Hovy +Language Technologies Institute, Carnegie Mellon University 
{zhisongz, xiangk, lsl, hovy}@cs.cmu.edu

# Abstract

Recently, pre-training contextualized encoders with language model (LM) objectives has been shown to be an effective semi-supervised method for structured prediction. In this work, we empirically explore an alternative pre-training method for contextualized encoders. Instead of predicting words as in LMs, we "mask out" and predict word order information, with a local ordering strategy and word-selecting objectives. With evaluations on three typical structured prediction tasks (dependency parsing, POS tagging, and NER) over four languages (English, Finnish, Czech, and Italian), we show that our method is consistently beneficial. We further conduct detailed error analysis, including one that examines a specific type of parsing error where the head is misidentified. The results show that pre-trained contextual encoders can bring improvements in a structured way, suggesting that they may be able to capture higher-order patterns and feature combinations from unlabeled data.

# 1 Introduction

Recently, pre-trained contextualized encoders (Peters et al., 2018; Radford et al., 2019; Devlin et al., 2019) have been shown to be beneficial for NLP tasks, including structured prediction (Kulmizev et al., 2019; Kondratyuk and Straka, 2019). Most of the pre-training objectives are based on variants of language models (LMs); that is, the model is trained to predict lexical items from partial inputs. Masked Language Model (MaskLM) is a typical example, popularized by BERT (Devlin et al., 2019), which masks out lexical tokens in the input sequences and predicts their identities. Since natural sentences contain not only lexical tokens but also their linearized word orders, it is a natural question whether we can perform pre-training by "masking out" and recovering word order information.
Word order is an important method of grammatical encoding (Dryer, 2007), and can play an important role in predicting basic sentence structures (Naseem et al., 2012; Täckström et al., 2013; Ammar et al., 2016; Ahmad et al., 2019). Recently, Wang et al. (2018) pre-train an explicit word reordering model and show that its contextualized representations improve dependency parsing.

In this work, we explore a local ordering pre-training strategy with word-selection objectives. Instead of completely discarding original word order information, we segment the input sentence into local bags of words and keep the ordering of these bags. Inside each bag, we discard all the local word orders and train the model to recover them. Furthermore, we simplify the training objectives: instead of training explicit word linearizers, which require extra unidirectional decoders, we only ask the model to select original neighboring words. This scheme simplifies the pre-training procedure and enhances the encoder, since it can take information from the whole sentence.

A similar idea is explored in StructBERT (Wang et al., 2020), which adopts a word structural objective by shuffling and re-predicting randomly selected subsets of trigrams. Our method is different in that we make local bags of words instead of shuffling, and we adopt simpler and cheaper word-selection objectives. Moreover, we focus on empirical experiments and error analysis on structured prediction tasks.

We evaluate on three structured prediction tasks (dependency parsing, part-of-speech (POS) tagging, and Named Entity Recognition (NER)) over four languages (English, Finnish, Czech, Italian). The highlights of our findings are:

- For local ordering pre-training, the best performance is obtained when partially masking out information to a suitable degree.
(§3.2.1)

![](images/e9317fdcd600c5a2f7f52745dc4c951c0866ad03379caffba983f9dcab7501bd.jpg)
Figure 1: Illustration of the local ordering pre-training strategy. We segment the input sentence into local bags (bag size is fixed to three here) and discard word order information inside each bag by assigning the same position indexes. Training objectives are to select original neighboring words. Here, we only show the scenario for direct left-neighbor selection; selections for other positions are similar.

- Even when pre-trained with a small amount of data (1M Wikipedia sentences), our method can improve the performances of structured predictors in a consistent way. Our method performs comparably to MaskLM and there can be further improvements when combining the two objectives, especially for parsing, which is the most structured task we explore. (§3.2.2, §3.3)
- The pre-trained models make fewer structured errors, suggesting that they may be able to capture higher-order patterns and feature combinations from unlabeled data. (§3.4)

# 2 Local Ordering Pre-training

Word reordering or linearization itself is an interesting task, aiming to arrange a bag of words into a natural sentence (Liu et al., 2015; Zhang and Clark, 2015; Schmaltz et al., 2016). Wang et al. (2018) show that representations from an explicit reordering model can benefit dependency parsing. However, there may be two issues with an explicit reordering model for pre-training. Firstly, the input is a bag of words without any positional information, which could discard too much information, leading to relatively large discrepancies between pre-training and fine-tuning. Moreover, training explicit reordering models requires unidirectional decoders, which are only aware of contexts from one direction and cannot make full use of the bidirectional information at one time.
To mitigate these issues, we explore a local ordering pre-training strategy with word-selection objectives. Inspired by MaskLM, where only some of the tokens are masked out, we "mask out" partial ordering information by segmenting the input sentence into multiple local bags of words, and only discarding word orders inside each bag (§2.1). Moreover, we adopt simpler training objectives of selecting original neighboring words, which avoids the need for unidirectional decoders and focuses the pre-training on the encoder (§2.2).

# 2.1 Local Bags of Words

Instead of discarding all positional information, we keep the overall ordering and only discard local word orders. This is achieved by segmenting the input sentence into a sequence of local bags of words. In this way, the model is not aware of the local word orders inside each bag, but the overall ordering of the bags is kept. Figure 1 provides a simplified example to illustrate this scheme. We specify special positional encodings to "mask out" local word orders: inside each local bag, all the tokens get the same positional indexes. For example, the position indexes in the first bag {There, is, a} are all set to 0, while in the second bag {cat, on, the}, the position indexes are all set to 3.

The above example illustrates a simplified scheme, whereas in actual pre-training, we adopt several variations to make it more flexible. 1) First, for the position indexes inside each bag, we do not fix them to the index of the first token, but randomly pick a representative token and adopt its index. For example, in the second bag, we randomly choose a representative index from $\{3,4,5\}$, and then set all position indexes to this value. 2) Moreover, for each local bag, we randomly sample its bag size from a pre-defined range, instead of using a fixed size.
3) In addition, we randomly pick half of the bags and keep the original position indexes in them, which is another way of retaining partial ordering information.

# 2.2 Word-selection Objectives

Since the aim of pre-training is not the pre-training task itself but the encoder, we do not need an explicit word reordering model, which may require unidirectional decoders. In some way, an explicit reordering model can be regarded as an LM that constrains candidate words to come from the input sentence. Therefore, it may suffer from the same problem as unidirectional LMs: at one time, contexts from only one direction can be utilized instead of from both directions. This is the bias of unidirectional decoders, and we replace them with simpler word selectors.

Specifically, we only ask the model to select original neighbors for each word that loses its local word order information. Figure 1 illustrates the case for left-neighbor selection. This task is nontrivial since the model is unaware of word orders inside each bag. In many scenarios, it needs to capture certain global sentence structures. For example, in the second bag {cat, on, the}, if looking only locally, we may pick "the" as the left neighbor of "cat". However, if we notice that there is another determiner "a" in the first bag, then "the" will not be the only choice.

In practice, we adopt four classification tasks corresponding to different original offsets: two for the selection of the original left neighbor (-1) and the left of the left neighbor (-2), and two for the right ones. Each word selector gets its own parameters. Since the word selection task is similar to dependency parsing (Zhang et al., 2017), we adopt the biaffine scorer (Dozat and Manning, 2017). The training objectives are negative log likelihoods on selecting the correct words.
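The local-bag corruption described in §2.1 can be sketched as below. This is an illustrative reimplementation, not the authors' code; the function and parameter names are invented, but the behavior follows the description: bag sizes drawn from a range, a random representative index per corrupted bag, and roughly half of the bags kept intact.

```python
import random

def corrupt_positions(n, max_bag=7, keep_prob=0.5, rng=random):
    """Generate corrupted position indexes p_0..p_{n-1} for a sentence of
    length n, following the local-bag strategy:
      - segment the sentence into bags whose sizes are sampled from
        [(max_bag + 1) // 2, max_bag];
      - inside a corrupted bag, every token gets the position index of a
        randomly chosen representative token (local order is masked);
      - roughly half of the bags keep their original indexes.
    """
    positions = []
    i = 0
    while i < n:
        size = rng.randint((max_bag + 1) // 2, max_bag)
        bag = list(range(i, min(i + size, n)))
        if rng.random() < keep_prob:
            positions.extend(bag)                # bag keeps its true order
        else:
            rep = rng.choice(bag)                # representative index
            positions.extend([rep] * len(bag))   # local order masked
        i += len(bag)
    return positions

random.seed(0)
p = corrupt_positions(12)
assert len(p) == 12      # one (possibly corrupted) index per token
assert p == sorted(p)    # the overall ordering of the bags is kept
```

A token with `p_i != i` then becomes a prediction target for the four neighbor-selection offsets of §2.2.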
Formally, assume that we have an input sequence of $w_0, w_1, \ldots, w_{n-1}$, and we generate their corrupted positions $p_0, p_1, \ldots, p_{n-1}$ with our local bag strategy. For a specific word $w_i$ (where $p_i \neq i$) and a specific selection offset $\delta$ ($\delta \in \{-2, -1, 1, 2\}$), its loss objective will be (for brevity, we omit the conditions on the inputs):

$$
\ell_{w_i, \delta} = -\log \frac{\exp \mathrm{Score}_{\delta}(w_i, w_{i+\delta})}{\sum_{j} \exp \mathrm{Score}_{\delta}(w_i, w_j)}
$$

Here, $\mathrm{Score}_{\delta}$ denotes the score of two tokens having positional difference $\delta$.

Notice that the simplified tasks are not necessarily easier than the explicit reordering task, since we can recover the original word order if we know all the local neighboring information. The word-selection objectives get rid of the explicit decoder as well as its unidirectional bias. At the same time, the model is still as efficient as word reordering models, since we only need to select among the words that appear in the input sentence, and there is no need to do the computationally expensive normalizations over the whole vocabulary as in LMs.

# 2.3 Hybrid Training

We further perform multi-task hybrid training, including both ordering and MaskLM objectives. Actually, our local ordering strategy can be integrated with MaskLM in a natural way. Since half of the local bags preserve the original position indexes, we randomly select words inside those bags to mask and predict.

![](images/b0fadd09c8ed2f5bf1981bd5c893b4e978939f89b94c2c1dfad0c2c95d825b09.jpg)
Figure 2: Illustration of the overall training scheme. The encoder is pre-trained in the pre-training stage with the unlabeled data. Later, the task-specific decoder is stacked and both modules are further fine-tuned with task-specific labeled data.
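The word-selection objective of §2.2 is a softmax cross-entropy normalized over the sentence's words only, not the vocabulary. A minimal numpy sketch, with a plain dot product standing in for the biaffine scorer (an illustrative simplification, not the paper's parameterization):

```python
import numpy as np

def selection_loss(H, i, delta):
    """-log softmax loss for selecting w_{i+delta} as the offset-`delta`
    neighbor of w_i, normalized over the words of the sentence.

    H is an (n, d) matrix of contextualized token vectors. A dot product
    replaces the biaffine scorer here; in the model, each offset delta
    has its own scorer parameters.
    """
    scores = H @ H[i]                      # Score_delta(w_i, w_j) for all j
    log_z = np.log(np.exp(scores).sum())   # partition over the sentence only
    return log_z - scores[i + delta]       # -log p(gold neighbor)

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 8))
loss = selection_loss(H, i=2, delta=-1)    # select the left neighbor of w_2
assert loss > 0.0
```

Because the normalization runs over the input tokens rather than the whole vocabulary, each selection step costs O(n) score evaluations, which is the efficiency argument made above.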
This scheme is nearly as efficient as the original one, because we can segment local bags and mask words at the same time, and thus there is no need to run through the encoder twice. The encoder produces one set of contextualized representations, which we can feed to the corresponding modules of the two tasks. We adopt equal weights (both set to 0.5) for the two objectives.

# 3 Experiments

# 3.1 Settings

In this sub-section, we briefly describe our main experiment settings$^1$. Please refer to the Appendix for more details.

Scheme Figure 2 shows our overall training scheme. We take a two-step approach: pre-training plus fine-tuning. First, the encoder is pre-trained using a relatively large unlabeled corpus; then, the task-specific decoders are stacked upon the pre-trained encoder and all the modules are fine-tuned with task-specific labeled data, which is much smaller than the pre-training data.

Data We explore four languages to evaluate our pre-training strategy: English (en), Finnish (fi), Czech (cs), and Italian (it). For the unlabeled data in pre-training, we collect Wikipedia corpora from the 2018-Fall Wiki-dump. Due to limitations of computational resources, we sample 1M sentences for each language. For POS tagging and dependency parsing, we utilize Universal Dependencies (UD) v2.4 (Nivre et al., 2019). For NER, we utilize CoNLL03 (Tjong Kim Sang and De Meulder, 2003) for English, Digitoday (Ruokolainen et al., 2019) for Finnish, the Czech Named Entity Corpus (Ševčíková et al., 2007) for Czech, and EVALITA 2009 (Speranza, 2009) for Italian. We mainly follow the default dataset splits, except for the training sets. To investigate middle- and low-resource scenarios, we explore three settings of different training sizes, sampling 1k, 5k, and 10k sentences from the original training set.
We adopt standard evaluation criteria: accuracies for POS tagging, first-level (language-independent) Labeled Attachment Score (LAS) for dependency parsing, and F1 score for NER.

Encoders We adopt encoders with the same architecture: a 6-layer Transformer, whose head number, model dimension, and feed-forward hidden dimension are set to 8, 512, and 1024, respectively. In addition, we adopt relative positional encodings (Shaw et al., 2018; Dai et al., 2019) within the Transformer, since in preliminary experiments we found this helpful for the target tasks. In contrast to BERT, we adopt words$^2$ as basic input and modeling units. We further include a character-level Convolutional Neural Network (CNN) to capture the internal structures of words.

Decoders For the decoders of specific tasks, we adopt typical solutions. For dependency parsing, we adopt the biaffine graph-based decoder (Dozat and Manning, 2017). For POS tagging, we simply add a single-layer classifier over all tags (Yang et al., 2018). For NER, we adopt a standard CRF layer (Lafferty et al., 2001).

Training For model training, we adopt the Adam optimizer (Kingma and Ba, 2014) with a warm-up style learning rate schedule. In pre-training, each mini-batch includes 480 sentences and we train the model for 200k steps, in which the first 5k steps are specified for linearly increasing the learning rate towards 4e-4. The pre-training stage takes around 3 days with one RTX 2080 Ti GPU. In task-specific training, we adopt a mini-batch size of 80 sentences and train the model for at most 250 epochs over the training set, which generally takes several hours using a single GPU.

# 3.2 Effects of Pre-training Strategies

In this sub-section, we explore the effects of pre-training strategies. Here, we take the English dependency parsing dataset for development.
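The warm-up schedule from the Training paragraph can be sketched as follows. The linear increase to 4e-4 over the first 5k steps comes from the text; the inverse-square-root decay afterwards is an assumption for illustration, since only the warm-up phase is specified.

```python
def learning_rate(step, peak=4e-4, warmup=5000):
    """Linear warm-up to `peak` over the first `warmup` steps.

    The inverse-square-root decay after warm-up is an assumption for
    illustration; the text only specifies the warm-up phase.
    """
    if step < warmup:
        return peak * (step / warmup)
    return peak * (warmup / step) ** 0.5
```

For example, at step 2,500 this gives half the peak rate, and by step 20,000 it has decayed back to half the peak.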
| R | 3 | 5 | 7 | 9 | 11 | $\infty$ |
| --- | --- | --- | --- | --- | --- | --- |
| 10k | 86.83 | 87.72 | 87.75 | 87.91 | 87.64 | 86.98 |
| 5k | 85.61 | 86.54 | 86.70 | 86.70 | 86.38 | 85.64 |
| 1k | 80.87 | 82.07 | 82.25 | 81.91 | 82.17 | 79.06 |
Table 1: Comparisons of bag size ranges $\left[\frac{\mathrm{R}+1}{2},\mathrm{R}\right]$ for the local ordering strategy. "$\mathrm{R} = \infty$" indicates that all words from one input sentence fall into one bag. Evaluations are performed with the English dependency parsing task (LAS on the development set). Each row represents a different (target task) training size.

# 3.2.1 Bag Size Range

As described in §2.1, we adopt variable bag sizes for the ordering pre-training. The aim is to make the model more flexible and prevent it from always seeing the same patterns associated with fixed bag sizes. The neighbor selection process is not affected by this, since it does not care about the bag boundaries and selects among all the input tokens. The bag size range is a major setting in this strategy. To reduce the number of hyper-parameters, we specify a maximum bag size $R$, and set the bag size range to $[\frac{R+1}{2}, R]$. For example, if $R$ is set to 7, then for each bag, its size is randomly selected from 4 to 7. We also include a setting where $R$ is $\infty$, which corresponds to the case where all words fall into one global bag, as in the full word reordering model.

The results are shown in Table 1. Firstly, in the case of $R = \infty$, the model generally performs worse than those with local bags. This shows the effectiveness of keeping partial ordering information for pre-training, which may reduce the discrepancies between pre-training and fine-tuning, matching our intuition for the local ordering strategy. Furthermore, when the bag size is too small, as in the case of $R = 3$, the performances are also worse, possibly because the task becomes so simple that the model learns little in pre-training. Among the middle-ranged settings of $R$, which partially mask out information to a suitable degree, the results do not differ much. In the following experiments, we fix $R$ to 7, which performs well overall.
# 3.2.2 Comparisons

We compare various pre-training strategies and show the results in Table 2. As laid out in the table, we arrange the models into three groups:

(1) The first group includes models without pre-trained encoders. "Random" gets random initialization, and "fastText" gets its word lookup table
| | Random | fastText | BiLM | MaskLM | LBag | Hybrid | BERT |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 10k | 83.70±0.36 | 86.00±0.10 | 87.28±0.16 | 87.96±0.09 | 87.75±0.13 | 88.27±0.11 | 89.60±0.10 |
| 5k | 80.75±0.35 | 83.17±0.24 | 86.16±0.03 | 87.09±0.10 | 86.70±0.13 | 87.35±0.10 | 88.47±0.11 |
| 1k | 69.93±0.32 | 72.84±0.25 | 80.75±0.03 | 82.65±0.04 | 82.25±0.07 | 83.28±0.26 | 84.62±0.28 |
Table 2: Comparisons of different pre-training strategies with the English dependency parsing task (LAS on development set, averaged over three runs). Each row represents a different (target task) training size.

initialized from static fastText embeddings$^3$.

(2) The second group includes models whose encoders are pre-trained with the same settings on the 1M Wiki corpus. "BiLM" denotes an ELMo-styled (Peters et al., 2018) bidirectional LM (BiLM), where we train left-to-right and right-to-left language models with causality attention masks. "MaskLM" means the BERT-styled MaskLM, where $15\%$ of the words are masked out and predicted. "LBag" denotes our Local-Bag based ordering strategy and "Hybrid" is the multi-task hybrid model trained with both ordering and MaskLM objectives.

(3) The third group only contains "BERT", which directly utilizes pre-trained $\mathrm{BERT}^4$.

In the first group, where there are no pre-trained encoders, the performance drops drastically in low-resource cases. The pre-trained static word embeddings help in some way, but the degree of performance drop is very similar to the baseline's: there are performance gaps of nearly 14 points between the $10\mathrm{k}$ and $1\mathrm{k}$ training sizes. If we adopt pre-trained encoders, as in the second and third groups, the performance clearly improves for all training sizes. Particularly, in the low-resource (1k) settings, the performance drops from the $10\mathrm{k}$ settings are much smaller than those in the first group.

The more interesting comparisons are among those in the second group, where the settings are kept the same except for the pre-training strategy. Firstly, BiLM performs worst in this group. The reason may be that BiLM contains unidirectional decoders, which cannot make full use of the inputs. The performance of our local ordering strategy (LBag) is very close to that of the MaskLM, with performance gaps of only 0.2 to 0.4 in LAS.
Furthermore, if we combine the ordering and MaskLM objectives as in the Hybrid model, there can be further improvements. This suggests that local ordering pre-training may capture information orthogonal to that from MaskLM. Overall, the model performances in the second group do not differ too much, suggesting that the effectiveness of contextualized pre-training can be realized as long as the model is capable enough.

Unsurprisingly, BERT performs the best, possibly due to its larger model and training corpus. Nevertheless, if we calculate the gaps between the second group and BERT, we find that they are relatively consistent as training sizes get smaller. In contrast, the gaps between the first group and BERT clearly get larger in lower-resource settings. This again suggests the effectiveness of contextualized pre-training.

For the pre-trained models in the following experiments, we focus on three strategies: MaskLM, LBag and Hybrid, since they are the ones that we are most interested in comparing.

# 3.3 Main Results

Figure 3 shows the main results on the test sets. The patterns are very similar to the development results. Pre-trained BERT obtains the best results, while our smaller pre-trained models lag behind by small gaps, which are relatively consistent across different training sizes. Those without pre-trained encoders mostly get worse results, especially in low-resource cases. For the parsing task, our local ordering strategy obtains results comparable to those of MaskLM, and overall there can be further improvements from combining the two objectives. For the other two sequence labeling tasks, the results are mixed, possibly because in these cases lexical information may be more important, and the LM-styled pre-training may be better at capturing it. Nevertheless, our strategy still generally obtains results comparable to MaskLM.
# 3.4 Analysis

It is not surprising that contextualized pre-training can help structured prediction, since pre-trained encoders may have already captured structured patterns from unlabeled data. We perform detailed analysis to investigate in what aspects pre-training is helpful. We select low-resource dependency parsing (with 1k training size) as the analysis task, since parsing is the most structurally complex task we explore and there may be more obvious patterns in low-resource scenarios. For error analysis of parsing, Kulmizev et al. (2019) provide detailed error breakdowns on various factors, along the lines of McDonald and Nivre (2007, 2011). In this work, we explore different aspects, especially focusing on the structured nature of the task.

![](images/8d6780b00900b18209baaf00f7b2d3d59a540147d3b0b0ef63e62ecc47be5daa.jpg)

![](images/5fbb7c70728a6f2719f1e25fb051ed4a2b299ba33c4a359814031b5a301c36a8.jpg)

![](images/735061c2403be42b2774ed346e3f225cb0907f58924d491d65abadb43626d419.jpg)

![](images/7d318ce23ae07434412ab1143e397f240a1143b161575907e1ed0f2fc7c35486.jpg)

![](images/5593817806107e5c60fb2a3b96a7df789aeb2026f653047383fd6fa2bf68b668.jpg)

![](images/561aa08e16d33a0bd5eaca31870f9ced0de17799fbc5afefb68e8f31db8c3ecb.jpg)

![](images/7e30d5d54b512ac1cc2f4064debd509273e6f9b83b27adaa2fbf5d9f487b456c.jpg)

![](images/ddc492d35693a1cae26e220ca5196767051afd1ec159b61fe4d0795254bb9d52.jpg)

![](images/457b42295cad9633977c8a25778e7c026f88e5ad1d923a5c3d21c9ae2d2115df.jpg)
Figure 3: Test results for dependency parsing (LAS), POS tagging (Accuracy%) and NER (F1 score).

![](images/562148550edfecb89cdf4475eee2a0102a1df1b349a4cb43c1fcc2c9c30f3d02.jpg)

![](images/d73d35a20b024e16604c384c9d1060ba1b73183a1e52be7d1a5f1e937441fc6d.jpg)

![](images/39fd8f673722a5ba0e11cbaa40fa8881624a3ed00daf806399755e1d0d85d3fa.jpg)
# 3.4.1 On Word Frequencies

Since pre-training is performed on a much larger corpus than the task-specific training set, we would expect pre-trained models to perform better on out-of-vocabulary (OOV) and rare words, since these are seen more often in pre-training.

To investigate this, we split the words of the development set into four bins according to their frequency ranking in the (target task) training vocabulary. Except for the OOV bin, whose words do not appear in training, the other three bins get the same number of running word counts.

![](images/91012f6ff0e4e3e4c00915eb2064cca08e5c5d22fa754d2d6d61b1fa9e7c9fd2.jpg)
Figure 4: Performance breakdown of dependency parsing (LAS on development sets, trained with 1k sentences) on word frequencies. Non-OOV words are evenly divided into the first three bins according to frequency ranking in (target task) training vocabularies.

Figure 4 shows a breakdown of the results. First, comparing fastText against the Random baseline, we find that overall, most of the improvements come from low-frequency and OOV words. For words with high and middle frequency, static embeddings provide little or sometimes even no obvious improvement. With pre-trained encoders, not only do the results on rare and OOV words get much better, but even highly frequent words improve by a large margin. This suggests that the benefits of pre-training include not just that each individual word is known better, which may also be captured by static embeddings, but also that contextualized pre-training may be able to identify higher-order structured patterns.

When comparing the models with pre-trained encoders, the trends are very similar to the overall LAS scores. A slightly surprising phenomenon is that, although our models are trained on much less data than BERT, the performance gaps are still relatively consistent across different frequency bins.
This may suggest that even for rare or OOV words, their contexts can be signals that are strong enough for syntax prediction.

# 3.4.2 On Higher-order Matches

A dependency tree is a collection of dependency edges, which are not isolated but interact with each other, forming higher-order structures. To investigate how pre-trained encoders help predict higher-order structures, we specify some frame patterns and calculate the higher-order matching accuracies. Here, we use "frame" to denote a collection of dependency edges which form a pre-defined pattern. Accuracy is calculated by counting how many times all the dependency edges in a specific frame are correctly predicted.

We investigate five frame patterns: 1) pred: all edges connecting a predicate and its core argument children, 2) mwe: all multi-word expression (MWE) edges connected to the head word of an MWE phrase, 3) conj: all edges related to a conjunction, 4) expl: an expletive edge and its core argument siblings, 5) acl: an adjectival clause modifier and all its core argument children. Please refer to the Appendix for examples and more detail about the extraction of these higher-order patterns.

Figure 5 shows the results. We can again observe that static word embeddings improve higher-order accuracies only to a limited extent, while pre-trained encoders tell a very different story. For the "pred" patterns, the trends are very similar to the overall LAS results, where LBag is slightly worse than MaskLM and Hybrid is better. The interesting cases are "mwe" and "conj", where LBag mostly performs better than MaskLM. The reason might be that these patterns are more fixed in terms of word order, which may be captured better by ordering pre-training. For the last two types, the results are mixed for different languages. Nevertheless, the ordering pre-trained models can still achieve comparable or sometimes better results than MaskLM.
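The all-edges-correct criterion for frame matching can be made concrete with a small sketch (our own illustrative code with hypothetical toy inputs, not the paper's implementation; it is simplified to unlabeled head matches, whereas the paper's frames involve labeled edges):

```python
def frame_accuracy(frames, gold_heads, pred_heads):
    """A frame (a list of dependent indices) counts as matched only
    when every dependency edge in it is predicted correctly."""
    if not frames:
        return 0.0
    matched = sum(
        all(pred_heads[d] == gold_heads[d] for d in frame)
        for frame in frames
    )
    return matched / len(frames)

# Toy example: heads use -1 for the root.
gold_heads = [2, 2, -1]   # tokens 0 and 1 both attach to token 2
pred_heads = [2, 0, -1]   # token 1 gets the wrong head
frames = [[0, 1], [2]]    # a two-edge frame and a one-edge frame
acc = frame_accuracy(frames, gold_heads, pred_heads)  # 0.5: only [2] matches
```

A single wrong edge thus invalidates the whole frame, which is what makes this metric stricter than per-edge attachment scores.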
+ +# 3.4.3 On Head Errors + +Finally, we investigate a special error pattern in dependency parsing, for which Figure 6 shows an example. Here, all the predicted edges are wrong, but there seems to be only one head selection error: "Epic" is an apposition modifier of "movie", but the model picks "Epic" as the head, leading to all other errors. In constituency trees, an attachment error may lead to multiple wrong brackets (Kummerfeld et al., 2012). In contrast, in dependency trees, a + +![](images/c9714bb4e07cc6fcc4ed9b6850ea19e3dc004915db7b8042cd80ab71ab04fecc.jpg) +Figure 5: Comparisons of higher-order matching accuracies on dependency parsing (on development sets, with 1k training). There are no results for "fi-expl" since in the Finnish (TDT) Treebank we adopt, "expl" is not used. + +![](images/8002a3e7dd9deece9d3b03ec820bf766f0d414a871b6a1526ea71eceb5de789b.jpg) +Figure 6: An example of head error. Here, the edges above the tokens are gold ones and the edges below are predictions. The red edge indicates the back edge, which is directly reversed in this case. + +pure attachment error may influence no other edges, but head errors may lead to multiple related errors. + +In the pattern of head errors, the predicted edge that forms a back edge in the original gold tree can usually be the signature. The prediction of a back edge indicates that a word is wrongly attached to one of its descendants in the gold tree. In addition to the wrongly predicted back edge itself, there must be at least another error, since loops are not allowed in trees. The example in Figure 6 shows a special case where the back edge is a directly reversed one, where the head and the modifier are reversely predicted. This type of 1-step back edges usually indicates local head errors, while there can be back edges involving multiple steps, which usually suggest more complex structured errors. + +Figure 8 shows the results on back edges. 
Firstly, as in the previous analyses, the pre-trained models clearly predict fewer back edges and thus make fewer head errors, again suggesting structural improvements. Moreover, comparing the 1-step back-edge percentages, the pre-trained models also have higher rates, indicating that their head errors are more local. Further comparing different pre-training strategies, we can see that, except for Finnish, MaskLM predicts fewer back edges and makes more local head errors (indicated by higher 1-step back-edge percentages) than LBag. This suggests that LM pre-training, which directly predicts lexical items, may be more sensitive to information about head words.

![](images/88cba390e80077641856d9b4e4fd4ba2f6a38684342349aac4828a552cf7b265.jpg)
Figure 7: Illustration of a multi-step back edge. Here, the edges above the tokens are gold ones (notice that in the actual sequence, the tokens do not necessarily appear in left-to-right order). The red edge below indicates an $n$-step back edge for the gold tree.

We further investigate errors$^5$ that might be related to head errors. We adopt a relatively simple strategy: first identify all back edges, and then include other erroneous edges that might be related to any head error. We use the diagram in Figure 7 to illustrate our criterion for relatedness. We mark three types of erroneous edges as head-error related: 1) the back edge itself $(h_n \to h_0)$, 2) any wrongly predicted children of $h_n$ whose gold head should be one of $[h_0, h_1, \dots, h_{n-1}]$, 3) any errors in the head prediction of the tokens $[h_0, h_1, \dots, h_{n-1}]$. This criterion may miss or over-predict related errors; nevertheless, we find it a reasonable approximation.

![](images/2c1a7e6c9d3da1e8c52045456ab701c9781db1f83763acab0db0032a83a08e82.jpg)
Figure 8: Results on back edges (on development sets, with 1k training). The light bars indicate the number of all back edges, while the darker and shaded parts represent the number of 1-step back edges. The numbers on the $x$-axis indicate the percentage of 1-step back edges (which indicate more local errors) among all back edges.

![](images/8a6de1e64ce0eb91734d8b781d9529a6b0ea3db8107d7d9e850b2a0d8ea9dca7.jpg)

![](images/c1f6e2e434e27d66209bfe23aae5fdd669cbab7a5b31bb4366aa72e41b7a5a5b.jpg)

![](images/3ba688b06af3b2cf018085fcf04ca5b338b06762622b1a3bc7204c8ff4119356.jpg)

![](images/b53c7b485285891fd88a0b371451b753f24bfa02faa0caf7c7f8b816ffac3909.jpg)

![](images/b8aa1e930fd2ce3ff0ac77c3d7307148a8bea0e907e1fe8b31df5dc2677f0a77.jpg)

![](images/c61020ffda2d7689e373b5343a9bb789f29859a74940ee1103d06a408ff531cc.jpg)

![](images/457dae6dec710a20aaf8e7a6ee3752a43292d7557589b9bbbdb3626211f16680.jpg)
Figure 9: Results on head-error related errors (on development sets, with 1k training). The light bars indicate the number of total erroneous edges, while the darker and shaded parts represent the number of those that are related to head errors. The numbers on the $x$-axis indicate the relatedness rates: the percentage of head-error related erroneous edges among all erroneous edges.

Figure 9 shows the results. First, as in Figure 8, the pre-trained models are less influenced by head errors, again suggesting structural improvements. Further comparing different pre-training strategies, MaskLM is generally less influenced by head errors, as shown by either lower head-error related error counts or lower relatedness rates.
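The back-edge check described above — a word wrongly attached to one of its gold descendants, $n$ head-hops below it — can be sketched as follows (our own illustrative reconstruction; the index conventions and names are assumptions, not the authors' code):

```python
def back_edge_steps(word, gold_heads, pred_heads):
    """Return n > 0 if the predicted head of `word` is one of its gold
    descendants, n gold-head hops below it (an n-step back edge);
    return 0 otherwise. Heads use -1 for the root."""
    cur = pred_heads[word]
    if cur < 0:
        return 0
    steps = 1
    cur = gold_heads[cur]
    while cur >= 0:
        if cur == word:
            return steps
        steps += 1
        cur = gold_heads[cur]
    return 0

# Directly reversed edge (the Figure 6 case): gold tree has token 1
# attached to token 0, but the prediction attaches token 0 to token 1.
assert back_edge_steps(0, gold_heads=[-1, 0], pred_heads=[1, -1]) == 1
```

A 1-step result corresponds to a directly reversed edge; larger values indicate the more complex multi-step back edges of Figure 7.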
With further analysis on one typical structured task, we show that pre-trained encoders can bring improvements in a structured way. We hope this empirical work can shed some light and inspire future work on exploring how pre-trained contextualized encoders capture language structures.

# References

Wasi Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard Hovy, Kai-Wei Chang, and Nanyun Peng. 2019. On difficulties of cross-lingual transfer with order differences: A case study on dependency parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2440-2452, Minneapolis, Minnesota. Association for Computational Linguistics.

Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2016. Many languages, one parser. Transactions of the Association for Computational Linguistics, 4:431-444.

Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978-2988, Florence, Italy. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In ICLR.

Matthew S Dryer. 2007. Word order. Language typology and syntactic description, 1:61-131.

Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization.
arXiv preprint arXiv:1412.6980. +Dan Kondratyuk and Milan Straka. 2019. 75 languages, 1 model: Parsing universal dependencies universally. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2779-2795, Hong Kong, China. Association for Computational Linguistics. +Artur Kulmizev, Miryam de Lhoneux, Johannes Gontrum, Elena Fano, and Joakim Nivre. 2019. Deep contextualized word embeddings in transition-based and graph-based dependency parsing - a tale of two parsers revisited. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2755-2768, Hong Kong, China. Association for Computational Linguistics. + +Jonathan K. Kummerfeld, David Hall, James R. Curran, and Dan Klein. 2012. Parser showdown at the wall street corral: An empirical investigation of error types in parser output. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1048-1059, Jeju Island, Korea. Association for Computational Linguistics. +John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, pages 282-289. +Yijia Liu, Yue Zhang, Wanxiang Che, and Bing Qin. 2015. Transition-based syntactic linearization. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 113-122, Denver, Colorado. Association for Computational Linguistics. +Ryan McDonald and Joakim Nivre. 2007. Characterizing the errors of data-driven dependency parsing models. 
In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 122-131, Prague, Czech Republic. Association for Computational Linguistics. +Ryan McDonald and Joakim Nivre. 2011. Analyzing and integrating dependency parsers. Computational Linguistics, 37(1):197-230. +Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multilingual dependency parsing. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 629-637, Jeju Island, Korea. Association for Computational Linguistics. +Joakim Nivre, Mitchell Abrams, Željko Agić, and et al. 2019. Universal dependencies 2.4. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (UFAL), Faculty of Mathematics and Physics, Charles University. +Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics. +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9. + +Teemu Ruokolainen, Pekka Kauppinen, Miikka Silfverberg, and Krister Lindén. 2019. A finnish news corpus for named entity recognition. arXiv preprint arXiv:1908.04212. +Allen Schmaltz, Alexander M. Rush, and Stuart Shieber. 2016. Word ordering without syntax. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2319-2324, Austin, Texas. Association for Computational Linguistics. +Magda Ševčíková, Zdeněk Žabokrtský, and Oldřich Krůza. 2007. 
Named entities in czech: Annotating data and developing NE tagger. In Lecture Notes in Artificial Intelligence, Proceedings of the 10th International Conference on Text, Speech and Dialogue, Lecture Notes in Computer Science, pages 188-195, Berlin / Heidelberg. Springer. +Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464-468, New Orleans, Louisiana. Association for Computational Linguistics. +Manuela Speranza. 2009. The named entity recognition task at evalita 2009. In EVALITA 2009. +Oscar Tackstrom, Ryan McDonald, and Joakim Nivre. 2013. Target language adaptation of discriminative transfer parsers. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1061-1071, Atlanta, Georgia. Association for Computational Linguistics. +Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147. +Wei Wang, Bin Bi, Ming Yan, Chen Wu, Jiangnan Xia, Zuyi Bao, Liwei Peng, and Luo Si. 2020. Structbert: Incorporating language structures into pre-training for deep language understanding. In International Conference on Learning Representations. +Wenhui Wang, Baobao Chang, and Mairgup Mansur. 2018. Improved dependency parsing using implicit word connections learned from unlabeled data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2857-2863, Brussels, Belgium. Association for Computational Linguistics. +Jie Yang, Shuailong Liang, and Yue Zhang. 2018. Design challenges and misconceptions in neural sequence labeling. 
In Proceedings of the 27th International Conference on Computational Linguistics, pages 3879-3889, Santa Fe, New Mexico, USA. Association for Computational Linguistics.

Xingxing Zhang, Jianpeng Cheng, and Mirella Lapata. 2017. Dependency parsing as head selection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 665-676, Valencia, Spain. Association for Computational Linguistics.

Yue Zhang and Stephen Clark. 2015. Discriminative syntax-based word ordering for text generation. Computational Linguistics, 41(3):503-538.

# Appendices

# A Detailed Experiment Settings

In this section, we describe the details of our experiment settings, mainly including datasets and hyper-parameter settings.

# A.1 Datasets

Languages In this work, we explore four languages from different language family subdivisions: English (Germanic), Finnish (Uralic), Czech (Slavic) and Italian (Romance). It may be interesting to see how the effects of pre-training are influenced by specific language characteristics, for example, the agglutination in Finnish and the relatively free word order in Czech. We would like to include more languages in future work, especially those in different language families.

Unlabeled data For pre-training, we use the unlabeled data collected from the 2018-Fall Wiki-dump$^6$. We extract raw texts using WikiExtractor$^7$ and then do sentence-splitting and tokenization using UDPipe$^8$. Due to the limitation of computational resources, for each language, we sample 1M sentences whose length is between 5 and 80 for the purpose of pre-training. Our empirical results show that for the basic structured prediction tasks explored in this work, such a relatively small amount of unlabeled data is already enough to bring obvious improvements.

Vocabularies Except for models that directly use pre-trained BERT, all models regard words as the basic input and modeling units.
Therefore, for pre-trained encoders, we collect vocabularies from the unlabeled corpus, filtering out rare words that appear fewer than five times. Table 4 summarizes
| Lang. | NER Train | NER Dev | NER Test | Parsing/POS Train | Parsing/POS Dev | Parsing/POS Test |
| --- | --- | --- | --- | --- | --- | --- |
| en | 15.0k/203.6k | 3.5k/51.4k | 3.7k/46.4k | 12.5k/204.6k | 2.0k/25.1k | 2.1k/25.1k |
| fi | 13.5k/180.2k | 1.0k/13.6k | 3.5k/46.4k | 12.2k/162.8k | 1.4k/18.3k | 1.6k/21.1k |
| cs | 7.2k/160.0k | 0.9k/20.0k | 0.9k/20.1k | 68.5k/1.2m | 9.3k/159.3k | 10.1k/173.9k |
| it | 10.0k/189.1k | 1.2k/23.4k | 4.1k/86.4k | 13.1k/276.0k | 0.6k/11.9k | 0.5k/10.4k |
+ +Table 3: Statistics (#Sent./#Token) of the original Parsing/POS and NER datasets. In our experiments, we adopt the original development and test sets, but sample training sets with different sizes from the original training sets. + +
| Lang. | #Sent. | #Token | #Vocab | OOV% |
| --- | --- | --- | --- | --- |
| en | 1M | 23.6M | 103k | 2.7% |
| fi | 1M | 14.1M | 177k | 10.9% |
| cs | 1M | 19.2M | 175k | 5.1% |
| it | 1M | 25.3M | 128k | 2.6% |
the related statistics. We adopt word-level inputs mainly to follow the conventions of the target tasks explored in this work and to compare with baseline models without pre-trained encoders. It will be interesting to explore other input schemes (such as sub-words as in BERT) in future work, which is orthogonal to the main focus of this work.

Target tasks We explore three typical structured prediction tasks: dependency parsing, part-of-speech (POS) tagging and Named Entity Recognition (NER). For the tagging and parsing tasks, we utilize annotations from UDv2.4$^9$. Specifically, we use the following treebanks: "English-EWT", "Finnish-TDT", "Czech-PDT" and "Italian-ISDT". For NER, we utilize various datasets, including CoNLL03$^{10}$ (Tjong Kim Sang and De Meulder, 2003) for English, Digitoday$^{11}$ (Ruokolainen et al., 2019) for Finnish, the Czech Named Entity Corpus$^{12}$ (Ševčíková et al., 2007) for Czech and EVALITA 2009$^{13}$ (Speranza, 2009) for Italian. We only adopt simple settings for the NER tasks, specifically, ignoring nested annotations for Finnish NER and considering Supertypes for Czech NER. For it-NER, we take the first 10k sentences as the training set and the remaining 1.2k as the development set. Table 3 lists the

Table 4: Statistics of the unlabeled Wiki corpus for pre-training. For each language (Lang.), we sample 1M sentences ("#Sent"). "#Token" indicates the number of tokens (words), "#Vocab" denotes the vocabulary size after rare-word filtering. The final column represents the out-of-vocabulary (OOV) rate over the 1M corpus.
| Group | Hyper-parameter | Value |
| --- | --- | --- |
| Embeddings | $d_{emb}$ | 300 |
| | $d_{char}$ | 50 |
| | $d_{proj}$ | 512 |
| Encoder | $N_{layer}$ | 6 |
| | $d_{model}$ | 512 |
| | $d_{ff}$ | 1024 |
| | position-encoding | Relative |
| PreTrain | optimizer | Adam |
| | learning-rate | 4e-4 |
| | warmup-steps | 5k |
| | total-steps | 200k |
| | batch-size | 480 |
| Decoding | POS | Enumeration |
| | Parsing | Graph-based (o1) |
| | NER | CRF |
| FineTune | optimizer | Adam |
| | learning-rate | 2e-4 |
| | total-epochs | 250 |
| | batch-size | 80 |
Table 5: Hyper-parameter settings of the model and training.

statistics of the original datasets.

We mainly follow the default dataset splits, but for the training set, we explore three different training sizes by sampling 1k, 5k and 10k sentences$^{14}$. These settings aim at exploring how pre-trained encoders can improve the structured learners in middle- and low-resource settings. For evaluation, POS tagging is evaluated by tagging accuracy and NER is evaluated by the standard F1 score. For dependency parsing, we report first-level Labeled Attachment Scores (LAS) over all tokens including punctuation.

# A.2 Hyper-parameter Settings

Table 5 lists our main hyper-parameter settings.

Encoder Throughout our experiments, we adopt Transformer encoders with almost the same architecture. For the input part of the encoder, we include representations of words and characters. Word representations are from a randomly initialized word lookup table, while character representations are from a character-level CNN. Further, a linear layer is added to project these input features to the model dimension. Notice that there are no other input factors, since these are the ones that are directly available from the unlabeled corpus.

![](images/8b8464dc7ff073a7e2f85f5a8971aa4d5b5e14d13abfcd6c5d58c7adf17097ce.jpg)

![](images/6172164768cad160c9270973ee75f4bce922ed8a06b67d3ffe45a04f0ae014d3.jpg)

![](images/315d241ddac80396c671d56e80569ce358798893ba0122d5da64ef6e32318b1b.jpg)

![](images/b3d70e9f1207d0df0da6a51deede1c7da6c4feb9a3abb6a7e46cb47526a623fd.jpg)

![](images/8810038f4752d36511dd1e3b7b21d05e0e0f53ab41918a059d68ad1a405e3a0d.jpg)
Figure 10: Examples of the higher-order frame patterns. The red solid edges are included, while the others (black dotted ones) are not.

Pre-training We adopt almost identical pre-training schemes for all pre-training strategies, including optimizer, learning rate scheme and batch sizes.
We employ one RTX 2080 Ti GPU for the pre-training. To fit the GPU memory, we split one mini-batch into multiple pieces and do gradient accumulation. The pre-training stage takes around 3 days for the MaskLM, LBag and Hybrid strategies, while the BiLM requires around 5 days.

Decoders For specific target tasks, we specify corresponding decoders. Since our main focus is not on decoders, we adopt the standard choices for these tasks. For dependency parsing, we adopt a non-projective first-order (o1) graph-based decoder. For POS tagging, we do simple enumeration and select the maximally scored POS tag for each word. Since dependency parsing and POS tagging share the same datasets, we apply simple multi-task learning and train one joint model for these two tasks. For NER, we adopt a standard CRF layer and perform decoding with the Viterbi algorithm.

Fine-tuning For the training or fine-tuning of the target tasks, we also adopt similar schemes. In addition, the learning rate is decreased by a decay rate of 0.75 every 8 epochs when there are no improvements on the development set, which is also utilized for model selection. The training on target tasks usually takes several hours, depending on the training size.

# B Details of Analysis

# B.1 Details on Higher-order Matches

We provide extraction details and examples for the five patterns we explore. We first define several groupings of dependency relations according to the UD documentation$^{15}$:

- $\mathbf{PRED} = \{\text{csubj, ccomp, xcomp, advcl, acl, root}\}$. This set denotes dependency relations where the modifier is usually a clausal predicate.
- $\mathbf{CORE} = \{\text{nsubj, obj, iobj, csubj, ccomp, xcomp}\}$. This set includes the core arguments of predicates.
- $\mathbf{MWE} = \{\text{fixed, flat, compound}\}$. This set includes the Multi-Word Expression (MWE) dependency relations.
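These groupings, combined with the filter/extractor scheme of Table 6, can be sketched in runnable form for the "pred" pattern (a sketch built on our own toy `Word` class, not the authors' data structures):

```python
PRED = {"csubj", "ccomp", "xcomp", "advcl", "acl", "root"}
CORE = {"nsubj", "obj", "iobj", "csubj", "ccomp", "xcomp"}

class Word:
    """Minimal stand-in for a parsed token: its own dependency
    relation label and its dependent children."""
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

def extract_frames(words, filter_fn, extractor_fn):
    """Apply the filter to each word; when it fires, run the extractor
    to collect the frame's edges (empty frames are skipped)."""
    frames = []
    for w in words:
        if filter_fn(w):
            frame = extractor_fn(w)
            if frame:
                frames.append(frame)
    return frames

# The "pred" row of Table 6: a clausal predicate and its CORE children.
pred_filter = lambda w: w.label in PRED
pred_extractor = lambda w: [c for c in w.children if c.label in CORE]

subj, obj, punct = Word("nsubj"), Word("obj"), Word("punct")
root = Word("root", [subj, obj, punct])
frames = extract_frames([root, subj, obj, punct], pred_filter, pred_extractor)
# One frame: the root predicate with its nsubj and obj children.
```

The other pattern rows of Table 6 plug into `extract_frames` the same way, each with its own filter/extractor pair.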
training sizes.

# B Details of Analysis

# B.1 Details on Higher-order Matches

We provide extraction details and examples for the five patterns we explore. We first define several groupings of dependency relations according to the UD documentation:

- $\mathbf{PRED} = \{ \text{csubj, ccomp, xcomp, advcl, acl, root} \}$ . This set denotes dependency relations where the modifier is usually a clausal predicate.
- $\mathbf{CORE} = \{\text{nsubj, obj, iobj, csubj, ccomp, xcomp}\}$ . This set includes the core arguments of predicates.
- $\mathbf{MWE} = \{\text{fixed, flat, compound}\}$ . This set includes the Multi-Word Expression (MWE) dependency relations.

To extract the specified patterns, we go through each word $w$ and apply a filter to decide whether there is a frame of the kind we are looking for. If there is, we apply the extractor to obtain all the related dependency edges, forming the frame that we want to extract. Table 6 describes the extraction rules (the filters and extractors) and Figure 10 further provides some examples.

# C Extra Results

# C.1 Results on Development Sets

Figure 11 shows the results on development sets, whose patterns are similar to those of the test sets shown in the main text.
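As a minimal runnable sketch of the filter/extractor scheme in Table 6, shown here for the mwe pattern (the `Word` class and its `label`/`children` attributes are hypothetical stand-ins for the parser's actual data structures):

```python
# Sketch of the filter/extractor frame-extraction scheme (Table 6),
# instantiated for the "mwe" pattern. The Word class is hypothetical.

MWE = {"fixed", "flat", "compound"}

class Word:
    def __init__(self, label, children=None):
        self.label = label              # dependency relation to the head
        self.children = children or []  # dependent words

def mwe_filter(w):
    # Filter: is there an MWE frame rooted at w?
    return any(c.label in MWE for c in w.children)

def mwe_extractor(w):
    # Extractor: collect the MWE edges hanging off w, forming the frame.
    return [c for c in w.children if c.label in MWE]

def extract_frames(words, filt, extractor):
    # Go through each word, apply the filter, then the extractor.
    return [extractor(w) for w in words if filt(w)]
```

For the other patterns, only the filter and extractor lambdas change, as listed in Table 6.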
| Pattern | Filter | Extractor |
| --- | --- | --- |
| pred | `lambda w: w.label in PRED` | `[c for c in w.children if c.label in CORE]` |
| mwe | `lambda w: any(c.label in MWE for c in w.children)` | `[c for c in w.children if c.label in MWE]` |
| conj | `lambda w: any(c.label=="conj" for c in w.children)` | `[c for c in w.children if c.label=="conj"] + [g for g in w.grandchildren if g.label=="cc"]` |
| expl | `lambda w: any(c.label=="expl" for c in w.children)` | `[c for c in w.children if c.label=="expl"] + [c for c in w.children if c.label in CORE]` |
| acl | `lambda w: w.label=="acl"` | `[w] + [c for c in w.children if c.label in CORE]` |
+ +Table 6: Filter and extractor functions for the frame pattern extraction (in Python-styled pseudocode). We go through each word $w$ and apply the filter. If the filter returns True, then the extractor is applied to extract all related dependency edges, forming the desired frame. + +![](images/11d448664f8d4755700df25e348869f12e8c155b423e0c0c91894f7b11561864.jpg) +Figure 11: Development results for dependency parsing (LAS), POS tagging (Accuracy%) and NER (F1 score). \ No newline at end of file diff --git a/anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/images.zip b/anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..46bcb7a0cec97d7a104f419c17bc1ad13154be58 --- /dev/null +++ b/anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:355e9e34e74792e9dfc8d49809ed0274ca9a4f6cb5ce8ce1ba9087d084cdeb0c +size 1197091 diff --git a/anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/layout.json b/anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..bcc6ca1ce489e689f65ac7075e6f260ea73d66df --- /dev/null +++ b/anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:19a04f210386fd04a2a2b7847cf2c8fd848484f5074108240d143ae378b2f360 +size 397119 diff --git a/anempiricalinvestigationofbeamawaretraininginsupertagging/0f0d87d3-a9dc-4432-b09a-5f988a3b0d5f_content_list.json b/anempiricalinvestigationofbeamawaretraininginsupertagging/0f0d87d3-a9dc-4432-b09a-5f988a3b0d5f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..7f879faf77c2830cbe2a999e1d74e3efe71ddf61 --- /dev/null +++ 
b/anempiricalinvestigationofbeamawaretraininginsupertagging/0f0d87d3-a9dc-4432-b09a-5f988a3b0d5f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7ddc2f2a3853f852be77a118bc55f96bab0de56142831adacc88bc42c8fdd4d8 +size 65345 diff --git a/anempiricalinvestigationofbeamawaretraininginsupertagging/0f0d87d3-a9dc-4432-b09a-5f988a3b0d5f_model.json b/anempiricalinvestigationofbeamawaretraininginsupertagging/0f0d87d3-a9dc-4432-b09a-5f988a3b0d5f_model.json new file mode 100644 index 0000000000000000000000000000000000000000..70c50fbf4f5b10ba5ca3e0997e092b60b00428c7 --- /dev/null +++ b/anempiricalinvestigationofbeamawaretraininginsupertagging/0f0d87d3-a9dc-4432-b09a-5f988a3b0d5f_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc24a53ccafe764e841a8453d4b3bda3fb4ae7857d72631431078a5b6c9d988b +size 76717 diff --git a/anempiricalinvestigationofbeamawaretraininginsupertagging/0f0d87d3-a9dc-4432-b09a-5f988a3b0d5f_origin.pdf b/anempiricalinvestigationofbeamawaretraininginsupertagging/0f0d87d3-a9dc-4432-b09a-5f988a3b0d5f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ac60e2630eb22dad9ef52a0d5837cc7d9df7324b --- /dev/null +++ b/anempiricalinvestigationofbeamawaretraininginsupertagging/0f0d87d3-a9dc-4432-b09a-5f988a3b0d5f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f7bd0b6ae761176c0e0907f0222e77c82bfc26c7e8a7e584d48634e2c21ec357 +size 974741 diff --git a/anempiricalinvestigationofbeamawaretraininginsupertagging/full.md b/anempiricalinvestigationofbeamawaretraininginsupertagging/full.md new file mode 100644 index 0000000000000000000000000000000000000000..3870c8648c126897ce127a24c32c4f35723d74a7 --- /dev/null +++ b/anempiricalinvestigationofbeamawaretraininginsupertagging/full.md @@ -0,0 +1,265 @@ +# An Empirical Investigation of Beam-Aware Training in Supertagging + +Renato Negrinho1 + +Matthew R. Gormley1 + +Geoffrey J. 
Gordon$^{1,2}$

Carnegie Mellon University$^{1}$, MSR Montreal$^{2}$

{negrinho,mgormley,ggordon}@cs.cmu.edu

# Abstract

Structured prediction is often approached by training a locally normalized model with maximum likelihood and decoding approximately with beam search. This approach leads to mismatches as, during training, the model is not exposed to its mistakes and does not use beam search. Beam-aware training aims to address these problems, but it is not yet widely used due to a lack of understanding about how it impacts performance, when it is most useful, and whether it is stable. Recently, Negrinho et al. (2018) proposed a meta-algorithm that captures beam-aware training algorithms and suggests new ones, but unfortunately did not provide empirical results. In this paper, we begin an empirical investigation: we train the supertagging model of Vaswani et al. (2016) and a simpler model with instantiations of the meta-algorithm. We explore the influence of various design choices and make recommendations for choosing them. We observe that beam-aware training improves performance for both models, with large improvements for the simpler model, which must effectively manage uncertainty during decoding. Our results suggest that a model must be learned with search to maximize its effectiveness.

# 1 Introduction

Structured prediction often relies on models that are trained with maximum likelihood and use beam search for approximate decoding. This procedure leads to two significant mismatches between the training and testing settings: the model is trained on oracle trajectories and therefore does not learn about its own mistakes, and the model is trained without beam search and therefore does not learn how to use the beam effectively for search.

Previous algorithms have addressed one or the other of these mismatches.
For example, DAgger (Ross et al., 2011) and scheduled sampling (Bengio et al., 2015) use the learned model to visit non-oracle states at training time, but do not use beam search (i.e., they keep a single hypothesis). Early update (Collins and Roark, 2004), LaSO (Daumé and Marcu, 2005), and BSO (Wiseman and Rush, 2016) are trained with beam search, but do not expose the model to beams without a gold hypothesis (i.e., they either stop or reset to beams with a gold hypothesis).

Recently, Negrinho et al. (2018) proposed a meta-algorithm that instantiates beam-aware algorithms as a result of choices for the surrogate loss (i.e., which training loss to incur at each visited beam) and data collection strategy (i.e., which beams to visit during training). A specific instantiation of their meta-algorithm addresses both mismatches by relying on an insight on how to induce training losses for beams without the gold hypothesis: for any beam, its lowest cost neighbor should be scored sufficiently high to be kept in the successor beam. To induce these training losses, it is sufficient to be able to compute the best neighbor of any state (often called a dynamic oracle (Goldberg and Nivre, 2012)). Unfortunately, Negrinho et al. (2018) do not provide empirical results, leaving open questions such as whether instances of the meta-algorithm can be trained robustly, when beam-aware training is most useful, and how the design choices affect performance.

Contributions We empirically study beam-aware algorithms instantiated through the meta-algorithm of Negrinho et al. (2018). We tackle supertagging, as it is a sequence labelling task with an easy-to-compute dynamic oracle and a moderately sized label set (approximately 1000 labels), which may require more effective search. We examine two supertagging models (one from Vaswani et al. (2016) and a simplified version designed to be heavily reliant on search) and train them with instantiations of the meta-algorithm.
We explore how design choices influence performance, and give recommendations based on our empirical findings. For example, we find that perceptron losses perform consistently worse than margin and log losses. We observe that beam-aware training can have a large impact on performance, particularly when the model must use the beam to manage uncertainty during prediction. Code for reproducing all results in this paper is available at https://github.com/negrinho/beam_learn_supertagging.

# 2 Background on learning to search and beam-aware methods

For convenience, we reuse notation introduced in Negrinho et al. (2018) to describe their meta-algorithm and its components (e.g., scoring function, surrogate loss, and data collection strategy). See Figure 1 and Figure 2 for an overview of the notation. When relevant, we instantiate notation for left-to-right sequence labelling under the Hamming cost, of which supertagging is a special case.

Input and output spaces Given an input structure $x \in \mathcal{X}$ , the output structure $y \in \mathcal{Y}_x$ is generated through a sequence of incremental decisions. An example $x \in \mathcal{X}$ induces a tree $G_x = (V_x, E_x)$ encoding the sequential generation of elements in $\mathcal{Y}_x$ , where $V_x$ is the set of nodes and $E_x$ is the set of edges. The leaves of $G_x$ correspond to elements of $\mathcal{Y}_x$ and the internal nodes correspond to incomplete outputs. For left-to-right sequence labelling, for a sequence $x \in \mathcal{X}$ , each decision assigns a label to the current position of $x$ and the nodes of the tree encode labelled prefixes of $x$ , with terminal nodes encoding complete labellings of $x$ .

Cost functions Given a golden pair $(x,y)\in \mathcal{X}\times \mathcal{Y}$ , the cost function $c_{x,y}:\mathcal{Y}_x\to \mathbb{R}$ measures how bad the prediction $\hat{y}\in \mathcal{Y}_x$ is relative to the target output structure $y\in \mathcal{Y}_x$ .
Using $c_{x,y}:\mathcal{Y}_x\to \mathbb{R}$ , we define a cost function $c_{x,y}^{*}:V_{x}\to \mathbb{R}$ for partial outputs by assigning to each node $v\in V_x$ the cost of its best reachable complete output, i.e., $c_{x,y}^{*}(v) = \min_{v^{\prime}\in T_{v}}c_{x,y}(v^{\prime})$ , where $T_{v}\subseteq \mathcal{Y}_{x}$ is the set of complete outputs reachable from $v$ . For a left-to-right search space for sequence labelling, if $c_{x,y}:\mathcal{Y}_x\to \mathbb{R}$ is the Hamming cost, the optimal completion cost $c_{x,y}^{*}:V_{x}\to \mathbb{R}$ is the number of mistakes in the prefix, as the optimal completion matches the remaining suffix of the target output.

Dynamic oracles An oracle state is one from which the target output structure can be reached. Often optimal actions can only be computed for oracle states. Dynamic oracles compute optimal actions even for non-oracle states. Evaluating $c_{x,y}^* : V_x \to \mathbb{R}$ at arbitrary states allows us to induce the dynamic oracle: at a state $v \in V_x$ , the optimal action is to transition to the neighbor $v' \in N_v$ with the lowest completion cost. For sequence labelling, this picks the transition that assigns the correct label. For other tasks and metrics, more complex dynamic oracles may exist, e.g., in dependency parsing (Goldberg and Nivre, 2012, 2013). For notational brevity, from now on, we omit the dependency of the search spaces and cost functions on $x \in \mathcal{X}$ , $y \in \mathcal{Y}$ , or both.

Beam search space Given a search space $G = (V, E)$ , the beam search space $G_{k} = (V_{k}, E_{k})$ is induced by choosing a beam size $k \in \mathbb{N}$ and a strategy for generating the successor beam out of the current beam and its neighbors. In this paper, we expand all the elements in the beam and score the neighbors simultaneously. The highest scoring $k$ neighbors are used to form the successor beam. For $k = 1$ , we recover the greedy search space $G$ .
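As a small illustration (a sketch under the notation above, not the authors' implementation), the Hamming completion cost and the top-$k$ successor-beam expansion just described can be written as:

```python
import heapq

def completion_cost(prefix, gold):
    # Optimal completion cost under Hamming cost: the number of mistakes
    # already committed in the labelled prefix, since the best completion
    # simply copies the remaining suffix of the gold output.
    return sum(p != g for p, g in zip(prefix, gold))

def successor_beam(beam, labels, score, k):
    # Expand every hypothesis in the beam by one label, score all
    # neighbors simultaneously, and keep the k highest-scoring ones.
    # `score` is a stand-in for the learned scoring function s(., theta).
    neighbors = [prefix + (y,) for prefix in beam for y in labels]
    return heapq.nlargest(k, neighbors, key=score)
```

For `k = 1` this reduces to greedy decoding, matching the remark about recovering the greedy search space.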
Beam cost functions The natural cost function $c^{*}: V_{k} \to \mathbb{R}$ for $G_{k}$ is created from the element-wise cost function on $G$ , and assigns to each beam the cost of its best element, i.e., for $b \in V_{k}$ , $c^{*}(b) = \min_{v \in b} c^{*}(v)$ . For a transition $(b, b') \in E_{k}$ , we define the transition cost $c(b, b') = c^{*}(b') - c^{*}(b)$ , where $b' \in N_{b}$ , i.e., $b'$ can be formed from the neighbors of the elements in $b$ . A cost increase happens when $c(b, b') > 0$ , i.e., the best complete output reachable in $b$ is no longer reachable in $b'$ .

Policies Policies operate in the beam search space $G_{k}$ and are induced through a learned scoring function $s(\cdot ,\theta):V\to \mathbb{R}$ which scores elements in the original space $G$ . A policy $\pi :V_{k}\rightarrow \Delta (V_{k})$ maps states (i.e., beams) to distributions over next states. We only use deterministic policies, where the successor beam is computed by sorting the neighbors in decreasing order of score and taking the top $k$ .

![](images/95accb67ed84ba36a1f19fc22c4ac02fff825c62863c33e62ea246da6c5a19c5.jpg)
Figure 1: Beam $b$ has neighborhood $A_{b}$ , where $k = |b| = |b^{\prime}| = 3$ and $n = |A_{b}| = 5$ . Edges from elements in $b$ to elements in $A_{b}$ encode neighborhood relationships, e.g., $v_{3}$ has a single neighbor $v_{5}^{\prime}$ . Permutation $\hat{\sigma} : [n] \to [n]$ sorts hypotheses in decreasing order of score, and permutation $\sigma^{*} : [n] \to [n]$ sorts them in increasing order of cost, i.e., $v_{\sigma^{*}(1)}^{\prime}$ is the lowest cost neighbor and $v_{\hat{\sigma}(1)}^{\prime}$ is the highest scoring neighbor. The successor beam $b^{\prime}$ keeps the neighbor states in $A_{b}$ with highest score according to vector $s$ , or equivalently highest rank according to $\hat{\sigma}$ .

![](images/e6f6e17e9eea75e2d06fb12275faf37cb1f4c1f30b2cbbb177bc54cb81fb1100.jpg)

![](images/acbfc1a96d96e6a88fdb6da2151f1e2c0895ba6e9db154d146307c2f73eced27.jpg)

![](images/71003086c803d8958a98febaeb93f8ba3e58d8462fe435ff35c2596054779b3f.jpg)
Figure 2: Sampling a trajectory through the beam search space at training time. A loss $\ell(b_i, \theta)$ is incurred at each visited beam $b_i$ , $i \in [h-1]$ , resulting in total accumulated loss $\ell(b_{1:h}, \theta)$ for beam trajectory $b_{1:h}$ . The terminal beam $b_h$ corresponds to a complete output $y(b_h) \in \mathcal{Y}$ . Transitions between beams are sampled according to a data collection policy $\pi': V_k \to \Delta(V_k)$ . We consider $\pi'$ induced by a scoring function $s(\cdot, \theta): V \to \mathbb{R}$ or cost function $c^*: V \to \mathbb{R}$ . Parameters $\theta$ parametrize the scoring function of the model. Losses $\ell(b_i, \theta)$ are low if the scores of the neighbors of $b_i$ comfortably keep the lowest cost elements in the successor beam (see Section 3.2), and high otherwise. See Figure 1 for the notation used to describe the surrogate loss $\ell(b_i, \theta)$ at each beam $b_i$ .

Scoring function In the non-beam-aware case, the scoring function arises from the way probabilities of complete sequences are computed with the locally normalized model, namely $p(y|x,\theta) = \prod_{i=1}^{h} p(y_i | y_{1:i-1}, x,\theta)$ , where we assume that all outputs for $x \in \mathcal{X}$ have $h$ steps. For sequence labelling, $h$ is the length of the sentence. The resulting scoring function $s(\cdot, \theta): V \to \mathbb{R}$ is $s(v, \theta) = \sum_{i=1}^{j} \log p(y_i | y_{1:i-1}, x, \theta)$ , where $v = y_{1:j}$ and $j \in [h]$ . Similarly, the scoring function that we learn in the beam-aware case is $s(v, \theta) = \sum_{i=1}^{j} \tilde{s}(y_{1:i}, \theta)$ , where $x$ has been omitted, $v = y_{1:j}$ , and $\tilde{s}(\cdot, \theta): V \to \mathbb{R}$ is the learned incremental scoring function.
In Section 4.6, we observe that this cumulative version performs uniformly better than the non-cumulative one.

# 3 Meta-algorithm for learning beam search policies

We refer the reader to Negrinho et al. (2018) for a discussion of how specific choices for the meta-algorithm recover algorithms from the literature.

# 3.1 Data collection strategies

The data collection strategy determines which beams are visited at training time (see Figure 2).

Strategies that use the learned model differ in how they compute the successor beam $b' \in N_b$ when $s(\cdot, \theta)$ leads to a beam without the gold hypothesis, i.e., $c(b, b') > 0$ , where $b' = \{v_{\hat{\sigma}(1)}, \ldots, v_{\hat{\sigma}(k)}\} \subset A_b$ and $A_b = \{v_1, \ldots, v_n\} = \cup_{v \in b} N_v$ . We explore several data collection strategies:

stop If the successor beam does not contain the gold hypothesis, stop collecting the trajectory. Structured perceptron training with early update (Collins and Roark, 2004) uses this strategy.

reset If the successor beam does not contain the gold hypothesis, reset to a beam with only the gold hypothesis. LaSO (Daumé and Marcu, 2005) uses this strategy. For $k = 1$ , we recover teacher forcing, as only the oracle hypothesis is kept in the beam.

continue Ignore cost increases, always using the successor beam. DAgger (Ross et al., 2011) takes this strategy, but does not use beam search. Negrinho et al. (2018) suggest this for beam-aware training but do not provide empirical results.

reset (multiple) Similar to reset, but keep $k - 1$ hypotheses from the transition, i.e., $b' = \{v_{\sigma^{*}(1)}, v_{\hat{\sigma}(1)}, \ldots, v_{\hat{\sigma}(k-1)}\}$ . We might expect this data collection strategy to be closer to continue, as a large fraction of the elements of the successor beam are induced by the learned model.
oracle Always transition to the beam induced by $\sigma^{*}:[n]\to [n]$ , i.e., the one obtained by sorting the costs in increasing order. For $k = 1$ , this recovers teacher forcing. In Section 4.4, we observe that oracle dramatically degrades performance due to increased exposure bias with increased $k$ .

# 3.2 Surrogate losses

Surrogate losses encode the requirement that the model score the best neighbor sufficiently high for it to be kept comfortably in the successor beam. For $k = 1$ , many of these losses reduce to losses used in non-beam-aware training. Given scores $s \in \mathbb{R}^n$ and costs $c \in \mathbb{R}^n$ over the neighbors in $A_{b} = \{v_{1},\ldots ,v_{n}\}$ , we define permutations $\hat{\sigma} : [n] \to [n]$ and $\sigma^{*} : [n] \to [n]$ that sort the elements in $A_{b}$ in decreasing order of scores and increasing order of costs, respectively, i.e., $s_{\hat{\sigma}(1)} \geq \dots \geq s_{\hat{\sigma}(n)}$ and $c_{\sigma^{*}(1)} \leq \dots \leq c_{\sigma^{*}(n)}$ . See Figure 1 for a description of the notation used to describe surrogate losses. Our experiments compare the following surrogate losses:

perceptron (first) Penalize failing to score the best neighbor at the top of the beam (regardless of whether it falls out of the beam).

$$
\ell (s, c) = \max \left(0, s _ {\hat {\sigma} (1)} - s _ {\sigma^ {*} (1)}\right).
$$

perceptron (last) If this loss is positive at a beam, the successor beam induced by the scores does not contain the golden hypothesis.

$$
\ell (s, c) = \max \left(0, s _ {\hat {\sigma} (k)} - s _ {\sigma^ {*} (1)}\right).
$$

margin (last) Penalize margin violations of the best neighbor of the current beam. Compares the score $s_{\sigma^{*}(1)}$ of the correct neighbor with the score $s_{\hat{\sigma} (k)}$ of the neighbor last in the beam.
$$
\ell (s, c) = \max \left(0, s _ {\hat {\sigma} (k)} - s _ {\sigma^ {*} (1)} + 1\right).
$$

cost-sensitive margin (last) Same as margin (last), but weighted by the cost difference of the pair. Wiseman and Rush (2016) use this loss.

$$
\ell (s, c) = c _ {\hat {\sigma} (k), \sigma^ {*} (1)} \max (0, s _ {\hat {\sigma} (k)} - s _ {\sigma^ {*} (1)} + 1),
$$

where $c_{\hat{\sigma}(k), \sigma^*(1)} = c_{\hat{\sigma}(k)} - c_{\sigma^*(1)}$ .

log loss (neighbors) Normalizes over all elements in $A_{b}$ . For beam size $k = 1$ , it reduces to the usual log loss.

$$
\ell (s, c) = - s _ {\sigma^ {*} (1)} + \log \left(\sum_ {i = 1} ^ {n} \exp (s _ {i})\right).
$$

log loss (beam) Normalizes only over the top $k$ neighbors of a beam according to the scores $s$ .

$$
\ell (s, c) = - s _ {\sigma^ {*} (1)} + \log \left(\sum_ {i \in I} \exp (s _ {i})\right),
$$

where $I = \{\sigma^{*}(1),\hat{\sigma} (1),\ldots ,\hat{\sigma} (k)\}$ . The normalization is only over the golden hypothesis $v_{\sigma^{*}(1)}$ and the elements included in the beam. Andor et al. (2016) use this loss.

# 3.3 Training

The meta-algorithm of Negrinho et al. (2018) is instantiated by choosing a surrogate loss, data collection strategy, and beam size. Training proceeds by sampling an example $(x,y)\in \mathcal{X}\times \mathcal{Y}$ from the training set. A trajectory through the beam search space $G_{k}$ is collected using the chosen data collection strategy. A surrogate loss is induced at each non-terminal beam in the trajectory (see Figure 2). Parameter updates are computed based on the gradient of the sum of the losses over the visited beams.

# 4 Experiments

We explore different configurations of the design choices of the meta-algorithm to understand their impact on training behavior and performance.

# 4.1 Task details

We train our models for supertagging, a sequence labelling task where accuracy is the performance metric of interest.
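For concreteness, the surrogate losses of Section 3.2 can be computed from raw score and cost vectors roughly as in the following sketch (the function names are ours, and indices are 0-based positions into the sorted orders):

```python
import math

# Sketch of three of the surrogate losses of Section 3.2, given the
# score vector s and cost vector c over the n neighbors of a beam.
# sigma_hat sorts by decreasing score, sigma_star by increasing cost.

def _orders(s, c):
    sig_hat = sorted(range(len(s)), key=lambda i: -s[i])
    sig_star = sorted(range(len(s)), key=lambda i: c[i])
    return sig_hat, sig_star

def perceptron_first(s, c):
    # max(0, s_{sigma_hat(1)} - s_{sigma_star(1)})
    sig_hat, sig_star = _orders(s, c)
    return max(0.0, s[sig_hat[0]] - s[sig_star[0]])

def margin_last(s, c, k):
    # max(0, s_{sigma_hat(k)} - s_{sigma_star(1)} + 1)
    sig_hat, sig_star = _orders(s, c)
    return max(0.0, s[sig_hat[k - 1]] - s[sig_star[0]] + 1.0)

def log_loss_neighbors(s, c):
    # -s_{sigma_star(1)} + log(sum_i exp(s_i))
    _, sig_star = _orders(s, c)
    return -s[sig_star[0]] + math.log(sum(math.exp(x) for x in s))
```

The remaining losses follow the same pattern, differing only in which sorted indices enter the comparison or the normalizer.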
Supertagging is a good task for exploring beam-aware training as, in contrast to other sequence labelling tasks such as named-entity recognition (Tjong Kim Sang and De Meulder, 2003), chunking (Sang and Buchholz, 2000), and part-of-speech tagging (Marcus et al., 1993), it has a moderate number of labels and is therefore likely to require effective search to achieve high performance. We used the standard splits for CCGBank (Hockenmaier and Steedman, 2007): the training and development sets have, respectively, 39604 and 1913 examples. Models were trained on the training set, and the development set was used to compute validation accuracy at the end of each epoch to keep the best model. As we are performing an empirical study, similarly to Vaswani et al. (2016), we report validation accuracies. Each configuration is run three times with different random seeds and the mean and standard deviation are reported. We replace the words that appear at most once in the training set by UNK. By contrast, no tokenization was done for the training supertags.

![](images/785a63fb56c19f11e1d2ce111397ac178f8ec44933d7391ddfa9cf9df5cad209.jpg)
Figure 3: High-level structure of the two models used in the experiments. The model on the left is from Vaswani et al. (2016). The model on the right is a simplification of the one on the left, namely, it does not have an encoding of the complete sentence at the start of prediction.

![](images/a00f66da8285e313189c383d1fa4b07129fb345794565a3a887ee1c3a177fdc7.jpg)

# 4.2 Model details

We have implemented the model of Vaswani et al. (2016) and a simpler model designed by removing some of its components. The two main differences between our implementation and theirs are that we do not use pretrained embeddings (we train the embeddings from scratch) and that we use the gold POS tags (they use only the pretrained embeddings).

Main model For the model of Vaswani et al.
(2016) (see Figure 3, left), we use 64, 16, and 64 for the dimensions of the word, part-of-speech, and supertag embeddings, respectively. All LSTMs (forward, backward, and LM) have hidden dimension 256. We refer the reader to Vaswani et al. (2016) for the exact description of the model. Briefly, embeddings for the words and part-of-speech tags are concatenated and fed to a bi-directional LSTM, and the outputs of both directions are then fed into a combiner (dimension-preserving linear transformations applied independently to both inputs, added together, and passed through a ReLU non-linearity). The output of the combiner and the output of the LM LSTM (which tracks the supertag prefix up to a prediction point) are then passed to another combiner that generates scores over supertags.

Simplified model We also consider a simplified model that drops the bi-LSTM encoder and the corresponding combiner (see Figure 3, right). The concatenated embeddings are fed directly into the second combiner with the LM LSTM output. Values for the hyperparameters are the same where possible. This model must leverage the beam effectively, as it does not encode the sentence with a bi-LSTM. Instead, only the embeddings for the current position are available, giving a larger role to the LM LSTM over supertags. While supertagging can be tackled with a stronger model, this restriction is relevant for real-time tasks, e.g., when the complete input might not be known upfront.

Training details Models are trained for 16 epochs with SGD with batch size 1 and a cosine learning rate schedule (Loshchilov and Hutter, 2016), starting at $10^{-1}$ and ending at $10^{-5}$ . No weight decay or dropout was used. Training examples are shuffled after each epoch. Results are reported for the model with the best validation performance across all epochs. We use 16 epochs for all models for simplicity and fairness.
This number was sufficient, e.g., we replicated Table 2 by training with 32 epochs and observed minor performance differences (see Table 6). + +
| Strategy | 1 | 2 | 4 | 8 |
| --- | --- | --- | --- | --- |
| *Main model* | | | | |
| oracle / reset | 93.78 ±0.12 | 93.81 ±0.11 | 93.82 ±0.10 | 93.82 ±0.10 |
| continue | 94.04 ±0.07 | 94.05 ±0.07 | 94.05 ±0.07 | 94.06 ±0.07 |
| stop | 93.86 ±0.09 | 93.90 ±0.07 | 93.90 ±0.07 | 93.91 ±0.07 |
| *Simplified model* | | | | |
| oracle / reset | 73.20 ±0.31 | 76.55 ±0.24 | 77.42 ±0.27 | 77.54 ±0.22 |
| continue | 81.99 ±0.04 | 82.30 ±0.03 | 82.37 ±0.08 | 82.41 ±0.08 |
| stop | 74.35 ±0.23 | 77.06 ±0.14 | 77.73 ±0.13 | 77.82 ±0.09 |
+ +Table 1: Development accuracies for models trained with different data collection strategies in a non-beam-aware way (i.e., $k = 1$ ) and decoded with beam search with varying beam size. continue performs best, showing the importance of exposing the model to its mistakes. Differences are larger for the simplified model. + +# 4.3 Non-beam-aware training + +We first train the models with $k = 1$ and then use beam search to decode. Crucially, the model does not train with a beam and therefore does not learn to use it effectively. We vary the data collection strategy. The results are presented in Table 1 and should be used as a reference when reading the other tables to evaluate the impact of beam-aware training. Tables are formatted such that the first and second horizontal halves contain the results for the main model and simplified model, respectively. Each position contains the mean and the standard deviation of running that configuration three times. We use this format in all tables presented. + +The continue data collection strategy (i.e., DAgger for $k = 1$ ) results in better models than training on the oracle trajectories. Beam search results in small gains for these settings. In this experiment, training with oracle is the same as training with reset as the beam always contains only the oracle hypothesis. The performance differences are small for the main model but much larger for the simplified model, underscoring the importance of beam search when there is greater uncertainty about predictions. For the stronger model, the encoding of the left and right contexts with the bi-LSTM provides enough information at each position to predict greedily, i.e., without search. This difference appears consistently in all experiments, with larger gains for the weaker model. + +The gains achieved by the main model by decoding with beam search post-training are very small (from 0.02 to 0.05). 
This suggests that training the model in a non-beam-aware fashion and then using beam search does not guarantee improvements. The model must be learned with search to improve on these results. For the simpler model, larger improvements are observed (from 0.42 to 4.34). Despite the gains with beam search for reset and stop, they are not sufficient to beat the greedy model trained on its own trajectories, yielding 81.99 for continue with $k = 1$ versus 77.54 for oracle and 77.82 for reset, both with $k = 8$ . These results show the importance of the data collection strategy, even when the model is not trained in a beam-aware fashion. These gains are eclipsed by beam-aware training, namely, compare Table 1 with Table 2. See Figure 4 for the evolution of the validation and training accuracies with epochs.

| Strategy | 1 | 2 | 4 | 8 |
| --- | --- | --- | --- | --- |
| *Main model* | | | | |
| oracle | 94.10 ±0.08 | 92.98 ±0.07 | 91.66 ±0.22 | 85.95 ±0.79 |
| reset | 94.20 ±0.11 | 94.34 ±0.06 | 94.33 ±0.01 | 94.42 ±0.04 |
| reset (mult.) | 94.15 ±0.07 | 93.98 ±0.08 | 94.06 ±0.06 | 94.16 ±0.05 |
| continue | 94.15 ±0.02 | 94.35 ±0.05 | 94.37 ±0.04 | 94.33 ±0.04 |
| stop | 93.95 ±0.09 | 94.11 ±0.05 | 94.24 ±0.07 | 94.25 ±0.06 |
| *Simplified model* | | | | |
| oracle | 75.09 ±0.17 | 80.67 ±0.40 | 78.69 ±1.27 | 47.38 ±1.79 |
| reset | 75.06 ±0.16 | 87.21 ±0.14 | 91.24 ±0.02 | 92.46 ±0.09 |
| reset (mult.) | 75.04 ±0.18 | 86.19 ±0.12 | 90.76 ±0.11 | 92.16 ±0.03 |
| continue | 82.01 ±0.06 | 89.17 ±0.08 | 91.80 ±0.12 | 92.69 ±0.01 |
| stop | 75.08 ±0.54 | 87.16 ±0.08 | 90.98 ±0.13 | 92.18 ±0.06 |

Table 2: Development accuracies for beam-aware training with varying data collection strategies.

# 4.4 Comparing data collection strategies

We train both models using the log loss (neighbors), described in Section 3.2, and vary the data collection strategy, described in Section 3.1, and the beam size. Results are presented in Table 2. Contrary to Section 4.3, these models are trained to use beam search, rather than it being an artifact of approximate decoding. Beam-aware training under oracle worsens performance with increasing beam size (due to increasing exposure bias). During training, the model learns to pick the best neighbors for beams containing only close-to-optimal hypotheses, which are likely very different from the beams encountered when decoding. The results for the simplified model are similar: with increasing beam size, performance first improves but then degrades. For the main model, we observe modest but consistent improvements with larger beam sizes across all data collection strategies except oracle. By comparing the results with those in the first row of Table 1, we see that we improve on the model trained with maximum likelihood and decoded with beam search.

The data collection strategy has a larger impact on performance for the simplified model.
continue achieves the best performance. Compare these performances with those for the simplified model in Table 1. For larger beams, the improvements achieved by beam-aware training are much larger than those achieved by non-beam-aware training. For example, 92.69 versus 82.41 for continue with $k = 8$: in the first case, the model is trained in a beam-aware manner ($k = 8$ for both training and decoding), while in the second case, beam search is used only during decoding ($k = 1$ during training but $k = 8$ during decoding). This shows the importance of training with beam search and exposing the model to its mistakes. Without beam-aware training, the model is unable to learn to use the beam effectively. See Figure 5 for the evolution of the training and validation accuracies with training epoch for beam-aware training.

![](images/8ff5e81452fba68314c859f2c4a721beb6789e3aa506810fe7d36a0a8366975b.jpg)
Figure 4: Validation and training accuracies for non-beam-aware training (i.e., $k = 1$) with different data collection strategies for the main (left half) and simplified (right half) models. continue achieves higher accuracies.

![](images/97da5cfba5aa7ad355be68d9016e02f140c75c13ef3d55cd71dc71f62c58691b.jpg)

![](images/f289c07224a8ec0061f7b9a0f34e540f9bc394a0bd9529dc7c215092793ab220.jpg)

![](images/33d0f903f8f2684cc1a3ea80f2aac6e089afbc230fa0c7f6ca5b7dbdc054405d.jpg)

![](images/91b5a51bd7d437f1b9b029df227e58b5ae2e669a884dfb985cbb6f7532f07005.jpg)
Figure 5: Validation and training accuracies for beam-aware training with different data collection strategies and beam sizes for the main (left half) and simplified (right half) models. Larger beam sizes achieve higher performance while overfitting less, and are crucial for the simplified model to achieve higher training and validation accuracies. For smaller beams, continue performs better than reset. All models can be trained stably from scratch. Three runs were aggregated by showing the mean and the standard deviation for each epoch.

![](images/56b6ea57a034032fa5d5d1c7f9ce0d115e663ba981f925804938becfe16576d0.jpg)

![](images/3d6445abcf08076f8777bdb42e40887722e654129fb6ee6bf7bdf4d9f07ce722.jpg)

![](images/b4f957930102c6d93bbfa09e071b7ba1a8fe0e8f9f70f3227e5eebe2b080971c.jpg)

| Loss | $k=1$ | $k=2$ | $k=4$ | $k=8$ |
| --- | --- | --- | --- | --- |
| *Main model* | | | | |
| percep. (first) | 92.81 ± 0.06 | 93.22 ± 0.04 | 93.44 ± 0.02 | 93.52 ± 0.06 |
| percep. (last) | 92.84 ± 0.11 | 93.57 ± 0.06 | 93.86 ± 0.09 | 93.77 ± 0.04 |
| m. (last) | 94.10 ± 0.07 | 94.29 ± 0.07 | 94.27 ± 0.03 | 94.43 ± 0.04 |
| cost-s. m. (last) | 93.98 ± 0.03 | 94.32 ± 0.10 | 94.37 ± 0.03 | 94.33 ± 0.13 |
| log loss (beam) | 92.29 ± 0.07 | 92.09 ± 0.11 | 94.24 ± 0.08 | 94.32 ± 0.02 |
| log loss (neig.) | 94.22 ± 0.00 | 94.29 ± 0.03 | 94.27 ± 0.06 | 94.38 ± 0.01 |
| *Simplified model* | | | | |
| percep. (first) | 77.62 ± 0.14 | 86.32 ± 0.05 | 89.83 ± 0.05 | 91.00 ± 0.07 |
| percep. (last) | 77.67 ± 0.07 | 87.62 ± 0.03 | 90.82 ± 0.16 | 91.98 ± 0.11 |
| m. (last) | 81.75 ± 0.04 | 88.80 ± 0.02 | 91.91 ± 0.05 | 92.81 ± 0.05 |
| cost-s. m. (last) | 81.76 ± 0.05 | 88.92 ± 0.06 | 91.81 ± 0.03 | 92.81 ± 0.03 |
| log loss (beam) | 77.50 ± 0.07 | 88.25 ± 0.08 | 91.46 ± 0.06 | 92.56 ± 0.11 |
| log loss (neig.) | 81.94 ± 0.02 | 89.01 ± 0.10 | 91.75 ± 0.03 | 92.60 ± 0.03 |

Table 3: Development accuracies for the loss functions in Section 3.2.

# 4.5 Comparing surrogate losses

We train both models with continue and vary the surrogate loss and the beam size. Results are presented in Table 3. Perceptron losses (e.g., perceptron (first) and perceptron (last)) performed worse than their margin-based counterparts (e.g., margin (last) and cost-sensitive margin (last)). log loss (beam) yields poor performance for small beam sizes (e.g., $k = 1$ and $k = 2$). This is expected due to small contrastive sets (i.e., at most $k + 1$ elements are used in log loss (beam)). For larger beams, the results are comparable with log loss (neighbors).

# 4.6 Additional design choices

Score accumulation The scoring function was introduced as a sum of prefix terms. A natural alternative is to produce the score for a neighbor without adding it to a running sum, i.e., $s(y_{1:j}, \theta) = \tilde{s}(y_{1:j}, \theta)$ rather than $s(y_{1:j}, \theta) = \sum_{i=1}^{j} \tilde{s}(y_{1:i}, \theta)$. Surprisingly, score accumulation performs uniformly better across all configurations.
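As a concrete sketch of the two scoring variants (hypothetical Python, with `tilde_s` standing in for the per-prefix term $\tilde{s}$; this is not the paper's implementation), the only difference is whether the earlier prefix terms are summed:

```python
# Sketch of the two scoring variants for a prefix y_{1:j}, given a
# per-prefix term `tilde_s` (a hypothetical stand-in for the scorer).

def score_accumulated(prefix, tilde_s):
    # s(y_{1:j}) = sum_{i=1}^{j} s~(y_{1:i}): a running sum over all prefixes.
    return sum(tilde_s(prefix[:i]) for i in range(1, len(prefix) + 1))

def score_unaccumulated(prefix, tilde_s):
    # s(y_{1:j}) = s~(y_{1:j}): only the term for the current prefix.
    return tilde_s(prefix)

if __name__ == "__main__":
    # Toy per-prefix term: the prefix length.
    toy = ["NP", "S\\NP", "N"]
    print(score_accumulated(toy, len))    # 1 + 2 + 3 = 6
    print(score_unaccumulated(toy, len))  # 3
```

The accumulated variant carries information from earlier prefixes into every later score, which is the memory effect discussed next.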
Without score accumulation, for the main model, beam-aware training degraded performance with increasing beam size. For the simplified model, beam-aware training improved on the results in Table 1, but the gains were smaller than those with score accumulation. We observed that the LM LSTM failed to keep track of differences earlier in the supertag sequence, leading to similar scores for the neighbors. Accumulating the scores is a simple memory mechanism that does not require the LM LSTM to learn to propagate long-range information. This performance gap may not exist for models that access information more directly (e.g., transformers (Vaswani et al., 2017) and other attention-based models (Bahdanau et al., 2014)). See Table 4 in the appendix, which compares configurations with and without score accumulation. Performance differences range from 1 to 5 absolute percentage points.

Update on all beams The meta-algorithm of Negrinho et al. (2018) suggests inducing losses on every visited beam, as there is always a correct action captured by appropriately scoring the neighbors. This leads to updating the parameters on every beam. By contrast, other beam-aware work updates only on beams where the transition leads to increased cost (e.g., Daumé and Marcu (2005) and Andor et al. (2016)). We observe that always updating leads to improved performance, similar to the results in Table 3 for the perceptron losses. We therefore recommend inducing losses on every visited beam. See Table 5 in the appendix, which compares configurations trained with and without updating on every beam.

# 5 Related work

Related work uses either imitation learning (often called learning to search when applied to structured prediction) or beam-aware training. Learning to search (Daumé et al., 2009; Chang et al., 2015; Goldberg and Nivre, 2012; Bengio et al., 2015; Negrinho et al., 2018) is a popular approach to structured prediction.
This literature is closely related to imitation learning (Ross and Bagnell, 2010; Ross et al., 2011; Ross and Bagnell, 2014). Ross et al. (2011) address exposure bias by collecting data with the learned policy at training time. Collins and Roark (2004) propose a structured perceptron variant that trains with beam search, updating the model parameters when the correct hypothesis falls out of the beam. Huang et al. (2012) introduce a theoretical framework to analyze the convergence of early update. Zhang and Clark (2008) develop a beam-aware algorithm for dependency parsing that uses early update. Goldberg and Nivre (2012, 2013) introduce dynamic oracles for dependency parsing. Ballesteros et al. (2016) observe that exposing the model to mistakes during training improves a dependency parser. Bengio et al. (2015) make a similar observation and present results on image captioning, constituency parsing, and speech recognition. Beam-aware training has also been used for speech recognition (Collobert et al., 2019; Baskar et al., 2019). Andor et al. (2016) propose an early-update-style algorithm for learning models with a beam, but use a log loss rather than a perceptron loss as in Collins and Roark (2004). Parameters are updated when the golden hypothesis falls out of the beam or when the model terminates with the golden hypothesis in the beam. Wiseman and Rush (2016) use an algorithm similar to that of Andor et al. (2016), but with a margin-based loss, and reset to a beam containing the golden hypothesis when it falls out of the beam. Edunov et al. (2017) use beam search to find a contrastive set to define sequence-level losses. Goyal et al. (2018, 2019) propose a beam-aware training algorithm that relies on a continuous approximation of beam search. Negrinho et al. (2018) introduce a meta-algorithm that instantiates beam-aware algorithms based on choices for beam size, surrogate loss function, and data collection strategy.
They propose a DAgger-like algorithm for beam search.

# 6 Conclusions

Maximum likelihood training of locally normalized models with beam search decoding is the default approach for structured prediction. Unfortunately, it suffers from exposure bias and does not learn to use the beam effectively. Beam-aware training promises to address some of these issues, but is not yet widely used because it is poorly understood. In this work, we explored instantiations of the meta-algorithm of Negrinho et al. (2018) to understand how design choices affect performance. We show that beam-aware training is most useful when substantial uncertainty must be managed during prediction. We make recommendations for instantiating beam-aware algorithms based on the meta-algorithm, such as inducing losses at every beam, using log losses (rather than perceptron-style ones), and preferring the continue data collection strategy (or reset if necessary). We hope that this work provides evidence that beam-aware training can greatly improve performance and can be trained stably, leading to its wider adoption.

# Acknowledgements

We gratefully acknowledge support from 3M | M*Modal. This work used the Bridges system, which is supported by NSF award number ACI-1445606, at the Pittsburgh Supercomputing Center (PSC).

# References

Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. ACL.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv:1409.0473.
Miguel Ballesteros, Yoav Goldberg, Chris Dyer, and Noah A. Smith. 2016. Training with exploration improves a greedy stack-LSTM parser. arXiv:1603.03793.
Murali Karthick Baskar, Lukáš Burget, Shinji Watanabe, Martin Karafiát, Takaaki Hori, and Jan Honza Černocký. 2019.
Promising accurate prefix boosting for sequence-to-sequence ASR. ICASSP.
Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. NeurIPS.
Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daumé, and John Langford. 2015. Learning to search better than your teacher. ICML.
Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. ACL.
Ronan Collobert, Awni Hannun, and Gabriel Synnaeve. 2019. A fully differentiable beam search decoder. arXiv:1902.06022.
Hal Daumé, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine Learning.
Hal Daumé and Daniel Marcu. 2005. Learning as search optimization: Approximate large margin methods for structured prediction. ICML.
Sergey Edunov, Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2017. Classical structured prediction losses for sequence to sequence learning. arXiv:1711.04956.
Yoav Goldberg and Joakim Nivre. 2012. A dynamic oracle for arc-eager dependency parsing. COLING.
Yoav Goldberg and Joakim Nivre. 2013. Training deterministic parsers with non-deterministic oracles. TACL.
Kartik Goyal, Chris Dyer, and Taylor Berg-Kirkpatrick. 2019. An empirical investigation of global and local normalization for recurrent neural sequence models using a continuous relaxation to beam search. NAACL.
Kartik Goyal, Graham Neubig, Chris Dyer, and Taylor Berg-Kirkpatrick. 2018. A continuous relaxation of beam search for end-to-end training of neural sequence models. AAAI.
Julia Hockenmaier and Mark Steedman. 2007. CCGbank: a corpus of CCG derivations and dependency structures extracted from the Penn Treebank. Computational Linguistics, 33(3).
Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured perceptron with inexact search. NAACL.
Ilya Loshchilov and Frank Hutter. 2016. SGDR: Stochastic gradient descent with warm restarts. arXiv:1608.03983.
Mitchell Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics.
Renato Negrinho, Matthew Gormley, and Geoffrey J. Gordon. 2018. Learning beam search policies via imitation learning. NeurIPS.
Stéphane Ross and Andrew Bagnell. 2014. Reinforcement and imitation learning via interactive no-regret learning. arXiv:1406.5979.
Stéphane Ross and Drew Bagnell. 2010. Efficient reductions for imitation learning. AISTATS.
Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. AISTATS.
Erik F. Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task: Chunking. arXiv cs/0009008.
Erik Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. NAACL.
Ashish Vaswani, Yonatan Bisk, Kenji Sagae, and Ryan Musa. 2016. Supertagging with LSTMs. ACL.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. NeurIPS.
Sam Wiseman and Alexander Rush. 2016. Sequence-to-sequence learning as beam-search optimization. ACL.
Yue Zhang and Stephen Clark. 2008. A tale of two parsers: investigating and combining graph-based and transition-based dependency parsing using beam-search. EMNLP.
\ No newline at end of file diff --git a/anempiricalinvestigationofbeamawaretraininginsupertagging/images.zip b/anempiricalinvestigationofbeamawaretraininginsupertagging/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4e9c39636a60a10e1ecb8c6fb92b865b2ec57d16 --- /dev/null +++ b/anempiricalinvestigationofbeamawaretraininginsupertagging/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d9dd53e7464745f23c325346144f5ded797614029d1e0cf044cd3cdb9df67c1 +size 314162 diff --git a/anempiricalinvestigationofbeamawaretraininginsupertagging/layout.json b/anempiricalinvestigationofbeamawaretraininginsupertagging/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..13f215817ec218c0a88ff419a81036d3fc293619 --- /dev/null +++ b/anempiricalinvestigationofbeamawaretraininginsupertagging/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:81823c9d87a4e8a9b7d94d3080662436bb13d9b7ab46922d86a5ba1e5973c9b8 +size 373284 diff --git a/anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/484069f8-b429-4daa-8c1f-fbd61dcb9856_content_list.json b/anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/484069f8-b429-4daa-8c1f-fbd61dcb9856_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..3a26cb5137621f11094f54e9568cca90cbb62874 --- /dev/null +++ b/anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/484069f8-b429-4daa-8c1f-fbd61dcb9856_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b7342bcb0b21aeecd6235970dcf3756b29837bc0957985226bb5101dfd7c17e5 +size 38715 diff --git a/anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/484069f8-b429-4daa-8c1f-fbd61dcb9856_model.json b/anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/484069f8-b429-4daa-8c1f-fbd61dcb9856_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..03d9e5ba20a772e945999f66b1218699795f5022 --- /dev/null +++ b/anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/484069f8-b429-4daa-8c1f-fbd61dcb9856_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e98b7aaf378c87fe9e9c0801b4a0cffe39d5df57d488b5c53de4ae9b7ed17b5c +size 46983 diff --git a/anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/484069f8-b429-4daa-8c1f-fbd61dcb9856_origin.pdf b/anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/484069f8-b429-4daa-8c1f-fbd61dcb9856_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..cfd2ab5d261a24ab4a22d2725b57b9b2251ce219 --- /dev/null +++ b/anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/484069f8-b429-4daa-8c1f-fbd61dcb9856_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:67d7f607fe8b4804804310f18145f303eeed9af2156168f1bfb632f20aa73d7e +size 200049 diff --git a/anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/full.md b/anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/full.md new file mode 100644 index 0000000000000000000000000000000000000000..2735fd256f8d791d8b7344e24aee08a706a0d932 --- /dev/null +++ b/anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/full.md @@ -0,0 +1,146 @@ +# An Empirical Methodology for Detecting and Prioritizing Needs during Crisis Events + +M. Janina Sarol, Ly Dinh, Rezvaneh Rezapour, Chieh-Li Chin, Pingjing Yang, Jana Diesner + +University of Illinois at Urbana-Champaign, IL, USA + +{mjsarol,dinh4,rezapou2,cchin6,py2,jdiesner}@illinois.edu + +# Abstract + +In times of crisis, identifying essential needs is crucial to providing appropriate resources and services to affected entities. Social media platforms such as Twitter contain a vast amount of information about the general public's needs. 
However, the sparsity of information and the amount of noisy content present a challenge for practitioners trying to identify relevant information on these platforms. This study proposes two novel methods for two needs detection tasks: 1) extracting a list of needed resources, such as masks and ventilators, and 2) detecting sentences that specify who-needs-what resources (e.g., we need testing). We evaluate our methods on a set of tweets about the COVID-19 crisis. For extracting a list of needs, we compare our results against two official lists of resources, achieving 0.64 precision. For detecting who-needs-what sentences, we compare our results against a set of 1,000 annotated tweets, achieving a 0.68 F1-score.

# 1 Introduction

During crises, substantial amounts of information are shared and discussed on social media (Palen and Anderson, 2016; Reuter et al., 2018). Some of these posts may contain relevant information about the needs of affected and at-risk populations (Basu et al., 2018; Dutt et al., 2019; Purohit et al., 2014). The recent COVID-19 outbreak is no exception; online platforms such as Twitter have been crucial means for sharing information about the impact of the outbreak (Singh et al., 2020), personal accounts from infected individuals (Jimenez-Sotomayor et al., 2020), and updates from medical professionals (Rosenberg et al., 2020). Crisis responders and practitioners have also turned to online platforms to obtain actionable information that could aid them in response planning (Vieweg et al., 2010; Zade et al., 2018). In particular, scholars in crisis informatics have provided solutions to detect relevant Twitter messages that express resource needs and availabilities related to crisis events, e.g., during the 2015 Nepal earthquake (Basu et al., 2017; Dutt et al., 2019) and the 2015 Chennai floods (Sarkar et al., 2019).
This paper builds upon and extends prior literature by proposing two needs detection tasks and applying needs detection to data about the COVID-19 crisis. In particular, we (1) extract a list of needs by using word embeddings to identify the terms closest to needs and supplies with respect to their cosine similarity, and (2) detect who-needs-what sentences to determine the social entities who need particular resources.

This study makes two contributions. First, we propose a method for identifying and prioritizing resource needs during a crisis. Second, we present a set of heuristics to determine the social entities that need specific resources. Overall, our study provides a reliable set of methods that might help response professionals quickly identify the immediate types of needs in the general population and make effective decisions accordingly.

# 2 Related Work

A large body of literature from the field of crisis informatics has used natural language processing and machine learning methods to extract relevant situational awareness content from large text corpora (Vieweg et al., 2010; Verma et al., 2011). One of several categories of situational awareness content is needs expressed by (affected) individuals and communities (Imran et al., 2016; Purohit et al., 2014; Varga et al., 2013; Temnikova et al., 2015). Imran, Mitra, and Castillo (2016) analyzed tweets about eight major natural disaster events and found that about $21.7\%$ of all tweets contained crucial information about urgent needs for shelter, donations, and essential supplies, such as medical aid, clothing, food, and water. Varga and colleagues (Varga et al., 2013) leveraged machine learning models to match tweets indicating problems with tweets offering aid, in order to minimize the waste of resources during a crisis. Similarly, Purohit and colleagues (2014) classified tweets based on requests and offers of resources, and further matched requests with offers using regular expressions.
Temnikova, Castillo, and Vieweg (2015) developed a lexical resource that contains 23 categories of situational awareness, most of which are based on needs requested and resources available (e.g., clean water, shelter material), as well as services (e.g., rescue workers, relief work) to meet the needs. Basu and colleagues (2017; 2019) identified need and availability tweets, and matched the identified needs with availabilities (Basu et al., 2018). Our paper builds upon this prior work, which has primarily focused on classifying need/non-need tweets. More specifically, we propose methods that provide a general overview of the needs and specify where and by whom these resources are needed.

# 3 Data

We collected 665,667 tweets posted between February 28, 2020 and May 8, 2020, with a maximum of 10,000 samples for each day, using Coronavirus Hexagon$^1$. Each tweet contains at least one of the following hashtags: #COVID19, #COVID-19, #coronavirusoutbreak, #WuhanCoronavirus, #2019nCoV, #CCPvirus, #coronavirus, #CoronavirusPandemic, #SARS-CoV-2, #wuhanflu, #kungflu, #chineseviruscorona, #ChinaVirus19, #chinesevirus. Our sample includes only tweets from users in the United States and tweets written in English.

# 4 Methodology

# 4.1 Extracting a List of Needs

For detecting needs, we trained an embedding model on the dataset and identified the terms closest to the seed terms needs and supplies with respect to their cosine similarity. Specifically, we performed the following steps:

1. Detect phrases using AutoPhrase (Shang et al., 2018), setting the threshold for salient phrases to 0.8, and annotate the dataset with the detected phrases.
2. Split tweets into sentences and tokens using the NLTK (Loper and Bird, 2002) sentence and tweet tokenizer, respectively.
3. Run word2vec (Mikolov et al., 2013) on the tokenized sentences.
4. Select the top 100 nouns closest to the word embeddings of needs and supplies.
These nouns are representative of the needed resources. + +To identify nouns, we ran the NLTK part-of-speech (POS) tagger on the tweets (before phrase annotation). We considered nouns as words whose most frequent POS tag is a noun, and a phrase as a noun if its final token is a noun (e.g., testing-capacity is a noun as capacity is a noun). + +# 4.2 Detecting Who-Needs-What Sentences + +We developed a rule-based method to identify who-needs-what sentences, where who is an entity (noun or pronoun) and what is a resource or an item (noun). We leveraged the grammatical structure of sentences for this purpose by using a dependency parser to identify sentences containing this triple. We developed two simple rules to identify these types of sentences. + +The first rule considers the occurrence of the word need as a verb (as per its POS) in a sentence. This is a straightforward application of the who-needs-what format. We identified sentences where who is the subject and what is the direct object. After identifying that need (or its other word forms) is used as a verb, we selected sentences where the left child of need in the dependency parse tree is a nominal subject (nsubj), and the right child is a direct object (dobj). Figure 1 shows an example sentence that follows this rule and its dependency parse tree. The second rule considers the use of the word need as a noun (as per its POS). Our initial data exploration identified many sentences in the form of $X$ is in need of $Y$ , where, in the dependency parse tree, the who and what are not direct children of the term need. The who is a child of a copular verb (e.g., is), which is an ancestor of need. 
The term linking the copular verb and need is a preposition (i.e., the copular verb is the term's parent, and need is its prepositional object (pobj)). The what is a descendant of need, also linked through a preposition. Figure 2 shows an example sentence that conforms to this rule and its dependency parse tree.

![](images/bdb07d4280a97130df01afae373e5875e9f0a5f1d7b5fceecf6ea271e4a160c2.jpg)
Figure 1: Rule considering need as a verb

![](images/948e5a71f4a4b70b8d912872b325b07e08e9cf4103ee0aed7cfaae488c8e1aa6.jpg)
Figure 2: Rule considering need as a noun

Similar to the first needs detection task, we used the NLTK sentence and tweet tokenizer to split the tweets into sentences and tokens, respectively. We used spaCy (Honnibal and Montani, 2017) to generate the dependency parse trees. Our source code is available on GitHub$^2$.

# 4.3 Evaluation

There is no single comprehensive list of resources needed by people in the U.S. during the COVID-19 crisis that could serve as ground truth for evaluation. We found two sets of sources that we deemed suitable proxies for such a list. First, the World Health Organization's (WHO) essential resource planning guidelines (2020) provide a set of forecasting tools and documents for calculating the manpower, supplies, and equipment required to respond to the virus adequately. Second, the U.S. Department of Health and Human Services (HHS) Office of Inspector General published the results of a survey about hospitals' experiences in responding to the pandemic (Grimm, 2020). To evaluate our results for the first needs detection task, we counted the number of matches between the list we generated and the resources mentioned in the WHO and HHS documents. This captures precision. We report our results as precision@k, with k ranging from 10 to 100.
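As a minimal sketch of this metric (hypothetical helper and toy term lists, not the paper's code), precision@k is simply the fraction of the top-k extracted terms that appear in the reference documents:

```python
def precision_at_k(ranked_terms, reference_terms, k):
    """Fraction of the top-k ranked terms found in the reference set."""
    top_k = ranked_terms[:k]
    return sum(1 for term in top_k if term in reference_terms) / k

# Toy example: the reference set is the union of the terms mentioned in
# the WHO and HHS documents, mirroring the combined evaluation.
ranked = ["ppe", "ventilators", "masks", "goods"]
reference = {"ppe", "masks", "ventilators", "gowns"}
print(precision_at_k(ranked, reference, 4))  # 3 of the top 4 match -> 0.75
```

Sweeping k from 10 to 100 over the ranked list yields the precision@k curve reported below.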
+ +For the who-needs-what detection task, two annotators identified who-needs-what sentences from a random set of 1,000 sentences that contained any word form of need (i.e., need, needs, needing, and needed). Each annotator was assigned 600 sentences, where 200 sentences also appeared in the other annotator's list. Cohen's kappa was 0.91. + +We report our results for the who-needs-what + +detection task using precision, recall, and F1-score. We compare our work to the needs detection algorithm proposed by Basu and colleagues (Basu et al., 2017), who classified need vs. non-need tweets by ranking tweets based on their cosine similarities to the embeddings of the stemmed terms need and requir. We set the cut-off value of the need-related tweets to 250 and performed the same pre-processing steps outlined in (Basu et al., 2017). While their work is focused on identifying all need tweets, it is still the closest prior work to our task. + +# 5 Results + +Table 1 shows the top 10 resources generated by our first needs detection method. The full set of results is shown in Appendix A. Comparing them to the WHO guidelines, precision@10 is 0.8, and comparing them to the HHS survey, it is 0.9. When both WHO and HHS documents are considered, the precision@10 is 1. The top 13 terms (and 19 of the top 20 terms) appear in at least either one of the WHO or HHS documents. Overall, 41 of the top 100 terms appear in the WHO guidelines, 57 in the HHS survey, and 64 in at least one document. + +Figure 3 shows the precision@k, where k is in increments of 10. There is a steep drop-off in the results when the cut-off is relaxed from 20 to 30, but the precision@k decreases at a more controlled rate after this drop-off. This indicates that the resources + +
needed still appear lower in the list. High precision scores for lower k values suggest that our proposed method can identify needed resources and produce a rigorous ranking of needs.

| Resource | WHO | HHS |
| --- | --- | --- |
| medical-equipment | ✓ | ✓ |
| equipment | ✓ | ✓ |
| medical-supplies | ✓ | ✓ |
| protective-gear | ✓ | ✓ |
| stockpile | × | ✓ |
| protective-equipment | ✓ | ✓ |
| ppe | ✓ | ✓ |
| manufacturing | × | ✓ |
| personal-protective-equipment | ✓ | ✓ |
| medicines | ✓ | × |

Table 1: Resources generated for COVID-19

![](images/11164ea3ccad32c021f974eccfb93fe471394b41c624e6e46fbb3325f12de1a9.jpg)
Figure 3: Precision at different cutoffs

For the who-needs-what detection task, our method produced a precision of 0.66, a recall of 0.70, and an F1-score of 0.68. Sentences incorrectly predicted as positive examples include those of the form if you need $x$, then..., while false negatives include more complex sentences. Using only the first rule produces a precision of 0.66, a recall of 0.68, and an F1-score of 0.67, indicating that most who-needs-what sentences follow this rule, where the who is the subject of the sentence or clause and the what is the direct object. Our baseline method, inspired by the work of Basu et al. (2017), performed poorly, achieving only 0.28 precision, 0.26 recall, and a 0.27 F1-score.

# 6 Discussion

The first needs detection results vary in terms of specificity (e.g., equipment vs. medical equipment, personal protective equipment vs. respirators, funding vs. federal funding). Several retrieved terms that are not on the WHO and HHS lists are general terms such as goods, aid, efforts, programs, and assets. In addition, several terms are synonymous (e.g., personal protective equipment and PPE). These results suggest that clustering the terms may lead to a more distinct set of results.

It is not surprising that more of the detected terms appeared in the HHS document than in the WHO document: we collected our tweet data from the U.S., and the HHS document reports a survey of U.S. hospitals, while the WHO list is aimed at a global audience.
Overall, our results suggest two findings: 1) our needs detection method works, and 2) most COVID-19 needs mentioned on Twitter are of either a medical or a financial nature (see Appendix A).

Our who-needs-what detection results show that a simple rule-based method can retrieve sentences that mention entities needing resources and the resources needed (0.68 F1-score). This finding has several implications. First, a simple white-box method suffices for identifying who-needs-what sentences; while deep learning may increase the scores, our method requires no training data. Second, mentions of needs on Twitter often follow a specific, uniform format, which could be due to the limited number of characters available per tweet. Testing the generalizability of this method on other crisis events is part of our future work.

While social media has been shown to be a valuable source of information during crises, finding useful information is still akin to finding a needle in a haystack: for our who-needs-what detection task, we found only 262 positive examples. Overall, our first needs detection method can generate a ranked set of needs for 600,000+ tweets in less than 30 minutes. Running steps such as phrase detection and POS tagging in parallel may reduce this time further. For the who-needs-what detection task, our method can classify 1,000 sentences in 8 seconds.

# 7 Conclusions and Future Work

In this paper, we presented two needs detection methods: one for extracting a list of needed resources during a crisis, and another for detecting who-needs-what sentences. We believe that these two methods are helpful in capturing the broad range of needs that emerges during crisis events. Specific to the COVID-19 crisis, our results suggest that the most essential needs are protective equipment and financial assistance.
Our methods can help detect the essential needs of the general population and affected stakeholders, so that response professionals can plan properly and respond effectively.

In future work, we aim to expand our methodology to identify the availability of needed resources, whether needs have been met, and the social entities who address them. In addition, we plan to differentiate between a more comprehensive set of requests, including hopes, wants, and wishes expressed during a crisis.

# Acknowledgments

This work was supported in part by the U.S. Department of Homeland Security under Grant Award Number 2015-ST-061-CIRC01. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the U.S. Department of Homeland Security.

# References

Moumita Basu, Kripabandhu Ghosh, Somenath Das, Ratnadeep Dey, Somprakash Bandyopadhyay, and Saptarshi Ghosh. 2017. Identifying post-disaster resource needs and availabilities from microblogs. In Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pages 427-430.
Moumita Basu, Anurag Shandilya, Kripabandhu Ghosh, and Saptarshi Ghosh. 2018. Automatic matching of resource needs and availabilities in microblogs for post-disaster relief. In Companion Proceedings of The Web Conference 2018, pages 25-26.
Moumita Basu, Anurag Shandilya, Prannay Khosla, Kripabandhu Ghosh, and Saptarshi Ghosh. 2019. Extracting resource needs and availabilities from microblogs for aiding post-disaster relief operations. IEEE Transactions on Computational Social Systems, 6(3):604-618.
Ritam Dutt, Moumita Basu, Kripabandhu Ghosh, and Saptarshi Ghosh. 2019. Utilizing microblogs for assisting post-disaster relief operations via matching resource needs and availabilities. Information Processing & Management, 56(5):1680-1697.
Christi A. Grimm. 2020.
Hospital experiences responding to the COVID-19 pandemic: Results of a national pulse survey March 23-27, 2020. Washington, DC: Office of the Inspector General; April 3, 2020. Report no. OEI-06-20-00300.

Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing.

Muhammad Imran, Prasenjit Mitra, and Carlos Castillo. 2016. Twitter as a lifeline: Human-annotated Twitter corpora for NLP of crisis-related messages. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 1638-1643, Paris, France. European Language Resources Association (ELRA).

Maria Renee Jimenez-Sotomayor, Carolina Gomez-Moreno, and Enrique Soto-Perez-de Celis. 2020. Coronavirus, ageism, and Twitter: An evaluation of tweets about older adults and COVID-19. Journal of the American Geriatrics Society, 68(8):1661-1665.

Edward Loper and Steven Bird. 2002. NLTK: The Natural Language Toolkit. arXiv preprint cs/0205028.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, pages 3111-3119.

World Health Organization. 2020. Coronavirus disease (COVID-19) technical guidance: Essential resource planning. Available at https://www.who.int/emergencies/diseases/novel-coronavirus-2019/technical-guidance/covid-19-critical-items.

Leysia Palen and Kenneth M Anderson. 2016. Crisis informatics—new data for extraordinary times. Science, 353(6296):224-225.

Hemant Purohit, Carlos Castillo, Fernando Diaz, Amit Sheth, and Patrick Meier. 2014. Emergency-relief coordination on social media: Automatically matching resource requests and offers. First Monday, 19(1).

Christian Reuter, Amanda Lee Hughes, and Marc-Andre Kaufhold. 2018. 
Social media in crisis management: An evaluation and analysis of crisis informatics research. International Journal of Human-Computer Interaction, 34(4):280-294.

Hans Rosenberg, Shahbaz Syed, and Salim Rezaie. 2020. The Twitter pandemic: The critical role of Twitter in the dissemination of medical information and misinformation during the COVID-19 pandemic. Canadian Journal of Emergency Medicine, 22(4):418-421.

Abhinav Sarkar, Swagata Roy, and Moumita Basu. 2019. Curating resource needs and availabilities from microblog during a natural disaster: A case study on the 2015 Chennai floods. In Proceedings of the India Joint International Conference on Data Science and Management of Data, pages 338-341.

Jingbo Shang, Jialu Liu, Meng Jiang, Xiang Ren, Clare R Voss, and Jiawei Han. 2018. Automated phrase mining from massive text corpora. IEEE Transactions on Knowledge and Data Engineering, 30(10):1825-1837.

Lisa Singh, Shweta Bansal, Leticia Bode, Ceren Budak, Guangqing Chi, Kornraphop Kawintiranon, Colton Padden, Rebecca Vanarsdall, Emily Vraga, and Yanchen Wang. 2020. A first look at COVID-19 information sharing on Twitter. arXiv preprint arXiv:2003.13907.

Irina P Temnikova, Carlos Castillo, and Sarah Vieweg. 2015. EMTerms 1.0: A terminological resource for crisis tweets. In Proceedings of the ISCRAM 2015 Conference, pages 134-146.

István Varga, Motoki Sano, Kentaro Torisawa, Chikara Hashimoto, Kiyonori Ohtake, Takao Kawai, Jong-Hoon Oh, and Stijn De Saeger. 2013. Aid is out there: Looking for help from tweets during a large scale disaster. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1619-1629, Sofia, Bulgaria. Association for Computational Linguistics.

Sudha Verma, Sarah Vieweg, William J Corvey, Leysia Palen, James H Martin, Martha Palmer, Aaron Schram, and Kenneth M Anderson. 2011. Natural language processing to the rescue? Extracting "situational awareness" tweets during mass emergency. 
In Fifth International AAAI Conference on Weblogs and Social Media.

Sarah Vieweg, Amanda L Hughes, Kate Starbird, and Leysia Palen. 2010. Microblogging during two natural hazards events: What Twitter may contribute to situational awareness. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 1079-1088.

Himanshu Zade, Kushal Shah, Vaibhavi Rangarajan, Priyanka Kshirsagar, Muhammad Imran, and Kate Starbird. 2018. From situational awareness to actionability: Towards improving the utility of social media data for crisis response. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW).

# A Appendix: Resources generated for COVID-19

|  |  |  |  |
| --- | --- | --- | --- |
| medical equipment | materials | hand-sanitizer | grants |
| equipment | access | face-masks | relief |
| medical-supplies | demand | gloves | essential-workers |
| protective-gear | essential-goods | local-hospitals | capability |
| stockpile | production | respirators | groceries |
| protective equipment | face-shields | healthcare-workers | devices |
| ppe | personnel | recipients | pharmacies |
| manufacturing | federal-funding | refused | flexibility |
| personal-protective equipment | reagents | essential-supplies | masks |
| medicines | federal-assistance | barriers | living-wage |
| #ppe | ventilators | demands | national-stockpile |
| supply | systems | repairs | medical-facilities |
| distribution | assets | relief-funds | assistance |
| goods | capacity | food-banks | packages |
| manufacturers | programs | utilities | trace |
| funds | aid | meds | dpa |
| plans | economic-relief | testing-capacity | purchases |
| essentials | kits | defense-production-act | handouts |
| essential-items | gowns | childcare | machines |
| financial-relief | food | ability | deliveries |
| needing | funding | services | local-governments |
| necessities | efforts | providers | paid-sick-leave |
| critical-supplies | medication | requirements | shortages |
| clean-water | supply-chain | surgical-masks | failed |
| resources | facilities | expenses | hospitals |

Table A1: Resources generated for COVID-19

# An Evaluation Method for Diachronic Word Sense Induction

# Ashjan Alsulaimani

School of Computer Science and Statistics & Trinity Centre for Computing and Language Studies, Trinity College Dublin, alsulaia@tcd.ie

# Erwan Moreau

School of Computer Science and Statistics & Adapt Centre, Trinity College Dublin, moreaue@scss.tcd.ie

# Carl Vogel

School of Computer Science and Statistics & Trinity Centre for Computing and Language Studies, Trinity College Dublin, vogel@tcd.ie

# Abstract

The task of Diachronic Word Sense Induction (DWSI) aims to identify the meaning of words from their context, taking the temporal dimension into account. 
In this paper we propose an evaluation method based on large-scale time-stamped annotated biomedical data, and a range of evaluation measures suited to the task. The approach is applied to two recent DWSI systems, thus demonstrating its relevance and providing an in-depth analysis of the models.

# 1 Introduction

Words naturally evolve through time: their meaning may undergo subtle or radical changes, resulting in a variety of senses. For example, the word mouse only had the meaning of an animal until it acquired a brand new sense in 1980 as a computer device. But sense changes are not always so definite: a word's usage may drift progressively from its original sense or be affected by historical events. A recent example of this phenomenon is the word coronavirus, which has seen a dramatic usage surge in 2020 because of the emergence of its SARS-CoV-2 variant. Before 2020, the word coronavirus was mostly a technical term describing a family of viruses, but it is now used in the mainstream media to mean the specific SARS-CoV-2 virus, the related Covid-19 disease or even the general health crisis and its consequences.

The dynamic behaviour of words contributes to semantic ambiguity, which is a challenge in many NLP tasks. The ability to detect such changes across time could potentially benefit various applications, such as machine translation and information retrieval. In the biomedical domain, it can improve the quality of the automatic identification of senses in contexts where no complete terminology is available, such as with clinical notes, and assist indexers who build terminology resources.

Recent research has focused on detecting semantic shifts across time (Kutuzov et al., 2018) but also on Diachronic Word Sense Induction (Emms and Kumar Jayapal, 2016). The task of Diachronic Word Sense Induction (DWSI) is similar to Word Sense Induction (WSI) in identifying the meaning of words from their context, but also takes the temporal dimension into account. 

In §2 we briefly present two Bayesian models that have been proposed for the DWSI task: Emms and Kumar Jayapal (2016) proposed a model which represents the evolution of word senses in order to detect the emergence year of new senses. A different model was proposed by Frermann and Lapata (2016), focusing instead on capturing the subtle meaning changes within a sense over time. However, evaluating such models is difficult, as the lack of large-scale time-stamped data prevents direct quantitative evaluation.

In this paper we introduce a method which relies on annotated biomedical data to evaluate DWSI. While the general aim of this article is the evaluation of DWSI systems across domains and genres, the biomedical domain is the only one to date which offers suitable data for the task. Our approach leverages the availability of unambiguous manual annotations (and publication years) in the Medline citation database in order to build a large time-stamped dataset, as detailed in §3. In §4 we introduce a range of evaluation measures which can be used to directly and quantitatively measure the performance of a DWSI system on such an annotated dataset. Finally, in §5 we compare the two aforementioned models using our evaluation method, which demonstrates the relevance of the approach and allows a deep analysis of the models.

# 2 State of the Art

# 2.1 Diachronic Word Sense Induction

Most existing work on diachronic meaning change has focused on static methods, in the sense that the learning algorithms are either time-unaware or applied to independent periods of time (Lau et al., 2012; Cook et al., 2014; Mitra et al., 2015). For example, Mitra et al. (2015) split the data into eras and then apply WSI independently on each era subset in order to identify new senses of a word. However, recent approaches have introduced time-aware probabilistic models in order to represent the changes in word meaning over time. 
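To make the notion of a time-aware generative model concrete, the following toy sketch (our own illustration: the two-sense vocabulary and all probabilities are invented, and this is neither of the published models) draws a sense from a year-dependent distribution $P(s|t)$ and then draws context words from a sense-dependent distribution $P(w|s)$:

```python
import random

random.seed(0)

# Hypothetical parameters for one target word ("mouse") with two senses.
# P(s|t): sense probabilities per year (sense 1, the device, gains ground).
p_sense_given_year = {
    1995: [0.95, 0.05],
    2005: [0.60, 0.40],
    2015: [0.20, 0.80],
}
# P(w|s): word distribution per sense, time-independent in this sketch.
p_word_given_sense = [
    {"cage": 0.5, "tail": 0.5},     # sense 0: animal
    {"click": 0.5, "screen": 0.5},  # sense 1: computer device
]

def generate(year, n_words=5):
    """Draw a sense for the given year, then context words given the sense."""
    s = random.choices([0, 1], weights=p_sense_given_year[year])[0]
    words = random.choices(list(p_word_given_sense[s]),
                           weights=p_word_given_sense[s].values(),
                           k=n_words)
    return s, words
```

Under such a model, a rising $P(s{=}1|t)$ over the years produces documents whose contexts increasingly look like the new sense, which is the signal a DWSI system tries to recover.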

# 2.2 The NEO Model

The model introduced by Emms and Kumar Jayapal (2016), called NEO herein, is a generative Bayesian model that chooses a sense $s$ given a time $t$ (respecting the sense-given-time probabilities $P(s|t)$), then chooses context words $\mathbf{w}$ given the sense $s$ (respecting the word-given-sense probabilities $P(w|s)$). The joint probability distribution over the parameters is defined as in (1).

$$
\begin{array}{l} P (t, s, \mathbf {w}; \pi_ {1: N}, \theta_ {1: K}) \\ = \prod_ {t} Dirich \left(\pi_ {t}; \gamma_ {\pi}\right) \times \prod_ {k} Dirich \left(\theta_ {k}; \gamma_ {\theta}\right) \tag {1} \\ \times P (t; \tau_ {1: N}) P (s | t; \pi_ {1: N}) \prod_ {w _ {i} \in \mathbf {w}} P (w _ {i} | s; \theta_ {1: K}) \\ \end{array}
$$

The authors' aim is to capture sense changes in order to detect the emergence, i.e. the origin time, of a novel sense. In this model the probabilities of the context words are represented independently from time, which means that senses can change over time with respect to each other, but the probabilities of the words representing a particular sense are assumed to be constant.

# 2.3 The SCAN Model

Frermann and Lapata (2016) proposed a generative Bayesian model inspired by dynamic topic modeling (Blei and Lafferty, 2006), hereafter called SCAN, which shares similarities with NEO but is more complex: given a time $t$, a sense $s$ is chosen following the distribution of the parameter $\phi_t$; then given a sense $s$ and a time $t$, the context words $\mathbf{w}$ are drawn following the distribution of the parameter $\psi_{s,t}$. This design allows the representation of a sense with a different distribution of words at different times, as opposed to NEO. Thus in the SCAN model, time-adjacent representations of a sense are codependent in order to allow capturing the meaning change in a smooth and gradual way. 
This is made possible by defining their prior as an intrinsic Gaussian Markov Random Field (iGMRF). Following the structural dependencies defined through the iGMRF prior, Frermann (2017) expresses the posterior distribution over the latent variables given the input $\mathbf{w}$, the parameters $a, b, \kappa^{\Psi}$ and the choices of the distributions, Gamma $(Ga)$ and Logistic Normal $(N)$:

$$
P (s, \Phi , \Psi , \kappa^ {\Phi} \mid \mathbf {w}, \kappa^ {\Psi}, a, b) \propto Ga \left(\kappa^ {\Phi}; a, b\right) \prod_ {t} \left[ \prod_ {k} N \left(\Psi^ {t, k} \mid \kappa^ {\Psi}\right) \prod_ {d} \Phi_ {s} ^ {t} \prod_ {w ^ {i} \in \mathbf {w}} \Psi_ {w ^ {i}} ^ {s, t} \right] \tag {2}
$$

where $\kappa^{\Phi}$ is drawn from a conjugate Gamma prior and $\kappa^{\Psi}$ is estimated during inference; both control the degree of variation of the sense-specific word distributions over time. Thus the SCAN model is meant to capture changes between senses but also changes of meaning within a sense.

# 2.4 Existing Evaluation Methods

One way to find the ground truth of sense emergence is by using a dictionary. This approach is taken by many studies (Rohrdantz et al., 2011; Lau et al., 2012; Cook et al., 2014; Mitra et al., 2015).

In (Emms and Kumar Jayapal, 2016), the model is evaluated qualitatively on the Google NGrams corpus (Michel et al., 2011), using a few manually selected target words. The ground truth is obtained by the "tracks-plot" method, which consists in representing a target sense by a few hand-picked co-occurrences (e.g. "screen", "click" for mouse as a computing device), then tracking these co-occurrences over time and taking the mean of the separate tracks. An emergence detection algorithm, "EmergeTime", is proposed in (Jayapal, 2017) to detect the year of emergence either from the "tracks-plot" data (ground-truth emergence) or from a predicted distribution $P(s|t)$ (predicted emergence). 
The algorithm checks whether there is a year in the $P(s|t)$ plot which satisfies the following constraints:

- The year is followed by a 10-year window of sufficient increase in probabilities: $85\%$ of the years show a climb in probabilities of 2-3% of the maximum value.
- $80\%$ of the preceding years are lower than 0.1 (i.e. close to zero in probability).

Emms and Kumar Jayapal (2016) evaluate the quality of the sense clustering qualitatively by inspecting the top 30 ranked words that are associated with a specific sense.

Frermann and Lapata (2016) present four indirect evaluation methods, relying on closely related tasks used as applications of their model:

- "Temporal Dynamic": qualitative evaluation of the appearance of a new sense.
- "Novel Sense Detection": evaluation using Mitra et al. (2015)'s complex approach based on WordNet.
- "Word Meaning Change": evaluation using Gulordava and Baroni (2011)'s method and data for detecting meaning change between two time slices.
- "Task-based Evaluation": extrinsic evaluation on the SemEval Diachronic Text Evaluation task (Popescu and Strapparava, 2015), designed for supervised learning.

Despite the authors' best efforts to compare their results against others, they state that the "scores [that they obtain] are not directly comparable due to the differences in training corpora, focus and reference times, and candidate words" (Frermann and Lapata, 2016, p. 39). Additionally, the models of both Emms and Kumar Jayapal (2016) and Frermann and Lapata (2016) offer a continuous time representation $P(s|t)$. The sophistication of their systems deserves a more suitable evaluation framework, since they have to simplify their outcomes in order to compare them against previous works which rely on models that only represent independent time slices. 
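For illustration, the two constraints above can be sketched as a small heuristic (our own reading of the description; the exact thresholds, window handling and tie-breaking of the original EmergeTime algorithm may differ):

```python
def detect_emergence(p_s_given_t, window=10, climb_frac=0.02,
                     climb_share=0.85, low_thresh=0.1, low_share=0.8):
    """Return the index of the first year satisfying the two
    EmergeTime-style constraints, or None if no year qualifies.

    p_s_given_t: list of P(s|t) values, one per consecutive year.
    """
    max_p = max(p_s_given_t)
    for i in range(1, len(p_s_given_t) - window):
        # Constraint 1: in the following 10-year window, most year-to-year
        # steps climb by at least a small fraction of the maximum value.
        after = p_s_given_t[i:i + window + 1]
        climbs = [after[j + 1] - after[j] >= climb_frac * max_p
                  for j in range(window)]
        # Constraint 2: most preceding years are close to zero.
        before = p_s_given_t[:i]
        low = [v < low_thresh for v in before]
        if (sum(climbs) / len(climbs) >= climb_share
                and sum(low) / len(low) >= low_share):
            return i
    return None
```

On a series that stays near zero for a decade and then ramps up steadily, the function returns an index near the start of the ramp; on a flat series it returns None.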

A recent evaluation framework is proposed by Schlechtweg et al. (2020) for the task of Unsupervised Lexical Semantic Change Detection (LSC) in SemEval-2020. However, the benchmark datasets contain only two independent periods of time, and the subtasks are only designed to capture whether there is a change (subtask 1) or the extent of a change (subtask 2). As opposed to the DWSI task, the subtasks do not capture how many distinct senses exist in the data, what kind of change happens over time, to which sense, or the emergence year of a novel sense. Although the annotation process involves clustering senses and computing sense frequency distributions for two independent periods of time, the sense information is neglected.

Instead, the target values of the subtasks are based on "change scores" which represent only the existence or degree of LSC. As a result of this simplification, the evaluation methods used in Unsupervised LSC are incompatible with the WSI and DWSI tasks. The task also differs from WSI and DWSI in that it provides neither a way to predict the sense of an instance nor the set of senses of a polysemous target word and their prevalence.

# 3 A Biomedical Dataset for DWSI

The DWSI task requires not only target words with several senses, but also time-stamped data for every target word. The evaluation of DWSI is challenging because manual annotation of such a large amount of instances (since they span many years) would be prohibitively costly. In this section, we propose a method to collect diachronic data for ambiguous terms in medical terminologies.

# 3.1 Data Collection Process

Our method relies on the medical literature and exploits medical terminology resources: Medline is a database referencing most of the biomedical literature (30 million citations). The citations are annotated with MeSH descriptors. 
MeSH (Medical Subject Headings) is "the US National Library of Medicine (NLM) controlled vocabulary thesaurus used for indexing articles for PubMed." The Unified Medical Language System (UMLS) Metathesaurus is "a large biomedical thesaurus that is organized by concept, or meaning, and it links similar names for the same concept" (Bodenreider, 2004). Each concept in UMLS is identified by a Concept Unique Id (CUI), and all the terms listed in UMLS are assigned a CUI. Since UMLS includes MeSH terms, there is a partial mapping between MeSH descriptors and UMLS CUIs.

The MSH WSD data (Jimeno-Yepes et al., 2011) consists of 203 ambiguous medical terms, each provided with the list of CUIs which identify the different meanings of the term. This dataset was created for the Word Sense Disambiguation task, so the instances it contains are labelled by CUI (sense) but are not time-stamped. We collect a time-stamped dataset as follows:

1. The MSH WSD data provides us with target terms and CUIs.
2. For every CUI, the corresponding MeSH descriptor is extracted from UMLS.
3. From Medline, all the citations labeled with a particular MeSH descriptor are extracted (title, publication year and abstract if any).
4. When available, the text of the full article is retrieved from PubMed Central.

# 3.2 Data pre-processing

For every target and every sense (CUI), a collection of documents made of titles, abstracts and full articles is obtained. Every occurrence of the target term in a document is assumed to have the sense given by the CUI. In the interest of maximising the number of instances available for each year, we also collect the full list of terms associated with the CUI from UMLS and substitute every occurrence of such a term with the ambiguous target. 
In both cases of collecting instances, the longest possible term is matched in order to capture the most specific expressions.

SpaCy is used to tokenise the documents into sentences and words. Using a global stopwords list based on the token frequencies, the most frequent tokens such as non-content words (the, a, however) and punctuation signs (!, %) are removed from the context. Every occurrence of the target in a document is extracted together with its 10-word context (5 words on each side). In order to provide the DWSI systems with sufficient data for every year, we only include the longest consecutive period with at least 4 instances every year across senses.

At the end of the process, the dataset contains 188 targets (out of 203 initial targets). 175 targets have two senses, 12 have 3 and one has 5 senses. There are 61,352 instances per sense on average. 102 senses out of 391 have emergence according to the "EmergeTime" method.

# 4 Evaluation

As explained in §3, the collected dataset contains sense labels which can be used to directly evaluate a DWSI system in a reliable way. Since by definition the output of an unsupervised clustering algorithm is unlabeled, we propose in §4.1 a method to match a gold sense with a predicted sense. Thanks to this matching method, a system can be evaluated externally, in a way similar to a supervised WSD system. We propose several evaluation methods, each meant to capture the performance of a DWSI system from a different perspective.

# 4.1 Global Maximum Matching Method

After estimating the model, the posterior probability is calculated for every instance, according to Eq. (3) for NEO and Eq. (4) for SCAN. The sense corresponding to the maximum probability is assigned to the instance. 

$$
P (S \mid t ^ {d}, \mathbf {w} ^ {d}) = \frac {P (S , t ^ {d} , \mathbf {w} ^ {d})}{\sum_ {S ^ {\prime}} P \left(S ^ {\prime} , t ^ {d} , \mathbf {w} ^ {d}\right)} \tag {3}
$$

$$
P (S \mid t ^ {d}, \mathbf {w} ^ {d}) \propto P (S \mid t ^ {d}) P (\mathbf {w} ^ {d} \mid t ^ {d}, S) \tag {4}
$$

The pairs of gold/predicted senses are matched iteratively based on their joint frequency. At every iteration, the pair corresponding to the highest frequency (global maximum) in the table is matched. Once a gold sense is matched with a predicted sense, neither the gold nor the predicted sense can be matched again with another sense. This eliminates the possibility of having two different gold senses matched with the same predicted sense or two different predicted senses matched with the same gold sense, an issue present in the methods used by Agirre and Soroa (2007) and Manandhar et al. (2010). Moreover, by matching the largest senses first, the number of incorrectly matched instances is minimized. An example is provided in Table 1.

# 4.2 Based on Clusters of Instances

# 4.2.1 Clustering Classification Measures

Given the true class (i.e. true sense, obtained as explained in §3) and the assigned predicted

*(Contingency tables: instance counts for every predicted sense (rows 0-4) and gold sense (columns C0030131, C0030625, C0078944, C0149576, C0429865), at the first and second matching iterations; the individual cell counts are not recoverable from this extraction.)*

| Predicted sense | Gold sense |
| --- | --- |
| 0 | C0078944 |
| 1 | C0030131 |
| 2 | C0149576 |
| 3 | C0429865 |
| 4 | C0030625 |

Table 1: Global maximum matching example. The top contingency table shows the number of instances for every predicted/gold sense pair (the predicted sense is assigned by calculating the maximum of the posterior probability). At the first iteration, senses C0429865 and 3 are matched based on the global maximum (in bold). The second table shows the remaining frequencies at the second iteration. The bottom table shows the resulting matching at the end of the process.

class (obtained using the matching method presented in §4.1), every instance can be categorised as True/False Positive/Negative for any specific sense $s$, following the standard classification methodology. This way the standard binary classification measures can be applied at the level of a sense: precision, recall, F1-score. The micro-average and macro-average of these measures are calculated to represent the performance at the level of a target or across targets.

# 4.2.2 Clustering Mean Absolute Error

The classification measures do not distinguish whether the system is confident in its prediction (e.g. if the posterior probability is 0.99) or not (e.g. if it is 0.51); this is why we also propose to use the mean absolute error (MAE). The intuition behind this measure is that a perfect system should predict probability one for the gold sense and zero for any other sense. Therefore, the further the predicted probability deviates from one, the higher the error. We use the mean absolute error to measure how close to one the posterior probability of the gold sense is on average. The mean absolute error is defined for every sense as in Eq. (5).

$$
\frac {1}{| D |} \sum_ {d \in D} (1 - P (\hat {s _ {g}} | d)) \tag {5}
$$

where $D$ represents a set of instances, $\hat{s_g}$ is the sense that matches the gold sense, and the posteriors are defined as in Eq. (3) and (4). 
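Eq. (5) translates almost directly into code (a sketch; the variable names and input layout are ours, not the systems' actual interfaces):

```python
def clustering_mae(posteriors, gold_match):
    """Mean absolute error of Eq. (5): the average of 1 - P(matched gold
    sense | d) over a set of instances D.

    posteriors: one posterior distribution over predicted senses per
                instance, e.g. [{0: 0.9, 1: 0.1}, ...]
    gold_match: for each instance, the predicted-sense id that the global
                maximum matching of Sec. 4.1 paired with its gold sense.
    """
    errors = [1.0 - post[m] for post, m in zip(posteriors, gold_match)]
    return sum(errors) / len(errors)
```

A perfect system, which puts posterior probability one on the matched gold sense of every instance, scores 0; a maximally uncertain two-sense system scores 0.5.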
Since the individual error value is unique for a given instance, this measure can be calculated for any set of instances, in particular at the level of a single sense, a target, or across the whole data. By contrast to the classification measures, which assign a categorical label to an instance, this measure takes into account the potential numerical variations of the probability values. However, at the level of a sense it does not capture any information about the false positive cases. As a consequence, classification measures and MAE are likely to show complementary aspects of performance.

# 4.3 Based on the Estimated Parameters

# 4.3.1 Emergence Classification Measures

Generally, the task of emergence detection consists in predicting the year (or period of time) when a new sense emerges. As explained in §2.4, this task is performed by applying the emergence detection algorithm on the inferred $P(s|t)$ parameter. In theory the true answer is the emergence year, but in a classification setting it is reasonable to allow some margin of error. Thus a predicted emergence is counted as correct if it falls within the bounds of a 5-year window centered on the true emergence year. Based on this categorisation, the standard precision, recall and F1-score can be calculated across all targets.

# 4.3.2 Emergence Mean Absolute Error

The binary classification measures restrict the predicted answer to be either inside or outside a window, and thus do not take into account the distance between the gold and predicted emergence years. By contrast, a numerical error value can be calculated as follows:

$$
e = \left\{ \begin{array}{l l} 0 & \text {if } \neg g \wedge \neg p \\ M & \text {if } (\neg g \wedge p) \vee (g \wedge \neg p) \\ | y - \hat {y} | & \text {if } g \wedge p \end{array} \right.
$$

where:

- $g$ (resp. $p$) is true if and only if the gold (resp. predicted) sense has emergence,
- $M$ is the maximum error, defined as the number of years of data for a specific target,
- $y$ is the true year of emergence and $\hat{y}$ is the predicted year of emergence.

In order to compare error levels across different targets, a normalised variant is defined as $e_{norm} = \frac{e}{M}$. The MAE is defined over a set of senses $S$ as the mean of their $e_{norm}$ values.

The intuition is that the case where both the gold and the predicted senses have emergence should always be assigned a lower error than when only one of them has emergence; therefore we assign the maximum error in the latter case. Since targets do not all have the same number of years of data, the maximum individual error differs among targets; this is why a normalised variant is used, where the individual value is divided by the total number of years. This allows comparisons of the error level between senses and targets, as well as at the system level.

# 4.3.3 Time Series Distances

The predicted evolution across time of the sense probability $P(s|t)$ is an essential outcome of the DWSI task. We use distance measures in order to evaluate how far the predicted $P(s|t)$ is from the true probability across time. There are many options available for measuring the distance between two time series. We propose two of them:

- The linear Euclidean distance is a simple measure which assumes that the $i^{th}$ point in one sequence is aligned with the exact $i^{th}$ point in the other one.
- The non-linear Dynamic Time Warping (DTW) distance measure performs an alignment of the two sequences (Berndt and Clifford, 1994; Sardá-Espinosa, 2017). This allows a more flexible comparison of the dissimilarity with respect to the alignment of the two series across time.

The advantage of DTW over the Euclidean measure is that DTW is robust to time shifts, scale and noise, and is not restricted to series of equal length. 
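As an illustration, the classic dynamic-programming formulation of DTW with an absolute-difference local cost can be written as follows (a generic textbook sketch, not the specific implementation used in the experiments, where existing tooling such as that described by Sardá-Espinosa (2017) is available):

```python
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance
    between two sequences, with |x - y| as the local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = best cumulative cost of aligning a[:i] with b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]
```

For example, [0, 0, 1, 1] and its one-step-shifted variant [0, 1, 1, 1] have DTW distance 0 while their Euclidean distance is 1: the warping path absorbs the time shift.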
In our task, we will compare both Euclidean and DTW results and test whether DTW finds local similarities between sequences which share some patterns but are not fully aligned.

# 5 Results and Analysis

In this section, we evaluate the NEO and SCAN systems using the dataset presented in §3 and the evaluation methods defined in §4. This allows us to compare the two systems on the same grounds. Additionally, this rich annotated dataset allows us to provide an in-depth analysis, uncovering the strengths and weaknesses of the two systems.

The DWSI task is unsupervised, so the whole data is used both to estimate the parameters and to evaluate the predictions. No parameter has been tuned at any point: the experiments are run using the systems provided by the original authors with their default parameters, except for the number of senses (the true number of senses is used for every target), a one-year time interval, and the size of the context window (10).

# 5.1 Observations of Posterior Distribution

The graphs in Figure 1 show the frequency of the predicted probabilities that correspond to the matched gold senses, and the frequency of the highest predicted probabilities assigned to each instance. The predicted probabilities follow a U-shaped distribution, which means the systems tend to assign extreme probabilities (close to either zero or one) to the majority of the data. The graphs also show the overlap between the predicted gold sense probabilities and the highest predicted probabilities, which represents the instances where the true sense was predicted correctly. By contrast, the red area on the left half represents cases where the true sense is predicted with a low probability (false negatives), and the blue area which does not overlap represents instances where an incorrect sense is predicted (false positives). In comparison to NEO, SCAN tends to assign even more extreme probabilities.
In particular, SCAN tends to make more serious errors: in more than 5 million cases, the predicted probability for the gold sense is 0 (or close to 0) instead of 1.

Table 2 compares the deciles of the error distribution between NEO and SCAN. For NEO, the error is below 0.1 (near-perfect predictions) for more than $30\%$ of the instances, while it is above 0.9 (totally incorrect predictions) for slightly less than $20\%$ of the instances. In contrast, SCAN predicts correctly more than $40\%$ of the instances, while its incorrect predictions exceed $30\%$.

Overall, NEO performs better than SCAN according to the MAE: 0.425 vs. 0.444. This difference is significant (p-value 0.000024 for the Wilcoxon signed-rank test at the level of targets).

# 5.2 Influence of Data Size

It is often expected that performance improves with the amount of data provided. This is not borne out by the data, which show a slight negative correlation (between -0.1 and -0.3) between data size and performance across targets in both systems.

![](images/c4704d60a6dd4dbca7eb07ad19b06180041f2249cacb74c6d630754052057462.jpg)
Figure 1: Distribution of the probabilities predicted by the NEO and SCAN systems: the red distribution represents the predicted probability of the gold sense for every instance in the data; the blue distribution represents the highest predicted probability for every instance.
Pearson correlation: NEO -0.48, SCAN -0.52
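The decile comparison of Table 2 can be read off with `numpy.quantile`. The error values below are simulated stand-ins for the per-instance errors (Beta draws mimicking a U-shaped distribution), not the systems' actual outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical per-instance error values in [0, 1] for two systems;
# Beta draws stand in for the U-shaped clustering MAE errors
errors_neo = rng.beta(0.5, 0.7, size=10_000)
errors_scan = rng.beta(0.2, 0.2, size=10_000)

for q in [i / 10 for i in range(1, 10)]:
    neo, scan = np.quantile(errors_neo, q), np.quantile(errors_scan, q)
    print(f"{int(q * 100)}%  {neo:.3f}  {scan:.3f}")
```

The more mass a system puts on extreme probabilities, the flatter the low deciles and the steeper the jump in the high ones, which is the pattern Table 2 shows for SCAN.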
| Bottom N % | decile (NEO) | decile (SCAN) |
|---|---|---|
| 10% | 0.009 | 0.0000002 |
| 20% | 0.039 | 0.00003 |
| 30% | 0.095 | 0.001 |
| 40% | 0.189 | 0.016 |
| 50% | 0.331 | 0.174 |
| 60% | 0.518 | 0.774 |
| 70% | 0.718 | 0.985 |
| 80% | 0.880 | 0.999 |
| 90% | 0.973 | 0.999 |

Table 2: Deciles of the error values for the predicted senses (across all instances), based on the clustering mean absolute error evaluation measure, for the NEO and SCAN systems.

We investigate how the size of each sense (as opposed to the full target size) contributes to the performance of the model. In other words, we observe the difference between targets where the senses have a similar size and targets where there is a strong imbalance between the senses. For every target, the standard deviation of the sense size proportions is used as a measure of the imbalance across senses. Figure 2 shows the relationship between this standard deviation and the macro F1-score. There is a clear pattern where higher imbalance between senses is associated with lower performance in general, regardless of the model type.

![](images/380a238dde95bb81818d9fa08834670a3f9c37cd6b7a139f8cd1c1a3795c8796.jpg)
Figure 2: Relation between gold sense imbalance and performance by target.

A detailed analysis shows that SCAN outperforms NEO when the imbalance between senses within a target is not large, while the two systems perform similarly otherwise. This effect can be observed in the global classification results in Table 3: SCAN outperforms NEO at the level of macro results, whereas NEO performs better at the level of micro results. However, the Wilcoxon rank test shows that the superiority of SCAN at the level of macro F1-score by target is not significant (p-value: 0.354), whereas the superiority of NEO at the level of micro F1-score is (p-value: 1.167e-07).

| Perf. | P (NEO) | R (NEO) | F1 (NEO) | P (SCAN) | R (SCAN) | F1 (SCAN) |
|---|---|---|---|---|---|---|
| macro | 0.548 | 0.569 | 0.558 | 0.562 | 0.591 | 0.577 |
| micro | 0.595 | 0.595 | 0.595 | 0.558 | 0.558 | 0.558 |

Table 3: Global classification results for the NEO and SCAN systems. P/R/F1: Precision/Recall/F1-score.

Given that macro scores are based on the average performance across senses independently of their size, this means that SCAN performs better than NEO on the minority class (i.e. sense) and conversely NEO shows better performance on the majority class. Table 4 confirms that the superiority of SCAN on the minority class is not significant, yet the superiority of NEO on the majority class is.
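To make the macro/micro contrast concrete, here is a self-contained sketch with toy labels (not the actual data). Note that with a single predicted sense per instance, micro precision, recall and F1 coincide, which is why Table 3's micro rows repeat the same value:

```python
from collections import Counter

def prf(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def macro_micro_f1(gold, pred):
    """Macro F1 averages per-sense F1 (senses weighted equally); micro F1
    pools the counts, so the majority sense dominates."""
    TP, FP, FN = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        if g == p:
            TP[g] += 1
        else:
            FP[p] += 1
            FN[g] += 1
    senses = set(gold) | set(pred)
    macro = sum(prf(TP[s], FP[s], FN[s])[2] for s in senses) / len(senses)
    micro = prf(sum(TP.values()), sum(FP.values()), sum(FN.values()))[2]
    return macro, micro

# imbalanced toy target: majority sense "a", minority sense "b"
gold = ["a"] * 9 + ["b"]
pred = ["a"] * 10            # the minority sense is never predicted
macro, micro = macro_micro_f1(gold, pred)
print(round(macro, 3), round(micro, 3))  # 0.474 0.9
```

The minority sense drags the macro score down while barely moving the micro score — the same asymmetry that separates SCAN (better macro) from NEO (better micro).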
| Number of Senses | Sense rank | Mean F1 (NEO) | Mean F1 (SCAN) | Wilcoxon test p-value |
|---|---|---|---|---|
| – | first | 0.299 | 0.321 | 6.657119e-01 |
| – | last | 0.732 | 0.692 | 3.503092e-10 |
| 2 | first | 0.315 | 0.335 | 6.920240e-01 |
| 2 | second | 0.740 | 0.6995 | 1.310836e-09 |
| 3 | first | 0.100 | 0.143 | 1.000000e+00 |
| 3 | second | 0.253 | 0.390 | 1.220703e-02 |
| 3 | third | 0.629 | 0.597 | 2.333984e-01 |

Table 4: Comparison of the performance by senses, ranked by proportion within a target. The sense rank is organised by the number of senses: it starts from the smallest sense (in proportion; rank "first") and increases to the largest (rank "last"). "–" means the ranking is based on the smallest and largest senses across all the data. The Wilcoxon test is applied to the F1 scores of the senses in order to assess whether the distribution of F1 scores is significantly different between NEO and SCAN.

![](images/c2c520f1ab154e2f685628417789e533f9267f4a06dc256e86f0ad926b2310b5.jpg)
Figure 3: Relation between the size of the gold and predicted senses for NEO (top) and SCAN (bottom).
Having confirmed that the imbalance between gold sense sizes has a strong impact on performance, we observe how the two systems behave with respect to the predicted size of the senses. It can be observed in Figure 3 that both systems split the data in favour of the senses with a low proportion, i.e. they tend to predict a larger size for small senses and conversely a smaller size for large senses.[17] This tendency is exacerbated for SCAN, which splits most senses equally regardless of their true size.

# 5.3 Evaluation of Emergence

Table 5 shows the global results after applying the emergence algorithm to the predictions of both systems. NEO performs much better than SCAN in predicting the emergence of a new sense, with an F1-score of 0.275 against 0.105 for SCAN.

| System | Precision | Recall | F1-score |
|---|---|---|---|
| NEO | 0.306 | 0.250 | 0.275 |
| SCAN | 0.126 | 0.090 | 0.105 |

Table 5: Results of NEO and SCAN regarding detecting the emergence of a new sense (5-year window).

Figure 4 shows the gold standard and the predicted emergence years for every sense which has emergence in both NEO and SCAN. SCAN tends to predict earlier emergence years than the gold standard, while NEO tends in the opposite direction, with average differences (predicted − gold) of -17.318 for SCAN and 0.697 for NEO across the senses. This tendency is confirmed by the fact that $90\%$ of the difference errors (predicted − gold) are early predictions for SCAN, while NEO has only $45\%$ of early predictions.

![](images/ec5cfa8eba8e2af667a0ce6ed19744b9fde07b0783557bbe17f69710f9d5d7c4.jpg)
Figure 4: Gold and predicted emergence years for NEO and SCAN, ordered by gold emergence year.

The MAE results shown in Table 6 are consistent with the classification results, showing a better performance by NEO.

| System | Global MAE | Normalised Global MAE |
|---|---|---|
| NEO | 17.076 | 0.295 |
| SCAN | 19.028 | 0.327 |

Table 6: Global emergence MAE, based on individual error by sense.

The emergence results of both systems are affected by data imbalance: for instance, both systems have a high number of FN cases when senses have a lower proportion of the data $(< 0.5)$. Similarly, the FP cases tend to correspond to senses which have a lower proportion.

# 5.4 Evaluation on $P(s|t)$

Table 7 shows that NEO has fewer errors by sense across years than SCAN according to the distance measures over $P(s|t)$. This is confirmed by the Wilcoxon test, which shows that the error distributions of the two systems are significantly different.

One would expect the distance errors to have an impact on emergence. Taking the TP cases (when the emergence is predicted within 5 years of the true emergence, see §5.3) as one category and the rest as the other, one can observe that the mean errors are lower for the former and higher for the latter, as shown in Table 8.

# 5.5 Comparing Evaluation Measures

The evaluation measures reflect different types of errors. The correlation values between the clustering-based classification and regression measures are -0.71 for NEO and -0.44 for SCAN. This apparent
| Distance | Global mean (NEO) | Global mean (SCAN) | Wilcoxon p-value |
|---|---|---|---|
| DTW | 0.182 | 0.222 | 2.0413e-15 |
| Euclidean | 0.124 | 0.142 | 5.3543e-06 |

Table 7: Mean distance errors across senses by the DTW and Euclidean algorithms.
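A minimal dynamic-programming implementation of DTW (Berndt and Clifford, 1994) placed next to the point-wise Euclidean distance shows why DTW tolerates a time shift; the $P(s|t)$ curves below are toy values, not data from the experiments:

```python
import math

def euclidean(a, b):
    """Point-by-point distance; requires equal-length series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def dtw(a, b):
    """Classic dynamic-programming DTW with |x - y| as the local cost;
    allows unequal lengths and non-linear alignments across time."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# the same P(s|t) curve, with the prediction lagging one year behind:
# DTW aligns the shift away, the Euclidean distance does not
p_gold = [0.0, 0.0, 0.2, 0.6, 0.9, 1.0, 1.0, 1.0]
p_pred = [0.0, 0.0, 0.0, 0.2, 0.6, 0.9, 1.0, 1.0]
print(euclidean(p_gold, p_pred), dtw(p_gold, p_pred))  # ≈ 0.548 vs 0.0
```

This is the "local similarity" behaviour tested in §5: two series sharing the same pattern but misaligned in time are close under DTW and far apart under the Euclidean measure.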
| System | Predicted Emergence | DTW mean | Euclidean mean | Error mean |
|---|---|---|---|---|
| NEO | TP | 0.078 | 0.0415 | 0.009 |
| NEO | not TP | 0.189 | 0.130 | 0.313 |
| SCAN | TP | 0.193 | 0.124 | 0.016 |
| SCAN | not TP | 0.222 | 0.142 | 0.334 |

Table 8: Comparison between mean errors by predicted emergence status (error values normalised by the number of years). The DTW and Euclidean distances are obtained by comparing the predicted vs. gold $P(s|t)$, whereas the classification status (TP vs. not TP) and the normalised error mean are calculated based on the emergence year by sense.
| System | Distance Measure | Sense level F1-score | Target level macro F1-score |
|---|---|---|---|
| NEO | DTW | -0.270 | -0.448 |
| NEO | Euclidean | -0.230 | -0.432 |
| SCAN | DTW | -0.313 | -0.419 |
| SCAN | Euclidean | -0.248 | -0.374 |
Table 9: Correlation between the distance measures and the classification measures at the level of senses/targets.

discrepancy between the two evaluation measures is explained by several factors, some related to the definition of the measures and some due to the data characteristics. On the one hand, the MAE is calculated as the average error across the instances which are labelled only with this particular true sense. On the other hand, in the classification setting, all the instances of a target are taken into account for a specific sense. This implies that the instances of the other senses are also taken into account.

For any given year $t$, the parameter $P(s|t)$ is estimated from the proportion of a sense among the instances of this year. This means that the value of the parameter $P(s|t)$ is directly related to the posterior probability used for the evaluation at the level of the instances. One would therefore expect a fairly strong correlation between the DTW and/or Euclidean distance based on the estimated parameter $P(s|t)$ and the evaluation score based on the instances. However, the correlation values observed at the level of senses (e.g. F1-score) are weak, although they are more significant at the level of targets, as shown in Table 9.

The low correlation level is primarily due to the fact that the majority of the targets have two senses which are complements of each other, so the two $P(s|t)$ series are mirrors of each other (i.e. $P(s_{1}|t) = 1 - P(s_{2}|t)$), in turn causing the DTW and Euclidean distance values to be the same for both senses. On the contrary, the instance-based evaluation scores tend to be very different for the two senses, especially in the case of strong size imbalance (see §5.2). The difference in correlation between the level of senses and the level of targets is likely due to the fact that the discrepancies in the evaluation between senses are balanced out at the level of targets.
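The mirror effect for two-sense targets can be checked directly: since $(1 - x) - (1 - y) = y - x$, any distance built from point-wise differences is identical for the two complementary series. A quick check with toy values:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

gold_s1 = [0.1, 0.2, 0.5, 0.8]
pred_s1 = [0.2, 0.3, 0.4, 0.9]
# with two senses, the second series is the complement of the first
gold_s2 = [1 - v for v in gold_s1]
pred_s2 = [1 - v for v in pred_s1]

d1 = euclidean(gold_s1, pred_s1)
d2 = euclidean(gold_s2, pred_s2)
print(d1, d2)  # identical distances for the two mirrored senses
```

The instance-level F1 scores of the two senses, by contrast, can differ sharply under size imbalance — which is exactly why the sense-level correlations in Table 9 come out weak.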
+ +# 6 Conclusion and Discussion + +We have addressed the issue of evaluating DWSI: we evaluated two models, NEO and SCAN, directly on the task itself, independently from any extrinsic related tasks, with a large dataset collected from biomedical resources. We defined and tested various external evaluation measures. Overall, NEO performs significantly better in the tasks of detecting senses and the emergence of new senses, according to most of our evaluation measures. + +The design differences between the models and their parameters could potentially have an effect on the amount of data they require, but it turns out that the global data size has no important effect on the accuracy of either system. Both systems are unable to predict the correct size of the clusters: they tend to split the data almost equally between senses irrespective of the true semantic sense represented by the context words, and this impacts the correct detection of the emergence. This issue also explains why the original studies tend to use a high number of senses in order to capture the true senses, even though this causes the clusters to be split and the appearance of "junk senses". We also find that NEO performs better with larger senses while SCAN tends to perform better with smaller senses. This opens the perspective of combining the advantages of the two systems. We acknowledge that the data is domain-specific, however the observed biases of the systems are likely to hold across domains. + +# Acknowledgements + +We would like to thank Dr. Martin Emms and Dr. Lea Frermann for sharing the code of their systems. We are also grateful to the anonymous reviewers for their valuable comments. + +The first author is grateful to King Abdullah Scholarship Program from the Saudi Arabian Government for supporting this work. 
The ADAPT Centre for Digital Content Technology is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.

# References

Eneko Agirre and Aitor Soroa. 2007. SemEval-2007 task 02: Evaluating word sense induction and discrimination systems. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 7-12.
Donald J Berndt and James Clifford. 1994. Using dynamic time warping to find patterns in time series. In KDD workshop, volume 10, pages 359-370. Seattle, WA.
David M Blei and John D Lafferty. 2006. Dynamic topic models. In Proceedings of the 23rd International Conference on Machine Learning, pages 113-120. ACM.
Olivier Bodenreider. 2004. The unified medical language system (UMLS): integrating biomedical terminology. Nucleic Acids Research, 32(suppl_1):D267-D270.
Paul Cook, Jey Han Lau, Diana McCarthy, and Timothy Baldwin. 2014. Novel word-sense identification. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1624-1635.
Martin Emms and Arun Kumar Jayapal. 2016. Dynamic generative model for diachronic sense emergence detection. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1362-1373.
Lea Frermann. 2017. Bayesian Models of Category Acquisition and Meaning Development. PhD thesis, University of Edinburgh.
Lea Frermann and Mirella Lapata. 2016. A bayesian model of diachronic meaning change. Transactions of the Association for Computational Linguistics, 4:31-45.
Kristina Gulordava and Marco Baroni. 2011. A distributional similarity approach to the detection of semantic change in the google books ngram corpus. In Proceedings of the GEMS 2011 workshop on geometrical models of natural language semantics, pages 67-71.
Arun Jayapal. 2017. Finding Sense Changes by Unsupervised Methods.
PhD thesis, Trinity College Dublin.
Antonio J Jimeno-Yepes, Bridget T McInnes, and Alan R Aronson. 2011. Exploiting MeSH indexing in MEDLINE to generate a data set for word sense disambiguation. BMC Bioinformatics, 12(1):223.
Andrey Kutuzov, Lilja Øvrelid, Terrence Szymanski, and Erik Velldal. 2018. Diachronic word embeddings and semantic shifts: a survey. arXiv preprint arXiv:1806.03537.
Jey Han Lau, Paul Cook, Diana McCarthy, David Newman, and Timothy Baldwin. 2012. Word sense induction for novel sense detection. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 591-601. Association for Computational Linguistics.
Suresh Manandhar, Ioannis Klapaftis, Dmitriy Dligach, and Sameer Pradhan. 2010. SemEval-2010 task 14: Word sense induction & disambiguation. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 63-68, Uppsala, Sweden. Association for Computational Linguistics.
Jean-Baptiste Michel, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K Gray, Joseph P Pickett, Dale Hoiberg, Dan Clancy, Peter Norvig, Jon Orwant, et al. 2011. Quantitative analysis of culture using millions of digitized books. Science, 331(6014):176-182.
Sunny Mitra, Ritwik Mitra, Suman Kalyan Maity, Martin Riedl, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. 2015. An automatic approach to identify word sense changes in text media across timescales. Natural Language Engineering, 21(5):773-798.
Octavian Popescu and Carlo Strapparava. 2015. SemEval 2015, task 7: Diachronic text evaluation. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 870-878.
Christian Rohrdantz, Annette Hautli, Thomas Mayer, Miriam Butt, Daniel A Keim, and Frans Plank. 2011. Towards tracking semantic change by visual analytics.
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 305-310. Association for Computational Linguistics. +Alexis Sardá-Espinosa. 2017. Comparing time-series clustering algorithms in r using the dtwclust package. R package vignette, 12:41. +Dominik Schlechtweg, Barbara McGillivray, Simon Hengchen, Haim Dubossarsky, and Nina Tahmasebi. 2020. Semeval-2020 task 1: Unsupervised lexical semantic change detection. arXiv preprint arXiv:2007.11464. \ No newline at end of file diff --git a/anevaluationmethodfordiachronicwordsenseinduction/images.zip b/anevaluationmethodfordiachronicwordsenseinduction/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..707f95b9b7ab8b5077814649959e0c84052db0bd --- /dev/null +++ b/anevaluationmethodfordiachronicwordsenseinduction/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cf1943050c1e4b146a452dc8e85edf1843f89504bb1ca90f4344e0fa03ce1619 +size 345303 diff --git a/anevaluationmethodfordiachronicwordsenseinduction/layout.json b/anevaluationmethodfordiachronicwordsenseinduction/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..7fd5394c7b4e6b0e921c3001a1a7d0f2be6c8d4f --- /dev/null +++ b/anevaluationmethodfordiachronicwordsenseinduction/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7e50d0e6e8b141f1041f7870f8013cf6e39ade434e2808a336667482594a529 +size 358792 diff --git a/aninstancelevelapproachforshallowsemanticparsinginscientificproceduraltext/bd003223-e145-4ed7-9ad1-1c55f61df622_content_list.json b/aninstancelevelapproachforshallowsemanticparsinginscientificproceduraltext/bd003223-e145-4ed7-9ad1-1c55f61df622_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..9738d1648c1c5860b26a557beb55ace1727ddf72 --- /dev/null +++ 
b/aninstancelevelapproachforshallowsemanticparsinginscientificproceduraltext/bd003223-e145-4ed7-9ad1-1c55f61df622_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:013000b17778ef173b36f4bcbfcb0a404f90f64b6560f64cfb09b0c705a4e083 +size 53785 diff --git a/aninstancelevelapproachforshallowsemanticparsinginscientificproceduraltext/bd003223-e145-4ed7-9ad1-1c55f61df622_model.json b/aninstancelevelapproachforshallowsemanticparsinginscientificproceduraltext/bd003223-e145-4ed7-9ad1-1c55f61df622_model.json new file mode 100644 index 0000000000000000000000000000000000000000..151664436a5a0ed331beff6a751f93616cc66ff2 --- /dev/null +++ b/aninstancelevelapproachforshallowsemanticparsinginscientificproceduraltext/bd003223-e145-4ed7-9ad1-1c55f61df622_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:36c745b1aca5c8d41690f0acc8ea870a2cb8ee7ca87ab81aa20e774de7e5b900 +size 62016 diff --git a/aninstancelevelapproachforshallowsemanticparsinginscientificproceduraltext/bd003223-e145-4ed7-9ad1-1c55f61df622_origin.pdf b/aninstancelevelapproachforshallowsemanticparsinginscientificproceduraltext/bd003223-e145-4ed7-9ad1-1c55f61df622_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d05dc26dbfe7b95934285f9abc056123546c5cbb --- /dev/null +++ b/aninstancelevelapproachforshallowsemanticparsinginscientificproceduraltext/bd003223-e145-4ed7-9ad1-1c55f61df622_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:91645e9ad21f55849f8d9e13e37893c41a1c5845e99e5ae895dc22fe5e65efb9 +size 314551 diff --git a/aninstancelevelapproachforshallowsemanticparsinginscientificproceduraltext/full.md b/aninstancelevelapproachforshallowsemanticparsinginscientificproceduraltext/full.md new file mode 100644 index 0000000000000000000000000000000000000000..e6c6f26984a99a26d6172d1256efd9d2349a3e92 --- /dev/null +++ 
b/aninstancelevelapproachforshallowsemanticparsinginscientificproceduraltext/full.md @@ -0,0 +1,221 @@

# An Instance Level Approach for Shallow Semantic Parsing in Scientific Procedural Text

Daivik Swarup, Ahsaas Bajaj, Sheshera Mysore
Tim O'Gorman, Rajarshi Das, Andrew McCallum

{dswarupogguv, abajaj, smysore
togorman, rajarshi, mccallum }@cs.umass.edu

# Abstract

In specific domains, such as procedural scientific text, human-labeled data for shallow semantic parsing is especially limited and expensive to create. Fortunately, such specific domains often use rather formulaic writing, such that the different ways of expressing relations in a small number of grammatically similar labeled sentences may provide high coverage of semantic structures in the corpus, through an appropriately rich similarity metric. In light of this opportunity, this paper explores an instance-based approach to the relation prediction sub-task within shallow semantic parsing, in which semantic labels from structurally similar sentences in the training set are copied to test sentences. Candidate similar sentences are retrieved using SciBERT embeddings. For labels where it is possible to copy from a similar sentence, we employ an instance-level copy network; when this is not possible, a globally shared parametric model is employed. Experiments show our approach outperforms both baseline and prior methods by 0.75 to 3 F1 absolute on the Wet Lab Protocol Corpus and 1 F1 absolute on the Materials Science Procedural Text Corpus.

# 1 Introduction

Being able to represent natural language descriptions of scientific experiments in a structured form promises to allow tackling a range of challenges, from automating biomedical experimental protocols (Kulkarni et al., 2018) to gaining materials science insight by large-scale mining of the literature (Mysore et al., 2019).
To facilitate these applications, recent work has created datasets annotated with sentence-level semantic structure for procedural scientific text from experimental biology (Kulkarni et al., 2018) and materials science (Mysore et al., 2019). However, these corpora, the Wet Lab Protocols (WLP) corpus and the Materials Science Procedural Text (MSPT) corpus, remain small. This motivates approaches to parsing that are likely to generalize given limited labelled data.

Query: "Centrifuge the sample at 14,000xg for 5 minutes."

Neighbor: "Centrifuge supernatant at 12,000xg for 10 minutes."

Query: "Add $700\mu \mathrm{l}$ $70\%$ ethanol to the tube and invert several times to wash the DNA pellet."

Neighbor: "Add $200\mu l$ $70\%$ ethanol and invert the tube twice to wash the pellet."

Figure 1: Example sentences from the WLP corpus, and their nearest neighbours based on sentence representations obtained from SciBERT.

We propose an instance-based edge-factored approach for the relation prediction sub-problem of shallow semantic parsing. To predict a possible relation between two entities, our approach retrieves a set of sentences similar to the target sentence, and learns to copy relations in those sentences to the target sentence (Figure 1 shows some examples).

However, using only a nearest-neighbours approach over similar sentences poses a coverage problem, as some edge labels may have zero instances in the set of nearest-neighbour sentences. To address this, we employ a parametric approach which can score a label when it is not possible to copy that label from any of the neighbours. Therefore, we combine a local, instance-level approach with a global, parametric approach.
Our instance-based approach is motivated by the observation that text in the WLP and MSPT corpora, both of which describe experimental protocols, follows domain-specific writing conventions (sometimes referred to as a sublanguage (Grishman, 2001; Grishman and Kittredge, 1986)), resulting in text that is repetitive and semi-structured. In such restricted domains, we postulate that a low-bias instance-level approach may generalize better than a parametric approach, which is likely to suffer from a lack of training data.

In evaluations of the proposed approach, we find the proposed local and global approach to outperform baseline methods based on parametric approaches by 0.75 F1 absolute on WLP and 1 F1 absolute on MSPT, and prior work by 2.69 F1 absolute (12.7% error reduction) on the WLP corpus. We also present first results for relation prediction on the MSPT corpus. Code and data for our experiments are available.$^{1}$

# 2 Task Setup and Notation

Given a sentence $X = \langle x_{1},\ldots x_{i},\ldots x_{L}\rangle$ from a dataset $\mathcal{D}$, let $x$ denote tokens, and $(m,t)$ entity mentions and their entity types, where $m\in C$ and $C$ is the set of all possible contiguous token spans in $X$.$^{2}$ In a sentence, we denote the set of all entity mentions with $M$. Given this, we focus on the task of relation prediction, which outputs a set of directed edges $E$ such that $e = (m_s,m_d,r)$ with $e\in E\subset M\times M$, where $m_{s},m_{d}$ denote source and destination mentions, $r\in \{\mathcal{R}\cup \varnothing\}$ denotes a relation edge label, $\mathcal{R}$ denotes the set of relation labels defined for the dataset, and $\varnothing$ denotes the absence of a relation.
# 3 Local and Global Model for Relation Prediction

The proposed relation prediction approach is a combination of two components: a local, instance-based component which predicts the relation $r$ of one edge $(m_s, m_d)$ by copying a label from a set of nearest-neighbour edges $e_n = (m_{ns}, m_{nd}, r_n) \in N$, and a second component making a prediction from a globally shared set of parameters. The set of nearest-neighbour edges $N$ is obtained from similar sentences in the training set (§3.2). This is formulated as follows:

$$
\mathrm{P}_{lg}(r_{i} \mid m_{s}, m_{d}, N) = \begin{cases} \frac{1}{Z} e^{E_{l}(r_{i}, m_{s}, m_{d}, N)} & \text{if } r_{i} \in \mathrm{labels}(N) \\ \frac{1}{Z} e^{E_{g}(r_{i}, m_{s}, m_{d})} & \text{if } r_{i} \notin \mathrm{labels}(N) \end{cases} \tag{1}
$$

$^{1}$ https://github.com/bajajahsaas/knn-srl-procedural-text
$^{2}$ Non-contiguous entities in WLP (< 1%) are excluded.

Here, $E_{g}$ represents the globally shared scoring function and $E_{l}$ the local scoring function; we drop additional arguments to these functions for brevity. $Z$ denotes the normalization constant: $Z = \sum_{r_k\in \mathrm{labels}(N)}e^{E_l(r_k)} + \sum_{r_j\notin \mathrm{labels}(N)}e^{E_g(r_j)}$. In computing the score from $E_{l}$ for each label, an instance-level score $E_{c}(r_{i},m_{s},m_{d},e_{n})$ is aggregated over every neighbour edge carrying that label in $N$ as $E_{l} = \operatorname{logsumexp}_{\mathrm{label}(e_n) = r_i}E_c$. This represents a soft maximum selection of the neighbour edge most similar to the test edge for a given label $r_i$. Here, $\mathrm{labels}(N)$ returns the set of labels present in $N$, and $\mathrm{label}(e_n)$ returns the label of neighbour edge $e_n$.
+ +Equation 1 represents a model which is biased first to copy edge labels from $N$ and in the absence of a label in $N$ rely on a global model. This is in contrast to a model which trades off local and global models in a data-dependent manner, the approach taken in the copy-generate model of See et al. (2017). The proposed formulation imposes an inductive bias in the model to copy edge labels which we believe helps perform well in our small data regime. In practice, our approach uses the local model for more frequently occurring labels and the global model for rare labels. Conceptually, this is once again, in contrast to the models of See et al. (2017) and Gu et al. (2016) which use a copy-model for long-tail or low-frequency phenomena. We believe this contrast is reasonable due to the formulaic nature of the text and the small data regime. Here, a local instance-level approach is able to generalize better by copying labels while the global model suffers from a lack of training data to learn the majority label patterns. Low frequency labels would see comparable performance for the global and instance level models. We confirm these intuitions empirically in §4. Next we define the neural-network parameterization of the model. 
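As a sanity check of the decision rule in Equation 1, here is a minimal numeric sketch; the energies and the WLP-style label names are made up for illustration, not taken from the paper:

```python
import math

def log_sum_exp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def p_lg(label_set, neighbour_scores, global_scores):
    """Toy version of Eq. 1: labels present among the neighbour edges are
    scored with the local energy E_l (a logsumexp over the per-neighbour
    copy scores E_c); all other labels fall back to the global energy E_g.
    A single softmax over both kinds of energies gives P_lg.

    neighbour_scores: {label: [E_c for each neighbour edge with that label]}
    global_scores:    {label: E_g} for every label in label_set
    """
    energies = {r: log_sum_exp(neighbour_scores[r]) if r in neighbour_scores
                else global_scores[r]
                for r in label_set}
    z = log_sum_exp(list(energies.values()))
    return {r: math.exp(e - z) for r, e in energies.items()}

# illustrative labels and made-up energies: "Acts-on" and "None" are
# copyable from the neighbours, "Site" must use the global model
labels = {"Acts-on", "Site", "None"}
probs = p_lg(labels,
             {"Acts-on": [2.1, 0.4], "None": [1.0]},
             {"Acts-on": 0.0, "Site": -1.0, "None": 0.0})
print(probs)
```

The hard `if r in neighbour_scores` branch is the inductive bias described above: the model is forced to copy whenever a label is available among the neighbours, rather than learning a data-dependent mixing weight as in the copy-generate model.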
# 3.1 Edge Representation and Scoring Function Parameterization

We define the instance-level scoring function $E_{c}$ and the global scoring function $E_{g}$ as follows:

$$
\begin{aligned} E_{c}(e_{n}) &= \mathrm{FFN}_{R}([\mathbf{e}_{q}; \mathbf{e}_{n}; \mathbf{r}_{n}]) && (2a) \\ \mathbf{e}_{q} &= \mathrm{FFN}_{e}([\mathbf{m}_{s}; \mathbf{m}_{d}; \mathbf{t}_{s}; \mathbf{t}_{d}; \mathbf{d}_{s,d}]) && (2b) \\ \mathbf{e}_{n} &= \mathrm{FFN}_{e}([\mathbf{m}_{ns}; \mathbf{m}_{nd}; \mathbf{t}_{ns}; \mathbf{t}_{nd}; \mathbf{d}_{ns,nd}]) && (2c) \end{aligned}
$$

Here, $\mathrm{FFN}_R$ is a feed-forward network which returns a scalar, $\mathbf{e}_q$ is the vector representation of the query/test edge, $\mathbf{e}_n$ that of the neighbour edge, and $\mathbf{r}_n$ the neighbour's relation embedding. The network $\mathrm{FFN}_e$ produces a vector representation for $e_q$ or $e_n$. $\mathbf{m}$ represents a contextualized representation of the source and destination entity mentions, and $\mathbf{t}$ and $\mathbf{d}$ represent vector representations of the entity type and of the distance between the source and destination. The parameters $\mathbf{t}$, $\mathbf{r}$ and $\mathbf{d}$ are learned as model parameters, and contextualized mention representations are obtained from SciBERT (Beltagy et al., 2019) (word-pieces averaged) without fine-tuning. Next, the global scoring function is formulated as:

$$
E_{g}(r_{i}) = \mathrm{FFN}_{R}([\mathbf{e}_{q}; \mathbf{e}_{r_{i}}; \mathbf{r}_{i}]) \tag{3}
$$

While most notation remains the same as in Equation 2, $\mathbf{e}_{r_i}$ represents a globally shared "prototype" edge representation per label, learned as model parameters. Note that $\mathbf{e}_{r_i}$ is only used in the global model and is the same kind of object as $\mathbf{e}_n$.
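A toy version of the edge scoring in Equations 2-3: mention, type and distance vectors are concatenated and passed through feed-forward networks. Everything here (dimensions, single-layer nets, random weights standing in for trained parameters and SciBERT features) is an assumption made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
DM, DT, DD, DE = 8, 4, 4, 16   # toy sizes: mention/type/distance/edge vectors

W_e = rng.normal(size=(DE, 2 * DM + 2 * DT + DD)) / 4   # one-layer FFN_e
w_R = rng.normal(size=2 * DE + DT)                      # FFN_R -> scalar

def ffn_e(m_s, m_d, t_s, t_d, d_sd):
    """e = FFN_e([m_s; m_d; t_s; t_d; d_{s,d}]) with a ReLU (Eq. 2b/2c)."""
    return np.maximum(W_e @ np.concatenate([m_s, m_d, t_s, t_d, d_sd]), 0.0)

def e_c(e_q, e_n, r_n):
    """Instance-level copy score E_c = FFN_R([e_q; e_n; r_n]) (Eq. 2a)."""
    return float(w_R @ np.concatenate([e_q, e_n, r_n]))

# one query edge and one neighbour edge with random stand-in features
feats = lambda: (rng.normal(size=DM), rng.normal(size=DM),
                 rng.normal(size=DT), rng.normal(size=DT), rng.normal(size=DD))
e_q = ffn_e(*feats())
e_n = ffn_e(*feats())
r_n = rng.normal(size=DT)      # relation embedding (size chosen arbitrarily)
print(e_c(e_q, e_n, r_n))      # a scalar copy score for this edge pair
```

In the global scorer (Eq. 3), `e_n` would simply be replaced by the learned prototype edge representation for the candidate label.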
# 3.2 Training and Sentence Retrieval

The proposed approach is trained by maximizing the log likelihood of the observed relations $r^*$ in the dataset: $\mathcal{L} = \sum_{\mathcal{D}} \sum_{E} \log \mathrm{P}_{lg}(r^*)$.

In this work, we obtain the set of edges $N$ from the nearest neighbour sentences, based on representations obtained from SciBERT. Every sentence is represented by the average of its token (word-piece) representations: $\mathbf{v}_X = \frac{1}{L}\sum_{i=1}^{L}\operatorname{SciBERT}(x_i)$. The $K$ nearest neighbours of the query sentence $X_q$ are ranked by $\mathrm{cosine\_sim}(\mathbf{v}_{X_q}, \mathbf{v}_{X_n})$. We set $K = 5$ at training time to obtain the set of edges $N$. At test time we use $K = 40$ and $K = 20$ for WLP and MSPT respectively. In our experiments, we work with approximate nearest neighbours obtained from the annoy package. Complete model hyperparameter and training details are presented in Appendix A.4.

# 4 Results and Analysis

We evaluate the proposed approach on two datasets of procedural scientific text: the Materials Science Procedural Text (MSPT) corpus and the Wet Lab Protocols (WLP) corpus. In both corpora we focus on the sentence-level relation prediction task given gold entity mention spans. The experimental setup is detailed in Appendix A.1.

# 4.1 Baselines

We compare the proposed approach to several baseline approaches as well as prior work:

KULKARNI18: The best approach proposed in prior work on the WLP corpus. This is an edge-factored parametric approach using lexical, dependency and entity-type features.

COPYGEN: The copy-generate model proposed by See et al. (2017), modified for a relation prediction task. The method differs from ours in predicting a copy probability $\alpha$ using a mixing network, which trades off the copy/instance and generate/global components in a data-dependent manner. The model is detailed in Appendix A.2.1.
STRINGCOPY: This approach attempts to copy the relation for a query edge $(m_{qs}, m_{qd})$ from a neighbour edge $(m_{ns}, m_{nd})$ in the nearest neighbours $N$, first based on exact string matches of the mentions and then on the entity types $t$. If this is not possible it predicts $\varnothing$.

GLOBALMODEL: A parametric model without an instance learning component: $P_{g}(r|m_{s},m_{d}) = \mathrm{Softmax}(\mathrm{FFN}_{g}(\mathbf{e}_{\mathbf{q}}))$. Since this is the dominant approach to relation prediction, we believe it is the most reasonable model to compare against to demonstrate the benefits of an instance learning approach.

LOCALMODEL: The instance-based local approach (Eq 1) without the global model.

# 4.2 Results

Overall results: Table 1 presents the performance of the proposed approach against a host of baseline methods and prior work. From row I, we note that the inductive bias to copy is better suited to WLP than to MSPT, and that simple rule-based approaches do not perform at any useful level. Also note that the proposed approach outperforms prior work on WLP (II vs VI). Next, we note that the parametric and instance-based approaches (IV, V) trade off precision and recall as we would expect, and that the proposed approach (VI) outperforms both. Rows IV, V and VI also serve as an ablation of the model components.

Next, consider specifically the results on MSPT, and note the high-recall result of COPYGEN. We explain this as follows. Given the formulaic nature of the data, the proposed approach is biased toward higher precision since it can copy labels; COPYGEN and GLOBALMODEL lack this bias. The MSPT dataset has a sparser set
| ID | Model | WLP Precision | WLP Recall | WLP F1 | MSPT Precision | MSPT Recall | MSPT F1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| I | STRINGCOPY | 6.99 | 35.45 | 11.68 | 1.42 | 15.71 | 2.61 |
| II | KULKARNI18 | 80.98 | 77.04 | 78.96 | - | - | - |
| III | COPYGEN | 81.17 | 80.59 | 80.88 | 66.33 | 72.14 | 69.11 |
| IV | GLOBALMODEL | 81.06 | 80.77 | 80.91 | 66.93 | 70.66 | 68.75 |
| V | LOCALMODEL | 81.32 | 78.75 | 80.01 | 68.72 | 64.16 | 66.36 |
| VI | OUR METHOD | 82.29 | 81.02 | 81.65 | 70.04 | 69.48 | 69.76 |
of relations when considering all pairs of edges between entity mentions (1916/45732 = 4.1%) than WLP (8264/60338 = 13.6%). To perform well on a sparsely labelled dataset, a model must be biased for precision: a conservative model biased for precision would label the true positives and, given the sparsity, attain high recall and overall F1. Since the COPYGEN and GLOBALMODEL models are not biased for precision, they make predictions more liberally, leading to higher recall but significant hits to precision, in contrast to the proposed method. Finally, we note the gap between COPYGEN and GLOBALMODEL on MSPT and attribute it to training variance given the smaller size of MSPT.

Finally, we also compare to an alternative data-dependent method for combining a parametric and an instance-based approach (III vs VI), from See et al. (2017). Our approach, with a stronger inductive bias to copy relations, outperforms it. We also note that this approach performs similarly to GLOBALMODEL (III vs IV). Examination of the predicted copy-probability $\alpha$ on development examples in COPYGEN shows these values to be very small (MSPT mean: $10^{-5}$, WLP mean: $10^{-5}$), confirming that the model always chooses to "generate" (i.e. use a parametric model) and lacks sufficient inductive bias to copy in our datasets. In contrast, in OUR METHOD the local model makes edge predictions for 1852 of 1916 edges $(96\%)$ in MSPT and 8131 of 8264 edges $(98\%)$ in the WLP development sets, confirming the intended and significant invocation of the local model in the proposed approach.
Breakdown by label: As discussed in §3, given our small data regime, we believe a model with a simple inductive bias, such as the local model, generalizes better, while the global model suffers from a lack of training data to learn the majority label patterns; in the case of very low frequency

Table 1: Our methods compared against baseline approaches and prior work on the test sets of the Wet Lab Protocols (WLP) and Materials Science Procedural Text (MSPT) corpora. Results assume access to gold entity mentions and represent micro-averaged performance.
| Data % | 5 | 10 | 20 | 50 | 100 |
| --- | --- | --- | --- | --- | --- |
| WLP GM | 69.18 | 72.72 | 76.78 | 78.76 | 80.91 |
| WLP OM | 70.32 | 73.64 | 77.12 | 79.24 | 81.65 |
| MSPT GM | 48.87 | 57.96 | 61.88 | 65.83 | 68.75 |
| MSPT OM | 50.8 | 59.17 | 60.82 | 66.42 | 69.76 |
Table 2: Performance of GLOBALMODEL (GM) compared against OUR METHOD (OM) with varying amounts of training data (test F1).

labels, the global component performs on par with a simple parametric approach. We see this behaviour in Table 3. While this behaviour reverses the trend of methodologically similar instance-based approaches (See et al., 2017; Snell et al., 2017; Khandelwal et al., 2020), we believe it to be reasonable specifically due to the formulaic writing in our corpora.

Varying training data: Finally, in Table 2 we note that the proposed approach outperforms the parametric approach, GLOBALMODEL, at nearly all levels of training data, demonstrating that the gains from copying labels from similar training sentences hold up even as the pool of sentences to copy from shrinks, once again demonstrating the advantage of a model that leverages formulaic writing.

# 5 Related Work

Instance-based learning approaches have been applied to a range of information extraction tasks such as Semantic Role Labeling (SRL), Named Entity Recognition (NER), and Part-of-Speech (POS) tagging. Akbik and Li (2016) and Wiseman and Stratos (2019) present the work most closely related in terms of the tasks that instance-level methods are applied to. Akbik and Li (2016) apply a nearest-neighbours model for the SRL tasks of predicate and argument labeling based on pre-defined
| WLP | Acts-on | Using | Mod-Link | Meronym | Creates | Count |
| --- | --- | --- | --- | --- | --- | --- |
| Count | 2589 | 1015 | 708 | 345 | 93 | 80 |
| OUR METHOD | 86.51 | 72.34 | 88.72 | 53.66 | 23.44 | 82.76 |
| GLOBALMODEL | 85.71 | 70.75 | 87.84 | 58.06 | 35.48 | 81.38 |

| MSPT | Participant | Amount | Precursor | Condition | Target | Type |
| --- | --- | --- | --- | --- | --- | --- |
| Count | 395 | 375 | 196 | 135 | 84 | 33 |
| OUR METHOD | 64.54 | 78.42 | 54.16 | 29.91 | 51.46 | 76.67 |
| GLOBALMODEL | 61.09 | 77.53 | 58.99 | 20.1 | 50.85 | 80 |
Table 3: Per-label performance of OUR METHOD compared against GLOBALMODEL on a random subset of labels in each dataset, sorted by test set count/frequency. Total label instances (WLP: 8563, MSPT: 3119).

feature representations of predicate-argument pairs; our work presents an instance-level approach for the argument-labeling sub-task. Wiseman and Stratos (2019) applied instance-based methods to the sequence labeling tasks of NER and POS tagging, copying nearest-neighbour labels from a set of candidate sentences as in the current work, but applied to text spans. More generally, instance-based methods have also proven useful for language modeling (Khandelwal et al., 2020), knowledge base reasoning tasks (Das et al., 2020), and few-shot classification (Snell et al., 2017; Sung et al., 2018) and regression (Quinlan, 1993) problems.

Works in text generation such as summarization (See et al., 2017; Gu et al., 2016) have also incorporated "copy" mechanisms, pointing at long-tail phenomena in the text to be summarized or translated rather than directly predicting them. These methods bear close methodological similarity to the proposed approach while having a weaker inductive bias to copy labels. Also similar are retrieve-and-edit approaches, which have applied instance-based methods to generating complex structured outputs and text (Hashimoto et al., 2018; Guu et al., 2018).

# 6 Conclusion

We propose an edge-factored instance-based approach to the relation prediction sub-task within shallow semantic parsing for procedural scientific text. Our approach leverages the highly formulaic writing of procedural scientific text to achieve better generalization than baseline methods with weaker inductive biases to copy, as well as prior parametric approaches, on two corpora of English scientific text.
While our work has only looked at predicting relations in an edge-factored manner, future work might explore ways of predicting higher-order groups of edges.

Other extensions might consider jointly predicting spans and edges as in Akbik and Li (2016). Future work might also consider questions of characterizing and measuring formulaicity in text and how a range of information extraction tasks may be tailored to such texts. Finally, our approach relies on a static retrieval of sentences; there may also be potential to improve this aspect with a dynamic retrieval model trained alongside the label prediction models, similar to Guu et al. (2020). We expect this to be feasible particularly given the small dataset sizes in this domain.

# Acknowledgments

We thank the anonymous reviewers and members of the UMass IESL group for helpful discussion and feedback. This work is funded in part by the Center for Data Science and the Center for Intelligent Information Retrieval, in part by the National Science Foundation under Grants No. IIS-1763618 and DMR-1922090, in part by USC (University of Southern California) subcontract no. 123875727 under Office of Naval Research (ONR) prime contract no. N660011924032, and in part by the Chan Zuckerberg Initiative under the project Scientific Knowledge Base Construction. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.

# References

Alan Akbik and Yunyao Li. 2016. K-SRL: Instance-based learning for semantic role labeling. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers.
Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In EMNLP.

Rajarshi Das, Ameya Godbole, Shehzaad Dhuliawala, Manzil Zaheer, and Andrew McCallum. 2020. A simple approach to case-based reasoning in knowledge bases.
In *Automated Knowledge Base Construction*.
Ralph Grishman. 2001. Adaptive information extraction and sublanguage analysis. In Proceedings of the Workshop on Adaptive Text Extraction and Mining, Seventeenth International Joint Conference on Artificial Intelligence (IJCAI-2001), Seattle, Washington, August 5, 2001.
Ralph Grishman and Richard Kittredge. 1986. Analyzing language in restricted domains: sublanguage description and processing. Psychology Press.
Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In ACL.
Kelvin Guu, Tatsunori B Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. Transactions of the Association for Computational Linguistics.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrieval-augmented language model pre-training. In Proceedings of the International Conference on Machine Learning (ICML).
Tatsunori B Hashimoto, Kelvin Guu, Yonatan Oren, and Percy S Liang. 2018. A Retrieve-and-Edit framework for predicting structured outputs. In Advances in Neural Information Processing Systems, pages 10052-10062.
Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through memorization: Nearest neighbor language models. In International Conference on Learning Representations.
Chaitanya Kulkarni, Wei Xu, Alan Ritter, and Raghu Machiraju. 2018. An annotated corpus for machine reading of instructions in wet lab protocols. In Proceedings of NAACL-HLT.
Sheshera Mysore, Zach Jensen, Edward Kim, Kevin Huang, Haw-Shiuan Chang, Emma Strubell, Jeffrey Flanigan, Andrew McCallum, and Elsa Olivetti. 2019. The materials science procedural text corpus: Annotating materials synthesis procedures with shallow semantic structures. In Proceedings of the 13th Linguistic Annotation Workshop. Association for Computational Linguistics.
J Ross Quinlan.
1993. Combining instance-based and model-based learning. In Proceedings of the Tenth International Conference on Machine Learning, pages 236-243.

Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems.
Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. 2018. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Sam Wiseman and Karl Stratos. 2019. Label-agnostic sequence labeling by copying nearest neighbors. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.

# A Appendix

# A.1 Experimental Setup

WLP: We perform experiments with the splits provided by Kulkarni et al. (2018). In processing the dataset, we also exclude the "Misc-Link" relation as recommended, as well as cross-sentence relations and relations with non-contiguous entities $(< 0.1\%)$.

MSPT: We use the data and splits provided alongside Mysore et al. (2019). A small number of relations labelled across sentences ($< 1\%$) were removed.

# A.2 Baseline Descriptions

# A.2.1 Copy-Generate Based Relation Prediction

COPYGEN forms one of our baseline approaches and bears similarity to the pointer-generator network proposed by See et al. (2017) for text summarization.

Here, one component attempts to predict an edge relation given entity mentions $m_s, m_d \in M$, while another attempts to copy an edge relation label for $(m_s, m_d)$ from a set of edges, $e_n = (m_{ns}, m_{nd}, r_n) \in N$, obtained from the nearest neighbour sentences to the current sentence in the training set.
This model is formulated as follows:

$$
\begin{aligned}
\mathrm{P_{cg}}(r_{i} \mid m_{s}, m_{d}, N) = {} & \alpha\, \mathrm{P_{copy}}(r_{i} \mid m_{s}, m_{d}, N) \\
& + (1 - \alpha)\, \mathrm{P_{gen}}(r_{i} \mid m_{s}, m_{d}) \\
\alpha = {} & \sigma(E_{m}(m_{s}, m_{d}, N))
\end{aligned}
$$

Here, $\alpha \in [0,1]$ denotes a mixing factor for the copy and generate models, $\sigma$ denotes the sigmoid function, $E_{m}$ denotes the mixing network, and $\mathrm{P_{cg}}$, $\mathrm{P_{copy}}$ and $\mathrm{P_{gen}}$ denote the copy-generate, copy and generate models respectively. The individual models are defined as follows:

$$
\mathrm{P_{gen}}(r_{i} \mid m_{s}, m_{d}) = \frac{e^{E_{g}(r_{i}, m_{s}, m_{d})}}{\sum_{j=1}^{|\mathcal{R}|+1} e^{E_{g}(r_{j}, m_{s}, m_{d})}}
$$

$$
\begin{aligned}
\mathrm{P_{copy}}(r_{i} \mid m_{s}, m_{d}, N) &= \sum_{k:\, r_{nk} = r_{i}} \mathrm{P_{att}}(a_{k} \mid m_{s}, m_{d}, N) \\
\mathrm{P_{att}}(a_{k} \mid m_{s}, m_{d}, N) &= \frac{e^{E_{c}(a_{k}, m_{s}, m_{d}, N)}}{\sum_{k'=1}^{|N|} e^{E_{c}(a_{k'}, m_{s}, m_{d}, N)}}
\end{aligned}
$$

Here, $E_{g}$ and $E_{c}$ denote the generate and copy scoring functions respectively, and $\mathrm{P_{att}}$ denotes an attention distribution over the edges $N$ from the nearest neighbour sentences.
While $E_{g}$ and $E_{c}$ are formulated similarly to those in Section 3.1, $E_{m}$ is formulated as follows:

$$
\begin{aligned}
\alpha &= \mathrm{FFN}_{m}([\mathbf{e}_{g}; \mathbf{N}]) \\
\mathbf{N} &= \sum_{k=1}^{|N|} \mathrm{P_{att}}(a_{k})\, \mathbf{e}_{nk}
\end{aligned}
$$

Here, $\mathrm{FFN}_m$ yields scalar mixing scores based on the current edge representation $\mathbf{e}_g$ and a representation $\mathbf{N}$ of the nearest neighbour set, obtained as an attention-weighted sum of the neighbour edge representations.

# A.3 Extended Results

While Table 1 presented test set results, we include performance on the development set in Table 4.

# A.4 Hyperparameters and Compute Details

Table 5 shows the choice of hyperparameters. We did not tune any hyperparameters other than the number of nearest neighbours. We evaluated the models for the following values of $K$: $\{5,10,15,20,30,40,50\}$ and chose the $K$ with the best validation set F1 score for each dataset. During training, we only use $K = 5$. We ran experiments on server nodes with 256G RAM and a single Nvidia Titan X GPU. Training models on the MSPT and WLP corpora took about 3 and 3-5 hours respectively.
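The copy-generate mixture of Appendix A.2.1 can be sketched numerically. In this toy illustration the scores and labels are made up, and $\alpha$ is fixed by hand rather than produced by the mixing network:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

R = 4                                         # number of relation labels (toy)
gen_scores = np.array([0.2, 1.5, -0.3, 0.0])  # E_g(r_j) for each label
copy_scores = np.array([2.0, 0.5, 1.0])       # E_c(a_k) for 3 neighbour edges
neighbour_labels = np.array([1, 1, 3])        # r_nk carried by each neighbour edge
alpha = 0.7                                   # stand-in for sigma(E_m(...))

p_gen = softmax(gen_scores)
p_att = softmax(copy_scores)
# P_copy(r_i) sums the attention mass of neighbours carrying label r_i.
p_copy = np.zeros(R)
np.add.at(p_copy, neighbour_labels, p_att)

p_cg = alpha * p_copy + (1 - alpha) * p_gen
print(round(p_cg.sum(), 6))  # → 1.0, a valid distribution
```

Note that a label never seen among the neighbours (label 2 here) receives copy probability zero, so its only mass comes from the generate component, which is how the mixture degenerates to a purely parametric model when $\alpha$ collapses toward zero, as observed in §4.2.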
| Model | WLP Precision | WLP Recall | WLP F1 | MSPT Precision | MSPT Recall | MSPT F1 |
| --- | --- | --- | --- | --- | --- | --- |
| STRINGCOPY | 5.81 | 31.23 | 9.80 | 1.31 | 14.77 | 2.40 |
| COPYGEN | 80.83 | 79.95 | 80.39 | 66.45 | 72.49 | 69.34 |
| GLOBALMODEL | 80.75 | 79.06 | 79.9 | 67.22 | 69.95 | 68.56 |
| LOCALMODEL | 80.76 | 76.12 | 78.38 | 67.24 | 63.15 | 65.13 |
| OUR METHOD | 81.06 | 80.77 | 80.91 | 70.3 | 68.02 | 69.14 |
Table 4: Our methods compared against baseline approaches and prior work on the validation sets of the Wet Lab Protocols (WLP) and Materials Science Procedural Text (MSPT) corpora. Results assume access to gold entity mentions and represent micro-averaged performance.
| Parameter | WLP | MSPT |
| --- | --- | --- |
| $\mathrm{FFN}_R$ ** | 768 × 512 × 256 × 1 | 512 × 256 × 128 × 1 |
| $\mathrm{FFN}_e$ ** | 1920 × 512 × 256 × 256 | 1920 × 256 × 128 × 128 |
| $\mathrm{FFN}_m$ ** | 256 × 256 × 126 × 64 × 1 | 128 × 256 × 126 × 64 × 1 |
| $\mathrm{FFN}_g$ ** | 256 × 512 × 256 × 14 | 128 × 256 × 128 × 19 |
| Distance Feature Buckets * | 11 | 10 |
| Number of Neighbors (Training) | 5 | 5 |
| Number of Neighbors (Testing) | 40 | 20 |
| Distance Feature Size (d) | 128 | 128 |
| Type Embedding Size (t) | 128 | 128 |
| Relation Embedding Size (r) | 256 | 256 |
| Learning rate | $1 \times 10^{-4}$ | $1 \times 10^{-4}$ |
| Weight decay | $1 \times 10^{-4}$ | $1 \times 10^{-4}$ |
| Optimizer | ADAM | ADAM |
Table 5: Hyperparameter settings for models. * Numbers of tokens between source and destination entities are bucketed; we take the range of distances up to the 90th percentile and divide it into equal buckets, and instances with greater distance fall into the largest bucket. ** All feed-forward networks use ReLU non-linearities between layers.
# An Investigation of Potential Function Designs for Neural CRF

Zechuan Hu $^{\diamond}$ , Yong Jiang $^{\dagger*}$ , Nguyen Bach $^{\dagger}$ , Tao Wang $^{\dagger}$ , Fei Huang $^{\dagger}$ , Kewei Tu $^{\diamond*}$

$^{\diamond}$ School of Information Science and Technology, ShanghaiTech University

Shanghai
Engineering Research Center of Intelligent Vision and Imaging

Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences

University of Chinese Academy of Sciences

$\dagger$ DAMO Academy, Alibaba Group

{huzch,tukw}@shanghaitech.edu.cn

{yongjiang.jy,nguyen.bach,leeo.wangt,f.huang}@alibaba-inc.com

# Abstract

The neural linear-chain CRF model is one of the most widely-used approaches to sequence labeling. In this paper, we investigate a series of increasingly expressive potential functions for neural CRF models, which not only integrate the emission and transition functions, but also explicitly take the representations of the contextual words as input. Our extensive experiments show that the decomposed quadrilinear potential function based on the vector representations of two neighboring labels and two neighboring words consistently achieves the best performance.

# 1 Introduction

Sequence labeling is the task of labeling each token of a sequence. It is an important task in natural language processing and has many applications such as Part-of-Speech (POS) tagging (DeRose, 1988; Toutanova et al., 2003; Xin et al., 2018), Named Entity Recognition (NER) (Ritter et al., 2011; Akbik et al., 2019), and Chunking (Tjong Kim Sang and Buchholz, 2000; Suzuki et al., 2006).

The neural CRF model is one of the most widely-used approaches to sequence labeling and can achieve superior performance on many tasks (Collobert et al., 2011; Chen et al., 2015; Ling et al., 2015; Ma and Hovy, 2016; Lample et al., 2016a). It often employs an encoder such as a BiLSTM to compute the contextual vector representation of each word in the input sequence.
The potential function at each position of the input sequence in a neural CRF is typically decomposed into an emission function (of the current label and the vector representation of the current word) and a transition function (of the previous and current labels) (Liu et al., 2018; Yang et al., 2018).

![](images/211e9fb4e05b92545a5a9fe8f5be40d4571446a0f5ab19215d7c4edbcf77a4cd.jpg)
Figure 1: Neural architecture for sequence labeling

In this paper, we design a series of increasingly expressive potential functions for neural CRF models. First, we compute the transition function from label embeddings (Ma et al., 2016; Nam et al., 2016; Cui and Zhang, 2019) instead of label identities. Second, we use a single potential function over the current word and the previous and current labels, instead of decomposing it into the emission and transition functions, leading to more expressiveness; we also employ tensor decomposition in order to keep the potential function tractable. Third, we take the representations of additional neighboring words as input to the potential function, instead of relying solely on the BiLSTM to capture contextual information.

To empirically evaluate different approaches, we conduct experiments on four well-known sequence labeling tasks: NER, Chunking, and coarse- and fine-grained POS tagging. We find that it is beneficial for the potential function to take representations of neighboring words as input, and that a quadrilinear potential function with a decomposed tensor parameter leads to the best overall performance.

Our work is related to Reimers and Gurevych (2017) and Yang et al. (2018), which also compared different network architectures and configurations and conducted empirical analysis on different sequence labeling tasks. However, our focus is on the potential function design of neural CRF models, which has not been sufficiently studied before.
+ +![](images/beb168d23cd17ca9d1c74da922f3c6375275167a66d7b1e4b5e36a53570cc47b.jpg) +(a) + +![](images/a9b1e04ddc8ae0f39a997daa3f9deafda4a9a2b083e6abdaa8a3cf3b78216d5b.jpg) +(b) + +![](images/d46879213090ff7b5a035c329e44c1adc27c8e37e83dbd592eb9275ec3ab073a.jpg) +(c) + +![](images/13c56e2b2a68ad112c9088a3a38ac2e07351548be3934befee88aadcfa48e1de.jpg) +(d) + +![](images/5c18b0b24dac8514a763b96154f9707d8cd40ee390555bc1c2198c902578e936.jpg) +(e) + +![](images/aba515131627b480020a62e99ed14a1753941e41f087344fd802486293e76ea2.jpg) +(f) +Figure 2: Factor graphs of different models. The solid circles and hollow circles indicate random variables of word encodings and labels respectively. The black squares represent factors. + +# 2 Models + +Our overall neural network architecture for sequence labeling is shown in Figure 1. It contains three parts: a word representation layer, a bi-directional LSTM (BiLSTM) encoder, and an inference layer. The BiLSTM encoder produces a sequence of output vectors $\mathbf{h}_1, \mathbf{h}_2, \dots, \mathbf{h}_M \in \mathbb{R}^{D_h}$ , which are utilized by the inference layer to predict the label sequence. The inference layer typically defines a potential function $s(\mathbf{x}, \mathbf{y}, i)$ for each position $i$ of the input sequence $\mathbf{x}$ and label sequence $\mathbf{y}$ and computes the conditional probability of the label sequence given the input sequence as follows: + +$$ +P (\mathbf {y} | \mathbf {x}) = \frac {\exp (\sum_ {i = 1} ^ {M} s (\mathbf {x} , \mathbf {y} , i))}{\sum_ {\mathbf {y} ^ {\prime}} \exp (\sum_ {i = 1} ^ {M} s (\mathbf {x} , \mathbf {y} ^ {\prime} , i))} +$$ + +where $M$ is the length of the sequence. + +The simplest inference layer assumes independence between labels. It applies a linear transformation to $\mathbf{h}_i$ followed by a Softmax function to predict the distribution of label $y_i$ at each position $i$ (Figure 2(a)). 
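The denominator of $P(\mathbf{y} \mid \mathbf{x})$ above sums over all label sequences, which is exponential in $M$ if done naively; for potentials over adjacent labels, the standard forward algorithm computes it in $O(M|\mathcal{Y}|^2)$. A hedged sketch (not the paper's implementation; the unary-then-pairwise score layout and toy sizes are assumptions for illustration), checked against brute-force enumeration:

```python
import numpy as np
from itertools import product

def log_partition(unary0, pair):
    """Forward algorithm: log sum over all label sequences of
    exp(s0(y_0) + sum_i s(y_{i-1}, y_i))."""
    alpha = unary0.copy()                           # log-forward scores over y_0
    for S in pair:                                  # S: (Y, Y) scores for (y_{i-1}, y_i)
        alpha = np.logaddexp.reduce(alpha[:, None] + S, axis=0)
    return np.logaddexp.reduce(alpha)

rng = np.random.default_rng(0)
Y, M = 3, 4
unary0 = rng.standard_normal(Y)
pair = [rng.standard_normal((Y, Y)) for _ in range(M - 1)]

# Brute force over all Y**M label sequences agrees with the forward pass.
brute = np.logaddexp.reduce([
    unary0[y[0]] + sum(pair[i - 1][y[i - 1], y[i]] for i in range(1, M))
    for y in product(range(Y), repeat=M)
])
print(np.isclose(log_partition(unary0, pair), brute))  # → True
```

All CRF variants introduced below plug different potential functions $s(\mathbf{x}, \mathbf{y}, i)$ into this same inference machinery.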
In many scenarios, however, it makes sense to model dependency between neighboring labels, which leads to linear-chain CRF models.

Vanilla CRF In most previous work on neural CRFs, the potential function is decomposed into an emission function and a transition function (Figure 2(b)), and the transition function is represented by a table $\boldsymbol{\phi}$ maintaining the transition scores between labels.

$$
s(\mathbf{x}, \mathbf{y}, i) = \mathbf{v}_{y_{i-1}}^{T} \boldsymbol{\phi}\, \mathbf{v}_{y_{i}} + \mathbf{h}_{i}^{T} \mathbf{W}_{h} \mathbf{v}_{y_{i}}
$$

where $\mathbf{v}_{y_i}$ is a one-hot vector for label $y_{i}$ and $\mathbf{W}_h \in \mathbb{R}^{D_h \times D_t}$ is a weight matrix.

TwoBilinear Instead of one-hot vectors, we may use dense vectors to represent labels, which has the benefit of encoding similarities between labels. Accordingly, the emission and transition functions are modeled by two bilinear functions.

$$
s(\mathbf{x}, \mathbf{y}, i) = \mathbf{t}_{y_{i-1}}^{T} \mathbf{W}_{t} \mathbf{t}_{y_{i}} + \mathbf{h}_{i}^{T} \mathbf{W}_{h} \mathbf{t}_{y_{i}}
$$

where $\mathbf{W}_t \in \mathbb{R}^{D_t \times D_t}$ is a weight matrix and $\mathbf{t}_{y_i} \in \mathbb{R}^{D_t}$ is the embedding of label $y_{i}$. The factor graph remains the same as vanilla CRF (Figure 2(b)).

ThreeBilinear Figure 2(c) depicts the structure of ThreeBilinear. Compared with TwoBilinear, ThreeBilinear has an extra emission function between the current word representation and the previous label.
$$
\begin{aligned}
s(\mathbf{x}, \mathbf{y}, i) = {} & \mathbf{t}_{y_{i-1}}^{T} \mathbf{W}_{t} \mathbf{t}_{y_{i}} + \mathbf{h}_{i}^{T} \mathbf{W}_{h_{1}} \mathbf{t}_{y_{i}} \\
& + \mathbf{h}_{i}^{T} \mathbf{W}_{h_{2}} \mathbf{t}_{y_{i-1}}
\end{aligned}
$$

Trilinear Instead of three bilinear functions, we may use a trilinear function to model the correlation between $\mathbf{h}_i$, $\mathbf{t}_{y_i}$ and $\mathbf{t}_{y_{i-1}}$. It has strictly more representational power than the sum of three bilinear functions.

$$
s(\mathbf{x}, \mathbf{y}, i) = \mathbf{h}_{i}^{T} \mathbf{U} \mathbf{t}_{y_{i-1}} \mathbf{t}_{y_{i}}
$$

where $\mathbf{U} \in \mathbb{R}^{D_h \times D_t \times D_t}$ is an order-3 weight tensor. Figure 2(d) presents the structure of Trilinear.

D-Trilinear Despite the increased representational power of Trilinear, its space and time complexity becomes cubic. To reduce the computational complexity without compromising the representational power too much, we assume that $\mathbf{U}$ has rank $D_r$ and can be decomposed into the product of three matrices $\mathbf{U}_{t_1}, \mathbf{U}_{t_2} \in \mathbb{R}^{D_t \times D_r}$ and $\mathbf{U}_{h} \in \mathbb{R}^{D_h \times D_r}$. Then the trilinear function can be rewritten as

$$
s(\mathbf{x}, \mathbf{y}, i) = \sum_{j=1}^{D_{r}} \left(\mathbf{g}_{1} \circ \mathbf{g}_{2} \circ \mathbf{g}_{3}\right)_{j}
$$

$$
\mathbf{g}_{1} = \mathbf{t}_{y_{i-1}}^{T} \mathbf{U}_{t_{1}}; \quad \mathbf{g}_{2} = \mathbf{t}_{y_{i}}^{T} \mathbf{U}_{t_{2}}; \quad \mathbf{g}_{3} = \mathbf{h}_{i}^{T} \mathbf{U}_{h}
$$

where $\circ$ denotes the element-wise product. We call the resulting model D-Trilinear. The factor graph of D-Trilinear is the same as that of Trilinear (Figure 2(d)).
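The rank-$D_r$ decomposition can be checked numerically: the decomposed score equals a full contraction with the tensor $\mathbf{U}$ whose entries are $\sum_j U_h[a,j]\, U_{t_1}[b,j]\, U_{t_2}[c,j]$. A minimal NumPy sketch with toy dimensions and random values (not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
Dh, Dt, Dr = 6, 4, 3  # toy sizes for h, label embeddings, and the rank

U_t1 = rng.standard_normal((Dt, Dr))
U_t2 = rng.standard_normal((Dt, Dr))
U_h = rng.standard_normal((Dh, Dr))

t_prev = rng.standard_normal(Dt)   # t_{y_{i-1}}
t_cur = rng.standard_normal(Dt)    # t_{y_i}
h = rng.standard_normal(Dh)        # h_i

# Decomposed D-Trilinear score: element-wise product of three projections, summed.
g1, g2, g3 = t_prev @ U_t1, t_cur @ U_t2, h @ U_h
s_decomposed = np.sum(g1 * g2 * g3)

# Equivalent full-tensor Trilinear contraction with the rank-Dr tensor U.
U = np.einsum('aj,bj,cj->abc', U_h, U_t1, U_t2)          # shape (Dh, Dt, Dt)
s_full = np.einsum('a,b,c,abc->', h, t_prev, t_cur, U)

print(np.isclose(s_decomposed, s_full))  # → True
```

The decomposed form never materializes the $D_h \times D_t \times D_t$ tensor, which is the point of D-Trilinear: scoring costs $O((D_h + D_t)D_r)$ per position instead of cubic space and time.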
D-Quadrilinear We may take the representation of the previous word as an additional input and use a quadrilinear function in the potential function.

$$
s(\mathbf{x}, \mathbf{y}, i) = \mathbf{h}_{i-1}^{T} \mathbf{h}_i^{T} \mathbf{U} \mathbf{t}_{y_{i-1}} \mathbf{t}_{y_i}
$$

where $\mathbf{U}$ is an order-4 weight tensor. However, the computational complexity of this function becomes quartic. Hence we again decompose the tensor into the product of four matrices and rewrite the potential function as follows.

$$
s(\mathbf{x}, \mathbf{y}, i) = \sum_{j=1}^{D_r} \left(\mathbf{g}_1 \circ \mathbf{g}_2 \circ \mathbf{g}_3 \circ \mathbf{g}_4\right)_j
$$

$$
\mathbf{g}_1 = \mathbf{t}_{y_{i-1}}^{T} \mathbf{U}_{t_1}; \quad \mathbf{g}_2 = \mathbf{t}_{y_i}^{T} \mathbf{U}_{t_2}; \quad \mathbf{g}_3 = \mathbf{h}_{i-1}^{T} \mathbf{U}_{h_1}; \quad \mathbf{g}_4 = \mathbf{h}_i^{T} \mathbf{U}_{h_2}
$$

We call the resulting model D-Quadrilinear and its factor graph is shown in Figure 2(e).

D-Pentalinear Following the same idea, we extend D-Quadrilinear to D-Pentalinear by taking the representation of the next word as an additional input. Figure 2(f) shows the structure of D-Pentalinear.
$$
s(\mathbf{x}, \mathbf{y}, i) = \sum_{j=1}^{D_r} \left(\mathbf{g}_1 \circ \mathbf{g}_2 \circ \mathbf{g}_3 \circ \mathbf{g}_4 \circ \mathbf{g}_5\right)_j
$$

$$
\mathbf{g}_1 = \mathbf{t}_{y_{i-1}}^{T} \mathbf{U}_{t_1}; \quad \mathbf{g}_2 = \mathbf{t}_{y_i}^{T} \mathbf{U}_{t_2}; \quad \mathbf{g}_3 = \mathbf{h}_{i-1}^{T} \mathbf{U}_{h_1}; \quad \mathbf{g}_4 = \mathbf{h}_i^{T} \mathbf{U}_{h_2}; \quad \mathbf{g}_5 = \mathbf{h}_{i+1}^{T} \mathbf{U}_{h_3}
$$

# 3 Experiments

We compare neural Softmax and the seven variants of neural CRFs on four sequence labeling tasks: NER, Chunking, and coarse- and fine-grained POS tagging. For NER, we use the datasets from the CoNLL 2002 and 2003 shared tasks (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003). For Chunking, we use the English and German datasets of the CoNLL 2003 shared task (Tjong Kim Sang and De Meulder, 2003) and the Vietnamese dataset (Pham et al., 2017). For the two POS tagging tasks, we select 8 languages from the Universal Dependencies (UD v2.4) treebanks (Nivre et al., 2019).

We conduct our experiments with pretrained word embeddings, character embeddings, and BERT embeddings (Devlin et al., 2019a). For NER and Chunking, we use the BIOES scheme for its better performance than the BIO scheme (Ratinov and Roth, 2009; Dai et al., 2015; Yang et al., 2018). We use F1-score as the evaluation metric for both NER and Chunking. For each experiment, we run each model 5 times with different random seeds and report the average score and the standard deviation. More details can be found in the supplementary material.

# 3.1 Results

We show the detailed results on NER and Chunking with BERT embeddings in Table 1 and the averaged results on all the tasks in Table 2 (the complete results can be found in the supplementary material).
We make the following observations. Firstly, D-Quadrilinear has the best overall performance in all the tasks. Its advantage over D-Trilinear is somewhat surprising because the BiLSTM output $\mathbf{h}_i$ in D-Trilinear already contains information of both the current word and the previous word. We speculate that: 1) information of the previous word is useful in evaluating the local potential in sequence labeling (as shown by traditional feature-based approaches); and 2) information of the previous word is obfuscated in $\mathbf{h}_i$ and hence directly inputting $\mathbf{h}_{i - 1}$ into the potential function helps. Secondly, D-Quadrilinear greatly outperforms BiLSTM-LAN (Cui and Zhang, 2019), one of the state-of-the-art sequence labeling approaches which employs a hierarchically-refined label attention network. Thirdly, D-Trilinear clearly outperforms both ThreeBilinear and Trilinear. This suggests that tensor decomposition could be a viable way to both regularize multilinear potential functions and reduce their computational complexity. + +# 3.2 Analysis + +Small training data We train four of our models on randomly selected $10\%$ or $30\%$ of the training data on the NER and Chunking tasks. We run each experiment for 5 times. Figure 3 shows the average difference in F1-scores between each model and + +
| Model (BERT embed.) | NER En | NER De | NER Nl | NER Es | NER Avg. | Chunk. En | Chunk. De | Chunk. Vi | Chunk. Avg. |
|---|---|---|---|---|---|---|---|---|---|
| Softmax | 90.42±0.16 | 81.91±0.15 | 89.02±0.31 | 85.86±0.34 | 86.80±0.24 | 90.72±0.11 | 93.48±0.07 | 74.13±0.27 | 86.11±0.15 |
| Vanilla CRF | 91.33±0.18 | 83.56±0.18 | 90.03±0.18 | 87.32±0.38 | 88.06±0.23 | 91.05±0.12 | 93.65±0.08 | 76.07±0.08 | 86.92±0.09 |
| TwoBilinear | 91.23±0.07 | 83.21±0.35 | 90.02±0.26 | 87.40±0.24 | 87.96±0.23 | 91.16±0.04 | 93.60±0.10 | 76.10±0.23 | 86.95±0.13 |
| ThreeBilinear | 91.19±0.24 | 83.35±0.19 | 90.06±0.45 | 87.38±0.18 | 87.99±0.26 | 91.13±0.14 | 93.52±0.14 | 75.98±0.23 | 86.87±0.17 |
| Trilinear | 91.24±0.11 | 83.11±0.27 | 90.53±0.41 | 87.38±0.26 | 88.07±0.26 | 91.11±0.04 | 93.68±0.09 | 75.64±0.25 | 86.81±0.13 |
| D-Trilinear | 91.28±0.16 | 83.25±0.36 | 90.52±0.25 | 87.68±0.13 | 88.18±0.22 | 91.32±0.08 | 93.79±0.10 | 76.18±0.13 | 87.10±0.10 |
| D-Quadrilinear | 91.46±0.07 | 83.61±0.22 | 90.76±0.13 | 87.71±0.29 | 88.38±0.18 | 91.51±0.11 | 94.08±0.08 | 76.29±0.36 | 87.29±0.18 |
| D-Pentalinear | 91.47±0.20 | 83.63±0.26 | 90.50±0.27 | 87.69±0.20 | 88.33±0.23 | 91.45±0.08 | 94.23±0.06 | 76.01±0.20 | 87.23±0.11 |

Table 1: Results on NER and Chunking with BERT embeddings.
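Each cell in the tables reports the mean and standard deviation over the 5 runs with different random seeds. With hypothetical per-seed F1-scores (not the paper's raw numbers), the aggregation is simply:

```python
import numpy as np

# Hypothetical F1-scores from 5 runs with different random seeds.
f1_scores = np.array([91.30, 91.52, 91.41, 91.58, 91.49])

mean = f1_scores.mean()
std = f1_scores.std()  # population std; the paper may use the sample std (ddof=1) instead

print(f"{mean:.2f}±{std:.2f}")  # prints 91.46±0.10
```

Whether the paper uses the population or the sample standard deviation is not stated, so the `ddof` choice above is an assumption.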
| Model | NER (Word) | NER (Char) | Chunk. (Word) | Chunk. (Char) | Fine POS (Word) | Fine POS (Char) | Fine POS (BERT) | Coarse POS (Word) | Coarse POS (Char) | Coarse POS (BERT) |
|---|---|---|---|---|---|---|---|---|---|---|
| BiLSTM-LAN | 77.70±0.39 | 82.42±0.55 | 85.59±0.12 | 86.12±0.12 | 94.45±0.14 | 95.41±0.13 | – | 94.75±0.10 | 95.68±0.08 | – |
| Softmax | 78.22±0.32 | 82.14±0.26 | 84.99±0.14 | 85.49±0.07 | 94.91±0.08 | 95.72±0.07 | 95.83±0.07 | 94.47±0.09 | 95.58±0.08 | 96.18±0.08 |
| Vanilla CRF | 79.46±0.57 | 83.59±0.66 | 85.86±0.11 | 86.39±0.08 | 94.89±0.08 | 95.70±0.11 | 95.81±0.09 | 94.53±0.10 | 95.60±0.10 | 96.23±0.09 |
| TwoBilinear | 79.16±0.42 | 83.36±0.42 | 85.57±0.19 | 85.94±0.15 | 94.81±0.11 | 95.64±0.10 | 95.79±0.09 | 94.48±0.08 | 95.58±0.11 | 96.18±0.09 |
| ThreeBilinear | 78.66±0.94 | 83.53±0.28 | 85.51±0.23 | 85.95±0.21 | 94.87±0.09 | 95.66±0.09 | 95.74±0.11 | 94.49±0.09 | 95.54±0.09 | 96.14±0.08 |
| Trilinear | 79.24±0.35 | 83.50±0.38 | 85.57±0.28 | 86.08±0.31 | 94.94±0.13 | 95.71±0.11 | 95.67±0.11 | 94.61±0.11 | 95.63±0.12 | 96.17±0.14 |
| D-Trilinear | 79.41±0.24 | 83.75±0.39 | 85.83±0.13 | 86.42±0.14 | 95.07±0.10 | 95.75±0.08 | 95.74±0.11 | 94.70±0.11 | 95.69±0.08 | 96.25±0.08 |
| D-Quadrilinear | 80.09±0.35 | 84.20±0.39 | 86.58±0.14 | 87.07±0.10 | 95.19±0.08 | 95.88±0.08 | 95.90±0.09 | 94.91±0.10 | 95.82±0.10 | 96.32±0.07 |
| D-Pentalinear | 79.52±0.28 | 84.01±0.42 | 86.53±0.15 | 87.11±0.20 | 95.07±0.19 | 95.82±0.11 | 95.85±0.08 | 94.80±0.15 | 95.79±0.13 | 96.31±0.11 |

Table 2: Results averaged over all the languages for each task. We also show the results of BiLSTM-LAN (Cui and Zhang, 2019), one of the current state-of-the-art sequence labeling approaches, for reference. We do not report the results of BiLSTM-LAN with BERT embeddings because BERT is not available in the BiLSTM-LAN code.
| Layers | Model | NER | Chunking |
|---|---|---|---|
| 2 | Vanilla CRF | 79.86±0.47 | 85.84±0.19 |
| 2 | D-Trilinear | 80.21±0.34 | 85.86±0.19 |
| 2 | D-Quadrilinear | 80.36±0.34 | 86.32±0.14 |
| 3 | Vanilla CRF | 78.72±0.66 | 85.73±0.15 |
| 3 | D-Trilinear | 79.84±0.62 | 85.65±0.20 |
| 3 | D-Quadrilinear | 79.97±0.31 | 85.88±0.15 |
Table 3: Average results with more BiLSTM layers.

![](images/4fc69d000c54b2719673e884d4af2062b3c007df6b54e84bca8aba9acf74b58e.jpg)
Figure 3: The average differences in F1-scores compared with Vanilla CRF with different training data sizes.

Vanilla CRF. It can be seen that with small data, the advantages of D-Trilinear and D-Quadrilinear over Vanilla CRF and Softmax become even larger.

Multi-layer LSTM As discussed in Section 3.1, D-Quadrilinear outperforms D-Trilinear probably because $\mathbf{h}_i$, the BiLSTM output at position $i$, does not contain sufficient information about the previous word. Here we study whether increasing the number of BiLSTM layers would inject more information into $\mathbf{h}_i$ and hence reduce the performance gap between the two models. Table 3 shows the results on the NER and Chunking tasks with word embeddings. D-Quadrilinear still outperforms D-Trilinear, but by comparing Table 3 with Table 2, we see that their difference indeed becomes smaller with more BiLSTM layers. Another observation is that more BiLSTM layers often lead to lower scores. This is consistent with previous findings (Cui and Zhang, 2019) and is probably caused by overfitting.

Speed We test the training and inference speed of our models. Our decomposed multilinear approaches are only a few percent slower than Vanilla CRF during training and as fast as Vanilla CRF during inference, which suggests their practical usefulness. The details can be found in the supplementary material.

# 4 Conclusion

In this paper, we investigate several potential functions for neural CRF models. The proposed potential functions not only integrate the emission and transition functions, but also take into consideration the representations of additional neighboring words. Our experiments show that D-Quadrilinear achieves the best overall performance. Our proposed approaches are simple and effective and could facilitate future research in neural sequence labeling.
+ +# Acknowledgement + +This work was supported by Alibaba Group through Alibaba Innovative Research Program. This work was also supported by the National Natural Science Foundation of China (61976139). + +# References + +Alan Akbik, Tanja Bergmann, and Roland Vollgraf. 2019. Pooled contextualized embeddings for named entity recognition. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 724-728, Minneapolis, Minnesota. Association for Computational Linguistics. +Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, and Xuanjing Huang. 2015. Long short-term memory neural networks for Chinese word segmentation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1197-1206, Lisbon, Portugal. Association for Computational Linguistics. +Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. JMLR. +Leyang Cui and Yue Zhang. 2019. Hierarchically-refined label attention network for sequence labeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4106-4119, Hong Kong, China. Association for Computational Linguistics. +Hong-Jie Dai, Po-Ting Lai, Yung-Chun Chang, and Richard Tzong-Han Tsai. 2015. Enhancing of chemical compound and drug name recognition using representative tag scheme and fine-grained tokenization. Journal of cheminformatics, 7(1):S14. + +Steven J. DeRose. 1988. Grammatical category disambiguation by statistical optimization. Computational Linguistics, 14(1). +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019a. BERT: Pre-training of deep bidirectional transformers for language understanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019b. BERT: Pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT*.

Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. *ArXiv*, abs/1802.06893.

Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016a. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270, San Diego, California. Association for Computational Linguistics.

Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016b. Neural architectures for named entity recognition. In NAACL, pages 260-270.

Wang Ling, Chris Dyer, Alan W Black, Isabel Trancoso, Ramón Fernández, Silvio Amir, Luís Marujo, and Tiago Luís. 2015. Finding function in form: Compositional character models for open vocabulary word representation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1520-1530, Lisbon, Portugal. Association for Computational Linguistics.

Liyuan Liu, Jingbo Shang, Xiang Ren, Frank Fangzheng Xu, Huan Gui, Jian Peng, and Jiawei Han. 2018. Empower sequence labeling with task-aware neural language model. In Thirty-Second AAAI Conference on Artificial Intelligence.

Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF.
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064-1074, Berlin, Germany. Association for Computational Linguistics.

Yukun Ma, Erik Cambria, and Sa Gao. 2016. Label embedding for zero-shot fine-grained named entity typing. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 171-180, Osaka, Japan. The COLING 2016 Organizing Committee.

Jinseok Nam, Eneldo Loza Mencía, and Johannes Fürnkranz. 2016. All-in text: Learning document, label, and word representations jointly. In Thirtieth AAAI Conference on Artificial Intelligence.

Joakim Nivre, Mitchell Abrams, Željko Agić, et al. 2019. Universal Dependencies 2.4. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.

Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP.

Thai-Hoang Pham, Xuan-Khoai Pham, Tuan-Anh Nguyen, and Phuong Le-Hong. 2017. NNVLP: A neural network-based Vietnamese language processing toolkit. In Proceedings of the IJCNLP 2017, System Demonstrations, pages 37-40, Taipei, Taiwan. Association for Computational Linguistics.

Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 147-155, Boulder, Colorado. Association for Computational Linguistics.

Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338-348, Copenhagen, Denmark. Association for Computational Linguistics.

Alan Ritter, Sam Clark, Mausam, and Oren Etzioni. 2011. Named entity recognition in tweets: An experimental study. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1524-1534, Edinburgh, Scotland, UK. Association for Computational Linguistics.
Jun Suzuki, Erik McDermott, and Hideki Isozaki. 2006. Training conditional random fields with multivariate evaluation measures. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 217-224, Sydney, Australia. Association for Computational Linguistics.

Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).

Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task: Chunking. In Fourth Conference on Computational Natural Language Learning and the Second Learning Language in Logic Workshop.

Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.

Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 252-259.

Yingwei Xin, Ethan Hart, Vibhuti Mahajan, and Jean-David Ruvini. 2018. Learning better internal structure of words for sequence labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2584-2593, Brussels, Belgium. Association for Computational Linguistics.

Jie Yang, Shuailong Liang, and Yue Zhang. 2018. Design challenges and misconceptions in neural sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3879-3889, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
# A Appendices

# A.1 Dataset Statistics

The statistics of the datasets used in our experiments are listed in Table 4.
| Task | D | #train | #dev | #test | #label |
|---|---|---|---|---|---|
| NER | en | 14040 | 3250 | 3453 | 17 |
| | de | 12152 | 2867 | 3005 | 17 |
| | nl | 15796 | 2895 | 5196 | 17 |
| | sp | 8319 | 1914 | 1517 | 17 |
| Chunking | en | 14040 | 3250 | 3453 | 38 |
| | de | 12152 | 2867 | 3005 | 13 |
| | vi | 6284 | 786 | 785 | 37 |
| POS | en | 12543 | 2002 | 2077 | 50/17 |
| | de | 13814 | 799 | 977 | 52/17 |
| | it | 13121 | 564 | 482 | 39/17 |
| | id | 4477 | 559 | 557 | 81/16 |
| | nl | 12269 | 718 | 596 | 194/16 |
| | hi | 13304 | 1659 | 1684 | 31/16 |
| | zh | 3997 | 500 | 500 | 42/15 |
| | ja | 7125 | 511 | 550 | 37/16 |
Table 4: The statistics of the datasets for the corresponding tasks. D: dataset. The statistics of coarse-grained POS are the same as fine-grained POS except for the number of labels: the number on the left of '/' is the number of labels for fine-grained POS and the number on the right is for coarse-grained POS.

# A.2 Word representations

We have three different versions of word representations:

- Word Embedding. We use pretrained word embeddings such as GloVe (Pennington et al., 2014) and FastText (Grave et al., 2018).
- Word Embedding and Character Embedding. We use the same character LSTMs as in Lample et al. (2016b) and set the hidden
| Hyperparameter | Setting |
|---|---|
| LSTM Hidden Size | 512 |
| Learning Rate | 0.1 |
| Char Embedding Size | 25 |
| Char Hidden Size | 50 |
| Dropout Rate | 0.5 |
| L2 Regularization | 1e-8 |
| Batch Size | 32 |
| Maximal Epochs | 300 |
| Patience | 10 |

Table 5: Other hyperparameters.
size of the LSTM to 50. The final word representation is the concatenation of the output of the character LSTM and the pretrained word embedding.

- BERT Embedding. We use the respective BERT embedding from Devlin et al. (2019b) for each language. If there is no pretrained BERT embedding for a language, we use multilingual BERT (M-BERT) instead. The word representation is taken from the last four layers of the BERT embedding.

We fine-tune the word embeddings and character embeddings during the training process. We do not fine-tune the BERT embeddings.

# A.3 Hyperparameters setting

We tune the following hyperparameters in our experiments.

LSTM hidden size We test Softmax, Vanilla CRF, D-Trilinear and D-Quadrilinear with LSTM hidden sizes of $\{200, 512\}$ on the English and German datasets of each task and find no significant difference between 200 and 512. Hence, we fix the LSTM hidden size to 512.

Learning Rate We tune it in the range of $\{0.03, 0.1, 0.3\}$ with Softmax, Vanilla CRF, D-Trilinear and D-Quadrilinear on the English and German datasets of each task. We find that the performance is always better when the learning rate is 0.1, so we fix the learning rate to 0.1.

Tag Embedding Dimension $D_t$ We use tag embeddings in all the models except Softmax and Vanilla CRF. We search for the best dimension in $\{20, 50, 100, 200\}$.
| | English | German | Dutch | Spanish | Avg. |
|---|---|---|---|---|---|
| Vanilla CRF | 88.33 | 78.59 | 64.88 | 81.2 | 78.25 |
| D-Quadrilinear | 89.49 | 79.93 | 67.23 | 81.6 | 79.56 |
Table 7: Average results with the Transformer encoder.

Rank $D_r$ In D-Trilinear, D-Quadrilinear, and D-Pentalinear, $D_r$ is a hyperparameter that controls the representational power of the multilinear functions. We select its value from $\{64, 128, 256, 384, 600\}$.

Other hyperparameter settings are listed in Table 5.

# A.4 Additional Analysis

Multilinear vs. Concatenation Our best-performing models are based on multilinear functions with decomposed parameter tensors. An alternative to multilinear functions is to apply an MLP with nonlinear activations to the concatenated input vectors. We run the comparison on the NER task with word embeddings and tune the tag embedding size from $\{20, 50, 100, 200\}$ and the hidden size of the MLP from $\{64, 128, 256, 384\}$. As shown in Table 6, the two concatenation-based models underperform their decomposed multilinear counterparts, but they do outperform TwoBilinear and ThreeBilinear.

Transformer vs. BiLSTM As discussed in Section 3.1, information about the previous word may be obfuscated in $\mathbf{h}_i$. Transformer-like encoders, which can model long-range context, may alleviate the obfuscation. We use a 6-layer Transformer encoder and compare Vanilla CRF and D-Quadrilinear on the NER task with word embeddings. As shown in Table 7, with the Transformer encoder, D-Quadrilinear outperforms Vanilla CRF by $1.31\%$. In comparison, with the BiLSTM encoder, D-Quadrilinear outperforms Vanilla CRF by $0.63\%$. So the advantage of our approach over Vanilla CRF becomes even larger when using the Transformer encoder.

Speed We use an NVIDIA Titan V GPU to test the training and inference speed of the 8 models on the NER English dataset. Figure 4 shows the training and inference time averaged over 10 epochs.
Softmax is much faster than all the other approaches because it does not need to run the Forward-Backward and Viterbi algorithms and can parallelize the predictions at all the positions of a sequence. Our decomposed multilinear approaches are not significantly slower than Vanilla CRF but generally have better performance, which suggests their practical usefulness.
| | English | German | Dutch | Spanish | Avg. |
|---|---|---|---|---|---|
| TwoBilinear | 90.11±0.25 | 73.69±0.30 | 69.31±0.67 | 83.53±0.44 | 79.16±0.42 |
| ThreeBilinear | 90.12±0.22 | 73.20±0.85 | 67.50±2.35 | 83.82±0.32 | 78.66±0.94 |
| Trilinear | 90.19±0.19 | 73.39±0.40 | 69.69±0.60 | 83.70±0.19 | 79.24±0.35 |
| D-Trilinear | 90.43±0.23 | 73.57±0.17 | 69.50±0.32 | 84.15±0.23 | 79.41±0.24 |
| 1Word+2Label | 89.91±0.04 | 74.37±0.12 | 68.96±0.83 | 83.68±0.22 | 79.23±0.30 |
| D-Quadrilinear | 90.44±0.07 | 75.05±0.35 | 70.49±0.68 | 84.41±0.29 | 80.10±0.35 |
| 2Word+2Label | 90.27±0.09 | 74.19±0.32 | 70.34±0.06 | 83.81±0.37 | 79.65±0.21 |
Table 6: Comparison with concatenation-based potential functions (1Word+2Label and 2Word+2Label).

![](images/3014bfc0b103cea31bd00126decd697cc3623440c6e1680c2530b16f13617f6d.jpg)

Figure 4: Training time and inference time averaged over 10 epochs.
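For reference, the Viterbi decoding that Softmax avoids (see the Speed discussion) can be sketched for a linear-chain CRF as follows. This is a toy illustration, not the authors' implementation; the emission and transition score matrices are made-up inputs:

```python
import numpy as np

def viterbi(emissions, transitions):
    """Most likely label sequence for a linear-chain CRF.

    emissions:   (n, L) array, emissions[i, y]  = emission score of label y at position i
    transitions: (L, L) array, transitions[p, y] = transition score from label p to label y
    """
    n, L = emissions.shape
    score = emissions[0].copy()             # best score of a path ending in each label at position 0
    backptr = np.zeros((n, L), dtype=int)   # backptr[i, y] = best previous label for y at position i
    for i in range(1, n):
        # For each current label y, maximize over the previous label p.
        cand = score[:, None] + transitions + emissions[i][None, :]  # (L, L)
        backptr[i] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    # Follow back-pointers from the best final label.
    path = [int(score.argmax())]
    for i in range(n - 1, 0, -1):
        path.append(int(backptr[i, path[-1]]))
    return path[::-1]

# Toy example with 2 labels where transitions strongly discourage 1 -> 1,
# so the greedy per-position choice [1, 1, 0] is overruled.
emissions = np.array([[0.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
transitions = np.array([[0.0, 0.0], [0.0, -5.0]])
print(viterbi(emissions, transitions))  # → [1, 0, 0]
```

Each step is an $L \times L$ max over label pairs, so decoding is $O(nL^2)$ and inherently sequential, whereas Softmax simply takes an argmax per position in parallel.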
| Embed. | Model | NER En | NER De | NER Nl | NER Es | NER Avg. | Chunk. En | Chunk. De | Chunk. Vi | Chunk. Avg. |
|---|---|---|---|---|---|---|---|---|---|---|
| Word | BiLSTM-LAN | 89.46±0.24 | 72.48±0.35 | 66.02±0.70 | 82.83±0.26 | 77.70±0.39 | 91.46±0.11 | 93.16±0.10 | 72.16±0.14 | 85.59±0.12 |
| Word | Softmax | 89.69±0.13 | 72.59±0.30 | 68.01±0.37 | 82.60±0.45 | 78.22±0.32 | 90.77±0.17 | 93.04±0.09 | 71.17±0.17 | 84.99±0.14 |
| Word | Vanilla CRF | 90.33±0.36 | 73.96±0.26 | 69.72±1.00 | 83.81±0.67 | 79.46±0.57 | 91.19±0.12 | 93.15±0.09 | 73.23±0.12 | 85.86±0.11 |
| Word | TwoBilinear | 90.11±0.25 | 73.69±0.30 | 69.31±0.67 | 83.53±0.44 | 79.16±0.42 | 91.45±0.09 | 92.98±0.09 | 72.28±0.39 | 85.57±0.19 |
| Word | ThreeBilinear | 90.12±0.22 | 73.20±0.85 | 67.50±2.35 | 83.82±0.32 | 78.66±0.94 | 91.35±0.24 | 92.98±0.07 | 72.21±0.38 | 85.51±0.23 |
| Word | Trilinear | 90.19±0.19 | 73.39±0.40 | 69.69±0.60 | 83.70±0.19 | 79.24±0.35 | 91.44±0.19 | 93.00±0.08 | 72.28±0.56 | 85.57±0.28 |
| Word | D-Trilinear | 90.43±0.23 | 73.57±0.17 | 69.50±0.32 | 84.15±0.23 | 79.41±0.24 | 91.54±0.13 | 93.19±0.07 | 72.76±0.18 | 85.83±0.13 |
| Word | D-Quadrilinear | 90.44±0.07 | 75.05±0.35 | 70.49±0.68 | 84.41±0.29 | 80.09±0.35 | 91.97±0.14 | 93.35±0.05 | 74.42±0.24 | 86.58±0.14 |
| Word | D-Pentalinear | 90.29±0.06 | 74.09±0.49 | 69.90±0.45 | 83.81±0.12 | 79.52±0.28 | 91.98±0.09 | 93.43±0.07 | 74.17±0.29 | 86.53±0.15 |
| Word & Char | BiLSTM-LAN | 90.71±0.20 | 77.18±0.28 | 77.83±0.90 | 83.97±0.84 | 82.42±0.55 | 91.84±0.10 | 94.20±0.09 | 72.33±0.16 | 86.12±0.12 |
| Word & Char | Softmax | 90.39±0.11 | 76.87±0.26 | 77.40±0.32 | 83.89±0.36 | 82.14±0.26 | 91.13±0.08 | 94.02±0.05 | 71.32±0.08 | 85.49±0.07 |
| Word & Char | Vanilla CRF | 91.15±0.22 | 78.13±0.36 | 79.65±1.52 | 85.45±0.55 | 83.59±0.66 | 91.59±0.12 | 94.23±0.05 | 73.34±0.07 | 86.39±0.08 |
| Word & Char | TwoBilinear | 90.98±0.10 | 77.84±0.41 | 79.12±0.85 | 85.48±0.30 | 83.36±0.42 | 91.78±0.11 | 93.99±0.07 | 72.07±0.26 | 85.94±0.15 |
| Word & Char | ThreeBilinear | 91.24±0.16 | 77.48±0.53 | 80.15±0.33 | 85.27±0.10 | 83.53±0.28 | 91.75±0.06 | 93.92±0.13 | 72.20±0.44 | 85.95±0.21 |
| Word & Char | Trilinear | 91.30±0.11 | 77.41±0.25 | 79.69±0.81 | 85.60±0.32 | 83.50±0.38 | 91.70±0.24 | 94.14±0.11 | 72.42±0.59 | 86.08±0.31 |
| Word & Char | D-Trilinear | 91.18±0.18 | 77.98±0.45 | 80.02±0.68 | 85.83±0.25 | 83.75±0.39 | 91.97±0.17 | 94.24±0.10 | 73.05±0.15 | 86.42±0.14 |
| Word & Char | D-Quadrilinear | 91.34±0.12 | 78.89±0.29 | 80.81±0.78 | 85.75±0.39 | 84.20±0.39 | 92.36±0.06 | 94.52±0.02 | 74.33±0.23 | 87.07±0.10 |
| Word & Char | D-Pentalinear | 91.08±0.32 | 78.53±0.57 | 80.99±0.55 | 85.42±0.23 | 84.01±0.42 | 92.28±0.14 | 94.58±0.04 | 74.48±0.41 | 87.11±0.20 |
Table 8: Results on the NER and Chunking tasks. BiLSTM-LAN (Cui and Zhang, 2019) is one of the current state-of-the-art sequence labeling approaches.

# A.5 Complete Experimental Results

Tables 8, 9, and 10 show the detailed results on the NER, Chunking and two POS tagging tasks.

In addition, we show the results of BiLSTM-LAN (Cui and Zhang, 2019), which is one of the state-of-the-art sequence labeling approaches. We run the released code of BiLSTM-LAN on NER, Chunking and the two POS tagging tasks. We tune the BiLSTM-LAN hyperparameters with a word-level hidden size of $\{100, 200, 400\}$, an LSTM layer number of $\{1, 2, 3, 4\}$, a learning rate of $\{0.003, 0.01, 0.03\}$, and a decay rate of $\{0.03, 0.035, 0.04\}$. All the other hyperparameters follow their default settings. We do not report results of BiLSTM-LAN with BERT embeddings because BERT is not available in the BiLSTM-LAN code.
| Embed. | Model | Chinese | Dutch | English | German | Hindi | Indonesian | Italian | Japanese | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|
| Word | BiLSTM-LAN | 93.34±0.09 | 94.44±0.14 | 95.09±0.08 | 94.18±0.12 | 97.02±0.10 | 91.76±0.35 | 97.62±0.20 | 94.54±0.28 | 94.75±0.10 |
| Word | Softmax | 93.13±0.05 | 94.11±0.04 | 94.66±0.09 | 93.43±0.09 | 96.72±0.06 | 91.00±0.06 | 97.34±0.07 | 95.39±0.26 | 94.47±0.09 |
| Word | Vanilla CRF | 93.10±0.08 | 94.14±0.14 | 94.70±0.04 | 93.41±0.13 | 96.74±0.05 | 91.23±0.05 | 97.41±0.05 | 95.50±0.24 | 94.53±0.10 |
| Word | TwoBilinear | 93.05±0.05 | 94.16±0.09 | 94.64±0.08 | 93.40±0.13 | 96.71±0.05 | 91.11±0.07 | 97.38±0.05 | 95.41±0.10 | 94.48±0.08 |
| Word | ThreeBilinear | 93.06±0.07 | 94.21±0.11 | 94.66±0.05 | 93.43±0.11 | 96.72±0.05 | 91.07±0.09 | 97.33±0.10 | 95.41±0.16 | 94.49±0.09 |
| Word | Trilinear | 93.64±0.15 | 94.35±0.07 | 94.61±0.08 | 93.38±0.12 | 96.89±0.06 | 90.99±0.13 | 97.58±0.05 | 95.44±0.21 | 94.61±0.11 |
| Word | D-Trilinear | 93.71±0.12 | 94.35±0.07 | 94.77±0.10 | 93.52±0.20 | 96.93±0.04 | 91.27±0.11 | 97.59±0.04 | 95.46±0.18 | 94.70±0.11 |
| Word | D-Quadrilinear | 94.21±0.11 | 94.59±0.08 | 94.85±0.09 | 93.84±0.13 | 97.02±0.06 | 91.36±0.06 | 97.62±0.08 | 95.78±0.17 | 94.91±0.10 |
| Word | D-Pentalinear | 93.91±0.21 | 94.60±0.06 | 94.78±0.12 | 93.54±0.18 | 97.01±0.13 | 91.46±0.12 | 97.51±0.13 | 95.60±0.23 | 94.80±0.15 |
| Word & Char | BiLSTM-LAN | 94.00±0.12 | 95.45±0.12 | 95.86±0.15 | 94.72±0.06 | 97.10±0.02 | 93.80±0.03 | 98.14±0.09 | 96.34±0.06 | 95.68±0.08 |
| Word & Char | Softmax | 93.67±0.07 | 95.28±0.12 | 95.92±0.02 | 94.28±0.11 | 96.96±0.10 | 93.48±0.06 | 97.88±0.04 | 97.14±0.14 | 95.58±0.08 |
| Word & Char | Vanilla CRF | 93.55±0.16 | 95.37±0.14 | 96.01±0.04 | 94.28±0.13 | 96.96±0.05 | 93.51±0.05 | 97.94±0.05 | 97.21±0.17 | 95.60±0.10 |
| Word & Char | TwoBilinear | 93.51±0.06 | 95.23±0.10 | 95.96±0.07 | 94.43±0.17 | 96.94±0.08 | 93.48±0.19 | 97.88±0.11 | 97.20±0.07 | 95.58±0.11 |
| Word & Char | ThreeBilinear | 93.50±0.10 | 95.25±0.10 | 95.92±0.09 | 94.47±0.10 | 96.89±0.04 | 93.35±0.15 | 97.84±0.04 | 97.08±0.11 | 95.54±0.09 |
| Word & Char | Trilinear | 93.93±0.14 | 95.25±0.06 | 95.84±0.09 | 94.43±0.11 | 97.02±0.11 | 93.33±0.13 | 98.01±0.06 | 97.22±0.26 | 95.63±0.12 |
| Word & Char | D-Trilinear | 94.03±0.12 | 95.30±0.10 | 96.04±0.05 | 94.34±0.06 | 97.06±0.02 | 93.45±0.12 | 98.09±0.03 | 97.21±0.17 | 95.69±0.08 |
| Word & Char | D-Quadrilinear | 94.58±0.10 | 95.50±0.16 | 96.01±0.13 | 94.51±0.09 | 97.19±0.06 | 93.53±0.13 | 98.02±0.04 | 97.25±0.08 | 95.82±0.10 |
| Word & Char | D-Pentalinear | 94.50±0.21 | 95.45±0.13 | 96.08±0.12 | 94.29±0.18 | 97.20±0.12 | 93.60±0.12 | 97.98±0.05 | 97.23±0.08 | 95.79±0.13 |
| BERT | Softmax | 96.70±0.13 | 96.19±0.06 | 96.67±0.04 | 95.06±0.12 | 96.75±0.03 | 92.90±0.12 | 98.34±0.05 | 96.84±0.13 | 96.18±0.08 |
| BERT | Vanilla CRF | 96.72±0.10 | 96.20±0.12 | 96.80±0.08 | 95.30±0.03 | 96.74±0.08 | 92.90±0.08 | 98.31±0.16 | 96.85±0.10 | 96.23±0.09 |
| BERT | TwoBilinear | 96.54±0.09 | 96.02±0.19 | 96.78±0.09 | 95.33±0.05 | 96.71±0.04 | 92.88±0.10 | 98.36±0.06 | 96.86±0.08 | 96.18±0.09 |
| BERT | ThreeBilinear | 96.58±0.10 | 95.96±0.06 | 96.85±0.06 | 95.13±0.07 | 96.68±0.06 | 92.81±0.14 | 98.24±0.06 | 96.84±0.05 | 96.14±0.08 |
| BERT | Trilinear | 96.71±0.03 | 96.01±0.29 | 96.84±0.42 | 95.26±0.09 | 96.68±0.03 | 92.70±0.10 | 98.25±0.11 | 96.75±0.06 | 96.17±0.14 |
| BERT | D-Trilinear | 96.79±0.06 | 96.24±0.18 | 96.82±0.10 | 95.33±0.03 | 96.79±0.06 | 92.88±0.05 | 98.33±0.08 | 96.85±0.07 | 96.25±0.08 |
| BERT | D-Quadrilinear | 96.86±0.06 | 96.25±0.09 | 96.85±0.06 | 95.30±0.04 | 96.87±0.03 | 92.98±0.10 | 98.37±0.07 | 97.05±0.11 | 96.32±0.07 |
| BERT | D-Pentalinear | 96.84±0.03 | 96.20±0.13 | 96.81±0.18 | 95.50±0.11 | 96.85±0.04 | 92.93±0.16 | 98.35±0.09 | 96.98±0.13 | 96.31±0.11 |
+ +Table 9: Results on Coarse POS task. BiLSTM-LAN (Cui and Zhang, 2019) is one of the current state-of-the-art sequence labeling approaches. + +
| Embedding | Model | Chinese | Dutch | English | German | Hindi | Indonesian | Italian | Japanese | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| WORD EMBEDDING | BiLSTM-LAN | 93.16±0.07 | 90.45±0.25 | 94.60±0.12 | 96.41±0.04 | 96.54±0.02 | 94.11±0.11 | 97.62±0.05 | 92.69±0.45 | 94.45±0.14 |
| | SOFTMAX | 93.33±0.09 | 92.17±0.11 | 94.40±0.05 | 96.22±0.03 | 96.25±0.08 | 94.66±0.03 | 97.26±0.10 | 95.02±0.14 | 94.91±0.08 |
| | VANILLA CRF | 93.22±0.11 | 92.13±0.09 | 94.41±0.09 | 96.22±0.06 | 96.39±0.09 | 94.60±0.05 | 97.27±0.06 | 94.86±0.12 | 94.89±0.08 |
| | TWOBILINEAR | 93.19±0.13 | 92.03±0.09 | 94.17±0.05 | 96.15±0.04 | 96.34±0.08 | 94.72±0.06 | 97.24±0.04 | 94.67±0.35 | 94.81±0.11 |
| | THREEBILINEAR | 93.29±0.09 | 92.12±0.06 | 94.12±0.10 | 96.20±0.05 | 96.39±0.06 | 94.82±0.12 | 97.30±0.06 | 94.73±0.14 | 94.87±0.09 |
| | TRILINEAR | 93.79±0.08 | 91.91±0.17 | 94.20±0.09 | 96.27±0.05 | 96.42±0.10 | 94.60±0.11 | 97.49±0.12 | 94.80±0.32 | 94.94±0.13 |
| | D-TRILINEAR | 93.78±0.02 | 92.21±0.28 | 94.29±0.09 | 96.27±0.08 | 96.46±0.05 | 94.77±0.09 | 97.50±0.05 | 95.25±0.10 | 95.07±0.10 |
| | D-QUADLINEAR | 94.24±0.07 | 92.36±0.18 | 94.36±0.06 | 96.28±0.07 | 96.54±0.07 | 94.94±0.09 | 97.62±0.03 | 95.20±0.06 | 95.19±0.08 |
| | D-PENTALINEAR | 94.12±0.18 | 92.21±0.22 | 94.30±0.34 | 96.25±0.46 | 96.39±0.08 | 94.70±0.09 | 97.55±0.10 | 95.03±0.08 | 95.07±0.19 |
| WORD & CHAR | BiLSTM-LAN | 93.88±0.11 | 92.38±0.45 | 95.58±0.08 | 97.14±0.05 | 96.69±0.03 | 94.80±0.05 | 98.02±0.01 | 94.75±0.23 | 95.41±0.13 |
| | SOFTMAX | 93.86±0.10 | 93.05±0.10 | 95.59±0.08 | 97.07±0.06 | 96.55±0.03 | 94.93±0.06 | 97.85±0.05 | 96.85±0.07 | 95.72±0.07 |
| | VANILLA CRF | 93.69±0.08 | 93.21±0.14 | 95.58±0.11 | 97.08±0.11 | 96.59±0.03 | 94.80±0.08 | 97.89±0.06 | 96.75±0.27 | 95.70±0.11 |
| | TWOBILINEAR | 93.54±0.08 | 92.91±0.22 | 95.60±0.13 | 97.03±0.05 | 96.59±0.06 | 94.93±0.05 | 97.83±0.09 | 96.67±0.11 | 95.64±0.10 |
| | THREEBILINEAR | 93.62±0.08 | 93.03±0.13 | 95.55±0.05 | 97.09±0.06 | 96.61±0.04 | 94.97±0.09 | 97.79±0.06 | 96.63±0.19 | 95.66±0.09 |
| | TRILINEAR | 94.02±0.07 | 92.99±0.29 | 95.58±0.09 | 97.10±0.03 | 96.61±0.07 | 94.84±0.11 | 97.98±0.06 | 96.59±0.15 | 95.71±0.11 |
| | D-TRILINEAR | 94.05±0.15 | 92.83±0.17 | 95.69±0.04 | 97.11±0.05 | 96.64±0.02 | 94.95±0.08 | 97.96±0.03 | 96.80±0.08 | 95.75±0.08 |
| | D-QUADLINEAR | 94.49±0.11 | 93.03±0.18 | 95.64±0.05 | 97.10±0.03 | 96.78±0.05 | 95.04±0.09 | 98.02±0.05 | 96.90±0.07 | 95.88±0.08 |
| | D-PENTALINEAR | 94.20±0.12 | 93.03±0.20 | 95.63±0.02 | 97.04±0.05 | 96.70±0.02 | 95.08±0.05 | 97.91±0.30 | 96.93±0.12 | 95.82±0.11 |
| BERT EMBEDDING | SOFTMAX | 96.54±0.05 | 93.58±0.13 | 96.39±0.05 | 97.52±0.06 | 96.26±0.10 | 91.56±0.06 | 98.24±0.03 | 96.16±0.06 | 95.83±0.07 |
| | VANILLA CRF | 96.53±0.11 | 93.57±0.12 | 96.38±0.08 | 97.47±0.03 | 96.31±0.08 | 91.75±0.10 | 98.24±0.09 | 96.25±0.11 | 95.81±0.09 |
| | TWOBILINEAR | 96.47±0.05 | 93.45±0.15 | 96.42±0.05 | 97.52±0.06 | 96.32±0.06 | 91.75±0.15 | 98.16±0.04 | 96.24±0.15 | 95.79±0.09 |
| | THREEBILINEAR | 96.45±0.10 | 93.26±0.21 | 96.32±0.07 | 97.50±0.06 | 96.27±0.03 | 91.73±0.20 | 98.18±0.04 | 96.19±0.18 | 95.74±0.11 |
| | TRILINEAR | 96.60±0.08 | 93.53±0.13 | 96.22±0.05 | 97.55±0.06 | 96.22±0.08 | 90.97±0.24 | 98.22±0.07 | 96.09±0.14 | 95.67±0.11 |
| | D-TRILINEAR | 96.68±0.07 | 93.68±0.19 | 96.31±0.08 | 97.57±0.06 | 96.37±0.07 | 91.55±0.09 | 98.26±0.04 | 96.35±0.09 | 95.74±0.11 |
| | D-QUADLINEAR | 96.74±0.10 | 93.69±0.21 | 96.39±0.05 | 97.56±0.03 | 96.41±0.07 | 91.65±0.18 | 98.29±0.02 | 96.44±0.05 | 95.90±0.09 |
| | D-PENTALINEAR | 97.10±0.02 | 93.40±0.03 | 96.40±0.01 | 97.54±0.09 | 96.33±0.11 | 91.46±0.20 | 98.29±0.06 | 96.30±0.08 | 95.85±0.08 |
+ +Table 10: Results on Fine POS task. BiLSTM-LAN (Cui and Zhang, 2019) is one of the current state-of-the-art sequence labeling approaches. \ No newline at end of file diff --git a/aninvestigationofpotentialfunctiondesignsforneuralcrf/images.zip b/aninvestigationofpotentialfunctiondesignsforneuralcrf/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..23b67f82db246c205dc3225a578be8dedf41cc14 --- /dev/null +++ b/aninvestigationofpotentialfunctiondesignsforneuralcrf/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:734be03b08ac4e031230de3829c32f0a94871d9030e56cad1ea0de3cb5d355bf +size 1236850 diff --git a/aninvestigationofpotentialfunctiondesignsforneuralcrf/layout.json b/aninvestigationofpotentialfunctiondesignsforneuralcrf/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..c1cdfce43b8648a46e27f6d5e81dcb477f1c82a1 --- /dev/null +++ b/aninvestigationofpotentialfunctiondesignsforneuralcrf/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a3a6c007dafb3e3dbbea62684b3531008381014f454059ab38704bf97c759be +size 335434 diff --git a/answerspancorrectioninmachinereadingcomprehension/1834ff58-f219-459f-895a-ece63e482c54_content_list.json b/answerspancorrectioninmachinereadingcomprehension/1834ff58-f219-459f-895a-ece63e482c54_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1993d678793f07e74951ef0053344087831104e5 --- /dev/null +++ b/answerspancorrectioninmachinereadingcomprehension/1834ff58-f219-459f-895a-ece63e482c54_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0a582a79f9e6d94b57d52e900b8416c58c0296390043d97408b2cca1483e94b3 +size 44107 diff --git a/answerspancorrectioninmachinereadingcomprehension/1834ff58-f219-459f-895a-ece63e482c54_model.json b/answerspancorrectioninmachinereadingcomprehension/1834ff58-f219-459f-895a-ece63e482c54_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..ab17e84e12bb8c700b554755055f23536e2659ee --- /dev/null +++ b/answerspancorrectioninmachinereadingcomprehension/1834ff58-f219-459f-895a-ece63e482c54_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c7df3b2286e1fb0c3e6d0e08fc5e75ba6ccc862387c06eb18f6ea31c1293762d +size 52293 diff --git a/answerspancorrectioninmachinereadingcomprehension/1834ff58-f219-459f-895a-ece63e482c54_origin.pdf b/answerspancorrectioninmachinereadingcomprehension/1834ff58-f219-459f-895a-ece63e482c54_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4bf8623b51717b324ca3d0829d2f5aded8b02502 --- /dev/null +++ b/answerspancorrectioninmachinereadingcomprehension/1834ff58-f219-459f-895a-ece63e482c54_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b19422ab47ab3f26ade8e3410141174afd4cf121349bd8f453389e3e037b6cef +size 602987 diff --git a/answerspancorrectioninmachinereadingcomprehension/full.md b/answerspancorrectioninmachinereadingcomprehension/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b08f95de1b09f70f62efd89a6ecda461b118a559 --- /dev/null +++ b/answerspancorrectioninmachinereadingcomprehension/full.md @@ -0,0 +1,183 @@ +# Answer Span Correction in Machine Reading Comprehension + +Revanth Gangi Reddy*, Md Arafat Sultan, Efsun Sarioglu Kayi, Rong Zhang, Vittorio Castelli†, Avirup Sil IBM Research AI + +g.revanthreddy111@gmail.com, arafat.sultan@ibm.com, {avi, zhangr, vittorio}@us.ibm.com, efsun@gwu.edu + +# Abstract + +Answer validation in machine reading comprehension (MRC) consists of verifying an extracted answer against an input context and question pair. Previous work has looked at re-assessing the "answerability" of the question given the extracted answer. Here we address a different problem: the tendency of existing MRC systems to produce partially correct answers when presented with answerable questions. 
We explore the nature of such errors and propose a post-processing correction method that yields statistically significant performance improvements over state-of-the-art MRC systems in both monolingual and multilingual evaluation. + +# 1 Introduction + +Extractive machine reading comprehension (MRC) has seen unprecedented progress in recent years (Pan et al., 2019; Liu et al., 2020; Khashabi et al., 2020; Lewis et al., 2019). Nevertheless, existing MRC systems—readers, henceforth—extract only partially correct answers in many cases. At the time of this writing, for example, the top systems on leaderboards like SQuAD (Rajpurkar et al., 2016), HotpotQA (Yang et al., 2018) and Quoref (Dasigi et al., 2019) all have a difference of 5–13 points between their exact match (EM) and F1 scores, which are measures of full and partial overlap with the ground truth answer(s), respectively. Figure 1 shows three examples of such errors that we observed in a state-of-the-art (SOTA) RoBERTa-large (Liu et al., 2019) model on the recently released Natural Questions (NQ) (Kwiatkowski et al., 2019) dataset. In this paper, we investigate the nature of such partial match errors in MRC and also their post hoc correction in context. + +Q: what type of pasta goes in itaiian wedding soup GT: usually cavatelli, acini di pepe, pastina, orzo, etc. Prediction: acini di pepe + +Q: when does precipitate form in a chemical reaction +GT: When the reaction occurs in a liquid solution +Prediction: Precipitation is the creation of a solid from a solution. When the reaction occurs in a liquid solution + +Q: what is most likely cause of algal blooms +GT: an excess of nutrients, particularly some +phosphates +Prediction: Freshwater algal blooms are the result of +an excess of nutrients + +Figure 1: Examples of partially correct MRC predictions and corresponding ground truth (GT) answers. The reader fails to find a minimal yet sufficient answer in all three cases. 
+ +Recent work on answer validation (Peñas et al., 2007) has focused on improving the prediction of the answerability of a question given an already extracted answer. Hu et al. (2019) look for support of the extracted answer in local entailments between the answer sentence and the question. Back et al. (2020) propose an attention-based model that explicitly checks if the candidate answer satisfies all the conditions in the question. Zhang et al. (2020) use a two-stage reading process: a sketchy reader produces a preliminary judgment on answerability and an intensive reader extracts candidate answer spans to verify the answerability. + +Here we address the related problem of improving the answer span, and present a correction model that re-examines the extracted answer in context to suggest corrections. Specifically, we mark the extracted answer with special delimiter tokens and show that a corrector with architecture similar to that of the original reader can be trained to produce a new accurate prediction. + +Our main contributions are as follows: (1) We + +
analyze partially correct predictions of a SOTA English reader model, revealing a distribution over three broad categories of errors. (2) We show that an Answer Corrector model can be trained to correct errors in all three categories given the question and the original prediction in context. (3) We further show that our approach generalizes to other languages: our proposed answer corrector yields statistically significant improvements over strong RoBERTa and Multilingual BERT (mBERT) (Devlin et al., 2019) baselines on both monolingual and multilingual benchmarks.

# 2 Partial Match in MRC

Short-answer extractive MRC only extracts short sub-sentence answer spans, but locating the best span can still be hard. For example, the answer may contain complex substructures including multi-item lists or question-specific qualifications and contextualizations of the main answer entity. This section analyzes the distribution of broad categories of errors that neural readers make when they fail to pinpoint the exact ground truth span (GT) despite making a partially correct prediction.

To investigate, we evaluate a RoBERTa-large reader (details in Section 3) on the NQ dev set and identify 587 examples where the predicted span has only a partial match $(\mathrm{EM} = 0, \mathrm{F1} > 0)$ with the GT. Since most existing MRC readers are trained to produce single spans, we discard examples where the NQ annotators provided multi-span answers consisting of multiple non-contiguous subsequences of the context. After discarding such multi-span GT examples, we retain $67\%$ of the 587 originally identified samples.

| Error | % |
| --- | --- |
| Single-Span GT | 67 |
| Prediction ⊆ GT | 33 |
| GT ⊆ Prediction | 28 |
| Prediction ∩ GT ≠ ∅ | 6 |
| Multi-Span GT | 33 |

Table 1: Types of errors in NQ dev predictions with a partial match with the ground truth.

There are three broad categories of partial match errors:

1.
Prediction $\subset$ GT: As the top example in Figure 1 shows, in these cases, the reader only extracts part of the GT and drops words/phrases such as items in a comma-separated list and qualifications or syntactic completions of the main answer entity.

2. GT $\subset$ Prediction: Exemplified by the second example in Figure 1, this category comprises cases where the model's prediction subsumes the closest GT, and is therefore not minimal. In many cases, these predictions lack syntactic structure and semantic coherence as a textual unit.
3. Prediction $\cap$ GT $\neq \emptyset$ : This final category consists of cases similar to the last example of Figure 1, where the prediction partially overlaps with the GT. (We slightly abuse the set notation for conciseness.) Such predictions generally exhibit both verbosity and inadequacy.

Table 1 shows the distribution of errors over all categories.

![](images/75fd1fd4db49951e86453436c4415ed026a7a16c87045aed202e7cc3ba4bdea4.jpg)
Figure 2: Flow of an MRC instance through the reader-corrector pipeline. The corrector takes an input, with special delimiter tokens $\left([T_d]\right)$ marking the reader's predicted answer in context, and makes a new prediction.

# 3 Method

In this section, we describe our approach to correcting partial-match predictions of the reader.

# 3.1 The Reader

We train a baseline reader for the standard MRC task of answer extraction from a passage given a question. The reader uses two classification heads on top of a pre-trained transformer-based language model (Liu et al., 2019), pointing to the start and end positions of the answer span. The entire network is then fine-tuned on the target MRC training data. For additional details on a transformer-based reader, see Devlin et al. (2019).

# 3.2 The Corrector

Our correction model uses an architecture that is similar to the reader's, but takes a slightly different input.
As shown in Figure 2, the input to the corrector contains special delimiter tokens marking + +the boundaries of the reader's prediction, while the rest is the same as the reader's input. Ideally, we want the model to keep answers that already match the GT intact and correct the rest. + +To generate training data for the corrector, we need a reader's predictions for the training set. To obtain these, we split the training set into five folds, train a reader on four of the folds and get predictions on the remaining fold. We repeat this process five times to produce predictions for all (question, answer) pairs in the training set. The training examples for the corrector are generated using these reader predictions and the original GT annotations. To create examples that do not require correction, we create a new example from each original example where we delimit the GT answer itself in the input, indicating no need for correction. For examples that need correction, we use the reader's top $k$ incorrect predictions ( $k$ is a hyperparameter) to create an example for each, where the input is the reader's predicted span and the target is the GT. The presence of both GT (correct) and incorrect predictions in the input data ensures that the corrector learns both to detect errors in the reader's predictions and to correct them. + +# 4 Experiments + +# 4.1 Datasets + +We evaluate our answer correction model on two benchmark datasets. + +Natural Questions (NQ) (Kwiatkowski et al., 2019) is an English MRC benchmark which contains questions from Google users, and requires systems to read and comprehend entire Wikipedia articles. We evaluate our system only on the answerable questions in the dev and test sets. NQ contains 307,373 instances in the train set, 3,456 answerable questions in the dev set and 7,842 total questions in the blind test set of which an undisclosed number is answerable. 
To compute exact match on answerable test set questions, we submitted a system that always outputs an answer and took the recall value from the leaderboard. $^{1}$ + +MLQA (Lewis et al., 2019) is a multilingual extractive MRC dataset with monolingual and crosslingual instances in seven languages: English (en), Arabic (ar), German (de), Spanish (es), Hindi (hi), Vietnamese (vi) and Simplified Chinese (zh). It + +
has 15,747 answerable questions in the dev set and a much larger test set with 158,083 answerable questions.

# 4.2 Setup

Our NQ and MLQA readers fine-tune a RoBERTa-large and an mBERT (cased, 104 languages) language model, respectively. Following Alberti et al. (2019), we fine-tune the RoBERTa model first on SQuAD2.0 (Rajpurkar et al., 2018) and then on NQ. Our experiments showed that training on both answerable and unanswerable questions yields a stronger and more robust reader for NQ, even though we evaluate only on answerable questions. For MLQA, we follow Lewis et al. (2019) to train on SQuAD1.1 (Rajpurkar et al., 2016), as MLQA does not contain any training data. We obtain similar baseline results as reported in (Lewis et al., 2019). All our implementations are based on the Transformers library by Wolf et al. (2019).

For each dataset, the answer corrector uses the same underlying transformer language model as the corresponding reader. While creating training data for the corrector, to generate examples that need correction, we take the two $(k = 2)$ highest-scoring incorrect reader predictions (the value of $k$ was tuned on dev). Since our goal is to fully correct any inaccuracies in the reader's prediction, we use exact match (EM) as our evaluation metric. We train the corrector model for one epoch with a batch size of 32, a warmup rate of 0.1 and a maximum query length of 30. For NQ, we use a learning rate of 2e-5 and a maximum sequence length of 512; the corresponding values for MLQA are 3e-5 and 384, respectively.

# 4.3 Results

We report results obtained by averaging over three seeds. Table 2 shows the results on the answerable questions of NQ. Our answer corrector improves upon the reader by 1.6 points on the dev set and 1.3 points on the blind test set.

| Model | Dev | Test |
| --- | --- | --- |
| RoBERTa Reader | 61.2 | 62.4 |
| + Corrector | 62.8 | 63.7 |

Table 2: Exact Match results on Natural Questions.

Results on MLQA are shown in Table 3.
We compare performances in two settings: one with the paragraph in English and the question in any + +
of the seven languages (En-Context), and the other being the Generalized Cross-Lingual task (G-XLT) proposed in (Lewis et al., 2019), where the final performance is the average over all 49 (question, paragraph) language pairs involving the seven languages.

| Model | En-Context Dev | En-Context Test | G-XLT Dev | G-XLT Test |
| --- | --- | --- | --- | --- |
| mBERT Reader | 47.5 | 45.6 | 35.0 | 34.7 |
| + Corrector | 48.3 | 46.4 | 35.5 | 35.3 |

Table 3: Exact match results on MLQA. En-Context refers to examples with an English paragraph, G-XLT refers to the generalized cross-lingual transfer task.
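The G-XLT score described above is simply the mean EM over the 7 × 7 grid of (question language, context language) pairs. A minimal sketch, with placeholder scores rather than the paper's numbers:

```python
# EM scores indexed by (question_lang, context_lang).
# The values below are placeholders for illustration only.
langs = ["en", "ar", "de", "es", "hi", "vi", "zh"]
em = {(q, c): 35.0 for q in langs for c in langs}  # 49 language pairs

# G-XLT: average EM over all 49 (question, paragraph) language pairs.
g_xlt = sum(em.values()) / len(em)
print(len(em), round(g_xlt, 1))
```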
Table 4 shows the differences in exact match scores for all 49 MLQA language pair combinations, from using the answer corrector over the reader. On average, the corrector gives performance gains for paragraphs in all languages (last row). The highest gains are observed in English contexts, which is expected as the model was trained to correct English answers in context. However, we find that the approach generalizes well to the other languages in a zero-shot setting as exact match improves in 40 of the 49 language pairs.

| q \ c | en | es | hi | vi | de | ar | zh |
| --- | --- | --- | --- | --- | --- | --- | --- |
| en | ↑0.2 | ↓0.1 | ↓0.2 | ↓0.4 | ↓0.1 | ↓0.1 | ↓0.3 |
| es | ↑0.9 | ↓0.2 | ↓0.1 | ↑0.2 | ↑0.8 | ↑0.5 | ↑1.4 |
| hi | ↑0.8 | ↑0.8 | ↑0.8 | ↑0.8 | ↑0.6 | ↑0.4 | ↑0.2 |
| vi | ↑0.9 | ↑1.7 | ↑0.7 | ↑0.3 | ↑1.3 | ↑0.9 | ↑0.5 |
| de | ↑1.7 | ↑0.6 | ↓0.1 | ↑0.6 | ↑0.1 | ↑1.3 | ↑0.9 |
| ar | ↑0.5 | ↑1.0 | ↑0.4 | ↑0.7 | ↑0.9 | ↑0.5 | ↑0.4 |
| zh | ↑0.9 | ↑0.1 | ↑0.9 | ↑0.8 | ↑1.3 | ↑0.4 | ↑0.3 |
| Avg. | ↑0.8 | ↑0.6 | ↑0.3 | ↑0.4 | ↑0.7 | ↑0.6 | ↑0.5 |

Table 4: Changes in exact match with the answer corrector, for all the language pair combinations in the MLQA test set. The final row shows the gain for each paragraph language averaged over questions in different languages.

We performed Fisher randomization tests (Fisher, 1936) on the exact match numbers to verify the statistical significance of our results. For MLQA, we found our reader + corrector pipeline to be significantly better than the baseline reader on the 158k-example test set at $p < 0.01$ . For NQ, the $p$ -value for the dev set results was approximately 0.05.
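The significance test above can be approximated with a paired randomization test on per-example EM scores: each paired outcome is randomly swapped between the two systems, and the p-value is the fraction of shuffles whose mean difference is at least as large as the observed one. A hedged sketch on toy data (not the paper's test script or its data):

```python
import random

def randomization_test(em_a, em_b, trials=10000, seed=0):
    """Two-sided paired randomization test on per-example EM (0/1) scores.

    Randomly swaps each paired outcome between systems and counts how often
    the absolute mean difference is at least as large as the observed one.
    """
    rng = random.Random(seed)
    n = len(em_a)
    observed = abs(sum(em_a) - sum(em_b)) / n
    hits = 0
    for _ in range(trials):
        diff = 0.0
        for a, b in zip(em_a, em_b):
            if rng.random() < 0.5:  # swap the pair with probability 1/2
                a, b = b, a
            diff += a - b
        if abs(diff) / n >= observed:
            hits += 1
    return hits / trials

# Toy example: system B fixes some of system A's errors on 400 questions.
a = [1, 0, 1, 0, 0, 1, 0, 0] * 50
b = [1, 1, 1, 0, 0, 1, 1, 0] * 50
p = randomization_test(a, b)
print(p < 0.01)
```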
# 5 Analysis

# 5.1 Comparison with Equal Parameters

In our approach, the reader and the corrector have a common architecture, but their parameters are separate and independently learned. To compare with an equally sized baseline, we build an ensemble system for NQ which averages the output logits of two different RoBERTa readers. As Table 5 shows, the corrector on top of a single reader still outperforms this ensemble of readers. These results confirm that the proposed correction objective complements the reader's extraction objective well and is fundamental to our overall performance gain.

| Model | EM |
| --- | --- |
| Reader | 61.2 |
| Ensemble of Readers | 62.1 |
| Reader + Corrector | 62.8 |

Table 5: Error correction versus model ensembling.

# 5.2 Changes in Answers

We inspect the changes made by the answer corrector to the reader's predictions on the NQ dev set. Overall, it altered $13\%$ (450 out of 3,456) of the reader predictions. Of all changes, $24\%$ resulted in the correction of an incorrect or a partially correct answer to a GT answer and $10\%$ replaced the original correct answer with a new correct answer (due to multiple GT annotations in NQ). In $57\%$ of the cases, the change did not correct the error. On a closer look, however, we observe that the F1 score went up in more of these cases $(30\%)$ compared to when it dropped $(15\%)$ . Finally, $9\%$ of the changes introduced an error in a correct reader prediction. These statistics are shown in Table 6.
| R \ R+C | Correct | Incorrect |
| --- | --- | --- |
| Correct | 45 (10%) | 43 (9%) |
| Incorrect | 109 (24%) | 253 (57%) |
+ +Table 6: Statistics for the correction model altering original reader predictions. The row header refers to predictions from the reader and the column header refers to the final output from the corrector. + +Table 7 shows some examples of correction made by the model for each of the three single-span error categories of Table 1. Two examples wherein the corrector introduces an error into a previously correct output from the reader model are shown in Table 8. + +
| Error Class | Question | Passage | Prediction |
| --- | --- | --- | --- |
| Prediction ⊂ GT | who won the king of dance season 2 | ... Title Winner : LAAB Crew From Team Sherif , 1st Runner-up : ADS kids From Team Sherif , 2nd Runner-up : Bipin and Princy From Team Jeffery ... | R: LAAB Crew / R+C: LAAB Crew From Team Sherif |
| GT ⊂ Prediction | unsaturated fats are comprised of lipids that contain | ... An unsaturated fat is a fat or fatty acid in which there is at least one double bond within the fatty acid chain. A fatty acid chain is monounsaturated if it contains one double ... | R: An unsaturated fat is a fat or fatty acid in which there is at least one double bond / R+C: at least one double bond |
| Prediction ∩ GT ≠ ∅ | what is most likely cause of algal blooms | ... colloquially as red tides. Freshwater algal blooms are the result of an excess of nutrients, particularly some phosphates. The excess of nutrients may originate from fertilizers that are applied to land for agricultural or recreational ... | R: Freshwater algal blooms are the result of an excess of nutrients / R+C: an excess of nutrients, particularly some phosphates |
+ +Table 7: Some examples for different error classes in the Natural Questions dev set wherein the answer corrector corrects a previously incorrect reader output. Ground truth answer is marked in bold in the passage. R and C refer to reader and corrector, respectively. + +
| Question | Passage | Prediction |
| --- | --- | --- |
| where are the cones in the eye located | ... Cone cells, or cones, are one of three types of photoreceptor cells in the retina of mammalian eyes (e.g. the human eye). They are responsible for color vision and function best in ... | R: in the retina / R+C: retina |
| who sang the theme song to step by step | ... Jesse Frederick James Conaway (born 1948), known professionally as Jesse Frederick, is an American film and television composer and singer best known for writing ... | R: Jesse Frederick James Conaway / R+C: Jesse Frederick James Conaway (born 1948), known professionally as Jesse Frederick |
Table 8: Examples from the Natural Questions dev set wherein the answer corrector introduces an error in a previously correct reader output. The ground truth answer is marked in bold in each passage. R and C refer to reader and corrector, respectively.

Table 9 shows the percentage of errors corrected in each error class. Corrections were made in all three categories, but more in $GT \subset Prediction$ and $Prediction \cap GT \neq \emptyset$ than in $Prediction \subset GT$ , indicating that the corrector learns the concepts of minimality and syntactic structure better than that of adequacy. We note that most existing MRC systems that only output a single contiguous span are not equipped to handle multi-span discontinuous GT.

| Error class | Total | Corrected |
| --- | --- | --- |
| GT ⊂ Prediction | 165 (28%) | 62 (38%) |
| Prediction ⊂ GT | 191 (33%) | 18 (9%) |
| Prediction ∩ GT ≠ ∅ | 37 (6%) | 8 (22%) |
| Multi-span GT | 194 (34%) | - |

Table 9: Correction statistics for different error categories in 587 partial match (EM=0, F1>0) reader predictions.

# 6 Conclusion

We describe a novel method for answer span correction in machine reading comprehension. The proposed method operates by marking an original, possibly incorrect, answer prediction in context and then making a new prediction using a corrector model. We show that this method corrects the predictions of a state-of-the-art English-language reader in different error categories. In our experiments, the approach also generalizes well to multilingual and cross-lingual MRC in seven languages. Future work will explore joint answer span correction and validation of the answerability of the question, re-using the original reader's output representations in the correction model and architectural changes enabling parameter sharing between the reader and the corrector.

# Acknowledgments

We thank Tom Kwiatkowski for his help with debugging while submitting to the Google Natural Questions leaderboard. We would also like to thank the multilingual NLP team at IBM Research AI and the anonymous reviewers for their helpful suggestions and feedback.

# References

Chris Alberti, Kenton Lee, and Michael Collins. 2019. A BERT Baseline for the Natural Questions. arXiv preprint arXiv:1901.08634.
Seohyun Back, Sai Chetan Chinthakindi, Akhil Kedia, Haejun Lee, and Jaegul Choo. 2020. NeurQuRI: Neural question requirement inspector for answerability prediction in machine reading comprehension. In International Conference on Learning Representations.
Pradeep Dasigi, Nelson F Liu, Ana Marasovic, Noah A Smith, and Matt Gardner. 2019. Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5927-5934.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Ronald Aylmer Fisher. 1936. Design of experiments. Br Med J, 1(3923):554-554.
Minghao Hu, Furu Wei, Yu xing Peng, Zhen Xian Huang, Nan Yang, and Ming Zhou. 2019.
Read + Verify: Machine Reading Comprehension with Unanswerable Questions. In AAAI. +Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UnifiedQA: Crossing Format Boundaries With a Single QA System. arXiv preprint arXiv:2005.00700. +Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466. +Patrick Lewis, Barlas Oğuz, Rudy Rinott, Sebastian Riedel, and Holger Schwenk. 2019. MLQA: Evaluating Cross-lingual Extractive Question Answering. arXiv preprint arXiv:1910.07475. +Dayiheng Liu, Yeyun Gong, Jie Fu, Yu Yan, Jiusheng Chen, Daxin Jiang, Jiancheng Lv, and Nan Duan. 2020. RikiNet: Reading Wikipedia Pages for Natural Question Answering. arXiv preprint arXiv:2004.14560. + +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692. +Lin Pan, Rishav Chakravarti, Anthony Ferritto, Michael Glass, Alfio Gliozzo, Salim Roukos, Radu Florian, and Avirup Sil. 2019. Frustratingly Easy Natural Question Answering. arXiv preprint arXiv:1909.05286. +Anselmo Peñas, Álvaro Rodrigo, and Felisa Verdejo. 2007. Overview of the answer validation exercise 2007. In Workshop of the Cross-Language Evaluation Forum for European Languages, pages 237-248. Springer. +Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know What You Don't Know: Unanswerable Questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784-789, Melbourne, Australia. Association for Computational Linguistics. 
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art Natural Language Processing. ArXiv, abs/1910.03771.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369-2380.
Zhuosheng Zhang, Junjie Yang, and Hai Zhao. 2020. Retrospective reader for machine reading comprehension. arXiv preprint arXiv:2001.09694.
\ No newline at end of file diff --git a/answerspancorrectioninmachinereadingcomprehension/images.zip b/answerspancorrectioninmachinereadingcomprehension/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e3577b02ff8a81f67c4d9b92864ac72ed8676392 --- /dev/null +++ b/answerspancorrectioninmachinereadingcomprehension/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:39ff7ce066c8fe02165af3474f4589561310e618cc1a955886d4bc83ed8f957c +size 314274 diff --git a/answerspancorrectioninmachinereadingcomprehension/layout.json b/answerspancorrectioninmachinereadingcomprehension/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..6f65cf872d39a17133814ffda9bb54f19e6a5bf1 --- /dev/null +++ b/answerspancorrectioninmachinereadingcomprehension/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f7547aacc02e0a4460efbfc586db6ae4ebdb313dff3f9fc748804e4df950cdb +size 186433 diff --git a/approximationofresponseknowledgeretrievalinknowledgegroundeddialoguegeneration/ced4b63c-ac56-418d-8e14-a99ee14744c3_content_list.json b/approximationofresponseknowledgeretrievalinknowledgegroundeddialoguegeneration/ced4b63c-ac56-418d-8e14-a99ee14744c3_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..62891777ffac955bcee4affe63c8656fb1bf0d2a --- /dev/null +++ b/approximationofresponseknowledgeretrievalinknowledgegroundeddialoguegeneration/ced4b63c-ac56-418d-8e14-a99ee14744c3_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:85c995ccf5eccdae34ddad217842a0b75ad2c2ab19e3093505ba1cc7d14f5d67 +size 83527 diff --git a/approximationofresponseknowledgeretrievalinknowledgegroundeddialoguegeneration/ced4b63c-ac56-418d-8e14-a99ee14744c3_model.json b/approximationofresponseknowledgeretrievalinknowledgegroundeddialoguegeneration/ced4b63c-ac56-418d-8e14-a99ee14744c3_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..e6074dee4d1e7651f8415897f37dc9d88a28df1a --- /dev/null +++ b/approximationofresponseknowledgeretrievalinknowledgegroundeddialoguegeneration/ced4b63c-ac56-418d-8e14-a99ee14744c3_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3a4603438494b3c352ec312c33d1009a5f3d33ecd5b87d6e1ab3419ad156474 +size 97322 diff --git a/approximationofresponseknowledgeretrievalinknowledgegroundeddialoguegeneration/ced4b63c-ac56-418d-8e14-a99ee14744c3_origin.pdf b/approximationofresponseknowledgeretrievalinknowledgegroundeddialoguegeneration/ced4b63c-ac56-418d-8e14-a99ee14744c3_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0ea35c79beb846d3013fb0460cbcc7532b7649e6 --- /dev/null +++ b/approximationofresponseknowledgeretrievalinknowledgegroundeddialoguegeneration/ced4b63c-ac56-418d-8e14-a99ee14744c3_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:baa828d70d17e284231e00b9cd0d58fb8560bfdadec1691bafd82c5a2003d06b +size 553516 diff --git a/approximationofresponseknowledgeretrievalinknowledgegroundeddialoguegeneration/full.md b/approximationofresponseknowledgeretrievalinknowledgegroundeddialoguegeneration/full.md new file mode 100644 index 0000000000000000000000000000000000000000..22a956ebb8e94a73c37981415559202e4324dd67 --- /dev/null +++ b/approximationofresponseknowledgeretrievalinknowledgegroundeddialoguegeneration/full.md @@ -0,0 +1,323 @@ +# Approximation of Response Knowledge Retrieval in Knowledge-grounded Dialogue Generation + +Wen Zheng + +University of Nottingham + +wen.zheng@nottingham.ac.uk + +Natasa Milic-Frayling + +University of Nottingham + +natasa-milic@frayling.net + +# Ke Zhou + +University of Nottingham & Nokia Bell Labs + +Ke.Zhou@nottingham.ac.uk + +# Abstract + +This paper is concerned with improving dialogue generation models through injection of knowledge, e.g., content relevant to the post that can increase the quality of 
responses. Past research extends the training of the generative models by incorporating statistical properties of posts, responses and related knowledge, without explicitly assessing the knowledge quality. In our work, we demonstrate the importance of knowledge relevance and adopt a two-phase approach. We first apply a novel method, Transformer & Post based Posterior Approximation (TPPA), to select knowledge, and then use the Transformer with Expanded Decoder (TED) model to generate responses from both the post and the knowledge. The TPPA method processes posts, post related knowledge, and response related knowledge at both the word and sentence level. Our experiments with the TED generative model demonstrate the effectiveness of TPPA as it outperforms a set of strong baseline models. Our TPPA method is extendable and supports further optimization of knowledge retrieval and injection.

# 1 Introduction

In recent years, there have been concerted efforts to model dialogue interactions and generate an appropriate response to an initial user statement, referred to as a post. Research has led to generative models, e.g., Sequence-to-Sequence (Sutskever et al., 2014) and Transformer (Vaswani et al., 2017), that produce reasonable responses using solely the original post during the generation process.

Recent studies (Weston et al., 2018; Ghazvininejad et al., 2018; Zheng and Zhou, 2019) explored more realistic dialogue models that include knowledge related to the posts, typically a collection of sentences that refer to the topics in the posts and responses. Consequently, the response generation

Wiz Post: Yep. you've got to select for safety standards, of course, but when you're designing at a Mercedes level the folks buying those cars are going to expect a certain standard of comfort, too!

Wiz Response: Especially, I think consumers expect great in Formula One, highest class auto racing.
TPPA (top 1): Formula One (also Formula 1 or F1 and officially the FIA Formula One World Championship) is the highest class of single seat auto racing that is sanctioned by the Fédération Internationale de l'Automobile (FIA).

TPPA (top 2): Stock car racing is a form of automobile racing found mainly and most prominently in the United States and Canada, with Australia, New Zealand and Brazil also having forms of stock car auto racing.

PRK (top 1): Mercedes is part of the McQueen family and is the longest serving McQueen on the series.

PRK (top 2): He also won races in midget cars, and sprint cars.

RRK (top 1): Formula One (also Formula 1 or F1 and officially the FIA Formula One World Championship) is the highest class of single seat auto racing that is sanctioned by the Fédération Internationale de l'Automobile (FIA).

RRK (top 2): The FIA Formula One World Championship has been one of the premier forms of racing around the world since its inaugural season in 1950.

Table 1: Example of a post and a response from the Wizard of Wikipedia (Wiz) data set (§5.1) with the top 2 ranked outputs from TPPA, the post-retrieved knowledge PRK and the response-retrieved knowledge RRK. Blue indicates words present in the Wiz response and RRK but not in PRK.

process involves an information retrieval component that needs to be optimized for the selection and injection of relevant knowledge into the generative model.

Evaluation of such approaches has shown that the knowledge based on posts alone may lack focus, i.e., may exhibit topic drifts and thus introduce noise. Table 1 illustrates Post-Retrieved Knowledge (PRK) that has a good overlap with the post but introduces content that is not present in the response and is thus deemed non-relevant. By contrast, the Response-Retrieved Knowledge (RRK) shares content with the response, illustrating that dialogue training needs to incorporate relevant knowledge related to the response.
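The PRK/RRK distinction in Table 1 can be illustrated with a toy retrieval sketch. This is not the paper's code: the paper uses BM25 for ranking, which is approximated here by simple non-stopword token overlap, and the example sentences are abbreviated versions of those in Table 1.

```python
# Toy sketch: rank the knowledge set K_p once with the post as query (PRK)
# and once with the response as query (RRK). Token overlap stands in for BM25.
STOPWORDS = {"the", "a", "an", "of", "is", "in", "to", "and"}

def tokens(text):
    """Lower-cased non-stopword token set of a sentence."""
    return {w for w in text.lower().split() if w not in STOPWORDS}

def rank_knowledge(query, knowledge_set):
    """Knowledge sentences sorted by token overlap with the query."""
    q = tokens(query)
    return sorted(knowledge_set, key=lambda k: len(q & tokens(k)), reverse=True)

post = "designing safety standards for a Mercedes level car"
response = "consumers expect great comfort in Formula One racing"
K_p = [
    "Formula One is the highest class of single seat auto racing",
    "Mercedes is part of the McQueen family",
    "Stock car racing is found mainly in the United States",
]

prk = rank_knowledge(post, K_p)      # Post-Retrieved Knowledge (PRK)
rrk = rank_knowledge(response, K_p)  # Response-Retrieved Knowledge (RRK)
print(prk[0])  # the Mercedes sentence: overlaps the post, not the response
print(rrk[0])  # the Formula One sentence: overlaps the actual response
```

As in Table 1, the post-driven ranking surfaces topically drifting content while the response-driven ranking recovers the knowledge the responder actually used.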
In practice, however, the key challenge is to implement an effective selection of response related knowledge, considering that the responses to posts are not observed during dialogue generation. In this paper, we present the Transformer & Post based Posterior Approximation (TPPA) method, which achieves this by applying multi-stage processing of posts, post related knowledge and response related knowledge to capture word and sentence level characteristics (through word embeddings, a Transformer and max-pooling) that can be useful for ranking and selecting knowledge for new posts during the test phase.

Table 1 illustrates the high overlap that TPPA outputs achieve with true responses (for a post-response pair from the Wizard of Wikipedia (Wiz) data collection (Dinan et al., 2019)). Furthermore, we empirically demonstrate the effectiveness of TPPA by injecting TPPA selected knowledge into generative models, in particular the Transformer with Expanded Decoder (TED), which allows integrating knowledge from multiple sources (Zheng and Zhou, 2019). The combination of TED and TPPA outperforms a set of strong baseline systems, including systems that do not separate knowledge selection from modelling response generation: Post-KS (Lian et al., 2019) and SKT (Sequential Latent-knowledge Selection) (Kim et al., 2020).

The most important contributions of our work are:

1. Empirical evidence that generative models injected with response-retrieved knowledge outperform those that use only post-retrieved knowledge (§3).
2. A new method for knowledge selection (TPPA) that includes Transformer-based representations of posts and post related knowledge to select relevant knowledge processed with word embedding and max-pooling (§4).
3. Experimental results that demonstrate the benefit of TPPA knowledge injection into the TED generative model (Zheng and Zhou, 2019), outperforming state-of-the-art models on two publicly available data sets (§5, §6).
In addition, the separation of the knowledge selection from the generative models offers maximum flexibility for integrating and exploring alternative retrieval models and knowledge representations. We make our code publicly available at https://github.com/tonywenuon/emnlp2020_tppa.

# 2 Related Work

In this section, we first discuss retrieval models and then knowledge injection into generative models.

Retrieval Models. Most traditional retrieval models, such as BM25 (Robertson et al., 2004), are unsupervised methods, relying on lexical matching between query terms and document text using different weighting and normalization schemes. In contrast, recent studies use neural ranking models, such as deep structured semantic models (DSSM) (Huang et al., 2013; Shen et al., 2014), weakly supervised neural ranking models (Dehghani et al., 2017) and jointly trained neural models (Yan et al., 2016; Mitra et al., 2017). They are built to respond to information needs represented by a query. We illustrate our approach by adopting BM25 for the initial retrieval of relevant knowledge. We also use the post related results to create an extended representation of the post, similar to pseudo-relevance feedback in query-based search (Cao et al., 2008).

Generative Models & Knowledge Injection. Injection of knowledge into generative models has been pursued to improve the quality of responses, considering that during dialogue generation only a post and related knowledge are observed. Ghazvininejad et al. (2018) encode and merge knowledge with a post representation, creating a final vector representation that is input into the decoder. Tam (2020) extends this method with a copy-mechanism that enables the model to generate response words either from the post or from the generative model.
Zheng and Zhou (2019) use the Transformer with Expanded Decoder (TED) to incorporate words from multiple sources by assigning weights to knowledge sources based on the relevance between the knowledge and the decoding words, and taking the weighted-sum vector to generate responses.

Closest to our work is the Post-KS model proposed by Lian et al. (2019), which includes a knowledge manager that fits the prior word distribution (from posts) to the posterior word distribution (with both post and response observed). By applying the Gumbel-Softmax method, they select the best knowledge for the dialogue generation. Similarly, the sequential latent-knowledge selection (SKT) proposed by Kim et al. (2020) jointly trains the knowledge selection and the dialogue generation model. Both methods consider knowledge relevance to posts and responses during training but do not leverage post-retrieved knowledge during testing.

Our proposed Transformer & Post based Posterior Approximation (TPPA) model distinguishes itself by explicitly incorporating response related knowledge into training and by applying a pseudo-relevance-feedback approach, training an auto-pointer vector to identify the potentially most relevant knowledge. Combined with the TED generative model, TPPA leads to responses that outperform state-of-the-art methods (§6).

# 3 Problem Statement and Motivation

# 3.1 Key Notations and Research Objectives

Dialogue generation models that incorporate knowledge aim to expand the input beyond the observable post and incorporate a responder's knowledge. It is assumed that the available knowledge $K_{p}$ for a given post $p$ includes content that is related to the response, although the quality of that knowledge is not certain. The key issue is, thus, to determine which of the knowledge statements $k \in K_{p}$ are relevant to the unobserved response $r$.
During the training phase, where the post $p$, response $r$ and $K_{p}$ are all available, we use $p$ and $r$ as queries to rank all the statements in $K_{p}$ and create the corresponding ranked lists: Response-retrieved Knowledge RRK and Post-retrieved Knowledge PRK, respectively. We use lower-case $rrk_{1}$ and $prk_{1}$ to indicate the top 1 ranked item in RRK and PRK, respectively.

# 3.2 RRK Assessment on Wiz Training Data

In this section, we analyze RRK for the Wiz training data (§5.1), where both the posts $p$ and responses $r$ are known, as well as the corresponding knowledge sets $K_{p}$. Assuming that we deploy a reasonable search algorithm, we expect that $rrk_{1}$ will have a high overlap with the response $r$ that is used as a query. We also assume that generative models will be able to use $rrk_{1}$ to generate a good quality response, considering its overlap with the true response. The objective of this section is to gain insights on what difference RRK can make compared to the use of PRK alone.

Word count. We compare the number of common words (after removing stop words) between the original response $r$ and four sequences: (1) the post $p$, (2) $prk_{1}$, i.e., the top 1 ranked item in PRK, (3) $rrk_{1}$, i.e., the top 1 ranked item in RRK, and (4) a random post chosen from the data set. The distributions of word overlaps are shown in Figure 1. The $x$-axis indicates the count of common words and the $y$-axis shows the percentage of post-response samples with the given word overlap.

![](images/44d89bd074c7f94193de787f37e2cf032ef36b2ca370de7b6890e871e2450ea5.jpg)
Figure 1: Common words count distribution between each source and the target response on the Wiz training set. The dashed lines are the average count of common words of each group (after removing stop words).

As expected, the word overlaps of $p$ and $prk_{1}$ with $r$ are similar, with the overlap of $p$ and $r$ being lower.
For the randomly selected post $p$, the average term overlap with $r$ is slightly lower than, but close to, that of the post $p$, suggesting that posts alone are not very informative for the response generation. The difference between $prk_{1}$ and $rrk_{1}$ is quite marked, showing that $rrk_{1}$ has on average almost twice the overlap of $prk_{1}$ (a 98% increase). Based on the Kolmogorov-Smirnov test, all the differences among the four groups in Figure 1 are statistically significant. For the Holl-E data set (another data set we use in §5.1), a similar trend is observed.

Response generation. We assess the effectiveness of RRK when injected into the generative model by conducting experiments with the standard Transformer (Vaswani et al., 2017) and the Transformer with Expanded Decoder (TED) (Zheng and Zhou, 2019). Transformer takes only a post, while TED uses a post and multiple sources of knowledge to generate the responses.

Table 2a shows the results for Transformer with (1) the original post, (2) a randomly selected sentence, (3) $prk_{1}$, (4) $rrk_{1}$ and (5) human selected knowledge, i.e., a sentence provided in Wiz. The results in Table 2 (metrics BLEU, METEOR and Div-2, §5) show that replacing the original post with a randomly selected sentence reduces the performance significantly. Using $prk_{1}$ leads to lower performance, indicating a possible topic drift and noise. Using $rrk_{1}$ shows a promising performance improvement; with higher retrieval performance, it may approach the effectiveness of the human selected knowledge. Similarly, for the TED generative model, we incorporate the post content and evaluate the cumulative effect of adding knowledge from different sources. As expected, the best performance is achieved by the human selection of knowledge, followed by RRK (Table 2b).

In conclusion, it is worthwhile putting an ef
| (a) Transformer | BLEU-4 | METEOR | Div-2 |
| --- | --- | --- | --- |
| Original Post | 1.76 | 6.6 | 7.3 |
| Random Post | 0.39 | 4.47 | 0.19 |
| $prk_1$ | 1.23 | 6.36 | 5.62 |
| $rrk_1$ | 2.85 | 7.99 | 12.88 |
| Human selection | 4.6 | 9.97 | 18.86 |

| (b) TED | BLEU-4 | METEOR | Div-2 |
| --- | --- | --- | --- |
| Post + 1 Random sentence | 2.8 | 7.13 | 18.73 |
| Post + $prk_1$ | 3.35 | 8.45 | 16.2 |
| Post + $rrk_1$ | 8.14 | 11.36 | 24.63 |
| Post + Human selection | 10.06 | 13.13 | 25.7 |
Table 2: Injection of various sources into the Transformer and TED using the Wiz data set. All values are percentages (%).

fort to create resources that represent a responder's knowledge and effective retrieval methods to retrieve knowledge relevant to the response content. Since the response is not available, we devise TPPA to leverage the post $p$ and the post-retrieved knowledge PRK and train models to approximate RRK.

# 4 TPPA Method

In this section, we describe the architecture and the process of selecting knowledge using the TPPA method. Figure 2 depicts three TPPA components:

(1) Post Processing Unit, comprising a word embedding and a Transformer, that incorporates the post $p$ and a set of $n$ retrieved $prk_{i}$, where $n$ is determined empirically (typically $n = 10$ out of 50 knowledge items in $K_{p}$, on average). The results are a Transformer representation $v_{p}$ for the post and $v_{PRK}$ for all of the $prks$. In the end, a single $v_{prk}$ (representing the potentially most useful $prk$ for identifying the $rrk_{1}$) is selected based on the auto-pointer and Gumbel-Softmax algorithms.

(2) Response Processing Unit that, during training, considers each response $r$ and the corresponding $K_{p}$ to get $rrk_{1}$ and a set of negs (i.e., $m$ negative samples, which are knowledge items non-relevant to the $rrk_{1}$) in order to train a word embedding that forms the knowledge representation (denoted $v_{k}$). The number of negative examples $m$ is selected empirically, to avoid overfitting.

(3) Knowledge Selection Unit, a search component that uses $v_{p}$ and $v_{prk}$ as queries to score the knowledge representations $v_{k}$. The score is a weighted sum of similarity metrics using a hyperparameter $\alpha$ that can be chosen to emphasize the similarity with $p$ or $prk$.
TPPA operation consists of Phase 1, a training phase that utilizes training data $(p, r, K_p)$ to train all three components of the system based on known responses $r$; and Phase 2, a test phase during which individual post-knowledge samples $(p, K_p)$ are processed in order to arrive at a selection of knowledge $(k \in K_p)$ to be injected into the generative models.

# 4.1 TPPA Training Phase

# 4.1.1 Post and PRK Processing

The post $p$ and a set of $prk_{i}$, $i = 1, \dots, n$ (where $i$ is the rank of the post-related knowledge) are processed with the same Transformer encoder to obtain word representations, which are then passed through max-pooling to obtain the sequence semantic vector:

$$
e(p) = \operatorname{Transformer}_{\Theta}\left(e(w_{i})\right), \quad 1 \leq i \leq L \tag{1}
$$

$$
v_{p} = \operatorname{maxpool}(e(p)) \tag{2}
$$

where $\Theta$ is the trainable parameter set inside the Transformer, $p$ is the input post, $w_{i}$ is the $i$-th word of the post sequence $p$, and $L$ is the maximum post length. $e(w_{i}) \in \mathbb{R}^{d}$ is the post word embedding for $w_{i}$, and $d$ is the embedding dimension. $e(p)$ represents the semantic representation of all the words in the post, while $v_{p}$ is the (sentence-level) post representation. Each $prk_{i}$ follows exactly the same process through Equations 1 and 2.

We consider multiple knowledge items $prk_{i}$ in order to construct an effective query for knowledge selection that complements the post and increases the chances of selecting knowledge that is relevant to the response. We train an auto-pointer to assign scores to each $prk_{i}$. The auto-pointer module takes $v_{PRK}$ as input and outputs a PRK score vector $v_{ap}$ that indicates the importance degree of the $prks$.
This is followed by a Gumbel-Softmax (Jang et al., 2016) module to select the best $prk$ for knowledge retrieval:

$$
v_{ap} = \left(v_{PRK} W^{T} + b\right) W_{\text{auto-pointer}}^{T} \tag{3}
$$

$$
v_{prk} = \text{Gumbel-Softmax}\left(v_{ap}, v_{PRK}\right) \tag{4}
$$

where $v_{PRK} \in \mathbb{R}^{n \times d}$ represents all $prk_i$ representations obtained by Eqs. 1 and 2, and $v_{prk}$ is the representation of the finally chosen post-related knowledge. $W \in \mathbb{R}^{d \times d}$ and $b \in \mathbb{R}^d$ are trainable parameters; $W_{\text{auto-pointer}} \in \mathbb{R}^{1 \times d}$ is the trainable auto-pointer for selecting a useful $prk$.

# 4.1.2 Response Processing Unit

The knowledge representation $v_{k}$ is obtained by going through raw knowledge word embedding1
![](images/7fff2b115ef6442d666e89310d761eefc21455d2597330c04cdc95c5612353a6.jpg)
Figure 2: TPPA Architecture comprises (1) Post Processing Unit, (2) Response Processing Unit (right) and (3) Knowledge Selection Unit (middle).

and a max-pooling operation (see the Response Processing Unit in Figure 2). Obtaining $v_{k}$ follows Eqs. 1 and 2, but with the Transformer replaced by a raw knowledge word-embedding lookup operation.

Since the objective is to augment the vocabulary and avoid noise, during training we constrain the positive knowledge to the most relevant knowledge item, i.e., the $rrk_{1}$ obtained with BM25. We also randomly select knowledge as negative samples (from the union of all $K_{p}$ after the $rrk_{1}s$ of the posts are removed). Both the positive and the negative samples pass through the Response Processing Unit to obtain their representations.
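The processing of §4.1.1-4.1.2 (Eqs. 1-4 plus the embedding lookup and max-pooling that yield $v_k$) can be sketched with numpy. All weights, shapes and inputs below are random illustrative stand-ins, not the authors' implementation, and the Gumbel-Softmax selection is approximated by a hard argmax:

```python
import numpy as np

rng = np.random.default_rng(0)
d, L, n = 8, 5, 3        # embedding dim, max sequence length, number of prks

def maxpool(word_vecs):
    # Eq. 2: sentence-level vector as an element-wise max over word vectors
    return word_vecs.max(axis=0)

# Stand-ins for the Transformer outputs e(p) and e(prk_i) of Eq. 1
e_post = rng.normal(size=(L, d))
e_prks = rng.normal(size=(n, L, d))

v_p = maxpool(e_post)                            # post representation v_p
v_PRK = np.stack([maxpool(e) for e in e_prks])   # (n, d) prk representations

# Eq. 3: auto-pointer scores each prk; W, b, W_ap are trainable in the paper
W = rng.normal(size=(d, d))
b = rng.normal(size=d)
W_ap = rng.normal(size=(1, d))                   # W_auto-pointer
v_ap = (v_PRK @ W.T + b) @ W_ap.T                # (n, 1) importance scores

# Eq. 4: Gumbel-Softmax selection, approximated here by a hard argmax
v_prk = v_PRK[int(v_ap.argmax())]                # chosen prk representation

# Response Processing Unit: v_k via raw embedding lookup + max-pooling
e_knowledge = rng.normal(size=(L, d))            # word embeddings of one k
v_k = maxpool(e_knowledge)
print(v_p.shape, v_prk.shape, v_k.shape)         # all d-dimensional vectors
```

The point of the sketch is the shape flow: word-level matrices are collapsed to fixed-size sentence vectors, so posts, prks and knowledge items all become comparable $d$-dimensional representations.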
# 4.1.3 Knowledge Scoring and Selection

Given the post representations $v_{p}$ and $v_{prk}$ and the knowledge representation $v_{k}$, we compute the similarities $S(p, k)$ and $S(prk, k)$:

$$
S(p, k) = \frac{v_{p} \cdot v_{k}}{\|v_{p}\| \cdot \|v_{k}\|}; \quad S(prk, k) = \frac{v_{prk} \cdot v_{k}}{\|v_{prk}\| \cdot \|v_{k}\|} \tag{5}
$$

where $S(\cdot)$ designates the cosine similarity function; $v_{p}$, $v_{k}$ and $v_{prk}$ refer to the representations of the post, the knowledge and the selected $prk$, respectively.

Depending on the type of dialogue, the response may incorporate the content of the post to a different degree. Thus, to support flexible scoring with regard to $p$ and $prk$, we introduce a hyperparameter $\alpha$ in the final scoring function:

$$
\operatorname{Score}(p, prk, k) = \alpha \times S(p, k) + (1 - \alpha) \times S(prk, k) \tag{6}
$$

We tune the $\alpha$ parameter on the training set, setting it to 0.7 in the final $\operatorname{Score}(p, prk, k)$ to give more importance to the post.

After we obtain the scores of the positive and negative samples, for all the positive-negative sample pairs, we apply a softmax to the similarity scores:

$$
P\left(k_{i} \mid p, prk\right) = \frac{\exp\left(\lambda \operatorname{Score}(p, prk, k_{i})\right)}{\sum_{j} \exp\left(\lambda \operatorname{Score}(p, prk, k_{j})\right)} \tag{7}
$$

calculating the probability of each $k_{i}$ given the post $p$ and the $prk$. $k_{i} \in \{rrk_{1}; \text{neg}_{1}, \text{neg}_{2}, \dots, \text{neg}_{m}\}$ are shown in the Response Processing Unit in Figure 2, where $\text{neg}_1, \dots, \text{neg}_m$ are the $m$ negative samples. $\lambda$ is a smoothing factor of the softmax function and is a trainable parameter (Huang et al., 2013). We maximise the difference between the scores of the positive sample and the negative samples.
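The scoring of Eqs. 5-7 can be sketched numerically with toy vectors (illustrative dimensions, random representations, and a fixed λ instead of the trained one); the last lines mirror the contrastive loss of Eq. 8 below:

```python
import numpy as np

def cosine(a, b):
    # Eq. 5: cosine similarity between two representation vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def score(v_p, v_prk, v_k, alpha=0.7):
    # Eq. 6: alpha-weighted combination; alpha=0.7 as tuned in the paper
    return alpha * cosine(v_p, v_k) + (1 - alpha) * cosine(v_prk, v_k)

rng = np.random.default_rng(1)
d, m, lam = 8, 4, 2.0      # dim, number of negatives, smoothing factor lambda
v_p, v_prk = rng.normal(size=d), rng.normal(size=d)
candidates = rng.normal(size=(1 + m, d))   # index 0: rrk_1; 1..m: negatives

s = np.array([lam * score(v_p, v_prk, k) for k in candidates])
probs = np.exp(s) / np.exp(s).sum()        # Eq. 7: softmax over candidates

# Loss sketch: reward the positive probability, penalize the negatives
loss = -np.log(probs[0]) + np.log(probs[1:]).sum()
print(probs.shape)
```

Minimizing this quantity pushes probability mass toward the single positive $rrk_1$ and away from the $m$ negatives.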
$$
Loss = \sum \left(- \log P(rrk_{1} \mid p) + \sum_{j} \log P(neg_{j} \mid p)\right) \tag{8}
$$

where $P(rrk_{1}|p)$ is the positive score and $P(neg_{j}|p)$ stands for the $j$-th negative score, with $1 \leq j \leq m$, where $m$ is the number of negative samples. During training, all of the trainable parameters, including the post word embedding, the Transformer architecture, the auto-pointer and the knowledge word embedding, are updated by mini-batch gradient descent (the setup is in §5.2).

# 4.2 TPPA Test Phase

During the test phase, each new post $p$ and the corresponding $K_{p}$ are processed using the Post Processing Unit and the Response Processing Unit, with the parameters obtained during the training phase. Each knowledge item $k_{i}$ and its corresponding post are scored using $\operatorname{Score}(p, prk, k_{i})$ (Eq. 6) and TPPA returns the final ranking of the knowledge candidates.

# 5 Experiments

Our approach for knowledge injection separates the knowledge selection from the response generation models. We thus evaluate TPPA in terms of (1) precision in selecting relevant knowledge for a given post, judged by whether the $rrk_{1}$ can be ranked within the top $n$ positions, and (2) effectiveness of the retrieved knowledge when injected into a response generation model.

# 5.1 Data

We experiment with two publicly available data sets. Wizard of Wikipedia (Wiz) (Dinan et al., 2019) comprises controlled human-to-human dialogue interactions where a participant can assume the role of a teacher or a student and take turns to discuss a topic. A teacher answers a student's post based on pre-retrieved knowledge that is related to the current topic and the dialogue context. The Wiz data set consists of 22,311 dialogues with 201,999 turns. Each post-response pair is assigned the related knowledge, i.e., manually selected relevant sentences or paragraphs from Wikipedia.

Holl-E (Moghe et al., 2018) comprises dialogues between two Amazon MTurk workers about a selected movie, supported by selected sources of background knowledge: movie plots, reviews, comments, and fact tables related to the movie. A response to a post is either copied or suitably modified from the provided grounded knowledge, mixed from the four knowledge sources. The Holl-E data contains 9,071 conversations, covering 921 movies.

# 5.2 Baselines, Setup and Metrics

Baselines In our experiments we compare TPPA knowledge selection on retrieval performance with three baseline models: BM25 (Robertson and Walker, 1994) is an unsupervised probabilistic retrieval algorithm, which is robust for short document (sentence) retrieval. DrQA (Chen et al., 2017) uses bigram hashing and TF-IDF matching with a multi-layer recurrent neural network model. CNN-DSSM (Shen et al., 2014) uses a CNN for semantic matching of queries and documents.

In order to evaluate the effectiveness of the selected knowledge for response generation, we compare TPPA output with three models: WSeq (Tian et al., 2017) uses a weighted sum and concatenation of the post and its contextual utterances, and obtains representations through an RNN. MemNet (Ghazvininejad et al., 2018) leverages a multi-task learning framework to jointly train 'post-to-response', 'knowledge-to-response' and 'knowledge-to-knowledge' tasks for response generation. TED (Zheng and Zhou, 2019) adopts the Transformer as the backbone framework to inject knowledge by assigning weights to the knowledge from multiple sources.

Finally, we consider two methods that jointly train a knowledge selection model and a dialogue generation model, and use them in both sets of experiments: Post-KS (Lian et al., 2019) approximates the posterior distribution of knowledge, i.e., $p(k|p,r)$, using the prior distribution $p(k|p)$, and jointly trains a knowledge selection model and a dialogue generation model.
SKT (Kim et al., 2020) takes into account context from multi-turn dialogues (the current action and 2 prior turns) and considers knowledge selection as a sequential decision process.

Experimental Setup In our experiments, the dimension of the word embeddings is 300, and the number of Transformer heads is 4. The vocabulary is obtained by ranking the training data by word frequency, keeping the 50,000 most frequent terms. The minimum post length is set to 8 tokens. Each knowledge item is represented by a sentence. During model training, we use a mini-batch size of 64. The Adam optimiser is used for optimisation. The initial learning rate is set to 0.001 and halved when reaching a plateau (the decreasing patience is set to 2 epochs). All the experiments are run on a single TITAN V GPU. The TPPA model requires 2 hours to train on the Wiz data set.

Metrics The quality of the generated responses is evaluated using standard metrics: BLEU (Papineni et al., 2002) and Meteor (Banerjee and Lavie, 2005) are based on the co-occurrence of n-grams between the system response and the ground truth, while Bert-Score (BS) (Zhang et al., 2019) calculates token similarity using contextual embeddings. In this work, the BS version we used is roberta-large_L17_idf_version=0.3.3(hug_trans=2.8.0)$^3$. The Diversity score (Div-2) (Li et al., 2015) calculates the proportion of distinct bi-grams out of all the distinct words.

For knowledge selection, we use $P@n$, which calculates the precision at a given rank $n$, measuring whether the ground truth ($rrk_{1}$) exists within the top $n$ retrieved knowledge items.
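The $P@n$ metric can be sketched as follows (toy ranked lists, not real system output): for each test post, check whether the ground-truth $rrk_1$ appears within the top $n$ items of the ranked list, then average over posts.

```python
def precision_at_n(ranked_lists, ground_truths, n):
    """Fraction of posts whose ground-truth item appears in the top n."""
    hits = sum(gt in ranked[:n] for ranked, gt in zip(ranked_lists, ground_truths))
    return hits / len(ground_truths)

# One ranked knowledge list per test post, with its ground-truth rrk_1
ranked = [["k3", "k1", "k7"], ["k2", "k5", "k9"], ["k4", "k8", "k1"]]
truth = ["k1", "k9", "k6"]

print(precision_at_n(ranked, truth, 1))  # 0.0: no ground truth ranked first
print(precision_at_n(ranked, truth, 3))  # ~0.667: 2 of 3 found in the top 3
```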
**Wizard of Wikipedia (%)**

| Model | P@1 | P@5 | P@10 |
| --- | --- | --- | --- |
| BM25 | 4.9† | 18.6† | 31.1† |
| DrQA | 4.1† | 13.6\*† | 21.7\*† |
| CNN-DSSM | 8.2\*† | 31.3\*† | 48.8\*† |
| Post-KS | 6.2\*† | - | - |
| SKT | 9.01\* | - | - |
| TPPA 1rrk-1neg-10prk | 8.9\*† | 33.0\*† | 49.2\*† |
| TPPA 1rrk-4neg-10prk | 10.0\* | 36.5\*† | 54.5\* |
| TPPA 1rrk-10neg-10prk | 9.8\* | 36.4\*† | 54.2\*† |
| TPPA 1rrk-20neg-10prk | 10.1\* | 37.8\* | 55.0\* |
| TPPA 1rrk-30neg-10prk | 10.1\* | 38.0\* | 55.1\* |
| TPPA 1rrk-40neg-10prk | 8.2\*† | 31.3\*† | 48.2\*† |
| TPPA 1rrk-30neg-1prk | 10.2\* | 38.4\* | 55.1\* |
| TPPA 1rrk-30neg-10prk | 10.1\* | 38.0\* | 55.1\* |
| TPPA 1rrk-30neg-20prk | 10.0\* | 37.3\*† | 55.1\* |
| TPPA 1rrk-30neg-30prk | 9.7\* | 35.2\*† | 52.4\*† |

**Holl-E (%)**

| Model | P@1 | P@5 | P@10 |
| --- | --- | --- | --- |
| BM25 | 10.5† | 33.4† | 48.5† |
| DrQA | 13.3\*† | 29.4\*† | 35.4\*† |
| CNN-DSSM | 15.2\*† | 34.9\*† | 50.0† |
| Post-KS | 5.5\*† | - | - |
| SKT | 11.6\*† | - | - |
| TPPA 1rrk-1neg-10prk | 13.6\*† | 37.0\*† | 51.3\*† |
| TPPA 1rrk-4neg-10prk | 15.5\*† | 38.3\*† | 52.7\*† |
| TPPA 1rrk-10neg-10prk | 16.6\* | 40.4\* | 54.5\* |
| TPPA 1rrk-20neg-10prk | 14.8\*† | 36.9\*† | 51.1† |
| TPPA 1rrk-30neg-10prk | 15.7\*† | 39.1\*† | 53.2† |
| TPPA 1rrk-40neg-10prk | 16.2\* | 39.5\* | 53.2 |
| TPPA 1rrk-10neg-1prk | 16.3\* | 39.0\*† | 52.7\*† |
| TPPA 1rrk-10neg-10prk | 16.6\* | 40.4\* | 54.5\* |
| TPPA 1rrk-10neg-20prk | 16.6\* | 39.0\* | 52.9\*† |
| TPPA 1rrk-10neg-30prk | 15.4\*† | 38.6\* | 52.7\*† |
Table 3: Retrieval precision on the Wiz and Holl-E data sets. \* means t-test $p < 0.05$ compared with the baseline BM25; † means $p < 0.05$ compared with the best performing group. Bold indicates the best performing group when changing the number of negative samples. Underline indicates the best group among all methods.

# 6 Experimental Results

Knowledge Selection Evaluation. For the TPPA method, the quality of the selected knowledge is determined by the embedding parameters obtained during the training phase. They are, in turn, related to the knowledge resources used for training (Response Processing Unit) and the quality of the Transformer representation of $p$ and $prk$ (Post Processing Unit), shown in Figure 2. The resources are constructed from the individual knowledge sets $K_{p}$, where $p$ is a post in the training set. Each training sample consists of a post $p$, a $rrk_{1}$ (the top 1 ranked response-retrieved knowledge), $n$ $prks$ (the top $n$ ranked post-retrieved knowledge) and $m$ $negs$ ($m$ randomly chosen sentences). Thus, 1rrk-1neg-10prk indicates that we selected the $rrk_{1}$, 1 negative sample and the top 10 $prks$ for each $p$. In the test experiments, we monitor whether, for a new post $p$ in the test set, different retrieval models rank its corresponding ground truth, i.e., the $rrk_{1}$ for $p$, within the top 1, 5, or 10 ranked items.

Results in Table 3 show that: (1) TPPA provides
Wizard of Wikipedia (%):

| Exp Model | BLEU-4 | METEOR | Div-2 | BS |
| --- | --- | --- | --- | --- |
| MemNet | 1.24 | 6.39 | 2.24 | 81.5 |
| WSeq | 2.13 | 7.17 | 13.29 | 82.86 |
| Post-KS | 1.35 | 5.96 | 22.32 | 81.3 |
| SKT | 3.14 | 7.29 | 27.8 | 83.4 |
| TED | 3.91 | 8.82 | 18.16 | 82.9 |

Holl-E (%):

| Exp Model | BLEU-4 | METEOR | Div-2 | BS |
| --- | --- | --- | --- | --- |
| MemNet | 5.59 | 7.63 | 0.18 | 84.6 |
| WSeq | 5.9 | 7.94 | 3.63 | 83.71 |
| Post-KS | 3.79 | 5.98 | 2.41 | 81.3 |
| SKT | 9.16 | 8.48 | 22.9 | 82.9 |
| TED | 12.66 | 10.37 | 17.95 | 84.1 |
Table 4: Performance of generative models MemNet, WSeq and TED with the best TPPA knowledge selection. Post-KS and SKT rely on their jointly trained models. BS refers to Bert-Score.

at least one model that outperforms all other models on the Wiz and Holl-E data sets, on all three metrics $\mathrm{P@1}$, $\mathrm{P@5}$, and $\mathrm{P@10}$. (2) The composition of the knowledge base affects TPPA knowledge selection: for the Wiz data set with a fixed number of $10prk$, increasing the number of neg items improves the performance until it reaches a plateau at $1rrk-30neg-10prk$; for the Holl-E data set, the best combination is $1rrk-10neg-10prk$. (3) For a fixed number of negs, we vary the number of prk items and find that: (i) for Wiz and $n=30$, the optimal prk number is 1; and (ii) for Holl-E and neg=10, the optimal prk number is 10.

Based on these findings, we use 1rrk-30neg-1prk for Wiz and 1rrk-10neg-10prk for Holl-E as the sets from which TPPA selects knowledge for use with the MemNet, WSeq and TED models on response generation.

Response Generation Evaluation. We conduct an initial set of experiments to assess the robustness of the generative models (Table 4) and find that: (i) the SKT and TED models outperform the others; (ii) MemNet has unstable performance and consistently underperforms on Div-2. Furthermore, since SKT and Post-KS cannot inject multiple knowledge items, we choose WSeq and TED for further discussion. We combine them with knowledge selection from (i) BM25, (ii) SKT (single knowledge item), (iii) CNN-DSSM (supervised search algorithm on the post only), (iv) TPPA, using both the post and post-retrieved knowledge items, and (v) $rrk_{i}$ (where $i$ denotes the top $i$ ranked response-retrieved knowledge items; $i$ is set to 1, 5 and 10 in our setting), which determines the upper bound when responses are known. The comparisons for the two data sets are shown in Table 5 and Table 6.

We observe that: (1) Injecting knowledge from
| TED + Top 1 | BLEU-4 | METEOR | Div-2 | BS |
| --- | --- | --- | --- | --- |
| BM25 | 3.35 | 8.45 | 16.2 | 82.7 |
| SKT | 4.05* | 8.82* | 18.8* | 82.8* |
| CNN-DSSM | 3.5 | 8.62 | 20.08* | 82.8 |
| TPPA | 3.91* | 8.82* | 18.16 | 82.9 |
| rrk1 | 8.14* | 11.36* | 24.63* | 84.3* |

| TED + Top 5 | BLEU-4 | METEOR | Div-2 | BS |
| --- | --- | --- | --- | --- |
| BM25 | 3.17 | 7.81 | 18.33 | 82.99 |
| CNN-DSSM | 3.81 | 8.82 | 16.98 | 83.16 |
| TPPA | 3.88* | 8.97* | 17.22* | 83.23 |
| rrk5 | 4.99* | 10.49* | 19.04* | 83.7* |

| TED + Top 10 | BLEU-4 | METEOR | Div-2 | BS |
| --- | --- | --- | --- | --- |
| BM25 | 3.01 | 7.98 | 15.7 | 83.2 |
| CNN-DSSM | 3.59* | 8.98* | 14.8* | 83.38 |
| TPPA | 3.53* | 9.09* | 14.66* | 83.4* |
| rrk10 | 4.05* | 9.56* | 15.87* | 83.6* |

| WSeq + Top 1 | BLEU-4 | METEOR | Div-2 | BS |
| --- | --- | --- | --- | --- |
| BM25 | 1.94 | 6.98 | 12.96 | 82.76 |
| SKT | 2.0 | 7.02 | 13.73 | 82.8 |
| CNN-DSSM | 2.04 | 7.07 | 13.25 | 82.81 |
| TPPA | 2.13 | 7.17* | 13.29 | 82.86 |
| rrk1 | 2.23* | 7.35* | 13.23 | 83.0* |

| WSeq + Top 5 | BLEU-4 | METEOR | Div-2 | BS |
| --- | --- | --- | --- | --- |
| BM25 | 2.05 | 7.18 | 17.59 | 82.85 |
| CNN-DSSM | 2.07 | 7.37 | 18.32 | 83.03* |
| TPPA | 2.15* | 7.57* | 18.55* | 83.1* |
| rrk5 | 2.61* | 8.0* | 18.75* | 83.3* |

| WSeq + Top 10 | BLEU-4 | METEOR | Div-2 | BS |
| --- | --- | --- | --- | --- |
| BM25 | 2.31 | 7.44 | 19.48 | 83.0 |
| CNN-DSSM | 2.44 | 7.88* | 20.19 | 83.3* |
| TPPA | 2.59 | 7.97 | 19.72 | 83.35 |
| rrk10 | 3.01* | 8.67* | 21.07 | 83.66* |
SKT, CNN-DSSM and TPPA generally outperforms the post-only selection using BM25 (Tables 5 and 6) on both the Wiz and Holl-E data sets in terms of BLEU-4, METEOR and Bert-Score. TED performance suffers from increased knowledge injection: indeed, for $\mathrm{TED} + rrk_{i}$, i.e., using 'perfect knowledge', the performance decreases as the number of knowledge items grows. Zheng and Zhou (2019) claim that TED lacks a noise-filtering mechanism and thus underperforms with too much data. (2) Not surprisingly, knowledge selection methods with better retrieval performance achieve better response generation metrics. Consider Tables 5 and 6 and the corresponding retrieval performance in Table 3. For the Wiz data set, TPPA with $1rrk-30neg-1prk$ achieves the best retrieval performance and better results (Table 5) on both generative models (TED and WSeq) across different settings. This is confirmed on the Holl-E data set (Table 6), where TPPA outperforms the other models, including Post-KS and SKT. This confirms our conjecture that improving retrieval for knowledge injection should improve response generation.

Upper-bound Analysis. The upper bound for

Table 5: Knowledge-injection results on the Wizard of Wikipedia data set. The values are percentages (\%). * means the t-test $p < 0.05$ compared with the BM25 algorithm. 'Top 1', 'Top 5', 'Top 10' denotes injecting the top 1, 5 or 10 ranked knowledge items. BS is Bert-Score. Bold indicates the best score apart from the $rrk_{i}$ group.
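The asterisks in these tables mark t-tests at $p < 0.05$ against BM25. A minimal sketch of one common instantiation, a paired comparison over per-example scores, is shown below; the per-post BLEU-4 scores are made up, and a full test would also look up the p-value from the t distribution (e.g., via scipy):

```python
import math
from statistics import mean, stdev

def paired_t_statistic(a, b):
    """t statistic and degrees of freedom of a paired (dependent)
    t-test between two per-example score lists of equal length."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    return mean(d) / (stdev(d) / math.sqrt(n)), n - 1

# hypothetical per-post BLEU-4 scores for TPPA vs. the BM25 baseline
tppa = [3.9, 4.1, 3.7, 4.0, 3.8, 4.2, 3.6, 4.1]
bm25 = [3.3, 3.5, 3.2, 3.4, 3.1, 3.6, 3.0, 3.4]
t, dof = paired_t_statistic(tppa, bm25)
# with dof = 7, |t| > 2.365 corresponds to p < 0.05 (two-sided)
print(round(t, 2), dof)
```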
| TED + Top 1 | BLEU-4 | METEOR | Div-2 | BS |
| --- | --- | --- | --- | --- |
| BM25 | 9.87 | 9.09 | 26.21 | 83.6 |
| SKT | 9.01 | 8.56 | 19.86* | 83.4* |
| CNN-DSSM | 11.56* | 9.84* | 23.51* | 83.9 |
| TPPA | 12.66* | 10.37* | 17.95* | 84.1* |
| rrk1 | 45.94* | 30.61* | 29.03* | 89.6* |

| TED + Top 5 | BLEU-4 | METEOR | Div-2 | BS |
| --- | --- | --- | --- | --- |
| BM25 | 11.4 | 10.22 | 24.16 | 83.9 |
| CNN-DSSM | 12.02 | 10.4 | 23.71 | 84.0 |
| TPPA | 12.92* | 11.12* | 17.87* | 84.2 |
| rrk5 | 21.81* | 17.15* | 24.96* | 85.9* |

| TED + Top 10 | BLEU-4 | METEOR | Div-2 | BS |
| --- | --- | --- | --- | --- |
| BM25 | 5.5 | 8.36 | 2.45 | 83.5 |
| CNN-DSSM | 5.39 | 8.24 | 2.6* | 83.6 |
| TPPA | 5.6 | 8.24 | 2.53* | 83.6 |
| rrk10 | 6.53* | 9.88* | 2.75* | 84.0* |

| WSeq + Top 1 | BLEU-4 | METEOR | Div-2 | BS |
| --- | --- | --- | --- | --- |
| BM25 | 4.58 | 7.25 | 4.33 | 83.68 |
| SKT | 5.81* | 7.77* | 3.09 | 83.6* |
| CNN-DSSM | 5.6* | 7.62* | 4.48* | 83.5* |
| TPPA | 5.9* | 7.94* | 3.63* | 83.71 |
| rrk1 | 6.5* | 8.95* | 4.6* | 83.97* |

| WSeq + Top 5 | BLEU-4 | METEOR | Div-2 | BS |
| --- | --- | --- | --- | --- |
| BM25 | 5.15 | 7.51 | 8.65 | 83.43 |
| CNN-DSSM | 5.53* | 7.69 | 9.78* | 83.17* |
| TPPA | 5.96* | 7.74* | 7.82* | 83.59* |
| rrk5 | 7.22* | 9.55* | 9.39* | 83.85* |

| WSeq + Top 10 | BLEU-4 | METEOR | Div-2 | BS |
| --- | --- | --- | --- | --- |
| BM25 | 5.28 | 7.15 | 13.85 | 83.43 |
| CNN-DSSM | 5.88* | 7.35* | 16.26* | 83.3* |
| TPPA | 5.89* | 7.43* | 12.43* | 83.7* |
| rrk10 | 8.19* | 10.41* | 15.73* | 84.3* |
Table 6: Knowledge-injection results on the Holl-E data set. The values are percentages (\%). * means the t-test $p < 0.05$ compared with the BM25 algorithm. 'Top 1', 'Top 5', 'Top 10' denotes injecting the top 1, 5, or 10 ranked knowledge items. BS is Bert-Score. Bold indicates the best score apart from the $rrk_{i}$ group.

knowledge selection is the $rrk_{i}$ group. We observe how all of the retrieval models perform in combination with TED and WSeq (Tables 5 and 6). For the sake of concreteness, we focus on the BLEU-4 metric. Tables 5 and 6 show that low levels of knowledge injection, e.g., a single knowledge item (Top 1), lead to large differences in BLEU-4 between TPPA and RRK: $4.23\%$ ($8.14\% - 3.91\%$) for the Wiz and $33.28\%$ ($45.94\% - 12.66\%$) for the Holl-E data set. Despite that, TPPA approximates RRK better than the other models do and improves response generation.

Analysis of Added Useful Words. To analyze the properties of the generated responses, we define two notions: useful words and the useful word overlapping rate (UWOR). A word is useful if it appears in the response but not in the post. UWOR measures the coincidence ratio of two sequences and is defined as $\text{UWOR}(p, r) = \text{overlap}(p, r) / \text{distinct}(r)$ for post $p$ and response $r$. Here, overlap($\cdot$) is the number of distinct overlapping useful words between two sequences, and distinct($\cdot$) is the number of distinct words. We remove the stop words of the two sequences before
| Exp Name | Wizard of Wikipedia | Holl-E |
| --- | --- | --- |
| UWOR(p, r) | 14.6 | 7.52 |
| UWOR(k - p, r), BM25 | 4.11 | 9.42 |
| UWOR(k - p, r), SKT | 9.0 | 9.52 |
| UWOR(k - p, r), CNN-DSSM | 9.32 | 14.92 |
| UWOR(k - p, r), TPPA | 10.25 | 15.98 |
| UWOR(k - p, r), rrk1 | 34.52 | 67.84 |
Table 7: The useful word overlapping rate results on the Wiz and Holl-E data sets. All values are shown as percentages $(\%)$.

calculating UWOR.

We further test whether the retrieved knowledge brings additional useful words. We calculate $UWOR(k - p, r)$, where $k - p$ is the set of words in the knowledge $(k \in K_p)$ but not in the associated post $p$, i.e., $\{w \mid w \in k \wedge w \notin p\}$, where $w$ is a word of a sequence.

The results are shown in Table 7. For each experiment group in Table 7, we select the top 1 ranked sentence for calculation. The $UWOR(p, r)$ values for the Wiz and Holl-E data sets are just $14.6\%$ and $7.52\%$, respectively. Considering TPPA, for Wiz the number of additionally added useful words is comparable to what the post brings ($10.25\%$ vs. $14.6\%$); for Holl-E, the retrieved knowledge brings more than twice as many useful words as the post ($15.98\%$ vs. $7.52\%$). This demonstrates that TPPA can effectively expand the response with additional useful words drawn from the knowledge.

# 7 Conclusions and Discussions

Our investigations of the knowledge associated with post-response pairs lead us to valuable insights into how well-selected response-retrieved knowledge (RRK) can improve the performance of generative models. Considering that the response is not observable in the test phase, we developed the TPPA method, which selects knowledge items through careful embedding of the knowledge and an optimized representation of the post and post-related knowledge (PRK). We empirically demonstrate the superiority of TPPA; in addition, it is decoupled from the generative models, which provides flexibility to explore alternative components and models.

Despite its effectiveness, we now discuss one potential limitation of our TPPA model. We find that the quality of the knowledge base has a huge impact on the effectiveness of TPPA. The Wiz and Holl-E corpora we experiment with are two data sets whose candidate knowledge items are of high quality and manually selected.
As shown in Figure 1 for the Wiz data set, the $rrk_{1}$ group contains, on average, more than two more common words than the $prk_{1}$ group, words that would help constitute the ground truth response. The same trend also holds for the Holl-E data set.

![](images/ea0d23a39a04ad39b39000f4172d145ff5330ef61ccd9c77a2571b0a9153a0e8.jpg)
Figure 3: Common words count distribution between each source and the target response on the Reddit training set. The dashed lines are the average count of common words of each group (after removing stop words).

However, when looking at the Reddit data set4, as shown in Figure 3, we find that the $rrk_{1}$ group and the $prk_{1}$ group contain almost the same number of common words with the ground-truth response. This is not surprising given the nature of this data set: Reddit is an online forum where each post is typically initiated with a URL to a web page (grounding), provided by the author, that defines the topic of the post. However, the repliers to the post might not read that information at all and may respond according to their own knowledge. Empirically, we find that TPPA cannot benefit from the knowledge under this circumstance and performs worse than the baselines. This implies that when knowledge is potentially of low quality, using PRK as the source of evidence for pseudo relevance feedback can result in topic drift.

In future work, we would like to (1) make TPPA more robust irrespective of the quality of the provided knowledge; and (2) develop an end-to-end model that directly models response generation with the help of response-related knowledge.

# Acknowledgments

This work is partly supported by the Engineering and Physical Sciences Research Council (EPSRC Grant No. EP/S515528/1, 2102871). The Titan V used for this research was donated by the NVIDIA Corporation. All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors.
# References

Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72.

Guihong Cao, Jian-Yun Nie, Jianfeng Gao, and Stephen Robertson. 2008. Selecting good expansion terms for pseudo-relevance feedback. In Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 243-250.

Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051.

Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn, Jaap Kamps, and W. Bruce Croft. 2017. Neural ranking models with weak supervision. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 65-74. ACM.

Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of Wikipedia: Knowledge-powered conversational agents. In International Conference on Learning Representations.

Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Thirty-Second AAAI Conference on Artificial Intelligence.

Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM International Conference on Information & Knowledge Management, pages 2333-2338. ACM.

Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with Gumbel-Softmax. arXiv preprint arXiv:1611.01144.

Byeongchang Kim, Jaewoo Ahn, and Gunhee Kim. 2020. Sequential latent knowledge selection for knowledge-grounded dialogue. arXiv preprint arXiv:2002.07510.

Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055.

Rongzhong Lian, Min Xie, Fan Wang, Jinhua Peng, and Hua Wu. 2019. Learning to select knowledge for response generation in dialog systems. arXiv preprint arXiv:1902.04911.

Bhaskar Mitra, Fernando Diaz, and Nick Craswell. 2017. Learning to match using local and distributed representations of text for web search. In Proceedings of the 26th International Conference on World Wide Web, pages 1291-1299. International World Wide Web Conferences Steering Committee.

Nikita Moghe, Siddhartha Arora, Suman Banerjee, and Mitesh M. Khapra. 2018. Towards exploiting background knowledge for building conversation systems. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2322-2332.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Stephen Robertson, Hugo Zaragoza, and Michael Taylor. 2004. Simple BM25 extension to multiple weighted fields. In Proceedings of the Thirteenth ACM International Conference on Information and Knowledge Management, pages 42-49.

Stephen E. Robertson and Steve Walker. 1994. Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval. In SIGIR'94, pages 232-241. Springer.

Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Grégoire Mesnil. 2014. A latent semantic model with convolutional-pooling structure for information retrieval. In Proceedings of the 23rd ACM International Conference on Information and Knowledge Management, pages 101-110. ACM.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112.

Yik-Cheung Tam. 2020. Cluster-based beam search for pointer-generator chatbot grounded by knowledge. Computer Speech & Language, page 101094.

Zhiliang Tian, Rui Yan, Lili Mou, Yiping Song, Yansong Feng, and Dongyan Zhao. 2017. How to make context more useful? An empirical study on context-aware neural conversational models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 231-236.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.

Jason Weston, Emily Dinan, and Alexander Miller. 2018. Retrieve and refine: Improved sequence generation models for dialogue. In Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI, pages 87-92.

Rui Yan, Yiping Song, and Hua Wu. 2016. Learning to respond with deep neural networks for retrieval-based human-computer conversation system. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 55-64. ACM.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675.

Wen Zheng and Ke Zhou. 2019. Enhancing conversational dialogue models with grounded knowledge. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 709-718.
\ No newline at end of file diff --git a/approximationofresponseknowledgeretrievalinknowledgegroundeddialoguegeneration/images.zip b/approximationofresponseknowledgeretrievalinknowledgegroundeddialoguegeneration/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c0564de8192c81798c6916c0c0d6137e4dc6b86b --- /dev/null +++ b/approximationofresponseknowledgeretrievalinknowledgegroundeddialoguegeneration/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:33e7e9b6c40e38bb1345860b6ac9407f45552c57dc8427db6d5494ddedb8d099 +size 481764 diff --git a/approximationofresponseknowledgeretrievalinknowledgegroundeddialoguegeneration/layout.json b/approximationofresponseknowledgeretrievalinknowledgegroundeddialoguegeneration/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..31367ff098b823a79164a1cf260da55f6fe3176f --- /dev/null +++ b/approximationofresponseknowledgeretrievalinknowledgegroundeddialoguegeneration/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b991a8328a8750275f421be70f81922623234752a5a453223bd833dfe6e9a3c6 +size 462575 diff --git a/arramonajointnavigationassemblyinstructioninterpretationtaskindynamicenvironments/7e38fde0-0723-49f7-845d-785b0cc9becb_content_list.json b/arramonajointnavigationassemblyinstructioninterpretationtaskindynamicenvironments/7e38fde0-0723-49f7-845d-785b0cc9becb_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..f08a7e550b8d2ef7246ac2e0357a86c1f7a0fdad --- /dev/null +++ b/arramonajointnavigationassemblyinstructioninterpretationtaskindynamicenvironments/7e38fde0-0723-49f7-845d-785b0cc9becb_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b212853954218a271e882328df201a733c43f3e226b70d4e09a7a2116bd9343e +size 133520 diff --git 
a/arramonajointnavigationassemblyinstructioninterpretationtaskindynamicenvironments/7e38fde0-0723-49f7-845d-785b0cc9becb_model.json b/arramonajointnavigationassemblyinstructioninterpretationtaskindynamicenvironments/7e38fde0-0723-49f7-845d-785b0cc9becb_model.json new file mode 100644 index 0000000000000000000000000000000000000000..3c66ecfea7345af59e5fa3980656d1423e10d7dd --- /dev/null +++ b/arramonajointnavigationassemblyinstructioninterpretationtaskindynamicenvironments/7e38fde0-0723-49f7-845d-785b0cc9becb_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b89b6097fcb2abfa8d1867059c412482dc6ba9a63908deb4c857856771a0d717 +size 171287 diff --git a/arramonajointnavigationassemblyinstructioninterpretationtaskindynamicenvironments/7e38fde0-0723-49f7-845d-785b0cc9becb_origin.pdf b/arramonajointnavigationassemblyinstructioninterpretationtaskindynamicenvironments/7e38fde0-0723-49f7-845d-785b0cc9becb_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..545843eca89a2d4e21eba769a4a36dd04376ffd6 --- /dev/null +++ b/arramonajointnavigationassemblyinstructioninterpretationtaskindynamicenvironments/7e38fde0-0723-49f7-845d-785b0cc9becb_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f74e4c86a27c1fc7954a1a3ce698c568fa708c75230a608b7fdaf2884eca898 +size 5373177 diff --git a/arramonajointnavigationassemblyinstructioninterpretationtaskindynamicenvironments/full.md b/arramonajointnavigationassemblyinstructioninterpretationtaskindynamicenvironments/full.md new file mode 100644 index 0000000000000000000000000000000000000000..cbe697481c1dfd7125c7701c3a6bb6bd3f8b6cc2 --- /dev/null +++ b/arramonajointnavigationassemblyinstructioninterpretationtaskindynamicenvironments/full.md @@ -0,0 +1,646 @@ +# ARRAMON: A Joint Navigation-Assembly Instruction Interpretation Task in Dynamic Environments + +Hyounghun Kim Abhay Zala Graham Burri Hao Tan Mohit Bansal + +UNC Chapel Hill + +{hyounghk, aszala, 
ghburri, airsplay, mbansal}@cs.unc.edu

# Abstract

For embodied agents, navigation is an important ability but not an isolated goal. Agents are also expected to perform specific tasks after reaching the target location, such as picking up objects and assembling them into a particular arrangement. We combine Vision-and-Language Navigation, assembling of collected objects, and object referring expression comprehension, to create a novel joint navigation-and-assembly task, named ARRAMON. During this task, the agent (similar to a Pokémon GO player) is asked to find and collect different target objects one-by-one by navigating based on natural language (English) instructions in a complex, realistic outdoor environment, but then also ARRAnge the collected objects part-by-part in an egocentric grid-Layout environment. To support this task, we implement a 3D dynamic environment simulator and collect a dataset with human-written navigation and assembling instructions, and the corresponding ground truth trajectories. We also filter the collected instructions via a verification stage, leading to a total of 7.7K task instances (30.8K instructions and paths). We present results for several baseline models (integrated and biased) and metrics (nDTW, CTC, rPOD, and PTC), and the large model-human performance gap demonstrates that our task is challenging and presents a wide scope for future work.1

![](images/6f9902f03a3b0047afc81981e5d06827f81f3e34c0ca81f603d378dd0d267a2e.jpg)

Turn 1, Navigation Phase: Turn left to face the dumpster. Go around the building corner, and past the phone booth to the next intersection. At the intersection turn left. Next to the yellow building there is a green bucket. Pick the bucket up.

![](images/f976e98f0f35129750d7fe19f9f6578b0af0591458cb918576100488318febbf.jpg)

Assembly Phase: Turn right and place the bucket in front of the striped red mug.

![](images/68271ab465acc9440a737662c7ef58c12382e384f62bfdf29c9119a3f70b3743.jpg)

Navigation Phase: Turn around to face the speed limit sign. Go to the sign and then turn right around the corner. Go to the booth and a little past it and to the right there is a brown hourglass. Pick it up.

![](images/61f68be4314461ce9e7343b58bfcd7dbd58ae78888d15a0a7dd2e64f376a32e0.jpg)

Assembly Phase: Place the hourglass to the right of the red mug in front of you.

Figure 1: Navigation and assembly phases (2 turns), via NL (English) instructions in a dynamic 3D environment. In the navigation phase, agents are asked to find and collect a target object. In the assembly phase, agents have to egocentrically place the collected object at a relative location (navigation turn 2 starts where turn 1 ends; we only show 3 snapshots here for space reasons, but the full simulator and its image set will be made available).

# 1 Introduction

Navigation guided via flexible natural language (NL) instructions is a crucial capability for robotic and embodied agents. Such systems should be capable of interpreting human instructions to correctly navigate realistic complex environments and reach destinations by understanding the environment, and associating referring expressions in the instructions with the corresponding visual cues in the environment. Many research efforts have focused on this important vision-and-language navigation task (MacMahon et al., 2006; Mooney, 2008; Chen and Mooney, 2011; Tellex et al., 2011; Mei et al., 2016; Hermann et al., 2017; Anderson et al., 2018; Misra et al., 2018; Das et al., 2018; Thomason et al., 2019; Chen et al., 2019; Jain et al., 2019; Shridhar et al., 2020; Qi et al., 2020; Hermann et al., 2020). However, in real-world applications, navigation alone is rarely the exclusive goal.
In most cases, agents will navigate to perform another task at their destination, and also repeat subtasks; e.g., a warehouse robot may be asked to pick up several objects from different locations and then assemble the objects into a desired arrangement. When these additional tasks are interweaved with navigation, the degree of complexity increases exponentially due to cascading errors. Relatively few studies have focused on this idea of combining navigation with other tasks. Touchdown (Chen et al., 2019) combines navigation and object referring expression resolution, REVERIE (Qi et al., 2020) performs remote referring expression comprehension, and ALFRED (Shridhar et al., 2020) combines indoor navigation and household manipulation. However, there has been no task that integrates navigation in complex outdoor spaces with an assembling task (and object referring expression comprehension), requiring spatial relation understanding in an interweaved temporal way, in which the two tasks alternate for multiple turns with cascading error effects (see Figure 1).

Thus, we introduce a new task that combines the navigation, assembling, and referring expression comprehension subtasks. This new task can be explained as an intuitive combination of the navigation and collection aspects of Pokémon GO² and an ARRAnging (assembling) aspect, hence we call it 'ARRAMON'. In this task, an agent needs to follow navigational NL instructions to navigate through a complex outdoor and fine-grained city environment to collect diverse target objects via referring expression comprehension and dynamic 3D visuospatial relationship understanding w.r.t. other distracter objects. Next, the agent is asked to place those objects at specific locations (relative to other objects) in a grid environment based on an assembling NL instruction. These two phases are performed repeatedly in an interleaved manner to create an overall configuration of the set of collected objects.
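The interleaved two-phase structure described above can be sketched as a simple schedule; the helper below is purely illustrative and not part of the ARRAMON codebase:

```python
from itertools import chain

def phase_schedule(num_turns=2):
    """Phase ordering of one ARRAMON task instance: each turn is one
    navigation phase followed by one assembly phase, repeated
    num_turns times (2 in the paper's setup)."""
    return list(chain.from_iterable(
        [(t, "navigation"), (t, "assembly")] for t in range(1, num_turns + 1)))

print(phase_schedule())
# [(1, 'navigation'), (1, 'assembly'), (2, 'navigation'), (2, 'assembly')]
```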
For enabling the ARRAMON task, we also implement a simulator built in the Unity game engine³ to collect the dataset (see Appendix B.2 for the simulator interface). This simulator features a 3D synthetic city environment based on real-world street layouts with realistic buildings and textures (backed by Mapbox⁴) and a dynamic grid-floor assembly room (Figure 1), both from an egocentric view (the full simulator and its image set will be made available). We take 7 disjoint sub-sections from the city map and collect instructions from workers within each section. Workers had to write instructions based on ground truth trajectories (represented as path lines in navigation, and location highlighting during assembly). We placed diverse background objects as well as target objects so that the rich collected instructions require agents to utilize strong linguistic understanding. The instructions were next executed by a new set of annotators in a second verification stage and were filtered based on low match w.r.t. the original ground truth trajectory and the accuracy of assembly placement. Overall, this resulted in a dataset of 7,692 task instances with multiple phases and turns (a total of 30,768 instructions and paths).5

To evaluate performance in our ARRAMON task, we employ both the existing metric of nDTW (Normalized Dynamic Time Warping) (Ilharco et al., 2019) and our newly-designed metrics: CTC-k (Collected Target Correctness), rPOD (Reciprocal Placed Object Distance), and PTC (Placed Target Correctness). In the navigation phase, nDTW measures how similar generated paths are to the ground truth paths, while CTC-k computes how closely agents reach the targets. In the assembly phase, rPOD calculates the reciprocal distance between the target and the agent's placement locations, and PTC counts the correspondence between those locations.
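The assembly-phase metrics can be illustrated with a small sketch. The exact formulas are not given in this excerpt, so the $1/(1+d)$ form of rPOD, the Manhattan grid distance, and the exact-cell criterion for PTC below are our assumptions:

```python
def rpod(placed, target):
    """Reciprocal Placed Object Distance: higher when the placed grid
    cell is closer to the target cell (1.0 on an exact match).
    The 1/(1+d) form and Manhattan distance are assumed here."""
    d = abs(placed[0] - target[0]) + abs(placed[1] - target[1])
    return 1.0 / (1.0 + d)

def ptc(placed, target):
    """Placed Target Correctness: 1 if the object landed on the exact
    target cell, else 0 (assumed criterion)."""
    return int(placed == target)

print(rpod((2, 3), (2, 3)), ptc((2, 3), (2, 3)))  # exact placement
print(rpod((2, 5), (2, 3)), ptc((2, 5), (2, 3)))  # two cells off
```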
Due to the interweaving property of our task, with multiple navigation and assembly phases and turns, performance in the previous turn and phase cascadingly affects the metric scoring of the next turn and phase (Section 3.2).

Lastly, we implement multiple baselines as good starting points and to verify that our task is challenging and the dataset is unbiased. We present integrated vision-and-language, vision-only, language-only, and random-walk baselines. Our vision-and-language model shows better performance than the other baselines, which implies that our ARRAMON dataset is not skewed; moreover, there exists a very large gap between this model and human performance, implying that our ARRAMON task is challenging and that there is substantial room for improvement by future work. We will publicly release the ARRAMON simulator, dataset, and code, along with a leaderboard to encourage further community research on this realistic and challenging joint navigation-assembly task.

![](images/dee95173f8fb0101bfe38307d1da7a271526f5dbdaeeeded2a7bcdc548074023.jpg)
Figure 2: Illustration of the basic object types that the agent must collect, and which will also appear as distracter objects during both navigation and assembly phases.

# 2 Related Work

Vision-and-Language Navigation. Recently, Vision-and-Language Navigation (VLN) tasks, in which agents follow NL instructions to navigate through an environment, have been actively studied in research communities (MacMahon et al., 2006; Mooney, 2008; Chen and Mooney, 2011; Tellex et al., 2011; Mei et al., 2016; Hermann et al., 2017; Anderson et al., 2018; Misra et al., 2018; Das et al., 2018; Thomason et al., 2019; Chen et al., 2019; Jain et al., 2019; Shridhar et al., 2020; Qi et al., 2020; Hermann et al., 2020). To encourage the exploration of this challenging research topic, multiple simulated environments have been introduced.
Synthetic (Kempka et al., 2016; Beattie et al., 2016; Kolve et al., 2017; Brodeur et al., 2017; Wu et al., 2018; Savva et al., 2017; Zhu et al., 2017; Yan et al., 2018; Shah et al., 2018; Puig et al., 2018) as well as real-world, image-based environments (Anderson et al., 2018; Xia et al., 2018; Chen et al., 2019) have been used to provide agents with diverse and complementary training environments.

Referring Expression Comprehension. The ability to make connections between objects or spatial regions and the natural language expressions that describe those objects or regions has been a focus of many studies. Given that humans regularly carry out complex symbolic-spatial reasoning, there has been much effort to improve the capability of referring expression comprehension (including of remote objects) in agents (Kazemzadeh et al., 2014; Mao et al., 2016; Hu et al., 2016; Yu et al., 2018; Chen et al., 2019; Qi et al., 2020), but such reasoning remains challenging for current models. Our ARRAMON task integrates substantial usage of referring expression comprehension as a requirement, as it is necessary for the successful completion of both the navigation and assembly phases.

Assembling Task. Object manipulation and configuration is another subject that has been studied along with language and vision grounding (Bisk et al., 2016; Wang et al., 2016; Li et al., 2016; Bisk et al., 2018). However, most studies focus on addressing the problem in relatively simple environments from a third-person view. Our ARRAMON task, on the other hand, provides a challenging, dynamic, multi-step egocentric viewpoint within a more realistic and interactive 3D, depth-based environment. Moreover, the spatial relationships in ARRAMON change dynamically every time the agent moves, making 'spatial-action' reasoning more challenging.
We believe that an egocentric viewpoint is a key part of how humans perform spatial reasoning, and that such an approach is therefore vital to producing high-quality models and datasets. + +These three directions of research are typically pursued independently (esp. navigation and assembling), and there have been only a few recent efforts to combine the traditional navigation task with other tasks. Touchdown (Chen et al., 2019) combines navigation and object referring expression resolution, REVERIE (Qi et al., 2020) performs remote referring expression comprehension, while ALFRED (Shridhar et al., 2020) combines indoor navigation and household manipulation. Our new complementary task merges navigation in a complex outdoor space with object referring expression comprehension and assembling tasks that require spatial relation understanding in an interleaved temporal style, in which the two tasks alternate for multiple turns leading to cascading error effects. This will allow development of agents with more integrated, human-like abilities that are essential in real-world applications such as moving and arranging items in warehouses; collecting material and assembling structures in construction sites; finding and rearranging household objects in homes. + +# 3 Task + +The ARRAMON task consists of two phases: navigation and assembly. We define one turn as one navigation phase plus one assembly phase (see Figure 1). Both phases are repeated twice (i.e., 2 turns), starting with the navigation phase. During the navigation phase, an agent is asked to navigate a rich outdoor city environment by following NL instructions, and then collect the target object identified in the instructions via diverse referring expressions. 
During the assembly phase, the agent is asked to place the collected object (from the previous navigation phase) at a target location on a grid layout, using a different NL instruction via relative spatial referring expressions. Target objects and distracter objects are selected from one of the seven objects shown in Figure 2 and are then given one of two different patterns and one of seven different colors (see Figure 11 in Appendices). In both phases, the agent can take 4 actions: forward, left, right, and an end pickup/place action. Forward moves the agent 1 step ahead, and left/right rotates the agent $30^{\circ}$ in the respective direction. + +![](images/23c8eebe83cedc3f2624ae1543726a1cc924615e1ede327529c0950590462960.jpg) +Figure 3: Illustration of the seven city sections in which data was collected. + +# 3.1 Environment + +Navigation Phase. In this phase, agents are placed at a random spot in one of the seven disjoint subsections of the city environment (see Figure 3), provided with an NL instruction, and asked to find the target object. The city environment is filled with background objects: buildings and various objects found on streets (see Figure 4). There are also a few distracter objects in the city that are similar to the target objects (in object type, pattern, and color). During this phase, the agent's end action is 'pickup'. The pick-up action allows agents to pick up any collectible object within range (a rectangular area: 0.5 unit distance from the agent toward both their left- and right-hand side and 3 unit distance forward). + +Assembly Phase. Once the agent picks up the collectible object in the navigation phase, they enter the assembly phase.
In this phase, agents are again provided with an NL instruction, but they are now asked to place the target object they collected in the previous phase at the target location identified in the instruction. When the assembly phase begins, 8 decoy basic-type objects (Figure 2) with random patterns and colors are placed for use as distractions. In this phase, agents can only move on a 4-by-5 grid layout. The grid is bordered by 4 walls, each with a different texture/pattern (wood, brick, spotted, striped) to allow for more diverse expressions in the assembly phase. The agents' end action is 'place', which puts the collected object onto the grid one step ahead. Agents cannot place diagonally and, unlike in the navigation phase, cannot move forward diagonally. + +![](images/1419f12d1e364dd86d98f3e1db4e396ee46a76dbf9f7520ea132a162e20f6158.jpg) +Figure 4: Illustration of the background environmental objects scattered around the city environment. + +Hence, to accomplish the overall joint navigation-assembly task, agents are required to have integrated abilities. During navigation, they must take actions based on understanding the egocentric view and aligning the NL instructions with the dynamic visual environment to successfully find the target objects (relevant metrics: nDTW and CTC-k; see Section 3.2). During assembly, from an egocentric view, they must understand 3D spatial relations among objects identified by referring expressions in order to place the target objects at the right relative location (relevant metrics: PTC and rPOD; see Section 3.2). + +# 3.2 Metrics + +Normalized Dynamic Time Warping (nDTW). To encourage the agent to follow the paths closely during the navigation task, we employ nDTW (Ilharco et al., 2019) as our task metric. nDTW measures the similarity between a ground-truth path and the predicted trajectory of an agent, thus penalizing randomly walking around to find and pick up the target object. + +Collected Target Correctness (CTC).
An agent that understands the given NL instructions well should find and pick up a correct target object at the end of the navigation task. Therefore, we evaluate the agent's ability with CTC, which will have a value of 1 if the agent picks up a correct object, and a value of 0 if they pick up an incorrect object or do not pick up any object. Since collecting the correct object is a difficult task, we also implement the CTC-k metric. CTC-k measures the CTC score at distance k. If the agent is within k distance of the target object, then the value is 1, otherwise it is 0 (CTC-0 indicates the original CTC). + +Placed Target Correctness (PTC). In the assembly task, placing the collected object at the exact target position is most important. The PTC metric counts the correspondence between the target location and the placed location. If the placed and target locations match, then the PTC is 1, otherwise it is 0. If the collected object is not correct, then the score is also 0. + +Reciprocal Placed Object Distance (rPOD). We also consider the distance between the target position and the position where the collected object is eventually placed in the assembly task (Bisk et al., 2018). The distance squared is taken to penalize the agent more for placing the object far from the target position. Then 1 is added and the reciprocal is taken to normalize the final metric value: $\mathrm{rPOD} = \frac{1}{1 + D_a^2}$ , where $D_{a}$ is the Manhattan distance between the target and placed object positions. If the collected object is not correct, then the score is 0 (see Figure 9 in Appendices). + +Overall, our metrics reflect the interweaving property of our task. For example, if agents show poor performance in the first turn navigation phase (i.e., low nDTW and CTC-k scores), they will not obtain high scores in the continuing assembly phase (i.e., low PTC and rPOD scores), also leading to lower scores in the second turn navigation phase. 
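The metric definitions above can be sketched in code. The following is a simplified, dependency-free sketch, not the official evaluation script: `ctc_k` here checks only the distance condition (the full CTC also requires that the correct object be picked up), and the nDTW normalization follows Ilharco et al. (2019), with the success threshold `d_th = 3.0` being an assumed value.

```python
import math

# Simplified sketches of the Section 3.2 metrics.
# Not the official evaluation code; constants like d_th are assumptions.

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def ctc_k(agent_pos, target_pos, k):
    """CTC-k (distance condition only): 1 if within k of the target, else 0."""
    return 1 if manhattan(agent_pos, target_pos) <= k else 0

def rpod(placed_pos, target_pos, collected_correct=True):
    """rPOD = 1 / (1 + D_a^2), where D_a is the Manhattan placement error;
    the score is 0 if the collected object was not the correct one."""
    if not collected_correct:
        return 0.0
    d = manhattan(placed_pos, target_pos)
    return 1.0 / (1.0 + d * d)

def ndtw(ref, pred, d_th=3.0):
    """nDTW = exp(-DTW(ref, pred) / (|ref| * d_th)) with Euclidean step costs."""
    n, m = len(ref), len(pred)
    inf = float("inf")
    dtw = [[inf] * (m + 1) for _ in range(n + 1)]
    dtw[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(ref[i - 1], pred[j - 1])
            dtw[i][j] = cost + min(dtw[i - 1][j], dtw[i][j - 1], dtw[i - 1][j - 1])
    return math.exp(-dtw[n][m] / (n * d_th))
```

For example, a placement one grid cell away from the target yields rPOD = 1/(1+1) = 0.5, and two cells away yields 1/(1+4) = 0.2, matching the sharp penalty described above.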
+ +# 4 ARRAMON Dataset + +Our ARRAMON navigation-assembly dataset is a collection of rich human-written NL (English) instructions. The navigation instructions explain how to navigate the large outdoor environments and describe which target objects to collect. The assembly instructions provide the desired target locations for placement relative to objects. Each instruction set in the dataset is accompanied by ground truth (GT) trajectories and placement locations. Data was collected from the online crowd-sourcing platform Amazon Mechanical Turk (AMT). + +# 4.1 Data Collection + +The data collection process was broken into two stages: Stage 1: Writing Instructions, and Stage 2: Following/Verifying Instructions. Within each stage, there are two phases: Navigation and Assembly (see Figure 15 in Appendices for the interface of each stage and each phase). During the first stage's navigation phase, a crowdworker is placed in the city environment as described in Section 3.1 and moves along a blue navigation line (representing the GT path) that will lead them to a target object (see Appendix B.1 for the exact route generation details). While the worker travels this line, they write instructions describing their path (e.g., "Turn to face the building with the green triangle on a blue ... Walk past the bench to the dotted brown TV and pick it up"). Workers were bound to this navigation line to ensure that they wrote instructions only based on what they could see from the GT path. Next, the worker starts the first stage's assembly phase and is placed in a small assembly room, where they must place the object they just collected in a predetermined location (indicated by a transparent black outline of the object) and write instructions on where to place the object relative to other objects from an egocentric viewpoint (e.g., "Place the dotted brown TV in front of the striped white hourglass").
The worker is then returned to the city environment and repeats both phases once more. + +A natural way of verifying the instruction sets from Stage 1 is to have new workers follow them (Chen et al., 2019). Thus, during Stage 2 Verification, a new worker is placed in the environment encountered by the Stage 1 worker and is provided with the NL instructions that were written by that Stage 1 worker. The new worker has to follow the instructions to find the target objects in the city and place them in the correct positions in the assembly environment. Each instruction set from Stage 1 is verified by three unique crowdworkers. The Stage 2 workers' performance was then evaluated with the nDTW and PTC metrics. If at least one of the three Stage 2 workers scored higher than 0.2 on nDTW in both navigation turns and had a score of 1 on PTC in both assembly turns, then the corresponding Stage 1 instruction set was considered high quality and kept in the dataset; otherwise it was discarded. The remaining dataset has a high average nDTW score of 0.66 and an even higher expert score of 0.81 (see Sec. 8). + +![](images/015a458221e87655e9bb031b38ef121b15ecbd89af68ffc441ee31046821816e.jpg) +Figure 5: The frequency distribution of the 25 most common words in the dataset. Stopwords and target object words have been removed. + +# 4.2 Data Quality Control + +Instructions written by the Stage 1 workers needed to be clear and understandable. Workers were encouraged to follow certain rules and guidelines so that the resulting instructions would be of high quality and make proper use of the environment. + +Guidelines, Automated Checks, and Qualification Tests. Detailed guidelines were put in place to help ensure that the instructions written contained as few errors as possible. Rules were shown to workers before the start of the task, and active automated checks took place as the workers wrote.
These active checks helped prevent poor instructions (such as those including certain symbols) from being submitted, requiring workers to fix them before submitting. In cases where the instruction quality was questionable, an email notification was sent (see Appendix B.1 for the exact guidelines and checks that were implemented, as well as details regarding the email notifications). A screening test was also required at the start of both stages to test the crowdworkers' understanding of the task. If a wrong answer was chosen, an explanation was displayed and the crowdworker was allowed to try again (see Figures 13 and 14 in Appendices for the screening tests). To help workers place the object in the right location during Stage 2, we use a simple placement test, which they pass by placing an object at the correct place during a mock assembly phase (see Appendix B.1 for details). + +![](images/177d74a9666def853e85124e9d389cfe725e90e4a2dd35ace8aefe56e643b33c.jpg) +Figure 6: The frequency distributions of instruction lengths (left) and path lengths (right) in the navigation and assembly phases. Graphs cut off at length 125 since beyond that there are very few data points. + +Worker Qualifications. Workers completing the task were required to pass certain qualifications before they could begin. As the Stage 1 and 2 tasks require reading English instructions (Stage 1 also involves writing), we required workers to be from native English-speaking countries. Workers were required to have at least 1000 approved tasks and a $95\%$ or higher approval rating. A total of 96 unique workers for Stage 1 and 242 for Stage 2 were able to successfully complete their respective tasks. + +Worker Payment and Bonus Incentives. We kept fair and comparable pay rates based on similar datasets (Chen et al., 2019): writing (Stage 1) had a payment (including bonuses) of $1.00, and instruction verification (Stage 2) had a payment of $0.20. See Appendix B.1 for details on bonus criteria and rates.
+ +# 5 Data Analysis + +A total of 8,546 instruction sets were collected. Each set included two pairs of navigation and assembly instructions (thus, 34,184 instructions in total). After filtering based on the Stage 2 results, there remained 7,692 instruction sets (30,768 instructions in total). Our dataset size is comparable to that of other similar tasks, e.g., Touchdown (Chen et al., 2019) contains 9.3K examples (9.3K navigation and 27.5K SDR tasks), R2R (Anderson et al., 2018) has 21.5K navigation instructions, REVERIE (Qi et al., 2020) has 21.7K instructions, ALFRED (Shridhar et al., 2020) has 25.7K language directives describing 8K demonstrations, and the CVDN dataset (Thomason et al., 2019) has 7.4K NDH instances and 2K navigation dialogues. + +Linguistic Properties. From our dataset, we randomly sampled 50 instructions for manual analysis. A unique linguistic property found in our sample is 3D discrete referring expressions, which utilize 3D depth to guide the agent; this implies that the combined navigation and assembly task requires that agents possess a full understanding of object relations in a 3D environment. Our analysis showed other linguistic properties, such as frequent directional references, ego- and allocentric spatial relations, temporal conditions, and sequencing (see Appendix C.1 for the details and examples). + +| Length | Navigation max | Navigation avg. | Assembly max | Assembly avg. |
| --- | --- | --- | --- | --- |
| Instruction | 147 | 47.99 | 90 | 20.99 |
| Path | 156 | 48.14 | 8 | 3.32 |
| Action Sequence | 224 | 75.78 | 34 | 13.68 |

Table 1: Lengths of the instructions (in words), paths, and action sequences for both turns across all subsections in the city. + +Dataset Statistics. Figure 5 shows the most frequently occurring words in our dataset. These words are primarily directional or spatial relations. This implies that agents should be able to understand the concept of direction and the spatial relations between objects, especially as they change with movement. Table 1 and Figure 6 show that navigation tends to have longer instructions and path lengths. Assembly occurs in a smaller environment, requiring agents to focus less on understanding paths than in navigation and more on understanding the 3D spatial relations of objects from the limited egocentric viewpoint. + +# 6 Models + +We train an integrated Vision-and-Language model as a good starting-point baseline for our task. To verify that our dataset is not biased towards some specific factors, we trained ablated and random-walk models and evaluated them on the dataset. + +Vision-and-Language Baseline. This model uses vision and NL instruction features together to predict the next actions (Figure 7).
We implement each module for the navigation/assembly phases as: + +$$
L = \mathrm{Emb}_{L}(\text{Inst.}), \quad \tilde{a}_{t} = \mathrm{Emb}_{A}(a_{t}) \tag{1}
$$

$$
V_{t} = \mathrm{Enc}_{V}(\mathrm{Img}_{t}), \quad \tilde{L} = \mathrm{Enc}_{L}(L) \tag{2}
$$

$$
h_{t} = \mathrm{LSTM}(\tilde{a}_{t-1}, h_{t-1}) \tag{3}
$$

$$
\hat{V}_{t}, \hat{L}_{t} = \text{Cross-Attn}(V_{t}, \tilde{L}) \tag{4}
$$

$$
v_{t} = \mathrm{Attn}(h_{t}, \hat{V}_{t}), \quad l_{t} = \mathrm{Attn}(h_{t}, \hat{L}_{t}) \tag{5}
$$

$$
\mathrm{logit}_{a_{t}} = \mathrm{Linear}(v_{t}, l_{t}), \quad a_{t} = \arg\max(\mathrm{logit}_{a_{t}}) \tag{6}
$$

where $\mathrm{Img}_{t}$ is the view of the agent at time step $t$, Inst. is the natural language instruction given to the agent, and $a_{t}$ is the action at time step $t$. Instructions and actions are embedded via $\mathrm{Emb}_L$ and $\mathrm{Emb}_A$, respectively. We use ResNet (He et al., 2016) for the visual encoder, $\mathrm{Enc}_V$, to obtain visual features, $V_{t} \in \mathbb{R}^{w \times w \times d_{v}}$, and LSTM (Hochreiter and Schmidhuber, 1997) for the instruction encoder, $\mathrm{Enc}_L$, to obtain instruction features, $\tilde{L} \in \mathbb{R}^{l \times d_l}$. + +![](images/339e3f6fe1da16dec60258805e5f00843804d4842c23161659ece616bb6fb9b1.jpg) +Figure 7: Vision-and-Language model: environment visual features, instruction language features, and action features are aligned to generate the next action.
We employ the bidirectional attention mechanism (Seo et al., 2017) for the cross attention Cross-Attn to align the visual and instruction features, and use the general attention Attn to align the action feature with each of the fused visual and instruction features. See Appendix D for detailed descriptions of the Cross-Attn and Attn modules. + +We train the model with the teacher-forcing approach (Lamb et al., 2016) and a cross-entropy loss: $p_t(a_t) = \text{softmax}(\mathrm{logit}_{a_t})$; $\mathcal{L} = -\sum_t \log p_t(a_t^*)$, where $a_t^*$ is the ground-truth action at time step $t$. + +Vision/Language only Baseline. To check for unimodality bias, we evaluate vision-only and language-only baselines on our dataset. These exploit only a single modality (visual or language) to predict the appropriate next action. To be specific, they use the same architecture as the Vision-and-Language baseline except for the Cross-Attn module. + +Random Walk. Agents take a random action at each time step without considering instruction and environment information. + +Shortest Path. This baseline simulates an agent that follows the shortest path provided by the A* algorithm (Hart et al., 1968) to show that the GT paths are optimal in terms of trajectory distances. + +
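To make the attention and action-selection steps concrete, here is a dependency-free toy sketch in the spirit of Eq. (5) and Eq. (6); the vectors, the weight matrix, and the function names are hand-made for illustration, whereas the real model learns these parameters and uses the encoders described above.

```python
import math

# Toy sketch of general attention (Eq. 5) and greedy action selection
# (Eq. 6). All values here are illustrative, not learned parameters.

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def attn(h, feats):
    """Attn(h, feats): score each feature by h·f, softmax, weighted sum."""
    weights = softmax([dot(h, f) for f in feats])
    dim = len(feats[0])
    return [sum(w * f[d] for w, f in zip(weights, feats)) for d in range(dim)]

# 4 actions; "end" stands for the phase-specific pickup/place action.
ACTIONS = ["forward", "left", "right", "end"]

def select_action(v_t, l_t, W):
    """Linear(v_t, l_t) -> logits over the 4 actions; pick the argmax."""
    x = v_t + l_t                      # concatenate fused feature lists
    logits = [dot(row, x) for row in W]
    return ACTIONS[logits.index(max(logits))]
```

With an identity-like weight matrix, a fused feature vector whose largest component aligns with a given row selects that row's action, mirroring the argmax in Eq. (6).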
Val Seen:

| Model | nDTW | CTC k=0 | CTC k=3 | CTC k=5 | CTC k=7 | rPOD | PTC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| V/L | 0.135 | 0.000 | 0.098 | 0.149 | 0.200 | 0.058 | 0.044 |
| V/O | 0.055 | 0.000 | 0.043 | 0.062 | 0.087 | 0.008 | 0.001 |
| L/O | 0.110 | 0.000 | 0.044 | 0.095 | 0.147 | 0.023 | 0.017 |
| R/W | 0.045 | 0.000 | 0.030 | 0.054 | 0.092 | 0.005 | 0.001 |
| S/P | 1.000 | - | 1.000 | 1.000 | 1.000 | - | - |
| H/W | 0.671 | 1.000 | 1.000 | 1.000 | 1.000 | 0.879 | 0.861 |

Val Unseen:

| Model | nDTW | CTC k=0 | CTC k=3 | CTC k=5 | CTC k=7 | rPOD | PTC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| V/L | 0.109 | 0.000 | 0.062 | 0.108 | 0.153 | 0.036 | 0.028 |
| V/O | 0.043 | 0.000 | 0.031 | 0.057 | 0.085 | 0.007 | 0.002 |
| L/O | 0.105 | 0.000 | 0.029 | 0.068 | 0.126 | 0.017 | 0.013 |
| R/W | 0.045 | 0.000 | 0.024 | 0.043 | 0.075 | 0.005 | 0.001 |
| S/P | 1.000 | - | 1.000 | 1.000 | 1.000 | - | - |
| H/W | 0.670 | 1.000 | 1.000 | 1.000 | 1.000 | 0.869 | 0.856 |
+ +Table 2: Performance of baselines and humans on the metrics for the Val-Seen/Unseen splits. Overall, there is a large human-model performance gap, indicating our ARRAMON task is very challenging (V/L: Vision-and-Language, V/O: Vision-Only, L/O: Language-Only, R/W: Random-Walk, S/P: Shortest Path, H/W: Human-Workers). + +
Test Unseen:

| Model | nDTW | CTC k=0 | CTC k=3 | CTC k=5 | CTC k=7 | rPOD | PTC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| V/L | 0.114 | 0.000 | 0.082 | 0.122 | 0.168 | 0.047 | 0.035 |
| H/W | 0.664 | 1.000 | 1.000 | 1.000 | 1.000 | 0.884 | 0.873 |
| H/E | 0.806 | 1.000 | 1.000 | 1.000 | 1.000 | 0.992 | 0.990 |
+ +Table 3: The Vision-and-Language (V/L) baseline and human performance on the Test-Unseen split (H/W: Human-Workers, H/E: Human-Expert). + +# 7 Experiments + +We split the dataset into train/val-seen/val-unseen/test-unseen. We assign city sub-sections 1 to 5 to the train and val-seen splits, sub-section 6 to the val-unseen split, and sub-section 7 to the test-unseen split. We randomly split the data from sub-sections 1 to 5 in an 80/20 ratio to get the train and val-seen splits, respectively. Thus, the final number of task samples for each split is 4,267/1,065/1,155/1,205 (total: 17,068/4,260/4,620/4,820). The Stage 1 workers are equally distributed across the city sub-sections, so the dataset splits are not biased toward specific workers. We also keep two separate sections (i.e., sections 6 and 7) for the unseen datasets following Anderson et al. (2018), which allows evaluation of the models' ability to generalize to new environments. Note that for agents to proceed to the next phase, we allow them to pick up the closest target object (in the navigation phase) or place the collected object at the closest location (in the assembly phase) when they do not perform the required actions. Training Details: We use a hidden size of 128. For the word and action embedding sizes, we use 300 and 64, respectively. We use Adam (Kingma and Ba, 2015) as the optimizer and set the learning rate to 0.001 (see Appendix E.2 for details). + +# 8 Results and Analysis + +As shown in Table 2, overall, there is a large human-model performance gap, indicating that our ARRAMON task is very challenging and there is much room for model improvement. Performance in the navigation and assembly phases is directly related. If perfect performance is assumed in the navigation phase, rPOD and PTC are higher than if there were low CTC-k scores in navigation (e.g., 0.382 vs. 0.044 for PTC of the Vision-and-Language model on val-seen; see Appendix F for the comparison).
This scoring behavior demonstrates that the phases in our ARRAMON task are interwoven. Also, comparing scores from turns 1 and 2, all turn 2 scores are lower than their turn 1 counterparts (e.g., 0.222 vs. 0.049 nDTW for the Vision-and-Language model on the val-seen split; see Appendix F for the detailed turn-wise results). This shows that the performance of the previous turn strongly affects the next turn's result. Note that to relax the difficulty of the task, we consider CTC-3 (instead of CTC-0; see Section 3.2) as successfully picking up the target object, and we then calculate the assembly metrics under this assumption. If this were not done, then almost all the assembly metrics would be nearly zero. + +# 8.1 Model Ablations + +Vision/Language Only Baseline. As shown in Table 2, our Vision-and-Language baseline performs better than both the vision-only and language-only models, implying that our dataset is not biased toward a single modality and requires multimodal understanding to get high scores. + +Random Walk. The Random-Walk baseline shows poor performance on our task, implying that the task cannot be solved through random chance. + +Human Evaluation. We conducted human evaluations with workers (Tables 2 and 3) as well as an expert (Table 3). For the workers' evaluations, we averaged all the workers' scores on the verified dataset (from Stage 2: verification/following, see Sec. 4.1). For the expert evaluation, we took 50 random samples
+ +# 8.2 Output Examples + +As shown in an output example in Figure 8, our model navigates quite well and reaches very close to the target in the 1st turn and then places the target object in the right place in the assembly phase. However, in the 2nd turn, our model fails to find the "striped red mug" by missing the left turn around the "yellow and white banner". In the next assembling phase, the model cannot identify the exact location ("in front of the spotted yellow mug") to place the collected object (assuming the model picked up the correct object in the previous phase) possibly being distracted by another mug and misunderstanding the spatial relation. See Appendix G for more output examples. + +# 9 Conclusion + +We introduced ARRAMON, a new joint navigation+assembling instruction following task in which agents collect target objects in a large realistic outdoor city environment and arrange them in a dynamic grid space from an egocentric view. We collected a challenging dataset via a 3D synthetic simulator with diverse object referring expressions, environments, and visuospatial relationships. We also provided several baseline models which have a large performance gap compared to humans, implying substantial room for improvements by future work. + +# Acknowledgments + +We thank the reviewers for their helpful comments. This work was supported by NSF Award 1840131, ARO-YIP Award W911NF-18-1-0336, DARPA MCS Grant N66001-19-2-4031, and a Google Foucused Award. The views contained in this article are those of the authors and not of the funding agency. + +# References + +Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sunderhauf, Ian Reid, Stephen + +Figure 8: Visual demonstrations by our model in navigation and assembly phases (top-down view for illustration). GT navigation paths are solid pink lines and model's paths are dotted green lines (start = black dot). 
GT assembly target location is solid black circle and model's target object placement is dashed blue circle (start = checkered yellow tile, agent facing brick wall). +![](images/8f378b7e6e988e448191dea3c196599523894176ec57cf2777069c56d6211fd1.jpg) +Turn around and walk to the traffic signal. Take a right and walk past the orange cone in the middle of the road. Pick up the dotted red bucket in the middle of the road. + +![](images/b3a97e0e3153c74682a90c9962388cc2cd039c5cd02a5b4be534cdd3d1ad369b.jpg) +Turn right and place the dotted red bucket on top of the brown striped bowl. + +![](images/c533516f17cd88164ac31cc1287a94c4233f8de42417a94e115062e6f8fbaa66.jpg) +Turn around, go forward, and take a left turn at the intersection. Keep going until you see the yellow and white banner, then turn left. Behind a phone booth on your right you will find a striped red mug. Pick it up. + +![](images/b0bd660de69e0b9c47c8bd384f8ba67f9c66d1e0e72075623f81154c93459852.jpg) +Place the striped red mug in front of the spotted yellow mug. + +Gould, and Anton van den Hengel. 2018. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3674-3683. + +Charles Beattie, Joel Z Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Kuttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, et al. 2016. Deepmind lab. arXiv preprint arXiv:1612.03801. + +Yonatan Bisk, Daniel Marcu, and William Wong. 2016. Towards a dataset for human computer communication via grounded language acquisition. In *Workshops at the Thirtieth AAAI Conference on Artificial Intelligence*. + +Yonatan Bisk, Kevin J Shih, Yejin Choi, and Daniel Marcu. 2018. Learning interpretable spatial operations in a rich 3d blocks world. In Thirty-Second AAAI Conference on Artificial Intelligence. 
+ +Simon Brodeur, Ethan Perez, Ankesh Anand, Florian Golemo, Luca Celotti, Florian Strub, Jean Rouat, Hugo Larochelle, and Aaron Courville. 2017. Home: a household multimodal environment. In NeurIPS 2017's Visually-Grounded Interaction and Language Workshop. + +David L Chen and Raymond J Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In Twenty-Fifth AAAI Conference on Artificial Intelligence. +Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, and Yoav Artzi. 2019. Touchdown: Natural language navigation and spatial reasoning in visual street environments. In Conference on Computer Vision and Pattern Recognition. +Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. 2018. Embodied Question Answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). +P. E. Hart, N. J. Nilsson, and B. Raphael. 1968. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 4(2):100-107. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778. +Karl Moritz Hermann, Felix Hill, Simon Green, Fumin Wang, Ryan Faulkner, Hubert Soyer, David Szepesvari, Wojciech Marian Czarnecki, Max Jaderberg, Denis Teplyashin, et al. 2017. Grounded language learning in a simulated 3d world. arXiv preprint arXiv:1706.06551. +Karl Moritz Hermann, Mateusz Malinowski, Piotr Mirowski, Andras Banki-Horvath, Keith Anderson, and Raia Hadsell. 2020. Learning to follow directions in street view. Thirty-Fourth AAAI Conference on Artificial Intelligence. +Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780. +Ronghang Hu, Huazhe Xu, Marcus Rohrbach, Jiashi Feng, Kate Saenko, and Trevor Darrell. 2016. 
Natural language object retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4555-4564. +Gabriel Ilharco, Vihan Jain, Alexander Ku, Eugene Ie, and Jason Baldridge. 2019. Effective and general evaluation for instruction conditioned navigation using dynamic time warping. NeurIPS Visually Grounded Interaction and Language Workshop. +Vihan Jain, Gabriel Magalhaes, Alexander Ku, Ashish Vaswani, Eugene Ie, and Jason Baldridge. 2019. Stay on the Path: Instruction Fidelity in Vision-and-Language Navigation. In Proc. of ACL. +Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. 2014. Referitgame: Referring + +to objects in photographs of natural scenes. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 787-798. +Michal Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaskowski. 2016. ViZ-Doom: A Doom-based AI research platform for visual reinforcement learning. In IEEE Conference on Computational Intelligence and Games, pages 341-348, Santorini, Greece. IEEE. The best paper award. +Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. +Eric Kolve, Roozbeh Mottaghi, Winson Han, Eli VanderBilt, Luca Weihs, Alvaro Herrasti, Daniel Gordon, Yuke Zhu, Abhinav Gupta, and Ali Farhadi. 2017. Ai2-thor: An interactive 3d environment for visual ai. arXiv preprint arXiv:1712.05474. +Alex M Lamb, Anirudh Goyal Alias Parth Goyal, Ying Zhang, Saizheng Zhang, Aaron C Courville, and Yoshua Bengio. 2016. Professor forcing: A new algorithm for training recurrent networks. In Advances In Neural Information Processing Systems, pages 4601-4609. +Shen Li, Rosario Scalise, Kenny Admoni, Stephanie Rosenthal, and Siddhartha S Srinivasa. 2016. 
Spatial references and perspective in natural language instructions for collaborative manipulation. In 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pages 44-51. IEEE. +Matt MacMahon, Brian Stankiewicz, and Benjamin Kuipers. 2006. Walk the talk: Connecting language, knowledge, and action in route instructions. Def, 2(6):4. +Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous object descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 11-20. +Hongyuan Mei, Mohit Bansal, and Matthew R Walter. 2016. Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. In Thirtieth AAAI Conference on Artificial Intelligence. +Dipendra Misra, Andrew Bennett, Valts Blukis, Eyvind Niklasson, Max Shatkhin, and Yoav Artzi. 2018. Mapping instructions to actions in 3d environments with visual goal prediction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2667-2678. +Raymond J Mooney. 2008. Learning to connect language and perception. In AAAI, pages 1598-1601. + +Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In NIPS Autodiff Workshop. +Xavier Puig, Kevin Ra, Marko Boben, Jiaman Li, Tingwu Wang, Sanja Fidler, and Antonio Torralba. 2018. Virtualhome: Simulating household activities via programs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8494-8502. +Yuankai Qi, Qi Wu, Peter Anderson, Xin Wang, William Yang Wang, Chunhua Shen, and Anton van den Hengel. 2020. Reverie: Remote embodied visual referring expression in real indoor environments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 
Manolis Savva, Angel X. Chang, Alexey Dosovitskiy, Thomas Funkhouser, and Vladlen Koltun. 2017. MINOS: Multimodal indoor simulator for navigation in complex environments. arXiv:1712.03931.
Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In ICLR.
Pararth Shah, Marek Fiser, Aleksandra Faust, Chase Kew, and Dilek Hakkani-Tur. 2018. FollowNet: Robot navigation by following natural language directions with deep reinforcement learning. In Third Machine Learning in Planning and Control of Robot Motion Workshop at ICRA.
Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. 2020. ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Stefanie A Tellex, Thomas Fleming Kollar, Steven R Dickerson, Matthew R Walter, Ashis Banerjee, Seth Teller, and Nicholas Roy. 2011. Understanding natural language commands for robotic navigation and mobile manipulation. In AAAI.
Jesse Thomason, Michael Murray, Maya Cakmak, and Luke Zettlemoyer. 2019. Vision-and-dialog navigation. In Conference on Robot Learning (CoRL).
Sida I Wang, Percy Liang, and Christopher D Manning. 2016. Learning language games through interaction. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2368-2378.
Yi Wu, Yuxin Wu, Georgia Gkioxari, and Yuandong Tian. 2018. Building generalizable agents with a realistic and rich 3d environment. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Workshop Track Proceedings. OpenReview.net.

Fei Xia, Amir R Zamir, Zhiyang He, Alexander Sax, Jitendra Malik, and Silvio Savarese. 2018. Gibson env: Real-world perception for embodied agents.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9068-9079.
Claudia Yan, Dipendra Misra, Andrew Bennett, Aaron Walsman, Yonatan Bisk, and Yoav Artzi. 2018. CHALET: Cornell house agent learning environment. arXiv preprint arXiv:1801.07357.
Licheng Yu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Mohit Bansal, and Tamara L Berg. 2018. MAttNet: Modular attention network for referring expression comprehension. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1307-1315.
Yuke Zhu, Daniel Gordon, Eric Kolve, Dieter Fox, Li Fei-Fei, Abhinav Gupta, Roozbeh Mottaghi, and Ali Farhadi. 2017. Visual semantic planning using deep successor representations. In Proceedings of the IEEE International Conference on Computer Vision, pages 483-492.

# Appendices

# A Task and Metrics

As shown in Figure 9, the rPOD score decreases exponentially with the placement error (the Manhattan distance). Thus, to score high in the rPOD metric, agents should place the target objects as close to the target place as possible.

# B Dataset

To support the ARRAMON task, we collected a dataset. Our dataset is based on a large dynamic outdoor environment from which diverse instructions with interesting linguistic properties are derived.

# B.1 Data Collection

Route Generation. The ground truth trajectories are determined by the A* shortest path algorithm (Hart et al., 1968). Using the shortest path algorithm keeps the resulting Ground Truth (GT) path direct, reaching the target while avoiding unnecessary places. The blue navigation guideline provided to the Stage 1 workers mimics this GT path (Figure 15a).

Qualification Tests. When placing an object in the assembly phase, the item is placed 1 space in front of where the agent stands.
To ensure that the workers who will be following instructions in Stage 2 fully understood this concept, at the start of Stage 2 they were presented with a small test (Figure 10) that would show them how to correctly move and place objects, and they were required to demonstrate that they could do so. Both Stage 1 and 2 workers were also required to pass a short screening test before they could begin their respective tasks. The tests are shown in Figure 13 (Stage 1) and Figure 14 (Stage 2).

![](images/2d7261e31f5429281f9185dc8d10a9df4fdd112f2aa7a1928e547b5bfa366813.jpg)
Distance: 0, rPOD: 1.0

![](images/a3b72e1cd1d3830017f6be3f89a8f7e0ecef2c9488d04affcde22eb7253f7c5f.jpg)
Distance: 1, rPOD: 0.5

![](images/d2a1bf10d952b0a64fd8962c65db684db13cab3ea69ffb213b02e19bcf28919a.jpg)
Distance: 3, rPOD: 0.1

![](images/c3daa8fd676231904e57561d19dbb58298ce168ed48734741c6748705f7dc39c.jpg)
Distance: 5, rPOD: 0.0385

![](images/0ed7e308aa040a00495a07a7e0570eb09a3c7be2534172a7cfd3853cce2d7078.jpg)
Figure 9: Distance and rPOD metric: as the Manhattan distance between the target and agent placement locations increases, the rPOD score decreases exponentially.
Figure 10: Illustration of the assembly phase test before the start of Stage 2.

Worker Bonus Criteria and Rates. For Stage 1 workers who did the instruction writing task correctly $\{5, 20, 50\}$ times, a bonus of $\{\$0.10, \$0.90, \$4.00\}$, respectively, was awarded. Stage 1 workers were also provided a $\$0.10$ bonus for every instruction they wrote that successfully passed Stage 2 verification with a high nDTW score and a perfect assembling score.

Instruction Rules and Guidelines. Rules and guidelines were put into place to help ensure that instructions written by the Stage 1 workers were high quality and written with as few errors as possible.
In particular, the guidelines serve to prevent the workers from referencing other elements of the UI or tools we provided, such as the blue navigation line or guiding arrow (see Figure 15) and other elements that were not part of the true environment, in their instructions.

![](images/37c4fe312890f325568cb797c81c23a9fe41f9e38dbaf167450d2f69d9114d56.jpg)
Brown #BC6F1F
Green #00FF22
Blue #060FF
Purple #A100FF

![](images/3268744160867834cd80e748b111ce5d1ea84d2d9223a9af5f1d21f2d035e9f6.jpg)
Red #FF0002
Yellow #F4FF00
White #FFFFFF

![](images/f60a534f428378963746ca3dbe4b8f9edb1b70e63de15d5fd9cd4c1649f7182b.jpg)
Dotted Pattern
Striped Pattern

![](images/43e9644a6199d38e30e61373e0a998feca15f1994250e2fde624a4c1ab4fe731.jpg)
Figure 11: Illustration of the colors and patterns that collectable and distracter objects can have.
Figure 12: Illustration of the assembly grid with the starting position marked.

- Instructions must be written relative to objects and the environment and must not contain exact counts of movements (e.g., “Go forward 10 times and then turn left 2 times” is bad).
- Instructions must be clear, concise, and descriptive.
- Do not write more than the text field can hold.
- At the end of writing an instruction for the navigation phase, be sure to include something similar to "pick up" or "collect" the object.
- At the end of writing an instruction for the assembly phase, be sure to include something similar to "place" or "put" the object you collected before.
- Do not reference the navigation line, the blue balls on the navigation line, the floating arrows above the objects, or any of the interface elements when writing instructions.
- Do not reference any buildings that are a solid gray color.
- Do not reference the transparent black outline or the white grid tiles on the floor (Figure 12 and Figure 15b) during the assembly phase.
- Do not write vague or potentially misleading instructions, and do not create any instructions that reference previous instructions, such as "Go back to" or "Return to".
- Avoid spelling and grammar mistakes.
- When writing instructions for the assembly phase, do not write movement instructions. Make sure to use object references (e.g., "the red dotted ball").

# Quiz

You must pass the quiz before you can continue to the task.

What is a good example of a navigation instruction?

a: Go forward and turn left.
b: Go forward 5 times to reach the red TV. Then turn 4 times left and continue to the yellow building.
c: Turn to face the purple bowl to your right. Continue forward till you reach a lamp post. Pick up the yellow bowl near the red traffic cone.

What is a good example of a navigation instruction?

a: Go forward to the intersection and then turn right. Go forward till you reach the green traffic cone. Collect the green ball next to the lamp post.
b: Go forward to the intersection and then turn left. Go forward following the blue guideline till you reach the red book.
c: Turn around and go forward till you reach the floating arrow. Pick up the green ball underneath.

Which of the following is true?

a: All the objects will be dotted.
b: Objects will always be the same color.
c: Objects will always be a book, hourglass, mug, bucket, ball, tv, or bowl, but may vary in color and texture.

Which of the following is true?

a: During the Navigation phase, instruction writing is not required. Instruction writing is only required in the Assembly phase.
b: Both Navigation and Assembly phases require instructions to be written.
c: Writing instructions is optional and should only be done if you feel like it.

Which of the following is a good example of an Assembly instruction?

a: Turn to face the left wall. Then place the dotted yellow TV on top of the striped red book.
b: Place the object.
c: Move forward. Turn right and then put down the green book.

Get Results

Figure 13: Screening test that is required to be taken prior to starting Stage 1.

# Quiz

You must pass the quiz before you can continue to the task.

What is the overall goal of this task?

a: Roam aimlessly until you are done.
b: Follow the provided instructions as accurately as possible.
c: Pick up random things.

Which of the following is true?

a: All the objects will be dotted.
b: Objects will always be the same color.
c: Objects will always be a book, hourglass, mug, bucket, ball, tv, or bowl, but may vary in color and texture.

Get Results

Figure 14: Screening test that is required to be taken prior to starting Stage 2.

During the navigation phase, the instruction writing worker cannot stray from the navigation line, ensuring that they collect the objects in the correct order. During the assembly phase, regardless of where the instruction worker places the collected object, it will move into the correct position (workers are not informed of this), ensuring that the objects are always in the correct formation for the next phase and future instructions do not become invalid. Additionally, we have implemented active quality checks which will prevent a worker from submitting their instructions if certain criteria are not met. If a worker is blocked by one of these checks, they will be shown which check failed so that they can easily correct the error.

# General Active Quality Checks.

- Each instruction must contain at least 6 words.
- Less than $40\%$ of the characters in the instruction can be spaces.
![](images/1ace66ebbbb341089eafdafbdaab82b6cdc203f702d3a8c16c8c1d001966879e.jpg)

![](images/054f4ebc6fe608579305b469c97d1f3ccff7cf353bb12bba2494cb3155e49149.jpg)
(a) Navigation phase in stage 1.
(c) Navigation phase in stage 2.
Figure 15: Simulation interfaces of Stage 1 (upper) and Stage 2 (lower), showing separate examples of the navigation phase (left) and assembly phase (right) of the data collection. (a) Workers are initially shown the navigation phase interface and must follow the blue navigation line to the target objects, writing instructions as they go. (b) Workers are moved into assembly and must write assembly instructions guided by the highlighted (transparent black) objects. (c) Workers are provided with the navigation instructions and must find the target objects identified by the instructions. (d) Workers are provided with the assembly instructions and must place the collected object at the target position identified by the instructions.

![](images/8d03c1e8936d9e8e7be5a3737839c53577ec195a979be7b749c262e7ba852368.jpg)

![](images/8c5b971bdc79c34b1e7a709c252a698ce0b89c8cc604d92ffdfdd8b5b66ddf23.jpg)
(b) Assembly phase in stage 1.
(d) Assembly phase in stage 2.

- The symbols (, [, ], ), &, *, ^, %, $, #, @, !, =, and + cannot be included.
- Single-letter words other than "a" cannot be included.
- A single letter cannot be repeated 3 consecutive times, e.g., "sss".
- The same word cannot be repeated twice in a row.
- At least $40\%$ of the words in the instruction must be unique.
- The term "key" cannot be included.
- The term "step" cannot be included.
- The term "time" cannot be included.
- The term "go back" cannot be included.
- The term "return" cannot be included.
- The term "came" cannot be included.
- The term "item" cannot be included.

# Navigation Active Quality Checks.

- If the ground truth path requires turning at the beginning of the path, the term "turn" must be included.
- The term "arrow" cannot be included.
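For concreteness, the general and navigation checks listed above could be approximated by a validator such as the sketch below. The thresholds and banned terms come from the lists in this appendix, but the function name, tokenization, and substring matching are simplifying assumptions, not the authors' implementation.

```python
import re

# Banned terms and symbols as listed in the appendix; the substring-based
# matching here is a simplification (e.g., "time" would also catch "times").
BANNED_TERMS = ["key", "step", "time", "go back", "return", "came", "item"]
BANNED_SYMBOLS = set("([])&*^%$#@!=+")

def passes_general_checks(instruction: str) -> bool:
    """Hypothetical sketch of the 'general active quality checks'."""
    words = instruction.lower().split()
    if len(words) < 6:                                       # at least 6 words
        return False
    if instruction.count(" ") / len(instruction) >= 0.40:    # < 40% spaces
        return False
    if any(ch in BANNED_SYMBOLS for ch in instruction):      # forbidden symbols
        return False
    if any(len(w) == 1 and w != "a" for w in words):         # single-letter words
        return False
    if re.search(r"(\w)\1\1", instruction):                  # e.g. "sss"
        return False
    if any(a == b for a, b in zip(words, words[1:])):        # word repeated twice
        return False
    if len(set(words)) / len(words) < 0.40:                  # >= 40% unique words
        return False
    text = instruction.lower()
    if any(term in text for term in BANNED_TERMS):           # banned terms
        return False
    return True
```

A production version would also need the navigation- and assembly-specific term lists and the "turn at the start of the path" rule, which depends on the ground-truth route rather than on the text alone.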
# Assembly Active Quality Checks.

- The terms "tile" or "grid" cannot be included.
- The term "space" cannot be included.
- The term "go" cannot be included.
- The term "corner" cannot be included.
- The term "move" cannot be included.
- The black outline cannot be referenced.

Review Notifications. It is possible for instructions to be written that pass all automated checks and are still of poor quality. However, there is no quick and reliable way to automatically check whether an instruction that passes the tests is nonetheless vague or misleading. Additional active checks could be added; however, in cases of ambiguity, more active checks would result in potentially good instructions being blocked. Instead of blocking submission, checks that could have been incorrectly triggered send a notification email, allowing us to take quick action by manually reviewing the instruction in question and deciding whether the worker who created it needs feedback on writing better instructions.

# B.2 Interface

Stage 1: Instruction Writing. The goal of this stage is to write instructions on how to navigate and place objects. The provided interface was designed to make this process easier for the workers completing the task. In both phases, the interface provides
| Linguistic Property | Navigation Frequency | Assembly Frequency | Instruction Examples |
| --- | --- | --- | --- |
| Egocentric Spatial Relation | 34% | 34% | “...Go straight so the striped green bucket with the red tv on top of it is to your right...” |
| Allocentric Spatial Relation | 86% | 98% | “Place the dotted yellow bucket on the left side of the striped brown bowl.” |
| Temporal Condition | 64% | 2% | “...Continue to walk forward until you reach an intersection...” |
| Directional Reference | 96% | 68% | “Make a slight left and walk forward stopping at the intersection.” |
| Sequencing | 66% | 58% | “...Go forward past the dotted yellow bucket and past the lamp post near the blue phone booth...” |
| 3D Discrete Referring Expressions | 72% | 34% | “Put the striped blue book behind the dotted red mug.” |
Table 4: Linguistic properties and their frequencies found within 50 randomly sampled instruction sets from the ARRAMON dataset.

an arrow on the bottom left that points to the target destination or target location (depending on the active phase: navigation and assembly, respectively).

- Navigation Phase: (Figure 15a) The workers follow the provided navigation line and, as they follow it, write instructions on how to reach the destination. Additionally, the workers are provided with the controls and a few tips that they should keep in mind while completing the navigation phase. A small preview of the next phase (Assembly) is shown in the lower right.
- Assembly Phase: (Figure 15b) The interface is similar to that of the navigation phase. During this phase, the Assembly preview, which previously occupied the lower right corner, comes into focus, and the navigation phase preview now occupies that space. In this phase, no navigation line is provided, as there is nowhere that cannot be seen from the starting position. The controls and tip information are updated with information about the assembly phase.

Stage 2: Instruction Following. The goal of this stage is for the instructions written in the previous stage to be validated. Again, this interface was designed to make completing this task easier for the workers. Workers are also provided with some check boxes, which they can use to flag an instruction for certain issues so that we can more easily identify poor instructions.

- Navigation Phase: (Figure 15c) Workers are placed in an exact copy of the environment that a Stage 1 worker used and are given the instructions that worker wrote on how to accomplish the task, which are visible in the top right corner. This new worker is not provided with the blue guideline or the indicating arrow, and must now navigate using the instructions alone.
- Assembly Phase: (Figure 15d) The worker is again shifted into the assembly room, but will no longer see the transparent outline that indicates where the object should be placed. They must instead rely on the instructions written by a Stage 1 worker. The worker is also provided a real-time diagram indicating where they will place the object given the position where they currently stand. The object is always placed 1 space directly in front of the worker's location. The worker is also provided with some tips that might help them.

# C Data Analysis

# C.1 Linguistic Properties

As shown in Table 4, our instruction sets have diverse linguistic features that make our task more challenging. Our ARRAMON task requires that the agent be able to understand and distinguish between both egocentric and allocentric spatial relations, necessitating that it comprehend the relations between entities in the environment according to their location and orientation. The instructions contain many directional words and phrases, which require that agents have strong navigational skills. Additionally, due to the large scale of the environment, temporal condition expressions are crucial for agents to navigate effectively, as they are useful for describing long-distance travel.

# D Model

Cross Attention. We employ the bidirectional attention mechanism (Seo et al., 2017) to align the visual feature $V$ and the instruction feature $L$. We calculate the similarity matrix $S \in \mathbb{R}^{w' \times l}$ between the visual and instruction features:

$$
S_{ij} = W_{s}^{\top} \left(V_{i} \odot L_{j}\right) \tag{7}
$$

where $W_{s} \in \mathbb{R}^{d \times 1}$ is a trainable parameter and $\odot$ is the element-wise product. From the similarity
**Val Seen**

| Model | Turn | nDTW | CTC (k=0) | CTC (k=3) | CTC (k=5) | CTC (k=7) | rPOD | PTC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| V/L | T1 | 0.222 | 0.000 | 0.138 | 0.194 | 0.260 | 0.088 | 0.070 |
| V/L | T2 | 0.049 | 0.000 | 0.057 | 0.103 | 0.140 | 0.027 | 0.017 |
| V/L | total | 0.135 | 0.000 | 0.098 | 0.149 | 0.200 | 0.058 | 0.044 |

**Val Unseen**

| Model | Turn | nDTW | CTC (k=0) | CTC (k=3) | CTC (k=5) | CTC (k=7) | rPOD | PTC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| V/L | T1 | 0.186 | 0.000 | 0.080 | 0.139 | 0.192 | 0.054 | 0.044 |
| V/L | T2 | 0.033 | 0.000 | 0.044 | 0.078 | 0.113 | 0.019 | 0.011 |
| V/L | total | 0.109 | 0.000 | 0.062 | 0.108 | 0.153 | 0.036 | 0.028 |

(nDTW and CTC are navigation-phase metrics; rPOD and PTC are assembly-phase metrics.)
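The "total" row is consistent with a simple per-column average of the T1 and T2 rows, rounded to three decimals. This is an inference from the numbers themselves; the averaging rule is not stated in the text, so the check below only verifies that the reading fits the Val Seen columns.

```python
# Val Seen columns in order: nDTW, CTC k=0, k=3, k=5, k=7, rPOD, PTC.
t1    = [0.222, 0.000, 0.138, 0.194, 0.260, 0.088, 0.070]
t2    = [0.049, 0.000, 0.057, 0.103, 0.140, 0.027, 0.017]
total = [0.135, 0.000, 0.098, 0.149, 0.200, 0.058, 0.044]

for a, b, c in zip(t1, t2, total):
    # Mean of T1 and T2 matches the reported total up to 3-decimal rounding.
    assert abs((a + b) / 2 - c) < 0.00051
```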
+ +![](images/e48d34c65680e2f3e5cc0a58b6c949a7ac6ef8c303c725a276132472003381fe.jpg) +Ground Truth + +![](images/5155e4f886698d136d341d9399024217c204192d650b16f4d9f166ef4e08115f.jpg) +Human + +Our Model +Figure 16: Navigation paths of ground truth, human evaluation, random walk, and our model. Pink is the GT path and the other paths are shown in green (turn 1 starts from the black dot and goes to the white dot. Turn 2 starts from white dot and goes to the end of the path). +![](images/c970b870a57eb585d3da630a3bc84f41fb2ab76e87db345d66a4b2a23fdfaedc.jpg) +Turn 1: Turn slightly left as you move ahead past the traffic light. Go toward the speed limit sign, and move past the dotted white barrier. Head to the left to the lamp post, and fetch the dotted brown tv past a blue cone. + +Random Walk +![](images/9133072706e4dddc63f6f479f3138a5fff3428f9399fc0fd7d760687c0fa712b.jpg) +Turn 1 ●: Turn right until you see the green banner. Go towards the tire stack to the right of it and take a left down the street behind it. Go forward and pass the barrel. In the intersection there is a dotted white bucket. Pick up the dotted white bucket. + +![](images/9e2676aef5a47382ecf6065581d9ab88a3c898d6d453f2687b156f65a9c53d0f.jpg) +Ground Truth + +![](images/b658ffa2fee07094e5fa5353906b1b6b1fe6fdae5ffbc437c3aab767fde43e8f.jpg) +Human + +Our Model +![](images/93a30a30255258554cf90b36bc65bdda462f77001804eb0f129edcdd3f35ba8e.jpg) +Turn 2: Turn around and pass the blue and orange cones. Keep going straight for a long way passing the speed limit sign. Head toward the two striped yellow barriers ahead, but pick up the striped yellow book before you reach them. + +Random Walk +![](images/53dba75e7736224f284bec4a3da29fc27bdeaa034659c9a8243cea9c0d1d6812.jpg) +Turn 2: Turn right until you see the green cone. Go forward and take a left at the first street. Go towards the trash bags and take a left at the street. Pass the black barrel and go towards the dotted blue bucket. 
Pick up the dotted blue bucket. + +Table 5: Performance of Vision-and-Language (V/L) baseline for turns T1 and T2, plus overall scores on the Val-Seen/Unseen splits. + +
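As a reading aid for the rPOD columns in these tables: the example values in Figure 9 (distances 0, 1, 3, 5 mapping to scores 1.0, 0.5, 0.1, 0.0385) are consistent with the closed form below. This is purely an inference from those four points; the appendix states only that the score decays exponentially with Manhattan distance, so treat this formula as a guess that happens to fit.

```python
# Hypothetical closed form for rPOD, reconstructed from the Figure 9 examples;
# d is the Manhattan distance between the target and the placed object.
def rpod(manhattan_distance: int) -> float:
    return 1.0 / (1.0 + manhattan_distance ** 2)
```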
| Model | CTC (k=3) (Navigation) | rPOD (Assembly) | PTC (Assembly) |
| --- | --- | --- | --- |
| Vision-and-Language | 1.000 | 0.539 | 0.382 |
Table 6: Scores in the assembly phase calculated under the assumption of perfect performance in the navigation phase on the Val-Seen split.

matrix, the new fused instruction feature is:

$$
\bar{V} = \operatorname{softmax}\left(S^{\top}\right) V \tag{8}
$$

$$
\hat{L} = W_{L}^{\top} [L; \bar{V}; L \odot \bar{V}] \tag{9}
$$

Similarly, the new fused visual feature is:

$$
\bar{L} = \operatorname{softmax}(S) L \tag{10}
$$

$$
\hat{V} = W_{V}^{\top} [V; \bar{L}; V \odot \bar{L}] \tag{11}
$$

where $W_{L}$ and $W_{V}$ are trainable parameters.

General Attention. We employ a basic attention mechanism for aligning the action feature $h$ with each of the visual and instruction features:

$$
A_{i} = \hat{V}_{i}^{\top} h \tag{12}
$$

$$
\alpha = \operatorname{softmax}(A) \tag{13}
$$

$$
v = \alpha^{\top} \hat{V} \tag{14}
$$

# E Experiments

# E.1 Simulator Setup

Our task is quite challenging. In many cases, agents may not even be able to pick up an object in the navigation phase (agents have to be close enough to the object, and at the correct rotation, to pick it up; these factors, along with the size of the environment, make this difficult). To decrease the difficulty of the task, in the event agents do not successfully pick up an object, we allow them to continue to the assembly phase with whatever object is closest to their final location. Likewise, in the assembly phase, if the time step limit is reached before the agent places

![](images/e41d56266fff33e4eeaf9ca38bcfa227ce86413c20173a0d6e8223ee77a7ac26.jpg)
Turn left, go to the mailbox and turn right. Go past the dumpster then right at the next intersection. Go to the phone booth and collect the striped purple bowl.

![](images/2a93165ec1631c68af5d320e9f0a2cd9f2635c14a1fa40dac42bef9765b64831.jpg)
Place bowl in front of the striped blue hourglass.
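The general attention of Eqs. (12)-(14) can be sketched in plain Python as follows. This is an illustrative re-implementation with assumed shapes (a list of fused visual rows `V_hat` and an action feature `h`); the paper's actual model is built in PyTorch, so names and structure here are for exposition only.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def general_attention(V_hat, h):
    """Attend over fused visual features V_hat with the action feature h."""
    A = [dot(v_i, h) for v_i in V_hat]   # Eq. (12): A_i = V_hat_i^T h
    alpha = softmax(A)                   # Eq. (13)
    d = len(V_hat[0])
    # Eq. (14): v = alpha^T V_hat, a weighted sum of the visual rows
    return [sum(alpha[i] * V_hat[i][j] for i in range(len(V_hat)))
            for j in range(d)]
```

The same pattern is applied twice in the model, once against the fused visual feature and once against the fused instruction feature, each producing a context vector conditioned on the current action state.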
![](images/8d47b67c26833f3e729b5316a3bc8e3ea8cb448153a5cf06adeacae7cbaea85e.jpg)
Turn left to face the short traffic light. Walk to it and turn right. Walk to the orange barrels and turn left. Walk past the barricade to the mailbox and pick up the striped blue hourglass.

![](images/bde5b67bb0debda461894a07a4f2c74e2d1753764ee7df0c86bb41c73d3a9f09.jpg)
Place the striped blue hourglass against the brick wall and aligned with the purple bucket.

![](images/9b4bed2b8e244808f4c455ca2227002ced9c9e071d922003b227230ee78e8412.jpg)
Turn around then go left between the blue and brown buildings. Go past the silver dumpster and collect the striped yellow ball next to the mailbox.

![](images/98c1d7e57bbe954a47eb27147a164c6c36cb70dde529c4ebf9d4cf47d97b7162.jpg)
Place the ball on top of the striped purple bowl.
Figure 17: Visual demonstrations by our model in the navigation and assembly phases. GT navigation paths are solid pink lines and the model's paths are dotted green lines (start = black dot). The GT assembly target location is a solid black circle and the model's target object placement is a dashed blue circle (start = checkered yellow tile, agent facing the brick wall).

![](images/e66c9f886eda139338d60b5604e09992146df16a66fbe3f9c127334f7db0fcf0.jpg)
Turn to face the yellow and white flag. Walk to the orange barrels and turn left. Walk to the short traffic light and pick up the dotted purple mug.

![](images/7446a6aa0b8b91d45963e903edd12a35e05a875c62508b362e0c607318103e83.jpg)
Place the dotted purple mug in between the blue hourglass and the purple bucket.

the object down, the object will be placed in front of them (in the event "in front of them" is out of bounds, it is placed at their feet). Note that either of these actions will result in PTC and rPOD being 0.

# E.2 Training Details

We use PyTorch (Paszke et al., 2017) to build our model. We take the average of the losses from the navigation and assembly phase modules to calculate the final loss.
We use a hidden size of 128 for the linear layers and the LSTM. For the word and action embedding sizes, we use 300 and 64, respectively. The visual feature map size is $7 \times 7$ with 2048 channels. We use a dropout probability of 0.3. We use Adam (Kingma and Ba, 2015) as the optimizer and set the learning rate to 0.001. The number of trainable parameters of our Vision-and-Language model is 1.83M (Language-only: 1.11M, Vision-only: 0.73M). We use an NVIDIA RTX 2080 Ti and a Titan Xp for training and evaluation, respectively.

# F Results and Analysis

As shown in Table 5, almost all turn-1 scores are higher than the corresponding turn-2 scores. Scoring in the rPOD and PTC metrics in the assembly phase is largely dependent on the CTC-k score in the navigation phase. Comparing the rPOD and PTC scores of the Vision-and-Language model on the Val-Seen split (Table 5) with the ones from Table 6, when CTC-k decreases to about $1/10$ of its value (1.0 to 0.098), PTC also decreases to about $1/10$ (0.382 to 0.044). This demonstrates that the phases of our ARRAMON task are tightly interwoven and that the task is challenging to complete.

# G Output Examples

In the left path set of Figure 16, our model follows the instructions well in the beginning. However, the model goes a little bit further and fails to find the target object (dotted brown tv). In the second turn, the model turns around, but does not do so fully, so it heads in a different direction, failing to reach the goal position.

For the example on the right, the model performs very well in the first turn, but in the second turn it fails to find the target object although it gets very close to it, and it then backtracks out of the alley. Also, as shown in the figure, the human performs the navigation almost perfectly, indicating there is significant room for improvement by future work, and random-walk shows quite poor performance, implying that our ARRAMON task cannot be completed by random chance.

Figure 17 compares the model against the GT in both turns and phases.
On the left set, the model almost reaches the target object (striped purple bowl), but it cannot find it and goes a little further past it. In the corresponding assembly phase, the model places the collected object (assuming it picked up the correct object in the previous navigation phase) 1 space to the right of the target location. In the next navigation turn, due to the error in the previous turn, the model's path starts a bit further away from the GT; however, it starts to realign itself towards the end, around the corner. The model is able to locate the target object and stop to pick it up. In the next assembly phase, the model fails to place the collected object at the right location. On the right set, the model shows worse performance. It misses all of the turns needed to reach the target. In the assembly phase, the model misses the target location by 1 space, likely due to misunderstanding the complex spatial relationship in the instructions. In the next navigation phase, the model starts in the wrong place, so it ends up arriving at a totally different place from the target position. In the next assembly phase, the performance of the previous turn affected the object configuration, so the model cannot find the place "between the blue hourglass and the purple bucket".
# Assessing Human-Parity in Machine Translation on the Segment Level

Yvette Graham

ADAPT

Trinity College Dublin

ygraham@tcd.ie

Maria Eskevich

CLARIN ERIC

Utrecht

maria@clarin.eu

Christian Federmann

Microsoft Research

chrife@microsoft.com

Barry Haddow

School of Informatics

University of Edinburgh

bhaddow@inf.ed.ac.uk

# Abstract

Recent machine translation shared tasks have shown top-performing systems to tie or in some cases even outperform human translation.
Such conclusions about system and human performance are, however, based on estimates aggregated from scores collected over large test sets of translations, and so leave some questions unanswered. For instance, the fact that a system significantly outperforms the human translator on average does not necessarily mean that it has done so for every translation in the test set. Furthermore, are there source segments in evaluation test sets that still cause significant challenges for top-performing systems, and can such challenging segments go unnoticed due to the opacity of current human evaluation procedures? To provide insight into these issues we carefully inspect the outputs of top-performing systems in the recent WMT19 news translation shared task for all language pairs in which a system either tied with or outperformed human translation. Our analysis provides a new method of identifying the remaining segments for which either machine or human performs poorly. For example, in our close inspection of WMT19 English to German and German to English we discover the segments that disjointly proved a challenge for human and machine. For English to Russian, there were no segments included in our sample of translations that caused a significant challenge for the human translator, while we again identify the set of segments that caused issues for the top-performing system. + +# 1 Introduction + +Recent results of machine translation evaluation shared tasks indicate that the state of the art is now achieving and possibly even surpassing human performance, with the most recent annual Conference on Machine Translation (WMT) news task providing extensive human evaluation of systems and concluding that several systems performed on average as well as the human translator for English to Russian, English to German and German to English translation, and that a top system even surpassed human performance for the last two language pairs.
+ +Since 2017 the official results of the WMT news tasks have been based on the human evaluation methodology known as Direct Assessment (DA) (Graham et al., 2016), due to its many advantages over older methodologies. DA, for example, includes quality control mechanisms that allow data collected anonymously from crowd-sourced workers to be filtered according to reliability.1 Although WMT news task results are admittedly based on a substantially more valid methodology than system comparisons using automatic metrics such as BLEU, results in WMT human evaluations still leave some questions unanswered. For example, DA scores are based on average ratings attributed to translations sampled from large test sets, and although such methodology does allow application of statistical significance testing to identify potentially meaningful differences in system performance, it does not provide any insight into the reasons behind a significantly higher score or the degree to which systems perform better when translating individual segments. Furthermore, DA score distributions produced in the human evaluation of the news task are based on individual DA scores that alone cannot be relied upon to reflect the quality of individual segments (Graham et al., 2015). + +Past work has, however, provided a means of running a DA human evaluation in such a way that DA scores accurately reflect the performance of a system on a given individual segment (Graham et al., 2015). This method comes with the trade-off of requiring substantially more repeat assessments per segment than the test-set-level evaluation generally run, for example, to evaluate all primary submissions in the WMT news task.
In this work we demonstrate how this method has the potential to be employed as a secondary method of evaluation in WMT tasks for a smaller subset of systems, to provide segment-level insight into why the top-performing systems outperform one another or indeed to investigate the degree to which human and machine performance differs for individual segments. + +# 2 Related Work + +In recent years, machine translation has been biting at the heels of human translation for a small number of language pairs. The first claims that machines had surpassed human translation quality, for Chinese to English news text, were received with some skepticism and even controversy (Hassan et al., 2018), as they prompted re-evaluations that scrutinized the methodology applied, highlighting the influence of reverse-created test data and the lack of wider document context in evaluations (Läubli et al., 2018; Toral et al., 2018). Despite taking somewhat more care to eliminate such sources of inaccuracy, these re-evaluations included some potential issues of their own, such as employing somewhat outdated human evaluation methodologies, non-standard methods of statistical significance testing, and a lack of planning of evaluations in terms of statistical power. Graham et al. (2019, 2020), on the other hand, re-ran the evaluation, identified and fixed remaining causes of error, and subsequently confirmed that, at the overall level of the test set and with increased scrutiny on evaluation procedures, conclusions of human parity were still overly ambitious at that time. + +It was not long, however, before results were shown to have reached human performance according to more rigorous human evaluation procedures, as one year later at WMT 2019, MT system performance for some language pairs reached human performance and even surpassed it for two language pairs (Barrault et al., 2019).
+ +Although the admittedly rigorous human evaluation employed in WMT evaluations provides valid conclusions about systems significantly outperforming human translation, it nonetheless employs the somewhat opaque average Direct Assessment scores computed over large test sets of segments, which leave some important questions unanswered in terms of human parity. For example, even if a system performs better on average than a given human translator, this does not necessarily mean that the system translates every sentence better than the human translator. When a tie occurs between human and machine translation, it would be useful to know how performance compares between the two on individual segments. The current WMT human evaluation methodology does not allow for this, however. + +In this paper, we carry out a fine-grained segment-level comparison of system and human translations using human evaluation, comparing the top-performing MT systems from the WMT-19 news task and the human translator on the segment level for all language pairs in which a system was shown to either tie with (English to Russian) or surpass human performance (English to German; German to English). Human evaluation is required, as opposed to segment-level BLEU, for example, because metrics such as BLEU are not sufficiently accurate to identify fine-grained segment-level differences in quality, as can be seen from their low correlations with human assessment (Ma et al., 2019). We make all code and data collected in this work publicly available to aid future research. + +# 3 Segment-level Direct Assessment + +Segment-level Direct Assessment requires running human evaluation with sampling of translations carefully structured to ensure that repeat assessment of the same set of translations occurs a minimum of 15 times for the translations produced by the systems of interest (Graham et al., 2015).
For example, this can be carried out for a smaller number of translations and a smaller number of systems than in the full test-set evaluation, since collecting 15 repeat assessments makes exhaustive segment-level evaluation of every participating system likely to be overly costly. It is reasonable to focus the segment-level evaluation on a sample of approximately 500 translations selected at random for the two top-performing systems or indeed, as we do here, the top-performing system and the human translator. An important consideration, however, is that regardless of which systems may
+ +# 4 Experiments + +In order to investigate the degree to which human and machine perform differently on individual test set segments, we run segment-level DA on translations of the same random sample of 540 segments by the top-performing system and the human translator. We do this for each language pair in which there was a tie with human performance WMT-19 (English to Russian) or where machine translation performance had surpassed human translation quality (German to English; English to German).3 + +In order to access the bilingual speakers required for the source-based DA configuration we run all source-based DA HITs on an in-house crowdsourcing platform. In total 108,829 assessments were collected via the in-house platform. After removing quality controls, we ended up with 87,211 assessments for which we are confident of worker reliability, and employ all those assessments in our final analysis. + +![](images/4ac315948088f571a72b815d3f25ffe940ccc870bbcb60b2cfc88d9922a532b5.jpg) +English to Russian +Figure 1: Density plot of sample of 540 accurate segment-level DA scores for English to Russian news translation for top-performing system, FACEBOOK-FAIR, in WMT-19 versus the human translator where in the official results the system tied with human performance; Human denotes evaluation of segments translated by the creator of the standard WMT reference translations + +![](images/08d7e7a7d339f911f45a9b185d1b2c723662e650974bf0ce6b1216ef48efdce9.jpg) +English to German +Figure 2: Density plot of sample of 540 accurate segment-level DA scores for English to German news translation for the top-performing system, FACEBOOK-FAIR, in WMT-19 versus the human translator where in the official results the system beat human performance; Human denotes evaluation of segments translated by the creator of the standard WMT reference translations + +![](images/cdefb8ac483216bd47f1835e1d53d26e2c10909717e9512c9dd5f3306e0f78fc.jpg) +German to English + +Figures 1, 2 and 3 include 
density plots for human translation and the top-performing FACEBOOK-FAIR system (Ng et al., 2019) for the same 540 translated segments from WMT-19 for the three language pairs we investigate. + +For German to English and English to German translation in Figures 2 and 3 a similar pattern emerges in terms of comparison of human and machine-translated segments, as for both a slightly larger proportion of FACEBOOK-FAIR translations are scored high compared to the human translator – as can be seen from the higher red peak close to the extreme right of both plots indicating that the machine produces a marginally higher number of translations with higher levels of adequacy. For English to Russian translation, however, a different pattern occurs, as shown in Figure 1, as it appears that there are locations lower down on the adequacy scale in which the FACEBOOK-FAIR system performs worse than the human translator in three noticeable locations within its score distribution. However, these differences between language pairs are somewhat unsurprising considering that human and system were tied for English to Russian but system beat human in terms of statistical significance for both English to German and German to + +![](images/7b369ecaeb1cb11778127b7a34f81eb4421ed888f41c2e5d003de94dd42abc18.jpg) +Figure 3: Density plot of sample of 540 accurate segment-level DA scores for German to English translation new translation for the top-performing system, FACEBOOK-FAIR, in WMT-19 versus the human translator where in the official results the system beat human performance; Human denotes evaluation of segments translated by the creator of the standard WMT reference translations +English to Russian +Figure 4: Scatter plot of accurate segment-level DA scores for top-performing system, FACEBOOK-FAIR in WMT-19 versus the human translator where in the official results the system tied with human performance; Human A denotes evaluation of segments translated by the creator of the standard WMT 
reference translations; src denotes a source-based configuration of Direct Assessment was employed to collect scores; segment-level scores for human and machine are the average of a minimum of 15 human assessment scores + +English. + +# 4.1 Human V FACEBOOK-FAIR: English to Russian + +As revealed in the WMT-19 human evaluation results, a single system achieved a statistical tie with human assessment for English to Russian news translation. Differences in average overall scores computed on large test sets still leave some questions unanswered however, particularly in terms of which specific source inputs the machine or even human translator might still find challenging. Furthermore it does not provide any insight into differences in performance for specific source language input segments. + +Since we desire the ability to examine differences in translations of individual source segments for machine and human we examine scatter plots of accurate segment scores for translations of the same input source segment by the human translator and the top-performing machine, shown in Figure 4 for English to Russian for WMT-19 data. + +For English to Russian translation, the scatter plot of adequacy scores for human translator ver + +sus machine shown in Figure 4, in which each "+" signifies the translation of the same source language input test segment, reveals distinct levels of performance for human versus machine for individual segments. Figure 4 reveals that as expected the vast majority of translations score high for both human and machine translations, depicted by the location of the main bulk of translations within the upper right quadrant, as both human and machine translations in this quadrant received an average score above $50\%$ . A perhaps more interesting insight revealed by Figure 4 is the lack of translations appearing in the bottom right quadrant and this indicates that when the system does well on an input source segment so does the human. 
The reverse cannot be said however of the system, as the upper left quadrant in Figure 4 for English to Russian contains albeit a relatively small number of segments (12 or $2.4\%$ ) for which Facebook-FAIR translates poorly while corresponding adequacy scores for the human translator remain above $50\%$ . + +To gain more insight into what might take place in the case that either the machine or human performs poorly for the input segments scored below the $50\%$ threshold for English to Russian translation see Table 1, where we include the full translation examples for the two lowest scored FACEBOOK-FAIR translations. In example (a) in Table 1 the system is scored lower because it translates an unknown person on Capitol Hill incorrectly. While the human translator correctly expresses the fact that the person is from Capitol Hill, the system instead implies that the unknown person is on Capitol hill, i.e. as if that person were physically standing on a hill. All the other differences between the human and machine in terms of selection of words in the Russian translation are not critical and read well in terms of the fluent Russian. + +In example (b) in Figure 1 there is firstly a mistake in the system translation as it translates *detained* into Russian as *delayed* instead of the correct translation that is produced by the human translator. Secondly, in this same example, the system translates *migrant children* using a Russian term that only refers to children who are migrants themselves, while the human translator uses an arguably better term that includes both children who are migrants and the children of migrants. 
Finally, in example (b) the system translation appears to lose the intensity of the causality implication that the sentence originally has in English, while the human + +![](images/c03f330826e7b32f5b11d5cf282c154207f3a7d65d88e932769f25cd7545badf.jpg) +English to German +Figure 5: Scatter-plot of segment-level DA scores for top-performing system, FACEBOOK-FAIR in WMT-19 versus human translator; Human A (src) denotes evaluation of segments translated by the creator of the standard WMT reference translations in a source-based configuration of DA; segment-level scores for human and machine are the average of a minimum of 15 human assessment scores + +translation keeps this using the active form of the verb. Remaining English to Russian translations for which the system score falls below $50\%$ are included in Appendix A. + +As mentioned previously, for this English to Russian our analysis found no translations for which the human translator performed very poorly while the system succeeded. + +# 4.2 Human V Super-human + +# FACEBOOK-FAIR: English to German + +In the official WMT-19 human evaluation results of the English to German news task, again the same single system, FACEBOOK-FAIR, stood out as quite remarkably outperforming the human translator according to human assessment scores computed over the entire test set (Barrault et al., 2019). In order to further investigate this super-human performance, after collecting accurate segment-level scores for translations of the same 540 source language input segments for both FACEBOOK-FAIR and the human translator, we plot corresponding adequacy scores in Figure 5. + +In contrast to English to Russian (Figure 4), and perhaps not surprisingly since the system significantly outperforms the human translator as opposed + +
| | | DA (%) |
| --- | --- | --- |
| (a) | Source: The information appeared online Thursday, posted by an unknown person on Capitol Hill during a Senate panel's hearing on the sexual misconduct allegations against Supreme Court nominee Brett Kavanaugh. | |
| | Facebook-FAIR: *(Russian output not recoverable from the PDF extraction)* | |
| | Human: *(Russian translation not recoverable from the PDF extraction)* | |
| (b) | Source: The number of detained migrant children has spiked even though monthly border crossings have remained relatively unchanged, in part because harsh rhetoric and policies introduced by the Trump administration have made it harder to place children with sponsors. | |
| | Facebook-FAIR: *(Russian output not recoverable from the PDF extraction)* | |
+ +Table 1: English to Russian example translations from WMT-19 news task for which the top-performing system performed poorly; DA denotes average direct assessment scores for translations computed on a minimum of 15 human assessments; DA scores below the $50\%$ threshold highlighted in orange; DA scores above the $50\%$ threshold highlighted in blue + +to merely tying with it, the English to German system shows fewer machine translations receiving a low adequacy score combined with a high human score, as only two translations appear in the top-left quadrant of Figure 5. This highlights that even though the system performs extremely well on average, outperforming human translation overall, this can still occur in combination with an albeit small number of poor translations. + +To gain more insight into what might take place when either the machine or the human performs poorly, for the input segments scored below the $50\%$ threshold see Table 2. Two of the five translations that scored below $50\%$ for either human or machine were translated worse by the machine than by the human translator, as can be seen from the lower DA scores for (a) and (b) in Table 2. Firstly, in example (a) in Table 2 the system translation deviates from the syntactic structure of the source input sentence. It additionally drops *and*, in addition to translating *scene* as *Unfallort* (lit.: location of the accident). In contrast, the human translator instead produces *Ort des Geschehens*, which is arguably a better way to express *scene*. + +In example (b) in Table 2, the source word *trough* is mistranslated as *Trog* by the system, which is a more common translation of the word *trough* but is in this context an incorrect lexical choice, given that the source input sentence originates in the weather-report domain, for which *Tief* is the appropriate translation, correctly produced by the human translator.
+ +Despite the system performing poorly on two segments for which the human translates correctly, perhaps more surprising is that there are three source input segments for which the machine translates well but the human translator does not. In example (c) in Table 2, the human translates *broadcast networks* somewhat too literally as *Rundfunknetze* instead of *Rundfunksender*. In addition, the human translator incorrectly changes the tense. Finally, in example (c) in Table 2, *full Senate* is again translated too literally into *vollem Senat*. + +In example (d) in Table 2, the human translator chooses the incorrect present tense for the main verb, *kündigt ... an*, as opposed to the future tense. Lastly, in example (e) in Table 2, the human translator converts *two-foot* into $60~\mathrm{cm}$, which is only approximately correct; the source word *brim* is translated into *Rand*, which is arguably correct but is nonetheless an unusual lexical choice compared to the system translation, *Krempe*. Again, the tense in the latter part of the source input sentence is not preserved well in the human translation.
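The segment scoring and the $50\%$ quadrant analysis used throughout this section can be sketched as follows. This is our own illustration, not the authors' code: the minimum of 15 repeat assessments and the $50\%$ threshold come from the paper, while the function names and the treatment of scores exactly at the threshold are our assumptions.

```python
from statistics import mean

MIN_REPEATS = 15   # minimum repeat assessments per translation (Graham et al., 2015)
THRESHOLD = 50.0   # adequacy threshold separating the scatter-plot quadrants

def segment_score(ratings):
    """Average 0-100 adequacy ratings for one translation.

    Requires at least MIN_REPEATS ratings, the condition under which
    segment-level DA scores are considered accurate.
    """
    if len(ratings) < MIN_REPEATS:
        raise ValueError("too few repeat assessments for an accurate segment score")
    return mean(ratings)

def quadrant(machine_score, human_score, threshold=THRESHOLD):
    """Name the scatter-plot quadrant (machine score on x-axis, human on y-axis)."""
    if machine_score >= threshold:
        return ("upper-right: both adequate" if human_score >= threshold
                else "bottom-right: human poor")
    return ("upper-left: machine poor" if human_score >= threshold
            else "bottom-left: both poor")
```

For instance, example (a) in Table 2 (machine 40.7, human 91.3) falls in the upper-left quadrant, where the machine performs poorly while the human does not.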
| | | DA (%) |
| --- | --- | --- |
| (a) | Source: The driver of the car stopped and paramedics attended, but the man died at the scene. | |
| | Facebook-FAIR: Der Fahrer des Autos hielt an, Sanitäter kümmerten sich um ihn, doch der Mann starb noch am Unfallort. | 40.7 |
| | Human: Der Fahrer des Autos hielt an und Sanitäter kamen, aber der Mann starb am Ort des Geschehens. | 91.3 |
| (b) | Source: The approaching trough will bring some locally heavy rain to parts of the Southern California coastline. | |
| | Facebook-FAIR: Der herannahende Trog wird Teilen der südkalifornischen Küste lokal heftigen Regen bringen. | 40.9 |
| | Human: Das sich nähernde Tief wird einzelne örtlich starke Regenfälle für Teile der südkalifornischen Küste mit sich bringen. | 70.1 |
| (c) | Source: The cable and broadcast networks were all covering live hours later, when the Judiciary Committee was to vote to advance Kavanaugh's nomination to the full Senate for a vote. | |
| | Facebook-FAIR: Die Kabel- und Rundfunksender berichteten alle live Stunden später, als der Justizausschuss abstimmen sollte, um Kavanaughs Nominierung dem gesamten Senat zur Abstimmung vorzulegen. | 74.5 |
| | Human: Die Kabel- und Rundfunknetze haben später live übertragen, als der Justizausschuss abstimmen sollte, um die Ernennung von Kavanaugh zum vollen Senat zur Abstimmung voranzutreiben. | 46.5 |
| (d) | Source: Foreign buyers are set to be charged a higher stamp duty rate when they buy property in the UK - with the extra cash used to help the homeless, Theresa May will announce today. | |
| | Facebook-FAIR: Ausländischen Käufern soll beim Kauf von Immobilien in Großbritannien eine höhere Stempelsteuer in Rechnung gestellt werden - mit dem zusätzlichen Geld, das für Obdachlose verwendet wird, wird Theresa May heute ankündigen. | |
| | Human: Ausländischen Käufern wird beim Kauf von Immobilien in Großbritannien ein höherer Stempelsteuersatz in Rechnung gestellt - das zusätzliche Geld wird für Obdachlose verwendet werden, kündigt Theresa May heute an. | |
| (e) | Source: The out-sized hats come hot on the heels of 'La Bomba', the straw hat with a two-foot wide brim that's been seen on everyone from Rihanna to Emily Ratajkowski. | |
| | Facebook-FAIR: Die überdimensionalen Hüte sind auf den Fersen von "La Bomba", dem Strohhut mit zwei Fuß breiter Krempe, den man von Rihanna bis Emily Ratajkowski gesehen hat. | 87.3 |
| | Human: Die überdimensionalen Hüte haben sich an die Fersen von "La Bomba" geklebt, dem Strohhut mit einem 60 cm breiten Rand, der bei jedem von Rihanna bis Emily Ratajkowski zu sehen ist. | 48.3 |
+ +Table 2: English to German translations from WMT-19 news task for which either the top-performing system or human translator performs poorly; DA denotes average direct assessment scores for translations computed on a minimum of 15 human assessments; DA scores below the $50\%$ threshold highlighted in orange; DA scores above the $50\%$ threshold highlighted in blue + +![](images/778df8b78fe9a0043d8dcfd04204fe76324c5c408a947a84cd899e3efe5c802e.jpg) +German to English +Figure 6: Scatter plot of adequacy scores of translations of the same source language input segment produced by (i) human and (ii) top-performing machine translation system from WMT-19, FACEBOOK-FAIR, for German to English, where machine significantly outperformed human translation + +# 4.3 Human v Super-human FACEBOOK-FAIR: German to English + +For German to English translation, the scatter plot of translation scores for our 540 source segment sample, shown in Figure 6, reveals the bulk of translations located to a more extreme degree in the upper right corner of the plot compared to the other two language pairs. Like both English to Russian and English to German, there are segments for this language pair on which the top-performing system, FACEBOOK-FAIR, performs poorly compared to the human translator, as seven source segments $(1.4\%)$ appear in the upper-left quadrant, where the system received an adequacy score lower than $50\%$ while the human translation received a score higher than $50\%$. Like English to German, however, for German to English translation the reverse is also true: there are translations that catch out the human translator, for which he/she received a low score, while for the same source input the machine receives a high score. Such translations, six in total $(1.2\%)$, are located in the bottom-right quadrant of Figure 6. + +Table 3 shows the most extreme examples in
| | | DA (%) |
| --- | --- | --- |
| (a) | Source: Im Ziel warf er sein Paddel vor Freude weg und reckte beide Arme siegessicher in die Höhe - wohlwissend, dass es mindestens für eine Medaille reichen würde. | |
| | Facebook-FAIR: At the finish, he threw away his paddle for joy and raised both arms in victory - knowing that it would be enough for at least one medal. | 23.4 |
| | Human: He threw his paddle with joy at the finishing line and, confident of victory, threw both arms in the air - safe in the knowledge that his efforts would secure him a medal. | 67.5 |
| (b) | Source: Zur Vorsicht wurde auch noch der ÖAMTC-Notarzthubschrauber gerufen. | |
| | Facebook-FAIR: The ÖAMTC emergency medical helicopter was also called out as a precaution. | 42.7 |
| | Human: As a precautionary measure, an emergency air ambulance helicopter was also called into action. | 84.5 |
| (c) | Source: Hintergrund ist Musks überraschende Ankündigung vom August, Tesla von der Börse nehmen zu wollen. | |
| | Facebook-FAIR: The background is Musk's surprise announcement in August that he would take Tesla off the stock market. | 96.0 |
| | Human: The background is Musk's surprise announcement in August to take Tesla off the stock exchange. | 7.3 |
| (d) | Source: Zum 100-Jahr-Jubiläum der Republik, das in diesem Gedenkjahr seit mittlerweile fast zehn Monaten gefeiert wird, sind zahlreiche neue Bücher erschienen, die diese Frage meist im Rückblick auf die vergangenen hundert Jahre beantworten. | |
| | Facebook-FAIR: On the occasion of the 100th anniversary of the Republic, which has been celebrated in this commemorative year for almost ten months now, numerous new books have been published, most of which answer this question in retrospect of the past hundred years. | 97.7 |
| | Human: At the 100-year anniversary of the republic that has been celebrated in this commemorative year for almost ten months, many new books appeared that answer this question mainly looking back over the past hundred years. | 49.1 |
+ +Table 3: German to English translations from WMT-19 news task for which either the top-performing system or human translator performs poorly; DA denotes average direct assessment scores for translations computed on a minimum of 15 human assessments; DA scores below the $50\%$ threshold highlighted in orange; DA scores above the $50\%$ threshold highlighted in blue + +terms of contrast in adequacy scores for human versus machine translation for German to English for the top-performing system, FACEBOOK-FAIR. Two of the examples, (a) and (b), show segments for which the system performs worse than the human translator, and on close inspection we can see why this could be. For example, the machine translates the source segment in example (a) in Table 3 too literally and omits the phrase "in the air". Although the human translator scores higher at $67.5\%$, they are still docked some marks, probably because the human translator has also slightly mistranslated the German verb *wegwerfen* (to throw away), omitting *away* from his/her translation. In example (b) in Table 3 the machine translation system is hindered by the presence of an unknown acronym containing a German umlaut, which incorrectly remains as such in the English translation, receiving a score of $42.7\%$. The human translator, achieving a score of $84.5\%$, handles this better by omitting the acronym from the translation, though there is possibly still some meaning missing from the translation. + +Table 3 additionally includes some examples, (c) and (d), in which it was the human translator who was caught off guard by a particular source segment and scored substantially lower than the machine.
For instance, in example (c) in Table 3 the system correctly translates the German term *Börse* as *stock market* while the human translator chooses *stock exchange*, which has likely caused a low human assessment score, as in general companies are added to and removed from stock markets as opposed to stock exchanges. In example (d) in Table 3 it is likewise the human translator who translates the German term *erschienen* as *appeared* instead of the more appropriate *published* produced by FACEBOOK-FAIR. In addition, the human misplaces the translation of *meist* (most), which refers back to the books in the preceding phrase, and attaches it to the translation of *Rückblick* (looking back or retrospect), while the machine correctly translates *meist*. Remaining German to English translations for which either the system or human score falls below $50\%$ are included in Appendices B and C. + +# 5 Conclusions + +The question we ask in this work is highly relevant: what are the differences between human translations and the top MT translations on the segment level when "human parity" is reached? For the English to Russian system, our analysis makes it clear that there are a number of segments where the human did better than the machine, but on close inspection of these sentences there appears to be no generalizable difference that clearly characterizes them. + +For English to/from German, the situation between human and machine is more finely balanced,
+ +# Acknowledgments + +This study was supported by the ADAPT Centre for Digital Content Technology (www.adaptcentre.ie) at Trinity College Dublin funded under the SFI Research Centres Programme (Grant 13/RC/2106) co-funded under the European Regional Development Fund, and has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 825299 (Gourmet). We would also like to thank the anonymous reviewers for their feedback. + +# References + +George Awad, A. Butt, K. Curtis, Y. Lee, J. Fiscus, A. Godil, A. Delgado, J. Zhang, E. Godard, L. Diduch, A. F. Smeaton, Y. Graham, and W. Kraaij. 2019. Trecvid 2019: An evaluation campaign to benchmark video activity detection, video captioning and matching, and video search & retrieval. In Proceedings of TRECVID, volume 2019. +Loïc Barrault, Ondrej Bojar, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Müller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1-61, Florence, Italy. Association for Computational Linguistics. +Yvette Graham, George Awad, and Alan Smeaton. 2018. Evaluation of automatic video captioning using direct assessment. PLOS ONE, 13(9):1-20. +Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2016. Can machine translation systems be evaluated by the crowd alone. *Natural Language Engineering*, FirstView:1-28. +Yvette Graham, Barry Haddow, and Philipp Koehn. 2019. Translationese in machine translation evaluation. CoRR, abs/1906.09833. +Yvette Graham, Barry Haddow, and Philipp Koehn. 2020. Statistical power and translationese in machine translation evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, Virtual. 
Association for Computational Linguistics. + +Yvette Graham, Nitika Mathur, and Timothy Baldwin. 2015. Accurate evaluation of segment-level machine translation metrics. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics Human Language Technologies, Denver, Colorado. +Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, Shujie Liu, Tie-Yan Liu, Renqian Luo, Arul Menezes, Tao Qin, Frank Seide, Xu Tan, Fei Tian, Lijun Wu, Shuangzhi Wu, Yingce Xia, Dongdong Zhang, Zhirui Zhang, and Ming Zhou. 2018. Achieving human parity on automatic Chinese to English news translation. CoRR, abs/1803.05567. +Samuel Läubli, Rico Sennrich, and Martin Volk. 2018. Has neural machine translation achieved human parity? A case for document-level evaluation. In EMNLP 2018, Brussels, Belgium. Association for Computational Linguistics. +Qingsong Ma, Johnny Wei, Ondřej Bojar, and Yvette Graham. 2019. Results of the WMT19 metrics shared task: Segment-level and strong MT systems pose big challenges. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 62-90, Florence, Italy. Association for Computational Linguistics. +Simon Mille, Anja Belz, Bernd Bohnet, Yvette Graham, and Leo Wanner. 2019. The second multilingual surface realisation shared task (SR'19): Overview and evaluation results. In Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR 2019), pages 1-17, Hong Kong, China. Association for Computational Linguistics. +Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR's WMT19 news translation task submission. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 314-319, Florence, Italy. Association for Computational Linguistics.
+Antonio Toral, Sheila Castilho, Ke Hu, and Andy Way. 2018. Attaining the unattainable? reassessing claims of human parity in neural machine translation. CoRR, abs/1808.10432. \ No newline at end of file diff --git a/assessinghumanparityinmachinetranslationonthesegmentlevel/images.zip b/assessinghumanparityinmachinetranslationonthesegmentlevel/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..3c81ed66828229d14a6e58430f28c869ed7790ff --- /dev/null +++ b/assessinghumanparityinmachinetranslationonthesegmentlevel/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6ecaa09acdb61c48b48d65039c438f7e1a745d8900f2ea9909b4fba7e2aedf15 +size 590153 diff --git a/assessinghumanparityinmachinetranslationonthesegmentlevel/layout.json b/assessinghumanparityinmachinetranslationonthesegmentlevel/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..5a5e96fa5733b9c25e387c8b0b6486bcf4827976 --- /dev/null +++ b/assessinghumanparityinmachinetranslationonthesegmentlevel/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7edf67abd384d299ca9b98482c0ea3ad28988b01f3ec3e22e244c7e0e2b2b3b8 +size 217544 diff --git a/assessingrobustnessoftextclassificationthroughmaximalsaferadiuscomputation/137a9dfc-ece2-43d6-a55c-97a4b503f04c_content_list.json b/assessingrobustnessoftextclassificationthroughmaximalsaferadiuscomputation/137a9dfc-ece2-43d6-a55c-97a4b503f04c_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..f37b736b8af6c82463b9d123b38e020eee85e14a --- /dev/null +++ b/assessingrobustnessoftextclassificationthroughmaximalsaferadiuscomputation/137a9dfc-ece2-43d6-a55c-97a4b503f04c_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8590105f6d29ebfd60c1088d5f9574960f1fadc13543573f566b6426dbb53d0d +size 104194 diff --git 
a/assessingrobustnessoftextclassificationthroughmaximalsaferadiuscomputation/137a9dfc-ece2-43d6-a55c-97a4b503f04c_model.json b/assessingrobustnessoftextclassificationthroughmaximalsaferadiuscomputation/137a9dfc-ece2-43d6-a55c-97a4b503f04c_model.json new file mode 100644 index 0000000000000000000000000000000000000000..8846dfdcf771de579c68283f457396665da4dec1 --- /dev/null +++ b/assessingrobustnessoftextclassificationthroughmaximalsaferadiuscomputation/137a9dfc-ece2-43d6-a55c-97a4b503f04c_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2fe27eeba925e7977549bd6e94f752d4a29cef847c981fc8b92e67d1e0df16b8 +size 126993 diff --git a/assessingrobustnessoftextclassificationthroughmaximalsaferadiuscomputation/137a9dfc-ece2-43d6-a55c-97a4b503f04c_origin.pdf b/assessingrobustnessoftextclassificationthroughmaximalsaferadiuscomputation/137a9dfc-ece2-43d6-a55c-97a4b503f04c_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6d363cec50bc565882e37bccca92966c06c4c8b3 --- /dev/null +++ b/assessingrobustnessoftextclassificationthroughmaximalsaferadiuscomputation/137a9dfc-ece2-43d6-a55c-97a4b503f04c_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:50ca91177c8eaa68bc216e6990286fbbb9ee8f1fa24bb9062afe48bad498e1bc +size 4046049 diff --git a/assessingrobustnessoftextclassificationthroughmaximalsaferadiuscomputation/full.md b/assessingrobustnessoftextclassificationthroughmaximalsaferadiuscomputation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..1149da89e9ac441c6b699c1bca7c90d3e397543c --- /dev/null +++ b/assessingrobustnessoftextclassificationthroughmaximalsaferadiuscomputation/full.md @@ -0,0 +1,354 @@ +# Assessing Robustness of Text Classification through Maximal Safe Radius Computation + +Emanuele La Malfa† Min Wu† Luca Laurenti† Benjie Wang† Anthony Hartshorn§ Marta Kwiatkowska† +Department of Computer Science, University of Oxford, United Kingdom +§Genie AI, 
London, United Kingdom + +{emanuele.lamalfa, min.wu, luca.laurenti, benjie.wang, marta.kwiatkowska}@cs.ox.ac.uk {anthony.hartshorn}@genieai.co + +# Abstract + +Neural network NLP models are vulnerable to small modifications of the input that maintain the original meaning but result in a different prediction. In this paper, we focus on robustness of text classification against word substitutions, aiming to provide guarantees that the model prediction does not change if a word is replaced with a plausible alternative, such as a synonym. As a measure of robustness, we adopt the notion of the maximal safe radius for a given input text, which is the minimum distance in the embedding space to the decision boundary. Since computing the exact maximal safe radius is not feasible in practice, we instead approximate it by computing a lower and upper bound. For the upper bound computation, we employ Monte Carlo Tree Search in conjunction with syntactic filtering to analyse the effect of single and multiple word substitutions. The lower bound computation is achieved through an adaptation of the linear bounding techniques implemented in tools CNN-Cert and POPQORN, respectively for convolutional and recurrent network models. We evaluate the methods on sentiment analysis and news classification models for four datasets (IMDB, SST, AG News and NEWS) and a range of embeddings, and provide an analysis of robustness trends. We also apply our framework to interpretability analysis and compare it with LIME. + +# 1 Introduction + +Deep neural networks (DNNs) have shown great promise in Natural Language Processing (NLP), outperforming other machine learning techniques in sentiment analysis (Devlin et al., 2018), language translation (Chorowski et al., 2015), speech recognition (Jia et al., 2018) and many other tasks1. + +Despite these successes, concerns have been raised about robustness and interpretability of NLP models (Arras et al., 2016). 
It is known that DNNs are vulnerable to adversarial examples, that is, imperceptible perturbations of a test point that cause a prediction error (Goodfellow et al., 2014). In NLP this issue manifests itself as a sensitivity of the prediction to small modifications of the input text (e.g., replacing a word with a synonym). In this paper we work with DNNs for text analysis and, given a text and a word embedding, consider the problem of quantifying the robustness of the DNN with respect to word substitutions. In particular, we define the maximal safe radius (MSR) of a text as the minimum distance (in the embedding space) of the text from the decision boundary, i.e., from the nearest perturbed text that is classified differently from the original. Unfortunately, computation of the MSR for a neural network is an NP-hard problem and becomes impractical for real-world networks (Katz et al., 2017). As a consequence, we adapt constraint relaxation techniques (Weng et al., 2018a; Zhang et al., 2018; Wong and Kolter, 2018) developed to compute a guaranteed lower bound of the MSR for both convolutional (CNNs) and recurrent neural networks (RNNs). Furthermore, in order to compute an upper bound for the MSR we adapt the Monte Carlo Tree Search (MCTS) algorithm (Coulom, 2007) to word embeddings to search for (syntactically and semantically) plausible word substitutions that result in a classification different from the original; the distance to any such perturbed text is an upper bound, albeit possibly loose. We employ our framework to perform an empirical analysis of the robustness trends of sentiment analysis and news classification tasks for a range of embeddings on vanilla CNN and LSTM models. In particular, we consider the IMDB dataset (Maas et al., 2011), the Stanford Sentiment Treebank (SST) dataset (Socher et al., 2013), the AG News Corpus Dataset (Zhang et al., 2015) and the NEWS Dataset (Vitale et al., 2012).
We empirically observe that, although generally NLP models are vulnerable to minor perturbations and their robustness degrades with the dimensionality of the embedding, in some cases we are able to certify the text's classification against any word substitution. Furthermore, we show that our framework can be employed for interpretability analysis by computing a saliency measure for each word, which has the advantage of being able to take into account non-linearities of the decision boundary that local approaches such as LIME (Ribeiro et al., 2016) cannot handle. + +In summary this paper makes the following main contributions: + +- We develop a framework for quantifying the robustness of NLP models against (single and multiple) word substitutions based on MSR computation. +- We adapt existing techniques for approximating the MSR (notably CNN-Cert, POPQORN and MCTS) to word embeddings and semantically and syntactically plausible word substitutions. +- We evaluate vanilla CNN and LSTM sentiment and news classification models on a range of embeddings and datasets, and provide a systematic analysis of the robustness trends and comparison with LIME on interpretability analysis. + +Related Work. Deep neural networks are known to be vulnerable to adversarial attacks (small perturbations of the network input that result in a misclassification) (Szegedy et al., 2014; Biggio et al., 2013; Biggio and Roli, 2018). The NLP domain has also been shown to suffer from this issue (Belinkov and Bisk, 2018; Ettinger et al., 2017; Gao et al., 2018; Jia and Liang, 2017; Liang et al., 2017; Zhang et al., 2020). The vulnerabilities of NLP models have been exposed via, for example, small character perturbations (Ebrahimi et al., 2018), syntactically controlled paraphrasing (Iyyer et al., 2018), targeted keywords attacks (Alzantot et al., 2018; Cheng et al., 2018), and exploitation of back-translation systems (Ribeiro et al., 2018). 
Formal verification can guarantee that the classification of an input of a neural network is invariant to perturbations of a certain magnitude, which can + +be established through the concept of the maximal safe radius (Wu et al., 2020) or, dually, minimum adversarial distortion (Weng et al., 2018b). While verification methods based on constraint solving (Katz et al., 2017, 2019) and mixed integer programming (Dutta et al., 2018; Cheng et al., 2017) can provide complete robustness guarantees, in the sense of computing exact bounds, they are expensive and do not scale to real-world networks because the problem itself is NP-hard (Katz et al., 2017). To work around this, incomplete approaches, such as search-based methods (Huang et al., 2017; Wu and Kwiatkowska, 2020) or reachability computation (Ruan et al., 2018), instead compute looser robustness bounds with much greater scalability, albeit relying on the knowledge of nontrivial Lipschitz constants. In this work, we exploit approximate, scalable, linear constraint relaxation methods (Weng et al., 2018a; Zhang et al., 2018; Wong and Kolter, 2018), which do not assume Lipschitz continuity. In particular, we adapt the CNN-Cert tool (Boopathy et al., 2019) and its recurrent extension POPQORN (Ko et al., 2019) to compute robustness guarantees for text classification in the NLP domain. We note that NLP robustness has also been addressed using interval bound propagation (Huang et al., 2019; Jia et al., 2019). + +# 2 Robustness Quantification of Text Classification against Word Substitutions + +In text classification an algorithm processes a text and associates it to a category. Raw text, i.e., a sequence of words (or similarly sentences or phrases), is converted to a sequence of real-valued vectors through an embedding $\mathcal{E}: W \to \mathcal{X} \subseteq \mathbb{R}^d$ , which maps each element of a finite set $W$ (e.g., a vocabulary) into a vector of real numbers. 
There are many different ways to build embeddings (Goldberg and Levy, 2014; Pennington et al., 2014; Wallach, 2006); nonetheless, their common objective is to capture relations among words. Furthermore, it is also possible to enforce syntactic/semantic constraints in the embedding, a technique commonly known as counter-fitting (Mrkšić et al., 2016), which we assess from a robustness perspective in Section 3. Each text is represented uniquely by a sequence of vectors $\boldsymbol{x} = (\underline{x}_1, \dots, \underline{x}_m)$, where $m \in \mathbb{N}$, $\underline{x}_i \in \mathcal{X}$, padding if necessary. In this work we consider text classification with neural networks; hence, a text embedding $\boldsymbol{x}$ is classified into a category $c \in C$ through a trained network $\mathbf{N} : \mathbb{R}_{[0,1]}^{d \cdot m} \to \mathbb{R}^{|C|}$, i.e., $c = \arg \max_{i \in C} \mathbf{N}_i(\boldsymbol{x})$, where without any loss of generality we assume that each dimension of the input space of $\mathbf{N}$ is normalized between 0 and 1. We note that pre-trained embeddings are scaled before training, thus resulting in an $L_{\infty}$ diameter whose maximum value is 1. Thus, the lower and upper bound measurements are affected by normalization only when one compares embeddings with different dimensions under norms other than $L_{\infty}$. In this paper robustness is measured for both convolutional and recurrent neural networks using the distance between words in the embedding space, calculated with either the $L_2$ or the $L_{\infty}$ norm: while the former is a proxy for semantic similarity between words in polarized embeddings (this is discussed in more detail in the Experimental Section), the latter, by taking into account the maximum variation along all the embedding dimensions, is used to compare different robustness profiles.
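Concretely, the pipeline just described (embedding lookup with padding, then an argmax over the network's class scores) can be sketched as follows. This is a toy illustration only: the vocabulary, vectors, and stand-in linear "network" are invented for the sketch and are not the paper's models.

```python
# Toy sketch of text classification via an embedding E: W -> X ⊂ R^d
# followed by a network N whose predicted class is argmax_i N_i(x).
# All values below are made up for illustration.

EMBEDDING = {                      # E: W -> R^2  (d = 2)
    "good":  [0.9, 0.1],
    "bad":   [0.1, 0.9],
    "movie": [0.5, 0.5],
    "<pad>": [0.0, 0.0],
}

def embed(words, m=3):
    """Embed a text as x = (x_1, ..., x_m), padding if necessary."""
    padded = (words + ["<pad>"] * m)[:m]
    return [EMBEDDING[w] for w in padded]

def network(x):
    """Stand-in for N: sums each embedding axis, giving one score per
    class in C = {positive, negative}. A real N would be a CNN/LSTM."""
    flat = [v for vec in x for v in vec]
    return [sum(flat[0::2]), sum(flat[1::2])]

def classify(words):
    scores = network(embed(words))
    return max(range(len(scores)), key=scores.__getitem__)  # argmax_i N_i(x)

print(classify(["good", "movie"]))  # prints 0 (the "positive" class here)
```

The padding step mirrors the "$\underline{x}_i \in \mathcal{X}$, padding if necessary" convention above; everything downstream only ever sees the fixed-length sequence of vectors.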
+ +# 2.1 Robustness Measure against Word Substitutions + +Given a text embedding $\pmb{x}$, a metric $L_{p}$, a subset of word indices $I \subseteq \{1, \dots, m\}$, and a distance $\epsilon \in \mathbb{R}_{\geq 0}$, we define $\mathrm{Ball}(\pmb{x}, \epsilon) = \{\pmb{x}' \in \mathbb{R}_{[0,1]}^{d \cdot m} \mid \| \pmb{x}_I - \pmb{x}_I' \|_p \leq \epsilon \land (\forall i \notin I, \underline{x}_i = \underline{x}_i')\}$, where $\pmb{x}_I$ is the sub-vector of $\pmb{x}$ that contains only the embedding vectors corresponding to words in $I$. That is, $\mathrm{Ball}(\pmb{x}, \epsilon)$ is the set of embedded texts obtained by replacing words in $I$ within $\pmb{x}$ and whose distance to $\pmb{x}$ is no greater than $\epsilon$. We elide the index set $I$ to simplify the notation. Below we define the notion of the maximal safe radius (MSR), which is the minimum distance of an embedded text from the decision boundary of the network. + +Definition 1 (Maximal Safe Radius). Given a neural network $\mathbf{N}$, a subset of word indices $I \subseteq \{1, \ldots, m\}$, and a text embedding $\pmb{x}$, the maximal safe radius $\mathrm{MSR}(\mathbf{N}, \pmb{x})$ is the minimum distance from the input $\pmb{x}$ to the decision boundary, i.e., $\mathrm{MSR}(\mathbf{N}, \pmb{x})$ is equal to the largest $\epsilon \in \mathbb{R}_{\geq 0}$ such that $\forall \pmb{x}' \in \operatorname{Ball}(\pmb{x}, \epsilon)$ : $\arg \max_{i \in C} \mathbf{N}_{i}(\pmb{x}') = \arg \max_{i \in C} \mathbf{N}_{i}(\pmb{x})$.
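The set $\mathrm{Ball}(\pmb{x}, \epsilon)$ admits a direct membership check: words outside $I$ must be unchanged, and the $L_p$ distance restricted to the sub-vector $\pmb{x}_I$ must not exceed $\epsilon$. A minimal numeric sketch (plain lists of per-word embedding vectors; the vectors and radii are invented for illustration):

```python
import math

def ball_contains(x, x_prime, I, eps, p=2):
    """Check x' ∈ Ball(x, eps): words outside I are unchanged, and
    || x_I - x'_I ||_p <= eps.  x, x_prime: lists of per-word embedding
    vectors; I: set of perturbable word indices."""
    # Words whose index is not in I must be identical.
    for i, (u, v) in enumerate(zip(x, x_prime)):
        if i not in I and u != v:
            return False
    # L_p distance restricted to the sub-vector indexed by I.
    diffs = [abs(a - b) for i in I for a, b in zip(x[i], x_prime[i])]
    if p == math.inf:
        dist = max(diffs, default=0.0)
    else:
        dist = sum(d ** p for d in diffs) ** (1.0 / p)
    return dist <= eps

x       = [[0.2, 0.2], [0.8, 0.1]]
x_prime = [[0.2, 0.2], [0.5, 0.5]]            # only word 1 perturbed
print(ball_contains(x, x_prime, {1}, 0.6))    # True: L2 distance is 0.5
print(ball_contains(x, x_prime, {0}, 0.6))    # False: word 1 changed but 1 ∉ I
```

The second call fails not because of the radius but because a word outside $I$ was modified, matching the $\forall i \notin I, \underline{x}_i = \underline{x}_i'$ conjunct in the definition.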
+ +For a text $\pmb{x}$, let $\mathbf{d} = \max_{\pmb{x}' \in \mathbb{R}_{[0,1]}^{d \cdot m}} \| \pmb{x}_I - \pmb{x}_I' \|_p$ be the diameter of the embedding; then a large value for the normalised MSR, $\frac{\mathrm{MSR}(\mathbf{N}, \pmb{x})}{\mathbf{d}}$, indicates that $\pmb{x}$ is robust to perturbations of the given subset $I$ of its words, as substitutions of these words do not result in a class change in the NN prediction (in particular, if the normalised MSR is greater than 1 then $\pmb{x}$ is robust to any perturbation of the words in $I$). Conversely, low values of the normalised MSR indicate that the network's decision is vulnerable at $\pmb{x}$ because of the ease with which the classification outcomes can be manipulated. Further, averaging MSR over a set of inputs yields a robustness measure of the network, as opposed to being specific to a given text. Under standard assumptions of bounded variation of the underlying learning function, the MSR is also generally employed to quantify the robustness of the NN to adversarial examples (Wu et al., 2020; Weng et al., 2018a), that is, small perturbations that yield a prediction that differs from ground truth.

![](images/4e5bf44c9c85cd5b14ebbb88ecaa45b74a23f6c4f219c422773d72dae412368b.jpg)
Figure 1: Illustration of the Maximal Safe Radius (MSR) and its upper and lower bounds. An upper bound of MSR is obtained by computing the distance of any perturbation resulting in a class change (blue ellipse) to the input text. A lower bound certifies that perturbations of the words contained within that radius are guaranteed to not change the classification decision (green ellipse). Both upper and lower bounds approximate the MSR (black ellipse). In this example the word strange can be safely substituted with odd. The word timeless is within upper and lower bound of the MSR, so our approach cannot guarantee it would not change the neural network prediction.
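Because inputs are normalized to $[0,1]$, the diameter $\mathbf{d}$ can be computed coordinate-wise: the farthest point of $[0,1]$ from $x_i$ is at distance $\max(x_i, 1 - x_i)$. The certification test implied above can then be sketched as follows (illustrative only; in the paper the MSR lower bounds come from CNN-Cert/POPQORN, not from this snippet):

```python
import math

def embedding_diameter(x, I, p=2):
    """d = max over x' in [0,1]^(d*m) of || x_I - x'_I ||_p.
    Per coordinate, the worst-case deviation from x_i inside [0,1]
    is max(x_i, 1 - x_i); the diameter is the L_p norm of those."""
    worst = [max(v, 1.0 - v) for i in I for v in x[i]]
    if p == math.inf:
        return max(worst)
    return sum(w ** p for w in worst) ** (1.0 / p)

def certified_against_any_substitution(msr_lower_bound, x, I, p=2):
    """Normalised MSR > 1 means no perturbation of the words in I,
    meaningful or not, can change the predicted class."""
    return msr_lower_bound / embedding_diameter(x, I, p) > 1.0

x = [[0.2, 0.6]]                    # one word, 2-dimensional embedding
d = embedding_diameter(x, I={0})    # sqrt(0.8^2 + 0.6^2) = 1.0
print(d, certified_against_any_substitution(1.5, x, {0}))
```

A lower bound of 1.5 against a diameter of 1.0 certifies the text, while a bound of, say, 0.4 would leave it uncertified (though not necessarily vulnerable).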
Since computing the MSR is NP-hard (Katz et al., 2017), we instead approximate it by computing a lower and an upper bound for this quantity (see Figure 1). The strategy for obtaining an upper bound is detailed in Section 2.2, whereas for the lower bound (Section 2.3) we adapt constraint relaxation techniques developed for the verification of deep neural networks. + +# 2.2 Upper Bound: Monte Carlo Tree Search + +An upper bound on the MSR is given by the distance to any perturbation of the text that is classified by the NN differently from the original text. To consider only perturbations that are syntactically coherent with the input text, we use filtering in conjunction with an adaptation of the Monte Carlo Tree Search (MCTS) algorithm (Coulom, 2007) to the NLP scenario (Figure 2). The algorithm takes as input a text, embeds it as a sequence of vectors $\pmb{x}$, and builds a tree where at each iteration a set of indices $I$ identifies the words that have been modified so far: at the first level of the tree a single word is changed to manipulate the classification outcome, at the second two words are perturbed, with the former being the same word as for the parent vertex, and so on (i.e., for each vertex, $I$ contains the indices of the words that have been perturbed plus that of the current vertex). We allow only word-for-word substitutions. At each stage the procedure outputs all the successful attacks (i.e., perturbed texts that are classified by the neural network differently from the original text) that have been found until the terminating condition is satisfied (e.g., a fixed fraction of the total number of vertices has been explored). Successful perturbations can be used as diagnostic information in cases where ground-truth information is available. The algorithm explores the tree according to the UCT heuristic (Browne et al., 2012), where urgent vertices are identified by the perturbations that induce the largest drop in the neural network's confidence.
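The UCT selection rule referenced above can be sketched generically as follows; here the per-vertex "reward" stands for the accumulated drop in the network's confidence caused by perturbing that vertex's word. The exploration constant, the data structures, and the numbers are illustrative, not the paper's implementation:

```python
import math

def uct_score(total_reward, visits, parent_visits, c=math.sqrt(2)):
    """UCT value of a child vertex: mean observed confidence drop
    (exploitation) plus an exploration bonus for rarely-visited vertices."""
    if visits == 0:
        return math.inf          # unvisited vertices are tried first
    return total_reward / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select(children, parent_visits):
    """Pick the most urgent vertex, i.e. the word whose perturbations
    have the highest UCT-weighted confidence drop so far."""
    return max(children,
               key=lambda ch: uct_score(ch["reward"], ch["visits"], parent_visits))

children = [
    {"word": "movie", "reward": 1.2, "visits": 4},   # mean drop 0.30
    {"word": "the",   "reward": 0.9, "visits": 2},   # mean drop 0.45
]
print(select(children, parent_visits=6)["word"])  # "the": larger mean drop, fewer visits
```

This is the standard trade-off: a vertex whose substitutions have hurt the network's confidence most is revisited, unless another vertex is so under-explored that its exploration bonus dominates.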
A detailed description of the resulting algorithm, which follows the classical algorithm (Coulom, 2007) while working directly with word embeddings, can be found in Appendix A.1. Perturbations are sampled by considering the $n$-closest replacements in the word's neighbourhood: the distance between words is measured in the $\mathrm{L}_2$ norm, while the number of substitutions per word is limited to a fixed constant (e.g., in our experiments this is either 1000 or 10000). To enforce the syntactic consistency of the replacements, we consider part-of-speech tagging of each word based on its context. Then, we filter all the replacements found by MCTS to exclude those that are neither of the same type nor of a type that maintains the syntactic consistency of the perturbed text (e.g., a noun can sometimes be replaced by an adjective). To accomplish this task we use the Natural Language Toolkit (Bird et al., 2009). More details are provided in Appendix A.1.

![](images/7f35124bf1baad4d7e99f385286819656077e7751932b1c5ad0620cd3af9de8e.jpg)
Figure 2: Structure of the tree after two iterations of the MCTS algorithm. Simulations of 1-word substitutions are executed at each vertex on the first level to update the UCT statistics. The most urgent vertex is then expanded (e.g., word the) and several 2-word substitutions are executed combining the word identified by the current vertex (e.g., word movie at the second level of the tree) and that of its parent, i.e., the. Redundant substitutions may be avoided (greyed out branch).

# 2.3 Lower Bound: Constraint Relaxation

A lower bound for $\mathrm{MSR}(\mathbf{N},\boldsymbol{x})$ is a real number $\epsilon_l > 0$ such that all texts in $\operatorname{Ball}(\pmb{x},\epsilon_{l})$ are classified
Note that, as $\mathrm{MSR}(\mathbf{N},\boldsymbol{x})$ is defined in the embedding space, which is continuous, the perturbation space, $\operatorname{Ball}(\boldsymbol{x},\epsilon)$ , contains meaningful texts as well as texts that are not syntactically or semantically meaningful. In order to compute $\epsilon_{l}$ we leverage constraint relaxation techniques developed for CNNs (Boopathy et al., 2019) and LSTMs (Ko et al., 2019), namely CNN-Cert and POPQORN. For an input text $\boldsymbol{x}$ and a hyperbox around $\operatorname{Ball}(\boldsymbol{x},\epsilon)$ , these techniques find linear lower and upper bounds for the activation functions of each layer of the neural network and use these to propagate an over-approximation of the hyperbox through the network. $\epsilon_{l}$ is then computed as the largest real such that all the texts in $\operatorname{Ball}(\boldsymbol{x},\epsilon_{l})$ are in the same class, i.e., for all $\boldsymbol{x}' \in \operatorname{Ball}(\boldsymbol{x},\epsilon_{l})$ , $\arg \max_{i \in C} \mathbf{N}_{i}(\boldsymbol{x}) = \arg \max_{i \in C} \mathbf{N}_{i}(\boldsymbol{x}')$ . Note that, as $\operatorname{Ball}(\boldsymbol{x},\epsilon_{l})$ contains only texts obtained by perturbing a subset of the words (those whose index is in $I$ ), to adapt CNN-Cert and POPQORN to our setting, we have to fix the dimensions of $\boldsymbol{x}$ corresponding to words not in $I$ and only propagate through the network intervals corresponding to words in $I$ . + +# 3 Experimental Results + +We use our framework to empirically evaluate the robustness of neural networks for sentiment analysis and news classification on typical CNN and LSTM architectures. While we quantify lower + +
| | NEWS | SST | AG NEWS | IMDB |
| --- | --- | --- | --- | --- |
| Inputs (Train, Test) | 22806, 9793 | 117220, 1821 | 120000, 7000 | 25000, 25000 |
| Output Classes | 7 | 2 | 4 | 2 |
| Average Input Length | 17 ± 2.17 | 17.058 ± 8.27 | 37.295 ± 9.943 | 230.8 ± 169.16 |
| Max Input Length | 88 | 52 | 136 | 2315 |
| Max Length Considered | 14 | 25 | 49 | 100 |
+ +Table 1: Datasets used for the experimental evaluation. We report the number of samples (training/test ratio as provided in the original works) and output classes, the average and maximum length of each input text before pre-processing, and the maximum length considered in our experiments. + +bounds of MSR for CNNs and LSTMs, respectively, with the CNN-Cert and POPQORN tools, we implement the MCTS algorithm introduced in Section 2.2 to search for meaningful perturbations (i.e., upper bounds), regardless of the NN architecture employed. In particular, in Section 3.1 we consider robustness against single and multiple word substitutions and investigate implicit biases of LSTM architectures. In Section 3.2 we study the effect of the embedding on robustness, while in Section 3.3 we employ our framework to perform saliency analysis of the most relevant words in a text. + +Experimental Setup and Implementation We have trained several vanilla CNN and LSTM models on datasets that differ in the length of each input, the number of target classes and the difficulty of the learning task. All our experiments were conducted on a server equipped with two 24-core Intel Xeon 6252 processors and 256GB of $\mathrm{RAM^{2,3}}$. We consider the IMDB dataset (Maas et al., 2011), the Stanford Sentiment Treebank (SST) dataset (Socher et al., 2013), the AG News Corpus (Zhang et al., 2015) and the NEWS dataset (Vitale et al., 2012): details are in Table 1. In our experiments we consider different embeddings, and specifically both complex, probabilistically-constrained representations (GloVe and GloVeTwitter) trained on global word-word co-occurrence statistics from a corpus, as well as the simplified embedding provided by the Keras Python Deep Learning Library (referred to as Keras Custom) (Chollet et al., 2015), which allows one to fine-tune the exact dimension of the vector space and only aims at minimizing the loss on the classification task. The resulting learned Keras Custom embedding does not capture complete word semantics, just their emotional polarity. More details are reported in Appendix A.3 and Table 4. For our experiments, we consider a 3-layer CNN, where the first layer consists of bidimensional convolution with 150 filters, each of size $3 \times 3$, and an LSTM model with 256 hidden neurons on each gate. We have trained more than 20 architectures on the embeddings and datasets mentioned above. We note that, though other architectures might offer higher accuracy for sentence classification (Kim, 2014), this vanilla setup has been chosen intentionally not to be optimized for a specific task, thus allowing us to measure the robustness of baseline models. Both CNNs and LSTMs predict the output with a softmax output layer, while the categorical cross-entropy loss function is used during the optimization phase, which is performed with the Adam (Kingma and Ba, 2014) algorithm (without early stopping); further details are reported in Appendix A.3. + +# 3.1 Robustness to Word Substitutions + +For each combination of a neural network and embedding, we quantify the MSR against single and multiple word substitutions, meaning that the set of word indices $I$ (see Definition 1) consists of 1 or more indices. Interestingly, our framework is able to prove that certain input texts and architectures are robust to any single-word substitution, that is, replacing a single word of the text (any word) with any other word, and not necessarily with a synonym or a grammatically correct word, will not affect the classification outcome. Figure 3 shows that for CNN models equipped with the Keras Custom embedding the (lower bound of the) MSR on some texts from the IMDB dataset is greater than the diameter of the embedding space. To consider only perturbations that are semantically close and syntactically coherent with the input text, we employ the MCTS algorithm with filtering described in Section 2.2.
An example of a successful + +![](images/da49094dc69b01c693d91de13fa8975f589e479352ae15e636408838a3e49d5b.jpg) +Figure 3: Lower bounds indicate classification invariance to any substitution when greater than the embedding diameter $\mathbf{d}$ (see diagram on the right and Section 2), here represented by the dotted vertical line. Left: Examples of words safe to any substitution (IMDB, Keras embedding $10d$ , text no 2). Middle: Examples of words vulnerable to substitutions that may change the classification (IMDB, Keras embedding $5d$ , text no 1). + +
| EMBEDDING | DIMENSION | LOWER BOUND |
| --- | --- | --- |
| Keras | 5 | 0.278 |
| | 10 | 0.141 |
| | 25 | 0.023 |
| | 50 | 0.004 |
| | 100 | 0.002 |
| GloVe | 50 | 0.007 |
| | 100 | 0.002 |
| GloVeTwitter | 25 | 0.013 |
| | 50 | 0.008 |
| | 100 | 0.0 |
+ +Table 2: Comparison of lower bounds for single-word substitutions computed by CNN-Cert on the SST dataset. Values are averaged over 100 input texts (approx. 2500 measurements) and normalized by the embedding diameter ($\mathrm{L}_2$-norm). + +perturbation is shown in Figure 4, where we illustrate the effectiveness of single-word substitutions on inputs that differ in the confidence of the neural network prediction. We note that even with simple tagging it is possible to identify perturbations where replacements are meaningful. For the first example in Figure 4 (top), the network changes the output class to World when the word China is substituted for U.S. Although this substitution may be relevant to that particular class, we note that the perturbed text is coherent and the main topic remains sci-tech. Furthermore, the classification also changes when the word exists is replaced with a plausible alternative, misses, a perturbation that is neutral, i.e., not informative for any of the possible output classes. For the third sentence in Figure 4 (bottom), we note that replacing championship with wrestling makes the model output class World, where originally it was Sport, indicating that the model relies on a small number of key words to make its decision. We report a few additional examples of word replacements for a CNN model equipped with the GloVe-50d embedding. Given as input the review 'this is art paying homage to art' (from the SST dataset), when art is replaced by graffiti the network misclassifies the review (from positive to negative). Further, as mentioned earlier, the MCTS framework is capable of finding multiple-word perturbations: considering the same setting as in the previous example, when in the review 'it's not horrible just horribly mediocre' the words horrible and horribly are replaced, respectively, with gratifying and decently, the review is classified as positive, while for the original sentence it was negative.
Robustness results for high-dimensional embeddings are included in Table 3, where we report the trends of the average lower and upper bounds of the MSR and the percentage of successful perturbations computed over 100 texts (per dataset) for different architectures and embeddings. Further results are in Appendix A.3, including statistics on lower bounds (Tables 5, 6) and single and multiple word substitutions (Tables 7, 8).

CNNs vs. LSTMs By comparing the average robustness assigned to each word, respectively, by CNN-Cert and POPQORN over all the experiments on a fixed dataset, it clearly emerges that recurrent models are less robust to perturbations that occur in the very first words of a sentence; interestingly, CNNs do not suffer from this problem. A visual comparison is shown in Figure 6. The key difference lies in the structure of LSTMs compared to CNNs: while in LSTMs the first input word influences the successive layers, thus amplifying the

Single-Word Substitutions
| DATASET | EMBEDDING | LOWER BOUND | SUBSTITUTIONS (% per text) | SUBSTITUTIONS (% per word) | UPPER BOUND |
| --- | --- | --- | --- | --- | --- |
| IMDB | Keras 50d | 0.055 ± 0.011 | 6.0 | 1.4 | 0.986 |
|  | GloVe 50d | 0.018 ± 0.007 | 39.7 | 5.1 | 0.951 |
|  | GloVeTwitter 50d | 0.02 ± 0.002 | 47.0 | 7.7 | 0.926 |
| AG News | Keras 50d | 0.002 ± 0.001 | 50.0 | 15.6 | 0.852 |
|  | GloVe 50d | 0.005 ± 0.004 | 22.4 | 10.8 | 0.898 |
|  | GloVeTwitter 50d | 0.007 ± 0.001 | 21.4 | 6.6 | 0.937 |
| SST | Keras 50d | 0.004 ± 0.001 | 52.2 | 19.9 | 0.813 |
|  | GloVe 50d | 0.007 ± 0.003 | 81.1 | 37.4 | 0.646 |
|  | GloVeTwitter 50d | 0.008 ± 0.004 | 78.1 | 36.3 | 0.653 |
| NEWS | GloVe 50d | 0.001 ± 0.002 | 96.5 | 34.0 | 0.679 |
|  | GloVe 100d | 0.002 ± 0.002 | 89.7 | 29.1 | 0.727 |
|  | GloVeTwitter 50d | 0.001 ± 0.001 | 90.9 | 30.6 | 0.707 |
|  | GloVeTwitter 100d | 0.001 ± 0.001 | 89.7 | 27.7 | 0.739 |
Table 3: Statistics on single-word substitutions averaged over 100 input texts of each dataset. We report: the average lower bound of the MSR as measured with either CNN-Cert or POPQORN; the percentage of words per text for which a single-word substitution is found, together with the average percentage of candidate substitutions per word that change the classification; and the average upper bound, computed as the distance between the original word and the closest substitution found by MCTS (when no successful perturbation is found we over-approximate the upper bound for that word with the diameter of the embedding). Lower bound values have been normalized by each embedding diameter (measurements in the $\mathrm{L}_2$-norm).

![](images/c64a62c77819fe97ecba3552f95b7bdb89efe487b7d62cd601de1e50a202c40b.jpg)
Figure 4: Single-word substitutions found with MCTS in conjunction with filtering. Grammatically consistent substitutions are shown in green, inconsistent ones in red; a dash indicates that no substitution is found.

manipulations, the output of a convolutional region is independent of any other region in the same layer. On the other hand, both CNNs and LSTMs share an increased resilience to perturbations on texts that contain multiple polarized words, a trend suggesting that, independently of the architecture employed, robustness relies on a distributed representation of the content of a text (Figure 5).

# 3.2 Influence of the Embedding on Robustness

As illustrated in Table 2 and in Figure 3, models that employ small embeddings are more robust to perturbations. On the contrary, robustness decreases, by one to two orders of magnitude, when words are mapped to high-dimensional spaces, a trend also confirmed by MCTS (see Appendix Table 8). This may be explained by the fact that adversarial perturbations are inherently related to the dimensionality of the input space (Carbone et al., 2020; Goodfellow et al., 2014). We also find that models trained on longer inputs (e.g., IMDB) are more robust than those trained on shorter ones (e.g., SST): in long texts the decision made by the algorithm depends on multiple words that are evenly distributed across the input, while for shorter sequences the decision may depend on very few, polarized terms.

![](images/a9565d8fb25a81aa3b841383b74edd05f4d762ce2f151b77eb260b4a9c881c95.jpg)
Figure 5: Lower bound values for individual words obtained from POPQORN ($L_{2}$-norm), showing an increasing trend for consecutive words. (a) Two texts with padding (a placeholder symbol denotes an unknown token). (b) Texts with several words related to a specific output class (U.S. and entertainment, respectively).

![](images/3d44a21981e27e97c4c3850b67bbda4950195ca99c203bbc25af0a310a1a85ca.jpg)
Figure 6: Robustness lower bound trends for successive input words for LSTMs (red dots) and CNNs (blue dots) on the NEWS and AG News datasets.

From Table 3 we note that polarity-constrained embeddings (Keras) are more robust than probabilistically-constrained ones (GloVe) on relatively large datasets (IMDB), whereas the opposite is true for smaller input dimensions: experiments suggest that models whose embeddings group together words closely related to a specific output class (e.g., positive words) are more robust, as opposed to models whose embeddings gather words together on a different principle (e.g., words that appear in the same context). Intuitively, in the former case, words like good will be close to synonyms like better and nice, while in the latter words like good and bad, which often appear in the same context (think of the phrase 'the movie was good/bad'), will be closer in the embedding space.
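The diameter normalization applied to the bounds in Tables 2 and 3 can be sketched as follows; the function names and the toy vectors are ours, not part of the paper's tooling:

```python
import numpy as np

def embedding_diameter(vectors):
    """Largest pairwise L2 distance between word vectors (the 'diameter')."""
    diam = 0.0
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            diam = max(diam, float(np.linalg.norm(vectors[i] - vectors[j])))
    return diam

def normalize_bounds(raw_bounds, vectors):
    """Divide raw MSR bounds by the diameter so that bounds measured on
    embeddings of different dimensionality become comparable."""
    diam = embedding_diameter(vectors)
    return [b / diam for b in raw_bounds]

toy = np.array([[0.0, 0.0], [3.0, 4.0], [1.0, 1.0]])  # diameter = 5
print(normalize_bounds([1.0, 2.5], toy))  # bounds as a fraction of the diameter
```

The quadratic pairwise loop is only for clarity; on a real vocabulary one would bound the diameter from the per-dimension ranges instead of enumerating all pairs.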
In the spirit of the analysis in (Baroni et al., 2014), we empirically measured whether robustness is affected by the nature of the embedding employed, that is, either prediction-based (i.e., embeddings that are trained alongside the classification task) or hybrid/count-based (e.g., GloVe, GloVeTwitter). By comparing the robustness of different embeddings and the distance between words that share the same polarity profile (e.g., positive vs. negative), we note that MSR is a particularly well-suited robustness metric for prediction-based embeddings, with the distance between words serving as a reasonable estimator of word-to-word semantic similarity w.r.t. the classification task. On the other hand, for hybrid and count-based embeddings (e.g., GloVe), especially when words are represented as high-dimensional vectors, the distance between two words in the embedding space, when compressed into a single scalar, does not retain enough information to estimate the relevance of input variations. Therefore, in this scenario, an approach based solely on the MSR is limited by the choice of the distance function between words, and may lose its effectiveness unless additional factors such as context are considered. Further details of our evaluation are provided in Appendix A.3, Table 5 and Figure 11.

![](images/a4e740096e14b8d1b210ea50688122d51facbafdd95dae11e227dc831ada3c09.jpg)
Figure 7: For an increasing number of substitutions per text we report the difference between the MSR lower bounds of counter-fitted and vanilla embeddings (Keras and GloVeTwitter, 25d) on the AG News dataset.
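The loss of information in a single scalar distance is related to the well-known concentration of pairwise distances in high dimensions. A small sketch (ours, with random stand-in vectors) makes the effect visible:

```python
import numpy as np

def distance_spread(dim, n=50, seed=0):
    """Std/mean (coefficient of variation) of pairwise L2 distances among
    n random 'word vectors' of the given dimensionality."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(size=(n, dim))
    dists = [np.linalg.norm(pts[i] - pts[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.std(dists) / np.mean(dists))

# Pairwise distances concentrate as dimensionality grows, so a single
# scalar distance carries less and less discriminative information:
print(distance_spread(2), distance_spread(300))
```

With the fixed seed, the spread for 300-dimensional vectors is an order of magnitude smaller than for 2-dimensional ones, consistent with the observation that scalar distances become uninformative for high-dimensional embeddings.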
Counter-fitting To mitigate the lack of robustness on multi-class datasets characterized by short sequences, we repeated the robustness measurements with counter-fitted embeddings (Mrkšić et al., 2016), i.e., embeddings post-processed by injecting additional constraints for antonyms and synonyms into the vector space representation in order to improve its capability to encode semantic similarity. We observe that the estimated lower bound of the MSR generally increases for low-dimensional embeddings, up to twice the lower bound of the non-counter-fitted embeddings. This phenomenon is particularly pronounced when Keras Custom 5d and 10d are employed (see Appendix A.3, Table 6). On the other hand, the benefits of counter-fitting are less pronounced for high-dimensional embeddings. The same pattern can be observed in Figure 7, where multiple word substitutions per text are allowed. Further details can be found in Appendix A.3, Tables 6 and 8.

![](images/a8a895967d3ccab7733e019ae8cbf62fc34208d3f964115147fd3132be6ca34c.jpg)
Figure 8: Interpretability comparison of our framework with LIME. (a) Saliency map produced with CNN-Cert (top) and LIME (bottom) on IMDB (GloVeTwitter 25d embedding). (b) Saliency map produced with POPQORN (top) and LIME (bottom) on the NEWS dataset (GloVe 100d embedding).

# 3.3 Interpretability of Sentiment Analysis via Saliency Maps

We employ our framework to perform interpretability analysis on a given text. For each word of a given text we compute the (lower bound of the) MSR and use it as a measure of the word's saliency, where small values of the MSR indicate that minor perturbations of that word can have a significant influence on the classification outcome. We use this measure to compute saliency maps for both CNNs and LSTMs, and compare our results with those obtained by LIME (Ribeiro et al., 2016), which assigns saliency to input features according to the best linear model that locally explains the decision boundary.
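Concretely, turning per-word MSR lower bounds into a saliency ranking, as done for the maps in Figure 8, can be sketched as follows; the word list and bound values below are hypothetical:

```python
def saliency_ranking(words, bounds):
    """Order words from most to least salient: the smaller the (lower bound
    of the) MSR of a word, the more a tiny perturbation of it can sway the
    classifier, hence the higher its saliency."""
    order = sorted(range(len(words)), key=lambda i: bounds[i])
    return [words[i] for i in order]

words = ["a", "movie", "with", "many", "flaws"]
bounds = [0.40, 0.18, 0.35, 0.02, 0.05]  # hypothetical per-word MSR lower bounds
print(saliency_ranking(words, bounds))
```

Sorting indices rather than words keeps the ranking correct even when a word occurs several times in the text.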
Our method has the advantage of being able to account for non-linearities in the decision boundary that a local approach such as LIME cannot handle, albeit at the cost of higher computational complexity (a similar point was made in (Blaas et al., 2020) for Gaussian processes). As a result, we are able to discover words that our framework views as important but LIME does not, and vice versa. In Figure 8 we report two examples, one for an IMDB positive review (Figure 8 (a)) and another from the NEWS dataset classified using an LSTM (Figure 8 (b)). In Figure 8 (a) our approach finds that the word many is salient and that perturbing it slightly can make the NN change the class of the review to negative. In contrast, LIME does not identify many as significant. In order to verify this result empirically, we run our MCTS algorithm (Section 2.2) and find that simply substituting many with worst changes the classification to 'negative'. Similarly, for Figure 8 (b), where the input is assigned to class 5 (health), perturbing the punctuation mark (:) may alter the classification, whereas LIME does not recognise its saliency.

# 4 Conclusions

We introduced a framework for evaluating the robustness of NLP models against word substitutions. Through extensive experimental evaluation we demonstrated that our framework allows one to certify certain architectures against single-word perturbations, and illustrated how it can be employed for interpretability analysis. While we focus on perturbations that are syntactically coherent, we acknowledge that semantic similarity between phrases is a crucial aspect that requires an approach taking into account the context in which substitutions happen: we will tackle this limitation in future work. Furthermore, we will address the robustness of more complex architectures, e.g., networks that exploit attention-based mechanisms (Vaswani et al., 2017).
+ +# Acknowledgements + +This project was partly funded by Innovate UK (reference 104814) and the ERC under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 834115). We thank the reviewers for their critical assessment and suggestions for improvement. + +# References + +Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. arXiv preprint arXiv:1804.07998. +Leila Arras, Franziska Horn, Gregoire Montavon, Klaus-Robert Müller, and Wojciech Samek. 2016. Explaining predictions of non-linear classifiers in nlp. arXiv preprint arXiv:1606.07298. +Marco Baroni, Georgiana Dinu, and German Kruszewski. 2014. Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association + +for Computational Linguistics (Volume 1: Long Papers), pages 238-247, Baltimore, Maryland. Association for Computational Linguistics. +Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In International Conference on Learning Representations. +Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. 2013. Evasion attacks against machine learning at test time. In Joint European conference on machine learning and knowledge discovery in databases, pages 387-402. Springer. +Battista Biggio and Fabio Roli. 2018. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 84:317-331. +Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. "O'Reilly Media, Inc". +Arno Blaas, Andrea Patane, Luca Laurenti, Luca Cardelli, Marta Kwiatkowska, and Stephen Roberts. 2020. 
Adversarial robustness guarantees for classification with gaussian processes. In International Conference on Artificial Intelligence and Statistics, pages 3372-3382. +Akhilan Boopathy, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, and Luca Daniel. 2019. Cnn-cert: An efficient framework for certifying robustness of convolutional neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3240-3247. +Cameron B Browne, Edward Powley, Daniel Whitehouse, Simon M Lucas, Peter I Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton. 2012. A survey of monte carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in games, 4(1):1-43. +Ginevra Carbone, Matthew Wicker, Luca Laurenti, Andrea Patane, Luca Bortolussi, and Guido Sanguinetti. 2020. Robustness of bayesian neural networks to gradient-based attacks. arXiv preprint arXiv:2002.04359. +Chih-Hong Cheng, Georg Nührenberg, and Harald Ruess. 2017. Maximum resilience of artificial neural networks. In *Automated Technology for Verification and Analysis*, pages 251-268, Cham. Springer International Publishing. +Minhao Cheng, Jinfeng Yi, Huan Zhang, Pin-Yu Chen, and Cho-Jui Hsieh. 2018. Seq2sick: Evaluating the robustness of sequence-to-sequence models with adversarial examples. arXiv preprint arXiv:1803.01128. +François Chollet et al. 2015. keras. + +Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recognition. In Advances in neural information processing systems, pages 577-585. +Rémi Coulom. 2007. Efficient selectivity and backup operators in monte-carlo tree search. In Computers and Games, pages 72-83, Berlin, Heidelberg. Springer Berlin Heidelberg. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. 
+Souradeep Dutta, Susmit Jha, Sriram Sankaranarayanan, and Ashish Tiwari. 2018. Output range analysis for deep feedforward neural networks. In NASA Formal Methods, pages 121-138. +Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. Hotflip: White-box adversarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 31-36. +Allyson Ettinger, Sudha Rao, Hal Daumé III, and Emily M Bender. 2017. Towards linguistically generalizable nlp systems: A workshop and shared task. In Proceedings of the First Workshop on Building Linguistically Generalizable NLP Systems, pages 1-10. +Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. In 2018 IEEE Security and Privacy Workshops (SPW), pages 50-56. IEEE. +Yoav Goldberg and Omer Levy. 2014. word2vec explained: deriving mikolov et al.'s negative-sampling word-embedding method. arXiv preprint arXiv:1402.3722. +Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. +Po-Sen Huang, Robert Stanforth, Johannes Welbl, Chris Dyer, Dani Yogatama, Sven Gowal, Krishnamurthy Dvijotham, and Pushmeet Kohli. 2019. Achieving verified robustness to symbol substitutions via interval bound propagation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4074-4084. +Xiaowei Huang, Marta Kwiatkowska, Sen Wang, and Min Wu. 2017. Safety verification of deep neural networks. In International Conference on Computer Aided Verification, pages 3-29. Springer. + +Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875-1885. +Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021-2031. +Robin Jia, Aditi Raghunathan, Kerem Goksel, and Percy Liang. 2019. Certified robustness to adversarial word substitutions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4120-4133. +Ye Jia, Yu Zhang, Ron Weiss, Quan Wang, Jonathan Shen, Fei Ren, Patrick Nguyen, Ruoming Pang, Ignacio Lopez Moreno, Yonghui Wu, et al. 2018. Transfer learning from speaker verification to multispeaker text-to-speech synthesis. In Advances in neural information processing systems, pages 4480-4490. +Guy Katz, Clark Barrett, David L Dill, Kyle Julian, and Mykel J Kochenderfer. 2017. Reluplex: An efficient smt solver for verifying deep neural networks. In International Conference on Computer Aided Verification, pages 97-117. Springer. +Guy Katz, Derek A Huang, Duligur Ibeling, Kyle Julian, Christopher Lazarus, Rachel Lim, Parth Shah, Shantanu Thakoor, Haoze Wu, Aleksandar Zeljic, et al. 2019. The marabou framework for verification and analysis of deep neural networks. In International Conference on Computer Aided Verification, pages 443-452. Springer. +Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. Association for Computational Linguistics. +Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. 
+Ching-Yun Ko, Zhaoyang Lyu, Lily Weng, Luca Daniel, Ngai Wong, and Dahua Lin. 2019. POPQORN: Quantifying robustness of recurrent neural networks. In International Conference on Machine Learning, pages 3468-3477. +Levente Kocsis and Csaba Szepesvári. 2006. Bandit based monte-carlo planning. In European conference on Machine Learning, pages 282-293. Springer. + +Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2017. Deep text classification can be fooled. arXiv preprint arXiv:1704.08006. +Wang Ling, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015. Two/too simple adaptations of word2vec for syntax problems. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1299-1304. +Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies-volume 1, pages 142-150. Association for Computational Linguistics. +Nikola Mrkšić, Diarmuid O Seaghda, Blaise Thomson, Milica Gašić, Lina Rojas-Barahona, PeiHao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. arXiv preprint arXiv:1603.00892. +Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM computing surveys (CSUR), 41(2):1-69. +Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543. +Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "why should I trust you?": Explaining the predictions of any classifier. 
In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 1135-1144. +Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging nlp models. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 856-865. +Wenjie Ruan, Xiaowei Huang, and Marta Kwiatkowska. 2018. Reachability analysis of deep neural networks with provable guarantees. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 2651-2659. AAAI Press. +Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642. + +Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In International Conference on Learning Representations. +Andrew Trask, Phil Michalak, and John Liu. 2015. sense2vec-a fast and accurate method for word sense disambiguation in neural word embeddings. arXiv preprint arXiv:1511.06388. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008. +Daniele Vitale, Paolo Ferragina, and Ugo Scaiella. 2012. Classification of short texts by deploying topical annotations. In European Conference on Information Retrieval, pages 376-387. Springer. +Hanna M Wallach. 2006. Topic modeling: beyond bag-of-words. In Proceedings of the 23rd international conference on Machine learning, pages 977-984. 
+Lily Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Luca Daniel, Duane Boning, and Inderjit Dhillon. 2018a. Towards fast computation of certified robustness for relu networks. In International Conference on Machine Learning, pages 5276-5285. +Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, and Luca Daniel. 2018b. Evaluating the robustness of neural networks: An extreme value theory approach. In 6th International Conference on Learning Representations. +Eric Wong and Zico Kolter. 2018. Provable defenses against adversarial examples via the convex outer adversarial polytope. In Proceedings of the 35th International Conference on Machine Learning, pages 5286-5295. PMLR. +Min Wu and Marta Kwiatkowska. 2020. Robustness guarantees for deep neural networks on videos. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). +Min Wu, Matthew Wicker, Wenjie Ruan, Xiaowei Huang, and Marta Kwiatkowska. 2020. A game-based approximate verification of deep neural networks with provable guarantees. Theoretical Computer Science, 807:298 - 329. +Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, and Luca Daniel. 2018. Efficient neural network robustness certification with general activation functions. In Advances in neural information processing systems, pages 4939-4948. +Wei Emma Zhang, Quan Z Sheng, Ahoud Alhazmi, and Chenliang Li. 2020. Adversarial attacks on + +deep-learning models in natural language processing: A survey. ACM Transactions on Intelligent Systems and Technology (TIST), 11(3):1-41. +Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in neural information processing systems, pages 649-657. + +# A Appendix + +# A.1 Monte Carlo Tree Search (MCTS) + +We adapt the MCTS algorithm (Browne et al., 2012) to the NLP classification setting with word embedding, which we report here for completeness as Algorithm 1. 
The algorithm explores modifications to the original text by substituting one word at a time with nearest neighbour alternatives. It takes as input: text, expressed as a list of $T$ words; $\mathbf{N}$, the neural network as introduced in Section 2; $\mathcal{E}$, an embedding; $\text{sims}$, an integer specifying the number of Monte Carlo samplings at each step; and $\alpha$, a real-valued meta-parameter specifying the exploration/exploitation trade-off for vertices that can be further expanded. The salient steps of the MCTS procedure are:

- Select: the most promising vertex to explore is chosen to be expanded (Line 14) according to the standard UCT heuristic: $\frac{Q(v)}{N(v)} + \alpha \sqrt{\frac{2\ln N(v')}{N(v)}}$, where $v$ and $v'$ are respectively the selected vertex and its parent; $\alpha$ is a meta-parameter that balances the exploration/exploitation trade-off; $N()$ represents the number of times a vertex has been visited; and $Q()$ measures the neural network confidence drop, averaged over the Monte Carlo simulations for that specific vertex.
- Expand: the tree is expanded with $T$ new vertices, one for each word in the input text (avoiding repetitions). A vertex at index $t \in \{1, \dots, T\}$ and depth $n > 0$ represents the strategy of perturbing the $t$-th input word, plus all the words whose indices have been stored in the parents of the vertex itself, up to the root.
- Simulate: simulations are run from the current position in the tree to estimate how the neural network behaves against the perturbations sampled at that stage (Line 23). If one of the word substitutions induced by the simulation makes the network change the classification, a successful substitution is found and added to the results, while the value $Q$ of the current vertex is updated. Many heuristics can be considered at this stage, for example the average drop in the confidence of the network over all the simulations.
We have found that the average drop is not a good measure of how much the network's robustness drops when some specific words are replaced, since for a high number of simulations an effective perturbation might pass unnoticed. We thus work with the maximum drop over all the simulations, which performs slightly better in this scenario (Line 27).

- Backpropagate: the reward received is backpropagated to the vertices visited during selection and expansion to update their UCT statistics. It is known that, when UCT is employed (Browne et al., 2012; Kocsis and Szepesvári, 2006), MCTS guarantees that the probability of selecting a sub-optimal perturbation tends to zero at a polynomial rate as the number of games grows to infinity (i.e., it is guaranteed to find a discrete perturbation, if one exists).

For our implementation we adopted $\text{sims} = 1000$ and $\alpha = 0.5$. Tables 7 and 8 give details of MCTS experiments with single and multiple word substitutions.

MCTS Word Substitution Strategies We consider two refinements of MCTS: weighting the replacement words by importance and filtering to ensure syntactic/semantic coherence of the input text. The importance score of a word substitution is inversely proportional to its distance from the original word, i.e., $\mathrm{pickup}(w \gets w') = \frac{1}{|U| - 1} \left( \frac{\sum_{u \in U \setminus \{w'\}} d(w, u)}{\sum_{u \in U} d(w, u)} \right)$, where $w, w'$ are respectively the original and perturbed words, $d()$ is an $L^p$ norm of choice and $U$ a neighbourhood of $w$, whose cardinality, which must be greater than 1, is denoted with $|U|$ (as shown in Figure 9). We can further filter words in the neighborhood such that only synonyms/antonyms are selected, thus guaranteeing that a word is replaced by a meaningful substitution; more details are provided in Section 2.2.
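The two scores just introduced, the UCT selection heuristic and the pickup importance weight, can be sketched in Python as follows; function names and the toy one-dimensional vectors are ours, not from a released implementation:

```python
import math
import numpy as np

def uct_score(Q_v, N_v, N_parent, alpha=0.5):
    """UCT heuristic: exploitation term Q(v)/N(v) plus an alpha-weighted
    exploration bonus for rarely visited vertices."""
    return Q_v / N_v + alpha * math.sqrt(2 * math.log(N_parent) / N_v)

def pickup(w, w_prime, U, emb, p=2):
    """Importance weight for substituting w with candidate w' from its
    neighbourhood U (|U| > 1): closer candidates get larger weights,
    and the weights over all candidates in U sum to 1."""
    dist = lambda a, b: float(np.linalg.norm(np.subtract(emb[a], emb[b]), ord=p))
    total = sum(dist(w, u) for u in U)
    without = sum(dist(w, u) for u in U if u != w_prime)
    return without / ((len(U) - 1) * total)

# With the paper's alpha = 0.5, a rarely visited vertex can outrank a
# well-explored one despite a lower average value:
scores = {"explored": uct_score(0.3, 10, 100), "fresh": uct_score(0.1, 2, 100)}

# Toy 1-d embedding: 'nice' is closer to 'good' than 'bad' is, so it
# receives the larger sampling weight.
emb = {"good": [0.0], "nice": [1.0], "bad": [3.0]}
weights = [pickup("good", c, ["nice", "bad"], emb) for c in ("nice", "bad")]
```

Note that the pickup weights form a probability distribution over the neighbourhood, so they can be used directly when sampling perturbations in the Simulate step.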
While in this work we use a relatively simple method to find replacements that are syntactically coherent with the input text, more complex methods are available that also try to enforce semantic consistency (Navigli, 2009; Ling et al., 2015; Trask et al., 2015), although this problem is known to be much harder; we leave it for future work.

Algorithm 1 Monte Carlo Tree Search with UCT heuristic
1: procedure MCTS(text, N, $\mathcal{E}$, sims, $\alpha$)
2: $t \gets \arg\max_{i \in C} \mathbf{N}_i(\mathcal{E}(text))$ ▷ Store the unperturbed network output, ref. Section 2
3: Tree $\gets$ createTree(text, t, N) ▷ Create the initial tree
4: root $\gets$ getRoot(Tree) ▷ Store the initial vertex
5: $P \gets [\,]$ ▷ List of final perturbations
6: while terminate(Tree) $\neq$ True do ▷ Loop over the MCTS steps
7: $v \gets$ SELECT(Tree, $\alpha$)
8: $C \gets$ EXPAND(v, text)
9: P.insert(SIMULATE(C, text, sims, N, $\mathcal{E}$, t))
10: BACKPROPAGATE(v, root)
11: return $P$
12: procedure SELECT(Tree, $\alpha$)
13: $L \gets$ getLeaves(Tree)
14: return $\arg\max_{v \in L} \frac{Q(v)}{N(v)} + \alpha\sqrt{\frac{2\ln N(v')}{N(v)}}$ ▷ UCT best leaf
15: procedure EXPAND(v, text)
16: for $i = 0$; $i <$ length(text); $i$++ do
17: v.expand(i) ▷ Create v's $i$-th child
18: return getChildren(v) ▷ Return the expanded children
19: procedure SIMULATE(C, text, sims, N, $\mathcal{E}$, t)
20: Perturbations $\gets [\,]$
21: for $c \in C$ do
22: for $i = 0$; $i <$ sims; $i$++ do
23: $text' \gets$ samplePerturbation(text, c) ▷ Ref.
Figure 9
24: $x \gets \mathcal{E}(text)$; $x_i' \gets \mathcal{E}(text')$ ▷ Embed inputs
25: if $\mathbf{N}(x_i') \neq \mathbf{N}(x)$ then ▷ The output class changes
26: Perturbations.append(text')
27: $Q(c) \gets \max_{i \leq \mathrm{sims}}(\mathbf{N}_t(x) - \mathbf{N}_t(x_i'))$ ▷ Update vertex heuristic
28: return Perturbations
29: procedure BACKPROPAGATE(v, root) ▷ Propagate UCT update
30: while $v \neq$ root do
31: updateUCT(v)
32: $v \gets$ getParent(v)

![](images/7bbf6be2eb19a4444db3860666d4bbea5096743069e82aaa55bf6fedfe09cb9f.jpg)
Figure 9: Substitutions are selected either randomly or according to a score calculated as a function of the distance from the original word. The sampling region (red circle) is a finite fraction of the embedding space (blue circle). Selected candidates can be filtered to enforce semantic and syntactic constraints. The word the has been filtered out because it is not grammatically consistent with the original word strange, while the words good, better and a are filtered out as they lie outside the neighborhood of the original word.

# A.2 Experimental Setup

The network architectures employed in this work are shown in Figure 10, while the embeddings are summarised in Table 4. More details on both the embeddings and the architectures are provided in the main paper, Section 3.

# A.3 Additional Robustness Results

In the remainder of this section we present additional experimental results of our robustness evaluation. More specifically, we show the trends of upper and lower bounds for different datasets (Tables 5, 6, 7 and 8); include robustness results against multiple substitutions; and perform a robustness comparison with counter-fitted models (Figure 11).
![](images/6fd9fd039a1c6c9d6e740ef8144111b880de8dc0c8900d50878f77502ef4e3cf.jpg)
Input Text

![](images/997a2f4494a40d7784ae40e041da8c660d1201ac520b25040fedff17f34cb6f5.jpg)
(a)

![](images/c5588216a38e50f37802363f5df33badee564a8443a25d29706f29be7280ad2c.jpg)
(b)

![](images/16d1d326f74f82e8750ec61a6972623bebaf5f78ce7c78d73245c0b33254f79f.jpg)
(c)

![](images/805b0f1120686d7a6ef4de33e58d2b49aff457e459ee8ee54d92e4f008efe6d4.jpg)

![](images/644cd59ed11dc007b9a2e1383d1decab4b2dc069bb038ecae121459f3d8a2bd1.jpg)
(d)
Figure 10: Architecture of the CNN and LSTM vanilla models used in this work. (a) Embedding of input words as vectors of real numbers that are passed as input to a network model that outputs the class to which a text belongs (shown here with two outputs, e.g., a positive or negative review of a movie). (b) Convolutional network (CNN) model. (c) A single LSTM cell in detail. (d) LSTM network model.

Embeddings
| EMBEDDING | DIM | WORDS | DIAMETER | DIAMETER (raw) |
| --- | --- | --- | --- | --- |
| Keras | 5 | 177175 | 2.236 | 1.144 |
| Keras | 10 | 88587 | 3.162 | 0.957 |
| Keras | 25 | 88587 | 5 | 0.763 |
| Keras | 50 | 88587 | 7.07 | 0.664 |
| Keras | 100 | 88587 | 10 | 0.612 |
| GloVe | 50 | 400003 | 7.071 | 10.918 |
| GloVe | 100 | 400003 | 10 | 8.133 |
| GloVeTwitter | 25 | 1193517 | 5 | 21.15 |
| GloVeTwitter | 50 | 1193517 | 7.071 | 13.947 |
| GloVeTwitter | 100 | 1193517 | 10 | 13.058 |

Table 4: Embeddings used for the experimental evaluation: we report the number of dimensions, the number of words in each vocabulary and the maximum distance between the two farthest words, namely the diameter (both after normalization of the input vectors and the raw value, expressed in the $\mathrm{L}_2$-norm). After normalization, an embedding of dimension $d$ will have a diameter equal to $\sqrt{d}$, as a consequence of scaling to 1 the difference between maximum and minimum values for any dimension of the input.

IMDB
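The $\sqrt{d}$ figure quoted in the Table 4 caption is easy to check numerically: min-max normalizing every dimension bounds each coordinate in $[0, 1]$, so no two vectors can be farther apart than $\sqrt{d}$ in the $\mathrm{L}_2$-norm. A small self-contained sketch (random vectors stand in for real embeddings):

```python
import math
import random

def minmax_normalize(vectors):
    # Scale each dimension so that (max - min) == 1, as described in Table 4.
    d = len(vectors[0])
    lo = [min(v[i] for v in vectors) for i in range(d)]
    hi = [max(v[i] for v in vectors) for i in range(d)]
    return [[(v[i] - lo[i]) / (hi[i] - lo[i]) for i in range(d)] for v in vectors]

def diameter(vectors):
    # Maximum pairwise L2 distance: the "diameter" of the embedding.
    return max(math.dist(a, b) for a in vectors for b in vectors)

random.seed(0)
d = 5
words = [[random.uniform(-3, 3) for _ in range(d)] for _ in range(200)]
normed = minmax_normalize(words)

# After normalization the diameter is at most sqrt(5) ≈ 2.236 for d = 5,
# matching the DIAMETER column for the 5-dimensional embedding.
assert diameter(normed) <= math.sqrt(d) + 1e-9
```

The bound is tight only when two words sit at opposite corners of the unit hypercube; real embeddings merely approach it, which is why the reported diameters round to $\sqrt{d}$.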
| EMBEDDING | DIMENSION | ACCURACY | LOWER BOUND |
| --- | --- | --- | --- |
| Keras | 5 | 0.789 | 1.358 ± 0.604 |
| Keras | 10 | 0.788 | 2.134 ± 1.257 |
| Keras | 25 | 0.78 | 1.234 ± 2.062 |
| Keras | 50 | 0.78 | 0.394 ± 0.079 |
| Keras | 100 | 0.778 | 0.31 ± 0.041 |
| GloVe | 50 | 0.758 | 0.133 ± 0.054 |
| GloVe | 100 | 0.783 | 0.127 ± 0.055 |
| GloVeTwitter | 25 | 0.739 | 0.168 ± 0.093 |
| GloVeTwitter | 50 | 0.752 | 0.143 ± 0.02 |
| GloVeTwitter | 100 | 0.77 | 0.177 ± 0.057 |

Stanford Sentiment Treebank (SST)
| EMBEDDING | DIMENSION | ACCURACY | LOWER BOUND |
| --- | --- | --- | --- |
| Keras | 5 | 0.75 | 0.623 ± 0.28 |
| Keras | 10 | 0.756 | 0.449 ± 0.283 |
| Keras | 25 | 0.757 | 0.116 ± 0.14 |
| Keras | 50 | 0.811 | 0.029 ± 0.012 |
| Keras | 100 | 0.818 | 0.023 ± 0.006 |
| GloVe | 50 | 0.824 | 0.053 ± 0.023 |
| GloVe | 100 | 0.833 | 0.028 ± 0.015 |
| GloVeTwitter | 25 | 0.763 | 0.065 ± 0.023 |
| GloVeTwitter | 50 | 0.826 | 0.059 ± 0.031 |
| GloVeTwitter | 100 | 0.823 | 0.0 ± 0.0 (NaN) |

NEWS Dataset
| EMBEDDING | DIMENSION | ACCURACY | LOWER BOUND |
| --- | --- | --- | --- |
| GloVe | 50 | 0.625 | 0.013 ± 0.015 |
| GloVe | 100 | 0.7 | 0.018 ± 0.017 |
| GloVeTwitter | 50 | 0.627 | 0.009 ± 0.006 |
| GloVeTwitter | 100 | 0.716 | 0.008 ± 0.009 |

Table 5: Lower bound results for single-word substitutions as found by CNN-Cert and POPQORN, respectively, on the IMDB, SST and NEWS datasets. Values reported refer to measurements in the $\mathrm{L}_2$-norm.

AG News Results: Single Word Substitution
| EMBEDDING | DIMENSION | ACCURACY (Vanilla) | ACCURACY (Counter-fitted) | LOWER BOUND (Vanilla) | LOWER BOUND (Counter-fitted) |
| --- | --- | --- | --- | --- | --- |
| Keras | 5 | 0.414 | 0.464 | 0.072 ± 0.066 | 0.145 ± 0.147 |
| Keras | 10 | 0.491 | 0.505 | 0.026 ± 0.025 | 0.088 ± 0.087 |
| Keras | 25 | 0.585 | 0.597 | 0.022 ± 0.025 | 0.032 ± 0.026 |
| Keras | 50 | 0.692 | 0.751 | 0.015 ± 0.009 | 0.024 ± 0.015 |
| Keras | 100 | 0.779 | 0.807 | 0.011 ± 0.007 | 0.015 ± 0.009 |
| GloVe | 50 | 0.892 | 0.879 | 0.04 ± 0.028 | 0.043 ± 0.03 |
| GloVe | 100 | 0.901 | 0.887 | 0.027 ± 0.018 | 0.0 ± 0.0 (NaN) |
| GloVeTwitter | 25 | 0.848 | 0.846 | 0.033 ± 0.025 | 0.046 ± 0.036 |
| GloVeTwitter | 50 | 0.877 | 0.866 | 0.05 ± 0.012 | 0.033 ± 0.018 |
| GloVeTwitter | 100 | 0.833 | 0.883 | 0.019 ± 0.012 | 0.026 ± 0.005 |

AG News Results: Multiple Words Substitutions
| EMBEDDING | DIMENSION | L.B. 2 SUBST. (Vanilla) | L.B. 2 SUBST. (Counter-fitted) | L.B. 3 SUBST. (Vanilla) | L.B. 3 SUBST. (Counter-fitted) |
| --- | --- | --- | --- | --- | --- |
| Keras | 5 | 0.029 ± 0.024 | 0.065 ± 0.059 | 0.025 ± 0.017 | 0.054 ± 0.044 |
| Keras | 10 | 0.013 ± 0.012 | 0.043 ± 0.042 | 0.008 ± 0.008 | 0.028 ± 0.028 |
| Keras | 25 | 0.011 ± 0.008 | 0.015 ± 0.012 | 0.007 ± 0.006 | 0.01 ± 0.008 |
| Keras | 50 | 0.007 ± 0.004 | 0.012 ± 0.007 | 0.005 ± 0.003 | 0.008 ± 0.005 |
| Keras | 100 | 0.006 ± 0.004 | 0.006 ± 0.004 | 0.003 ± 0.003 | 0.003 ± 0.002 |
| GloVe | 50 | 0.02 ± 0.013 | 0.02 ± 0.014 | 0.013 ± 0.009 | 0.016 ± 0.01 |
| GloVe | 100 | 0.015 ± 0.007 | 0.0 ± 0.0 (NaN) | 0.01 ± 0.006 | 0.0 ± 0.0 (NaN) |
| GloVeTwitter | 25 | 0.014 ± 0.011 | 0.023 ± 0.017 | 0.01 ± 0.008 | 0.0015 ± 0.012 |
| GloVeTwitter | 50 | 0.024 ± 0.005 | 0.015 ± 0.009 | 0.016 ± 0.004 | 0.011 ± 0.007 |
| GloVeTwitter | 100 | 0.009 ± 0.006 | 0.013 ± 0.002 | 0.006 ± 0.004 | 0.008 ± 0.002 |

| EMBEDDING | DIMENSION | L.B. 4 SUBST. (Vanilla) | L.B. 4 SUBST. (Counter-fitted) | L.B. 5 SUBST. (Vanilla) | L.B. 5 SUBST. (Counter-fitted) |
| --- | --- | --- | --- | --- | --- |
| Keras | 5 | 0.018 ± 0.012 | 0.035 ± 0.028 | 0.014 ± 0.009 | 0.03 ± 0.021 |
| Keras | 10 | 0.006 ± 0.005 | 0.02 ± 0.019 | 0.005 ± 0.004 | 0.016 ± 0.015 |
| Keras | 25 | 0.005 ± 0.004 | 0.007 ± 0.006 | 0.004 ± 0.003 | 0.006 ± 0.004 |
| Keras | 50 | 0.003 ± 0.002 | 0.005 ± 0.002 | 0.003 ± 0.002 | 0.005 ± 0.003 |
| Keras | 100 | 0.003 ± 0.002 | 0.003 ± 0.002 | 0.002 ± 0.001 | 0.002 ± 0.001 |
| GloVe | 50 | 0.009 ± 0.006 | 0.01 ± 0.006 | 0.008 ± 0.005 | 0.008 ± 0.006 |
| GloVe | 100 | 0.007 ± 0.004 | 0.0 ± 0.0 (NaN) | 0.005 ± 0.003 | 0.0 ± 0.0 (NaN) |
| GloVeTwitter | 25 | 0.007 ± 0.005 | 0.011 ± 0.008 | 0.006 ± 0.004 | 0.009 ± 0.006 |
| GloVeTwitter | 50 | 0.008 ± 0.004 | 0.008 ± 0.006 | 0.009 ± 0.001 | 0.006 ± 0.004 |
| GloVeTwitter | 100 | 0.004 ± 0.003 | 0.006 ± 0.001 | 0.003 ± 0.002 | 0.005 ± 0.001 |

Table 6: Lower bound results for single (top) and multiple word (middle and bottom) substitutions, comparing vanilla and counter-fitted models. Robustness of counter-fitted models is superior to the vanilla counterpart, except for high-dimensional embeddings such as GloVe 100d, where it has not been possible to obtain a bound for the counter-fitted embedding due to computational constraints (nonetheless the counterpart lower bound is close to zero). Values reported refer to measurements in the $\mathrm{L}_{\infty}$-norm.

MCTS Results
| DATASET | EMBEDDING | EXEC TIME [s] | SUB. (% per-text) | SUB. (% per-word) | UB |
| --- | --- | --- | --- | --- | --- |
| IMDB | Keras 50d | 29.52 | 6.0 | 1.4 | 0.41 ± 0.04 |
| IMDB | GloVe 50d | 39.61 | 39.7 | 5.1 | 0.39 ± 0.016 |
| IMDB | GloVeTwitter 50d | 54.1 | 47.0 | 7.7 | 0.329 ± 0.015 |
| AG NEWS | Keras 50d | 21.09 | 50.0 | 15.6 | 0.396 ± 0.02 |
| AG NEWS | GloVe 50d | 19.25 | 22.4 | 10.8 | 0.438 ± 0.042 |
| AG NEWS | GloVeTwitter 50d | 17.75 | 21.4 | 6.6 | 0.336 ± 0.019 |
| SST | Keras 50d | 8.36 | 52.2 | 19.9 | 0.444 ± 0.077 |
| SST | GloVe 50d | 11.94 | 81.1 | 37.4 | 0.385 ± 0.024 |
| SST | GloVeTwitter 50d | 11.96 | 78.1 | 36.3 | 0.329 ± 0.024 |
| NEWS | GloVe 50d | 75.76 | 96.5 | 34.0 | 0.405 ± 0.045 |
| NEWS | GloVe 100d | 79.31 | 89.7 | 29.1 | 0.442 ± 0.042 |
| NEWS | GloVeTwitter 50d | 77.74 | 90.9 | 30.6 | 0.314 ± 0.033 |
| NEWS | GloVeTwitter 100d | 81.29 | 89.7 | 27.7 | 0.417 ± 0.042 |

Table 7: Upper bound results for single-word substitutions as found by MCTS. We report: the average execution time for each experiment; the percentage of texts for which we have found at least one successful single-word substitution (which results in a class change) and the approximate ratio that selecting randomly 1 word from a text we find a replacement that is successful; the distance to the closest meaningful perturbation to the original word found, namely an upper bound (differently from Table 3 and for completeness, here values are reported only considering the values for those words where the perturbations were successful). Values reported refer to measurements in the $\mathrm{L}_2$-norm.

MCTS Multiple Substitutions
| DATASET | EMBEDDING | 2 SUBST. (% per-text) | 2 SUBST. (% per-word) | 3 SUBST. (% per-text) | 3 SUBST. (% per-word) | 4 SUBST. (% per-text) | 4 SUBST. (% per-word) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| IMDB | Keras 50d | 8.5 | 5.0 | 13.4 | 5.9 | 18.2 | 6.6 |
| IMDB | GloVe 50d | 43.8 | 17.7 | 52.0 | 21.6 | 57.5 | 24.5 |
| IMDB | GloVeTwitter 50d | 44.1 | 18.3 | 49.3 | 23.0 | 57.1 | 26.4 |
| AG NEWS | Keras 50d | 68.1 | 27.5 | 72.7 | 38.3 | 83.3 | 47.9 |
| AG NEWS | GloVe 50d | 31.4 | 15.8 | 33.7 | 16.8 | 37.0 | 19.7 |
| AG NEWS | GloVeTwitter 50d | 23.8 | 12.5 | 23.8 | 15.3 | 38.0 | 18.4 |
| SST | Keras 50d | 64.8 | 33.0 | 74.7 | 40.2 | 78.0 | 48.7 |
| SST | GloVe 50d | 89.4 | 58.0 | 96.4 | 70.8 | 97.6 | 76.5 |
| SST | GloVeTwitter 50d | 88.3 | 57.8 | 94.1 | 69.1 | 95.3 | 74.9 |
| NEWS | GloVe 50d | 98.8 | 55.4 | 97.3 | 62.5 | 97.3 | 68.6 |
| NEWS | GloVe 100d | 100.0 | 46.8 | 95.0 | 68.0 | 96.0 | 65.2 |
| NEWS | GloVeTwitter 50d | 94.5 | 50.5 | 97.5 | 63.0 | 97.5 | 71.9 |
| NEWS | GloVeTwitter 100d | 92.7 | 49.9 | 98.1 | 58.2 | 98.3 | 65.3 |

Table 8: Upper bound results for multiple-word substitutions as found by MCTS. We report the percentage of texts for which we have found at least a single successful substitution and the approximate ratio that selecting randomly $k$ words from a text (where $k$ is the number of substitutions allowed) we find a replacement that is successful. We do not report the average execution times as they are (roughly) the same as in Table 7. Values reported refer to measurements in the ${\mathrm{L}}_{2}$-norm. For more than 1 substitution, values reported are an estimate on several random replacements, as it quickly becomes prohibitive to cover all the possible multiple-word combinations.

![](images/5870894c7942e57b82dbcd5451985057fd05fe2a9499048f1e0e9a5936871d4f.jpg)

![](images/51f379b78c23922acf7a4e1fe80e6e6e107f225d3ffaa05c53cdaabf1dabfe2e.jpg)
(a)

![](images/e6eb8943af3e0fba90b48d34aa8282d936d89f4fda6b71f9af5aa1763f261997.jpg)

![](images/553ea228f702183420904dfced3d82929a61b3d51818e768e0d87a50879e93ef.jpg)
Figure 11: Comparison of robustness of vanilla vs. counter-fitted embeddings for an increasing number of dimensions and word substitutions on the AG News dataset. (a) Simple Keras Custom embeddings optimised for emotional polarity. (b) GloVeTwitter embeddings that encode more complex representations. Counter-fitted embeddings exhibit greater robustness on low-dimensional or simple embeddings. A reversed trend is observed on high-dimensional embeddings or more complex word representations. Values reported refer to measurements in the $\mathrm{L}_{\infty}$-norm.
![](images/e7df20030365045b9f8783e6d23df6ca8f1eab309de48ff78dc6ea2646aee0.jpg)
(b)

![](images/da2a8fee6c5522d1bdab2af389133f79448e848194c42cf7e730ff5fc5ac.jpg)
# Attending to Long-Distance Document Context for Sequence Labeling

Matthew Jörke*

Computer Science Department

Stanford University

joerke@stanford.edu

Jon Gillick

School of Information

UC Berkeley

jongillick@berkeley.edu

Matthew Sims

School of Information

UC Berkeley

mbsims@berkeley.edu

David Bamman

School of Information

UC Berkeley

dbamman@berkeley.edu

# Abstract
We present in this work a method for incorporating global context in long documents when making local decisions in sequence labeling problems like NER. Inspired by work in featurized log-linear models (Chieu and Ng, 2002; Sutton and McCallum, 2004), our model learns to attend to multiple mentions of the same word type in generating a representation for each token in context, extending that work to learning representations that can be incorporated into modern neural models. Attending to broader context at test time provides complementary information to pretraining (Gururangan et al., 2020), yields strong gains over equivalently parameterized models lacking such context, and performs best at recognizing entities with high TF-IDF scores (i.e., those that are important within a document).

# 1 Introduction

Many of the main datasets used in NLP are comprised of relatively short documents: English OntoNotes (Weischedel et al., 2012), for example, contains an average of 223 tokens per document, the WSJ portion of the Penn Treebank (Marcus et al., 1993) averages 501 tokens, the IMDb dataset (Maas et al., 2011) averages 272 tokens, and SQuAD 2.0 (Rajpurkar et al., 2018) contains an average of 134 tokens per passage. This focus has, in turn, led to the development of models specifically optimized for the characteristics of short documents, including a pervasive focus on the sentence as the atomic unit of analysis for such tasks as NER and parsing, and influencing the maximum context length of contextual language models like BERT (Devlin et al., 2019) to be limited to 512 tokens.

At the same time, however, longer documents are increasingly the objects of empirical study in areas as diverse as computational social science and the digital humanities—including novels (Piper, 2018; Underwood, 2019), scientific articles (Jurgens et al., 2018) and political manifestos (Menini et al., 2017; Denny and Spirling, 2018).
These long documents present not only challenges for NLP (such as any task, like coreference resolution, whose computational complexity is superlinear in the size of the document) but opportunities as well, since the longer document context presents greater opportunity for learning better representations.

Recent work in NLP has begun exploring this link between longer documents and representation learning. First, while contextualized models (e.g. Peters et al., 2018; Devlin et al., 2019) generally consider the context of a few sentences, several recent advancements have enabled significantly longer input sequences (e.g. Dai et al., 2019; Beltagy et al., 2020; Kitaev et al., 2020; Rae et al., 2019); most, however, are either incapable of processing book-level documents or prohibitively resource-intensive for standard use.

Second, domain- and task-adaptive pretraining has proven especially effective for adapting the weights of general-purpose language models to the distribution of a particular domain or task (Gururangan et al., 2020; Han and Eisenstein, 2019; Beltagy et al., 2019; Lee et al., 2020). While longer documents are able to provide more context for these models to adapt to, pretraining operates at the broad level of a domain, and is unable to exploit new context at evaluation time in unseen test documents. To highlight the value of considering document context at test time, consider the following sentence from E.M. Forster's A Room with a View (1908):

"Mr. Beebe!" said the maid, and the new rector of Summer Street was shown in; he had at once started on friendly relations, owing to Lucy's praise of him in her letters from Florence.

From the context of this sentence alone, it is unclear if Florence refers to "a city in Italy" or "a person named Florence"; this local contextual ambiguity might lead an NER system to classify Florence as either a PERSON or LOCATION.
However, examining the broader document context clarifies this entity type: other mentions of Florence within the text more clearly indicate that it refers to the city:

- "I saw him in Florence," said Lucy...
- As her time at Florence drew to its close...
- ...two carriages stopped, half into Florence...

We might hypothesize, in fact, that a model that can attend to multiple mentions of a term like Florence in a document will perform better at recognizing important entities—those that are frequently mentioned within it and that may be infrequently seen outside of it. This fundamental idea—that multiple mentions of a term can provide shared information to help disambiguate each one—originates in featurized log-linear models that incorporate global information in making local predictions (Chieu and Ng, 2002; Sutton and McCallum, 2004; Liu et al., 2010); we extend that work here to the context of learning representations that can be incorporated into state-of-the-art neural models, explicitly learning to attend over relevant context sequences that are available only at test time, providing a complementary source of information to domain- and task-adaptive pretraining.

This work makes the following contributions:

1. We present Doc-ARC (Document-Attentive Representation of Context), an attention-based method for incorporating document context in sequence labeling tasks, and demonstrate improvements over equivalently parameterized models without document attention.
2. We evaluate Doc-ARC on three datasets containing long documents from different domains (literature, biomedical texts, and news), and present a new dataset of the full text of biomedical articles paired with labeled annotations of their abstracts in the GENIA/JNLPBA dataset (Collier and Kim, 2004).
3.
We demonstrate that Doc-ARC outperforms alternative methods at recognizing important document entities (defined as those with a high TF-IDF score), identifying tangible scenarios where it would be advantageous to use it.

# 2 Doc-ARC

The core idea behind Doc-ARC is to leverage nearby representations of the same word when generating a representation for a given token. Rather than representing Florence above through a contextual representation scoped only over one sentence, we represent it through a weighted combination of that token itself and other instances of Florence in the document. By attending over multiple instances of the same word, we are able to preserve the importance of the specific local context of a token, while also reasoning about its broader use in the rest of the document. While this model has application to a wide range of NLP tasks, we focus on the sequence labeling problem of NER.

# 2.1 Model Overview

Figure 1 illustrates this model for a sample text from the JNLPBA corpus. Consider a sequence $\mathbf{x} = \{x_1, \ldots, x_n\}$ with corresponding labels $\mathbf{y} = \{y_1, \ldots, y_n\}$, drawn from a document $\mathcal{D}$. Other sequences in $\mathcal{D}$ may or may not have labels, and the labeled set may or may not be contiguous.

Let $e(\mathbf{x})$ be an encoding of $\mathbf{x}$ under some language model (e.g. BERT). When predicting a label, we consider both $e(\mathbf{x})$, the original encoding of the target sequence, and $c(\mathbf{x})$, an attention-weighted sum over the encodings of each $x_i \in \mathbf{x}$ as they appear in the context of $\mathcal{D}$.
![](images/9ca4d78470833a46e5815e36579261786921f6ffe613e3b3b467f7611c073399.jpg)
Figure 1: Overview of Doc-ARC with an example from the JNLPBA corpus, a dataset for named-entity recognition in biomedical research papers. The model attends over the representation of $x_i = \mathrm{GATA\text{-}3}$ in context sentences $\mathbf{s}_k$ to produce the context encoding $c(\mathbf{x})$. The BERT base model can be left trainable (dynamic Doc-ARC) for small encoders or frozen (static Doc-ARC) for large encoders.

Formally, let us define $\mathcal{V}(x_i)$ to be the word type (drawn from vocabulary $\mathcal{V}$) for token $x_i$.¹ We define $\mathcal{S}_K(x_i) = \{(\mathbf{s}_k, i_k)\}_{k=1}^K$ to be the $K$ closest sequences to $\mathbf{x}$ in $\mathcal{D}$ which also contain a token of type $\mathcal{V}(x_i)$,² where $\mathbf{s}_k$ is the $k$-th closest context sequence to $\mathbf{x}$ and $i_k$ denotes the index of $\mathcal{V}(x_i)$ in $\mathbf{s}_k$. For each $x_i \in \mathbf{x}$ and each $k \leq K$, our model fetches $e_c(x_i)^{(k)}$, an encoding of $\mathcal{V}(x_i)$ as it appears in the context of $\mathbf{s}_k$,

$$
e_c(x_i)^{(k)} = \left[ e(\mathbf{s}_k)_{i_k}; d(\mathbf{s}_k, \mathbf{x}) \right], \quad (\mathbf{s}_k, i_k) \in \mathcal{S}_K(x_i), \tag{1}
$$

with $d(\mathbf{s}_k, \mathbf{x})$ denoting a bucketed embedding of the distance between $\mathbf{s}_k$ and $\mathbf{x}$. We adapt our distance buckets from Lee et al. (2017).

Finally, we compute $c(x_i)$ by attending over each of the $e_c(x_i)^{(k)}$.
$$
c(x_i) = \sum_{k=1}^{K} \alpha_k \cdot e_c(x_i)^{(k)} \tag{2}
$$

$$
\alpha_k \propto \exp\left(\mathbf{w}_{\mathrm{attn}}^{\top} e_c(x_i)^{(k)}\right) \tag{3}
$$

If a given word type has $K' < K$ occurrences in $\mathcal{D}$, we only attend over these $K'$ relevant instances. We allow sequences to attend over the target occurrence itself; that is, $(\mathbf{x}, i) \in \mathcal{S}_K(x_i)$.

Our model generates a prediction by passing this composite representation through a sequence encoder $f_s$ (such as a bidirectional LSTM, GRU, or Transformer layer), and generating a distribution over labels through a softmax function:

$$
\mathbf{z} = f_s\left(\left[ e(\mathbf{x}); c(\mathbf{x}) \right]\right) \tag{4}
$$

$$
p(\mathbf{y} \mid \mathbf{x}, \mathcal{D}) = \operatorname{softmax}(\mathbf{z})
$$

# 2.2 Static and Dynamic Doc-ARC

When processing a single target sequence of length $N$ words, our model must process $O(NK)$ context sequences. If the context representation $e_c(\mathbf{x})$ is allowed to be trainable, $O(NK)$ model activation copies are stored for each target sentence, which becomes prohibitively expensive for large encoders.

Though optimizations can be made using GPU/TPU parallelism (e.g. Raffel et al., 2019) and/or memory-efficient encoders (e.g. Kitaev et al., 2020; Lan et al., 2019), our work adopts a different focus. Instead, we consider two simple cases which encapsulate the trade-offs inherent to this method, regardless of encoder architecture:

Static. Our static variant of Doc-ARC assumes that $e(\cdot)$ is fixed throughout training. This variant is applicable when the encoder is a memory-intensive language model such as BERT.
To offset the effects of freezing BERT, we pass the context representations through a trainable 1-layer context encoder $f_c$, which we found crucial to good performance in our experiments.

$$
e_c(\mathbf{x})^{(k)} = f_c\left(e_c(x_1)^{(k)}, \dots, e_c(x_n)^{(k)}\right) \tag{5}
$$

To compute $c(\mathbf{x})$, we first gather all of the unique sequences that $\mathbf{x}$ will attend over, compute the representations of the attended sequences with a frozen base model, and cache these representations in CPU memory.

Dynamic. Our dynamic variant assumes that $e(\cdot)$ is trainable, which necessitates a memory-efficient encoder (see §4.2). Here, each of the $O(NK)$ context sequences is processed by the encoder in a single batch, including duplicate sentences. Activations for all the sequences are held in GPU memory. We process single target sequence batches with gradient accumulation to achieve larger effective batch sizes. We do not include the context encoder $f_c$.

# 3 Datasets

We evaluate our model on three named entity recognition (NER) datasets: LitBank (Bamman et al., 2019), JNLPBA (Collier and Kim, 2004) and OntoNotes (Weischedel et al., 2012). Table 1 lists descriptive statistics for each dataset.

| Dataset | Train docs | Dev docs | Test docs | Labeled sentences | Unlabelled sentences | Labeled tokens | Unlabelled tokens |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LitBank | 80 | 10 | 10 | 8,562 | 617,490 | 210,532 | 13,116,998 |
| JNLPBA | 714 | 168 | 168 | 10,116 | 562,994 | 273,315 | 9,803,762 |
| OntoNotes1000 | 434 | 70 | 34 | 63,765 | — | 1,125,758 | — |

Table 1: Dataset statistics. JNLPBA consists of many small documents (research papers), while LitBank consists of considerably fewer, long documents (novels). Both LitBank and JNLPBA have approximately the same ratio of labeled to unlabeled data $(1-2\%)$, providing complementary settings for evaluating Doc-ARC. OntoNotes1000 has the shortest documents on average, but each document is fully labeled.

LitBank. The LitBank dataset (Bamman et al., 2019) is comprised of relatively long documents drawn from 100 English novels, with each document containing annotations for roughly 2,000 words. This dataset contains annotations for nested entities using six of the ACE 2005 (Walker et al., 2006) categories (PER, LOC, FAC, GPE, ORG, VEH). We convert that hierarchy into a flat structure suitable for NER by preserving only the outermost layer for any nested structure (using the same process used by JNLPBA for GENIA, described below); all annotations nested within another are removed. We use the same training, development and test splits reported in Bamman et al. (2019).
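The flattening step described for LitBank (keep only the outermost span of any nested annotation) can be sketched as follows; the `(start, end, label)` span format is our assumption, not the dataset's actual serialization:

```python
def flatten_nested(spans):
    """Keep only outermost entity spans, dropping spans nested inside another.

    `spans` is assumed to be a list of (start, end, label) tuples with `end`
    exclusive; this mirrors the flattening applied to LitBank/GENIA, where
    all annotations nested within another annotation are removed.
    """
    def nested_in(inner, outer):
        # Strictly contained: same-extent spans are not treated as nested.
        return (outer[0] <= inner[0] and inner[1] <= outer[1]
                and (inner[0], inner[1]) != (outer[0], outer[1]))

    return [s for s in spans
            if not any(nested_in(s, o) for o in spans if o is not s)]

spans = [(0, 5, "PER"), (2, 4, "GPE"), (7, 9, "LOC")]
print(flatten_nested(spans))  # → [(0, 5, 'PER'), (7, 9, 'LOC')]
```

The inner GPE span is dropped because it falls entirely inside the PER span, leaving a flat annotation layer suitable for standard BIO-style NER tagging.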
While the labeled documents in LitBank are already quite long, they represent less than $2\%$ of the novels they are drawn from—the average full text novel in this collection is approximately 133,000 words. We draw on this broader context by treating the remainder of the novel as unlabeled document context that we can exploit.

JNLPBA. To test our performance in the biomedical domain, we use data from the JNLPBA 2004 shared task on entity recognition (Collier and Kim, 2004); this data consists of flat annotations of MEDLINE abstracts extracted from the nested entity annotations in the GENIA corpus (Kim et al., 2003), with five labels (PROTEIN, CELL LINE, CELL TYPE, DNA and RNA).

While the median document length in JNLPBA is only 245 words, these abstracts have a potentially much larger unlabeled context: the full text of the articles themselves. One contribution we make in this work is constructing a new dataset by pairing the abstracts in GENIA with their full scientific articles. We do so by converting the MEDLINE identifiers encoded in the JNLPBA dataset to PubMed identifiers using mappings from the National Library of Medicine,³ querying PubMed to retrieve the article metadata,⁴ manually downloading the full-text article PDF, and OCR'ing each PDF using Abbyy FineReader. We are able to pair a total of 882 abstracts in the JNLPBA training set with their full-text articles (44.1%) and 168 abstracts in the test set (41.6%). To enable hyperparameter tuning, we divide the training set into 714 documents for training and 168 documents for development, holding out the 168 original test documents for evaluation. The average length of the unlabeled document context in this dataset is 9,337 words.

OntoNotes1000. The OntoNotes 5.0 dataset (Weischedel et al., 2012) provides named entity annotations for a subset of documents, with 18 entity classes, including PERSON, LOCATION, MONEY and WORK OF ART.
While the median length of documents in this collection is quite short at 277 words, we simulate a scenario of longer document context by only focusing on documents in OntoNotes that are over 1,000 words in length.

We use the same training, development, and test splits of this data used in Pradhan et al. (2013), using the BIO labels in the OntoNotes-5.0-NER-BIO repository. Subsetting the data to only those documents within these partitions with over 1,000 words yields a total of 434 training documents, 70 development documents, and 34 test documents.

Preprocessing. For Doc-ARC (both static and dynamic), all labeled sequences are kept at their original length; none were longer than BERT's maximum input length (512). All unlabeled (context) sequences longer than 256 tokens are partitioned into chunks of length $\leq 256$ tokens, since this limits the complexity of computing $c(\mathbf{x})$ (see §2.2). For baselines, unlabeled sequences are disregarded.

# 4 Experiments

We evaluate our static and dynamic Doc-ARC models on LitBank, JNLPBA, and OntoNotes$_{1000}$. To enable a fair comparison of the specific contribution of document-level attention, each Doc-ARC model is compared to a baseline which lacks contextual inputs and has a comparable number of trainable parameters.

# 4.1 Static Doc-ARC

We compute $e(\mathbf{x})$ from a frozen $\mathrm{BERT}_{\mathrm{BASE}}$ model, using the last four layers of BERT as a token's representation. To offset the effects of freezing BERT's weights, we let $f_s$ and $f_c$ be trainable biLSTMs. We perform hyperparameter tuning on the development set over $K$ for each model.

| Base Model | LitBank Doc-ARC | LitBank BERT+LSTM | JNLPBA Doc-ARC | JNLPBA BERT+LSTM | OntoNotes1000 Doc-ARC | OntoNotes1000 BERT+LSTM |
| --- | --- | --- | --- | --- | --- | --- |
| $\mathrm{BERT}_{\mathrm{BASE}}$ | 75.75 (0.45) | 74.22 (0.49) | 71.17 (0.49) | 69.28 (0.39) | 84.25 (0.41) | 82.20 (0.47) |
| $\mathrm{BERT}_{\mathrm{TAPT}}$ | 74.28 (0.80) | 72.08 (0.84) | 71.43 (0.93) | 69.77 (1.22) | 83.75 (0.51) | 82.35 (0.56) |

Table 2: Static Doc-ARC results. We report mean (SD) test $F_1$ scores across 5 runs. Our baseline comparison (BERT+LSTM) has a comparable number of trainable parameters, but lacks attention over context occurrences. Each Doc-ARC model was hyperparameter tuned over $K$, listed in the Appendix A.2.

Task-adaptive pretraining (TAPT).
The availability of unlabeled data drawn from the same documents as a labeled dataset is exactly the scenario for which task-adaptive pretraining (Gururangan et al., 2020) has demonstrated sizeable effects. To investigate this in the context of this NER task, we pretrain $\mathrm{BERT}_{\mathrm{BASE}}$ on the training documents' full text (both labeled and unlabeled) for 100 epochs, yielding a $\mathrm{BERT}_{\mathrm{TAPT}}$ model for each dataset.

Baselines. We compare each static Doc-ARC model to a baseline with a comparable number of trainable and non-trainable parameters (frozen BERT representations input into two stacked biLSTMs), but lacking attention over neighboring sequences; using the notation from §2, the only input to the baseline model is $e(\mathbf{x})$, and not $c(\mathbf{x})$. We train this baseline model on the labeled set only.

Results. Table 2 lists results for Doc-ARC on all three datasets with the encoder fixed to both $\mathrm{BERT}_{\mathrm{BASE}}$ and $\mathrm{BERT}_{\mathrm{TAPT}}$. We find that Doc-ARC performs above the baselines for all trials, a difference that can reasonably be attributed to Doc-ARC's document-level contextual attention mechanism.

We find that task-adaptive pretraining is least beneficial for LitBank and OntoNotes1000 (perhaps due to the similarity in domain to BERT's training data of BookCorpus and Wikipedia), and most helpful for JNLPBA, which has a linguistic domain most distinct from BERT's training data.

# 4.2 Dynamic Doc-ARC

We compute $e(\mathbf{x})$ from the last layer of a Transformer$_{\mathrm{TINY}}$ model (Turc et al., 2019), a compact, two-layer Transformer distilled from $\mathrm{BERT}_{\mathrm{BASE}}$, which we will refer to as $\mathrm{BERT}_{\mathrm{TINY}}$. We do not process the context representations through $f_c$, but maintain that $f_s$ is a trainable biLSTM (see Eq. 4).
For all datasets, we attend over the $K = 10$ closest sequences, which was the largest configuration that could be trained on a single GPU for all three datasets.

Baselines. We compare each dynamic Doc-ARC to trainable $\mathrm{BERT}_{\mathrm{TINY}}$, as well as $\mathrm{BERT}_{\mathrm{TINY}}$ with one biLSTM attached. Analogous to the static case, the dynamic baseline has a comparable number of parameters to dynamic Doc-ARC, but lacks attention over neighboring context sequences.
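Concretely, the document-level attention of Eqs. 2–3 reduces, for a single token, to a softmax over the scores of its $K$ (or $K' < K$) context encodings. A minimal plain-Python sketch, in which the context encodings and the learned vector $\mathbf{w}_{\mathrm{attn}}$ are given as inputs (an illustration of the mechanism, not the trained model):

```python
import math

def attend_over_contexts(context_encodings, w_attn):
    """Compute c(x_i) per Eqs. 2-3: a softmax-weighted sum over the K
    context encodings e_c(x_i)^(k). Inputs are plain lists of floats."""
    # Score each context occurrence: w_attn^T e_c(x_i)^(k)  (Eq. 3).
    scores = [sum(w * e for w, e in zip(w_attn, enc)) for enc in context_encodings]
    # Numerically stable softmax over the K scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    alphas = [e / z for e in exps]
    # Weighted sum of the encodings  (Eq. 2).
    dim = len(context_encodings[0])
    c = [sum(a * enc[j] for a, enc in zip(alphas, context_encodings))
         for j in range(dim)]
    return c, alphas

# Usage: two context occurrences with equal scores split attention evenly.
vec, alphas = attend_over_contexts([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
# → alphas == [0.5, 0.5] and vec == [0.5, 0.5]
```

In the real model each `enc` would also carry the bucketed distance embedding $d(\mathbf{s}_k, \mathbf{x})$ concatenated per Eq. 1, so nearer context sentences can be scored differently from distant ones.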
| Dataset | Doc-ARC | $\mathrm{BERT}_{\mathrm{TINY}}$ | $\mathrm{BERT}_{\mathrm{TINY}}$ + LSTM |
| --- | --- | --- | --- |
| LitBank | 64.47 (1.27) | 56.17 (0.83) | 60.97 (0.40) |
| JNLPBA | 65.26 (0.79) | 56.96 (0.75) | 62.08 (0.66) |
| OntoNotes1000 | 72.55 (0.76) | 69.19 (0.67) | 71.32 (0.42) |

Table 3: Dynamic Doc-ARC results, all evaluated at $K = 10$. The BERT+LSTM baseline has a comparable number of trainable parameters, but lacks attention over context occurrences. We report mean (SD) test $F_1$ scores across 5 runs.

Results. We find that dynamic Doc-ARC significantly outperforms the baselines. Relative to the $\mathrm{BERT}_{\mathrm{TINY}}$ + LSTM baselines, we find that dynamic Doc-ARC gains are greater than their static counterparts for LitBank and JNLPBA. Though the dynamic models cannot match the performance of their static analogues, it is worth noting that the static variants have roughly twice as many trainable parameters. Moreover, $\mathrm{BERT}_{\mathrm{BASE}}$ has roughly 25 times as many parameters as $\mathrm{BERT}_{\mathrm{TINY}}$.

# 4.3 Task Fine-Tuning

To contextualize the performance of our dynamic models, we can consider results for fully task fine-tuned $\mathrm{BERT}_{\mathrm{BASE}}$ and $\mathrm{BERT}_{\mathrm{TAPT}}$ models; as Table 4 illustrates, when given the ability to fine-tune all of its parameters to the task, performance is significantly higher than the small dynamic models, and comparable to the larger (but static) Doc-ARC models.

While a direct comparison is ill-suited given the disparity in trainable parameters in a task-tuned $\mathrm{BERT}_{\mathrm{BASE}}$ (11 times the number of trainable parameters as a static Doc-ARC and 25 times the number of trainable parameters as a dynamic Doc-ARC), it illustrates one direction of future work: incorporating a task-tuned contextual language model into Doc-ARC. However, even with a static model with an order of magnitude fewer parameters, we find that Doc-ARC can outperform even a trainable BERT baseline for certain classes of important entities, as illustrated in the following section.
| Dataset | $\mathrm{BERT}_{\mathrm{BASE}}$ | $\mathrm{BERT}_{\mathrm{TAPT}}$ |
| --- | --- | --- |
| LitBank | 76.90 (0.61) | 76.28 (0.36) |
| JNLPBA | 70.05 (0.81) | 70.62 (0.79) |
| OntoNotes$_{1000}$ | 84.44 (0.18) | 85.22 (0.29) |
+ +Table 4: Fully-trainable BERT finetuning results. We report mean (SD) test $F_{1}$ scores across 5 runs. + +# 5 Analysis + +Doc-ARC was designed to (1) improve the performance of NER systems for rare, but important entities by (2) leveraging rich contextual information in long documents. In this section, we characterize the extent to which these goals were met using both quantitative and qualitative analysis. + +# 5.1 Characterizing Important Entities + +We hypothesize that Doc-ARC is most beneficial for rare entities that occur primarily within the context of a single document (such as the names of major characters in a novel). Such entities have a unique relevance only within the context of their document and are often the entities of highest importance for downstream analyses. However, these entities are particularly difficult for NER systems to classify correctly due to their rarity, unusual surface forms, and/or ambiguous meaning across documents. Given that these entities occur multiple times throughout a document and in diverse contexts, Doc-ARC should have the capacity to leverage this additional context for greater accuracy among important entities. + +One means to identify important terms in a document is TF-IDF: words with high TF-IDF scores must appear frequently throughout a given document or appear characteristically within that document by appearing infrequently in other documents; terms with the highest scores satisfy both criteria. As Figure 2 illustrates, TF-IDF scores have a strong relationship with the presence of entity labels; words with high TF-IDF scores are more likely to be named entities across all three datasets. + +Table 5 lists the three entities with the highest TF-IDF scores for each of the datasets, which appear exclusively as named entities and capture important characters (LitBank), proteins (JNLPBA), and political entities (OntoNotes). 
+ +Given that TF-IDF is a reasonable indicator for important entities, we analyze Doc-ARC's performance for high TF-IDF words in comparison to alternative models. First, we compute TF-IDF scores + +![](images/a1bce7862beba4efaba1483fdc8f8c4b49494adb616c9c49051cec9c11f03cb8.jpg) +Figure 2: Among words in the labeled test set, we compute the proportion of words that appear with NER labels for each TF-IDF quantile. Across all datasets, words with a higher TF-IDF score are more likely to appear as named entities. + 
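The TF-IDF scoring used in this analysis can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' released code: it uses `log(1 + tf)` as one common log-scaling of term frequency (the paper specifies only "the logarithm of the term-frequency"), a natural-log IDF, and a simple percentile split; all function names are ours.

```python
import math
from collections import Counter

def tfidf_scores(documents):
    """Score each (doc_id, word) pair with log-scaled TF x IDF.

    `documents` is a list of tokenized documents (lists of words).
    Log-scaling the term frequency controls for variation in document
    length; IDF rewards words that appear in few other documents.
    """
    n_docs = len(documents)
    doc_freq = Counter()                       # number of docs containing each word
    for doc in documents:
        doc_freq.update(set(doc))
    scores = {}
    for i, doc in enumerate(documents):
        for word, count in Counter(doc).items():
            idf = math.log(n_docs / doc_freq[word])
            scores[(i, word)] = math.log(1 + count) * idf
    return scores

def top_percentile(scores, pct):
    """Return the (doc_id, word) keys at or above the pct-th percentile."""
    ordered = sorted(scores, key=scores.get)   # ascending by score
    cutoff_idx = int(len(ordered) * pct / 100)
    return set(ordered[cutoff_idx:])
```

Words scoring high must appear often within one document while appearing in few others, matching the criteria described above.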
| Dataset | Top Words | Entity Type(s) |
| --- | --- | --- |
| LitBank | Lucilla | Person |
|  | Cresswell | Facility |
|  | Marjoribanks | Person |
| JNLPBA | Akt-1 | Protein |
|  | Plasmin | Protein |
|  | Siah-1 | Protein |
| OntoNotes$_{1000}$ | Linpien | GPE/NORP/LOC |
|  | Dongguan | GPE/ORG |
|  | Koreans | NORP |
+ +Table 5: Top three entities with the highest TF-IDF scores across all test sets, with entity type(s). + +for all words across all documents for each dataset, using the logarithm of the term-frequency to control for variation in document length. We then restrict our vocabulary to words in the labeled test set that appear with a named entity label at least once, thereby excluding spurious high TF-IDF words (e.g. document-characteristic adjectives and adverbs). We split this vocabulary of high TF-IDF entities at the $90^{\text{th}}$ , $95^{\text{th}}$ , and $99^{\text{th}}$ percentile and compute word-level $F_{1}$ scores within each percentile. $^{7}$ + +Results. In Figure 3, we compare word-level $F_{1}$ scores between our best static Doc-ARC models with a fixed $\mathrm{BERT}_{\mathrm{BASE}}$ input (Table 2) and a task-finetuned $\mathrm{BERT}_{\mathrm{BASE}}$ model (Table 4). We plot the difference in word-level $F_{1}$ scores across the entire test set and the top $10\%$ , top $5\%$ , and top $1\%$ of TF-IDF entities. + +While the static Doc-ARC underperforms a finetuned $\mathrm{BERT}_{\mathrm{BASE}}$ across all words (mirroring the results from Table 4), we find that the static Doc-ARC outperforms finetuned BERT for high TF-IDF + +![](images/840794914865fdc19d7f6265bffc3ae09fbc9562849c4e940d4ac178ce9a7125.jpg) +Figure 3: Difference in word-level $F_{1}$ scores between static Doc-ARC and task-finetuned $\mathrm{BERT}_{\mathrm{BASE}}$ , compared across all words and the top TF-IDF entities. + +entities. Moreover, these performance gains increase with the TF-IDF threshold, indicating that Doc-ARC's performance is more sensitive to high-importance entities than a standard finetuned BERT model. These results are particularly pronounced for OntoNotes $_{1000}$ , where Doc-ARC outperforms a finetuned BERT model by over 17 points in the top $1\%$ of TF-IDF entities. + +# 5.2 Characterizing Context Attention + +We now turn to analyzing our model's use of attention over context occurrences. 
We parameterize this analysis via the attention width $(K)$ and the attention weight $(\alpha_{k})$ . + +Attention Width. The attention width $(K)$ determines the number of context occurrences a target word can attend over. In order to better understand the impact of the attention width on our model's performance, we plot mean dev $F_{1}$ scores across three runs for several values of $K$ in Figure 5. We find that the optimal value of $K$ is dataset-specific and that performance does not monotonically increase with $K$ , indicating that too much context can be detrimental. The maximum dev $F_{1}$ scores were used to determine the final hyperparameters in Table 2 (further hyperparameter details can be found in Appendix A.2). + +Attention Weight. In Figure 4, we plot the distribution of attention weights as a function of the distance to the target word. Unsurprisingly, the model assigns the highest weight to the target sentence itself (distance $= 0$ ), including the target occurrence itself or multiple mentions of the target word within the target sentence. Though the attention weight distributions for distances greater than zero tend to + +![](images/37a36227719b13a531d4accae58fb562538d02d63f80f336ea3d39d1b9552f84.jpg) +Figure 4: For each distance bucket (x-axis), we plot the distribution of attention weights assigned to context sentences in each bucket. A context sentence's distance to the target sentence is measured via absolute difference in sentence index. A distance of zero corresponds to mentions of the target word within the target sentence. + +![](images/f6343ffd54121011286d83838cac74fd2a6cc21c779a381736dd2c660b969aac.jpg) +Figure 5: Mean dev $F_{1}$ with standard deviations (shaded) across three runs, for various values of $K$ . Each model was trained with $\mathrm{BERT}_{\mathrm{BASE}}$ . + +have small medians, they have very long tails; for certain rare context sequences, Doc-ARC assigns a higher weight than to the target token itself. 
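The attention analyzed above can be illustrated with a minimal softmax attention sketch over up to $K$ context-occurrence vectors. This is our own illustrative reduction, not the paper's exact parameterization: it assumes plain dot-product scoring, and the function name and interface are ours.

```python
import math

def context_attention(target, contexts, K):
    """Attend over up to K context occurrences of a target word.

    target:   list of floats, the target-token representation
    contexts: list of vectors for occurrences of the same word type
              elsewhere in the document
    Returns (summary, weights): the attention-weighted context vector
    and the softmax weights (the alpha_k in the text).
    """
    ctx = contexts[:K]                                        # attention width K
    scores = [sum(t * c for t, c in zip(target, v)) for v in ctx]
    m = max(scores)                                           # numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]                       # softmax -> alpha_k
    summary = [sum(w * v[i] for w, v in zip(weights, ctx))
               for i in range(len(target))]
    return summary, weights
```

Because the weights are a softmax over all retained occurrences, a context occurrence whose representation is very similar to the target can receive more weight than the target occurrence itself, the long-tail behavior observed in Figure 4.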
+ +# 6 Related Work + +Our work draws from several strands of related research. First, our motivation for this work is rooted in early research exploring the global scope of information across an entire document in making token-level decisions. Chieu and Ng (2002) present one of the earliest examples of this for NER, employing features scoped over both the local token context and the broader type context in a log-linear classifier. Our use of attention in building a representation of a token that is informed by other instances of the same type is likewise influenced by work on Skip-Chain CRFs (Sutton and McCallum, 2004), which explicitly model the label dependencies between words of the same type, including for the task of NER (Liu et al., 2010). + +Second, automatically retrieving relevant context has been shown to improve accuracy across a variety of NLP tasks. Searching for the $k$ most similar context sequences to a given target has been explored for language model pretraining (Gururangan et al., 2020), training (Kaiser et al., 2017; Lample et al., 2019), and inference (Khandelwal et al., 2020); incorporating shared span representations linked through coreference has also been shown to help in multi-task learning (Luan et al., 2018). Recently, Guu et al. (2020) introduced a neural knowledge retriever for open-domain question answering, trained to retrieve the $k$ most relevant documents during all of pretraining, finetuning, and inference. Though named-entity masking had previously been shown not to improve standard BERT pretraining (Joshi et al., 2020), Guu et al. (2020) find that it significantly improves retrieval-augmented pretraining. Most prior work has computed similarity in embedding space, using either model-internal representations (Khandelwal et al., 2020; Guu et al., 2020) or lightweight sentence encoders (Gururangan et al., 2019). Instead, we adopt word-type identity match as a simpler, yet effective heuristic. 
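The word-type identity heuristic contrasted here with embedding-space retrieval amounts to a simple inverted index over token positions. The sketch below is our own illustration (names and interface hypothetical), showing how other occurrences of a target word could be gathered, nearest first:

```python
from collections import defaultdict

def build_occurrence_index(tokens):
    """Map each word type to the positions where it occurs in a document."""
    index = defaultdict(list)
    for pos, tok in enumerate(tokens):
        index[tok].append(pos)
    return index

def context_occurrences(index, word, target_pos, k):
    """Return up to k other occurrences of `word`, nearest to `target_pos` first."""
    positions = [p for p in index[word] if p != target_pos]
    positions.sort(key=lambda p: abs(p - target_pos))
    return positions[:k]
```

Unlike nearest-neighbor search in embedding space, this lookup is exact and linear-time in document length, which is what makes whole-document context tractable.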
+ +Finally, self-supervised pretraining within relevant domain and/or task data has been widely shown to be beneficial for downstream task accuracy (Gururangan et al., 2020; Han and Eisenstein, 2019; Beltagy et al., 2019; Lee et al., 2020), with applications generally focused on transfer representation learning. Gururangan et al. (2020) additionally investigate human-curated task-adaptive pretraining—comparable to our long-document settings—in which labeled annotations are drawn from a larger pool of unlabeled texts. + +# 7 Conclusion + +We present in this work a new method for reasoning over the context of long documents by attending over representations of identical word types when generating a representation for a token in sequence labeling tasks like NER. We show that when comparing equivalently parameterized models, incorporating attention over the entire document context leads to performance gains over models that lack that contextual mechanism; further, the gains are asymmetric, with a substantial increase in accuracy for important entities within a document (defined as those with high TF-IDF scores). In the context of long documents, our approach presents a novel alternative to established methods such as long sequence modeling and task-adaptive pretraining. + +Our work's main contribution is a computationally tractable method for attention in long documents, employing exact word match as a complexity-reducing heuristic. Though our attention mechanism is ostensibly simple, Doc-ARC's strong performance in comparison to non-contextual baselines demonstrates both the value of the exact-match heuristic and the general utility of our framework. 
+ +This work leaves open several natural directions for future research, including incorporating document attention within a fully trainable task-tuned BERT model, and broadening the focus of attention beyond identical word types to words that bear other forms of similarity (such as similarity in subword morphology and meaning). Code and data to support this work can be found at https://github.com/mjoerke/Doc-ARC. + +# Acknowledgments + +Many thanks to the anonymous reviewers for their feedback. The research reported in this article was supported by funding from the National Science Foundation (IIS-1813470) and the National Endowment for the Humanities (HAA-256044-17), and by resources provided by NVIDIA and Berkeley Research Computing. + +# References + +David Bamman, Sejal Popat, and Sheng Shen. 2019. An annotated dataset of literary entities. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2138-2144. + +Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scibert: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3606-3611. +Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv:2004.05150. +Hai Leong Chieu and Hwee Tou Ng. 2002. Named entity recognition: A maximum entropy approach using global information. In COLING 2002: The 19th International Conference on Computational Linguistics. +Nigel Collier and Jin-Dong Kim. 2004. Introduction to the bio-entity recognition task at JNLPBA. In Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP), pages 73-78, Geneva, Switzerland. COLING. +Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov. 
2019. Transformer-xl: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978-2988. +Matthew J. Denny and Arthur Spirling. 2018. Text preprocessing for unsupervised learning: Why it matters, when it misleads, and what to do about it. *Political Analysis*, 26(2):168-189. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, and Noah A Smith. 2019. Show your work: Improved reporting of experimental results. arXiv preprint arXiv:1909.03004. +E.M. Forster. 1908. A Room with a View. Edward Arnold. +Suchin Gururangan, Tam Dang, Dallas Card, and Noah A Smith. 2019. Variational pretraining for semi-supervised text classification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5880-5894. +Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. + +Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Realm: Retrievalaugmented language model pre-training. arXiv preprint arXiv:2002.08909. +Xiaochuang Han and Jacob Eisenstein. 2019. Unsupervised domain adaptation of contextualized embeddings for sequence labeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4229-4239. 
+Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77. +David Jurgens, Srijan Kumar, Raine Hoover, Dan McFarland, and Dan Jurafsky. 2018. Measuring the evolution of a scientific field through citation frames. Transactions of the Association for Computational Linguistics, 6:391-406. +Lukasz Kaiser, Ofir Nachum, Aurko Roy, and Samy Bengio. 2017. Learning to remember rare events. ArXiv, abs/1703.03129. +Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through Memorization: Nearest Neighbor Language Models. In International Conference on Learning Representations (ICLR). +J-D Kim, Tomoko Ohta, Yuka Tateisi, and Jun'ichi Tsujii. 2003. Genia corpus—a semantically annotated corpus for bio-textmining. Bioinformatics, 19(suppl_1):i180-i182. +Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficient transformer. In International Conference on Learning Representations. +Guillaume Lample, Alexandre Sablayrolles, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2019. Large memory layers with product keys. In Advances in Neural Information Processing Systems, pages 8546-8557. +Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. +Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240. +Kenton Lee, Luheng He, Mike Lewis, and Luke Zettle-moyer. 2017. End-to-end neural coreference resolution. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. + +Jingchen Liu, Minlie Huang, and Xiaoyan Zhu. 2010. 
Recognizing biomedical named entities using skipchain conditional random fields. In Proceedings of the 2010 Workshop on Biomedical Natural Language Processing, pages 10-18, Uppsala, Sweden. Association for Computational Linguistics. +Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3219-3232, Brussels, Belgium. Association for Computational Linguistics. +Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT '11, pages 142-150, Stroudsburg, PA, USA. Association for Computational Linguistics. +Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330. +Stefano Menini, Federico Nanni, Simone Paolo Ponzetto, and Sara Tonelli. 2017. Topic-based agreement and disagreement in US electoral manifestos. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2938-2944, Copenhagen, Denmark. Association for Computational Linguistics. +Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics. +Andrew Piper. 2018. Enumerations. University of Chicago Press. 
+Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Björkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using OntoNotes. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 143-152, Sofia, Bulgaria. Association for Computational Linguistics. +Jack W Rae, Anna Potapenko, Siddhant M Jayakumar, and Timothy P Lillicrap. 2019. Compressive transformers for long-range sequence modelling. arXiv preprint arXiv:1911.05507. + +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. +Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784-789, Melbourne, Australia. Association for Computational Linguistics. +Charles Sutton and Andrew McCallum. 2004. Collective segmentation and labeling of distant entities in information extraction. In ICML Workshop on Statistical Relational Learning and Its Connections to Other Fields. +Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-read students learn better: On the importance of pre-training compact models. +Ted Underwood. 2019. Distant Horizons: Digital Evidence and Literary Change. University of Chicago Press. +Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. ACE 2005 multilingual training corpus. LDC. +Ralph Weischedel, Sameer Pradhan, Lance Ramshaw, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Nianwen Xue, Martha Palmer, Jena D. Hwang, Claire Bonial, Jinho Choi, Aous Mansouri, Maha Foster, Abdel aati Hawwary, Mitchell Marcus, Ann Taylor, Craig Greenberg, Eduard Hovy, Robert Belvin, and Ann Houston. 2012. Ontonotes release 5.0. 
+ +# A Appendix + +Following Dodge et al. (2019), we report our computing infrastructure (A.1), hyperparameter details (A.2), running times (A.3), and development set results (A.4) to foster reproducible results. Our code and datasets are available at https://github.com/mjoerke/Doc-ARC. + +# A.1 Computing Infrastructure + +Each Doc-ARC model and baseline comparison was trained on a single NVIDIA Tesla® K80 GPU with 12GB GPU memory. Task-adaptive pretraining was performed on a Google Cloud v2-8 TPU. + +# A.2 Hyperparameters + +
| Parameter | Value(s) |
| --- | --- |
| Epochs | 30 |
| Patience | 3 |
| Batch Size | 16 |
| Learning Rate | 0.001 |
| K (Attention Width) | [2, 5, 10, 15, 25, 35, 50] |
| H (LSTM Hidden) | 256 |
| Trainable Parameters | 9.5M |
| Total Parameters | 118M |
+ +Static Doc-ARC. We list hyperparameter details for static Doc-ARC results (Table 2) in Table 6. We perform hyperparameter tuning over $K$ only, choosing the optimal $K$ via mean dev $F_{1}$ across 3 trials. The final values of $K$ for both $\mathrm{BERT}_{\mathrm{BASE}}$ and $\mathrm{BERT}_{\mathrm{TAPT}}$ are listed in Table 7. For static Doc-ARC with $\mathrm{BERT}_{\mathrm{TAPT}}$ , we performed tuning over $K \leq 25$ on LitBank and JNLPBA due to time constraints. Each BERT+LSTM baseline was trained with identical hyperparameters (except for $K$ , which does not apply). + +Table 6: Static Doc-ARC hyperparameters + +
| Dataset | Base Model | K |
| --- | --- | --- |
| LitBank | $\mathrm{BERT}_{\mathrm{BASE}}$ | 25 |
|  | $\mathrm{BERT}_{\mathrm{TAPT}}$ | 25 |
| JNLPBA | $\mathrm{BERT}_{\mathrm{BASE}}$ | 25 |
|  | $\mathrm{BERT}_{\mathrm{TAPT}}$ | 15 |
| OntoNotes$_{1000}$ | $\mathrm{BERT}_{\mathrm{BASE}}$ | 50 |
|  | $\mathrm{BERT}_{\mathrm{TAPT}}$ | 50 |
+ +Dynamic Doc-ARC. We list hyperparameter details for dynamic Doc-ARC results (Table 3) in Table 8. Hyperparameter tuning over $K$ was limited to $K \leq 10$ since this was the largest configuration that could be trained on a single GPU. We perform hyperparameter tuning over $K$ only, choosing the optimal $K$ via mean dev $F_{1}$ across 3 trials; all models had the best results for $K = 10$ . Each BERT+LSTM baseline was trained with identical hyperparameters (except for $K$ , which does not apply). + +Table 7: Optimal $K$ for each static Doc-ARC model + +
| Parameter | Value(s) |
| --- | --- |
| Epochs | 30 |
| Patience | 5 |
| Batch Size | 4 |
| Learning Rate | 0.0001 |
| K (Attention Width) | [2, 5, 10] |
| H (LSTM Hidden) | 128 |
| Total Parameters | 4.8M |
+ +Task-Adaptive Pretraining. We perform task-adaptive pretraining on full texts within the training set for 100 epochs. Pretraining was performed using Google's BERT pretraining code $^{8}$ . Hyperparameters for pretraining are listed in Table 9. + +Table 8: Dynamic Document Attention + +
| Parameter | Value |
| --- | --- |
| Epochs | 100 |
| Learning Rate | 2e-5 |
| Batch Size | 32 |
| Max Sequence Length | 128 |
| Whole Word Masking | True |
| Masking Probability | 0.15 |
| Short Sequence Probability | 0 |
| Next-Sequence Prediction | True |
| Warmup | 6% |
+ +Task Finetuning. Finetuning hyperparameters for $\mathrm{BERT}_{\mathrm{BASE}}$ results (Table 4) and $\mathrm{BERT}_{\mathrm{TINY}}$ (Table 3) are listed in Table 10. + +Table 9: Task-Adaptive Pretraining (TAPT) hyperparameters + +
| Parameter | Value |
| --- | --- |
| Epochs | 10 |
| Patience | 3 |
| Learning Rate | 2e-5 |
| Batch Size | 16 |
| $\mathrm{BERT}_{\mathrm{BASE}}$ Parameters | 108M |
| $\mathrm{BERT}_{\mathrm{TINY}}$ Parameters | 4.4M |
+ +Table 10: Task finetuning hyperparameters + +# A.3 Running Times. + +For each reported result, we list average training times in Table 11. Note that task-adaptive pretraining was only run once for each dataset. + +# A.4 Development Set Results. + +We reproduce each of the tables in the main paper with development set results. Table 12 lists dev results for Table 2, Table 13 lists dev results for Table 3, and Table 14 lists dev results for Table 4. + +
| Model | Dataset | Training Time (Hours:Min) |
| --- | --- | --- |
| Static Doc-ARC | LitBank | 26:23 |
|  | JNLPBA | 24:35 |
|  | OntoNotes$_{1000}$ | 21:27 |
| Static BERT+LSTM baseline | LitBank | 00:21 |
|  | JNLPBA | 00:25 |
|  | OntoNotes$_{1000}$ | 02:04 |
| Dynamic Doc-ARC | LitBank | 01:51 |
|  | JNLPBA | 02:14 |
|  | OntoNotes$_{1000}$ | 07:31 |
| Dynamic BERT+LSTM baseline | LitBank | 00:07 |
|  | JNLPBA | 00:09 |
|  | OntoNotes$_{1000}$ | 00:42 |
| $\mathrm{BERT}_{\mathrm{BASE}}$ finetuning | LitBank | 00:27 |
|  | JNLPBA | 00:26 |
|  | OntoNotes$_{1000}$ | 02:30 |
| $\mathrm{BERT}_{\mathrm{TINY}}$ finetuning | LitBank | 00:01 |
|  | JNLPBA | 00:02 |
|  | OntoNotes$_{1000}$ | 00:08 |
| Task-adaptive pretraining | LitBank | 04:48 |
|  | JNLPBA | 04:27 |
|  | OntoNotes$_{1000}$ | 00:28 |
+ +Table 11: Average training times for each model. + +
| Base Model | LitBank Doc-ARC | LitBank BERT+LSTM | JNLPBA Doc-ARC | JNLPBA BERT+LSTM | OntoNotes$_{1000}$ Doc-ARC | OntoNotes$_{1000}$ BERT+LSTM |
| --- | --- | --- | --- | --- | --- | --- |
| $\mathrm{BERT}_{\mathrm{BASE}}$ | 73.34 (0.77) | 71.98 (0.93) | 75.88 (0.37) | 73.93 (0.30) | 85.45 (0.20) | 83.64 (0.10) |
| $\mathrm{BERT}_{\mathrm{TAPT}}$ | 71.50 (0.29) | 68.83 (0.71) | 77.11 (0.44) | 74.93 (0.28) | 85.00 (0.19) | 83.33 (0.32) |
+ +Table 12: Static Doc-ARC results on the development set. We report mean (SD) $F_{1}$ scores across 5 runs. + +
| Dataset | Doc-ARC | $\mathrm{BERT}_{\mathrm{TINY}}$ | $\mathrm{BERT}_{\mathrm{TINY}}$ + LSTM |
| --- | --- | --- | --- |
| LitBank | 56.51 (0.92) | 45.80 (1.35) | 53.52 (0.75) |
| JNLPBA | 72.04 (0.29) | 61.03 (0.88) | 68.64 (0.24) |
| OntoNotes$_{1000}$ | 74.11 (0.41) | 70.21 (0.30) | 72.98 (0.42) |
+ +Table 13: Dynamic Doc-ARC results on the development set. We report mean (SD) $F_{1}$ scores across 5 runs. + +
| Dataset | $\mathrm{BERT}_{\mathrm{BASE}}$ | $\mathrm{BERT}_{\mathrm{TAPT}}$ |
| --- | --- | --- |
| LitBank | 73.41 (0.95) | 71.49 (0.40) |
| JNLPBA | 74.91 (0.78) | 75.96 (0.41) |
| OntoNotes$_{1000}$ | 85.90 (0.36) | 86.04 (0.20) |
+ +Table 14: BERT finetuning results on the development set. We report mean (SD) $F_{1}$ scores across 5 runs. \ No newline at end of file diff --git a/attendingtolongdistancedocumentcontextforsequencelabeling/images.zip b/attendingtolongdistancedocumentcontextforsequencelabeling/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..50685af95f359ab84a28e2fc6cf1fee7ee1a4fc4 --- /dev/null +++ b/attendingtolongdistancedocumentcontextforsequencelabeling/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:93c2d97f443b44ab5db365005c908ed1fcba33e71e8961d4c9b8e4d173c90457 +size 619668 diff --git a/attendingtolongdistancedocumentcontextforsequencelabeling/layout.json b/attendingtolongdistancedocumentcontextforsequencelabeling/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..349e4675722cef6efde4efd4d96192dc9fa0dce8 --- /dev/null +++ b/attendingtolongdistancedocumentcontextforsequencelabeling/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:96804b72f5a67dfae00d43bf8dd0bb965dac42d0083fad35534216af23b5f316 +size 476624 diff --git a/autoeterautomatedentitytyperepresentationforknowledgegraphembedding/6280cca5-31e9-4b7b-8630-5e5dae902722_content_list.json b/autoeterautomatedentitytyperepresentationforknowledgegraphembedding/6280cca5-31e9-4b7b-8630-5e5dae902722_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..f7669ea05992a8cb6630ef017c472cc11e38de57 --- /dev/null +++ b/autoeterautomatedentitytyperepresentationforknowledgegraphembedding/6280cca5-31e9-4b7b-8630-5e5dae902722_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a29b771c4eb468a8fe225a4d0071b232659f66300ed48397c0919958b120d8aa +size 78347 diff --git a/autoeterautomatedentitytyperepresentationforknowledgegraphembedding/6280cca5-31e9-4b7b-8630-5e5dae902722_model.json 
b/autoeterautomatedentitytyperepresentationforknowledgegraphembedding/6280cca5-31e9-4b7b-8630-5e5dae902722_model.json new file mode 100644 index 0000000000000000000000000000000000000000..e70fedbfc84b06d02d1fee90265e718b5cf8f103 --- /dev/null +++ b/autoeterautomatedentitytyperepresentationforknowledgegraphembedding/6280cca5-31e9-4b7b-8630-5e5dae902722_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4457bd2c26a1cd403c35810b87d65c4a202ae619ad4ac0ec6d13e2bb872688c6 +size 93773 diff --git a/autoeterautomatedentitytyperepresentationforknowledgegraphembedding/6280cca5-31e9-4b7b-8630-5e5dae902722_origin.pdf b/autoeterautomatedentitytyperepresentationforknowledgegraphembedding/6280cca5-31e9-4b7b-8630-5e5dae902722_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..31d7884c355c86baa8538ba865a99f483046896c --- /dev/null +++ b/autoeterautomatedentitytyperepresentationforknowledgegraphembedding/6280cca5-31e9-4b7b-8630-5e5dae902722_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ebdbdc16323790e4fa825438f32c0c5435dd5a4bb25af1d5b68d0ae319480382 +size 767619 diff --git a/autoeterautomatedentitytyperepresentationforknowledgegraphembedding/full.md b/autoeterautomatedentitytyperepresentationforknowledgegraphembedding/full.md new file mode 100644 index 0000000000000000000000000000000000000000..57b9f9e8f8593ba7918436d1ac858104fc99432f --- /dev/null +++ b/autoeterautomatedentitytyperepresentationforknowledgegraphembedding/full.md @@ -0,0 +1,380 @@ +# AutoETER: Automated Entity Type Representation for Knowledge Graph Embedding + +Guanglin Niu $^{1}$ , Bo Li $^{1,2}$ , Yongfei Zhang $^{1,2,3*}$ , Shiliang Pu $^{4}$ , Jingyang Li $^{1}$ + +1Beijing Key Laboratory of Digital Media, School of Computer Science and Engineering, Beihang University, Beijing 100191, China + +$^{2}$ State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China $^{3}$ 
Pengcheng Laboratory, Shenzhen 518055, China + +$^{4}$ Hikvision Research Institute, Hangzhou 311500, China + +{beihangngl, boli, yfzhang, lijingyang}@buaa.edu.cn, pushiliang.hri@hikvision.com + +# Abstract + +Recent advances in Knowledge Graph Embedding (KGE) allow for representing entities and relations in continuous vector spaces. Some traditional KGE models leverage additional type information to improve entity representations, but they either rely entirely on explicit types or neglect the diverse type representations specific to various relations. Besides, none of the existing methods is capable of inferring all the relation patterns of symmetry, inversion and composition as well as the complex properties of 1-N, N-1 and N-N relations, simultaneously. To explore the type information for any KG, we develop a novel KGE framework with Automated Entity TypE Representation (AutoETER), which learns the latent type embedding of each entity by regarding each relation as a translation operation between the types of two entities with a relation-aware projection mechanism. Particularly, our designed automated type representation learning mechanism is a pluggable module which can be easily incorporated with any KGE model. Besides, our approach could model and infer all the relation patterns and complex relations. Experiments on four datasets demonstrate the superior performance of our model compared to state-of-the-art baselines on link prediction tasks, and the visualization of type clustering clearly explains the type embeddings and verifies the effectiveness of our model. + +# 1 Introduction + +In recent years, the knowledge graph (KG) has been viewed as a powerful technique for recognition systems and has become prevalent in many fields such as E-commerce, intelligent healthcare, and public security. Knowledge graphs collect and store a great deal of commonsense or domain knowledge in factual triples composed of entity pairs with their relations. 
The existing large-scale KGs such as + +![](images/cced92d42f5d5bff7eb67905d25e2b8bb1eaa71af9fb0448e48fb33dc51f791a.jpg) +Figure 1: An actual example of the entity-specific triples and the type-specific triples with relation-aware projection mechanism. Will Smith has multiple types such as Singer and Actor, but only the type Singer should be focused on for the relation SangSong. + +Freebase (Bollacker et al., 2008), WordNet (Miller, 1995), YAGO (Suchanek et al., 2007) have shown their validity in various applications, including question answering (Diefenbach et al., 2018), dialogue generation (He et al., 2017) and recommender systems (Wang et al., 2019). + +However, the existing KGs are inevitably incomplete whether they are constructed manually or automatically, limiting their effectiveness in downstream applications. Some existing KG inference approaches such as the inductive logic programming algorithm (Ray, 2009), the Markov logic network based method (Qu and Tang, 2019) and the reinforcement learning-based approach (Lin et al., 2018) try to predict entities or relations in KGs but suffer from limited performance and low efficiency. Compared to the above approaches, knowledge graph embedding models could learn the latent representations of the entities and relations and show the best performance on the KG completion task. However, most of the KG embedding models such as TransE (Bordes et al., 2013) and its variants TransH (Wang et al., 2014), TransR (Lin et al., 2015b) learn KG embeddings relying on single triples, which simply exploit the structure information implied in KGs. + +Entity types define categories of entities that are useful for enhancing the representation of entities. In many type-embodied models such as TKRL (Xie et al., 2016) and TransT (Ma et al., 2017), the explicit types are necessary while some KGs (e.g., WordNet) lack them, which limits the versatility of these approaches. 
JOIE (Hao et al., 2019) jointly encodes both the ontology and instance views of KGs. Nevertheless, the concepts in an ontology represent the general categories of entities and cannot reflect the specific types that are primarily associated with different relations. Jain et al. (2018) learn type embeddings by defining the compatibility between an entity type and a relation, but this ignores the semantics implied in a whole triple, i.e., a relation jointly with its two linked entity types. Moreover, all previous type-based approaches neglect the diversity of entity type representations specific to various relations. As Figure 1 shows, in contrast to previous research on entity types, triples at the entity level can be extended to triples at the type level: each entity has multiple types, and a different type should be focused on for each specific relation.

Additionally, some models embed entities and relations into a complex vector space instead of the frequently-used real space to improve the capability of representation learning, including ComplEx (Trouillon et al., 2016) and RotatE (Sun et al., 2019). Nevertheless, none of the existing embedding models can simultaneously model and infer all the relation patterns and the complex 1-N, N-1 and N-N relations.

To conduct KG inference from the perspectives of both entity-specific triples and type-specific triples on any KG, whether or not explicit types exist, we propose AutoETER to automatically learn the diverse type representations of each entity with respect to its various associated relations. Intuitively, the high-dimensional entity embeddings capture the individual features that distinguish entities, whereas the low-dimensional type embeddings capture the general features that reveal the similarity of entities according to their categories.
Inspired by the translation-based principle of TransE, we expect that, given a head entity and its associated relation, the tail entity's type representation can be obtained as $\mathbf{type}_{\mathrm{head}} + \mathbf{relation} = \mathbf{type}_{\mathrm{tail}}$. In particular, the latent type embeddings of two head (or two tail) entities focused on the same relation should be close to each other, since they imply the same type. Furthermore, the embeddings of the entity-specific triples and the type-specific triples are capable of modeling and inferring symmetry, inversion, composition, and the complex 1-N, N-1 and N-N relations.

The contributions of this work are summarized as follows:

- We model type representations to enrich the general features of entities. A novel model, AutoETER, is proposed to learn the embeddings of entities, relations and entity types from entity-specific triples and type-specific triples without explicit types in KGs. Furthermore, the type embeddings can be combined with the entity embeddings for inference.
- To the best of our knowledge, we are the first to model and infer all the relation patterns, including symmetry, inversion and composition, as well as the complex 1-N, N-1 and N-N relations for KG inference.
- We conduct extensive link prediction experiments on four real-world benchmark datasets. The evaluation results demonstrate the superiority of our proposed model over other state-of-the-art algorithms. The visualization of clustered type embeddings validates the effectiveness of automatically representing entity types with relation-aware projection.

# 2 Related Work

# 2.1 Knowledge Graph Inference

To address the inherent incompleteness of KGs, multiple KG inference methods have been investigated and have made significant progress. Traditional research generates logic rules via inductive logic programming, such as HAIL (Ray, 2009), to predict the missing entities in KGs.
However, employing logic rules in KG inference limits generalization performance. The path ranking algorithm (PRA) (Lao et al., 2011) extracts relational path features via random walks to infer the relationships between entity pairs. DeepPath (Lin et al., 2018) is a foundational approach that formulates multi-hop reasoning as a Markov decision process and leverages reinforcement learning (RL) to find paths in KGs. However, RL-based multi-hop KG reasoning approaches spend considerable time searching for paths.

# 2.2 KG Embedding Models

Various KG embedding models have been extensively developed for KG inference in recent years (Wang et al., 2017). KGE models are capable of capturing latent representations of entities and relations in KGs independently of hand-crafted rules, and they have shown a strong capacity for efficient computation in many knowledge-aware applications (Ji et al., 2020). TransE (Bordes et al., 2013) is the foundational translation-based method, which regards a relation as a translation operation from the head entity to the tail entity. Along with TransE, multiple variants have been proposed to improve the embedding performance of KGs (Niu et al., 2020; Yuan et al., 2019; Xiao et al., 2016). ConvE (Dettmers et al., 2018) is a typical method representing entities and relations based on convolutional neural networks (CNN). Another category of KG embedding contains many tensor decomposition models, including DistMult (Yang et al., 2015). In particular, ComplEx (Trouillon et al., 2016) extends DistMult to learn the KG embeddings in complex space. RotatE (Sun et al., 2019) defines a relation as a rotation from source to target entities in a complex space but cannot infer the complex 1-N, N-1 and N-N relations. Moreover, all of the above approaches depend purely on the triples directly observed in KGs.
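For concreteness, the two scoring styles discussed in this subsection — translation (TransE) and rotation in complex space (RotatE) — can be sketched as follows. This is a minimal NumPy illustration with toy embeddings, not the original implementations:

```python
import numpy as np

def transe_score(h, r, t):
    # TransE: a relation is a translation from head to tail,
    # so a plausible triple has small ||h + r - t||.
    return np.linalg.norm(h + r - t)

def rotate_score(h, phase, t):
    # RotatE: a relation is a rotation in complex space; each
    # element of r has unit modulus, r_i = exp(i * theta_i).
    r = np.exp(1j * phase)
    return np.linalg.norm(h * r - t)

# A perfect translation or rotation yields a score of zero.
h = np.array([0.3, -0.2, 0.5])
r = np.array([0.1, 0.4, -0.1])
assert np.isclose(transe_score(h, r, h + r), 0.0)

hc = np.array([1.0 + 1.0j, 2.0 - 1.0j])
phase = np.array([np.pi / 2, np.pi])
assert np.isclose(rotate_score(hc, phase, hc * np.exp(1j * phase)), 0.0)
```

The unit-modulus constraint on each $r_i$ is exactly what lets RotatE handle symmetry ($r_i = \pm 1$) and inversion (conjugate rotation), a property AutoETER inherits in §3.1.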
# 2.3 Models Incorporating Entity Types

To further improve the performance of KG embedding, various kinds of auxiliary information have been introduced, such as paths (Lin et al., 2015a; Niu et al., 2020), graph structure (Michael et al., 2018) and entity types (Xie et al., 2016; Krompaß et al., 2015; Ma et al., 2017). Among such information, entity types contain less noise and are well suited to providing more general semantics for each entity. TKRL (Xie et al., 2016) projects each entity with type-specific projection matrices. TransT (Ma et al., 2017) measures the semantic similarity of entities and relations utilizing types. However, all the above type-based KG embedding models require the supervision of explicit types and cannot work on KGs without them. JOIE (Hao et al., 2019) links entities to their concepts in an ontology to jointly embed the instance-view graph and the ontology-view graph, but the concepts in ontologies provide information that is too broad, or even noisy, to represent the specific and precise types of each entity. (Jain et al., 2018) introduces the compatibility between the embeddings of an entity type and a relation for link prediction. Still, all existing type-enhanced models neglect the fact that an entity's diverse types should be attended to when the entity is associated with various relations. Meanwhile, the association property implied in the embeddings of type-specific triples has not been well modeled.

# 3 AutoETER: KGE with Automated Entity Type Representation

To cope with the above limitations, we describe the proposed model AutoETER, which aims to automatically learn a variety of type representations semantically compatible with various relations and to infer all the relation patterns and complex relations. As Figure 2 shows, we first embed the entities and relations into complex space via the entity-specific triple encoder with a hyper-plane projection strategy (§3.1).
Additionally, the type-specific triple encoder is developed to learn type embeddings with a relation-aware projection mechanism (§3.2). Meanwhile, the type embeddings are constrained by their similarity derived from the associated relations (§3.3). Afterward, we propose the overall optimization objective combining the entity-specific triple and type-specific triple representations with the similarity constraint of the type embeddings (§3.4).

# 3.1 Entity-specific Triple Encoder

We embed the entities and relations into the complex space and regard a relation as a rotation operation from the head entity to the tail entity, as in RotatE (Sun et al., 2019). To further model and infer the complex relations such as 1-N, N-1 and N-N, we project entities onto their associated relation hyper-planes, so that each entity has a different representation with respect to each specific relation. For an entity-specific triple $(h,r,t)$, the energy function $E_{1}(h,r,t)$ is defined as

$$
\mathbf{e}_{h,r} = \mathbf{h} - \mathbf{w}_r^{\top}\mathbf{h}\,\mathbf{w}_r, \quad \mathbf{e}_{t,r} = \mathbf{t} - \mathbf{w}_r^{\top}\mathbf{t}\,\mathbf{w}_r \tag{1}
$$

$$
E_{1}(h, r, t) = \left\| \mathbf{e}_{h,r} \circ \mathbf{r} - \mathbf{e}_{t,r} \right\| \tag{2}
$$

where $\mathbf{h} \in \mathbb{C}^k$, $\mathbf{t} \in \mathbb{C}^k$ and $\mathbf{r} \in \mathbb{C}^k$ are the embeddings of head entity $h$, tail entity $t$ and relation $r$ in the complex space with dimension $k$. $\mathbf{w}_r \in \mathbb{R}^k$ denotes the normal vector of the hyper-plane associated with the relation $r$. $\mathbf{e}_{h,r} \in \mathbb{C}^k$ and $\mathbf{e}_{t,r} \in \mathbb{C}^k$ are the embeddings of $h$ and $t$ projected onto the hyper-plane of $\mathbf{w}_r$. $\circ$ is the Hadamard product.

![](images/fce769814b40377012e4067aa2f70927d4b926352fef45f83c59c9369b3ff506.jpg)
Figure 2: The architecture of AutoETER.
Given a triple fact $(h,r,t)$, $\mathbf{e}_{h,r}$ and $\mathbf{e}_{t,r}$ are the projected entity embeddings in the hyper-plane of relation $r$, and $\mathbf{y}_{h,r}$ and $\mathbf{y}_{t,r}$ are the type embeddings focusing on relation $r$. We expect the embeddings of an entity-specific triple to satisfy a rotation operation, and those of a type-specific triple to satisfy a translation operation, from head to tail entities. Type embeddings associated with the same relation $r$ are constrained to be close to each other, where $\gamma$ is the margin enforced between two clusters of type embeddings related to different relations.

Owing to the embeddings of entity-specific triples, our model can infer all the relation patterns via the rotation operation from head to tail entities, as in RotatE. In particular, $\mathbf{r}$ is constrained so that $|r_i| = 1$, $i = 1,2,\ldots,k$, for inferring the symmetric relation pattern, and at least one element of $\mathbf{r}$ equals $-1$ to ensure diverse representations of head and tail entities. Moreover, the projection operation in Eq. 1 enables our model to infer the complex relations via the various representations of entities with respect to different relations.

# 3.2 Type-specific Triple Encoder

Given an entity $e$ and its associated relation $r$ in a triple, we aim to learn the type and relation embeddings with a relation-aware projection mechanism that extracts the most important information of the type representations:

$$
f_{att}(e, r) = \mathbf{M}_{r} \mathbf{y}_{e} \tag{3}
$$

where $\mathbf{y}_e\in \mathbb{R}^d$ denotes the type embedding of entity $e$ in the real space with dimension $d$, and $\mathbf{M}_r\in \mathbb{R}^{d\times d}$ is the projection weight matrix associated with the relation $r$, which automatically selects the latent information of each type embedding most relevant to the relation $r$.

With the relation-aware projection defined in Eq.
3, the energy function for type-specific triples is defined as

$$
\begin{array}{l} \mathbf{y}_{h,r} = f_{\text{att}}(h, r), \quad \mathbf{y}_{t,r} = f_{\text{att}}(t, r) \tag{4} \\ E_{2}(h, r, t) = \left\| \mathbf{y}_{h,r} + \mathbf{y}_{r} - \mathbf{y}_{t,r} \right\| \\ \end{array}
$$

where $\mathbf{y}_{h,r}\in \mathbb{R}^d$ and $\mathbf{y}_{t,r}\in \mathbb{R}^d$ are the type embeddings of entities $h$ and $t$, both focusing on the relation $r$, and $\mathbf{y}_r\in \mathbb{R}^d$ denotes the embedding of the relation $r$ in the type-specific triple. In terms of the energy function in Eq. 4, we expect that

$$
\mathbf{y}_{h,r} + \mathbf{y}_{r} = \mathbf{y}_{t,r} \tag{5}
$$

Furthermore, with the type and relation embeddings learned in the real space, our model costs fewer parameters and can model and infer all the relation patterns, including symmetry (Lemma 1), inversion (Lemma 2) and composition (Lemma 3), as well as the complex properties of relations:

Lemma 1. Our model can infer the relation pattern of symmetry by type-specific triple embeddings.

Proof. If a relation $r$ is symmetric, the two triples $(h, r, t)$ and $(t, r, h)$ both hold. From Eq. 5, the correlations among the embeddings of types and relations can be obtained as:

$$
\mathbf{y}_{h,r} + \mathbf{y}_{r} = \mathbf{y}_{t,r}, \quad \mathbf{y}_{t,r} + \mathbf{y}_{r} = \mathbf{y}_{h,r} \tag{6}
$$

From Eq. 6, we can further derive that

$$
\mathbf{y}_{h,r} = \mathbf{y}_{t,r}, \quad \mathbf{y}_{r} = \mathbf{0} \tag{7}
$$

That is, the embedding of a symmetric relation should be the zero vector, and the type embeddings of the head and tail entities should be equal. These results are reasonable because the focused types of the two entities linked by a symmetric relation are supposed to be the same.

Lemma 2.
Our model is able to infer the relation pattern of inversion by type-specific triple embeddings.

Proof. For two inverse relations $r_1$ and $r_2$, the two triples $(h, r_1, t)$ and $(t, r_2, h)$ both hold. From Eqs. 3, 4 and 5, we obtain

$$
\mathbf{M}_{r_1} \mathbf{y}_{h} + \mathbf{y}_{r_1} = \mathbf{M}_{r_1} \mathbf{y}_{t} \tag{8}
$$

$$
\mathbf{M}_{r_2} \mathbf{y}_{t} + \mathbf{y}_{r_2} = \mathbf{M}_{r_2} \mathbf{y}_{h} \tag{9}
$$

We can define a transform matrix $\mathbf{P} \in \mathbb{R}^{d \times d}$ that satisfies

$$
\mathbf{M}_{r_1} = \mathbf{P}\mathbf{M}_{r_2} \tag{10}
$$

Left-multiplying Eq. 9 by $\mathbf{P}$ and applying Eq. 10, Eq. 9 can be rewritten as

$$
\mathbf{M}_{r_1} \mathbf{y}_{t} + \mathbf{P}\mathbf{y}_{r_2} = \mathbf{M}_{r_1} \mathbf{y}_{h} \tag{11}
$$

Then, combining Eq. 11 with Eq. 8 yields

$$
\mathbf{y}_{r_1} = -\mathbf{P}\mathbf{y}_{r_2} \tag{12}
$$

Thus we can model and infer inverse relations with relation embeddings satisfying the relationship in Eq. 12.

Lemma 3. Our model is capable of inferring the relations of the composition pattern by type-specific triple embeddings.

Proof. For the composition pattern $r_3(a,c)\Leftarrow r_1(a,b)\land r_2(b,c)$, the corresponding triples $(a,r_1,b)$, $(b,r_2,c)$ and $(a,r_3,c)$ hold. Meanwhile, considering Eqs.
3, 4 and 5, we obtain

$$
\mathbf{M}_{r_1} \mathbf{y}_{a} + \mathbf{y}_{r_1} = \mathbf{M}_{r_1} \mathbf{y}_{b} \tag{13}
$$

$$
\mathbf{M}_{r_2} \mathbf{y}_{b} + \mathbf{y}_{r_2} = \mathbf{M}_{r_2} \mathbf{y}_{c} \tag{14}
$$

$$
\mathbf{M}_{r_3} \mathbf{y}_{a} + \mathbf{y}_{r_3} = \mathbf{M}_{r_3} \mathbf{y}_{c} \tag{15}
$$

We can define two transform matrices $\mathbf{P} \in \mathbb{R}^{d \times d}$ and $\mathbf{Q} \in \mathbb{R}^{d \times d}$ that satisfy

$$
\mathbf{P}\mathbf{M}_{r_1} = \mathbf{M}_{r_3} \tag{16}
$$

$$
\mathbf{Q}\mathbf{M}_{r_2} = \mathbf{M}_{r_3} \tag{17}
$$

Left-multiplying Eq. 13 by $\mathbf{P}$ and Eq. 14 by $\mathbf{Q}$, and applying Eqs. 16 and 17, respectively, we derive

$$
\mathbf{M}_{r_3} \mathbf{y}_{a} + \mathbf{P}\mathbf{y}_{r_1} = \mathbf{M}_{r_3} \mathbf{y}_{b} \tag{18}
$$

$$
\mathbf{M}_{r_3} \mathbf{y}_{b} + \mathbf{Q}\mathbf{y}_{r_2} = \mathbf{M}_{r_3} \mathbf{y}_{c} \tag{19}
$$

Substituting Eq. 18 into Eq. 19, we obtain

$$
\mathbf{M}_{r_3} \mathbf{y}_{a} + \mathbf{P}\mathbf{y}_{r_1} + \mathbf{Q}\mathbf{y}_{r_2} = \mathbf{M}_{r_3} \mathbf{y}_{c} \tag{20}
$$

Combining Eqs. 15 and 20, we can model the correlation among the relation embeddings of the composition pattern as

$$
\mathbf{y}_{r_3} = \mathbf{P}\mathbf{y}_{r_1} + \mathbf{Q}\mathbf{y}_{r_2} \tag{21}
$$

Hence we can model and infer the relations of the composition pattern for type-specific triples with the relation embeddings as shown in Eq. 21.
$\square$

For inference on type-specific triples with relations of the complex properties 1-N, N-1 and N-N, we can exploit the various representations of an entity type associated with different relations, via the relation-aware projection mechanism defined in Eq. 3, to infer these relations.

# 3.3 Type Embeddings Similarity Constraint

In addition to learning type embeddings with the type-specific triple encoder (§3.2), the type representations should be constrained by the similarity between entity types. The type embeddings of the head entities of triples sharing the same relation should be close to each other (and the same holds for the type embeddings of the tail entities). Thus, for two triples with the same relation, we expect that

$$
\mathbf{y}_{h_1,r} = \mathbf{y}_{h_2,r}, \quad \mathbf{y}_{t_1,r} = \mathbf{y}_{t_2,r} \tag{22}
$$

where $\mathbf{y}_{h_1,r}$ and $\mathbf{y}_{h_2,r}$ are type embeddings of head entities, while $\mathbf{y}_{t_1,r}$ and $\mathbf{y}_{t_2,r}$ are type embeddings of tail entities. In particular, they all focus on the same relation $r$ through the relation-aware projection mechanism of Eq. 3.

Now, considering any two triples $(h_1,r_1,t_1)$ and $(h_2,r_2,t_2)$, we design the energy function evaluating the dissimilarity of their type embeddings as

$$
E_{3}\big((h_1,r_1,t_1),(h_2,r_2,t_2)\big) = \frac{1}{2}\big(\left\| \mathbf{y}_{h_1,r_1} - \mathbf{y}_{h_2,r_2} \right\| + \left\| \mathbf{y}_{t_1,r_1} - \mathbf{y}_{t_2,r_2} \right\|\big) \tag{23}
$$

where $\mathbf{y}_{h_1,r_1}$ and $\mathbf{y}_{h_2,r_2}$ are the two head entity type embeddings, $\mathbf{y}_{t_1,r_1}$ and $\mathbf{y}_{t_2,r_2}$ are the two tail entity type embeddings, and they are all associated with the relation $r_1$ or $r_2$.
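To make the three energies concrete, here is a minimal NumPy sketch of Eqs. 1-2 (entity-specific), Eqs. 3-4 (type-specific) and Eq. 23 (type similarity). All dimensions and values are toy choices for illustration, not the paper's implementation:

```python
import numpy as np

def project(x, w_r):
    # Eq. 1: project an embedding onto the hyper-plane with unit normal w_r
    return x - np.dot(w_r, x) * w_r

def E1(h, r, t, w_r):
    # Eq. 2: rotation energy of the projected complex embeddings
    return np.linalg.norm(project(h, w_r) * r - project(t, w_r))

def E2(y_h, y_r, y_t, M_r):
    # Eqs. 3-4: relation-aware projection plus translation energy
    return np.linalg.norm(M_r @ y_h + y_r - M_r @ y_t)

def E3(y_h1, y_t1, y_h2, y_t2):
    # Eq. 23: dissimilarity between the type embeddings of two triples
    return 0.5 * (np.linalg.norm(y_h1 - y_h2) + np.linalg.norm(y_t1 - y_t2))

rng = np.random.default_rng(0)
k, d = 4, 2
h = rng.normal(size=k) + 1j * rng.normal(size=k)
w = rng.normal(size=k)
w /= np.linalg.norm(w)                  # unit normal vector of the hyper-plane
r = np.exp(1j * rng.normal(size=k))     # |r_i| = 1, a pure rotation

# Projection onto the hyper-plane is idempotent.
assert np.allclose(project(project(h, w), w), project(h, w))
assert E1(h, r, h, w) >= 0.0

# Lemma 1 check: for a symmetric relation, y_r = 0 together with equal
# head/tail type embeddings gives zero type-translation energy.
y = rng.normal(size=d)
M = rng.normal(size=(d, d))
assert np.isclose(E2(y, np.zeros(d), y, M), 0.0)
assert np.isclose(E3(y, y, y, y), 0.0)
```

The idempotence check mirrors why the hyper-plane projection gives each entity a stable, relation-specific view, which is what the 1-N, N-1 and N-N arguments above rely on.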
Therefore, we expect the value obtained from Eq. 23 to be smaller when $r_1$ and $r_2$ are the same relation.

# 3.4 Optimization Objective

The designed entity-specific triple encoder, type-specific triple encoder and type representation similarity constraint can be trained as a unified end-to-end model. We optimize our model according to a three-component objective function:

$$
L = \sum_{(h, r, t) \in S} \left\{\sum_{(h', r, t') \in S'} \left\{L_{1} + \alpha_{1} L_{2} \right\} + \alpha_{2} L_{3} \right\} \tag{24}
$$

where $L_{1}$ and $L_{2}$ are two pair-wise loss functions corresponding to the entity-specific triple encoder and the type-specific triple encoder, respectively, and $L_{3}$ is a triplet loss function for constraining the type embeddings. $\alpha_{1}$ and $\alpha_{2}$ denote the weights of $L_{2}$ and $L_{3}$ that trade off the entity-specific triple, the type-specific triple and the type similarity constraint. $S$ contains all the triples in the training set, and $S'$ is the negative sample set generated by replacing the entities in $S$. Specifically, $L_{1}$, $L_{2}$ and $L_{3}$ are defined as

$$
L_{1} = -\log \sigma\left(\gamma_{1} - E_{1}(h, r, t)\right) - \log \sigma\left(E_{1}(h', r, t') - \gamma_{1}\right) \tag{25}
$$

$$
L_{2} = \max\left[0, E_{2}(h, r, t) + \gamma_{2} - E_{2}(h', r, t')\right] \tag{26}
$$

$$
L_{3} = \sum_{(h_p, r, t_p) \in Y} \sum_{(h_n, r', t_n) \in Y'} \max\left[0, E_{3}\big((h, r, t), (h_p, r, t_p)\big) + \gamma_{3} - E_{3}\big((h, r, t), (h_n, r', t_n)\big)\right] \tag{27}
$$

where $\gamma_{1}$, $\gamma_{2}$ and $\gamma_{3}$ denote the fixed margins in $L_{1}$, $L_{2}$ and $L_{3}$, respectively. In particular, $L_{3}$ can be viewed as a regularizer that constrains the entity type embeddings. $\sigma$ denotes the sigmoid function, and $\max[0, x]$ selects the larger of $0$ and $x$. In Eq. 27, the triple $(h,r,t)$ is regarded as the anchor instance, $(h_p,r,t_p)$ is a positive instance from the set $Y$ of other triples with the same relation $r$, and $(h_n,r',t_n)$ is any negative instance from the set $Y'$ of triples without the relation $r$. Besides, we employ self-adversarial sampling as in (Sun et al., 2019).

# 4 Experiment Results

In this section, we evaluate our model AutoETER on KG completion on four real-world benchmark datasets. Additionally, we visualize the clustering results of type embeddings to demonstrate the effectiveness of representing types automatically.
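Looking back at the pair-wise losses of Eqs. 25 and 26 in §3.4, they can be sketched as below. The energy values and margins are illustrative only, and the self-adversarial sampling of Sun et al. (2019) is omitted:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def l1_loss(e1_pos, e1_neg, gamma1):
    # Eq. 25: log-sigmoid loss on the entity-specific (rotation) energies
    return -np.log(sigmoid(gamma1 - e1_pos)) - np.log(sigmoid(e1_neg - gamma1))

def l2_loss(e2_pos, e2_neg, gamma2):
    # Eq. 26: pair-wise hinge loss on the type-specific (translation) energies
    return max(0.0, e2_pos + gamma2 - e2_neg)

# A well-separated pair (low positive energy, high negative energy)
# should cost less than the reversed, badly-separated pair.
good = l1_loss(1.0, 30.0, gamma1=22.0) + l2_loss(0.5, 20.0, gamma2=6.0)
bad = l1_loss(30.0, 1.0, gamma1=22.0) + l2_loss(20.0, 0.5, gamma2=6.0)
assert good < bad
assert l2_loss(0.5, 20.0, gamma2=6.0) == 0.0  # margin satisfied, zero hinge loss
```

Both terms push the energy of observed triples below the margin and that of corrupted triples above it, which is the separation the link prediction ranking below depends on.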
| Dataset | WN18 | YAGO3-10 | FB15K | FB15K-237 |
|---|---|---|---|---|
| #Entity | 40,943 | 123,182 | 14,951 | 14,505 |
| #Relation | 18 | 37 | 1,345 | 237 |
| #Train | 141,442 | 1,079,040 | 483,142 | 272,115 |
| #Valid | 5,000 | 5,000 | 50,000 | 17,535 |
| #Test | 5,000 | 5,000 | 59,071 | 20,466 |
Table 1: Statistics of datasets used in the experiments.

# 4.1 Experimental Setup

# 4.1.1 Datasets

We utilize four standard datasets$^1$ for link prediction tasks. FB15K (Bordes et al., 2013) is a widely used dataset that is a subgraph of the commonsense knowledge graph Freebase. WN18 (Bordes et al., 2013) is a subset of the lexical knowledge graph WordNet. YAGO3-10 (Dettmers et al., 2018) is a subset of YAGO. Each of these three datasets contains all the relation patterns, including symmetry, inversion and composition, as well as the complex 1-N, N-1 and N-N relations. FB15K-237 (Toutanova and Chen, 2015) is a subset of FB15K with all inverse relations removed. Table 1 exhibits the statistics of all the datasets used.

# 4.1.2 Evaluation Protocol

The link prediction task aims to predict the missing head or tail entity of a triple in the test set. For link prediction, every entity in the KG is substituted in turn for the missing entity to generate the candidate triples. Then, for each candidate triple $(h,r,t)$, we combine the two perspectives of the entity-specific triple and the type-specific triple to evaluate the plausibility of the candidate, with the evaluation energy function designed as follows:

$$
E_{pred}(h, r, t) = E_{1}(h, r, t) + \alpha_{1} E_{2}(h, r, t) \tag{28}
$$

The energy function $E_{pred}(h,r,t)$ is composed of the energy functions $E_{1}(h,r,t)$ (for the entity-specific triple) and $E_{2}(h,r,t)$ (for the type-specific triple) defined in Eqs. 2 and 4, respectively. $\alpha_{1}$ is the same trade-off weight as in Eq. 24. The scores of all the candidate triples are calculated by Eq. 28 and sorted in ascending order, from which the rank of the correct triple is obtained.

Three standard metrics are employed to evaluate the performance of link prediction:
| Model | FB15K MR | FB15K MRR | FB15K Hits@1 | FB15K Hits@3 | FB15K Hits@10 | WN18 MR | WN18 MRR | WN18 Hits@1 | WN18 Hits@3 | WN18 Hits@10 |
|---|---|---|---|---|---|---|---|---|---|---|
| TransE (Bordes et al., 2013) | - | 0.463 | 0.297 | 0.578 | 0.749 | - | 0.495 | 0.113 | 0.888 | 0.943 |
| DistMult (Yang et al., 2015) | 42 | 0.798 | - | - | 0.893 | 655 | 0.797 | - | - | 0.946 |
| HolE (Nickel et al., 2016) | - | 0.524 | 0.402 | 0.613 | 0.739 | - | 0.938 | 0.930 | 0.945 | 0.947 |
| ComplEx (Trouillon et al., 2016) | - | 0.692 | 0.599 | 0.759 | 0.840 | - | 0.941 | 0.936 | 0.945 | 0.947 |
| ConvE (Dettmers et al., 2018) | 51 | 0.657 | 0.558 | 0.723 | 0.831 | 374 | 0.943 | 0.935 | 0.946 | 0.956 |
| RotatE (Sun et al., 2019) | 40 | 0.797 | 0.746 | 0.830 | 0.884 | 309 | 0.949 | 0.944 | 0.952 | 0.959 |
| QuatE (Zhang et al., 2019) | 40 | 0.765 | 0.692 | 0.819 | 0.878 | 393 | 0.950 | 0.942 | **0.954** | 0.959 |
| R-GCN (Michael et al., 2018) | - | 0.696 | 0.601 | 0.760 | 0.852 | - | 0.819 | 0.697 | 0.929 | **0.964** |
| PTransE (Lin et al., 2015a) | 54 | 0.679 | 0.565 | 0.768 | 0.855 | 472 | 0.890 | 0.931 | 0.942 | 0.945 |
| TKRL (Xie et al., 2016) | 68 | - | - | - | 0.694 | - | - | - | - | - |
| TypeComplex (Jain et al., 2018) | - | 0.753 | 0.677 | - | 0.869 | - | 0.939 | 0.932 | - | 0.951 |
| AutoETER | **33** | **0.799** | **0.750** | **0.833** | **0.896** | **174** | **0.951** | **0.946** | **0.954** | 0.961 |
Table 2: Evaluation results on FB15K and WN18. Best results are in bold.
| Model | FB15K-237 MR | FB15K-237 MRR | FB15K-237 Hits@1 | FB15K-237 Hits@3 | FB15K-237 Hits@10 | YAGO3-10 MR | YAGO3-10 MRR | YAGO3-10 Hits@1 | YAGO3-10 Hits@3 | YAGO3-10 Hits@10 |
|---|---|---|---|---|---|---|---|---|---|---|
| TransE (Bordes et al., 2013) | 357 | 0.294 | - | - | 0.465 | - | - | - | - | - |
| DistMult (Yang et al., 2015) | 254 | 0.241 | 0.155 | 0.263 | 0.419 | 5926 | 0.34 | 0.24 | 0.38 | 0.54 |
| ComplEx (Trouillon et al., 2016) | 339 | 0.247 | 0.158 | 0.275 | 0.428 | 6531 | 0.36 | 0.26 | 0.40 | 0.55 |
| ConvE (Dettmers et al., 2018) | 244 | 0.325 | 0.237 | 0.356 | 0.501 | 1671 | 0.44 | 0.35 | 0.49 | 0.62 |
| RotatE (Sun et al., 2019) | 177 | 0.338 | 0.241 | 0.375 | 0.533 | 1767 | 0.495 | 0.402 | 0.550 | 0.670 |
| QuatE (Zhang et al., 2019) | 172 | 0.311 | 0.220 | 0.344 | 0.495 | - | - | - | - | - |
| R-GCN (Michael et al., 2018) | - | 0.249 | 0.151 | 0.264 | 0.417 | - | - | - | - | - |
| PTransE (Lin et al., 2015a) | 302 | 0.363 | 0.234 | 0.374 | 0.526 | - | - | - | - | - |
| TypeComplex (Jain et al., 2018) | - | 0.259 | 0.186 | - | 0.411 | - | 0.411 | 0.319 | - | 0.609 |
| AutoETER | 170 | 0.344 | 0.250 | 0.382 | 0.538 | 1179 | 0.550 | 0.465 | 0.605 | 0.699 |
Table 3: Evaluation results on FB15K-237 and YAGO3-10 datasets.

1) Mean Rank (MR) of the correct triples.
2) Mean Reciprocal Rank (MRR) of the correct triples.
3) Hits@n, the proportion of correct triples ranked in the top-n candidates.

We also follow the filtered setting of the previous study (Dettmers et al., 2018), which evaluates performance after filtering out corrupted triples that already exist in the KG.

# 4.1.3 Baselines and Hyper-parameters

We compare the developed model AutoETER with two categories of state-of-the-art baselines: (1) models considering only entity-specific triples, including TransE, DistMult, HolE, ComplEx, ConvE, RotatE and QuatE; (2) models introducing additional information, such as TKRL with explicit types, the type-sensitive model TypeComplex, R-GCN with graph structure, and PTransE with paths. All the baselines are selected because they achieve good performance and provide source code, ensuring the reliability and reproducibility of the results. The results of R-GCN are from (Zhang et al., 2019). The results of TKRL are from (Xie et al., 2016). The results of PTransE$^2$, TypeComplex$^3$ and QuatE$^4$ are obtained by running their source code. The other baseline results are from (Sun et al., 2019).

We tune our model using a grid search to select the optimal hyper-parameters. The optimal configuration is: batch size 1024, learning rate $lr = 0.0001$, and optimization weights $\alpha_{1} = 0.1$, $\alpha_{2} = 0.5$. The dimension of the entity and relation embeddings in entity-specific triples is $k = 1000$, and the dimension of the type and relation embeddings in type-specific triples is $d = 200$. For the datasets FB15K and YAGO3-10, the three fixed margins are set as $\gamma_{1} = 22$, $\gamma_{2} = 8$, $\gamma_{3} = 6$; for WN18 and FB15K-237, $\gamma_{1} = 10$, $\gamma_{2} = 6$, $\gamma_{3} = 3$.
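Putting Eq. 28 together with the ranking protocol of §4.1.2, a minimal sketch of the evaluation follows. The candidate energies are made-up numbers, and the filtered-setting removal of known triples is omitted for brevity:

```python
import numpy as np

def rank_of_correct(energies, correct_idx):
    # Eq. 28 scores are sorted in ascending order; the rank of the
    # correct candidate is its 1-based position in that ordering.
    order = np.argsort(energies)
    return int(np.where(order == correct_idx)[0][0]) + 1

def metrics(ranks):
    ranks = np.asarray(ranks, dtype=float)
    return {
        "MR": ranks.mean(),               # Mean Rank
        "MRR": (1.0 / ranks).mean(),      # Mean Reciprocal Rank
        "Hits@10": (ranks <= 10).mean(),  # proportion ranked in the top 10
    }

# E_pred = E_1 + alpha_1 * E_2 over four candidate entities (toy values)
alpha1 = 0.1
e1 = np.array([5.0, 1.0, 9.0, 3.0])
e2 = np.array([2.0, 0.5, 4.0, 1.0])
e_pred = e1 + alpha1 * e2

r = rank_of_correct(e_pred, correct_idx=1)  # candidate 1 has the lowest energy
m = metrics([r, 3, 12])
assert r == 1
assert np.isclose(m["MRR"], (1 / 1 + 1 / 3 + 1 / 12) / 3)
assert np.isclose(m["Hits@10"], 2 / 3)
```

Since lower energy means higher plausibility, MR is better when smaller, while MRR and Hits@n are better when larger — the convention used in Tables 2 and 3.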
+ +# 4.2 Evaluation Results and Analyses + +Table 2 and Table 3 report the evaluation results of link prediction on the four datasets. We can observe that our model AutoETER outperforms all the baselines, including the state-of-the-art models RotatE and QuatE. These results demonstrate the superiority of modeling and inferring all the relation patterns and the complex relations by our model. Specifically, AutoETER performs better than the + +![](images/bc7879fc6f31aef3fa8f330279ca0dba0a39a03a2a62ce387319fff881a5b8af.jpg) +(a) + +![](images/ac5167d65d59cfd5b12df64a8dbd0b65698ce8bce49f6a3cb07dfa9d5ad29b30.jpg) +(b) +Figure 3: The visualization of type embeddings clustering on FB15K-237. (a) The clustering of the original type embeddings. (b) The clustering of the entity embeddings. (c) The clustering of the type embeddings all focusing on the relation /award/award_category/nominated_for. + +![](images/92f27ed0abecca342c94540f9c5de728a945eaa4702230be237a571298df41eb.jpg) +(c) + +
*Hits@10 for head entity prediction (Head) and tail entity prediction (Tail).*

| Model | Head 1-1 | Head 1-N | Head N-1 | Head N-N | Tail 1-1 | Tail 1-N | Tail N-1 | Tail N-N |
|---|---|---|---|---|---|---|---|---|
| TransE (Bordes et al., 2013) | 0.437 | 0.657 | 0.182 | 0.472 | 0.437 | 0.197 | 0.667 | 0.500 |
| TransH (Wang et al., 2014) | 0.668 | 0.876 | 0.287 | 0.645 | 0.655 | 0.398 | 0.833 | 0.672 |
| TransR (Lin et al., 2015b) | 0.788 | 0.892 | 0.341 | 0.692 | 0.792 | 0.374 | 0.904 | 0.721 |
| RotatE (Sun et al., 2019) | 0.922 | 0.967 | 0.602 | 0.893 | 0.923 | 0.713 | 0.961 | 0.922 |
| PTransE (Lin et al., 2015a) | 0.910 | 0.928 | 0.609 | 0.838 | 0.912 | 0.740 | 0.889 | 0.864 |
| AutoETER | 0.933 | 0.979 | 0.618 | 0.903 | 0.931 | 0.717 | 0.968 | 0.927 |
type-embodied models TKRL and TypeComplex, which emphasizes that the type representations learned automatically with relation-aware projection by AutoETER are more effective for inference than either relying entirely on explicit types or ignoring the diversity of type embeddings across various relations. Furthermore, AutoETER outperforms RotatE because AutoETER can infer the complex relations of 1-N, N-1 and N-N and takes advantage of type representations. These results all illustrate that the type representations learned from KGs help predict entities more accurately by restricting the candidate entities with type embeddings.

Since FB15K contains more diverse relations than the other three datasets, we select FB15K to evaluate link prediction performance broken down by 1-1, 1-N, N-1 and N-N relations. The results are shown in Table 4. Our model achieves better performance on both head entity prediction and tail entity prediction than the other baselines, particularly RotatE, which illustrates the benefit of capturing various representations of entities specific to different relations with the relation-aware projection mechanism to represent entity

Table 4: Evaluation results on FB15K by mapping properties of relations.
| Model | MR | MRR | H@1 | H@3 | H@10 |
|---|---|---|---|---|---|
| AutoETER | 170 | 0.344 | 0.250 | 0.382 | 0.538 |
| -TSC | 175 | 0.342 | 0.246 | 0.379 | 0.536 |
| -TR | 177 | 0.340 | 0.244 | 0.377 | 0.534 |
+ +Table 5: Ablation study on FB15K-237. "H@" is the abbreviation of "Hits@". + +types. + +# 4.3 Ablation Study + +We conduct the ablation study of our model on dataset FB15K-237 when we only omit the type similarity constraint (-TSC) and omit the type representation (-TR) from our model. Table 5 demonstrates that our model performs better than the two ablated models. It illustrates the type representation and the type similarity constraint both significantly impact the performance of link prediction and suggests that our automatically learned type representations play a pivotal role in our approach. + +# 4.4 Visualization of Clustering Entity Type Representations + +We utilize Kmeans to cluster the type embeddings and further employ t-SNE to implement dimen + +sionality reduction for 2d visualization. As Figure 3(a) shows, some type embeddings are clustered into independent categories, while some clusters stay close to each other because these entities share many common types. For instance, johnny&june and the two towers are clustered into the same category which actually represents the type movie as we know. Figure 3(b) shows the clustering of the entity embeddings. It can be clearly observed that entity type clustering has better compactness than entity clustering, which demonstrates that entity type embeddings could reflect the characteristics of types. The type embeddings focusing on relation /award/award_category/nominated_for are visualized in Figure 3(c). It is evident that some type embeddings representing the type award such as academy award for best story and cannes best actor award are clustered into the same category while others stay far away. These visualization results explain the effectiveness of our type embeddings learned automatically with relation-aware projection from the KG. + +# 5 Conclusion and Future Work + +In this paper, we propose an AutoETER framework to learn type representations for enriching KG embedding automatically. 
We introduce two classes of encoders to learn the entity-specific and type-specific triple embeddings, which can model and infer all the relation patterns of symmetry, inversion and composition, as well as the complex 1-N, N-1 and N-N relations. We also constrain the type embeddings by type similarity. Our experiments on four real-world datasets for link prediction illustrate the superiority of our model, and the visualization of the clustered type embeddings verifies the feasibility of representing types automatically. In future work, we intend to extend our approach to obtain better type representations by incorporating the supervision of ontologies. + +# Acknowledgments + +This work was partially supported by the National Natural Science Foundation of China (No. 61772054, 62072022), the NSFC Key Project (No. 61632001), and the Fundamental Research Funds for the Central Universities. + +# References + +Kurt Bollacker, Colin Evans, Praveen Paritosh, and Tim Sturge. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In SIGMOD, pages 1247-1250. +Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In NIPS, pages 2787-2795. +Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2D knowledge graph embeddings. In AAAI, pages 1811-1818. +Dennis Diefenbach, Kamal Singh, and Pierre Maret. 2018. WDAqua-core1: A question answering service for RDF knowledge bases. In WWW, pages 1087-1091. +Junheng Hao, Muhao Chen, Wenchao Yu, Yizhou Sun, and Wei Wang. 2019. Universal representation learning of knowledge bases by jointly embedding instances and ontological concepts. In KDD, pages 1709-1719. +He He, Anusha Balakrishnan, Mihail Eric, and Percy Liang. 2017. Learning symmetric collaborative dialogue agents with dynamic knowledge graph embeddings. In ACL, pages 1766-1776.
+Prachi Jain, Pankaj Kumar, Mausam, and Soumen Chakrabarti. 2018. Type-sensitive knowledge base inference without explicit type supervision. In ACL, pages 75-80. +Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Marttinen, and Philip S. Yu. 2020. A survey on knowledge graphs: Representation, acquisition and applications. arXiv preprint arXiv:2002.00388. +Denis Krompaß, Stephan Baier, and Volker Tresp. 2015. Type-constrained representation learning in knowledge graphs. In ISWC, pages 640-655. +Ni Lao, Tom Mitchell, and William W. Cohen. 2011. Random walk inference and learning in a large scale knowledge base. In EMNLP, pages 529-539. +Xi Victoria Lin, Richard Socher, and Caiming Xiong. 2018. Multi-hop knowledge graph reasoning with reward shaping. In EMNLP, pages 3243-3253. +Yankai Lin, Zhiyuan Liu, Huanbo Luan, Maosong Sun, Siwei Rao, and Song Liu. 2015a. Modeling relation paths for representation learning of knowledge bases. In EMNLP, pages 705-714. +Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015b. Learning entity and relation embeddings for knowledge graph completion. In AAAI, pages 2181-2187. + +Shiheng Ma, Jianhui Ding, Weijia Jia, Kun Wang, and Minyi Guo. 2017. TransT: Type-based multiple embedding representations for knowledge graph completion. In ECML PKDD, pages 717-733. +Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In ESWC. +George A. Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39-41. +Maximilian Nickel, Lorenzo Rosasco, and Tomaso A Poggio. 2016. Holographic embeddings of knowledge graphs. In AAAI, pages 1955-1961. +Guanglin Niu, Yongfei Zhang, Bo Li, Peng Cui, Si Liu, Jingyang Li, and Xiaowei Zhang. 2020. Rule-guided compositional representation learning on knowledge graphs. In AAAI, pages 2950-2958. +Meng Qu and Jian Tang. 2019. 
Probabilistic logic neural networks for reasoning. In NeurIPS. +Oliver Ray. 2009. Nonmonotonic abductive inductive learning. Journal of Applied Logic, 7:329-340. +Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. YAGO: A core of semantic knowledge. In WWW, pages 697-706. +Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. RotatE: Knowledge graph embedding by relational rotation in complex space. In ICLR. +Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In CVSC, pages 57-66. +Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In ICML, pages 2071-2080. +Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. 2017. Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering, 29(12):2724-2743. +Xiang Wang, Xiangnan He, Yixin Cao, Meng Liu, and Tat-Seng Chua. 2019. KGAT: Knowledge graph attention network for recommendation. In KDD, pages 950-958. +Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In AAAI, pages 1112-1119. +Han Xiao, Minlie Huang, and Xiaoyan Zhu. 2016. From one point to a manifold: Knowledge graph embedding for precise link prediction. In IJCAI, pages 1315-1321. + +Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2016. Representation learning of knowledge graphs with hierarchical types. In IJCAI, pages 2965-2971. +Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In ICLR. +Jun Yuan, Neng Gao, and Ji Xiang. 2019. TransGate: Knowledge graph embedding with shared gate structure. In AAAI. +Shuai Zhang, Yi Tay, Lina Yao, and Qi Liu. 2019. Quaternion knowledge graph embeddings. In NeurIPS, pages 2731-2741. 
\ No newline at end of file diff --git a/autoeterautomatedentitytyperepresentationforknowledgegraphembedding/images.zip b/autoeterautomatedentitytyperepresentationforknowledgegraphembedding/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b7ece182262f02c14573fca6a14755189bec3140 --- /dev/null +++ b/autoeterautomatedentitytyperepresentationforknowledgegraphembedding/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f8ac27a98b440947b9dc248877a178a386eec0ba08891e2485772558c5a54b5c +size 554872 diff --git a/autoeterautomatedentitytyperepresentationforknowledgegraphembedding/layout.json b/autoeterautomatedentitytyperepresentationforknowledgegraphembedding/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..f979f5c7dde0913ff974fb62d2bc6155b78e6f8c --- /dev/null +++ b/autoeterautomatedentitytyperepresentationforknowledgegraphembedding/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:77dabb6ca98ec8724d0ad69ac2d0c1a41b22366283cda13167b7478699d3a0a2 +size 407663 diff --git a/automaticallyidentifyinggenderissuesinmachinetranslationusingperturbations/5d8c512f-6a9f-4e6f-beb8-0d00fc34cb02_content_list.json b/automaticallyidentifyinggenderissuesinmachinetranslationusingperturbations/5d8c512f-6a9f-4e6f-beb8-0d00fc34cb02_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..e8a32ebe7636e92de26e6b6f3071928a2499a037 --- /dev/null +++ b/automaticallyidentifyinggenderissuesinmachinetranslationusingperturbations/5d8c512f-6a9f-4e6f-beb8-0d00fc34cb02_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b0f9457ca114b2e2b7ef9b86c9fd5bb846be6627af38b4917f89150ef88ead20 +size 37911 diff --git a/automaticallyidentifyinggenderissuesinmachinetranslationusingperturbations/5d8c512f-6a9f-4e6f-beb8-0d00fc34cb02_model.json 
b/automaticallyidentifyinggenderissuesinmachinetranslationusingperturbations/5d8c512f-6a9f-4e6f-beb8-0d00fc34cb02_model.json new file mode 100644 index 0000000000000000000000000000000000000000..1fb31324ef88b4c7137f0f24456c4f0edd64a835 --- /dev/null +++ b/automaticallyidentifyinggenderissuesinmachinetranslationusingperturbations/5d8c512f-6a9f-4e6f-beb8-0d00fc34cb02_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6239a76d785b2f430ed97406c4e2d626abfb45e60d71c6303e4e73ce49de0b8c +size 45101 diff --git a/automaticallyidentifyinggenderissuesinmachinetranslationusingperturbations/5d8c512f-6a9f-4e6f-beb8-0d00fc34cb02_origin.pdf b/automaticallyidentifyinggenderissuesinmachinetranslationusingperturbations/5d8c512f-6a9f-4e6f-beb8-0d00fc34cb02_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6be221964a6e34547f89e4d5b8422ef57693f0e0 --- /dev/null +++ b/automaticallyidentifyinggenderissuesinmachinetranslationusingperturbations/5d8c512f-6a9f-4e6f-beb8-0d00fc34cb02_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e9af7452852f14bb666fa9e46fe7982516805007f6d881191ee5bd11b4e872e1 +size 223979 diff --git a/automaticallyidentifyinggenderissuesinmachinetranslationusingperturbations/full.md b/automaticallyidentifyinggenderissuesinmachinetranslationusingperturbations/full.md new file mode 100644 index 0000000000000000000000000000000000000000..ee5925738ecf8c8641e46e65f6a32e12225175d9 --- /dev/null +++ b/automaticallyidentifyinggenderissuesinmachinetranslationusingperturbations/full.md @@ -0,0 +1,157 @@ +# Automatically Identifying Gender Issues in Machine Translation using Perturbations + +Hila Gonen + +Bar-Ilan University + +hilagnn@gmail.com + +Kellie Webster + +Google Research NYC + +websterk@google.com + +# Abstract + +The successful application of neural methods to machine translation has realized huge quality advances for the community. 
With these improvements, many have noted outstanding challenges, including the modeling and treatment of gendered language. While previous studies have identified issues using synthetic examples, we develop a novel technique to mine examples from real world data to explore challenges for deployed systems. We use our method to compile an evaluation benchmark spanning examples for four languages from three language families, which we publicly release to facilitate research. The examples in our benchmark expose where model representations are gendered, and the unintended consequences these gendered representations can have in downstream applications. + +# 1 Introduction + +Machine translation (MT) has realized huge improvements in quality from the successful application and development of neural methods (Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Vaswani et al., 2017; Johnson et al., 2017; Chen et al., 2018). As the community has explored this enhanced performance, many have noted the outstanding challenge of modeling and handling gendered language (Kuczmarski, 2018; Escudé Font and Costa-jussà, 2019). We extend this line of work, which identifies issues using synthetic examples manually curated for a target language (Stanovsky et al., 2019; Cho et al., 2019), by analyzing real world text across a range of languages to understand challenges for deployed systems. + +In this paper, we explore the class of issues which surface when a neutral reference to a person is translated to a gendered form (e.g. in Table 1, where the English counselor and nurse are translated into the French conseiller (masculine) and infirmière (feminine)). For this class of examples, the MT task requires a system to produce a single translation without source cues, thus exposing a model's preferred gender for the reference form. + +With this scope, we make two key contributions. 
First, we design and implement an automatic pipeline for detecting examples of our class of gender issues in real world input, using a BERT-based perturbation method novel to this work. A key advantage of our pipeline beyond previous work is its extensibility: a) beyond word lists; b) to different language pairs; and c) to different parts of speech. Second, using our new pipeline, we compile a dataset that we make publicly available to serve as a benchmark for future work. We focus on English as the source language, and explore four target gendered languages across three language families (French, German, Spanish, and Russian). Our examples expose where MT encodings are gendered, finding new issues not covered in previous manual approaches, and the unintended consequences of this for translation. + +# 2 Gender Marking Languages + +Gender-marking languages have rich grammatical systems for expressing gender (Corbett, 1991). To produce a valid sentence in a gender-marking language, gender may need to be marked not only on pronouns (he, she), as it is in English, but also on nouns and even verbs, as well as on words linked to these gendered nouns and verbs. This means that translating from a language like English, with little gender marking, to a gender-marking language like Spanish, requires a system to produce gender markings that may not have explicit evidence in the source. For instance, The tall teacher from English could be translated into the Spanish La maestra alta (feminine) or El maestro alto (masculine). + +
| Source Sentence (En) | Translation (Fr) | M/F |
| --- | --- | --- |
| so is that going to affect my chances of becoming a counselor? | Alors, est-ce que cela va affecter mes chances de devenir conseiller? | M |
| so is that going to affect my chances of becoming a nurse? | Alors, est-ce que cela va affecter mes chances de devenir infirmière? | F |
+ +Table 1: An example from our dataset of a minimal pair of English gender-neutral source sentences, translated into two different genders in French. Red (italic) stands for masculine, cyan (normal) stands for feminine. + +# 3 Automatic Detection of Gender Issues + +The class of issues we are interested in are those where translation to a gender-marking language exposes a model's gender preference for a personal reference. The examples we find that demonstrate this are English sentence-pairs, a minimal pair differing by only a single word, e.g. doctor being replaced by nurse. In each of our examples, this minimal perturbation does not change the gender of the source but gives rise to gender differences upon translation, e.g. doctor becoming masculine and nurse feminine. + +In this section, we present a simple, extensible method to mine such examples from real-world text. Our method does not require expansive manually-curated word lists for each target language, which enables us to discover new kinds of entities that are susceptible to model bias but are not usually thought of this way. Indeed, while we demonstrate its utility with nouns with four target languages, our method is naturally extensible to new language pairs and parts of speech with no change in design. + +Filtering source sentences Our first step is to identify sentences that are gender neutral and that include a single human entity, e.g. A doctor works in a hospital. We focus on human entities since these have been the target of previous studies and present the largest risk of gender issues in translation. + +We use a BERT-based Named Entity Recognition (NER) model that identifies human entities, and exclude sentences that have more than one token tagged as such. We also remove sentences in which the entity is a gendered term in English1 (e.g. mother, nephew), a name, or not a noun. 
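The source-sentence filter described above can be sketched as a small predicate. Here `person_spans` plays the role of the BERT-based NER output, and the gendered-word set is a tiny illustrative sample, not the full exclusion list used in the paper; the capitalization check is only a crude stand-in for name detection:

```python
# Small illustrative sample of gendered English terms (assumption, not the
# paper's actual list).
GENDERED_EN = {"mother", "father", "nephew", "niece", "he", "she", "his", "her"}

def keep_sentence(tokens, person_spans):
    """tokens: a tokenized English sentence.
    person_spans: token indices tagged PERSON by the (stand-in) NER model.
    Keep only gender-neutral sentences with exactly one human entity."""
    if len(person_spans) != 1:                          # exactly one human entity
        return False
    if any(t.lower() in GENDERED_EN for t in tokens):   # source must be gender-neutral
        return False
    entity = tokens[person_spans[0]]
    return entity[0].islower()                          # crude proxy for "not a name"

print(keep_sentence("a doctor works in a hospital".split(), [1]))   # True
print(keep_sentence("my mother works in a hospital".split(), [1]))  # False
```

The real pipeline additionally checks that the entity is a noun; that step is omitted here since it needs a POS tagger.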
+ +Note that all the sentences we get are naturally occurring sentences, and that we do not use any templates or predefined lists of target words that we want to handle. + +Perturbations using BERT We use BERT as a masked language model to find words which can substitute for the human entity identified in the previous filtering step, e.g. doctor $\rightarrow$ nurse. We aim to get natural-sounding output and maintain extensibility, and thus do not use predefined substitutions. We cap our search to the first 100 candidates BERT returns, accepting the first 10 which are tagged as person, and for which the resulting sentences also pass the filtering step. + +Translation We translate each of the generated sentences into our target languages using Google Translate2. A doctor/nurse works in a hospital $\rightarrow$ Un doctor/Una enfermera trabaja en un hospital. + +Alignment We align tokens in the original and translated sentences using fast-align (Dyer et al., 2010). This is needed in order to know which token in the translation output corresponds to the focus entity in the source sentence, whose gender we want to analyze. + +Gender Identification We use a morphological analyzer, implemented following Kong et al. (2017), to tag the gender of the target word. + +Identifying Examples The final step of our pipeline is identifying pairs of sentences to include in our dataset, pairs where different genders are assigned to the human entity. Our example would be included since doctor is translated with the masculine form Un doctor while nurse is translated with the feminine form Una enfermera. + +# 4 Challenge Dataset + +We compile our final dataset from the output of this pipeline, and explore its properties to understand the issues it represents for deployed systems. 
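The decisive step of the pipeline, keeping a minimal pair only when its two translations receive different genders, can be sketched as follows. The translation table and gender tags are toy stand-ins for the real Google Translate output and the morphological analyzer; only the selection logic mirrors the pipeline:

```python
# Toy stand-ins: a lookup table in place of the MT system, and a tiny tag
# dictionary in place of the morphological gender analyzer.
TRANSLATE = {
    "a doctor works in a hospital": "un doctor trabaja en un hospital",
    "a nurse works in a hospital": "una enfermera trabaja en un hospital",
}
GENDER_TAGS = {"doctor": "M", "enfermera": "F"}  # Spanish target tokens

def gender_of_focus(translation):
    """Return the gender tag of the focus entity in the translation.
    The real pipeline locates the entity via fast-align; here we simply
    scan for a token our toy tagger knows."""
    for token in translation.split():
        if token in GENDER_TAGS:
            return GENDER_TAGS[token]
    return None

def is_at_risk(source, perturbed):
    """A minimal pair is 'at risk' when its two translations assign
    different genders to the human entity."""
    g1 = gender_of_focus(TRANSLATE[source])
    g2 = gender_of_focus(TRANSLATE[perturbed])
    return g1 is not None and g2 is not None and g1 != g2

print(is_at_risk("a doctor works in a hospital",
                 "a nurse works in a hospital"))  # True: M vs. F
```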
+ +# 4.1 Random Sampling + +In our final dataset, we include both examples that passed the final example identification step above (pairs referred to as "at risk"), as well as a random selection that did not ("not at risk"). We do this in order to not be constrained too heavily by our choice of translation model; if we did not, we would have no chance of inspecting examples that our system did not spot as at risk but other models might have. + +# 4.2 Fixed Grammatical Gender Rating + +When we inspected the examples identified as at risk by our pipeline, the major source of error we found pertained to the issue of fixed grammatical gender. Consider the example in Figure 1: + +Sentence 1: + +En: you don't have to be the victim in whatever. + +Fr: vous ne devez pas être la victime de quoi que ce soit. + +Sentence 2: + +En: you don't have to be the expert in whatever. + +Fr: vous ne devez pas être l'expert en quoi que ce soit. + +Figure 1: An example from our dataset, with fixed grammatical gender. Red (italic) stands for masculine, cyan (normal) stands for feminine. + +In this example, the word victim in the first English sentence is identified by our tagger as a human entity. However, its French translation victime is feminine by definition, and cannot be assigned another gender regardless of the context, causing a false positive result. + +We attempted to filter these examples automatically but came across a number of challenges. Most critically, we found no high-quality, comprehensive dictionary that included the required information for all languages, and heuristics we applied were noisy and not reliable.3 We observed that the underlying reason for these challenges was that there is no closed list of grammatically-fixed words as languages are evolving to be more gender-inclusive. 
In order to maximize and guarantee data quality, and to be sensitive to the nuances of language change, we decided to add a manual filtering step after our pipeline to select the positive (at risk) examples. + +We note that the problem of fixed grammatical gender is particular to nouns. Our pipeline is naturally extensible across parts of speech and we would not expect the same issues in future work perturbing adjectives or verbs. + +# 4.3 Dataset Statistics + +To create our dataset we mine text from the subreddit "career".4 From 29,330 sentences, we found 4,016 which referred to a single, non-gendered human entity. Introducing perturbations with BERT into these 4,016 sentences yielded 40,160 pairs. Out of those, 592 to 1,012 pairs are identified as at risk by our pipeline, depending on the target language. We asked humans to manually identify 100 true at risk examples for the final dataset, which was achieved for all languages except Russian, where we have 59 pairs.5 To this 100, we add a further 100 randomly sampled negative examples for each language. Table 2 shows a representative example for each language-pair. + +# 4.4 Exploratory Analysis + +Table 3 lists the most frequent focus personal references in each language-pair among the positive (at risk) and negative (not at risk) examples, along with the ratio between times the reference form was translated as masculine compared to feminine. Words with extreme values of this ratio indicate cases where a model has a systematic preference for one gender over another, i.e. a gendered representation. + +Among the negative examples, we see a prior for masculine translations across all terms. Positive examples break from this prior by exposing reference forms with a feminine preference: nurse and secretary are the most consistently feminine forms, consistent with the Bureau of Labor statistics used in previous work (Caliskan et al., 2017). 
+ +Figure 2 shows two sentence pairs that appear as positive examples across all four language-pairs. Two of the three forms, nurse and mechanic, are consistent with the gender statistics of Caliskan et al.; the association of fighter with the masculine gender is a new discovery of our method. + +# 5 Related Work + +Our study builds on the literature around gender bias in machine translation. Cho et al. (2019) use sentence templates to probe for differences in Korean pronouns. Prates et al. (2019) and Stanovsky et al. (2019) also use sentence templates, but filled with word lists, of professions and adjectives in the + +
| | Source Sentence (En) | Translation | M/F |
| --- | --- | --- | --- |
| Fr | also should i ask the manager what the pay would be if i got the job prior to flying out? | De plus, devrais-je demander au gestionnaire quel serait le salaire si j'obtenais le poste avant de prendre l'avion? | M |
| Fr | also should i ask the secretary what the pay would be if i got the job prior to flying out? | De plus, devrais-je demander à la secrétaire quel serait le salaire si j'obtenais le poste avant mon départ? | F |
| De | currently thinking about learning a trade (mostly a electrician). | Derzeit über das Erlernen eines Gewerbes nachdenken (meistens Elektriker). | M |
| De | currently thinking about learning a trade (mostly a cook). | Derzeit über das Erlernen eines Gewerbes nachdenken (meistens eine Köchin). | F |
| Es | - decided to become a teacher: spent a year working 2 jobs and doing prerequisites for a masters in education. | - Decidí ser maestra: pasé un año trabajando en 2 trabajos y haciendo los requisitos previos para una maestría en educación. | F |
| Es | - decided to become a lecturer: spent a year working 2 jobs and doing prerequisites for a masters in education. | - Decidí ser profesor: pasé un año trabajando en 2 trabajos y haciendo los requisitos previos para una maestría en educación. | M |
| Ru | i read about a psychologist who upgraded into becoming a m.d. | Я читал о психологе, который превратился в доктора медицины. | M |
| Ru | i read about a nurse who upgraded into becoming a m.d. | Я читал о медсестре, которая превратилась в доктора медицины. | F |
+ +Table 2: Examples from our dataset of a minimal pair of English gender-neutral source sentences, translated into two different genders in all target languages. Red (italic) stands for masculine, cyan (normal) stands for feminine. + +
| | Positive | M:F | Negative | M:F |
| --- | --- | --- | --- | --- |
| Fr | nurse | 0:36 | manager | 685:1 |
| | secretary | 0:17 | employee | 406:0 |
| | teacher | 7:1 | employees | 364:0 |
| | assistant | 1:7 | parents | 353:0 |
| | manager | 8:0 | teacher | 337:0 |
| De | secretary | 0:27 | manager | 594:0 |
| | nurse | 0:21 | employees | 409:1 |
| | teacher | 3:7 | friends | 359:0 |
| | receptionist | 0:9 | employee | 320:0 |
| | manager | 7:0 | students | 316:0 |
| Es | teacher | 4:29 | manager | 691:0 |
| | nurse | 0:31 | employee | 446:0 |
| | secretary | 0:26 | friends | 380:0 |
| | writer | 8:0 | parents | 374:0 |
| | employee | 5:0 | supervisor | 345:0 |
| Ru | nurse | 0:32 | manager | 713:0 |
| | babysitter | 0:13 | employees | 519:0 |
| | nurses | 0:5 | friends | 439:0 |
| | dishwasher | 0:4 | students | 417:0 |
| | technician | 3:0 | employee | 392:0 |
+ +Table 3: Top five human reference forms in our dataset, and their ratio of times they are translated as masculine compared to feminine. Positive indicates that the examples were taken from the at-risk group from our pipeline, and negative from the random sample among the not at-risk group. + +
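The M:F ratios reported in Table 3 amount to a simple tally over the pipeline's gender-tagged translations. A sketch, with toy counts standing in for the real records:

```python
from collections import Counter

# Each record pairs a source reference form with the gender its translation
# received. The counts below are toy stand-ins, chosen to echo two cells of
# Table 3, not the actual pipeline output.
records = [("nurse", "F")] * 36 + [("manager", "M")] * 8

counts = Counter(records)

def mf_ratio(form):
    """Times `form` was translated as masculine vs. feminine, as 'M:F'."""
    return f"{counts[(form, 'M')]}:{counts[(form, 'F')]}"

print(mf_ratio("nurse"))    # 0:36
print(mf_ratio("manager"))  # 8:0
```

Extreme ratios (all-masculine or all-feminine) are exactly the signal the paper reads as a gendered representation.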
Sentence pair 1: +Original: you need to have experience working with hydraulic lifts, & they like to see that you’ve worked or trained as a mechanic. +Substitution: you need to have experience working with hydraulic lifts, & they like to see that you’ve worked or trained as a nurse.
Sentence pair 2: +Original: in fact, probably not even as a seasoned nurse. +Substitution: in fact, probably not even as a seasoned fighter.
+ +Figure 2: Two sentence pairs from our dataset that are shared across all four target languages. + +former, and professions in the latter. A separate but related line of work focuses on generating correct inflections when translating to gender-marking languages (Vanmassenhove et al., 2018; Moryossef et al., 2019). + +# 6 Conclusion + +The primary contribution of our work is a novel, automatic method for identifying gender issues in machine translation. By performing BERT-based perturbations on naturally-occurring sentences, we are able to identify sentence pairs that behave differently upon translation to gender-marking languages. We demonstrate our technique over human reference forms and discover new sources of risk beyond the word lists used previously. Furthermore, the novelty of our approach is its natural extensibility to new language pairs, text genres, and different parts of speech. We look forward to future work exploring such applications. + +Using our new method, we compile a dataset across four languages from three language families. By publicly releasing our dataset, we hope to enable the community to work together towards solutions that are inclusive and equitable to all. + +# Acknowledgements + +We thank Melvin Johnson for his helpful feedback throughout this project, Dan Garrette for helping with some parts of the pipeline, and Dani Mitropolsky, Vitaly Nikolaev and Marisa Rossmann for the help with filtering the dataset. + +# References + +Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186. +Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Mike Schuster, Noam Shazeer, Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. 2018. The best of both worlds: Combining recent advances in neural machine translation. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 76-86, Melbourne, Australia. Association for Computational Linguistics. +Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar. Association for Computational Linguistics. +Won Ik Cho, Ji Won Kim, Seok Min Kim, and Nam Soo Kim. 2019. On measuring gender bias in translation of gender-neutral pronouns. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing. +Greville G Corbett. 1991. Gender. +Chris Dyer, Adam Lopez, Juri Ganitkevitch, Jonathan Weese, Ferhan Ture, Phil Blunsom, Hendra Setiawan, Vladimir Eidelman, and Philip Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models. In Proceedings of the ACL 2010 System Demonstrations. + +Joel Escudé Font and Marta R. Costa-jussa. 2019. Equalizing gender bias in neural machine translation with word embeddings techniques. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing. +Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351. +Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1700-1709, Seattle, Washington, USA. Association for Computational Linguistics. 
+Lingpeng Kong, Chris Alberti, Daniel Andor, Ivan Bogatyy, and David Weiss. 2017. DRAGNN: A transition-based framework for dynamically connected neural networks. arXiv preprint arXiv:1703.04474. +James Kuczmarski. 2018. Reducing gender bias in Google Translate. +Amit Moryossef, Roee Aharoni, and Yoav Goldberg. 2019. Filling gender & number gaps in neural machine translation with black-box context injection. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing. +Marcelo OR Prates, Pedro H Avelar, and Luis C Lamb. 2019. Assessing gender bias in machine translation: a case study with Google Translate. Neural Computing and Applications. +Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. +Eva Vanmassenhove, Christian Hardmeier, and Andy Way. 2018. Getting gender right in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc. 
\ No newline at end of file diff --git a/automaticallyidentifyinggenderissuesinmachinetranslationusingperturbations/images.zip b/automaticallyidentifyinggenderissuesinmachinetranslationusingperturbations/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..a5568548e0f84b4465f6783bab0bb917cdfb06c0 --- /dev/null +++ b/automaticallyidentifyinggenderissuesinmachinetranslationusingperturbations/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1bc3895d128ae7eb1822115cd5cd14a70d270ee210cfc6550540b811672f64fd +size 287645 diff --git a/automaticallyidentifyinggenderissuesinmachinetranslationusingperturbations/layout.json b/automaticallyidentifyinggenderissuesinmachinetranslationusingperturbations/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..70adb66f692c3631035650a5d1aa346c26d3f4d0 --- /dev/null +++ b/automaticallyidentifyinggenderissuesinmachinetranslationusingperturbations/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e02c849cadc633d13e419c0a7594f1c0cf978e35a8d92199826473b1a1aead0 +size 142204 diff --git a/automatictermnamegenerationforgeneontologytaskanddataset/ad46fe2b-6617-4cdc-a022-f8954ba41ac6_content_list.json b/automatictermnamegenerationforgeneontologytaskanddataset/ad46fe2b-6617-4cdc-a022-f8954ba41ac6_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..7a0e0c8a7f7ae281aab3f5a1bc78f6bb68106679 --- /dev/null +++ b/automatictermnamegenerationforgeneontologytaskanddataset/ad46fe2b-6617-4cdc-a022-f8954ba41ac6_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:65b3863bb6d0ad96d6d5cc652f5f1a9e39d4a5c4c3a4b73ab8893c5c71b82a71 +size 37444 diff --git a/automatictermnamegenerationforgeneontologytaskanddataset/ad46fe2b-6617-4cdc-a022-f8954ba41ac6_model.json b/automatictermnamegenerationforgeneontologytaskanddataset/ad46fe2b-6617-4cdc-a022-f8954ba41ac6_model.json new 
file mode 100644 index 0000000000000000000000000000000000000000..c1e6af3a37b09affe47cd803871df84baface485 --- /dev/null +++ b/automatictermnamegenerationforgeneontologytaskanddataset/ad46fe2b-6617-4cdc-a022-f8954ba41ac6_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4fb2cf0e293c110caaae7f7f0967bd9a1214a432f3cac4702b9b6c7c45a426f2 +size 45921 diff --git a/automatictermnamegenerationforgeneontologytaskanddataset/ad46fe2b-6617-4cdc-a022-f8954ba41ac6_origin.pdf b/automatictermnamegenerationforgeneontologytaskanddataset/ad46fe2b-6617-4cdc-a022-f8954ba41ac6_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b5055968cf150d599fe3eb4f9cffb0d670b8f48f --- /dev/null +++ b/automatictermnamegenerationforgeneontologytaskanddataset/ad46fe2b-6617-4cdc-a022-f8954ba41ac6_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:462b0eb7709923eaa0b86fb711a0b6f9eb4b668f38b3958312e199da68f4cca3 +size 6558628 diff --git a/automatictermnamegenerationforgeneontologytaskanddataset/full.md b/automatictermnamegenerationforgeneontologytaskanddataset/full.md new file mode 100644 index 0000000000000000000000000000000000000000..8f973a139aa1329f934524ccb34c82af72615b7d --- /dev/null +++ b/automatictermnamegenerationforgeneontologytaskanddataset/full.md @@ -0,0 +1,183 @@ +# Automatic Term Name Generation for Gene Ontology: Task and Dataset + +Yanjian Zhang $^{1}$ , Qin Chen $^{1*}$ , Yiteng Zhang $^{1}$ , Yixu Gao $^{1}$ , Zhongyu Wei $^{15}$ , Jiajie Peng $^{2}$ , Zengfeng Huang $^{1}$ , Weijian Sun $^{3}$ , Xuanjing Huang $^{4}$ + +$^{1}$ School of Data Science, Fudan University + +$^{2}$ School of Computer Science, Northwestern Polytechnical University + +3Huawei Technologies Co., Ltd + +$^{4}$ School of Computer Science, Fudan University + +$^{5}$ Research Institute of Intelligent and Complex Systems, Fudan University + +{yanjianzhang16,qin_chen,yitengzhang19,yxgao19,zywei,huangzf,xjhuang}@fudan.edu.cn + 
+jiajiepeng@nwpu.edu.cn + +sunweijian@huawei.com + +# Abstract + +Terms contained in Gene Ontology (GO) have been widely used in biology and bio-medicine. Most previous research focuses on inferring new GO terms, while the term names that reflect gene function are still written by experts. To fill this gap, we propose a novel task, namely term name generation for GO, and build a large-scale benchmark dataset. Furthermore, we present a graph-based generative model that incorporates the relations between genes, words and terms for term name generation, which exhibits great advantages over the strong baselines. + +# 1 Introduction and Related Work + +Gene Ontology (GO) is a widely-used biological ontology, which contains a large number of terms to describe gene function in three aspects, namely molecular function, biological process and cellular component (Consortium, 2015, 2016). The terms are organized hierarchically like a tree, and can be used to annotate the genes as demonstrated in Figure 1. GO has been extensively studied in the research community of bio-medicine and biology for its great value in many applications, such as protein function analysis (Cho et al., 2016) and disease association prediction (Menche et al., 2015). + +A major concern in GO is its construction, including term discovery, naming and organization (Mazandu et al., 2017; Koopmans et al., 2019). In early studies, the terms were manually defined and organized by experts in particular areas of biology, which is very labor-intensive and inefficient given the large volume of biological literature published every year (Tomczak et al., 2018). Moreover, different experts may use different expressions to describe the same biological concept, causing an inconsistency problem in term naming. + +![](images/51d52a869569bf673c40a80f2cd6ad1e112cbfba74d54a34dd63a089c1fde0c0.jpg) +Figure 1: A term named "Regulation of cell growth" and the related genes with aliases and descriptions.
+ +Recently, many researchers have turned to developing automatic methods for GO construction. Dutkowski et al. (2013) proposed a Network-eXtracted Ontology (NeXO), which clustered genes hierarchically based on their connections in the molecular networks, and recovered around $40\%$ of the terms according to the alignment between NeXO and GO. In order to further improve the performance, Kramer et al. (2014) identified gene cliques, which were treated as terms, in an integrated biological network. Though these methods infer new GO terms and their relationships based on the structured networks automatically (Gligorijevic et al., 2014; Li and Yip, 2016; Peng et al., 2015), the new terms are still named manually by the experts, which is prone to the problems of inefficiency and inconsistency. Furthermore, only the structure information in existing networks is utilized, while the genes' rich textual information that potentially describes the corresponding term has not been well studied. + +In order to obtain term names automatically to boost GO construction, we propose a novel task that aims to generate term names based on the textual information of the related genes. An illustrative example of the task is shown in Figure 1. The genes IGFBP3, OGFR and BAP1 are annotated by the term with ID GO:0001558 and name "Regulation of cell growth". Since we observe some word overlaps between the term name and the gene text (alias and description), we aim to generate the term name based on the gene text. + +![](images/17f3bd32e9fed77093e1ef27e62f76f9a897ea928e05bb734bddbc9da049aea7.jpg) + +![](images/686b8b756a36cce65aa1cd47c76b9e9230bde99c52b70545ba123735afa210c8.jpg) + +![](images/e746610a0078b18584baff1966225da197edcbfcdadb8462fa9a8e500c2d339e.jpg) + +![](images/ee75eb73ccfd84016625cb2ec6ac0b4a1e79f13b436bfacb7898c3dda911a111.jpg) +Figure 2: Distributions of the dataset.
To facilitate the research, we first present a dataset for term name generation in GO. Then, we propose a graph-based generative model that incorporates the potential relations between genes, words and terms for term name generation. The experimental results indicate the effectiveness of our proposed model. The contributions of our work are three-fold: (1) To the best of our knowledge, it is the first attempt to explore generating term names for GO automatically. (2) We present a large-scale dataset for term name generation based on various biological resources, which will help boost the research in bio-medicine and biology. (3) We conduct extensive experiments with in-depth analyses, which verify the effectiveness of our proposed model. + +# 2 Dataset + +We build a large-scale dataset$^{1}$ for term name generation, which contains the GO terms about Homo sapiens (humankind). We collect the term ID, term name and the corresponding genes' IDs from the Gene Ontology Consortium$^{2}$. In addition, the gene aliases and descriptions are crawled from GeneCards$^{3}$, which contains the information from the Universal Protein Resource (UniProt)$^{4}$. + +Our dataset contains 18,092 samples in total. Each sample contains a term ID, term name and the related genes with aliases and descriptions, as demonstrated in Figure 1. The statistics and distributions of the dataset are shown in Table 1 and Figure 2. We observe that about $51.3\%$ of the words are shared between term names and related genes, indicating the potential to utilize the textual information of genes for term name generation. It is also interesting to find that some patterns like "regulation of" appear frequently in term names, which provides valuable clues for enhancing the performance of generation. + +
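The shared-word statistic can be estimated with a simple token-overlap count, sketched below; the tokenization (lowercased whitespace splitting) is an assumption rather than the paper's exact preprocessing, and the example strings are invented for illustration.

```python
def shared_word_ratio(term_names, gene_texts):
    """Fraction of term-name tokens that also occur in the
    corresponding genes' text (aliases + descriptions)."""
    shared, total = 0, 0
    for name, text in zip(term_names, gene_texts):
        gene_vocab = set(text.lower().split())
        tokens = name.lower().split()
        shared += sum(tok in gene_vocab for tok in tokens)
        total += len(tokens)
    return shared / total

# Invented toy sample, for illustration only:
terms = ["regulation of cell growth"]
genes = ["IGFBP3 insulin-like growth factor binding protein regulation of cell proliferation and growth"]
print(shared_word_ratio(terms, genes))  # → 1.0
```

Aggregated over the full dataset, a count of this kind would yield the 51.3% figure reported above.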
| Statistic | Value |
| --- | --- |
| # of terms | 18,092 |
| # of genes | 17,233 |
| Avg. length of term name | 4.74 |
| Avg. length of gene alias | 4.83 |
| Avg. length of gene description | 66.1 |
| Shared words between term and gene | 51.3% |
+ +Table 1: Statistics of the dataset. + +# 3 Graph-based Generative Model + +The classical generative models such as Seq2Seq (Sutskever et al., 2014), HRNNLM (Lin et al., 2015) and Transformer (Vaswani et al., 2017) only incorporate the sequential information of the source text for sentence generation, while the potential structure within the text is neglected. To alleviate this problem, we build a heterogeneous graph with the words, genes and terms as nodes, and adopt a graph-based generative model for term name generation. The overall architecture of our graph-based generative model is shown in Figure 3, which consists of two components: the GCN based encoder and the graph attention based decoder. + +# 3.1 GCN based Encoder + +The GCN-based encoder aims to encode the relations between genes, words and terms for boosting term name generation. We first construct a heterogeneous graph based on the dataset, and then apply a Graph Convolutional Network (GCN) (Vashishth et al., 2019) for representation learning. + +![](images/86b36b574eb12ff40850f481afedb7775b23e07a453e859b49456a15a5b06a69.jpg) +Figure 3: The overall architecture of our Graph-based Generative Model. Prob("beta", g) and Prob("beta", c) denote the probabilities based on the generation-mode and copy-mode respectively. + +Graph Construction. We build a heterogeneous graph where the nodes are the words, genes and terms, and the edges reflect the relations between them. The words come from the gene text. Regarding the edges, there are two types: word-gene and gene-term. The value for the word-gene edge is the normalized count of the word in the gene text, while the value for the gene-term edge is 1 if the gene can be annotated by the term. + +Representation Learning. The initial representation of a word node is its word embedding. For a gene node, the gene alias and description encoded by a GRU model are used as the initial representation.
Regarding the term node, the pooling over all the representations of the related gene nodes is used as the initial representation. Then, we update the node representations via a GCN model due to its effectiveness in modeling structure information (Kipf and Welling, 2016), which is formulated as follows: + +$$ +\mathcal{X}^{\prime} = \hat{A} \operatorname{ReLU}\left(\hat{A} \mathcal{X} W^{(0)}\right) W^{(1)} \tag{1} +$$ + +where $\hat{A} = A + I$ , $A$ is the adjacency matrix of the graph, and $I$ is the identity matrix. $\mathcal{X}$ is the initial representation of the nodes, denoted as $\mathcal{X} = (t,g_1,\dots,g_m,w_1,\dots,w_n)$ , where $g_{i}$ , $w_{i}$ and $t$ denote the initial representations of the $i$-th gene node, the $i$-th word node and the term node, respectively. $W^{(0)}$ and $W^{(1)}$ are the weight matrices of the first and second layers of the GCN. + +# 3.2 Graph Attention based Decoder + +Motivated by the effectiveness of the attention mechanism for generation (Bahdanau et al., 2014), we adopt a graph attention based decoder to generate the term name. The attentive word node representation by GCN is utilized and formulated as: + +$$ +a_{t} = \sum_{j = 1}^{n} \alpha_{j} w_{j}^{\prime} \tag{2} +$$ + +$$ +\alpha_{j} = \operatorname{softmax}\left(v^{T} \tanh\left(W_{a} [h_{t-1}; w_{j}^{\prime}]\right)\right) +$$ + +where $h_{t-1}$ is the previous hidden state, $w'_j$ is the word node representation by GCN, $v$ is a parameter vector, and $W_a$ is a parameter matrix. + +Given the word overlaps between the gene text and term name, we utilize the copy mechanism in CopyNet (Gu et al., 2016) for decoding, making it possible to generate the word from either the vocabulary of the training set or the current gene text.
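For concreteness, the two-layer GCN update of Eq. (1) can be sketched in a few lines of NumPy; the graph, feature dimensions and weight matrices below are random placeholders, not the trained model's parameters.

```python
import numpy as np

def gcn_two_layer(A, X, W0, W1):
    """Compute X' = Â · ReLU(Â · X · W0) · W1 with Â = A + I (Eq. 1)."""
    A_hat = A + np.eye(A.shape[0])
    H = np.maximum(A_hat @ X @ W0, 0.0)  # first GCN layer with ReLU
    return A_hat @ H @ W1                # second GCN layer

# Placeholder heterogeneous graph: 6 nodes (1 term + 2 genes + 3 words), feature dim 8
rng = np.random.default_rng(0)
A = (rng.random((6, 6)) > 0.7).astype(float)
X = rng.standard_normal((6, 8))
W0 = rng.standard_normal((8, 8))
W1 = rng.standard_normal((8, 8))
print(gcn_two_layer(A, X, W0, W1).shape)  # (6, 8)
```

In the paper's setting, the first rows of `X` would be the term and gene initializations described above and the remaining rows the word embeddings.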
The initial hidden state $h_0$ is the term node representation (i.e., $t'$ ) obtained by GCN, and the hidden state is updated as: + +$$ +h_{t} = f\left(\left[ h_{t-1}; w_{t-1}; a_{t}; w_{SR}^{\prime} \right]\right) \tag{3} +$$ + +where $f$ is the RNN function, $w_{t-1}$ is the word embedding of the previously generated word, and $w_{SR}'$ is a selective read (SR) vector in CopyNet. When the previously generated word appears in the gene text, the next word will also probably come from it, and thus $w_{SR}'$ is the previous word node representation; otherwise it is a zero vector. + +The probability of generating a target word $y_{t}$ is calculated as a mixture of the probabilities by the generation-mode and copy-mode as follows: + +$$ +p\left(y_{t} \mid h_{t}\right) = \frac{1}{Z} e^{\psi_{g}\left(y_{t}\right)} + \frac{1}{Z} \sum_{j: x_{j} = y_{t}} e^{\psi_{c}\left(x_{j}\right)} \tag{4} +$$ + +where $\psi_g(y_t)$ and $\psi_c(x_j)$ are score functions for the generation-mode and copy-mode respectively, which can be defined as demonstrated in (Gu et al., 2016). $Z = \sum_{v\in \mathcal{V}}e^{\psi_g(v)} + \sum_{x\in S}e^{\psi_c(x)}$ , where $\mathcal{V}$ denotes the word vocabulary in the training set, and $S$ denotes the source word set in the gene text. It is notable that there are many fixed patterns in the term names, as mentioned in Section 2. Therefore, we extract the top-ranked bigrams and trigrams, and treat them as new words for ease of generation. + +# 4 Experiment + +# 4.1 Experimental Setup + +Implementation Details. The dataset is divided into the training, validation and test sets with a proportion of 8:1:1. We adopt the widely used evaluation metrics BLEU-1/2/3 (Papineni et al.,
| Model | Rouge-1 | Rouge-2 | Rouge-L | BLEU-1 | BLEU-2 | BLEU-3 |
| --- | --- | --- | --- | --- | --- | --- |
| TF-IDF | 9.6 | – | – | 9.6 | – | – |
| LexRank | 9.7 | – | – | 9.7 | – | – |
| Seq2Seq | 18.8 | 10.0 | 16.0 | 11.7 | 7.4 | 2.5 |
| HRNNLM | 19.0 | 10.1 | 16.3 | 11.7 | 7.4 | 2.8 |
| Transformer | 17.7 | 8.7 | 16.7 | 15.0 | 9.1 | 3.9 |
| full model | **21.6** | **10.3** | **22.1** | **17.8** | **10.6** | **4.0** |
| *Ablation study* | | | | | | |
| No copy | 22.5 | 10.3 | 20.6 | 17.5 | 10.2 | 3.8 |
| No pattern | 21.3 | 9.7 | 22.0 | 16.5 | 9.2 | 3.3 |
| No copy and pattern | 21.0 | 10.1 | 18.6 | 15.6 | 9.2 | 3.1 |
+ +Table 2: Overall performance of different models. The best result is marked in bold. Only the Rouge-1 and BLEU-1 scores for the extractive models are shown since they usually extract the unigrams independently. + +2002) and Rouge $_{1,2,L}$ (Lin, 2004) for the generation task. The word embeddings are initialized from $\mathcal{N}(0,1)$ with a dimension of 300 and updated during training. The dimension of the hidden units for GRU (Chung et al., 2014) and GCN is 300. We initialize the parameters according to a uniform distribution with the Xavier scheme (Kumar, 2017), and the dropout rate is set to 0.5. The Adam (Kingma and Ba, 2014) method with a learning rate of 1e-3 is used for training. + +Baseline Methods. To evaluate the effectiveness of our proposed model, we compare against advanced baselines in two categories: (1) TF-IDF; (2) LexRank (Erkan and Radev, 2004); (3) Seq2Seq (Sutskever et al., 2014); (4) HRNNLM (Lin et al., 2015); (5) Transformer (Vaswani et al., 2017). The former two are extractive models which extract words from the gene text as the term name, and the latter three are generative models which generate words from the vocabulary space as the term name. + +# 4.2 Experimental Results + +The experimental results are shown in Table 2. It is observed that the generative models perform better than the extractive models by incorporating the language probability into generation, which makes the generated term name more coherent. In contrast, the extractive models usually extract keywords independently, which makes it hard to form a complete and concise term name. It is also notable that our graph-based generative model achieves the best performance in all cases by incorporating the relations between the genes, words and terms into generation. The other generative models, meanwhile, bring in unnecessary sequential information across multiple genes, which may have a side effect on term name generation.
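The generation/copy mixture of Eq. (4), which the copy-mechanism ablation removes, can be rendered schematically in NumPy. The score vectors below are random stand-ins for the learned CopyNet score functions $\psi_g$ and $\psi_c$; this is an illustrative sketch, not the released implementation.

```python
import numpy as np

def mixture_prob(psi_g, psi_c, source_ids, target_id):
    """p(y_t | h_t) = (e^{psi_g(y_t)} + sum_{j: x_j = y_t} e^{psi_c(x_j)}) / Z  (Eq. 4).

    psi_g: generation-mode scores over the vocabulary
    psi_c: copy-mode scores over the source (gene text) positions
    source_ids: vocabulary id of each source position
    """
    Z = np.exp(psi_g).sum() + np.exp(psi_c).sum()
    copy_mass = np.exp(psi_c[source_ids == target_id]).sum()
    return (np.exp(psi_g[target_id]) + copy_mass) / Z

rng = np.random.default_rng(1)
psi_g = rng.standard_normal(10)        # toy vocabulary of 10 words
psi_c = rng.standard_normal(4)         # toy gene text of 4 words
source_ids = np.array([3, 7, 3, 1])    # a word (id 3) occurring twice in the gene text
# Probabilities sum to 1 because every source word is in the vocabulary
total = sum(mixture_prob(psi_g, psi_c, source_ids, t) for t in range(10))
print(round(total, 6))  # 1.0
```

Words appearing in the gene text receive extra copy-mode mass, which is why the copy mechanism particularly helps BLEU in Table 2.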
+ +From the ablation study, we find that when we treat the frequent patterns as new words during generation and then restore them, the performance can be further boosted. In addition, the copy mechanism can help improve the generation performance, especially in terms of the BLEU scores, which proves the effectiveness of using the shared words between genes and terms for term name generation. + +# 4.3 Visualization of Attention + +To gain insight into why our proposed graph-based generative model is more effective, we randomly sample a generated term name that is the same as the ground truth, and draw an attention heatmap for the words in the term name and the corresponding gene aliases in Figure 4. The attention result for the gene descriptions is not presented here due to the limited space. We observe that the word Tweety, which represents a gene group$^{5}$ in the gene aliases, is highly related to words such as Transporter and Activity in the term name, which indicates the potential of modeling the relations between words, genes and terms for enhancing the performance of term name generation. + +![](images/b85e3268ba3acb1b30b8798648e35c9eee633997f4792c86123110962469f311.jpg) +Figure 4: Attentive weight visualization. The vertical and horizontal axes denote the words in the term name and gene aliases respectively. + +# 5 Conclusions and Future Work + +In this paper, we propose a novel task of automatic term name generation based on the gene text for GO. We construct a large-scale dataset and provide insights into this task. Experimental results show that our proposed graph-based generative model is superior to other strong baselines by modeling the relations between genes, words and terms. In the future, we will explore how to utilize more knowledge to guide term name generation. + +# Acknowledgement + +This work is partially supported by National Natural Science Foundation of China (No.
71991471, 61702421, 61906045), Science and Technology Commission of Shanghai Municipality Grant (No.20dz1200600, No.18DZ1201000, 17JC1420200), CURE (Hui-Chun Chin and Tsung-Dao Lee Chinese Undergraduate Research Endowment) (19931), China Postdoctoral Science Foundation (No.2019M661361), and National University Student Innovation Program (202010246045). + +# References + +Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. +Hyunghoon Cho, Bonnie Berger, and Jian Peng. 2016. Compact integration of multi-network topology for functional analysis of genes. Cell systems, 3(6):540-548. +Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. +Gene Ontology Consortium. 2015. Gene ontology consortium: going forward. Nucleic acids research, 43(D1):D1049-D1056. +Gene Ontology Consortium. 2016. Expansion of the Gene Ontology knowledgebase and resources. *Nucleic acids research*, 45(D1):D331-D338. +Janusz Dutkowski, Michael Kramer, Michal A Surma, Rama Balakrishnan, J Michael Cherry, Nevan J Krogan, and Trey Ideker. 2013. A gene ontology inferred from molecular networks. Nature biotechnology, 31(1):38. +Günes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of artificial intelligence research, 22:457-479. +Vladimir Gligorijevic, Vuk Janić, and Nataša Pržulj. 2014. Integration of molecular network data reconstructs Gene Ontology. Bioinformatics, 30(17):i594-i600. +Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. arXiv preprint arXiv:1603.06393. + +Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. +Thomas N Kipf and Max Welling. 2016. 
Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. +Frank Koopmans, Pim van Nierop, Maria Andres-Alonso, Andrea Byrnes, Tony Cijsouw, Marcelo P Coba, L Niels Cornelisse, Ryan J Farrell, Hana L Goldschmidt, Daniel P Howrigan, et al. 2019. SynGO: an evidence-based, expert-curated knowledge base for the synapse. Neuron, 103(2):217-234. +Michael Kramer, Janusz Dutkowski, Michael Yu, Vineet Bafna, and Trey Ideker. 2014. Inferring gene ontologies from pairwise similarity data. Bioinformatics, 30(12):i34-i42. +Siddharth Krishna Kumar. 2017. On weight initialization in deep neural networks. arXiv preprint arXiv:1704.08863. +Le Li and Kevin Y Yip. 2016. Integrating information in biological ontologies and molecular networks to infer novel terms. *Scientific reports*, 6:39237. +Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out. +Rui Lin, Shujie Liu, Muyun Yang, Mu Li, Ming Zhou, and Sheng Li. 2015. Hierarchical recurrent neural network for document modeling. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 899-907. +Gaston K Mazandu, Emile R Chimusa, and Nicola J Mulder. 2017. Gene ontology semantic similarity tools: survey on features and challenges for biological knowledge discovery. Briefings in bioinformatics, 18(5):886-901. +Jörg Menche, Amitabh Sharma, Maksim Kitsak, Susan Dina Ghiassian, Marc Vidal, Joseph Loscalzo, and Albert-László Barabási. 2015. Uncovering disease-disease relationships through the incomplete interactome. Science, 347(6224):1257601. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311-318. Association for Computational Linguistics. +Jiajie Peng, Tao Wang, Jixuan Wang, Yadong Wang, and Jin Chen. 2015. 
Extending gene ontology with gene association networks. Bioinformatics, 32(8):1185-1194. +Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104-3112. + +Aurelie Tomczak, Jonathan M Mortensen, Rainer Winnenburg, Charles Liu, Dominique T Alessi, Varsha Swamy, Francesco Vallania, Shane Lofgren, Winston Haynes, Nigam H Shah, et al. 2018. Interpretation of biological experiments changes with evolution of the Gene Ontology and its annotations. Scientific reports, 8(1):1-10. +Shikhar Vashishth, Shib Sankar Dasgupta, Swayambhu Nath Ray, and Partha Talukdar. 2019. Dating documents using graph convolution networks. arXiv preprint arXiv:1902.00175. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008. \ No newline at end of file diff --git a/automatictermnamegenerationforgeneontologytaskanddataset/images.zip b/automatictermnamegenerationforgeneontologytaskanddataset/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..18c0d4eda51ac2f2640c08f94d95eee8aa721cbf --- /dev/null +++ b/automatictermnamegenerationforgeneontologytaskanddataset/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6b2223aeb6eb7685afb2b7e2728ca404bb40a1b2113bed546ea67fe232a07be +size 229497 diff --git a/automatictermnamegenerationforgeneontologytaskanddataset/layout.json b/automatictermnamegenerationforgeneontologytaskanddataset/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..3ec5a7a3515a2384dbb3105fe13ba346cd5a082c --- /dev/null +++ b/automatictermnamegenerationforgeneontologytaskanddataset/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:23ea765b06249d78a08d8359492a6dbaae245b64995ac5abd92be954f2be7864 +size 189928 diff --git a/balancingviagenerationformulticlasstextclassificationimprovement/e7caf516-963d-4a5e-b1f4-6fbd9174e219_content_list.json b/balancingviagenerationformulticlasstextclassificationimprovement/e7caf516-963d-4a5e-b1f4-6fbd9174e219_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..6b35f67c761161922237da07a5e79c8d4e5ed965 --- /dev/null +++ b/balancingviagenerationformulticlasstextclassificationimprovement/e7caf516-963d-4a5e-b1f4-6fbd9174e219_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:708a3387e79eff07ed6afd62a7021da596f054c6f1649c89b41697df3ad6018a +size 85724 diff --git a/balancingviagenerationformulticlasstextclassificationimprovement/e7caf516-963d-4a5e-b1f4-6fbd9174e219_model.json b/balancingviagenerationformulticlasstextclassificationimprovement/e7caf516-963d-4a5e-b1f4-6fbd9174e219_model.json new file mode 100644 index 0000000000000000000000000000000000000000..2887f12ad5a04fa6a6f7fe1b441dc5758bf89ca2 --- /dev/null +++ b/balancingviagenerationformulticlasstextclassificationimprovement/e7caf516-963d-4a5e-b1f4-6fbd9174e219_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f41eb0de0c151395399f6ef6140c91ecce35367c0cd034093cbdaaf281e90e2a +size 107567 diff --git a/balancingviagenerationformulticlasstextclassificationimprovement/e7caf516-963d-4a5e-b1f4-6fbd9174e219_origin.pdf b/balancingviagenerationformulticlasstextclassificationimprovement/e7caf516-963d-4a5e-b1f4-6fbd9174e219_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a3491d1870adbc89003499f7dba497caca7b0820 --- /dev/null +++ b/balancingviagenerationformulticlasstextclassificationimprovement/e7caf516-963d-4a5e-b1f4-6fbd9174e219_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd398660c022f10c59b31b6a9d42a7b99e86891c6d59eda9c3133d0d20d69328 +size 
945930 diff --git a/balancingviagenerationformulticlasstextclassificationimprovement/full.md b/balancingviagenerationformulticlasstextclassificationimprovement/full.md new file mode 100644 index 0000000000000000000000000000000000000000..5f721a91b37926bb93c32670e470922201aeb7e9 --- /dev/null +++ b/balancingviagenerationformulticlasstextclassificationimprovement/full.md @@ -0,0 +1,375 @@ +# Balancing via Generation for Multi-Class Text Classification Improvement + +Naama Tepper*, Esther Goldbraich*, Naama Zwerdling, George Kour, Ateret Anaby-Tavor, Boaz Carmeli + +IBM Research + +{naama.tepper, esthergold, naamaz, atereta, boazc}@il.ibm.com, gkour@ibm.com + +# Abstract + +Data balancing is a known technique for improving the performance of classification tasks. In this work we define a novel balancing-via-generation framework termed BalaGen. BalaGen consists of a flexible balancing policy coupled with a text generation mechanism. Combined, these two techniques can be used to augment a dataset for a more balanced distribution. We evaluate BalaGen on three publicly available semantic utterance classification (SUC) datasets. One of these is a new COVID-19 Q&A dataset published here for the first time. Our work demonstrates that optimal balancing policies can significantly improve classifier performance, while augmenting just part of the classes and under-sampling others. Furthermore, capitalizing on the advantages of balancing, we show its usefulness in all relevant BalaGen framework components. We validate the superiority of BalaGen on ten semantic utterance datasets taken from real-life goal-oriented dialogue systems. Based on our results, we encourage using data balancing prior to training for text classification tasks. + +# 1 Introduction + +Imbalanced datasets pose a known difficulty in achieving ultimate classification performance, as classifiers tend to be biased towards larger classes (Guo et al., 2008; Japkowicz and Stephen, 2002; Japkowicz, 2000).
Moreover, identifying samples that belong to under-represented classes is of high importance in many real-life domains such as fraud detection, disease diagnosis, and cyber security. + +Although the imbalanced data classification problem is well-defined and has been researched extensively over the last two decades (Estabrooks et al., 2004; Batista et al., 2004; Ramyachitra and Manikandan, 2014; Zhu et al., 2017; Buda et al., 2018), there has been considerably less work devoted to balancing textual datasets. + +We propose a novel balancing-via-generation framework, termed BalaGen, to improve textual classification performance. BalaGen uses a balancing policy to identify over- and under-represented classes. It then uses controlled text generation, coupled with a weak labeling mechanism, to augment the under-represented classes. Additionally, it applies under-sampling to decrease the over-represented classes. + +Our analysis is focused on semantic utterance classification (SUC) (Tur et al., 2012; Tur and Deng, 2011; Schuurmans and Frasincar, 2019). SUC is a fundamental, multi-class, highly imbalanced textual classification problem. For example, it is widely used for intent (class) detection in goal-oriented dialogue systems (Henderson et al., 2014; Bohus and Rudnicky, 2009), and for frequently asked question (FAQ) retrieval (Sakata et al., 2019; Gupta and Carvalho, 2019; Wang et al., 2017). + +Correctly identifying scarce utterances is of great importance in many real-life scenarios. For example, consider a scenario in which a user converses with the dialogue system in an online shop (Yan et al., 2017). For the store owner, the task of correctly identifying the buying-intent utterances is paramount. However, the number of utterances related to searching for products is expected to be significantly higher, thus biasing the classifier toward this intent. + +We analyzed BalaGen's capabilities on two publicly available SUC datasets.
In addition, we introduce a new dataset called COVID-19 Q&A (CQA), which contains answers to questions frequently asked by the public during the pandemic period. Analysis of this new dataset further demonstrates improved performance using our approach. + +Our contribution is thus four-fold: (i) We present BalaGen, a balancing-via-generation framework for optimizing classification performance on imbalanced multi-class textual datasets. (ii) We analyze different factors that affect BalaGen's performance, including the quality of generated textual data, weak supervision mechanisms, and the balancing of BalaGen's internal components. (iii) We validate our approach on three publicly available datasets and a collection of ten SUC datasets used to train real-life goal-oriented dialogue systems. (iv) We contribute a new COVID-19 related SUC dataset. + +# 2 Related Work + +In imbalanced classification, also known as the "Class Imbalance Problem", classifiers tend to be biased towards larger classes (Provost, 2000). This challenge has garnered extensive research over the past decades (Estabrooks et al., 2004; Chawla et al., 2004; Sahare and Gupta, 2012). The range of approaches to solve this issue depends on the type of data and the target classifier (Zheng et al., 2004; Sun et al., 2009; Wang and Yao, 2009; Liu et al., 2009). Ramyachitra and Manikandan (2014) divide classification improvements over imbalanced datasets into five levels: data, algorithmic, cost-sensitive, feature selection and ensemble. We focus our review on the data level and specifically on textual dataset balancing. + +Primary data-level methods vary the number of samples in the dataset via re-sampling. We follow the common terminology and refer to a method that adds samples to a dataset as over-sampling, and to a method that removes samples as under-sampling. Sample-copy, i.e.
duplicating existing samples, is the most straightforward over-sampling method, and random-selection is the most straightforward under-sampling method. While these methods were shown to be effective to some extent for data balancing, they are insufficient when it comes to solving the problem (Branco et al., 2016). + +Traditional and well-researched feature-based over-sampling techniques generate new samples via feature manipulation (Wong et al., 2016). Most of these techniques are based on the Synthetic Minority Oversampling TEchnique (SMOTE) (Chawla et al., 2002) or the ADAptive SYNthetic (ADASYN) approach (He et al., 2008). These approaches create synthetic samples by manipulating the feature values of existing samples. However, the latest deep learning (DL) models do not have an explainable feature layer to manipulate. Although the embedding layer may be perceived as the DL
Additional research works include structure preserving word replacement using a Language Model (Kobayashi, 2018), recurrent neural language generation for augmentation (Rizos et al., 2019), and various paraphrasing methods as done in (Gupta et al., 2017). + +Recently, transformer-based pre-trained architectures (Vaswani et al., 2017) have been developed and successfully applied to a wide set of Natural Language Generation (NLG), processing and understanding tasks. Examples of these include Generative Pre-trained (GPT) (Radford et al., 2019), which is a right-to-left language model based on the transformer's decoder architecture (Vaswani et al., 2017), BERT (Devlin et al., 2018), BART (Lewis et al., 2019) and T5 (Raffel et al., 2019). These attention-based architectures are capable of generating human-level high-quality text, making them a compelling choice for textual data augmentations. Specifically, CBERT (Wu et al., 2019) improves EDA by using BERT synonym prediction. Additional advanced transformer-based methods control the generation process by providing an existing sample, designated class label, or both. These methods were shown to be beneficial for data augmentation (Anaby-Tavor et al., 2019; Kumar et al., 2020). However, these methods suffer from several drawbacks: first, they were only shown to be successful on small sized datasets (five samples per class or $1\%$ of the dataset). Second, the augmentation process was shown to be error prone as the generated samples do not always preserve the class label of the original data. Third, as we + +show in this work, naively using these methods to generate a constant number of samples for each class in the dataset, as done in previous work, does not realize their full potential for improving textual classification tasks. + +Other approaches for data balancing can include weak-labeling of available unlabeled data (Ratner et al., 2020), or even active learning (Settles, 2009). 
However, both of these approaches require additional domain data, which is not always available.

Notably, some approaches aim at assuring the interpretability of generated samples (Santos et al., 2017). BalaGen takes a different approach, aiming to improve performance without consideration of the textual validity/interpretability of generated sentences, as done in (Rizos et al., 2019). Thus, only class preservation and the ability to contribute to accuracy are considered.

To the best of our knowledge, this is the first work to explore the use of transformer-based augmentation techniques directly towards data balancing to improve textual classification tasks.

# 3 Method

At the cornerstone of our methodology lie recent controlled text generation methods, capable of synthesizing high-quality samples (Kumar et al., 2020; Anaby-Tavor et al., 2019). We test the hypothesis that enhancing these generation methods with a new balancing technique, which differentially adds and removes samples from classes, can result in a significant improvement in classifier accuracy.

To overcome the well-known drawback of over-sampling via text generation, i.e., that class-label preservation is not guaranteed (Kumar et al., 2020), we employ a weak labeling mechanism which is used to select generated samples that have a high probability of preserving their class label. We further refer to weak labelers simply as labelers.

In the rest of this section, we describe the steps of our BalaGen approach. We refer to the step numbers according to the enumeration in the pseudocode given in Algorithm 1 and the schematic flow diagram shown in Figure 1.

Balancing policy: A balancing policy $\pi(\cdot)$ generally aims to reach a specific distribution of the samples among the classes by adding and removing samples. In step (1) we use policies that determine a band $[B_{low}, B_{high}]$ within which classes are considered Well-Represented (WR).
Consequently, classes smaller than $B_{low}$ are referred to as Under-Represented (UR) and should be further over-sampled, e.g., via augmentation. Classes larger than $B_{high}$ are considered Over-Represented (OR) and will be under-sampled.

In the following, let $c_{i}$ be the index of the $i^{th}$ class after sorting the classes by their size (i.e., the number of samples) in ascending order. Given that $n$ is the number of classes, $|c_{n}|$ is the size of the largest class. In Figure 2 we describe several types of balancing policies supported by BalaGen.

While there may be many approaches to determine the $WR$ band, here we employ the following percentile approach: given the parameters $\beta_{low}$ and $\beta_{high}$, we set $B_{low}$ such that $\beta_{low}\%$ of the classes belong to the $UR$ set and set $B_{high}$ such that $\beta_{high}\%$ of the classes belong to the $OR$ set. Note that $\beta_{low} + \beta_{high} \leq 100$.

Algorithm 1: BalaGen

Input: Training dataset $D$

Weak labeling models $\mathcal{L}_1,\dots,\mathcal{L}_k$

(Pre-trained) language model $\mathcal{G}$

Balancing policy $\pi (\cdot)$

Over-sampling method $\mathcal{OS}(\cdot ,\cdot)$

Under-sampling method $\mathcal{US}(\cdot ,\cdot)$

1 $[B_{low}, B_{high}] \gets \pi(D)$
2 $D^{S}\gets \mathcal{OS}(\mathcal{US}(D,B_{high}),B_{low})$
3 Fine-tune $\mathcal{G}$ using $D^S$ to obtain $\mathcal{G}_{\text{tuned}}$ and synthesize a set of labeled samples for the under-represented classes $D^*$ using $\mathcal{G}_{\text{tuned}}$
4 $h_1\gets \mathcal{L}_1(D^S),\dots,h_k\gets \mathcal{L}_k(D^S)$
5 Select the best samples in $D^{*}$ using weak labelers $h_1, \ldots, h_k$ to obtain $D_{syn}$
6 $D_{\text{Balanced}} \gets \mathcal{US}(D_{\text{syn}} \cup D, B_{\text{high}})$
7 return $D_{\text{Balanced}}$

Balancing the train set of the generator and weak-labelers: In step (2) we compose a balanced dataset $D^{S}$ used to train the generator and the
labeler(s). The under-sampling method is executed on the $OR$ classes targeting the $B_{high}$ threshold, while the over-sampling method is executed on the $UR$ classes targeting the $B_{low}$ threshold. This step aims to reduce class biases of the generator and labelers. Formally, $\mathcal{OS}$ and $\mathcal{US}$ denote over- and under-sampling functions, respectively. Each accepts two parameters: a dataset $D$ to operate on and a threshold $B$.

![](images/b1eef311604989a6ab2ed918808fbbe58997fafa6db50fb7f3a1d3362af0dc6d.jpg)
Figure 1: Flow diagram of BalaGen: Given dataset distribution $D$ ; (1) a balancing policy is applied to determine the $[B_{low}, B_{high}]$ band; (2) a balanced $D^{S}$ is created for training BalaGen's components; (3) the language model is first trained, and then used to generate $D^{*}$ with synthetic samples for the $UR$ classes; (4) weak labeling models are trained and then used to label samples in $D^{*}$ ; (5) generated samples are selected according to their labels up to $B_{low}$ , creating $D_{syn}$ ; (6) $D$ is augmented with $D_{syn}$ and $OR$ classes in $D$ are under-sampled. $\mathcal{OS}$ - over-sampling, $\mathcal{US}$ - under-sampling.

![](images/344804ef5088ff5770c4035c1f3c249f856aacbfa0b05b9c2f7cb9500103ba37.jpg)

![](images/f7e20a3aa3ef5426320a62d9b9e0b5db250a7b57375ebdde113bc0ee7cfa2922.jpg)
Figure 2: Balancing policies on an example dataset distribution: A. Baseline (no augmentation and no balancing), B. Augment-only (without balancing), C. Naive-OS ( $B_{low} = B_{high} = |c_n|$ ), D. Partial-OS ( $B_{low} < B_{high} = |c_n|$ ), E. Partial-OS-US ( $B_{low} < B_{high} < |c_n|$ ). Abbreviations: OS - over-sampling, US - under-sampling, $|c_n|$ - number of samples in the largest class.

Sample generation: In step (3) we first fine-tune (or train, if it is not a pre-trained model) the language model $\mathcal{G}$ on $D^S$ to obtain $\mathcal{G}_{\text{tuned}}$ . Then, $\mathcal{G}_{\text{tuned}}$ is used to generate $D^*$ .
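Steps (1) and (2) can be sketched as follows. This is a minimal illustration of the percentile band policy combined with sample-copy/random-selection balancing, under the assumptions stated in the comments; `percentile_band` and `naive_balance` are illustrative helper names, not the authors' implementation:

```python
import random
from collections import defaultdict

def percentile_band(class_sizes, beta_low, beta_high):
    """Step (1): pick [B_low, B_high] so that roughly beta_low% of the
    classes fall below B_low (under-represented) and beta_high% fall
    above B_high (over-represented)."""
    sizes = sorted(class_sizes.values())     # ascending: |c_1| .. |c_n|
    n = len(sizes)
    n_ur = int(n * beta_low / 100)           # classes treated as UR
    n_or = int(n * beta_high / 100)          # classes treated as OR
    b_low = sizes[min(n_ur, n - 1)]
    b_high = sizes[n - n_or - 1]
    return b_low, b_high

def naive_balance(dataset, b_low, b_high, seed=0):
    """Step (2): random-selection under-sampling down to B_high and
    sample-copy over-sampling up to B_low, per class."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for text, label in dataset:
        by_class[label].append(text)
    balanced = []
    for label, texts in by_class.items():
        if len(texts) > b_high:              # OR class: under-sample
            texts = rng.sample(texts, b_high)
        while len(texts) < b_low:            # UR class: copy samples
            texts.append(rng.choice(texts))
        balanced.extend((t, label) for t in texts)
    return balanced
```

In the full framework the `while` loop is replaced by generation with $\mathcal{G}_{\text{tuned}}$ plus weak-label filtering; the band computation and the per-class targeting are the same.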
If a left-to-right pre-trained language model such as GPT-2 is used, the fine-tuning procedure follows the method proposed in (Anaby-Tavor et al., 2019); there, the class label is prepended to each sample during training. Then, conditioned on the class label, the fine-tuned model is used to generate samples for the $UR$ classes, denoted as $D^*$ .

Weak labeling: In step (4) we train the labeler(s) $\mathcal{L}_1, \dots, \mathcal{L}_k$ on $D^S$ and then label the generated samples in $D^*$ . The weak labeling step is required as an additional quality-assurance mechanism, since neither the quality of a generated sample nor the accuracy of its label can be guaranteed during the generation process.

Sample selection: In step (5), a set of generated samples is selected according to the labels assigned by the labelers and added to each class up to the $B_{low}$ threshold. The resulting dataset is denoted $D_{syn}$ .

Augmenting $UR$ classes and under-sampling $OR$ classes: In step (6), $D$ is augmented with the samples from $D_{syn}$ . Then, the $OR$ classes in $D$ are under-sampled.

# 4 Real-life SUC Datasets

# 4.1 COVID-19 Q&A Dataset (CQA)

We present a new dataset called COVID-19 Q&A, referred to as CQA (https://developer.ibm.com/exchanges/data/all/cqa/).

The CQA dataset contains questions that were frequently asked by the public during the COVID-19 pandemic period. The questions were categorised according to user intents. The dataset was created to ramp up a dialogue system that provides answers to questions frequently asked by the public. The data was collected by creating an initial classifier for a question-answering dialogue system, which was further extended by selecting samples from its logs of user interactions and then labeling them.

Table 1 shows examples of intents and utterances from the dataset. The dataset contains 884 user utterances, divided into 57 intents (classes) as shown
| Intent | Sample Utterances |
| --- | --- |
| Quarantine visits | • Can my friends visit me? • What is a safe distance when someone brings me groceries? |
| COVID Description | • What does covid stand for? • How does the virus spread |
| Case Count | • How many coronavirus cases are there in my area? • How many ppl are infected in the us? |
| Symptoms | • What are the early symptoms of covid-19? • How to distinguish it from a common cold |
Table 1: Examples of utterances and their corresponding intents in the CQA dataset.

in Table 2. The CQA dataset is moderately imbalanced and characterized by a balance-ratio of 1:76 (the ratio between the sizes of the largest and smallest classes). The dataset has an entropy-ratio of 0.91 (an entropy of 3.7 out of a maximal entropy of 4.04). We publish the dataset here in the hope of further promoting research on semantic utterance classification for goal-oriented dialogue systems.

# 4.2 Analysis of SUC Corpora

In addition to evaluating BalaGen on the CQA dataset, we also applied it to ten Semantic Utterance Classifier (SUC) datasets used to train real-life goal-oriented dialogue systems. Figure 3 presents the class distribution of the 10 SUC datasets, demonstrating their imbalanced state and hence the need for data balancing. Indeed, these datasets are characterized by a high average balance-ratio of 1:222. The median number of classes in these datasets is 100 (std = 66), and the median number of samples per class is 69 (std = 91).

![](images/d691bc3d1e0ec78967cc7abb8b3a435619f01f68c28f236e8a9915bcdb639897.jpg)
Figure 3: Imbalanced state of real-life Semantic Utterance Classifier (SUC) datasets. For each dataset, classes are aggregated into 20 bins, and median samples-per-class values are presented as a blue line. Median values for each bin over all datasets are presented as green bars.

# 5 Experiments

# 5.1 Experimental Settings

Datasets Table 2 describes the datasets used in our experiments:

- COVID-19 QA (CQA) - the new dataset introduced in Section 4.
- Stack Exchange Frequently Asked Questions (SEAQ) $^{1}$ - an FAQ retrieval test collection extracted from Stack Exchange. Stack Exchange is a network of question-and-answer (QA) websites on topics in diverse fields. It is the most balanced dataset in our analysis, with an entropy of 4.69.
- Airline Travel Information Systems (ATIS) $^2$ - queries on flight-related information, widely used in language understanding research. ATIS is the most imbalanced dataset; it has an entropy of 1.11. This is due to most of its data belonging to the 'flight' class.

Generative models: To assess the influence of the quality of the generated samples, we used three text generation methods: EDA (Wei and Zou, 2019), Markov Chain (MC) (Barbieri et al., 2012), and the Generative Pre-trained Transformer 2 (GPT-2) (Radford et al., 2019). GPT-2 was further used for most of the experiments, as it is considered superior on many textual tasks. To these, we added sample-copy as a baseline over-sampling method.

Weak labeling: We examined various weak labeling methods and used them to select generated samples in step (5):

- No weak labeling - assign the class used by the generator to generate the sample as the final class.
- Double voting - train a labeler classifier on the original train dataset. Use it to weakly label the generated samples, and only keep those samples whose weak label matches the class label of the original sample.
- Labeler ensemble - train an ensemble of labelers. For each labeler, apply the double-voting mechanism and then aggregate the generated samples from all
| Name | # Classes | Size | H |
| --- | --- | --- | --- |
| CQA | 57 | 884 | 3.68 |
| SEAQ | 125 | 719 | 4.69 |
| ATIS | 17 | 5384 | 1.11 |
labelers.

Table 2: Datasets. Abbreviations: CQA - COVID-19 Q&A, SEAQ - StackExchange FAQ, ATIS - Flight Reservations. # Classes - number of classes. H - entropy.

BalaGen's components training input: Because data balancing is beneficial for classification performance, we examine the effect of also balancing the input to the framework components - the generator and the labelers.

Evaluation metrics: To report our experimental results, we used the standard accuracy measure, which calculates the correct-prediction ratio (Eq. 1). Since we deal with imbalanced datasets, we also report the macro accuracy (Eq. 2), which measures the average correct-prediction ratio across classes (Manning et al., 2008). Formally,

$$
acc_{micro} = \sum_{i=1}^{n} \frac{t_{i}}{|D|} \tag{1}
$$

$$
acc_{macro} = \frac{1}{n} \sum_{i=1}^{n} \frac{t_{i}}{|c_{i}|} \tag{2}
$$

where $t_i$ is the number of correct predictions in class $c_i$ , $|D|$ is the number of samples, and $n$ is the number of classes.

Additionally, we report the entropy measure, similar to Shannon's diversity index (Shannon, 1951), to capture the degree of class imbalance in the dataset:

$$
H = - \sum_{i=1}^{n} \frac{|c_{i}|}{|D|} \cdot \log \frac{|c_{i}|}{|D|} \tag{3}
$$

Where applicable, we statistically validated our results with the McNemar test (McNemar, 1947).

# 5.1.1 Implementation

BalaGen is classifier-independent. In our implementation we use BERT, a state-of-the-art classifier for textual classification (Devlin et al., 2018), both as a classifier and for weak supervision.

We divided each dataset into $80\% : 10\% : 10\%$ for train, validation and test, respectively. The validation set was used for early stopping and for tuning parameters such as $\beta_{low}$ and $\beta_{high}$ . Each experiment was repeated at least 3 times to ensure consistency.
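Equations (1)-(3) can be computed directly from the predictions; a small self-contained sketch using the natural logarithm, which matches the reported entropy values (e.g. a maximal entropy of ln 57 ≈ 4.04 for CQA):

```python
import math
from collections import Counter

def evaluation_metrics(y_true, y_pred):
    """Micro accuracy (Eq. 1), macro accuracy (Eq. 2), and the
    class-distribution entropy of the ground-truth labels (Eq. 3)."""
    class_sizes = Counter(y_true)                        # |c_i| per class
    correct = Counter(t for t, p in zip(y_true, y_pred)  # t_i per class
                      if t == p)
    n_samples = len(y_true)                              # |D|
    acc_micro = sum(correct.values()) / n_samples
    acc_macro = sum(correct[c] / size
                    for c, size in class_sizes.items()) / len(class_sizes)
    entropy = -sum((s / n_samples) * math.log(s / n_samples)
                   for s in class_sizes.values())
    return acc_micro, acc_macro, entropy
```

On an imbalanced test set the two accuracies diverge: a classifier that ignores a small class can keep a high micro accuracy while its macro accuracy drops sharply, which is why both are reported.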
We restrict the number of samples generated by the generator to $3 \times |c_{n}|$ .

In our experiments, we balanced the training data for the generator and labelers using simple sample-copy over-sampling and random-selection under-sampling. Additional technical implementation details are given in the Appendix.

# 5.2 Results

In all experiments we compare classifier performance against the same held-out test set. Unless stated otherwise, we use GPT-2 as the generator and three BERT classifiers as the labeler ensemble. All model training was done on a balanced dataset.

# 5.2.1 Augmentation vs. Balancing

In the first experiment we compared data augmentation (via generation) to naïve data balancing. Specifically, we compared baseline results to: (1) balancing w/o augmentation; (2) augmentation w/o balancing; and (3) balancing-via-augmentation.

For the balancing experiments (no. 1 and 3), we used the simplest balancing scheme, depicted by Naive-OS balancing policy C ( $B_{low} = B_{high} = |c_n|$ , as defined in Section 3). Specifically, for balancing w/o augmentation (1) we used basic sample-copy over-sampling, and for balancing via augmentation (3) we applied BalaGen (using GPT-2 as the generator) to generate additional samples according to policy C. For augmentation w/o balancing (2) we applied BalaGen using Augment-only data policy B - adding a fixed number of generated samples to all classes.

Table 3 presents the micro and macro accuracy measures for the three datasets. While balancing and augmentation increase the accuracy for all three datasets, combining them yields significantly higher results than the baseline for CQA and SEAQ. For ATIS, the combination of augmentation and balancing using naive data balancing policy C was not significantly better than the baseline, and was even lower than simple sample-copy over-sampling balancing.
ATIS is a highly imbalanced dataset, which requires an enormous amount of generated data to fully balance it and adhere to balancing policy C. Hence, as shown in the next section, other data balancing policies achieve better accuracy results on this dataset. + +
| Dataset | Balancing | Augmentation: No (copy) | Augmentation: Yes (GPT-2) |
| --- | --- | --- | --- |
| CQA | No | (77.3, 71.9) | (78.6, 73.2) |
| CQA | Yes | (78.8, 73.9) | (80.9, 74.7) |
| SEAQ | No | (48.2, 46.2) | (46.5, 44.3) |
| SEAQ | Yes | (52.2, 50.5) | (55.5, 54.6) |
| ATIS | No | (97.4, 91.9) | (98.7, 92.7) |
| ATIS | Yes | (98.7, 95.6) | (98.5, 91.9) |
Table 3: Augmentation vs. balancing effect. The table compares baseline performance (no balancing, no augmentation) to: (1) balancing w/o augmentation; (2) augmentation w/o balancing; and (3) balancing-via-augmentation. Each tuple contains the micro and macro accuracy measures. Balancing was performed using Naive-OS balancing policy C. Augmentation alone was performed using Augment-only policy B.

# 5.2.2 Exploring Partial Over-Sampling Using Different Generative Models

Generated samples often differ in their quality from the original set of samples. Moreover, different generation algorithms differ in the quality of their generated samples (Kumar et al., 2020). This disparity presents a trade-off between the quantity of added samples and their quality. Partial-OS balancing policy D (as shown in Figure 2.D) makes it possible to address this trade-off by adding generated samples only up to a certain $B_{low}$ balancing level.

Figure 4 illustrates macro accuracy for the different text generation methods while setting the balancing threshold $B_{low}$ such that $\beta_{low} \in \{0, 10, 30, 50, 70, 80, 90, 95, 100\}\%$ (namely, the percentage of classes that are treated as under-represented).

![](images/3567146779fd0692fbe9f0fa0735a7f596df6e87de79aa2963c9fd27c52c6b47.jpg)
Figure 4: Macro accuracy for different text generation methods over varied $\beta_{low}$ values employing Partial-OS balancing policy D for the SEAQ dataset.

![](images/5350b2bbc6b9c337dbbe80aa6b45c54ace7afe90283a7736c8e506b733f88f41.jpg)

![](images/955233d820bb7bd94b4b2910214763a3ba865769661b339777bede6729dff4f9.jpg)
Figure 5: Data augmentation with B, C, D and E balancing policies, stating the number of augmented and under-sampled sentences for the CQA dataset. The figure shows that in practice some classes are not fully augmented although their number of samples is below $B_{low}$ . Additionally, advanced balancing techniques - i.e.
applying policy E - result in a more balanced distribution of the augmented dataset.

![](images/936946ac37f05d669c02e26a2e5645b1e73d6d3a25b45a22ca0220584f190438.jpg)

![](images/ec1588d2bf8aeb415bd9bfab2019e24d7ffab26f5b106a8a7ca87d8b7d56447c.jpg)

First, we observe that for all generation methods there is a drop in accuracy towards $\beta_{low} = 100\%$ . This establishes our first key finding: augmenting all classes up to $|c_n|$ is, in most cases, a sub-optimal policy, even for more advanced generation methods. Notably, the analyses of the CQA and ATIS datasets also support this claim (not shown).

Observing the general trend, we noticed that GPT-2 dominates all other generation methods for most configurations, followed by EDA, and then sample-copy. Markov Chain (MC), which was the preferred algorithm in (Akkaradamrongrat et al., 2019), showed worse performance than sample-copy (the baseline over-sampling approach) for most $B_{low}$ thresholds.

Another observation was that the $\beta_{low}$ threshold at which accuracy peaks correlates with the quality of the generation method. GPT-2, the most advanced generation method, reaches its highest accuracy when generating with $\beta_{low} = 80\%$ , followed by EDA at $70\%$ and sample-copy at $50\%$ .

# 5.2.3 Evaluation of Balancing Policies

In the following experiment, we compared baseline results to BalaGen's performance employing the Naive-OS, Partial-OS, and Partial-OS-US balancing policies, as depicted in Figure 2. Table 4 presents our findings. The $\beta_{low}$ and $\beta_{high}$ values were chosen by a hyper-parameter search on a validation set.
| Policy | CQA acc | CQA H | CQA $\Delta S$ | SEAQ acc | SEAQ H | SEAQ $\Delta S$ | ATIS acc | ATIS H | ATIS $\Delta S$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A. Baseline | (77.3, 71.9) | 3.7 | 0 | (48.2, 46.2) | 4.7 | 0 | (97.4, 91.9) | 1.1 | 0 |
| C. Naïve-OS | (80.9, 74.7) | 3.9 | 1150 | (55.5, 54.6) | 4.8 | 1440 | (98.2, 92.2) | 1.4 | 1662 |
| D. Partial-OS | (80.9, 75.5) | 4 | 670 | (61, 59.9) | 4.8 | 642 | (98.6, 96.6) | 1.8 | 1170 |
| E. Partial-OS-US | (82.1, 77.5) | 4 | 619 | (61, 59.9) | 4.8 | 642 | (98.7, 96.6) | 2.7 | -1704 |
Table 4: Balancing policy effect, showing micro accuracy, macro accuracy, entropy, and change in the number of samples. Abbreviations: acc - both $(acc_{micro}, acc_{macro})$ values. $H$ - entropy. $\Delta S = |D_{Balanced}| - |D_{Train}|$ .

Partial-OS balancing policy $(\beta_{low} < 100)$ appears to be superior for all datasets. Specifically, for CQA $\beta_{low} = 90$ , and for SEAQ and ATIS $\beta_{low} = 80$ . For the CQA and ATIS datasets, under-sampling the over-represented classes was shown to be beneficial with $\beta_{high} = 5$ . Notably, entropy increases and the number of added samples decreases in correlation with the accuracy.

The CQA and ATIS datasets are highly imbalanced (as shown in Table 2). Hence, removing samples from their over-represented classes was shown to further improve the accuracy. Figure 5 shows the number of samples added to (or removed from) each of the CQA classes in this experiment. Some classes were not augmented with enough samples even for Partial-OS policy D with $B_{low} < |c_n|$ . This strengthens the need to under-sample the over-represented classes down to $B_{high}$ to achieve an even more balanced dataset.

All in all, we see a significant increase in performance for all datasets when comparing the best balancing policy to the baseline (p-value < 0.1): CQA presents a relative increase of $(21.3\%, 19.8\%)$ in micro and macro accuracy, respectively (relative to the optimal values), when applying Partial-OS-US policy E. For the SEAQ dataset we saw an overall increase of $(24.8\%, 25.3\%)$ in micro and macro accuracy, respectively, when applying Partial-OS policy D. Lastly, the ATIS dataset classification results also improved, showing an increase of $(50\%, 57.9\%)$ in micro and macro accuracy while applying Partial-OS-US policy E. Interestingly, for the ATIS dataset, the number of samples under policy E is smaller than in the baseline while performance still improves.
The above significant increase in performance indicates our second key finding: balancing datasets using BalaGen yields significantly improved classification performance.

# 5.2.4 Balanced Input for Model Training

Having established that a balanced dataset is beneficial for classification performance, we examined the effect of balancing the input to the generator and labeler models. After applying the best balancing policy, as described in the previous section, our results showed that balancing all network components improved results by an average increase of $12.4\%$ in micro accuracy and an average increase of $24\%$ in macro accuracy (detailed results are given in the Appendix). Thus, our third key finding is that holistically balancing BalaGen, including all its components, yields the best performance.

# 5.2.5 Weak Supervision Mechanism Analysis

Finally, we evaluated different weak supervision mechanisms and found that the ensemble of labelers performs best, as shown in Table 5. This leads to our fourth key finding: a weak supervision mechanism aids class-label preservation.

# 5.2.6 BalaGen Improving Real-Life SUC Corpora

As a last experiment, and to further validate our findings, we applied BalaGen to 10 real-life SUC datasets. Table 6 shows the number of classes and samples per dataset, as well as the relative improvement for these datasets. BalaGen markedly improved macro accuracy, with a relative increase of $11\%$ (relative to the optimal). Micro accuracy increased by $3.8\%$ . Entropy increased by $5.6\%$ . As expected, the preferred balancing policy for all datasets is $\beta_{low} < 100$ . Additionally, half of the datasets reached their best performance with $\beta_{high} = 5$ (for the rest we did not use under-sampling). It is worth noting that for two datasets (2 and 9) the results show a trade-off between improving the macro accuracy at the expense of the micro one.
In the end, the decision about which metric to prefer in such cases depends on the gain from not missing out on the minority classes, which may cost a small drop in the majority classes (which may still end up with relative
high performance) that the system owner should weigh.

| Mechanism | CQA | SEAQ | ATIS |
| --- | --- | --- | --- |
| None | (78.8, 75.7) | (58.3, 57.5) | (98.5, 92.4) |
| Dbl. | (81.5, 75.4) | (59.1, 57.8) | (98.2, 95.1) |
| Ens. | (82.1, 77.5) | (61, 59.9) | (98.7, 96.6) |

Table 5: Weak supervision mechanism effect, showing $(acc_{micro}, acc_{macro})$ . Dbl. - double voting with a single labeler. Ens. - labeler ensemble.

Further, we evaluated the classifier performance on the generated sentences alone (following (Wang et al., 2019)), without the train set, and found that micro accuracy falls by $17.5\%$ and macro accuracy by $7.9\%$ . This metric represents how well the generated dataset represents the train set. This interesting finding should be researched further, together with the diversity of the entire corpus.

# 6 Discussion and Future Work

In this work we present BalaGen, a balancing-via-generation framework. We show that balancing textual datasets via generation is a promising technique. Furthermore, our analysis reveals that the optimal balancing policy depends on the quality of the generated samples, the weak supervision mechanism applied, and the training of BalaGen's internal components, i.e., the generator and labelers.

In BalaGen we assume that each sample contributes the same gain to its class accuracy. A possible enhancement of BalaGen could take into account not only the number of samples in each class, but also their quality. Alternatively, balancing policies could also consider class accuracy. Additional enhancements for BalaGen could include employing more advanced under-sampling techniques such as data cleaning (Branco et al., 2016), cluster-based under-sampling (Song et al., 2016), or other distribution-based techniques (Cui et al., 2019).

BalaGen can also be used to explore setting $\beta_{low} > 100$ . Additional enhancements may also include investigating more sophisticated weak-labeling ensemble mechanisms.

We focused our evaluation on the Semantic Utterance Classification (SUC) domain, which is characterized by highly imbalanced data. However, it is desirable to validate the applicability of our general balancing approach on other textual domains.
| # | Dataset (intents, samples) | %acc | %H | $\Delta S$ |
| --- | --- | --- | --- | --- |
| 1 | (29, 13768) | (1.3, 20) | 9.8 | 3133 |
| 2 | (32, 3538) | (-0.6, 16) | 2.6 | 1822 |
| 3 | (63, 2543) | (7.3, 11) | 9.1 | 1335 |
| 4 | (82, 2575) | (5.2, 9) | 8.2 | 192 |
| 5 | (87, 17024) | (10.1, 13) | 3.1 | 11689 |
| 6 | (112, 1821) | (4, 13) | 4.6 | 573 |
| 7 | (135, 2387) | (5.1, 11) | 3.6 | 236 |
| 8 | (157, 5954) | (2.7, 3) | 2.6 | 443 |
| 9 | (176, 4338) | (-3.5, 6) | 13.9 | -997 |
| 10 | (224, 3776) | (6.3, 9) | 3.7 | 453 |
| Avg. | (110, 5772) | (3.8, 11) | 5.6 | 2404 |
Table 6: BalaGen applied on 10 real-life SUC datasets, showing (intents, samples), the relative increase in (micro accuracy, macro accuracy), the relative increase in entropy, and the change in the number of samples. Abbreviations: %acc - $(acc_{micro}, acc_{macro})$ relative increase. %H - relative increase in entropy. $\Delta S = |D_{Balanced}| - |D_{Train}|$ .

# Acknowledgments

We thank Mitch Mason, Senior Offering Manager, IBM Watson Assistant, for his support and collaboration in creating the CQA dataset. Additionally, we thank Inbal Ronen and Ofer Lavi for their useful comments on the manuscript.

# References

Suphamongkol Akkaradamrongrat, Pornpimon Kachamas, and Sukree Sinthupinyo. 2019. Text generation for imbalanced text classification. In 2019 16th International Joint Conference on Computer Science and Software Engineering (JCSSE), pages 181-186. IEEE.
Ateret Anaby-Tavor, Boaz Carmeli, Esther Goldbraich, Amir Kantor, George Kour, Segev Shlomov, Naama Tepper, and Naama Zwerdling. 2019. Not enough data? deep learning to the rescue! arXiv preprint arXiv:1911.03118.
Gabriele Barbieri, François Pachet, Pierre Roy, and Mirko Degli Esposti. 2012. Markov constraints for generating lyrics with style. In *Ecai*, volume 242, pages 115-120.
Gustavo EAPA Batista, Ronaldo C Prati, and Maria Carolina Monard. 2004. A study of the behavior of several methods for balancing machine learning training data. ACM SIGKDD explorations newsletter, 6(1):20-29.
Dan Bohus and Alexander I Rudnicky. 2009. The ravenclaw dialog management framework: Architecture and systems. Computer Speech & Language, 23(3):332-361.
Paula Branco, Luis Torgo, and Rita Ribeiro. 2016. A survey of predictive modeling under imbalanced distributions. ACM Comput. Surv, 49(2):1-31.
Mateusz Buda, Atsuto Maki, and Maciej A Mazurowski. 2018. A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks, 106:249-259.
+Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. 2002. Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research, 16:321-357. +Nitesh V Chawla, Nathalie Japkowicz, and Aleksander Kotcz. 2004. Special issue on learning from imbalanced data sets. ACM SIGKDD explorations newsletter, 6(1):1-6. +Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie. 2019. Class-balanced loss based on effective number of samples. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9268-9277. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. +Andrew Estabrooks, Taeho Jo, and Nathalie Japkowicz. 2004. A multiple resampling method for learning from imbalanced data sets. Computational intelligence, 20(1):18-36. +Christiane Fellbaum. 2012. Wordnet. The encyclopedia of applied linguistics. +Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. Allennlp: A deep semantic natural language processing platform. +Xinjian Guo, Yilong Yin, Cailing Dong, Gongping Yang, and Guangtong Zhou. 2008. On the class imbalance problem. In 2008 Fourth international conference on natural computation, volume 4, pages 192-201. IEEE. +Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2017. A deep generative framework for paraphrase generation. arXiv preprint arXiv:1709.05074. +Sparsh Gupta and Vitor R Carvalho. 2019. Faq retrieval using attentive matching. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 929-932. + +Haibo He, Yang Bai, Edwardo A Garcia, and Shutao Li. 2008. Adasyn: Adaptive synthetic sampling approach for imbalanced learning. 
In 2008 IEEE international joint conference on neural networks (IEEE world congress on computational intelligence), pages 1322-1328. IEEE. +Matthew Henderson, Blaise Thomson, and Jason D Williams. 2014. The second dialog state tracking challenge. In Proceedings of the 15th annual meeting of the special interest group on discourse and dialogue (SIGDIAL), pages 263-272. +Nathalie Japkowicz. 2000. The class imbalance problem: Significance and strategies. In Proc. of the Int'l Conf. on Artificial Intelligence. CiteSeer. +Nathalie Japkowicz and Shaju Stephen. 2002. The class imbalance problem: A systematic study. Intelligent data analysis, 6(5):429-449. +Sosuke Kobayashi. 2018. Contextual augmentation: Data augmentation by words with paradigmatic relations. +Varun Kumar, Ashutosh Choudhary, and Eunah Cho. 2020. Data augmentation using pre-trained transformer models. arXiv preprint arXiv:2003.02245. +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. +Ying Liu, Han Tong Loh, and Aixin Sun. 2009. Imbalanced text classification: A term weighting approach. Expert systems with Applications, 36(1):690-701. +Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to Information Retrieval. Cambridge University Press, Cambridge, UK. +Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153-157. +Foster Provost. 2000. Machine learning from imbalanced data sets 101. In Proceedings of the AAAI'2000 workshop on imbalanced data sets, volume 68, pages 1-3. AAAI Press. +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9. 
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. + +D Ramyachitra and P Manikandan. 2014. Imbalanced dataset classification and solutions: a review. International Journal of Computing and Business Research (IJCBR), 5(4). +Alexander Ratner, Stephen H Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christopher Re. 2020. Snorkel: Rapid training data creation with weak supervision. The VLDB Journal, 29(2):709-730. +Georgios Rizos, Konstantin Hemker, and Björn Schuller. 2019. Augment to prevent: short-text data augmentation in deep learning for hate-speech classification. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 991-1000. +Mahendra Sahare and Hitesh Gupta. 2012. A review of multi-class classification for imbalanced data. International Journal of Advanced Computer Research, 2(3):160. +Wataru Sakata, Tomohide Shibata, Ribeka Tanaka, and Sadao Kurohashi. 2019. Faq retrieval using query-question similarity and bert-based query-answer relevance. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1113-1116. +Leandro Santos, Edilson Anselmo Correña Junior, Osvaldo Oliveira Jr, Diego Amancio, Leticia Mansur, and Sandra Aluisio. 2017. Enriching complex networks with word embeddings for detecting mild cognitive impairment from speech transcripts. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1284-1296, Vancouver, Canada. Association for Computational Linguistics. +Jetze Schuurmans and Flavius Frasincar. 2019. Intent classification for dialogue utterances. IEEE Intelligent Systems. +Burr Settles. 2009. Active learning literature survey. 
Technical report, University of Wisconsin-Madison Department of Computer Sciences.
Claude E Shannon. 1951. Prediction and entropy of printed English. Bell system technical journal, 30(1):50-64.
Jia Song, Xianglin Huang, Sijun Qin, and Qing Song. 2016. A bi-directional sampling based on k-means method for imbalance text classification. In 2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS), pages 1-5. IEEE.
Aixin Sun, Ee-Peng Lim, and Ying Liu. 2009. On strategies for imbalanced text classification using SVM: A comparative study. Decision Support Systems, 48(1):191-201.
Gokhan Tur and Li Deng. 2011. Intent determination and spoken utterance classification. Spoken language understanding: systems for extracting semantic information from speech. Wiley, Chichester, pages 93-118.
Gokhan Tur, Li Deng, Dilek Hakkani-Tür, and Xiaodong He. 2012. Towards deeper understanding: Deep convex networks for semantic utterance classification. In 2012 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 5045-5048. IEEE.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.
Shuo Wang and Xin Yao. 2009. Diversity analysis on imbalanced data sets by using ensemble models. In 2009 IEEE Symposium on Computational Intelligence and Data Mining, pages 324-331. IEEE.
Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. arXiv preprint arXiv:1702.03814.
Zixu Wang, Julia Ive, Sumithra Velupillai, and Lucia Specia. 2019. Is artificial data useful for biomedical natural language processing algorithms? In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 240-249.
Jason W Wei and Kai Zou. 2019.
Eda: Easy data augmentation techniques for boosting performance on text classification tasks. arXiv preprint arXiv:1901.11196.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.
Sebastien C Wong, Adam Gatt, Victor Stamatescu, and Mark D McDonnell. 2016. Understanding data augmentation for classification: when to warp? In 2016 international conference on digital image computing: techniques and applications (DICTA), pages 1-6. IEEE.
Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, and Songlin Hu. 2019. Conditional bert contextual augmentation. In International Conference on Computational Science, pages 84-95. Springer.
Zhao Yan, Nan Duan, Peng Chen, Ming Zhou, Jianshe Zhou, and Zhoujun Li. 2017. Building task-oriented dialogue systems for online shopping. In Thirty-First AAAI Conference on Artificial Intelligence.
Zhaohui Zheng, Xiaoyun Wu, and Rohini Srihari. 2004. Feature selection for text categorization on imbalanced data. ACM SIGKDD Explorations Newsletter, 6(1):80-89.
Bing Zhu, Bart Baesens, and Seppe KLM vanden Broucke. 2017. An empirical comparison of techniques for the class imbalance problem in churn prediction. Information sciences, 408:84-99.

# Appendix

In the following, we provide the parameters used to train the GPT-2 model (Table 9) and the BERT model (Table 8), as well as auxiliary experimental results (Table 7). In addition, we provide a snippet of the CQA dataset we introduced in this work in Table 1.

We used the transformers3 Python package (Wolf et al., 2019) for the GPT-2 (345M parameters) implementation, and Allen-NLP4 (Gardner et al., 2017) as a training framework that contains a BERT implementation. We used model perplexity and accuracy on the validation set as the training stopping criterion for GPT-2 and BERT, respectively.
Specifically, we used $BERT_{base}$ as the classifier in all our experiments. A Markov chain was implemented using the Markovify5 package.

We employed a single NVIDIA Tesla V100-SXM3 32GB GPU in all our experiments. The typical time for overall GPT-2 training was about 20 seconds per 1K samples. The generation time was 200 seconds per 1K samples, and the overall BERT training time was about 7 minutes per 1K samples (50 epochs with a patience of 20 epochs).
| Dataset | Balance generator | Balance labelers: No | Balance labelers: Yes |
| --- | --- | --- | --- |
| CQA | No | (80.3, 77.2) | (78.8, 74.5) |
| CQA | Yes | (80.9, 77.4) | (82.1, 77.5) |
| SEAQ | No | (56.1, 54.7) | (56.6, 54.7) |
| SEAQ | Yes | (54.2, 53.4) | (61.0, 59.9) |
| ATIS | No | (98.4, 91.5) | (98.4, 94.8) |
| ATIS | Yes | (98.5, 92.6) | (98.7, 96.6) |
Table 7: Balancing generator inputs vs. balancing labeler inputs. Each tuple contains (micro, macro) accuracy measures.
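Table 7 reports each result as a (micro, macro) accuracy pair. As a minimal reference for how the two averages differ on imbalanced data, the following sketch computes both (the toy labels are illustrative, not taken from the CQA/SEAQ/ATIS data):

```python
from collections import defaultdict

def micro_macro_accuracy(y_true, y_pred):
    """Micro accuracy: overall fraction of correct predictions.
    Macro accuracy: mean of per-class accuracies, so minority
    classes weigh as much as majority ones."""
    assert len(y_true) == len(y_pred)
    correct, total = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    micro = sum(correct.values()) / len(y_true)
    macro = sum(correct[c] / total[c] for c in total) / len(total)
    return micro, macro

# Imbalanced toy set: 8 examples of class "a", 2 of class "b";
# one "b" example is misclassified as "a".
micro, macro = micro_macro_accuracy(["a"] * 8 + ["b"] * 2,
                                    ["a"] * 8 + ["a", "b"])
# micro = 9/10 = 0.9; macro = (1.0 + 0.5) / 2 = 0.75
```

With balanced classes the two measures coincide; the gap between them in Table 7 grows with class imbalance.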
| Model parameter | Value |
| --- | --- |
| model_name | gpt2-medium |
| batch_size | 10 |
| val_every | 5 |
| example_length | 50 |
| generate_sample_length | 100 |
| learning_rate | 1e-4 |
| val_batch_count | 80 |
| patience | 5 |
| tf_only_train_transformer_layers | true |
| max_generation_attempts | 50 |
| optimizer | adam |
+ +Table 8: GPT-2 training and sampling parameters + +
| Model parameter | Value |
| --- | --- |
| model_name | bert-base-uncased |
| do_lowercase | true |
| word_splitter | bert-basic |
| top_layer_only | true |
| dropout_p | 0 |
| batch_size | 8 |
| num_epochs | 50 |
| patience | 20 |
| grad_clipping | 5 |
| optimizer | bert adam |
| learning_rate | 5e-5 |
| warmup | 0.1 |
+ +Table 9: Bert Training parameters (used in all experiments) \ No newline at end of file diff --git a/balancingviagenerationformulticlasstextclassificationimprovement/images.zip b/balancingviagenerationformulticlasstextclassificationimprovement/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..0d88302c35ce1a820017ce15cd71950d166fb8e4 --- /dev/null +++ b/balancingviagenerationformulticlasstextclassificationimprovement/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c8a0b4c8926325bd7010172c403e318d50f377a1804325e253fb6a622403e6dd +size 525833 diff --git a/balancingviagenerationformulticlasstextclassificationimprovement/layout.json b/balancingviagenerationformulticlasstextclassificationimprovement/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..104a63d77e504b0985c3be1f54c1991d89210956 --- /dev/null +++ b/balancingviagenerationformulticlasstextclassificationimprovement/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc8b869b386755784a8bcbf38e86fff54f7edb4da6e8433ffe3f4478bec58cde +size 471342 diff --git a/bedifferenttobebetterabenchmarktoleveragethecomplementarityoflanguageandvision/1e14e47a-eddc-4922-a1cf-23eb7a992fff_content_list.json b/bedifferenttobebetterabenchmarktoleveragethecomplementarityoflanguageandvision/1e14e47a-eddc-4922-a1cf-23eb7a992fff_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..71727cfce1eb2130ee440cb190eadfd61b970440 --- /dev/null +++ b/bedifferenttobebetterabenchmarktoleveragethecomplementarityoflanguageandvision/1e14e47a-eddc-4922-a1cf-23eb7a992fff_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:411a0792f64c6a5e052a055398194f2e888a5f0085f292b50305426ace36554e +size 118212 diff --git a/bedifferenttobebetterabenchmarktoleveragethecomplementarityoflanguageandvision/1e14e47a-eddc-4922-a1cf-23eb7a992fff_model.json 
b/bedifferenttobebetterabenchmarktoleveragethecomplementarityoflanguageandvision/1e14e47a-eddc-4922-a1cf-23eb7a992fff_model.json new file mode 100644 index 0000000000000000000000000000000000000000..5cd5ecfa05bb8a6e38be903b42ef1c9a90c21381 --- /dev/null +++ b/bedifferenttobebetterabenchmarktoleveragethecomplementarityoflanguageandvision/1e14e47a-eddc-4922-a1cf-23eb7a992fff_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a0e44a5a702a903c34d949b616784b4c09596b8dafadec54900522983a9b42bf +size 149107 diff --git a/bedifferenttobebetterabenchmarktoleveragethecomplementarityoflanguageandvision/1e14e47a-eddc-4922-a1cf-23eb7a992fff_origin.pdf b/bedifferenttobebetterabenchmarktoleveragethecomplementarityoflanguageandvision/1e14e47a-eddc-4922-a1cf-23eb7a992fff_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..74323f14d47ed5acb67be22cb56da975a5678be2 --- /dev/null +++ b/bedifferenttobebetterabenchmarktoleveragethecomplementarityoflanguageandvision/1e14e47a-eddc-4922-a1cf-23eb7a992fff_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d51b85f341a791b62f27c9bf79f2b9e86dc60cbd31802b795fd1fdc401ae7a73 +size 4044253 diff --git a/bedifferenttobebetterabenchmarktoleveragethecomplementarityoflanguageandvision/full.md b/bedifferenttobebetterabenchmarktoleveragethecomplementarityoflanguageandvision/full.md new file mode 100644 index 0000000000000000000000000000000000000000..6a98258957805ae805399cc7f1614c40ed3ba24d --- /dev/null +++ b/bedifferenttobebetterabenchmarktoleveragethecomplementarityoflanguageandvision/full.md @@ -0,0 +1,415 @@ +# Be Different to Be Better! 
A Benchmark to Leverage the Complementarity of Language and Vision

Sandro Pezzelle1, Claudio Greco2, Greta Gandolfi2, Eleonora Gualdoni2, Raffaella Bernardi2,3

$^{1}$ Institute for Logic, Language and Computation, University of Amsterdam

$^{2}$ CIMeC, $^{3}$ DISI, University of Trento

s.pezzelle@uva.nl,

{greta.gandolfi|eleonora.gualdoni}@studenti.unitn.it,

{claudio.greco|raffaella.bernardi}@unitn.it

# Abstract

This paper introduces BD2BB, a novel language and vision benchmark that requires multimodal models to combine complementary information from the two modalities. Recently, impressive progress has been made to develop universal multimodal encoders suitable for virtually any language and vision task. However, current approaches often require them to combine redundant information provided by language and vision. Inspired by real-life communicative contexts, we propose a novel task where either modality is necessary but not sufficient to make a correct prediction. To do so, we first build a dataset of images and corresponding sentences provided by human participants. Second, we evaluate state-of-the-art models and compare their performance against human speakers. We show that, while the task is relatively easy for humans, best-performing models struggle to achieve similar results.

# 1 Introduction

Human communication, in real-life situations, is multimodal (Kress, 2010): To convey and understand a message uttered in natural language, people build on what is present in the multimodal context surrounding them. As such, speakers do not need to "repeat" something that is already provided by the environment; similarly, listeners leverage information from various modalities, such as vision, to interpret the linguistic message.
Integrating information from multiple modalities is indeed crucial for attention and perception (Partan and Marler, 1999) since combined information from concurrent modalities can give rise to different messages (McGurk and MacDonald, 1976).

The argument that language and vision convey different, possibly complementary aspects of meaning has been largely made to motivate the need for multimodal semantic representations of words (Baroni, 2016; Beinborn et al., 2018). However, computational approaches to language and vision typically do not fully explore this complementarity. To illustrate, given an image (e.g., the one depicted in Figure 1), popular tasks involve describing it in natural language, e.g., "A tennis player about to hit the ball" (Image Captioning; see Bernardi et al., 2016); answering questions that are grounded in it, e.g., Q: "What sport is he playing?", A: "Tennis" (Visual Question Answering; see Antol et al., 2015); having a dialogue on its entities, e.g., Q: "Is the person holding a racket?", A: "Yes." (visually-grounded dialogue; see De Vries et al., 2017; Das et al., 2017). While all these tasks challenge models to perform visual grounding, i.e., an effective alignment of language and vision, none of them require a genuine combination of complementary information provided by the two modalities. All the information is fully available in the visual scene, and language is used to describe or retrieve it.

In this work, we propose a novel benchmark, Be Different to Be Better (in short, BD2BB), where the different, complementary information provided by the two modalities should push models to develop a better, richer multimodal representation. As illustrated in Figure 1, models are asked to choose, among a set of candidate actions, the one a person who sees the visual context depicted by the image would do based on a certain intention (i.e., their goal, attitude or feeling).
Crucially, the resulting multimodal input (the sum of the image and the intention) will be richer compared to that conveyed by either modality in isolation; in fact, the two modalities convey complementary or nonredundant information (Partan and Marler, 1999).

![](images/1c2ca3c6fefbbe58160b602e1f970fd2a6bd52b2da730ada38403f791bfa49ca.jpg)

![](images/805981825a4ead25c7b57af12bef77d43d462a8f36c03006a1fbe4822ea569b5c.jpg)

INTENTION: If I have tons of energy

CANDIDATE ACTIONS:

- I will play baseball with the men
- I will play a game of tennis with the man
- I will compare images of me hitting the tennis ball
- I will play baseball with the women
- I will applaud my favourite tennis player of all time

Figure 1: One real sample of our proposed task. Given an image depicting, e.g., a tennis player during a match and the intention "If I have tons of energy", the task involves choosing, from a list of 5 candidate actions, the target action that unequivocally applies to the combined multimodal input: "I will play a game of tennis with the man". The task is challenging: a model exploiting a language or vision bias could fall into the trap of decoy actions containing words highlighted in blue or orange, respectively. Therefore, selecting the target action requires models to perform a genuine integration of the two modalities, whose information is complementary. Best viewed in color.

To illustrate, a model that only relies on the (non-grounded) linguistic information conveyed by the intention, i.e., "If I have tons of energy", might consider as equally plausible any actions that have to do with playing a sport, e.g., "I will play baseball with the men" or "I will play a game of tennis with the man".
Similarly, a model that only relies on the visual information conveyed by the image (a tennis player during a match) might consider as equally plausible any actions that have to do with 'tennis' and/or 'player', e.g., "I will applaud my favourite tennis player of all time" or "I will play a game of tennis with the man". In contrast, a model that genuinely combines information conveyed by both modalities should be able to select the target action, namely the only one that is both consistent with the intention and grounded in the image, i.e., "I will play a game of tennis with the man". Moreover, similarly to real-life communicative scenarios, in our approach different language inputs modulate the same visual context differently, and this gives rise to various multimodal messages. To illustrate, if the image in Figure 1 is paired with the intention "If I am tired watching", the target action "I will play a game of tennis with the man" is no longer valid. Indeed, the target action in this context is "I will leave the tennis court" (see Figure 3).

Our work has the following key contributions:

- We introduce a novel multimodal benchmark: the set of $\sim 10\mathrm{K}$ (image, intention, action) datapoints collected via crowdsourcing and enriched with meta-annotation; the multiple-choice task, BD2BB, which requires proper integration of language and vision and is specifically aimed at testing SoA pretrained multimodal models. The benchmark, together with the code and trained models, is available at: https://sites.google.com/view/bd2bb
- We test various models (including the SoA multimodal, transformer-based LXMERT; Tan and Bansal, 2019) and show that, while BD2BB is a relatively easy task for humans ( $\sim$ $80\%$ acc.), best systems struggle to achieve a similar performance ( $\sim$ $60\%$ acc.).
- We extensively analyze the results and show the advantage of exploiting multimodal pretrained representations.
This confirms they are effective, but not enough to solve the task.

# 2 Related Work

Since the introduction of the earliest multimodal tasks, such as Image Captioning (IC; see Bernardi et al., 2016) and Visual Question Answering (VQA; Antol et al., 2015), a plethora of tasks dealing with language and vision have been proposed. In parallel, baseline models have been replaced by more powerful attention-based systems (Anderson et al., 2018) and, more recently, by transformer-based architectures pretrained on several tasks (Tan and Bansal, 2019; Lu et al., 2019; Chen et al., 2019). These latter models build on multimodal representations that are meant to be task-agnostic; as such, they can be transferred to virtually any other multimodal task with minimal fine-tuning. Our work contributes to these two lines of research by (1) introducing a novel multimodal task and (2) evaluating a SoA multimodal encoder on it.

Multimodal tasks VQA was originally proposed to overcome the challenge of quantitatively evaluating IC models. The task (and its evaluation)
At the same time, some work proposed higher-level evaluations of VQA models and showed their limitations (Hodosh and Hockenmaier, 2016; Shekhar et al., 2017); similarly, recent attention has been paid to understanding what makes a question "difficult" for a model (Bhattacharya et al., 2019; Terao et al., 2020). Despite impressive progress, current approaches to VQA do not tackle one crucial limitation of the task: the answer to a question is given by the alignment of language and vision rather than their complementary integration.

Moving from objects to actions, several tasks have been proposed to mimic more realistic settings where a higher degree of integration between modalities is required. One is visual storytelling (Huang et al., 2016; Gonzalez-Rico and Pineda, 2018; Lukin et al., 2018), where models have to understand the action depicted in each photo and their relations to generate a story. Similar abilities are required in the task of generating non-grounded, human-like questions about an image (Mostafazadeh et al., 2016; Jain et al., 2017), and in that of asking discriminative questions over pairs of similar scenes (Li et al., 2017). Related tasks are also those of predicting motivations of visually-grounded actions (Vondrick et al., 2016) or generating explanations for a given answer (Park et al., 2018; Hendricks et al., 2018).

An even higher level of understanding of vision and language is required in the tasks of filling the blank with the correct answer (Yu et al., 2015); answering questions from videos and subtitles (Lei et al., 2018); having a dialogue on objects (De Vries et al., 2017; Das et al., 2017) or events (Mostafazadeh et al., 2017); answering and justifying commonsense questions (Zellers et al., 2019). However, all these tasks require making commonsense inferences over the two modalities rather than integrating their complementary information to answer a grounded question.

More akin to ours are the approaches by Iyyer et al.
(2017), which aims to predict the subsequent scene and dialogue in a comic strip, and Kruk et al. (2019), where the goal is to compute the communicative intent of a social media post. Though they both require a challenging integration of language and vision, these tasks (as well as the type of data they use) are crucially different from BD2BB, where the task is to predict the action that is consequent to a given intention based on the image.

Transformer-based multimodal models Developing universal multimodal encoders whose pretrained representations are suitable for virtually any multimodal task is a crucial challenge. Inspired by the success of BERT, a pretrained transformer-based language encoder (Devlin et al., 2019), similar architectures have been recently proposed in the domain of language and vision (Lu et al., 2019; Tan and Bansal, 2019; Chen et al., 2019; Su et al., 2020; and Nan Duan et al., 2020). While these architectures achieve state-of-the-art performance in many tasks, their novelty and complexity leave several questions open, and further work is needed to better understand, e.g., which layers are more suitable for transferability (Tamkin et al., 2020), or what is the relation between pretraining and downstream tasks (Zamir et al., 2018; Singh et al., 2020). Moreover, to prove they are readily applicable to novel multimodal benchmarks, pretrained universal encoders should ideally be effective with only minimal fine-tuning on the target tasks.

In this light, we believe that more effort should be put into developing datasets that are challenging and yet relatively small, in line with the 'diagnostic' datasets proposed for VQA (Johnson et al., 2017) and the easy vs. hard subsets introduced by Akula et al. (2020) for visual referring expression recognition. Our contribution follows this line of thought.
# 3 Data

In this section, we describe how we collected intentions and actions through crowdsourcing, and the subsequent phase of data meta-annotation. Consistent with our purposes, we needed images that elicit goals and feelings (the intentions) in the annotators, as well as consequent actions. To this end, we used the partition of the MS-COCO dataset (Lin et al., 2014) provided by Vondrick et al. (2016),1 where each of the 10,191 images depicts at least one person. This choice was aimed at making the participants' task more natural: indeed, the presence of people in the image allows more possibilities of interaction, and therefore guarantees that some actions can be performed in that situation.

![](images/0ed04cddbcb52a7e9e1e1ac703e8d45bbdbba77252a530a98d0186dd21a92282.jpg)

![](images/410e4f157bad15242b69069c5fff6e60e14565036a1e3ed21949adc7e2fdb457.jpg)

Figure 2: Data collection. Examples of good (top) and bad (bottom) annotations provided to participants in the task instructions. Errors and corresponding warnings are shown to help participants familiarize themselves with the tool.

# 3.1 Data Collection

We set up an annotation tool on Figure-Eight $^2$ (see Figure 2) where annotators were shown an image and asked to imagine themselves being in that situation, as ideal observers not represented in the picture. We instructed them to carefully look at the image and think about 1) an intention, i.e., how they might feel/behave if they were in that situation; 2) an action, i.e., what they would do based on that feeling/behavior. Intentions and actions were typed in free form by participants in two separate text boxes; by instructions, their sentences had to complete the provided opening words If I... and I will..., respectively. To ensure that intentions conveyed information that was complementary (nonredundant) to that by the image, participants were instructed not to mention any of the entities (people, objects, etc.) shown in the image.
In contrast, to ensure that actions contained information that was grounded in the image, participants were asked to mention at least one visible entity when writing their action (see errors and warnings in Figure 2).

We randomly selected $\sim 3.6\mathrm{K}$ images from the split by Vondrick et al. (2016) and, for each of them, we collected on average 5 (intention, action) tuples by 5 participants. In total, $\sim 18\mathrm{K}$ unique $\langle$ image, intention, action $\rangle$ datapoints were collected. Participants were recruited from native-English countries only. Overall, 477 annotators (based on the IP) took part in the data collection; on average, each of them provided 38 annotations. Participants were paid \$0.04 per tuple.$^4$ In total, the data collection cost $\sim \$900$.

A few filtering steps were needed to get rid of datapoints with invalid annotations. First, we discarded those datapoints where intentions and/or actions were either not in English (e.g., bot-generated *Lorem Ipsum* sequences) or nonsense strings (e.g., random sequences of characters). This step was done semi-manually and filtered out $\sim 3\mathrm{K}$ datapoints. Second, we removed datapoints where the action did not contain any noun or pronoun. After this, we were left with 12,457 valid datapoints.

![](images/c1e9a3ad5808051871754f6e2d10418ac3f8bf207cac147e4fdca6d4088d217e.jpg)

Figure 3: Five $\langle$ intention, action $\rangle$ tuples provided by 5 unique participants for the image in Figure 1.

To illustrate the type of data collected, Figure 3 reports the 5 (intention, action) tuples provided by 5 annotators for the image in Figure 1. As can be noted, the same visual context elicits different intentions, which in turn give rise to different possible actions. Crucially, no intentions refer to anything that is visible in the image, which makes them suitable for virtually any visual context.
As for the actions, in contrast, they all 1) mention at least one entity that is grounded in the given scene, e.g., "player" or "tennis court", which makes them plausible only for sports contexts, particularly 'tennis'; 2) match their corresponding intention, but not (or to a much lesser extent) the others; i.e., different intentions trigger different actions, and the verb in the action is a proxy for such diversity. Below, we describe the meta-annotation process we performed to categorize each datapoint with respect to: 1) the topic of its action, e.g., 'tennis'; and 2) the argument structure of the verbs in its action.

# 3.2 Meta-Annotation

Topic For each of the 12,457 datapoints, we built a 512-d semantic representation of its action using the off-the-shelf Universal Sentence Encoder (USE; Cer et al., 2018). We then ran a $k$-means clustering algorithm over these vectors and obtained 60 topic clusters. By manual inspection, 54 clusters were found to consistently group together actions revolving around the same topic, e.g., 'tennis' or 'birthday', such that it was easy to label them using those terms. Since for the remaining 6 clusters this was not straightforward due to the presence of rather disconnected actions, we filtered these clusters out. We further polished the 54 clusters (a) by manually moving actions to clusters that fit them better, and (b) by removing actions that were not in line with the cluster topic. Moreover, we removed actions that did not comply with the instructions provided to annotators during the data collection. After these steps, we were left with 10,287 $\langle$ image, intention, action $\rangle$ datapoints.

Argument structure Using the Stanford NLP Parser (Chen and Manning, 2014), we annotated the actions in each of the 10,287 topic-categorized datapoints by means of a 4-code annotation schema. In particular, from each parsed action we extracted its main verb (code1) and its direct or indirect object (code2).
Moreover, when present, the verb of the coordinate or subordinated sentence was also extracted (code3), as well as other nouns in any complement position of the main or secondary verb (code4). All the outputs by the parser were manually checked and fixed where needed. Given the action "I will swing the ratchet to hit the ball", for example, we thus obtained the following argument structure annotation: $\langle \text{swing} \rangle$ (code1), $\langle \text{ratchet} \rangle$ (code2), $\langle \text{hit} \rangle$ (code3), $\langle \text{ball} \rangle$ (code4). As can be seen, this simplified representation of the action provides information on both its verbs (that are consequent to the intention) and nouns (grounded in the image). The 10,287 annotated datapoints were used to build the dataset for our task.

# 4 Task

We introduce the Be Different to Be Better (BD2BB) task, where the different, i.e., complementary information provided by the two modalities should push models to develop a better, i.e., richer multimodal representation. To evaluate these abilities, we frame our task as a multiple-choice problem (similar to Antol et al., 2015; Yu et al., 2015; Zhu et al., 2016) where either modality is necessary but not sufficient to perform a correct prediction. The task is the following (see Figure 1): given an image and a corresponding intention, the model has to choose the correct action over a set of 5 candidate actions. We refer to the correct action as the target action, and to the wrong actions as the decoy actions. Similarly to Chao et al. (2018), decoy actions are carefully selected to be as plausible as possible when evaluated against either the intention (2 decoys) or the image (the other 2) only. Below, we explain how language-based and image-based decoys were selected based on the meta-annotation.
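The 4-code extraction can be sketched as follows. This is an illustrative sketch only: it operates on a hand-written, simplified dependency parse (triples of token, dependency label, head index) rather than on actual Stanford parser output, and the example sentence and label names are assumptions, not taken from the dataset.

```python
# Each token: (text, dependency label, index of its head token; -1 = root).
# Hypothetical, simplified parse of "I will swing the racket to hit the ball".
parse = [
    ("I", "nsubj", 2), ("will", "aux", 2), ("swing", "root", -1),
    ("the", "det", 4), ("racket", "dobj", 2), ("to", "mark", 6),
    ("hit", "advcl", 2), ("the", "det", 8), ("ball", "dobj", 6),
]

def argument_structure(parse):
    """Extract the 4-code schema from a pre-parsed action."""
    codes = {"code1": None, "code2": None, "code3": None, "code4": []}
    root = next(i for i, (_, dep, _) in enumerate(parse) if dep == "root")
    codes["code1"] = parse[root][0]  # main verb
    for text, dep, head in parse:
        if dep in ("dobj", "iobj") and head == root and codes["code2"] is None:
            codes["code2"] = text    # direct or indirect object of the main verb
        elif dep in ("advcl", "xcomp", "conj") and head == root and codes["code3"] is None:
            codes["code3"] = text    # verb of a coordinate/subordinate clause
        elif dep in ("dobj", "iobj", "pobj") and head != root:
            codes["code4"].append(text)  # other complement nouns
    return codes

print(argument_structure(parse))
# → {'code1': 'swing', 'code2': 'racket', 'code3': 'hit', 'code4': ['ball']}
```

The verb codes (code1, code3) are the intention-sensitive part of the action, while the noun codes (code2, code4) are the visually grounded part, which is what the decoy-selection criteria below rely on.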
+
+Language-based decoys For each of the 10,287 $\langle$ image, intention, action $\rangle$ datapoints, we randomly selected a number of datapoints from the entire data that met the following criteria: 1) their action belonged to a different topic cluster than the one including the target action; 2) their action did not share any noun with the target action, i.e., their $\langle$ code2 $\rangle$ and $\langle$ code4 $\rangle$ were different. We then computed a similarity score between the target action and each of these selected actions by means of the cosine of their USE representations. We ranked these scores and selected as our language decoys the two with the highest similarity. This way, we obtained language-based decoys that are semantically very similar to the target action, but are on a different topic and do not share any noun with it.
+
+Vision-based decoys For each datapoint, we randomly selected a number of datapoints from the entire data that met the following criteria: 1) their action belonged to the same topic cluster as the target one; 2) their action did not share any verb with the target action, i.e., their $\langle code1\rangle$ and $\langle code3\rangle$ were different. We then ranked these actions with respect to their USE similarity with the target one, and selected as our vision-based decoys the two with the lowest score. This way, we obtained vision-based decoys that are about the same topic as the target action; at the same time, they do not share any verbs with it and are semantically different.
+
+![](images/e1e94de4e555cd36410030bbf5f0d8c3bdcdc1e7e034d7fe88d69e857b5d56ca.jpg)
+![](images/a11db576537f569b63cf2bda1fb348ee4b8f324ea76dec7700a4374b1e0824ae.jpg)
+![](images/d8edc4ff4a592e027728b8d4d98939b683a65da186f23a28f5f972f98f85b288.jpg)
+![](images/c58e0b4480bece0245f9891c1f716be6d630e5dee84c10badacdeb4eb718bd45.jpg)
+Figure 4: Four samples from our dataset. I: Intention; T: Target action; L/V: Language-/Vision-based decoys.
+
+# 4.1 Dataset
+
+Our final dataset includes 10,265 samples like the ones depicted in Figure 4: each sample consists of a unique datapoint paired with 4 carefully-selected decoy actions. Consistently with our purpose of making BD2BB a challenging benchmark for pretrained multimodal architectures (see Section 1), we split the dataset into "unusual" train/val/test partitions; i.e., we selected $20\%$ of the samples for training and the remaining ones for validation $(40\%)$ and test $(40\%)$. We propose that having small training data and larger validation and test sets should become a standard, as pretrained models already build on a massive amount of data.
+
+Table 1 reports the descriptive statistics of the dataset, including the number of unique images, intentions and actions per split, and the average length of the sentences. All the experiments reported in the paper are performed on these splits.
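The similarity-ranking step shared by both decoy types can be sketched as follows. This is a minimal illustration with toy 3-d vectors standing in for 512-d USE sentence embeddings, and it assumes the candidate pool has already been filtered by the topic/noun/verb criteria described above; the action strings and values are made up:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two sentence embeddings.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def pick_decoys(target_vec, candidate_pool, n=2, most_similar=True):
    """Rank candidate actions by cosine similarity with the target action
    and keep the n most similar (language-based decoys) or the n least
    similar (vision-based decoys). `candidate_pool` maps action text to
    its embedding and is assumed to be pre-filtered."""
    ranked = sorted(candidate_pool,
                    key=lambda text: cosine(target_vec, candidate_pool[text]),
                    reverse=most_similar)
    return ranked[:n]

# Toy vectors standing in for USE embeddings (hypothetical values).
target = np.array([1.0, 0.0, 0.0])
pool = {
    "wear a helmet while riding my bike": np.array([0.9, 0.1, 0.0]),
    "put on knee pads before skating":    np.array([0.5, 0.5, 0.0]),
    "eat my food on the patio":           np.array([0.0, 1.0, 0.0]),
    "ride a horse in the rodeo":          np.array([0.1, 0.2, 1.0]),
}
language_decoys = pick_decoys(target, pool, most_similar=True)   # two highest scores
vision_decoys = pick_decoys(target, pool, most_similar=False)    # two lowest scores
```

Keeping the highest-similarity items for language decoys and the lowest-similarity items for vision decoys mirrors the two procedures above.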
+
+# 5 Experiments
+
+To test the importance of combining information from the two modalities and the independent contribution of either modality, we experiment with 3 settings of the BD2BB task: $L$, where the target action among the 5 candidates has to be guessed based on the intention only; $V$, where only the image is provided; $LV$, where both the image and the intention are provided. For each setting of the task, we evaluate the performance of (1) a simple baseline trained from scratch on the task; (2) a state-of-the-art transformer-based pretrained model fine-tuned on the task; (3) the same transformer-based model trained from scratch on the task. Moreover, results by models are compared to (4) human performance.
+
+# 5.1 Models
+
+Baseline For each $\langle$ image, intention, action $\rangle$ datapoint in the sample, baseline$_{LV}$ builds a multimodal representation by concatenating the 2048-d visual features of the image (extracted from a pretrained ResNet-101; He et al., 2016) with the 300-d embedding of the intention and the 300-d embedding of the action. Embeddings for both the intention and the action are obtained by summing the GloVe embeddings (Pennington et al., 2014) of the words in them. The concatenated features are linearly projected into a vector (8192-d), passed through ReLU, and linearly projected into a single value. Softmax probabilities are computed over the sample's 5 candidate values. The baseline$_{L}$ only concatenates the intention and action embeddings (600-d representation); baseline$_{V}$ concatenates the visual features with the action embedding (2348-d). Finally, to account for any bias due to unavoidable association and repetition patterns among the actions, we test a version of the baseline which only encodes the actions. We refer to it as actions-only.
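The forward pass of baseline$_{LV}$ described above can be sketched in plain NumPy. The weights are random and untrained, so this only illustrates the architecture; we also use a 128-d hidden layer instead of the paper's 8192-d to keep the example light:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def baseline_lv(img_feat, intent_emb, action_embs, W1, b1, w2, b2):
    """For each candidate action: concatenate image (2048-d), intention
    (300-d) and action (300-d) features, project to the hidden size,
    apply ReLU, project to a scalar score; softmax over the 5 scores."""
    scores = []
    for a in action_embs:
        x = np.concatenate([img_feat, intent_emb, a])  # 2048 + 300 + 300 = 2648-d
        h = np.maximum(0.0, W1 @ x + b1)               # hidden layer with ReLU
        scores.append(w2 @ h + b2)                     # scalar score per candidate
    return softmax(np.array(scores))

# Random stand-ins for ResNet-101 features and summed GloVe embeddings.
HIDDEN = 128  # the paper uses 8192; reduced here for illustration
img = rng.standard_normal(2048)
intent = rng.standard_normal(300)
actions = [rng.standard_normal(300) for _ in range(5)]
W1 = rng.standard_normal((HIDDEN, 2648)) * 0.01
b1 = np.zeros(HIDDEN)
w2 = rng.standard_normal(HIDDEN) * 0.01
b2 = 0.0
probs = baseline_lv(img, intent, actions, W1, b1, w2, b2)
```

The $L$ and $V$ variants follow the same pattern with smaller concatenated inputs (600-d and 2348-d, respectively).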
+
+RoBERTa In setting $L$, we employ the robustly optimized version of BERT, RoBERTa (Liu et al., 2019); this model is a universal language encoder pretrained on masked language modeling, which achieves the best performance on the challenging multiple-choice
| split | #samples (%) | #img | #int | #act | #t-act | #d-act | avg int len | avg act len |
|-------|--------------|------|------|------|--------|--------|-------------|-------------|
| train | 2102 (20%) | 1517 | 1683 | 5063 | 2102 | 4228 | 22.15 | 35.34 |
| val | 4082 (40%) | 2447 | 2772 | 6082 | 3567 | 4133 | 20.76 | 36.20 |
| test | 4081 (40%) | 2425 | 2720 | 6108 | 3561 | 4138 | 20.49 | 36.00 |
| total | 10265 (100%) | 3215 | 6192 | 8751 | 8738 | 6339 | 20.94 | 35.94 |
+
+Table 1: Descriptive statistics of the dataset including, from left to right: 1) # (and %) of unique samples; 2) # of unique images; 3) # of unique intentions; 4) # of unique actions; 5) # of unique target actions; 6) # of unique decoy actions; 7) average number of tokens in intentions; 8) average number of tokens in actions.
+
+SWAG task (Zellers et al., 2018). We adapt RoBERTa$_{BASE}$ to our task as follows: for each of the 5 datapoints in the sample, RoBERTa encodes the input as a sequence composed by $\langle CLS\rangle$, the intention, $\langle SEP\rangle$, the action, and $\langle SEP\rangle$. The encoding corresponding to the $\langle CLS\rangle$ token (768-d) is passed through Tanh, linearly projected into a vector (768-d), passed to Dropout (Srivastava et al., 2014), and linearly projected into a single value. Softmax probabilities are computed over the sample's 5 candidate values. As mentioned above, we evaluate two model versions: RoBERTa$_{L}$, pretrained and fine-tuned on our task, and RoBERTa$^{s}_{L}$, trained from scratch on BD2BB.
+
+LXMERT In settings $LV$ and $V$, we employ LXMERT (Learning Cross-Modality Encoder Representations from Transformers; Tan and Bansal, 2019), a universal multimodal encoder pretrained on five language and vision tasks, which is state-of-the-art on VQA2.0 (Goyal et al., 2017). This model represents an image by the set of position-aware object embeddings for the 36 most salient regions detected by Faster R-CNN (Ren et al., 2015) and processes the textual input with position-aware, randomly initialized word embeddings. Like RoBERTa, LXMERT uses the special tokens $\langle CLS\rangle$ and $\langle SEP\rangle$ but, differently from RoBERTa, here $\langle SEP\rangle$ is used both to separate sequences and to denote the end of the textual input. Hence, we take this into account when adapting LXMERT to our task. Similarly to RoBERTa, we use the encoding corresponding to $\langle CLS\rangle$ (768-d) to obtain a probability distribution over the sample's 5 candidate values.
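The multiple-choice adaptation shared by RoBERTa and LXMERT can be sketched as follows. The pretrained encoder is replaced here by a deterministic stub and the weights are random, so this only illustrates the SWAG-style scoring pattern (score each $\langle$ intention, action $\rangle$ pair independently, then softmax over the 5 scores), not the actual models; Dropout is omitted in this inference-time sketch:

```python
import numpy as np

HID = 768  # hidden size of the RoBERTa-base / LXMERT encodings

def stub_cls_encoding(intention, action):
    # Stand-in for the pretrained encoder's <CLS> vector: a deterministic
    # pseudo-encoding seeded by the input lengths (illustration only).
    seed = (len(intention) * 31 + len(action)) % (2**32)
    return np.random.default_rng(seed).standard_normal(HID)

rng = np.random.default_rng(2)
W_pool = rng.standard_normal((HID, HID)) * 0.02  # Tanh pooling layer
w_out = rng.standard_normal(HID) * 0.02          # scalar scoring head

def score_candidates(intention, candidate_actions):
    """Each <intention, action> pair is encoded independently; the <CLS>
    encoding goes through Tanh and a linear head to a scalar; the softmax
    is taken over the 5 candidate scores."""
    scores = np.array([w_out @ np.tanh(W_pool @ stub_cls_encoding(intention, a))
                       for a in candidate_actions])
    e = np.exp(scores - scores.max())
    return e / e.sum()

probs = score_candidates("If I want to enjoy the sun, I will...",
                         ["act one", "act two", "act three", "act four", "act five"])
```

Training then amounts to minimizing cross-entropy between this distribution and the index of the target action.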
For each task setting, we evaluate each model in two versions, i.e., the pretrained model fine-tuned on our task ($LXMERT_{LV}$ and $LXMERT_{V}$) and the model trained from scratch ($LXMERT^{s}_{LV}$ and $LXMERT^{s}_{V}$).
+
+Experimental setup For baseline models, we perform hyperparameter search on learning rate, Dropout, and hidden size; as for transformer-based models, we use the best configurations reported in the source papers (reproducibility details in Appendix B). All models are trained with 3 random seeds for 50 epochs with Adam (Kingma and Ba, 2015), minimizing a cross-entropy loss between the probability distribution over the sample's 5 candidate actions and the ground-truth action. For each of the 3 runs, we consider the model with the highest validation accuracy. Average accuracy and standard deviation over the 3 runs are computed.
+
+# 5.2 Human Evaluation
+
+We randomly extracted 300 unique samples from the dataset and split them into 3 partitions of 100 samples each. For each partition, we collected judgments by 3 participants in each setting of the task: $L$, $V$, and $LV$. Crucially, participants did the task only once per partition; i.e., they judged each sample only in one of the 3 task settings. Using Quiz Maker, we collected 2,700 unique responses from 11 subjects who participated on a voluntary basis. For each setting of the task, we counted as 'correctly predicted' the samples where at least 2 out of 3 annotators converged on the target action. Moreover, for each task setting we computed the 'best' accuracy, i.e., the average of the 3 participants who achieved the highest accuracy in each split.
+
+# 6 Results
+
+Results by both models and humans are reported in Table 2. Several key observations can be made.
+
+Multimodal integration is the key. The overall best-performing model in BD2BB is LXMERT$_{LV}$ (62.2%), which outperforms the other pretrained models, i.e., RoBERTa$_{L}$ (56.2%) and LXMERT$_{V}$ (59.2%).
On the one hand, this shows that having access to both modalities is beneficial to perform the
| group | model | val acc ± std | test acc ± std |
|-------|-------|---------------|----------------|
| SCRATCH | actions-only | 44.0 ± 0.4 | 44.6 ± 0.8 |
| | baseline$_L$ | 45.3 ± 0.9 | 45.9 ± 0.9 |
| | baseline$_V$ | 45.8 ± 0.8 | 46.1 ± 0.8 |
| | baseline$_{LV}$ | 48.6 ± 0.9 | 49.0 ± 0.9 |
| SCRATCH | RoBERTa$_L$ | 47.0 ± 0.2 | 47.2 ± 0.1 |
| | LXMERT$^s_V$ | 30.9 ± 0.9 | 31.8 ± 0.4 |
| | LXMERT$^s_{LV}$ | 50.4 ± 0.3 | 51.3 ± 0.4 |
| PRETRAIN | RoBERTa$_L$ | 55.9 ± 0.9 | 56.2 ± 1.3 |
| | LXMERT$_V$ | 59.1 ± 0.2 | 59.2 ± 0.6 |
| | LXMERT$_{LV}$ | 62.8 ± 2.3 | 62.2 ± 2.2 |
| | humans$_L$ | 50.0 (best 54.0) | |
| | humans$_V$ | 72.3 (best 73.7) | |
| | humans$_{LV}$ | 79.0 (best 82.3) | |
| | chance | 20.0 | 20.0 |
+
+Table 2: Results for the 3 settings: $L$, $V$, and $LV$. $^s$ refers to transformer-based models trained from scratch. For each model, we report average accuracy and std over 3 runs. Human accuracy is computed over 300 samples (we report values based on both majority vote, i.e., 2 out of 3, and the average of the best participants; see 5.2).
+
+task. This is in line with the results by human participants, who achieve the highest accuracy in the multimodal setting (79% vs. 50% in $L$ and 72.3% in $V$). On the other hand, the finding that LXMERT$_V$ surpasses RoBERTa$_L$ (+3%) confirms that the image provides more information than the intention. This, again, is consistent with human results, where the gap between $V$ and $LV$ (-7%) is much smaller than that between $L$ and $LV$ (-29%). For humans, this visual advantage is likely due to (MS-COCO) images depicting complex events that elicit a broad range of aspects related to people's experience of the world. As for the models, it confirms that LXMERT, thanks to its massive pretraining, is effective in extracting fine-grained information from images.
+
+Models are far from humans. Humans achieve around $80\%$ accuracy ('best' $82\%$) on the multimodal version of the task. This is a high result, in line with previous work with a similar setup (consider, e.g., SWAG, where 'expert' human accuracy is around $85\%$ with 4 choices, i.e., chance level at $25\%$; Zellers et al., 2018). At the same time, the non-perfect human accuracy reveals that the benchmark is challenging due to the careful selection of plausible decoys. Compared to humans, the best-performing $\mathrm{LXMERT}_{LV}$ achieves much lower results $(-17\%)$, which indicates that BD2BB is challenging and far from being solved.
Since the gap between the best-performing models and human participants in the unimodal settings is smaller $(-13\%$ in $V$ and $-6\%$ in $L$), the biggest computational challenge lies in the integration of complementary information from different modalities.
+
+Pretrained is better. Pretrained models neatly outperform the baseline in all the versions of the task and, more interestingly, also all their counterparts trained from scratch. As can be seen in Table 2, indeed, transformer-based models trained from scratch achieve results that are only slightly better than those by the baseline in both $LV$ and $L$; as for $V$, $LXMERT^{s}_{V}$ turns out to perform worse than baseline$_{V}$ (and even worse than the actions-only baseline). This clearly shows that these architectures are very effective when building on their pretraining, but suffer when challenged to learn a task from scratch with relatively few samples.
+
+# 7 Analysis
+
+Best models' errors We perform an analysis of the errors made by the 3 pretrained models to check whether they fall more often into the language-based or vision-based decoys. To do so, we focus on each model's best run, and compute the proportion of wrong predictions in the test set that belong to one or the other decoy type. For comparison, a model that makes modality-balanced wrong predictions should fall into language-/vision-based decoys $50\%$ of the time. Quite surprisingly, $\mathrm{RoBERTa}_L$ has only a moderate bias toward language-based decoys: in fact, only $60.2\%$ of its errors are of this type. As for $\mathrm{LXMERT}_V$, no bias at all is observed toward the vision-based decoys $(48.6\%)$. Finally, the best-performing $\mathrm{LXMERT}_{LV}$ is shown to be halfway between these models, with only a slight preference for language-based $(55.1\%)$ over vision-based decoys $(44.9\%)$.
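The error-profile computation described above can be sketched as follows; the prediction data and the `decoy_error_profile` helper are illustrative, not the authors' code:

```python
from collections import Counter

def decoy_error_profile(predictions):
    """Given (predicted_index, gold_index, decoy_type) triples, where
    decoy_type maps each candidate index to 'target', 'language' or
    'vision', return the share of wrong predictions per decoy type."""
    errors = Counter()
    for pred, gold, decoy_type in predictions:
        if pred != gold:
            errors[decoy_type[pred]] += 1
    total = sum(errors.values())
    return {t: n / total for t, n in errors.items()}

# Hypothetical run: 1 correct answer, 2 language-decoy and 1 vision-decoy errors.
preds = [
    (0, 0, {0: "target"}),                      # correct
    (1, 0, {0: "target", 1: "language"}),       # fell into a language decoy
    (2, 0, {0: "target", 2: "language"}),       # fell into a language decoy
    (3, 0, {0: "target", 3: "vision"}),         # fell into a vision decoy
]
profile = decoy_error_profile(preds)
# profile == {"language": 2/3, "vision": 1/3}
```

A modality-balanced model would yield roughly 0.5 for both decoy types, which is the reference point used in the analysis above.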
+
+In Figure 5, we report two cherry-picked examples where $\mathrm{LXMERT}_{LV}$ either correctly predicts the target action (left) or chooses a wrong one, in this case a vision-based decoy (right). It is worth mentioning that these two cases are challenging: for both of them, human annotators were able to pick the correct action only in the multimodal version of the task—but neither in $L$ nor in $V$. As can be seen, in the leftmost example the model does a good job in combining complementary information from language and vision. In the rightmost one, instead, it picks an action that is plausible based on the image, but not in the presence of the given intention, which contains a negation (don't). Taken together, these analyses indicate that no simple strategies can be exploited by models to detect and rule out decoy types. Language- and vision-based decoys are equally challenging, and combining complementary information is needed to solve the task.
+
+![](images/47679e3b3026d77bf23779a5d6eead403c18f3a44476dedd1ace703d1ad44998.jpg)
+![](images/0b78b5a5eadab7041b06c902a546a65d5e32e29333a46a52e8053c1d82b922d5.jpg)
+Figure 5: Two samples where humans give the correct answer in the $LV$ setting—but neither in $L$ nor in $V$. LXMERT$_{LV}$ picks the correct answer (blue) in the left sample, a wrong one (red) in the right sample. I: Intention; T: Target action; L/V: Language-/Vision-based decoys. Best viewed in color.
+
+Hard test To explore the robustness of the pretrained models, we check how well they perform on a subset of the test set where several features of the samples were unseen in training. In particular, neither the image nor the intention was seen in training; moreover, the target action could appear as a decoy in training, but never as the target. In Table 3 we report the results by the 3 pretrained models on this subset (1,505 samples); we refer to it as the hard test. As can be seen, all models experience a small decrease in accuracy compared to the whole test set—while humans do not. This indicates that the hard test is indeed more challenging. However, pretrained models are overall robust to unseen features. In line with the standard test set, $\mathrm{LXMERT}_{LV}$ still outperforms the unimodal models, though its drop in performance ($-4\%$) is more pronounced than theirs ($-1$/$-2\%$). This suggests that part of the advantage of the multimodal system over
| model | hard test acc ± std | humans |
|-------|---------------------|--------|
| RoBERTa$_L$ | 55.1 ± 1.6 | 56.5 |
| LXMERT$_V$ | 56.9 ± 0.8 | 73.9 |
| LXMERT$_{LV}$ | 58.3 ± 2.7 | 78.3 |
+
+Table 3: Accuracy of the pretrained transformer-based models on the hard samples of the test set. Human accuracy is computed over 92 samples.
+
+the unimodal ones is due to its fine-tuning. Indeed, pretraining on its own is not enough to properly combine complementary information from the intention and the image. Finally, since humans do not perform worse on these samples, the performance gap with $\mathrm{LXMERT}_{LV}$ increases to $\sim 20\%$.
+
+# 8 Conclusion
+
+Inspired by real-life communicative contexts where language and vision are non-redundant, we proposed a novel benchmark that challenges models to combine complementary multimodal information. This is a crucial ability that, we believe, our benchmark will help push further. In particular, recently proposed universal multimodal encoders can greatly benefit from relatively small but challenging resources such as BD2BB, which can be used to shed light on model abilities and to help develop architectures that exhibit more human-like skills.
+
+Here, we evaluated LXMERT and showed that it struggles to achieve results that are comparable to those by humans. In the future, we plan to evaluate other multimodal encoders on it, and to contribute to the development of better multimodal systems.
+
+# Acknowledgments
+
+The authors kindly acknowledge SAP for sponsoring the work. We are grateful to Moin Nabi and Tassilo Klein (SAP AI Research) for the valuable discussion in the early stages of the project. We thank all the participants who voluntarily took part in the human evaluation, and the attendees of the SiVL workshop co-located with ECCV 2018 for their feedback on a preliminary version of the task and data collection pipeline. We kindly acknowledge the support of NVIDIA Corporation with the donation of the GPUs used in our research. The first author is funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No.
819455 awarded to Raquel Fernandez). + +# References + +Arjun R. Akula, Spandana Gella, Yaser Al-Onaizan, Song-Chun Zhu, and Siva Reddy. 2020. Words aren't enough, their order matters: On the robustness of grounding visual referring expressions. In The 58th Annual Meeting of the Association for Computational Linguistics (ACL). +Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6077-6086. +Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question Answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2425-2433. +Marco Baroni. 2016. Grounding distributional semantics in the visual world. *Language and Linguistics Compass*, 10(1):3-13. +Lisa Beinborn, Teresa Botschen, and Iryna Gurevych. 2018. Multimodal grounding for language processing. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2325-2339. +Raffaella Bernardi, Ruket Cakici, Desmond Elliott, Aykut Erdem, Erkut Erdem, Nazli Ikizler-Cinbis, Frank Keller, Adrian Muscat, and Barbara Plank. 2016. Automatic description generation from images: A survey of models, datasets, and evaluation measures. Journal of Artificial Intelligence Research, 55:409-442. +Nilavra Bhattacharya, Qing Li, and Danna Gurari. 2019. Why does a visual question have different answers? In Proceedings of the IEEE International Conference on Computer Vision, pages 4271-4280. +Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder for English. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169-174. +Wei-Lun Chao, Hexiang Hu, and Fei Sha. 2018. Being negative but constructively: Lessons learnt from creating better visual question answering datasets. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 431-441. +Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740-750. + +Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2019. UNITER: learning universal image-text representations. CoRR, abs/1909.11740. +Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José MF Moura, Devi Parikh, and Dhruv Batra. 2017. Visual dialog. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 326-335. +Harm De Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, and Aaron Courville. 2017. GuessWhat?! Visual object discovery through multi-modal dialogue. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5503-5512. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Diana Gonzalez-Rico and Gibran Fuentes Pineda. 2018. Contextualize, show and tell: A neural visual storyteller. CoRR, abs/1806.00738. 
+Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6904-6913.
+Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham. 2018. VizWiz grand challenge: Answering visual questions from blind people. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3608-3617.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778.
+Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, and Zeynep Akata. 2018. Generating counterfactual explanations with natural language. In ICML Workshop on Human Interpretability in Machine Learning, pages 95-98.
+Micah Hodosh and Julia Hockenmaier. 2016. Focused evaluation for image description with binary forced-choice tasks. In Proceedings of the 5th Workshop on Vision and Language, pages 19-28.
+Ting-Hao Kenneth Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan Misra, Aishwarya Agrawal, Jacob Devlin, Ross Girshick, Xiaodong He, Pushmeet Kohli, Dhruv Batra, et al. 2016. Visual storytelling. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1233-1239.
+Mohit Iyyer, Varun Manjunatha, Anupam Guha, Yogarshi Vyas, Jordan L Boyd-Graber, Hal Daumé III, and Larry S Davis. 2017. The amazing mysteries of the gutter: Drawing inferences between panels in comic book narratives. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6478-6487.
+Allan Jabri, Armand Joulin, and Laurens Van Der Maaten. 2016. Revisiting visual question answering baselines.
In European Conference on Computer Vision, pages 727-739. Springer.
+Unnat Jain, Ziyu Zhang, and Alexander G Schwing. 2017. Creativity: Generating diverse questions using variational autoencoders. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6485-6494.
+Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2901-2910.
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Gunther R Kress. 2010. Multimodality: A social semiotic approach to contemporary communication. Taylor & Francis.
+Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32-73.
+Julia Kruk, Jonah Lubin, Karan Sikka, Xiao Lin, Dan Jurafsky, and Ajay Divakaran. 2019. Integrating text and image: Determining multimodal document intent in Instagram posts. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4614-4624.
+Jie Lei, Licheng Yu, Mohit Bansal, and Tamara Berg. 2018. TVQA: Localized, compositional video question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1369-1379.
+
+Yining Li, Chen Huang, Xiaoou Tang, and Chen Change Loy. 2017. Learning to disambiguate by asking discriminative questions.
In Proceedings of the IEEE International Conference on Computer Vision, pages 3419-3428.
+Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740-755. Springer.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
+Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems, pages 13-23.
+Stephanie Lukin, Reginald Hobbs, and Clare Voss. 2018. A pipeline for creative visual storytelling. In Proceedings of the First Workshop on Storytelling, pages 20-32.
+Harry McGurk and John MacDonald. 1976. Hearing lips and seeing voices. Nature, 264(5588):746-748.
+Nasrin Mostafazadeh, Chris Brockett, Bill Dolan, Michel Galley, Jianfeng Gao, Georgios Spithourakis, and Lucy Vanderwende. 2017. Image-grounded conversations: Multimodal context for natural question and response generation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 462-472.
+Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, and Lucy Vanderwende. 2016. Generating natural questions about an image. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1802-1813.
+Gen Li, Nan Duan, Yuejian Fang, Ming Gong, Daxin Jiang, and Ming Zhou. 2020. Unicoder-VL: A universal encoder for vision and language by cross-modal pre-training. In Proceedings of AAAI.
+Dong Huk Park, Lisa Anne Hendricks, Zeynep Akata, Anna Rohrbach, Bernt Schiele, Trevor Darrell, and Marcus Rohrbach. 2018. Multimodal explanations: Justifying decisions and pointing to the evidence. In 31st IEEE Conference on Computer Vision and Pattern Recognition. +Sarah Partan and Peter Marler. 1999. Communication goes multimodal. Science, 283(5406):1272-1273. + +Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543. +Arijit Ray, Karan Sikka, Ajay Divakaran, Stefan Lee, and Giedrius Burachas. 2019. Sunny and dark outside?! Improving answer consistency in VQA through entailed question generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5860-5865, Hong Kong, China. Association for Computational Linguistics. +Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91-99. +Ravi Shekhar, Sandro Pezzelle, Yauhen Klimovich, Aurélie Herbelot, Moin Nabi, Enver Sangineto, and Raffaella Bernardi. 2017. FOIL it! Find One mismatch between Image and Language caption. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 255-265. +Amanpreet Singh, Vedanuj Goswami, and Devi Parikh. 2020. Are we pretraining it right? Digging deeper into visio-linguistic pretraining. CoRR, abs/2004.08744. +Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958. 
+
+Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2020. VL-BERT: Pretraining of generic visual-linguistic representations. In ICLR.
+Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. 2019. A corpus for reasoning about natural language grounded in photographs. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6418-6428.
+Alex Tamkin, Trisha Singh, Davide Giovanardi, and Noah Goodman. 2020. Investigating transferability in pretrained language models. CoRR, abs/2004.14975.
+Hao Tan and Mohit Bansal. 2019. LXMERT: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5103-5114.
+
+Kento Terao, Toru Tamaki, Bisser Raytchev, Kazufumi Kaneda, and Shun'ichi Satoh. 2020. Which visual questions are difficult to answer? Analysis with entropy of answer distributions. CoRR, abs/2004.05595.
+Carl Vondrick, Deniz Oktay, Hamed Pirsiavash, and Antonio Torralba. 2016. Predicting motivations of actions by leveraging text. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2997-3005.
+Licheng Yu, Eunbyung Park, Alexander C Berg, and Tamara L Berg. 2015. Visual Madlibs: Fill in the blank description generation and question answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2461-2469.
+Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio Savarese. 2018. Taskonomy: Disentangling task transfer learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3712-3722.
+Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. From recognition to cognition: Visual commonsense reasoning.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6720-6731.

Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 93-104, Brussels, Belgium. Association for Computational Linguistics.

Yuke Zhu, Oliver Groth, Michael Bernstein, and Li Fei-Fei. 2016. Visual7W: Grounded question answering in images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4995-5004.

# Appendices

# A Further Details on Data

# A.1 Data Collection

Crowdworkers are presented with detailed instructions and examples before starting the annotation task. First, we introduce the task and provide some details to familiarize them with the annotation tool. Then, we give them instructions regarding the constraints to be observed, i.e., for intentions: (1) use the present tense and (2) do not mention any of the entities depicted in the image; for actions: (1) use the present tense and (2) do mention entities that are visible in the image. To make the instructions and constraints clearer, we show them several examples of good/wrong annotations (see Figure 2).

![](images/efa90d89675d89c204bdf2d0f7b98ab3ad2f97f50e855777507fde7ab9345c36.jpg)
Figure 6: Data collection. One annotation sample presented to participants. Given an image, participants are asked to provide an intention and an action. To ensure they are doing the task properly, a verification question is asked preliminarily. Answering the question correctly (multiple correct answers) leads to the proper annotation phase.

(The screenshot in Figure 6 reproduces the annotation form: a verification question asking to select one HUMAN entity from a list of candidates such as "woman", "player", or "racket"; a field completing "If I ..." with a feeling/behavior, e.g. "want to give encouragement", with the constraints to use the present tense and not mention depicted entities; and a field completing "I will ..." with an action, e.g. "applaud the tennis player", with the constraints to use the present tense and mention entities visible in the image.)

Moreover, to make sure participants are performing the task properly (and, crucially, to avoid collecting fake data from automatic bots), a verification question is asked at the beginning of each image's annotation phase. The verification question has multiple correct answers, and only by picking one of these answers can participants proceed with the annotation phase (see Figure 6).

In addition, we add two sanity checks to the collected intentions. We check that (1) they have a length of at least 5 tokens; if this is not the case, participants are shown a warning and asked to fix their sentence; (2) they do not contain any noun referring to an entity that is grounded in the image; this is checked by means of a simple heuristic which extracts all the nouns from a given image's MS-COCO captions. Nouns with frequency $>1$ are not allowed, and when typing them turkers are warned to modify their sentence.

# A.2 BD2BB Dataset Statistics

As described in Section 4, the final BD2BB dataset includes 10,265 samples, where each sample includes a triple associated with 4 selected decoy actions. These triples were provided by 430 unique annotators. In particular, 253 were from the USA, 111 from the United Kingdom, 53 from Canada, 6 from Ireland, 5 from New Zealand, and 2 from Australia. Each of them provided, on average, 23.87 tuples contained in the dataset (min 1, max 192).

Each sample contains 5 actions. On average, these actions were provided by 4.90 unique annotators (min 3, max 5); moreover, they were collected for 4.96 (min 3, max 5) unique images, i.e., the decoy actions in each sample refer to different images than the target one in most of the cases.
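The intention sanity checks described in A.1 (minimum length, plus the noun filter based on an image's MS-COCO caption nouns) can be sketched as follows; the function names and the toy caption/vocabulary data are illustrative and not taken from our annotation tool:

```python
# Sketch of the two intention sanity checks (A.1): minimum length and the
# "grounded noun" filter. Names and data below are illustrative only.

def grounded_nouns(captions, noun_vocab):
    """Collect nouns from an image's captions; a noun counts as grounded
    when it appears with frequency > 1 across the captions (the heuristic
    described in A.1)."""
    counts = {}
    for caption in captions:
        for tok in caption.lower().split():
            if tok in noun_vocab:
                counts[tok] = counts.get(tok, 0) + 1
    return {n for n, c in counts.items() if c > 1}

def check_intention(intention, banned_nouns, min_len=5):
    """Return a list of warnings; an empty list means the intention passes."""
    tokens = intention.lower().split()
    warnings = []
    if len(tokens) < min_len:
        warnings.append("too short: at least 5 tokens required")
    hits = [t for t in tokens if t in banned_nouns]
    if hits:
        warnings.append("mentions grounded entities: " + ", ".join(hits))
    return warnings

captions = ["a woman hits a ball with a racket",
            "a tennis player swings at the ball"]
noun_vocab = {"woman", "ball", "racket", "player"}
banned = grounded_nouns(captions, noun_vocab)   # only "ball" occurs twice
print(check_intention("want to hit the ball", banned))
```

In the real pipeline the noun vocabulary comes from a POS tagger run over the MS-COCO captions; here it is hard-coded for brevity.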
# A.3 Meta-Annotation

Topics We manually inspected the 60 clusters obtained through $k$-means clustering and removed 6 clusters for which we could not identify a coherent topic. Examples of the actions for each of the remaining 54 clusters, and the corresponding labels we assigned to them, are provided in Table 4. The 60 clusters were reviewed by two of the authors, and we kept only clusters for which full agreement was met.

Numeric 4-Code Annotation We organize our data through a two-step system of word codes, which mark the syntactic class and the word type. With the Stanford NLP parser (Chen and Manning, 2014), we extract syntactic information from each action and mark: 1) the main verb: "code1"; 2) the direct or indirect object of the main verb, as well as other complements related to the main verb: "code2"; 3) the second verb, if present (i.e., the verb of the coordinated or subordinated sentence): "code3"; 4) the object of the second verb, if present: "code4". In this case, we considered not only the direct object of the second verb, but also all the words referring to an object grounded
| labels | action example | code1 | code2 | code3 | code4 |
| --- | --- | --- | --- | --- | --- |
| tennis | grab my tennis raquet firmly and hit the ball | grab | racket | hit | ball |
| food | grab some delicious food | grab | food |  |  |
| cake | cut the cake | cut | cake |  |  |
| snacks | purchase a hot dog | purchase | hotdog |  |  |
| actions with ball | hit the ball as hard as i can | hit | ball |  |  |
| skateboard 1 | go skateboarding | go | skateboard |  |  |
| bikes and motos | take a ride on the motorbike | ride | motorbike |  |  |
| skateboard 4 | pull off this skateboard trick | pull off | trick |  |  |
| surf | grab my surfboard and join the woman | grab | surfboard | join | woman |
| phone | call someone for a chat | call | someone |  |  |
| interact with people | join these people and talk | join | people | talk |  |
| baseball 2 | yell at the batter to distract him | yell | batter | distract | batter |
| sport audience | watch this game | watch | game |  |  |
| approaching women | try to get the woman's attention | get | attention |  |  |
| pizza | order a slice of pizza | order | pizza |  |  |
| ski | use my ski poles judiciously | use | ski poles |  |  |
| drink | i will drink my drink and watch people walk by | drink | drink | watch | people |
| kids | move the baby so i can use the computer | move | baby | use | computer |
| cooking | help those women to cook | help | women | cook |  |
| videogames | grab an extra remote and join the game | grab | remote | join | game |
| pets | take a piece of cake and give it to the dog | take | cake | give | dog |
| clothing | wear my sun glasses | wear | glasses |  |  |
| relax | i would look for a seat to rest | look for | seat |  |  |
| umbrella | use the pink umbrella | use | umbrella |  |  |
| urban activities | try to cross the street to investigate the trams | cross | street | investigate | trams |
| laptop | i will use that laptop the best way | use | laptop |  |  |
| baseball 3 | i will play as batter in a game of baseball | play | game |  |  |
| baseball 1 | watch a baseball game | watch | baseball game |  |  |
| team sports | i play a soccer game | play | soccer |  |  |
| frisbee 2 | join a frisbee team | join | team |  |  |
| birthday | i will sing happy birthday to the girl | sing | happy birthday |  | girl |
| water sports | grab my board and ride the waves | grab | board | ride | wave |
| photo | to go to the bathroom to get a selfie | go to | bathroom | get | selfie |
| zoo animals | ride an elephant | ride | elephant |  |  |
| public transports | i will get on the bus and take a trip | get on | bus | take | trip |
| skateboard 2 | will sit on the wall and watch the skateboarder | sit | wall | watch | skateboarder |
| frisbee 1 | i will leave these men to play their little frisbee game | leave | men | play | frisbee |
| wii | play a wii game | play | wii |  |  |
| bedtime | instead go into my room and lay down | go | room | lay |  |
| manual work / hobbies | use the scissors to make oragmi | use | scissors | make | origami |
| animals farm | watch the man shear the sheep | watch | man | shear | sheep |
| good intentions | get the right job | get | job |  |  |
| kite | enjoy watching the people fly their kites | enjoy |  | watch | people |
| horse riding | ride a horse | ride | horse |  |  |
| toilet things | brush my teeth | brush | teeth |  |  |
| skateboard 3 | i will go to skate park | go | skatepark |  |  |
| street scenes | stealthily unzip his backpack and take his possessions | unzip | backpack | take | possession |
| ski and snow | take off my shirt and do a big ski jump in front of her | take off | shirt | do jump | woman |
| snowboard | go snowboarding | go | snowboard |  |  |
| airport | board that ancient plane | board | plane |  |  |
| fruit | buy and eat a banana | buy | banana | eat | banana |
| haircut | use the hairdryer | use | hairdryer |  |  |
| women and food | tell the girl i hope she enjoys her pizza | tell | girl | enjoy | pizza |
| reading | read the newspaper | read | newspaper |  |  |
Table 4: We report the label assigned to each of the 54 clusters (summarizing its main topic) and one example of the actions included in it. Each action was annotated with codes marking the verb (code1) and the complement object (code2) of the main sentence, and the verb (code3) and complements (code4) of the secondary sentence. Clusters are listed by size, in descending order.
| labels | #actions | #code1 | #code2 | #code3 | #code4 |
| --- | --- | --- | --- | --- | --- |
| tennis | 580 | 90 | 50 | 79 | 41 |
| food | 408 | 76 | 63 | 81 | 57 |
| cake | 334 | 60 | 37 | 65 | 74 |
| snacks | 316 | 68 | 82 | 26 | 50 |
| actions with ball | 298 | 71 | 27 | 54 | 34 |
| skateboard 1 | 270 | 61 | 48 | 51 | 43 |
| bikes and motos | 269 | 86 | 55 | 59 | 51 |
| skateboard 4 | 267 | 54 | 25 | 38 | 33 |
| surf | 262 | 66 | 50 | 52 | 22 |
| phone | 261 | 72 | 48 | 60 | 49 |
| interact with people | 261 | 66 | 58 | 62 | 22 |
| baseball 2 | 259 | 82 | 42 | 69 | 30 |
| sport audience | 250 | 70 | 40 | 32 | 46 |
| approaching women | 227 | 84 | 54 | 49 | 70 |
| pizza | 226 | 43 | 23 | 37 | 42 |
| ski | 223 | 53 | 35 | 26 | 34 |
| drink | 222 | 53 | 46 | 50 | 39 |
| kids | 213 | 78 | 47 | 41 | 73 |
| cooking | 213 | 68 | 70 | 45 | 45 |
| videogames | 212 | 47 | 34 | 42 | 40 |
| pets | 202 | 80 | 47 | 44 | 32 |
| clothing | 202 | 54 | 61 | 48 | 47 |
| relax | 192 | 33 | 14 | 46 | 61 |
| umbrella | 186 | 56 | 24 | 32 | 26 |
| urban activities | 181 | 75 | 56 | 55 | 59 |
| laptop | 180 | 69 | 34 | 43 | 45 |
| baseball 3 | 177 | 33 | 30 | 27 | 6 |
| baseball 1 | 177 | 42 | 32 | 60 | 44 |
| team sports | 172 | 38 | 31 | 27 | 50 |
| frisbee 2 | 172 | 25 | 25 | 29 | 22 |
| birthday | 170 | 62 | 71 | 46 | 59 |
| water sports | 165 | 87 | 60 | 38 | 41 |
| photo | 163 | 39 | 21 | 30 | 44 |
| zoo animals | 161 | 57 | 25 | 32 | 39 |
| public transports | 159 | 46 | 28 | 23 | 22 |
| skateboard 2 | 158 | 45 | 36 | 35 | 25 |
| frisbee 1 | 154 | 39 | 11 | 31 | 27 |
| wii | 149 | 36 | 22 | 35 | 22 |
| bedtime | 144 | 53 | 38 | 51 | 29 |
| manual work / hobbies | 139 | 69 | 75 | 44 | 60 |
| animals farm | 139 | 69 | 41 | 32 | 26 |
| good intentions | 132 | 66 | 64 | 44 | 32 |
| kite | 125 | 28 | 18 | 31 | 17 |
| horse riding | 118 | 49 | 22 | 22 | 29 |
| toilet things | 105 | 43 | 38 | 29 | 24 |
| skateboard 3 | 98 | 22 | 16 | 18 | 14 |
| street scenes | 96 | 56 | 37 | 26 | 35 |
| ski and snow | 95 | 48 | 26 | 31 | 23 |
| snowboard 1 | 94 | 27 | 26 | 21 | 17 |
| airport | 93 | 48 | 30 | 35 | 12 |
| fruit | 89 | 33 | 18 | 24 | 20 |
| haircut | 54 | 31 | 21 | 19 | 15 |
| women and food | 43 | 24 | 18 | 22 | 14 |
| reading | 32 | 11 | 11 | 11 | 7 |
Table 5: Statistics on the meta-annotation of the data. For each cluster, we report the number of actions, the number of verbs in the main (code1) and in the secondary sentence (code3), and the number of nouns occurring as complements in the main (code2) and in the secondary sentence (code4).

in the corresponding image that specify the action expressed by the sentence. This way, for each action in which this was possible, we have a word that underlines the link between the linguistic and the visual aspect of the annotation. All the outputs by the parser were manually checked and fixed where needed. This was done by two of the authors: first, a subset of the data was annotated by the two authors together; then, each of the authors annotated a different subset. Only doubtful cases were discussed. In Table 4, for each action given as an example of the cluster, we highlight the words cor
| cluster | action | code1 | code2 | code3 | code4 |
| --- | --- | --- | --- | --- | --- |
| food | join the people in the restaurant to enjoy a meal | join 1 | people 77 | enjoy 15 | meal 28 |
| food | get some food with the people | get 107 | food 60 |  | people 666 |
| frisbee | join this man playing frisbee | join 9 | man 11 | play 13 | frisbee 14 |
| frisbee | catch the frisbee and throw it again | catch 777 | frisbee 777 | throw 8 | frisbee 14 |
+ +Table 6: Examples of actions and corresponding word-type codes. Note that: (1) a given verb, e.g., join, is assigned different codes in different clusters (lines 1 and 3); (2) a given object within the same cluster, e.g., frisbee at line 4, is assigned different codes in different syntactic positions; (3) a given object, e.g., frisbee at lines 3 and 4, is assigned the same code if belonging to the same cluster and in the same syntactic position. + +
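The code1–code4 extraction described in A.3 can be sketched as follows. We used the Stanford parser (Chen and Manning, 2014) to obtain the syntactic analysis; the sketch below is only a simplified approximation that assumes the parse is already available as (token, POS, head-index) triples:

```python
# Toy sketch of the code1-code4 slot extraction (A.3). A real run would use
# the Stanford parser; here the parse is a hand-written list of
# (token, pos, head) triples, where head is the index of the syntactic head
# (-1 for the root). Purely illustrative.

def extract_codes(parse):
    """Return (code1, code2, code3, code4): the main verb, its object,
    the second verb (if any), and the second verb's object (if any)."""
    verbs = [i for i, (_, pos, _) in enumerate(parse) if pos == "VERB"]
    code1 = parse[verbs[0]][0] if verbs else None            # main verb
    code3 = parse[verbs[1]][0] if len(verbs) > 1 else None   # second verb

    def obj_of(v):
        # first noun whose syntactic head is the verb at index v
        if v is None:
            return None
        deps = [tok for tok, pos, head in parse if head == v and pos == "NOUN"]
        return deps[0] if deps else None

    code2 = obj_of(verbs[0] if verbs else None)
    code4 = obj_of(verbs[1] if len(verbs) > 1 else None)
    return code1, code2, code3, code4

# "grab my surfboard and join the woman" (cf. the "surf" row of Table 4)
parse = [("grab", "VERB", -1), ("my", "DET", 2), ("surfboard", "NOUN", 0),
         ("and", "CCONJ", 4), ("join", "VERB", 0), ("the", "DET", 6),
         ("woman", "NOUN", 4)]
print(extract_codes(parse))  # ('grab', 'surfboard', 'join', 'woman')
```

The real annotation also considers indirect objects and other verb complements for code2/code4, which this sketch omits.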
| Model | Number of parameters |
| --- | --- |
| \( baseline_L \) | 4,931,585 |
| \( baseline_V \) | 19,251,201 |
| \( baseline_{LV} \) | 21,708,801 |
| \( RoBERTa_L \) | 124,646,401 |
| \( LXMERT_V \) | 194,352,385 |
| \( LXMERT_{LV} \) | 194,352,385 |
Table 7: Number of parameters of each model. The number of parameters is the same both in models trained from scratch and in pre-trained ones.

responding to each of the four codes. Statistics about this meta-annotation are reported in Table 5.

Furthermore, for each topic cluster, we assign a numeric wordcode to each unique word-type in the 4 syntactic classes described above. In other words, each sentence is translated into a code composed of 4 numbers, each one representing a unique word in the corresponding syntactic class.[10] Illustrative examples are given in Table 6.[11]

# B Further Details on Experiments

# B.1 Models

The number of parameters of each model is reported in Table 7. The number of parameters is the same both in models trained from scratch and in pre-trained ones. The validation accuracy and epoch of the best models for each of the three runs are reported in Table 8. For each of the three runs, we consider the model obtaining the best validation accuracy. For each model, we report the mean and standard deviation of the test accuracies obtained across the three runs.

Baseline Our baseline is inspired by Jabri et al. (2016), but we use Softmax instead of Sigmoid as the final activation function, to compute a probability distribution over all the candidates and choose the best one. We consider a version receiving image, intention and actions $(\mathbf{baseline}_{LV})$, a version receiving image and actions $(\mathbf{baseline}_V)$, and a version receiving intention and actions $(\mathbf{baseline}_L)$. We used PyTorch 1.4.0. Baseline models were run on a CPU and their training took 33 seconds per epoch on average. We used a batch size equal to 32. We performed a grid search over two hyperparameters: the size of the hidden layer receiving the concatenated features (we tried values 8192 and 2048) and the dropout probability of zeroing elements of the input tensor right after the ReLU activation function (we tried values 0.0 and 0.5).
The combination of parameters which led to the best validation accuracy was a hidden layer of size 8192 and a dropout probability of 0.0, i.e., no dropout at all.

RoBERTa The RoBERTa$_{\rm BASE}$ model we used has 12 self-attention layers with 12 heads each. It uses three special tokens, namely CLS, which is taken to be the representation of the given sequence, SEP, which separates sequences, and EOS, which denotes the end of the input. For each of the 5 datapoints in the sample, RoBERTa encodes the input as a sequence composed of CLS, the intention, SEP, the action, and EOS. As in the original work, we use the representation corresponding to the CLS token when using the encoder in the downstream task. For RoBERTa we used PyTorch 1.0.1 and we started from the source code available at https://github.com/huggingface/transformers. Both when fine-tuning the pre-trained model and when training the model from scratch, we used a batch size equal to 32 with 8 gradient accumulation steps (an effective batch size of 256), a weight decay equal to 0.01, gradient clipping equal to 5, and a learning rate which is warmed up over the
| Model | Run 1 Epoch | Run 1 Valid. acc. | Run 2 Epoch | Run 2 Valid. acc. | Run 3 Epoch | Run 3 Valid. acc. |
| --- | --- | --- | --- | --- | --- | --- |
| \( baseline_L \) | 19 | 0.449 | 28 | 0.446 | 41 | 0.462 |
| \( baseline_V \) | 25 | 0.453 | 21 | 0.467 | 23 | 0.453 |
| \( baseline_{LV} \) | 22 | 0.481 | 34 | 0.496 | 36 | 0.480 |
| \( RoBERTa_L^s \) | 3 | 47.1 | 2 | 46.8 | 2 | 47.1 |
| \( LXMERT_V^s \) | 8 | 32.0 | 8 | 29.9 | 48 | 30.7 |
| \( LXMERT_{LV}^s \) | 35 | 50.2 | 9 | 50.8 | 28 | 50.2 |
| \( RoBERTa_L \) | 12 | 0.571 | 36 | 0.557 | 38 | 0.550 |
| \( LXMERT_V \) | 38 | 0.593 | 49 | 0.588 | 31 | 0.592 |
| \( LXMERT_{LV} \) | 44 | 0.643 | 36 | 0.647 | 18 | 0.595 |
Table 8: Epoch and validation accuracy of the best models for each run.

first $10\%$ of steps to a peak value of 0.00005 and then linearly decayed.

LXMERT The LXMERT model we used has an Object-Relationship Encoder and a Language Encoder, which encode relationships between regions and relationships between words, respectively, through a self-attention mechanism, and a Cross-Modality Encoder, which encodes relationships between regions and words (and vice-versa) through a cross-modal attention mechanism followed by a self-attention mechanism. The numbers of layers in the Language Encoder, Object-Relationship Encoder, and Cross-Modality Encoder are 9, 5, and 5, respectively. As in RoBERTa, LXMERT uses the special tokens CLS and SEP. Differently from RoBERTa, LXMERT uses the special token SEP both to separate sequences and to denote the end of the textual input. As in the original work, we use the representation corresponding to the CLS token when using the encoder in the downstream task. For LXMERT we used PyTorch 1.0.1 and we started from the source code available at https://github.com/airsplay/lxmert. As with RoBERTa, both when fine-tuning the pre-trained model and when training the model from scratch, we used a batch size equal to 32 with 8 gradient accumulation steps (an effective batch size of 256), a weight decay equal to 0.01, gradient clipping equal to 5, and a learning rate which is warmed up over the first $10\%$ of steps to a peak value of 0.00005 and then linearly decayed.
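The learning-rate schedule shared by the RoBERTa and LXMERT runs (linear warmup over the first 10% of steps to a peak of 0.00005, then linear decay to zero) can be sketched as a plain function of the step index; the function name and the toy step counts are illustrative:

```python
def lr_at(step, total_steps, peak=5e-5, warmup_frac=0.1):
    """Linear warmup over the first `warmup_frac` of steps up to `peak`,
    then linear decay to zero at `total_steps`."""
    warmup_steps = int(total_steps * warmup_frac)
    if step < warmup_steps:
        return peak * step / max(1, warmup_steps)
    # linear decay from peak (at warmup end) down to 0 (at total_steps)
    return peak * (total_steps - step) / max(1, total_steps - warmup_steps)

total = 1000
print(lr_at(50, total))    # halfway through warmup -> 2.5e-05
print(lr_at(100, total))   # end of warmup -> peak (5e-05)
print(lr_at(1000, total))  # end of training -> 0.0
```

In practice the same effect is obtained with a scheduler such as PyTorch's `LambdaLR` wrapping a function like this one.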
\ No newline at end of file diff --git a/bedifferenttobebetterabenchmarktoleveragethecomplementarityoflanguageandvision/images.zip b/bedifferenttobebetterabenchmarktoleveragethecomplementarityoflanguageandvision/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..43d6f2c1d16a27a68686b85b12b8aacce298a968 --- /dev/null +++ b/bedifferenttobebetterabenchmarktoleveragethecomplementarityoflanguageandvision/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dd5a561cb95ddd32da2499a235c45a91dcdeb2c70d4a509c0ab4293078148d74 +size 893310 diff --git a/bedifferenttobebetterabenchmarktoleveragethecomplementarityoflanguageandvision/layout.json b/bedifferenttobebetterabenchmarktoleveragethecomplementarityoflanguageandvision/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..7bb72536e1efdb85e147e4de02c28a0a1bcb1a2d --- /dev/null +++ b/bedifferenttobebetterabenchmarktoleveragethecomplementarityoflanguageandvision/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:db11c47ee5ca117dae29f841d43c574eeda6d167d33f4c4206abafdfcba6c092 +size 551344 diff --git a/bertformonolingualandcrosslingualreversedictionary/d4bf055e-e413-4124-9cc0-f0e484820334_content_list.json b/bertformonolingualandcrosslingualreversedictionary/d4bf055e-e413-4124-9cc0-f0e484820334_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..789ee961b86e1b22d64ac560d7566dd6987aaf11 --- /dev/null +++ b/bertformonolingualandcrosslingualreversedictionary/d4bf055e-e413-4124-9cc0-f0e484820334_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bca38d55e16a423dcc5d6d58501c0f3a527ddb245e1c5103da3825e4d65e629c +size 71403 diff --git a/bertformonolingualandcrosslingualreversedictionary/d4bf055e-e413-4124-9cc0-f0e484820334_model.json b/bertformonolingualandcrosslingualreversedictionary/d4bf055e-e413-4124-9cc0-f0e484820334_model.json new file mode 
100644 index 0000000000000000000000000000000000000000..4da2515cef25b755156a2850c7e861de52dada5e --- /dev/null +++ b/bertformonolingualandcrosslingualreversedictionary/d4bf055e-e413-4124-9cc0-f0e484820334_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:45fc41fa6db709eb6bc6852e6fbbe15c674959dc891734989675a78f40088bd6 +size 84708 diff --git a/bertformonolingualandcrosslingualreversedictionary/d4bf055e-e413-4124-9cc0-f0e484820334_origin.pdf b/bertformonolingualandcrosslingualreversedictionary/d4bf055e-e413-4124-9cc0-f0e484820334_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6b88bfccc316831c19c0d56bbb533560f6938b53 --- /dev/null +++ b/bertformonolingualandcrosslingualreversedictionary/d4bf055e-e413-4124-9cc0-f0e484820334_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:35a858dd08f238ea833a40a0f3d30ccf760a54255a86f47307a51f2735223fd6 +size 1100355 diff --git a/bertformonolingualandcrosslingualreversedictionary/full.md b/bertformonolingualandcrosslingualreversedictionary/full.md new file mode 100644 index 0000000000000000000000000000000000000000..cdd0026d96a1cd4fbde5a605fb49a1605d8602db --- /dev/null +++ b/bertformonolingualandcrosslingualreversedictionary/full.md @@ -0,0 +1,296 @@

# BERT for Monolingual and Cross-Lingual Reverse Dictionary

Hang Yan, Xiaonan Li, Xipeng Qiu*, Bocao Deng

Shanghai Key Laboratory of Intelligent Information Processing, Fudan University

School of Computer Science, Fudan University

2005 Songhu Road, Shanghai, China

{hyan19,xnli20,xpqiu}@fudan.edu.cn,dengbocao@gmail.com

# Abstract

Reverse dictionary is the task of finding the proper target word given a description of it. In this paper, we incorporate BERT into this task. However, since BERT is based on byte-pair-encoding (BPE) subword encoding, it is nontrivial to make BERT generate a word given the description.
We propose a simple but effective method to make BERT generate the target word for this specific task. Besides, the cross-lingual reverse dictionary is the task of finding the proper target word given a description in another language. Previous models have to keep two different word embeddings and learn to align these embeddings. Nevertheless, by using the Multilingual BERT (mBERT), we can efficiently perform cross-lingual reverse dictionary lookup with a single subword embedding, and no explicit alignment between languages is necessary. More importantly, mBERT can achieve remarkable cross-lingual reverse dictionary performance even without a parallel corpus, which means it can perform the cross-lingual reverse dictionary task with only the corresponding monolingual data. Code is publicly available at https://github.com/yhcc/BertForRD.git.

# 1 Introduction

Reverse dictionary (Bilac et al., 2004; Hill et al., 2016) is the task of finding the proper target word given a description of it. Fig. 1 shows an example of the monolingual and the cross-lingual reverse dictionary. A reverse dictionary is a useful tool to help writers, translators, and new language learners find a proper word when encountering the tip-of-the-tongue problem (Brown and McNeill, 1966). Moreover, the reverse dictionary can be used for educational evaluation. For example, teachers can ask the students to describe a word, and a correct description should make the reverse dictionary model recall that word.

![](images/281173dde8ab3cb9296194b3ce3274397d20e86a04764d00019c775eec35d1d3.jpg)
Figure 1: An example of the monolingual and cross-lingual reverse dictionary.

The core of reverse dictionary is to match a word and its description semantically. Early methods (Bilac et al., 2004; Shaw et al., 2013) first extracted handcrafted features and then used similarity-based approaches to find the target word.
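Both the early similarity-based systems and the later embedding-based ones rank candidates by their proximity to the (encoded) description. A minimal cosine-similarity ranking can be sketched as follows, with toy vectors standing in for learned embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_words(description_vec, word_vecs):
    """Return candidate words sorted by cosine similarity to the encoded
    description, most similar first."""
    return sorted(word_vecs,
                  key=lambda w: cosine(description_vec, word_vecs[w]),
                  reverse=True)

# Toy embeddings: in a real system these come from the trained encoder.
word_vecs = {"racket": [0.9, 0.1, 0.0],
             "pizza":  [0.0, 0.2, 0.9],
             "kite":   [0.3, 0.8, 0.1]}
desc = [0.8, 0.2, 0.1]  # pretend encoding of "equipment used to hit a tennis ball"
print(rank_words(desc, word_vecs))  # 'racket' ranked first
```

A neural reverse dictionary replaces the toy vectors with the encoder's output for the description and the embedding of each candidate word.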
However, since these methods are mainly based on the surface form of words, they cannot capture the semantic meaning, resulting in poor performance when evaluated on human-written search queries. Recent methods usually adopt neural networks to encode the description and the candidate words into the same semantic embedding space and return the word closest to the description (Hill et al., 2016; Zhang et al., 2019).

Although current neural methods can extract the semantic representations of the descriptions and words, they face three challenging issues: (1) The first issue is data sparsity: it is hard to learn good embeddings for low-frequency words; (2) The second issue is polysemy: previous methods usually use static word embeddings (Mikolov et al., 2013; Pennington et al., 2014), making them struggle to find the target word when it is polysemous. Pilehvar (2019) used different word senses to represent a word; nonetheless, gathering senses for all words is not easy; (3) The third issue is the alignment of cross-lingual word embeddings in the cross-lingual reverse dictionary scenario (Hill et al., 2016; Chen et al., 2019).

In this paper, we leverage the pre-trained masked language model BERT (Devlin et al., 2019) to tackle the above issues. Firstly, since BERT tokenizes words into subwords with byte-pair encoding (BPE) (Sennrich et al., 2016b), the common subwords between low-frequency and high-frequency words can alleviate the data sparsity problem. Secondly, BERT outputs a contextualized representation for a word, so the polysemy problem is much relieved. Thirdly, mBERT is well suited to the cross-lingual reverse dictionary: because BERT shares some subwords between different languages, there is no need to align different languages explicitly.
Therefore, we formulate the reverse dictionary task within the masked language model framework and use BERT to deal with the reverse dictionary task in monolingual and cross-lingual scenarios. Besides, our proposed framework can also tackle the cross-lingual reverse dictionary task without a parallel (aligned) corpus.

Our contributions can be summarized as follows:

1. We propose a simple but effective solution to incorporate BERT into the reverse dictionary task, in which the target word is predicted from the masked language model predictions. With BERT, we achieve significant improvements on the monolingual reverse dictionary task.
2. By leveraging the Multilingual BERT (mBERT), we extend our method to the cross-lingual reverse dictionary task; mBERT not only avoids the explicit alignment between different language embeddings, but also achieves good performance.
3. We propose the unaligned cross-lingual reverse dictionary scenario and achieve encouraging performance with only monolingual reverse dictionary data. As far as we know, this is the first time the unaligned cross-lingual reverse dictionary has been investigated.

# 2 Related Work

The reverse dictionary task has been investigated in several previous academic studies. Bilac et al. (2004) proposed using information retrieval techniques to solve this task: they first built a database based on available dictionaries; when a query came in, the system would find the closest definition in the database and return the corresponding word, where different similarity metrics can be used to calculate the distance. Shaw et al. (2013) enhanced the retrieval system with WordNet (Miller, 1995). Hill et al. (2016) were the first to apply RNNs to the reverse dictionary task, making the model free of handcrafted features: after encoding the definition into a dense vector, this vector is used to find its nearest neighbor word.
This model formulation has been adopted in several papers (Pilehvar, 2019; Chen et al., 2019; Zhang et al., 2019; Morinaga and Yamaguchi, 2018; Hedderich et al., 2019), which differ mainly in the resources they use. Kartsaklis et al. (2018) and Thorat and Choudhari (2016) used WordNet to form graphs to tackle the reverse dictionary task.

The construction of bilingual reverse dictionaries has been studied in (Gollins and Sanderson, 2001; Lam and Kalita, 2013). Lam and Kalita (2013) relied on the availability of lexical resources, such as WordNet, to build a bilingual reverse dictionary. Chen et al. (2019) built several bilingual reverse dictionaries based on the Wiktionary1, but the quality of this kind of online data cannot be guaranteed. Building a bilingual reverse dictionary is not an easy task, and it is even harder for low-resource languages. Besides the quality problem, the vast number of language pairs is also a big obstacle, since $N$ languages form $N^2$ pairs. However, with the unaligned cross-lingual reverse dictionary, we can not only exploit high-quality monolingual dictionaries, but also avoid preparing $N^2$ language pairs.

Unsupervised machine translation is highly correlated with the unaligned cross-lingual reverse dictionary (Lample et al., 2018a; Conneau and Lample, 2019; Sennrich et al., 2016a). However, the unaligned cross-lingual reverse dictionary task differs from unsupervised machine translation in at least two aspects. Firstly, the target for the cross-lingual reverse dictionary is a word, while for machine translation it is a sentence. Secondly, the translated sentence and the original sentence should, in theory, contain the same information, whereas in the cross-lingual reverse dictionary task, on the one hand, the target word might carry more senses when it is polysemous; on the other hand, a description can correspond to several similar terms.
Polysemy also makes it hard for unsupervised word alignment to solve this task (Lample et al., 2018b).

Last but not least, the pre-trained language model BERT has been extensively exploited in the Natural Language Processing (NLP) community since its introduction (Devlin et al., 2019; Conneau and Lample, 2019). Owing to its ability to extract contextualized information, BERT has been successfully utilized to substantially enhance various tasks, such as aspect-based sentiment analysis (Sun et al., 2019), summarization (Zhong et al., 2019), named entity recognition (Yan et al., 2019; Li et al., 2020), and Chinese dependency parsing (Yan et al., 2020). However, most works used BERT as an encoder, and less work has used BERT for generation (Wang and Cho, 2019; Conneau and Lample, 2019). Wang and Cho (2019) showed that BERT is a Markov random field language model; therefore, sentences can be sampled from BERT. Conneau and Lample (2019) used pre-trained BERT to initialize an unsupervised machine translation model and achieved good performance. Different from these works, although a word might contain several subwords, we use a simple but effective method to make BERT generate the word ranking list with only one forward pass.

# 3 Methodology

The reverse dictionary task is to find the target word $w$ given its definition $d = [w_1, w_2, \ldots, w_n]$, where $d$ and $w$ can be in the same language or in different languages. In this section, we first introduce BERT and then present the method we use to incorporate BERT into the reverse dictionary task.

# 3.1 BERT

BERT is a pre-trained model proposed by Devlin et al. (2019). BERT contains several Transformer Encoder layers.
BERT can be formulated as follows:

$$
\hat{h}^{l} = \mathrm{LN}\left(h^{l-1} + \mathrm{MHAtt}\left(h^{l-1}\right)\right), \tag{1}
$$

$$
h^{l} = \mathrm{LN}\left(\hat{h}^{l} + \mathrm{FFN}\left(\hat{h}^{l}\right)\right), \tag{2}
$$

where $h^0$ is the BERT input; for each token, it is the sum of its token embedding, position embedding, and segment embedding. LN is the layer normalization layer; MHAtt is the multi-head self-attention; FFN contains three layers: a linear projection layer, an activation layer, and another linear projection layer; $l$ is the depth of the layer, and the total number of layers in BERT is 12 or 24.

![](images/b9dfb165d2991b3b8f25471e55162c40216660cb8ea2d40734a8a00ed71df484.jpg)
Figure 2: The model structure for the monolingual and cross-lingual reverse dictionary. The “[MASK]” in the input is the placeholder where BERT needs to predict. The placeholders are concatenated with the word definition before being sent into BERT. Postprocessing is required to convert the predictions for the “[MASK]” tokens into the word ranking list.

Two tasks were used to pre-train BERT. The first is to replace some tokens with the “[MASK]” symbol; BERT has to recover the masked tokens from the outputs of the last layer. The second one is next sentence prediction: for two continuous sentences, $50\%$ of the time the second sentence is replaced with another sentence, and BERT has to figure out whether the input sequence is continuous based on the output vector of the “[CLS]” token. Another noticeable fact about BERT is that, instead of directly using words, it uses BPE subwords (Sennrich et al., 2016b) to represent tokens. Therefore, one word may be split into several tokens. Next, we will show how we make BERT generate the word ranking list.

# 3.2 BERT for Monolingual Reverse Dictionary

The model structure is shown in Fig. 2.
The input sequence $x$ has the form “[CLS] + [MASK] $\times k$ + [SEP] + [subword sequence of the definition $d$] + [SEP]”. We want BERT to recover the target word $w$ from the $k$ “[MASK]” tokens based on the definition $d$. We first utilize BERT to predict the masks as in its pre-training task. This can be formulated as

$$
S_{\mathrm{subword}} = \mathrm{MLM}\left(H_{k}^{L}\right), \tag{3}
$$

where $H_{k}^{L}\in \mathbb{R}^{k\times d_{model}}$ is the hidden states for the $k$ masked tokens in the last layer, MLM is the pre-trained masked language model, $S_{\mathrm{subword}}\in \mathbb{R}^{k\times |V|}$ is the subword score distribution for the $k$ positions, and $|V|$ is the number of subword tokens. Although we could make BERT directly predict words by using a word embedding, this would suffer from at least two problems: first, it cannot take advantage of common subwords between words, such as prefixes and suffixes; second, predicting words is inconsistent with the pre-training tasks.

After obtaining $S_{\mathrm{subword}}$, we need to convert it back to word scores. However, there are $|V|^k$ possible subword combinations, which makes it intractable to enumerate words by crossing subwords. Another method is to generate subwords one by one (Wang and Cho, 2019; Conneau and Lample, 2019), but it is not suitable for this task, since this task needs to return a ranking list of words, whereas generation can only offer a limited number of answers. Nevertheless, for this specific task, the number of possible target words is fixed, since the number of unique words in one language's dictionary is limited. Hence, instead of combining subword sequences into different words, we only need to care about the subword sequences that form valid words.

Specifically, for a given language, we first list all its valid words and find the subword sequence for each word.
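This lookup step, together with the per-position score summation used to rank candidate words, can be sketched in plain Python; the vocabulary, subword ids, and scores below are toy values chosen for illustration:

```python
MASK = 0  # id of the "[MASK]" padding token in the toy subword vocabulary

def word_score(subword_ids, S_subword, k):
    """Score a candidate word by summing, for each of its subwords, the
    score of that subword at its mask position, after padding the sequence
    with "[MASK]" up to k slots. Words with more than k subwords cannot be
    scored under this scheme and get None."""
    if len(subword_ids) > k:
        return None
    padded = subword_ids + [MASK] * (k - len(subword_ids))
    return sum(S_subword[i][b] for i, b in enumerate(padded))

# Toy setting: k = 2 mask slots, subword vocabulary of size 4.
# S_subword[i][v] = score of subword v at masked position i.
S_subword = [[0.1, 2.0, 0.3, 0.2],   # position 0
             [3.0, 0.1, 0.2, 1.5]]   # position 1
words = {"surf": [1],            # one subword, padded with [MASK]
         "surf|board": [1, 3],   # two subwords
         "too|long|word": [1, 2, 3]}
scores = {w: word_score(ids, S_subword, k=2) for w, ids in words.items()}
ranking = sorted((w for w in scores if scores[w] is not None),
                 key=lambda w: scores[w], reverse=True)
print(scores, ranking)
```

Because the word list per language is fixed, all candidate scores can be obtained from a single forward pass over the masked positions.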
For a word $w$ with the subword sequence $[b_1, \dots, b_k]$, its score is calculated by

$$
S_{word} = \sum_{i=1}^{k} S_{subword}^{i}[b_{i}], \tag{4}
$$

where $S_{word} \in \mathbb{R}$ is the score for the word $w$, $S_{subword}^{i} \in \mathbb{R}^{|V|}$ is the subword score distribution in the $i$-th position, and $S_{subword}^{i}[b_i]$ gathers the $b_i$-th element of $S_{subword}^{i}$. However, not all words decompose into exactly $k$ subword tokens: if a word has fewer than $k$ subword tokens, we pad its sequence with “[MASK]”, while our method cannot handle words with more than $k$ subword tokens. In this way, each word gets a score, so we can directly use the cross-entropy loss to finetune the model,

$$
L_{w} = -\sum_{i=1}^{N} w^{(i)} \operatorname{log\_softmax}\left(S_{word}^{(i)}\right), \tag{5}
$$

where $N$ is the total number of samples and $w$ is the target word. When ranking, words are sorted by their scores.

# 3.3 BERT for Cross-lingual Reverse Dictionary

The model structure used in this setting is as depicted in Fig. 2. The only difference between this setting and the monolingual scenario is the pre-trained model used: this setting uses the mBERT model. mBERT has the same structure as BERT, but it was trained on 104 languages; therefore its token embedding contains subwords from different languages.

![](images/2960acb9b99917ab4b57b28a177d17ed1a438c14bdb906b666693eaf740dae71.jpg)
Figure 3: The model structure for the unaligned cross-lingual reverse dictionary. We add a randomly initialized language embedding to distinguish languages. Since we only have monolingual training data, "LG1" and "LG2" have the same value in the training phase, but different values in the evaluation phase.

# 3.4 BERT for Unaligned Cross-lingual Reverse Dictionary

The model used for this setting is as depicted in Fig. 3.
Compared with the BERT model, we add an extra learnable language embedding at the bottom; the language embedding has the same dimension as the other embeddings. Except for the randomly initialized language embedding, the model is initialized with the pre-trained mBERT.

Instead of using the MLM head to get $S_{\text{subword}}$, we use the following equation:

$$
S_{\text{subword}} = H_{k}^{L} \operatorname{Emb}_{\text{token}}^{T}, \tag{6}
$$

where $Emb_{token} \in \mathbb{R}^{|V| \times d_{model}}$ is the subword token embedding matrix. We found that this formulation leads to better performance than using the MLM head; we assume this is because the training data are monolingual only, so it is hard for the model to predict tokens of another language at evaluation time, whereas with $Emb_{token}$ the model can exploit the similarity between subwords to make reasonable predictions. After getting $S_{subword}$, we use Eq. 4 to get the score for each word. Since different languages have different word lists, the loss is calculated by

$$
L_{w} = -\sum_{j=1}^{M} \sum_{i=1}^{N_{j}} w_{j}^{(i)} \operatorname{log\_softmax}\left(S_{word_{j}}^{(i)}\right), \tag{7}
$$

where $M$ is the number of languages, $N_{j}$ is the number of samples for language $j$, $w_{j}^{(i)}$ is the target
| Language | Word | Type | Train | Dev | Seen | Unseen | Description | Question |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| English | 50.5K | Def | 675.7K | 75.9K | 500 | 500 | 200 | - |
| | | Word | 45.0K | 5.0K | 500 | 500 | 200 | - |
| Chinese | 58.5K | Def | 78.3K | 8.7K | 2.1K | 2.0K | 200 | 272 |
| | | Word | 54.0K | 6.1K | 1.4K | 1.4K | 200 | 272 |
Table 1: Dataset statistics for the monolingual reverse dictionary. The rows "Def" and "Word" give the number of definitions and of distinct words in each split, respectively.

word in language $j$, and $S_{word_j}^{(i)}$ is the score distribution over words in language $j$. When producing the ranking list for a language, we only calculate word scores for that language.

# 4 Experimental Setup

# 4.1 Dataset

For the monolingual reverse dictionary, we test our method on the English and Chinese datasets released by Hill et al. (2016) and Zhang et al. (2019), respectively. Hill et al. (2016) built their dataset by extracting words and definitions from five electronic dictionaries and Wikipedia. Zhang et al. (2019) used the authoritative Modern Chinese Dictionary to build the Chinese reverse dictionary. There are four different test sets: (1) Seen definition set: words and their definitions are seen during the training phase. (2) Unseen definition set: none of the word's definitions have been seen during the training phase, but they might occur in other words' definitions. (3) Description definition set: the description and its corresponding word are given by humans; methods relying on word matching may not perform well in this setting (Hill et al., 2016). (4) Question definition set: this set exists only for Chinese; it contains 272 definitions that appeared in Chinese exams. The detailed dataset statistics are shown in Table 1.

For the cross-lingual and unaligned cross-lingual reverse dictionary, we use the dataset released by Chen et al. (2019). It includes four bilingual reverse dictionaries: English $\leftrightarrow$ French and English $\leftrightarrow$ Spanish. Besides, it includes English, French, and Spanish monolingual reverse dictionary data. The test set for this dataset is four bilingual reverse dictionaries: En $\leftrightarrow$ Fr and En $\leftrightarrow$ Es.
For the cross-lingual reverse dictionary, we use the paired bilingual reverse dictionary data to train our model; for the unaligned cross-lingual reverse dictionary, we use the three monolingual reverse dictionary datasets to train our model. For both
| Scenario | Language | Word | Type | Train | Dev | Test |
| --- | --- | --- | --- | --- | --- | --- |
| Monolingual | En | 117.4K | Def | 228.2K | 500 | 501 |
| | | | Word | 117.3K | 499 | 501 |
| | Fr | 52.4K | Def | 104.4K | 500 | 501 |
| | | | Word | 52.2K | 496 | 501 |
| | Es | 22.5K | Def | 47.6K | 500 | 501 |
| | | | Word | 22.4K | 493 | 501 |
| Bilingual | En-Fr | 45.6K | Def | 49.7K | 500 | 501 |
| | | | Word | 15.6K | 493 | 488 |
| | Fr-En | 44.5K | Def | 58.1K | 500 | 501 |
| | | | Word | 16.8K | 487 | 486 |
| | En-Es | 45.6K | Def | 20.2K | 500 | 501 |
| | | | Word | 7.9K | 484 | 495 |
| | Es-En | 35.8K | Def | 55.9K | 500 | 501 |
| | | | Word | 15.9K | 489 | 487 |
Table 2: Dataset statistics for the cross-lingual and unaligned cross-lingual reverse dictionary. The upper block is the monolingual data used to train the unaligned cross-lingual reverse dictionary; the lower block is the cross-lingual reverse dictionary data. Both scenarios are evaluated on the test sets in the lower block. "En-Fr" means the target word is in English and the definition is in French.

settings, we report results on the test sets of the four bilingual reverse dictionaries. The detailed dataset statistics are shown in Table 2.

# 4.2 Evaluation Metrics

For the English and Chinese monolingual reverse dictionary, we report three metrics: the median rank of the target words (Median Rank; lower is better, the lowest is 0), the ratio of target words that appear in the top 1/10/100 (Acc@1/10/100; higher is better, ranges from 0 to 1), and the variance of the rank of the correct target word (Rank Variance; lower is better). These metrics are also reported in (Hill et al., 2016; Zhang et al., 2019). For the cross-lingual and unaligned cross-lingual reverse dictionary, we report Acc@1/10 and the mean reciprocal rank (MRR; higher is better, ranges from 0 to 1). These metrics are also reported in (Chen et al., 2019).

# 4.3 Hyper-parameter Settings

The English BERT and Multilingual BERT (mBERT) are from (Devlin et al., 2019); the Chinese BERT is from (Cui et al., 2019). Since RoBERTa has the same model structure as BERT, we also report the performance of the English RoBERTa from (Liu et al., 2019) and the Chinese RoBERTa from (Cui et al., 2019) for the monolingual reverse dictionary. Both RoBERTa and BERT are the base version, and we use the uncased English BERT and the cased mBERT. For all models, we find the hyper-parameters based on the Acc@10 on the development sets; the models with the best development set performance are evaluated on the test sets. The data and detailed hyper-parameters for each setting will be released within the code ${}^{2}$.
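As a concrete reference for the metrics of Section 4.2, the sketch below computes Median Rank, Acc@k, Rank Variance and MRR from the 0-indexed rank at which each correct word was retrieved; the rank values are made up for illustration.

```python
# Sketch of the evaluation metrics, computed from the 0-indexed rank of the
# correct target word in each query's ranking (toy ranks, for illustration).
import statistics

ranks = [0, 0, 3, 120, 7]

median_rank = statistics.median(ranks)                  # lower is better
rank_variance = statistics.pvariance(ranks)             # lower is better

def acc_at(k):
    """Fraction of queries whose target appears in the top k."""
    return sum(r < k for r in ranks) / len(ranks)

mrr = sum(1.0 / (r + 1) for r in ranks) / len(ranks)    # higher is better
```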
We choose $k = 4$ for Chinese and $k = 5$ for the other languages; $k$ is chosen so that at least $99\%$ of the target words in the training set are covered.

# 5 Experimental Results

# 5.1 Monolingual Reverse Dictionary

Results for the English and Chinese monolingual reverse dictionary are shown in Table 3 and Table 4, respectively. "OneLook" in Table 3 is the most widely used commercial reverse dictionary system; it indexes over 1061 dictionaries, including online resources such as Wikipedia and WordNet (Miller, 1995). Therefore, its result on the unseen definition test set is ignored. "SuperSense", "RDWECI", "MS-LSTM" and "Mul-Channel" are from (Pilehvar, 2019), (Morinaga and Yamaguchi, 2018), (Kartsaklis et al., 2018) and (Zhang et al., 2019), respectively. From Table 3, RoBERTa achieves state-of-the-art performance on the human description test set. Owing to their bigger models, BERT and RoBERTa also enhance performance significantly over "Mul-Channel" on the seen definition test set. Although MS-LSTM (Kartsaklis et al., 2018) performs remarkably well on the seen test set, it fails to generalize to the unseen and description test sets. Besides, "RDWECI", "SuperSense" and "Mul-Channel" in Table 3 all used external knowledge, such as WordNet and part-of-speech tags. Combining BERT with structured knowledge should further improve the performance on all test sets; we leave this for future work.

Table 4 presents the results for the Chinese reverse dictionary. For the seen definition setting, BERT and RoBERTa substantially improve the performance. Apart from the good performance on seen definitions, BERT and RoBERTa perform well on the human description test set, which depicts their capability to capture what humans mean.

# 5.2 Cross-lingual Reverse Dictionary

In this section, we present the results for the cross-lingual reverse dictionary.
The performance comparison is shown in Table 5; mBERT substantially enhances the performance on all four test sets.
| Model | Seen | Unseen | Description |
| --- | --- | --- | --- |
| OneLook* | 0, .66/.94/.95, 200 | - | 5.5, .33/.54/.76, 332 |
| RDWECI | 121, .06/.20/.44, 420 | 170, .05/.19/.43, 420 | 16, .14/.41/.74, 306 |
| SuperSense | 378, .03/.15/.36, 462 | 465, .02/.11/.31, 454 | 115, .03/.15/.47, 396 |
| MS-LSTM* | 0, .92/.98/.99, 65 | 276, .03/.14/.37, 426 | 1000, .01/.04/.18, 404 |
| Mul-Channel | 16, .20/.44/.71, 310 | 54, .09/.29/.58, 358 | 2, .32/.64/.88, 203 |
| BERT | 0, .57/.86/.92, 240 | 18, .20/.46/.64, 418 | 1, .36/.77/.94, 94 |
| RoBERTa | 0, .57/.84/.92, 228 | 37, .10/.36/.60, 405 | 1, .43/.85/.96, 46 |
+ +Table 3: Results on the English reverse dictionary datasets. In each cell, the values are the "Median Rank", "Acc@1/10/100" and "Rank Variance". * results are from (Zhang et al., 2019). BERT and RoBERTa achieve a significant performance boost in both the description test set and the unseen test set. + +
| Model | Seen | Unseen | Description | Question |
| --- | --- | --- | --- | --- |
| BOW* | 59, .08/.28, 403 | 65, .08/.28, 411 | 40, .07/.30, 357 | 42, .10/.28, 362 |
| RDWECI* | 56, .09/.31, 423 | 83, .08/.28, 436 | 32, .09/.32, 376 | 45, .12/.32, 384 |
| Mul-Channel* | 1, .49/.78, 220 | 10, .18/.49, 310 | 5, .24/.56, 260 | 0, .50/.73, 223 |
| BERT | 0, .88/.93, 201 | 5, .27/.56, 360 | 3, .34/.67, 260 | 0, .57/.70, 325 |
| RoBERTa | 0, .88/.93, 200 | 5, .28/.56, 350 | 3, .33/.65, 230 | 0, .59/.74, 310 |
The contrast between "mBERT" and "mBERT-joint" shows that jointly training the reverse dictionary on different language pairs can improve the performance.

# 5.3 Unaligned Cross-lingual Reverse Dictionary

In this section, we present the results of the unaligned bilingual and cross-lingual reverse dictionary. Models are trained on several monolingual reverse dictionary datasets, but they are evaluated on bilingual reverse dictionary data. Take "En-Fr" as an example: models are trained on English

Table 4: Results on the Chinese reverse dictionary datasets. In each cell, the values are the "Median Rank", "Acc@1/10" and "Rank Variance". * results are from (Zhang et al., 2019). Our proposed methods enhance the performance on all test sets substantially.
| Model | En-Fr | Fr-En | En-Es | Es-En |
| --- | --- | --- | --- | --- |
| ATT* | .39/.47, .41 | .40/.50, .43 | .52/.59, .53 | .60/.68, .63 |
| mBERT | .88/.90, .89 | .88/.90, .89 | .79/.81, .80 | .88/.90, .89 |
| ATT-joint* | .64/.69, .65 | .68/.75, .71 | .69/.73, .70 | .79/.83, .80 |
| mBERT-joint | .90/.94, .92 | .90/.93, .91 | .83/.88, .85 | .93/.95, .93 |
Table 5: Results for the cross-lingual reverse dictionary. In each cell, the values are "Acc@1/10" and "MRR". * results are from (Chen et al., 2019). "En-Fr" means the target word is in English, while the description is in French. "ATT" and "mBERT" use the bilingual corpora to train the model. "ATT-joint" and "mBERT-joint" are trained on the four bilingual reverse dictionary corpora simultaneously.
| Model | En-Fr | Fr-En | En-Es | Es-En |
| --- | --- | --- | --- | --- |
| ATT-joint* | .64/.69, .65 | .68/.75, .71 | .69/.73, .70 | .79/.83, .80 |
| mBERT-joint | .90/.94, .92 | .90/.93, .91 | .83/.88, .85 | .93/.95, .93 |
| mBERT-Match | .35/.41, - | .20/.25, - | .23/.26, - | .17/.21, - |
| mBERT-Trans | .46/.55, - | .42/.51, - | .44/.49, - | .29/.38, - |
| mBERT-Unaligned | .70/.80, .74 | .55/.66, .59 | .52/.68, .58 | .41/.59, .48 |
| mBERT-joint-Unaligned | .71/.80, .74 | .56/.67, .60 | .54/.68, .59 | .41/.59, .47 |
Table 6: Results for the unaligned cross-lingual reverse dictionary. In each cell, the values are "Acc@1/10" and "MRR". * is from (Chen et al., 2019). "En-Fr" means the target word is in English, while the definition is in French. Models in the lower block do not use aligned data, while models in the upper block do.

definitions to English words and French definitions to French words, while in the evaluation phase, the model is asked to recall an English word given a French description, or vice versa.

Since previous models do not consider this setting, we build a baseline that first retrieves words in the same language as the definition through a monolingual reverse dictionary model, and then uses word translation or aligned word vectors to recall words in the other language. Take "En-Fr" for instance: we first recall the top 10 French words with the French definition; each French word is then mapped to an English word by either translation or word vectors.

Models listed in Table 6 are as follows: (1) mBERT-Match uses aligned word vectors (Lample et al., 2018b) to recall the target words in the other language; (2) mBERT-Trans uses the translation $\mathrm{API}^3$; (3) mBERT-Unaligned uses two monolingual reverse dictionary corpora to train one model; therefore, the results of "En-Fr" and "Fr-En" in Table 6 are from the same model; (4) mBERT-joint-Unaligned is trained on all monolingual corpora.

As shown in Table 6, "mBERT-Unaligned" and "mBERT-joint-Unaligned" perform much better than "mBERT-Match" and "mBERT-Trans". Therefore, it is meaningful to explore the unaligned reverse dictionary scenario. As we will show in Section 6.4, the translation method might fail to recall the target words when the word is polysemous.

From Table 6, we can see that jointly training three monolingual reverse dictionary tasks does not help to recall cross-lingual words.
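The two-step baseline can be sketched as follows; the French candidate list and the one-best translation table are illustrative stand-ins (echoing the kitchen example from the later case study), not output of the real retrieval model or translation API.

```python
# Sketch of the unaligned baseline: (1) run a monolingual reverse dictionary
# in the definition's language, (2) map each ranked candidate into the target
# language via a word translation table (or aligned word vectors).
# The candidates and the table below are illustrative stand-ins.

fr_candidates = ["cuisine", "restaurant", "piece", "cuire"]  # step 1: top French words

fr_to_en = {                                                 # step 2: 1-best translations
    "cuisine": "cookery",  # ambiguous: also "kitchen", but no context is available
    "restaurant": "restaurant",
    "piece": "room",
    "cuire": "cook",
}

def translate_candidates(candidates, table):
    """Map ranked source-language candidates into the target language,
    preserving the ranking and dropping untranslatable words."""
    return [table[w] for w in candidates if w in table]

en_candidates = translate_candidates(fr_candidates, fr_to_en)
```

Because each word is translated in isolation, a polysemous candidate can be mapped to the wrong sense, which is exactly the failure mode discussed in Section 6.4.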
Therefore, how to utilize different languages to enhance the performance of the unaligned reverse dictionary is an unsolved problem. Besides, compared with the top block of Table 6, the performance of the unaligned models lags far behind. Hence, there is a lot of room for improvement on the unaligned task.

# 6 Analysis

# 6.1 Performance for Number of Senses

Following (Zhang et al., 2019), we evaluate the accuracy of words with different numbers of senses through WordNet (Miller, 1995). The results are shown in Fig. 4. BERT and RoBERTa significantly improve the accuracy for words with both single and multiple senses, which means they can alleviate the polysemy issue.

![](images/76fffd64e1e08801579411878bc9fe3e9633aedbc45fe0837282440bde4d2c0c.jpg)
Figure 4: The Acc@10 for English words with different numbers of senses.

# 6.2 Performance for Different Numbers of Subwords

Since BERT decomposes words into subwords, we want to investigate whether the number of subwords has an impact on performance. We evaluate on the development sets; results are shown in Fig. 5. The model achieves the best accuracy for English words with one subword and Chinese words with two subwords. This might be caused by the fact that most English words and Chinese words have one subword and two subwords, respectively.

# 6.3 Unseen Definition in Unaligned Cross-lingual Reverse Dictionary

In this section, for the target words present in the bilingual test sets, we gradually remove their definitions from the monolingual training corpus. The performance curve is depicted in Fig. 6. As a reminder, the test sets require recalling target words in another language, while the deleted word and definition are in the same language.
Since the number of removed samples is less than $2\%$ of the monolingual corpus, the performance decay cannot

![](images/75fdcefdf64b983ed78a649ddcda434e0355ed790fc4ccf3bb2f2d14fc1862f2.jpg)

![](images/18fd85b1e1ce1bc0bcd0c73952e4dc6b7403991f09e4579c2b69efea5cb785b7.jpg)
Figure 5: The Acc@10 for words with different numbers of subwords.

be totally ascribed to the reduced amount of data. Based on Fig. 6, for the unaligned reverse dictionary task, we can enhance cross-lingual word retrieval by including more monolingual word definitions.

![](images/2f00e6655e075859f4a5f6d26a148f4623162379091f95913d6ad18e321a54d5.jpg)

![](images/e6e4517560b70f3727821e830b3f86e61d9da90c2e222b83a406e44856a99b8b.jpg)
Figure 6: The performance of the unaligned reverse dictionary as more definitions are deleted from the monolingual data. The solid and dotted lines are Acc@1 and Acc@10, respectively. Although the deleted definition and word are in the same language, deleting them harms the performance of cross-lingual word retrieval.

# 6.4 Case Study

For the monolingual scenario, we present an example in Table 7 to show that decomposing words into subwords helps to recall related words. Table 8 shows the comparison between "mBERT-Trans" and "mBERT-joint-Unaligned".

# 7 Conclusion

In this paper, we formulate the reverse dictionary task under the masked language model framework and use BERT to predict the target word. Since
| Definition | someone who studies secret code systems in order to obtain secret information |
| --- | --- |
| Mul-Channel | cryptographer cryptologist spymaster snoop |
| BERT | <u>cryptanalyst</u> codebreaker cryptographer coder |
| RoBERTa | codebreaker <u>cryptanalyst</u> cryptographer snooper |
Table 7: A monolingual case that displays the advantage of using subwords. Each row lists the model's top recalled words; the underlined word is the target word. The words predicted by BERT or RoBERTa are either related to "someone" (corresponding to "-analyst" or "-er") or to "code/secret" (corresponding to "code-" or "crypt-").
| Definition | El punto que está a mitad del camino entre dos extremos. (The point that is halfway between two ends) |
| --- | --- |
| Spanish | centro mitad medio punta |
| Trans. | core <u>middle</u> <u>middle</u> tip |
| Unaligned | center centre <u>middle</u> mid |
| Definition | Pièce où l'on prépare et fait cuire les aliments (Room where food is prepared and cooked) |
| French | cuisine restaurant piece cuire |
| Trans. | cookery restaurant room cook |
| Unaligned | <u>kitchen</u> cook office restaurant |
Table 8: Unaligned reverse dictionary results by translation and by the proposed unaligned reverse dictionary model. The target word is underlined; the "Trans." row shows the word translation results. The Spanish word "centro" in the upper block also has the meaning "center", but without context it gives the wrong translation, and the French word "cuisine" in the lower block makes the same error.

BERT decomposes words into subwords, the score of the target word is the sum of the scores of its constituent subwords. With the incorporation of BERT, our method achieves state-of-the-art performance on both the monolingual and cross-lingual reverse dictionary tasks. Besides, we propose a new cross-lingual reverse dictionary task without aligned data. Our proposed framework can perform the cross-lingual reverse dictionary task while being trained on monolingual corpora only. Although the performance of unaligned BERT is superior to the translation and word vector alignment methods, it still lags behind the supervised aligned reverse dictionary model. Therefore, future work should be conducted to enhance performance on the unaligned reverse dictionary.

# Acknowledgements

We would like to thank the anonymous reviewers for their insightful comments. We also thank the developers of fastNLP $^{4}$, Yunfan Shao and Yining Zheng, for developing this handy natural language processing package. This work was supported by the National Natural Science Foundation of China (No. 61751201, 62022027 and 61976056), Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01) and ZJLab.

# References

Slaven Bilac, Wataru Watanabe, Taiichi Hashimoto, Takenobu Tokunaga, and Hozumi Tanaka. 2004. Dictionary search based on the target word description. In Proceedings of NLP.
Roger Brown and David McNeill. 1966. The "tip of the tongue" phenomenon. Journal of Verbal Learning and Verbal Behavior, 5(4):325-337.
+Muhao Chen, Yingtao Tian, Haochen Chen, Kai-Wei Chang, Steven Skiena, and Carlo Zaniolo. 2019. Learning to represent bilingual dictionaries. In Proceedings of the 23rd Conference on Computational Natural Language Learning, CoNLL 2019, Hong Kong, China, November 3-4, 2019, pages 152-162. Association for Computational Linguistics. +Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In NeurIPS, pages 7057-7067. +Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pre-training with whole word masking for chinese BERT. CoRR, abs/1906.08101. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT*, pages 4171–4186. +Tim Gollins and Mark Sanderson. 2001. Improving cross language information retrieval with triangulated translation. In SIGIR, pages 90-95. +Michael A Hedderich, Andrew Yates, Dietrich Klakow, and Gerard de Melo. 2019. Using multi-sense vector embeddings for reverse dictionaries. In Proceedings of IWCS. +Felix Hill, Kyunghyun Cho, Anna Korhonen, and Yoshua Bengio. 2016. Learning to understand phrases by embedding the dictionary. TACL, 4:17-30. +Dimitri Kartsaklis, Mohammad Taher Pilehvar, and Nigel Collier. 2018. Mapping text to knowledge graph entities using multi-sense LSTMs. In Proceedings of EMNLP. +Khang Nhat Lam and Jugal Kumar Kalita. 2013. Creating reverse bilingual dictionaries. In HLT-NAACL. + +Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In ICLR. +Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018b. Word translation without parallel data. In ICLR. +Xiaonan Li, Hang Yan, Xipeng Qiu, and Xuanjing Huang. 2020. FLAT: chinese NER using flat-lattice transformer. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 6836-6842. Association for Computational Linguistics. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692. +Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NeurIPS. +George A Miller. 1995. Wordnet: a lexical database for english. Communications of the Acm, 38(11):39-41. +Yuya Morinaga and Kazunori Yamaguchi. 2018. Improvement of reverse dictionary by tuning word vectors and category inference. In Proceedings of ICIST. +Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532-1543. +Mohammad Taher Pilehvar. 2019. On the importance of distinguishing word meaning representations: A case study on reverse dictionary mapping. In *NAACL-HLT*. +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In ACL. +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In ACL. +Ryan Shaw, Anindya Datta, Debra E. VanderMeer, and Kaushik Dutta. 2013. Building a scalable database-driven reverse dictionary. TKDE, 25:528-540. +Chi Sun, Luyao Huang, and Xipeng Qiu. 2019. Utilizing BERT for aspect-based sentiment analysis via constructing auxiliary sentence. In *NAACL-HLT*, pages 380-385. +Sushrut Thorat and Varad Choudhari. 2016. Implementing a reverse dictionary, based on word definitions, using a node-graph architecture. In COLING. + +Alex Wang and Kyunghyun Cho. 2019. BERT has a mouth, and it must speak: BERT as a markov random field language model. CoRR, abs/1902.04094. 
+Hang Yan, Bocao Deng, Xiaonan Li, and Xipeng Qiu. 2019. TENER: adapting transformer encoder for named entity recognition. CoRR, abs/1911.04474. +Hang Yan, Xipeng Qiu, and Xuanjing Huang. 2020. A graph-based model for joint chinese word segmentation and dependency parsing. Trans. Assoc. Comput. Linguistics, 8:78-92. +Lei Zhang, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Qun Liu, and Maosong Sun. 2019. Multi-channel reverse dictionary model. CoRR, abs/1912.08441. +Ming Zhong, Pengfei Liu, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. 2019. Searching for effective neural extractive summarization: What works and what's next. In ACL, pages 1049-1058. \ No newline at end of file diff --git a/bertformonolingualandcrosslingualreversedictionary/images.zip b/bertformonolingualandcrosslingualreversedictionary/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ede5462b802ddf3b5b757bb713ba672c7d86410f --- /dev/null +++ b/bertformonolingualandcrosslingualreversedictionary/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3a8184d7af96eb13b59906e172afaefc0bed63e5b28709467a01aeb65a641218 +size 420176 diff --git a/bertformonolingualandcrosslingualreversedictionary/layout.json b/bertformonolingualandcrosslingualreversedictionary/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..5192b171fe5c36a95afdcf5862ef28110c8eb992 --- /dev/null +++ b/bertformonolingualandcrosslingualreversedictionary/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a38cb1457cdd7e3f79fe3d75f8dcdd188f5b53b5e280bc8bc85dedd81c58406 +size 323102 diff --git a/bertknnaddingaknnsearchcomponenttopretrainedlanguagemodelsforbetterqa/2da21ca0-2906-4cb0-a346-41ddf7d03463_content_list.json b/bertknnaddingaknnsearchcomponenttopretrainedlanguagemodelsforbetterqa/2da21ca0-2906-4cb0-a346-41ddf7d03463_content_list.json new file mode 100644 index 
0000000000000000000000000000000000000000..7a5215acea621e35b51fbce384d699514ba71a8f --- /dev/null +++ b/bertknnaddingaknnsearchcomponenttopretrainedlanguagemodelsforbetterqa/2da21ca0-2906-4cb0-a346-41ddf7d03463_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4e2ee4c787734ffc4e97e969059ffacdbcfbfd423129592322159b35721e90cd +size 49320 diff --git a/bertknnaddingaknnsearchcomponenttopretrainedlanguagemodelsforbetterqa/2da21ca0-2906-4cb0-a346-41ddf7d03463_model.json b/bertknnaddingaknnsearchcomponenttopretrainedlanguagemodelsforbetterqa/2da21ca0-2906-4cb0-a346-41ddf7d03463_model.json new file mode 100644 index 0000000000000000000000000000000000000000..9e2f6b6179ad0dffb0c0a2760a44bc94bbc11415 --- /dev/null +++ b/bertknnaddingaknnsearchcomponenttopretrainedlanguagemodelsforbetterqa/2da21ca0-2906-4cb0-a346-41ddf7d03463_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:27f10267e552b6897a0db780e056c5d4dd4a796a327552090acc9f95cbcad211 +size 59633 diff --git a/bertknnaddingaknnsearchcomponenttopretrainedlanguagemodelsforbetterqa/2da21ca0-2906-4cb0-a346-41ddf7d03463_origin.pdf b/bertknnaddingaknnsearchcomponenttopretrainedlanguagemodelsforbetterqa/2da21ca0-2906-4cb0-a346-41ddf7d03463_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..24b929691661159789f8ccf881975b2822f3258d --- /dev/null +++ b/bertknnaddingaknnsearchcomponenttopretrainedlanguagemodelsforbetterqa/2da21ca0-2906-4cb0-a346-41ddf7d03463_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3c3a37b39cd32a376e757817d2c18b87e9bfbf55b7e28e51b844241d4e99f22 +size 355798 diff --git a/bertknnaddingaknnsearchcomponenttopretrainedlanguagemodelsforbetterqa/full.md b/bertknnaddingaknnsearchcomponenttopretrainedlanguagemodelsforbetterqa/full.md new file mode 100644 index 0000000000000000000000000000000000000000..9aabebdc7a5368727b69f40b77863713c5634f3e --- /dev/null +++ 
b/bertknnaddingaknnsearchcomponenttopretrainedlanguagemodelsforbetterqa/full.md @@ -0,0 +1,249 @@

# BERT-kNN: Adding a kNN Search Component to Pretrained Language Models for Better QA

Nora Kassner, Hinrich Schütze

Center for Information and Language Processing (CIS)

LMU Munich, Germany

kassner@cis.lmu.de

# Abstract

Khandelwal et al. (2020) use a k-nearest-neighbor (kNN) component to improve language model performance. We show that this idea is beneficial for open-domain question answering (QA). To improve the recall of facts encountered during training, we combine BERT (Devlin et al., 2019) with a traditional information retrieval step (IR) and a kNN search over a large datastore of an embedded text collection. Our contributions are as follows: i) BERT-kNN outperforms BERT on cloze-style QA by large margins without any further training. ii) We show that BERT often identifies the correct response category (e.g., US city), but only kNN recovers the factually correct answer (e.g., "Miami"). iii) Compared to BERT, BERT-kNN excels for rare facts. iv) BERT-kNN can easily handle facts not covered by BERT's training set, e.g., recent events.

# 1 Introduction

Pretrained language models (PLMs) like BERT (Devlin et al., 2019), GPT-2 (Radford et al., 2019) and RoBERTa (Liu et al., 2019) have emerged as universal tools that capture not only a diverse range of linguistic knowledge, but also (as recent evidence seems to suggest) factual knowledge.

Petroni et al. (2019) introduced LAMA (LAnguage Model Analysis) to test BERT's performance on open-domain QA and thereby investigate PLMs' capacity to recall factual knowledge without finetuning. Since the PLM training objective is to predict masked tokens, question answering tasks can be reformulated as cloze questions; e.g., "Who wrote 'Ulysses'?" is reformulated as "[MASK] wrote 'Ulysses'." In this setup, Petroni et al.
(2019) show that, on QA, PLMs outperform baselines trained on automatically extracted knowledge bases (KBs).

![](images/8c48a035a99833b4c157652836b5ce7cec3e036d625e2007c606410b12fc6b30.jpg)
Figure 1: BERT-kNN interpolates BERT's prediction for question $q$ with a kNN search. The kNN search runs in BERT's embedding space, comparing the embedding of $q$ with the embeddings of a retrieved subset of a large text collection: Pairs of a word $w$ in the text collection and the BERT embedding of $w$'s context $(BERT(s))$ are stored in a key-value datastore. An IR step is used to define a relevant subset of the full datastore (yellow). $BERT(q)$ (red) is BERT's embedding of the question. The kNN search runs between $BERT(q)$ and $BERT(s)$, and the corresponding distance $d$ and word $w$ are returned (orange). Finally, BERT's predictions (blue) are interpolated with this kNN search result.

Still, given that PLMs have seen more text than humans read in a lifetime, their performance on open-domain QA seems poor. Also, many LAMA facts that PLMs do get right are not "recalled" from training, but are guesses instead (Poerner et al., 2019). To address PLMs' poor performance on facts, we introduce BERT-kNN, choosing BERT as our PLM.

BERT-kNN combines BERT's predictions with a kNN search. The kNN search runs in BERT's embedding space, comparing the embedding of the question with the embeddings of a retrieved subset of a large text collection. The text collection can be BERT's training set or any other suitable text corpus. Due to its kNN component and its resulting ability to directly access facts stated in the searched text, BERT-kNN outperforms BERT on cloze-style
| Dataset | BERT-base | BERT-large | ERNIE | KnowBert | E-BERT | BERT-kNN |
| --- | --- | --- | --- | --- | --- | --- |
| LAMA | 27.7 | 30.6 | 30.4 | 31.7 | 36.2 | 39.4 |
| LAMA-UHN | 20.6 | 23.0 | 24.7 | 24.6 | 31.1 | 34.8 |
Table 1: Mean P@1 on LAMA and LAMA-UHN on the TREx and GoogleRE subsets for BERT-base, BERT-large, ERNIE (Zhang et al., 2019), KnowBert (Peters et al., 2019), E-BERT (Poerner et al., 2019) and BERT-kNN. BERT-kNN performs best.

QA by large margins.

A schematic depiction of the model is shown in Figure 1. Specifically, we use BERT to embed each token's masked context $s$ in the text collection $(BERT(s))$. Each pair of context embedding and token is stored as a key-value pair in a datastore. At test time, for a cloze question $q$, the embedding of $q$ $(BERT(q))$ serves as the query to find the $k$ context-target pairs in the subset of the datastore that are closest. The final prediction is an interpolation of the kNN search and the PLM predictions.

We find that the kNN search over the full datastore alone does not obtain good results. Therefore, we first query a separate information retrieval (IR) index with the original question $q$ and only search over the most relevant subset of the full datastore when finding the $k$-nearest-neighbors of $BERT(q)$ in embedding space.

We find that the PLM often correctly predicts the answer category, and therefore the correct answer is often among the top $k$-nearest-neighbors. A typical example is "Albert Einstein was born in [MASK]": the PLM knows that a city is likely to follow and maybe even that it is a German city, but it fails to pick the correct city. On the other hand, the top-ranked answer in the kNN search is "Ulm", and so the correct filler for the mask can be identified.

BERT-kNN sets a new state of the art on the LAMA cloze-style QA dataset without any further training. Even though BERT-kNN is based on BERT-base, it also outperforms BERT-large. The performance gap between BERT and BERT-kNN is most pronounced on hard-to-guess facts.
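In simplified form, the datastore and the kNN interpolation can be sketched as below. The embeddings, vocabulary, and interpolation weight are toy stand-ins; in the actual system the keys are BERT context embeddings over a Wikipedia-scale collection, and an IR step first narrows the datastore before the kNN search.

```python
# Toy sketch of the kNN component: keys are context embeddings, values are the
# words observed in those contexts; the kNN distribution over words is then
# interpolated with the PLM's own distribution. All numbers are invented.
import math
from collections import defaultdict

datastore = [                       # (embedding of masked context, target word)
    ([0.9, 0.1], "Ulm"),
    ([0.8, 0.2], "Ulm"),
    ([0.1, 0.9], "Paris"),
]                                   # in the real system: an IR-selected subset

def knn_distribution(query, store, k=2):
    """Distance-weighted distribution over the words of the k nearest keys."""
    nearest = sorted((math.dist(query, key), word) for key, word in store)[:k]
    weights = [math.exp(-d) for d, _ in nearest]
    total = sum(weights)
    probs = defaultdict(float)
    for (_, word), wgt in zip(nearest, weights):
        probs[word] += wgt / total
    return probs

bert_probs = {"Munich": 0.5, "Ulm": 0.3, "Paris": 0.2}  # invented PLM prediction
knn_probs = knn_distribution([0.85, 0.15], datastore)   # [0.85, 0.15] plays BERT(q)

lam = 0.5                                               # interpolation weight (hyperparameter)
final = {w: lam * bert_probs.get(w, 0.0) + (1 - lam) * knn_probs.get(w, 0.0)
         for w in set(bert_probs) | set(knn_probs)}
best = max(final, key=final.get)
```

Even when the PLM distribution alone would prefer another city, the kNN term can pull the factually attested word to the top, mirroring the "Ulm" example above.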
Our method can also make recent events available to BERT without any need for retraining: we can simply add embedded text collections covering recent events to BERT-kNN's datastore.

The source code of our experiments is available at: https://github.com/norakassner/BERT-kNN.

# 2 Data

The LAMA dataset is a cloze-style QA dataset that allows querying PLMs for facts in a way analogous to KB queries. A cloze question is generated using a subject-relation-object triple from a KB and a templatic statement for the relation that contains variables X and Y for subject and object, e.g., "X was born in Y". The subject is substituted for X and [MASK] for Y. In all LAMA triples, Y is a single-token answer.

LAMA covers different sources: the GoogleRE1 set covers the relations "place of birth", "date of birth" and "place of death". TREx (ElSahar et al., 2018) consists of a subset of Wikidata triples covering 41 relations. ConceptNet (Li et al., 2016) combines 16 commonsense relations among words and phrases; the underlying Open Mind Common Sense corpus provides matching statements to query the language model. SQuAD (Rajpurkar et al., 2016) is a standard question answering dataset, of which LAMA contains a subset of 305 context-insensitive questions. Unlike KB queries, SQuAD uses manually reformulated cloze-style questions which are not based on a template.

We use SQuAD and an additional 305 ConceptNet queries for hyperparameter search.

Poerner et al. (2019) introduce LAMA-UHN, a subset of LAMA's TREx and GoogleRE questions from which easy-to-guess facts have been removed.

To test BERT-kNN's performance on unseen facts, we collect Wikidata triples containing TREx relations from Wikipedia pages created January-May 2020 and add them to the dataset.

# 3 Method

BERT-kNN combines BERT with a kNN search component. Our method is generally applicable to PLMs. Here, we use BERT-base-uncased (Devlin et al., 2019).
BERT is pretrained on the BookCorpus (Zhu et al., 2015) and the English Wikipedia.

| Dataset | Facts | Rel | BERT | kNN | BERT-kNN |
| --- | --- | --- | --- | --- | --- |
| GoogleRE | 5527 | 3 | 9.8 | 51.1 | 48.6 |
| TREx | 34039 | 42 | 29.1 | 34.4 | 38.7 |
| ConceptNet | 11153 | 16 | 15.6 | 4.7 | 11.6 |
| SQuAD | 305 | - | 14.1 | 25.5 | 24.9 |
| unseen | 34637 | 32 | 18.8 | 21.5 | 27.1 |

Table 2: Dataset statistics (number of facts and relations) and mean P@1 for BERT-base, kNN and their interpolation (BERT-kNN) on LAMA subsets and unseen facts. BERT results differ from Petroni et al. (2019), where a smaller vocabulary is used.

| Configuration | P@1 |
| --- | --- |
| hidden layer 12 | 36.8 |
| hidden layer 11 | 39.4 |
| hidden layer 10 | 34.7 |
| hidden layer 11 (without IR) | 26.9 |

Table 3: Mean P@1 on LAMA (TREx, GoogleRE subsets) for different context embedding strategies. Top: The context embedding is represented by the embedding of the masked token in different hidden layers. Best performance is obtained using BERT's hidden layer 11. Bottom: We show that BERT-kNN's performance without the additional IR step drops significantly. We therefore conclude that the IR step is an essential part of BERT-kNN.

![](images/fbb40081d0146fdc355d84ebff35da7bb58e03206bb6ef190ad3ab695b56c617.jpg)
Figure 2: Mean P@1, P@5, P@10 on LAMA for original BERT and BERT-kNN.

Datastore. Our text collection $C$ is the 2016-12-21 English Wikipedia. For each single-token word occurrence $w$ in a sentence in $C$, we compute the pair $(c, w)$, where $c$ is a context embedding computed by BERT. To be specific, we mask the occurrence of $w$ in the sentence and use the embedding of the masked token. We store all pairs $(c, w)$ in a key-value datastore $D$, where $c$ serves as key and $w$ as value.

Information Retrieval. We find that just using the datastore $D$ does not give good results (see the results section). We therefore use Chen et al. (2017)'s IR system to first select a small subset of $D$ using a keyword search. The IR index contains all Wikipedia articles. An article is represented as a bag of words and word bigrams. We find the top 3 relevant Wikipedia articles using TF-IDF search. For KB queries, we use the subject to query the IR index; if the subject has its own dedicated Wikipedia page, we simply use that. For non-knowledge-base queries, we use the cloze-style question $q$ ([MASK] is removed).

Inference. During testing, we first run the IR search to identify the subset $D'$ of $D$ that corresponds to the relevant Wikipedia articles. For the kNN search, $q$ is embedded in the same way as the context representations $c$ in $D$: we set $BERT(q)$ to the embedding computed by BERT for [MASK]. We then retrieve the $k = 128$ nearest-neighbors of $BERT(q)$ in $D^{\prime}$. We convert the (Euclidean) distances between $BERT(q)$ and the kNNs to a probability distribution using softmax.
Since a word $w$ can occur several times among the kNNs, we compute its final output probability as the sum over all occurrences.

In the final step, we interpolate kNN's predictions (weight 0.3) and BERT's original predictions (weight 0.7). We optimize hyperparameters on the dev set; see the supplementary material for details.

Evaluation. Following Petroni et al. (2019), we report mean precision at rank $r$ (P@r). P@r is 1 if the top $r$ predictions contain the correct answer and 0 otherwise. To compute mean precision, we first average within each relation and then across relations.

# 4 Results and Discussion

Table 1 shows that BERT-kNN outperforms BERT on LAMA, gaining about 10 precision points over both BERT-base and BERT-large. Recall that BERT-kNN uses BERT-base. The performance gap between original BERT and BERT-kNN becomes even larger when evaluating on LAMA-UHN, a subset of LAMA with hard-to-guess facts.

On LAMA, it also outperforms the entity-enhanced versions of BERT discussed in related work: ERNIE (Zhang et al., 2019), KnowBert (Peters et al., 2019) and E-BERT (Poerner et al., 2019).

Table 2 shows that BERT-kNN outperforms BERT on 3 out of 4 LAMA subsets. BERT prevails on ConceptNet; see the discussion below. Huge gains are obtained on the GoogleRE dataset. Figure 2 shows precision at 1, 5 and 10. BERT-kNN performs better than BERT in all three categories.

Table 3 compares different context embedding strategies. BERT's masked token embedding of
| Dataset | Query and True Answer | Generation |
| --- | --- | --- |
| GoogleRE | hans gefors was born in [MASK]. True: stockholm | BERT-kNN: stockholm (0.36), oslo (0.15), copenhagen (0.13); BERT: oslo (0.22), copenhagen (0.18), bergen (0.09); kNN: stockholm (1.0), lund (0.00), hans (0.00) |
| TREx | regiomontanus works in the field of [MASK]. True: mathematics | BERT-kNN: astronomy (0.20), mathematics (0.13), medicine (0.06); BERT: medicine (0.09), law (0.05), physics (0.03); kNN: astronomy (0.63), mathematics (0.36), astronomical (0.00) |
| ConceptNet | ears can [MASK] sound. True: hear | BERT-kNN: hear (0.27), detect (0.23), produce (0.06); BERT: hear (0.28), detect (0.06), produce (0.04); kNN: detect (0.77), hear (0.14), produce (0.10) |
| ConceptNet | tesla was in favour of the [MASK] current type. True: ac | BERT-kNN: alternating (0.39), electric (0.18), direct (0.11); BERT: electric (0.28), alternating (0.18), direct (0.11); kNN: alternating (0.87), direct (0.12), ac (0.00) |
Table 4: Examples of generation for BERT-base, kNN and BERT-kNN. The last column reports the top three tokens generated, together with the associated probability (in parentheses).

hidden layer 11 performs best. We also show the necessity of the IR step by running a kNN search over all Wikipedia contexts, which results in precision lower than original BERT. To run an efficient kNN search over all contexts instead of the relevant subset identified by the IR step, we use the FAISS library (Johnson et al., 2017).

Table 2 also shows that neither BERT nor kNN alone is sufficient for top performance, while the interpolation of the two yields the best results. In many cases, BERT and kNN are complementary. kNN is worse than BERT on ConceptNet, presumably because commonsense knowledge like "birds can fly" is less well represented in Wikipedia than entity triples, and also because relevant articles are harder to find by IR search. We keep the interpolation parameter constant over all datasets. Table 4 shows that kNN often has high confidence for correct answers; in such cases it is likely to dominate less confident predictions by BERT. The converse is also true (not shown). Further gains could be obtained by tuning the interpolation per dataset.

BERT-kNN answers facts unseen during pretraining better than BERT (see Table 2). BERT was not trained on 2020 events, so it must resort to guessing. Generally, we see that BERT's knowledge is largely based on guessing: it has seen Wikipedia during training but is not able to recall the knowledge recovered by kNN.

Table 4 gives examples of BERT and BERT-kNN predictions. We see that BERT predicts the answer category correctly, but it often needs help from kNN to recover the correct entity within that category.

# 5 Related work

PLMs are top performers for many tasks, including QA (Kwiatkowski et al., 2019; Alberti et al., 2019; Bosselut et al., 2019). Petroni et al.
(2019) introduced the LAMA QA task to probe PLMs' knowledge of facts typically modeled by KBs.

The basic idea of BERT-kNN is similar to Khandelwal et al. (2020)'s interpolation of a PLM and kNN for language modeling. In contrast, we address QA, and we introduce an IR step into the model that is essential for good performance. Our context representations also differ, as we use embeddings of the masked token.

Grave et al. (2016) and Merity et al. (2017), inter alia, also make use of memory to store hidden states. They focus on recent history, making it easier to copy rare vocabulary items.

DRQA (Chen et al., 2017) is an open-domain QA model that combines an IR step with a neural reading comprehension model. We use the same IR module, but our model differs significantly: DRQA does not predict masked tokens but extracts answers from text, and it uses neither PLMs nor a kNN module. Most importantly, BERT-kNN is fully unsupervised and does not require any extra training.

Some work on knowledge in PLMs focuses on injecting knowledge into BERT's encoder. ERNIE (Zhang et al., 2019) and KnowBert (Peters et al., 2019) are entity-enhanced versions of BERT. They introduce additional encoder layers that are integrated into BERT's original encoder by expensive additional pretraining. Poerner et al. (2019) inject factual entity knowledge into BERT's embeddings without pretraining by aligning Wikipedia2Vec entity vectors (Yamada et al., 2016) with BERT's wordpiece vocabulary; this approach is, however, limited to labeled entities. Our approach is neither limited to labeled entities nor does it require any pretraining. It is conceptually different from entity-enhanced versions of BERT and could potentially be combined with them for even better performance. Also, these models address language modeling, not QA.

The combination of PLMs with an IR step/kNN search has attracted a lot of recent research interest.
The following paragraphs list concurrent work.

Petroni et al. (2020) also combine BERT with an IR step to improve cloze-style QA. They use neither a kNN search nor an interpolation step, but feed the retrieved contexts into BERT's encoder. Guu et al. (2020) augment PLMs with a latent knowledge retriever. In contrast to our work, they continue the pretraining stage: they jointly optimize the masked language modeling objective and backpropagate through the retrieval step. Lewis et al. (2020) and Izacard and Grave (2020) leverage retrieved contexts for better QA using finetuned generative models; the latter differ in fusing the evidence of multiple contexts in the decoder. Joshi et al. (2020) integrate retrieved contexts into PLMs for better reading comprehension.

# 6 Conclusion

This work introduced BERT-kNN, an interpolation of BERT predictions with a kNN search for unsupervised cloze-style QA. BERT-kNN sets a new state of the art on LAMA without any further training. BERT-kNN can easily be enhanced with knowledge about new events that are not covered in the text used for pretraining BERT.

In future work, we want to exploit the kNN component for explainability: kNN predictions are based on retrieved contexts, which can be shown to users to justify an answer.

# Acknowledgements

This work has been funded by the German Federal Ministry of Education and Research (BMBF) under Grant No. 01IS18036A. The authors of this work take full responsibility for its content.

# References

Chris Alberti, Kenton Lee, and Michael Collins. 2019. A BERT baseline for the natural questions. ArXiv, abs/1901.08634.
Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for automatic knowledge graph construction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762-4779, Florence, Italy.
Association for Computational Linguistics.

Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870-1879, Vancouver, Canada. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Hady ElSahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon S. Hare, Frédérique Laforest, and Elena Simperl. 2018. T-REx: A large scale alignment of natural language with knowledge base triples. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018, Miyazaki, Japan, May 7-12, 2018.
Edouard Grave, Armand Joulin, and Nicolas Usunier. 2016. Improving neural language models with a continuous cache. ICLR, abs/1612.04426.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrieval-augmented language model pre-training. arXiv preprint arXiv:2002.08909.
Gautier Izacard and Edouard Grave. 2020. Leveraging passage retrieval with generative models for open domain question answering. arXiv preprint arXiv:2007.01282.
Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2017. Billion-scale similarity search with GPUs. arXiv preprint arXiv:1702.08734.
Mandar Joshi, Kenton Lee, Yi Luan, and Kristina Toutanova. 2020. Contextualized representations using textual encyclopedic knowledge. arXiv preprint arXiv:2004.12006.
Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020.
Generalization through memorization: Nearest neighbor language models. In International Conference on Learning Representations (ICLR).
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. arXiv preprint arXiv:2005.11401.
Xiang Li, Aynaz Taheri, Lifu Tu, and Kevin Gimpel. 2016. Commonsense knowledge base completion. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1445-1455, Berlin, Germany. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture models. In International Conference on Learning Representations (ICLR).
Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 43-54, Hong Kong, China. Association for Computational Linguistics.
Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. 2020. How context affects language models' factual predictions. In Automated Knowledge Base Construction.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473, Hong Kong, China. Association for Computational Linguistics.
Nina Poerner, Ulli Waltinger, and Hinrich Schütze. 2019. BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised QA. ArXiv, abs/1911.03681.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of
the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.
Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016. Joint learning of the embedding of words and entities for named entity disambiguation. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 250-259, Berlin, Germany. Association for Computational Linguistics.
Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced language representation with informative entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1441-1451, Florence, Italy. Association for Computational Linguistics.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In The IEEE International Conference on Computer Vision (ICCV).

# A Data

LAMA and LAMA-UHN can be downloaded from:

https://dl.fbaipublicfiles.com/LAMA/

For TREx unseen, we downloaded the latest Wikidata and Wikipedia dumps from:

```txt
https://dumps.wikimedia.org/wikidatawiki/entities.wikimedia_en/latest-all.json.bz2
```

and

```txt
https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2
```

We filter for TREx relations and only consider facts whose Wikipedia page was created after January 1st, 2020. We only consider relations with 5 questions or more. We add the additional embedded Wikipedia articles to the datastore.

# B Inference

The probability assigned to word $w$ by the kNN search is given by:

$$
p_{kNN}(w \mid q) \propto \sum_{(c_{w}, w) \in kNN} e^{-d(BERT(q), c_{w}) / l}.
$$

The final probability of BERT-kNN is the interpolation of the predictions of BERT and the kNN search:

$$
p_{BERT\text{-}kNN}(w \mid q) = \lambda p_{kNN}(w \mid q) + (1 - \lambda) p_{BERT}(w \mid q),
$$

where

- $q$: the question,
- $BERT(q)$: the embedding of $q$,
- $w$: the target word,
- $s_w$: the context of $w$,
- $c_{w} = BERT(s_w)$: the embedded context,
- $d$: the distance function,
- $l$: the distance scaling factor,
- $\lambda$: the interpolation parameter.

# C Hyperparameters

Hyperparameter optimization is done with the 305 SQuAD questions and an additional 305 randomly sampled ConceptNet questions. We remove these 305 ConceptNet questions from the test set. We run the hyperparameter search once.
We run a grid search over the following hyperparameters:

- Number of documents $N \in \{1, 2, 3, 4, 5\}$
- Interpolation $\lambda \in \{0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8\}$
- Number of NN $k \in \{64, 128, 512\}$
- Distance scaling $l \in \{5, 6, 7, 8, 9, 10, 11, 12\}$

The optimal P@1 was found for:

- Number of documents $N = 3$
- Interpolation parameter $\lambda = 0.3$
- Number of NN $k = 128$
- Distance scaling $l = 6$

# D kNN without IR

To enable a kNN search over the full datastore, we use a FAISS index (Johnson et al., 2017). We train the index using 1M randomly sampled keys and 40960 clusters. Embeddings are quantized to 64 bytes. During inference, the index looks up 64 clusters.

# E Computational Infrastructure

The creation of the datastore is computationally expensive, but only a single forward pass is needed. Datastore creation is run on a server with 128 GB memory, an Intel(R) Xeon(R) CPU E5-2630 v4 (2.2 GHz, 40 (20) cores), and 8x GeForce GTX 1080Ti. One GPU embeds 300 contexts/s. The datastore includes 900M contexts.

Evaluation is run on a server with 128 GB memory and an Intel(R) Xeon(R) CPU E5-2630 v4 (2.2 GHz, 40 (20) cores). Evaluation time for one query is 2 s, but the code can be optimized for better performance.
\ No newline at end of file diff --git a/bertknnaddingaknnsearchcomponenttopretrainedlanguagemodelsforbetterqa/images.zip b/bertknnaddingaknnsearchcomponenttopretrainedlanguagemodelsforbetterqa/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c9a538b4efe0501436063b818703a779692f6fbc --- /dev/null +++ b/bertknnaddingaknnsearchcomponenttopretrainedlanguagemodelsforbetterqa/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6ad64b608ecb787c1fe1a5b717c3d1aa73c486faa39481604f46e7c61e229d9 +size 207799 diff --git a/bertknnaddingaknnsearchcomponenttopretrainedlanguagemodelsforbetterqa/layout.json b/bertknnaddingaknnsearchcomponenttopretrainedlanguagemodelsforbetterqa/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..a0922c9cfc9efa00f82bdfbd37942136c4d342e9 --- /dev/null +++ b/bertknnaddingaknnsearchcomponenttopretrainedlanguagemodelsforbetterqa/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8feeb7d6e1b9103964e56c3e95dd41f28879881436371bbbf7813703b2c76ae1 +size 254227 diff --git a/bertmkintegratinggraphcontextualizedknowledgeintopretrainedlanguagemodels/5f0d869b-f46c-488c-99f4-acbf70570da9_content_list.json b/bertmkintegratinggraphcontextualizedknowledgeintopretrainedlanguagemodels/5f0d869b-f46c-488c-99f4-acbf70570da9_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..edadddfefa33b278cf886632d2b6964ce6eaca74 --- /dev/null +++ b/bertmkintegratinggraphcontextualizedknowledgeintopretrainedlanguagemodels/5f0d869b-f46c-488c-99f4-acbf70570da9_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca749ceff7b7ab0e680f46d715e479c81667880e9a2f00f4c7913d5ec002554d +size 71263 diff --git a/bertmkintegratinggraphcontextualizedknowledgeintopretrainedlanguagemodels/5f0d869b-f46c-488c-99f4-acbf70570da9_model.json 
b/bertmkintegratinggraphcontextualizedknowledgeintopretrainedlanguagemodels/5f0d869b-f46c-488c-99f4-acbf70570da9_model.json new file mode 100644 index 0000000000000000000000000000000000000000..a0b002f4d53db33c93036e964786dd627ac21f24 --- /dev/null +++ b/bertmkintegratinggraphcontextualizedknowledgeintopretrainedlanguagemodels/5f0d869b-f46c-488c-99f4-acbf70570da9_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c1ab7855c15888b513ddc57ce1e96da89941785d230d1579d33d311c8ee1ac9b +size 86754 diff --git a/bertmkintegratinggraphcontextualizedknowledgeintopretrainedlanguagemodels/5f0d869b-f46c-488c-99f4-acbf70570da9_origin.pdf b/bertmkintegratinggraphcontextualizedknowledgeintopretrainedlanguagemodels/5f0d869b-f46c-488c-99f4-acbf70570da9_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f9b6b037bf2021f2829f6b735f0a19fea080492e --- /dev/null +++ b/bertmkintegratinggraphcontextualizedknowledgeintopretrainedlanguagemodels/5f0d869b-f46c-488c-99f4-acbf70570da9_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d684dbf9e692a7cb1ecdf875927a72a5d91454b590827d8a9e5e03346188bcb2 +size 3266031 diff --git a/bertmkintegratinggraphcontextualizedknowledgeintopretrainedlanguagemodels/full.md b/bertmkintegratinggraphcontextualizedknowledgeintopretrainedlanguagemodels/full.md new file mode 100644 index 0000000000000000000000000000000000000000..af2347e6f695d2adb41af0917fcbe304401251cf --- /dev/null +++ b/bertmkintegratinggraphcontextualizedknowledgeintopretrainedlanguagemodels/full.md @@ -0,0 +1,326 @@ +# BERT-MK: Integrating Graph Contextualized Knowledge into Pre-trained Language Models + +Bin He1, Di Zhou1, Jinghui Xiao1, Xin Jiang1, Qun Liu1, Nicholas Jing Yuan2, Tong Xu3 + +1Huawei Noah's Ark Lab + +$^{2}$ Huawei Cloud & AI + +$^{3}$ School of Computer Science, University of Science and Technology of China {hebin.nlp, zhoudi7, xiaojinghui4, jiang.xin, qun.liu, nicholas.yuan}@huawei.com, 
tongxu@ustc.edu.cn

# Abstract

Complex node interactions are common in knowledge graphs (KGs), and these interactions can be considered contextualized knowledge that exists in the topological structure of KGs. Traditional knowledge representation learning (KRL) methods usually treat a single triple as a training unit, neglecting this graph contextualized knowledge. To utilize this unexploited graph-level knowledge, we propose an approach to model subgraphs in a medical KG. The learned knowledge is then integrated with a pre-trained language model for knowledge generalization. Experimental results demonstrate that our model achieves state-of-the-art performance on several medical NLP tasks, and the improvement over MedERNIE indicates that graph contextualized knowledge is beneficial.

# 1 Introduction

In 1954, Harris (1954) proposed the distributional hypothesis that words occurring in the same contexts tend to have similar meanings. Firth (1957) explained the context-dependent nature of meaning in linguistics with his famous quotation "you shall know a word by the company it keeps". Although the distributional hypothesis was proposed for language, if we look at knowledge graphs from its perspective, we find that a similar hypothesis holds there as well. We call it the KG distributional hypothesis: you shall know an entity by the relationships it involves.

Given this hypothesis, contextualized information in language models can be mapped to knowledge graphs, which we call "graph contextualized knowledge". Figure 1 illustrates a knowledge subgraph that includes several medical entities. In this figure, four incoming and four outgoing neighboring nodes (hereinafter called "in-entities" and "out-entities") of the node "Bacterial pneumonia" are linked

![](images/60353a962641409088aa239a6a88783b1700e24d1fc1dd74942165c3a2e8883.jpg)
Figure 1: A subgraph extracted from a medical knowledge graph.
The rectangles represent entities and directed arrows denote relations.

by various relation paths. These linked nodes and correlations can be seen as "graph contextualized information" of the entity node "Bacterial pneumonia". In this study, we will explore how to integrate graph contextualized knowledge into pre-trained language models.

Pre-trained language models learn contextualized word representations on large-scale text corpora through self-supervised learning methods, and obtain new state-of-the-art (SOTA) results on most downstream tasks (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019). This has gradually become a new paradigm for natural language processing research. Recently, several knowledge-enhanced pre-trained language models have been proposed, such as ERNIE-Baidu (Sun et al., 2019), ERNIE-Tsinghua (Zhang et al., 2019a), WKLM (Xiong et al., 2019) and K-ADAPTER (Wang et al., 2020).

In this study, since we need to learn graph contextualized knowledge in a large-scale medical knowledge graph, ERNIE-Tsinghua (hereinafter called "ERNIE") is chosen as our backbone model. In ERNIE, entity embeddings are learned by TransE (Bordes et al., 2013), which is a popular transition-based method for knowledge representation learning (KRL). However, TransE cannot deal with the modeling of complex relations (Lin et al., 2018), such as 1-to-n, n-to-1 and n-to-n relations. This shortcoming is amplified in the medical knowledge graph, in which many entities have a large number of related neighbors.

Inspired by previous work (Veličković et al., 2018; Nathani et al., 2019), we propose an approach to learn knowledge from subgraphs and inject graph contextualized knowledge into the pre-trained language model.
We call this model BERT-MK (a BERT-based language model integrated with Medical Knowledge). Our contributions are as follows:

- We propose a novel knowledge-enhanced pretrained language model BERT-MK for medical NLP tasks, which integrates graph contextualized knowledge learned from the medical KG.
- Experimental results show that BERT-MK achieves better performance than previous state-of-the-art biomedical pre-trained language models on entity typing and relation classification tasks.

# 2 Methodology

Our model consists of two modules: a knowledge learning module and a language model pretraining module. The first module is used to learn the graph contextualized knowledge existing in KGs, and the second integrates the learned knowledge into the language model for knowledge generalization. The details are described in the following subsections.

# 2.1 Learning Graph Contextualized Knowledge

We denote a knowledge graph as $\mathcal{G} = (\mathcal{E},\mathcal{R})$, where $\mathcal{E}$ represents the entity set and $\mathcal{R}$ is the set of relations between entity pairs. A triple in $\mathcal{G}$ is formalized as $(e_s,r,e_o)$, where $e_{s}$ is a subjective entity, $e_o$ is an objective entity, and $r$ is the relation between $e_{s}$ and $e_{o}$. In Figure 1, two entities (rectangles) and a relation (arrow) between them construct a knowledge triple, for example, (Bacterial pneumonia, causative agent of, Bacteria).

# 2.1.1 Subgraph Conversion

To enrich the contextualized information in knowledge representations, we extract subgraphs from the knowledge graph as modeling objectives; the generation process is described in Algorithm 1. For a given entity, its two 1-hop in-entities

Algorithm 1: Subgraph generation.
Input: knowledge graph $\mathcal{G} = (\mathcal{E},\mathcal{R},\mathcal{T})$, duplicate number $M$
Output: subgraph set $S$
1: Initialize $S = []$
2: foreach $e\in \mathcal{E}$ do
3: $d_{e}^{\mathrm{in}} =$ calculate_indegree$(\mathcal{G},e)$
4: $d_{e}^{\mathrm{out}} =$ calculate_outdegree$(\mathcal{G},e)$
5: $T_{e}^{\mathrm{in}} =$ extract_in_triples$(\mathcal{G},e)$
6: $T_{e}^{\mathrm{out}} =$ extract_out_triples$(\mathcal{G},e)$
7: $i = 0$
8: while $i < (d_e^{\mathrm{in}} + d_e^{\mathrm{out}}) * M / 2$ do
9: $T_{i}^{\mathrm{in}} =$ random_sample$(T_{e}^{\mathrm{in}},2)$
10: $T_{i}^{\mathrm{out}} =$ random_sample$(T_{e}^{\mathrm{out}},2)$
11: subgraph $= T_{i}^{\mathrm{in}} + T_{i}^{\mathrm{out}}$
12: $S = S +$ subgraph
13: $i = i + 1$
14: end
15: end
16: return $S$

![](images/da002d904b4170e063d6f0ceb9e7dc373115d5cc4a6fa4450a33dad3fbb14271.jpg)
(a)

![](images/38a82343fac42b4a7b2251045568c37b2670169dcba9cf83ba817e897a669d04.jpg)
(b)

![](images/c5b499c408a3c99437db5c16ae7842926b0a1095776e3cab8928844fa14600e4.jpg)
Node sequence

![](images/75ff4faf2778b6be973483d28e5aa722abd72388a9d0d2ee7a509b95dfacf2a9.jpg)
Node position indexes

![](images/23cee9224636f8d208820276ac71b38a83059bb7565305a31179b4484050909d.jpg)
(c)

Figure 2: Converting a subgraph extracted from the knowledge graph into the input of the model. (a) $e$ refers to an entity and $r$ to a relation. (b) Relations are transformed into sequence nodes, and all nodes are assigned a numeric index. (c) Each row in the matrix of node position indexes represents the index list of a triple in (b); the adjacency matrix indicates the connectivity (the red points equal 1 and the white points 0) between the nodes in (b).

and out-entities are sampled to generate a subgraph1, and we repeat the generation process $M$ times for each entity. Figure 2(a) shows an instance of the knowledge subgraph, which consists of four 1-hop and four 2-hop relations. In this study, we propose a Transformer-based (Vaswani et al., 2017) module to model subgraphs.
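The per-entity sampling loop of Algorithm 1 can be sketched in a few lines of Python. The triple format and the `min(2, …)` guard for entities with fewer than two in- or out-triples are illustrative assumptions, not the authors' code:

```python
import random

def generate_subgraphs(in_triples, out_triples, M=2):
    """Sketch of Algorithm 1 for one entity: repeatedly sample two incoming and
    two outgoing triples, producing (d_in + d_out) * M / 2 subgraphs."""
    d_in, d_out = len(in_triples), len(out_triples)
    subgraphs = []
    for _ in range(int((d_in + d_out) * M / 2)):
        t_in = random.sample(in_triples, min(2, d_in))    # two in-triples
        t_out = random.sample(out_triples, min(2, d_out))  # two out-triples
        subgraphs.append(t_in + t_out)
    return subgraphs

# Toy triples around the entity "bacterial_pneumonia" (four in, four out,
# mirroring Figure 2(a)).
in_triples = [("e%d" % i, "r_in", "bacterial_pneumonia") for i in range(4)]
out_triples = [("bacterial_pneumonia", "r_out", "e%d" % i) for i in range(4, 8)]
subs = generate_subgraphs(in_triples, out_triples, M=2)
```

With four in-triples, four out-triples and $M = 2$, this yields eight subgraphs of four triples each, matching the shape of the subgraph in Figure 2(a).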
![](images/5087c19a8e72e395085e0fc330fc8ed2edf1d1e787a6a62eccea0c46440ef2f5.jpg) +Figure 3: The model architecture of BERT-MK. The left part is the pre-trained language model, in which entity information learned from the knowledge graph is incorporated. The right part is the GCKE module. The subgraph in Figure 2 is utilized to describe the learning process. $e_1$ , $e_1^{(1)}$ and $e_1^O$ are the embeddings of the input node, the updated node and the output node, respectively. + +Relations are learned as nodes equivalent to entities in our model, and the relation conversion process is illustrated in Figure 2(b). Therefore, knowledge graph $\mathcal{G}$ can be redefined as $G = (V,E)$ , where $V$ represents the nodes in $G$ , involving entities in $\mathcal{E}$ and relations in $\mathcal{R}$ , and $E$ denotes the directed edges among the nodes in $V$ . + +Then, subgraphs are converted into sequences of nodes. The conversion result of a subgraph is shown in Figure 2(c), including a node sequence, a node position index matrix and an adjacency matrix. Each row of the node position index matrix corresponds to a triple in the subgraph. For example, the triple $(e_1,r_1,e)$ is represented as the first row $(0,1,4)$ in this matrix. In the adjacency matrix, the element $\mathbf{A}_{ij}$ equals 1 if the node $i$ is connected to node $j$ in Figure 2(b), and 0 otherwise.
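One way to realize the subgraph-to-input conversion of Figure 2 is sketched below. The function name is illustrative, and the assumption that each triple (s, r, o) contributes the directed edges s → r and r → o (relations acting as intermediate nodes) is our reading of Figure 2(b), not code from the paper.

```python
def convert_subgraph(triples):
    """Sketch: build the node sequence, position index matrix P and
    adjacency matrix A for a list of (s, r, o) triples."""
    nodes, index = [], {}

    def node_id(x):
        # assign each entity/relation a numeric index on first sight
        if x not in index:
            index[x] = len(nodes)
            nodes.append(x)
        return index[x]

    # one row of (s, r, o) indices per triple
    P = [(node_id(s), node_id(r), node_id(o)) for s, r, o in triples]
    N = len(nodes)
    A = [[0] * N for _ in range(N)]
    for s_i, r_i, o_i in P:
        # a triple contributes the directed edges s -> r and r -> o
        A[s_i][r_i] = A[r_i][o_i] = 1
    return nodes, P, A
```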
Entity embeddings and relation embeddings are integrated in the same matrix $\mathbf{V}$ , where $\mathbf{V}\in \mathbb{R}^{(n_e + n_r)\times d}$ , $n_e$ is the entity number in $\mathcal{E}$ and $n_r$ is the relation type number in $\mathcal{R}$ . The node embeddings $\mathbf{X} = \{\mathbf{x}_1,\dots ,\mathbf{x}_N\}$ can be generated by looking up node sequence $\{x_{1},\ldots ,x_{N}\}$ in embedding matrix $\mathbf{V}$ . $\mathbf{X}$ , $\mathbf{P}$ and $\mathbf{A}$ constitute the input of the graph contextualized knowledge embedding learning module, called GCKE, as shown in Figure 3. + +The inputs are fed into a Transformer-based model to encode the node information. + +$$ +\mathbf{x}_i^{\prime} = \bigoplus_{h = 1}^{H}\sum_{j = 1}^{N}\alpha_{ij}^{h}\cdot \left(\mathbf{x}_j\cdot \mathbf{W}_{\mathrm{v}}^{h}\right), \tag{1} +$$ + +$$ +\alpha_{ij}^{h} = \frac{\exp \left(a_{ij}^{h}\right)}{\sqrt{d / H}\cdot \sum_{n = 1}^{N}\exp \left(a_{in}^{h}\right)}, \tag{2} +$$ + +$$ +a_{ij}^{h} = \operatorname{Masking}\left(\left(\mathbf{x}_i\cdot \mathbf{W}_{\mathrm{q}}^{h}\right)\cdot \left(\mathbf{x}_j\cdot \mathbf{W}_{\mathrm{k}}^{h}\right)^{\mathrm{T}}, \mathbf{A}_{ji} + \mathbf{I}_{ij}\right), \tag{3} +$$ + +where $\mathbf{x}_i^{\prime}$ is the new embedding for node $x_{i}$ , $\bigoplus$ denotes the concatenation of the $H$ attention heads in this layer, and $\alpha_{ij}^{h}$ and $\mathbf{W}_{\mathrm{v}}^{h}$ are the attention weight of node $x_{j}$ and a linear transformation of node embedding $\mathbf{x}_j$ in the $h^{\mathrm{th}}$ attention head, respectively. The Masking function in Equation 3 restricts the contextualized dependency among the input nodes: only a node's in-neighbors and the node itself are involved in updating the node embedding. The subfigure in the lower right corner of Figure 3 shows the contextualized dependencies.
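A single attention head of Equations 1-3 can be sketched in NumPy as below. This is an illustrative, single-head implementation: it applies the standard scaled softmax, and the mask keeps only a node's in-neighbors ($\mathbf{A}_{ji} = 1$) and the node itself, as described above.

```python
import numpy as np

def masked_attention_head(X, A, Wq, Wk, Wv):
    """One attention head of Eqs. (1)-(3), as an illustrative sketch.

    X: (N, d) node embeddings; A: (N, N) adjacency matrix of the subgraph.
    A node attends only to its in-neighbors (A[j, i] == 1) and to itself,
    which is what the Masking function of Eq. (3) enforces.
    """
    N, d_k = X.shape[0], Wq.shape[1]
    scores = (X @ Wq) @ (X @ Wk).T / np.sqrt(d_k)   # scaled dot-product
    mask = (A.T + np.eye(N)) > 0                     # A_ji + I_ij from Eq. (3)
    scores = np.where(mask, scores, -np.inf)         # Masking(...)
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)        # softmax form of Eq. (2)
    return alpha @ (X @ Wv)                          # Eq. (1), single head
```

With $H$ heads, each head's output would be computed this way with its own $\mathbf{W}_{\mathrm{q}}^{h}$, $\mathbf{W}_{\mathrm{k}}^{h}$, $\mathbf{W}_{\mathrm{v}}^{h}$ and the results concatenated.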
Similar to $\mathbf{W}_{\mathrm{v}}^{h}$ , $\mathbf{W}_{\mathrm{q}}^{h}$ and $\mathbf{W}_{\mathrm{k}}^{h}$ are independent linear transformations of node embeddings. Then, the updated node representations are fed into the feed-forward layer for further encoding. The aforementioned Transformer blocks are stacked $L$ times, and the output hidden states can be formalized as + +$$ +\mathbf{X}^{O} = \{\mathbf{x}_1^{O},\dots ,\mathbf{x}_N^{O}\}. \tag{4} +$$ + +Then, the node position index matrix $\mathbf{P}$ is utilized to restore triple representations: + +$$ +\mathbf{T} = \operatorname{TripleRestoration}(\mathbf{X}^{O},\mathbf{P}), \tag{5} +$$ + +where $\mathbf{P}_k = (e_s^k,r^k,e_o^k)$ is the position index of a valid knowledge triple, and $\mathbf{T}_k = (\mathbf{x}_{e_s^k}^O,\mathbf{x}_{r^k}^O,\mathbf{x}_{e_o^k}^O)$ is the representation of this triple. The subfigure in the upper right corner of Figure 3 shows the triple restoration process. + +In this study, the translation-based scoring function (Han et al., 2018) is adopted to measure the energy of a knowledge triple. The node embeddings are learned by minimizing a margin-based loss function on the training data: + +$$ +\mathcal{L} = \sum_{\mathbf{t}\in \mathbf{T}}\max \left\{d(\mathbf{t}) - d(f(\mathbf{t})) + \gamma ,0\right\}, \tag{6} +$$ + +where $\mathbf{t} = (\mathbf{t}_s,\mathbf{t}_r,\mathbf{t}_o)$ , $d(\mathbf{t}) = \|\mathbf{t}_s + \mathbf{t}_r - \mathbf{t}_o\|$ , $\gamma > 0$ is a margin hyperparameter, and $f(\mathbf{t})$ is an entity replacement operation in which the head entity or the tail entity of a triple is replaced such that the resulting triple is invalid in the KG. + +# 2.2 Integrating Knowledge into the Language Model + +Given a comprehensive medical knowledge graph, graph contextualized knowledge representations can be learned using the GCKE module.
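The margin-based objective in Equation 6 can be sketched as follows. This is an illustrative sketch assuming an L1-norm translation-based energy (as in TransE); the helper names are not from the paper's code.

```python
import numpy as np

def margin_loss(triples, corrupted, gamma=1.0):
    """Sketch of the margin-based loss in Eq. (6).

    Each element of `triples` / `corrupted` is a (t_s, t_r, t_o) tuple of
    embedding vectors; `corrupted` holds the entity-replaced triples f(t).
    """
    def d(t):
        # translation-based energy d(t) = ||t_s + t_r - t_o|| (L1 assumed)
        t_s, t_r, t_o = t
        return np.linalg.norm(t_s + t_r - t_o, ord=1)

    return sum(max(d(t) - d(ft) + gamma, 0.0)
               for t, ft in zip(triples, corrupted))
```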
We follow the language model architecture proposed in (Zhang et al., 2019a), and utilize graph contextualized knowledge to enhance medical language representations. The pre-training process is shown in the left part of Figure 3. The Transformer block encodes word contextualized representations, while the aggregator block implements the fusion of knowledge and language information. + +According to the characteristics of medical NLP tasks, a domain-specific fine-tuning procedure is designed. Similar to BioBERT (Lee et al., 2019), the symbols “@” and “$” are used to mark entity boundaries, which indicate the entity positions in a sample and distinguish different relation samples sharing the same sentence. For example, the input sequence for the relation classification task can be + +Table 1: Statistics of UMLS.
| # Entities | # Relations | # Triples |
| --- | --- | --- |
| 2,842,735 | 874 | 13,555,037 |

| In-degree | Out-degree | Median degree |
| --- | --- | --- |
| 5.05 | 5.05 | 4 |
modified into “[CLS] pain control was initiated with morphine but was then changed to @ demerol $, which gave the patient better relief of @ his epigastric pain $”. In the entity typing task, the entity mention and its context are critical to predict the entity type, so more localized features of the entity mention will benefit this prediction process. In our experiments, the entity start symbol is selected to represent an entity typing sample. + +# 3 Experiments + +# 3.1 Dataset + +# 3.1.1 Medical Knowledge Graph + +The Unified Medical Language System (UMLS) (Bodenreider, 2004) is a comprehensive knowledge base in the biomedical domain, which contains large-scale concept names and relations among them. The metathesaurus in UMLS involves various terminology systems and comprises about 14 million terms covering 25 different languages. In this study, a subset of this knowledge base is extracted to construct the medical knowledge graph. Non-English and long terms are filtered out, and the final statistics are shown in Table 1. + +# 3.1.2 Corpus for Pre-training + +To ensure that sufficient medical knowledge can be integrated into the language model, PubMed abstracts $^2$ and PubMed Central full-text papers $^3$ are chosen as the pre-training corpus; these are open-access collections of biomedical and life science journal literature. Since sentences in different paragraphs may not have good context coherence, paragraphs are selected as the document unit for next sentence prediction. The Natural Language Toolkit (NLTK) $^4$ is utilized to split the sentences within a paragraph, and sentences with fewer than 5 words are discarded. As a result, a large corpus containing 9.9B tokens is obtained for language model pre-training. + +2 https://www.ncbi.nlm.nih.gov/pubmed/. +3 https://www.ncbi.nlm.nih.gov/pmc/. +4 https://www.nltk.org/. + +Table 2: Statistics of the datasets.
Most of these datasets do not provide a standard train-valid-test partition, so we adopt conventional partitioning schemes for model training and evaluation.
| Task | Dataset | # Train | # Valid | # Test |
| --- | --- | --- | --- | --- |
| Entity Typing | 2010 i2b2/VA (Uzuner et al., 2011) | 16,519 | - | 31,161 |
| Entity Typing | JNLPBA (Kim et al., 2004) | 51,301 | - | 8,653 |
| Entity Typing | BC5CDR (Li et al., 2016) | 9,385 | 9,593 | 9,809 |
| Relation Classification | 2010 i2b2/VA (Uzuner et al., 2011) | 10,233 | - | 19,115 |
| Relation Classification | GAD (Bravo et al., 2015) | 5,339 | - | - |
| Relation Classification | EU-ADR (Van Mulligen et al., 2012) | 355 | - | - |
In our model, medical terms appearing in the corpus need to be aligned to the entities in the UMLS metathesaurus before pre-training. To ensure good coverage of the entities identified in the metathesaurus, the forward maximum matching (FMM) algorithm is used to extract term spans from the aforementioned corpus, and spans shorter than 5 characters are filtered out. Then, the BERT vocabulary is used to tokenize the input text into word pieces, and each medical entity is aligned with the first subword of the identified term. + +# 3.1.3 Downstream Tasks + +In this study, entity typing and relation classification tasks in the medical domain are used to evaluate the models. + +Entity Typing Given a sentence with an entity mention tagged, this task is to identify the semantic type of this entity mention. For example, the type "medical problem" is used to label the entity mention "asystole" in the sentence "he had a differential diagnosis of $\langle \mathsf{e}\rangle$ asystole $\langle / \mathsf{e} \rangle$ ". To the best of our knowledge, there are no publicly available entity typing datasets in the medical domain. Therefore, three entity typing datasets are constructed from the corresponding medical named entity recognition datasets. Entity mentions and entity types are annotated in these datasets; in this study, entity mentions are considered as input while entity types are the output labels. Table 2 shows the statistics of the datasets for the entity typing task. Datasets can be downloaded from here$^5$ . + +Relation Classification Given two entities within one sentence, this task aims to determine the relation type between the entities.
For example, in the sentence "pain control was initiated with morphine but was then changed to $\langle \mathbf{e}_1\rangle$ demerol $\langle / \mathbf{e}_1\rangle$ , which gave the patient better relief of $\langle \mathbf{e}_2\rangle$ his epigastric pain $\langle / \mathbf{e}_2\rangle$ ", the relation type between the two entities is TrIP (Treatment Improves medical Problem). In this study, three relation classification datasets are utilized to evaluate our models, and the statistics of these datasets are shown in Table 2. Datasets can be downloaded from here$^6$ . + +# 3.2 Baselines + +In addition to the state-of-the-art models on these datasets, we have also added the popular BERT-Base model and another two models pre-trained on biomedical literature for further comparison. + +BERT-Base (Devlin et al., 2019) This is the original bidirectional pre-trained language model proposed by Google, which achieves state-of-the-art performance on a wide range of NLP tasks. + +BioBERT (Lee et al., 2019) This model follows the same model architecture as the BERT-Base model, but uses the PubMed abstracts and PubMed Central full-text articles (about 18B tokens) to further pre-train BERT-Base. + +SCIBERT (Beltagy et al., 2019) In this model, a new wordpiece vocabulary is built based on a large scientific corpus (about 3.2B tokens). Then, a new BERT-based model is trained from scratch using this scientific vocabulary and the scientific corpus. Since a large portion of the scientific corpus consists of biomedical articles, this scientific vocabulary can also be regarded as a biomedical vocabulary, and helps improve the performance of downstream tasks in the biomedical domain. + +# 3.3 Implementation Details + +# 3.3.1 Graph Contextualized Knowledge + +Firstly, UMLS triples are fed into the TransE model to obtain a basic knowledge representation. We + +Table 3: Experimental results on the entity typing and relation classification tasks.
Accuracy (Acc), Precision, Recall, and F1 scores are used to evaluate the model performance. The results reported in previous work are underlined. E-SVM is short for Ensemble SVM (Bhasuran and Natarajan, 2018), which achieves SOTA performance on the GAD dataset. CNN-M stands for CNN using multi-pooling (He et al., 2019), which is the SOTA model on the 2010 i2b2/VA dataset.
| Task | Dataset | Metrics | E-SVM | CNN-M | BERT-Base | BioBERT | SCIBERT | BERT-MK |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Entity Typing | 2010 i2b2/VA | Acc | - | - | 96.76 | 97.43 | 97.74 | 97.70 |
| Entity Typing | JNLPBA | Acc | - | - | 94.12 | 94.37 | 94.60 | 94.55 |
| Entity Typing | BC5CDR | Acc | - | - | 98.78 | 99.27 | 99.38 | 99.54 |
| Relation Classification | 2010 i2b2/VA | P | - | 73.1 | 72.6 | 76.1 | 74.8 | 77.6 |
| Relation Classification | 2010 i2b2/VA | R | - | 66.7 | 65.7 | 71.3 | 71.6 | 72.0 |
| Relation Classification | 2010 i2b2/VA | F | - | 69.7 | 69.2 | 73.6 | 73.1 | 74.7 |
| Relation Classification | GAD | P | 79.21 | - | 74.28 | 76.43 | 77.47 | 81.67 |
| Relation Classification | GAD | R | 89.25 | - | 85.11 | 87.65 | 85.94 | 92.79 |
| Relation Classification | GAD | F | 83.93 | - | 79.33 | 81.66 | 81.45 | 86.87 |
| Relation Classification | EU-ADR | P | - | - | 75.45 | 81.05 | 78.42 | 84.43 |
| Relation Classification | EU-ADR | R | - | - | 96.55 | 93.90 | 90.09 | 91.17 |
| Relation Classification | EU-ADR | F | - | - | 84.71 | 87.00 | 85.51 | 87.49 |
use the OpenKE toolkit (Han et al., 2018) to learn entity and relation embeddings. The knowledge embedding dimension is set to 100, while the number of training epochs is set to 10000. + +Following the initialization method used in (Nguyen et al., 2018; Nathani et al., 2019), the embeddings produced by TransE are utilized to initialize the knowledge representations of the GCKE module. We set the layer number to 4, and each layer contains 4 heads. Since the median degree of entities in UMLS is 4 (shown in Table 1), we set the total number of sampled in-entities and out-entities to 4, so each subgraph contains four 1-hop and four 2-hop relations. The GCKE module runs 1200 epochs on a single NVIDIA Tesla V100 (32GB) GPU to learn graph contextualized knowledge. The batch size is set to 50000. + +# 3.3.2 Pre-training + +In this study, two pre-trained language models are trained. The first one is MedERNIE, a medical ERNIE model trained on the UMLS triples and the PubMed corpus, inheriting the same model hyperparameters used in (Zhang et al., 2019a). Besides, the entity embeddings learned by the GCKE module are integrated into the language model to train the BERT-MK model. In our work, we match the number of pre-training epochs used by BioBERT, which uses the same pre-training corpus as ours, and finetune the BERT-Base model on the PubMed corpus for one epoch. + +# 3.3.3 Finetune + +As shown in Table 2, there is no standard valid or test set in some datasets. For datasets containing a standard test set, if no standard valid set is provided, we divide the training set into new train-valid sets by 4:1. We perform each experiment 5 times under specific experimental settings with different random seeds. Besides, the 10-fold cross-validation method is used to evaluate the model performance for the datasets without a standard test set.
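The data partitioning described above can be sketched as follows; the helper names are illustrative, not the authors' code.

```python
import random

def split_train_valid(samples, ratio=4, seed=0):
    """Sketch of the 4:1 train-valid split used when no valid set is provided."""
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    cut = len(shuffled) * ratio // (ratio + 1)
    return shuffled[:cut], shuffled[cut:]

def k_fold_indices(n, k=10):
    """Index lists for k-fold cross-validation (datasets without a test set)."""
    return [list(range(i, n, k)) for i in range(k)]
```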
According to the maximum sequence length of the sentences in each dataset, the input sequence lengths for 2010 i2b2/VA (Uzuner et al., 2011), JNLPBA (Kim et al., 2004), BC5CDR (Li et al., 2016), GAD (Bravo et al., 2015) and EU-ADR (Van Mulligen et al., 2012) are set to 390, 280, 280, 130 and 220, respectively. The initial learning rate is set to 2e-5. + +# 3.4 Results + +# 3.4.1 Entity Typing + +Table 3 presents the experimental results on the entity typing and relation classification tasks. For the entity typing tasks, all these pre-trained language models achieve high accuracy, indicating that the type of a medical entity is not as ambiguous as that in the general domain. BERT-MK outperforms BERT-Base and BioBERT on three datasets, and is competitive with SCIBERT. Without using external knowledge in the pre-trained language model, SCIBERT achieves results comparable to BERT-MK, which proves that a domain-specific vocabulary is critical to the feature encoding of inputs. Long tokens are relatively common in the medical domain, and these tokens will be split into short pieces when a domain-independent vocabulary is used, which causes an overgeneralization of lexical features. Therefore, a medical vocabulary generated from the PubMed corpus can be introduced into BERT-MK in follow-up work. + +# 3.4.2 Relation Classification + +On the relation classification tasks, BERT-Base does not perform as well as the other models, which indicates that pre-trained language models require a domain adaptation process when used in restricted domains. Compared with BioBERT, which utilizes the same domain-specific corpus as ours for domain adaptation, BERT-MK improves the F score on 2010 i2b2/VA, GAD and EU-ADR by $1.1\%$ , $5.21\%$ and $0.49\%$ , respectively, which demonstrates that medical knowledge has indeed played a positive role in the identification of medical relations.
The following example provides a brief explanation of why medical knowledge improves the model performance on the relation classification tasks. "On postoperative day number three, patient went into $\langle \mathbf{e}_1\rangle$ atrial fibrillation $\langle / \mathbf{e}_1\rangle$ , which was treated appropriately with $\langle \mathbf{e}_2\rangle$ metoprolol $\langle / \mathbf{e}_2\rangle$ and digoxin and converted back to sinus rhythm" is a relation sample from the 2010 i2b2/VA dataset, and the relation label is TrIP. Meanwhile, the above entity pair can be aligned to a knowledge triple (atrial fibrillation, may be treated by, metoprolol) in the medical knowledge graph. Obviously, this knowledge information is advantageous for identifying the relation type of the aforementioned example. + +# 3.5 Discussion + +# 3.5.1 TransE vs. GCKE + +In order to explicitly analyze the improvement effect of the GCKE module on pre-trained language models, we compare MedERNIE (TransE-based) and BERT-MK (GCKE-based) on two relation classification datasets. Table 4 presents the results of these two models. As we can see, by integrating graph contextualized knowledge into the pre-trained language model, the F score increases by $0.9\%$ and $0.64\%$ on these two relation classification datasets, respectively. + +In Figure 4, as the amount of pre-training data increases, BERT-MK always outperforms MedERNIE on the 2010 i2b2/VA relation dataset, and + +Table 4: TransE vs. GCKE on the 2010 i2b2/VA relation and GAD datasets.
| Dataset | MedERNIE (P / R / F) | BERT-MK (P / R / F) |
| --- | --- | --- |
| 2010 i2b2/VA | 76.6 / 71.1 / 73.8 | 77.6 / 72.0 / 74.7 |
| GAD | 81.28 / 91.86 / 86.23 | 81.67 / 92.79 / 86.87 |
![](images/0b54367a32ab65cddae3e85c6e9227120fbe963911a03de518b20c7392e91d35.jpg) + +![](images/5639b215b5cae1851627ac66e949a9d1afe97e3521298a2c628838d641283b92.jpg) +Figure 4: Model performance comparison with an increasing amount of pre-training data. The x-axis represents the proportion of the medical data used for pre-training. 0 means no medical data is utilized, so BERT-Base is used as the initialization for model finetuning. 100 indicates the model is pre-trained on the medical corpus for one epoch. BioBERT pre-trains on the PubMed corpus for one epoch, and is drawn with dashed lines in the figure as a comparable baseline. + +the performance gap has an increasing trend. However, on the GAD dataset, the performance curves of BERT-MK and MedERNIE are intertwined. We link the entities in each relation sample to the medical KG, and find that some entity pairs have a connected relationship in the KG. Statistical analysis of 2-hop neighbor relationships between these entity pairs shows that there are 136 such cases in the 2010 i2b2/VA dataset, while only 1 in GAD. The second case shown in Table 5 gives an example of the observation described above. The triples (CAD, member of, Other ischemic heart disease) and (Other ischemic heart disease, has member, Angina symptom) are in the medical KG, which indicates that the entity pair cad and angina symptoms in the relation sample has a 2-hop neighbor relationship in the KG. GCKE learns these 2-hop neighbor relationships in 2010 i2b2/VA and produces an improvement for BERT-MK. However, due to the characteristics of + +Table 5: Case study on the 2010 i2b2/VA relation dataset. The bold text spans in the two cases are entities. In the first case, the corresponding triple can help identify the relationship between the entity pair in this relation sample. NPP, no relation between two medical problems; PIP, medical problem indicates medical problem. MI, myocardial infarction; CAD, coronary artery disease.
| Cases | The Corresponding Triples | BioBERT | MedERNIE | BERT-MK | Ground Truth |
| --- | --- | --- | --- | --- | --- |
| 1. ... coronary artery disease, status post mi x0, cabg ... | (Coronary artery disease, associated with, MI) | NPP | PIP | PIP | PIP |
| 2. 0. cad: presented with anginal symptoms and ekg changes (stemi), with cardiac catheterization revealing lesions in lad, lcx, and plb. | (CAD, member of, Other ischemic heart disease); (Other ischemic heart disease, has member, Angina symptom) | NPP | NPP | PIP | PIP |
the GAD dataset, the capability of GCKE is limited. + +# 3.5.2 Effect of Different Corpus Sizes in Pre-training + +Figure 4 shows the model performance comparison with different proportions of the pre-training corpus. From this figure, we observe that BERT-MK outperforms BioBERT by using only $10\% -20\%$ of the corpus, which indicates that medical knowledge has the capability to enhance pre-trained language models and save computational costs (Schwartz et al., 2019). + +# 4 Related Work + +Pre-trained language models represented by ELMO (Peters et al., 2018), GPT (Radford et al., 2018) and BERT (Devlin et al., 2019) have attracted great attention, and a large number of variant models have been proposed. Among these studies, some researchers devote their efforts to introducing knowledge into language models (Levine et al., 2019; Lauscher et al., 2019; Liu et al., 2019; Zhang et al., 2019b). ERNIE-Baidu (Sun et al., 2019) introduces new masking units such as phrases and entities to learn the knowledge information in these masking units. As a result, syntactic and semantic information from phrases and entities is implicitly integrated into the language model. Furthermore, a different type of knowledge information is explored in ERNIE-Tsinghua (Zhang et al., 2019a), which incorporates knowledge graphs into BERT to learn lexical, syntactic and knowledge information simultaneously. Xiong et al. (2019) introduce an entity replacement checking task into the pre-trained language model, and improve several entity-related downstream tasks, such as question answering and entity typing. Wang et al. (2020) propose a plug-in way to infuse knowledge into language models, and their method keeps different kinds of knowledge in different adapters. The knowledge information introduced by these methods does not pay much attention to the graph contextualized knowledge in the KG.
Recently, several KRL methods have attempted to introduce more contextualized information into knowledge representations. Relational Graph Convolutional Networks (R-GCNs) (Schlichtkrull et al., 2018) are proposed to learn entity embeddings from their incoming neighbors, which greatly enhances the information interaction between related triples. Nathani et al. (2019) further extend the information flow from 1-hop in-entities to n-hop during the learning process of entity representations, and achieve SOTA performance on multiple relation prediction datasets, especially the ones containing higher in-degree nodes. We believe that the information contained in knowledge graphs is far from being sufficiently exploited. In this study, we develop an approach to integrate more graph contextualized information, which models subgraphs as training samples. This module has the ability to model any information in the KG. In addition, this learned knowledge is integrated into the language model to obtain an enhanced version of the medical pre-trained language model. + +# 5 Conclusion and Future Work + +We propose a novel approach to learn more comprehensive knowledge, focusing on modeling subgraphs in the knowledge graph with a knowledge learning module. Additionally, the learned medical knowledge is integrated into the pre-trained language model, which outperforms BERT-Base and another two domain-specific pre-trained language models on several medical NLP tasks. Our work validates the intuition that medical knowledge is beneficial to some medical NLP tasks and provides a preliminary exploration for the application of medical knowledge. + +In the follow-up work, some knowledge-guided tasks will be used to validate the effectiveness of the knowledge learning module GCKE. Moreover, we will explore some other knowledge injection ways to combine medical knowledge with language models, such as multi-task learning.
More subgraph sampling strategies also need to be explored, such as r-ego subgraphs (Qiu et al., 2020) and degree-dependent subgraphs. + +# Acknowledgment + +The authors would like to thank all the anonymous reviewers for their insightful comments. We thank Yasheng Wang for his help in code implementation. + +# References + +Iz Beltagy, Arman Cohan, and Kyle Lo. 2019. Scibert: Pretrained contextualized embeddings for scientific text. arXiv preprint arXiv:1903.10676. +Balu Bhasuran and Jeyakumar Natarajan. 2018. Automatic extraction of gene-disease associations from literature using joint ensemble learning. PloS one, 13(7):e0200699. +Olivier Bodenreider. 2004. The unified medical language system (umls): integrating biomedical terminology. Nucleic acids research, 32(suppl_1):D267-D270. +Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in neural information processing systems, pages 2787-2795. +Alex Bravo, Janet Pinero, Núria Queralt-Rosinach, Michael Rautschka, and Laura I Furlong. 2015. Extraction of relations between genes and diseases from text and large-scale data analysis: implications for translational research. BMC bioinformatics, 16(1):55. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186. +John R Firth. 1957. A synopsis of linguistic theory, 1930-1955. Studies in linguistic analysis. +Xu Han, Shulin Cao, Xin Lv, Yankai Lin, Zhiyuan Liu, Maosong Sun, and Juanzi Li. 2018. Openke: An open toolkit for knowledge embedding. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 139-144.
+Zellig S Harris. 1954. Distributional structure. Word, 10(2-3):146-162. + +Bin He, Yi Guan, and Rui Dai. 2019. Classifying medical relations in clinical text via convolutional neural networks. Artificial intelligence in medicine, 93:43-49. +Jin-Dong Kim, Tomoko Ohta, Yoshimasa Tsuruoka, Yuka Tateisi, and Nigel Collier. 2004. Introduction to the bio-entity recognition task at jnlpba. In Proceedings of the international joint workshop on natural language processing in biomedicine and its applications, pages 70-75. CiteSeer. +Anne Lauscher, Ivan Vulic, Edoardo Maria Ponti, Anna Korhonen, and Goran Glavas. 2019. Informing unsupervised pretraining with external linguistic knowledge. arXiv preprint arXiv:1909.02339. +Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. Biobert: pre-trained biomedical language representation model for biomedical text mining. arXiv preprint arXiv:1901.08746. +Yoav Levine, Barak Lenz, Or Dagan, Dan Padnos, Or Sharir, Shai Shalev-Shwartz, Amnon Shashua, and Yoav Shoham. 2019. Sensebert: Driving some sense into bert. arXiv preprint arXiv:1908.05646. +Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu. 2016. Biocreative v cdr task corpus: a resource for chemical disease relation extraction. Database, 2016. +Yankai Lin, Xu Han, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2018. Knowledge representation learning: A quantitative review. arXiv preprint arXiv:1812.10901. +Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. 2019. K-bert: Enabling language representation with knowledge graph. arXiv preprint arXiv:1909.07606. +Deepak Nathani, Jatin Chauhan, Charu Sharma, and Manohar Kaul. 2019. Learning attention-based embeddings for relation prediction in knowledge graphs. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4710-4723, Florence, Italy. Association for Computational Linguistics. +Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. 2018. A novel embedding model for knowledge base completion based on convolutional neural network. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 327-333. +Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL-HLT, pages 2227-2237. + +Jiezhong Qiu, Qibin Chen, Yuxiao Dong, Jing Zhang, Hongxia Yang, Ming Ding, Kuansan Wang, and Jie Tang. 2020. GCC: Graph contrastive coding for graph neural network pre-training. arXiv preprint arXiv:2006.09963. +Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. +Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In European Semantic Web Conference, pages 593-607. Springer. +Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. 2019. Green ai. +Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. Ernie: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223. +Özlem Uzuner, Brett R South, Shuying Shen, and Scott L DuVall. 2011. 2010 i2b2/va challenge on concepts, assertions, and relations in clinical text. Journal of the American Medical Informatics Association, 18(5):552-556. +Erik M Van Mulligen, Annie Fourrier-Reglat, David Gurwitz, Mariam Molokhia, Ainhoa Nieto, Gianluca Trifiro, Jan A Kors, and Laura I Furlong. 2012. 
The eu-adr corpus: annotated drugs, diseases, targets, and their relationships. Journal of biomedical informatics, 45(5):879-884. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008. +Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph Attention Networks. International Conference on Learning Representations. +Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Cuihong Cao, Daxin Jiang, Ming Zhou, et al. 2020. K-adapter: Infusing knowledge into pre-trained models with adapters. arXiv preprint arXiv:2002.01808. +Wenhan Xiong, Jingfei Du, William Yang Wang, and Veselin Stoyanov. 2019. Pretrained encyclopedia: Weakly supervised knowledge-pretrained language model. arXiv preprint arXiv:1912.09637. +Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019a. ERNIE: Enhanced language representation with informative entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1441-1451, Florence, Italy. Association for Computational Linguistics. + +Zhuosheng Zhang, Yuwei Wu, Hai Zhao, Zuchao Li, Shuaijiang Zhang, Xi Zhou, and Xiang Zhou. 2019b. Semantics-aware bert for language understanding. arXiv preprint arXiv:1909.02209. + +# A Appendices + +# A.1 Comparison between MedERNIE and BERT-MK + +As shown in Table 6, BERT-MK outperforms MedERNIE on all datasets except BC5CDR. + +Table 6: MedERNIE vs. BERT-MK.
**Entity Typing (Acc)**

| Model | 2010 i2b2/VA | JNLPBA | BC5CDR |
| --- | --- | --- | --- |
| MedERNIE | 97.37 | 94.46 | 99.62 |
| BERT-MK | 97.70 | 94.55 | 99.54 |

**Relation Classification (F1)**

| Model | 2010 i2b2/VA | GAD | EU-ADR |
| --- | --- | --- | --- |
| MedERNIE | 73.8 | 86.23 | 86.99 |
| BERT-MK | 74.7 | 86.87 | 87.49 |
\ No newline at end of file diff --git a/bertmkintegratinggraphcontextualizedknowledgeintopretrainedlanguagemodels/images.zip b/bertmkintegratinggraphcontextualizedknowledgeintopretrainedlanguagemodels/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..6efc7fa8d028b6c69a6759137974ad2da23df8a4 --- /dev/null +++ b/bertmkintegratinggraphcontextualizedknowledgeintopretrainedlanguagemodels/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:458ff45509672766273cefe5a95bf279baf300a05cf9e2125e631da3ab89d747 +size 406620 diff --git a/bertmkintegratinggraphcontextualizedknowledgeintopretrainedlanguagemodels/layout.json b/bertmkintegratinggraphcontextualizedknowledgeintopretrainedlanguagemodels/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..29818e431f435e12cbba03915d09ba6658fcf030 --- /dev/null +++ b/bertmkintegratinggraphcontextualizedknowledgeintopretrainedlanguagemodels/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:40d23af5bcd136de539ee040813fc2fa2a494a60194fa83031c41c8c3aaad16a +size 373438 diff --git a/bertqecontextualizedqueryexpansionfordocumentreranking/f3b9ad81-20d7-4a90-bdcf-d6a679f57afd_content_list.json b/bertqecontextualizedqueryexpansionfordocumentreranking/f3b9ad81-20d7-4a90-bdcf-d6a679f57afd_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..7c591b856e4d91ec0ba6c62e92ba27962e9260f4 --- /dev/null +++ b/bertqecontextualizedqueryexpansionfordocumentreranking/f3b9ad81-20d7-4a90-bdcf-d6a679f57afd_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:74245af21a9c7354d9cbe86e70dad8f4260de3bd3f9a59635cacbc31dfe31929 +size 75652 diff --git a/bertqecontextualizedqueryexpansionfordocumentreranking/f3b9ad81-20d7-4a90-bdcf-d6a679f57afd_model.json b/bertqecontextualizedqueryexpansionfordocumentreranking/f3b9ad81-20d7-4a90-bdcf-d6a679f57afd_model.json new file mode 100644 
index 0000000000000000000000000000000000000000..37f4c18a197e2cf62546cb726b04c91119f925dc --- /dev/null +++ b/bertqecontextualizedqueryexpansionfordocumentreranking/f3b9ad81-20d7-4a90-bdcf-d6a679f57afd_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7ae081e677a6e0246237a4699e461f6b43b9a5922bffe8f3d19f22226aebad76 +size 91006 diff --git a/bertqecontextualizedqueryexpansionfordocumentreranking/f3b9ad81-20d7-4a90-bdcf-d6a679f57afd_origin.pdf b/bertqecontextualizedqueryexpansionfordocumentreranking/f3b9ad81-20d7-4a90-bdcf-d6a679f57afd_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..1fb642e160252cb0c13f21f48bf88ed054daca01 --- /dev/null +++ b/bertqecontextualizedqueryexpansionfordocumentreranking/f3b9ad81-20d7-4a90-bdcf-d6a679f57afd_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:12473d70834337ef437a7970c24fcd69b803417c4131e2ea96de704fb134ef2e +size 510136 diff --git a/bertqecontextualizedqueryexpansionfordocumentreranking/full.md b/bertqecontextualizedqueryexpansionfordocumentreranking/full.md new file mode 100644 index 0000000000000000000000000000000000000000..727e51e8722be1bcae8e2165d773e689dbceaff3 --- /dev/null +++ b/bertqecontextualizedqueryexpansionfordocumentreranking/full.md @@ -0,0 +1,292 @@ +# BERT-QE: Contextualized Query Expansion for Document Re-ranking + +Zhi Zheng $^{1,3}$ , Kai Hui $^{2*}$ , Ben He $^{1,3\boxtimes}$ , Xianpei Han $^{3}$ , Le Sun $^{3\boxtimes}$ , Andrew Yates $^{4}$ + +1 University of Chinese Academy of Sciences, Beijing, China + +2 Amazon Alexa, Berlin, Germany + +3 Institute of Software, Chinese Academy of Sciences, Beijing, China +4 Max Planck Institute for Informatics, Saarbrücken, Germany + +zhengzhi18@mails.ucas.ac.cn, kaihuibj@amazon.com + +benhe@ucas.ac.cn, {xianpei, sunle}@iscas.ac.cn, ayates@mpi-inf.mpg.de + +# Abstract + +Query expansion aims to mitigate the mismatch between the language used in a query and in a document. 
However, query expansion methods can suffer from introducing non-relevant information when expanding the query. To bridge this gap, inspired by recent advances in applying contextualized models like BERT to the document retrieval task, this paper proposes a novel query expansion model that leverages the strength of the BERT model to select relevant document chunks for expansion. In evaluation on the standard TREC Robust04 and GOV2 test collections, the proposed BERT-QE model significantly outperforms BERT-Large models.

# 1 Introduction

In information retrieval, the language used in a query and in a document differs in terms of verbosity, formality, and even the format (e.g., the use of keywords in a query versus the use of natural language in an article from Wikipedia). In order to reduce this gap, different query expansion methods have been proposed and have enjoyed success in improving document rankings. Such methods commonly take a pseudo relevance feedback (PRF) approach in which the query is expanded using top-ranked documents and then the expanded query is used to rank the search results (Rocchio, 1971; Lavrenko and Croft, 2001; Amati, 2003; Metzler and Croft, 2007).

Due to their reliance on pseudo relevance information, such expansion methods suffer from any non-relevant information in the feedback documents, which could pollute the query after expansion. Thus, selecting and re-weighting the information pieces from PRF according to their relevance before re-ranking are crucial for the effectiveness of query expansion. Existing works identify expansion tokens according to the language model on top of feedback documents, as in RM3 (Lavrenko and Croft, 2001), extract the topical terms from feedback documents that diverge most from the corpus language model (Amati, 2003), or extract concepts for expansion (Metzler and Croft, 2007).
In the context of neural approaches, the recent neural PRF architecture (Li et al., 2018) uses feedback documents directly for expansion. All these methods, however, are under-equipped to accurately evaluate the relevance of information pieces used for expansion. This can be caused by the mixing of relevant and non-relevant information in the expansion, like the tokens in RM3 (Lavrenko and Croft, 2001) and the documents in NPRF (Li et al., 2018); or by the fact that the models used for selecting and re-weighting the expansion information are not powerful enough, as they are essentially scalars based on counting.

Inspired by the recent advances of pre-trained contextualized models like BERT on the ranking task (Yilmaz et al., 2019; Nogueira et al., 2020), this work attempts to develop query expansion models based on BERT with the goal of more effectively using the relevant information from PRF. In addition, as indicated in previous studies (Qiao et al., 2019; Dai and Callan, 2019), the (pre-)trained BERT-based ranking models have a strong ability to identify highly relevant chunks within documents. This offers an advantage in choosing text chunks for expansion, allowing more flexibility in the granularity of expansion, as compared with using tokens (Lavrenko and Croft, 2001), concepts with one or two words (Metzler and Croft, 2007), or documents (Li et al., 2018).

Given a query and a list of feedback documents from an initial ranking (e.g., from BM25), we propose to re-rank the documents in three sequential phases. In phase one, the documents are re-ranked with a fine-tuned BERT model and the top-ranked documents are used as PRF documents; in phase two, these PRF documents are decomposed into text chunks with fixed length (e.g., 10), and the relevance of individual chunks is evaluated; finally, to assess the relevance of a given document, the selected chunks and original query are used to score the document together.
To this end, a novel query expansion model based on the contextualized model, coined BERT-QE, is developed.

Contributions of this work are threefold. 1) A novel query expansion model is proposed to exploit the strength of the contextualized model BERT in identifying relevant information from feedback documents; 2) Evaluation on two standard TREC test collections, namely, Robust04 and GOV2, demonstrates that the proposed BERT-QE-LLL could advance the performance of BERT-Large significantly on both shallow and deep pool, when using BERT-Large in all three phases; 3) We further trade off efficiency and effectiveness by replacing BERT-Large with smaller BERT architectures and demonstrate that, with a smaller variant of BERT-QE, e.g., BERT-QE-LMT, one could outperform BERT-Large significantly on shallow pool with as little as $3\%$ extra computational cost; meanwhile, a larger variant, e.g., BERT-QE-LLS, could significantly outperform BERT-Large on both shallow and deep pool with $30\%$ more computations.

# 2 Method

In this section, we describe BERT-QE, which takes a ranked list of documents as input (e.g., from an unsupervised ranking model) and outputs a re-ranked list based on the expanded query.

# 2.1 Overview

There are three phases in the proposed BERT-QE. Namely, phase one: the first-round re-ranking of the documents using a BERT model; phase two: chunk selection for query expansion from the top-ranked documents; and phase three: the final re-ranking using the selected expansion chunks. The essential parts of the proposed BERT-QE are the second and third phases, which are introduced in detail in Sections 2.2 and 2.3. Without loss of generality, a fine-tuned BERT model serves as the backbone of the proposed BERT-QE model and is used in all three phases. We describe the fine-tuning process and phase one before describing phases two and three in more detail.

Fine-tuning BERT model.
Similar to (Yilmaz et al., 2019), a BERT model (e.g., BERT-Large) is first initialized using a checkpoint that has been trained on MS MARCO (Bajaj et al., 2018). The model is subsequently fine-tuned on a target dataset (e.g., Robust04). This choice is to enable comparison with the best-performing BERT model, such as a fine-tuned BERT-Large (Yilmaz et al., 2019). Before fine-tuning the BERT model on a target dataset, we first use the aforementioned model trained on MS MARCO to identify the top-ranked passages in this dataset. These selected query-passage pairs are then used to fine-tune BERT using the loss function as in Equation (1).

$$
\mathcal{L} = - \sum_{i \in I_{pos}} \log(p_i) - \sum_{i \in I_{neg}} \log(1 - p_i) \quad (1)
$$

Therein, $I_{pos}$ and $I_{neg}$ are sets of indices of the relevant and non-relevant documents, respectively, and $p_i$ is the probability of the document $d_i$ being relevant to the query. This configuration is similar to Dai and Callan (2019), with the difference that we use only passages with the highest scores instead of all passages. In our pilot experiments, this leads to comparable effectiveness but with a shorter training time.

Phase one. Using the fine-tuned BERT model, we re-rank a list of documents from an unsupervised ranking model for use in the second phase. As shown in Equation (2), given a query $q$ and a document $d$, $rel(q, d)$ assigns $d$ a relevance score by modeling the concatenation of the query and the document using the fine-tuned BERT. The ranked list is obtained by ranking the documents with respect to these relevance scores. We refer the reader to prior works describing BERT and ranking with BERT for further details (Devlin et al., 2019; Nogueira and Cho, 2019).
$$
rel(q, d) = \operatorname{BERT}(q, d) \tag{2}
$$

# 2.2 Selecting Chunks for Query Expansion

In the second phase, the top-$k_{d}$ documents from the first phase are employed as feedback documents and $k_{c}$ chunks of relevant text are extracted from them. This phase is illustrated in Figure 1. In more detail, a sliding window spanning $m$ words is used to decompose each feedback document into overlapping chunks where two neighboring chunks

![](images/fa529c4263723c470db99fef99cf598cfabbec9c7fb5565bbc91016274b0df66.jpg)
Figure 1: Chunk selection for query expansion in phase two.

are overlapped by up to $m / 2$ words. The $i$-th chunk is denoted as $c_{i}$. As expected, these chunks are a mixture of relevant and non-relevant text pieces due to the lack of supervision signals. Therefore, the fine-tuned BERT model from Section 2.1 is used to score each individual chunk $c_{i}$, as indicated in Equation (3). The top-$k_{c}$ chunks with the highest scores are selected. These $k_{c}$ chunks, which are the output from phase two, serve as a distillation of the feedback information in the feedback documents from phase one. We denote the chunks as $\mathcal{C} = [c_0, c_1, \dots, c_{k_c - 1}]$.

$$
rel(q, c_{i}) = \operatorname{BERT}(q, c_{i}) \tag{3}
$$

# 2.3 Final Re-ranking using Selected Chunks

In phase three, the chunks selected from phase two are used in combination with the original query to compute a final re-ranking. This process is illustrated in Figure 2.

Evaluating the relevance of a document using the selected feedback chunks. For each individual document $d$, the $k_{c}$ chunks selected in phase two are used to assess its relevance separately, and the $k_{c}$ evaluations are thereafter aggregated to generate the document's relevance score.
As described in Equation (4), the fine-tuned BERT model from Section 2.1 is used to compute $rel(c_{i}, d)$, which are further aggregated into a relevance score $rel(\mathcal{C}, d)$. Akin to (Li et al., 2018), the relevance of individual chunks is incorporated as weights by using the softmax function $\operatorname{softmax}_{c_{i} \in \mathcal{C}}(.)$ among all chunks in $\mathcal{C}$ on top of the $rel(q, c_{i})$.

$$
rel(\mathcal{C}, d) = \sum_{c_{i} \in \mathcal{C}} \operatorname{softmax}_{c_{i} \in \mathcal{C}}(rel(q, c_{i})) \cdot rel(c_{i}, d) \tag{4}
$$

Combining $rel(\mathcal{C},d)$ with $rel(q,d)$. To generate the ultimate relevance score $rel(q,\mathcal{C},d)$ for $d$, akin to the established PRF models like RM3 (Lavrenko and Croft, 2001) and NPRF (Li et al., 2018), the relevance scores based on the feedback and the original query are combined as in Equation (5). $\alpha$ is a hyper-parameter, governing the relative importance of the two parts.

$$
rel(q, \mathcal{C}, d) = (1 - \alpha) \cdot rel(q, d) + \alpha \cdot rel(\mathcal{C}, d) \tag{5}
$$

We note that the same fine-tuned BERT model does not necessarily need to be used in each phase. In our experiments, we consider the impact of using different BERT variants from Table 1 in each phase. For example, phases one and three might use the BERT-Large variant, while phase two uses the BERT-Small variant with fewer parameters.

# 3 Experimental Setup

In this section, we describe our experiment configurations. Source code, data partitions for cross-validation, result files of initial rankings, and the trained models are available online.

# 3.1 Dataset and Metrics

Akin to (Guo et al., 2016; Yilmaz et al., 2019), we use the standard Robust04 (Voorhees, 2004) and GOV2 (Clarke et al., 2004) test collections.
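Before turning to the experimental setup, the scoring pipeline of Equations (2)-(5) can be sketched in a few lines of Python. This is a minimal illustration, not the paper's released code: `rel(a, b)` stands in for the fine-tuned BERT scorer, and all function and parameter names are ours.

```python
import math

def chunks(doc_words, m=10):
    """Phase two: decompose a document into overlapping m-word chunks,
    with neighboring chunks overlapping by up to m/2 words."""
    stride = max(m // 2, 1)
    return [" ".join(doc_words[i:i + m])
            for i in range(0, max(len(doc_words) - stride, 1), stride)]

def softmax(xs):
    zs = [math.exp(x - max(xs)) for x in xs]
    total = sum(zs)
    return [z / total for z in zs]

def bert_qe_score(q, doc, feedback_docs, rel, k_c=10, m=10, alpha=0.5):
    """Final score rel(q, C, d) of Equation (5); `rel(a, b)` stands in
    for the fine-tuned BERT scorer of Equations (2) and (3)."""
    # Phase two: pool chunks from the feedback documents and keep the
    # k_c chunks with the highest rel(q, c) scores.
    pool = [c for fd in feedback_docs for c in chunks(fd.split(), m)]
    top = sorted(pool, key=lambda c: rel(q, c), reverse=True)[:k_c]
    # Equation (4): softmax-weighted aggregation of chunk-document scores.
    weights = softmax([rel(q, c) for c in top])
    rel_C_d = sum(w * rel(c, doc) for w, c in zip(weights, top))
    # Equation (5): interpolate with the direct query-document score.
    return (1 - alpha) * rel(q, doc) + alpha * rel_C_d
```

In practice `rel` would be a batched BERT forward pass; the sketch only makes the data flow of the three phases explicit.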
Robust04 consists of 528,155 documents and GOV2 consists of 25,205,179 documents. We employ 249 TREC keyword queries for Robust04 and 150 keyword queries for GOV2. Akin to (Yilmaz et al., 2019), in this work, all the rankings from BERT-based models, including the proposed models and

![](images/e1865a4512157be38080946cd97add26f96c8cf840d01c50b8500ccdaae1fb91.jpg)
Figure 2: Re-rank documents using selected chunks in phase three.

the baselines, have been interpolated with the initial ranking scores (DPH+KL in this work) in the same way, wherein the hyper-parameters are tuned in cross-validation. We report P@20 and NDCG@20 to enable comparisons on the shallow pool; MAP@100 and MAP@1000 are reported for the deep pool. In addition, statistical significance for a paired two-tailed t-test is reported, where the superscripts \*\*\*, \*\* and \* denote significance levels of 0.01, 0.05, and 0.1, respectively.

# 3.2 Initial Ranking

$\mathbf{DPH} + \mathbf{KL}$ is used as the ranking model to generate the initial ranking. DPH is an unsupervised retrieval model (Amati et al., 2007) derived from the divergence-from-randomness framework. DPH+KL ranks the documents with DPH after expanding the original queries with Rocchio's query expansion using Kullback-Leibler divergence (Amati, 2003; Rocchio, 1971), as implemented in the Terrier toolkit (Macdonald et al., 2012). Its results are also listed for comparison.

# 3.3 Models in Comparisons

Unsupervised query expansion models, like Rocchio's query expansion (Rocchio, 1971) with the KL divergence model (Amati, 2003), and RM3 (Lavrenko and Croft, 2001), are employed as a group of baseline models, wherein the query is expanded by selecting terms from top-ranked documents from the initial ranking.

- BM25+RM3 is also used as a baseline model, which follows the experimental settings from (Yilmaz et al., 2019), and the implementation from
+ +- $\mathbf{QL} + \mathbf{RM3}$ is the query likelihood language model with RM3 for PRF (Lavrenko and Croft, 2001), for which the Anserini's (Lin et al., 2016) implementation with default settings is used. +Neural ranking models. We also include different neural ranking models for comparisons. +- SNRM (Zamani et al., 2018) is a standalone neural ranking model by introducing a sparsity property to learn a latent sparse representation for each query and document. The best-performing version of SNRM with PRF is included for comparison. +- NPRF (Li et al., 2018) is an end-to-end neural PRF framework that can be used with existing neural IR models, such as DRMM (Guo et al., 2016). The best-performing variant $\mathrm{NPRF}_{ds}$ -DRMM is included for comparison. +- CEDR (MacAvaney et al., 2019) incorporates the classification vector of BERT into existing neural models. The best-performing variant CEDRKNRM is included for comparison. +- Birch (Yilmaz et al., 2019) is a re-ranking approach by fine-tuning BERT successively on the MS MARCO and MicroBlog (MB) datasets. The best-performing version 3S: BERT(MS MARCO→MB), denoted as Birch(MS→MB) for brevity, is included for comparison. +- BERT-Large and BERT-Base in the MaxP configuration are fine-tuned on the training sets with cross-validation as described in Section 2.1. + +# 3.4 Variants of BERT + +Different variants of BERT models with different configurations are employed. We list the key hyperparameters of each variant in Table 1, namely, the + +
| Size | Configuration |
| --- | --- |
| Tiny (T) | L=2, H=128, A=2 |
| Small (S) | L=4, H=256, A=4 |
| Medium (M) | L=8, H=512, A=8 |
| Base (B) | L=12, H=768, A=12 |
| Large (L) | L=24, H=1024, A=16 |
Table 1: Configurations of different BERT variants.

number of hidden layers, the hidden embedding size, and the number of attention heads, which are denoted as $L$, $H$ and $A$, respectively. The details of these models can be found in (Turc et al., 2019). We indicate the configurations used for individual phases with the model's suffix. For example, BERT-QE-LLS indicates that a fine-tuned BERT-Large is used in phases one and two, and in phase three a fine-tuned BERT-Small is used.

# 3.5 Implementation of BERT-QE

Individual documents are decomposed into overlapping passages of 100 words using a sliding window, wherein the stride is 50. For the proposed BERT-QE, in phase two, $k_{d} = 10$ top-ranked documents from the search results of phase one are used, from which $k_{c} = 10$ chunks are selected for expansion, and chunk length $m = 10$ is used. In phase one and phase three, the BERT model is used to re-rank the top-1000 documents. In Section 5, we also examine the use of different $k_{c}$ and $m$, namely, $k_{c} = [5, 10, 20]$ and $m = [5, 10, 20]$, investigating the impacts of different configurations.

# 3.6 Training

To feed individual query-document pairs into the model, the query $q$ and the document $d$ for training are concatenated and the maximum sequence length is set to 384. We train BERT using cross-entropy loss for 2 epochs with a batch size of 32 on a TPU v3. The Adam optimizer (Kingma and Ba, 2015) is used with the learning rate schedule from (Nogueira and Cho, 2019) with an initial learning rate of 1e-6. We conduct a standard five-fold cross-validation. Namely, queries are split into five equal-sized partitions. The query partition on Robust04 follows the settings from (Dai and Callan, 2019). On GOV2, queries are partitioned by the order of TREC query id in a round-robin manner. In each fold, three partitions are used for training, one is for validation, and the remaining one is for testing.
In each fold, we tune the hyper-parameters on the validation set and report the performance on the test set based on the configurations with the highest NDCG@20 on the validation set. The ultimate performance is the average among all folds.

# 3.7 Computation of FLOPs

Akin to the literature (Liu et al., 2020), we report FLOPs (floating point operations), which measure the computational complexity of models. Similar to (Khattab and Zaharia, 2020), we report FLOPs that include all computations in the three phases of BERT-QE.

# 4 Results

In this section, we report results for the proposed BERT-QE model and compare them to the baseline models. First, in Section 4.1, we use BERT-Large models for all three phases of BERT-QE. In Section 4.2, we evaluate the impact of using smaller BERT models (Table 1) for the second and third phases in order to improve the efficiency of the proposed model.

# 4.1 Results for BERT-QE-LLL

In this section, we examine the performance of the proposed BERT-QE by comparing it with a range of unsupervised ranking models, neural IR models, and re-ranking models based on BERT-Base and BERT-Large. We aim at advancing the state-of-the-art ranking performance of BERT-Large, and start with using BERT-Large for all three phases in BERT-QE. We denote this variant as BERT-QE-LLL, where the suffix LLL indicates the use of the same fine-tuned BERT-Large in all three phases.

The effectiveness of BERT-QE-LLL. To put our results in context, we first compare BERT-QE-LLL with the reported effectiveness for different neural IR models from the literature. Due to the fact that results for GOV2 have not been reported in these works, only the comparisons on Robust04 are included in Table 2. In comparison with the state-of-the-art results of a fine-tuned BERT-Large, namely, Birch(MS→MB) (Yilmaz et al.,
| Model | P@20 | NDCG@20 | MAP@1K |
| --- | --- | --- | --- |
| SNRM with PRF | 0.3948 | 0.4391 | 0.2971 |
| NPRF | 0.4064 | 0.4576 | 0.2904 |
| CEDR | 0.4667 | 0.5381 | - |
| Birch(MS→MB) | 0.4669 | 0.5325 | 0.3691 |
| BERT-Large | 0.4769* | 0.5397 | 0.3743 |
| BERT-QE-LLL | 0.4888*** | 0.5533*** | 0.3865*** |
Table 2: Comparison of the effectiveness of BERT-QE-LLL with neural IR models and the neural PRF model on Robust04 when using title queries. Statistical significance relative to Birch(MS→MB) (Yilmaz et al., 2019) at p-value < 0.01, 0.05, and 0.1 is denoted as \*\*\*, \*\*, and \*, respectively.

2019), it can be seen that the fine-tuned BERT-Large in this work achieves comparable results. In addition, BERT-QE-LLL significantly outperforms Birch(MS→MB) at the 0.01 level. The significance tests relative to other models are omitted because their result rankings are not available.

As summarized in Table 3, we further compare BERT-QE-LLL with BERT-Base and BERT-Large on both Robust04 and GOV2. We also include several unsupervised baselines for reference. As can be seen, BERT-Large significantly outperforms all non-BERT baselines by a big margin, regardless of whether query expansion is used. Thus, only significance tests relative to BERT-Large are shown. From Table 3, on Robust04, in comparison with BERT-Large, BERT-QE-LLL could significantly improve the search results on both shallow and deep pool at the 0.01 significance level, achieving a $2.5\%$ improvement in terms of NDCG@20 and a $3.3\%$ improvement for MAP@1K. On GOV2, we have similar observations that BERT-QE-LLL could significantly improve BERT-Large on all reported metrics.

The efficiency of BERT-QE. Beyond the effectiveness, we are also interested in the efficiency of BERT-QE-LLL, for which the FLOPs is reported. The FLOPs per query for BERT-Large is 232.6T, while that of BERT-QE-LLL is 2603T. This means BERT-QE-LLL requires 11.19x the computation of BERT-Large. This is mostly due to the use of BERT-Large models for all three phases as described in Section 2. Note that one may be able to reduce the time consumption during inference by parallelizing the individual phases of BERT-QE.
In the following, the efficiency of a model is reported relative to BERT-Large, namely, as a multiple of BERT-Large's computational cost.

# 4.2 Employing Smaller BERT Variants in BERT-QE

According to Section 4.1, despite its competitive effectiveness, BERT-QE-LLL is computationally expensive due to the use of BERT-Large in all three phases. In this section, we further explore whether it is possible to replace the BERT-Large components with smaller BERT variants from Table 1 in the second and third phases, in order to further improve the efficiency of the proposed BERT-QE model. Given that our goal is to improve on BERT-Large, in this work, we always start with BERT-Large for the first-round ranking.

Smaller BERT variants for chunk selector. As described in Section 2.2, in the second phase, a BERT model is used to select text chunks of a fixed length (i.e., $m = 10$) by evaluating individual text chunks from the top-$k_d$ documents and selecting the most relevant ones. Intuitively, compared with ranking a document, evaluating the relevance of a short piece of text is a relatively simple task. Thus, we examine the use of smaller BERT variants as summarized in the second section (namely, BERT-QE-LXL, where X is T, S, M, or B) of Table 4. As shown, compared with using BERT-Large in phase two, on Robust04, all four BERT-QE variants can outperform BERT-Large significantly at the 0.01 level. Furthermore, BERT-QE-LML can even achieve slightly higher results than BERT-QE-LLL. On GOV2, on the other hand, the use of BERT-Tiny, BERT-Small, and BERT-Medium could still outperform BERT-Large significantly at the 0.05 or 0.1 level, but with decreasing metrics in most cases. Overall, for phase two, BERT-Large is a good choice but the smaller BERT variants are also viable. The use of BERT-Tiny, BERT-Small, and BERT-Medium in phase two can outperform BERT-Large significantly with lower FLOPs.
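As a back-of-the-envelope check on these relative costs (a sketch under simplifying assumptions, not the paper's actual FLOPs accounting): treating the per-pair cost of a BERT forward pass as proportional to $L \cdot H^2$ at a fixed sequence length and counting the query-text pairs scored in each phase, one can estimate the relative cost of a configuration. The per-chunk discount and the `chunks_per_doc` value below are our assumptions.

```python
# Per-pair cost model: transformer FLOPs roughly scale with L * H^2 at a
# fixed sequence length (L and H taken from Table 1; sequence-length and
# attention terms are ignored, so this is only a crude estimate).
VARIANTS = {"T": (2, 128), "S": (4, 256), "M": (8, 512),
            "B": (12, 768), "L": (24, 1024)}

def unit_cost(v):
    layers, hidden = VARIANTS[v]
    return layers * hidden * hidden

def relative_flops(config, n_docs=1000, k_d=10, k_c=10, chunks_per_doc=60):
    """Estimate the cost of BERT-QE-<config> relative to one BERT-Large
    re-ranking pass over n_docs documents. `chunks_per_doc` is an assumed
    average number of 10-word chunks per feedback document."""
    p1, p2, p3 = config
    base = n_docs * unit_cost("L")                    # plain BERT-Large re-ranking
    phase1 = n_docs * unit_cost(p1)                   # first-round re-ranking
    # Phase two scores short 10-word chunks; assume ~1/10 of a full-length pair.
    phase2 = k_d * chunks_per_doc * unit_cost(p2) / 10
    phase3 = n_docs * k_c * unit_cost(p3)             # k_c chunk-document scores per doc
    return (phase1 + phase2 + phase3) / base

print(round(relative_flops("LLL"), 2))  # close to the reported 11.19x
print(round(relative_flops("LMT"), 2))  # close to the reported 1.03x
```

Under these assumptions the estimate for BERT-QE-LLL lands near 11x and BERT-QE-LMT near 1.02x, in the same ballpark as the ratios in Table 4; the dominant term is clearly the $n\_docs \times k_c$ scoring in phase three.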
+ +Smaller BERT variants for final re-ranker. + +
| Model | Robust04 P@20 | NDCG@20 | MAP@100 | MAP@1K | GOV2 P@20 | NDCG@20 | MAP@100 | MAP@1K |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DPH | 0.3616 | 0.4220 | 0.2150 | 0.2512 | 0.5295 | 0.4760 | 0.1731 | 0.3012 |
| BM25+RM3 | 0.3821 | 0.4407 | 0.2451 | 0.2903 | 0.5634 | 0.4851 | 0.2022 | 0.3350 |
| QL+RM3 | 0.3723 | 0.4269 | 0.2314 | 0.2747 | 0.5359 | 0.4568 | 0.1837 | 0.3143 |
| DPH+KL | 0.3924 | 0.4397 | 0.2528 | 0.3046 | 0.5896 | 0.5122 | 0.2182 | 0.3605 |
| BERT-Base | 0.4653 | 0.5278 | 0.3153 | 0.3652 | 0.6591 | 0.5851 | 0.2535 | 0.3971 |
| BERT-Large | 0.4769 | 0.5397 | 0.3238 | 0.3743 | 0.6638 | 0.5932 | 0.2612 | 0.4082 |
| BERT-QE-LLL | 0.4888*** | 0.5533*** | 0.3363*** | 0.3865*** | 0.6748*** | 0.6037*** | 0.2681*** | 0.4143*** |

Table 3: Effectiveness of BERT-QE-LLL. Statistical significance relative to BERT-Large at p-value < 0.01, 0.05, and 0.1 is denoted as \*\*\*, \*\*, and \*, respectively.
| Model | FLOPs | Robust04 P@20 | NDCG@20 | MAP@100 | MAP@1K | GOV2 P@20 | NDCG@20 | MAP@100 | MAP@1K |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT-Base | 0.28x | 0.4653 | 0.5278 | 0.3153 | 0.3652 | 0.6591 | 0.5851 | 0.2535 | 0.3971 |
| BERT-Large | 1.00x | 0.4769 | 0.5397 | 0.3238 | 0.3743 | 0.6638 | 0.5932 | 0.2612 | 0.4082 |
| BERT-QE-LLL | 11.19x | 0.4888*** | 0.5533*** | 0.3363*** | 0.3865*** | 0.6748*** | 0.6037*** | 0.2681*** | 0.4143*** |
| BERT-QE-LTL | 11.00x | 0.4855*** | 0.5500*** | 0.3318*** | 0.3821*** | 0.6691** | 0.5986* | 0.2663*** | 0.4138*** |
| BERT-QE-LSL | 11.00x | 0.4861*** | 0.5504*** | 0.3325*** | 0.3828*** | 0.6732*** | 0.6011** | 0.2685*** | 0.4142*** |
| BERT-QE-LML | 11.01x | 0.4932*** | 0.5592*** | 0.3368*** | 0.3870*** | 0.6715** | 0.6013* | 0.2675* | 0.4136* |
| BERT-QE-LBL | 11.05x | 0.4839** | 0.5503*** | 0.3339*** | 0.3843*** | 0.6725** | 0.6004 | 0.2639 | 0.4103 |
| BERT-QE-LMT | 1.03x | 0.4839*** | 0.5483*** | 0.3276* | 0.3765 | 0.6698** | 0.5994** | 0.2642 | 0.4098 |
| BERT-QE-LMS | 1.12x | 0.4910*** | 0.5563*** | 0.3315*** | 0.3810** | 0.6658 | 0.5945 | 0.2654*** | 0.4115*** |
| BERT-QE-LMM | 1.85x | 0.4888*** | 0.5569*** | 0.3335*** | 0.3829*** | 0.6732*** | 0.6002* | 0.2668*** | 0.4131*** |
| BERT-QE-LMB | 3.83x | 0.4906*** | 0.5580*** | 0.3367*** | 0.3858*** | 0.6728*** | 0.6011** | 0.2649 | 0.4128** |
| BERT-QE-LLT | 1.20x | 0.4841*** | 0.5466** | 0.3287** | 0.3771 | 0.6695** | 0.6009** | 0.2650** | 0.4110* |
| BERT-QE-LLS | 1.30x | 0.4869*** | 0.5501** | 0.3304** | 0.3798* | 0.6688* | 0.5998** | 0.2657*** | 0.4115*** |
| BERT-QE-LLM | 2.03x | 0.4811 | 0.5470 | 0.3320** | 0.3815** | 0.6728*** | 0.6013*** | 0.2651** | 0.4107 |
| BERT-QE-LLB | 4.01x | 0.4865*** | 0.5507*** | 0.3337*** | 0.3834*** | 0.6678 | 0.5984 | 0.2665** | 0.4127** |
Table 4: Employing different BERT variants for phases two and three in BERT-QE, wherein BERT-Tiny (T), BERT-Small (S), BERT-Medium (M), and BERT-Base (B) are used. Statistical significance relative to BERT-Large at p-value < 0.01, 0.05, and 0.1 is denoted as \*\*\*, \*\*, and \*, respectively.

According to Section 2.3, phase three is the most expensive phase, because a BERT model must compare each document to multiple expansion chunks. Thus, we further explore the possibility of replacing BERT-Large with smaller BERT variants for phase three. Based on the results in the previous section, we consider both BERT-Large and BERT-Medium as the chunk selector, due to the superior effectiveness of BERT-QE-LML. The results are summarized in the third and fourth sections (namely, BERT-QE-LMX and BERT-QE-LLX, where X is T, S, M, or B) of Table 4. On Robust04, the use of smaller BERT variants always leads to decreasing effectiveness. However, when using BERT-Small and BERT-Base for the final re-ranking, the corresponding BERT-QE variants always outperform BERT-Large significantly at the 0.1 level. BERT-QE-LMM, BERT-QE-LMB, and BERT-QE-LLB can even consistently outperform BERT-Large on all four metrics at the 0.01 level. On GOV2, on the other hand, the use of BERT-QE-LMT and BERT-QE-LLM significantly outperforms BERT-Large
+ +# 5 Analysis + +# 5.1 First-round Re-ranker Ablation Analyses + +Intuitively, there are two functions of the first-round ranker: providing the $rel(q,d)$ score in Equation (5) used in the final re-ranking, and providing the top- $k_d$ documents from which the candidate chunks are selected, which are used to compute $rel(\mathcal{C},d)$ in Equation (4). In this section, we investigate the impact of the first-round re-ranker from these two perspectives. In particular, we conduct + +
| Model | P@20 | NDCG@20 | MAP@1K |
| --- | --- | --- | --- |
| BERT-Large | 0.4769 | 0.5397 | 0.3743 |
| BERT-QE-LLL | 0.4888*** | 0.5533*** | 0.3865*** |
| Remove rel(q,d) | 0.4769 | 0.5372 | 0.3767 |
| Chunks from DPH+KL | 0.4759 | 0.5391 | 0.3766 |
+ +Table 5: Ablation analyzes for the first-round re-ranker in BERT-QE-LLL, by removing the $rel(q,d)$ from Equation (5) and by replacing the chunks with the ones selected from top-ranked documents of $\mathrm{DPH + KL}$ when computing $rel(q,\mathcal{C})$ in Equation (4). Statistical significance relative to BERT-Large at p-value $< 0.01$ , 0.05, and 0.1 are denoted as \*\*\*, \*\*, and $*$ , respectively. + +![](images/667baa651dd4044533a8c8c688328f737a1462efb6afa0acfaf3dd24bd788e18.jpg) +Figure 3: Performance of BERT-QE with different configurations of $k_{c}$ and $m$ . The $\circ, \triangle, \square$ correspond to results in terms of P@20, NDCG@20, and MAP@1K, respectively. + +two ablation analyses: (1) we remove the $rel(q, d)$ from BERT-Large in Equation (5), but we continue to use the top documents from BERT-Large to select the top- $k_c$ chunks; and (2) we keep the $rel(q, d)$ from BERT-Large in Equation (5), but we select the top- $k_c$ chunks from documents returned by the unsupervised DPH+KL model. The results are summarized in Table 5. For the first ablation, when $rel(q, d)$ from BERT-Large is not used, BERT-QE cannot outperform BERT-Large. Similarly, in the second ablation, selecting chunks from the documents returned by DPH+KL also prevents BERT-QE from outperforming BERT-Large. These results highlight the importance of both functions of the first-round re-ranker. That is, we need a powerful model for the first-round re-ranker to provide ranking score $rel(q, d)$ and the high-quality feedback documents for the chunk selector. + +# 5.2 Hyper-parameter study + +There are two hyper-parameters in the proposed BERT-QE, namely $k_{c}$ and $m$ . $k_{c}$ is the number of chunks used in the final-round re-ranking as described in Equation (4). Meanwhile, the chunk size $m$ balances between contextual information and noise. Results for different hyper-parameter set + +tings on Robust04 are shown in Figure 3. 
For $k_{c}$, it can be seen that $k_{c} = 10$ and $k_{c} = 20$ achieve similar performance, while $k_{c} = 5$ degrades the results. Since the computational cost of phase three is proportional to $k_{c}$ and the performance gaps between $k_{c} = 10$ and $k_{c} = 20$ are quite small, $k_{c} = 10$ is a reasonable and robust configuration. Among the different settings of $m$, $m = 10$ achieves the best performance and is therefore used in the proposed model.

# 6 Related Work

BERT for IR. Inspired by the success of contextualized models like BERT on NLP tasks, Nogueira and Cho (2019) examine the performance of BERT on passage re-ranking tasks using the MS MARCO and TREC-CAR datasets, and demonstrate superior performance compared with existing shallow ranking models like Co-PACRR (Hui et al., 2018) and KNRM (Xiong et al., 2017). Since then, the application of contextualized BERT models to ranking tasks has attracted much attention. Dai and Callan (2019) split a document into fixed-length passages and use a BERT ranker to predict the relevance of each passage independently; the score of the first passage, the best passage, or the sum of all passage scores is used as the document score. MacAvaney et al. (2019) incorporate BERT's classification vector into existing neural models, including DRMM (Guo et al., 2016), PACRR (Hui et al., 2017), and KNRM (Xiong et al., 2017), demonstrating promising performance boosts. Yilmaz et al. (2019) transfer models across different domains and aggregate sentence-level evidence to rank documents. Nogueira et al. (2019a) propose a multi-stage ranking architecture with BERT that can trade off quality against latency. Wu et al. (2020) propose the context-aware Passage-level Cumulative Gain to aggregate passage-level relevance scores, which is incorporated into a BERT-based model for document ranking.
In addition to these efforts, this work further proposes to exploit the contextualized BERT model to expand the original queries in the proposed BERT-QE framework, boosting ranking performance by using the pseudo relevance feedback information effectively.

Query expansion. Query expansion has long been applied to make use of pseudo relevance feedback information (Hui et al., 2011) to tackle the vocabulary mismatch problem. Keyword query expansion methods, such as Rocchio's algorithm (Rocchio, 1971) and the KL query expansion model (Amati, 2003), have been shown to be effective when applied to text retrieval tasks. Moreover, Metzler and Croft (2007) propose to expand beyond unigram keywords by using a Markov random field model. Some query expansion methods use word embeddings to find terms relevant to the query (Diaz et al., 2016; Zamani and Croft, 2016). Cao et al. (2008) perform query expansion by using classification models to select expansion terms. NPRF (Li et al., 2018) incorporates existing neural ranking models like DRMM (Guo et al., 2016) into an end-to-end neural PRF framework. Rather than expanding the query, Nogueira et al. (2019b) propose a document expansion method named Doc2query, which uses a neural machine translation method to generate queries that each document might answer. Doc2query is further improved by docTTTTTquery (Nogueira and Lin, 2019), which replaces the seq2seq transformer with T5 (Raffel et al., 2019). MacAvaney et al. (2020b) construct query and passage representations and perform passage expansion based on term importance. Despite the promising results of the above document expansion methods for passage retrieval, they have so far only been applied to short-text retrieval tasks to avoid excessive memory consumption. Compared with these established expansion models, the proposed BERT-QE aims to better select and incorporate pieces of information from feedback, taking advantage of BERT's strength in identifying relevant information.
# 7 Conclusion

This work proposes a novel expansion model, coined BERT-QE, to better select relevant information for query expansion. Evaluation on the Robust04 and GOV2 test collections confirms that BERT-QE significantly outperforms BERT-Large at a relatively small extra computational cost (up to $30\%$). In future work, we plan to further improve the efficiency of BERT-QE by combining it with recently proposed pre-computation techniques (Khattab and Zaharia, 2020; MacAvaney et al., 2020a), wherein most of the computation could be performed offline.

# Acknowledgments

This research work is supported by the National Natural Science Foundation of China under Grants no. U1936207 and 61772505, Beijing Academy of Artificial Intelligence (BAAI2019QN0502), the Youth Innovation Promotion Association CAS (2018141), and the University of Chinese Academy of Sciences.

# References

Giambattista Amati. 2003. Probability models for information retrieval based on divergence from randomness. Ph.D. thesis, University of Glasgow, UK.
Gianni Amati, Edgardo Ambrosi, Marco Bianchi, Carlo Gaibisso, and Giorgio Gambosi. 2007. FUB, IASI-CNR and University of Tor Vergata at TREC 2007 blog track. In TREC, volume Special Publication 500-274. National Institute of Standards and Technology (NIST).
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang. 2018. MS MARCO: A human generated machine reading comprehension dataset. CoRR, abs/1611.09268v3.
Guihong Cao, Jian-Yun Nie, Jianfeng Gao, and Stephen Robertson. 2008. Selecting good expansion terms for pseudo-relevance feedback. In SIGIR, pages 243-250. ACM.
Charles L. A. Clarke, Nick Craswell, and Ian Soboroff. 2004. Overview of the TREC 2004 terabyte track. In TREC, volume Special Publication 500-261.
National Institute of Standards and Technology (NIST).
Zhuyun Dai and Jamie Callan. 2019. Deeper text understanding for IR with contextual neural language modeling. In SIGIR, pages 985-988. ACM.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT (1), pages 4171-4186. Association for Computational Linguistics.
Fernando Diaz, Bhaskar Mitra, and Nick Craswell. 2016. Query expansion with locally-trained word embeddings. In ACL (1). The Association for Computer Linguistics.
Jiafeng Guo, Yixing Fan, Qingyao Ai, and W. Bruce Croft. 2016. A deep relevance matching model for ad-hoc retrieval. In CIKM, pages 55-64. ACM.
Kai Hui, Ben He, Tiejian Luo, and Bin Wang. 2011. A comparative study of pseudo relevance feedback for ad-hoc retrieval. In ICTIR, volume 6931 of Lecture Notes in Computer Science, pages 318-322. Springer.
Kai Hui, Andrew Yates, Klaus Berberich, and Gerard de Melo. 2017. PACRR: A position-aware neural IR model for relevance matching. In EMNLP, pages 1049-1058. Association for Computational Linguistics.
Kai Hui, Andrew Yates, Klaus Berberich, and Gerard de Melo. 2018. Co-PACRR: A context-aware neural IR model for ad-hoc retrieval. In WSDM, pages 279-287. ACM.
Omar Khattab and Matei Zaharia. 2020. ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. In SIGIR, pages 39-48. ACM.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR.
Victor Lavrenko and W. Bruce Croft. 2001. Relevance-based language models. In SIGIR, pages 120-127. ACM.
Canjia Li, Yingfei Sun, Ben He, Le Wang, Kai Hui, Andrew Yates, Le Sun, and Jungang Xu. 2018. NPRF: A neural pseudo relevance feedback framework for ad-hoc information retrieval. In EMNLP, pages 4482-4491. Association for Computational Linguistics.
Jimmy J. Lin, Matt Crane, Andrew Trotman, Jamie Callan, Ishan Chattopadhyaya, John Foley, Grant Ingersoll, Craig MacDonald, and Sebastiano Vigna. 2016. Toward reproducible baselines: The open-source IR reproducibility challenge. In ECIR, volume 9626 of Lecture Notes in Computer Science, pages 408-420. Springer.
Weijie Liu, Peng Zhou, Zhiruo Wang, Zhe Zhao, Haotang Deng, and Qi Ju. 2020. FastBERT: a self-distilling BERT with adaptive inference time. In ACL, pages 6035-6044. Association for Computational Linguistics.
Sean MacAvaney, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto, Nazli Goharian, and Ophir Frieder. 2020a. Efficient document re-ranking for transformers by precomputing term representations. In SIGIR, pages 49-58. ACM.
Sean MacAvaney, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto, Nazli Goharian, and Ophir Frieder. 2020b. Expansion via prediction of importance with contextualization. In SIGIR, pages 1573-1576. ACM.
Sean MacAvaney, Andrew Yates, Arman Cohan, and Nazli Goharian. 2019. CEDR: contextualized embeddings for document ranking. In SIGIR, pages 1101-1104. ACM.
Craig Macdonald, Richard McCreadie, Rodrygo L. T. Santos, and Iadh Ounis. 2012. From puppy to maturity: Experiences in developing Terrier. Proc. of OSIR at SIGIR, pages 60-63.
Donald Metzler and W. Bruce Croft. 2007. Latent concept expansion using Markov random fields. In SIGIR, pages 311-318. ACM.
Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with BERT. CoRR, abs/1901.04085.
Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin. 2020. Document ranking with a pretrained sequence-to-sequence model. CoRR, abs/2003.06713.
Rodrigo Nogueira and Jimmy Lin. 2019. From doc2query to docTTTTTquery. Technical report.
Rodrigo Nogueira, Wei Yang, Kyunghyun Cho, and Jimmy Lin. 2019a. Multi-stage document ranking with BERT. CoRR, abs/1910.14424.
Rodrigo Nogueira, Wei Yang, Jimmy Lin, and Kyunghyun Cho. 2019b. Document expansion by query prediction. CoRR, abs/1904.08375.
Yifan Qiao, Chenyan Xiong, Zheng-Hao Liu, and Zhiyuan Liu. 2019. Understanding the behaviors of BERT in ranking. CoRR, abs/1904.07531.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. CoRR, abs/1910.10683.
J. Rocchio. 1971. Relevance feedback in information retrieval. In Gerard Salton, editor, The SMART retrieval system: experiments in automatic document processing, pages 313-323. Prentice Hall, Englewood Cliffs, New Jersey.
Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-read students learn better: The impact of student initialization on knowledge distillation. CoRR, abs/1908.08962.
Ellen M. Voorhees. 2004. Overview of the TREC 2004 robust track. In TREC, volume Special Publication 500-261. National Institute of Standards and Technology (NIST).
Zhijing Wu, Jiaxin Mao, Yiqun Liu, Jingtao Zhan, Yukun Zheng, Min Zhang, and Shaoping Ma. 2020. Leveraging passage-level cumulative gain for document ranking. In WWW '20: The Web Conference 2020, Taipei, Taiwan, April 20-24, 2020, pages 2421-2431. ACM / IW3C2.
Chenyan Xiong, Zhuyun Dai, Jamie Callan, Zhiyuan Liu, and Russell Power. 2017. End-to-end neural ad-hoc ranking with kernel pooling. In SIGIR, pages 55-64. ACM.
Zeynep Akkalyoncu Yilmaz, Wei Yang, Haotian Zhang, and Jimmy Lin. 2019. Cross-domain modeling of sentence-level evidence for document retrieval. In EMNLP/IJCNLP (1), pages 3488-3494. Association for Computational Linguistics.
Hamed Zamani and W. Bruce Croft. 2016. Embedding-based query language models. In ICTIR, pages 147-156. ACM.
Hamed Zamani, Mostafa Dehghani, W. Bruce Croft, Erik G. Learned-Miller, and Jaap Kamps. 2018. From neural re-ranking to neural ranking: Learning a sparse representation for inverted indexing. In CIKM, pages 497-506. ACM.
# A Appendices

# A.1 Interpolation Parameters in BERT-QE
There are two hyper-parameters in BERT-QE, namely $\alpha$ and $\beta$, both of which are interpolation coefficients. $\alpha$ is introduced in Equation (5). In addition, akin to Yilmaz et al. (2019), there is an interpolation with the initial ranking, i.e., DPH+KL, which is applied to all models, including BERT-QE and the baselines, with $\beta$ as the interpolation weight. In the following equation, $M(q,d)$ denotes the score from a re-ranking model, e.g., the BERT-QE model, and $I(q,d)$ denotes the score from the initial ranking, namely DPH+KL. $\alpha$ and $\beta$ are both tuned on the validation set through grid search on $(0,1)$ with stride 0.1, and the models with the best nDCG@20 on the validation sets are chosen. The different configurations of $\alpha$ and $\beta$ and the corresponding results are summarized in Table 6.

$$
\mathrm{final\_score} = \beta \cdot \log (M(q,d)) + (1 - \beta) \cdot I(q,d)
$$

| Fold | P@20 | NDCG@20 | MAP@100 | MAP@1K | $\alpha$ | $\beta$ |
| --- | --- | --- | --- | --- | --- | --- |
| **Robust04** | | | | | | |
| 1 | 0.4730 | 0.5606 | 0.3247 | 0.3765 | 0.4 | 0.9 |
| 2 | 0.4900 | 0.5666 | 0.3909 | 0.4362 | 0.4 | 0.8 |
| 3 | 0.4740 | 0.5328 | 0.2941 | 0.3471 | 0.4 | 0.9 |
| 4 | 0.4684 | 0.5213 | 0.2940 | 0.3440 | 0.6 | 0.9 |
| 5 | 0.5400 | 0.5868 | 0.3709 | 0.4233 | 0.3 | 0.9 |
| **GOV2** | | | | | | |
| 1 | 0.6233 | 0.5728 | 0.2257 | 0.3621 | 0.4 | 0.9 |
| 2 | 0.7397 | 0.6675 | 0.3046 | 0.4334 | 0.7 | 0.9 |
| 3 | 0.7167 | 0.6177 | 0.2558 | 0.4456 | 0.1 | 0.7 |
| 4 | 0.6850 | 0.6027 | 0.2718 | 0.4140 | 0.4 | 0.8 |
| 5 | 0.6300 | 0.5731 | 0.2860 | 0.4240 | 0.4 | 0.8 |

Table 6: Results on validation sets, as well as the chosen interpolation parameters $\alpha$ and $\beta$ based on validation sets for BERT-QE-LLL.
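The interpolation and the grid search over $\beta$ described above can be sketched as follows; `evaluate_ndcg20` is a hypothetical callback standing in for a full validation-set evaluation.

```python
import math

def final_score(m_score, i_score, beta):
    """Interpolate a re-ranker score M(q, d) with the initial DPH+KL
    score I(q, d), following the equation above."""
    return beta * math.log(m_score) + (1.0 - beta) * i_score

def grid_search_beta(evaluate_ndcg20):
    """Pick beta by grid search on (0, 1) with stride 0.1, as described
    above.  evaluate_ndcg20(beta) is a hypothetical callback that
    re-scores the validation run with final_score and returns nDCG@20."""
    betas = [round(0.1 * i, 1) for i in range(1, 10)]
    return max(betas, key=evaluate_ndcg20)
```

The same grid search applies to $\alpha$; only the scoring function being evaluated changes.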
| Size | # of parameters |
| --- | --- |
| Tiny (T) | 4M |
| Small (S) | 11M |
| Medium (M) | 41M |
| Base (B) | 109M |
| Large (L) | 335M |
Table 7: Number of parameters in BERT variants.

# A.2 Number of parameters in BERT variants

We list the number of parameters in the different BERT variants used in BERT-QE in Table 7.
# Beyond Language: Learning Commonsense from Images for Reasoning

Wanqing Cui, Yanyan Lan*, Liang Pang, Jiafeng Guo, Xueqi Cheng

CAS Key Lab of Network Data Science and Technology,

Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China

University of Chinese Academy of Sciences, Beijing, China

cuiwanqing18z, lanyanyan, pangliang, guojiafeng, cxq@ict.ac.cn

# Abstract

This paper proposes a novel approach to learn commonsense from images, instead of limited raw texts or costly constructed knowledge
bases, for the commonsense reasoning problem in NLP. Our motivation comes from the fact that an image is worth a thousand words, where richer scene information could be leveraged to help distill the commonsense knowledge, which is often hidden in language. Our approach, namely Loire, consists of two stages. In the first stage, a bi-modal sequence-to-sequence approach is utilized to conduct the scene layout generation task, based on a text representation model ViBERT. In this way, the required visual scene knowledge, such as spatial relations, will be encoded in ViBERT by the supervised learning process with some bi-modal data like COCO. Then ViBERT is concatenated with a pre-trained language model to perform the downstream commonsense reasoning tasks. Experimental results on two commonsense reasoning problems, i.e. commonsense question answering and pronoun resolution, demonstrate that Loire outperforms traditional language-based methods. We also give some case studies to show what knowledge is learned from images and explain how the generated scene layout helps the commonsense reasoning process.

# 1 Introduction

Commonsense reasoning is an important yet challenging task in artificial intelligence and natural language processing. Take commonsense question answering as an example: given a question and multiple choices, some commonsense knowledge is usually required to choose the correct answer from the provided choices. Table 1 shows some typical commonsense question answering examples extracted from the CommonsenseQA dataset (Talmor et al., 2018).

Table 1: Examples from the CommonsenseQA dataset.
| | |
| --- | --- |
| Question: | Where is a good idea but not required to have a fire extinguisher? |
| Choices: | (A) school bus (B) boat (C) house (D) hospital (E) school |
| Question: | Where can you put a picture frame when it's not hung vertically? |
| Choices: | (A) art show (B) wall (C) newspaper (D) car (E) table |
Existing commonsense reasoning methods mainly utilize raw texts to conduct the data representation and answer prediction process (Talmor et al., 2018; Rajani et al., 2019). However, the background knowledge required in the commonsense reasoning task, such as spatial relations, causes and effects, scientific facts, and social conventions, is usually not explicitly provided by the text. Therefore, it is difficult to capture such knowledge solely from the raw texts. Some other works propose to leverage knowledge bases to extract related commonsense knowledge (Lin et al., 2019; Lv et al., 2019; Kipf and Welling, 2016; Ye et al., 2019; Li et al., 2019c; Ma et al., 2019). However, the construction of a knowledge base is expensive, and the contained knowledge is too limited to fulfill the requirement. Furthermore, most commonsense question answering datasets, such as CommonsenseQA, are constructed from an existing knowledge base, e.g., ConceptNet (Speer et al., 2017), so it is unfair to use the knowledge base in these tasks. To sum up, how to automatically learn commonsense remains a challenging problem in NLP.

Motivated by the fact that images usually contain richer scene information, which can be viewed as an important supplementary resource for perceiving commonsense knowledge, this paper proposes to learn commonsense from images and incorporate such knowledge into the commonsense reasoning process. Take the question 'Where is a good idea but not required to have a fire extinguisher?' shown in Table 1 as an example. Solving this problem requires the strong background knowledge that fire extinguishers are usually installed in public places, such as hospitals, schools, and school buses. Such background knowledge is not explicitly provided by the raw texts, and is meanwhile too abstract and complex to be extracted by current language model techniques. In this case, images will help.
For example, we could find many images where fire extinguishers appear in scenes of public places. Therefore, this commonsense knowledge could be learned by perceiving the scene information of these images, and the corresponding question will be well answered. These analyses are in accordance with Minsky's statement in Minsky (2000): 'perhaps a good architecture theory based on multiple representations and multi-modal reasoning would help us to design better systems that allow us to study and understand commonsense reasoning.'

Our approach, named Loire (Learning Commonsense from Images for Reasoning), consists of two stages, i.e. visual commonsense learning and knowledge-augmented reasoning. In the first stage, a scene layout generation task is conducted on bi-modal data such as the representative benchmark COCO (Lin et al., 2014). Firstly, a text encoder Visual BERT (ViBERT for short) is employed to obtain the representation of a caption. ViBERT is then incorporated into the recurrent encoder-decoder structure for labeled bounding box generation. This module is trained separately by a supervised learning approach, based on the ground-truth bounding boxes of images. In this way, the required visual commonsense knowledge will be encoded in ViBERT. In the following commonsense reasoning stage, the concerned text representations (such as question and answer in CommonsenseQA) are obtained by concatenating ViBERT and a traditional pre-trained language model, e.g. BERT. Then the language model is fine-tuned on the commonsense reasoning data, with ViBERT fixed as prior knowledge. Experimental results on two commonsense reasoning tasks, i.e. CommonsenseQA and WinoGrande (Sakaguchi et al., 2019), demonstrate that the learned commonsense from images brings improvements to traditional models, such as BERT fine-tune (Devlin et al., 2018) and RoBERTa fine-tune (Liu et al., 2019).
We also give some case studies to show how the learned visual commonsense knowledge helps the reasoning process.

To the best of our knowledge, we are the first to propose learning commonsense knowledge from images to facilitate commonsense reasoning in NLP. The proposed model, which uses scene layout generation as supervision, demonstrates a preliminary exploration in this direction. Other methods, like learning commonsense from retrieved relevant images, could also be investigated. We believe this novel approach may provide a new perspective for commonsense reasoning in NLP.

# 2 Related Work

# 2.1 Commonsense Reasoning Methods

There are mainly two kinds of commonsense reasoning methods: the knowledge base approach and the raw text approach.

The knowledge base approach makes use of existing knowledge bases (Speer et al., 2017; Sap et al., 2019) to conduct the commonsense reasoning process. Some methods regard a knowledge base as a supplement and integrate the extracted knowledge with information from the processed text. For example, Mihaylov and Frank (2018) encode external commonsense knowledge as a key-value memory. Lv et al. (2019) and Lin et al. (2019) extract knowledge from ConceptNet and Wikipedia to construct graphs, then use Graph Convolutional Networks (Kipf and Welling, 2016) for modeling and inference. Other methods (Zhong et al., 2018; Ma et al., 2019; Ye et al., 2019; Li et al., 2019c) use knowledge bases as another corpus for pre-training, and then refine the models on task-specific content.

Besides extracting knowledge from knowledge bases, some other methods directly learn commonsense knowledge from raw texts. A common way is to use pre-trained language models. Recently, Talmor et al. (2018); Da and Kasai (2019); Sakaguchi et al. (2019); Zhou et al. (2019) have made comprehensive empirical studies and shown that pre-trained language models significantly outperform traditional methods on the task of commonsense reasoning.
In addition, Da and Kasai (2019) prove that pre-trained language models have the ability to encode some commonsense knowledge in the embedding space through an attribute classification evaluation. However, they also show that the encoded commonsense knowledge is limited, which could be improved by introducing some supplementary data, like ConceptNet.

Moreover, some methods propose to leverage additional text information/data for better commonsense reasoning. Tandon et al. (2018) use commonsense knowledge as constraints and a large-scale web corpus to steer the model away from unlikely predictions. Rajani et al. (2019) incorporate generated explanations into the training of language models for enhancement. Xia et al. (2019) leverage two auxiliary relation-aware tasks to better model the interactions between question and candidate answers. Chalier et al. (2020) propose a multi-faceted model of commonsense knowledge statements to capture more expressive meta-properties.

Different from the above approaches, we propose to learn commonsense from images and incorporate this visual knowledge into the following commonsense reasoning process.

# 2.2 Bi-modal Language Models

Recently, some transformer-based bi-modal language models (Su et al., 2019; Li et al., 2019a; Alberti et al., 2019; Li et al., 2019b; Tan and Bansal, 2019; Lu et al., 2019) have been proposed to tackle bi-modal reasoning problems in computer vision, such as visual question answering, visual commonsense reasoning, and image retrieval. They first encode the image representation and text representation into a shared embedding space, then apply the joint embeddings for downstream reasoning. At first glance, these models are quite similar to ours. However, we should make it clear that they are fundamentally different.
The purpose of a bi-modal language model is to capture a cross-modal alignment between image and text to benefit the bi-modal task, which is only available when both image and text data are provided as input simultaneously. That is why they are usually popular in bi-modal scenarios like VQA. If we want to apply these models to commonsense reasoning in NLP, it is still unclear how to find images corresponding to the question and how to employ the joint embeddings in the downstream NLP reasoning tasks. Our model also adopts image data as a supplement, but the modeling approach is different from bi-modal language models. We first encode the visual commonsense knowledge into ViBERT by the upstream layout generation process on bi-modal data, then apply ViBERT as fixed prior knowledge to fine-tune the pre-trained language models for the downstream NLP reasoning tasks.

![](images/855766dc48a7ca533ad8b78f9f85e7b8b21451d6cc9479aabc6ff0e1f9309f93.jpg)

![](images/cd42a2c72c9cd27d28b678fef38f3aa5ca6dce391a9e56914e02e9fbec7618d9.jpg)

![](images/795929cacf4f0e970bdc7112a15ad44658afbf312bd8bbf0e3c6dac7b934f8fb.jpg)
Figure 1: Images and the associated bounding boxes from COCO with captions similar to 'a woman eats in the restaurant'.

# 3 Visual Commonsense Knowledge

Images are made up of individual pixels, which are detailed but sometimes noisy. Therefore, how to extract useful commonsense knowledge from images remains a challenging problem. Inspired by knowledge bases in NLP, where knowledge is usually represented as a triple describing the relation between two entities, we focus on the attributes and relations of the objects in images. Clearly, such information can be well captured by the scene layout. Take the sentence 'a woman eats in the restaurant' as an example. Images related to this sentence are shown in Figure 1.
We can see that the scene layouts of these images, including bounding boxes and labels, contain a lot of useful information for commonsense reasoning:

(1) Size attributes and relations can be easily obtained from the bounding boxes in images. For instance, the bounding boxes of tableware, e.g. forks, cups, and spoons, are always smaller than the bounding box of the dining table.
(2) Position can be accurately captured by the coordinates of each bounding box, which helps in understanding some abstract commonsense. For instance, through the relative positions between the bounding boxes of a person and a table, one can figure out what "next to" means. Besides, since the bounding boxes of person and table are always close in an eating scene, one can learn that if a person is eating, he will be next to the table instead of standing far away, which provides some detailed information for the abstract word 'eating'.
(3) Co-occurrence relations of objects are expressed by the labels of bounding boxes. For instance, images of 'a woman eats in the restaurant' often contain labels of table, chair, person, food, and tableware. So from the co-occurrence of these objects, one can infer that it is a dinner or restaurant scenario, which offers rich context information for the abstract word 'eating'.

From the above analysis, images usually contain rich scene information, such as size, position, and co-occurrence relations, which is useful for understanding the commonsense knowledge hidden in language. So we propose to learn such visual commonsense knowledge and incorporate it into commonsense reasoning models in NLP.

![](images/1e76b730ae8a9ef9fc9e76bd559497257aac45a9e8fc2a1ffdd287850d125f02.jpg)
Figure 2: The recurrent structure of the visual commonsense learning stage.

# 4 Our Approach: Loire

Now we introduce Loire, which includes two stages, i.e. visual commonsense learning and knowledge-augmented reasoning.
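The three layout cues described in Section 3 can be made concrete with a toy extraction routine over labeled bounding boxes. This is purely illustrative and not part of the Loire model: it assumes boxes given as `(label, x, y, w, h)` tuples with a top-left origin and uses naive size and position tests.

```python
def layout_relations(boxes):
    """Toy extraction of size, position, and co-occurrence cues from
    labeled bounding boxes given as (label, x, y, w, h) tuples.
    Purely illustrative; not the Loire model."""
    # (3) Co-occurrence: which object categories appear together.
    cooccurrence = {label for (label, _, _, _, _) in boxes}
    relations = []
    for la, xa, ya, wa, ha in boxes:
        for lb, xb, yb, wb, hb in boxes:
            if la == lb:
                continue
            # (1) Size: compare bounding-box areas.
            if wa * ha < wb * hb:
                relations.append((la, "smaller than", lb))
            # (2) Position: a naive horizontal-ordering test.
            if xa + wa < xb:
                relations.append((la, "left of", lb))
    return cooccurrence, relations
```

For example, a small fork box to the left of a large table box yields the relations `("fork", "smaller than", "table")` and `("fork", "left of", "table")`.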
# 4.1 Visual Commonsense Learning

The visual commonsense learning stage is conducted on bi-modal data, like the typical image caption data COCO. For a given image, the required scene layout is generated by a sequence-to-sequence approach, shown in Figures 2 and 3. This module consists of a text encoder, namely ViBERT, to map the input sentence to a latent representation, a layout encoder to encode the currently generated scene layout, and a bounding box decoder to generate the next bounding box and its label.

Specifically, we use the following notation. Let the input image caption be $S = \{w_{1}, w_{2}, \ldots, w_{L}\}$, where $w_{i}$ stands for the $i$-th word in the sentence, and $L$ is the sentence length. The output is a set of labeled bounding boxes $B_{1:T} = \{B_{1}, \ldots, B_{T}\}$, where each labeled bounding box $B_{t}$ contains the position, size, and category label of a corresponding object at the $t$-th step. So we denote $B_{t} = (b_{t}, l_{t})$, where $b_{t} = [b_{t}^{x}, b_{t}^{y}, b_{t}^{w}, b_{t}^{h}] \in \mathbb{R}^{4}$ stands for the 2-dimensional coordinates, width, and height, respectively. $l_{t} \in \{0,1\}^{C+1}$ is a one-hot vector indicating the category label of an object, where the additional $(C+1)$-th class is defined as a special indicator for the end of generation.

# 4.1.1 ViBERT: Text Encoder

The text encoder ViBERT is fine-tuned from BERT, a popular pre-trained language model introduced in Devlin et al. (2018). The network structure is a typical transformer-based architecture containing multiple transformer blocks of multi-headed scaled dot-product attention and fully connected layers (Vaswani et al., 2017). It has been proven to be effective in many natural language processing tasks.

![](images/f089a77a7d2010d560645f3ca6ab9112f5a8233afec6798dc79eefd518f8ceb0.jpg)
Figure 3: An illustration of the $t$-th layout generation.

To adapt to the setting of BERT, the image caption is preprocessed as follows.
The special tokens '[CLS]' and '[SEP]' are inserted at the beginning and the end of the sentence, to obtain $S = \{w_0, w_1, \dots, w_{L+1}\}$ , where $w_0, w_{L+1}$ stand for [CLS] and [SEP], respectively. After that, each word $w_i$ is mapped to its word embedding vector $e_i^S$ by ViBERT, so that $e(S) = \{e_0^S, e_1^S, \dots, e_{L+1}^S\}$ . Following BERT, the output of '[CLS]' from the last transformer layer is fed into a pooling layer to obtain the representation of the whole sentence $e^S$ , + +$$ +e ^ {S} = \tanh \left(f \left(e _ {0} ^ {S}\right)\right), \tag {1} +$$ + +where $f$ is a single-layer perceptron. + +# 4.1.2 Layout Encoder + +At each time step $t$ , a layout encoder is utilized to encode the state of the current generated layout $B_{0:t-1}$ . Specifically, we construct a layout matrix $I_{t-1} \in \{0,1\}^{C \times W \times H}$ , where $W, H$ are the width and height of the layout, respectively. The value of $i_{lwh}$ in $I_{t-1}$ indicates whether the bounding box of object $l$ covers the pixel at coordinate $[w,h]$ . A blank layout without any object is used to initialize $B_0$ . The layout encoder takes the layout matrix and the previous layout representation as inputs, and uses a convolutional GRU architecture to output the representation of the current layout $e_t^I$ as follows: + +$$ +e _ {t} ^ {I} = \operatorname {C o n v G R U} \left(I _ {t - 1}, e _ {t - 1} ^ {I}\right). \tag {2} +$$ + +# 4.1.3 Bounding Box Decoder + +At each time step $t$ , a bounding box decoder is used to predict the labeled bounding box of the next object, based on the caption representation $e^{S}$ from ViBERT and the current layout representation $e_t^I$ from the layout encoder. Specifically, we decompose the conditional joint bounding box probability as $p(b_{t},l_{t}|S,B_{0:t - 1}) = p(l_{t}|S,B_{0:t - 1})\,p(b_{t}|S,B_{0:t - 1},l_{t})$ . The decoder first samples a class label $l_{t}$ according to $p(l_{t}|S,B_{0:t - 1})$ : 
+ +$$ +p (l_{t} \mid S, B_{0:t-1}) = \operatorname{Softmax} \left(g \left(u_{t}^{l}, c_{t}^{l}\right)\right), +$$ + +$$ +u_{t}^{l} = \phi^{l} \left(e_{t}^{I}, e^{S}\right), \quad c_{t}^{l} = \varphi^{l} \left(\left[ u_{t}^{l}; l_{1:t-1} \right], e(S)\right), +$$ + +where $g$ is a two-layer perceptron, $\phi^l$ is a convolutional network with spatial attention on $e_t^I$ (Xu et al., 2015), and $\varphi^l$ is a text-based attention module (Luong et al., 2015), which is used to focus on different parts of the caption. + +After that, the decoder finds $b_{t}$ for object $l_{t}$ based on $p(b_{t}|S,B_{0:t - 1},l_{t})$ , which is obtained by a regression network $\theta$ with $\hat{b}_t = (\hat{x}_t,\hat{y}_t,\hat{w}_t,\hat{h}_t) = \theta (c_t^b,u_t^b)$ . The quantities $u_{t}^{b}$ and $c_{t}^{b}$ are computed similarly to $u_{t}^{l}$ and $c_{t}^{l}$ . That is, + +$$ +u_{t}^{b} = \phi^{b} \left(e_{t}^{I}, e^{S}\right), \quad c_{t}^{b} = \varphi^{b} \left(\left[ u_{t}^{b}; l_{t} \right], e(S)\right), +$$ + +where $\phi^b$ is an image-based attention module used to find an appropriate position, and $\varphi^b$ is another text-based attention module, focusing more on the contents related to the current object. + +# 4.1.4 Training + +To avoid the expensive cost of training ViBERT from scratch, we initialize ViBERT with the parameter weights of $\mathrm{BERT}_{BASE}$ released by Google$^{1}$ . The scene layout generator is then trained by minimizing the negative log-likelihood of the ground-truth object labels and the mean-square error of the ground-truth bounding box coordinates as follows: + +$$ +\mathcal{L}_{\mathrm{layout}} = \sum_{t} \left(\left\| \hat{b}_{t} - b_{t}^{*} \right\|_{2} - \log p \left(l_{t}^{*}\right)\right), \tag {3} +$$ + +where $b_{t}^{*}$ and $l_{t}^{*}$ stand for the ground-truth bounding box and label, respectively. 
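As a concrete reading of Eq. (3), each generation step contributes the Euclidean distance between the predicted and gold box coordinates plus the negative log-probability of the gold label. A minimal sketch of this computation (our own, with illustrative toy values, not the paper's code):

```python
import math

# Hedged sketch of the layout loss in Eq. (3): an L2 term on predicted
# box coordinates plus negative log-likelihood of the gold label,
# summed over generation steps.

def layout_loss(pred_boxes, gold_boxes, label_probs, gold_labels):
    """
    pred_boxes / gold_boxes: lists of (x, y, w, h) tuples
    label_probs: per-step probability distributions over labels
    gold_labels: per-step gold label indices
    """
    loss = 0.0
    for b_hat, b_star, p, l_star in zip(pred_boxes, gold_boxes,
                                        label_probs, gold_labels):
        l2 = math.sqrt(sum((a - b) ** 2 for a, b in zip(b_hat, b_star)))
        loss += l2 - math.log(p[l_star])
    return loss

# One step: perfect box prediction, gold label gets probability 0.7,
# so the loss reduces to -log(0.7).
pred = [(0.1, 0.2, 0.3, 0.4)]
gold = [(0.1, 0.2, 0.3, 0.4)]
probs = [[0.1, 0.7, 0.2]]
labels = [1]
loss = layout_loss(pred, gold, probs, labels)
```

Note that with a perfect box and a confident correct label the loss approaches zero, which is the behavior Eq. (3) encourages.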
As for the generation order, we have observed that the model has difficulty converging with an unfixed order, which may be caused by dependencies among different bounding boxes. So we follow existing image generation methods and simply fix the order from bottom to top and left to right. + +It should be noted that although we use BERT as a text encoder on image captions, we do not optimize the objective of the language model, i.e. the masked language model (MLM) objective. This is to avoid the possibility that the improvement on the downstream reasoning task is due to the use of more text data, instead of visual commonsense knowledge from images. In our experiments, we have conducted some ablation studies to validate this point. + +# 4.2 Knowledge-Augmented Reasoning + +After using scene layout generation to encode visual commonsense knowledge into ViBERT, we can apply ViBERT as a fixed prior to enhance downstream commonsense reasoning tasks. + +Here we use CommonsenseQA as an example to demonstrate our method. For a given question $q_{i} \in Q$ , where $Q$ is the question set, and its candidate answers $A_{i} = \{a_{i}^{1}, \ldots, a_{i}^{n}\}$ , where $n$ denotes the number of choices, a common existing method first concatenates the question and each candidate answer into a raw representation $[q_{i}; a_{i}^{j}]$ . Then a pre-trained language model is applied to obtain a semantic representation, denoted as $E_{i,j}^{(1)} = \mathrm{LM}([q_{i}; a_{i}^{j}])$ . In our method, ViBERT is applied on the raw representation $[q_{i}; a_{i}^{j}]$ to obtain an image scene-aware text representation, denoted as $E_{i,j}^{(2)} = \mathrm{ViBERT}([q_{i}; a_{i}^{j}])$ . Since the two representations are not always in the same space, we use a projection matrix $M$ to project $E_{i,j}^{(2)}$ to the space of $E_{i,j}^{(1)}$ . After that, they are simply concatenated and fed into a linear layer to compute the probability $p(a_{i}^{j} \mid q_{i})$ as follows. 
+ +$$ +\operatorname{score} \left(a_{i}^{j}\right) = h \left(\left[ E_{i, j}^{(1)}; M^{T} E_{i, j}^{(2)} \right]\right), +$$ + +$$ +p \left(a_{i}^{j} \mid q_{i}\right) = \operatorname{Softmax} \left(\left\{\operatorname{score} \left(a_{i}^{j}\right) \right\}_{j}\right), +$$ + +where $h$ is a simple linear layer for classification, and the parameters of both the language model and the linear layer are fine-tuned on the downstream commonsense reasoning task. In the training process, the objective is to minimize the negative log-likelihood of the ground-truth answers $a_{i}^{*}$ as follows; at inference time, the choice with the highest score is selected as the answer. + +$$ +\mathcal {L} _ {q a} = - \sum_ {i} \log p \left(a _ {i} ^ {*} \mid q _ {i}\right). \tag {4} +$$ + +# 5 Experiments + +This section demonstrates our experiments on two commonsense reasoning tasks, i.e. commonsense question answering and pronoun resolution. + +# 5.1 Datasets + +CommonsenseQA $^2$ (Talmor et al., 2018) is a typical commonsense question answering dataset, which consists of 12,102 natural language questions generated from ConceptNet. It covers various types of commonsense knowledge, including spatial, causal, social, and activity knowledge. Each question has five candidate answers. Table 5 shows 3 question-answering examples. In our experiments on this dataset, we use the official random-split setting for fair comparison with the reported results on CommonsenseQA's leaderboard. + +WinoGrande $^{3}$ (Sakaguchi et al., 2019) is a challenging pronoun resolution dataset extended from the original Winograd Schema Challenge (Levesque et al., 2012). The task is to resolve a pronoun (represented as a blank line) to one of its two probable co-referents in the sentence. For this task, each sentence is treated as a fill-in-the-blank question with binary choices. 
The blank in the sentence is replaced by each option, and the model is required to provide the likelihood of the two resulting sentences for determination. The training set of WinoGrande comes in five different sizes, i.e. XS (160), S (640), M (2,558), L (10,234) and XL (40,398). We experiment on all five sizes and report their results for analysis. + +# 5.2 Experimental Settings + +For the upstream scene layout generation module, we train our ViBERT on 2 Nvidia K80 GPUs with a batch size of 32 for 15 epochs. The learning rate is $5e^{-5}$ , and the optimizer is Adam with a StepLR schedule, where the step size is 3 and $\gamma$ is 0.8. In the training process, the bi-modal data COCO (Lin et al., 2014) is used to train our layout generation model. COCO consists of 123,287 images over 80 object categories, and each image is associated with instance-wise annotations and 5 image captions. For better training, we ignore small objects and filter out images with more than 20 objects. This leaves us with 119,146 images. We use the official train and validation splits, and set the max sequence length to 128. + +For the downstream commonsense reasoning module, we choose BERT and RoBERTa as our baseline models, which are fundamental and competitive models for NLP tasks. + +BERT (Devlin et al., 2018) is a powerful contextualized word representation model and has been proven helpful in many NLP tasks. We apply uncased $\mathrm{BERT}_{\mathrm{BASE}}$ to the downstream commonsense reasoning tasks by encoding each question and its candidate answers as a series of delimiter-separated sequences, i.e. '[CLS] question [SEP] choice [SEP]' for CommonsenseQA and '[CLS] segment1 [SEP] option segment2 [SEP]' for WinoGrande. The representation of '[CLS]' is then fed into a BERT-Pooler and converted to predictions by a linear classification layer. 
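The delimiter-separated inputs described above can be assembled as plain strings, e.g. (a sketch of the format only; in practice the tokenizer inserts these special tokens itself):

```python
# Sketch of the delimiter-separated input formats described in the text.
# Real BERT pipelines let the tokenizer add '[CLS]'/'[SEP]'; the strings
# here just make the layout of the two formats explicit.

def commonsenseqa_input(question, choice):
    # '[CLS] question [SEP] choice [SEP]'
    return f"[CLS] {question} [SEP] {choice} [SEP]"

def winogrande_input(segment1, option, segment2):
    # '[CLS] segment1 [SEP] option segment2 [SEP]', with the blank in the
    # sentence replaced by the option.
    return f"[CLS] {segment1} [SEP] {option} {segment2} [SEP]"

ex = commonsenseqa_input("Where would you find magazines?", "bookstore")
```

One such sequence is built per candidate answer, and the '[CLS]' representation of each is scored to pick the most likely choice.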
+ +RoBERTa (Liu et al., 2019) is similar to BERT, but is pre-trained with a larger amount of training data and different techniques such as dynamic masking. Besides $\mathrm{RoBERTa}_{\mathrm{BASE}}$ , we also compare with a fine-tuned $\mathrm{RoBERTa}_{\mathrm{LARGE}}$ following the implementation released in fairseq$^4$. Following fairseq, we prepend a prefix of 'Q:' to the question and 'A:' to the answer for CommonsenseQA, which was found to be helpful. + +Loire Using BERT and RoBERTa as the language model for text, we concatenate the representations from ViBERT and the pre-trained language model, and obtain two versions of our model, denoted as Loire-BERT and Loire-RoBERTa, respectively. Since ViBERT is a static feature extractor and does not need to be fine-tuned on the downstream reasoning tasks, our running time is similar to the baselines, without extra time cost. + +We train all models on 2 Nvidia K80 GPUs using AdamW (Loshchilov and Hutter, 2018) with the WarmupLinearSchedule approach (He et al., 2016) for optimization, where the warmup percentage is set to 0.1 and 0.05 for BERT and RoBERTa, respectively. We use grid search for hyper-parameter tuning. The learning rate, number of epochs and batch size are chosen from $\{1, 2\} \times 10^{-5}$ , $\{3,5,8\}$ , and $\{8,16,32\}$ , respectively. The best development set accuracy from 5 random restarts of fine-tuning is reported, with the standard deviation. The best models on the development dataset are then submitted to the official private test dataset to obtain the test results. All our code and data are publicly available at https://github.com/VickiCui/Loire. + +# 5.3 Experimental Results + +On the dev set, the label-prediction accuracy of the layout generator is $63.4\%$ , and the mean + +Table 2: Results on CommonsenseQA (%), where ‘*’ indicates the reported result from the leaderboard. + +
| Model | Dev Acc. | Dev Avg. | Test Acc. |
| --- | --- | --- | --- |
| Ott et al. (2019) | - | - | 72.1* |
| $\mathrm{RoBERTa}_{\mathrm{LARGE}}$ | 77.47 | 76.65±0.58 | 71.58 |
| Loire-$\mathrm{RoBERTa}_{\mathrm{LARGE}}$ | 77.94 | 77.56±0.28 | 71.93 |
| $\mathrm{RoBERTa}_{\mathrm{BASE}}$ | 65.47 | 64.96±0.62 | 59.82 |
| Loire-$\mathrm{RoBERTa}_{\mathrm{BASE}}$ | 66.67 | 66.12±0.47 | 60.61 |
| $\mathrm{BERT}_{\mathrm{BASE}}$ | 59.71 | 58.95±0.65 | 53.00* |
| Loire-$\mathrm{BERT}_{\mathrm{BASE}}$ | 61.19 | 60.07±0.58 | 54.91 |
| Human | - | - | 88.00 |
+ +Table 3: Results on WinoGrande with 5 training sizes, where \* indicates the reported result from the leaderboard. + +
| Model | XS | S | M | L | XL |
| --- | --- | --- | --- | --- | --- |
| *Dev Acc. (%)* | | | | | |
| $\mathrm{BERT}_{\mathrm{BASE}}$ | 50.76 | 51.61 | 52.81 | 55.26 | 60.19 |
| Loire-$\mathrm{BERT}_{\mathrm{BASE}}$ | 51.61 | 52.34 | 53.9 | 56.74 | 61.50 |
| $\mathrm{RoBERTa}_{\mathrm{BASE}}$ | 51.72 | 54.71 | 57.91 | 62.52 | 67.94 |
| Loire-$\mathrm{RoBERTa}_{\mathrm{BASE}}$ | 53.26 | 55.18 | 58.93 | 64.09 | 69.21 |
| $\mathrm{RoBERTa}_{\mathrm{LARGE}}$ | 52.40 | 61.95 | 68.67 | 75.14 | 79.08 |
| Loire-$\mathrm{RoBERTa}_{\mathrm{LARGE}}$ | 52.64 | 63.06 | 70.40 | 76.56 | 81.06 |
| *Test Acc. (%)* | | | | | |
| $\mathrm{BERT}_{\mathrm{BASE}}$ | 49.75 | 49.75 | 49.01 | 51.50 | 54.73 |
| Loire-$\mathrm{BERT}_{\mathrm{BASE}}$ | 49.86 | 49.29 | 52.07 | 53.88 | 59.54 |
| $\mathrm{RoBERTa}_{\mathrm{BASE}}$ | 50.93 | 52.01 | 57.67 | 61.35 | 65.42 |
| Loire-$\mathrm{RoBERTa}_{\mathrm{BASE}}$ | 53.42 | 53.42 | 56.82 | 62.31 | 67.12 |
| Levesque et al. (2012) | 50.37 | 58.63 | 67.57 | 74.70 | 79.12 |
| Yang et al. (2020) | 55.04 | 62.37 | 66.72 | 74.19 | 78.21 |
| Loire-$\mathrm{RoBERTa}_{\mathrm{LARGE}}$ | 53.14 | 63.27 | 70.51 | 76.12 | 77.99 |
+ +square error of bounding-box prediction is 0.015 (the coordinates of the bounding boxes have been standardized to between 0 and 1). This shows that the layout generator performs well and can generate good-quality scene layouts, i.e. the model does learn the corresponding knowledge. + +Table 2 shows the experimental results on CommonsenseQA. From the results, we can see that our approach leads to a $1.91\%$ , $0.79\%$ and $0.35\%$ improvement in terms of accuracy on the test set, as compared with $\mathrm{BERT}_{\mathrm{BASE}}$ , $\mathrm{RoBERTa}_{\mathrm{BASE}}$ and $\mathrm{RoBERTa}_{\mathrm{LARGE}}$ , respectively. Similar results are observed on the development set. Besides, the standard deviation over several random runs on the development set becomes smaller when using Loire, which demonstrates better stability. One may argue that the improvement over $\mathrm{RoBERTa}_{\mathrm{LARGE}}$ is marginal, and that our result is worse than the best result of $\mathrm{RoBERTa}_{\mathrm{LARGE}}$ on the leaderboard (Ott et al., 2019). It should be noted that the best leaderboard result of $\mathrm{RoBERTa}_{\mathrm{LARGE}}$ is based on validation performance after 100 trials, whereas we only conducted five trials due to our limited computing resources. The purpose of this paper is to propose a + +Table 4: Accuracy $(\%)$ of different models on the CommonsenseQA development set. + +
| Model | Dev Acc. | Dev Avg. |
| --- | --- | --- |
| $\mathrm{BERT}_{\mathrm{BASE}}$ | 59.71 | 58.95±1.03 |
| +$\mathrm{BERT}^{*}_{\mathrm{BASE}}$ | 59.89 | 59.12±0.65 |
| +$\mathrm{BERT}_{\mathrm{CAPTION}}$ | 60.29 | 59.47±0.60 |
| +ViBERT (ours) | 61.19 | 60.07±0.58 |
+ +new perspective of learning commonsense from images, rather than achieving a SOTA result. We can clearly see improvements over the baseline models. It is acceptable that when using more complicated language models, the effect of visual knowledge is weakened. However, there are indeed ways to improve the current results, which will be investigated in our future work. For example, we have filtered out small objects to make training easier, which may result in insufficient details. Besides, the adopted bi-modal data COCO is very limited, with only 80 categories of objects. On the one hand, the coverage of the commonsense may be restricted. On the other hand, the layouts generated by our model may not be very accurate for some objects. For instance, in our case study the generated layout for 'laundry' is 'a suitcase', since COCO does not contain clothes. We plan to employ larger data such as Visual Genome (Krishna et al., 2017) to tackle this problem. + +Table 3 shows the experimental results on WinoGrande. Specifically, models are trained on the five different training data sizes separately, while the development set and test set are identical for all models. As for the accuracy on the development set, we can see that Loire achieves consistent performance improvements across different sizes of training data, as compared with $\mathrm{BERT}_{\mathrm{BASE}}$ , $\mathrm{RoBERTa}_{\mathrm{BASE}}$ and $\mathrm{RoBERTa}_{\mathrm{LARGE}}$ . As for the test accuracy (Levesque et al. (2012) and Yang et al. (2020) are two test results of $\mathrm{RoBERTa}_{\mathrm{LARGE}}$ from the leaderboard), except for a few cases, Loire consistently outperforms the corresponding baselines across different sizes of training data. These results show the effectiveness of incorporating visual scene knowledge for commonsense reasoning. 
+ +# 5.4 Ablation Study + +In order to validate that the performance improvement owes to the introduction of learned visual commonsense knowledge, rather than using more parameters or data, we conduct the following ablation studies on CommonsenseQA. The results are + +Table 5: Case study examples from the dev set of CommonsenseQA. + +
| | |
| --- | --- |
| Question1 | The man got a pail to catch the draining motor oil, where was he likely doing this at home? |
| Choices1 | (A) garage (B) hardware store (C) utility room (D) wishing well (E) laundry |
| Question2 | Where would a person be doing when having to wait their turn? |
| Choices2 | (A) have patience (B) get in line (C) sing (D) stand in line (E) turn left |
| Question3 | Where would you find magazines along side many other printed works? |
| Choices3 | (A) doctor (B) bookstore (C) market (D) train station (E) mortuary |
+ +shown in Table 4, where '+ViBERT' denotes Loire. + +Firstly, we study whether the improvement owes to the use of additional parameters. To this end, we compare with $\mathrm{BERT}_{\mathrm{BASE}}$ concatenated with frozen $\mathrm{BERT}_{\mathrm{BASE}}^{*}$ features, whose number of parameters is the same as that of $\mathrm{BERT}_{\mathrm{BASE}}$ + ViBERT. From the results, we can see that, under the same setting, the accuracy of $\mathrm{BERT}_{\mathrm{BASE}}$ concatenated with frozen $\mathrm{BERT}_{\mathrm{BASE}}^{*}$ features is $59.89\%$ on the dev set, which is worse than ours. + +Then we study whether the improvement owes to the use of additional text data, i.e. the captions in COCO. We first fine-tune a $\mathrm{BERT}_{\mathrm{BASE}}$ model on COCO captions with the MLM objective, denoted as $\mathrm{BERT}_{\mathrm{CAPTION}}$ . Then we concatenate it with $\mathrm{BERT}_{\mathrm{BASE}}$ and perform the same downstream fine-tuning as in Loire-$\mathrm{BERT}_{\mathrm{BASE}}$ . We also randomly initialized this model 5 times. The best dev result is $60.29\%$ , which is worse than Loire's. + +In summary, these ablation studies show that the commonsense knowledge learned from images, rather than the introduction of more parameters or text data, is responsible for the improvements. + +# 5.5 Case Study + +To understand what type of commonsense knowledge is learned by Loire, we analyze the relations between the question concept and the answer concept in CommonsenseQA according to ConceptNet. For the questions that are answered correctly by our model but wrongly by the text-only model, i.e. those that can be seen as benefiting from images, the top three relation types are AtLocation (36.4%), Causes (12.7%) and RelatedTo (8.5%). These relationships can indeed be expressed through the scenes shown in images, so this is accordant with our motivation: the introduction of images can indeed play a complementary role. For complete statistics of relation types, please see Appendix A. + +Question: The man got a pail to catch the draining motor oil, where was he likely doing this at home? + +| Question: Person, Truck | (A) Garage: Car | (B) Hardware Store: TV |
| --- | --- | --- |
| (C) Utility Room: TV, Chair | (D) Wishing Well: Person | (E) Laundry: Suitcase |

Figure 4: Scene layout of the first example in Table 5. + +Table 5 gives three examples from the development set of CommonsenseQA that benefit from visual commonsense knowledge. To better understand how visual commonsense helps, we generate the layout for each pair of question and choice with the trained upstream layout generator. Figure 4 shows the layouts of Question1 and its choices; the others can be found in Appendix B due to space limitations. + +Take the first question as an example. Language models mainly rely on word co-occurrence or semantics for modeling, so they easily make the wrong choice of 'utility room' as the answer, because it is difficult to capture the commonsense of 'got a pail to catch the draining motor oil in a garage' from language alone. From Figure 4, we can see that the layouts of the question, the correct answer 'garage' and the wrong answer 'utility room' are 'a person' with 'a truck', 'cars', and 'chairs' with 'old televisions', respectively. That is to say, we can learn from images that 'got a pail to catch the draining motor oil' usually happens in a scene where a person is with a truck. By encoding this knowledge into ViBERT, it is easy for the language model to connect the similarity between 'truck' and 'cars', so Loire is able to choose the correct answer 'garage' instead of 'utility room'. + +# 6 Conclusion + +In this paper, we propose a novel two-stage pipeline approach, Loire, to learn commonsense from images. In the first stage, a text representation model ViBERT is trained via a bi-modal sequence-to-sequence approach for scene layout generation on COCO, so that visual commonsense knowledge such as spatial relations is encoded in ViBERT by the supervision of captions and image layouts. After that, ViBERT is concatenated with a pre-trained language model to perform a knowledge-augmented reasoning process. 
Experimental results show that Loire outperforms the state-of-the-art language models BERT and RoBERTa on two NLP commonsense reasoning tasks, i.e. the commonsense question answering dataset CommonsenseQA and the pronoun resolution dataset WinoGrande. The ablation and case studies further show that the improvements are truly owing to the learned visual commonsense knowledge, and illustrate how this knowledge helps the NLP reasoning process. + +The current approach is a preliminary study on the proposed direction of using images to automatically learn commonsense knowledge to facilitate NLP reasoning tasks, and it could be improved in the following aspects. Firstly, larger bi-modal data could be employed to learn more of the commonsense required in the reasoning tasks. Secondly, bi-modal methods other than training ViBERT under the supervision of scene layout generation may be investigated. Thirdly, designing intrinsic evaluations to help understand what is learned by Loire is still challenging and will be considered in the future. + +# Acknowledgement + +This work was supported by the National Key R&D Program of China (2020AAA0105200), the National Natural Science Foundation of China (NSFC) under Grants No. 61722211, 61773362, 61872338, and 61906180, the Lenovo-CAS Joint Lab Youth Scientist Project, the Foundation and Frontier Research Key Program of Chongqing Science and Technology Commission (No. cstc2017jcyjBX0059), and the Tencent AI Lab Rhino-Bird Focused Research Program (No. JR202033). + +# References + +Chris Alberti, Jeffrey Ling, Michael Collins, and David Reitter. 2019. Fusion of detected objects in text for visual question answering. arXiv: Computation and Language. +Yohan Chalier, Simon Razniewski, and Gerhard Weikum. 2020. Joint reasoning for multifaceted commonsense knowledge. arXiv preprint arXiv:2001.04170. + +Jeff Da and Jungo Kasai. 2019. 
Cracking the contextual commonsense code: Understanding commonsense reasoning aptitude of deep contextual representations. arXiv preprint arXiv:1910.01157. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778. +Thomas N Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. +Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32-73. +Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning. +Gen Li, Nan Duan, Yuejian Fang, Ming Gong, Daxin Jiang, and Ming Zhou. 2019a. Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training. arXiv: Computer Vision and Pattern Recognition. +Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019b. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557. +Shiyang Li, Jianshu Chen, and Dian Yu. 2019c. Teaching pretrained models with commonsense reasoning: A preliminary kb-based approach. arXiv preprint arXiv:1909.09743. +Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. 2019. Kagnet: Knowledge-aware graph networks for commonsense reasoning. arXiv preprint arXiv:1909.02151. 
+Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. + +Ilya Loshchilov and Frank Hutter. 2018. Fixing weight decay regularization in adam. +Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems, pages 13-23. +Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025. +Shangwen Lv, Daya Guo, Jingjing Xu, Duyu Tang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, and Songlin Hu. 2019. Graph-based reasoning over heterogeneous external knowledge for commonsense question answering. arXiv preprint arXiv:1909.05311. +Kaixin Ma, Jonathan Francis, Quanyang Lu, Eric Nyberg, and Alessandro Oltramari. 2019. Towards generalizable neuro-symbolic systems for commonsense question answering. arXiv preprint arXiv:1910.14087. +Todor Mihaylov and Anette Frank. 2018. Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 821-832. +Marvin Minsky. 2000. Commonsense-based interfaces. Communications of the ACM, 43(8):66-73. +Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. 
In Proceedings of NAACL-HLT 2019: Demonstrations. +Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense reasoning. arXiv preprint arXiv:1906.02361. +Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Winogrande: An adversarial winograd schema challenge at scale. arXiv preprint arXiv:1907.10641. +Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019. Atomic: An atlas of machine commonsense for if then reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3027-3035. +Robert Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Thirty-First AAAI Conference on Artificial Intelligence. + +Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2019. Vl-bert: Pretraining of generic visual-linguistic representations. +Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2018. Commonsenseqa: A question answering challenge targeting commonsense knowledge. arXiv preprint arXiv:1811.00937. +Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from transformers. arXiv preprint arXiv:1908.07490. +Niket Tandon, Bhavana Dalvi, Joel Grus, Wen-tau Yih, Antoine Bosselut, and Peter Clark. 2018. Reasoning about actions and state changes by injecting commonsense knowledge. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 57-66. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008. +Jiangnan Xia, Chen Wu, and Ming Yan. 2019. 
Incorporating relation knowledge into commonsense reading comprehension with multi-task learning. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 2393-2396. +Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International conference on machine learning, pages 2048-2057. +Yiben Yang, Chaitanya Malaviya, Jared Fernandez, Swabha Swayamdipta, Ronan Le Bras, Ji-Ping Wang, Chandra Bhagavatula, Yejin Choi, and Doug Downey. 2020. G-daug: Generative data augmentation for commonsense reasoning. arXiv preprint arXiv:2004.11546. +Zhi-Xiu Ye, Qian Chen, Wen Wang, and Zhen-Hua Ling. 2019. Align, mask and select: A simple method for incorporating commonsense knowledge into language representation models. arXiv preprint arXiv:1908.06725. +Wanjun Zhong, Duyu Tang, Nan Duan, Ming Zhou, Jiahai Wang, and Jian Yin. 2018. Improving question answering by commonsense-based pre-training. arXiv preprint arXiv:1809.03568. +Xuhui Zhou, Yue Zhang, Leyang Cui, and Dandan Huang. 2019. Evaluating commonsense in pre-trained language models. arXiv preprint arXiv:1911.11931. + +# A Relation Types Analysis + +Table 6: The relation types that benefit from images. + +
| Relations | Proportion (%) | Relations | Proportion (%) | Relations | Proportion (%) |
| --- | --- | --- | --- | --- | --- |
| MotivatedByGoal | 1.7 | HasProperty | 0.8 | CausesDesire | 4.2 |
| HasPrerequisite | 5.1 | Desires | 0.8 | CapableOf | 5.9 |
| HasSubevent | 5.1 | RelatedTo | 8.5 | DistinctFrom | 1.7 |
| HasA | 1.7 | NotDesires | 0.8 | HasLastSubevent | 0.8 |
| PartOf | 2.5 | UsedFor | 4.2 | AtLocation | 36.4 |
| FormOf | 0.8 | Antonym | 5.9 | Causes | 12.7 |
+ +# B Layout Examples + +Question: Where would a person be doing when having to wait their turn? + +![](images/118a9a8617cf2415ee145826dd2e06d0626e3ff5e3317176becf97da19f6174d.jpg) +(a) + +Question: Where would you find magazines along side many other printed works? + +![](images/3690fe1d55e1d537e6e72b45f433e3ce9e5bdca6828b1f401e6e6a0e29da575b.jpg) +(b) +Figure 5: Layout examples generated by the scene layout generator. Images in the first column are layouts for the questions. The layout for each choice is given in the other images. + +In this appendix, we visualize two more layout examples to show how the learned visual commonsense knowledge in our model helps the commonsense reasoning process. + +As shown in Figure 5 (a), from the question we obtain the layout 'a line of people', which is similar to the layouts of the correct answer 'stand in line' and the choice 'get in line'. In this case, visual commonsense knowledge helps the model eliminate irrelevant choices. + +As shown in Figure 5 (b), we obtain the layout 'a row of books' for the question, which exactly matches the layout of the answer 'bookstore'. In this case, the visual commonsense knowledge directly helps the model get the correct answer. 
\ No newline at end of file diff --git a/beyondlanguagelearningcommonsensefromimagesforreasoning/images.zip b/beyondlanguagelearningcommonsensefromimagesforreasoning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..debe262b8c73a7e26278a9c2718752a242027d17 --- /dev/null +++ b/beyondlanguagelearningcommonsensefromimagesforreasoning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:edeaf86b4d92345405570311a1108948f78aaedd376db58aa86948a0056bcb78 +size 406086 diff --git a/beyondlanguagelearningcommonsensefromimagesforreasoning/layout.json b/beyondlanguagelearningcommonsensefromimagesforreasoning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..003a0ab2fef360c205d72a1336605d2bebf74f05 --- /dev/null +++ b/beyondlanguagelearningcommonsensefromimagesforreasoning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1c1f69d70317a3e6481bc52b29be826113eb3316445307ba5fadb78e2f7b943 +size 385840 diff --git a/biomedicaleventextractionwithhierarchicalknowledgegraphs/b25f0c71-0795-4c7e-9134-c09cad4e59ee_content_list.json b/biomedicaleventextractionwithhierarchicalknowledgegraphs/b25f0c71-0795-4c7e-9134-c09cad4e59ee_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..f78f2d3245e47c34303a3601d986c62c567277fa --- /dev/null +++ b/biomedicaleventextractionwithhierarchicalknowledgegraphs/b25f0c71-0795-4c7e-9134-c09cad4e59ee_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f9a85bf24003f5275eb6200eb09316c9634cf93c64f4dec6867636de3e837a9d +size 58232 diff --git a/biomedicaleventextractionwithhierarchicalknowledgegraphs/b25f0c71-0795-4c7e-9134-c09cad4e59ee_model.json b/biomedicaleventextractionwithhierarchicalknowledgegraphs/b25f0c71-0795-4c7e-9134-c09cad4e59ee_model.json new file mode 100644 index 0000000000000000000000000000000000000000..c364e656fa29d4c27512994117cff5211de679bf --- 
/dev/null +++ b/biomedicaleventextractionwithhierarchicalknowledgegraphs/b25f0c71-0795-4c7e-9134-c09cad4e59ee_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4b61856d5f1778cf9c4f5af3d4c1ab78fbb33e0debc2a0027e2bd1bfec985e43 +size 72521 diff --git a/biomedicaleventextractionwithhierarchicalknowledgegraphs/b25f0c71-0795-4c7e-9134-c09cad4e59ee_origin.pdf b/biomedicaleventextractionwithhierarchicalknowledgegraphs/b25f0c71-0795-4c7e-9134-c09cad4e59ee_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..2a8fa26496553511e366cbb08f0c0fd4d6af8c9c --- /dev/null +++ b/biomedicaleventextractionwithhierarchicalknowledgegraphs/b25f0c71-0795-4c7e-9134-c09cad4e59ee_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:74b3de01f4289c5f50b2ce42aa2f859f13e336a20c485105289a61821018400f +size 2107158 diff --git a/biomedicaleventextractionwithhierarchicalknowledgegraphs/full.md b/biomedicaleventextractionwithhierarchicalknowledgegraphs/full.md new file mode 100644 index 0000000000000000000000000000000000000000..630af7f475d1065dc786ce63e2e3d356a65402d1 --- /dev/null +++ b/biomedicaleventextractionwithhierarchicalknowledgegraphs/full.md @@ -0,0 +1,231 @@ +# Biomedical Event Extraction with Hierarchical Knowledge Graphs + +Kung-Hsiang Huang1 Mu Yang1 Nanyun Peng1,2 + +1 Information Sciences Institute, University of Southern California + +2 Computer Science Department, University of California, Los Angeles + +{kunghsia, yangmu}@usc.edu + +violetpeng@cs.ucla.edu + +# Abstract + +Biomedical event extraction is critical in understanding biomolecular interactions described in scientific corpus. One of the main challenges is to identify nested structured events that are associated with non-indicative trigger words. 
We propose to incorporate domain knowledge from the Unified Medical Language System (UMLS) into a pre-trained language model via a hierarchical graph representation encoded by our proposed Graph Edge-conditioned Attention Networks (GEANet). To better recognize trigger words, each sentence is first grounded to a sentence graph based on a jointly modeled hierarchical knowledge graph from UMLS. The grounded graphs are then propagated by GEANet, a novel graph neural network with enhanced capabilities for inferring complex events. On the BioNLP 2011 GENIA Event Extraction task, our approach achieved $1.41\%$ $F_{1}$ and $3.19\%$ $F_{1}$ improvements on all events and complex events, respectively. Ablation studies confirm the importance of GEANet and the hierarchical KG.

# 1 Introduction

Biomedical event extraction is the task of identifying, from natural language texts, a set of actions among proteins or genes that are associated with biological processes (Kim et al., 2009, 2011). Development of biomedical event extraction tools enables many downstream applications, such as domain-specific text mining (Ananiadou et al., 2015; Spangher et al., 2020), semantic search engines (Miyao et al., 2006), and automatic population and enrichment of databases (Hirschman et al., 2012).

A typical event extraction system 1) finds triggers that most clearly demonstrate the presence of events, 2) recognizes the protein participants (arguments), and 3) associates the arguments with the corresponding event triggers. For instance, the

![](images/5b1252c8c6d838da61052434efbcce4bd6c3241648bed449a2ef1ab9cc99645f.jpg)

![](images/5f8755cf79335e07c1ed78cb7ff099084e332f0375cd68610bbcea5c25b564d3.jpg)

Figure 1: An example of a UMLS-based hierarchical KG assisting event extraction. Circles represent concept nodes and triangles represent semantic nodes. Nodes associated with the tokens in the example sentence are boldfaced.
Bidirectional edges imply hierarchical relations between concept and semantic nodes. The word "induces" is a trigger of a Positive regulation event, whose trigger role and corresponding argument role cannot be easily determined with only textual input. The KG provides clues for identifying this trigger and its corresponding arguments via the red and blue double-line reasoning paths connecting the nodes BMP-6, Induce, Phosphorylation, and Positive regulation of biological process. We can infer that: 1) "induces" is an action of a biological function, 2) a biological function can be quantified by positive regulation, and 3) positive regulation can result in phosphorylation.

sentence "Protein A inhibits the expression of Protein B" will be annotated with two nested events: Gene expression(Trigger: expression, Arg-Theme: Protein B) and Negative Regulation(Trigger: inhibits, Arg-Theme: Gene expression(Protein B), Arg-Cause: Protein A).

Early attempts at biomedical event extraction adopted hand-crafted features (Björne et al., 2009; Björne and Salakoski, 2011; Riedel and McCallum, 2011; Venugopal et al., 2014a). Recent advances have shown improvements using deep neural networks via distributional word representations in the
To overcome this challenge, we present a framework that incorporates knowledge from hierarchical knowledge graphs with graph neural networks (GNNs) on top of a pre-trained language model.

Our first contribution is a novel representation of knowledge as hierarchical knowledge graphs, containing both conceptual and semantic reasoning paths that enable better trigger word identification, based on the Unified Medical Language System (UMLS), a biomedical knowledge base. Fig. 1 shows an example where the Positive Regulation event can be better identified with knowledge graphs and factual relational reasoning. Our second contribution is a new GNN, Graph Edge-conditioned Attention Networks (GEANet), that encodes complex domain knowledge. By integrating edge information into the attention mechanism, GEANet has greater capabilities in reasoning about the plausibility of different event structures through factual relational paths in knowledge graphs (KGs).

Experiments show that our proposed method achieved state-of-the-art results on the BioNLP 2011 event extraction task (Kim et al., 2011).

# 2 Background

UMLS Knowledge Base. The Unified Medical Language System (UMLS) is a knowledge base for biomedical terminology and standards, which includes three knowledge sources: the Metathesaurus, the Semantic Network, and the Specialist Lexicon and Lexical Tools (Bodenreider, 2004). We use the former two sources to build hierarchical KGs. The concept network from the Metathesaurus contains the relationships between pairs of biomedical concepts, while each concept carries one or more semantic types that can be found in the Semantic Network.

![](images/7b316855d6d06af50c399c9f8b8310d61c6dbdd6d144067f09c4ebaf74065e8a.jpg)
Figure 2: Overview of knowledge incorporation. Contextualized embeddings for each token are generated by SciBERT. GEANet updates node embeddings for $v_{1}$, $v_{2}$, and $v_{3}$ via the corresponding sentence graph.
The concept network provides direct definition lookup for recognized biomedical terms, while the semantic network supplements it with additional semantic knowledge. Example tuples can be found in Figure 1. There are 3.35M concepts, 10 concept relations, 182 semantic types, and 49 semantic relations in total.

# 3 Proposed Approach

Our event extraction framework builds upon the pre-trained language model SciBERT (Beltagy et al., 2019) and supplements it with a novel graph neural network model, GEANet, that encodes domain knowledge from hierarchical KGs. We first illustrate each component and then discuss how training and inference are done.

# 3.1 Hierarchical Knowledge Graph Modeling

The two knowledge sources discussed in Section 2 are jointly modeled as a hierarchical graph for each sentence, which we refer to as a sentence graph. Constructing a sentence graph consists of three steps: concept mapping, concept network construction, and semantic type augmentation.

The first step is to map each sentence in the corpus to UMLS biomedical concepts with MetaMap, an entity mapping tool for UMLS concepts (Aronson, 2001). There are 7903 concepts (entities) mapped from the corpus, denoted as $K$. The next step is concept network construction, where a minimum spanning tree (MST) connecting the concepts mapped in the previous step is identified, forming concept reasoning paths. This step is NP-complete. We adopt a 2-approximate solution that constructs a global MST for the GE'11 corpus by running breadth-first search, assuming all edges have unit distance. To prune out less relevant nodes and to improve computational efficiency, concept nodes that are not in $K$ and have fewer than $T$ neighbors in $K$ are removed. The spanning tree for each sentence is then obtained by depth-first search on the global MST.
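As a rough illustration, the BFS-based spanning-tree approximation and the pruning rule described above can be sketched as follows. This is a minimal sketch with hypothetical helper names on a toy adjacency-list graph; the actual pipeline operates on the UMLS concept graph.

```python
from collections import deque

def spanning_tree_edges(adj, terminals):
    """Approximate tree connecting `terminals` in an unweighted graph:
    BFS from an arbitrary terminal, then union the root-to-terminal paths."""
    root = next(iter(terminals))
    parent = {root: None}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in parent:
                parent[v] = u
                q.append(v)
    edges = set()
    for t in terminals:
        # walk back from each terminal to the BFS root, collecting edges
        while parent.get(t) is not None:
            edges.add((parent[t], t))
            t = parent[t]
    return edges

def prune(nodes, adj, terminals, T):
    """Drop non-terminal nodes with fewer than T neighbors in `terminals`."""
    return [n for n in nodes
            if n in terminals
            or sum(1 for v in adj.get(n, []) if v in terminals) >= T]
```

With unit edge weights, BFS parent pointers give shortest paths, so the union of root-to-terminal paths is a tree connecting all mapped concepts.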
Each matched token in the corpus is also included as a token node in the sentence graph, connected with its corresponding concept node. Finally, the semantic types of each concept node are modeled as nodes linked to the associated concept nodes in the sentence graph. Two semantic type nodes are also linked if they have a known relationship in the semantic network.

# 3.2 GEANet

The majority of existing graph neural networks (GNNs) consider only the hidden states of nodes and the adjacency matrix, without modeling edge information. To properly model the hierarchy of the graph, it is essential for the message passing function of a GNN to consider edge features. We propose Graph Edge-conditioned Attention Networks (GEANet) to integrate edge features into the attention mechanism for message propagation. The node embedding update of GEANet at the $l$-th layer can be expressed as follows:

$$
\boldsymbol{x}_{i}^{(l)} = \mathrm{MLP}_{\theta}\left(\boldsymbol{x}_{i}^{(l-1)}\right) + \sum_{j \in \mathcal{N}(i)} a_{i,j} \cdot \boldsymbol{x}_{j}^{(l-1)} \tag{1}
$$

$$
a_{i,j} = \frac{\exp\left(\mathrm{MLP}_{\psi}\left(\boldsymbol{e}_{i,j}\right)\right)}{\sum_{k \in \mathcal{N}(i)} \exp\left(\mathrm{MLP}_{\psi}\left(\boldsymbol{e}_{i,k}\right)\right)} \tag{2}
$$

where $\boldsymbol{x}_i^{(l)}$ denotes the node embedding at layer $l$, $\boldsymbol{e}_{i,j}$ denotes the embedding for edge $(i,j)$, and $\mathrm{MLP}_{\psi}$ and $\mathrm{MLP}_{\theta}$ are two multi-layer perceptrons.
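As a concrete illustration, the update in Eqs. (1)-(2) can be sketched in NumPy. This is a minimal sketch, not the paper's implementation: single linear maps stand in for the two MLPs, and edges are given as an explicit neighbor list.

```python
import numpy as np

def geanet_layer(X, edges, edge_feats, W_theta, w_psi):
    """One GEANet layer: edge-conditioned attention message passing.

    X:          (n, d) node embeddings at layer l-1
    edges:      list of (i, j) pairs, meaning j is a neighbor of i
    edge_feats: {(i, j): (de,) edge embedding}
    W_theta:    (d, d) linear stand-in for MLP_theta
    w_psi:      (de,) linear stand-in for MLP_psi (scalar edge score)
    """
    out = X @ W_theta  # MLP_theta(x_i) term of Eq. (1)
    nbrs = {}
    for (i, j) in edges:
        nbrs.setdefault(i, []).append(j)
    for i, js in nbrs.items():
        # Eq. (2): softmax over N(i) of the edge-conditioned scores
        scores = np.array([edge_feats[(i, j)] @ w_psi for j in js])
        a = np.exp(scores - scores.max())
        a /= a.sum()
        # Eq. (1): attention-weighted sum of neighbor embeddings
        for a_ij, j in zip(a, js):
            out[i] += a_ij * X[j]
    return out
```

Note that the attention weights depend only on the edge embeddings, so two neighbors connected by identical relations receive equal weight.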
GEANet is inspired by Edge-Conditioned Convolution (ECC), where the convolution operation depends on the edge type (Simonovsky and Komodakis, 2017):

$$
\boldsymbol{x}_{i}^{(l)} = \mathrm{MLP}_{\theta}\left(\boldsymbol{x}_{i}^{(l-1)}\right) + \sum_{j \in \mathcal{N}(i)} \boldsymbol{x}_{j}^{(l-1)} \cdot \mathrm{MLP}_{\psi}\left(\boldsymbol{e}_{i,j}\right) \tag{3}
$$

Compared to ECC, GEANet is able to determine the relative importance of neighboring nodes with its attention mechanism.

Knowledge Incorporation. Following Peters et al. (2019), we build GEANet on top of SciBERT to incorporate domain knowledge into rich contextualized representations. Specifically, we take the contextual embeddings $\{h_1,\dots,h_n\}$ produced by SciBERT as inputs and produce knowledge-aware embeddings $\{\hat{h}_1,\dots,\hat{h}_n\}$ as outputs. To initialize the embeddings for a sentence graph, for each mapped token we project its SciBERT contextual embedding to initialize the corresponding node embedding, $h_{i,\mathrm{KG}} = h_iW_{\mathrm{KG}} + b_{\mathrm{KG}}$. Other nodes and edges are initialized with pre-trained KG embeddings (details in Section 4.1). To accommodate multiple relations between two entities in UMLS, the edge embedding $e_{i,j}$ is initialized by summing the embeddings of each relation between nodes $i$ and $j$. We then apply layers of GEANet to encode the graph, $h_{i,\mathrm{KG}}^l = \mathrm{GEANet}(h_{i,\mathrm{KG}})$. The knowledge-aware representation is obtained by aggregating the SciBERT and KG representations, $\hat{h}_i = h_{i,\mathrm{KG}}^l W_{\mathrm{LM}} + b_{\mathrm{LM}} + h_i$. The process is illustrated in the GEANet layer of Figure 2.

# 3.3 Event Extraction

The entire framework is trained with a multitask learning pipeline consisting of trigger classification and argument classification, following Han et al. (2019a,b). Trigger classification predicts the trigger type for each token.
The predicted score for each token is computed as $\hat{\pmb{y}}_i^{tri} = \mathrm{MLP}^{tri}(\hat{\pmb{h}}_i)$. In the argument classification stage, each possible pair of gold trigger and gold entity is gathered and labeled with the corresponding argument role. The argument scores between the $i$-th and $j$-th tokens are computed as $\hat{\pmb{y}}_{i,j}^{arg} = \mathrm{MLP}^{arg}([\hat{\pmb{h}}_i; \hat{\pmb{h}}_j])$, where $[\cdot;\cdot]$ denotes concatenation. The cross-entropy loss $\mathcal{L}^t = -\frac{1}{N^t}\sum_{i=1}^{N^t}\pmb{y}_i^t \cdot \log \hat{\pmb{y}}_i^t$ is used for both tasks, where $t$ denotes the task, $N^t$ the number of training instances of task $t$, $\pmb{y}_i^t$ the ground-truth label, and $\hat{\pmb{y}}_i^t$ the predicted label. Multitask learning minimizes the sum of the two losses, $\mathcal{L} = \mathcal{L}^{tri} + \mathcal{L}^{arg}$, in the training stage. During inference, unmerging is conducted to combine identified triggers and arguments into multi-argument events (Björne and Salakoski, 2011), and we adopt similar unmerging heuristics. For Regulation events, we use the same heuristics as Björne et al. (2009). For Binding events, we subsume all Theme arguments associated with a trigger
| | Model | Recall | Prec. | F1 |
| --- | --- | --- | --- | --- |
| Prior | TEES | 49.56 | 57.65 | 53.30 |
| | Stacked Gen. | 48.96 | 66.46 | 56.38 |
| | TEES CNN | 49.94 | 69.45 | 58.10 |
| | KB-driven T-LSTM | 52.14 | 67.01 | 58.65 |
| Ours | SciBERT-FT | 53.89 | 63.97 | 58.50 |
| | GEANet-SciBERT | 56.11 | 64.61 | 60.06 |

Table 1: Model comparison on the GE'11 test set.
into one event, such that every trigger corresponds to only one single Binding event.

| Model | Recall | Prec. | F1 |
| --- | --- | --- | --- |
| KB-driven T-LSTM | 41.73 | 55.73 | 47.72 |
| SciBERT-FT | 45.39 | 54.48 | 49.52 |
| GEANet-SciBERT | 47.23 | 55.21 | 50.91 |

Table 2: Performance comparison on the Regulation events of the test set (including Regulation, Positive Regulation, and Negative Regulation sub-events).

# 4 Experiments

# 4.1 Experimental Setup

Our models are evaluated on the BioNLP'11 GENIA event extraction task (GE'11). All models were trained on the training set, validated on the dev set, and tested on the test set. A separate evaluation on Regulation events is conducted to validate the effectiveness of our framework on nested events with non-indicative trigger words. Reported results are obtained from the official evaluator under the approximate span and recursive criteria.

In the preprocessing step, the GE'11 corpus was parsed with the TEES preprocessing pipeline (Björne and Salakoski, 2018). Tokenization is done by the SciBERT tokenizer. Biomedical concepts in each sentence are then recognized with MetaMap and aligned with their corresponding tokens. The best performing model was found by grid search conducted on the dev set. The edge and node representations in KGs were initialized with 300-dimensional pre-trained embeddings using TransE (Wang et al., 2014). The entire framework is optimized with the BERTAdam optimizer for a maximum of 100 epochs with a batch size of 4. Training is stopped if the dev set $F_{1}$ does not improve for 5 consecutive epochs (see the Appendix for more details).

# 4.2 Results and Analysis

Comparison with existing methods We compare our method with the following prior works: TEES and Stacked Gen. use SVM-based models with token- and sentence-level features (Björne and Salakoski, 2011; Majumder et al., 2016);
| Model | Dev F1 | Test F1 |
| --- | --- | --- |
| GEANet-SciBERT | 60.38 | 60.06 |
| - GEANet | 59.33 | 58.50 |
| - STY nodes | 60.12 | 59.34 |
| GEANet → ECC | 58.50 | 58.27 |
| GEANet → GAT | 59.55 | 59.87 |
Table 3: Ablation study over different components.

![](images/c5d5ae814202a5fd157bb0303f4f506b614a0879ffc4f0ad155819c4ed9916d2.jpg)
Figure 3: Performance comparison on the test set w.r.t. different amounts of training data.

TEES CNN leverages convolutional neural networks and the dependency parse graph (Björne and Salakoski, 2018); KB-driven T-LSTM incorporates an external knowledge base, via type and sentence embeddings, into a Tree-LSTM model (Li et al., 2019). SciBERT-FT is a fine-tuned SciBERT without external resources, the knowledge-agnostic counterpart of GEANet-SciBERT. According to Table 1, SciBERT-FT achieves similar performance to KB-driven T-LSTM, implying that SciBERT may have stored domain knowledge implicitly during pre-training. A similar hypothesis has also been studied in commonsense reasoning (Wang et al., 2019). GEANet-SciBERT achieves an absolute improvement of $1.41\%$ in $F_{1}$ on the test data compared to the previous state-of-the-art method. On Regulation events, Table 2 shows that GEANet-SciBERT outperforms the previous system and fine-tuned SciBERT by $3.19\%$ and $1.39\%$ in $F_{1}$, respectively.

Ablation study To better understand the importance of different model components, an ablation study is conducted and summarized in Table 3. GEANet achieves the highest $F_{1}$ when compared to two other GNN variants, ECC and GAT (Velickovic et al., 2018), demonstrating its stronger knowledge incorporation capacity. The hierarchical knowledge graph representation is also shown to be critical: removing semantic type (STY) nodes from the hierarchical KGs leads to a performance drop.

Impact of amount of training data Model performance with different amounts of randomly sampled training data is shown in Fig. 3. GEANet-SciBERT shows consistent improvement over fine-tuned SciBERT across different fractions. The performance gain is slightly larger with less training data.
This illustrates the robustness of GEANet in integrating domain knowledge and its particular advantage under low-resource settings.

Error Analysis By comparing the predictions from GEANet-SciBERT against the gold events in the dev set, two major failure cases are identified:

- Adjective Trigger: Most events are associated with a verb or noun trigger. Adjective triggers are scarce in the training set ( $\sim 7\%$ ), which makes this type of trigger challenging to identify. Although knowledge-aware methods should theoretically be able to resolve these errors, adjective triggers often cannot be linked to UMLS concepts. Without proper grounding, it is hard for our model to recognize these triggers.
- Misleading Trigger: Triggers providing "clues" about incorrect events can be misleading. For instance,

Furthermore, expression of an activated PKD1 mutant enhances HPK1-mediated NFkappaB activation.

Our model predicts expression as a trigger of type Gene expression, while the gold label is Positive regulation. Despite the fact that our model can sometimes handle such scenarios, given grounded biomedical concepts and factual reasoning paths, there is still room for improvement.

# 5 Related Works

Event Extraction Most existing event extraction systems focus on extracting events in news. Early attempts relied on hand-crafted features and a pipeline architecture (Gupta and Ji, 2009; Li et al., 2013). Later studies gained significant improvement from neural architectures, such as convolutional neural networks (Chen et al., 2015; Nguyen and Grishman, 2015) and recurrent neural networks (Nguyen et al., 2016). More recent studies leverage large pre-trained language models to obtain richer contextual information (Wadden et al., 2019; Lin et al., 2020). Another line of work utilizes GNNs to enhance event extraction performance. Liu et al. (2018) applied attention-based
We instead propose a GNN, GEANet, for integrating domain knowledge into contextualized embeddings from pre-trained language models. + +Biomedical Event Extraction Event extraction for biomedicine is more challenging due to higher demand for domain knowledge. BioNLP 11 GENIA event extraction task (GE'11) is the major benchmark for measuring the quality of biomedical event extraction system (Kim et al., 2011). Similar to event extraction in news domain, initial studies tackle biomedical event extraction with human-engineered features and pipeline approaches (Miwa et al., 2012; Björne and Salakoski, 2011). Great portion of recent works observed significant gains from neural models (Venugopal et al., 2014b; Rao et al., 2017b; Jagannatha and Yu, 2016; Björne and Salakoski, 2018). Li et al. (2019) incorporated information from Gene Ontology, a biomedical knowledge base, into tree-LSTM models with distributional representations. Instead, our strategy is to model two knowledge graphs from UMLS hierarchically with conceptual and semantic reasoning paths, providing stronger clues for identifying challenging events in biomedical corpus. + +# 6 Conclusion + +We have proposed a framework to incorporate domain knowledge for biomedical event extraction. Evaluation results on GE ' 11 demonstrated the efficacy of GEANet and hierarchical KG representation in improving extraction of non-indicative trigger words associated nested events. We also show that our method is robust when applied to different amount of training data, while being advantageous in low-resource scenarios. Future works include grounding adjective triggers to knowledge bases, better biomedical knowledge representation and extracting biomedical events at document level. + +# Acknowledgements + +We thank Rujun Han for helpful advice during the development of our model. We also appreciate insightful feedback from PLUSLab members, and the anonymous reviewers. 
This research was sponsored by an NIH R01 (LM012592) and the Intelligence Advanced Research Projects Activity (IARPA), via Contract No. 2019-19051600007. The views and conclusions of this paper are those of the authors and do not reflect the official policy or position of NIH, IARPA, or the US government.

# References

Sophia Ananiadou, Paul Thompson, Raheel Nawaz, John McNaught, and Douglas B Kell. 2015. Event-based text mining for biology and functional genomics. Briefings in Functional Genomics, 14(3):213-230.
Alan R Aronson. 2001. Effective mapping of biomedical text to the UMLS Metathesaurus: the MetaMap program. In Proceedings of the AMIA Symposium, page 17. American Medical Informatics Association.
Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615-3620, Hong Kong, China. Association for Computational Linguistics.
Jari Björne, Juho Heimonen, Filip Ginter, Antti Airola, Tapio Pahikkala, and Tapio Salakoski. 2009. Extracting complex biological events with rich graph-based feature sets. In Proceedings of the BioNLP 2009 Workshop Companion Volume for Shared Task, pages 10-18, Boulder, Colorado. Association for Computational Linguistics.
Jari Björne and Tapio Salakoski. 2011. Generalizing biomedical event extraction. In Proceedings of BioNLP Shared Task 2011 Workshop, pages 183-191, Portland, Oregon, USA. Association for Computational Linguistics.
Jari Björne and Tapio Salakoski. 2018. Biomedical event extraction using convolutional neural networks and dependency parsing. In Proceedings of the BioNLP 2018 Workshop, pages 98-108, Melbourne, Australia. Association for Computational Linguistics.
Olivier Bodenreider. 2004. The unified medical language system (UMLS): integrating biomedical terminology.
*Nucleic Acids Research*, 32(suppl_1):D267-D270.
Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 167-176, Beijing, China. Association for Computational Linguistics.
Prashant Gupta and Heng Ji. 2009. Predicting unknown time arguments based on cross-event propagation. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 369-372, Suntec, Singapore. Association for Computational Linguistics.
Rujun Han, I-Hung Hsu, Mu Yang, Aram Galstyan, Ralph Weischedel, and Nanyun Peng. 2019a. Deep structured neural network for event temporal relation extraction. In The 2019 SIGNLL Conference on Computational Natural Language Learning (CoNLL).
Rujun Han, Qiang Ning, and Nanyun Peng. 2019b. Joint event and temporal relation extraction with shared representations and structured prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 434-444, Hong Kong, China. Association for Computational Linguistics.
Xu Han, Shulin Cao, Xin Lv, Yankai Lin, Zhiyuan Liu, Maosong Sun, and Juanzi Li. 2018. OpenKE: An open toolkit for knowledge embedding. In Proceedings of EMNLP.
Lynette Hirschman, Gully AP Burns, Martin Krallinger, Cecilia Arighi, K Bretonnel Cohen, Alfonso Valencia, Cathy H Wu, Andrew Chatr-Aryamontri, Karen G Dowell, Eva Huala, et al. 2012. Text mining for the biocuration workflow. Database, 2012.
Abhyuday N Jagannatha and Hong Yu. 2016. Bidirectional RNN for medical event detection in electronic health records.
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 473-482, San Diego, California. Association for Computational Linguistics.
Jin-Dong Kim, Tomoko Ohta, Sampo Pyysalo, Yoshinobu Kano, and Jun'ichi Tsujii. 2009. Overview of BioNLP'09 shared task on event extraction. In Proceedings of the BioNLP 2009 Workshop Companion Volume for Shared Task, pages 1-9, Boulder, Colorado. Association for Computational Linguistics.
Jin-Dong Kim, Sampo Pyysalo, Tomoko Ohta, Robert Bossy, Ngan Nguyen, and Jun'ichi Tsujii. 2011. Overview of BioNLP Shared Task 2011. In Proceedings of the BioNLP Shared Task 2011 Workshop, pages 1-6. Association for Computational Linguistics.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR).
Diya Li, Lifu Huang, Heng Ji, and Jiawei Han. 2019. Biomedical event extraction based on knowledge-driven tree-LSTM. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1421-1430.
Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 73-82, Sofia, Bulgaria. Association for Computational Linguistics.
Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020. A joint neural model for information extraction with global features. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7999-8009, Online. Association for Computational Linguistics.
Xiao Liu, Zhunchen Luo, and Heyan Huang. 2018. Jointly multiple events extraction via attention-based graph information aggregation.
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1247-1256, Brussels, Belgium. Association for Computational Linguistics.
Amit Majumder, Asif Ekbal, and Sudip Kumar Naskar. 2016. Biomolecular event extraction using a stacked generalization based classifier. In Proceedings of the 13th International Conference on Natural Language Processing, pages 55-64, Varanasi, India. NLP Association of India.
Makoto Miwa, Paul Thompson, and Sophia Ananiadou. 2012. Boosting automatic event extraction from the literature using domain adaptation and coreference resolution. *Bioinformatics*, 28(13):1759-1765.
Yusuke Miyao, Tomoko Ohta, Katsuya Masuda, Yoshimasa Tsuruoka, Kazuhiro Yoshida, Takashi Ninomiya, and Jun'ichi Tsujii. 2006. Semantic retrieval for the accurate identification of relational concepts in massive textbases. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 1017-1024, Sydney, Australia. Association for Computational Linguistics.
SPFGH Moen, Tapio Salakoski, and Sophia Ananiadou. 2013. Distributional semantics resources for biomedical text processing. In Proceedings of LBM, pages 39-44.
Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 300-309, San Diego, California. Association for Computational Linguistics.
Thien Huu Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 365-371, Beijing, China. Association for Computational Linguistics.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pages 8024-8035.
Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 43-54, Hong Kong, China. Association for Computational Linguistics.
Sudha Rao, Daniel Marcu, Kevin Knight, and Hal Daumé III. 2017a. Biomedical event extraction using abstract meaning representation. In BioNLP 2017, pages 126-135.
Sudha Rao, Daniel Marcu, Kevin Knight, and Hal Daumé III. 2017b. Biomedical event extraction using Abstract Meaning Representation. In *BioNLP* 2017, pages 126-135, Vancouver, Canada. Association for Computational Linguistics.
Sebastian Riedel and Andrew McCallum. 2011. Fast and robust joint models for biomedical event extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1-12.
Elaheh ShafieiBavani, Antonio Jimeno Yepes, and Xu Zhong. 2019. Global locality in event extraction. arXiv preprint arXiv:1909.04822.
Martin Simonovsky and Nikos Komodakis. 2017. Dynamic edge-conditioned filters in convolutional neural networks on graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3693-3702.
Alexander Spangher, Nanyun Peng, Jonathan May, and Emilio Ferrara. 2020. Enabling low-resource transfer learning across COVID-19 corpora by combining event-extraction and co-training. In ACL 2020 Workshop on Natural Language Processing for COVID-19 (NLP-COVID).
Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph attention networks. In International Conference on Learning Representations.
Deepak Venugopal, Chen Chen, Vibhav Gogate, and Vincent Ng. 2014a. Relieving the computational bottleneck: Joint inference for event extraction with high-dimensional features. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 831-843.
Deepak Venugopal, Chen Chen, Vibhav Gogate, and Vincent Ng. 2014b. Relieving the computational bottleneck: Joint inference for event extraction with high-dimensional features. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 831-843, Doha, Qatar. Association for Computational Linguistics.
David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5784-5789, Hong Kong, China. Association for Computational Linguistics.
Cunxiang Wang, Shuailong Liang, Yue Zhang, Xiaonan Li, and Tian Gao. 2019. Does it make sense? and why? a pilot study for sense making and explanation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4020-4026, Florence, Italy. Association for Computational Linguistics.
Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Twenty-Eighth AAAI Conference on Artificial Intelligence.

# A Implementation Details

Our models are implemented in PyTorch (Paszke et al., 2019). Hyper-parameters are found by grid search within the search range listed in Table 4. The hyper-parameters of the best performing model are summarized in Table 5.
All experiments are conducted on a 12-CPU machine running CentOS Linux 7 (Core) and an NVIDIA RTX 2080 with CUDA 10.1.

To pre-train the KGE, we leverage the TransE implementation from OpenKE (Han et al., 2018). All triplets associated with the selected nodes described in Section 3.1 are used for pre-training with a margin loss and negative sampling,

$$
\mathcal{L} = \sum_{(h, \ell, t) \in S} \sum_{(h', \ell, t') \notin S} \max \left(0, d(h, \ell, t) - d(h', \ell, t') + \gamma\right)
$$

where $\gamma$ denotes the margin and $d(x, x')$ denotes the $\ell_1$ distance between $x$ and $x'$. $h$ and $t$ are embeddings of head and tail entities from the gold training set $S$ with relation $\ell$. $(h', \ell, t')$ denotes a corrupted triplet with either the head or tail entity replaced by a random entity. TransE is optimized using Adam (Kingma and Ba, 2015) with the hyper-parameters listed in Table 6. Every 50 epochs, the model checkpoint is saved if the mean reciprocal rank on the development set improves over the last checkpoint; otherwise, training is stopped.

# B Dataset

The statistics of GE'11 are shown in Table 7. The corpus contains 14496 events, with $37.2\%$ containing nested structure (Björne and Salakoski, 2011).$^{7}$ We use the official dataset split for all the results reported.
| Hyper-parameter | Range |
| --- | --- |
| Relation MLP dim. | {300, 500, 700, 1000} |
| Trigger MLP dim. | {300, 500, 700, 1000} |
| Learning rate | { $1 \times 10^{-5}$, $3 \times 10^{-5}$, $5 \times 10^{-5}$ } |

Table 4: Hyper-parameter search ranges for fine-tuning SciBERT.
| Hyper-parameter | Value |
| --- | --- |
| Relation MLP dim. | 300 |
| Trigger MLP dim. | 300 |
| Learning rate | $3 \times 10^{-5}$ |
| GEANet node dim. | 300 |
| GEANet edge dim. | 300 |
| GEANet layers | 2 |
| Dropout rate | 0.2 |

Table 5: Hyper-parameters of the best-performing GEANet-SciBERT model.
| Hyper-parameter | Value |
| --- | --- |
| Learning rate | 0.5 |
| Margin | 3 |
| Batch size | 128 |
| # corrupted triplets / # gold triplets | 25 |
| # Epochs | 500 |

Table 6: Hyper-parameters for pre-training the KGE.
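To make the objective concrete, the margin loss above can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the OpenKE implementation used in the experiments; the default margin $\gamma = 3$ is taken from Table 6:

```python
import numpy as np

def transe_distance(h, l, t):
    # TransE score d(h, l, t) = ||h + l - t||_1, the l1 distance used above
    return np.abs(h + l - t).sum(axis=-1)

def margin_loss(gold, corrupted, gamma=3.0):
    # sum over triplet pairs of max(0, d(gold) - d(corrupted) + gamma)
    h, l, t = gold
    hc, lc, tc = corrupted
    hinge = transe_distance(h, l, t) - transe_distance(hc, lc, tc) + gamma
    return np.maximum(0.0, hinge).sum()
```

A corrupted triplet that is already farther from satisfying $h + \ell \approx t$ than the gold triplet by at least $\gamma$ contributes nothing to the loss, which is what makes the head/tail replacement sampling effective.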
| Metric | Number |
| --- | --- |
| events | 14496 |
| sentences | 11581 |
| nested events | 37.2% |
| intersentence events | 6.0% |
+ +Table 7: GE'11 dataset statistics \ No newline at end of file diff --git a/biomedicaleventextractionwithhierarchicalknowledgegraphs/images.zip b/biomedicaleventextractionwithhierarchicalknowledgegraphs/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7898a7047dfc7c178c359924d45bbcb1e685e5ee --- /dev/null +++ b/biomedicaleventextractionwithhierarchicalknowledgegraphs/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b931ab3e3f69f71f5f65ea9194fb82245a129cc3864cb971ca69d4921fe88b2a +size 231355 diff --git a/biomedicaleventextractionwithhierarchicalknowledgegraphs/layout.json b/biomedicaleventextractionwithhierarchicalknowledgegraphs/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..f514f1801faa6309541e25da4da0da58864ac0b0 --- /dev/null +++ b/biomedicaleventextractionwithhierarchicalknowledgegraphs/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a87f5452350c9798e0dc0a7774cef1a63950dd1a9a2192499e7bef5c589d8564 +size 284154 diff --git a/blockwiseselfattentionforlongdocumentunderstanding/90fa6bbd-5c89-4b98-8004-4b565fe3ee4b_content_list.json b/blockwiseselfattentionforlongdocumentunderstanding/90fa6bbd-5c89-4b98-8004-4b565fe3ee4b_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..98b6c1f2a6463f2b9c2f6944b1bc9d13813726c3 --- /dev/null +++ b/blockwiseselfattentionforlongdocumentunderstanding/90fa6bbd-5c89-4b98-8004-4b565fe3ee4b_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2375573e922cf18a0f3e6a5387c043612102aeb56fc4163e945b3332455ad647 +size 76509 diff --git a/blockwiseselfattentionforlongdocumentunderstanding/90fa6bbd-5c89-4b98-8004-4b565fe3ee4b_model.json b/blockwiseselfattentionforlongdocumentunderstanding/90fa6bbd-5c89-4b98-8004-4b565fe3ee4b_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..c18905a19b194d2e5c3bce82e747e52a69ba825d --- /dev/null +++ b/blockwiseselfattentionforlongdocumentunderstanding/90fa6bbd-5c89-4b98-8004-4b565fe3ee4b_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:970b934cf9ae6eab3e701964e6cdc5e5c09058b9165061621fd7dc19f62edbd1 +size 93782 diff --git a/blockwiseselfattentionforlongdocumentunderstanding/90fa6bbd-5c89-4b98-8004-4b565fe3ee4b_origin.pdf b/blockwiseselfattentionforlongdocumentunderstanding/90fa6bbd-5c89-4b98-8004-4b565fe3ee4b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d64101359102fd143ee5500195bdd33ec436278f --- /dev/null +++ b/blockwiseselfattentionforlongdocumentunderstanding/90fa6bbd-5c89-4b98-8004-4b565fe3ee4b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9ea5ca8122db4a84966350cca9bd481cd8b732cb4970ff5c4ef149eb6c9a9382 +size 1466783 diff --git a/blockwiseselfattentionforlongdocumentunderstanding/full.md b/blockwiseselfattentionforlongdocumentunderstanding/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b42c5bf933a0270c10c05d15defee5fbf8c07e7a --- /dev/null +++ b/blockwiseselfattentionforlongdocumentunderstanding/full.md @@ -0,0 +1,303 @@ +# Blockwise Self-Attention for Long Document Understanding + +Jiezhong Qiu $^{1*}$ , Hao Ma $^{2}$ , Omer Levy $^{2}$ , Wen-tau Yih $^{2}$ , Sinong Wang $^{2}$ , Jie Tang $^{1}$ $^{1}$ Department of Computer Science and Technology, Tsinghua University + +$^{2}$ Facebook AI + +qiujz16@mails.tsinghua.edu.cn + +{haom,omerlevy,scottyih,sinongwang}@fb.com + +jietang@tsinghua.edu.cn + +# Abstract + +We present BlockBERT, a lightweight and efficient BERT model for better modeling long-distance dependencies. 
Our model extends BERT by introducing sparse block structures into the attention matrix to reduce both memory consumption and training/inference time, which also enables attention heads to capture either short- or long-range contextual information. We conduct experiments on language model pre-training and several benchmark question answering datasets with various paragraph lengths. BlockBERT uses 18.7-36.1% less memory and 12.0-25.1% less time to learn the model. During testing, BlockBERT saves 27.8% inference time, while having comparable and sometimes better prediction accuracy, compared to an advanced BERT-based model, RoBERTa. + +# 1 Introduction + +Recent emergence of the pre-training and fine-tuning paradigm, exemplified by methods like ELMo (Peters et al., 2018), GPT-2/3 (Radford et al., 2019; Brown et al., 2020), BERT (Devlin et al., 2019), XLNet (Yang et al., 2019), RoBERTa (Liu et al., 2019) and ALBERT (Lan et al., 2019), has drastically reshaped the landscape of the natural language processing research. These methods first pre-train a deep model with language model objectives using a large corpus and then fine-tune the model using in-domain supervised data for target applications. Despite its conceptual simplicity, this paradigm has re-established the new state-of-the-art baselines across various tasks, such as question answering (Devlin et al., 2019), coreference resolution (Joshi et al., 2019b), relation extraction (Soares et al., 2019) and text retrieval (Lee et al., 2019; Nogueira and Cho, 2019), to name a few. + +Building such models in practice, however, is an extremely resource-intensive process. For instance, the training of BERT-family models is notoriously expensive. Devlin et al. (2019) report that it takes four days to pre-train BERT-Base/BERT-Large on 4/16 Cloud TPUs. In order to reduce the pre-training time of RoBERTa to 1 day, Liu et al. (2019) use 1,024 V100 GPUs. 
One crucial factor contributing to the long training time is the memory consumption of these deep models, as it directly affects the batch size. Although the fine-tuning stage is relatively inexpensive, the memory issue still restricts the scenarios in which BERT can be used. For instance, "it is currently not possible to re-produce most of the BERT-Large results on the paper using a GPU with 12GB-16GB of RAM, because the maximum batch size that can fit in memory is too small."

Although one may think that model size is the main contributor to the large memory consumption, our analysis (Section 2.1) shows that one of the main bottlenecks is actually dot-product self-attention, operated in multiple layers of Transformers (Vaswani et al., 2017), the building block of BERT. As the attention operation is quadratic in the sequence length, it fundamentally limits the maximum length of the input sequence and thus restricts the model capacity in terms of capturing long-distance dependencies. As a result, downstream tasks have to either truncate their sequences to the leading tokens (Nogueira and Cho, 2019) or split their sequences with a sliding window (Joshi et al., 2019a,b). Ad-hoc handling of long sequences is also required in the pre-training stage, such as updating the model using only short sequences in the early stage (Devlin et al., 2019).

Common strategies for reducing memory consumption, unfortunately, do not work. For instance, shrinking the model by lowering the number of layers $L$, attention heads $A$, or hidden units $H$ leads to significant performance degradation (Vaswani et al., 2017; Devlin et al., 2019) and does not address the long-sequence issue. Alternatively, general low-memory training techniques, such as microbatching (Huang et al., 2018) and gradient checkpointing (Chen et al., 2016), essentially trade off training time for memory consumption, prolonging the already lengthy training process.
+ +In this work, we explore a different strategy, sparsifying the attention layers, intending to design a lightweight and effective BERT that can model long sequences in a memory-efficient way. Our BlockBERT extends BERT by introducing sparse block substructures into attention matrices to reduce both memory consumption and the number of floating-point operations (FLOPs), which also enables attention heads to capture either short- or long-range contextual information. Compared to the previous method that also enforces sparsity (Child et al., 2019), our approach is much simpler mathematically and very easy to implement. More importantly, the results of experiments conducted on several benchmark question answering datasets with various paragraph lengths show that BlockBERT performs comparably or even better than the original BERT-family models, while enjoying an $18.7 - 36.1\%$ reduction in memory usage, a $12.0 - 25.1\%$ reduction in training time, and a $27.8\%$ reduction in inference time. + +The rest of the paper is organized as follows. Section 2 gives a brief introduction of the BERT model, along with an in-depth analysis of its memory usage during training time. We describe our proposed model in Section 3 and contrast it with existing methods that aim for creating a lighter model. Section 4 presents the experimental results and ablation studies, followed by a survey of other related work in Section 5 and the conclusion in Section 6. + +# 2 Background: Memory Bottleneck in Training BERT + +We briefly review BERT and introduce its memory profiling in this section. Following the paradigm of language model pre-training and down-stream task fine-tuning, BERT (Devlin et al., 2019) consists of multiple layers of bidirectional Transformers (Vaswani et al., 2017), where each Transformer encoder has a multi-head self-attention layer and a position-wise feed-forward layer. 
Using the same notation as in (Devlin et al., 2019), we denote the number of Transformer layers by $L$, the number of hidden units by $H$, the number of attention heads by $A$, the sequence length by $N$, and the batch size by $B$. We also assume the feed-forward hidden unit size to be $4H$.

# 2.1 Memory Profiling

Training BERT is a memory-intensive process. In order to identify the bottleneck, we follow the memory model proposed by Sohoni et al. (2019), where memory usage throughout neural network training is categorized into three main types: (1) Model memory is used to store model parameters; (2) Optimizer memory is the additional memory used by the specific learning algorithm during the process; (3) Activation memory consists of the outputs of each layer, which are cached for reuse in backpropagation to compute gradients.

Take BERT-Base training as an example. The model has 110 million parameters, so model memory occupies 0.2 GB if parameters are stored in half-precision floating-point format (FP16). For Adam (Kingma and Ba, 2014), the optimizer needs additional memory to store the gradients, first moments, and second moments of the model parameters. If stored using the same precision, the optimizer memory should be three times the model memory. Calculating the exact size of activation memory is not trivial because it depends heavily on the implementation of the toolkit. Instead, we measure it empirically by training BERT-Base using Adam with a memory profiler (more details are provided in Appendix A.2).

We use 32 NVIDIA V100 GPUs for training. Every single GPU thus consumes a minibatch of size $b = B / 32 = 8$. Figure 1(a) shows the profiling result for a single GPU, where the model/optimizer/activation memory consumes $0.21 / 1.03 / 8.49$ GB, respectively. We can see that activation memory accounts for the vast majority of the total GPU memory (87.6%) and is thus the bottleneck.
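As a back-of-envelope check of these figures (an illustrative sketch that assumes every tensor is kept in FP16; the profiled optimizer number is larger, plausibly because some optimizer state is kept at higher precision):

```python
# Rough accounting for the BERT-Base numbers quoted above (illustrative only).
PARAMS = 110e6      # BERT-Base parameter count
FP16_BYTES = 2
GB = 2 ** 30

model_mem = PARAMS * FP16_BYTES / GB    # parameters alone
optim_mem = 3 * model_mem               # Adam: gradients + first + second moments

print(f"model memory:     {model_mem:.2f} GB")   # ~0.20 GB, matching the text
print(f"optimizer memory: {optim_mem:.2f} GB")
```

Activation memory, the dominant 8.49 GB term, cannot be derived this way, which is exactly why it is measured empirically with a profiler.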
Notice that although our analysis is done on BERT-Base, it can also be generalized to BERT-Large and other models such as RoBERTa (Liu et al., 2019) and XLNet (Yang et al., 2019).

![](images/a266078b58bdd078c938432904caaabca7fab79ed00cf06548f61906551e861e.jpg)
Figure 1: Memory profiling for BERT. (a) BERT-Base training memory; (b) regression analysis on activation memory.

# 2.2 A Regression Analysis on Activation Memory

For BERT, or more specifically, Transformer, the activation memory corresponds to the intermediate results of different layers. It grows linearly in all the model hyper-parameters except the sequence length $N$, due to the attention layers. To quantify the linear and quadratic components in the activation memory more clearly, we conduct a regression analysis as follows. Assume that the activation memory (in each GPU) is a polynomial $a_2 b N^2 + a_1 b N + a_0$, where $b$ is the batch size in each GPU and $a_i$ ($i = 0, 1, 2$) are coefficients to be determined. If we fix the total number of tokens in a GPU to be constant (in our case, we fix $b \times N = 4096$), we obtain a linear function w.r.t. $N$, i.e., $4096 a_2 N + 4096 a_1 + a_0$. We enumerate $N$ from {128, 256, 512, 1024} in our experiments and plot the corresponding profiled activation memory in Figure 1(b). Using ordinary least squares (OLS), with $b \times N = 4096$, the estimated linear function for activation memory is $0.00715 \times N + 4.83$, where the first term corresponds to the $O(N^2)$ component. When $N = 512$ (i.e., $b = 8$), we can see that for BERT-Base, the $O(N^2)$ component accounts for 3.66 GB and the $O(N)$ component accounts for 4.83 GB. When the sequence length $N$ increases to 1024 (i.e., $b = 4$), the $O(N^2)$ component increases to 7.32 GB, while the $O(N)$ part is unchanged.

# 2.3 Techniques for Reducing Training Memory

Observing that activation memory is the training bottleneck, we discuss common memory reduction techniques below.
Low Precision (Micikevicius et al., 2017) Low-precision training uses half-precision or mixed-precision arithmetic to train neural networks. This technique has been widely used in Transformer training (Ott et al., 2019; Liu et al., 2019). In this work, we already assume mixed-precision training by default, as indicated in the aforementioned analysis.

Microbatching (Huang et al., 2018) Microbatching splits a batch into small microbatches (which can be fit into memory), and then runs the forward and backward passes on them separately, accumulating the gradients of each microbatch. Because it runs the forward/backward pass multiple times for a single batch, it trades off time for memory.

Gradient Checkpointing (Chen et al., 2016) Gradient checkpointing saves memory by only caching the activations of a subset of layers. The un-cached activations are recomputed during backpropagation from the latest checkpoint. This strategy trades off time for memory by repeating computations and will obviously extend training time.

Knowledge Distillation (Hinton et al., 2015) Knowledge distillation aims to compress and transfer knowledge from a teacher model to a simpler student model. However, knowledge distillation relies on a teacher model (which is still expensive in training time) and usually suffers from a certain degree of performance degradation.

As common techniques are limited in reducing both the training time and memory usage, we investigate how to optimize the dot-product attention layers and introduce our approach next.

# 3 Model: BlockBERT

Following (Vaswani et al., 2017), the dot-product attention in Transformer is defined as:

$$
\operatorname{Attention}(Q, K, V) = \operatorname{softmax}\left(\frac{Q K^{\top}}{\sqrt{d}}\right) V,
$$

where $Q, K, V \in \mathbb{R}^{N \times d}$, with $N$ the sequence length and $d$ the hidden dimension. As we can see, the inner product between $Q$ and $K$ consumes $O(N^2)$ memory.
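The quadratic cost is easy to see in code: the score matrix alone has $N^2$ entries. A minimal NumPy sketch of the attention above (illustrative, not the paper's implementation):

```python
import numpy as np

def attention(Q, K, V):
    # `scores` is the full N x N matrix -- the O(N^2) memory term
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

N, d = 512, 64
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, N, d))
out = attention(Q, K, V)   # out has shape (N, d); scores had shape (N, N)
```

Doubling $N$ quadruples the size of `scores`, which is exactly the component isolated by the regression analysis in Section 2.2.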
One simple way to reduce the memory consumption of attention is to sparsify the attention matrix. Given a masking matrix $M \in \{0,1\}^{N \times N}$, we define a masked version of attention as follows:

$$
\operatorname{Attention}(Q, K, V, M) = \operatorname{softmax}\left(\frac{Q K^{\top}}{\sqrt{d}} \odot M\right) V, \tag{1}
$$

with the operator $\odot$ defined by

$$
(A \odot M)_{ij} = \left\{ \begin{array}{ll} A_{ij} & \text{if } M_{ij} = 1 \\ -\infty & \text{if } M_{ij} = 0 \end{array} \right..
$$

In this work, we design $M$ to be a sparse block matrix, which not only reduces memory and the number of floating-point operations (FLOPs) but also benefits from efficient dense matrix support from deep learning frameworks, such as PyTorch and TensorFlow. More formally, we split the length-$N$ input sequence into $n$ blocks, each of length $\frac{N}{n}$. The $N \times N$ attention matrix is then partitioned into $n \times n$ blocks, where each block matrix is of size $\frac{N}{n} \times \frac{N}{n}$. We define a sparse block matrix $M$ by a permutation $\pi$ of $\{1, 2, \dots, n\}$:

$$
M_{ij} = \left\{ \begin{array}{ll} 1 & \text{if } \pi\left(\left\lfloor \frac{(i-1)n}{N} + 1 \right\rfloor\right) = \left\lfloor \frac{(j-1)n}{N} + 1 \right\rfloor, \\ 0 & \text{otherwise.} \end{array} \right.
\tag{2}
$$

By writing $Q$, $K$, $V$ as block matrices, such that $Q = [Q_1^{\top} \dots Q_n^{\top}]^{\top}$, $K = [K_1^{\top} \dots K_n^{\top}]^{\top}$, and $V = [V_1^{\top} \dots V_n^{\top}]^{\top}$, and plugging them into Equation 1, we can formally define Blockwise Attention as follows:

Blockwise-Attention $(Q, K, V, M)$

$$
= \left[ \begin{array}{c} \operatorname{softmax}\left(\frac{Q_1 K_{\pi(1)}^{\top}}{\sqrt{d}}\right) V_{\pi(1)} \\ \vdots \\ \operatorname{softmax}\left(\frac{Q_n K_{\pi(n)}^{\top}}{\sqrt{d}}\right) V_{\pi(n)} \end{array} \right]. \tag{3}
$$

Equation 3 only needs to compute and store $Q_i K_{\pi(i)}^{\top}$ ($i = 1, \dots, n$), each of size $\frac{N}{n} \times \frac{N}{n}$. In other words, BlockBERT reduces both the $O(N^2)$ memory consumption and FLOPs by a factor of $n$, since $\frac{N}{n} \times \frac{N}{n} \times n = \frac{N \times N}{n}$.

# 3.1 Blockwise Multi-Head Attention

Analogous to Multi-head Attention (Vaswani et al., 2017), we allow queries, keys, and values to be projected multiple times and perform blockwise attentions in parallel. Moreover, different blockwise attention heads can use different masking matrices. The outputs of multiple heads are then concatenated and aggregated with another linear projection. Let $A$ be the number of attention heads and $H$ the number of hidden units.
Blockwise multi-head attention is formally defined as follows:

Blockwise-Multi-head-Attention $(Q, K, V)$

$$
= \operatorname{Concat}\left(\operatorname{head}_1, \dots, \operatorname{head}_A\right) W^O,
$$

where for each head $i$, $i = 1, 2, \dots, A$,

$$
\operatorname{head}_i = \text{Blockwise-Attention}\left(Q W_i^Q, K W_i^K, V W_i^V, M_i\right),
$$

![](images/e7b90dc171ffba63c7953d21df8c5192b93789a41dd5ef4b839a203fe1f20779.jpg)
Figure 2: Architecture of Blockwise Multi-head Attention, which acts as the building block of BlockBERT. The key idea is to introduce a sparse block masking matrix to the $N \times N$ attention matrix. The right panel shows the masking matrices we use when $n = 2, 3$. For $n = 2$, the masking matrices are defined by permutations (1, 2) and (2, 1) and have $50\%$ non-zeros. For $n = 3$, the masking matrices are defined by permutations (1, 2, 3), (2, 3, 1), and (3, 1, 2) and have $33.33\%$ non-zeros.

with $d = \frac{H}{A}$, $W_i^Q, W_i^K, W_i^V \in \mathbb{R}^{H \times d}$, and the projection matrix $W^O \in \mathbb{R}^{H \times H}$. Each masking matrix $M_i$ is determined by a permutation $\pi_i$ according to Equation 2. In particular, we choose $\pi$ from the permutations generated by shifting one position: $\sigma = (2, 3, \dots, n, 1)$, i.e., we select $\pi \in \{\sigma, \sigma^2, \dots, \sigma^n\}$. For example, with 12 attention heads ($A = 12$) and 2 blocks ($n = 2$), we can assign 10 heads to permutation (1, 2) and the other 2 heads to permutation (2, 1). Figure 2 illustrates blockwise multi-head attention with block number $n \in \{2, 3\}$. Blockwise sparsity captures both local and long-distance dependencies in a memory-efficient way, which is crucial for long-document understanding tasks.
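Equations 1-3 can be sketched and cross-checked in a few lines of NumPy. This is an illustrative sketch (with 0-indexed permutations), not the paper's PyTorch implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def block_mask(N, pi):
    # Equation 2: M[i, j] = 1 iff pi(block of i) == block of j
    blk = N // len(pi)
    M = np.zeros((N, N), dtype=int)
    for i, p in enumerate(pi):
        M[i * blk:(i + 1) * blk, p * blk:(p + 1) * blk] = 1
    return M

def masked_attention(Q, K, V, M):
    # Equation 1: masked entries become -inf, so they receive zero weight
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(np.where(M == 1, scores, -np.inf)) @ V

def blockwise_attention(Q, K, V, pi):
    # Equation 3: row-block i attends only to column-block pi[i];
    # only n score blocks of size (N/n, N/n) are ever materialized
    blk = Q.shape[0] // len(pi)
    out = np.empty_like(V)
    for i, p in enumerate(pi):
        q = Q[i * blk:(i + 1) * blk]
        k, v = K[p * blk:(p + 1) * blk], V[p * blk:(p + 1) * blk]
        out[i * blk:(i + 1) * blk] = softmax(q @ k.T / np.sqrt(Q.shape[-1])) @ v
    return out

# For n = 2, the shift permutation sigma corresponds to pi = [1, 0] here
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8, 4))
assert np.allclose(masked_attention(Q, K, V, block_mask(8, [1, 0])),
                   blockwise_attention(Q, K, V, [1, 0]))
```

The two formulations agree exactly, but the blockwise path never materializes the full $N \times N$ score matrix; that is where the factor-of-$n$ memory saving comes from. The identity permutation keeps each block attending to itself, while shifted permutations route attention across blocks.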
For instance, the identity permutation, i.e., $(1, 2, \dots, n)$, enables each token to attend to its nearby tokens in self-attention, while other permutations allow tokens within one block to attend to tokens in another block. Our proposed BlockBERT essentially replaces the multi-head attention layers in Transformer/BERT with blockwise multi-head attention.

# 3.2 Analysis of Memory Usage Reduction

To validate our claim that BlockBERT with $n \times n$ blocks can reduce the $O(N^2)$ memory usage by a factor of $n$, we perform the same memory profiling as described in Sections 2.1 and 2.2. Again, we fix the number of tokens in each GPU ($b \times N = 4096$) and choose $N$ from {128, 256, 512, 1024, 2048}.$^{5}$ As we can see from Figure 3 and Table 1, the empirical results align well with the theoretical values.

When we set the number of blocks to be 2 and 3 for BlockBERT, the estimated $O(N^2)$ activation memory decreases to 1/2 and 1/3 of BERT's $O(N^2)$ activation memory, respectively. As shown in Table 2, for sequence length $N = 512$, BlockBERT with 2 and 3 blocks saves $18.7\%$ and $23.8\%$ of overall memory, respectively. The saving is more significant for longer sequences. When $N = 1024$, the overall memory reduction of BlockBERT with 2 and 3 blocks is $27.3\%$ and $36.1\%$, respectively.

![](images/d139fe91e4669b1beb41aacdd9fc4390b11368e3ac08a12238294559ba92587c.jpg)
Figure 3: Regression analysis on activation memory for BERT and BlockBERT.
| N | b | Model | $O(N)$ Act. Mem. (GB) | $O(N^2)$ Act. Mem. (GB) |
| --- | --- | --- | --- | --- |
| 512 | 8 | BERT | 4.83 | 3.66 |
| 512 | 8 | BlockBERT $n=2$ | 4.84 | 1.83 |
| 512 | 8 | BlockBERT $n=3$ | 4.87 | 1.22 |
| 1024 | 4 | BERT | 4.83 | 7.32 |
| 1024 | 4 | BlockBERT $n=2$ | 4.84 | 3.66 |
| 1024 | 4 | BlockBERT $n=3$ | 4.87 | 2.44 |
Table 1: Estimated $O(N^2)$ and $O(N)$ activation memory for BERT and BlockBERT.

# 4 Experiments

We evaluate the pre-training and fine-tuning performance of BlockBERT. In particular, when $n = 2$, we denote by 10:2 the configuration which assigns 10 heads to permutation (1, 2) and 2 to permutation (2, 1); when $n = 3$, we denote by 8:2:2 the configuration which assigns 8, 2, and 2 heads to permutations (1, 2, 3), (2, 3, 1), and (3, 1, 2), respectively. We compare BlockBERT with the following baselines:

Google BERT Google BERT is the official pre-trained model from (Devlin et al., 2019).

RoBERTa-2seq & RoBERTa-1seq We compare with two versions of RoBERTa (Liu et al., 2019). RoBERTa-2seq is trained with both the masked language model (MLM) task and the next sentence prediction (NSP) task, while RoBERTa-1seq refers to the model pre-trained with only the MLM task.

SparseBERT We pre-train a BERT model with its Transformer encoder replaced by a Sparse Transformer encoder (Child et al., 2019). We set its sparsity hyper-parameters to stride $\ell = 128$ and expressivity $c = 32$. The attention masking matrix used in Sparse Transformer and more implementation details are discussed in Appendix A.3. A similar architecture was adopted in GPT-3 (Brown et al., 2020).

# 4.1 Pre-training

All the models follow the BERT-Base setting, i.e., $L = 12$, $H = 768$, $A = 12$, and are trained on the same corpus (BooksCorpus and English Wikipedia) with uncased word piece tokens. Thus all models use the same vocabulary as Google BERT (uncased version) with vocabulary size 30,522. We fix the number of tokens per batch to $B \times N = 131,072$: if the sequence length is $N = 512$, then the batch size is $B = 256$; if the sequence length is $N = 1024$, then the batch size is $B = 128$. The detailed pre-training configuration is listed in Appendix A.1. Moreover, the pre-training of SparseBERT and BlockBERT follows the RoBERTa-1seq setting, i.e., we drop the NSP (next sentence prediction) task, and an input sequence is up to $N$ tokens until it reaches a document boundary.

A summary of the pre-training performance comparison between BlockBERT and RoBERTa-1seq is shown in Table 2. Besides memory saving, we also achieve a significant speedup. For example, when $N = 1024$, BlockBERT ($n = 2$) reduces the training time from RoBERTa's 9.7 days to 7.5 days.

# 4.2 Fine-tuning Tasks

We evaluate BlockBERT on several question answering tasks, including SQuAD 1.1/2.0 (Rajpurkar et al., 2018) and five other tasks from the MrQA shared task $^{7}$: HotpotQA (Yang et al., 2018), NewsQA (Trischler et al., 2017), SearchQA (Dunn et al., 2017), TriviaQA (Joshi et al., 2017), and NaturalQA (Kwiatkowski et al., 2019). Since MrQA does not have an official test set, we follow Joshi et al. (2019a) to split the devel
| N | Model | Training Time (days) | Memory (per GPU, GB) | Heads Config. | Valid. ppl |
| --- | --- | --- | --- | --- | --- |
| 512 | RoBERTa-1seq | 6.62 | 9.73 | - | 3.58 |
| 512 | BlockBERT $n=2$ | 5.83 (-12.0%) | 7.91 (-18.7%) | 10:2 | 3.56 |
| 512 | BlockBERT $n=3$ | 5.80 (-12.5%) | 7.32 (-23.8%) | 8:2:2 | 3.71 |
| 1024 | RoBERTa-1seq | 9.66 | 13.39 | - | 3.60 |
| 1024 | BlockBERT $n=2$ | 7.51 (-22.3%) | 9.73 (-27.3%) | 9:3 | 3.57 |
| 1024 | BlockBERT $n=3$ | 7.23 (-25.1%) | 8.55 (-36.1%) | 8:2:2 | 3.63 |
opment set evenly to build a new development set and test set.

These QA datasets have different paragraph length distributions and are thus ideal for testing the effectiveness of BlockBERT $^{8}$. For example, SQuAD, NaturalQA, and HotpotQA consist of mostly short paragraphs (shorter than 512), while paragraphs in SearchQA (average length 1,004) and TriviaQA (average length 934) have around 1,000 tokens. When the input sequence is longer than $N$, we follow the common practice (Joshi et al., 2019a) of splitting it using a sliding window of size $N$ and stride 128. This means that for SearchQA and TriviaQA, a model with $N = 512$ can only capture half of the context, while a model with $N = 1024$ can accept the whole paragraph as input.

For all models, we adopt the same fine-tuning QA setup as Devlin et al. (2019). The tokenized paragraph $(p_1, \dots, p_s)$ and question $(q_1, \dots, q_t)$ are concatenated into a sequence [CLS] $q_1 \dots q_t$ [SEP] $p_1 \dots p_s$ [SEP]. The sequence is then fed into the pre-trained model with two extra linear layers for predicting the start and end positions of the answer spans. The detailed fine-tuning setting is listed in Appendix A.4. Table 3 and Table 4 report the experimental results.

BlockBERT ($n = 2$) vs. RoBERTa-1seq Comparing BlockBERT with RoBERTa-1seq when $N = 512$, we observe an absolute F1 difference from 0.04 (in NaturalQA) to 1.18 (in NewsQA), with an average of 0.55. For $N = 1024$, BlockBERT achieves performance comparable to or even better than RoBERTa-1seq. In SearchQA, NewsQA, and HotpotQA, BlockBERT achieves absolute F1 improvements of 0.39, 0.44, and 0.23, respectively.

BlockBERT vs. SparseBERT For $N = 512$, it is interesting that BlockBERT with 3 blocks (density $33.33\%$) performs better than SparseBERT (den

Table 2: Pre-training Performance Analysis.
| N | Model | SQuAD 1.1 EM | SQuAD 1.1 F1 | SQuAD 2.0 EM | SQuAD 2.0 F1 |
| --- | --- | --- | --- | --- | --- |
| - | Human Perf. | 82.30 | 91.20 | 86.80 | 89.40 |
| 512 | Google BERT | 81.19 | 88.45 | 74.08 | 77.16 |
| 512 | XLNet | - | - | 78.46 | 81.33 |
| 512 | RoBERTa-2seq | 82.91 | 89.78 | 75.79 | 79.17 |
| 512 | RoBERTa-1seq | 84.43 | 91.48 | 79.22 | 82.27 |
| 512 | SparseBERT | 80.49 | 88.09 | 74.15 | 76.96 |
| 512 | BlockBERT $n=2$ | 84.08 | 90.77 | 78.34 | 81.46 |
| 512 | BlockBERT $n=3$ | 82.37 | 89.64 | 77.33 | 80.33 |
| 1024 | RoBERTa-1seq | 84.58 | 91.14 | 79.34 | 82.26 |
| 1024 | SparseBERT | 81.02 | 88.37 | 74.51 | 77.57 |
| 1024 | BlockBERT $n=2$ | 83.65 | 90.74 | 78.55 | 81.45 |
| 1024 | BlockBERT $n=3$ | 82.74 | 90.05 | 76.79 | 79.84 |
Table 3: Dev set results on SQuAD 1.1/2.0. The result of XLNet(-Base) is from Yang et al. (2019). For BlockBERT models, the attention head configurations are the same as in Table 2.

sity $44.20\%$) in both SQuAD and MrQA tasks. Similar results can be observed for $N = 1024$, too. These results show that off-diagonal masking matrices, e.g., the masking matrices defined by permutations $(2, 3, 1)$ and $(3, 1, 2)$, play crucial roles in BlockBERT. Furthermore, BlockBERT with 2 blocks achieves a more significant improvement.

Effect of Long Sequence Pre-training Our observations are twofold: (1) Long-sequence pre-training benefits long-sequence fine-tuning. In TriviaQA and SearchQA, whose paragraph lengths are around 1,024, pre-trained models with $N = 1024$ achieve significantly better performance. (2) The heterogeneity of pre-training and fine-tuning sequence lengths may hurt performance. For example, in SQuAD, we do not see a significant performance gain from using pre-trained models with $N = 1024$; in HotpotQA and NewsQA, longer-sequence pre-training even hurts performance.

Effect of #Blocks It is not surprising that BlockBERT with 2 blocks ($n = 2$) performs better than that with 3 blocks ($n = 3$), because it keeps more attention matrix entries. The biggest difference is in SQuAD 2.0 and NewsQA with $N = 1024$, where we observe an absolute loss of 1.6 F1 when increasing the block number from 2 to 3.

Efficient Inference with BlockBERT We benchmark the inference efficiency of RoBERTa and BlockBERT. The benchmark code follows huggingface.$^{9}$ All experiments are run 30 times on a 32GB V100 GPU with half precision (FP16). We report the average running time in Table 5. As we can see, BlockBERT does achieve a speedup and memory reduction at test time. Taking $8 \times 1024$ (i.e., batch size $B = 8$, sequence length $N = 1024$) as an example, we can see that BlockBERT with 2 blocks saves $27.8\%$ of test time, and BlockBERT with 3 blocks saves more ($30.4\%$).
As for memory, we observe that RoBERTa cannot handle an input of size $16 \times 1024$, while BlockBERT can.

In summary, BlockBERT not only saves training/inference time and memory but also achieves competitive, and sometimes better, performance, especially for tasks with longer sequences. This demonstrates the effectiveness of our blockwise multi-head attention approach.

# 4.3 Ablation Study

We fixed the assignment of attention heads in the above experiments. For example, BlockBERT with sequence length $N = 512$ and 2 blocks is trained with ten heads using permutation (1, 2) and the other two using permutation (2, 1). However, there are other ways to assign twelve attention heads, e.g., seven heads for permutation (1, 2) and the other five for permutation (2, 1). It would be interesting to see how the assignment of heads affects model performance. In this section, we grid-search over attention head assignments and plot their best validation performance within 1.2M training steps. The results are shown in Figure 4.

Our observations are threefold: (1) Identity permutations, i.e., $(1, 2)$ and $(1, 2, 3)$, are important. As shown in Figure 4, all optimal solutions assign a considerable number of attention heads to block-diagonal matrices, since those matrices enable each token to attend to its nearby tokens; (2) Non-identity permutations follow the rule of "vital few and trivial many." Although identity permutations are important, assigning all attention heads to them (corresponding to 12:0 and 12:0:0 in Figure 4) significantly hurts performance, since the model cannot learn long-term dependencies with only identity permutations; (3) Pre-training performance and fine-tuning performance are correlated but not always consistent.
When $n = 3$ , pre-training performance suggests 10:1:1 to be the best head assignment (ten heads for permutation (1, 2, 3), one head for (2, 3, 1) and one head for (3, 1, 2)), but we observe that the configuration of 8:2:2 achieves better performance in fine-tuning tasks.

# 5 Related Work

In this section, we review related work on memory optimization for neural network training and recent efforts to simplify Transformer and BERT.

# 5.1 Low-Memory Neural Network Training

Due to the large size of model parameters and deep architectures, training modern neural networks requires significant amounts of computing resources. As a result, there is increasing interest in training neural networks with low memory (Sohoni et al., 2019). Mainstream techniques mostly address this problem with a better system or engineering design, such as low-precision training (Micikevicius et al., 2017), microbatching (Huang et al., 2018) and gradient checkpointing (Chen et al., 2016). Alternatively, some research focuses on the theoretical aspect, including the recently proposed lottery ticket hypothesis (Frankle and Carbin, 2018).

# 5.2 Efficient Transformer

Since the invention of the Transformer (Vaswani et al., 2017) and its successful application to masked language model pre-training (Devlin et al., 2019; Radford et al., 2019; Yang et al., 2019; Liu et al., 2019; Lan et al., 2019), several approaches have been proposed to simplify the model and its training process. We summarize these attempts as follows:

Attention layer simplification There are currently two lines of research trying to simplify the multi-head attention layers. The first one focuses on attention matrix sparsification.
Notable examples include Star Transformer (Guo et al., 2019), Sparse Transformer (Child et al., 2019), Adaptive Sparse Transformer (Correia et al., 2019; Sukhbaatar et al., 2019), Log-Sparse Transformer (Li et al., 2019), Reformer (Kitaev et al., 2020) and Longformer (Beltagy et al., 2020). However, due to the insufficient support for sparse tensors from the current deep learning platforms, some + +
| N | Model | SearchQA EM | SearchQA F1 | TriviaQA EM | TriviaQA F1 | NewsQA EM | NewsQA F1 | NaturalQA EM | NaturalQA F1 | HotpotQA EM | HotpotQA F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 512 | Google BERT | 74.94 | 80.37 | 70.18 | 75.35 | 51.27 | 66.25 | 66.13 | 78.29 | 60.50 | 77.08 |
| 512 | RoBERTa-2seq | 76.12 | 81.74 | 71.92 | 76.79 | 52.45 | 66.73 | 66.98 | 78.63 | 61.52 | 77.81 |
| 512 | RoBERTa-1seq | 77.09 | 82.62 | 73.65 | 78.22 | 56.13 | 70.64 | 67.14 | 79.07 | 62.77 | 79.28 |
| 512 | SparseBERT | 73.36 | 79.01 | 68.71 | 73.15 | 51.18 | 65.47 | 65.53 | 77.46 | 58.54 | 74.85 |
| 512 | BlockBERT n=2 | 76.68 | 82.33 | 72.36 | 77.53 | 54.66 | 69.46 | 66.94 | 79.03 | 62.13 | 79.15 |
| 512 | BlockBERT n=3 | 75.54 | 81.07 | 72.05 | 76.74 | 53.82 | 68.39 | 66.14 | 78.47 | 60.64 | 77.46 |
| 1024 | RoBERTa-1seq | 77.47 | 83.12 | 75.29 | 80.20 | 55.00 | 69.64 | 68.28 | 80.35 | 61.89 | 78.71 |
| 1024 | SparseBERT | 74.83 | 80.54 | 70.56 | 75.34 | 51.67 | 67.16 | 65.07 | 77.31 | 59.65 | 76.02 |
| 1024 | BlockBERT n=2 | 77.95 | 83.51 | 75.06 | 79.41 | 55.44 | 70.08 | 67.31 | 79.39 | 62.13 | 78.94 |
| 1024 | BlockBERT n=3 | 76.98 | 82.76 | 74.78 | 79.28 | 53.48 | 68.50 | 65.91 | 78.20 | 61.89 | 78.18 |
![](images/7fb48606acee3cb239122c3c1def94701fa584fc64ebb23c769f03cd0017f0c0.jpg)
(a) $N = 512, n = 2$

![](images/c285ce8160637cbb5683c13b2d516603089daa439e9d28acbf6ad1a821fe08c8.jpg)
(b) $N = 1024, n = 2$

![](images/2f79d8ea511e8f547e52e9cc090855d2ed01dd3d1f483435fa0151d4d5580617.jpg)
(c) $N = 512, n = 3$

![](images/90ffa8c6d00d11429c2070d8e081220fb63b3ea90941f8feb6e0edc5267fcd03.jpg)
(d) $N = 1024, n = 3$

Figure 4: Ablation over blockwise attention head assignment.

Table 4: MrQA test results (Tasks are sorted decreasingly by average paragraph length). For BlockBERT models, their attention head configurations are the same as Table 2.
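As a concrete illustration of the permutation-defined masks ablated in Figure 4, the following is a minimal sketch (our reconstruction, not the authors' code) of how a blockwise attention mask for one head can be built: under the stated assumption of $n$ equal-length blocks and a 1-indexed permutation $\pi$, tokens in block $i$ may attend only to tokens in block $\pi(i)$.

```python
import numpy as np

def block_mask(N, perm):
    """Boolean attention mask for one head, defined by a 1-indexed
    permutation over n equal-length blocks: tokens in block i may attend
    to tokens in block perm[i]. Illustrative reconstruction only."""
    n = len(perm)
    assert N % n == 0, "sequence length must divide evenly into blocks"
    b = N // n  # block length
    mask = np.zeros((N, N), dtype=bool)
    for i, j in enumerate(p - 1 for p in perm):
        mask[i * b:(i + 1) * b, j * b:(j + 1) * b] = True
    return mask

# Identity permutation (1, 2) keeps the block diagonal: local attention.
m_id = block_mask(8, (1, 2))
# Permutation (2, 1) is off-diagonal: each block attends to the other block.
m_off = block_mask(8, (2, 1))
```

Each such mask keeps exactly $N^2/n$ of the $N^2$ attention entries, which is where the $1/n$ memory density of blockwise attention comes from.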
| B × N | 8 × 1024 | 16 × 1024 | 24 × 1024 | 32 × 1024 |
|---|---|---|---|---|
| RoBERTa | 0.1371 | OOM | OOM | OOM |
| BlockBERT n=2 | 0.0990 | 0.1869 | OOM | OOM |
| BlockBERT n=3 | 0.0954 | 0.1790 | 0.2634 | OOM |
Table 5: Test time statistics (sec) for different input sizes. OOM indicates out-of-memory.

of them have to represent a sparse matrix using a dense matrix with a binary mask or rely on customized CUDA kernels (Gray et al., 2017). As a result, the speed-up or reduction in memory consumption is sometimes limited in practice. The second line of research prunes redundant attention heads. Examples include Voita et al. (2019) and Michel et al. (2019). Our BlockBERT model belongs to the first category, as we sparsify the attention matrix by replacing it with a block sparse matrix.

Reducing model size for pre-training Knowledge distillation (Hinton et al., 2015) is a general technique that aims to compress and transfer knowledge from a teacher model to a simpler student model. There are two recent efforts that apply knowledge distillation to BERT pre-training for reducing model size: TinyBERT (Jiao et al., 2019) distills BERT using a smaller Transformer, and Tang et al. (2019) distills BERT with a BiLSTM (Hochreiter and Schmidhuber, 1997). In contrast, ALBERT (Lan et al., 2019) is a notable work that does not take the knowledge distillation approach. It uses parameter-sharing to reduce the number of parameters of the BERT model. As discussed in Section 2.1, parameter-sharing reduces both model memory and optimizer memory. These two parts account for about $12.4\%$ of total training memory for BERT-base. As for efficiency, parameter-sharing reduces communication complexity in distributed training and thus saves training time as well.

In the aforementioned efficient Transformers, model quality is often demonstrated by comparable language model perplexity, or equivalently the bits per word/byte. It is often implicitly assumed that similar language model perplexity implies similar pre-training model quality, namely the same performance on downstream tasks. We would like to point out that this assumption does not necessarily hold.
For example, the experiments on the Enwik8 dataset by Child et al. (2019) demonstrate that Sparse Transformer "surpasses the 1.03 state-of-the-art (bits per byte) for a similarly-sized Transformer-XL and matching the 0.99 (bits per byte) of a model trained with more than double the number of parameters". However, if we compare SparseBERT (a pre-training model with a Sparse Transformer backbone) against XLNet (Yang et al., 2019) (a pre-training model with a Transformer-XL backbone) on SQuAD, Table 3 shows that XLNet still outperforms SparseBERT significantly. Therefore, we believe that it is necessary to conduct a comprehensive study and evaluation of existing efficient Transformer models when used for masked language model pre-training. Limited by resources, in this work we mainly compare BlockBERT to pre-training using Sparse Transformer (Child et al., 2019), which is the earliest attempt to design efficient Transformer models and also a key contributor to the success of GPT-3 (Brown et al., 2020). We plan to benchmark more models in the future.

# 6 Conclusion

In this work, we study a lightweight BERT model with the goal of achieving both efficiency and effectiveness. We profile and analyze the memory bottlenecks of BERT and focus on optimizing dot-product self-attention, which consumes memory quadratic in the sequence length. To reduce both time and memory consumption, we present BlockBERT, which sparsifies the attention matrices into sparse block matrices. The proposed model achieves time and memory savings without significant loss of performance.

In the future, we plan to benchmark more efficient Transformers in language model pre-training and fine-tuning.
We also would like to explore more applications of BlockBERT on NLP tasks involving long sequences such as coreference resolution (Joshi et al., 2019b) and document-level machine translation (Miculicich et al., 2018), and also non-NLP tasks such as protein sequence modeling (Rives et al., 2019; Rao et al., 2019). + +# Acknowledgments + +The authors would like to thank Zhilin Yang, Danqi Chen, Yinhan Liu, Mandar Joshi and Luke Zettlemoyer for the helpful suggestions. Jiezhong Qiu and Jie Tang were partially supported by the National Key R&D Program of China (2018YFB1402600), NSFC for Distinguished Young Scholar (61825602), and NSFC (61836013). + +# References + +Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150. +Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165. +Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. 2016. Training deep nets with sublinear memory cost. arXiv preprint arXiv:1604.06174. +Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509. +Gonçalo M Correia, Vlad Niculae, and André FT Martins. 2019. Adaptively sparse transformers. arXiv preprint arXiv:1909.00015. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT* 2019, pages 4171-4186. +Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with context from a search engine. arXiv preprint arXiv:1704.05179. +Jonathan Frankle and Michael Carbin. 2018. The lottery ticket hypothesis: Finding sparse, trainable neural networks. 
arXiv preprint arXiv:1803.03635. +Scott Gray, Alec Radford, and Diederik P Kingma. 2017. Gpu kernels for block-sparse weights. arXiv preprint arXiv:1711.09224. +Qipeng Guo, Xipeng Qiu, Pengfei Liu, Yunfan Shao, Xiangyang Xue, and Zheng Zhang. 2019. Star-transformer. In *NAACL-HLT* 2019, pages 1315-1325. +Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. +Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780. +Yanping Huang, Yonglong Cheng, Dehao Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V Le, and Zhifeng Chen. 2018. Gpipe: Efficient training of giant neural networks using pipeline parallelism. arXiv preprint arXiv:1811.06965. +Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. Tinybert: Distilling bert for natural language understanding. arXiv preprint arXiv:1909.10351. + +Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2019a. Spanbert: Improving pre-training by representing and predicting spans. arXiv preprint arXiv:1907.10529. +Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In ACL'17, pages 1601-1611. +Mandar Joshi, Omer Levy, Daniel S Weld, and Luke Zettlemoyer. 2019b. Bert for coreference resolution: Baselines and analysis. arXiv preprint arXiv:1908.09091. +Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. +Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficient transformer. arXiv preprint arXiv:2001.04451. +Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark for question answering research. 
Transactions of the Association for Computational Linguistics, 7:453-466. +Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942. +Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. arXiv preprint arXiv:1906.00300. +Shiyang Li, Xiaoyong Jin, Yao Xuan, Xiyou Zhou, Wenhu Chen, Yu-Xiang Wang, and Xifeng Yan. 2019. Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. arXiv preprint arXiv:1907.00235. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. +Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? arXiv preprint arXiv:1905.10650. +Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, et al. 2017. Mixed precision training. arXiv preprint arXiv:1710.03740. + +Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-level neural machine translation with hierarchical attention networks. In *EMNLP' 18*, pages 2947-2954. +Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with bert. arXiv preprint arXiv:1901.04085. +Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. arXiv preprint arXiv:1904.01038. +Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237. +Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. +Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822. +Roshan Rao, Nicholas Bhattacharya, Neil Thomas, Yan Duan, Peter Chen, John Canny, Pieter Abbeel, and Yun Song. 2019. Evaluating protein transfer learning with tape. In Advances in Neural Information Processing Systems, pages 9686-9698. +Alexander Rives, Siddharth Goyal, Joshua Meier, Demi Guo, Myle Ott, C Lawrence Zitnick, Jerry Ma, and Rob Fergus. 2019. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. bioRxiv, page 622803. +Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. arXiv preprint arXiv:1906.03158. +Nimit Sharad Sohoni, Christopher Richard Aberger, Megan Leszczynski, Jian Zhang, and Christopher Ré. 2019. Low-memory neural network training: A technical report. arXiv preprint arXiv:1904.10631. +Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, and Armand Joulin. 2019. Adaptive attention span in transformers. arXiv preprint arXiv:1905.07799. +Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling task-specific knowledge from bert into simple neural networks. arXiv preprint arXiv:1903.12136. + +Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. Newsqa: A machine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 191-200. 
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008. +Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. arXiv preprint arXiv:1905.09418. +Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. +Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In EMNLP'18. \ No newline at end of file diff --git a/blockwiseselfattentionforlongdocumentunderstanding/images.zip b/blockwiseselfattentionforlongdocumentunderstanding/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..61f819275d57605d8e9c881b820249851742e2c6 --- /dev/null +++ b/blockwiseselfattentionforlongdocumentunderstanding/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aa1a28b38d2c4406981ad1dfd1a3470b3da9e164d9c672d29722d1c566beb0a2 +size 364039 diff --git a/blockwiseselfattentionforlongdocumentunderstanding/layout.json b/blockwiseselfattentionforlongdocumentunderstanding/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..46be34c962575f1d8acd5003b3b647aee7322199 --- /dev/null +++ b/blockwiseselfattentionforlongdocumentunderstanding/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4d1ed8e147acac3ed61e9e636ec725af2d9987c6ba7e82d0a625742711021646 +size 434662 diff --git a/bootstrappingacrosslingualsemanticparser/109d8091-ceb5-4ab7-aefb-444ea8dd4729_content_list.json 
b/bootstrappingacrosslingualsemanticparser/109d8091-ceb5-4ab7-aefb-444ea8dd4729_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..c4122df67c24a26248ea53be242f77796bfa83ca --- /dev/null +++ b/bootstrappingacrosslingualsemanticparser/109d8091-ceb5-4ab7-aefb-444ea8dd4729_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:342279d6d123f1478a60fdbfef0a8beb47f5dc490a01e8294d819ce7634799c4 +size 122694 diff --git a/bootstrappingacrosslingualsemanticparser/109d8091-ceb5-4ab7-aefb-444ea8dd4729_model.json b/bootstrappingacrosslingualsemanticparser/109d8091-ceb5-4ab7-aefb-444ea8dd4729_model.json new file mode 100644 index 0000000000000000000000000000000000000000..14b9e1ac838b9c20f8b955211c936d8739deaf99 --- /dev/null +++ b/bootstrappingacrosslingualsemanticparser/109d8091-ceb5-4ab7-aefb-444ea8dd4729_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:603065fefee67abaaa798df4e03a697c3d9d46c8d4cfae707ad27b8c9b1d60b7 +size 147798 diff --git a/bootstrappingacrosslingualsemanticparser/109d8091-ceb5-4ab7-aefb-444ea8dd4729_origin.pdf b/bootstrappingacrosslingualsemanticparser/109d8091-ceb5-4ab7-aefb-444ea8dd4729_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..9b3749080efa197a8c1ccd9174a8ca121c266a81 --- /dev/null +++ b/bootstrappingacrosslingualsemanticparser/109d8091-ceb5-4ab7-aefb-444ea8dd4729_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f1c88969a1650aac64d83a19a7ffda1fa585564e5acb8ae7e82cee5717a0b60c +size 1179566 diff --git a/bootstrappingacrosslingualsemanticparser/full.md b/bootstrappingacrosslingualsemanticparser/full.md new file mode 100644 index 0000000000000000000000000000000000000000..cfca30e25ca6732db3b2aedba3cbdcb46588b0b0 --- /dev/null +++ b/bootstrappingacrosslingualsemanticparser/full.md @@ -0,0 +1,402 @@ +# Bootstrapping a Crosslingual Semantic Parser + +Tom Sherborne, Yumo Xu and Mirella 
Lapata

Institute for Language, Cognition and Computation

School of Informatics, University of Edinburgh

10 Crichton Street, Edinburgh EH8 9AB

{tom.sherborne,yumo.xu}@ed.ac.uk, mlap@inf.ed.ac.uk

# Abstract

Recent progress in semantic parsing scarcely considers languages other than English, but professional translation can be prohibitively expensive. We adapt a semantic parser trained on a single language, such as English, to new languages and multiple domains with minimal annotation. We ask whether machine translation is an adequate substitute for training data, and extend this to investigate bootstrapping using joint training with English, paraphrasing, and multilingual pre-trained models. We develop a Transformer-based parser combining paraphrases by assembling attention over multiple encoders and present new versions of ATIS and Overnight in German and Chinese for evaluation. Experimental results indicate that MT can approximate training data in a new language for accurate parsing when augmented with paraphrasing through multiple MT engines. For cases where MT is inadequate, we also find that our approach achieves parsing accuracy within $2\%$ of complete translation using only $50\%$ of training data. $^{1}$

# 1 Introduction

Semantic parsing is the task of mapping natural language utterances to machine-interpretable expressions such as SQL or a logical meaning representation. It has emerged as a key technology for developing natural language interfaces, especially in the context of question answering (Kwiatkowski et al., 2013; Berant et al., 2013; Liang, 2016; Kollar et al., 2018), where a semantically complex question is translated to an executable query to retrieve an answer, or denotation, from a knowledge base.
Sequence-to-sequence neural networks (Sutskever et al., 2014) are a popular approach to semantic parsing, framing the task as sequence transduction from natural to formal languages (Jia and Liang, 2016; Dong and Lapata, 2016). Recent proposals include learning intermediate logic representations (Dong and Lapata, 2018; Guo et al., 2019), constrained decoding (Yin and Neubig, 2017; Krishnamurthy et al., 2017; Lin et al., 2019), and graph-based parsing (Bogin et al., 2019; Shaw et al., 2019).

Given recent interest in semantic parsing and the data requirements of neural methods, it is unsurprising that many challenging datasets have been released in the past decade (Wang et al., 2015; Zhong et al., 2017; Iyer et al., 2017; Yu et al., 2018, 2019). However, these datasets widely treat English as synonymous with natural language. English is neither linguistically typical (Dryer and Haspelmath, 2013) nor the most widely spoken language worldwide (Eberhard et al., 2019), but is presently the lingua franca of both utterances and knowledge bases in semantic parsing. Natural language interfaces intended for international deployment must be adaptable to multiple locales beyond prototypes for English. However, it is uneconomical to create brand new datasets for every new language and domain.

In this regard, most previous work has focused on multilingual semantic parsing, i.e., learning from multiple natural languages in parallel, assuming the availability of multilingual training data. Examples of multilingual datasets include GeoQuery (Zelle and Mooney, 1996), ATIS (Dahl et al., 1994) and NLMaps (Haas and Riezler, 2016), but each is limited to one domain. For larger datasets, professional translation can be prohibitively expensive and requires many hours of work from experts and native speakers. Recently, Min et al. (2019) reproduced the public partitions of the SPIDER dataset (Yu et al., 2018) into Chinese, but this required three expert annotators for verification and agreement.
We posit there exists a more efficient strategy for expanding semantic parsing to a new language.

In this work, we consider crosslingual semantic parsing: adapting a semantic parser trained on English to another language. We expand executable semantic parsing to new languages and multiple domains by bootstrapping from in-task English datasets, task-agnostic multilingual resources, and publicly available machine translation (MT) services, in lieu of expert translation of training data. We investigate a core hypothesis that MT can provide a noisy, but reasonable, approximation of training data in a new source language. We further explore the benefit of augmenting noisy MT data using pre-trained models, such as BERT (Devlin et al., 2019), and multilingual training with English. Additionally, we examine approaches to ensembling multiple machine translations as approximate paraphrases. This challenge combines both domain adaptation and localization, as a parser must generalize to the locale-specific style of queries using only noisy examples to learn from.

For our evaluation, we present the first multidomain, executable semantic parsing dataset in three languages, and an additional locale for a single-domain dataset. Specifically, we extend ATIS (Dahl et al., 1994), pairing Chinese (ZH) utterances from Susanto and Lu (2017a) to SQL queries, and create a parallel German (DE) human translation of the full dataset. Following this, we also make available a new version of the multidomain Overnight dataset (Wang et al., 2015) where only the development and test sets are translations from native speakers of Chinese and German. This is representative of the real-world scenario where a semantic parser needs to be developed for new languages without gold-standard training data.
Our contributions can be summarized as follows: (1) new versions of ATIS (Dahl et al., 1994) and Overnight (Wang et al., 2015) for generating executable logical forms from Chinese and German utterances; (2) a combined encoder-decoder attention mechanism to ensemble over multiple Transformer encoders; (3) a cost-effective methodology for bootstrapping semantic parsers to new languages using minimal new annotation. Our proposed method overcomes the paucity of gold-standard training data using pre-trained models, joint training with English, and paraphrasing through MT engines; and (4) an investigation into practical minimum gold-standard translation requirements for a fixed performance penalty when MT is unavailable.

# 2 Related Work

Across logical formalisms, there have been several proposals for multilingual semantic parsing which employ multiple natural languages in parallel (Jones et al., 2012; Andreas et al., 2013; Lu, 2014; Susanto and Lu, 2017b; Jie and Lu, 2018).

Jie and Lu (2014) ensemble monolingual parsers to generate a single parse from $< 5$ source languages for GeoQuery (Zelle and Mooney, 1996). Similarly, Richardson et al. (2018) propose a polyglot automaton decoder for source-code generation in 45 languages. Susanto and Lu (2017a) explore a multilingual neural architecture in four languages for GeoQuery and three languages for ATIS by extending Dong and Lapata (2016) with multilingual encoders. Other work focuses on multilingual representations for semantic parsing based on universal dependencies (Reddy et al., 2017) or embeddings of logical forms (Zou and Lu, 2018).

We capitalize on existing semantic parsing datasets to bootstrap from English to another language, and therefore do not assume that multiple languages are available as parallel input. Our work is closest to Duong et al. (2017); however, they explore how to parse both English and German simultaneously using a multilingual corpus.
In contrast, we consider English data only as an augmentation to improve parsing in Chinese and German, and do not use "real" utterances during training. Recently, Artetxe et al. (2020) studied MT for crosslingual entailment; however, our results in Section 5 suggest these prior findings may not extend to semantic parsing, owing to the heightened requirement for factual consistency across translations.

Our work complements recent efforts in crosslingual language understanding such as XNLI for entailment (Conneau et al., 2018), semantic textual similarity (Cer et al., 2017), and the XTREME (Hu et al., 2020) and XGLUE (Liang et al., 2020) benchmarks. There has also been interest in parsing into interlingual graphical meaning representations (Damonte and Cohen, 2018; Zhang et al., 2018), spoken language understanding (Upadhyay et al., 2018) and $\lambda$ -calculus expressions (Kwiatkowski et al., 2010; Lu and Ng, 2011; Lu, 2014). In contrast, we focus on logical forms grounded in knowledge bases and therefore do not consider these approaches further.

# 3 Problem Formulation

Throughout this work, we consider the real-world scenario where a typical developer wishes to develop a semantic parser to facilitate question answering from an existing commercial database to
| Lang | Utterance |
|---|---|
| | **Noun/Adjective Ambiguity** (“first-class fares” is a noun object) |
| EN | Show me the first class fares from Baltimore to Dallas |
| DE$_{\mathrm{MT}}$ | Zeigen Sie mir die erstklassigen Tarife von Baltimore nach Dallas |
| DE$_{\mathrm{H}}$ | Zeige mir die Preise in der ersten Klasse von Baltimore nach Dallas |
| | **Entity Misinterpretation** (Airline names aren’t preserved) |
| EN | Which Northwest and United flights go through Denver before noon? |
| DE$_{\mathrm{MT}}$ | Welche Nordwesten und Vereinigten Flüge gehen durch Denver vor Mittag |
| DE$_{\mathrm{H}}$ | Welche Northwest und United Flüge gehen durch Denver vor Mittag |
| | **Question to Statement Mistranslation** (rephrased as “You have a...”) |
| EN | Do you have an 819 flight from Denver to San Francisco? |
| ZH$_{\mathrm{MT}}$ | 你有一个从丹佛到旧金山的819航班 |
| ZH$_{\mathrm{H}}$ | 有没有从丹佛到旧金山的819航班 |
| | **Contextual Misinterpretation** (“blocks” translated to “街区” [street blocks]) |
| EN | What seasons did Kobe Bryant have only three blocks? |
| ZH$_{\mathrm{MT}}$ | 什么季节科比布莱恩特只有三个街区 |
| | **Referential Ambiguity** (他 [he] refers to either players or Kobe Bryant) |
| EN | Which players played more games than Kobe Bryant the seasons he played? |
| ZH$_{\mathrm{MT}}$ | 在他打球的那些赛季中,哪些球员比科比布莱恩特打得更多 |
Table 1: Examples from ATIS (Dahl et al., 1994) and Overnight (Wang et al., 2015). Utterances are translated into Chinese and German using both machine translation $(\mathrm{L}_{\mathrm{MT}})$ and crowdsourcing with verification $(\mathrm{L}_{\mathrm{H}})$ . We highlight issues with the noisy MT data (underlined and bolded) compared to improved human translations (underlined) for ATIS.

customers in a new locale. Consider, for example, an engineer who wishes to extend support to German speakers for a commercial database of USA flights in English. Without the resources of high-value technology companies, costs for annotation and machine learning resources must be minimized to maintain commercial viability. To keep this task economical, the developer must minimize new annotation or professional translation and instead bootstrap a system with public resources. At a minimum, test and development sets of utterances from native speakers are required for evaluation. However, the necessary extent of annotation and the utility of domain adaptation for training are unknown. Therefore, our main question is: how successfully can a semantic parser learn from alternative data resources to generalize to novel queries in a new language?

Crosslingual semantic parsing presents a unique challenge as an NLU task. It demands the generation of precise utterance semantics, aligned across languages, while ensuring an accurate mapping between the logical form and the idiomatic syntax of questions in every language under test. In comparison to NLU classification tasks such as XNLI (Conneau et al., 2018), our challenge is to preserve and generate meaning, constrained under a noisy MT channel. The misinterpretation of entities, relationships, and relative or numerical expressions can all result in an incorrect parse.

Lexical translation in MT, however accurate it may be, is insufficient alone to represent queries from native speakers. For example, the English expression "dinner flights" can be directly translated to German as "Abendessenflug" [dinner flight], but "Flug zur Abendszeit" [evening flight] better represents typical German dialogue. This issue further concerns question phrasing. For example, the English query "do you have X?" is often mistranslated to the statement "你有一个X" [you have one X], but typical Chinese employs a positive-negative pattern ("有没有一个X?" [have not have one X?]) to query possession. Our parser must overcome each of these challenges without access to gold data.

# 3.1 Neural Semantic Parsing

We approach our semantic parsing task using a sequence-to-sequence (SEQ2SEQ) Transformer encoder-decoder network (Vaswani et al., 2017). The encoder computes a contextual representation for each input token through multi-head self-attention, combining parallel dot-product attention weightings, or "heads", over the input sequence. The decoder repeats this self-attention across the output sequence and incorporates the source sequence through multi-head attention over the encoder output.
A Transformer layer maps input $X = \{x_{i}\}_{i = 0}^{N}$, where $x_{i}\in \mathbb{R}^{d_{x}}$, to output $Y = \{y_{i}\}_{i = 0}^{N}$ using attention components Query $\mathbf{Q}$, Key $\mathbf{K}$ and Value $\mathbf{V}$ in $H$ attention heads:

$$
\mathbf{e}_{i}^{(h)} = \frac{\mathbf{Q} W_{Q}^{(h)}\left(\mathbf{K} W_{K}^{(h)}\right)^{T}}{\sqrt{d_{x}/H}}; \quad \mathbf{s}_{i}^{(h)} = \operatorname{softmax}\left(\mathbf{e}_{i}^{(h)}\right) \tag{1}
$$

$$
\mathbf{z}_{i}^{(h)} = \mathbf{s}_{i}^{(h)}\left(\mathbf{V} W_{V}^{(h)}\right); \quad \mathbf{z}_{i} = \operatorname{concat}\left\{\mathbf{z}_{i}^{(h)}\right\}_{h=1}^{H} \tag{2}
$$

$$
\hat{\mathbf{y}}_{i} = \operatorname{LayerNorm}\left(X + \mathbf{z}_{i}\right) \tag{3}
$$

$$
\mathbf{y}_{i} = \operatorname{LayerNorm}\left(\hat{\mathbf{y}}_{i} + \mathrm{FC}\left(\operatorname{ReLU}\left(\mathrm{FC}\left(\hat{\mathbf{y}}_{i}\right)\right)\right)\right) \tag{4}
$$

Following Wang et al. (2019), Equation 1 describes attention scores between Query ($\mathbf{Q}$) and Key ($\mathbf{K}$). Equation 2 applies scores $\mathbf{s}_i^{(h)}$ to Value ($\mathbf{V}$) to produce the $h^{\mathrm{th}}$ attention head $\mathbf{z}_i^{(h)}$, and concatenates the heads into the multi-head attention output $\mathbf{z}_i$, with $W_{\{Q,K,V\}}^{(h)} \in \mathbb{R}^{d_x \times (d_x / H)}$. Output prediction $\mathbf{y}_i$ combines $\mathbf{z}_i$ with a residual connection, two fully-connected (FC) layers with a ReLU nonlinearity, and layer normalization (Ba et al., 2016). The encoder computes self-attention with query, key, and value all equal to the input, $\{\mathbf{Q},\mathbf{K},\mathbf{V}\} = X$. Decoder layers use self-attention over the output sequence, $\{\mathbf{Q},\mathbf{K},\mathbf{V}\} = Y_{out}$, followed by attention over the encoder output $E$ ($\mathbf{Q} = Y_{out}$ and $\{\mathbf{K},\mathbf{V}\} = E$) to incorporate the input encoding into decoding.
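As a concrete illustration, Equations 1-4 for a single encoder self-attention layer (where $\{\mathbf{Q},\mathbf{K},\mathbf{V}\} = X$) can be sketched in NumPy. This is a minimal sketch with random weights and no learned LayerNorm gain or bias, not the implementation used in our experiments:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # LayerNorm (Ba et al., 2016) without learned gain/bias, for brevity.
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def transformer_layer(X, Wq, Wk, Wv, W1, W2, H):
    """One encoder self-attention layer (Equations 1-4) with Q = K = V = X.

    X: (N, d_x) input sequence; Wq/Wk/Wv: H matrices of shape (d_x, d_x // H);
    W1: (d_x, d_ff) and W2: (d_ff, d_x) are the two FC layers.
    """
    N, d_x = X.shape
    heads = []
    for h in range(H):
        Q, K, V = X @ Wq[h], X @ Wk[h], X @ Wv[h]   # per-head projections
        s = softmax(Q @ K.T / np.sqrt(d_x / H))      # Eq. 1: scaled scores, softmax
        heads.append(s @ V)                          # Eq. 2: z^(h) = s^(h) (V W_V^(h))
    z = np.concatenate(heads, axis=-1)               # Eq. 2: concat over heads
    y_hat = layer_norm(X + z)                        # Eq. 3: residual + LayerNorm
    ffn = np.maximum(y_hat @ W1, 0.0) @ W2           # Eq. 4: FC, ReLU, FC
    return layer_norm(y_hat + ffn)                   # Eq. 4: residual + LayerNorm
```

Decoder cross-attention differs only in taking the query from the decoder state and the key and value from the encoder output $E$.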
# 3.2 Crosslingual Modeling

Consider a parser, $\mathrm{SP}(x)$, which transforms utterances in language L, $x^{\mathrm{L}}$, to some executable logical form, $y$. We express a dataset in some language L as $\mathcal{D}^{\mathrm{L}} = \left(\{x_n^{\mathrm{L}}, y_n, d_n\}_{n=1}^{N}, KB\right)$, for $N$ examples, where $x^{\mathrm{L}}$ is an utterance in language L, $y$ is the corresponding logical form and $d$ is a denotation from the knowledge base, $d = KB(y)$. The MT approximation of language L is denoted $J$; using MT from English, $x' = \mathrm{MT}\left(x^{\mathrm{EN}}\right)$. Our hypothesis is that $J \approx \mathrm{L}$, such that the prediction $\hat{y} = \mathrm{SP}\left(x^{\mathrm{L}}\right)$ for test example $x^{\mathrm{L}}$ approaches the gold logical form, $y_{\mathrm{gold}}$,

![](images/22a4d8e7c78a3e7c01f2b8dad97ecfc7ceb73ce1c1df868d0591282f683ee0eb.jpg)
Figure 1: (A) Machine Translation (MT) from English into some language, L, for training data. $J$ is the MT approximation of this language to be parsed. (B) Human translation of the development and test sets from English into language L. (C) Translation from language L into English using MT. Any system parsing language L must perform above this "back-translation" baseline to justify development.

conditioned upon the quality of MT. An ideal parser will output a non-spurious prediction, $\hat{y}$, executing to return a denotation equal to $KB(y_{\mathrm{gold}}) = d_{\mathrm{gold}}$. The proportion of predicted queries which retrieve the correct denotation defines the denotation accuracy. Generalization performance is always measured on real queries from native speakers, e.g., $\mathcal{D}^J = \{\mathcal{D}_{\mathrm{train}}^J,\mathcal{D}_{\mathrm{dev}}^{\mathrm{L}},\mathcal{D}_{\mathrm{test}}^{\mathrm{L}}\}$ and $\mathcal{D}_{\mathrm{dev|test}}^{J} = \emptyset$.

We evaluate parsing on two languages to compare transfer learning from English into varied locales.
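The denotation-accuracy metric just defined can be sketched as follows; `parser` and `kb` are illustrative stand-ins for SP and KB, and the names are our assumptions rather than released code:

```python
def denotation_accuracy(utterances, gold_lfs, parser, kb):
    """Proportion of predictions whose execution matches the gold denotation.

    parser: maps an utterance x^L to a predicted logical form y_hat = SP(x^L).
    kb:     executes a logical form to a denotation d = KB(y).
    """
    correct = 0
    for x, y_gold in zip(utterances, gold_lfs):
        y_hat = parser(x)
        # A spurious y_hat may differ from y_gold yet execute to the same
        # denotation; denotation accuracy credits it either way.
        correct += kb(y_hat) == kb(y_gold)
    return correct / len(utterances)
```

Because correctness is judged on denotations rather than logical-form strings, superficially different but equivalent queries are not penalized.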
We investigate German, a similar Germanic language, and Mandarin Chinese, a dissimilar Sino-Tibetan language, due to the purported quality of existing MT systems (Wu et al., 2016) and the availability of native speakers to verify or rewrite crowdsourced annotation. Similar to Conneau et al. (2018), we implement a "back-translate into English" baseline wherein the test set in ZH/DE is machine translated into English and a semantic parser trained on the source English dataset predicts logical forms. Figure 1 indicates how each dataset is generated. To maintain a commercial motivation for developing an in-language parser, any proposed system must perform above this baseline. Note that we do not claim to be investigating semantic parsing for low-resource languages since, by construction, we require adequate MT into each language of interest. We use Google Translate (Wu et al., 2016) as our primary MT system and complement this with systems from other global providers. The selection and use of MT is further discussed in Appendix C.

![](images/a280c108c434065e7c853419a07cd6b6ed601180f4402eb64b693ef01aad6fe4.jpg)
Figure 2: The semantic parser (SP) predicts a logical form, $\hat{y}$, from an utterance in language L, $x^{\mathrm{L}}$. A knowledge base (KB) executes the logical form to predict a denotation, $\hat{d}$. Approaches to crosslingual modeling involve: (A) using machine translation (MT) to approximate training data in language L; (B) training SP on both MT data and source English data; (C) using multiple MT systems to improve the approximation of L.

# 3.3 Feature Augmentation

Beyond using MT for in-language training data, we now describe our approaches to further improve parsing using external resources and transfer learning. These approaches are illustrated in Figure 2.

Pre-trained Representations Motivated by the success of contextual word representations for semantic parsing of English by Shaw et al.
(2019), we extend this technique to Chinese and German using implementations of BERT from Wolf et al. (2019). Rather than learning embeddings for the source language tabula rasa, we experiment with using pre-trained 768-dimensional inputs from BERT-base in English, Chinese and German $^{2}$, as well as the multilingual model trained on 104 languages. To account for rare entities which may be absent from pre-trained vocabularies, we append these representations to learnable embeddings. Representations for logical form tokens are trained from a random initialisation, as we lack a BERT-style pre-trained model for meaning representations (i.e., $\lambda$-DCS or SQL queries). Early experiments considering multilingual word representations (Conneau et al., 2017; Song et al., 2018) yielded no significant improvement and these results are omitted for brevity.

Multilingual "Shared" Encoder Following Duong et al. (2017) and Susanto and Lu (2017a), we experiment with an encoder trained with batches from multiple languages as input. Errors in the MT data are purportedly mitigated through the model observing an equivalent English utterance for the same logical form. The joint training dataset is described as $\mathcal{D}_{\mathrm{train}}^{\mathrm{EN} + J} = \mathcal{D}_{\mathrm{train}}^{\mathrm{EN}} \cup \mathcal{D}_{\mathrm{train}}^{J}$ for $J = \{\mathrm{ZH}, \mathrm{DE}\}$. Consistent with Section 3.2, we measure validation and test performance using only utterances from native speakers, $\mathcal{D}_{\mathrm{dev}, \mathrm{test}}^{\mathrm{L}}$, and ignore performance for English. This is similar to the All model from Duong et al. (2017); however, our objective is biased to maximize performance on one language rather than a balanced multilingual objective.
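The union $\mathcal{D}_{\mathrm{train}}^{\mathrm{EN}+J}$ and mixed-language batching for the shared encoder can be sketched as below; this is a simplified illustration with toy utterances and function names of our own, not the released training pipeline:

```python
import random

def joint_training_set(d_en, d_mt):
    """D_train^{EN+J} = D_train^{EN} ∪ D_train^{J}.

    Both partitions pair an utterance with a logical-form target, so the
    shared encoder sees an English utterance for every logical form that
    also appears with a (possibly noisy) machine-translated utterance.
    """
    return list(d_en) + list(d_mt)

def epoch_batches(dataset, batch_size, seed=0):
    # Mixed-language batches: shuffling the union once per epoch interleaves
    # English and machine-translated examples for the shared encoder.
    rng = random.Random(seed)
    data = list(dataset)
    rng.shuffle(data)
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]
```

Evaluation then uses only the native-speaker partitions $\mathcal{D}_{\mathrm{dev},\mathrm{test}}^{\mathrm{L}}$, so the English half of the union serves purely as a training-time regularizer.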
Machine Translation as Paraphrasing Paraphrasing is a common augmentation for semantic parsers to improve generalization to unseen utterances (Berant and Liang, 2014; Dong et al., 2017; Iyer et al., 2017; Su and Yan, 2017; Utama et al., 2018). While there has been some study of multilingual paraphrase systems (Ganitkevitch and Callison-Burch, 2014), we instead use MT as a paraphrase resource, similar to Mallinson et al. (2017). Each MT system will have different outputs from different language models, and therefore we hypothesize that an ensemble of multiple systems, $(J_{1},\ldots J_{N})$, will provide greater linguistic diversity to better approximate L. Whereas prior work uses back-translation or beam search, a developer in our scenario lacks the resources to train an NMT system for such techniques. As a shortcut, we input the same English sentence into $N$ public APIs for MT to retrieve a set of candidate paraphrases in the language of interest (we use three APIs in experiments).

We experiment with two approaches to utilising these pseudo-paraphrases. The first, MT-Paraphrase, aims to learn a single, robust language model for L by uniformly sampling one paraphrase from $(J_{1},\ldots J_{N})$ as input to the model during each epoch of training. The second approach, MT-Ensemble, is an ensemble architecture similar to Garmash and Monz (2016) and Firat et al. (2016), combining attention over each paraphrase in a single decoder. For $N$ paraphrases, we train $N$ parallel encoder models, $\{e_n\}_{n = 1}^N$, and ensemble across each paraphrase by combining $N$ sets of encoder-decoder attention heads. For each encoder output, $E_{n} = e_{n}(X_{n})$, we compute multi-head attention, $\mathbf{z}_i$ in Equation 2, with the decoder state, $D$, as the query and $E_{n}$ as the key and value (Equation 5). Attention heads are combined through a combination function (Equation 6) and the output $\mathbf{m}_{i\varepsilon}$ replaces $\mathbf{z}_i$ in Equation 3.
We compare ensemble strategies using two combination functions: the mean of heads (Equation 7a) and a gating network (Garmash and Monz, 2016; Equation 7b) with gating function $\mathbf{g}$ (Equation 8), where $W_{\mathbf{g}} \in \mathbb{R}^{N \times |V|}$ and $W_h \in \mathbb{R}^{|V| \times N|V|}$. We experimentally found the gating approach to be superior and we report results using only this method.

$$
\mathbf{m}_{n} = \operatorname{MultiHeadAttention}\left(D, E_{n}, E_{n}\right) \tag{5}
$$

$$
\mathbf{m}_{i\varepsilon} = \operatorname{comb}\left(\mathbf{m}_{1},\dots,\mathbf{m}_{N}\right) \tag{6}
$$

$$
\operatorname{comb} = \left\{\begin{array}{ll}\frac{1}{N}\sum_{n}^{N}\mathbf{m}_{n} & \mathrm{(a)} \\ \sum_{n}^{N}\mathbf{g}_{n}\mathbf{m}_{n} & \mathrm{(b)}\end{array}\right. \tag{7}
$$

$$
\mathbf{g} = \operatorname{softmax}\left(W_{\mathbf{g}}\tanh\left(W_{h}\left[\mathbf{m}_{1},\dots,\mathbf{m}_{N}\right]\right)\right) \tag{8}
$$

Each expert submodel uses a shared embedding space to exploit similarity between paraphrases. During training, each encoder learns a language model specific to an individual MT source, yielding diversity among experts in the final system. However, in order to improve the robustness of each encoder to translation variability, inputs to each encoder are shuffled with some tuned probability $p_{\text{shuffle}}$. During prediction, the test utterance is input to all $N$ models in parallel. In initial experiments, we found negligible difference in MT-Paraphrase using random sampling or round-robin selection of each paraphrase. Therefore, we assume that both methods use all available paraphrases over training. Our two approaches differ in that MT-Paraphrase uses all paraphrases sequentially whereas MT-Ensemble uses paraphrases in parallel. Previous LSTM-based ensemble approaches propose training full parallel networks and ensembling at the final decoding step.
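The gated combination of Equations 7b and 8 can be sketched as follows; the shapes and names here are illustrative assumptions rather than our exact implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_combination(m, W_g, W_h):
    """Gated mixture of per-encoder attention outputs (Equations 7b and 8).

    m:   (N, d) stacked outputs m_1..m_N, one per paraphrase encoder.
    W_h: (d_h, N * d) and W_g: (N, d_h); shapes are illustrative assumptions.
    Returns sum_n g_n * m_n, which replaces z_i in Equation 3.
    """
    N, d = m.shape
    # Eq. 8: concatenate the N expert outputs and score one weight per expert.
    g = softmax(W_g @ np.tanh(W_h @ m.reshape(N * d)))
    # Eq. 7b: gated sum over experts.
    return (g[:, None] * m).sum(axis=0)
```

The mean combination of Equation 7a is simply `m.mean(axis=0)`; with an all-zero $W_{\mathbf{g}}$ the gate is uniform and Equation 7b reduces to it.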
Training full parallel networks in this way proved too expensive given the nonrecurrent Transformer model. Our hybrid mechanism permits the decoder to attend to every paraphrased input and maintains a tractable model size with a single decoder.

# 4 Data

We consider two datasets in this work. Firstly, we evaluate our hypothesis that MT is an adequate proxy for "real" utterances using ATIS (Dahl et al., 1994). This single-domain dataset contains 5,418 utterances paired with SQL queries pertaining to a US flights database. ATIS was previously translated into Chinese by Susanto and Lu (2017a) for semantic parsing into $\lambda$-calculus, whereas we present these Chinese utterances aligned with SQL queries from Iyer et al. (2017). In addition, we translate ATIS into German following the methodology described below. We use the split of 4,473/497/448 examples for train/validation/test from Kwiatkowski et al. (2011).

We also examine the multi-domain Overnight dataset (Wang et al., 2015), which contains 13,682 English questions paired with $\lambda$-DCS logical forms executable in SEMPRE (Berant et al., 2013). Overnight is $2.5\times$ larger than ATIS, so a complete translation of this dataset would be uneconomical for our case study. As a compromise, we collect human translations in German and Chinese only for the test and validation partitions of Overnight. We argue that having access to limited translation data better represents the crosslingual transfer required in localizing a parser. We define a fixed development partition of a stratified $20\%$ of the training set for a final split of 8,754/2,188/2,740 for training/validation/testing. Note that we consider only Simplified Mandarin Chinese for both datasets.

Crowdsourcing Translations The ATIS and Overnight datasets were translated to German and Chinese using Amazon Mechanical Turk, following best practices in related work (Callison-Burch, 2009; Zaidan and Callison-Burch, 2011; Behnke et al., 2018; Sosoni et al., 2018).
We initially collected three translations per source sentence. Submissions were restricted to Turkers from Germany, Austria, and Switzerland for German and China, USA, or Singapore for Chinese. Our AMT interface barred empty submissions and copying or pasting anywhere within the page. Any attempts to bypass these controls triggered a warning message that using MT is prohibited. Submissions were rejected if they were $>80\%$ similar (by BLEU) to references from Google Translate (Wu et al., 2016), as were nonsensical or irrelevant submissions.

In a second stage, workers cross-checked translations by rating the best translation from each candidate set, including an MT reference, with a rewrite option if no candidate was satisfactory. We collected three judgements per set to extract the best candidate translation. Turkers unanimously agreed on a single candidate in $87.8\%$ of cases (across datasets). Finally, as a third quality filter, we recruited bilingual native speakers to verify, rewrite, and break ties between all top candidates. Annotators chose to rewrite best candidates in only $3.2\%$ of cases, suggesting our crowdsourced dataset is well representative of utterances from native speakers. Example translations from annotators and MT are shown in Table 1. Further details of our crowdsourcing methodology and a sample of human-translated data can be found in Appendix C.

| Model | DE | ZH |
| --- | --- | --- |
| Back-translation to EN | 53.9 | 57.8 |
| +BERT-base | 56.4 | 58.9 |
| SEQ2SEQ | 66.9 | 66.2 |
| +BERT (de/zh) | 67.8 | 67.4 |
| Shared Encoder | 69.3 | 68.3 |
| +BERT-ML | 69.5 | 68.9 |

(a) training on gold-standard data

| Model | DE (MT) | ZH (MT) |
| --- | --- | --- |
| Back-translation to EN | 57.8 | 53.9 |
| +BERT-base | 58.9 | 56.4 |
| SEQ2SEQ | 61.0 | 55.2 |
| +BERT-(de/zh) | 64.8 | 57.3 |
| Shared Encoder | 64.1 | 58.7 |
| +BERT-ML | 66.4 | 59.9 |
| MT-Paraphrase | 62.2 | 64.5 |
| +BERT-ML | 67.8 | 65.0 |
| +Shared Encoder | 66.6 | 68.1 |
| MT-Ensemble | 63.9 | 62.2 |
| +BERT-ML | 64.8 | 65.5 |
| +Shared Encoder | 68.5 | 68.3 |

(b) training on machine translated (MT) data

Table 2: Test set denotation accuracy for ATIS in German (DE) and Chinese (ZH).

Machine Translation All machine translation systems used in this work were treated as a black box. For most experiments, we retrieved translations from English to the target language with the Google Translate API (Wu et al., 2016). We use this system owing to the purported translation quality (Duong et al., 2017) and the API's public availability. For ensemble approaches, we used Baidu Translate and Youdao Translate for Mandarin, and Microsoft Translator Text and Yandex Translate for German (see Appendix C).

# 5 Results and Analysis

We compare the neural model defined in Section 3.1 (SEQ2SEQ) to models using each augmentation outlined in Section 3.3, a combination thereof, and the back-translation baseline. Table 2(a) details experiments for ATIS using human-translated training data, contrasting with Table 2(b), which substitutes MT for training data in ZH and DE. Similar results for Overnight are then presented in Table 3. Finally, we consider partial translation in Figure 3. Optimization, hyperparameter settings and reproducibility details are given in Appendix A. To the best of our knowledge, we present the first results for executable semantic parsing of ATIS and Overnight in any language other than English. While prior multilingual work using $\lambda$-calculus logic is not comparable, we compare to similar results for English in Appendix B.

ATIS Table 2(a) represents the ideal case of human translation of the full dataset.
While this would be the least economical option, all models demonstrate performance above back-translation, with the best improvement of $+13.1\%$ and $+10.0\%$ for DE and ZH respectively. This suggests that an in-language parser is preferable to MT into English given available translations. Similar to Shaw et al. (2019) and Duong et al. (2017), we find that pre-trained BERT representations and a shared encoder are each beneficial augmentations, with the best system using both for ZH and DE. However, the latter augmentation appears less beneficial for ZH than DE, potentially owing to decreased lexical overlap between EN and ZH $(20.1\%)$ compared to EN and DE $(51.9\%)$. This could explain the decreased utility of the shared embedding space. The accuracy of our English model is $75.4\%$ (see Appendix B), incurring an upper-bound penalty of $-6.1\%$ for DE and $-6.5\%$ for ZH. Difficulty in parsing German, previously noted by Jie and Lu (2014), may be an artefact of comparatively complex morphology. We identified issues similar to Min et al. (2019) in parsing Chinese, namely word segmentation and dropped pronouns, which likely explain weaker parsing compared to English.

In contrast to back-translation, the SEQ2SEQ model without BERT in Table 2(b) improves upon the baseline by $+3.2\%$ for DE and $+1.3\%$ for ZH. The translation approach for German surpasses back-translation for all models, fulfilling the minimum requirement for a useful parser. However, for Chinese, the SEQ2SEQ approach requires further augmentation to perform above the $56.4\%$ baseline. For ATIS, the MT-Ensemble model, with a shared encoder and BERT-based inputs, yields the best accuracy. We find that the MT-Paraphrase model performs similarly both as a base model and with pre-trained inputs. As the MT-Ensemble model has $3\times$ the encoder parameters, it may be that additional data, $\mathcal{D}_{\mathrm{train}}^{\mathrm{EN}}$, improves each encoder sufficiently for the MT-Ensemble to improve over smaller models. Comparing to gold-standard human translations, we find small best-case penalties of $-1.0\%$ for DE and $-0.6\%$ for ZH using MT as training data. The model trained on MT achieves nearly the same generalization error as the model trained on the gold standard, and we consider the feasibility of our approach justified by this result.

DE (MT):

| Model | Ba. | Bl. | Ca. | Ho. | Pu. | Rec. | Res. | So. | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Back-translation to EN | 17.6 | 44.1 | 11.3 | 37.0 | 20.5 | 23.1 | 27.4 | 34.0 | 26.9 |
| +BERT-base | 59.1 | 51.6 | 28.6 | 38.6 | 29.8 | 37.0 | 32.2 | 60.0 | 42.1 |
| SEQ2SEQ | 76.5 | 47.4 | 70.8 | 51.3 | 67.1 | 70.4 | 62.3 | 73.1 | 64.9 |
| +BERT-(de/zh) | 74.2 | 56.6 | **80.4** | 60.8 | 65.8 | 73.6 | 70.8 | 79.2 | 70.2 |
| Shared Encoder | 72.9 | 58.6 | 75.0 | 60.8 | **76.4** | 73.1 | 63.6 | 75.9 | 69.5 |
| +BERT-(de/zh) | 80.8 | 60.4 | 78.6 | 61.4 | 71.4 | **78.2** | 66.9 | **79.8** | 72.2 |
| MT-Paraphrase | 79.5 | 53.4 | 73.8 | 58.7 | 69.6 | 73.1 | 66.9 | 72.4 | 68.4 |
| +BERT-ML | 82.4 | 55.4 | 73.8 | **67.2** | 69.6 | 75.9 | 79.2 | 76.7 | 72.5 |
| +Shared Encoder | **82.6** | 60.7 | 78.6 | 66.1 | 72.0 | 77.3 | 75.0 | 79.2 | 73.9 |
| MT-Ensemble | 72.1 | 55.8 | 74.1 | 54.4 | 67.9 | 70.2 | 64.9 | 68.6 | 66.0 |
| +BERT-ML | 81.0 | 57.3 | 73.9 | 62.2 | 68.3 | 74.2 | **81.1** | 77.6 | 72.0 |
| +Shared Encoder | 81.1 | **66.7** | 77.9 | 65.9 | 74.4 | 73.1 | 80.4 | 77.5 | **74.6** |

ZH (MT):

| Model | Ba. | Bl. | Ca. | Ho. | Pu. | Rec. | Res. | So. | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Back-translation to EN | 18.2 | 33.6 | 7.7 | 30.2 | 24.2 | 26.9 | 22.3 | 29.4 | 24.1 |
| +BERT-base | 47.1 | 33.6 | 33.9 | 34.4 | 33.5 | 36.6 | 27.4 | 52.9 | 37.4 |
| SEQ2SEQ | 78.5 | 51.6 | 55.4 | 64.0 | 62.7 | 69.0 | 66.6 | 73.1 | 65.1 |
| +BERT-(de/zh) | **84.7** | 48.6 | 64.9 | 73.0 | 68.9 | 68.5 | 70.5 | **78.3** | 69.7 |
| Shared Encoder | 78.0 | 46.1 | 61.3 | 67.7 | 65.2 | 70.4 | 63.6 | 76.5 | 66.1 |
| +BERT-(de/zh) | 81.1 | 51.4 | 66.7 | 71.4 | 65.2 | 67.6 | **74.7** | 77.5 | 69.4 |
| MT-Paraphrase | 76.0 | 48.6 | 59.5 | 66.7 | **69.6** | 63.9 | 66.9 | 76.5 | 65.9 |
| +BERT-ML | 82.4 | 50.4 | 63.7 | 74.6 | 67.7 | 69.9 | 70.5 | 77.4 | 69.6 |
| +Shared Encoder | 81.3 | 50.9 | **69.6** | **75.7** | 65.8 | 72.2 | 69.0 | 77.9 | 70.3 |
| MT-Ensemble | 71.1 | 45.8 | 58.3 | 62.2 | 61.5 | 62.0 | 61.1 | 71.4 | 61.7 |
| +BERT-ML | 83.6 | 50.2 | 64.3 | 72.1 | 62.1 | 67.1 | 71.4 | 78.0 | 68.6 |
| +Shared Encoder | 84.1 | **52.9** | 69.0 | 74.1 | 65.4 | **73.6** | 71.1 | **78.3** | **71.1** |

Table 3: Test set denotation accuracy for Overnight in German (DE) and Chinese (ZH) from training on machine translated (MT) data. Results are shown for individual domains and an eight-domain average (best results in bold). Domains are Basketball, Blocks, Calendar, Housing, Publications, Recipes, Restaurants and Social Network.

Overnight We now extend our experiments to the multi-domain Overnight dataset (Table 3), wherein we have only utterances from native speakers for evaluation. Whereas back-translation was competitive for ATIS, here we find a significant collapse in accuracy for this baseline. This is largely due to translation errors stemming from ambiguity and idiomatic phrasing in each locale, leading to unnatural English phrasing and dropped details in each query. Whereas Artetxe et al. (2020) found back-translation to be competitive across 15 languages for NLI, this is not the case for semantic parsing, where factual consistency and fluency in parsed utterances must be maintained.
The SEQ2SEQ model with BERT outperforms the baseline by a considerable $+28.1\%$ for DE and $+32.3\%$ for ZH, further supporting the notion that an in-language parser is a more suitable strategy for the task. Our reference English parser attains an average $79.8\%$ accuracy, incurring a penalty from crosslingual transfer of $-14.9\%$ for DE and $-14.7\%$ for ZH with the SEQ2SEQ model. Similar to ATIS, we find MT-Ensemble to be the most performant system, improving over the baseline by $+32.5\%$ and $+33.7\%$ for DE and ZH respectively. The best model minimises the crosslingual penalty to $-5.2\%$ for DE and $-8.7\%$ for ZH. Across both datasets, we find that single augmentations broadly yield marginal gains and that combining approaches maximizes accuracy.

Challenges in Crosslingual Parsing We find several systematic errors across our results. Firstly, there are orthographic inconsistencies between translations that incur sub-optimal learned embeddings. For example, "5" can be expressed as "五" or "five". This issue also arises for Chinese measure words, which are often mistranslated by MT. Multilingual BERT inputs appear to mostly mitigate this error, likely owing to pre-trained representations for each fragmented token.

Secondly, we find that multilingual training mitigated entity translation errors, e.g. resolving translations of "the Cavs" or "coach", which are ambiguous terms for "Cleveland Cavaliers" and "Economy Class". We find that pairing the training logical form with the source English utterance allows a system to better disambiguate and correctly translate rare entities from DE/ZH. This disparity arises during inference because human translators are more likely to preserve named entities, but this is often missed by MT with insufficient context.

Finally, paraphrasing techniques benefit parsing expressions in DE/ZH equivalent to peculiar, or KB-specific, English phrases. For example, the Restaurants domain heavily discusses "dollar-sign" ratings for price and "star sign" ratings for quality. There is high variation in how native speakers translate such phrases and, subsequently, the linguistic diversity provided through paraphrasing benefits parsing of these widely variable utterances.

![](images/1fac7715b2c0dcce3ebb9d702e414014218e321482de69c3ce5eb120e3d6ce5a.jpg)
Figure 3: Denotation Accuracy against number of training examples in (a) German and (b) Chinese. Augmenting the training data with English, $EN \cup L$, uses all 4,473 English training utterances (y axis shared between figures). Each point averages results on three random splits of the dataset.

Partial Translation Our earlier experiments explored the utility of MT for training data, which assumes the availability of adequate MT. To examine the converse case, without adequate MT, we report performance with partial human translation in Figure 3. Parsing accuracy on ATIS broadly increases with additional training examples for both languages, with accuracy converging to the best-case performance outlined in Table 2(a). When translating $50\%$ of the dataset, the SEQ2SEQ model performs $-10.9\%$ for DE and $-13.1\%$ for ZH below the ideal case. However, by using both the shared encoder augmentation and multilingual BERT $(EN \cup L + BERT_{ML})$, this penalty is minimized to $-1.5\%$ and $-0.7\%$ for DE and ZH, respectively. While this is below the best system using MT in Table 2(b), it underlines the potential of crosslingual parsing without MT as future work.

# 6 Conclusions

We presented an investigation into bootstrapping a crosslingual semantic parser for Chinese and German using only public resources. Our contributions include a Transformer with attention ensembling and new versions of ATIS and Overnight in Chinese and German.
Our experimental results showed that a) multiple MT systems can be queried to generate paraphrases and combining these with pre-trained representations and joint training with English data can yield competitive parsing accuracy; b) multiple encoders trained with shuffled inputs can outperform a single encoder; c) back-translation can underperform by losing required details in an utterance; and finally d) partial translation can yield accuracies $< 2\%$ below complete translation using only $50\%$ of training data. Our results from paraphrasing and partial translation suggest that exploring semi-supervised and zero-shot parsing techniques is an interesting avenue for future work. + +Acknowledgements The authors gratefully acknowledge the support of the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1; Sherborne) and the European Research Council (award number 681760; Lapata). + +# References + +Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 47-52, Sofia, Bulgaria. + +Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2020. Translation artifacts in cross-lingual transfer learning. arXiv preprint arXiv:2004.04721. +Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. +Maximiliana Behnke, Antonio Valerio Miceli Barone, Rico Sennrich, Vilelmini Sosoni, Thanasis Naskos, Eirini Takoulidou, Maria Stasimioti, Menno Van Za'anen, Sheila Castilho, Federico Gaspari, Panayota Georgakopoulou, Valia Kordoni, Markus Egg, and Katia Lida Kermanidis. 2018. Improving Machine Translation of Educational Content via Crowdsourcing. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, pages 3343-3347, Miyazaki, Japan. +Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. 
Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533-1544, Seattle, Washington, USA.
Jonathan Berant and Percy Liang. 2014. Semantic Parsing via Paraphrasing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1415-1425, Stroudsburg, PA, USA.
Steven Bird and Edward Loper. 2004. NLTK: The natural language toolkit. In Proceedings of the ACL Interactive Poster and Demonstration Sessions, pages 214-217, Barcelona, Spain.
Ben Bogin, Matt Gardner, and Jonathan Berant. 2019. Representing Schema Structure with Graph Neural Networks for Text-to-SQL parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 4560-4565, Florence, Italy. Association for Computational Linguistics.
Chris Callison-Burch. 2009. Fast, cheap, and creative: evaluating translation quality using Amazon's Mechanical Turk. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 286-295, Singapore.
Ruisheng Cao, Su Zhu, Chen Liu, Jieyu Li, and Kai Yu. 2019. Semantic parsing with dual learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 51-64, Florence, Italy. Association for Computational Linguistics.
Ruisheng Cao, Su Zhu, Chenyu Yang, Chen Liu, Rao Ma, Yanbin Zhao, Lu Chen, and Kai Yu. 2020. Unsupervised dual paraphrasing for two-stage semantic parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.

Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation.
In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada. +Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087. +Alexis Conneau, Rudy Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium. Association for Computational Linguistics. +Deborah A. Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, David Pallett, Christine Pao, Alexander Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the ATIS task: The ATIS-3 corpus. In Proceedings of the Workshop on Human Language Technology, HLT '94, pages 43-48, Stroudsburg, PA, USA. +Marco Damonte and Shay B. Cohen. 2018. Cross-Lingual Abstract Meaning Representation Parsing. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1146-1155, Stroudsburg, PA, USA. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. +Li Dong and Mirella Lapata. 2016. Language to Logical Form with Neural Attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 33-43, Stroudsburg, PA, USA. +Li Dong and Mirella Lapata. 2018. Coarse-to-Fine Decoding for Neural Semantic Parsing. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 731-742, Melbourne, Australia.
Li Dong, Jonathan Mallinson, Siva Reddy, and Mirella Lapata. 2017. Learning to paraphrase for question answering. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 875-886, Copenhagen, Denmark.

Matthew S. Dryer and Martin Haspelmath, editors. 2013. WALS Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.
Long Duong, Hadi Afshar, Dominique Estival, Glen Pink, Philip Cohen, and Mark Johnson. 2017. Multilingual Semantic Parsing And Code-Switching. In Proceedings of the 21st Conference on Computational Natural Language Learning, pages 379-389, Vancouver, Canada.
David M. Eberhard, Gary F. Simons, and Charles D. Fennig, editors. 2019. Ethnologue: Languages of the World, twenty-second edition. SIL International, Dallas, Texas.
Catherine Finegan-Dollak, Jonathan K. Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir Radev. 2018. Improving Text-to-SQL Evaluation Methodology. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 351-360, Melbourne, Australia.
Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T. Yarman Vural, and Kyunghyun Cho. 2016. Zero-Resource Translation with Multi-Lingual Neural Machine Translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 268-277, Stroudsburg, PA, USA.
Juri Ganitkevitch and Chris Callison-Burch. 2014. The multilingual paraphrase database. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 4276-4283, Reykjavik, Iceland.
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew E. Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2018.
AllenNLP: A deep semantic natural language processing platform. ArXiv, abs/1803.07640.

Ekaterina Garmash and Christof Monz. 2016. Ensemble learning for multi-source neural machine translation. In Proceedings of the 26th International Conference on Computational Linguistics: Technical Papers, pages 1409-1418, Osaka, Japan.

Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS'10). Society for Artificial Intelligence and Statistics.

Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, Jian-Guang Lou, Ting Liu, and Dongmei Zhang. 2019. Towards complex text-to-SQL in cross-domain database with intermediate representation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4524-4535, Florence, Italy. Association for Computational Linguistics.

Carolin Haas and Stefan Riezler. 2016. A corpus and semantic parser for multilingual natural language querying of OpenStreetMap. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 740-750, Stroudsburg, PA, USA.

Kotaro Hara, Abigail Adams, Kristy Milland, Saiph Savage, Benjamin V. Hanrahan, Jeffrey P. Bigham, and Chris Callison-Burch. 2019. Worker demographics and earnings on Amazon Mechanical Turk: An exploratory analysis. CHI'19 Late Breaking Work.

Jonathan Herzig and Jonathan Berant. 2017. Neural semantic parsing over multiple knowledge-bases. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 623-628, Stroudsburg, PA, USA.

Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization.

Huseyin A.
Inan, Gaurav Singh Tomar, and Huapu Pan. 2019. Improving semantic parsing with neural generator-reranker architecture. ArXiv, abs/1909.12764.

Srinivasan Iyer, Alvin Cheung, and Luke Zettlemoyer. 2019. Learning programmatic idioms for scalable semantic parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 5425-5434, Hong Kong, China.

Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017. Learning a neural semantic parser from user feedback. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 963-973, Vancouver, Canada.

Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12-22, Stroudsburg, PA, USA.

Zhanming Jie and Wei Lu. 2014. Multilingual semantic parsing: Parsing multiple languages into semantic representations. In Proceedings of the 25th International Conference on Computational Linguistics: Technical Papers, pages 1291-1301, Dublin, Ireland.

Zhanming Jie and Wei Lu. 2018. Dependency-based hybrid trees for semantic parsing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2431-2441, Brussels, Belgium.

Bevan Keeley Jones, Mark Johnson, and Sharon Goldwater. 2012. Semantic parsing with Bayesian tree transducers. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, pages 488-496, Stroudsburg, PA, USA.

Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.

Thomas Kollar, Danielle Berry, Lauren Stuart, Karolina Owczarzak, Tagyoung Chung, Lambert Mathias, Michael Kayser, Bradford Snow, and Spyros Matsoukas. 2018.
The Alexa meaning representation language. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers), pages 177-184, New Orleans, Louisiana.

Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. 2017. Neural semantic parsing with type constraints for semi-structured tables. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1516-1526, Copenhagen, Denmark.

Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1545-1556, Seattle, Washington, USA.

Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1223-1233, Stroudsburg, PA, USA.

Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2011. Lexical generalization in CCG grammar induction for semantic parsing. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1512-1523, Edinburgh, Scotland, UK.

Percy Liang. 2016. Learning executable semantic parsers for natural language understanding. Commun. ACM, 59(9):68-76.

Percy Liang, Michael I. Jordan, and Dan Klein. 2013. Learning dependency-based compositional semantics. Comput. Linguist., 39(2):389-446.

Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Bruce Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, and Ming Zhou. 2020. XGLUE: A new benchmark dataset for cross-lingual pre-training, understanding and generation.
Kevin Lin, Ben Bogin, Mark Neumann, Jonathan Berant, and Matt Gardner. 2019. Grammar-based neural text-to-SQL generation. CoRR, abs/1905.13326.

Wei Lu. 2014. Semantic parsing with relaxed hybrid trees. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1308-1318, Doha, Qatar.

Wei Lu and Hwee Tou Ng. 2011. A probabilistic forest-to-string model for language generation from typed lambda calculus expressions. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1611-1622, Edinburgh, Scotland, UK.

Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2017. Paraphrasing revisited with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 881-893, Valencia, Spain.

Qingkai Min, Yuefeng Shi, and Yue Zhang. 2019. A pilot study for Chinese SQL semantic parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 3643-3649, Stroudsburg, PA, USA.

Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In NIPS Autodiff Workshop.

Ellie Pavlick, Matt Post, Ann Irvine, Dmitry Kachaev, and Chris Callison-Burch. 2014. The language demographics of Amazon Mechanical Turk. Transactions of the Association for Computational Linguistics, 2.

Matt Post, Chris Callison-Burch, and Miles Osborne. 2012. Constructing parallel corpora for six Indian languages via crowdsourcing. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 401-409.

Siva Reddy, Oscar Täckström, Slav Petrov, Mark Steedman, and Mirella Lapata. 2017. Universal semantic parsing.
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 89-101, Copenhagen, Denmark.

Kyle Richardson, Jonathan Berant, and Jonas Kuhn. 2018. Polyglot semantic parsing in APIs. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), New Orleans, Louisiana.

Peter Shaw, Philip Massey, Angelica Chen, Francesco Piccinno, and Yasemin Altun. 2019. Generating logical forms from graph representations of text and entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 95-106, Florence, Italy. Association for Computational Linguistics.

Yan Song, Shuming Shi, Jing Li, and Haisong Zhang. 2018. Directional skip-gram: Explicitly distinguishing left and right context for word embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 175-180, New Orleans, Louisiana.

Vilelmini Sosoni, Katia Lida Kermanidis, Maria Stasimioti, Thanasis Naskos, Eirini Takoulidou, Menno Van Zaanen, Sheila Castilho, Panayota Georgakopoulou, Valia Kordoni, and Markus Egg. 2018. Translation crowdsourcing: Creating a multilingual corpus of online educational content. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, Miyazaki, Japan.

Yu Su and Xifeng Yan. 2017. Cross-domain semantic parsing via paraphrasing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1235-1246, Copenhagen, Denmark.

Raymond Hendy Susanto and Wei Lu. 2017a. Neural architectures for multilingual semantic parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 38-44, Stroudsburg, PA, USA.

Raymond Hendy Susanto and Wei Lu. 2017b.
Semantic parsing with neural hybrid trees. In AAAI Conference on Artificial Intelligence, San Francisco, California, USA.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. CoRR, abs/1409.3215.

S. Upadhyay, M. Faruqui, G. Tür, D. Hakkani-Tür, and L. Heck. 2018. (Almost) zero-shot cross-lingual spoken language understanding. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6034-6038.

P. Utama, N. Weir, F. Basik, C. Binnig, U. Cetintemel, B. Hättasch, A. Ilkhechi, S. Ramaswamy, and A. Usta. 2018. An end-to-end neural natural language interface for databases. ArXiv e-prints.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.

Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2019. RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers.

Chenglong Wang, Po-Sen Huang, Alex Polozov, Marc Brockschmidt, and Rishabh Singh. 2018. Execution-guided neural program decoding. CoRR, abs/1807.03100.

Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a semantic parser overnight. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1332-1342, Stroudsburg, PA, USA.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V.
Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.

Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 440-450, Vancouver, Canada.

Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium.

Tao Yu, Rui Zhang, Michihiro Yasunaga, Yi Chern Tan, Xi Victoria Lin, Suyi Li, Heyang Er, Irene Li, Bo Pang, Tao Chen, Emily Ji, Shreya Dixit, David Proctor, Sungrok Shim, Jonathan Kraft, Vincent Zhang, Caiming Xiong, Richard Socher, and Dragomir Radev. 2019. SParC: Cross-domain semantic parsing in context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4511-4523, Florence, Italy. Association for Computational Linguistics.

Omar F. Zaidan and Chris Callison-Burch. 2011. Crowdsourcing translation: Professional quality from non-professionals. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1220-1229.

John M. Zelle and Raymond J. Mooney. 1996.
Learning to parse database queries using inductive logic programming. In Proceedings of the Thirteenth National Conference on Artificial Intelligence - Volume 2, AAAI'96, pages 1050-1055.

Sheng Zhang, Xutai Ma, Rachel Rudinger, Kevin Duh, and Benjamin Van Durme. 2018. Cross-lingual decompositional semantic parsing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1664-1675, Brussels, Belgium.

Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2SQL: Generating structured queries from natural language using reinforcement learning. CoRR, abs/1709.00103.

Yanyan Zou and Wei Lu. 2018. Learning cross-lingual distributed logical representations for semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 673-679, Melbourne, Australia.

# 7 Appendices

# A Experimental Setup

For ATIS, we implement models trained on both real and machine-translated utterances in German and Chinese. The former is our upper bound, representing the ideal case, and the latter represents the minimal scenario available to a developer. Comparing these cases demonstrates both the capability of a system in the new locale and the adequacy of MT for the task. Following this, we explore the multi-domain case of the Overnight dataset, for which there is no gold-standard training data in either language.

**Preprocessing** Data are pre-processed by removing punctuation and lowercasing with NLTK (Bird and Loper, 2004), except for cased pre-trained vocabularies and Chinese. Logical forms are split on whitespace, and natural language is tokenized using the SentencePiece tokenizer to model language-agnostic subwords. We found this critical for Chinese, which lacks whitespace delimitation in sentences, and for German, to model word compounding. For ATIS, we experimented with the entity anonymization scheme from Iyer et al.
(2017); however, this was found to be detrimental when combined with pre-trained input representations and was subsequently not used.

**Evaluation and Model Selection** Neural models are optimized through a grid search over embedding/hidden layer sizes of $2^{\{7,\dots,10\}}$, numbers of layers in $\{2,\dots,8\}$, numbers of attention heads in $\{4,\dots,8\}$, and, for the MT-Ensemble model, shuffling probabilities $p_{\mathrm{shuffle}} \in \{0.1,\dots,0.5\}$. The best configuration uses 6 layers each for encoder and decoder, an embedding/hidden layer size of 128, 8 attention heads per layer, and a dropout rate of 0.1; for MT-Ensemble models, we report results for the gated combination approach, which was superior in all cases, with an optimal shuffling probability of 0.4. Models range in size from 4.2-5.7 million parameters. All weights are initialized with Xavier initialization (Glorot and Bengio, 2010), except pre-trained representations, which remain frozen. Model weights, $\theta$, are optimized using a sequence cross-entropy loss against gold-standard logical forms as supervision.

Each experiment trains a network for 200 epochs using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001. We follow the Noam learning rate scheduling approach with a warmup of 10 epochs. Minimum validation loss is used as an early-stopping criterion for model selection, with a patience of 30 epochs. We use teacher forcing during training and beam search, with a beam size of 5, during inference.

Predicted logical forms are executed against the knowledge base (an SQL database for ATIS; SEMPRE (Berant et al., 2013) for Overnight) to retrieve denotations. All results are reported as exact-match (hard) denotation accuracy: the proportion of predicted logical forms which execute to retrieve the same denotation as the reference query.
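The Noam schedule referenced above increases the learning rate linearly during warmup and decays it with the inverse square root of the step afterwards. A minimal sketch, assuming the standard Noam formula with the `d_model=128` hyperparameter reported here and steps counted per epoch to match the 10-epoch warmup (both assumptions, not quoted from the paper):

```python
def noam_lr(step, d_model=128, warmup=10, base_lr=1.0):
    """Noam learning-rate schedule: linear warmup, then inverse-sqrt decay.

    Returns the multiplicative factor for the base learning rate at `step`.
    """
    step = max(step, 1)  # guard against step 0
    return base_lr * d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)
```

In a PyTorch training loop this factor is typically applied through a scheduler such as `torch.optim.lr_scheduler.LambdaLR`, wrapping the Adam optimizer described above.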
Models are built using PyTorch (Paszke et al., 2017), AllenNLP (Gardner et al., 2018) and HuggingFace BERT models (Wolf et al., 2019). Each parser is trained on a cluster of 16 NVIDIA P100 GPUs with 16GB memory, with each model requiring 6-16 hours to train on a single GPU.

# B English Results

We compare our reference model for English to prior work in Table 5. Our best system for this language uses the SEQ2SEQ model outlined in Section 3.1 with input features from the pre-trained BERT-base model. We acknowledge that our system performs below the state of the art by 7.8% on ATIS and 3.9% on Overnight, but this is most likely because we omit any English-specific feature augmentation other than BERT. In comparison to prior work, we do not use entity anonymization, paraphrasing, execution-guided decoding, or a mechanism to incorporate feedback on incorrect predictions from humans or neural critics. The closest comparable model to ours is reported by Wang et al. (2018), who implement a similar SEQ2SEQ model demonstrating 77.0% test set accuracy. However, this result uses entity anonymization for ATIS to replace each entity with a generic label for the respective entity type. While prior studies broadly found this technique to yield improved parsing accuracy (Iyer et al., 2017; Dong and Lapata, 2016; Finegan-Dollak et al., 2018), a crosslingual implementation requires crafting multiple language-specific translation tables for entity recognition. We attempted to implement such an approach but found it to be unreliable and largely incompatible with the vocabularies of pre-trained models.

| DE  | MT1   | MT2   | MT3   | ZH  | MT1   | MT2   | MT3   |
|-----|-------|-------|-------|-----|-------|-------|-------|
| G   | 0.732 | 0.576 | 0.611 | G   | 0.517 | 0.538 | 0.525 |
| MT1 |       | 0.650 | 0.667 | MT1 |       | 0.660 | 0.645 |
| MT2 |       |       | 0.677 | MT2 |       |       | 0.738 |

(a) ATIS

| DE  | MT1 | MT2   | MT3   | ZH  | MT1 | MT2   | MT3   |
|-----|-----|-------|-------|-----|-----|-------|-------|
| MT1 |     | 0.570 | 0.513 | MT1 |     | 0.614 | 0.604 |
| MT2 |     |       | 0.585 | MT2 |     |       | 0.653 |

(b) Overnight

Table 4: Corpus BLEU between gold-standard translations (G) and machine translations from sources 1-3 for (a) ATIS and (b) Overnight. For German (DE): MT1 is Google Translate, MT2 is Microsoft Translator Text and MT3 is Yandex. For Chinese (ZH): MT1 is Google Translate, MT2 is Baidu Translate and MT3 is Youdao Translate.

| Model | ATIS | Ba. | Bl. | Ca. | Ho. | Pu. | Rec. | Res. | So. | Avg |
|---|---|---|---|---|---|---|---|---|---|---|
| Wang et al. (2015) | | 46.3 | 41.9 | 74.4 | 54.5 | 59.0 | 70.8 | 75.9 | 48.2 | 58.8 |
| Su and Yan (2017) | | 88.2 | 62.7 | 82.7 | 78.8 | 80.7 | 86.1 | 83.7 | 83.1 | 80.8 |
| Herzig and Berant (2017) | | 86.2 | 62.7 | 82.1 | 78.3 | 80.7 | 82.9 | 82.2 | 81.7 | 79.6 |
| Iyer et al. (2017) | 82.5 | | | | | | | | | |
| Wang et al. (2018) | 77.9 | | | | | | | | | |
| Iyer et al. (2019) | **83.2** | | | | | | | | | |
| Cao et al. (2019) | | 87.5 | 63.7 | 79.8 | 73.0 | 81.4 | 81.5 | 81.6 | 83.0 | 78.9 |
| Inan et al. (2019) | | 89.0 | 65.7 | 85.1 | 83.6 | 81.4 | 88.0 | 91.0 | 86.0 | **83.7** |
| Cao et al. (2020) | | 87.2 | 65.7 | 80.4 | 75.7 | 80.1 | 86.1 | 82.8 | 82.7 | 80.1 |
| SEQ2SEQ | 74.9 | 85.2 | 64.9 | 77.4 | 77.2 | 78.9 | 84.3 | 85.5 | 81.2 | 79.3 |
| +BERT-base | 75.4 | 87.7 | 65.4 | 81.0 | 79.4 | 71.4 | 85.6 | 85.8 | 82.0 | 79.8 |

Table 5: Test denotation accuracy on ATIS and Overnight for our reference model for English. Best accuracy is bolded. Note that Inan et al. (2019) evaluate on ATIS but use the non-executable $\lambda$-calculus logical form and are therefore not comparable to our results. Domains are Basketball, Blocks, Calendar, Housing, Publications, Recipes, Restaurants, and Social Network.

# C Data Collection

**Translation through Crowdsourcing** For the task of crosslingual semantic parsing, we consider the ATIS dataset (Dahl et al., 1994) and the Overnight dataset (Wang et al., 2015). The former is a single-domain dataset of utterances paired with SQL queries pertaining to a database of travel information in the USA. Overnight covers eight domains using logical forms in the $\lambda$-DCS formalism (Liang et al., 2013) which can be executed in the SEMPRE framework (Berant et al., 2013).
ATIS has been previously translated into Chinese and Indonesian for the study of semantic parsing into $\lambda$-calculus logical forms (Susanto and Lu, 2017a); however, Overnight exists only in English. To the best of our knowledge, there is presently no multi-domain dataset for executable semantic parsing in more than two languages. As previously mentioned in Section 4, we consider Chinese and German in this paper to contrast a language similar to English with a dissimilar one, and also due to the reported availability of crowdsourced workers for translation (Pavlick et al., 2014) and bilingual native speakers for verification.

To facilitate task evaluation in all languages of interest, we require a full parallel translation of ATIS in German, complementing the existing Chinese translation from Susanto and Lu (2017a), and a partial translation of Overnight in both German and Chinese. As previously discussed, we translate only the development and test sets of Overnight (Wang et al., 2015) into Chinese and German for the assessment of crosslingual semantic parsing in a multi-domain setting. In total, we translate all 5,473 utterances in ATIS and 4,311 utterances in Overnight. The original Overnight dataset did not correct spelling errors in the collected English paraphrases; however, we consider it unreasonable to ask participants in our task to translate misspelled words, as ambiguity in correction could lead to inaccurate translations. We therefore identified and corrected spelling errors using word processing software.

We use Amazon Mechanical Turk (MTurk) to solicit three translations per English source sentence from crowdsourced workers (Turkers), under the assumption that this will collect at least one adequate translation (Callison-Burch, 2009).
Our task design largely followed practices for translation without expert labels on MTurk (Zaidan and Callison-Burch, 2011; Post et al., 2012; Behnke et al., 2018; Sosoni et al., 2018). The task solicits translations by asking a Turker to translate 10 sentences and answer demographic questions concerning country of origin and native language. Submissions were restricted to Turkers from Germany, Austria and Switzerland for German, and from China, Singapore, and the USA for Chinese. We built an MTurk interface with quality controls which restricted Turkers from inputting whitespace and disabled copy/paste anywhere within the webpage. Attempting to copy or paste in the submission window triggered a warning that using online translation tools would result in rejection. Inauthentic translations were rejected if they showed a >80% average BLEU against reference translations from Google Translate (Wu et al., 2016), as were nonsensical or irrelevant submissions. For the Chinese data collection, we also rejected submissions using Traditional Chinese characters or Pinyin romanization. Instructions for the initial candidate collection task are given in Figure 4 and for the ranking task in Figure 5. We found that 94% of workers completed the optional demographic survey and that all workers reported Chinese or German as their first language, as required. For Chinese, 94% of workers came from the USA and reported having spoken Chinese for >20 years, while the remaining workers resided in China. For German, all workers came from Germany and had spoken German for >25 years.

Turkers were paid $0.70 to submit 10 translations per task and $0.25 to rank 10 candidate translations, at an average rate equivalent to a full-time wage of $8.23/hour. This is markedly above the average wage of $3.01/hour for US workers reported by Hara et al. (2019).
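As a back-of-the-envelope check on the payment figures above (the per-task timing below is inferred from the reported rates, not measured in the study):

```python
# Implied working speed from the reported MTurk payment rates (illustrative).
pay_per_translation_task = 0.70   # USD paid for translating 10 sentences
equivalent_hourly_wage = 8.23     # reported average full-time equivalent wage

tasks_per_hour = equivalent_hourly_wage / pay_per_translation_task
minutes_per_task = 60 / tasks_per_hour
seconds_per_sentence = minutes_per_task * 60 / 10

print(f"{tasks_per_hour:.1f} tasks/hour, "
      f"{minutes_per_task:.1f} min per 10-sentence task, "
      f"{seconds_per_sentence:.0f} s per sentence")
```

Roughly half a minute per sentence, which is plausible for the short utterances in both datasets.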
To ensure data quality and filter out disfluencies or personal biases from Turkers, we then recruited bilingual postgraduate students, native speakers of the task language, to judge whether the best-ranked translation from MTurk was satisfactory or required rewriting. If an annotator was dissatisfied with the best-ranked translation, they provided their own, which occurred for only 3.2% of all translations. Verifiers preferred the MT candidate over the Turker submissions for 29.5% of German rankings and 22.6% of Chinese rankings; however, this preference arose only for short sentences (five or fewer words), where MT and the Turker translation were practically identical. We paid $12 an hour for this verification but, to minimize cost, did not collect multiple judgments per translation. Verification was completed at a rate of 60 judgments per hour, leading to an approximate cost of $2200 per language for Overnight and $2500 for ATIS into German. While this may be considered expensive, it is the minimum cost to permit comparable evaluation in every language. Sample translations for ATIS into German are given in Table 6, and sample translations for Overnight into German and Chinese are given in Table 7.

**Machine Translation** In this work, we evaluate the feasibility of using machine translation (MT) as a proxy to generate in-language training data for semantic parsing in two languages. All MT systems are treated as black-box models, without inspection of the underlying translation mechanics or recourse for correction. For most experiments in this work, we translate from English to the target language using Google Translate (Wu et al., 2016). We use this system owing to its purported translation quality (Duong et al., 2017) and because the API is publicly available, in contrast to the closed MT used by Conneau et al. (2018).
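BLEU appears twice in this appendix: inauthentic Turker submissions were rejected above an 80% average BLEU against MT references, and Table 4 reports corpus BLEU between MT sources. A minimal pure-Python sketch of the metric (standard BLEU with uniform 4-gram weights, no smoothing; the function names and the batch-averaging rejection rule in `looks_machine_translated` are our own illustrative assumptions, not the paper's implementation):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(hypothesis, reference, max_n=4):
    """BLEU with uniform n-gram weights and brevity penalty (no smoothing)."""
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped counts
        if overlap == 0:
            return 0.0
        log_precisions.append(math.log(overlap / sum(hyp_ngrams.values())))
    # Brevity penalty for hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

def looks_machine_translated(submissions, mt_references, threshold=0.8):
    """Flag a batch whose average BLEU against MT references exceeds the threshold."""
    scores = [sentence_bleu(s, r) for s, r in zip(submissions, mt_references)]
    return sum(scores) / len(scores) > threshold
```

A production setup would more likely rely on an established implementation such as sacreBLEU, which also handles tokenization and smoothing consistently.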
Additionally, we explore two approaches to modeling an ensemble of translations from multiple MT sources. We expect, but cannot guarantee, that each MT system will translate each utterance differently, yielding greater diversity in the training corpus overall. For this approach, we consider two additional MT systems each for Chinese and German. For Mandarin, we use Baidu Translate and Youdao Translate. For German, we use Microsoft Translator Text and Yandex Translate. To verify that the ensemble of multiple MT systems provides some additional diversity, we measure the corpus-level BLEU between training utterances from each source. These scores for ATIS, with comparison to human translation, and for Overnight are detailed in Table 4.

Overall, we find that each MT system provides a different set of translations, with no two translation sets more similar than any other. We also find that for ATIS in German, Wu et al. (2016) provides the training dataset most similar to the gold training data. However, Baidu Translate appears to narrowly improve translation into Chinese by +0.021 BLEU. This arises from a systematic preference for a polite form of Chinese question, beginning with "请" [please], which is also preferred by the professional translator. Overall, we collected all training data using MT for less than $50 across both datasets and languages.

# Translate all 10 sentences into Simplified Chinese

In this task, we ask you to provide a translation into Simplified Chinese of an English question.

You must be a native speaker of Chinese (Mandarin) and proficient in English to complete this HIT.

We ask you to use only Simplified Chinese characters (简体汉字) and not to use Pinyin (汉语拼音).

Attempt to translate every word into Chinese. If this is difficult for rare words you do not understand, such as a person's name or place names, then please copy the English word into the translation.
You can assume all currency amounts are US Dollars and all measurements are in feet and inches.

In order to receive payment, you must complete all translations without using online translation services.

The use of online translation websites or software will be considered cheating.

Identified cheating will result in withheld payment and a ban on completing further HITs.

The demographic questionnaire is optional and you are welcome to complete as many HITs as you like.

Figure 4: Instructions provided to Turkers for the English to Chinese translation task of Overnight (Wang et al., 2015). We specify the requirement to answer in Simplified Chinese characters and the basis for rejection of submitted work. Instructions are condensed for brevity.

# Select the best German translation for 10 English sentences

In this HIT, you will be presented with an English question and three candidate translations of this English sentence in German. We ask you to use your judgment as a native speaker of German to select the best German translation from the three candidates.

If you consider all candidate translations to be inadequate, then provide your own translation. You must be a native speaker of German and proficient in English to complete this HIT.

We consider the best translation to be one which asks the same question in the style of a native speaker of German, rather than the best direct translation of the English. Occasionally, multiple candidates will be very similar or identical; in this case, select the first identical candidate.

You must complete all 10 to submit the HIT and receive payment.

You are welcome to submit as many HITs as you like.

Figure 5: Instructions provided to Turkers for the English to German translation ranking for both ATIS (Dahl et al., 1994) and Overnight (Wang et al., 2015). Instructions are condensed for brevity.
| English | Translation into German |
|---|---|
| What ground transportation is available from the Pittsburgh airport to the town? | Welche Verkehrsanbindung gibt es vom Pittsburgh Flughafen in die Stadt? |
| Could you please find me a nonstop flight from Atlanta to Baltimore on a Boeing 757 arriving at 7pm? | Könntest du für mich bitte einen Direktflug von Atlanta nach Baltimore auf einer Boeing 757 um 19 Uhr ankommend finden? |
| What is fare code QO mean? | Was bedeutet der Ticketpreiscode QO? |
| Show me the cities served by Canadian Airlines International. | Zeige mir die Städte, die von den Canadian Airlines International angeflogen werden. |
| Is there a flight tomorrow morning from Columbus to Nashville? | Gibt es einen Flug morgen früh von Columbus nach Nashville? |
| Is there a Continental flight leaving from Las Vegas to New York nonstop? | Gibt es einen Continental-Flug ohne Zwischenstopps, der von Las Vegas nach New York fliegt? |
| I would like flight information from Phoenix to Denver. | Ich hätte gerne Informationen zu Flügen von Phoenix nach Denver. |
| List flights from Indianapolis to Memphis with fares on Monday. | Liste Flüge von Indianapolis nach Memphis am Montag inklusive Ticketpreisen auf. |
| How about a flight from Milwaukee to St. Louis that leaves Monday night? | Wie wäre es mit einem Flug von Milwaukee nach St. Louis, der Montag Nacht abfliegt? |
| A flight from St. Louis to Burbank that leaves Tuesday afternoon. | Einen Flug von St. Louis nach Burbank, der Dienstag Nachmittag abfliegt. |
Table 6: Sample translations from English to German for the ATIS dataset (Dahl et al., 1994).
| English | Translation into German |
| --- | --- |
| What kind of cuisine is Thai Cafe? | Welche Art von Küche bietet das Thai Café? |
| What neighborhood has the largest number of restaurants? | Welche Wohngegend hat die meisten Restaurants? |
| Which recipe requires the longest cooking time? | Welches Rezept benötigt die längste Kochzeit? |
| Which player had a higher number of assists in a season than Kobe Bryant? | Welcher Spieler hatte eine höhere Anzahl an Vorlagen in einer Saison als Kobe Bryant? |
| Housing with monthly rent of 1500 dollars that was posted on January 2? | Welche Wohnung hat eine monatliche Miete von 1500 Dollar und wurde am 2. Januar veröffentlicht? |
| What article is cited at least twice? | Welcher Artikel wurde mindestens zweimal zitiert? |
| What block is to the right of the pyramid shaped block? | Welcher Block befindet sich rechts von dem pyramidenförmigen Block? |
| What is the birthplace of students who graduated before 2002? | Was ist der Geburtsort von Studenten, die vor 2002 ihren Abschluss gemacht haben? |
| Who is the shortest person in my network? | Wer ist die kleinste Person in meinem Netzwerk? |
| Find me the employee who quit between 2004 and 2010. | Welche Angestellten haben zwischen 2004 und 2010 gekündigt? |
| English | Translation into Chinese |
| --- | --- |
| Hotels that have a higher rating than 3 stars? | 评级高于3星级的酒店 |
| Thai restaurants that accept credit cards? | 接受信用卡的泰式餐馆 |
| Show me recipes posted in 2004 or in 2010? | 告诉我2004年或2010年发布的食谱 |
| Which player has played in fewer games than Kobe Bryant? | 哪个球员比科比布莱恩特打得比赛少? |
| Meeting that has duration of less than three hours? | 时长短于3小时的会议 |
| Meetings in Greenberg Cafe that end at 10am? | 在Greenberg咖啡厅举行并且在早上10点结束的会议 |
| Housing units that are smaller than 123 Sesame Street? | 比123芝麻街要小的房屋单元 |
| Publisher of article citing Multivariate Data Analysis? | 引用多变量数据分析的文章出版商 |
| Block that is below at least two blocks? | 在至少两个块以下的块 |
| Find me all students who attended either Brown University or UCLA. | 给我找到所有要么在布朗大学要么在UCLA上学的学生们 |
Table 7: Sample translations from English to German and Chinese for the Overnight dataset (Wang et al., 2015).
# Bridging Textual and Tabular Data for Cross-Domain Text-to-SQL Semantic Parsing

Xi Victoria Lin

Richard Socher

Caiming Xiong

Salesforce Research

{xilin,rsocher,cxiong}@salesforce.com

# Abstract

We present BRIDGE, a powerful sequential architecture for modeling dependencies between natural language questions and relational databases in cross-DB semantic parsing. BRIDGE represents the question and DB schema in a tagged sequence where a subset of the fields are augmented with cell values mentioned in the question.
The hybrid sequence is encoded by BERT with minimal subsequent layers, and the text-DB contextualization is realized via the fine-tuned deep attention in BERT. Combined with a pointer-generator decoder and schema-consistency driven search space pruning, BRIDGE attained state-of-the-art performance on the well-studied Spider benchmark (65.5% dev, 59.2% test), despite being much simpler than most recently proposed models for this task. Our analysis shows that BRIDGE effectively captures the desired cross-modal dependencies and has the potential to generalize to more text-DB related tasks. Our implementation is available at https://github.com/salesforce/TabularSemanticParsing.

# 1 Introduction

Text-to-SQL semantic parsing addresses the problem of mapping natural language utterances to executable relational DB queries. Early work in this area focuses on training and testing the semantic parser on a single DB (Hemphill et al., 1990; Dahl et al., 1994; Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Dong and Lapata, 2016). However, DBs are widely used in many domains and developing a semantic parser for each individual DB is unlikely to scale in practice.

More recently, large-scale datasets consisting of hundreds of DBs and the corresponding question-SQL pairs have been released (Yu et al., 2018; Zhong et al., 2017; Yu et al., 2019b,a) to encourage the development of semantic parsers that can work

![](images/6ce2528871e843641c99a03b2c7c863b3535bc29f49901876533aed3496078b3.jpg)

![](images/2edbee25cdbd4bfab5a5a28618295ed6a54bc9c02e28196fb76a2dd36568aa28.jpg)
Figure 1: Two questions from the Spider dataset with similar intent result in completely different SQL logical forms on two DBs. In cross-DB text-to-SQL semantic parsing, the interpretation of a natural language question is strictly grounded in the underlying relational DB schema.
well across different DBs (Guo et al., 2019; Bogin et al., 2019b; Zhang et al., 2019; Wang et al., 2019; Suhr et al., 2020; Choi et al., 2020). The setup is challenging as it requires the model to interpret a question conditioned on a relational DB unseen during training and accurately express the question intent via SQL logic. Consider the two examples shown in Figure 1: both questions have the intent to count, but the corresponding SQL queries are drastically different due to differences in the target DB schema. As a result, cross-DB text-to-SQL semantic parsers cannot trivially memorize seen SQL patterns, but instead have to accurately model the natural language question, the target DB structure, and the contextualization of both.

State-of-the-art cross-DB text-to-SQL semantic parsers adopt the following design principles to address the aforementioned challenges. First, the question and schema representation should be contextualized with each other (Hwang et al., 2019; Guo et al., 2019; Wang et al., 2019; Yin et al., 2020). Second, large-scale pre-trained language models (LMs) such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019c) can significantly boost parsing accuracy by providing better representations of text and capturing long-term dependencies. Third, under data privacy constraints, leveraging available DB content can resolve ambiguities in the DB schema (Bogin et al., 2019b; Wang et al., 2019; Yin et al., 2020). Consider the second example in Figure 1: knowing "PLVDB" is a value of the field Journal.Name helps the model to generate the WHERE condition.

We present BRIDGE, a powerful sequential text-DB encoding framework assembling the three design principles mentioned above. BRIDGE represents the relational DB schema as a tagged sequence concatenated to the question.
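Concretely, the tagged serialization just introduced can be sketched as follows. The function, the example schema, and the `[V]` token for matched cell values are illustrative assumptions (details follow in §2.2-2.3), not the released implementation:

```python
def serialize(question, schema, anchor_texts=None):
    """Build the hybrid question-schema sequence.

    schema:       list of (table_name, [field_names]) pairs.
    anchor_texts: optional dict mapping (table, field) to the list of
                  cell values matched in the question.
    """
    anchor_texts = anchor_texts or {}
    tokens = ["[CLS]", question, "[SEP]"]
    for table, fields in schema:
        tokens += ["[T]", table]
        for field in fields:
            tokens += ["[C]", field]
            # Matched cell values follow their field, each preceded
            # by the special token [V].
            for value in anchor_texts.get((table, field), []):
                tokens += ["[V]", value]
    tokens.append("[SEP]")
    return " ".join(tokens)
```

For example, serializing the question "Show names of properties that are houses" against a single `Properties` table with fields `Property_Name` and `Property_Type_Code`, with the matched value "House", yields `[CLS] Show names of properties that are houses [SEP] [T] Properties [C] Property_Name [C] Property_Type_Code [V] House [SEP]`.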
Different from previous work which proposed special-purpose layers for modeling the DB schema (Bogin et al., 2019a,b; Zhang et al., 2019; Choi et al., 2020) and cross text-DB linking (Guo et al., 2019; Wang et al., 2019), BRIDGE encodes the tagged hybrid sequence with BERT and lightweight subsequent layers: two single-layer bi-directional LSTMs (Hochreiter and Schmidhuber, 1997). Each schema component (table or field) is simply represented using the hidden state of its special token in the hybrid sequence. To better align the schema components with the question, BRIDGE augments the hybrid sequence with anchor texts, which are automatically extracted DB cell values mentioned in the question. Anchor texts are appended to their corresponding fields in the hybrid sequence (Figure 2). The text-DB alignment is then implicitly achieved via fine-tuned BERT attention between overlapped lexical tokens.

Combined with a pointer-generator decoder (See et al., 2017) and schema-consistency driven search space pruning, BRIDGE performs competitively on the well-studied Spider benchmark (Structure Acc: $65.5\%$ dev, $59.2\%$ test, top-4 rank; Execution Acc: $59.9\%$ test, top-1 rank), outperforming most of the recently proposed models with more sophisticated neural architectures. Our analysis shows that when applied to Spider, the BERT-encoded hybrid representation can effectively capture useful cross-modal dependencies, and the anchor text augmentation results in a significant performance improvement.

# 2 Model

In this section, we present the BRIDGE model, which combines a BERT-based encoder with a sequential pointer-generator to perform end-to-end cross-DB text-to-SQL semantic parsing.

# 2.1 Problem Definition

We formally define the cross-DB text-to-SQL task as follows. Given a natural language question $Q$ and the schema $S = \langle \mathcal{T},\mathcal{C}\rangle$ for a relational database, the parser needs to generate the corresponding SQL query $Y$.
The schema consists of tables $\mathcal{T} = \{t_1,\dots ,t_N\}$ and fields $\mathcal{C} = \{c_{11},\ldots ,c_{1|T_1|},\ldots ,c_{N1},\ldots ,c_{N|T_N|}\}$. Each table $t_i$ and each field $c_{ij}$ has a textual name. Some fields are primary keys, used for uniquely indexing each data record, and some are foreign keys, used to reference a primary key in a different table. In addition, each field has a data type, $\tau \in \{\text{number},\text{text},\text{time},\text{boolean},\text{etc.}\}$.

Most existing solutions for this task do not consider DB content (Zhong et al., 2017; Yu et al., 2018). Recent approaches show that accessing DB content significantly improves system performance (Liang et al., 2018; Wang et al., 2019; Yin et al., 2020). We consider the setting adopted by Wang et al. (2019), where the model has access to the value set of each field instead of the full DB content. For example, the field Property_Type_Code in Figure 2 can take one of the following values: {"Apartment", "Field", "House", "Shop", "Other"}. We call such value sets picklists. This setting protects individual data records, and sensitive fields such as user IDs or credit numbers can be hidden.

# 2.2 Question-Schema Serialization and Encoding

As shown in Figure 2, we represent each table with its table name followed by its fields. Each table name is preceded by the special token [T] and each field name is preceded by [C]. The representations of multiple tables are concatenated to form a serialization of the schema, which is surrounded by two [SEP] tokens and concatenated to the question. Finally, following the input format of BERT, the question is preceded by [CLS] to form the hybrid question-schema serialization

$$
\begin{array}{l} X = [\mathrm{CLS}], Q, [\mathrm{SEP}], [\mathrm{T}], t_1, [\mathrm{C}], c_{11}, \dots , [\mathrm{C}], c_{1|T_1|}, \\ [\mathrm{T}], t_2, [\mathrm{C}], c_{21}, \dots , [\mathrm{C}], c_{N|T_N|}, [\mathrm{SEP}]. \end{array}
$$

$X$ is encoded with BERT, followed by a bi-directional LSTM to form the base encoding $\pmb{h}_{\mathrm{X}} \in \mathbb{R}^{|X| \times n}$. The question segment of $\pmb{h}_{\mathrm{X}}$ is passed through another bi-LSTM to obtain the question encoding $\pmb{h}_{\mathrm{Q}} \in \mathbb{R}^{|Q| \times n}$. Each table/field is represented using the slice of $\pmb{h}_{\mathrm{X}}$ corresponding to its special token [T]/[C].

Meta-data Features We train dense look-up features to represent meta-data of the schema. This includes whether a field is a primary key ($\boldsymbol{f}_{\mathrm{pri}} \in \mathbb{R}^{2 \times n}$), whether the field appears in a foreign key pair ($\boldsymbol{f}_{\mathrm{for}} \in \mathbb{R}^{2 \times n}$) and the data type of the field ($\boldsymbol{f}_{\mathrm{type}} \in \mathbb{R}^{|\tau| \times n}$). These meta-data features are fused with the base encoding of the schema component via a projection layer $g$ to obtain the following encoding output:

$$
\boldsymbol{h}_{S}^{t_i} = g\left(\left[\boldsymbol{h}_{\mathrm{X}}^{p}; \boldsymbol{0}; \boldsymbol{0}; \boldsymbol{0}\right]\right), \tag{1}
$$

$$
\begin{array}{l} \boldsymbol{h}_{S}^{c_{ij}} = g\left(\left[\boldsymbol{h}_{\mathrm{X}}^{q}; \boldsymbol{f}_{\mathrm{pri}}^{u}; \boldsymbol{f}_{\mathrm{for}}^{v}; \boldsymbol{f}_{\mathrm{type}}^{w}\right]\right) \tag{2} \\ = \operatorname{ReLU}\left(\boldsymbol{W}_{g}\left[\boldsymbol{h}_{\mathrm{X}}^{q}; \boldsymbol{f}_{\mathrm{pri}}^{u}; \boldsymbol{f}_{\mathrm{for}}^{v}; \boldsymbol{f}_{\mathrm{type}}^{w}\right] + \boldsymbol{b}_{g}\right) \end{array}
$$

$$
\boldsymbol{h}_{S} = \left[\boldsymbol{h}_{S}^{t_1}, \dots , \boldsymbol{h}_{S}^{t_N}, \boldsymbol{h}_{S}^{c_{11}}, \dots , \boldsymbol{h}_{S}^{c_{N|T_N|}}\right] \in \mathbb{R}^{|S| \times n}, \tag{3}
$$

where $p$ is the index of [T] associated with table $t_i$ in $X$ and $q$ is the index of [C] associated
with field $c_{ij}$ in $X$. $u$, $v$ and $w$ are feature indices indicating the properties of $c_{ij}$. $[\boldsymbol{h}_{\mathrm{X}}^{q};\boldsymbol{f}_{\mathrm{pri}}^{u};\boldsymbol{f}_{\mathrm{for}}^{v};\boldsymbol{f}_{\mathrm{type}}^{w}]\in \mathbb{R}^{4n}$ is the concatenation of the four vectors. The meta-data features are specific to fields; the table representations are fused with placeholder $\boldsymbol{0}$ vectors.

# 2.3 Bridging

Modeling only the table/field names and their relations is not always enough to capture the semantics of the schema and its dependencies with the question. Consider the example in Figure 2: Property_Type_Code is a general expression not explicitly mentioned in the question, and without access to the set of possible field values, it is difficult to associate "houses" and "apartments" with it. To resolve this problem, we make use of anchor texts to link value mentions in the question with the corresponding DB fields. We perform fuzzy string match between $Q$ and the picklist of each field in the DB. The matched field values (anchor texts) are inserted into the question-schema representation $X$, succeeding the corresponding field names and separated by the special token [V]. If multiple values are matched for one field, we concatenate all of them in matching order (Figure 2). If a question mention is matched with values in multiple fields, we add all matches and let the model learn to resolve ambiguity1.

The anchor texts provide additional lexical clues for BERT to identify the corresponding mention in $Q$. We name this mechanism "bridging".

# 2.4 Decoder

We use an LSTM-based pointer-generator (See et al., 2017) with multi-head attention (Vaswani et al., 2017) as the decoder. The decoder starts from the final state of the question encoder. At each step, the decoder performs one of the following actions: generating a token from the vocabulary $\mathcal{V}$, copying a token from the question $Q$, or copying a schema component from $S$.
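A minimal NumPy sketch of how these three actions combine into a single output distribution (cf. Eq. 7 below); the names, shapes, and the single flat extended vocabulary are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def output_distribution(p_gen, p_vocab, attention, copy_ids, ext_vocab_size):
    """Mix generation and copy probabilities into one distribution.

    p_gen:          scalar probability of generating from the vocabulary.
    p_vocab:        softmax over the generation vocabulary.
    attention:      last-head attention weights over the source sequence
                    (question words plus the [T]/[C] schema tokens).
    copy_ids:       extended-vocabulary id of each source token.
    ext_vocab_size: generation vocabulary plus copyable source tokens.
    """
    out = np.zeros(ext_vocab_size)
    out[: len(p_vocab)] = p_gen * p_vocab
    # Scatter-add: a token appearing several times in the source
    # accumulates attention mass from all of its positions.
    np.add.at(out, copy_ids, (1.0 - p_gen) * attention)
    return out
```

With, say, `p_gen = 0.6`, a two-word generation vocabulary, and two copyable source tokens, the result is a proper probability distribution over the extended vocabulary.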
Mathematically, at each step $t$, given the decoder state $\boldsymbol{s}_t$ and the encoder representation $[\boldsymbol{h}_Q; \boldsymbol{h}_S] \in \mathbb{R}^{(|Q| + |S|) \times n}$, we compute the multi-head attention as defined in Vaswani et al. (2017):

$$
e_{tj}^{(h)} = \frac{\boldsymbol{s}_t \boldsymbol{W}_U^{(h)} \left(\boldsymbol{h}_j \boldsymbol{W}_V^{(h)}\right)^{\top}}{\sqrt{n / H}}; \quad \alpha_{tj}^{(h)} = \operatorname{softmax}_j \left\{ e_{tj}^{(h)} \right\}, \tag{4}
$$

$$
\boldsymbol{z}_t^{(h)} = \sum_{j=1}^{|Q| + |S|} \alpha_{tj}^{(h)} \left(\boldsymbol{h}_j \boldsymbol{W}_V^{(h)}\right); \quad \boldsymbol{z}_t = \left[\boldsymbol{z}_t^{(1)}; \dots ; \boldsymbol{z}_t^{(H)}\right], \tag{5}
$$

where $h\in [1,\dots ,H]$ is the head number and $H$ is the total number of heads.

The scalar probability of generating from $\mathcal{V}$ and the output distribution are

$$
p_{\mathrm{gen}}^t = \operatorname{sigmoid}\left(\boldsymbol{s}_t \boldsymbol{W}_{\mathrm{gen}}^s + \boldsymbol{z}_t \boldsymbol{W}_{\mathrm{gen}}^z + \boldsymbol{b}_{\mathrm{gen}}\right), \tag{6}
$$

$$
p_{\mathrm{out}}^t = p_{\mathrm{gen}}^t P_{\mathcal{V}}(y_t) + \left(1 - p_{\mathrm{gen}}^t\right) \sum_{j: \tilde{X}_j = y_t} \alpha_{tj}^{(H)}, \tag{7}
$$

where $P_{\mathcal{V}}(y_t)$ is the softmax LSTM output distribution and $\tilde{X}$ is the length-$(|Q| + |S|)$ sequence that consists of only the question words and special tokens [T] and [C] from $X$. We use the attention weights of the last head to compute the pointing distribution2.

We extend the input state to the LSTM decoder using the selective read proposed by Gu et al. (2016).

![](images/d8eadd7a980930551fa79bd06d0a6b5eedfac9437f6e06b8f5a770f72e7cdfde.jpg)
Figure 2: The BRIDGE encoder.
The two phrases "houses" and "apartments" in the input question are both matched to two DB fields. The matched values are appended to the corresponding field names in the hybrid sequence.

The technical details of this extension can be found in §A.2.

# 2.5 Schema-Consistency Guided Decoding

We propose a simple pruning strategy for sequence decoders, based on the fact that the DB fields appearing in each SQL clause must only come from the tables in the FROM clause.

Generating SQL Clauses in Execution Order To this end, we rearrange the clauses of each SQL query in the training set into the standard DB execution order (Rob and Coronel, 1995) shown in Table 1. For example, the SQL query SELECT COUNT(*) FROM Properties is converted to FROM Properties SELECT COUNT(*)$^3$. We can show that all SQL queries with clauses in execution order satisfy the following lemma.

Lemma 1 Let $Y_{\text{exec}}$ be a SQL query with clauses arranged in execution order, then any table field in $Y_{\text{exec}}$ must appear after the table.

As a result, we adopt a binary attention mask $\xi$,

$$
\tilde{\alpha}_t^{(H)} = \alpha_t^{(H)} \cdot \xi, \tag{8}
$$

which initially has the entries corresponding to all fields set to 0. Once a table $t_i$ is decoded, we set all entries in $\xi$ corresponding to $\{c_{i1},\dots ,c_{i|T_i|}\}$ to 1. This allows the decoder to search only in the space specified by the condition in Lemma 1, with little overhead in decoding speed.

| Order | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Written | SELECT | FROM | WHERE | GROUP BY | HAVING | ORDER BY | LIMIT |
| Exec | FROM | WHERE | GROUP BY | HAVING | SELECT | ORDER BY | LIMIT |

Table 1: The written order vs. execution order of all SQL clauses appearing in Spider.
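The mask update around Eq. 8 can be sketched as follows. The elementwise masking mirrors Eq. 8; the index layout and the decision of whether to renormalize afterwards are illustrative assumptions:

```python
import numpy as np

def schema_mask(num_items, table_pos_to_fields, decoded_tables):
    """Binary mask over schema items: tables are always visible,
    while a field becomes visible only after its table is decoded."""
    xi = np.zeros(num_items)
    for t_pos, field_positions in table_pos_to_fields.items():
        xi[t_pos] = 1.0
        if t_pos in decoded_tables:
            for f_pos in field_positions:
                xi[f_pos] = 1.0
    return xi

# Two tables: positions 0 (fields 1, 2) and 3 (field 4).
# Only table 0 has been decoded so far.
xi = schema_mask(5, {0: [1, 2], 3: [4]}, {0})
attention = np.full(5, 0.2)
masked = attention * xi  # field 4 is pruned from the search space
```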
# 3 Related Work

Text-to-SQL Semantic Parsing Recently the field has witnessed a resurgence of interest in text-to-SQL semantic parsing (Androutsopoulos et al., 1995), by virtue of the newly released large-scale datasets (Zhong et al., 2017; Yu et al., 2018; Zhang et al., 2019) and matured neural network modeling tools (Vaswani et al., 2017; Shaw et al., 2018; Devlin et al., 2019). While existing models have surpassed human performance on benchmarks consisting of single-table and simple SQL queries (Hwang et al., 2019; Lyu et al., 2020; He et al., 2019a), ample space for improvement still remains on the Spider benchmark, which consists of relational DBs and complex SQL queries4.

Recent architectures proposed for this problem show increasing complexity in both the encoder and the decoder (Guo et al., 2019; Wang et al., 2019; Choi et al., 2020). Bogin et al. (2019a,b) proposed to encode the relational DB schema as a graph and also use the graph structure to guide decoding. Guo et al. (2019) proposes schema linking and SemQL, an intermediate SQL representation customized for questions in the Spider dataset, which is synthesized via a tree-based decoder. Wang et al. (2019) proposes RAT-SQL, a unified graph encoding mechanism which effectively covers relations in the schema graph and its linking with the question. The overall architecture of RAT-SQL is deep, consisting of 8 relational self-attention layers on top of BERT-large.

In comparison, BRIDGE uses BERT combined with minimal subsequent layers. It uses a simple sequence decoder with search-space pruning heuristics and applies little abstraction to the SQL surface form. Its encoding architecture took inspiration from the table-aware BERT encoder proposed by Hwang et al. (2019), which is very effective for WikiSQL but has not been successfully adapted to Spider. Yavuz et al. (2018) uses question-value matches to achieve high-precision condition predictions on WikiSQL. Shaw et al.
(2019) also show that value information is critical to cross-DB semantic parsing tasks, yet the paper reported negative results when augmenting a GNN encoder with BERT, and the overall model performance was much below state-of-the-art. While previous work such as (Guo et al., 2019; Wang et al., 2019; Yin et al., 2020) uses feature embeddings or relational attention layers to explicitly model schema linking, BRIDGE models the linking implicitly with BERT and lexical anchors.

An earlier version of this model is implemented within the Photon NLIDB (Zeng et al., 2020), with up to one anchor text per field and an inferior anchor text matching algorithm.

Joint Text-Table Representation and Pretraining BRIDGE is a general framework for jointly representing questions, relational DB schema and DB values, and has the potential to be applied to a wide range of problems that require joint textual-tabular data understanding. Recently, Yin et al. (2020) propose TaBERT, an LM for jointly representing textual and tabular data, pre-trained over millions of web tables. Similarly, Herzig et al. (2020) propose TAPAS, a pretrained text-table LM that supports arithmetic operations for weakly supervised table QA. Both TaBERT and TAPAS focus on representing text with a single table. TaBERT was applied to Spider by encoding each table individually and modeling cross-table correlation through hierarchical attention. In comparison, BRIDGE serializes the relational DB schema and uses BERT to model cross-table dependencies. TaBERT adopts the "content snapshot"
| | # Q | # SQL | # DB |
| --- | --- | --- | --- |
| Train | 8,695 | 4,730 | 140 |
| Dev | 1,034 | 564 | 20 |
| Test | 2,147 | - | 40 |
Table 2: Spider Dataset Statistics

mechanism, which retrieves rows from a table most similar to the input question and jointly encodes them with the table header. Compared to the anchor texts used by BRIDGE, table rows are not always available if DB content access is restricted. Furthermore, anchor texts provide more focused signals that link the text and the DB schema.

# 4 Experiment Setup

# 4.1 Dataset

We evaluate BRIDGE using Spider (Yu et al., 2018), a large-scale, human-annotated, cross-database text-to-SQL benchmark. Table 2 shows the statistics of its train/dev/test splits. The test set is hidden. We run hyperparameter search and analysis on the dev set and report the test set performance only using our best approach.

# 4.2 Evaluation Metrics

We report the official evaluation metrics proposed by the Spider team.

Exact Set Match (E-SM) This metric evaluates the structural correctness of the predicted SQL by checking the orderless set match of each SQL clause in the predicted query w.r.t. the ground truth. It ignores errors in the predicted values.

Execution Accuracy (EA) This metric checks if the predicted SQL is executable on the target DB and if the execution results match those of the ground truth. It is a performance upper bound, as two SQL queries with different semantics can execute to the same results on a DB.

# 4.3 Implementation Details

Anchor Text Selection Given a DB, we compute the picklist of each field using the official DB files. We designed a fuzzy matching algorithm to match a question to possible value mentions in the DB (described in detail in §A.3). We include up to $k$ matches per field, and break ties by taking the longer match. We exclude all number matches as
a number mention in the question often does not correspond to a DB cell (e.g. "shoes lower than $50") or cannot effectively discriminate between different fields. Figure 3 shows the distribution of non-numeric values in the ground truth SQL queries on the Spider dev set. $33\%$ of the examples contain one or more non-numeric values in the ground truth queries and can potentially benefit from the bridging mechanism.

Data Repair The original Spider dataset contains errors in both the example files and database files. We manually corrected some errors in the train and dev examples. For comparison with others in §5.1, we report metrics using the official dev/test sets. For our own ablation study and analysis, we report metrics using the corrected dev files. We also use a high-precision heuristic to identify missing foreign key pairs in the databases and combine them with the released ones during training and inference: if two fields of different tables have identical names and one of them is a primary key, we count them as a foreign key pair6.

Training We train our model using cross-entropy loss. We use Adam-SGD (Kingma and Ba, 2015) with default parameters and a mini-batch size of 32. We use the uncased BERT-base model from Huggingface's transformers library (Wolf et al., 2019). We set all LSTMs to 1 layer and set the hidden state dimension $n = 512$. We train a maximum of 50,000 steps; the learning rate is set to $5e-4$ in the first 5,000 iterations and then linearly shrunk to 0. We fine-tune BERT with a fine-tuning rate linearly increasing from $3e-5$ to $8e-5$ in the first 5,000 iterations and linearly decaying to 0. We randomly permute the table order in a DB schema and drop one table which does not appear in the ground truth with probability 0.3 in every training step. The training time of our model on a Tesla
| Model | Dev | Test |
| --- | --- | --- |
| Global-GNN (Bogin et al., 2019b) | 52.7 | 47.4 |
| EditSQL + BERT (Zhang et al., 2019) | 57.6 | 53.4 |
| GNN + Bertrand-DR (Kelkar et al., 2020) | 57.9 | 54.6 |
| IRNet + BERT (Guo et al., 2019) | 61.9 | 54.7 |
| RAT-SQL v2 $\spadesuit$ (Wang et al., 2019) | 62.7 | 57.2 |
| RYANSQL + $\mathrm{BERT}_L$ (Choi et al., 2020) | 66.6 | 58.2 |
| RYANSQL v2 + $\mathrm{BERT}_L$ $\diamond$ | 70.6 | 60.6 |
| RAT-SQL v3 + $\mathrm{BERT}_L$ $\spadesuit$ (Wang et al., 2019) | 69.7 | 65.6 |
| BRIDGE (k=1) (ours) | 65.3 | - |
| BRIDGE (k=2) (ours) | 65.5 | 59.2 |
Table 3: Exact set match on the Spider dev and test sets, compared to the other top-performing approaches on the leaderboard as of June 1st, 2020. The test set results were issued by the Spider team. $\mathrm{BERT}_L$ denotes $\mathrm{BERT}_{\mathrm{LARGE}}$. $\diamond$ denotes approaches without a publication reference. $\spadesuit$ denotes approaches using DB content. $\odot$ denotes approaches that output executable SQL queries.

V100-SXM2-16GB GPU is approximately 33h (including intermediate results verification time).

Decoding The decoder uses a generation vocabulary consisting of 70 SQL keywords and reserved tokens, plus the 10 digits to generate numbers not explicitly mentioned in the question (e.g. "first", "second", "youngest", etc.). We use a beam size of 256 for the leaderboard evaluation. All other experiments use a beam size of 16. We use schema-consistency guided decoding during inference only. It cannot guarantee schema consistency7, so we run a static SQL correctness check on the beam search output to eliminate predictions that are either syntactically incorrect or violate schema consistency8. If no prediction in the beam satisfies the two criteria, we output a default SQL query which counts the number of entries in the first table.

# 5 Results

# 5.1 End-to-end Performance Evaluation

Table 3 shows the E-SM accuracy of BRIDGE compared to other approaches ranking at the top of the Spider leaderboard. BRIDGE per
As of June 1st, 2020, BRIDGE ranks top-1 on the Spider leaderboard by execution accuracy. + +The two approaches significantly better than BRIDGE by E-SM are RYANSQL v2+BERT $_L$ and RAT-SQL v3+BERT $_L$ . We further look at the performance comparison with RAT-SQL v3+BERT $_L$ across different difficulty levels in Table 4. Both model achieves $>80\%$ E-SM accuracy in the easy category, but BRIDGE shows more significant overfitting. BRIDGE also underperforms RAT-SQL v3+BERT $_L$ in the other three categories, with considerable gaps in medium and hard. + +As described in §3, RAT-SQL v3 uses very different encoder and decoder architectures compared to BRIDGE and it is difficult to conduct a direct comparison without a model ablation. We hypothesize that the most critical difference that leads to the performance gap is in their encoding schemes. RAT-SQL v3 explicitly models the question-schema-value matching via a graph and the matching condition (full-word match, partial match, etc.) are used to label the graph edge. BRIDGE represents the same information in a tagged sequence and uses fine-tuned BERT to implicitly obtain such mapping. While the anchor text selection algorithm (§4.3) has taken into account string variations, BERT may not be able to capture the linking when string variations exist – it has not seen tabular input during pre-training. The tokenization scheme adopted by BERT and other pre-trained LMs (e.g. GPT-2) cannot effectively capture partial string matches in a novel input (e.g. “cats” and “cat” are two different words in the vocabularies of BERT and GPT-2). We think recent works on text-table joint pretraining have the potential to overcome this problem (Yin et al., 2020; Herzig et al., 2020). + +RAT-SQL v3 uses BERTLARGE which has a significantly larger number of parameters than + +
| Model | Easy | Medium | Hard | Ex-Hard | All |
|---|---|---|---|---|---|
| count | 250 | 440 | 174 | 170 | 1034 |
| **Dev** | | | | | |
| BRIDGE (k=2)♣ | 88.4 | 68 | 51.7 | 39.4 | 65.5 |
| RAT-SQL v3+B$_L$♣ | 86.4 | 73.6 | 62.1 | 42.9 | 69.7 |
| **Test** | | | | | |
| BRIDGE (k=2)♣ | 80 | 62 | 51 | 35.6 | 59.2 |
| IRNet+B | 77.2 | 58.7 | 48.1 | 25.3 | 54.7 |
| RAT-SQL v3+B$_L$♣ | 83.0 | 71.3 | 58.3 | 38.4 | 65.6 |

Table 4: E-SM accuracy broken down by hardness level, compared to other top-performing approaches on the Spider leaderboard.
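The exact set match (E-SM) metric used throughout these tables credits a prediction when its clauses match the gold query as unordered sets. The following is a toy sketch of the idea only, with hypothetical helper names; the official Spider evaluator parses SQL properly and also handles nesting, aliases, and value anonymization.

```python
import re

def clause_set(sql: str):
    """Split a flat SQL string into (keyword, unordered item set) pairs.
    Toy version: naive keyword splitting, no nesting or alias handling."""
    keywords = ("SELECT", "FROM", "WHERE", "GROUP BY", "HAVING",
                "ORDER BY", "LIMIT")
    pattern = "(" + "|".join(keywords) + ")"
    parts = re.split(pattern, sql, flags=re.IGNORECASE)
    pairs = set()
    for kw, body in zip(parts[1::2], parts[2::2]):
        # Treat comma-separated clause items as an unordered set,
        # so column order inside SELECT does not matter.
        items = frozenset(x.strip().lower() for x in body.split(","))
        pairs.add((kw.upper(), items))
    return pairs

def exact_set_match(pred: str, gold: str) -> bool:
    return clause_set(pred) == clause_set(gold)
```

Under this relaxation, `SELECT name, age FROM singer` and `SELECT age, name FROM singer` count as a match, while any difference in clause content does not.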
| Model | E-SM Mean (%) | E-SM Max (%) |
|---|---|---|
| BRIDGE (k=2) | 65.8 ± 0.8 | 66.9 |
| - SC-guided decoding | 65.4 ± 0.7 | 66.3 (-0.6) |
| - static SQL check | 64.8 ± 0.9 | 65.9 (-1.0) |
| - execution order | 64.2 ± 0.1 | 64.3 (-2.6) |
| - table shuffle & drop | 63.9 ± 0.3 | 64.3 (-2.6) |
| - anchor text | 63.3 ± 0.6 | 63.9 (-3.0) |
| - BERT | 17.7 ± 0.7 | 18.3 (-48.6) |
Table 5: BRIDGE ablations on the dev set. We report the exact set match accuracy of each model variation averaged over 3 runs.

BRIDGE. While we hypothetically attribute some of the performance gap to the difference in model sizes, preliminary experiments with BRIDGE + BERT$_{\text{LARGE}}$ offer only a small amount of improvement (66.9 → 67.9 on the cleaned dev set).

# 5.2 Ablation Study

We perform a thorough ablation study to show the contribution of each BRIDGE sub-component (Table 5). Overall, all sub-components contribute significantly to model performance. The decoding search space pruning strategies we introduced (including generation in execution order, schema-consistency guided decoding and the static SQL correctness check) are effective, with absolute E-SM improvements ranging from $0.6\%$ to $2.6\%$. However, the encoding techniques for bridging textual and tabular input contribute more. In particular, adding anchor texts results in an absolute E-SM improvement of $3\%$. A further comparison between BRIDGE with and without anchor texts (Table A3) shows that anchor text augmentation improves the model performance at all hardness levels, especially in the hard and extra-hard categories. Shuffling and randomly dropping non-ground-truth tables during training also significantly helps our approach, as it increases the diversity of DB schemas seen by the model and reduces overfitting to a particular table arrangement.

![](images/de1d5d428db27a51db0a567d0aa29db09d9b765930578d7bba831e696b3b8a5e.jpg)
Figure 4: BRIDGE error type counts.

Moreover, BERT is critical to the performance of BRIDGE, magnifying the performance of the base model by more than threefold. This is considerably larger than the improvement prior approaches have obtained from adding BERT. Considering the performances of RAT-SQL v2 and RAT-SQL v2+BERT$_L$ in Table 3, the improvement with $\mathrm{BERT}_L$ is $7\%$.
This shows that simply adding BERT to existing approaches results in significant redundancy in the model architecture. We perform a qualitative attention analysis in §A.6 to show that after fine-tuning, the BERT layers effectively capture the linking between question mentions and the anchor texts, as well as the relational DB structures.

# 5.3 Error Analysis

We randomly sampled 50 dev set examples for which the best BRIDGE model failed to produce a prediction that matches the ground truth, and manually categorized the errors. Each example is assigned only to the category it fits best.

Error Types Figure 4 shows the number of examples in each category. $24\%$ of the examined predictions are false negatives. Among them, 7 are semantically equivalent to the ground truths; 4 contain GROUP BY keys different from but equivalent to those of the ground truth (e.g. GROUP BY car_models.name vs. GROUP BY car_models.id); 1 has a wrong ground truth annotation. Among the true negatives, 11 have SQL structures that completely deviate from the ground truth. 22 have errors that can be pinpointed to specific clauses: FROM (8), WHERE (7), SELECT (5), GROUP BY (1), ORDER BY (1). 4 have errors in the operators: 3 in the aggregation operator and 1 in the comparison operator. 1 example has a non-grammatical natural language question.

Error Causes A prominent cause of errors for BRIDGE is irregular design and naming in the DB schema. Table 6 shows 3 examples where BRIDGE made a wrong prediction, all from the medium hardness level in the dev set. In the second example, the DB contains a field named "hand" which stores information indicating whether a tennis player is right-handed or left-handed. While "hand" is already a rarely seen field name (compared to "name", "address" etc.), the problem is worsened by the fact that the field values are acronyms which bypassed the anchor text match.
Similarly, in the third example, BRIDGE fails to detect that "highschooler", normally written as "high schooler", is a synonym of "student". Occasionally, however, BRIDGE still makes mistakes w.r.t. schema components explicitly mentioned in the question, as shown by the first example. Addressing such error cases could further improve its performance.

Sample Error Cases Table 6 shows examples of errors made by BRIDGE on the Spider dev set, all selected from the medium hardness level. The first example represents a type of error that has a surprisingly high occurrence in the dev set. In this case, the input question is unambiguous, but the model simply missed seemingly obvious information. In the shown example, while "released years" were explicitly mentioned in the question, the model still predicts the "Age" field, which is related to the tail of the question. The second example illustrates a DB with a rare relation, "left-handed", represented with an obscure field name, "hand". Interpreting this column requires background knowledge about the table. The example is made even harder given that the corresponding value "left" is denoted with only the first letter "L" in the table. The third example shows a complex case where the graph structure of the DB is critical for understanding the question. Here, instead of predicting the table storing all student records, BRIDGE predicted the table storing the "friendship" relation among students.

# 5.4 Performance by Database

We further compute the E-SM accuracy of BRIDGE over different DBs in the Spider dev set. Figure 5 shows drastic performance differences across DBs. While BRIDGE achieves a near perfect score on some, the performance is only $30\% - 40\%$ on others. The performance does not always negatively correlate with the schema size.

![](images/9e48268dd82d309f96f16774199c5a3a9e65244e47baf87352989d27b123b848.jpg)
Table 6: Error cases of BRIDGE on the Spider dev set.
The samples were randomly selected from the medium hardness level. $\times$ denotes the wrong predictions made by BRIDGE and $\checkmark$ denotes the ground truths.

![](images/5fa8c5b905f596546e44ede40ba149424e6c21aecb37c1bd65b0504c185e2c6b.jpg)
Figure 5: E-SM accuracy of BRIDGE by DB in the Spider dev set. From top to bottom, the DBs are sorted by their schema sizes from small to large.

We hypothesize that the model scores better on DB schemas similar to those seen during training, and that a better characterization of this "similarity" could help transfer learning.

# 6 Discussion

Anchor Selection BRIDGE adopts simple string matching for anchor text selection. In our experiments, improving anchor text selection accuracy significantly improves the end-to-end accuracy. Extending anchor text matching to cases beyond simple string matching (e.g. "LA" → "Los Angeles") is a future direction. Furthermore, this step can be learned either independently or jointly with the text-to-SQL objective. Currently BRIDGE ignores number mentions. We may introduce features indicating that a specific number in the question falls within the value range of a specific column.

Input Size As BRIDGE serializes all inputs into a sequence with special tags, a fair concern is that the input would be too long for large relational DBs. We believe this can be addressed with recent architecture advancements in transformers (Beltagy et al., 2020), which have scaled up the attention mechanism to model very long sequences.

Relation Encoding BRIDGE fuses DB schema meta-data features into each individual table field representation. This mechanism is not as strong as directly modeling the original graph structure. It works well on Spider, where the foreign key pairs often have exactly the same names.
We consider regularizing specific attention heads to capture DB connections (Strubell et al., 2018) to be a promising way to model the graph structure of relational DBs within the BRIDGE framework without introducing (a lot of) additional parameters.

# 7 Conclusion

We present BRIDGE, a powerful sequential architecture for modeling dependencies between natural language questions and relational DBs in cross-DB semantic parsing. BRIDGE serializes the question and DB schema into a tagged sequence and maximally utilizes pre-trained LMs such as BERT to capture the linking between text mentions and the DB schema components. It uses anchor texts to further improve the alignment between the two cross-modal inputs. Combined with a simple sequential pointer-generator decoder with schema-consistency driven search space pruning, BRIDGE attained state-of-the-art performance on Spider. In the future, we plan to study the application of BRIDGE and its extensions to other text-table related tasks such as fact checking and weakly supervised semantic parsing.

# Acknowledgements

We thank Yingbo Zhou for helpful discussions. We thank the anonymous reviewers and members of Salesforce Research for their thoughtful feedback. A significant part of the experiments were completed during the California Bay Area shelter-in-place order for COVID-19. Our heartfelt thanks go to all who worked hard to keep others safe and able to enjoy a well-functioning life during this challenging time.

# References

I. Androutsopoulos, G.D. Ritchie, and P. Thanisch. 1995. Natural language interfaces to databases - an introduction. Natural Language Engineering, 1(1):29-81.
Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. CoRR, abs/2004.05150.
Ben Bogin, Jonathan Berant, and Matt Gardner. 2019a. Representing schema structure with graph neural networks for text-to-sql parsing.
In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers, pages 4560-4565. Association for Computational Linguistics.
Ben Bogin, Matt Gardner, and Jonathan Berant. 2019b. Global reasoning over database structures for text-to-sql parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3657-3662. Association for Computational Linguistics.
DongHyun Choi, Myeong Cheol Shin, EungGyun Kim, and Dong Ryeol Shin. 2020. RYANSQL: recursively applying sketch-based slot fillings for complex text-to-sql in cross-domain databases. CoRR, abs/2004.03125.
Deborah A. Dahl, Madeleine Bates, Michael Brown, William M. Fisher, Kate Hunicke-Smith, David S. Pallett, Christine Pao, Alexander I. Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the ATIS task: The ATIS-3 corpus. In Human Language Technology, Proceedings of a Workshop held at Plainsboro, New Jersey, USA, March 8-11, 1994.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1, pages 4171-4186.
Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics.
Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning.
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics.
Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, Jian-Guang Lou, Ting Liu, and Dongmei Zhang. 2019. Towards complex text-to-sql in cross-domain database with intermediate representation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers, pages 4524-4535.
Pengcheng He, Yi Mao, Kaushik Chakrabarti, and Weizhu Chen. 2019a. X-SQL: reinforce schema representation with context. CoRR, abs/1908.08113.
Pengcheng He, Yi Mao, Kaushik Chakrabarti, and Weizhu Chen. 2019b. X-SQL: reinforce schema representation with context. arXiv preprint arXiv:1908.08113.
Charles T. Hemphill, John J. Godfrey, and George R. Doddington. 1990. The ATIS spoken language systems pilot corpus. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, USA, June 24-27, 1990.
Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno, and Julian Martin Eisenschlos. 2020. Tapas: Weakly supervised table parsing via pre-training. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Seattle, Washington, United States. To appear.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Wonseok Hwang, Jinyeong Yim, Seunghyun Park, and Minjoon Seo. 2019. A comprehensive exploration on wikisql with table-aware word contextualization. CoRR, abs/1902.01069.
Amol Kelkar, Rohan Relan, Vaishali Bhardwaj, Saurabh Vaichal, and Peter Relan. 2020. Bertrand-DR: Improving text-to-sql using a discriminative re-ranker. arXiv preprint arXiv:2002.00557.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization.
In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. +Anna Korhonen, David R. Traum, and Lluis Márquez, editors. 2019. Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers. Association for Computational Linguistics. +Chen Liang, Mohammad Norouzi, Jonathan Berant, Quoc V. Le, and Ni Lao. 2018. Memory augmented policy optimization for program synthesis and semantic parsing. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montréal, Canada, pages 10015-10027. +Tianyu Liu, Fuli Luo, Pengcheng Yang, Wei Wu, Baobao Chang, and Zhifang Sui. 2019a. Towards comprehensive description generation from factual attribute-value tables. In (Korhonen et al., 2019), pages 5985-5996. +Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019b. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487-4496, Florence, Italy. Association for Computational Linguistics. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019c. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692. +Qin Lyu, Kaushik Chakrabarti, Shobhit Hathi, Souvik Kundu, Jianwen Zhang, and Zheng Chen. 2020. Hybrid ranking network for text-to-sql. Technical Report MSR-TR-2020-7, Microsoft Dynamics 365 AI. +Peter Rob and Carlos Coronel. 1995. Database systems - design, implementation, and management (2. ed.). Boyd and Fraser. +Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1073-1083. +Peter Shaw, Philip Massey, Angelica Chen, Francesco Piccinno, and Yasemin Altun. 2019. Generating logical forms from graph representations of text and entities. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 95-106. Association for Computational Linguistics. + +Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 2 (Short Papers), pages 464-468. Association for Computational Linguistics. +Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 5027-5038. Association for Computational Linguistics. +Alane Suhr, Ming-Wei Chang, Peter Shaw, and Kenton Lee. 2020. Exploring unexplored generalization challenges for cross-database semantic parsing. In The 58th annual meeting of the Association for Computational Linguistics (ACL). +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5998-6008. +Jesse Vig. 2019. A multiscale visualization of attention in the transformer model. arXiv preprint arXiv:1906.05714. 
Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2019. Rat-sql: Relation-aware schema encoding and linking for text-to-sql parsers. ArXiv, abs/1911.04942.
Chenglong Wang, Po-Sen Huang, Alex Polozov, Marc Brockschmidt, and Rishabh Singh. 2018. Execution-guided neural program decoding. CoRR, abs/1807.03100.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.
Semih Yavuz, Izzeddin Gur, Yu Su, and Xifeng Yan. 2018. What it takes to achieve 100 percent condition accuracy on wikisql. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1702-1711. Association for Computational Linguistics.
Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Sebastian Riedel. 2020. Tabert: Pretraining for joint understanding of textual and tabular data. CoRR, abs/2005.08314.

Tao Yu, Michihiro Yasunaga, Kai Yang, Rui Zhang, Dongxu Wang, Zifan Li, and Dragomir R. Radev. 2018. Syntaxsqlnet: Syntax tree networks for complex and cross-domain text-to-sql task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1653-1663.

Tao Yu, Rui Zhang, Heyang Er, Suyi Li, Eric Xue, Bo Pang, Xi Victoria Lin, Yi Chern Tan, Tianze Shi, Zihan Li, Youxuan Jiang, Michihiro Yasunaga, Sungrok Shim, Tao Chen, Alexander Richard Fabbri, Zifan Li, Luyao Chen, Yuwen Zhang, Shreya Dixit, Vincent Zhang, Caiming Xiong, Richard Socher, Walter S. Lasecki, and Dragomir R. Radev. 2019a. Cosql: A conversational text-to-sql challenge towards cross-domain natural language interfaces to databases. CoRR, abs/1909.05378.
Tao Yu, Rui Zhang, Michihiro Yasunaga, Yi Chern Tan, Xi Victoria Lin, Suyi Li, Heyang Er, Irene Li, Bo Pang, Tao Chen, Emily Ji, Shreya Dixit, David Proctor, Sungrok Shim, Jonathan Kraft, Vincent Zhang, Caiming Xiong, Richard Socher, and Dragomir R. Radev. 2019b. Sparc: Cross-domain semantic parsing in context. In (Korhonen et al., 2019), pages 4511-4523.

John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the Thirteenth National Conference on Artificial Intelligence and Eighth Innovative Applications of Artificial Intelligence Conference, AAAI 96, IAAI 96, Portland, Oregon, USA, August 4-8, 1996, Volume 2, pages 1050-1055.

Jichuan Zeng, Xi Victoria Lin, Steven C. H. Hoi, Richard Socher, Caiming Xiong, Michael R. Lyu, and Irwin King. 2020. Photon: A robust cross-domain text-to-sql system. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, ACL 2020, Online, July 5-10, 2020, pages 204-214. Association for Computational Linguistics.

Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In UAI '05, Proceedings of the 21st Conference in Uncertainty in Artificial Intelligence, Edinburgh, Scotland, July 26-29, 2005, pages 658-666. AUAI Press.

Rui Zhang, Tao Yu, Heyang Er, Sungrok Shim, Eric Xue, Xi Victoria Lin, Tianze Shi, Caiming Xiong, Richard Socher, and Dragomir R. Radev. 2019. Editing-based SQL query generation for cross-domain context-dependent questions. CoRR, abs/1909.00786.

Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. CoRR, abs/1709.00103.
# A Appendix

# A.1 Examples of SQL queries with clauses arranged in execution order

We show more examples of complex SQL queries with their clauses arranged in written order vs. execution order in Table A1.

# A.2 Selective read decoder extension

The selective read operation was introduced by Gu et al. (2016). It extends the input state to the decoder LSTM with the corresponding encoder hidden states of the tokens being copied. This way, the decoder is provided with information about which part of the input has been copied.

Specifically, we modified the input state of our decoder LSTM to the following:

$$
\mathbf{y}_{t} = \left[ \boldsymbol{e}_{t - 1}; \zeta_{t - 1} \right] \in \mathbb{R}^{2n}, \tag{9}
$$

where $\boldsymbol{e}_{t-1} \in \mathbb{R}^n$ is either the embedding of a generated vocabulary token or a learned vector indicating if a table, field or question token is copied in step $t - 1$. $\zeta_{t-1} \in \mathbb{R}^n$ is the selective read vector, which is a weighted sum of the encoder hidden states corresponding to the tokens copied in step $t - 1$:

$$
\zeta\left(\mathrm{y}_{t - 1}\right) = \sum_{j = 1}^{|Q| + |S|} \rho_{t - 1, j} \boldsymbol{h}_{j}; \quad \rho_{t - 1, j} = \begin{cases} \frac{1}{K} \alpha_{t - 1, j}^{(H)}, & \tilde{X}_{j} = \mathrm{y}_{t - 1} \\ 0, & \text{otherwise} \end{cases} \tag{10}
$$

Here $K = \sum_{j:\tilde{X}_j = \mathrm{y}_{t - 1}}\alpha_{t - 1,j}^{(H)}$ is a normalization term, accounting for the fact that there may be multiple positions in $\tilde{X}$ equal to $\mathrm{y}_{t - 1}$.

# A.3 Anchor text selection

We convert the question and field values into lower-cased character sequences and compute the longest sub-sequence match with heuristically determined matching boundaries. For example, the sentence "how many students keep cats as pets?" matches with the cell value "cat" $(s_c)$ and the matched substring is "cat" $(s_m)$.
We further search the question starting from the start and end character indices $i$, $j$ of $s_m$ in the question to make sure that word boundaries can be detected within $i - 2$ to $j + 2$; otherwise the match is invalidated. This excludes matches which are sub-strings of the question words, e.g. "cat" vs. "category". Denoting the matched whole-word phrase in the question as $s_q$, we define the question match score and cell match score as

$$
\beta_{q} = \left| s_{m} \right| / \left| s_{q} \right| \tag{11}
$$

$$
\beta_{c} = \left| s_{m} \right| / \left| s_{c} \right| \tag{12}
$$

Written: SELECT rid FROM routes WHERE dst_apid IN (SELECT apid FROM airports WHERE country = 'United States') AND src_apid IN (SELECT apid FROM airports WHERE country = 'United States')
Exec: FROM routes WHERE dst_apid IN (FROM airports WHERE country = 'United States' SELECT apid) AND src_apid IN (FROM airports WHERE country = 'United States' SELECT apid) SELECT rid

Written: SELECT t3.name FROM publication_keyword AS t4 JOIN keyword AS t1 ON t4.kid = t1.kid JOIN publication AS t2 ON t2.pid = t4.pid JOIN journal AS t3 ON t2.jid = t3.jid WHERE t1.keyword = "Relational Database" GROUP BY t3.name HAVING COUNT(DISTINCT t2.title) = 60
Exec: FROM publication_keyword AS t4 JOIN keyword AS t1 ON t4.kid = t1.kid JOIN publication AS t2 ON t2.pid = t4.pid JOIN journal AS t3 ON t2.jid = t3.jid WHERE t1.keyword = "Relational Database" GROUP BY t3.name HAVING COUNT(DISTINCT t2.title) = 60 SELECT t3.name

Written: SELECT COUNT(DISTINCT state) FROM college WHERE enr < (SELECT AVG(enr) FROM college)
Exec: FROM college WHERE enr < (FROM college SELECT AVG(enr)) SELECT COUNT(DISTINCT state)

Written: SELECT DISTINCT T1.LName FROM STUDENT AS T1 JOIN VOTING_record AS T2 ON T1.StuID = PRESIDENT_Vote EXCEPT SELECT DISTINCT LName FROM STUDENT WHERE Advisor = "2192"
Exec: FROM STUDENT AS T1 JOIN VOTING_record AS T2 ON T1.StuID = PRESIDENT_Vote SELECT DISTINCT T1.LName EXCEPT FROM
STUDENT WHERE Advisor = 2192 SELECT DISTINCT LName

Table A1: Examples of complex SQL queries with clauses in the normal written order and the DB execution order.

FROM STUDENT JOIN VOTING_record ON STUDENT.StuID = VOTING_record.PRESIDENT_Vote SELECT DISTINCT STUDENT.LName EXCEPT FROM STUDENT WHERE STUDENT.Advisor = 2192 SELECT DISTINCT VOTING_record.PRESIDENT_Vote

Table A2: An example sequence that satisfies the condition of Lemma 1 but violates schema consistency. Here the field VOTING_record.PRESIDENT_Vote in the second sub-query is out of scope.

We define a coarse accuracy measurement to tune the question match score threshold $\theta_q$ and the cell match threshold $\theta_c$. Namely, given the list of matched anchor texts $\mathcal{P}$ obtained using the aforementioned procedure and the list of textual values $\mathcal{G}$ extracted from the ground truth SQL query, we compute the percentage of anchor texts that appear in $\mathcal{G}$ and the percentage of values in $\mathcal{G}$ that appear in $\mathcal{P}$ as the approximated precision $(p')$ and recall $(r')$. Note that this metric does not evaluate whether the matched anchor texts are associated with the correct field.

For $k = 2$, we set $\theta_q = 0.5$ and $\theta_c = 0.8$. On the training set, the resulting $p' = 73.7$, $r' = 74.9$; $25.7\%$ of the examples have at least one anchor text match, with an average of 1.89 matches per example among them. On the dev set, the resulting $p' = 90.0$, $r' = 92.2$; $30.9\%$ of the examples have at least one match, with an average of 1.73 matches per example among them. The training set metrics are lower as some training databases do not have DB content files.
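The anchor text matching procedure described in §A.3 can be sketched as follows. This is a simplified reconstruction using `difflib`; the exact boundary heuristics and score definitions are our own approximation of the described algorithm, not the released code.

```python
import difflib

def match_anchor(question: str, cell: str,
                 theta_q: float = 0.5, theta_c: float = 0.8):
    """Return the matched whole-word question phrase s_q, or None."""
    q, c = question.lower(), cell.lower()
    m = difflib.SequenceMatcher(None, q, c).find_longest_match(
        0, len(q), 0, len(c))
    if m.size == 0:
        return None
    i, j = m.a, m.a + m.size          # span of the matched substring s_m
    # Expand to whole-word boundaries; invalidate the match if a boundary
    # is not found within 2 characters (rules out "cat" vs. "category").
    start, end = i, j
    while start > 0 and q[start - 1].isalnum():
        start -= 1
    while end < len(q) and q[end].isalnum():
        end += 1
    if i - start > 2 or end - j > 2:
        return None
    s_m, s_q = q[i:j], q[start:end]
    beta_q = len(s_m) / len(s_q)      # question match score
    beta_c = len(s_m) / len(c)        # cell match score (assumed definition)
    if beta_q >= theta_q and beta_c >= theta_c:
        return s_q
    return None
```

On the running example, the cell value "cat" matches the whole word "cats" in the question, while a match inside "category" is rejected by the boundary check.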
| Model | Easy | Medium | Hard | Ex-Hard | All |
|---|---|---|---|---|---|
| count | 250 | 440 | 174 | 170 | 1034 |
| BRIDGE (k=2) | 88.7 | 68.4 | 54 | 44 | 66.9 |
| - value augmentation | 85.5 | 66.6 | 49.4 | 39.8 | 63.9 |
Table A3: Comparison between BRIDGE and BRIDGE without value augmentation on our manually corrected dev set.

# A.4 Anchor text ablation by hardness level

Table A3 shows the E-SM comparison between models with and without anchor text augmentation at different hardness levels. Anchor text augmentation improves performance at all hardness levels, with the improvement especially significant in the hard and extra-hard categories.

# A.5 WikiSQL Experiments

We test BRIDGE on WikiSQL and report the comparison to other top-performing entries on the leaderboard in Table A4. BRIDGE achieves SOTA performance on WikiSQL, surpassing the widely cited SQLova model (Hwang et al., 2019) by a significant margin. Among the baselines shown in
| Model | Dev EM | Dev EX | Test EM | Test EX |
|---|---|---|---|---|
| SQLova (Hwang et al., 2019) | 81.6 | 87.2 | 80.7 | 86.2 |
| X-SQL (He et al., 2019b) | 83.8 | 89.5 | 83.3 | 88.7 |
| HydraNet (Lyu et al., 2020) | 83.6 | 89.1 | 83.8 | 89.2 |
| BRIDGE +B$_L$ (k = 2)♠ | 85.1 | 91.1 | 84.8 | 90.4 |
| SQLova+EG (Hwang et al., 2019) | 84.2 | 90.2 | 83.6 | 89.6 |
| BRIDGE +B$_L$ (k = 2)+EG♠ | 86.1 | 92.5 | 85.8 | 91.7 |
| X-SQL+EG (He et al., 2019b) | 86.2 | 92.3 | 86.0 | 91.8 |
| HydraNet+EG (Lyu et al., 2020) | 86.6 | 92.4 | 86.5 | 92.2 |
Table A4: Comparison between BRIDGE and other top-performing models on the WikiSQL leaderboard as of August 20, 2020. $\spadesuit$ denotes approaches using DB content. EG denotes approaches using execution-guided decoding.

Table A4, SQLova is the one that is strictly comparable to BRIDGE, as both use BERT-large-uncased. HydraNet uses RoBERTa-Large (Liu et al., 2019c) and X-SQL uses MT-DNN (Liu et al., 2019b). Leveraging table content (anchor texts) enables BRIDGE to be the best-performing model without execution-guided decoding (Wang et al., 2018). However, it seems to also reduce the degree to which the model can benefit from it (after adding execution-guided decoding, the improvement for BRIDGE is significantly smaller than for the other models).

# A.6 Visualizing fine-tuned BERT attention of BRIDGE

We visualize attention in the fine-tuned BERT layers of BRIDGE to qualitatively evaluate whether the model functions as an effective text-DB encoder as we expect. We use the BERTViz library$^{10}$ developed by Vig (2019).

We perform the analysis on the smallest DB in the Spider dev set to ensure the attention graphs are readable. This DB consists of two tables, PokerPlayer and People, which store information about poker players and their match results. While the BERT attention is a complicated computation graph consisting of 12 layers and 12 heads, we were able to identify prominent patterns in a subset of the layers.

First, we examine if anchor texts indeed have the effect of bridging information across the textual and tabular segments. The example question we use is "show names of people whose nationality is not Russia", and "Russia" in the field People.Nationality is identified as the anchor text. As shown in Figure A1 and Figure A2, we find strong connections between the anchor text and its corresponding question mention in layers 2, 4, 5, 10 and 11.

We further notice that the layers effectively capture the relational DB structure.
As shown in Figure A3 and Figure A4, we found attention patterns in layer 5 that connect tables with their primary keys and foreign key pairs.

We notice that all interpretable attention connections are between lexical items in the input sequence, not including the special tokens ([T], [C], [V]). This is somewhat counter-intuitive, as the subsequent layers of BRIDGE use the special tokens to represent each schema component. We hence examined attention over the special tokens (Figure A5) and found that they function as bindings of the tokens in the table names and field names. The pattern is especially visible in layer 1. As shown in Figure A5, each token in the table name "poker player" has high attention to the corresponding [T]. Similarly, each token in the field name "poker player ID" has high attention to the corresponding [C]. We hypothesize that in this way the special tokens function similarly to the cell pooling layers proposed in TaBERT (Yin et al., 2020).

![](images/79b9a76730617eadd9cf1dbb75c0c37f35d7248f943c9638fb259fae581a5617.jpg)
(a) Layer $= 2$

![](images/2104ff0b08bcfc0e47b4278191dae71378b3a37c8e0ab0f94c272efe0730ec7e.jpg)
(b) Layer $= 4$
Figure A1: Visualization of attention to the anchor text "Russia" from other words. In the shown layers, weights from the textual mention "Russia" are significantly higher than those from the other tokens.

![](images/f364b206289551db9c3532480987dd047fd0bc777b524420cd4edd34c19a4423.jpg)
(c) Layer $= 5$

![](images/e600bc6e2d2a79eb6ba7790b14407f494bcf1c38e2298ba762335af7935c2008.jpg)
(a) Layer $= 10$

![](images/bb33e34f1b43dd587363627237d7d5a2de60855d85ba639186645fa4c4724783.jpg)
(b) Layer $= 11$
Figure A2: Visualization of attention to the anchor text "Russia" from other words. Continued from Figure A1.

![](images/5b379ba841e679c85f7181af272db9d050e6f4b7796878877ac27d3fbb733aa5.jpg)
(a) Table PokerPLAYER
Figure A3: Visualization of attention in layer 5 from tables to their primary keys.
In Figure A3b, the table name People has high attention weights to PokerPLAYER.People_ID, a foreign key referring to its primary key People.People_ID. + +![](images/63214a45c9516876c9cc0c21063da0d2f455ed11602a853f961aeb1416d497f0.jpg) +(b) Table People + +![](images/4fec7338bb1b72d85dfdf7616a148d107972399ace1efd7fc18f9338d26cd162.jpg) +(a) PokerPLAYER.People_ID $\rightarrow$ People.People_ID +Figure A4: Visualization of attention in layer 5 between a pair of foreign keys. + +![](images/a11a2f68105aa5c07ec805c4087d793ed25ecc4b3a3cff78376a6a4f0d6744de.jpg) +(b) People.People_ID $\rightarrow$ PokerPLAYER.People_ID + +![](images/e43c482d60f4e2cc531fbead98e17d3a93824710f0c0dbc037aa4a1d29196f52.jpg) +Figure A5: Visualization of attention over special tokens [T] and [C] in layer 1. + +![](images/b79e75f7197191b41ed38d5167085a3cb10eff1133fde67de0317d0cb4cb866e.jpg) \ No newline at end of file diff --git a/bridgingtextualandtabulardataforcrossdomaintexttosqlsemanticparsing/images.zip b/bridgingtextualandtabulardataforcrossdomaintexttosqlsemanticparsing/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..8095feb267cb9d9759bafcec888dd15d02f2a6fd --- /dev/null +++ b/bridgingtextualandtabulardataforcrossdomaintexttosqlsemanticparsing/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:da93831ab033c0372ed8ec09b95d01e1ee461d4ed9749bbd23d1cfc15cd83121 +size 1164998 diff --git a/bridgingtextualandtabulardataforcrossdomaintexttosqlsemanticparsing/layout.json b/bridgingtextualandtabulardataforcrossdomaintexttosqlsemanticparsing/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..15dbb83d7d0b42e90a40f69ca52dfcd640400f56 --- /dev/null +++ b/bridgingtextualandtabulardataforcrossdomaintexttosqlsemanticparsing/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:93c96cbe0ce87a46429800032f919d60ea0505f728c3a497ea80587bf54debb1 +size 515826 diff --git
a/bytepairencodingissuboptimalforlanguagemodelpretraining/1823c9bb-de25-4d82-9ef0-7482a2ac672f_content_list.json b/bytepairencodingissuboptimalforlanguagemodelpretraining/1823c9bb-de25-4d82-9ef0-7482a2ac672f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..6b6006e307fd4ad068305b2fc188016dab97f459 --- /dev/null +++ b/bytepairencodingissuboptimalforlanguagemodelpretraining/1823c9bb-de25-4d82-9ef0-7482a2ac672f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5b7e36b4e5f45802ab7b625db3114da5b93c2e78a250bb3ba249ebf470111799 +size 48374 diff --git a/bytepairencodingissuboptimalforlanguagemodelpretraining/1823c9bb-de25-4d82-9ef0-7482a2ac672f_model.json b/bytepairencodingissuboptimalforlanguagemodelpretraining/1823c9bb-de25-4d82-9ef0-7482a2ac672f_model.json new file mode 100644 index 0000000000000000000000000000000000000000..e1dd0e029184434946a7e4bda6dc205afbe293a5 --- /dev/null +++ b/bytepairencodingissuboptimalforlanguagemodelpretraining/1823c9bb-de25-4d82-9ef0-7482a2ac672f_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cf8b95e38954053a25532da0eeadbe8717d6dec0ed41a0a9819ec0241a35c14d +size 57588 diff --git a/bytepairencodingissuboptimalforlanguagemodelpretraining/1823c9bb-de25-4d82-9ef0-7482a2ac672f_origin.pdf b/bytepairencodingissuboptimalforlanguagemodelpretraining/1823c9bb-de25-4d82-9ef0-7482a2ac672f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b8e8a078e2c07dc1e6243d5478e6101ce02b0e68 --- /dev/null +++ b/bytepairencodingissuboptimalforlanguagemodelpretraining/1823c9bb-de25-4d82-9ef0-7482a2ac672f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cc71260ddab52e87ca12a4fb308f641fa0e1b2c29b4fcae0507ef189d3278270 +size 352911 diff --git a/bytepairencodingissuboptimalforlanguagemodelpretraining/full.md b/bytepairencodingissuboptimalforlanguagemodelpretraining/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..6f724d3ec845fdb8fbcbefdeca13efe271414169 --- /dev/null +++ b/bytepairencodingissuboptimalforlanguagemodelpretraining/full.md @@ -0,0 +1,215 @@ +# Byte Pair Encoding is Suboptimal for Language Model Pretraining + +Kaj Bostrom and Greg Durrett +Department of Computer Science +The University of Texas at Austin +{kaj,gdurrett}@cs.utexas.edu + +# Abstract + +The success of pretrained transformer language models (LMs) in natural language processing has led to a wide range of pretraining setups. In particular, these models employ a variety of subword tokenization methods, most notably byte-pair encoding (BPE) (Sennrich et al., 2016; Gage, 1994), the WordPiece method (Schuster and Nakajima, 2012), and unigram language modeling (Kudo, 2018), to segment text. However, to the best of our knowledge, the literature does not contain a direct evaluation of the impact of tokenization on language model pretraining. We analyze differences between BPE and unigram LM tokenization, finding that the latter method recovers subword units that align more closely with morphology and avoids problems stemming from BPE's greedy construction procedure. We then compare the fine-tuned task performance of identical transformer masked language models pretrained with these tokenizations. Across downstream tasks and two languages (English and Japanese), we find that the unigram LM tokenization method matches or outperforms BPE. We hope that developers of future pretrained LMs will consider adopting the unigram LM method over the more prevalent BPE. + +# 1 Introduction + +Large transformers (Vaswani et al., 2017) pretrained with variants of a language modeling objective, such as BERT (Devlin et al., 2019), have proven their effectiveness at flexibly transferring to a variety of domains and tasks. One design decision that makes them particularly adaptable is their graceful handling of the open vocabulary problem through subword tokenization. 
Subword tokenization, popularized in the neural machine translation literature (Sennrich et al., 2016; Vaswani et al., 2017; Wu et al., 2016), produces tokens at multiple + +levels of granularity, from individual characters to full words. As a result, rare words are broken down into a collection of subword units, bottoming out in characters in the worst case. + +Critically, a pretrained language model's subword vocabulary cannot be altered: any downstream application of these models must tokenize input or generate output using the original subword vocabulary, making the choice of tokenization a particularly significant decision. + +A variety of subword tokenization methods have seen use in pretrained language models. BERT uses the WordPiece method (Schuster and Nakajima, 2012), a language-modeling based variant of BPE; T5 (Raffel et al., 2019) uses character-level BPE; GPT2 (Radford et al., 2019) and ROBERTA (Liu et al., 2019) use BPE over raw bytes instead of unicode characters; XLNET (Yang et al., 2019) and ALBERT (Lan et al., 2019) use the Sentence-Piece library (Kudo and Richardson, 2018) which implements both BPE and unigram language model tokenization, but in both cases fail to clarify which of these methods they chose. The effects of tokenization are not examined in a reported experiment in any of the above works except Liu et al. (2019), who note that WordPiece gave a small advantage over BPE in their preliminary investigation. In the machine translation literature, Kudo (2018) introduced the unigram language model tokenization method in the context of machine translation and found it comparable in performance to BPE. Domingo et al. (2018) performed further experiments to investigate the effects of tokenization on neural machine translation, but used a shared BPE vocabulary across all experiments. Gallé (2019) examined algorithms in the BPE family, but did not compare to unigram language modeling. 
+ +In this work, we characterize the space of proposed subword tokenization algorithms and analyze the differences between the two methods with + +publicly available implementations: BPE (merging tokens based on bigram frequency) and unigram language modeling (pruning tokens based on unigram LM perplexity). While the vocabularies resulting from these schemes are heavily overlapping, we compare each method to reference morphological segmentations and find that the unigram LM method produces tokens better aligned with morphology. To understand whether this more natural tokenization leads to improved performance, we pretrain separate language models using the ROBERTA objective (Liu et al., 2019) with each tokenization for both English and Japanese, two typologically distant languages. On downstream tasks, we find a performance gap across tasks and languages, with the unigram LM method providing an improvement over BPE of up to $10\%$ in our Japanese QA experiments, indicating the benefits of adopting this technique in the context of language model pretraining. + +# 2 Algorithms + +Subword tokenization algorithms consist of two components: a vocabulary construction procedure, which takes a corpus of text and returns a vocabulary with the desired size, and a tokenization procedure, which takes the built vocabulary and applies it to new text, returning a sequence of tokens. In theory, these two steps can be independent, although for the algorithms we examine the tokenization procedure is tightly coupled to the vocabulary construction procedure. 
+ +A BPE vocabulary is constructed as follows: + +Algorithm 1 Byte-pair encoding (Sennrich et al., 2016; Gage, 1994) +1: Input: set of strings $D$ , target vocab size $k$ +2: procedure BPE $(D, k)$ +3: $V \gets$ all unique characters in $D$ (about 4,000 in English Wikipedia) +4: while $|V| < k$ do ▷ Merge tokens +5: $t_L, t_R \gets$ most frequent bigram in $D$ +6: $t_{\mathrm{NEW}} \gets t_L + t_R \quad \triangleright$ Make new token +7: $V \gets V + [t_{\mathrm{NEW}}]$ +8: Replace each occurrence of $t_L, t_R$ in $D$ with $t_{\mathrm{NEW}}$ +9: end while +10: return $V$ +11: end procedure + +BPE tokenization takes the vocabulary $V$ containing ordered merges and applies them to new text in the same order as they occurred during vocabulary construction. + +The WordPiece algorithm (Schuster and Nakajima, 2012), used to construct BERT's vocabulary, closely resembles BPE. However, instead of merging the most frequent token bigram, each potential merge is scored based on the likelihood of an $n$ -gram language model trained on a version of the corpus incorporating that merge. Schuster and Nakajima (2012) note that the process of estimating language model parameters for every potential merge is prohibitive, so they employ aggressive heuristics to reduce the number of potential merges considered. As their implementation is not public, we are unable to make a comparison to this method.
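The merge loop of Algorithm 1 can be sketched in a few lines of Python (our illustration, not the authors' code; the function name and toy corpus are invented for the example):

```python
from collections import Counter

def bpe_vocab(corpus, k):
    """Sketch of Algorithm 1: grow a vocabulary by greedy bigram merges.

    `corpus` is a list of words; each word is tracked as a tuple of its
    current tokens, and merges are recorded in order so they can be
    replayed on new text at tokenization time.
    """
    words = Counter(tuple(w) for w in corpus)       # token-tuple -> frequency
    vocab = {ch for w in words for ch in w}         # start from characters
    merges = []
    while len(vocab) < k:
        pairs = Counter()
        for w, freq in words.items():
            for pair in zip(w, w[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        (l, r), _ = pairs.most_common(1)[0]         # most frequent bigram
        merges.append((l, r))
        vocab.add(l + r)
        merged = Counter()
        for w, freq in words.items():               # replace l, r with l+r
            seq, i = [], 0
            while i < len(w):
                if i + 1 < len(w) and w[i] == l and w[i + 1] == r:
                    seq.append(l + r)
                    i += 2
                else:
                    seq.append(w[i])
                    i += 1
            merged[tuple(seq)] += freq
        words = merged
    return vocab, merges

vocab, merges = bpe_vocab(["low", "low", "lower", "newest", "newest"], k=10)
print(merges)  # the first merge fuses the most frequent bigram ("l", "o")
```

Replaying `merges` in order on new text reproduces the tokenization procedure described next.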
+ +The unigram LM method (Kudo, 2018), in contrast to the bottom-up construction process of BPE and WordPiece, begins with a superset of the final vocabulary, pruning it to the desired size: + +Algorithm 2 Unigram LM (Kudo, 2018) +1: Input: set of strings $D$ , target vocab size $k$ +2: procedure UNIGRAMLM $(D, k)$ +3: $V \gets$ all substrings occurring more than once in $D$ (not crossing words) +4: while $|V| > k$ do ▷ Prune tokens +5: Fit unigram LM $\theta$ to $D$ +6: for $t \in V$ do ▷ Estimate token 'loss' +7: $L_t \gets p_\theta(D) - p_{\theta'}(D)$ , where $\theta'$ is the LM without token $t$ +8: end for +9: Remove $\min(|V| - k, \lfloor \alpha |V| \rfloor)$ of the tokens $t$ with highest $L_t$ from $V$ , where $\alpha \in [0, 1]$ is a hyperparameter +10: end while +11: Fit final unigram LM $\theta$ to $D$ +12: return $V, \theta$ +13: end procedure + +Unigram LM tokenization takes the vocabulary $V$ and unigram LM parameters $\theta$ and performs Viterbi inference to decode the segmentation with maximum likelihood under $\theta$ . This method is similar to Morfessor's unsupervised segmentation (Creutz and Lagus, 2005) without its informed prior over token length. + +
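The Viterbi decoding step can be sketched as follows (our illustration; the hard-coded token log-probabilities stand in for a fitted $\theta$):

```python
import math

def viterbi_tokenize(word, logp):
    """Return the maximum-likelihood segmentation of `word` under a
    unigram LM given as a dict of token -> log-probability."""
    n = len(word)
    best = [-math.inf] * (n + 1)   # best[i]: max log-prob of word[:i]
    best[0] = 0.0
    back = [0] * (n + 1)           # back[i]: start index of the last token
    for i in range(1, n + 1):
        for j in range(i):
            tok = word[j:i]
            if tok in logp and best[j] + logp[tok] > best[i]:
                best[i] = best[j] + logp[tok]
                back[i] = j
    tokens, i = [], n
    while i > 0:                   # follow backpointers to recover tokens
        tokens.append(word[back[i]:i])
        i = back[i]
    return tokens[::-1]

# Toy LM: morpheme-like tokens get higher log-probability than characters.
logp = {"un": -2.0, "friend": -3.0, "ly": -2.0}
logp.update({c: -4.0 for c in "unfriendly"})

print(viterbi_tokenize("unfriendly", logp))  # prints: ['un', 'friend', 'ly']
```

Because the probabilities favor longer, frequent units, the decoder recovers affix-like tokens rather than falling back to characters.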
Original: furiously | BPE: _fur iously | Uni. LM: _fur ious ly
Original: tricycles | BPE: _t ric y cles | Uni. LM: _tri cycle s
Original: nanotechnology | BPE: _n an ote chn ology | Uni. LM: _nano technology
Original: Completely preposterous suggestions | BPE: _Complete t ely _prep ost erous _suggest ions | Uni. LM: _Complete ly _pre post er ous _suggestion s
Original: corrupted | BPE: _cor rupted | Uni. LM: _corrupt ed
Original: 1848 and 1852, | BPE: _184 8 _and _185 2, | Uni. LM: _1848 _and _1852 ,
Original: 磁性は樣を分類がなさてる。 | BPE: 磁 性は 樣を に分類 かなさてる。 | Uni. LM: 磁 性は 樣を に 分類 かなさてる。
Gloss: magnetism (top.) various ways in classification is done . | Translation: Magnetism is classified in various ways.
+ +![](images/ab7fddb0da6c527bfb9b04362495a790b1aa5c551aee73f0e72f1135b61eba42.jpg) +(a) Token length distributions within each vocabulary +(b) Token frequency profiles over the corpus +Figure 2: English subword vocabulary and corpus profiles. The unigram LM method produces longer tokens on average (a) and uses its vocabulary space more effectively (b), with more tokens of moderate frequency. + +![](images/5d775b41421018f3d294e9a36d2d1784cf3ee1157ccffd2553bed829f9e6fe50.jpg) +Figure 1: Example tokenizations. The character '_' is a word boundary marker. BPE merges common tokens, such as English inflectional suffixes and Japanese particles, into their neighbors even when the resulting unit is not semantically meaningful. + +In the course of our experiments we did not observe a major difference in speed between the two algorithms. Both require similar amounts of time to construct a vocabulary, and both have a negligible impact on overall model inference latency. + +# 3 Comparison of Segmentations + +# 3.1 Morphology + +In Figure 1 we illustrate the differences in tokenization output between BPE and the unigram LM method. We observe that the unigram LM method produces subword units that qualitatively align with morphology much better than those produced by BPE. In particular, we note that the unigram LM method recovers common affixes such as -ly, -s, pre-, and tri- while BPE does not, instead absorbing them into adjacent units (-cles) while also producing meaningless single-character units. + +This trend is supported by Table 1, in which + +
| More frequent in BPE | More frequent in Unigram LM |
| --- | --- |
| _H _L _M _T _B | _s _, _ed _d |
| _P _C _K _D _R | _ing _ely _t _a |
+ +Table 1: Tokens with the highest difference in frequency between tokenizations. The unigram LM method tends to produce more parsimonious prefixes and suffixes. + +
| Tokenization | BPE | Unigram LM |
| --- | --- | --- |
| Tokens per word type | 4.721 | 4.633 |
| Tokens per word | 1.343 | 1.318 |
+ +Table 2: Mean subword units per word for each method across all of English Wikipedia. + +we observe that recognizable affixes appear much more frequently in the unigram LM tokenization of our pretraining corpus than in the BPE tokenization. + +
| Method | English (w.r.t. CELEX2) Precision | Recall | F1 | Japanese (w.r.t. MeCab) Precision | Recall | F1 |
| --- | --- | --- | --- | --- | --- | --- |
| BPE | 38.6% | 12.9% | 19.3% | 78.6% | 69.5% | 73.8% |
| Uni. LM | 62.2% | 20.1% | 30.3% | 82.2% | 72.8% | 77.2% |
+ +Table 3: Correspondence of subword boundaries between unsupervised tokenization methods and morphological reference segmentations. + +As the BPE tokenization is constructed greedily according to frequency, common affixes (and punctuation) are frequently absorbed into other tokens.$^{2}$ + +We see in Figure 2a that the unigram LM tokenization tends to have longer subword units than BPE. This is closer to the length distribution of gold-standard English morphs, which have a mean length of approximately 6 characters (Creutz and Lindén, 2004). + +Comparison with morphological segmenters. In Table 3, we further corroborate these observations by performing a quantitative evaluation of the degree to which each unsupervised segmentation algorithm aligns with morphological baselines for each language. For English, we produce gold surface allomorph boundaries from the CELEX2 lexical database (Baayen et al., 1995) in the manner of Creutz and Lindén (2004). We then compare each algorithm's subword unit boundaries with gold morpheme boundaries for words with 2 or more morphemes, weighted by their frequency in English Wikipedia. For Japanese, we compare subword tokenizations of Japanese Wikipedia sentences to morphological reference tokenizations produced using the MeCab morphological analysis and tokenization tool (Kudo, 2006) with version 2.3.0 of the UniDic dictionary (Den et al., 2007). + +We find that for both languages, the segmentations produced by the unigram LM method correspond more closely to the morphological references, confirming our qualitative analysis. On English data, both unsupervised methods exhibit low boundary recall; we attribute this to the fact that they represent many common words with underlying derivational morphology as single tokens, although for BPE this is compounded by effects we discuss in Section 3.2.
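The boundary comparison above reduces to set precision and recall over internal split points; a minimal sketch (our helper names, with illustrative segmentations rather than the CELEX2 data):

```python
def boundaries(segments):
    """Internal boundary positions of a segmentation,
    e.g. ["_corrupt", "ed"] -> {8}."""
    pos, out = 0, set()
    for seg in segments[:-1]:
        pos += len(seg)
        out.add(pos)
    return out

def boundary_prf(predicted, gold):
    """Precision, recall, and F1 of predicted subword boundaries
    against a gold morphological segmentation (both lists of strings)."""
    p, g = boundaries(predicted), boundaries(gold)
    prec = len(p & g) / len(p) if p else 0.0
    rec = len(p & g) / len(g) if g else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# A BPE-style split misses the morph boundary; a unigram-LM-style split hits it.
print(boundary_prf(["_cor", "rupted"], ["_corrupt", "ed"]))   # (0.0, 0.0, 0.0)
print(boundary_prf(["_corrupt", "ed"], ["_corrupt", "ed"]))   # (1.0, 1.0, 1.0)
```

Aggregating these scores over words, weighted by corpus frequency, yields numbers of the kind reported in Table 3.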
+ +The ability of the unigram LM method to recover the morphological structure of the text without explicit supervision aligns with the main findings of Creutz and Lagus (2005), who successfully use maximum-a-posteriori unigram language models to perform unsupervised morphological segmentation of English and Finnish. + +# 3.2 Vocabulary Allocation + +By surfacing subword units that align with morphology, the unigram LM tokenization provides the opportunity for the model to learn composable subword embeddings. If an affix reliably signals a linguistic feature, rather than needing to store that information redundantly across the embeddings of many tokens containing the affix, the model can store it in just the embedding of the affix. + +These results suggest that the unigram LM method may allocate its vocabulary more economically. We note in Figure 2b that both vocabularies contain a "dead zone" of tokens whose frequency is much lower than the rest of the vocabulary. This is largely the result of the presence of a number of very uncommon characters, including Chinese and Japanese kanji, in the training corpus. In the BPE tokenization, however, this effect is exacerbated, with the dead zone containing about 1500 more entries as a result of the tendency of its vocabulary construction process to produce intermediate "junk" tokens. For example, in the case where three tokens almost always occur as a group, in order to merge them into a single token, BPE must first merge one pair before incorporating the third token; this leaves an intermediate token in the vocabulary that will only occur rarely on its own. Additionally, tokens that appear in many contexts, such as inflectional affixes (-s, -ed), will tend to merge with many adjacent units due to their frequency. However, these merges lead to embedding redundancy, as these affixes usually have the same linguistic function in every context.
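The intermediate-token effect can be reproduced with a two-merge trace on a toy corpus (our sketch; the corpus and helper names are invented): to merge the trigram "i n g", BPE must first create "in", which then survives in the vocabulary even though it almost never occurs on its own afterwards.

```python
from collections import Counter

# Toy corpus in which "i", "n", "g" almost always occur together.
words = [tuple("running")] * 50 + [tuple("sing")] * 50 + [tuple("in")] * 2

def best_pair(words):
    """Most frequent adjacent token pair, as in BPE's merge step."""
    counts = Counter(p for w in words for p in zip(w, w[1:]))
    return counts.most_common(1)[0][0]

def merge(words, pair):
    """Replace every adjacent occurrence of `pair` with the fused token."""
    fused, out = pair[0] + pair[1], []
    for w in words:
        seq, i = [], 0
        while i < len(w):
            if i + 1 < len(w) and (w[i], w[i + 1]) == pair:
                seq.append(fused)
                i += 2
            else:
                seq.append(w[i])
                i += 1
        out.append(tuple(seq))
    return out

first = best_pair(words)                 # ("i", "n"): the intermediate token
words = merge(words, first)
words = merge(words, best_pair(words))   # the second merge fuses the full "ing"
standalone = sum(w.count("in") for w in words)
print(first, standalone)                 # "in" now occurs on its own only twice
```

After the second merge, "in" sits in the vocabulary but surfaces only in the two standalone occurrences, illustrating the "junk" entries that inflate BPE's dead zone.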
Since the unigram LM method selects tokens during vocabulary construction using a global optimization procedure, it does not produce junk tokens; this property also allows it to avoid merging frequent tokens with their neighbors too aggressively. + +Japanese vocabulary comparisons are included + +
| Model | SQuAD 1.1 (dev.) EM | SQuAD 1.1 (dev.) F1 | MNLI Acc. (m) | MNLI Acc. (mm) | CoNLL NER Dev. F1 | CoNLL NER Test F1 | TyDi (Japanese) EM | TyDi (Japanese) F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ours, BPE | 80.6 ± .2 | 88.2 ± .1 | 81.4 ± .3 | 82.4 ± .3 | 94.0 ± .1 | 90.2 ± .0 | 41.4 ± 0.6 | 42.1 ± 0.6 |
| Ours, Uni. LM | 81.8 ± .2 | 89.3 ± .1 | 82.8 ± .2 | 82.9 ± .2 | 94.3 ± .1 | 90.4 ± .1 | 53.7 ± 1.3 | 54.4 ± 1.2 |
| $\mathrm{BERT}_{\mathrm{BASE}}$ | 80.5 | 88.5 | 84.6 | 83.4 | 96.4 | 92.4 | - | - |
+ +Table 4: Fine-tuning results. Metrics are averaged across 5 fine-tuning seeds with standard deviations indicated by $\pm$ ; due to computational constraints we did not pretrain more than once per tokenization. We include fine-tuning results for a transformer with a comparable architecture, $\mathrm{BERT}_{\mathrm{BASE}}$ , for reference, although we note that a direct comparison cannot be made due to $\mathrm{BERT}_{\mathrm{BASE}}$ using both a larger pretraining corpus and a larger subword vocabulary. + +in Appendix B. + +# 4 Downstream Task Experiments + +In order to make a fair experimental comparison between these two methods on downstream tasks, we do not use an existing pretrained language model like BERT, but instead train our own language models from scratch, controlling for the data, training objective, and optimization procedure. We pretrain four transformer masked language models using the architecture and training objective of ROBERTA-BASE (Liu et al., 2019) using the reference fairseq implementation (Ott et al., 2019). Two are pretrained on the text of English Wikipedia, comprising $\sim 3\mathrm{B}$ tokens under either tokenization. The other two are pretrained on the text of Japanese Wikipedia, comprising $\sim 0.6\mathrm{B}$ tokens. In each pair, one model is pretrained on the BPE tokenization of the corpus, and the other on the unigram LM tokenization, each with a vocabulary of 20,000 tokens. Hyperparameters are listed in Appendix A. + +We subsequently fine-tune each of the pretrained English models on the SQuAD question-answering task (Rajpurkar et al., 2016), the MNLI textual entailment task (Williams et al., 2018), and the English portion of the CoNLL 2003 named-entity recognition shared task (Tjong Kim Sang and De Meulder, 2003). We fine-tune the Japanese models on the Japanese minimal-answer subset of the TyDi question-answering task (Clark et al., 2020). 
We base our fine-tuning implementations on those of the transformers toolkit (Wolf et al., 2019). + +The results of our fine-tuning experiments are presented in Table 4. We show that fine-tuning models pretrained with unigram LM tokenization produces better performance than fine-tuning models pretrained with BPE tokenization for all tasks. These results suggest that the higher morpholog + +ical plausibility of the unigram LM tokenization may translate into better downstream task performance as well. Larger performance gaps are evident on SQuAD and MNLI, but the largest gap appears on Japanese TyDi. Differences in pretraining may be more evident in this setting due to the fact that the Japanese portion of the TyDi training split only contains $\sim 5\mathrm{k}$ examples, compared to the $\sim 88\mathrm{k}$ examples available for fine-tuning on SQuAD. Additionally, written Japanese does not feature whitespace between words, so it is possible for tokenizations to differ in word boundary placement as well as subword segmentation. + +# 5 Conclusion + +In this work we show that the choice of input encoding makes a difference in how well pretrained language models are able to perform end tasks. This indicates that tokenization encodes a surprising amount of inductive bias, and we suggest that unigram LM tokenization may be the better choice for development of future pretrained models. + +# Acknowledgments + +This work was partially supported by NSF Grant IIS-1814522 and a gift from Arm. This material is also based on research that is supported by the Air Force Research Laboratory (AFRL), DARPA, for the KAIROS program under agreement number FA8750-19-2-1003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. 
The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory (AFRL), DARPA, or the U.S. Government. + +# References + +R. Harald Baayen, Richard Piepenbrock, and Leon Gulikers. 1995. The CELEX lexical database (release 2). +Jonathan Clark, Jennimaria Palomaki, Vitaly Nikolaev, Eunsol Choi, Dan Garrette, Michael Collins, and Tom Kwiatkowski. 2020. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. Transactions of the Association for Computational Linguistics, 8:454-470. +Mathias Creutz and Krista Lagus. 2005. Unsupervised morpheme segmentation and morphology induction from text corpora using Morfessor 1.0. Helsinki University of Technology, Helsinki. +Mathias Creutz and Krister Lindén. 2004. Morpheme segmentation gold standards for Finnish and English. Publications in Computer and Information Science Report A77, Helsinki University of Technology. +Yasuharu Den, Toshinobu Ogiso, Hideki Ogura, Atsushi Yamada, Nobuaki Minematsu, Kiyotaka Uchimoto, and Hanae Koiso. 2007. The development of an electronic dictionary for morphological analysis and its application to Japanese corpus linguistics. Japanese Linguistics, 22:101-123. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Miguel Domingo, Mercedes Garcia-Martinez, Alexandre Helle, and Francisco Casacuberta.
2018. How much does tokenization affect neural machine translation? arXiv preprint arXiv:1812.08621. +Philip Gage. 1994. A new algorithm for data compression. C Users Journal, 12(2):23-38. +Matthias Gallé. 2019. Investigating the effectiveness of BPE: The power of shorter sequences. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1375-1381, Hong Kong, China. Association for Computational Linguistics. +Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. + +Taku Kudo. 2006. MeCab: Yet another part-of-speech and morphological analyzer. +Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66-75, Melbourne, Australia. Association for Computational Linguistics. +Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics. +Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
+Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. *fairseq: A fast, extensible toolkit for sequence modeling.* In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*, pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics. +Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. +Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics. +Mike Schuster and Kaisuke Nakajima. 2012. Japanese and Korean voice search. 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5149-5152. + +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics. + +Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147. + +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
In Advances in Neural Information Processing Systems, pages 5998-6008. + +Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics. + +Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771. + +Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. + +Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. + +# A Hyperparameters + 
**Pretraining**

| Setting | Value |
| --- | --- |
| Model architecture | ROBERTA-BASE (Liu et al., 2019) |
| Implementation | fairseq (Ott et al., 2019) |
| Optimizer | ADAM, ε = 1e-6, β = (0.9, 0.98) (Kingma and Ba, 2015) |
| Learning rate decay | Polynomial |
| Peak learning rate | 0.0005 |
| Warmup steps | 10000 |
| Weight decay | 0.01 |
| Batch size | 2048 |
| Sequence length | 512 |
| Total updates | 125000 |
| MLP dropout | 0.1 |
| Attention dropout | 0.1 |
| Precision | 16-bit |

**Fine-tuning**

| Setting | Value |
| --- | --- |
| Implementations | transformers (Wolf et al., 2019) |
| Optimizer | ADAM, ε = 1e-8, β = (0.9, 0.999) |
| Learning rate decay | Linear |
| Peak learning rate | 5e-5 |
| Warmup steps | 0 |
| Weight decay | 0 |
| Batch size | 32 |
| Sequence length (SQuAD, TyDi QA) | 512 |
| Passage stride (SQuAD, TyDi QA) | 192 |
| Sequence length (MNLI, NER) | 128 |
| Epochs | 3 |
| Precision | 16-bit |

**Tokenization**

| Setting | Value |
| --- | --- |
| Implementation | SentencePiece (Kudo and Richardson, 2018) |
| Vocabulary size | 20000 |
| Unigram LM α | 0.25 |
+ +# B Japanese vocabulary comparison + +
| More frequent in BPE | More frequent in Unigram LM |
| --- | --- |
| )、は).ごは-).の)、2スの | liloていくviてじまら |
| 、2hi | 0%tonota |
+ +Table 5: Tokens with the highest difference in frequency between tokenizations. The BPE method merges common tokens, such as particles and punctuation, even when they do not form meaningful units. The unigram LM method recovers the units 和 and 和 which are productive components of the Japanese verb conjugation system. + +![](images/45f0a6e0ddb20f27a54e44060d0ed07941a83d2e85c1897507a60637b09988c0.jpg) +(a) Token length distributions within each vocabulary + +![](images/768b30e53b2ef92bfc2b221bb7f70c1048f41d44480b33fc1215bc9da14402e1.jpg) +(b) Token frequency profiles over the corpus +Figure 3: Japanese subword vocabulary and corpus profiles. (a) The unigram LM method produces longer tokens, as it does in English. (b) Token frequency profiles resemble those of English, though the effect of the "dead zone" is less pronounced. \ No newline at end of file diff --git a/bytepairencodingissuboptimalforlanguagemodelpretraining/images.zip b/bytepairencodingissuboptimalforlanguagemodelpretraining/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..05af438844ebf2a43c1458b0d0f26e89897685e5 --- /dev/null +++ b/bytepairencodingissuboptimalforlanguagemodelpretraining/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7c3ca2f44b8f0c754fcb9980a2a60fd2c6782936e994178a4db785d29fffcb4 +size 329397 diff --git a/bytepairencodingissuboptimalforlanguagemodelpretraining/layout.json b/bytepairencodingissuboptimalforlanguagemodelpretraining/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..f502c000baa6ca1e0d4d9406169af726a7bf5895 --- /dev/null +++ b/bytepairencodingissuboptimalforlanguagemodelpretraining/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b492f52944f2d5043cb620bbf10db4efae2e71a6e2eb9c50461bd84b8b9c0481 +size 227355 diff --git a/canpretraininghelpvqawithlexicalvariations/c095413e-3d0c-4bc8-b56a-5d5764308cf4_content_list.json 
b/canpretraininghelpvqawithlexicalvariations/c095413e-3d0c-4bc8-b56a-5d5764308cf4_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..8f9504e9120023adea54f40e8aea80fe7a114622 --- /dev/null +++ b/canpretraininghelpvqawithlexicalvariations/c095413e-3d0c-4bc8-b56a-5d5764308cf4_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6076501bd3a76f39ecc7b2e7b7dca242f0b537c10df62aa2e393350d96dfb3d2 +size 39761 diff --git a/canpretraininghelpvqawithlexicalvariations/c095413e-3d0c-4bc8-b56a-5d5764308cf4_model.json b/canpretraininghelpvqawithlexicalvariations/c095413e-3d0c-4bc8-b56a-5d5764308cf4_model.json new file mode 100644 index 0000000000000000000000000000000000000000..2616326be19da45694a0fffec4e4b70fac572246 --- /dev/null +++ b/canpretraininghelpvqawithlexicalvariations/c095413e-3d0c-4bc8-b56a-5d5764308cf4_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6143767ec1218436328f99a197d59552d80438f55b9432fe9049988223fd6339 +size 52808 diff --git a/canpretraininghelpvqawithlexicalvariations/c095413e-3d0c-4bc8-b56a-5d5764308cf4_origin.pdf b/canpretraininghelpvqawithlexicalvariations/c095413e-3d0c-4bc8-b56a-5d5764308cf4_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..713192c5974003c73567d73e8b4e2ac9459ebf2f --- /dev/null +++ b/canpretraininghelpvqawithlexicalvariations/c095413e-3d0c-4bc8-b56a-5d5764308cf4_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:20835a67651a8e196f27f45459fc47940e7118e7f30c19842b75f6021395b968 +size 345484 diff --git a/canpretraininghelpvqawithlexicalvariations/full.md b/canpretraininghelpvqawithlexicalvariations/full.md new file mode 100644 index 0000000000000000000000000000000000000000..fc7b2c17f2deb5dff985d9dcaf8ae3213807201a --- /dev/null +++ b/canpretraininghelpvqawithlexicalvariations/full.md @@ -0,0 +1,167 @@ +# Can Pre-training help VQA with Lexical Variations? 
+ +Shailza Jolly + +TU Kaiserslautern, Germany +DFKI GmbH, Germany + +shailza.jolly@dfki.de + +Shubham Kapoor $^{1}$ + +Amazon Research, Germany + +kapooshu@amazon.com + +# Abstract + +Rephrasings or paraphrases are sentences with similar meanings expressed in different ways. Visual Question Answering (VQA) models are closing the gap with the oracle performance for datasets like VQA2.0. However, these models fail to perform well on rephrasings of a question, which raises some important questions: Are these models robust towards linguistic variations? Is it the architecture or the dataset that we need to optimize? In this paper, we analyze VQA models in the space of paraphrasing. We explore the role of language & cross-modal pre-training to investigate the robustness of VQA models towards lexical variations. Our experiments find that pre-trained language encoders generate efficient representations of question rephrasings, which help VQA models correctly infer these samples. We empirically determine why pre-training language encoders improves lexical robustness. Finally, we observe that although pre-training all VQA components obtains state-of-the-art results on the VQA-Rephrasings dataset, it still fails to completely close the performance gap between the original and rephrasing validation splits. + +# 1 Introduction + +Visual Question Answering (VQA) (Antol et al., 2015) is an image-conditioned question answering task which has gained immense popularity in the vision & language community. Since the introduction of the VQA challenge $^{2}$ , there has been significant progress in the field of VQA, where new model architectures and training techniques are closing the gap between the model and oracle accuracy on benchmarking datasets like VQA2.0 (Goyal et al., 2017). A majority of models obtained higher gains + +![](images/f8ab5e94cfd129be44f5556cec00f195efb286afebb47ff17262d43389af118d.jpg) +Q: Is that a fire truck? A: No (100%) +Rephrasing: Do you know is a fire truck? A: No (62.73%), Yes (36.89%) + +![](images/eb7ad4158db886ac73cf137736faa0111d24440374e482ab6da6464d7c7789c9.jpg) +Q: Is the girl on the horse afraid? A: No (100%) +Rephrasing: Does the girl seem scared to be riding on the horse? A: Yes (99.85%) +Figure 1: Examples from the VQA-Rephrasings dataset (Shah et al., 2019). The answers are obtained using Pythia (Jiang et al., 2018), where green text refers to a correct answer and red text refers to a wrong answer. + +by introducing semantically rich visual features (Anderson et al., 2018), efficient attention schemes (Lu et al., 2016; Yang et al., 2016), and advanced multimodal fusion techniques (Fukui et al., 2016; Yu et al., 2017). + +However, to deploy these state-of-the-art VQA models into real-world settings, the models must be robust to linguistic variations that originate from interactions with real users. Recently, Shah et al. (2019) showed that state-of-the-art VQA models (Jiang et al., 2018; Kim et al., 2018) are extremely sensitive to lexical variations, which results in a significant performance drop on the VQA test datasets when the questions are replaced with their rephrasings. Figure 1 shows the shift in confidence scores of answers for a rephrasing of the original question. To handle these scenarios, they provided a model-agnostic cyclic-consistency (CC) approach that generates question rephrasings on the fly during training, which makes the underlying VQA model lexically robust. The best-reported model with their approach achieves $56.59\%$ VQA accuracy on question rephrasings. + +Nevertheless, all the models that Shah et al. (2019) experimented with in their CC framework incorporate an RNN-based language encoder. Recently, transformer-based models (Vaswani et al., 2017) led to immense improvements across the whole NLP task spectrum (Wang et al., 2018a).
Multi-headed self-attention, the core of the transformer architecture, encodes the relationship of a word with its neighbors in several different representational subspaces, thus making these representations robust to linguistic variations. + +Since existing datasets expose VQA models to only a small subset of the language distribution, the models make incorrect inferences when they receive rephrasings of the original question. Training on larger datasets may overcome this problem; however, building such extensive annotated datasets is time-consuming & cost-intensive. Pre-trained models like ULMFiT (Howard and Ruder, 2018), BERT (Devlin et al., 2018), and GPT (Radford et al., 2018) have improved performance on various NLP tasks (Rajpurkar et al., 2016; Wang et al., 2018a) when trained with limited data. Recently, Tan and Bansal (2019); Lu et al. (2019); Chen et al. (2019) used cross-modal pre-training methods to alleviate this problem in VQA. + +In this paper, we study the impact of using pre-training methods to make VQA models linguistically robust. Our contributions are summarized as follows: + +- We show that pre-trained language encoders make VQA models lexically robust. We also analyze how pre-trained encoders efficiently extract the same semantic information from syntactically different sentences. +- We show that pre-training is the key to achieving lexical robustness even with complex transformer-based VQA architectures. + +To the best of our knowledge, our work is the first to explore the effect of pre-training in tackling lexical variations, especially paraphrases, in VQA architectures. + +# 2 Background + +In this section, we explain the building blocks of our experiments. + +SBERT (Reimers and Gurevych, 2019) is a BERT-based language encoder that generates semantically rich sentence embeddings.
It uses siamese and triplet networks (Schroff et al., 2015) to finetune BERT (Devlin et al., 2018), which is + +![](images/938526ced39e2ad486159b52be83fb2dd6982852620b4980ccc7bd56597ebd6d.jpg) +Figure 2: Distribution of cosine similarity of ORG-REP tuples, where each tuple comprises one original sentence and its three rephrasings. We calculate the average cosine similarity of the rephrasings with their original sentence. + +a pre-trained transformer encoder trained on large amounts of monolingual data. It obtains state-of-the-art results on common semantic textual similarity and transfer learning tasks. + +BUTD (Anderson et al., 2018) $^4$ uses a GRU to encode input questions and uses them to attend over image RoI features, enabling region-based attention to generate the answer. BUTD is the base architecture for many other VQA architectures like Pythia (Jiang et al., 2018) and BAN (Kim et al., 2018). + +LXMERT (Tan and Bansal, 2019) is a vision-language cross-modality pre-training framework. In contrast to single-modality pre-training like BERT, LXMERT focuses on vision-language interactions, which helps it better understand visual content, language semantics, and the relationship between them. It contains three transformer encoders, namely an object relationship encoder, a language encoder, and a cross-modality encoder, pre-trained using five different vision-language tasks. It must be noted that LXMERT is just a placeholder for transformer-based VQA architectures, used to investigate whether the model architecture plays any role in improving lexical robustness. + +# 3 Experiments + +# 3.1 Dataset + +We used the training split of the VQA2.0 dataset (VQA2.0-train) for training the models in this work and evaluated them against the two splits of the VQA-Rephrasings (VQA-R) dataset. It contains + +
| Model | OA (ORI) | NUM (ORI) | Y/N (ORI) | O (ORI) | RG (ORI) | OA (REP) | NUM (REP) | Y/N (REP) | O (REP) | RG (REP) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BUTD | 63.13 | 41.53 | 81.27 | 54.98 | - | 54.27 | 33.08 | 75.73 | 43.52 | - |
| BUTD+SBERT | 62.50 | 40.22 | 81.46 | 53.91 | -0.99 | 57.21 | 35.91 | 77.46 | 47.40 | +5.42 |
| LXMERT (a) | 63.86 | 43.38 | 81.86 | 55.54 | - | 54.79 | 33.86 | 75.73 | 44.36 | - |
| LXMERT (b) | 64.86 | 44.32 | 83.22 | 56.28 | +1.56 | 58.21 | 39.25 | 78.8 | 47.55 | +6.24 |
| LXMERT (c) | 73.61 | 55.88 | 88.56 | 66.9 | +15.26 | 66.27 | 50.63 | 83.32 | 57.42 | +20.95 |
+ +Table 1: VQA accuracy results on both splits of VQA-R. OA refers to overall accuracy; NUM, Y/N, and O refer to the accuracies for the number, yes/no, and other answer classes; RG refers to relative gain. RG for BUTD+SBERT is computed w.r.t. BUTD, while RG for LXMERT (b) and LXMERT (c) is computed w.r.t. LXMERT (a). + +40,504 question-image pairs randomly sampled from VQA2.0-val. Shah et al. (2019) collected three rephrasings for each question using human annotators, which amounts to 121,512 pairs. During data collection, the authors ensured that the rephrasings are syntactically correct and semantically aligned with the original questions. We call the original split ORI and the rephrasings split REP in our experiments. + +# 3.2 Implementation Details + +Unlike the original BUTD architecture, we use only 36 RoIs per image to obtain visual features and use ReLU activation units. We train the model using Adamax (Kingma and Ba, 2014) with an initial learning rate of $2 \times 10^{-3}$ on the full training set, and the standard VQA accuracy (Antol et al., 2015) is reported for each split of the VQA-Rephrasings dataset. In our experiments, we replace the GRU of BUTD with SBERT to obtain BUTD+SBERT. We pass the question embeddings from SBERT through a fully-connected (FC) layer, which is later combined with the image embeddings to produce a multi-modal representation of the image-question pair. The size of the SBERT embeddings is 768, and the FC layer size is 512. + +We train three variants of LXMERT: (a) all parameters are randomly initialized, (b) only the language encoder is initialized with BERT weights, and (c) all parameters except the VQA task head are initialized with the pre-trained LXMERT weights. It is worth mentioning that we don't use any part of VQA2.0-val during training or finetuning, to ensure the fairness of results on each split of VQA-R. In our
LXMERT variant (a), (b), and (c) converged at 17 (30 hours), 10 (18 hours), and 4 epochs (8 hours) respectively on Nvidia V100 GPU. + +# 4 Results and Analysis + +# 4.1 Syntactic Variation causes Data Distribution Shift + +Machine learning models perform generally well on test samples drawn from a distribution similar to their training data and fail to generalize when test data distribution differs. However, Wang et al. (2018b); Agrawal et al. (2016) showed that networks are misled by contextual heuristics in training data instead of learning underlying generalizations. McCoy et al. (2019) showed a similar trend in NLI and found that state-of-the-art language models like BERT indeed adopt underlying heuristics, thus failing to generalize for test samples. We observe that the VQA2.0-train and VQA2.0-val have similar distributions whereas the distribution of VQA-R is different6. Since we train the language encoder of BUTD using VQA2.0-train, it performs significantly better on ORI than REP (in Table 1). Therefore, a shift in the lexical distribution of REP is a contributing factor towards this artifact. + +# 4.2 Pre-trained Language Encoders generate Lexically Robust Representations + +Although REP and ORI contain the same amount of semantic information, a significant performance drop for REP is due to the poor representation of input questions by the GRU. One can alleviate this problem by introducing a better language encoder. Therefore, we replace the GRU of the BUTD with SBERT, which is robust to lexical variations and efficiently extracts the overall semantics. As shown in Table 1, our approach (BUTD+SBERT) improves the accuracy of REP by $5.41\%$ relative to BUTD and performs slightly better than BAN+CC which is the reported state-of-the-art model of Shah et al. (2019). One must note that the architecture of BUTD is relatively simpler than BAN, and our approach doesn’t train any auxiliary component like the question generation module in CC. 
+ +However, BUTD+SBERT obtains comparable performance on ORI, whose distribution is similar to VQA2.0-train. Since we train the GRU on VQA2.0-train, it generates semantically richer question embeddings for ORI than the generalized embeddings from SBERT, which never interacts with VQA language data. Tan and Bansal (2019) observed a similar trend in VQA2.0-dev accuracies when they used BERT as the language encoder. Considering that SBERT doesn't directly improve VQA models, this raises a question: What are the underlying factors that allow SBERT to improve the REP accuracy? + +We investigate this by generating the SBERT & GRU embeddings for each original question and its three rephrasings, and calculating the average cosine similarity of the rephrasings with their original counterpart. As shown in Fig. 2, we observe that SBERT moves the embeddings of the rephrasings significantly closer to the original question in its representational vector space, whereas the GRU fails to extract the underlying common semantics due to its lexical sensitivity. The average cosine similarity of ORG-REP tuples for SBERT and the GRU is $91\%$ and $60\%$ respectively. Hence, we conclude that the major accuracy gains for REP are derived from the pre-trained language encoder, thus making our approach model-agnostic. + +# 4.3 Pre-trained Language Encoders latch on Keywords + +A sentence and its rephrasings share some common keywords which control their semantics. A lexically robust language encoder must latch on these keywords to generate semantically rich vector representations. In our experiment $^{7}$ , we build an ordered sequence of keywords $S1$ extracted from a complete sentence $S2$ . We encode $S1$ and $S2$ using a language encoder and measure the cosine similarity of the pair. We hypothesize that a lexically robust language encoder generates similar representations of $S1$ and $S2$ in its vector space. We found that the average cosine similarity over the whole VQA-R dataset for SBERT and the GRU is 0.85 and 0.64 respectively $^{8}$ .
The ability to stress keywords lets SBERT circumvent syntactic deviations in paraphrases and embed them closer to each other in its vector space. + +# 4.4 Transformers are Good but Pre-training makes them Great + +As shown in Table 1, LXMERT (c) achieves state-of-the-art results on both ORI and REP. LXMERT's pre-training, in comparison to SBERT's, is conditioned on both the vision & language modalities, which generates better multi-modal representations. Since a single image is associated with multiple questions, cross-modal attention helps obtain efficient language representations, making VQA models robust towards question rephrasings. + +However, the high performance of LXMERT (c) raises an important question: Are the gains coming from pre-training or from the LXMERT architecture? Since LXMERT (a) achieves performance similar to BUTD on the REP split, it shows that even a complex cross-modality architecture is not enough to make VQA models lexically robust. However, when we train LXMERT initialized with BERT weights, we observe relative gains of $1.56\%$ on ORI and $6.24\%$ on REP. Furthermore, when we finetune LXMERT with pre-trained language, vision, and cross-modality encoders, the gains on REP grow further to $20.95\%$ relative to LXMERT (a). + +Single-modality pre-training, like BERT, only captures intra-modal relationships, while vision-language (VL) pre-training, like LXMERT (c), learns cross-modality relationships. Since cross-modal attention aligns entities across input modalities, it induces semantically rich and robust joint representations, thus outperforming BERT-only initialization. These results validate that pre-training is a crucial component for obtaining lexical robustness even for highly complex architectures. + +# 5 Discussion + +Since pre-trained language models like BERT are trained on large and diverse data, it is generally hypothesized that such models are very robust to linguistic variations.
Our results show that pre-trained language encoders like SBERT indeed improve the performance on the REP split by $5.42\%$ relative to a GRU encoder; however, they still underperform by $9.37\%$ relative to the semantically similar ORI questions modeled by a GRU encoder. We observed a similar trend with task-specific multimodal pre-training as well, where LXMERT (c) struggles to close the relative performance gap of about $10\%$ between REP and ORI. In this work, we show that pre-training indeed improves the linguistic robustness of VQA models while simultaneously revealing the limitations of pre-trained language encoders for standard tasks. + +# 6 Conclusion and Future Work + +In this paper, we show that pre-trained language encoders, like SBERT, produce semantically similar embeddings for multiple rephrasings of a sentence by latching on keywords, thus making VQA models robust to lexical variations. Combining cross-modal pre-training with transformer-based VQA architectures obtains state-of-the-art results on the VQA-Rephrasings dataset. + +In the future, we plan to investigate the factors that prevent closing the accuracy gap between ORI & REP despite extensive cross-modal pre-training. Further, we will study why some answer classes, like number, benefit the most from pre-training while others achieve significantly smaller relative performance gains. + +# Acknowledgments + +Shailza Jolly was supported by the BMBF project DeFuseNN (Grant 01IW17002) and the NVIDIA AI Lab program. + +# References + +Aishwarya Agrawal, Dhruv Batra, and Devi Parikh. 2016. Analyzing the behavior of visual question answering models. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. +Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6077-6086. +Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425-2433. +Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2019. Uniter: Universal image-text representation learning. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. +Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. Multimodal compact bilinear pooling for visual question answering and visual grounding. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. +Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6904-6913. +Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146. +Yu Jiang, Vivek Natarajan, Xinlei Chen, Marcus Rohrbach, Dhruv Batra, and Devi Parikh. 2018. Pythia v0.1: the winning entry to the vqa challenge 2018. arXiv preprint arXiv:1807.09956. +Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang. 2018. Bilinear attention networks. In Advances in Neural Information Processing Systems, pages 1564-1574. +Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. +Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. 
Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 13-23. Curran Associates, Inc. +Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image co-attention for visual question answering. In Advances in neural information processing systems, pages 289-297. + +Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448, Florence, Italy. Association for Computational Linguistics. +Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf. +Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: $100,000+$ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. +Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. +Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 815-823. +Meet Shah, Xinlei Chen, Marcus Rohrbach, and Devi Parikh. 2019. Cycle-consistency for robust visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6649-6658. +Hao Tan and Mohit Bansal. 2019.
Lxmert: Learning cross-modality encoder representations from transformers. arXiv preprint arXiv:1908.07490. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008. +Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018a. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics. +Jianyu Wang, Zhishuai Zhang, Cihang Xie, Yuyin Zhou, Vittal Premachandran, Jun Zhu, Lingxi Xie, and Alan Yuille. 2018b. Visual concepts and compositional voting. Annals of Mathematical Sciences and Applications, 3(1):151-188. +Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2016. Stacked attention networks for image question answering. In Proceedings of + +the IEEE conference on computer vision and pattern recognition, pages 21-29. +Zhou Yu, Jun Yu, Jianping Fan, and Dacheng Tao. 2017. Multi-modal factorized bilinear pooling with co-attention learning for visual question answering. 2017 IEEE International Conference on Computer Vision (ICCV). 
\ No newline at end of file diff --git a/canpretraininghelpvqawithlexicalvariations/images.zip b/canpretraininghelpvqawithlexicalvariations/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..eb1d319b354221ffef6747a35e7cd49665acda04 --- /dev/null +++ b/canpretraininghelpvqawithlexicalvariations/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1642fd120076e21bc6617f5f035d3cdc395c14121f505a7cbd0a5456f13b101d +size 94024 diff --git a/canpretraininghelpvqawithlexicalvariations/layout.json b/canpretraininghelpvqawithlexicalvariations/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..46148bee95596e08665b8cc46d16de7618fcc08a --- /dev/null +++ b/canpretraininghelpvqawithlexicalvariations/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d3f0f7e0689fbe54bde389541a6eba1890f6659366afdd658601205ecf0adc3b +size 195923 diff --git a/cascadedsemanticandpositionalselfattentionnetworkfordocumentclassification/a0761c29-920e-4733-bec9-4ac408fd82af_content_list.json b/cascadedsemanticandpositionalselfattentionnetworkfordocumentclassification/a0761c29-920e-4733-bec9-4ac408fd82af_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ec81597393992106414285f129782689b5e0d0de --- /dev/null +++ b/cascadedsemanticandpositionalselfattentionnetworkfordocumentclassification/a0761c29-920e-4733-bec9-4ac408fd82af_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83c1a830b38298fe990c8ca0fe63bce49afbeadf8ec46921d847b8612a49ddc0 +size 66353 diff --git a/cascadedsemanticandpositionalselfattentionnetworkfordocumentclassification/a0761c29-920e-4733-bec9-4ac408fd82af_model.json b/cascadedsemanticandpositionalselfattentionnetworkfordocumentclassification/a0761c29-920e-4733-bec9-4ac408fd82af_model.json new file mode 100644 index 0000000000000000000000000000000000000000..415e4ae8dda11316054adc83d08d3b601b500eff 
--- /dev/null +++ b/cascadedsemanticandpositionalselfattentionnetworkfordocumentclassification/a0761c29-920e-4733-bec9-4ac408fd82af_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a22180a9b3acf92bb2e683e4e22e2df165a0e26ce344a61ecfb88661c946e7f3 +size 80056 diff --git a/cascadedsemanticandpositionalselfattentionnetworkfordocumentclassification/a0761c29-920e-4733-bec9-4ac408fd82af_origin.pdf b/cascadedsemanticandpositionalselfattentionnetworkfordocumentclassification/a0761c29-920e-4733-bec9-4ac408fd82af_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0578ce028e8d116078afc633bbeefbcc85e019e6 --- /dev/null +++ b/cascadedsemanticandpositionalselfattentionnetworkfordocumentclassification/a0761c29-920e-4733-bec9-4ac408fd82af_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:07cb5b9f222c75b418dc13fa2d53201473b210c8263958872d2a76137fafe0d1 +size 989700 diff --git a/cascadedsemanticandpositionalselfattentionnetworkfordocumentclassification/full.md b/cascadedsemanticandpositionalselfattentionnetworkfordocumentclassification/full.md new file mode 100644 index 0000000000000000000000000000000000000000..9aab3bf3febadbf0934e7d78d4c79fc965f2e57c --- /dev/null +++ b/cascadedsemanticandpositionalselfattentionnetworkfordocumentclassification/full.md @@ -0,0 +1,318 @@ +# Cascaded Semantic and Positional Self-Attention Network for Document Classification + +Juyong Jiang $^{1}$ , Jie Zhang $^{2}$ , Kai Zhang $^{3*}$ + +$^{1}$ College of Internet of Things Engineering, Hohai University, Nanjing, China + +$^{2}$ Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China + +$^{3}$ Department of Computer & Information Sciences, Temple University, PA, USA + +$^{1}$ jianguyong@hhu.edu.cn + +${}^{2}$ jzhang080@gmail.com, ${}^{3}$ zhang.kai@temple.edu + +# Abstract + +Transformers have shown great success in learning representations for language modelling.
However, an open challenge remains: how to systematically aggregate semantic information (word embeddings) with positional (or temporal) information (word order). In this work, we propose a new architecture that aggregates the two sources of information using a cascaded semantic and positional self-attention network (CSPAN) in the context of document classification. The CSPAN uses a semantic self-attention layer cascaded with a Bi-LSTM to process the semantic and positional information in a sequential manner, and then adaptively combines them through a residual connection. Compared with commonly used positional encoding schemes, CSPAN can exploit the interaction between semantics and word positions in a more interpretable and adaptive manner, and the classification performance is notably improved while preserving a compact model size and a high convergence rate. We evaluate the CSPAN model on several benchmark data sets for document classification with careful ablation studies, and demonstrate encouraging results compared with the state of the art. + +# 1 Introduction + +Document classification is one of the fundamental problems in natural language processing; it aims at assigning one or multiple labels to a (typically) short text paragraph. Wide applications can be found in sentiment analysis (Moraes et al., 2013; Tang et al., 2015), subject categorization (Wang et al., 2012), spam email detection (Sahami et al., 1998) and document ranking (Wang et al., 2014). In recent years, deep neural networks have shown great potential in document classification and updated the state-of-the-art performance. Popular approaches include recurrent neural networks (RNNs) (Yogatama et al., 2017), convolutional neural networks (CNNs) (Zhang et al., 2015) and attention-based methods (transformers) (Gong et al., 2019; Adhikari et al., 2019), or a mixture of them. + +Different lines of methods have their respective pros and cons.
For example, RNNs are highly effective models for exploiting word order in learning useful representations, thanks to the iterative update of the hidden states, which depends on both the semantics of the current word and that of the historical words (or a concise summary of them), and to the long-range dependencies made possible through LSTMs (Yang et al., 2016; Stephen et al., 2018; Adhikari et al., 2019). Of course, their sequential processing nature makes them computationally less efficient. CNNs have gained huge success in image processing and classification and were recently introduced to NLP domains like document classification (Zhang et al., 2015; Lei et al., 2015; Conneau et al., 2016; Kim and Yang, 2018; Kim, 2014). The local convolutional operator is sensitive to word order, but only partially so, limited by the size of the kernel; capturing long-term relations may therefore require many layers and remain challenging. Transformers, different from both, fully exploit the modelling power of the self-attention mechanism (Shen et al., 2018; Gao et al., 2018; Zheng et al., 2018) and have significantly improved the state of the art in many NLP tasks such as machine translation (Vaswani et al., 2017), language understanding (Devlin et al., 2018) and language modeling (Dai et al., 2019), etc. + +Despite these great successes, how to systematically aggregate the semantic information (word embedding) with the positional information (word order) is still an open challenge in transformers. A common practice is the positional encoding (Vaswani et al., 2017), which encodes the position of the $t$-th word as a $d$-dimensional sinusoidal vector, as + +$$
p_{t,2i} = \sin\left(t / 10000^{2i/d}\right), \tag{1}
$$ + +$$
p_{t,2i+1} = \cos\left(t / 10000^{2i/d}\right). \tag{2}
$$
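Eqs. (1)-(2) are straightforward to implement; the sketch below is our NumPy illustration (not a reference implementation), filling even dimensions with sines and odd dimensions with cosines, assuming an even embedding size $d$.

```python
import numpy as np

def sinusoidal_positions(seq_len: int, d: int) -> np.ndarray:
    """Positional encodings of Eqs. (1)-(2):
    p[t, 2i] = sin(t / 10000^(2i/d)), p[t, 2i+1] = cos(t / 10000^(2i/d)).
    Assumes d is even."""
    t = np.arange(seq_len)[:, None]       # word positions 0 .. seq_len-1
    i = np.arange(d // 2)[None, :]        # dimension-pair indices
    angles = t / np.power(10000.0, 2.0 * i / d)
    p = np.empty((seq_len, d))
    p[:, 0::2] = np.sin(angles)
    p[:, 1::2] = np.cos(angles)
    return p

p = sinusoidal_positions(seq_len=50, d=16)
print(p.shape)   # (50, 16)
print(p[0, :4])  # at position 0, the sin entries are 0 and the cos entries are 1
```

In the standard transformer, the vector $p_t$ is then simply added to the $t$-th word embedding, which is exactly the additive mixing discussed next.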
However, empirically, adding positional vectors to the word vectors brings little performance gain in document classification compared with using no positional encoding at all (see Section 3.4, Table 5, for detailed empirical results).

We believe two reasons are related to the low performance gains from using positional encodings. First, this strategy leads to an interaction (inner product) between the semantic and temporal components that is hard to interpret. To see this, let $x_{i}$ and $p_i$ be the word vector and position vector for the $i$-th word. Then the attention score between the $i$-th and $j$-th words is computed as (before normalization)

$$
\begin{aligned}
e_{ij} &= \langle x_{i} + p_{i},\, x_{j} + p_{j} \rangle \\
&= \langle x_{i}, x_{j} \rangle + \langle p_{i}, p_{j} \rangle + \langle x_{i}, p_{j} \rangle + \langle p_{i}, x_{j} \rangle
\end{aligned} \tag{3}
$$

where $\langle \cdot ,\cdot \rangle$ denotes the inner product between two vectors, and without loss of generality we have assumed identity transforms in generating the key and query views of each word.

Obviously, as inner products between a word vector and a positional vector, $\langle x_i,p_j\rangle$ and $\langle p_i,x_j\rangle$ do not bear a meaningful interpretation. These two terms could therefore very likely hamper the semantic attention term $\langle x_i,x_j\rangle$ and the positional attention term $\langle p_i,p_j\rangle$ by behaving like noise, for example by deflating an important attention or exaggerating a marginal one. This can negatively affect the representations learned through the self-attention mechanism.
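The decomposition in Eq. (3) is easy to check numerically; the small script below (our own illustration, with random vectors standing in for embeddings) confirms that the score of the summed vectors always carries the two word-position cross terms:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
x_i, x_j = rng.normal(size=d), rng.normal(size=d)   # word vectors
p_i, p_j = rng.normal(size=d), rng.normal(size=d)   # positional vectors

e_ij = np.dot(x_i + p_i, x_j + p_j)                 # Eq. (3), left-hand side
terms = (np.dot(x_i, x_j) + np.dot(p_i, p_j)        # semantic + positional
         + np.dot(x_i, p_j) + np.dot(p_i, x_j))     # cross terms ("noise")
assert np.isclose(e_ij, terms)
```

Whatever the embeddings are, the two cross terms enter the score and cannot be separated out afterwards.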
Indeed, similar observations were made in (Yan et al., 2019), where the authors show that the self-attention mechanism, when mixed with the positional vectors, can no longer effectively quantify the relative positional distance between the words (namely, the positional attention term $\langle p_i,p_j\rangle$ is perturbed in an undesired manner).

Second, the relative weights of the word vector and the position vector (in their summation) are hard-coded, leading to a fixed combination, while in practice the relative importance of the semantic and positional components in affecting the similarity among the words can be considerably more complex.

In order to address these challenges with positional encoding, we explore a new architecture for combining the semantic and temporal information in document classification, called the "Cascaded semantic and positional self-attention network" (CSPAN). There are three main characteristics of the proposed architecture. First, instead of combining the word vectors with positional vectors from scratch, we choose to first explore the two sources of information with their respective processing layers, namely a self-attention layer that works only on the semantic space, and a Bi-LSTM layer which further incorporates the temporal order information into the updated word representations. Second, these two layers are cascaded so that the semantic and temporal information can finally be combined through a residual connection; this not only avoids non-interpretable operations between word vectors and positional vectors, but also serves as an adaptive transformation in combining the two information sources. Third, a multi-query attention scheme is adopted to extract multi-faceted, fixed-dimensional document features, which makes the resultant model highly compact and memory-efficient.
The CSPAN model is shown to effectively improve the performance of document classification in comparison with several state-of-the-art methods, including Transformer-style architectures. In the meantime, it demonstrates a very compact model size and a fast convergence rate during training, which is particularly desirable for large problems. We also conducted careful ablation studies to further quantify the performance gains of each component of the CSPAN model.

Our study demonstrates the importance of the way semantic and temporal information are aggregated in capturing the structure and meaning of documents, which we will continue exploring in more challenging language modelling tasks such as sequence tagging (Huang et al., 2015), natural language inference (Chen et al., 2016) and modeling sentence pairs (Tan et al., 2018) in our future research.

# 2 Method

The overall architecture of the proposed CSPAN model is shown in Figure 1. It is a highly compact model with three basic building blocks.

First, we use a self-attention block to update the word representations in each document. Here, the embedding of each word is collectively affected by all other words with related semantics in the same document. Note that we do not look into any positional information at this stage. Instead, the temporal information is taken into account in the next block, after the word representations have been fully updated through semantic self-attention alone. As we shall see, such a sequential processing pipeline allows a more flexible combination of the semantic and positional information.

Second, the updated word embeddings are fed into a Bi-LSTM layer, so that the relative positions of the words are naturally exploited to further refine the word representations specific to the organization of each document.
In the meantime, a residual connection is adopted to combine the semantic representation derived from the self-attention block with the output of the Bi-LSTM block; we call this the "Semantic and Positional Residual Connection", because it combines the semantic information (out of the self-attention block) with the positional information (out of the Bi-LSTM block) using residual connections. As we shall see, such a combination is more flexible than directly combining word vectors with positional vectors as in existing positional encoding schemes.

Third, we adopt multi-query attention in the final block to extract fixed-dimensional document features for the final classification. Compared with multi-head attention, multi-query attention can significantly reduce the number of parameters in the network while giving promising classification results. We describe the details of the different structures and components of our model in the following sections.

# 2.1 Semantic Self-Attention

Self-attention as proposed by (Vaswani et al., 2017)

![](images/1f733a4392cda816407a2c3a7b853e6af894d7339ab57ff8c1c9d18e86d5c551.jpg)
Figure 1: The architecture of the proposed CSPAN model.

calculates an attention weight between each pair of objects to capture global correlations and improve representation learning. We apply this framework in computing the word representations since it can capture long-range dependencies. However, we do make a number of important modifications which prove to be quite useful in improving the performance of document classification.

First, rather than using three independent transformation matrices corresponding to the key, value, and query views of each word, we discard these transformations and use the original word vectors in all three views.
The reason is that we want to activate a full, pairwise interaction between the words in the original word embedding space and apply transformations only in the subsequent (Bi-LSTM) layer, in order to maximally preserve the power of self-attention based representation learning. In comparison, if one chooses to apply a transformation (e.g., dimensionality reduction in most cases), then chances are that the semantic information encoded in the word vectors suffers certain losses before entering the next layer. Empirically, we have observed that implementing self-attention on the full-dimensional word vectors leads to better performance than on the lower-dimensional, transformed word vectors.

Second, rather than considering the use of positional information in self-attention, we choose to implement self-attention based only on the semantic information, and consider the positional information in subsequent processing blocks. This is in contrast to current practice, in which the semantic and positional information of each word are used together in calculating the self-attention coefficients. The reason is that directly adding the word vector and positional vector can lead to noisy fluctuations in the attention scores, as discussed in the introduction. Therefore, the semantic information is first processed alone, and then subjected to the positional information through the subsequent LSTM layer, which is a more natural way of injecting positional information.

Given these two design principles, our self-attention block can be described as follows. Let the input text sequence be $D = (w_{1}, w_{2}, \dots, w_{L})$ of $L$ elements, where $w_{i} \in \mathbb{R}^{d}$ is the $i$-th word embedding. Self-attention compares each element $w_{i}$ to every other element $w_{j}$ in the sequence, followed by layer normalization.
As a result, a new sequence $S = (s_{1}, s_{2}, \dots, s_{L})$ of the same length is constructed, in which each element $s_{i} \in \mathbb{R}^{d}$ is a weighted average of all elements $w_{j}$ in the input sequence:

$$
S = \text{Attention}(D, D, D) = \operatorname{softmax}\left(\frac{DD^{T}}{\sqrt{d}}\right) D \tag{4}
$$

Here, the original word embedding matrix $D \in \mathbb{R}^{L \times d}$ appears three times because we do not differentiate among the key, value and query views. The term $DD^T$ generates a weight matrix based on the inner-product similarity of the elements in the sequence. After re-scaling and normalization, the weight matrix is multiplied with $D$ to generate the new sequence representation $S$. This self-attention enhances the semantic representation of the word embeddings and captures both local and long-range dependencies.

# 2.2 Semantic and Positional Residual Connection

In the second block, we apply a Bi-LSTM layer to inject temporal information into the word representations computed by the self-attention block. The Bi-LSTM is a powerful model for handling sequential data, and is known to capture long-term dependencies thanks to its gating mechanism (Graves and Schmidhuber, 2005).
This layer therefore further improves the word representations obtained from the self-attention layer, and proceeds as

$$
\overrightarrow{h}_t = \overrightarrow{\mathrm{LSTM}}\left(s_t\right) \tag{5}
$$

$$
\overleftarrow{h}_t = \overleftarrow{\mathrm{LSTM}}\left(s_t\right) \tag{6}
$$

$$
h_t = \left[\overrightarrow{h}_t, \overleftarrow{h}_t\right] \tag{7}
$$

$$
P = \text{Attention}(H, H, H) \tag{8}
$$

Here, the word vectors obtained through the self-attention layer, $s_t \in \mathbb{R}^d$, are fed into a single-layer Bi-LSTM, and the hidden states of the LSTM in the forward and backward directions are concatenated as $h_t = [\overrightarrow{h}_t, \overleftarrow{h}_t]$. Finally, another self-attention layer is used to enhance the representations $H = [h_1, h_2, \dots, h_L]$, followed by layer normalization, to obtain the position-aware representations $P = (p_1, p_2, \dots, p_L)$.

Although LSTMs are known to handle long-range dependencies, this can still be challenging for long documents. Therefore, following the custom in Transformers (Vaswani et al., 2017), we use a residual connection that combines the output of the self-attention layer with that of the Bi-LSTM layer, computed as

$$
F_t^{sp} = s_t + p_t \tag{9}
$$

Here, $s_t \in \mathbb{R}^d$ is the output of the first building block (semantic self-attention) and $p_t \in \mathbb{R}^d$ is the output of the second building block (Bi-LSTM). To guarantee that the two vectors can be added together, the hidden-state dimension of the Bi-LSTM is chosen as half of the input dimension, i.e., $d/2$, so that the concatenated hidden state from the forward and backward directions (7) has the same dimension as the input word vectors. By combining the semantic and positional information, we obtain a final, high-level representation of each document.
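Assuming the definitions above, the first two blocks can be sketched in PyTorch (the paper's backend). The class and function names are ours, and training details such as dropout are omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def self_attention(X):
    """Eqs. (4)/(8): softmax(X X^T / sqrt(d)) X, no key/query/value projections."""
    d = X.size(-1)
    weights = F.softmax(X @ X.transpose(-2, -1) / d ** 0.5, dim=-1)
    return weights @ X

class SemanticPositionalBlock(nn.Module):
    """Blocks 1-2 of CSPAN (our sketch): semantic self-attention, a Bi-LSTM
    with hidden size d/2 per direction, attention over its states (Eq. 8),
    and the residual connection F^sp_t = s_t + p_t (Eq. 9)."""
    def __init__(self, d):
        super().__init__()
        assert d % 2 == 0
        self.bilstm = nn.LSTM(d, d // 2, batch_first=True, bidirectional=True)
        self.norm_s = nn.LayerNorm(d)
        self.norm_p = nn.LayerNorm(d)

    def forward(self, D):                    # D: (batch, L, d) word embeddings
        S = self.norm_s(self_attention(D))   # block 1: semantic representation
        H, _ = self.bilstm(S)                # Eqs. (5)-(7): (batch, L, d)
        P = self.norm_p(self_attention(H))   # Eq. (8) plus layer normalization
        return S + P                         # Eq. (9): residual connection
```

For 300-dimensional GloVe inputs, `SemanticPositionalBlock(300)` maps a `(batch, L, 300)` tensor to a tensor of the same shape, which the multi-query block then summarizes.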
The residual connection (He et al., 2016) has been shown to be highly useful in facilitating effective backpropagation and guiding the learning process toward a better model. In our context, the residual connection has an interesting interpretation as combining semantic and positional information in an adaptive manner. Note that the output of the self-attention layer concerns only the semantic component of the words; on the other hand, the output of the Bi-LSTM layer can be deemed word representations that incorporate the positional information, thanks to the sequential processing nature of the Bi-LSTM. Moreover, since the output of the Bi-LSTM layer, its hidden state, is a transformation of the input word vectors, we can consider the output of the residual connection an adaptive combination of the semantic and positional components. This not only avoids the non-interpretability of directly combining word vectors with position vectors, but also adjusts their relative importance through the learning of the transformation matrices in the Bi-LSTM model. We speculate that this is an important reason why the proposed architecture can effectively improve the classification performance.

# 2.3 Multi-Query Soft Attention

In the final block, we learn a number of query vectors in the space of $F_{t}^{sp}$ (9) so that each query can capture a certain aspect of the meaning of the document, in the form of a fixed-dimensional feature (context) vector. This is in contrast to single-query attention, where only a single query vector is learned to summarize the content of a document (Yang et al., 2016). It is worthwhile to note that multi-query attention for extracting document features can be computationally more efficient than multi-head attention. In the latter case, each attention head is associated with an independent set of transformation matrices, so the model size can be quite large.
In comparison, in our approach only multiple query vectors need to be learned in the same latent space of word representations, which has a much smaller memory footprint.

More specifically, the multi-query attention is defined as follows:

$$
u_t = \tanh\left(F_t^{sp} W^h + b^h\right) \tag{10}
$$

$$
\alpha_{it} = \frac{\exp\left(u_t^T Q_i\right)}{\sum_t \exp\left(u_t^T Q_i\right)} \tag{11}
$$

$$
F_i^{spmq} = \sum_t \alpha_{it} F_t^{sp} \tag{12}
$$

$$
\tilde{F}^{spmq} = \operatorname{Concat}\left(F_1^{spmq}, \dots, F_m^{spmq}\right) W^f \tag{13}
$$

That is, we first feed $F_{t}^{sp}\in \mathbb{R}^{d}$ through a one-layer MLP to get $u_{t}\in \mathbb{R}^{d}$ as a hidden representation of $F_{t}^{sp}$; then we measure the importance of each word as the similarity of $u_{t}$ with a query vector $Q_{i}\in \mathbb{R}^{d}$, and get normalized importance weights $\alpha_{i}\in \mathbb{R}^{L}$ through a softmax function. The multi-query matrix is randomly initialized and jointly learned during the training process. After that, we compute $F_{i}^{spmq}\in \mathbb{R}^{d}$ as a weighted sum of the $F_{t}^{sp}$ based on these weights. Finally, we concatenate all $F_{i}^{spmq}$ vectors and use a fusion matrix $W^{f}\in \mathbb{R}^{md\times d}$ to get a high-level representation of each document.

Here we discuss in more detail the memory footprint of the proposed multi-query attention in comparison with the commonly used multi-head attention. Let the dimension of the residual connection be $d$ and the number of query vectors be $m$. Then the model space complexity is $O(md + d^2)$. In comparison, if one adopts multi-head attention with $m$ attention heads, the model space complexity will be $O(md^2)$, since each attention head has its own transformation parameters.
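Eqs. (10)-(13) can be sketched as follows (our own NumPy rendering; the parameters $W^h$, $b^h$, $Q$ and $W^f$, which would be learned during training, are passed in explicitly):

```python
import numpy as np

def softmax_over_t(Z):
    """Column-wise softmax over the word axis t, as in Eq. (11)."""
    Z = Z - Z.max(axis=0, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=0, keepdims=True)

def multi_query_attention(F_sp, Q, W_h, b_h, W_f):
    """F_sp: (L, d) residual features; Q: (m, d) query vectors;
    W_h: (d, d), b_h: (d,) one-layer MLP; W_f: (m*d, d) fusion matrix."""
    U = np.tanh(F_sp @ W_h + b_h)        # Eq. (10): hidden representations u_t
    A = softmax_over_t(U @ Q.T)          # Eq. (11): alpha_{it}, normalized over t
    F_mq = A.T @ F_sp                    # Eq. (12): one (d,) summary per query
    return F_mq.reshape(-1) @ W_f        # Eq. (13): concatenate and fuse -> (d,)
```

For a document of $L$ words this returns a single $d$-dimensional representation, ready for the classification layer.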
The memory saving is thus almost proportional to the dimensionality: the higher the word vector dimension, the more significant the memory saving. This is a desirable property for real-world applications. It is also worthwhile to note that the CSPAN model has only three blocks, while the standard Transformer has a cascade of six self-attention layers, each of which may require an independent set of transformation matrices.

# 2.4 Classification Layer

In the final layer we apply a softmax classifier to the document representation $\tilde{F}^{spmq}$ to get a predicted label $\hat{y}$, where $\hat{y} \in Y$ and $Y$ is the class label set, i.e.,

$$
\hat{y} = \operatorname{argmax} \; p(Y \mid \tilde{F}^{spmq}) \tag{14}
$$

where

$$
p\left(Y \mid \tilde{F}^{spmq}\right) = \operatorname{softmax}\left(W^o \tilde{F}^{spmq} + b^o\right) \tag{15}
$$

Here, $W^{o}$ and $b^{o}$ are the transformation matrix and the bias term, respectively. We then use the negative log-likelihood of the ground-truth label $y$ to define the loss function:

$$
L = -\log p(y \mid \tilde{F}^{spmq}) \tag{16}
$$

# 3 Experiments

In this section, we report a number of experimental results on 4 benchmark datasets for document classification, together with careful ablation studies to illustrate the effectiveness of the building blocks of the proposed method.

# 3.1 Datasets and Methods

We evaluate the effectiveness of the proposed CSPAN model on four document classification datasets as in (Zhang et al., 2015). The detailed statistics of the data sets are shown in Table 1.

AG's News. Topic classification over four categories of internet news articles, composed of titles plus descriptions, classified into World, Sports, Business and Sci/Tech. The number of
| Dataset | Classes | Train | Test | Average #s | Max #s | Average #w | Max #w |
|---|---|---|---|---|---|---|---|
| AG's News | 4 | 120,000 | 7,600 | 1.3 | 15 | 46.6 | 277 |
| Yelp Review Polarity | 2 | 560,000 | 38,000 | 8.4 | 119 | 161.4 | 1345 |
| Yelp Review Full | 5 | 650,000 | 50,000 | 8.4 | 151 | 163.3 | 1418 |
| Yahoo! Answers | 10 | 1,400,000 | 60,000 | 5.7 | 515 | 115.9 | 2746 |
Table 1: Detailed statistics of the datasets: #s denotes the number of sentences (average and maximum per document), #w denotes the number of words (average and maximum per document).

training samples for each class is 30,000, and the number of testing samples is 1,900.

Yelp Review Polarity. The same dataset of text reviews from the Yelp Dataset Challenge in 2015, except that a coarser sentiment definition is considered: stars 1 and 2 are treated as negative, and stars 4 and 5 as positive. The polarity dataset has 280,000 training samples and 19,000 test samples for each polarity.

Yelp Review Full. The dataset is obtained from the Yelp Dataset Challenge in 2015 for sentiment classification with star labels ranging from 1 to 5. The full dataset has 130,000 training samples and 10,000 testing samples for each star.

Yahoo! Answers. Topic classification over the ten largest main categories from Yahoo! Answers Comprehensive Questions and Answers version 1.0: Society & Culture, Science & Mathematics, Health, Education & Reference, Computers & Internet, Sports, Business & Finance, Entertainment & Music, Family & Relationships and Politics & Government. The documents we use include question titles, question contexts and best answers. Each class contains 140,000 training samples and 5,000 testing samples.

Methods. We include altogether eleven competing methods from (Zhang et al., 2015) and (Gong et al., 2019). For our approach, we have two versions: CSPAN (base), using a single-layer Bi-LSTM and 16 query vectors, and CSPAN (big), using three hidden layers in the Bi-LSTM and 128 query vectors. We trained the base models for 30 epochs and the big models for 60 epochs.

# 3.2 Model configuration and training

In the experiments, we use 300-dimensional GloVe 6B pre-trained word embeddings (Pennington et al., 2014), available at https://nlp.stanford.edu/projects/glove, to initialize the word embeddings. We choose 150 hidden units for the Bi-LSTM models.
The Adam optimizer (Kingma and Ba, 2014) with a learning rate of 1e-3 and weight decay of 1e-4 is used to train the model parameters. The mini-batch size is set to 64 and the number of queries to 16. We train all neural networks for 30 epochs, and the learning rate is divided by 10 at epochs 20 and 25. All of our experiments are performed on NVIDIA Titan RTX GPUs, with PyTorch 1.1.0 as the backend framework.

# 3.3 Results and analysis

The experimental results on all data sets are shown in Table 2. The results of the competing methods are directly cited from the respective papers as listed in Table 2.

From Table 2 we can see that the CSPAN model achieves the best performance on all four datasets, AG's News, Yelp P., Yelp F. and Yahoo (rows 12/13), which demonstrates its effectiveness in document classification. In particular, CSPAN consistently outperforms the baseline deep learning networks using RNNs/CNNs, such as LSTM, CNN-char and CNN-word, by a substantial margin on all datasets (rows 1, 2 and 3).

Compared with CSPAN (base), CSPAN (big) gives comparable or only slightly better performance on all datasets. This observation suggests that CSPAN actually prefers simpler models over highly complex ones, which is an advantage for large problems.

# 3.4 Ablation Study

Component-wise gains. To investigate the impact of each of the key components of the CSPAN model for document classification, we conducted an ablation study on the AG's News dataset. First, we validate the impact of each component, including semantic self-attention, the semantic and positional residual connection, and multi-query soft attention. The results are shown in Table 3.

The standard Bi-LSTM baseline achieves a test accuracy of 89.36. As expected, integrating semantic self-attention significantly improves the classification performance, with a test accuracy of 92.61. It shows that using self-attention can
| Source | Methods | AGNews | Yelp P. | Yelp F. | Yahoo |
|---|---|---|---|---|---|
| Zhang et al., 2015 | LSTM | 86.06 | 94.74 | 58.17 | 70.84 |
| | CNN-char | 89.13 | 94.46 | 62.02 | 69.98 |
| | CNN-word | 91.45 | 95.11 | 60.48 | 70.94 |
| Gong et al., 2019 | Deep CNN | 91.27 | 95.72 | 64.26 | 73.43 |
| | FastText | 92.50 | 95.70 | 63.90 | 72.30 |
| | HAN | 92.36 | 95.59 | 63.32 | 75.80 |
| | SASEM | 91.50 | 94.90 | 63.40 | - |
| | DiSAN | 92.51 | 94.39 | 62.08 | 76.15 |
| | LEAM | 92.45 | 95.31 | 64.09 | 77.42 |
| | SWEM | 92.24 | 93.76 | 61.11 | 73.53 |
| | HLAN | 92.89 | 95.83 | 63.78 | 77.55 |
| This paper | CSPAN (base) | 93.68 | 96.11 | 65.93 | 77.61 |
| | CSPAN (big) | 93.62 | 96.18 | 65.95 | 77.75 |
+ +Table 2: Test accuracy of competing methods on benchmark document classification tasks, in percentage. + +
| Component | Accuracy |
|---|---|
| Standard Bi-LSTM (baseline) | 89.36 |
| + self-att | 92.61 |
| + residual | 93.03 |
| + multi-query | 93.68 |
enhance the semantic representations. Furthermore, integrating the residual connection improves the classification performance from 92.61 to 93.03. Finally, when multi-query attention is adopted, the classification performance is further improved, with an overall gain of $4.32\%$ over the baseline.

Model Size. As mentioned in (Adhikari et al., 2019), increasingly complex network components and modeling techniques are accompanied by smaller and smaller improvements in effectiveness on standard benchmark datasets. We have observed a similar trend in CSPAN, as shown in Table 4.

From Table 4, we can see that when the number of hidden layers in the Bi-LSTM is set to 3, the performance can be worse than with 1-layer or 2-layer Bi-LSTMs (the latter with even fewer query vectors). In other words, a compact Bi-LSTM is preferred. On the other hand, the optimal number of query vectors seems to be around 16 for a 1-layer Bi-LSTM; more query vectors than this bring limited or even negative performance gains.

Fusion Methods. We also conducted extensive comparative studies on the performance of

Table 3: Impact of each building block in the proposed CSPAN model on AG's News dataset.
| Layers (Bi-LSTM) | Query | Memory (MB) | Accuracy |
|---|---|---|---|
| 1 | 1 | 1557 | 92.84 |
| 1 | 8 | 1641 | 92.95 |
| 1 | 16 | 1739 | 93.68 |
| 2 | 8 | 1665 | 93.05 |
| 2 | 16 | 1765 | 92.88 |
| 2 | 32 | 1961 | 93.04 |
| 3 | 32 | 1997 | 92.92 |
| 3 | 64 | 2401 | 92.71 |
| 3 | 128 | 3201 | 93.14 |
+ +Table 4: Impact of model size. + +
| # | Methods | Accuracy |
|---|---|---|
| (a) | Embedding | 92.38 |
| (b) | Embedding + Position | 92.71 |
| (c) | Embedding + Relative-Position | 92.39 |
| (d) | Embedding + Bi-LSTM | 93.03 |
| (e) | Embedding // Bi-LSTM | 93.68 |
Table 5: Different ways of combining the semantic and the positional information and their accuracy on AG's News dataset.

different ways of combining the semantic and the positional information, as shown in Figure 2.

From Table 5, we can see that directly combining the positional vector with the word vector (fusion method (b), a "light-weight" Transformer) brings an improvement of $0.33\%$ compared with the baseline (method (a), without any positional information). In addition, using a relative positional

![](images/753c58470fab440b6e0c0e5a89df510d12a9c14f29a3ee40576e935ef34a12d8.jpg)
(a)

![](images/9abc5c9f77d07b60c0c1a802b4bb687d29f4e1c096fdf7fc90b8838891705a64.jpg)
(b)

![](images/58883646221b0f3554f378ebeb9486810078807cfa20ea86afc019adc0cde60d.jpg)
(c)

![](images/39c40aac77fae730f7c794a2fe458792e811b19b972f95185059cd5673d64193.jpg)
(d)

![](images/1d2eaa453c1ca880ffcae64e69757ee01dca58a4f3039e0343d844aa0834bd96.jpg)
(e)
Figure 2: Different schemes of combining the semantic and position information for a comparative study, where (b) corresponds to a "light-weight" Transformer, and (e) is the proposed architecture.

encoding scheme (Shaw et al., 2018) (fusion method (c)) leads to almost the same result as the baseline. If we use the Bi-LSTM directly on the input word vectors, i.e., a parallel combination scheme of the semantic and positional information (fusion method (d)), the performance gain approaches $0.65\%$. Finally, with the proposed fusion scheme in CSPAN (fusion method (e)), i.e., sequential processing of semantic and positional information equipped with a residual connection, the performance gain is around $1.30\%$. This comparative study clearly demonstrates the advantage of the proposed CSPAN model in combining semantic and positional information.

Computational Considerations.
It is commonly believed that Transformers are computationally efficient by virtue of the parallel processing pipeline associated with the self-attention mechanism. However, empirically, we find that the large model size and the extensive, pairwise self-attention cost can significantly slow down the computation. For example, standard Transformers have 6 layers of self-attention in the encoding stage alone, leading to a huge set of transformation matrix parameters $W^{Q}, W^{K}, W^{V}$, so the cost of backpropagation can be large. In addition, $O(n^{2})$ time and space are needed in each layer to compute the self-attention over a document of $n$ words. As a result, the standard Transformer was time-consuming in our experimental evaluations and typically did not converge until after tens or even a hundred epochs, even on the smallest data set (AG's News). This is why we implemented and compared with the "light-weight" version of Transformers in our experiments (e.g., method (b) in Figure 2). The proposed CSPAN model, on the other hand, is more compact and reaches a satisfactory result in just a few epochs, and the time taken per epoch is also much less than for standard Transformers. Therefore, our approach is computationally very efficient, especially for the classification of short or medium-length documents.

# 4 Conclusion

We presented the cascaded semantic and positional self-attention network to aggregate semantic and positional information in document classification. It overcomes the limitations of existing positional encoding schemes, and shows encouraging performance against state-of-the-art methods using Transformers and CNNs. In the meantime, it has a compact model size and is computationally efficient. Our studies demonstrate the importance of properly aggregating the semantic and positional components, and we will further extend the model to more challenging NLP tasks in our future research.
# Acknowledgments

Jie Zhang is supported by NSFC 61973086, Shanghai Municipal Science and Technology Major Project (No. 2018SHZDX01) and ZJ Lab.

# References

Ashutosh Adhikari, Achyudh Ram, Raphael Tang, and Jimmy Lin. 2019. DocBERT: BERT for document classification. arXiv preprint arXiv:1904.08398.
Ashutosh Adhikari, Achyudh Ram, Raphael Tang, and Jimmy Lin. 2019. Rethinking complex neural network architectures for document classification. In NAACL-HLT, volume 1, pages 4046-4051.
Alexis Conneau, Holger Schwenk, Loic Barrault, and Yann LeCun. 2016. Very deep convolutional networks for natural language processing. arXiv preprint arXiv:1606.01781.
Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2016. Enhanced LSTM for natural language inference. arXiv preprint arXiv:1609.06038.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860.
Shang Gao, Arvind Ramanathan, and Georgia Tourassi. 2018. Hierarchical convolutional attention networks for text classification. In The Third Workshop on Representation Learning for NLP.
Alex Graves and Jürgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5-6): 602-610.
Changjin Gong, Kaize Shi, and Zhendong Niu. 2019. Hierarchical text-label integrated attention network for document classification. In Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference.
Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016.
Deep residual learning for image recognition. In CVPR, pages 770-778.
Taehoon Kim and Jihoon Yang. 2018. Abstractive text classification using sequence-to-convolution neural networks. arXiv preprint arXiv:1805.07745.
Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2015. Molding CNNs for text: non-linear, non-consecutive convolutions. arXiv preprint arXiv:1508.04112.
Rodrigo Moraes, Joao Francisco Valiati, and Wilson P. Gaviao Neto. 2013. Document-level sentiment classification: An empirical comparison between SVM and ANN. Expert Systems with Applications, 40(2): 621-633.
Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. Regularizing and optimizing LSTM language models. In ICLR.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP.
Mehran Sahami, Susan Dumais, David Heckerman, and Eric Horvitz. 1998. A Bayesian approach to filtering junk e-mail. In Learning for Text Categorization: Papers from the 1998 Workshop, volume 62, pages 98-105.
Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. arXiv preprint arXiv:1803.02155.
Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, and Chengqi Zhang. 2018. Bi-directional block self-attention for fast and memory-efficient sequence modeling. arXiv preprint arXiv:1804.00857.
Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In EMNLP.
Chuanqi Tan, Furu Wei, Wenhui Wang, Weifeng Lv, and Ming Zhou. 2018. Multiway attention networks for modeling sentence pairs. In IJCAI.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017.
Attention is all you need. In NIPS, pages 5998-6008. +Mingqiang Wang, Mengting Liu, Shi Feng, Daling Wang, and Yifei Zhang. 2014. A novel calibrated label ranking based method for multiple emotions detection in Chinese microblogs. In *Natural Language Processing and Chinese Computing*, pages 238-250. +Sida Wang, and Christopher D. Manning. 2012. Baselines and bigrams: Simple, good sentiment and topic classification. In ACL. +Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In NAACL-HLT, pages1480-1489. +Hang Yan, Bocao Deng, Xiaonan Li, and Xipeng Qiu. 2019. TENER: Adapting Transformer Encoder for Name Entity Recognition. arXiv preprint arXiv:1911.04474. +Dani Yogatama, Chris Dyer, Wang Ling, and Phil Blunsom. 2017. Generative and discriminative text classification with recurrent neural networks. arXiv preprint arXiv:1703.01898. +Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NIPS. +Jianming Zheng, Fei Cai, Taihua Shao, and Honghui Chen. 2018. Self-interaction attention mechanism-based text representation for document classification. Applied Sciences, 8(4): 613. 
\ No newline at end of file diff --git a/cascadedsemanticandpositionalselfattentionnetworkfordocumentclassification/images.zip b/cascadedsemanticandpositionalselfattentionnetworkfordocumentclassification/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..232474454bbe2fa92052e71290cc4ab6178295fd --- /dev/null +++ b/cascadedsemanticandpositionalselfattentionnetworkfordocumentclassification/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e74a4101bd30ad249c06ac036889bc8281a880509db89cbe86ecea6b85079340 +size 300325 diff --git a/cascadedsemanticandpositionalselfattentionnetworkfordocumentclassification/layout.json b/cascadedsemanticandpositionalselfattentionnetworkfordocumentclassification/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..fba2d6c710db36e650dce7b0b90e477de9b8c639 --- /dev/null +++ b/cascadedsemanticandpositionalselfattentionnetworkfordocumentclassification/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5383ad2f345b3790cc36df793008008d324c3db30c7543eb200f93db09478d94 +size 318080 diff --git a/cdevalsummanempiricalstudyofcrossdatasetevaluationforneuralsummarizationsystems/e4674bd6-c276-46fe-baf0-c7ddd4d6a411_content_list.json b/cdevalsummanempiricalstudyofcrossdatasetevaluationforneuralsummarizationsystems/e4674bd6-c276-46fe-baf0-c7ddd4d6a411_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..a6a1a6071aadbf5a30dfad558c5f14bc5c13bef5 --- /dev/null +++ b/cdevalsummanempiricalstudyofcrossdatasetevaluationforneuralsummarizationsystems/e4674bd6-c276-46fe-baf0-c7ddd4d6a411_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:156477972f0efbff416e4dbfa1823875d4519fa5c4b3a9e93d80fd4d0a5d51a3 +size 127988 diff --git a/cdevalsummanempiricalstudyofcrossdatasetevaluationforneuralsummarizationsystems/e4674bd6-c276-46fe-baf0-c7ddd4d6a411_model.json 
b/cdevalsummanempiricalstudyofcrossdatasetevaluationforneuralsummarizationsystems/e4674bd6-c276-46fe-baf0-c7ddd4d6a411_model.json new file mode 100644 index 0000000000000000000000000000000000000000..b1fcb826aa312724921c1a13f7df1b4cc263d56e --- /dev/null +++ b/cdevalsummanempiricalstudyofcrossdatasetevaluationforneuralsummarizationsystems/e4674bd6-c276-46fe-baf0-c7ddd4d6a411_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:97c75dc7d7637a785d4928f2aad49f62163892cdd178e92bbe8f5547f999ccd0 +size 147201 diff --git a/cdevalsummanempiricalstudyofcrossdatasetevaluationforneuralsummarizationsystems/e4674bd6-c276-46fe-baf0-c7ddd4d6a411_origin.pdf b/cdevalsummanempiricalstudyofcrossdatasetevaluationforneuralsummarizationsystems/e4674bd6-c276-46fe-baf0-c7ddd4d6a411_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0f2537929c2fc80c465a8b65c173999070606b64 --- /dev/null +++ b/cdevalsummanempiricalstudyofcrossdatasetevaluationforneuralsummarizationsystems/e4674bd6-c276-46fe-baf0-c7ddd4d6a411_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:08139cf8d992fab02fc1e7dd5393f20fbea9a1255c1a958a2f806f7b7d1481b0 +size 3576537 diff --git a/cdevalsummanempiricalstudyofcrossdatasetevaluationforneuralsummarizationsystems/full.md b/cdevalsummanempiricalstudyofcrossdatasetevaluationforneuralsummarizationsystems/full.md new file mode 100644 index 0000000000000000000000000000000000000000..41af0f89a5f6896ecea0c76709e3ad71d5be50a1 --- /dev/null +++ b/cdevalsummanempiricalstudyofcrossdatasetevaluationforneuralsummarizationsystems/full.md @@ -0,0 +1,433 @@ +# CDEvalSumm: An Empirical Study of Cross-Dataset Evaluation for Neural Summarization Systems + +Yiran Chen,\* Pengfei Liu\*, Ming Zhong, Zi-Yi Dou\*, Danqing Wang, Xipeng Qiu†, Xuanjing Huang + +Shanghai Key Laboratory of Intelligent Information Processing, Fudan University School of Computer Science, Fudan University + +2005 Songhu Road, Shanghai, 
China
+
+Carnegie Mellon University
+
+{yrchen19,mzhong18,dqwang18,xpqiu,xjhuang}@fudan.edu.cn
+
+{zdou,pliu3}@cs.cmu.edu
+
+# Abstract
+
+Neural network-based models augmented with unsupervised pre-trained knowledge have achieved impressive performance on text summarization. However, most existing evaluation methods are limited to an in-domain setting, where summarizers are trained and evaluated on the same dataset. We argue that this approach can narrow our understanding of the generalization ability of different summarization systems. In this paper, we perform an in-depth analysis of the characteristics of different datasets and investigate the performance of different summarization models under a cross-dataset setting, in which a summarizer trained on one corpus is evaluated on a range of out-of-domain corpora. A comprehensive study of 11 representative summarization systems on 5 datasets from different domains reveals the effect of model architectures and generation ways (i.e., abstractive and extractive) on model generalization ability. Further, experimental results shed light on the limitations of existing summarizers. A brief introduction and supplementary code can be found at https://github.com/zide05/CDEvalSumm.
+
+# 1 Introduction
+
+Neural summarizers have achieved impressive performance when evaluated by ROUGE (Lin, 2004) in an in-domain setting, and the recent success of pretrained models drives the state-of-the-art results on benchmarks to a new level (Liu and Lapata, 2019; Liu, 2019; Zhong et al., 2019a; Zhang et al., 2019; Lewis et al., 2019; Zhong et al., 2020). However, superior performance is not a guarantee of a perfect system, since existing models tend to show defects when evaluated from other aspects. For example, Zhang et al.
(2018) observes that many abstractive systems tend to be near-extractive in practice. Cao et al. (2018); Wang et al. (2020); Krysciński et al. (2019); Maynez et al. (2020) reveal that most generated summaries are factually incorrect. These non-mainstream evaluation methods make it easier to identify models' weaknesses.
+
+![](images/00f521a8dc2bf57cb0b669f42762c6c95a57f82a2330843b1a92cd1ce8266e5e.jpg)
+(a) In-dataset R2
+
+![](images/9b7f7c879ba3e7f2e07692c35857471499b8185fb43c4270950c6e76a65b92a.jpg)
+(b) Stiff-R2
+
+![](images/ebe79649a9de5f1b6a0ed43896ca16e4cfec46cdb966b2f312e5cf39a57d20f1.jpg)
+(c) Stable-R2
+
+![](images/d31e9a8d2f8d09836d67bfc823c820d4d3f250381b778949017463d8ef8584de.jpg)
+Figure 1: Ranking (descending order) of current 11 top-scoring summarization systems (abstractive models are red while extractive ones are blue). Each system is evaluated based on three diverse evaluation methods: (a) averaging each system's in-dataset ROUGE-2 F1 scores (R2) over five datasets; (b-c) evaluating systems using our designed cross-dataset measures: stiff-R2 and stable-R2 (Sec. 5). Notably, $BERT_{match}$ and $BART$ are two state-of-the-art models for extractive and abstractive summarization respectively (highlighted by blue and red boxes).
+
+Orthogonal to the above two evaluation aspects, we aim to diagnose the limitations of existing systems under cross-dataset evaluation, in which a summarization system trained on one corpus is evaluated on a range of out-of-dataset corpora. Instead of evaluating the quality of summarizers solely based on one dataset or multiple datasets individually, cross-dataset evaluation enables us to evaluate model performance from a different angle. For example, Fig. 1 shows the ranking of the 11 summarization systems studied in this paper under different evaluation metrics, in which the ranking list "(a) in-dataset R2" is obtained by traditional ranking criteria while the other two are based on our
Intuitively, we observe that 1) there are different definitions of a "good" system in various evaluation aspects; 2) abstractive and extractive systems exhibit diverse behaviors when evaluated under the cross-dataset setting. + +The above example recaps the general motivation of this work, encouraging us to rethink the generalization ability of current top-scoring summarization systems from the perspective of cross-dataset evaluation. Specifically, we ask two questions as follows: + +Q1: How do different neural architectures of summarizers influence the cross-dataset generalization performances? When designing summarization systems, a plethora of neural components can be adopted (Zhou et al., 2018; Chen and Bansal, 2018; Gehrmann et al., 2018; Cheng and Lapata, 2016; Nallapati et al., 2017). For example, will copy (Gu et al., 2016) and coverage (See et al., 2017) mechanisms improve the cross-dataset generalization ability of summarizers? Is there a risk that BERT-based summarizers will perform worse when adapted to new areas compared with the ones without BERT? So far, the generalization ability of current summarization systems when transferring to new datasets still remains unclear, which poses a significant challenge to design a reliable system in realistic scenarios. Thus, in this work, we take a closer look at the effect of model architectures on cross-dataset generalization setting. + +Q2: Do different generation ways (extractive and abstractive) of summarizers influence the cross-dataset generalization ability? Extractive and abstractive models, as two typical ways to summarize texts, usually follow diverse learning frameworks and favor different datasets. It would be absorbing to know their discrepancy from the perspective of cross-dataset generalization. (e.g., whether abstractive summarizers are better at generating informative or faithful summaries on a new test set?) 
+
+To answer the questions above, we have conducted a comprehensive experimental analysis, which involves eleven summarization systems (including the state-of-the-art models), five benchmark datasets from different domains, and two evaluation aspects. Tab. 1 illustrates the overall analysis framework. We explore the effect of different architectures and generation ways on model generalization ability in order to answer $Q1$ and $Q2$. Semantic equivalency (e.g., ROUGE) and factuality are adopted to characterize different aspects of cross-dataset generalization ability. Additionally, we strengthen our analysis by presenting two views of evaluation: holistic and fine-grained (Sec. 5).
+
| Framework | Semantic equivalency (e.g., ROUGE) | Factuality (e.g., Factcc) |
| --- | --- | --- |
| Q1: Architecture (e.g., Transformer vs. LSTM) | Sec. 6.1.1 | Sec. 6.2 |
| Q2: Generation way (e.g., BERT vs. BART) | Sec. 6.1.2 | Sec. 6.2 |
+
+Table 1: Overall analysis framework.
+
+Our contributions can be summarized as follows: 1) Cross-dataset evaluation is orthogonal to other evaluation aspects (e.g., semantic equivalence, factuality) and can be used to re-evaluate current summarization systems, accelerating the creation of more robust summarization systems. 2) We have designed two measures, Stiffness and Stableness, which help characterize generalization ability from different views, encouraging us to diagnose the weaknesses of state-of-the-art systems. 3) We conduct dataset bias-aided analysis (Sec. 4.3) and suggest that a better understanding of datasets is helpful for interpreting systems' behaviours.
+
+# 2 Representative Systems
+
+Although it is intractable to cover all neural summarization systems, we try to include representative models to make the evaluation comprehensive. Our selection strategy is as follows: 1) the source code of the system is publicly available; 2) the system achieves state-of-the-art or top performance on benchmark datasets (e.g., CNNDM (Nallapati et al., 2016)); 3) the system is equipped with typical neural components (e.g., Transformer, LSTM) or mechanisms (e.g., copy).
+
+# 2.1 Extractive Summarizers
+
+Extractive summarizers directly choose and output the salient sentences (or phrases) of the original document. Generally, most existing extractive summarization systems follow a framework consisting of three major modules: a sentence encoder, a document encoder and a decoder. In this paper, we investigate extractive summarizers with different choices of encoders and decoders.
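As a rough illustration of the decoder stage of this three-module framework (our own toy sketch over hypothetical per-sentence salience scores, not any of the systems below), the difference between non-autoregressive and autoregressive (pointer-style) sentence selection can be shown as:

```python
# Toy contrast between the two extractive decoding styles discussed in this
# section. The salience scores and similarity matrix are made-up inputs; real
# systems derive them from sentence/document encoders (CNN/LSTM/Transformer/BERT).

def select_non_autoregressive(scores, k):
    """Pick the top-k sentences independently of each other."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:k])  # restore document order

def select_autoregressive(scores, sims, k):
    """Pointer-style selection: pick one sentence at a time, penalizing
    similarity to already-selected sentences to avoid repetition.
    `sims[i][j]` is a hypothetical sentence-similarity matrix."""
    selected = []
    for _ in range(k):
        best, best_score = None, float("-inf")
        for i in range(len(scores)):
            if i in selected:
                continue
            penalty = max((sims[i][j] for j in selected), default=0.0)
            if scores[i] - penalty > best_score:
                best, best_score = i, scores[i] - penalty
        selected.append(best)
    return sorted(selected)

if __name__ == "__main__":
    scores = [0.9, 0.85, 0.2, 0.6]
    # sentences 0 and 1 are near-duplicates (similarity 0.8)
    sims = [[1.0, 0.8, 0.1, 0.1],
            [0.8, 1.0, 0.1, 0.1],
            [0.1, 0.1, 1.0, 0.1],
            [0.1, 0.1, 0.1, 1.0]]
    print(select_non_autoregressive(scores, 2))       # -> [0, 1]
    print(select_autoregressive(scores, sims, 2))     # -> [0, 3]
```

The toy run shows why the two decoders can rank differently: the independent decoder keeps both near-duplicate sentences, while the pointer-style decoder trades the second duplicate for a less redundant sentence.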
+
+$\mathrm{LSTM}_{non}$ (Kedzie et al., 2018) This summarizer adopts a convolutional neural network as the sentence encoder and an LSTM to model the cross-sentence relation. Finally, each sentence is selected in a non-autoregressive way.
+
+$\mathrm{Trans}_{non}$ (Liu and Lapata, 2019) The TransformerExt model in Liu and Lapata (2019), similar to the above setting except that the document encoder is replaced with a Transformer layer.
+
+$\mathrm{Trans}_{auto}$ (Zhong et al., 2019a) The decoder is replaced with a pointer network to avoid repetition (autoregressive).
+
+$\mathrm{BERT}_{non}$ (Liu and Lapata, 2019) The BertSumExt model in Liu and Lapata (2019); this model is an extension of $\mathrm{Trans}_{non}$ obtained by introducing a BERT (Devlin et al., 2018) layer.
+
+$\mathrm{BERT}_{match}$ (Zhong et al., 2020) This is the existing state-of-the-art extractive summarization system, which introduces a matching layer using a siamese BERT.
+
+# 2.2 Abstractive Summarizers
+
+The abstractive approach involves paraphrasing the input using novel words. Current abstractive summarization systems mainly follow the encoder-decoder paradigm.
+
+$\mathrm{L2L}_{ptr}^{cov}$ (See et al., 2017) The model is an LSTM-based sequence-to-sequence summarizer with copy and coverage mechanisms.
+
+$\mathrm{L2L}_{ptr}$ We remove the coverage module and keep other parts unchanged.
+
+L2L This model is implemented by removing the pointer network of the above summarizer.
+
+T2T (Liu and Lapata, 2019) A sequence-to-sequence model with a Transformer as both the encoder and the decoder.
+
+BE2T (Liu and Lapata, 2019) A sequence-to-sequence model with BERT as the encoder and a Transformer as the decoder.
+
+BART (Lewis et al., 2019) A fully pre-trained sequence-to-sequence model. It is the existing state-of-the-art abstractive summarization system.
+
+# 3 Datasets
+
+We explore five typical summarization datasets: CNNDM, Xsum, PubMed, Bigpatent B and Reddit TIFU.
CNNDM (Nallapati et al., 2016) and Xsum (Narayan et al., 2018) are news-domain summarization datasets which differ in their publication sources and abstractiveness. PubMed (Cohan et al., 2018) is a scientific paper dataset, which can be used to investigate the generalization ability of models in the scientific domain. Bigpatent B (Sharma et al., 2019) is the B category of Bigpatent (a dataset consisting of patent documents from Google Patents Public Datasets). Reddit TIFU (Kim et al., 2019) is a dataset of less formal posts collected from the online discussion forum Reddit. Detailed statistics and introductions of the datasets are presented in the appendix.
+
+# 4 Evaluation for Summarization
+
+Existing summarization systems are usually evaluated on different datasets individually based on an automatic metric: $r = \operatorname{eval}(D, S, m)$, where $D$ and $S$ represent a dataset (e.g., CNNDM) and a system (e.g., L2L) respectively, and $m$ denotes an evaluation metric (e.g., ROUGE).
+
+![](images/3794e9ced62e9374e86c84f08dea255672c5dabfcfd588441d5fea07065197c5.jpg)
+Figure 2: Different metrics characterized by a relation chart among generated summaries (Gsum), references (Ref) and input documents (Doc).
+
+To evaluate the quality of generated summaries, metrics can be designed from diverse perspectives, which can be abstractly characterized as in Fig. 2. Specifically, semantic equivalence is used to quantify the relation between generated summaries (Gsum) and references (Ref), while factuality aims to characterize the relation between generated summaries (Gsum) and input documents (Doc).
+
+Besides evaluation metrics, in this paper, we also introduce some measures that quantify the relation between input documents (Doc) and references (Ref). We claim that a better understanding of dataset biases can help us interpret models' discrepancies.
+
+# 4.1 Semantic Equivalence
+
+ROUGE (Lin, 2004) is a classic metric to evaluate the quality of model-generated summaries by counting the number of overlapping $n$-grams between the evaluated summaries and the ideal references.
+
+# 4.2 Factuality
+
+Apart from evaluating the semantic equivalence between generated summaries and the references, another evaluation aspect of recent interest is factuality. In order to analyze the generalization performance of models from different perspectives, in this work, we also take factuality evaluation into consideration.
+
+![](images/8d21e8d42b0b702ac178336f8e1273a82001e87a1a58eacfda72596e2a393191.jpg)
+(a) CNN.
+
+![](images/5565f4b2f81df5dbee7f8ffe0240144207956a3f077baff579da906162f50c3e.jpg)
+(b) Xsum
+
+![](images/62abde9d02cec0c8d7e7f18577cd257fe294fe20c836822bd029cdacd7bb65d3.jpg)
+(c) PubMed
+
+![](images/31e938fc9584d1ac6c6becee3fb0470761a2aae49796a7a3b59788d9fe8f0cef.jpg)
+(d) Bigpatent B
+
+![](images/f1c90da712cf9116e1d853221c58df4f5ecb90c6830197b03e88ba918459f471.jpg)
+(e) Reddit
+Figure 3: Characteristics of the test set for each dataset (the train set possesses almost the same properties and thus is not displayed here): coverage, copy length, novelty, sentence fusion score, repetition. Here we choose 2-grams to calculate novelty and 3-grams for repetition.
+
+Factcc Factcc (Krysciński et al., 2019) is introduced to measure the fact consistency between generated summaries and source documents. It is a weakly-supervised, model-based metric. We use the proportion of summary sentences that Factcc predicts as factually consistent as the factuality score in this paper.
+
+# 4.3 Dataset Bias
+
+We detail several measures that quantify the characteristics of datasets, which are helpful for understanding the differences among models.
+
+Coverage (Grusky et al., 2018) illustrates the overlap rate between document and summary; it is defined as the proportion of copied segments in the summary.
+
+Copy Length measures the average length of segments in the summary copied from the source document.
+
+Novelty (See et al., 2017) is defined as the proportion of segments in the summaries that have not appeared in the source documents. The segments can be instantiated as n-grams.
+
+Repetition (See et al., 2017) measures the rate of repeated segments in summaries. Similar to the above measure, we choose n-grams (n ranges from one to four) as the segment unit.
+
+Sentence fusion score is calculated using the result of the algorithm proposed by Lebanoff et al. (2019), which determines whether a summary sentence is compressed from one sentence or fused from several sentences. The sentence fusion score is then calculated as the proportion of fused sentences (sentences that are fused from two or three document sentences) among all summary sentences.
+
+A high value of coverage and copy length suggests the dataset is more extractive, while novelty represents the rate of novel units in the summary and the sentence fusion score represents the proportion of sentences that are fused from multiple document sentences. Zhong et al. (2019b) also explore dataset bias to aid the analysis of model performance, but they only focus on metrics for extractive summarizers.
+
+# 4.4 Dataset Bias Analysis
+
+According to the coverage and copy length results in Fig. 3, CNNDM is the most extractive dataset. Bigpatent B also exhibits a relatively high copy rate in its summaries, but the copied segments are shorter than in CNNDM. On the other hand, Bigpatent B and Xsum obtain higher sentence fusion scores, which suggests that the proportion of fused sentences in these two datasets is high. Xsum and Reddit contain more novel 3-gram units in their summaries, reflecting that these two datasets are more abstractive. In terms of repetition in Fig. 3, only PubMed and Bigpatent B contain more 2-gram repeated phrases in their summaries.
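As a rough sketch (our own simplification, not the paper's exact implementation, and covering only the two purely n-gram-based measures), novelty and repetition can be computed as:

```python
# Minimal sketch of two n-gram dataset-bias measures described above.
# This is a simplification for illustration, not the authors' code.

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def novelty(doc_tokens, sum_tokens, n=2):
    """Fraction of summary n-grams that never appear in the document."""
    doc = set(ngrams(doc_tokens, n))
    summ = ngrams(sum_tokens, n)
    return sum(1 for g in summ if g not in doc) / len(summ)

def repetition(sum_tokens, n=3):
    """Fraction of summary n-grams that repeat an earlier summary n-gram."""
    summ = ngrams(sum_tokens, n)
    return 1 - len(set(summ)) / len(summ)

if __name__ == "__main__":
    doc = "the cat sat on the mat near the door".split()
    summ = "the cat sat quietly".split()
    # only ("sat", "quietly") is novel among the 3 summary bigrams
    print(novelty(doc, summ, n=2))  # -> 0.333...
```

Coverage and copy length additionally require greedy alignment of copied fragments between document and summary, so they are omitted from this sketch.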
| | Models | CNN.* | CNN. | Xsum | Pubm. | Patent b | Red. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Ext. | $\mathrm{LSTM}_{non}$ | 41.22 | 41.36 | 19.51 | 42.98 | 39.29 | 20.46 |
| | $\mathrm{Trans}_{non}$ | 40.90 | 40.84 | 15.74 | 38.45 | 34.41 | 16.25 |
| | $\mathrm{Trans}_{auto}$ | 41.36 | 41.35 | 19.29 | 42.74 | 38.76 | 18.55 |
| | $\mathrm{BERT}_{non}$ | 43.25 | 42.69 | 21.76 | 38.74 | 35.85 | 21.84 |
| | $\mathrm{BERT}_{match}$ | 44.22 | 44.26 | 24.97 | 41.19 | 38.89 | 25.32 |
| Abs. | L2L | 31.33 | 32.80 | 28.31 | 27.84 | 30.46 | 16.89 |
| | $\mathrm{L2L}_{ptr}$ | 36.44 | 37.06 | 29.67 | 32.04 | 31.03 | 21.32 |
| | $\mathrm{L2L}_{ptr}^{cov}$ | 39.53 | 39.95 | 28.83 | 35.27 | 35.90 | 21.28 |
| | T2T | 40.21 | 39.90 | 29.01 | 30.71 | 42.94 | 19.96 |
| | BE2T | 41.72 | 41.34 | 38.99 | 37.11 | 43.10 | 26.66 |
| | BART | 44.16 | 44.75 | 44.73 | 45.02 | 45.78 | 34.00 |
+
+Table 2: Representative summarizers studied in this paper and their corresponding performance (ROUGE-1 F1 score) on different datasets (CNNDM, Xsum, PubMed, Bigpatent B, Reddit). We re-implement all 11 systems on the five datasets ourselves. All implemented results outperform, or are only slightly lower than, the performances reported in the original papers (the column CNN.*).
+
+![](images/62b4823b0e887d305d84e2ffe19a8a5e8fabd5558abee14aac3ad23382e4ea0a.jpg)
+Table 3: Illustration of two views (Stiffness: $r^\mu$ and Stableness: $r^\sigma$) to characterize cross-dataset (a and b) generalization based on models $A$ and $B$. $\mathbf{U}_{\mathbf{A}}$ and $\mathbf{U}_{\mathbf{B}}$ represent the cross-dataset matrices of the two models. $r^\mu (\mathbf{U}_{\mathbf{A}}) < r^\mu (\mathbf{U}_{\mathbf{B}})$ means model $B$ gains a better cross-dataset absolute performance, while $r^\sigma (\mathbf{U}_{\mathbf{A}}) > r^\sigma (\mathbf{U}_{\mathbf{B}})$ suggests model $A$ is more robust.
+
+![](images/658286c9bd9fea885026d2bfbec1fe875a255de7eee0388018ff627f6a2cf044.jpg)
+
+![](images/f3a1a3fdbafa8a378526b4e2bf5468cb057d3851da39f88b0871f03d3eaaff7e.jpg)
+
+# 5 Cross-dataset Evaluation
+
+Despite recent impressive results on diverse summarization datasets, modern summarization systems mainly focus on extensive in-dataset architecture engineering while ignoring generalization ability, which is indispensable when systems are required to process samples from new datasets or domains. Therefore, instead of evaluating the quality of a summarization system solely based on one dataset, we introduce cross-dataset evaluation: a summarizer (e.g., $L2L$) trained on one dataset (e.g., CNNDM) is evaluated on a range of other datasets (e.g., Xsum). Methodologically, we perform cross-dataset evaluation from two views, fine-grained and holistic, which we detail below.
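Concretely, the two holistic views introduced in the methodology below reduce to simple aggregations of an $N \times N$ cross-dataset score matrix. A minimal sketch with a made-up two-dataset matrix (our own illustration, not the paper's code):

```python
# Toy sketch of the holistic cross-dataset measures. U[i][j] is a metric
# score (e.g., ROUGE-1 F1) for a system trained on dataset i and tested on
# dataset j; the 2x2 matrix below is made up for illustration.

def stiffness(U):
    """Average absolute cross-dataset performance: mean over all cells."""
    n = len(U)
    return sum(U[i][j] for i in range(n) for j in range(n)) / (n * n)

def stableness(U):
    """Average of each cell normalized by its column's in-dataset
    (diagonal) score, expressed as a percentage."""
    n = len(U)
    total = sum(U[i][j] / U[j][j] for i in range(n) for j in range(n))
    return total / (n * n) * 100

if __name__ == "__main__":
    U = [[40.0, 20.0],   # trained on D1: in-dataset 40, transfer 20
         [30.0, 40.0]]   # trained on D2: transfer 30, in-dataset 40
    print(stiffness(U))    # -> 32.5
    print(stableness(U))   # -> 81.25
```

A system with larger diagonal-to-off-diagonal gaps would keep the same stiffness while its stableness drops, which is exactly the distinction the two measures are meant to expose.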
+
+# 5.1 Methodology
+
+Given a summarization system $S$, a set of datasets $\mathcal{D} = D_1, \dots, D_N$, and an evaluation metric $m$, we can design different evaluation functions to quantify the system's quality: $\mathbf{r} = \mathrm{eval}(\mathcal{D}, S, m)$. Depending on the form of the function $\mathrm{eval}(\cdot)$, $\mathbf{r}$ could be instantiated as either a scalar or a vector (or matrix).
+
+# 5.1.1 Fine-grained Measures
+
+Once $\mathbf{r}$, the cross-dataset evaluation result, is instantiated as a matrix, we can characterize the given system in a fine-grained way. Specifically, we define $\mathbf{r} = \mathbf{U} \in \mathbb{R}^{N \times N}$, where each cell $\mathbf{U}_{i,j}$ refers to the metric result (e.g., ROUGE) when a summarizer is trained on dataset $D_{i}$ and tested on dataset $D_{j}$ ($N$ refers to the number of datasets).
+
+Additionally, we can normalize each cell by the diagonal value: $\hat{\mathbf{U}}_{ij} = \mathbf{U}_{ij} / \mathbf{U}_{jj} \times 100\%$. Here $\mathbf{U}_{ij} / \mathbf{U}_{jj}$ measures how close the out-of-dataset performance (trained on $D_{i}$ and tested on $D_{j}$) of a system is to its in-dataset performance (trained on $D_{j}$ and tested on $D_{j}$).
+
+# 5.1.2 Holistic Measures
+
+Instead of using a matrix, we can holistically quantify the cross-dataset generalization ability of each summarization system using a scalar. Specifically, we propose two views to characterize cross-dataset generalization.
+
+Stiffness This measure reflects the absolute performance of a system under the cross-dataset setting. Given a system, its stiffness can be calculated as: $r^{\mu} = \frac{1}{N\times N}\sum_{i,j}\mathbf{U}_{ij}$
+
+Intuitively, a higher value of stiffness suggests the system obtains better performance when transferred to new datasets.
+
+Stableness It characterizes the relative performance gap between in-dataset and cross-dataset tests.
$r^{\sigma} = \frac{1}{N \times N} \sum_{i,j} \mathbf{U}_{ij} / \mathbf{U}_{jj} \times 100\%$
+
+Generally, a higher value of stableness suggests that the variance between in-dataset and cross-dataset results is smaller.
+
+Tab. 3 gives an example of characterizing generalization ability from the two views. It shows that stiffness and stableness are not always unanimous: a model with higher stiffness may obtain lower stableness.
+
+![](images/4358e86ad2a301e8cc8a88fc2d25343448e77045ee9d14999fa699dace7087bc.jpg)
+(a) stiffness $(\mathbf{r}^{\mu})$
+
+![](images/7dfd352af367603caf3e43b6d0b4a16e950aa45bfb2b044826359f57273fb882.jpg)
+(b) stableness $(\mathbf{r}^{\sigma})$
+Figure 4: Illustration of stiffness and stableness of ROUGE-1 F1 scores for various models. Yellow bars stand for extractive models and grey bars stand for abstractive models.
+
+# 6 Experiment
+
+In what follows, we analyze different summarization systems in terms of semantic equivalence and factuality. Moreover, the results are studied from the holistic and fine-grained views based on the measures defined above. Holistic results are shown in Fig. 4
| analysis aspect | Architecture |
| --- | --- |
| model type | Ext. |
| compare models | $BERT_{match}$ vs. $BERT_{non}$; $L2L_{ptr}$ vs. $L2L$; $L2L_{ptr}^{cov}$ vs. $L2L_{ptr}$ |
| holistic analysis | stiff.: 32.27 vs. 28.98; stable.: 91.98 vs. 88.93 |
+
+Table 4: The difference of ROUGE-1 F1 scores between different model pairs. Every column of the table represents the compared results of one pair of models. The holistic analysis line displays the overall stiffness and stableness of the compared models. The rest of the table contains fine-grained results, the first line of which is the original compared results ($\mathbf{U}_{\mathbf{A}} - \mathbf{U}_{\mathbf{B}}$ for a model pair $A$ and $B$) and the second line the normalized compared results ($\hat{\mathbf{U}}_{\mathbf{A}} - \hat{\mathbf{U}}_{\mathbf{B}}$ for a model pair $A$ and $B$). For all heatmaps, 'grey' and 'red' represent positive and negative respectively. Here we only display compared results for a limited number of model pairs; all other results are displayed in the appendix.
+
+and Fig. 5. On the other hand, Tab. 4 and Tab. 5 display the fine-grained observations. Tab. 2 displays the in-dataset results of all models on the five benchmark datasets.
+
+# 6.1 Semantic Equivalence Analysis
+
+We conduct pair-wise Wilcoxon signed-rank significance tests with $\alpha = 0.05$. The null hypothesis is that the expected performances (stiffness and stableness) of a pair of summarization models are identical. We report the observations that are statistically significant.
+
+# 6.1.1 Architecture
+
+Match-based reranking improves stiffness significantly $BERT_{match}$, which uses semantic match scores to rerank candidate summaries, enhances the stiffness of the model significantly in Fig. 4a while obtaining comparable stableness with other extractive models in Fig. 4b. This indicates that $BERT_{match}$ not only increases the absolute performance but also retains robustness.
+
+$BERT_{match}$ is not stable when transferred from other datasets to Bigpatent B As Tab. 4g shows, when compared to $BERT_{non}$, $BERT_{match}$ obtains a larger in-dataset vs. cross-dataset performance gap when tested in Bigpatent B.
This is because Bigpatent B possesses a higher sentence fusion score and higher repetition compared with other datasets, as Sec. 4.4 demonstrates. When served as the test set, such a dataset makes it very challenging for $BERT_{match}$ to correctly rank the candidate summaries, while it provides more training signals when served as the training set. Thus the in-dataset (Bigpatent B) trained model obtains a much higher score than the cross-dataset models trained on other datasets, which causes lower stableness.
+
+Non-autoregressive decoder is more robust than autoregressive for extractive models. Regarding the decoder of extractive systems, as shown in Fig. 4a and Fig. 4b, the non-autoregressive extractive decoder $(Trans_{non})$ is more stable while it possesses lower stiffness than its autoregressive counterpart $(Trans_{auto})$.
+
+Pointer network and coverage mechanism are instrumental in improving stiffness and stability of abstractive systems. The pointer network and coverage mechanism do enhance the absolute performance of abstractive systems, as Fig. 4a demonstrates $(\mathrm{r}^{\mu}(L2L_{ptr}^{cov}) > \mathrm{r}^{\mu}(L2L_{ptr}) > \mathrm{r}^{\mu}(L2L))$. Also, the stableness results of $L2L_{ptr}$ and $L2L$ in Fig. 4b reveal that once the pointer mechanism is removed, the value of $r^{\sigma}$ decreases, which suggests that a system will be more stable if it is augmented with the ability to directly extract text spans from the source document.
+
+However, pointer network brings trivial improvement when tested in Xsum and Reddit. The absolute performance improvement of the pointer network is trivial when tested in Xsum and Reddit, as shown in Tab. 4c, which is in line with expectations because these two datasets are
$L2L_{ptr}$ in Tab.4d) shows that when tested in Reddit and Xsum, the improvement of coverage mechanism is trivial. These two datasets possess less repetition, thus coverage can not provide much help when transferred to these datasets. Moreover, when trained in Xsum, $L2L_{ptr}^{cov}$ gets lower stiffness compared with $L2L_{ptr}$ , which is in accordance with the normalized result in Tab. 4j. This is because the gold summaries of Xsum exhibit lower repetition score (as analyzed in Sec. 4.4), thus can't provide enough learning signals for coverage mechanism. + +BERT sometimes brings unstableness. As shown in Fig. 4a, there is no doubt that once summarizers (extractive or abstractive) are equipped with pre-trained encoder, the stiffness will increase significantly (e.g., $r^{\mu}(BE2T) >> r^{\mu}(T2T)$ , suggesting that the overall cross-dataset performance has been improved. However, we are surprised to find (from Fig. 4b) that BERT sometimes leads to unstableness (i.e., $r^{\sigma}(Trans_{non}) > r^{\sigma}(BERT_{non})$ ). This result enlightens us to search for other architectures or learning schemas to offset the unstableness brought by BERT. + +As the heatmap of $BERT_{non}$ vs. $Trans_{non}$ in Tab. 4h shows, BERT brings unstableness especially when tested in Reddit and Xsum. + +BERT sometimes can even harm the absolute cross-dataset performance. $BERT_{non}$ performs worse than $Trans_{non}$ in some cells (e.g., trained in Xsum and tested in CNNDM) in Tab. 4b + +BART shows superior performance in terms of stiffness and stableness. As Fig. 4a shows, $BART$ obtains the highest stiffness among all abstractive models, and is even comparable with $BERT_{match}$ . In addition, $BART$ is also outstanding in terms of stableness when compared with other abstractive models (Fig. 4b). 
The performance gap between $BART$ and $BE2T$ proves that for abstractive models, pre-training the whole sequence-to-sequence model works better than using a pre-trained model on only the encoder or decoder side.

# 6.1.2 Generation ways

Extractive models are superior to abstractive models in terms of stiffness and robustness.

![](images/a55bc0392acd4c4476756e8e52b86d29310d76a3db99e348d9f619f28f86ba6b.jpg)
(a) stiffness $(\mathbf{r}^{\mu})$

![](images/f75a8d6c987f10675a9c9365ab4b60af878726b078c4f02ef1dbd1ef885e244a.jpg)
(b) stableness $(\mathbf{r}^{\sigma})$
Figure 5: Illustration of stiffness and stableness of factuality scores for various models. Yellow bars stand for extractive systems and grey bars stand for abstractive systems.

Extractive models show a clear advantage in absolute performance, as shown in Fig. 4a. Moreover, comparing the stableness of abstractive and extractive models in Fig. 4b, we are surprised to find that all abstractive approaches except BART are extremely brittle: their $r^{\sigma}$ values are much lower than those of any extractive approach, with a maximum margin of $37\%$, and the gap can be reduced by introducing a pointer network. This observation poses a great challenge to the development of abstractive systems, encouraging research to pay more attention to improving generalization ability. We have also provided hints for possible solutions, such as enabling the model to extract granular information from the source document or using a well-pre-trained sequence-to-sequence model (e.g., BART).

When tested on Xsum and Reddit, abstractive systems possess comparable or even better performance. The supremacy of extractive models is not retained on all datasets (Tab. 4f and Tab. 4e). Though extractive models obtain higher stiffness scores when tested on CNNDM and PubMed, abstractive approaches (BE2T, L2L) obtain higher or comparable stiffness scores when tested on XSUM and Reddit.
This is because Xsum and Reddit are more abstractive, as analyzed in Sec. 4.4.

# 6.2 Factuality Analysis

1) All extractive models achieve higher factuality scores while all abstractive models obtain much lower ones (Fig. 5a). One interesting observation is that, for extractive models, not all factuality scores under the in-dataset setting are $100\%$ in Tab. 5 (on-diagonal values), which reveals the limitation of
**EXT models**

$Trans_{non}$

| Train \ Test | CNN. | XSUM | Pubm. | Patent B | Red. | avg |
| --- | --- | --- | --- | --- | --- | --- |
| CNN | 100.0 | 100.0 | 98.0 | 99.1 | 100.0 | 99.4 |
| XSUM | 99.8 | 100.0 | 97.4 | 98.2 | 100.0 | 99.1 |
| Pubm. | 97.7 | 98.8 | 95.1 | 94.7 | 100.0 | 97.3 |
| Patent B | 98.3 | 99.8 | 96.3 | 97.4 | 99.5 | 98.3 |
| Reddit | 90.3 | 94.1 | 94.1 | 86.7 | 96.3 | 92.3 |
| avg | 97.2 | 98.6 | 96.2 | 95.2 | 99.2 | 97.3 |

$BERT_{match}$

| Train \ Test | CNN. | XSUM | Pubm. | Patent B | Red. | avg |
| --- | --- | --- | --- | --- | --- | --- |
| CNN | 99.8 | 99.4 | 92.9 | 95.7 | 99.1 | 97.4 |
| XSUM | 99.7 | 99.5 | 93.2 | 95.1 | 98.8 | 97.3 |
| Pubm. | 99.7 | 99.2 | 93.1 | 95.2 | 99.3 | 97.3 |
| Patent B | 99.7 | 99.0 | 93.0 | 94.5 | 98.4 | 96.9 |
| Reddit | 99.7 | 99.3 | 93.1 | 96.1 | 99.3 | 97.5 |
| avg | 99.7 | 99.3 | 93.0 | 95.3 | 99.0 | 97.3 |

**ABS models**

$T2T$

| Train \ Test | CNN. | XSUM | Pubm. | Patent B | Red. | avg |
| --- | --- | --- | --- | --- | --- | --- |
| CNN | 72.4 | 75.7 | 71.5 | 71.8 | 70.5 | 72.4 |
| XSUM | 9.7 | 22.6 | 10.8 | 9.9 | 19.1 | 14.4 |
| Pubm. | 58.5 | 59.3 | 56.2 | 72.3 | 34.9 | 56.2 |
| Patent B | 79.2 | 81.2 | 84.4 | 68.7 | 73.9 | 77.5 |
| Reddit | 34.8 | 35.7 | 50.6 | 44.6 | 52.5 | 43.6 |
| avg | 50.9 | 54.9 | 54.7 | 53.5 | 50.2 | 52.8 |

$BART$

| Train \ Test | CNN. | XSUM | Pubm. | Patent B | Red. | avg |
| --- | --- | --- | --- | --- | --- | --- |
| CNN | 69.9 | 77.9 | 87.4 | 84.1 | 90.2 | 81.9 |
| XSUM | 35.5 | 24.7 | 36.1 | 50.1 | 50.7 | 39.4 |
| Pubm. | 69.5 | 61.5 | 58.4 | 61.3 | 94.1 | 69.0 |
| Patent B | 52.1 | 53.8 | 69.0 | 67.4 | 76.8 | 63.8 |
| Reddit | 59.6 | 50.3 | 69.1 | 49.3 | 44.2 | 54.5 |
| avg | 57.3 | 53.6 | 64.0 | 62.4 | 71.2 | 61.7 |
Table 5: Cross-dataset factuality scores for extractive and abstractive models.

the existing factuality checker.

2) BART can significantly improve the ability to generate factual summaries compared with other abstractive models, as shown in Fig. 5a, even compared with $L2L_{ptr}$, which is equipped with a pointer network and tends to copy from the source document.
3) Abstractive models obtain higher stableness of factuality scores in Fig. 5b, with values surpassing $100\%$. This is because when tested on abstractive datasets (e.g., Xsum, as Sec. 4.4 shows), abstractive summarizers trained in-dataset tend to be more abstractive and obtain lower factuality scores, while they get higher factuality scores when trained on other datasets that are more extractive (e.g., CNNDM). The superiority of cross-dataset results over in-dataset results thus leads to higher stableness.

# 7 Related Work

Our work is connected to the following threads of NLP research.

Cross-Dataset Generalization in NLP Recently, more researchers have shifted their focus from individual datasets to cross-dataset evaluation, aiming for a comprehensive understanding of systems' generalization ability. Fried et al. (2019) explore the generalization ability of different constituency parsers. Talmor and Berant (2019), on the other hand, show that the generalization ability of reading comprehension models can be improved by pre-training on one or two other reading comprehension datasets. Fu et al. (2020) study model generalization in the field of NER. They point out the bottlenecks of existing NER systems through in-depth analyses and provide suggestions for further improvement. Different from the above works, we attempt to explore the generalization ability of summarization systems.

Diagnosing Limitations of Existing Summarization Systems Beyond ROUGE, some recent works try to explore the weaknesses of existing systems from diverse aspects. Zhang et al.
(2018) try to figure out to what extent neural abstractive summarization systems are abstractive, and discover that many abstractive systems tend to perform near-extractively. On the other hand, Cao et al. (2018) and Krysciński et al. (2019) study the factuality problem in modern neural summarization systems. The former puts forward a model that combines the source document with preliminarily extracted fact descriptions and proves its effectiveness in terms of factual correctness, while the latter designs a model-based automatic factuality evaluation metric. The abstractiveness and factuality errors studied in the above works are orthogonal to this work and can easily be combined with the cross-dataset evaluation framework in this paper, as Sec. 6.2 shows. Moreover, Wang et al. (2019) and Hua and Wang (2017) investigate the domain shift problem in text summarization, but they focus on a single generation way (either abstractive or extractive). We also investigate the generalization of summarizers when transferring to different datasets, but include more datasets and models.

# 8 Conclusion

By performing a comprehensive evaluation of eleven summarization systems and five mainstream datasets, we summarize our observations below:

1) Abstractive summarizers are extremely brittle compared with extractive approaches, and the maximum gap between them reaches $37\%$ in terms of the stableness measure (ROUGE) defined in this paper. 2) BART (a SOTA system) is superior to other abstractive models and even comparable with extractive models in terms of stiffness (ROUGE). It is also robust when transferring between datasets, as it possesses high stableness (ROUGE). 3) $BERT_{match}$ (a SOTA system) performs excellently in terms of stiffness, while it still lacks stableness when transferring to BigPatent B from other datasets.
4) The robustness of models can be improved by either equipping the model with the ability to copy spans from the source document (i.e., Lebanoff et al. (2019)) or making use of a well-trained sequence-to-sequence pre-trained model (BART). 5) Simply adding BERT on the encoder can improve the stiffness (ROUGE) of a model but causes a larger gap between cross-dataset and in-dataset performance; a better way should be found to merge BERT into abstractive models, or a better training strategy should be applied to offset the negative influence it brings. 6) The existing factuality checker (Factcc) is limited in its predictive power on positive samples (Sec. 6.2). 7) Out-of-domain systems can even surpass in-domain systems in terms of factuality (Sec. 6.2).

# Acknowledgements

We would like to thank the anonymous reviewers for their detailed comments and constructive suggestions. This work was supported by the National Natural Science Foundation of China (No. 62022027 and 61976056) and the Science and Technology on Parallel and Distributed Processing Laboratory (PDL).

# References

Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2018. Faithful to the original: Fact aware neural abstractive summarization. In Thirty-Second AAAI Conference on Artificial Intelligence.
Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 675-686.
Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 484-494.
Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents.
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 615-621.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Daniel Fried, Nikita Kitaev, and Dan Klein. 2019. Cross-domain generalization of neural constituency parsers. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 323-330, Florence, Italy. Association for Computational Linguistics.

Jinlan Fu, Pengfei Liu, Qi Zhang, and Xuanjing Huang. 2020. Rethinking generalization of neural models: A named entity recognition case study. arXiv preprint arXiv:2001.03844.
Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098-4109.
Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 708-719.
Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. arXiv preprint arXiv:1603.06393.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1684-1692.
Xinyu Hua and Lu Wang. 2017. A pilot study of domain adaptation effect for neural abstractive summarization. arXiv preprint arXiv:1707.07062.
Chris Kedzie, Kathleen McKeown, and Hal Daume III. 2018.
Content selection in deep learning models of summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1818-1828.
Byeongchang Kim, Hyunwoo Kim, and Gunhee Kim. 2019. Abstractive summarization of reddit posts with multi-level memory networks. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2519-2531.
Wojciech Krysciński, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Evaluating the factual consistency of abstractive text summarization. arXiv preprint arXiv:1910.12840.
Logan Lebanoff, Kaiqiang Song, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, and Fei Liu. 2019. Scoring sentence singletons and pairs for abstractive summarization. arXiv preprint arXiv:1906.00077.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. Text Summarization Branches Out.
Yang Liu. 2019. Fine-tune BERT for extractive summarization. arXiv preprint arXiv:1903.10318.
Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3721-3731.
Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. arXiv preprint arXiv:2005.00661.
Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. In Thirty-First AAAI Conference on Artificial Intelligence.
Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Çağlar Gülçehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. CoNLL 2016, page 280.
Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797-1807.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.
Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1073-1083.
Eva Sharma, Chen Li, and Lu Wang. 2019. Bigpatent: A large-scale dataset for abstractive and coherent summarization. arXiv preprint arXiv:1906.03741.
Alon Talmor and Jonathan Berant. 2019. MultiQA: An empirical investigation of generalization and transfer in reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4911-4921, Florence, Italy. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.

Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and answering questions to evaluate the factual consistency of summaries. arXiv preprint arXiv:2004.04228.
Danqing Wang, Pengfei Liu, Ming Zhong, Jie Fu, Xipeng Qiu, and Xuanjing Huang. 2019. Exploring domain shift in extractive text summarization. arXiv preprint arXiv:1908.11664.
Fangfang Zhang, Jin-ge Yao, and Rui Yan. 2018. On the abstractiveness of neural document summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 785-790, Brussels, Belgium. Association for Computational Linguistics.
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J Liu. 2019. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. arXiv preprint arXiv:1912.08777.
Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. 2020. Extractive summarization as text matching. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
Ming Zhong, Pengfei Liu, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. 2019a. Searching for effective neural extractive summarization: What works and what's next. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1049-1058.
Ming Zhong, Danqing Wang, Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2019b. A closer look at data bias in neural extractive summarization models. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 80-89.
Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, and Tiejun Zhao. 2018. Neural document summarization by jointly learning to score and select sentences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 654-663.

# A Appendices

# A.1 Detailed Dataset introduction

CNN/DailyMail The CNN/DailyMail question answering dataset (Hermann et al., 2015), modified by Nallapati et al. (2016), is commonly used for summarization. The dataset consists of online news articles with paired human-generated summaries. For data preprocessing, we use the non-anonymized data as in See et al. (2017), which does not replace named entities.

XSUM XSUM (Narayan et al., 2018) is a dataset consisting of articles paired with single-sentence answers to the question "What is the article about?" as summaries.
It is more abstractive than CNN/DailyMail.

PUBMED PUBMED (Cohan et al., 2018) is drawn from scientific papers, specifically medical journal articles from the PubMed Open Access Subset. We use the introduction as the source document and the abstract as the summary.

BIGPATENT BIGPATENT (Sharma et al., 2019) consists of 1.3 million records of U.S. patent documents with human-written summaries. According to the Cooperative Patent Classification (CPC), the dataset is divided into nine categories. One of the nine categories is chosen as a dataset from a different domain in our experiments (Category B: Performing Operations; Transporting).

REDDIT TIFU REDDIT TIFU (Kim et al., 2019) is a dataset of less formal posts compared with the datasets mentioned above, which mostly use formal documents as sources. It is collected from the online discussion forum Reddit. The body text is regarded as the source, the title as the short summary, and the TL;DR summary as the long summary, yielding two datasets: TIFU-short and TIFU-long. TIFU-long is used in this paper.

# A.2 Dataset statistics

The detailed dataset statistics are presented in Tab. 6
| Datasets | Statistics | Topics | Oracle | Lead-k |
| --- | --- | --- | --- | --- |
| CNNDM | 2,764/123/107M | News | 55.21 | 40.32 |
| Xsum | 1126/60/59M | News | 30.41 | 16.38 |
| Pubmed | 644/36/38M | Scientific | 46.21 | 37.52 |
| BigPatent B | 4,812/265/262M | Patents | 51.53 | 31.85 |
| Reddit | 206/3.3/3.6M | Posts | 36.47 | 11.09 |
Table 6: Detailed statistics of five datasets. Lead-$k$ indicates the ROUGE-1 F1 score of the first $k$ sentences in the document, and Oracle indicates the globally optimal combination of sentences in terms of ROUGE-1 F1 scores with the ground truth; the latter represents the upper bound of extractive models.

# A.3 Experimental setup

# A.3.1 Extractive Summarizers

We use the same training setup as in Zhong et al. (2019a). We use cross entropy as the loss function to train $LSTM_{non}$ and $Trans_{auto}$. The hidden state dimension of the LSTM in $LSTM_{non}$ is set to 512 and the hidden state dimension of the Transformer in $Trans_{auto}$ is 2048. We use a Transformer with 8 heads.

$BERT_{non}$ and $Trans_{non}$ are constructed according to Liu and Lapata (2019). All documents and summaries are truncated to 512 tokens for training. $BERT_{non}$ and $Trans_{non}$ are trained for 50,000 steps, with gradients accumulated every two steps. We use Adam as the optimizer and the learning rate is set to 2e-3.

$BERT_{match}$ is trained as in Zhong et al. (2020). It uses the base version of BERT as the base model. We use the Adam optimizer with warm-up; the learning rate schedule follows Vaswani et al. (2017).

# A.3.2 Abstractive Summarizers

$L2L$, $L2L_{ptr}$ and $L2L_{ptr}^{cov}$ are trained using the PyTorch reproduction of the code of See et al. (2017). We use the same vocabulary size (50k), hidden state dimension (256) and word embedding dimension (128) as in the paper. All three models are trained for at most 650,000 steps. We use Adagrad to train the models with a learning rate of 0.15.

$BE2T$ and $T2T$ are constructed according to Liu and Lapata (2019). We use two separate optimizers for the decoder and encoder of $BE2T$ to offset the mismatch between encoder and decoder, since the former is pre-trained while the latter is not. The learning rates for the encoder and decoder optimizers are 0.002 and 0.2 respectively.
$BE2T$ and $T2T$ are trained for 200,000 steps with gradient accumulation every five steps.

BART uses the large pre-trained sequence-to-sequence model of Lewis et al. (2019). The total number of fine-tuning steps is set to 20,000, with 500 warm-up steps. We use Adam as the optimizer with a learning rate of 3e-05.

# A.4 In-dataset ROUGE results for all models

Tab. 7 displays in-dataset ROUGE-1 F1, ROUGE-2 F1, and ROUGE-L F1 scores.

# A.5 The ROUGE-1 F1 score difference of all model pairs which are meaningful to compare

The holistic and fine-grained results of pair-wise comparison are displayed in Tab. 10.
| Type | Model | CNNDM R1 | CNNDM R2 | CNNDM RL | XSUM R1 | XSUM R2 | XSUM RL | PubMed R1 | PubMed R2 | PubMed RL | BigPatent B R1 | BigPatent B R2 | BigPatent B RL | Reddit R1 | Reddit R2 | Reddit RL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ext. | $LSTM_{non}$ | 41.36 | 18.81 | 37.73 | 19.51 | 3.10 | 14.50 | 42.98 | 16.59 | 38.28 | 39.29 | 13.07 | 32.61 | 20.46 | 5.05 | 16.33 |
| Ext. | $Trans_{non}$ | 40.84 | 18.23 | 37.09 | 15.74 | 1.67 | 11.58 | 38.45 | 13.28 | 34.16 | 34.41 | 10.05 | 28.75 | 16.25 | 2.60 | 12.57 |
| Ext. | $Trans_{auto}$ | 41.35 | 18.77 | 37.75 | 19.29 | 2.80 | 14.21 | 42.74 | 16.34 | 38.05 | 38.76 | 12.60 | 32.17 | 18.55 | 3.44 | 14.62 |
| Ext. | $BERT_{non}$ | 42.69 | 19.88 | 38.99 | 21.76 | 4.24 | 16.00 | 38.74 | 13.62 | 34.48 | 35.85 | 11.05 | 29.97 | 21.84 | 5.21 | 17.15 |
| Ext. | $BERT_{match}$ | 44.26 | 20.58 | 40.40 | 24.97 | 4.76 | 18.48 | 41.19 | 14.91 | 36.73 | 38.89 | 12.82 | 32.48 | 25.32 | 6.16 | 20.17 |
| Abs. | $L2L$ | 32.80 | 12.84 | 30.34 | 28.31 | 8.71 | 22.30 | 27.84 | 7.45 | 25.69 | 30.46 | 9.76 | 27.61 | 16.89 | 1.24 | 13.63 |
| Abs. | $L2L_{ptr}$ | 37.06 | 15.96 | 33.74 | 29.67 | 9.58 | 23.40 | 32.04 | 10.38 | 28.97 | 31.03 | 9.92 | 25.35 | 21.32 | 4.46 | 17.14 |
| Abs. | $L2L_{ptr}^{cov}$ | 39.95 | 17.54 | 36.25 | 28.83 | 8.83 | 22.62 | 35.27 | 11.89 | 31.92 | 35.90 | 12.31 | 32.78 | 21.28 | 4.39 | 17.22 |
| Abs. | $T2T$ | 39.90 | 17.66 | 37.08 | 29.01 | 9.13 | 22.77 | 30.71 | 8.10 | 27.97 | 42.94 | 16.75 | 37.06 | 19.96 | 3.36 | 15.60 |
| Abs. | $BE2T$ | 41.34 | 18.98 | 38.41 | 38.99 | 16.64 | 31.23 | 37.11 | 13.38 | 33.72 | 43.10 | 17.11 | 37.34 | 26.66 | 7.00 | 21.21 |
| Abs. | $BART$ | 44.75 | 21.69 | 41.46 | 44.73 | 21.99 | 37.02 | 45.02 | 16.94 | 41.17 | 45.78 | 18.31 | 38.98 | 34.00 | 11.88 | 26.91 |
Table 7: Representative summarizers studied in this paper and their corresponding performance (ROUGE-1 F1, ROUGE-2 F1, ROUGE-L F1) on different datasets.

# A.6 Cross-dataset factuality results of all models

The cross-dataset Factcc results for abstractive models are shown in Tab. 8 and the Factcc results for extractive models are shown in Tab. 9.

# A.7 Code URLs

# A.7.1 Training code URLs

The models and their training code URLs are listed below:

$LSTM_{non}$ and $Trans_{auto}$ are trained with the code from Zhong et al. (2019a); the code URL is https://github.com/maszhongming/Effective_Extractive_Summarization.

We use the code from Liu and Lapata (2019) for $BERT_{non}$, $Trans_{non}$, $BE2T$ and $T2T$. The code URL is https://github.com/nlpyang/PreSumm.

$BERT_{match}$ uses the code from Zhong et al. (2020); the code URL is https://github.com/maszhongming/MatchSum.

$L2L$, $L2L_{ptr}$ and $L2L_{ptr}^{cov}$ are trained with the code of See et al. (2017); the code URL is https://github.com/atulkum/pointer_summarizer.

We use the code in fairseq (Ott et al., 2019) to fine-tune BART; the code URL is https://github.com/pytorch/fairseq/tree/master/examples/bart.

# A.7.2 Evaluation code URLs

The evaluation metric code URLs are listed below:

We use pyrouge (https://github.com/bheinzerling/pyrouge) to evaluate the ROUGE performance of models.

The URL for Factcc (Krysciński et al., 2019) is https://github.com/salesforce/factCC.

The URL for the other dataset-bias metrics is https://github.com/zide05/CDEvalSumm/tree/master/Data-bias-metrics.
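All reported ROUGE numbers come from the official pyrouge toolkit linked above. As a toy illustration of the core quantity behind the metric (not a substitute for the official tooling), ROUGE-1 F1 is simply a clipped unigram-overlap F1 between a candidate and a reference:

```python
from collections import Counter

def rouge1_f1(candidate_tokens, reference_tokens):
    """Unigram-overlap F1 (ROUGE-1 F1) with clipped counts -- a minimal
    stand-in to build intuition for the scores in Tab. 6 and Tab. 7."""
    cand, ref = Counter(candidate_tokens), Counter(reference_tokens)
    # Each candidate token counts at most as often as it appears in the reference.
    overlap = sum(min(c, ref[t]) for t, c in cand.items())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat".split(),
                "the cat sat on the mat".split()))  # ≈ 0.667
```

The official package additionally handles stemming, sentence splitting, and ROUGE-2/ROUGE-L, which is why pyrouge (a wrapper around the Perl ROUGE-1.5.5 script) is used for the actual evaluation.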
| Model | Train | CNN. | XSUM | Pubm. | Patent B | Red. | avg |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $L2L$ | CNN | 68.6 | 71.1 | 73.3 | 69.9 | 53.9 | 67.4 |
| $L2L$ | XSUM | 13.4 | 23.5 | 18.1 | 13.2 | 31.0 | 19.8 |
| $L2L$ | Pubm. | 61.0 | 70.0 | 62.8 | 78.6 | 46.6 | 63.8 |
| $L2L$ | Patent B | 94.4 | 94.3 | 89.0 | 71.9 | 91.0 | 88.1 |
| $L2L$ | Red. | 20.9 | 40.2 | 11.1 | 13.2 | 50.9 | 27.3 |
| $L2L$ | avg | 51.7 | 59.8 | 50.9 | 49.4 | 54.7 | 53.3 |
| $L2L_{ptr}$ | CNN | 89.4 | 91.3 | 92.2 | 91.7 | 83.5 | 89.6 |
| $L2L_{ptr}$ | XSUM | 6.3 | 17.8 | 9.0 | 8.2 | 23.2 | 12.9 |
| $L2L_{ptr}$ | Pubm. | 77.6 | 80.7 | 81.5 | 75.1 | 85.9 | 80.2 |
| $L2L_{ptr}$ | Patent B | 65.2 | 60.3 | 70.9 | 62.8 | 71.0 | 66.0 |
| $L2L_{ptr}$ | Red. | 37.2 | 21.5 | 55.2 | 62.6 | 61.1 | 47.5 |
| $L2L_{ptr}$ | avg | 55.2 | 54.3 | 61.8 | 60.1 | 65.0 | 59.2 |
| $L2L_{ptr}^{cov}$ | CNN | 95.9 | 94.5 | 90.9 | 96.9 | 94.6 | 94.6 |
| $L2L_{ptr}^{cov}$ | XSUM | 7.4 | 18.1 | 11.0 | 7.6 | 6.5 | 10.1 |
| $L2L_{ptr}^{cov}$ | Pubm. | 70.7 | 75.6 | 76.6 | 67.9 | 75.4 | 73.2 |
| $L2L_{ptr}^{cov}$ | Patent B | 67.0 | 63.3 | 64.6 | 61.6 | 77.4 | 66.8 |
| $L2L_{ptr}^{cov}$ | Red. | 27.4 | 23.5 | 42.9 | 49.7 | 62.2 | 41.1 |
| $L2L_{ptr}^{cov}$ | avg | 53.7 | 55.0 | 57.2 | 56.7 | 63.2 | 57.2 |
| $T2T$ | CNN | 72.4 | 75.7 | 71.5 | 71.8 | 70.5 | 72.4 |
| $T2T$ | XSUM | 9.7 | 22.6 | 10.8 | 9.9 | 19.1 | 14.4 |
| $T2T$ | Pubm. | 58.5 | 59.3 | 56.2 | 72.3 | 34.9 | 56.2 |
| $T2T$ | Patent B | 79.2 | 81.2 | 84.4 | 68.7 | 73.9 | 77.5 |
| $T2T$ | Red. | 34.8 | 35.7 | 50.6 | 44.6 | 52.5 | 43.6 |
| $T2T$ | avg | 50.9 | 54.9 | 54.7 | 53.5 | 50.2 | 52.8 |
| $BE2T$ | CNN | 78.7 | 83.9 | 87.7 | 92.1 | 78.7 | 84.2 |
| $BE2T$ | XSUM | 14.5 | 21.1 | 29.8 | 8.7 | 31.3 | 21.1 |
| $BE2T$ | Pubm. | 55.4 | 58.7 | 70.8 | 71.7 | 56.4 | 62.6 |
| $BE2T$ | Patent B | 85.4 | 88.4 | 80.3 | 66.5 | 82.0 | 80.6 |
| $BE2T$ | Red. | 17.2 | 25.7 | 25.1 | 30.0 | 50.3 | 29.6 |
| $BE2T$ | avg | 50.2 | 55.6 | 58.7 | 53.8 | 59.8 | 55.6 |
| $BART$ | CNN | 69.9 | 77.9 | 87.4 | 84.1 | 90.2 | 81.9 |
| $BART$ | XSUM | 35.5 | 24.7 | 36.1 | 50.1 | 50.7 | 39.4 |
| $BART$ | Pubm. | 69.5 | 61.5 | 58.4 | 61.3 | 94.1 | 69.0 |
| $BART$ | Patent B | 52.1 | 53.8 | 69.0 | 67.4 | 76.8 | 63.8 |
| $BART$ | Red. | 59.6 | 50.3 | 69.1 | 49.3 | 44.2 | 54.5 |
| $BART$ | avg | 57.3 | 53.6 | 64.0 | 62.4 | 71.2 | 61.7 |
Table 8: Factcc results for abstractive models.
| Model | Train | CNN. | XSUM | Pubm. | Patent B | Red. | avg |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $LSTM_{non}$ | CNN | 99.2 | 99.9 | 96.0 | 99.1 | 95.2 | 97.9 |
| $LSTM_{non}$ | XSUM | 84.1 | 94.3 | 90.3 | 81.4 | 94.1 | 88.9 |
| $LSTM_{non}$ | Pubm. | 70.5 | 84.3 | 80.8 | 65.1 | 89.0 | 77.9 |
| $LSTM_{non}$ | Patent B | 86.1 | 96.0 | 90.9 | 74.1 | 96.0 | 88.6 |
| $LSTM_{non}$ | Red. | 81.0 | 92.1 | 86.9 | 64.6 | 90.2 | 83.0 |
| $LSTM_{non}$ | avg | 84.2 | 93.3 | 89.0 | 76.8 | 92.9 | 87.2 |
| $Trans_{non}$ | CNN | 100.0 | 100.0 | 98.0 | 99.1 | 100.0 | 99.4 |
| $Trans_{non}$ | XSUM | 99.8 | 100.0 | 97.4 | 98.2 | 100.0 | 99.1 |
| $Trans_{non}$ | Pubm. | 97.7 | 98.8 | 95.1 | 94.7 | 100.0 | 97.3 |
| $Trans_{non}$ | Patent B | 98.3 | 99.8 | 96.3 | 97.4 | 99.5 | 98.3 |
| $Trans_{non}$ | Red. | 90.3 | 94.1 | 94.1 | 86.7 | 96.3 | 92.3 |
| $Trans_{non}$ | avg | 97.2 | 98.6 | 96.2 | 95.2 | 99.2 | 97.3 |
| $Trans_{auto}$ | CNN | 98.1 | 100.0 | 91.3 | 93.5 | 100.0 | 96.6 |
| $Trans_{auto}$ | XSUM | 86.8 | 99.3 | 82.9 | 69.9 | 100.0 | 87.8 |
| $Trans_{auto}$ | Pubm. | 87.5 | 99.6 | 79.0 | 64.4 | 99.7 | 86.1 |
| $Trans_{auto}$ | Patent B | 90.7 | 99.8 | 85.5 | 68.8 | 99.7 | 88.9 |
| $Trans_{auto}$ | Red. | 79.4 | 98.7 | 79.6 | 56.4 | 98.1 | 82.5 |
| $Trans_{auto}$ | avg | 88.5 | 99.5 | 83.7 | 70.6 | 99.5 | 88.4 |
| $BERT_{non}$ | CNN | 99.6 | 99.9 | 97.3 | 98.2 | 98.6 | 98.7 |
| $BERT_{non}$ | XSUM | 98.4 | 99.7 | 96.6 | 95.7 | 99.9 | 98.1 |
| $BERT_{non}$ | Pubm. | 95.3 | 99.3 | 95.1 | 94.3 | 99.5 | 96.7 |
| $BERT_{non}$ | Patent B | 97.0 | 99.0 | 96.0 | 94.8 | 99.1 | 97.2 |
| $BERT_{non}$ | Red. | 97.0 | 98.9 | 95.3 | 91.9 | 98.8 | 96.4 |
| $BERT_{non}$ | avg | 97.5 | 99.4 | 96.1 | 95.0 | 99.2 | 97.4 |
| $BERT_{match}$ | CNN | 99.8 | 99.4 | 92.9 | 95.7 | 99.1 | 97.4 |
| $BERT_{match}$ | XSUM | 99.7 | 99.5 | 93.2 | 95.1 | 98.8 | 97.3 |
| $BERT_{match}$ | Pubm. | 99.7 | 99.2 | 93.1 | 95.2 | 99.3 | 97.3 |
| $BERT_{match}$ | Patent B | 99.7 | 99.0 | 93.0 | 94.5 | 98.4 | 96.9 |
| $BERT_{match}$ | Red. | 99.7 | 99.3 | 93.1 | 96.1 | 99.3 | 97.5 |
| $BERT_{match}$ | avg | 99.7 | 99.3 | 93.0 | 95.3 | 99.0 | 97.3 |
Table 9: Factcc results for extractive models.
analysis aspect
model type
compare modelsABS
holistic analysis
fine-grain analysis
ROUGEoriginCNN.430.50.33.21.53.0
Xsum3.41.43.44.20.12.5
+ +Table 10: The difference of ROUGE-1 F1 scores between different models pairs. Every column of the table represents the compared result of one pair of models. The line of holistic analysis displays the overall stiffness and stableness of compared models. The rest of the table is the fine-grained results, the first and third lines of which are the origin compared result $(\mathbf{U}_{\mathbf{A}} - \mathbf{U}_{\mathbf{B}}$ for models pairs $A$ and $B$ ) and the second and fourth lines are the normalized compared result $(\hat{\mathbf{U}}_{\mathbf{A}} - \hat{\mathbf{U}}_{\mathbf{B}}$ for models pairs $A$ and $B$ ). For all heatmap, 'grey' represents positive, 'red' represents negative and 'white' represents approximately zero. \ No newline at end of file diff --git a/cdevalsummanempiricalstudyofcrossdatasetevaluationforneuralsummarizationsystems/images.zip b/cdevalsummanempiricalstudyofcrossdatasetevaluationforneuralsummarizationsystems/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..8b6eead90b80fa938820063105f5f993f5c431c2 --- /dev/null +++ b/cdevalsummanempiricalstudyofcrossdatasetevaluationforneuralsummarizationsystems/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6882d93fb11661ea4e520facb69b6d43203d1475cf2561c0a8b325b07545fb50 +size 795248 diff --git a/cdevalsummanempiricalstudyofcrossdatasetevaluationforneuralsummarizationsystems/layout.json b/cdevalsummanempiricalstudyofcrossdatasetevaluationforneuralsummarizationsystems/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b87ded96cedf338fd22a38b52eb03cd6f66c2e61 --- /dev/null +++ b/cdevalsummanempiricalstudyofcrossdatasetevaluationforneuralsummarizationsystems/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1821b0c8ba5255b2e003d94ee82f6d473266f1afe2e8387903c9c4315bbcf8f0 +size 522089 diff --git 
a/characterizingthevalueofinformationinmedicalnotes/a646717f-1d43-497c-b94a-7f4759961491_content_list.json b/characterizingthevalueofinformationinmedicalnotes/a646717f-1d43-497c-b94a-7f4759961491_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..52e937c6a9ba7dc8445f0f6cd83a9b3dbb3e8432 --- /dev/null +++ b/characterizingthevalueofinformationinmedicalnotes/a646717f-1d43-497c-b94a-7f4759961491_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3c3500dfd19a1696219d6ecc115a2b981571ae765e0b9052f5199c81e509c71b +size 76655 diff --git a/characterizingthevalueofinformationinmedicalnotes/a646717f-1d43-497c-b94a-7f4759961491_model.json b/characterizingthevalueofinformationinmedicalnotes/a646717f-1d43-497c-b94a-7f4759961491_model.json new file mode 100644 index 0000000000000000000000000000000000000000..3e6547fdb87649bcd402cc4845ec55b40928bd8e --- /dev/null +++ b/characterizingthevalueofinformationinmedicalnotes/a646717f-1d43-497c-b94a-7f4759961491_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a7203d141afeb627f07f763a9dbf79161ac72ea950d4013d2f214fa2a6cf4e60 +size 96671 diff --git a/characterizingthevalueofinformationinmedicalnotes/a646717f-1d43-497c-b94a-7f4759961491_origin.pdf b/characterizingthevalueofinformationinmedicalnotes/a646717f-1d43-497c-b94a-7f4759961491_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..42d6fba6649ee8c194bc1eb119e7affdfad3aaf5 --- /dev/null +++ b/characterizingthevalueofinformationinmedicalnotes/a646717f-1d43-497c-b94a-7f4759961491_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:97c5df1a74a026c99aac6b21e39a0919426387585fd3ea145b166269fc0b655c +size 684191 diff --git a/characterizingthevalueofinformationinmedicalnotes/full.md b/characterizingthevalueofinformationinmedicalnotes/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..6e86e42036a96250a9e139e0ff12202fdd90c66b --- /dev/null +++ b/characterizingthevalueofinformationinmedicalnotes/full.md @@ -0,0 +1,307 @@

# Characterizing the Value of Information in Medical Notes

Chao-Chun Hsu $^{1}$ , Shantanu Karnwal $^{2}$ , Sendhil Mullainathan $^{3}$ , Ziad Obermeyer $^{4}$ , Chenhao Tan $^{2}$

1 University of Chicago, 2 University of Colorado Boulder
3 Chicago Booth School of Business, 4 University of California, Berkeley
chaochunh@uchicago.edu
{shantanu.karnwal, chenhao.tan}@colorado.edu
sendhil@chicagobooth.edu, zobermeyer@berkeley.edu

# Abstract

Machine learning models depend on the quality of input data. As electronic health records are widely adopted, the amount of data in health care is growing, along with complaints about the quality of medical notes. We use two prediction tasks, readmission prediction and in-hospital mortality prediction, to characterize the value of information in medical notes. We show that, as a whole, medical notes provide additional predictive power over structured information only for readmission prediction. We further propose a probing framework to select parts of notes that enable more accurate predictions than using all notes, even though the selected information leads to a distribution shift from the training data ("all notes"). Finally, we demonstrate that models trained on the selected valuable information achieve even better predictive performance, with only $6.8\%$ of all the tokens, for readmission prediction.

# 1 Introduction

As electronic health records (EHRs) are widely adopted in health care, medicine is increasingly an information science (Stead et al., 2011; Shortliffe, 2010; Krumholz, 2014): obtaining and analyzing information is critical for the diagnosis, prognosis, treatment, and prevention of disease.
Although EHRs may increase the accuracy of storing structured information (e.g., lab results), there are growing complaints about unstructured medical notes (henceforth "notes") (Gawande, 2018; Payne et al., 2015; Hartzband et al., 2008). + +These complaints can be grouped into two perspectives: consumption and production. On the one hand, information overload poses a critical challenge on the consumption side. That is, the sheer amount of information makes it difficult to glean meaningful information from EHRs, including notes (Weir and Nebeker, 2007). + +On the other hand, from the perspective of production, for every hour spent on patient interaction, physicians have an added one-to-two hours finishing the progress notes and reviewing results among other things, without extra compensation (Patel et al., 2018). The additional work contributes to physician burnout, along with low-quality notes and even errors in the notes. Consequently, physicians tend to directly copy large volumes of patient data into notes, but may fail to record information only available through interaction with patients. For instance, they may miss the wheezing breath for the diagnosis of the chronic obstructive pulmonary disease, or fail to have engaging conversations for evaluating signs of depression (Zeng, 2016). + +While the NLP community has focused on alleviating the challenges in analyzing information (e.g., information overload), we argue that it is equally important to help caregivers obtain and record valuable information in the first place. We aim to take a first step towards this direction by characterizing the value of information in medical notes computationally. In this work, we define valuable information as information that is useful for evaluating medical conditions and making medical decisions. + +To do that, we first examine the value of notes as a whole conditioned on structured information. 
While narrative texts can potentially provide valuable information only accessible through physician-patient interaction, our analysis addresses the typical complaint that notes contain too many direct copies of structured information such as lab results. Therefore, a natural question is whether notes provide additional predictive power for medical decisions beyond structured information. By systematically studying two critical tasks, readmission prediction and in-hospital mortality prediction, we demonstrate that notes are valuable for readmission prediction, but not useful for mortality prediction. Our results differ from previous studies demonstrating the effectiveness of notes in mortality prediction, partly because Ghassemi et al. (2014) use a limited set of structured variables and thus achieve limited predictive power with structured information alone.

We then develop a probing framework to evaluate the prediction performance of parts of notes selected by value functions. We hypothesize that not all components of notes are equally valuable and that some parts of notes can provide stronger predictive power than the whole. We find that discharge summaries are especially predictive for readmission, while nursing notes are most valuable for mortality prediction. Furthermore, we leverage hypotheses from the medical literature to develop interpretable value functions to identify valuable sentences in notes. Similarity with prior notes turns out to be powerful: a mix of the most and least similar sentences provides better performance than using all notes, despite containing only a fraction of the tokens.

Building on these findings, we finally demonstrate the power of valuable information beyond the probing framework. We show that classification models trained on the selected valuable information alone provide even better predictive power than using all notes.
In other words, our interpretable value functions can effectively filter noisy information in notes and lead to better models. + +We hope that our work encourages future work in understanding the value of information and ultimately improving the quality of medical information obtained and recorded by caregivers, because information is after all created by people. + +# 2 Our Predictive Framework + +We investigate the value of notes through a predictive framework. We consider two prediction tasks using MIMIC-III: readmission prediction and mortality prediction. For each task, we examine two questions: 1) does a model trained on both notes and structured information outperform the model with structured information alone? (§3) 2) using a model trained on all notes, are there interpretable ways to identify parts of notes that are more valuable than all notes? (§4) + +# 2.1 An Overview of MIMIC-III + +MIMIC-III is a freely available medical database of de-identified patient records. This dataset includes basic information about patients such as admission details and demographics, which allows us to identify outcomes of interest such as mortality and readmission. It also contains detailed information that characterizes the patients' health history at the hospital, known as events, including laboratory events, charting events, and medical notes. The data derived from these events are elicited while patients are in the hospital. Our goal is to characterize the value of such elicited information, in particular, notes, through predictive experiments. Next, we break down the information into two categories: structured vs. unstructured. + +Structured information. The structured information includes the numeric and categorical results of medical measurements and evaluations of patients. For example, in MIMIC-III, structured information includes status monitoring, e.g., respiration rate and blood glucose, and fluids that have been administered to or extracted from the patients. 
Notes (unstructured texts). Caregivers, including nurses and physicians, record information based on their interaction with patients in notes. There are fifteen types of notes in MIMIC-III, including nursing notes and physician notes. Table 1 shows the number of notes in each type and their average length.

Not all admissions have notes from caregivers. After filtering patients under 18 and other invalid data (see details in the supplementary material), the discharge summary appears in most admissions ( $96.7\%$ ); however, only $0.1\%$ of admissions have consult notes. The most common types of notes include nursing, radiology, ECG, and physician. There is also significant variation in length between different types of notes. For instance, a discharge summary is on average more than 8 times as long as a nursing note.

Fig. 1 presents the distribution of the total number of tokens across all types of notes within one admission. As discussed in the introduction, a huge amount of information (11,135 tokens on average) is generated in the form of unstructured texts for a patient in an admission. We hypothesize that not all of it is useful for medical purposes.
| Category | Count | % | Len. |
| --- | ---: | ---: | ---: |
| Nursing | 506,528 | 73.0 | 241 |
| Radiology | 338,834 | 83.3 | 449 |
| ECG | 123,042 | 61.3 | 43 |
| Physician | 92,426 | 18.2 | 1369 |
| Discharge summary | 47,572 | 96.7 | 2195 |
| Echo | 34,064 | 45.8 | 464 |
| Respiratory | 32,798 | 8.1 | 205 |
| Nutrition | 7,971 | 6.4 | 602 |
| General | 7,710 | 6.4 | 290 |
| Rehab Services | 5,321 | 4.6 | 622 |
| Social Work | 2,294 | 2.8 | 446 |
| Case Management | 939 | 1.3 | 260 |
| Pharmacy | 97 | 0.1 | 512 |
| Consult | 78 | 0.1 | 1206 |
Table 1: Statistics of note events in MIMIC-III after data preprocessing. "%" denotes the proportion of admissions having this type of note. "Len." denotes the average length in tokens.

![](images/011a43139dd1b093264b00c960e7f1c724e54e1fa508f25d3b1c663c19642af1.jpg)
Figure 1: Distribution of token length of admissions. Average: 11,135 tokens. Median: 6,437 tokens.

# 2.2 Task Formulation & Data Representation

We consider the following two prediction tasks related to important medical decisions.

- Readmission prediction. We aim to predict whether a patient will be re-admitted to the hospital within 30 days after being discharged, given the information collected within one admission.
- In-hospital mortality prediction. We aim to predict whether a patient dies in the hospital within one admission. Following Ghassemi et al. (2014), we consider three time periods: 24 hours, 48 hours, and retrospective. The task is most difficult but most useful with only information from the first 24 hours. We thus focus on that time period in the main paper (see the supplementary material for 48-hour and retrospective results).

Formally, our data is a collection of time series with labels corresponding to each task, $\mathcal{D} = \{(E_i,y_i)\}_{i = 1}^N$ , where $N$ is the number of admissions (instances). For each collection of time series $E = \{(h_t,\tau_t,x_t)\}_{t = 1}^T$ of an admission, $h_t$ represents the timestamp (e.g., $h_t = 4.5$ means 4.5 hours after admission), $\tau_t \in \{0,1\}$ captures the type of an event (0 indicates that the event contains structured variables and 1 indicates that the event is a note), and $x_t$ stores the value of the corresponding event. Our goal is to predict the label $y \in \{0,1\}$ : in readmission prediction, $y$ represents whether a patient was re-admitted within a month. In mortality prediction, $y$ represents whether a patient died in this admission.
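The formalization above can be sketched as a small data structure. This is a minimal illustration with hypothetical toy values (the event names and measurements are made up), not the authors' code:

```python
from dataclasses import dataclass
from typing import List, Union

STRUCTURED, NOTE = 0, 1  # event types (tau)

@dataclass
class Event:
    h: float                    # hours since admission (the timestamp h_t)
    tau: int                    # 0 = structured variables, 1 = note
    x: Union[List[float], str]  # vector of measurements, or note text

@dataclass
class Admission:
    events: List[Event]  # the time series E = {(h_t, tau_t, x_t)}
    y: int               # label: readmission (or in-hospital mortality)

# A toy admission: one lab event and one nursing note, readmitted within 30 days.
adm = Admission(
    events=[
        Event(h=4.5, tau=STRUCTURED, x=[98.6, 120.0]),
        Event(h=6.0, tau=NOTE, x="Pt resting comfortably, lungs clear."),
    ],
    y=1,
)

# Splitting an admission into E^{tau=0} and E^{tau=1}, as done in Sections 2.2 and 4.
structured_events = [e for e in adm.events if e.tau == STRUCTURED]
note_events = [e for e in adm.events if e.tau == NOTE]
```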
As a result, we obtained a total of 37,798/33,930 unique patients and 46,968/42,271 admissions for readmission/mortality prediction (24 hours).

Representing structured information. As structured information is sparse over timestamps, we filter event types that occur fewer than 100,000 times (767 event types remaining). Following Harutyunyan et al. (2017), we represent the time series data of structured variables as a vector by extracting basic statistics over different time windows. Specifically, for events of structured variables, $E_{i}^{\tau = 0} = \{(h_{t},\tau_{t},x_{t})|\tau_{t} = 0\}_{t = 1}^{T}$ where $x_{t}\in \mathbb{R}^{d}, d = 767$ , we apply six statistical functions on seven sub-periods to generate $e_i\in \mathbb{R}^{d\times 7\times 6}$ as the representation of structured variables. The six statistical functions are maximum, minimum, mean, standard deviation, skew, and number of measurements. The seven sub-periods are the entire time period, the first (10%, 25%, 50%) of the time period, and the last (10%, 25%, 50%) of the time period. We then impute missing values with the mean of the training data and apply min-max normalization.

Representing notes. For notes in an admission, we apply the sentence and word tokenizers in the NLTK toolkit to each note (Loper and Bird, 2002). See §2.3 for details on how we use the tokenized output for different machine learning models.

# 2.3 Experimental Setup

Finally, we discuss the experimental setup and models that we explore in this work. Our code is available at https://github.com/BoulderDS/value-of-medical-notes.

Data split. Following the training and test split of patients in Harutyunyan et al. (2019), we use $85\%$ of the patients for training and the remaining $15\%$ for testing. To generate the validation set, we first split off $20\%$ of the patients from the training set and then collect the admissions under each patient to prevent information leakage for the same patient.

Models. We consider the following models.
- Logistic regression (LR). For notes, we use tfidf representations. To incorporate structured information, we simply concatenate the structured variables with the $\ell_2$ -normalized tfidf vector from notes. We use scikit-learn (Pedregosa et al., 2011) and apply $\ell_2$ regularization to prevent overfitting. We search the hyperparameter $C$ in $\{2^x | x \in \mathbb{Z}, -11 \leq x \leq 0\}$ .
- Deep averaging networks (DAN) (Iyyer et al., 2015). We use the average embedding of all tokens in the notes to represent the unstructured information, which can be considered a deep version of bag-of-words methods. Similar to logistic regression, we concatenate the structured variables with the average embedding of words in notes to incorporate structured information.
- GRU-D (Che et al., 2018). The key innovation of GRU-D is to account for missing data in EHRs. It imputes a missing value by considering all the information available so far, including how much time has elapsed since the last observation and all the previous history. Similar to DAN, we use the average embedding of tokens to represent notes. See details of GRU-D in the supplementary material.

Although it is difficult to apply the family of BERT models to this dataset due to their input length limitation compared to the large number of tokens from all medical notes, we experiment with ClinicalBERT (Alsentzer et al., 2019) on the selected valuable information in §4.

Evaluation metrics. ROC-AUC is often used in prior work on MIMIC-III (Ghassemi et al., 2014; Harutyunyan et al., 2017). However, when the number of negative instances is much larger than the number of positive instances, the false positive rate used in ROC-AUC becomes insensitive to changes in the number of false positives. Therefore, the area under the precision-recall curve (PR-AUC) is considered more informative than ROC-AUC (Davis and Goadrich, 2006).
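This sensitivity difference can be illustrated with a small toy example (the labels and scores below are made up for illustration): with a single positive instance ranked second out of ten, ROC-AUC still looks strong while average precision (a common PR-AUC estimate) is mediocre.

```python
from sklearn.metrics import average_precision_score, roc_auc_score

# Toy scores: the single positive instance is ranked second out of ten.
y_true = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
y_score = [0.8, 0.9, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]

roc = roc_auc_score(y_true, y_score)           # 8/9 ~ 0.889: looks strong
ap = average_precision_score(y_true, y_score)  # 0.5: precision is mediocre
```

The one false positive ranked above the positive barely moves the false positive rate (1 of 9 negatives), but it halves precision at the point where the positive is recovered.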
In our experiments, the positive fraction is only $7\%$ and $12\%$ in readmission prediction and mortality prediction respectively. As precision is often critical in medical decisions, we also present precision at $1\%$ and $5\%$ .

# 3 Do Medical Notes Add Value over Structured Information?

Our first question is concerned with whether medical notes provide any additional predictive value over structured variables. To properly address this question, we need a strong baseline with structured information. Therefore, we include 767 types of structured variables to represent structured information (§2.2). Overall, our results are mixed for readmission prediction and in-hospital mortality prediction. We present results from GRU-D in the supplementary material because GRU-D results reveal similar trends and usually underperform logistic regression or DAN in our experiments.

Notes outperform structured variables in PR-AUC and ROC-AUC in readmission prediction (Fig. 2a-2d). For both logistic regression and DAN, notes are more predictive than structured information in readmission prediction based on PR-AUC and ROC-AUC. In fact, in most cases, structured variables provide little additional predictive power over notes (except PR-AUC with DAN). Interestingly, we observe mixed results for precision-based metrics. Structured information can outperform notes in identifying the patients that are most likely to be readmitted. For DAN, combining notes and structured information provides a significant boost in precision at $1\%$ compared to either type of information alone, with an improvement of $16\%$ and $13\%$ in absolute precision over notes and structured variables respectively.

Structured information dominates notes in mortality prediction (Fig. 2e-2h). We observe only marginal additional predictive value in mortality prediction from incorporating notes alongside structured information. In our experiments, the improvement is negligible across all metrics.
This result differs from Ghassemi et al. (2014). We believe this is because Ghassemi et al. (2014) only consider age, gender, and the SAPS II score as structured information, while our work considers substantially more structured variables. It is worth noting that logistic regression with our complete set of structured variables provides better performance than DAN, and the absolute ROC-AUC (0.892) is better than the best number (0.79) in prior work (Che et al., 2018). The reason for the limited value of notes might be that mortality prediction is a relatively simple task where structured information provides unambiguous signals.

![](images/0e5272db259880d67116e51369bde5be3c549a691682cfd2f552712efa77d647.jpg)
(a) PR-AUC.

![](images/52bd2e00bcdcf16f49400548325af12a4deb1df5b9bbed885804bb1e87eb5955.jpg)
(b) ROC-AUC.

![](images/df55f939d33356e825e507c81666441fbef810418bd27b9e41529363a0109a9f.jpg)
Readmission Prediction
(c) Precision at $1\%$ .

![](images/3964c277f098ac144e7d63f7d5c8896038026981c60d26d51609e98d73092a5.jpg)
(d) Precision at $5\%$ .

![](images/2a5972d3161aff7d0f5bf5d1188361190c951372a06f4c7472f2b238acdf055e.jpg)
In-hospital Mortality Prediction (24 hours)
(e) PR-AUC.

![](images/c845611c2e536bea8ee424bca3c676b64bbb310f92507ecdc670725e65822f0b.jpg)
(f) ROC-AUC.

![](images/3d47d50f543a46c8bf090900bdb727b8356005faf6a0fb0c20a3e3d54b893610.jpg)
(g) Precision at $1\%$ .

![](images/b44bee43f854af6fc11f3aa28ff13a6b954c693202963aacff8eaaa951854a94.jpg)
(h) Precision at $5\%$ .

![](images/23fb3118b3d0512064c094116052aac1b737e7aca81e9e3a60575aa04cc20af9.jpg)
Figure 2: Results of PR-AUC/ROC-AUC/Precision at $1\%$ /Precision at $5\%$ with logistic regression (LR) and deep averaging network (DAN) models in readmission prediction and mortality prediction (24 hours). Notes are valuable for readmission prediction, but are only marginally valuable in mortality prediction.
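The note+structured combination evaluated in this section can be sketched as follows. This is a minimal sketch of the LR baseline from §2.3; the toy notes, structured vectors, and labels are hypothetical stand-ins for the MIMIC-III data:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data: three admissions.
notes = [
    "pt stable lungs clear no distress",
    "elevated troponin acute distress",
    "pt stable tolerating diet no distress",
]
structured = np.array([[0.2, 0.1], [0.9, 0.8], [0.3, 0.2]])  # e.g., summary statistics
y = np.array([0, 1, 0])  # e.g., 30-day readmission labels

# TfidfVectorizer l2-normalizes each row by default (norm="l2").
note_feats = TfidfVectorizer().fit_transform(notes).toarray()

# Concatenate normalized note features with structured features, then fit LR
# with l2 regularization, as in the paper's baseline.
X = np.hstack([note_feats, structured])
clf = LogisticRegression(penalty="l2", C=1.0).fit(X, y)
probs = clf.predict_proba(X)[:, 1]
```

In the real pipeline, the hyperparameter `C` would be searched over $\{2^x\}$ on the validation set rather than fixed.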
In sum, we find that notes contribute valuable information over structured variables in readmission prediction, but almost no additional value in mortality prediction. Note that ROC-AUC tends to be insensitive to different models and information. We thus use PR-AUC in the rest of the work to discuss the value of selected information.

# 4 Finding Needles in a Haystack: Probing for the Valuable Information

The key goal of this work is to identify valuable components within notes, as we hypothesize that not all information in notes is valuable for medical decisions, as measured by predictive power.

To identify valuable components, we leverage an existing machine learning model (e.g., models in Fig. 2) and hypothesize that the test performance is better if we only use the "valuable" components. Formally, assume that we trained a model using all notes, $f_{\mathrm{all}}$ . $S_{i}$ denotes the sentences in all notes $(E_{i}^{\tau = 1} = \{(h_{t},\tau_{t},x_{t})|\tau_{t} = 1\}_{t = 1}^{T})$ for an admission in the test set. We would like to find a subset of sentences $s_i\subset S_i$ so that $f_{\mathrm{all}}(s_i)$ provides more accurate predictions than $f_{\mathrm{all}}(S_i)$ . Note that $s_i$ by definition entails a distribution shift from the data that $f_{\mathrm{all}}$ is trained on $(S_{i})$ , because $s_i$ is much shorter than $S_{i}$ .

The challenge lies in developing interpretable ways to identify valuable content. We first compare the value of different types of notes in §4.1, which can be seen as trivial value functions based on the note type, and then propose interpretable value functions to zoom in on the content of notes (§4.2). Finally, we show that these valuable components not only provide accurate predictions with a model trained with all the notes, but also allow us to learn a model with better predictive power than that trained with all the notes (§4.3).
In other words, we can effectively remove the noise by focusing on the valuable components.

# 4.1 Discharge Summaries, Nursing Notes, and Physician Notes are Valuable

To answer our first question, we compare the effectiveness of different types of notes within the top five most common categories: nursing, radiology, ECG, physician, and discharge summary. An important challenge lies in the fact that not every admission produces all types of notes. Therefore, we conduct pairwise comparisons that ensure an admission has both types of notes. Specifically, for each pair of note types $(t_1, t_2)$ , we choose admissions with both types of notes and make predictions using $s_{t_1}$ and $s_{t_2}$ respectively, where $s_t$ refers to all the sentences in notes of type $t$ . Each cell in Fig. 3 indicates $\mathrm{PR\text{-}AUC}(f_{\mathrm{all}}(s_{t_{\mathrm{row}}}), y) - \mathrm{PR\text{-}AUC}(f_{\mathrm{all}}(s_{t_{\mathrm{column}}}), y)$ with LR (see the supplementary material for DAN results, which are similar to LR).

![](images/b1892bb42944077c13b32c9fbe3491ffca1945d5299f198bb2ce2cd0a12990e4.jpg)
(a) Readmission prediction.

![](images/51eb56298290a1627526f8752af01e2f347f1b59aac4d6df7816aec9e7b274bc.jpg)
(b) Mortality prediction (24 hrs).
Figure 3: Pairwise comparisons between different types of notes with logistic regression (each cell shows $\mathrm{PR\text{-}AUC}(f_{\mathrm{all}}(s_{t_{\mathrm{row}}}), y) - \mathrm{PR\text{-}AUC}(f_{\mathrm{all}}(s_{t_{\mathrm{column}}}), y)$ ). To account for the differences in length, we subsample the two types of notes under comparison to be the same length and report the average values of 10 samples. Discharge summaries dominate all other types of notes in readmission prediction, while nursing notes are most useful for mortality prediction. P: Physician notes; N: Nursing notes; D: Discharge summary; R: Radiology reports; E: ECG reports.

For instance, the top right cell in Fig.
3a shows the performance difference between using only nursing notes and using only discharge summaries for admissions with both nursing notes and discharge summaries. The negative value suggests that nursing notes provide less accurate predictions (hence less valuable information) than discharge summaries in readmission prediction. Note that due to significant variance in length across note types, we subsample $s_{t_{\mathrm{row}}}$ and $s_{t_{\mathrm{column}}}$ to be the same length in these experiments.

Discharge summaries dominate other types of notes in readmission prediction. Visually, most of the dark values in Fig. 3a are associated with discharge summaries. This makes sense because discharge summaries provide a holistic view of the entire admission and are likely most helpful for predicting future readmission. Among the other four types of notes, nursing notes are the second most valuable. In comparison, physician notes, radiology reports, and ECG reports are less valuable.

Nursing notes and physician notes are more valuable for mortality prediction. For mortality prediction, nursing notes provide the best predictive power. ECG reports always have the worst results. Recall that we subsample each type of note to the same length. Hence, the lack of value in ECG reports cannot be attributed to their short length.

In summary, templated notes such as radiology reports and ECG reports are less valuable for predictive tasks in medical decisions. While physician notes are the central subject in prior work (Weir and Nebeker, 2007), nursing notes are as important for medical purposes given that there are many more nursing notes and they record patient information frequently.

# 4.2 Identifying Valuable Chunks of Notes

Next, we zoom into sentences within notes to find out which sentences are more valuable, i.e., provide better predictive power with the model trained on all notes.
We choose content from discharge summaries for readmission prediction because they are the most valuable. For mortality prediction, we select content from the last physician note, since it plays a similar role to a discharge summary. To select valuable sentences from $S_{i}$ , we propose various value functions $V$ ; for each $V$ , we choose the sentences in $S_{i}$ that score the highest under $V$ to construct $s_{i}^{V} \subset S_{i}$ . These value functions are our main subject of interest. We consider the following value functions.

- Longest sentences. Intuitively, longer sentences may contain valuable information. Hence, we use $V_{\mathrm{longest}}(s) = \mathrm{length}(s)$ , where length gives the number of tokens.
- Sentences with the highest fraction of medical terms. Medical terms are critical for communicating medical information. We develop a value function based on the fraction of medical terms in a sentence. Empirically, the fraction alone tends to choose very short sentences, so we use $V_{\mathrm{frac}}(s) = \frac{\mathrm{medical}(s)}{\mathrm{length}(s)} \cdot \sqrt{\mathrm{length}(s)}$ , where the medical terms come from OpenMedSpel (Robinson, 2014) and MTH-Med-Spel-Chek (Narayanaswamy, 2014).
- Similarity with previous notes. A significant complaint about notes is the prevalence of copy-pasting. We thus develop a value function based on similarity with previous notes. As discharge summaries are the final note within an admission, we compute the max tfidf similarity of a sentence with all previous notes. Specifically, we define $V_{\mathrm{dissimilar}}(s) = -\max_{x_k \in X} \operatorname{cossim}(s, x_k)$ , where $X$ refers to all previous notes: we find the most similar previous note to the sentence of interest and flip the sign to estimate dissimilarity. Although we hypothesize that dissimilar sentences are more valuable due to copy-pasting concerns (i.e., novelty), sentences may also be repeatedly emphasized in notes because they convey critical information. We thus also flip $V_{\text{dissimilar}}$ to choose the most similar sentences ( $V_{\text{similar}}$ ) and use $V_{\text{mix}}$ to select half of the most similar and half of the most dissimilar sentences. Similarly, we apply these value functions to the last physician note to select valuable content for mortality prediction.

![](images/73f833acd9c042681ed00ece7e719dc2e51d1f53f8175e12c39eead19564ae66.jpg)
(a) Readmission prediction.

![](images/b24355b7cdcd9b07ca0cd0bc5e31407b09932a9d97fcdbfe0208c5e09f3e7e36.jpg)
(b) Mortality prediction.

![](images/3780ae7ada2827835f765dd2fc3978336f2212afff5c278a3427029aa29d2054.jpg)
Figure 4: Performance of the selected information based on different value functions using the logistic regression (LR) model trained on all notes. Despite the distribution shift (selected content is much shorter than the training data, i.e., all notes), the selected information outperforms using all notes with either LR or DAN.

(a) First quartile.

![](images/bbd9d59a53ce4714af9bf52be38fc2dbfb164f1d1143172e587e720386fa16ae.jpg)
(b) Second quartile.

![](images/0f56d50f5b4852d88df0e09c69e3f8e8975d9976c57ee9f83bd0a166ceb107c1.jpg)
(c) Third quartile.

![](images/8c5bffcc7baba319f5bb544bc4189eb81196006d0351fccff256ab3848ae78d8.jpg)
(d) Fourth quartile.
Figure 5: Performance comparison for discharge summaries of different lengths in readmission prediction. Selecting valuable information is most useful for the fourth quartile, the longest discharge summaries.

- Important section. Finally, physicians do not treat every section of a note equally themselves, and spend more time reading the "Impression and Plan" section than other sections (Brown et al., 2014). We use whether a sentence is in this section as our final value function.
This only applies to physician notes.

In practice, sentences in medical notes can be very long. To be fair across different value functions, we truncate the selected sentences so that each value function uses the same number of tokens (see the implementation details in the supplementary material).

Parts of notes can outperform the whole. Fig. 4 shows the test performance of using different value functions to select a fixed percentage of tokens in the discharge summary or the last physician note, compared to using all notes. The underlying model is the corresponding logistic regression model. We also show the performance of using all notes with DAN as a benchmark.

Some value functions are able to select valuable information that outperforms using all notes with either logistic regression or DAN. Interestingly, we find that selected valuable information generally performs better with the LR model, which seems more robust to distribution shifts than DAN (recall that selected valuable information is much shorter than the expected test input of all notes).

In readmission prediction, the medical-term function is fairly effective early on, outperforming all notes with LR while using only $20\%$ of the discharge summary. As we include more tokens, a mix of similar and dissimilar sentences becomes more valuable and eventually becomes comparable with DAN at $45\%$ of the discharge summary. Table 2 presents an example of sentences selected by different value functions in readmission prediction using logistic regression.

In mortality prediction, the advantage of selected valuable information is even more salient.
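The interpretable value functions of §4.2 can be sketched as follows. The tokenizer, the tiny medical-term list, and the example sentences below are hypothetical stand-ins for the paper's NLTK/OpenMedSpel setup:

```python
import math

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

MEDICAL_TERMS = {"troponin", "stenosis", "regurgitation", "dyspnea"}  # toy list

def v_longest(sentence: str) -> float:
    # V_longest(s) = length(s), in tokens (whitespace tokenization for the sketch).
    return len(sentence.split())

def v_frac(sentence: str) -> float:
    # V_frac(s) = (medical(s) / length(s)) * sqrt(length(s)),
    # damping the bias of the raw fraction toward very short sentences.
    tokens = sentence.split()
    frac = sum(t.lower() in MEDICAL_TERMS for t in tokens) / len(tokens)
    return frac * math.sqrt(len(tokens))

def v_dissimilar(sentence: str, previous_notes: list) -> float:
    # V_dissimilar(s) = -max_k cossim(s, x_k) over all previous notes, in tfidf space.
    vec = TfidfVectorizer().fit(previous_notes + [sentence])
    sims = cosine_similarity(vec.transform([sentence]), vec.transform(previous_notes))
    return -float(sims.max())

previous = ["troponins negative x3 sets", "bnp elevated though decreased from prior"]
copied = "troponins negative x3 sets"  # verbatim copy of a previous note
novel = "new onset dyspnea overnight"  # not seen before
```

Under this sketch, a copy-pasted sentence scores the minimum possible value of $V_{\mathrm{dissimilar}}$ (namely $-1$), while a novel sentence with no token overlap scores $0$; $V_{\mathrm{similar}}$ and $V_{\mathrm{mix}}$ follow by flipping or combining the ranking.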
| Value Function | Prob. | Selected Sentences |
| --- | --- | --- |
| similar | 0.189 | Congestive Heart Failure - Systolic and [**Month/Day/Year**] Failure - most recent echo on [**2123-9-3**] with EF 40% 3. Valvular Disease - Moderate Aortic Stenosis - mild - moderate aortic regurgitation - mild - moderate mitral regurgitation 4. Coronary Artery Disease - [**2122-11-16**] - s/p BMS to OM2, D1, Left circumflex in [**2122-11-16**] for unstable angina and TWI in V2 - 4 - [**2123-5-24**] - NSTEMI s/p cardiac cath |
| dissimilar | 0.291 | No thrills, lifts. BNP elevated though decreased from prior. Please take tiotropium bromide (Spiriva) inhalation twice a day 2. Mom[**Name(NI) 6474**] 50 mcg / Actuation Spray, Non - Aerosol Sig : Two (2) spray Nasal twice a day. Troponins negative x3 sets. PO Q24H (every 24 hours). No S3 or S4. Resp were unlabored, no accessory muscle use. Occupation: general surgeon in [**Location (un) 4551**. Abd: Soft, NTND. EtOH: 1 glass of wine or alcoholic drink /week. [**4-16**] word dyspnea |
| mix-similar | 0.620 | Congestive Heart Failure - Systolic and [**Month/Day/Year**] Failure - most recent echo on [**2123-9-3**] with EF 40% 3. Valvular Disease - Moderate Aortic Stenosis - mild - moderate aortic regurgitation - mild - moderate mitral regurgitation 4. Coronary Artery Disease No thrills, lifts. BNP elevated though decreased from prior. Please take tiotropium bromide (Spiriva) inhalation twice a day 2. Mom[**Name (NI) 6474**] 50 mcg / Actuation Spray, Non-Aerosol Sig: Two (2) spray Nasal twice a day. Troponins negative x3 sets. PO Q24H |
Table 2: Example of selected sentences (5% of tokens) by different value functions from discharge summaries for readmission prediction. This patient was readmitted to the hospital within 30 days of discharge. Underlined sentences in the mix-similar function come from dissimilar sentences. "Prob." shows the output probability of readmission with the LR model trained on all notes given the selected sentences.

Consistent with Brown et al. (2014), "assessment and plan" is indeed more valuable than the whole note. It alone outperforms both LR and DAN with all notes. Unlike readmission prediction, sentences dissimilar to previous notes are most effective. The reason might be that dissimilar sentences give novel developments in the patient's condition that relate to the impending death. As structured information dominates notes in this task, selected information adds little value to structured information (see the supplementary material).

The effectiveness of value functions varies across lengths. To further understand the effectiveness of value functions, we break down Fig. 4a based on the length of discharge summaries. Intuitively, it would be harder to select valuable information for short summaries, and Fig. 5a confirms this hypothesis. In all the other quartiles, a value function is able to select sentences that outperform both LR and DAN using all notes. The medical-term function is most effective in the second and third quartiles. In the fourth quartile (i.e., the longest discharge summaries), dissimilar content is very helpful, which likely includes novel perspectives synthesized in discharge summaries. These observations resonate with our earlier discussion that dissimilar content contributes novel information.

# 4.3 Leveraging Valuable Information

Building on the above observations, we leverage the selected valuable information to train models
Here we include DAN with note-level attention ("DAN-Att") as a model-driven oracle weighted selection approach, although it does not lead to interpretable value functions that can inform caregivers during note-taking.

First, models trained only on discharge summaries ("last note") improve the performance over using all notes by $41\%$ (0.219 vs. 0.155), and outperform DAN and DAN-Att as well. Using medical terms and all types of similarity methods, we can outperform using all notes with models trained on only $20\%$ of the tokens of discharge summaries, that is, $6.8\%$ of all notes. Compared to Fig. 4a, by focusing exclusively on these selected $20\%$ of tokens, the model trained with selected dissimilar sentences outperforms logistic regression by $24.3\%$ (0.194 vs. 0.156), DAN by $8.2\%$ (0.194 vs. 0.178), and DAN-Att by $2\%$ (0.194 vs. 0.190). We also experiment with ClinicalBERT with a fixed number of tokens (see the supplementary material). ClinicalBERT provides comparable performance to logistic regression, and demonstrates similar qualitative trends.

Recall that medical notes dominate structured information for readmission prediction. It follows that our best models with selected valuable information also outperform the best performance obtained in §3.

![](images/4f84e450b8098c8f70d51ed0e88047005e0ccb238880238579a6894578e0f78b.jpg)
Figure 6: Performance of trained models with selected valuable information (20% of discharge summaries).

# 5 Related Work

We summarize additional related work into the following three areas.

Value of medical notes. Prior work shows that some important phenotypic characteristics can only be inferred from text reports (Shivade et al., 2014). For example, Escudie et al. (2017) observed that $92.5\%$ of information regarding autoimmune thyroiditis is only presented in text. Despite the potentially valuable information in medical notes, prior work also points out the redundancy in EHRs. Cohen et al.
(2013) proposed methods to reduce redundant content for the same patient with a summarization-like fingerprinting algorithm, and showed improvements in topic modeling. We also discuss the problem of redundancy in notes, but provide a different perspective by probing which types of information are more valuable than others using our framework.

NLP for medical notes. The NLP community has worked extensively on medical notes to alleviate information overload, ranging from summarization (McInerney et al., 2020; Liang et al., 2019; Alsentzer and Kim, 2018) to information extraction (Wiegreffe et al., 2019; Zheng et al., 2014; Wang et al., 2018). For instance, information extraction aims to automatically extract valuable information from existing medical notes. While our operationalization seems similar, our ultimate goal is to facilitate information solicitation so that medical notes contain more valuable information.

Recently, generating medical notes has attracted substantial interest and might help caregivers record information (Liu et al., 2018; Krishna et al., 2020), although these efforts do not take into account the value of information.

Predictive tasks with EHRs. Readmission prediction and mortality prediction are important tasks that have been examined in a battery of studies (Johnson et al., 2017; Ghassemi et al., 2014; Purushotham et al., 2018; Rajkomar et al., 2018). In MIMIC-III, to the best of our knowledge, we have experimented with the most extensive set of structured variables and as a result, achieved better performance even with simple models. Other critical tasks include predicting diagnosis codes (Ford et al., 2016) and length of stay (Rajkomar et al., 2018). We expect information in medical notes to be valued differently in these tasks as well.

# 6 Conclusion

Our results confirm the value of medical notes, especially for readmission prediction. We further demonstrate that parts can outperform the whole.
For instance, selected sentences from discharge summaries can better predict future readmission than using all notes and structured variables. Our work can be viewed as the reverse direction of adversarial NLP (Wallace et al., 2019): instead of generating triggers that fool NLP models, we identify valuable information in texts towards enabling humans to generate valuable texts.

Beyond confirming intuitions that the "assessment and plan" in physician notes is valuable, our work highlights the importance of nursing notes. Our results also suggest that a possible strategy to improve the value of medical notes is to help caregivers efficiently provide novel content while highlighting important prior information (mixed similarity). Substantial future work is required to achieve the long-term goal of improving the note-taking process by nudging caregivers towards obtaining and recording valuable information.

In general, the issue of effective information solicitation has been understudied by the NLP community. In addition to model advances, we need to develop human-centered approaches to collect data of better quality from people. As Hartzband et al. (2008) argued, "as medicine incorporates new technology, its focus should remain on interaction between the sick and healer." We hope that our study will encourage research on the interaction process and the note-taking process, beyond understanding the resulting information as a given. After all, people are at the center of data.

Acknowledgments. We thank the anonymous reviewers for their helpful comments. We thank the MIMIC team for continually providing invaluable datasets for the research community.

# References

Emily Alsentzer and Anne Kim. 2018. Extractive summarization of EHR discharge notes. arXiv preprint arXiv:1810.12085.
Emily Alsentzer, John Murphy, William Boag, Wei-Hung Weng, Di Jin, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical BERT embeddings.
In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72-78, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
PJ Brown, JL Marquard, B Amster, M Romoser, J Friderici, S Goff, and D Fisher. 2014. What do physicians read (and ignore) in electronic progress notes? Applied Clinical Informatics, 5(02):430-444.
Zhengping Che, Sanjay Purushotham, Kyunghyun Cho, David Sontag, and Yan Liu. 2018. Recurrent neural networks for multivariate time series with missing values. Scientific Reports, 8(1):6085.
Raphael Cohen, Michael Elhadad, and Noémie Elhadad. 2013. Redundancy in electronic health record corpora: analysis, impact on text mining performance and mitigation strategies. BMC Bioinformatics, 14(1):10.
Jesse Davis and Mark Goadrich. 2006. The relationship between precision-recall and ROC curves. In Proceedings of the 23rd International Conference on Machine Learning, pages 233-240.
Jean-Baptiste Escudie, Bastien Rance, Georgia Malamut, Sherine Khater, Anita Burgun, Christophe Cellier, and Anne-Sophie Jannot. 2017. A novel data-driven workflow combining literature and electronic health records to estimate comorbidities burden for a specific disease: a case study on autoimmune comorbidities in patients with celiac disease. BMC Medical Informatics and Decision Making, 17(1):140.
Elizabeth Ford, John A Carroll, Helen E Smith, Donia Scott, and Jackie A Cassell. 2016. Extracting information from the text of electronic medical records to improve case detection: a systematic review. Journal of the American Medical Informatics Association, 23(5):1007-1015.
Atul Gawande. 2018. Why doctors hate their computers.
Marzyeh Ghassemi, Tristan Naumann, Finale Doshi-Velez, Nicole Brimmer, Rohit Joshi, Anna Rumshisky, and Peter Szolovits. 2014. Unfolding physiological state: Mortality modelling in intensive care units. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 75-84.
ACM.
Pamela Hartzband, Jerome Groopman, et al. 2008. Off the record: avoiding the pitfalls of going electronic. New England Journal of Medicine, 358(16):1656-1657.
Hrayr Harutyunyan, Hrant Khachatrian, David C Kale, and Aram Galstyan. 2017. Multitask learning and benchmarking with clinical time series data. arXiv preprint arXiv:1703.07771.
Hrayr Harutyunyan, Hrant Khachatrian, David C. Kale, Greg Ver Steeg, and Aram Galstyan. 2019. Multitask learning and benchmarking with clinical time series data. Scientific Data, 6(1).
Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daumé III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1681-1691.
Alistair EW Johnson, Tom J Pollard, and Roger G Mark. 2017. Reproducibility in critical care: a mortality prediction case study. In Machine Learning for Healthcare Conference, pages 361-376.
Kundan Krishna, Sopan Khosla, Jeffrey P. Bigham, and Zachary C. Lipton. 2020. Generating SOAP notes from doctor-patient conversations.
Harlan M Krumholz. 2014. Big data and new knowledge in medicine: the thinking, training, and tools needed for a learning health system. Health Affairs, 33(7):1163-1170.
Jennifer Liang, Ching-Huei Tsou, and Ananya Poddar. 2019. A novel system for extractive clinical note summarization using EHR data. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 46-54, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
Jingshu Liu, Zachariah Zhang, and Narges Razavian. 2018. Deep EHR: Chronic disease prediction using medical notes. arXiv preprint arXiv:1808.04928.
Edward Loper and Steven Bird. 2002. NLTK: The Natural Language Toolkit.
In Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics. Philadelphia: Association for Computational Linguistics.
Denis Jered McInerney, Borna Dabiri, Anne-Sophie Touret, Geoffrey Young, Jan-Willem van de Meent, and Byron C Wallace. 2020. Query-focused EHR summarization to aid imaging diagnosis. arXiv preprint arXiv:2004.04645.
Rajasekharan Narayanaswamy. 2014. Mth-med-spelchek of mt-herald.
Rikinkumar S Patel, Ramya Bachu, Archana Adikey, Meryem Malik, and Mansi Shah. 2018. Factors related to physician burnout and its consequences: a review. Behavioral Sciences, 8(11):98.
Thomas H Payne, Sarah Corley, Theresa A Cullen, Tejal K Gandhi, Linda Harrington, Gilad J Kuperman, John E Mattison, David P McCallie, Clement J McDonald, Paul C Tang, et al. 2015. Report of the AMIA EHR-2020 task force on the status and future direction of EHRs. Journal of the American Medical Informatics Association, 22(5):1102-1110.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.
Sanjay Purushotham, Chuizheng Meng, Zhengping Che, and Yan Liu. 2018. Benchmarking deep learning models on large healthcare datasets. Journal of Biomedical Informatics, 83:112-134.
Alvin Rajkomar, Eyal Oren, Kai Chen, Andrew M Dai, Nissan Hajaj, Michaela Hardt, Peter J Liu, Xiaobing Liu, Jake Marcus, Mimi Sun, et al. 2018. Scalable and accurate deep learning with electronic health records. NPJ Digital Medicine, 1(1):18.
R. Robinson. 2014. Openmed spel of e-medtools (version 2.0.0).
Chaitanya Shivade, Preethi Raghavan, Eric Fosler-Lussier, Peter J Embi, Noemie Elhadad, Stephen B Johnson, and Albert M Lai. 2014.
A review of approaches to identifying patient phenotype cohorts using electronic health records. Journal of the American Medical Informatics Association, 21(2):221-230.
Edward H Shortliffe. 2010. Biomedical informatics in the education of physicians. JAMA, 304(11):1227-1228.
William W Stead, John R Searle, Henry E Fessler, Jack W Smith, and Edward H Shortliffe. 2011. Biomedical informatics: changing what physicians need to know and how they learn. Academic Medicine, 86(4):429-434.
Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for NLP. In EMNLP.
Yanshan Wang, Liwei Wang, Majid Rastegar-Mojarad, Sungrim Moon, Feichen Shen, Naveed Afzal, Sijia Liu, Yuqun Zeng, Saeed Mehrabi, Sunghwan Sohn, et al. 2018. Clinical information extraction applications: a literature review. Journal of Biomedical Informatics, 77:34-49.
Charlene R Weir and Jonathan R Nebeker. 2007. Critical issues in an electronic documentation system. In AMIA Annual Symposium Proceedings, volume 2007, page 786. American Medical Informatics Association.
Sarah Wiegreffe, Edward Choi, Sherry Yan, Jimeng Sun, and Jacob Eisenstein. 2019. Clinical concept extraction for document-level coding. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 261-272, Florence, Italy. Association for Computational Linguistics.
M Zeng. 2016. Opinion: When the doctor must choose between her patients and her notes. Accessed January 19.
Jin Guang Zheng, Daniel Howsmon, Boliang Zhang, Juergen Hahn, Deborah McGuinness, James Hendler, and Heng Ji. 2014. Entity linking for biomedical literature. In Proceedings of the ACM 8th International Workshop on Data and Text Mining in Bioinformatics, pages 3-4.
# Chunk-based Chinese Spelling Check with Global Optimization

Zuyi Bao, Chen Li and Rui Wang

Alibaba Group

{zuyi.bzy,puji.lc,masi.wr}@alibaba-inc.com

# Abstract

Chinese spelling check is a challenging task due to the characteristics of the Chinese language, such as the large character set, the lack of word boundaries, and the short word length. On the one hand, most previous works only consider corrections with similar character pronunciation or shape, failing to correct visually and phonologically irrelevant typos. On the other hand, pipeline-style architectures are widely adopted to deal with different types of spelling errors in individual modules, which makes them difficult to optimize.
In order to handle these issues, in this work, 1) we extend the traditional confusion sets with semantic candidates to cover different types of errors; 2) we propose a chunk-based framework to correct single-character and multi-character word errors uniformly; and 3) we adopt a global optimization strategy to enable sentence-level correction selection. The experimental results show that the proposed approach achieves a new state-of-the-art performance on three benchmark datasets, as well as on an optical character recognition dataset.

# 1 Introduction

Spelling check is a task to automatically detect and correct spelling errors in human writings. Spelling check is well studied for languages such as English, and many resources and tools have been developed. However, the characteristics of the Chinese language make Chinese spelling check $(\mathrm{CSC})^{1}$ quite different from the English task in three aspects:

- In contrast to English words, which are composed of a small set of Latin letters, Chinese has more than three thousand frequently used characters. The large character set leads to a huge search space for CSC models.

- For English spelling check, the basic unit is the word. However, Chinese characters are written continuously without word delimiters, and the word definition varies across different linguistic theories (Xue, 2003; Emerson, 2005). This makes a sentence with spelling errors more ambiguous, and more challenging for spell checkers to detect and correct.

- Chinese words usually consist of one to four characters and are much shorter than English words. Spelling errors can drastically change the meaning of a word. Thus, the CSC task relies on contextual semantic information to find the best correction.

For the first challenge, previous research demonstrates that most Chinese spelling errors come from similar pronunciations, shapes, or meanings (Liu et al., 2011; Chen et al., 2011).
Previous CSC models usually employ the characters with similar pronunciation or shape as the confusion set to reduce the search space, but visually and phonologically irrelevant typos cannot be handled this way. Recent work aims at replacing the pronunciation and shape confusion sets with a confusion set dynamically generated by masked language models, which retrieve semantically related candidates according to the contextual information (Hong et al., 2019). However, due to the lack of knowledge about human errors, masked language models correct spelling errors while ignoring pronunciation or shape similarity. Therefore, combining the two comes as a natural solution.

For the second challenge, early works rely on the segmentation results from a Chinese word segmentation system (Yu and Li, 2014). However, as the segmentation system is trained on a clean corpus, spelling errors often lead to incorrect segmentation results. The accumulated errors make spell checking even more difficult. Thus, character-based models are proposed to perform the correction directly at the character level, which is more robust to segmentation errors (Zhang et al., 2015; Hong et al., 2019; Zhang et al., 2020). However, character-based models cannot effectively utilize word-level semantic information, and the corrections are also more difficult to interpret. In order to explore and utilize word-level information, word-based methods are designed to perform word segmentation and spelling error correction jointly. Previous works show that word-based correction models often perform better than their character-based counterparts (Jia et al., 2013; Hsieh et al., 2015; Yeh et al., 2015; Zhao et al., 2017). Since word-based correction models usually apply a pipeline of submodules and handle special cases (e.g., single-character words) individually, the complex architecture makes it difficult to perform global optimization.
For the third challenge, previous works mainly rely on local context features such as point-wise mutual information (PMI), part-of-speech (POS) n-grams, and perplexity from an n-gram language model (Liu et al., 2013; Zhang et al., 2015; Yeh et al., 2015). As these statistical features are limited to a fixed-size window, it is difficult for them to capture deep contextual information.

In this paper, we propose a unified framework combining features and benefits from previous works. We employ confusion sets built from similar pronunciations, shapes, and semantics to deal with different types of spelling errors. A chunk-based decoding approach is proposed to model both single-character and multi-character words in a uniform way. We also finetune an error model based on a large-scale pretrained language model to include deep semantic information. A global optimization algorithm is adopted to combine different features and select the best correction. The experimental results show that the proposed approach achieves a new state-of-the-art performance on the three benchmark datasets. A further experiment shows that our method is also effective for optical character recognition (OCR) errors. Our contributions are summarized as follows:

1. We propose a chunk-based decoding method with global optimization to correct single-character and multi-character word typos in a unified framework.

2. We combine pronunciation, shape, and semantic confusion sets to handle different spelling errors.

3. Our method achieves new state-of-the-art performance on the three benchmark datasets and an OCR dataset.

# 2 Approach

The workflow of the proposed approach is shown in Figure 1. The proposed spelling check method adopts chunk-based decoding, which processes single-character and multi-character words in a uniform way. During decoding, variable-length candidates are dynamically generated according to the input sentence and the partially decoded sentence.
For selecting the best correction, a global ranking optimization is used to combine different features.2

# 2.1 Chunk-based Decoding

The chunk-based decoding treats single-character words, multi-character words, phrases, and idioms equivalently as chunks. It provides a unified framework where we can easily extend the candidate generation methods. The framework also makes the implementation of global optimization possible. Given an input sentence with $n$ characters $s = [c_1, c_2, \dots, c_n]$, the chunk-based decoding gradually segments and corrects the input sentence at the same time. It attempts to find the best combination of chunk candidates and rewrites the input sentence to its correction in a left-to-right style:

$$
s_c = \underset{\hat{s} \in L(s)}{\arg \max } f(\hat{s}, s) \tag{1}
$$

where $f$ is a scoring function, $s$ is the input sentence, and $L(s)$ refers to the set of all possible combinations of chunk candidates for $s$.

The decoding process employs the framework of the beam search algorithm (Lowerre, 1976), and the details are shown in Algorithm 1. The beam is initialized with an empty correction. In the loop, we extend each partially decoded correction in the beam with dynamically generated chunk candidates. A scoring model is utilized for giving each correction a confidence score. The details about candidate generation and correction selection will be introduced in Sections 2.2 and 2.3. At the end of each loop, we sort the beam and prune the corrections with low confidence to reduce the search space. Finally, after every correction in the beam has decoded the whole input sentence, we output the most confident correction as the final result.

![](images/91f99e836aedfd917b26bcfd1f27ab714d847d31fb9b5693660b58918b0bfc92.jpg)
Figure 1: The workflow of the proposed chunk-based decoding method during inference time. The chunk-based candidate generation and decoding are used to disambiguate and correct the input sentence gradually.

Algorithm 1: Chunk-based Decoding

    Input:  input sentence s, beam size k, vocabulary V
    Output: the corrected sentence s_c

    beam ← [Root]
    while any correction in beam is not finished do
        temp ← []
        foreach correction in beam do
            if correction is finished then
                temp.append(correction); continue
            end
            cands ← getCandidates(s, correction, V)
            foreach candidate in cands do
                x ← correction.extend(candidate)
                x.score ← score(x)
                temp.append(x)
            end
        end
        sort_prune_beam(temp, k)
        beam ← temp
    end
    s_c ← beam[0]
    return s_c

Essentially, the decoding stage jointly searches over all possible segmentations and their corrections. From another point of view, the decoding gradually disambiguates and rewrites the sentence.

# 2.2 Candidate Generation

Previous work proposes to retrieve candidates according to pronunciation or shape confusion sets (Liu et al., 2011; Chen et al., 2011). Following these works, we adopt confusion sets to reduce the search space. For handling single-character word typos and visually or phonologically irrelevant typos, we extend the pronunciation and shape confusion sets with a semantic confusion set.

The candidate generation module assumes that each span of characters in the input sentence can be misspelled. According to confusion sets from three aspects, we generate all possible chunk candidates for the partially decoded correction.
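Before turning to the individual confusion sets, the beam-search skeleton of Algorithm 1 can be rendered as a short, self-contained Python sketch. Here `get_candidates` and `score` stand in for the paper's confusion-set candidate generation and feature-based scoring; the toy English example below (correcting "teh" to "the") is purely illustrative and not from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Correction:
    pos: int = 0                                 # next input position to cover
    chunks: list = field(default_factory=list)   # chunk corrections chosen so far
    score: float = 0.0

def chunk_beam_decode(s, get_candidates, score, k=10):
    """Beam search over chunk segmentations and corrections of input s."""
    beam = [Correction()]
    while any(c.pos < len(s) for c in beam):
        temp = []
        for c in beam:
            if c.pos >= len(s):                  # correction already finished
                temp.append(c)
                continue
            for length, cand in get_candidates(s, c):
                x = Correction(pos=c.pos + length, chunks=c.chunks + [cand])
                x.score = score(x, s)
                temp.append(x)
        # sort and prune the beam to the k most confident corrections
        beam = sorted(temp, key=lambda c: c.score, reverse=True)[:k]
    return "".join(beam[0].chunks)

# Toy candidate generator: keep each character, plus correct "teh" -> "the".
def get_candidates(s, c):
    cands = [(1, s[c.pos])]
    if s[c.pos:c.pos + 3] == "teh":
        cands.append((3, "the"))
    return cands

# Toy scorer: reward the corrected chunk, slightly prefer fewer chunks
# (a stand-in for the linear feature combination of Section 2.3).
def score(x, s):
    return sum(ch == "the" for ch in x.chunks) - 0.01 * len(x.chunks)

print(chunk_beam_decode("teh cat", get_candidates, score))  # "the cat"
```

Only the search skeleton is real here; the actual system generates Chinese chunk candidates from the three confusion sets and scores them with the MERT-tuned linear model.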
Given a vocabulary $V$, an input sentence $s$, and a start position $i$, we consider chunks of characters starting at $i$ and within a max length as potential typos and generate possible correction candidates:

Pronunciation: Given a chunk of characters $chunk_{ij} = [c_i, \dots, c_j]$ from the $i$-th to the $j$-th character in the sentence $s$, we convert $chunk_{ij}$ to its pinyin $^3$ and retrieve all the candidates with a similar pronunciation from $V$.

Shape: In addition to pronunciation, we also consider candidates with a similar shape. Within a $chunk_{ij}$, we substitute characters with their visually similar characters and keep the candidates that can be found in $V$. In practice, striking a balance between speed and quality, we only consider candidates within an edit distance of 1 (one substitution) from the $chunk_{ij}$.

Semantic: Beyond pronunciation and shape similarity, we also utilize language models to retrieve semantically reasonable candidates according to the contextual information. Specifically, we employ the masked language model (Devlin et al., 2018), as it is effective for modeling long-range dependencies. Following Hong et al. (2019), we finetune the pretrained masked language model on the CSC training data and use the top-$k$ predictions for each character as the semantic confusion set. For candidate generation, we substitute each character in the $chunk_{ij}$ with its semantically similar characters and keep the candidates that can be found in $V$. Similar to the shape confusion set, in practice, we only consider candidates within an edit distance of 1 (one substitution) from the $chunk_{ij}$.

# 2.3 Correction Selection

In this section, we introduce the training strategy for correction selection and the features we used for global optimization.
Most of the previous work follows the noisy channel model (Brill and Moore, 2000), which formulates the error correction task as:

$$
s_c = \underset{\hat{s}}{\arg \max } p(\hat{s} | s) \tag{2}
$$

where $s$ is the input sentence, and $\hat{s}$ refers to a possible correction. The formula can be further rewritten via Bayes' rule as:

$$
s_c = \underset{\hat{s}}{\arg \max } \frac{p(s | \hat{s}) \cdot p(\hat{s})}{p(s)} \tag{3}
$$

where $p(s|\hat{s})$ and $p(\hat{s})$ refer to the error model probability and the sentence probability, respectively. We then omit $p(s)$, as it is constant for every $\hat{s}$, and take the logarithm:

$$
s_c = \underset{\hat{s}}{\arg \max } \left(\log p(s | \hat{s}) + \log p(\hat{s})\right) \tag{4}
$$

The formula becomes a linear model combining the error model probability and the sentence probability in the log domain. In practice, the error model and the sentence probability are complex. In the experiments, we use a bundle of features and apply a linear model as the score function for approximation:

$$
\mathrm{score} = \sum_{i} w_{i} \cdot \mathrm{feat}_{i} \tag{5}
$$

where $w_{i}$ is the weight for the $i$-th feature $\mathrm{feat}_i$.

The features we used for correction selection are listed with their descriptions in Table 1. The ed and pyed are used to calculate the similarity of the
correction and input sentence at the character level and the pronunciation level. A longer chunk is usually less ambiguous than a shorter one; thus a correction with a smaller $n$-chunk is often more reasonable. The wlm is used for checking the fluency of a correction. The $n$-py, $n$-shape and $n$-lm assign weights to the different confusion sets. The cem is used for modeling the character-level error probability. We directly use the masked language model finetuned for the semantic confusion set as the error model. When a chunk of characters $[c_i, \dots, c_j]$ is substituted with $[\hat{c}_i, \dots, \hat{c}_j]$, we calculate the chunk-level cem approximately as:

$$
\mathrm{cem} = \sum_{k=i}^{j} \left(\log p\left(\hat{c}_{k} \mid c_{k}, s\right) - \log p\left(c_{k} \mid c_{k}, s\right)\right) \tag{6}
$$

where $p(\hat{c}_k|c_k,s)$ is the probability of replacing $c_{k}$ with $\hat{c}_k$ given the input sentence $s$.4

| Name | Description |
| --- | --- |
| ed | the character-level edit distance between $s$ and $\hat{s}$ |
| pyed | the edit distance between the pinyin of $s$ and $\hat{s}$ |
| n-chunk | the number of chunks in $\hat{s}$ |
| wlm | the perplexity of $\hat{s}$ measured by a word-level n-gram language model |
| cem | the improvement of log probability from a character error model |
| n-py | the number of chunks that are from the pronunciation confusion set |
| n-shape | the number of chunks that are from the shape confusion set |
| n-lm | the number of chunks that are from the semantic confusion set |

Table 1: The features used for the correction selection. $s$ and $\hat{s}$ refer to the input sentence and a correction.

For combining different features, we apply the Minimum Error Rate Training (MERT) algorithm (Och, 2003). Given the top $n$ outputs, the MERT algorithm optimizes the scoring function by learning to rerank the decoded sentences according to their similarity to the gold sentence. Rather than a local ranking, the MERT algorithm measures the similarity directly by sentence-level metrics to achieve a global optimization.

# 3 Experiments

In the following sections, we will introduce the datasets and the experimental settings first, and
then the performance on the three benchmark datasets is listed to show the effectiveness of the proposed method. Finally, an evaluation on an OCR subtitle dataset shows that our method can be adapted to OCR errors as well.

# 3.1 Setup

We evaluate the proposed method on three CSC benchmark datasets and an OCR subtitle error correction dataset. The three CSC datasets are from SIGHAN13 (Wu et al., 2013), CLP14 (Yu et al., 2014) and SIGHAN15 (Tseng et al., 2015), and the OCR dataset is released by Hong et al. (2019). For simplicity, we denote the CSC datasets from SIGHAN13, CLP14, SIGHAN15 and the OCR subtitles as $csc_{13}$, $csc_{14}$, $csc_{15}$ and $ocr$, respectively. The $csc_{13}$ and $ocr$ datasets are evaluated at the edit level with the official evaluation tool from SIGHAN13. Following the official setting, the $csc_{13}$ dataset adopts different test sets for error detection and correction. The $csc_{14}$ and $csc_{15}$ datasets are evaluated at the sentence level with the official evaluation tools from CLP14 and SIGHAN15, respectively. Following previous work, we combine the training data from $csc_{13}$, $csc_{14}$ and $csc_{15}$ as our training set for the $csc$ datasets. The training set of the $ocr$ dataset is used to learn the model for the OCR dataset. The statistics of the datasets are listed in Table 2. The $ocr$ dataset contains only erroneous sentences and has a significantly shorter sentence length compared to the $csc$ datasets.

| Dataset | Train # Sent. | Train Error Rate | Train Avg. Length | Test # Sent. | Test Error Rate | Test Avg. Length |
| --- | --- | --- | --- | --- | --- | --- |
| $csc_{13}$ | 700 | 50.0% | 41.8 | 1000 | 99.6% | 74.3 |
| $csc_{14}$ | 3437 | 99.9% | 49.6 | 1062 | 49.8% | 50.0 |
| $csc_{15}$ | 2339 | 100.0% | 31.3 | 1100 | 50.0% | 30.6 |
| $ocr$ | 3575 | 100.0% | 10.1 | 1000 | 100% | 10.2 |

Table 2: Statistics of datasets. The error rate refers to the percentage of sentences with errors.

For the candidate generation phase, the vocabulary $V$ used in the experiments is collected from the gigaword corpus (LDC2011T13) and Chinese idioms. For the $csc$ datasets, we segmented the traditional Chinese corpus in gigaword with HanLP$^{6}$ and kept the words that appear more than 10 times in the corpus. For the $ocr$ dataset, we use the simplified Chinese part for generating the vocabulary $V$. For the pronunciation confusion set, we use pypinyin$^{7}$ for
For the pronunciation confusion set, we use pypinyin$^7$ for conversion between Chinese characters and pinyin. For the shape confusion set, we use the one released with SIGHAN13. For the semantic confusion set, we finetune the released Chinese version of the masked language model BERT (Devlin et al., 2018) on the CSC training set with the officially released TensorFlow code. We also experimented with whole word masking variants, such as BERT-wwm (Cui et al., 2019), but they did not show a significant improvement. The batch size, learning rate, and number of training epochs for the finetuning are set to $32$ , $2e^{-5}$ , and 3, respectively. We use the top 5 outputs as the semantic candidates. The max length of chunks is set to 6 to cover most of the cases. For chunks with one character, we only keep the semantic candidates to reduce the false alarm rate.

For the correction selection phase, the beam size used in the experiments is set to 10. The segmented Gigaword corpus is also used for training a traditional Chinese and a simplified Chinese n-gram word language model with kenlm. For the MERT algorithm, we initialize the weights of the score function with zeros and use the implementation from Z-MERT (Zaidan, 2009). For optimization, we output the top 10 results and set the maximum number of MERT iterations to 15. The bilingual evaluation understudy (BLEU) is used as the training metric, as it calculates sentence-level similarity and often leads to better precision.

# 3.2 Experiment Results on the CSC Datasets

We first report the performance of the proposed method on the $csc_{13}$ , $csc_{14}$ and $csc_{15}$ datasets. As shown in Table 3, compared to previous strong CSC systems, our proposed chunk-based method achieves a significant improvement on all three datasets.

| Dataset | Model | Det. Acc | Det. P | Det. R | Det. F1 | Cor. Acc | Cor. P | Cor. R | Cor. F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $csc_{13}$ | Yeh et al. (2015) | 74.80 | 44.31 | 37.67 | 40.72 | 66.30 | 70.30 | 62.50 | 66.17 |
| | Zhao et al. (2017) | - | - | - | - | 37.00 | 70.50 | 35.60 | 47.31 |
| | Hong et al. (2019)* | - | - | - | - | 60.5 | 73.1 | 60.5 | 66.2 |
| | Cheng et al. (2020)‡ | - | 55.90 | 46.99 | 51.06 | - | 44.58 | 37.47 | 40.72 |
| | our method | 83.20 | 61.19 | 75.67 | 67.66 | 67.20 | 74.34 | 67.20 | 70.59 |
| $csc_{14}$ | Zhao et al. (2017) | - | - | - | - | - | 55.50 | 39.14 | 45.90 |
| | Hong et al. (2019) | 70.0 | 61.0 | 53.5 | 57.0 | 69.3 | 59.4 | 52.0 | 55.4 |
| | Cheng et al. (2020)‡ | - | 58.27 | 54.53 | 56.28 | - | 51.01 | 47.65 | 49.27 |
| | our method | 70.0 | 78.65 | 54.80 | 64.59 | 68.08 | 77.43 | 51.04 | 61.52 |
| $csc_{15}$ | Zhang et al. (2015) | 70.09 | 80.27 | 53.27 | 64.04 | 69.18 | 79.72 | 51.45 | 62.54 |
| | Hong et al. (2019) | 74.2 | 67.6 | 60.0 | 63.5 | 73.7 | 66.6 | 59.1 | 62.6 |
| | Zhang et al. (2020) | 80.9 | 73.7 | 73.2 | 73.5 | 77.4 | 66.7 | 66.2 | 66.4 |
| | Cheng et al. (2020)‡ | - | 70.97 | 64.00 | 67.30 | - | 60.08 | 54.18 | 56.98 |
| | our method | 76.82 | 88.11 | 62.00 | 72.79 | 74.64 | 87.33 | 57.64 | 69.44 |

Table 3: The main results on the $csc_{13}$ , $csc_{14}$ and $csc_{15}$ datasets. Det. and Cor. denote the detection and correction levels. *The $csc_{13}$ detection-level performance of Hong et al. (2019) is obtained on the test set of the correction task and is thus incomparable with the results from other work. The results with $\ddagger$ are reproduced by rerunning the released code and evaluation scripts on the standard CSC datasets. Wang et al. (2018) and Wang et al. (2019) calculate performance at the character level, which makes their results incomparable with other work.

Zhao et al. (2017) employ a graph-based model and integrate spelling checking with word segmentation. However, their proposed method only processes multi-character words. Two types of single-character words are handled by rules and an individual module. The separated modules make it difficult for their system to fully exploit the annotated data and obtain a global optimization.

Zhang et al. (2015) combine character-level candidate generation with a two-stage filter model. In the first stage, they use a logistic regression classifier to reduce the number of candidates. In the second stage, they utilize online translation systems and search engines to select the best correction. Although they get help from empirically developed online systems for correction selection, our approach outperforms them, indicating the effectiveness of the chunk-based framework.

Hong et al. (2019) finetune the pretrained BERT as a character-based correction model and filter the visually/phonologically irrelevant corrections to improve precision. In other words, they employ character-level candidate generation and perform a locally optimized character-based correction selection. In the experiments, our method outperforms Hong et al. (2019) by a large margin, which indicates the effectiveness of the globally optimized chunk-based decoding.

Zhang et al. (2020) propose to train a detection and a correction network jointly. In the experiments, although they employ 5 million pseudo-data sentences for extra pretraining, the proposed method still obtains an improved performance at the correction level.
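The globally optimized chunk-based decoding referred to above can be illustrated with a small pure-Python sketch. This is not the authors' implementation: `candidates`, `score`, and the toy confusion entry are hypothetical, and `score` stands in for the MERT-tuned weighted feature combination described in the setup.

```python
import heapq

def beam_decode(chars, candidates, score, beam_size=10):
    """Chunk-based beam-search decoding sketch. `candidates(chars, i)`
    yields (chunk_length, replacement) pairs starting at position i;
    each hypothesis is expanded chunk by chunk and the hypothesis set
    is pruned to `beam_size` at every step."""
    beams = [(0.0, 0, [])]          # (total score, next position, chunks)
    finished = []
    while beams:
        expanded = []
        for s, i, out in beams:
            if i == len(chars):     # hypothesis covers the whole sentence
                finished.append((s, out))
                continue
            for length, repl in candidates(chars, i):
                expanded.append((s + score(repl), i + length, out + [repl]))
        beams = heapq.nlargest(beam_size, expanded)  # prune to beam size
    return max(finished)[1]

# Toy candidate generator: keep the character, or replace a misspelled
# two-character chunk with a whole word (hypothetical example).
def candidates(chars, i):
    yield (1, chars[i])
    if chars[i:i + 2] == ["夭", "夏"]:
        yield (2, "天下")

best = beam_decode(list("夭夏"), candidates,
                   lambda w: 1.0 if w == "天下" else 0.0)
```

Because a multi-character chunk is scored and replaced as a unit, the decoder can fix consecutive typos jointly rather than character by character, which is the contrast drawn against the character-based baselines above.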
Cheng et al. (2020) propose to incorporate phonological and visual confusion sets into CSC models through a graph convolutional network. As the performance reported in their paper is obtained with external training data, we reproduced their results on the standard CSC datasets by rerunning their released code and evaluation scripts.

# 3.3 Experiment Results for the OCR Errors

We also evaluate our approach on the OCR subtitle error correction dataset, and the results are listed in Table 4. At the error detection level, the proposed method achieves a significant improvement over the previous model from Hong et al. (2019). The OCR dataset has a shorter average sentence length, so the finetuned BERT model does not have enough context to obtain semantically accurate corrections. Hong et al. (2019) only generate the candidates according to the BERT model and obtain a low recall. The proposed method is more robust to short sentences because we also employ the confusion sets based on pronunciation and shape.

At the correction level, we also observe a significant improvement in the F1 score. However, we notice that our method obtains a lower precision compared with Hong et al. (2019). We analyzed the outputs and found that the OCR subtitles are extracted from
| Model | Det. Acc | Det. P | Det. R | Det. F1 | Cor. Acc | Cor. P | Cor. R | Cor. F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Hong et al. (2019) | 18.6 | 78.5 | 18.6 | 30.1 | 17.4 | 73.4 | 17.4 | 28.1 |
| our method | 63.30 | 77.57 | 63.30 | 69.71 | 37.90 | 46.45 | 37.90 | 41.74 |
Table 4: The results on the OCR subtitle error correction dataset $ocr$ .
| Model | Correction P | Correction R | Correction F1 |
| --- | --- | --- | --- |
| all | 87.33 | 57.64 | 69.44 |
| all - pinyin | 87.54 | 54.91 | 67.49 |
| all - shape | 86.81 | 57.45 | 69.15 |
| all - semantic | 88.33 | 48.18 | 62.35 |
Table 5: The results on the $csc_{15}$ dataset when disabling different confusion sets. Pinyin, shape, and semantic refer to the pronunciation, shape, and semantic confusion sets, respectively.

the entertainment domain, which contains many named entities and is quite different from the news vocabulary we used. Thus, although we detect the spelling errors, it is difficult to retrieve the correct candidates. We leave this domain adaptation problem to future work.

# 3.4 Analysis of Confusion Sets

To reveal the contribution of each confusion set, we conduct experiments that disable one confusion set at a time. The experiment results are listed in Table 5. The results show that, without the pronunciation confusion set, the proposed method suffers an obvious drop in recall. The shape confusion set only brings a slight improvement, which can be explained by the fact that shape-similar errors account for only a small portion of the spelling errors in human writing. Another significant improvement comes from the semantic confusion set. With a small sacrifice in precision, we observe an obvious increase in recall. This result shows that the semantic confusion set is a good complement to the traditional candidate generation.

# 3.5 MERT vs. BERT

In this section, we compare the locally optimized character-based correction model with our globally optimized chunk-based approach. In the experiment, we use the finetuned BERT checker (Hong et al., 2019) as the character-based model. We use the test set of $csc_{15}$ and compare the recall on single-character errors and multi-character errors individually. The single-character error refers to the misspelling of a

![](images/719cd590118afc5dc5e023cb7f2c28e6fa43da7bb1ac625f2eb4e9c378b3e0be.jpg)
Figure 2: The comparison of recall between a locally optimized character-based BERT checker and the proposed globally optimized chunk-based method.
![](images/f116043b867ba0dd9c80f8e1dbf41710959b677e0e02aef368426cdebf98a5ad.jpg)
Figure 3: The case analysis between the BERT checker and the proposed globally optimized method.

single character, and we take a chunk of consecutive typos containing more than one character as a multi-character error. On the test set, the recall is calculated at the chunk level, and the experiment results are shown in Figure 2. The recall of the BERT checker comes almost entirely from single-character errors. For multi-character errors, the proposed method obtains a significantly better performance, which indicates the effectiveness of the globally optimized chunk-based decoding.

In Figure 3, we list two cases and their corrections from the BERT checker and our method. The BERT checker takes the CSC task as a character

![](images/9540e66f1ac7f9cedc9fd51ff3a552d1711f888bb21392596a84666a7b6bd820.jpg)
Figure 4: The precision, recall, F1 score and runtime on the $csc_{15}$ dataset with different beam sizes.

![](images/1ee47e5252695047398edd6832db5f5ca657ce5fd0562c2d0cc96bb8ba76f9b2.jpg)

![](images/7c048fe17b2f4f6a739aacaae62523ad453ddde679cd6af940e272e4f37b624b.jpg)
| Beam Size | Recall (1 Error) | Recall (2 Errors) | Recall (3 Errors) | Recall (3+ Errors) |
| --- | --- | --- | --- | --- |
| 1 | 69.56 | 42.63 | 43.86 | 26.83 |
| 2 | 72.13 (+2.57) | 44.21 (+1.58) | 45.61 (+1.75) | 26.83 (+0.00) |
| 4 | 73.07 (+0.94) | 44.74 (+0.53) | 43.86 (-1.75) | 26.83 (+0.00) |
| 8 | 73.30 (+0.23) | 44.74 (+0.00) | 43.86 (+0.00) | 26.83 (+0.00) |
Table 6: The edit-level recall for sentences in the $csc_{15}$ dataset with different beam sizes.

sequence labeling problem and adopts a character-wise local optimization (Hong et al., 2019). For a multi-character error, the BERT checker tends to correct the misspelled characters according to their incorrect context. As shown in the first case, the BERT checker corrects a character according to its (also misspelled) neighbour, since the two can compose the word 自然 (nature). Thus, the BERT checker usually corrects only a part of a multi-character typo, or rewrites the typo to a word that does not fit the sentence. The proposed method directly generates the candidates for a chunk of misspelled characters and performs a global optimization to replace the whole typo.

# 3.6 Beam Size

The proposed chunk-based decoding is constructed under the framework of beam search. In each loop step, the beam search algorithm prunes the set of candidates to a pre-defined beam size to reduce the search complexity.

In this section, we investigate how the beam size influences the performance of the proposed CSC model. We run experiments with a range of beam sizes on the test set of $csc_{15}$ , and the results and runtime are shown in Figure 4. When the beam size increases, the CSC model is able to observe more candidates and obtains a significant improvement in recall. At the same time, a larger search space brings more noise, which leads to a slight drop in precision. As a result, the F1 score improves as the beam size increases. For the runtime, Figure 4 illustrates that the time cost grows linearly with the beam size.

To further investigate the improvement in recall, we divide the test set according to the number of errors per sentence and calculate the edit-level recall for the model under different beam sizes.
As shown in Table 6, the experiment results illustrate that the main improvement in recall comes from sentences with only one error. As a larger beam size essentially includes a longer context, the results demonstrate that CSC requires more contextual information even for single-character errors. For sentences with more errors, the recall increases rapidly while the beam size is small (e.g., from 1 to 2). However, the recall does not increase significantly after the beam grows to an appropriate size (e.g., a beam size of 4). This illustrates that, for sentences with multiple errors, the bottleneck lies in candidate selection.

# 4 Related Work

Previous work on CSC is closely related to a series of shared tasks (Wu et al., 2013; Yu et al., 2014; Tseng et al., 2015). The workflow of CSC systems can be roughly divided into two phases: candidate generation and candidate selection.

For the candidate generation phase, most previous work retrieves candidates according to pronunciation or shape (Liu et al., 2011; Chen et al., 2011; Yu and Li, 2014; Yeh et al., 2015). Recently, Hong et al. (2019) propose to replace the traditional confusion sets with a dynamically generated one. They treat CSC as a sequence labeling problem and finetune a pretrained masked language model to generate candidates. To reduce the false alarm rate, they filter the results by pronunciation and shape similarity. Their method inspired us to finetune the masked language model for generating semantically related candidates.

For the candidate selection phase, the perplexity from language models is frequently used to select the most reasonable candidate (Chang, 1995; Liu et al., 2013; Jia et al., 2013; Yu and Li, 2014; Yeh et al., 2015). Rules are effective and often included in CSC models for handling single-character errors (Hsieh et al., 2015; Zhang et al., 2015; Zhao et al., 2017).
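Language-model-based candidate selection, as used in the work cited in the last paragraph, can be illustrated with a toy add-one-smoothed bigram model. This is an illustration of the idea only; the paper itself trains n-gram models with kenlm, and all names and data here are ours:

```python
import math
from collections import Counter

def train_bigram_lm(corpus):
    """Toy character-bigram language model with add-one smoothing.
    Returns a function scoring a sentence by its log probability."""
    uni, bi = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    vocab_size = len(uni)

    def logprob(sent):
        toks = ["<s>"] + sent
        return sum(math.log((bi[(a, b)] + 1) / (uni[a] + vocab_size))
                   for a, b in zip(toks, toks[1:]))
    return logprob

lm = train_bigram_lm([list("我们去公园"), list("我们去学校")])
candidates = [list("我们去公圆"), list("我们去公园")]
best = max(candidates, key=lm)  # the in-vocabulary correction wins
```

The candidate whose bigrams were actually observed in the training corpus receives a higher log probability (equivalently, a lower perplexity), so the language model prefers it over the typo.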
Recent researchers rely on supervised methods to achieve further improvements. A supervised error model is frequently involved in previous work (Hsieh et al., 2015; Yeh et al., 2015; Zhang et al., 2015). Liu et al. (2013) use support vector machines (SVMs) to rerank the candidate list. Yeh et al. (2015) employ a maximum entropy (ME) model for correction selection. Zhao et al. (2017) use conditional random fields (CRFs) to handle two types of misspelled single-character words. Cheng et al. (2020) propose to incorporate phonological and visual similarity knowledge into CSC models via a graph convolutional network.

Due to the limited size of the CSC training data, supervised models suffer from the lack of annotated examples. Liu et al. (2013) generate pseudo data by replacing characters in training sentences with characters from the confusion set. Similarly, Zhang et al. (2020) generate homophonous pseudo data to pretrain the detection and correction networks jointly. Web texts are available in large quantities and contain more errors than published articles. Hsieh et al. (2015) propose to extract spelling error samples from the Google Web 1T corpus. Wang et al. (2018) propose OCR-based and ASR-based methods to mimic human errors. They further propose a pointer network to model the CSC task under the framework of a seq2seq model (Wang et al., 2019).

# 5 Conclusion

In this work, we present a new framework for Chinese spelling check. We include a masked language model for generating semantically related candidates. Chunk-based decoding is employed to handle single-character and multi-character errors in a uniform way. A global optimization strategy is adopted for combining different features. The effectiveness of the proposed method is verified on three CSC benchmark datasets and an OCR subtitle dataset.
As for future work, we plan to extend the proposed framework to Chinese grammatical error correction and to explore training in an end-to-end style.

# References

Eric Brill and Robert C. Moore. 2000. An improved error model for noisy channel spelling correction. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 286-293, Hong Kong. Association for Computational Linguistics.
Chao-Huang Chang. 1995. A new approach for automatic Chinese spelling correction. In Proceedings of Natural Language Processing Pacific Rim Symposium, volume 95, pages 278-283. CiteSeer.
Yong-Zhi Chen, Shih-Hung Wu, Ping-Che Yang, Tsun Ku, and Gwo-Dong Chen. 2011. Improve the detection of improperly used Chinese characters in students' essays with error model. International Journal of Continuing Engineering Education and Life Long Learning, 21(1):103-116.
Xingyi Cheng, Weidi Xu, Kunlong Chen, Shaohua Jiang, Feng Wang, Taifeng Wang, Wei Chu, and Yuan Qi. 2020. SpellGCN: Incorporating phonological and visual similarities into language models for Chinese spelling check. In ACL.
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pre-training with whole word masking for Chinese BERT.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Thomas Emerson. 2005. The second international Chinese word segmentation bakeoff. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing.
Yuzhong Hong, Xianguo Yu, Neng He, Nan Liu, and Junhui Liu. 2019. FASPell: A fast, adaptable, simple, powerful Chinese spell checker based on DAE-decoder paradigm. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pages 160-169, Hong Kong, China. Association for Computational Linguistics.
Yu-Ming Hsieh, Ming-Hong Bai, Shu-Ling Huang, and Keh-Jiann Chen. 2015. Correcting Chinese spelling errors with word lattice decoding. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), 14(4):18.
Zhongye Jia, Peilu Wang, and Hai Zhao. 2013. Graph model for Chinese spell checking. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, pages 88-92, Nagoya, Japan. Asian Federation of Natural Language Processing.
C-L Liu, M-H Lai, K-W Tien, Y-H Chuang, S-H Wu, and C-Y Lee. 2011. Visually and phonologically similar characters in incorrect Chinese words: Analyses, identification, and applications. ACM Transactions on Asian Language Information Processing (TALIP), 10(2):10.
Xiaodong Liu, Kevin Cheng, Yanyan Luo, Kevin Duh, and Yuji Matsumoto. 2013. A hybrid Chinese spelling correction using language model and statistical machine translation with reranking. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, pages 54-58, Nagoya, Japan. Asian Federation of Natural Language Processing.
Bruce T. Lowerre. 1976. The Harpy speech recognition system. Technical report, Carnegie Mellon University, Department of Computer Science, Pittsburgh, PA.
Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160-167, Sapporo, Japan. Association for Computational Linguistics.
Yuen-Hsien Tseng, Lung-Hao Lee, Li-Ping Chang, and Hsin-Hsi Chen. 2015. Introduction to SIGHAN 2015 bake-off for Chinese spelling check. In Proceedings of the Eighth SIGHAN Workshop on Chinese Language Processing, pages 32-37, Beijing, China. Association for Computational Linguistics.
Dingmin Wang, Yan Song, Jing Li, Jialong Han, and Haisong Zhang. 2018. A hybrid approach to automatic corpus generation for Chinese spelling check.
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2517-2527, Brussels, Belgium. Association for Computational Linguistics. +Dingmin Wang, Yi Tay, and Li Zhong. 2019. Confusionset-guided pointer networks for Chinese spelling check. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5780-5785, Florence, Italy. Association for Computational Linguistics. +Shih-Hung Wu, Chao-Lin Liu, and Lung-Hao Lee. 2013. Chinese spelling check evaluation at SIGHAN bake-off 2013. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, pages 35-42, Nagoya, Japan. Asian Federation of Natural Language Processing. +Nianwen Xue. 2003. Chinese word segmentation as character tagging. In International Journal of Computational Linguistics & Chinese Language Processing, Volume 8, Number 1, February 2003: Special Issue on Word Formation and Chinese Language Processing, pages 29-48. + +Jui-Feng Yeh, Wen-Yi Chen, and Mao-Chuan Su. 2015. Chinese spelling checker based on an inverted index list with a rescoring mechanism. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), 14(4):17. +Junjie Yu and Zhenghua Li. 2014. Chinese spelling error detection and correction based on language model, pronunciation, and shape. In Proceedings of The Third CIPS-SIGHAN Joint Conference on Chinese Language Processing, pages 220-223, Wuhan, China. Association for Computational Linguistics. +Liang-Chih Yu, Lung-Hao Lee, Yuen-Hsien Tseng, and Hsin-Hsi Chen. 2014. Overview of SIGHAN 2014 bake-off for Chinese spelling check. In Proceedings of The Third CIPS-SIGHAN Joint Conference on Chinese Language Processing, pages 126-132, Wuhan, China. Association for Computational Linguistics. +Omar F. Zaidan. 2009. Z-MERT: A fully configurable open source tool for minimum error rate training of machine translation systems. The Prague Bulletin of Mathematical Linguistics, 91:79-88. 
+Shaohua Zhang, Haoran Huang, Jicong Liu, and Hang Li. 2020. Spelling error correction with soft-masked bert. arXiv preprint arXiv:2005.07421. +Shuiyuan Zhang, Jinhua Xiong, Jianpeng Hou, Qiao Zhang, and Xueqi Cheng. 2015. HANSpeller++: A unified framework for Chinese spelling correction. In Proceedings of the Eighth SIGHAN Workshop on Chinese Language Processing, pages 38-45, Beijing, China. Association for Computational Linguistics. +Hai Zhao, Deng Cai, Yang Xin, Yuzhu Wang, and Zhongye Jia. 2017. A hybrid model for Chinese spelling check. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), 16(3):21. \ No newline at end of file diff --git a/chunkbasedchinesespellingcheckwithglobaloptimization/images.zip b/chunkbasedchinesespellingcheckwithglobaloptimization/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..2a23583044c561c774c00e746cc3e289a51df7fe --- /dev/null +++ b/chunkbasedchinesespellingcheckwithglobaloptimization/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:950c4eae371547f6f5110e4769915cfa7487ea33741e2203e1c0b4e55fdd15a5 +size 481419 diff --git a/chunkbasedchinesespellingcheckwithglobaloptimization/layout.json b/chunkbasedchinesespellingcheckwithglobaloptimization/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..6ac5360cb849382217e30b89b0fd60125849cb2b --- /dev/null +++ b/chunkbasedchinesespellingcheckwithglobaloptimization/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc3b0342c2b7c9629616cf5664ec6cebeab618b1aba9b7d22828fe72d5f3c219 +size 342230 diff --git a/claimcheckworthinessdetectionaspositiveunlabelledlearning/f3e7f24e-4229-4965-a32d-fa0c77e41c41_content_list.json b/claimcheckworthinessdetectionaspositiveunlabelledlearning/f3e7f24e-4229-4965-a32d-fa0c77e41c41_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..b4954d99ee0a2c53e4dc60d38f8562289856f550 
--- /dev/null +++ b/claimcheckworthinessdetectionaspositiveunlabelledlearning/f3e7f24e-4229-4965-a32d-fa0c77e41c41_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:650b152939d176b3570ffc53d6b5fb21df647a5a1ebf7e35377e521131c8e824 +size 86703 diff --git a/claimcheckworthinessdetectionaspositiveunlabelledlearning/f3e7f24e-4229-4965-a32d-fa0c77e41c41_model.json b/claimcheckworthinessdetectionaspositiveunlabelledlearning/f3e7f24e-4229-4965-a32d-fa0c77e41c41_model.json new file mode 100644 index 0000000000000000000000000000000000000000..915220420d5d7a42e96a432268da8609003f7d5c --- /dev/null +++ b/claimcheckworthinessdetectionaspositiveunlabelledlearning/f3e7f24e-4229-4965-a32d-fa0c77e41c41_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:efdc19482665bd46c5cf3231bef4d6537bdfd12568b1b72221727dc0f4cecd84 +size 104308 diff --git a/claimcheckworthinessdetectionaspositiveunlabelledlearning/f3e7f24e-4229-4965-a32d-fa0c77e41c41_origin.pdf b/claimcheckworthinessdetectionaspositiveunlabelledlearning/f3e7f24e-4229-4965-a32d-fa0c77e41c41_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..46f83643b549c48ee9d91555aa8969b6b31a9940 --- /dev/null +++ b/claimcheckworthinessdetectionaspositiveunlabelledlearning/f3e7f24e-4229-4965-a32d-fa0c77e41c41_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:283fc36e5bcb7f1a5cca13d19ce5f022f744b934d4d5e95620a40f8963b33cac +size 720989 diff --git a/claimcheckworthinessdetectionaspositiveunlabelledlearning/full.md b/claimcheckworthinessdetectionaspositiveunlabelledlearning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..214f694b8f55bd9bea6a6271a0ae4c794917dd51 --- /dev/null +++ b/claimcheckworthinessdetectionaspositiveunlabelledlearning/full.md @@ -0,0 +1,360 @@ +# Claim Check-Worthiness Detection as Positive Unlabelled Learning + +Dustin Wright and Isabelle Augenstein + +Dept. 
of Computer Science

University of Copenhagen

Denmark

{dw|augenstein}@di.ku.dk

# Abstract

As the first step of automatic fact checking, claim check-worthiness detection is a critical component of fact checking systems. There are multiple lines of research which study this problem: check-worthiness ranking from political speeches and debates, rumour detection on Twitter, and citation needed detection from Wikipedia. To date, there has been no structured comparison of these various tasks to understand their relatedness, and no investigation into whether or not a unified approach to all of them is achievable. In this work, we illuminate a central challenge in claim check-worthiness detection underlying all of these tasks, namely that they hinge upon detecting both how factual a sentence is and how likely a sentence is to be believed without verification. As such, annotators only mark those instances they judge to be clear-cut check-worthy. Our best performing method is a unified approach which automatically corrects for this using a variant of positive unlabelled learning that finds instances which were incorrectly labelled as not check-worthy. In applying this, we outperform the state of the art in two of the three tasks studied for claim check-worthiness detection in English.

# 1 Introduction

Misinformation is being spread online at ever increasing rates (Del Vicario et al., 2016) and has been identified as one of society's most pressing issues by the World Economic Forum (Howell et al., 2013). In response, there has been a large increase in the number of organizations performing fact checking (Graves and Cherubini, 2016). However, the rate at which misinformation is introduced and spread vastly outpaces the ability of any organization to perform fact checking, so only the most salient claims are checked. This highlights the need to automatically find check-worthy content online and verify it.
![](images/e142d37bbf498f317454e21d01db4f22a8924d490ce943ae0668b83415beb407.jpg)

![](images/b107dc9aefdeb40ff66946a7db4e48b94b97c74fa07baf243dda4c3d5ef2c23d.jpg)

![](images/da0bb3470f51edc327264ed8be00d27e52fdfc9bf34d2cc84806ae4082e4ef76.jpg)
Figure 1: Examples of check-worthy and non check-worthy statements from three different domains (Wikipedia, Twitter, and politics). Check-worthy statements are those which were judged to require evidence or a fact check. Examples shown in the figure include: Wikipedia, "Reviewers described the book as 'magisterial,' 'encyclopaedic,' and a 'classic.'" and "As was pointed out above, Lenten traditions have developed over time."; Twitter, "142 PEOPLE ON BOARD GERMANWINGS AIRBUS A320 THAT CRASHED IN SOUTHERN FRANCE" and "Pray for #4U9525 http://t.co/II7Rl24ffH"; politics, "He thinks that he knows more than our military because he claimed our armed forces are 'a disaster.'" and "We have to heal the divides in our country."

The natural language processing and machine learning communities have recently begun to address the problem of automatic fact checking (Vlachos and Riedel, 2014; Hassan et al., 2017; Thorne and Vlachos, 2018; Augenstein et al., 2019; Atanasova et al., 2020a,b; Ostrowski et al., 2020; Allein et al., 2020). The first step of automatic fact checking is claim check-worthiness detection, a text classification problem where, given a statement, one must predict if the content of that statement makes "an assertion about the world that is checkable" (Konstantinovskiy et al., 2018). There are multiple isolated lines of research which have studied variations of this problem. Figure 1 provides examples from three tasks which are studied in this work: rumour detection on Twitter (Zubiaga et al., 2016, 2018), check-worthiness ranking in political debates and speeches (Atanasova et al., 2018; Elsayed et al., 2019; Barron-Cedeno et al., 2020), and citation needed detection on Wikipedia (Redi et al., 2019).
Each task is concerned with a shared underlying problem: detecting claims which warrant further verification. However, no work has been done to compare all three tasks to understand shared challenges in order to derive shared solutions, which could enable improving claim check-worthiness detection systems across multiple domains.

Therefore, we ask the following main research question in this work: are these all variants of the same task, and if so, is it possible to have a unified approach to all of them? We answer this question by investigating the problem of annotator subjectivity, where annotator background and expertise cause their judgements of what is check-worthy to differ, leading to false negatives in the data (Konstantinovskiy et al., 2018). Our proposed solution is Positive Unlabelled Conversion (PUC), an extension of Positive Unlabelled (PU) learning, which converts negative instances into positive ones based on the estimated prior probability of an example being positive. We demonstrate that a model trained using PUC improves performance on English citation needed detection and Twitter rumour detection. We also show that by pretraining a model on citation needed detection, one can further improve results on Twitter rumour detection over a model trained solely on rumours, highlighting that a unified approach to these problems is achievable. Additionally, we show that one attains better results on political speeches check-worthiness ranking without using any form of PU learning, arguing through a dataset analysis that its labels are much more subjective than those of the other two tasks.

The contributions of this work are as follows:

1. The first thorough comparison of multiple claim check-worthiness detection tasks.
2. Positive Unlabelled Conversion (PUC), a novel extension of PU learning to support check-worthiness detection across domains.
3.
Results demonstrating that a unified approach to check-worthiness detection is achievable for two of the three tasks, improving over the state of the art for those tasks.

# 2 Related Work

# 2.1 Claim Check-Worthiness Detection

As the first step in automatic fact checking, claim check-worthiness detection is a binary classification problem which involves determining if a piece of text makes "an assertion about the world which can be checked" (Konstantinovskiy et al., 2018).

We adopt this broad definition as it allows us to perform a structured comparison of many publicly available datasets. The wide applicability of the definition also allows us to study if and how a unified cross-domain approach could be developed.

Claim check-worthiness detection can be subdivided into three distinct domains: rumour detection on Twitter, check-worthiness ranking in political speeches and debates, and citation needed detection on Wikipedia. A few studies have attempted to create full systems for mining check-worthy statements, including the works of Konstantinovskiy et al. (2018), ClaimRank (Jaradat et al., 2018), and ClaimBuster (Hassan et al., 2017). They develop full software systems consisting of relevant source material retrieval, check-worthiness classification, and dissemination to the public via end-user applications. These works are focused solely on the political domain, using data from political TV shows, speeches, and debates. In contrast, in this work we study the claim check-worthiness detection problem across three domains which have publicly available data: Twitter (Zubiaga et al., 2017), political speeches (Atanasova et al., 2018), and Wikipedia (Redi et al., 2019).

**Rumour Detection on Twitter** Rumour detection on Twitter is primarily studied using the PHEME dataset (Zubiaga et al., 2016), a set of tweets and associated threads from breaking news events which are either rumours or not.
Published systems which perform well on this task include contextual models (e.g. conditional random fields) acting on a tweet's thread (Zubiaga et al., 2017, 2018), identifying salient rumour-related words (Abulaish et al., 2019), and using a GAN to generate misinformation in order to improve a downstream discriminator (Ma et al., 2019).

**Political Speeches** For political speeches, the most studied datasets come from the CLEF CheckThat! shared tasks (Atanasova et al., 2018; Elsayed et al., 2019; Barrón-Cedeno et al., 2020) and ClaimRank (Jaradat et al., 2018). The data consist of transcripts of political debates and speeches where each sentence has been annotated by an independent news or fact-checking organization for whether or not the statement should be checked for veracity. The most recent and best performing system on the data considered in this paper consists of a two-layer bidirectional GRU network which acts on both word embeddings and syntactic parse tags (Hansen et al., 2019). In addition, they augment the native dataset with weak supervision from unlabelled political speeches.

**Citation Needed Detection** Wikipedia citation needed detection has been investigated recently by Redi et al. (2019). The authors present a dataset of sentences from Wikipedia labelled for whether or not they have a citation attached to them. They also released a set of sentences which have been flagged as not having a citation but needing one (i.e. unverified). In contrast to other check-worthiness detection domains, there are much more training data available on Wikipedia. However, the rules for what requires a citation do not necessarily capture all "checkable" statements, as "all material in Wikipedia articles must be verifiable" (Redi et al., 2019).
Given this, we view Wikipedia citation data as a set of positive and unlabelled data: statements which have attached citations are positive samples of check-worthy statements, and within the set of statements without citations there exist some positive samples (those needing a citation) and some negative samples. As such, this domain constitutes the most general formulation of check-worthiness among the domains we consider. Therefore, we experiment with using data from this domain as a source for transfer learning, training variants of PU learning models on it, then applying them to target data from other domains.

# 2.2 Positive Unlabelled Learning

PU learning methods attempt to learn good binary classifiers given only positive labelled and unlabelled data. Recent applications where PU learning has been shown to be beneficial include detecting deceptive reviews online (Li et al., 2014; Ren et al., 2014), keyphrase extraction (Sterckx et al., 2016), and named entity recognition (Peng et al., 2019). For a formal definition of PU learning, see §3.2.

Methods for learning positive-negative (PN) classifiers from PU data have a long history (Denis, 1998; De Comité et al., 1999; Letouzey et al., 2000), with one of the most seminal papers being from Elkan and Noto (2008). In this work, the authors show that by assuming the labelled samples are a random subset of all positive samples, one can utilize a classifier trained on PU data in order to train a different classifier to predict if a sample is positive or negative. The process involves training a PN classifier with positive samples shown to the classifier once and unlabelled samples shown as both a positive sample and a negative sample. The loss for the duplicated samples is weighted by the confidence of a PU classifier that the sample is positive.

Building on this, du Plessis et al.
(2014) propose an unbiased estimator which improves the estimator introduced in (Elkan and Noto, 2008) by balancing the loss for positive and negative classes. The work of Kiryo et al. (2017) extends this method to improve the performance of deep networks on PU learning. Our work builds on the method of Elkan and Noto (2008) by relabelling samples which are most confidently positive.

# 3 Methods

The task considered in this paper is to predict if a statement makes "an assertion about the world that is checkable" (Konstantinovskiy et al., 2018). As the subjectivity of annotations for existing data on claim check-worthiness detection is a known problem (Konstantinovskiy et al., 2018), we view the data as a set of positive and unlabelled (PU) data. In addition, we unify our approach across domains by viewing Wikipedia data as an abundant source corpus. Models are then trained on this source corpus using variants of PU learning and transferred via fine-tuning to the other claim check-worthiness detection datasets, which are subsequently trained on as PU data. On top of vanilla PU learning, we introduce Positive Unlabelled Conversion (PUC), which relabels examples that are most confidently positive in the unlabelled data. A formal task definition, description of PU learning, and explanation of the PUC extension are given in the following sections.

# 3.1 Task Definition

The fundamental task is binary text classification. In the case of positive-negative (PN) data, we have a labelled dataset $\mathcal{D}:\{(x,y)\}$ with input features $x\in \mathbb{R}^d$ and labels $y\in \{0,1\}$. The goal is to learn a classifier $g:x\to (0,1)$ indicating the probability that the input belongs to the positive class. With PU data, the dataset $\mathcal{D}$ instead consists of samples $\{(x,s)\}$, where the value $s\in \{0,1\}$ indicates if a sample is labelled or not.
The primary difference from the PN case is that, unlike for the labels $y$, a value of $s = 0$ does not denote that the sample is negative, but that the label is unknown. The goal is then to learn a PN classifier $g$ using a PU classifier $f: x \to (0,1)$ which predicts whether or not a sample is labelled (Elkan and Noto, 2008).

![](images/4a25cc0b759c1c6a9f5121cc9507ce0cc532b1688006c5b28d73201ec3fe2c14.jpg)
Figure 2: High level view of PUC. A PU classifier ($f$, green box) is first learned using PU data (with $s$ indicating if the sample is positive or unlabelled). From this the prior probability of a sample being positive is estimated. Unlabelled samples are then ranked by $f$ (red box) and the most positive samples are converted into positives until the dataset is balanced according to the estimated prior. The model $g$ is then trained using the duplication and weighting method of Elkan and Noto (2008) as described in §3.2 with labels $l$ (blue box). Greyed out boxes are negative weights which are ignored when training the classifier $g$, as those examples are only trained as positives.

# 3.2 PU Learning

Our overall approach is depicted in Figure 2. We begin with an explanation of the PU learning algorithm described in (Elkan and Noto, 2008). Assume that we have a dataset randomly drawn from some probability distribution $p(x,y,s)$, where samples are of the form $(x,s)$, $s\in \{0,1\}$, and $s = 1$ indicates that the sample is labelled. The variable $y$ is unknown, but we make two assumptions which allow us to derive an estimator for probabilities involving $y$. The first is that:

$$
p(y = 0 | s = 1) = 0 \tag{1}
$$

In other words, if we know that a sample is labelled, then that label cannot be 0. The second assumption is that labelled samples are Selected Completely At Random from the underlying distribution (also known as the SCAR assumption).
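To make the SCAR assumption concrete, the following is a minimal, illustrative sketch (not from the paper) that simulates SCAR PU data from fully labelled PN data: each positive is labelled with the same probability $c = p(s = 1|y = 1)$, independently of $x$.

```python
import random

def make_scar_pu(pn_data, c, seed=0):
    """Simulate SCAR PU data from PN pairs (x, y): each positive (y=1)
    is labelled (s=1) independently with probability c = p(s=1|y=1);
    every other sample becomes unlabelled (s=0)."""
    rng = random.Random(seed)
    return [(x, 1 if y == 1 and rng.random() < c else 0) for x, y in pn_data]
```

Under SCAR, which positives end up labelled carries no information about $x$; this is what licenses the estimators in §3.2.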
Check-worthiness data can be seen as an instance of SCAR PU data; annotators tend to only label those instances which are very clearly check-worthy in their opinion (Konstantinovskiy et al., 2018). When combined across several annotators, we assume this leads to a random sample from the total set of check-worthy statements.

Given this, a classifier $f: x \to (0,1)$ is trained to predict $p(s = 1|x)$ from the PU data. It is then employed to train a classifier $g$ to predict $p(y = 1|x)$ by first estimating $c = p(s = 1|y = 1)$ on a set of validation data. Considering a validation set $V$ where $P\subset V$ is the set of positive samples in $V$, $c$ is estimated as:

$$
c \approx \frac{1}{|P|} \sum_{x \in P} f(x) \tag{2}
$$

This says our estimate of $p(s = 1|y = 1)$ is the average confidence of our classifier on known positive samples. Next, we can estimate $E_{p(x,y,s)}[h(x,y)]$ for any arbitrary function $h$ empirically from a dataset of $k$ samples as follows:

$$
E[h] = \frac{1}{k} \Big(\sum_{(x, s = 1)} h(x, 1) + \sum_{(x, s = 0)} w(x)\, h(x, 1) + (1 - w(x))\, h(x, 0)\Big) \tag{3}
$$

$$
w(x) = p(y = 1 | x, s = 0) = \frac{1 - c}{c} \frac{p(s = 1 | x)}{1 - p(s = 1 | x)} \tag{4}
$$

In this case, $c$ is estimated using Equation 2 and $p(s = 1|x)$ is estimated using the classifier $f$. The derivations for these equations can be found in (Elkan and Noto, 2008).

To estimate $p(y = 1|x)$ empirically, the unlabelled samples in the training data are duplicated, with one copy negatively labelled and one copy positively labelled. Each copy is trained on with a weighted loss: $w(x)$ when the label is positive and $1 - w(x)$ when the label is negative. Labelled samples are trained on normally (i.e. a single copy with unit weight).
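Equations 2–4 and the duplication scheme can be sketched as follows (an illustrative sketch, not the authors' code; `f_probs` stands for the PU classifier's predicted $p(s=1|x)$, and clipping $w$ to $[0,1]$ is a practical safeguard we add, since the estimate can exceed 1):

```python
import numpy as np

def estimate_c(f_probs_on_positives):
    """Eq. 2: c = p(s=1|y=1), the mean confidence of f on known positives."""
    return float(np.mean(f_probs_on_positives))

def weights(f_probs_unlabelled, c):
    """Eq. 4: w(x) = p(y=1|x,s=0) = (1-c)/c * p(s=1|x)/(1-p(s=1|x))."""
    p = np.asarray(f_probs_unlabelled, dtype=float)
    return np.clip((1.0 - c) / c * p / (1.0 - p), 0.0, 1.0)

def duplicate_unlabelled(x_pos, x_unl, w):
    """Each unlabelled sample appears twice: positively labelled with
    loss weight w(x) and negatively labelled with weight 1-w(x);
    labelled positives appear once with unit weight."""
    xs = list(x_pos) + list(x_unl) + list(x_unl)
    ys = [1] * len(x_pos) + [1] * len(x_unl) + [0] * len(x_unl)
    ws = [1.0] * len(x_pos) + list(w) + [1.0 - wi for wi in w]
    return xs, ys, ws
```

The weighted triples `(xs, ys, ws)` can then be fed to any classifier that accepts per-sample loss weights.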
# 3.3 Positive Unlabelled Conversion

The motivation for PUC is to relabel those samples from the unlabelled data which are clear-cut positives. To accomplish this, we start with the fact that one can also estimate the prior probability of a sample having a positive label using $f$. If instead of $h$ we want to estimate $E[y] = p(y = 1)$, the following is obtained:

$$
p(y = 1) \approx \frac{1}{k} \Big(\sum_{x, s = 1} 1 + \sum_{x, s = 0} w(x)\Big) \tag{5}
$$

This estimate is then utilized to convert the most confident unlabelled samples into positives. First, all of the unlabelled samples are ranked according to their calculated weight $w(x)$. The ranked samples are then iterated through and converted into positive-only samples until the proportion of positive samples is greater than or equal to the estimate of $p(y = 1)$. Unlike in vanilla PU learning, these samples are discretized to have a positive weight of 1, and are trained on by the classifier $g$ once per epoch as positive samples along with the labelled samples. The remaining unlabelled data are trained on in the same way as in vanilla PU learning.

# 3.4 Implementation

In order to create a unified approach to check-worthiness detection, transfer learning from Wikipedia citation needed detection is employed. To accomplish this, we start with a training dataset $\mathcal{D}^s$ of statements from Wikipedia featured articles that are either labelled as containing a citation (positive) or unlabelled. We train a classifier $f^s$ on this dataset and obtain a classifier $g^s$ via PUC. For comparison, we also train models with vanilla PU learning and PN learning as baselines. The network architecture for both $f^s$ and $g^s$ is BERT (Devlin et al., 2019), a large pretrained transformer-based (Vaswani et al., 2017) language model. We use the HuggingFace transformers implementation of the 12-layer, 768-dimensional variation of BERT (Wolf et al., 2019).
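The conversion step of §3.3 can be sketched as follows (illustrative, not the authors' code; `n_pos` is the number of labelled positives and `w_unlabelled` holds the weights $w(x)$ from Equation 4 for the unlabelled samples):

```python
import numpy as np

def puc_convert(n_pos, w_unlabelled):
    """Estimate the class prior p(y=1) via Eq. 5, then flip the
    highest-weight unlabelled samples to hard positives until the
    proportion of positives reaches that prior. Returns the indices
    (into w_unlabelled) of the converted samples."""
    w = np.asarray(w_unlabelled, dtype=float)
    k = n_pos + len(w)                      # total number of samples
    prior = (n_pos + w.sum()) / k           # Eq. 5 estimate of p(y=1)
    converted = []
    for i in np.argsort(-w):                # most confident first
        if (n_pos + len(converted)) / k >= prior:
            break
        converted.append(int(i))
    return converted
```

Converted samples are then trained on with unit weight like labelled positives, while the rest keep the vanilla PU duplication.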
The classifier in this implementation is a two-layer neural network acting on the [CLS] token.

From $g^{s}$, we train a classifier $g^{t}$ on a downstream check-worthiness detection dataset $\mathcal{D}^{t}$ by initializing $g^{t}$ with the base BERT network from $g^{s}$ and using a new randomly initialized final layer. In addition, we train a model $f^{t}$ on the target dataset, and train $g^{t}$ with PUC from this model to obtain the final classifier. As a baseline, we also experiment with training on just the dataset $\mathcal{D}^{t}$ without any pretraining. In the case of citation needed detection, since the data come from the same domain, we simply test on the test split of statements labelled as "citation needed" using the classifier $g^{s}$. We compare our models to the published state-of-the-art baselines on each dataset.

For all of our models ($f^{s}, g^{s}, f^{t}, g^{t}$) we train for two epochs, saving the weights with the best F1 score on validation data as the final model. Training is performed with a max learning rate of 3e-5 and a triangular learning rate schedule (Howard and Ruder, 2018) that linearly warms up for 200 training steps, then linearly decays to 0 for the rest of training. For regularization we add L2 loss with a coefficient of 0.01 and dropout with a rate of 0.1. Finally, we split the training sets into $80\%$ train and $20\%$ validation, and train with a batch size of 8. The code to reproduce our experiments can be found here.$^{1}$

# 4 Experimental Results

To what degree is claim check-worthiness detection a PU learning problem, and does this enable a unified approach to check-worthiness detection? In our experiments, we progressively answer this question by answering the following: 1) Is PU learning beneficial for the tasks considered? 2) Does PU citation needed detection transfer to rumour detection? 3) Does PU citation needed detection transfer to political speeches?
To investigate how well the data in each domain reflects the definition of a check-worthy statement as one which "makes an assertion about the world which is checkable", and thus understand subjectivity in the annotations, we perform a dataset analysis comparing the provided labels of the top-ranked check-worthy claims from the PUC model with the labels given by two human annotators. In all experiments, we report the mean performance of our models and standard deviation across 15 different random seeds. Additionally, we report the performance of each model ensembled across the 15 runs through majority vote on each sample.

# 4.1 Datasets

**Wikipedia Citations** We use the dataset from Redi et al. (2019) for citation needed detection.
| Method | P | R | F1 | eP | eR | eF1 |
| --- | --- | --- | --- | --- | --- | --- |
| Redi et al. 2019 | 75.3 | 70.9 | 73.0 [76.0]* | - | - | - |
| BERT | 78.8 ± 1.3 | 83.7 ± 4.5 | 81.0 ± 1.5 | 79.0 | 85.3 | 82.0 |
| BERT + PU | 78.8 ± 0.9 | 84.3 ± 3.0 | 81.4 ± 1.0 | 79.0 | 85.6 | 82.2 |
| BERT + PUC | 78.4 ± 0.9 | 85.6 ± 3.2 | 81.8 ± 1.0 | 78.6 | 87.1 | 82.6 |
Table 1: F1 and ensembled F1 score for citation needed detection, training on the FA split and testing on the LQN split of (Redi et al., 2019). The FA split contains statements with citations from featured articles and the LQN split consists of statements which were flagged as not having a citation but needing one. Listed are the mean, standard deviation, and ensembled results across 15 seeds (eP, eR, and eF1). Bold indicates best performance, underline indicates second best. *The reported value is from rerunning their released model on the test dataset. The value in brackets is the value reported in the original paper.

The dataset is split into three sets: one coming from featured articles (deemed 'high quality', 10k positive and 10k negative statements), one of statements which have no citation but have been flagged as needing one (10k positive, 10k negative), and one of statements from random articles which have citations (50k positive, 50k negative). In our experiments the models were trained on the high quality statements from featured articles and tested on the statements which were flagged as 'citation needed'. The key differentiating features of this dataset from the other two datasets are: 1) the domain of text is Wikipedia and 2) annotations are based on the decisions of Wikipedia editors following Wikipedia guidelines for citing sources.$^{3}$

**Twitter Rumours** The PHEME dataset of rumours is employed for Twitter claim check-worthiness detection (Zubiaga et al., 2016). The data consist of 5,802 annotated tweets from 5 different events, where each tweet is labelled as rumourous or non-rumourous (1,972 rumours, 3,830 non-rumours). We followed the leave-one-out evaluation scheme of (Zubiaga et al., 2017), namely, we performed a 5-fold cross-validation for all methods, training on 4 events and testing on 1.
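This event-based evaluation can be sketched as follows (an illustrative sketch; representing each tweet as an (event, text, label) triple is our assumption, not the paper's code):

```python
def leave_one_event_out(samples):
    """Yield (train, test) splits: for each event, test on that event's
    samples and train on all samples from the remaining events."""
    events = sorted({event for event, _, _ in samples})
    for held_out in events:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield train, test
```

With the 5 PHEME events this yields the 5 folds described above; the 7-fold political speech setup below is analogous, with speeches in place of events.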
The key differentiating features of this dataset from the other two datasets are: 1) the domain of data is tweets and 2) annotations are collected from professional journalists specifically for building a dataset to train machine learning models.

**Political Speeches** The dataset we adopted in the political speeches domain is the same as in Hansen et al. (2019), consisting of 4 political speeches from the 2018 CLEF CheckThat! competition (Atanasova et al., 2018) and 3 political speeches from ClaimRank (Jaradat et al., 2018) (2,602 statements in total). We performed a 7-fold cross-validation, using 6 splits as training data and 1 as test in our experimental setup. The data from ClaimRank are annotated using the judgements from 9 fact checking organizations, and the data from CLEF 2018 are annotated by factcheck.org. The key differentiating features of this dataset from the other two datasets are: 1) the domain of data is transcribed spoken utterances from political speeches and 2) annotations are taken from 9 fact checking organizations gathered independently.

# 4.2 Is PU Learning Beneficial for Citation Needed Detection?

Our results for citation needed detection are given in Table 1. The vanilla BERT model already significantly outperforms the state-of-the-art model from Redi et al. (2019) (a GRU network with global attention) by 6 F1 points. We see further gains in performance with PU learning, as well as when using PUC. Additionally, the models using PU learning have lower variance, indicating more consistent performance across runs. The best performing model we see is the one trained using PUC, with an F1 score of 82.6. This confirms our hypothesis that citation data is better seen as a set of positive and unlabelled data when used for check-worthiness detection. In addition, it gives some indication that PU learning improves the generalization power of the model, which could make it better suited for downstream tasks.
# 4.3 Does PU Citation Needed Detection Transfer to Rumour Detection?

# 4.3.1 Baselines

The best published method that we compare to is the CRF from (Zubiaga et al., 2017), which utilizes a combination of content and social features. Content features include word vectors, part-of-speech tags, and various lexical features; social features include tweet count, listed count, follow ratio, age, and whether or not a user is verified. The CRF acts on a timeline of tweets, making it contextual. In addition, we include results from a 2-layer BiLSTM with FastText embeddings (Bojanowski et al., 2017). There exist other deep learning models which have been developed for this task, including (Ma et al., 2019) and (Abulaish et al., 2019), but they do not publish results on the standard splits of the data and we were unable to recreate their results; they are thus omitted.

# 4.3.2 Results

The results for the tested systems are given in Table 2.

| Method | μP | μR | μF1 | eP | eR | eF1 |
| --- | --- | --- | --- | --- | --- | --- |
| Zubiaga et al. 2017 | 66.7 | 55.6 | 60.7 | - | - | - |
| BiLSTM | 62.3 | 56.4 | 59.0 | - | - | - |
| BERT | 69.9 ± 1.7 | 60.8 ± 2.6 | 65.0 ± 1.3 | 71.3 | 61.9 | 66.3 |
| BERT + Wiki | 69.3 ± 1.6 | 61.4 ± 2.6 | 65.1 ± 1.2 | 70.7 | 62.2 | 66.2 |
| BERT + WikiPU | 69.9 ± 1.3 | 62.5 ± 1.6 | 66.0 ± 1.1 | 72.2 | 64.6 | 68.2 |
| BERT + WikiPUC | 70.1 ± 1.1 | 61.8 ± 1.8 | 65.7 ± 1.0 | 71.5 | 62.7 | 66.8 |
| BERT + PU | 68.7 ± 1.2 | 64.7 ± 1.8 | 66.6 ± 0.9 | 69.9 | 65.2 | 67.5 |
| BERT + PUC | 68.1 ± 1.5 | 65.3 ± 1.6 | 66.6 ± 0.9 | 69.1 | 66.3 | 67.7 |
| BERT + PU + WikiPU | 68.4 ± 1.2 | 66.1 ± 1.2 | 67.2 ± 0.6 | 69.3 | 67.2 | 68.3 |
| BERT + PUC + WikiPUC | 68.0 ± 1.4 | 66.0 ± 2.0 | 67.0 ± 1.3 | 69.4 | 67.5 | 68.5 |

Table 2: micro-F1 ($\mu$F1) and ensembled F1 (eF1) performance of each system on the PHEME dataset. Performance is averaged across the five splits of (Zubiaga et al., 2017). Results show the mean, standard deviation, and ensembled score across 15 seeds. **Bold** indicates best performance, **underline** indicates second best.

Again we see large gains from BERT-based models over the baseline from (Zubiaga et al., 2017) and the 2-layer BiLSTM. Compared to training solely on PHEME, fine-tuning from basic citation needed detection sees little improvement (0.1 F1 points). However, fine-tuning a model trained using PU learning leads to an increase of 1 F1 point over the non-PU learning model, indicating that PU learning enables the Wikipedia data to be useful for transferring to rumour detection, i.e. the improvement is not only from a better semantic representation learned from Wikipedia data. For PUC, we see an improvement of 0.7 F1 points over the baseline and lower overall variance than vanilla PU learning, meaning that the results with PUC are more consistent across runs.
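The ensembled (eP, eR, eF1) scores above are obtained by per-sample majority voting over the 15 runs; a minimal sketch of that aggregation (illustrative, not the authors' code):

```python
import numpy as np

def majority_vote(run_predictions):
    """Ensemble binary predictions from several runs: a sample is
    predicted positive iff more than half of the runs predict 1.
    `run_predictions` has shape (n_runs, n_samples)."""
    preds = np.asarray(run_predictions)
    return (2 * preds.sum(axis=0) > preds.shape[0]).astype(int)
```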
The best performing models also use PU learning on in-domain data, with the best average performance being from the models trained using PU/PUC on in-domain data and initialized with weights from a Wikipedia model trained using PU/PUC. When models are ensembled, pretraining with vanilla PU learning improves over no pretraining by almost 2 F1 points, and the best performing models, which are also trained using PU learning on in-domain data, improve over the baseline by over 2 F1 points. We conclude that framing rumour detection on Twitter as a PU learning problem leads to improved performance.

Based on these results, we are able to confirm two of our hypotheses. The first is that Wikipedia citation needed detection and rumour detection on Twitter are indeed similar tasks, and a unified approach for both of them is possible. Pretraining a model on Wikipedia provides a clear downstream benefit when fine-tuning on Twitter data, precisely when PU/PUC is used. Additionally, training using PUC on in-domain Twitter data provides further benefit. This shows that PUC constitutes a unified approach to these two tasks.

The second hypothesis we confirm is that both Twitter and Wikipedia data are better seen as positive and unlabelled for claim check-worthiness detection. When pretraining with the data as a traditional PN dataset, there is no performance gain, and in fact a performance loss when the models are ensembled. PU learning allows the model to learn better representations for general claim check-worthiness detection.

To explain why this method performs better, Table 1 and Table 2 show that PUC improves model recall at very little cost to precision. The aim of this is to mitigate the issue of subjectivity in the annotations of check-worthiness detection datasets noted in previous work (Konstantinovskiy et al., 2018).
Some of the effects of this are illustrated in Table 5 and Table 6 in Appendix A. The PUC models are better at distinguishing rumours which involve claims of fact about people, i.e. things that people said or did, or qualities about people. For non-rumours, the PUC-pretrained model is better at
recognizing statements which describe qualitative information surrounding the events and information that is self-evident, e.g. a tweet showing the map where the Charlie Hebdo attack took place.

| Method | MAP |
| --- | --- |
| Konstantinovskiy et al. 2018 | 26.7 |
| Hansen et al. 2019 | 30.2 |
| BERT | 33.0 ± 1.8 |
| BERT + Wiki | 34.4 ± 2.7 |
| BERT + WikiPU | 33.2 ± 1.7 |
| BERT + WikiPUC | 31.7 ± 1.8 |
| BERT + PU | 18.8 ± 3.7 |
| BERT + PUC | 26.7 ± 2.8 |
| BERT + PU + WikiPU | 16.8 ± 3.5 |
| BERT + PUC + WikiPUC | 27.8 ± 2.7 |

Table 3: Mean average precision (MAP) of models on political speeches. **Bold** indicates best performance, **underline** indicates second best.

# 4.4 Does PU Citation Needed Detection Transfer to Political Speeches?

# 4.4.1 Baselines

The baselines we compare to are the state-of-the-art models from Hansen et al. (2019) and Konstantinovskiy et al. (2018). The model from Konstantinovskiy et al. (2018) consists of InferSent embeddings (Conneau et al., 2017) concatenated with POS tag and NER features, passed through a logistic regression classifier. The model from Hansen et al. (2019) is a bidirectional GRU network acting on syntactic parse features concatenated with word embeddings as the input representation.

# 4.4.2 Results

The results for political speech check-worthiness detection are given in Table 3. We find that the BERT model initialized with weights from a model trained on plain Wikipedia citation needed statements performs the best of all models. As we add transfer learning and PU learning, the performance steadily drops. We perform a dataset analysis to gain some insight into this effect in §4.5.

# 4.5 Dataset Analysis

In order to understand our results in the context of the selected datasets, we perform an analysis to learn to what extent the positive samples in each dataset reflect the definition of a check-worthy claim as "an assertion about the world that is checkable". We ranked all of the statements based on the predictions of 15 PUC models trained with different seeds, where more positive class predictions
means a higher rank (thus more check-worthy), and had two experts manually relabel the top 100 statements. The experts were instructed to label the statements based on the definition of check-worthy given above. We then compared the manual annotations to the original labels using F1 score. A higher F1 score indicates the dataset better reflects the definition of check-worthy we adopt in this work. Our results are given in Table 4.

| Dataset | P | R | F1 |
| --- | --- | --- | --- |
| Wikipedia | 81.7 | 87.0 | 84.3 |
| | 84.8 | 87.0 | 85.9 |
| | *83.3* | *87.0* | *85.1* |
| Twitter | 87.5 | 82.4 | 84.8 |
| | 86.3 | 81.2 | 83.6 |
| | *86.9* | *81.8* | *84.2* |
| Politics | 33.8 | 89.3 | 49.0 |
| | 31.1 | 100.0 | 47.5 |
| | *32.5* | *94.7* | *48.3* |

Table 4: F1 score comparing the manual relabelling of the top 100 predictions by the PUC model with the original labels in each dataset, by two different annotators. Italics are the average value between the two annotators.

We find that the Wikipedia and Twitter datasets contain labels which are more general, evidenced by similarly high F1 scores from both annotators ($>80.0$). For political speeches, we observe that the human annotators both found many more examples to be check-worthy than were labelled in the dataset. This is evidenced by examples such as *It's why our unemployment rate is the lowest it's been in so many decades* being labelled as not check-worthy and *New unemployment claims are near the lowest we've seen in almost half a century* being labelled as check-worthy in the same document in the dataset's original annotations. This characteristic has been noted for political debates data previously (Konstantinovskiy et al., 2018), which were also collected using the judgements of independent fact checking organizations (Gencheva et al., 2017). Labels for this dataset were collected from various news outlets and fact checking organizations, which may only be interested in certain types of claims, such as those most likely to be false. This makes it difficult to train supervised machine learning models for general check-worthiness detection based solely on text content and document context due to labelling inconsistencies.
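The ranking used in this analysis can be sketched as follows (illustrative; `model_preds`, an n_models × n_samples array of binary predictions, is an assumed representation):

```python
import numpy as np

def rank_statements(model_preds):
    """Rank statement indices by how many of the seed models predict
    them positive, most check-worthy first (ties keep input order)."""
    votes = np.asarray(model_preds).sum(axis=0)
    return np.argsort(-votes, kind="stable")
```

The top 100 indices under such a ranking are the statements handed to the annotators.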
# 5 Discussion and Conclusion

In this work, we approached claim check-worthiness detection by examining how to unify three distinct lines of work. We found that check-worthiness detection is challenging in any domain, as there exist stark differences in how annotators judge what is check-worthy. We showed that one can correct for this and improve check-worthiness detection across multiple domains by using positive unlabelled learning. Our method enabled us to perform a structured comparison of datasets in different domains, developing a unified approach which outperforms the state of the art in 2 of 3 domains and illuminating to what extent these datasets reflect a general definition of check-worthy.

Future work could explore different neural base architectures. Further, it could potentially benefit all tasks to consider the greater context in which statements are made. We would also like to acknowledge again that all experiments have only focused on English language datasets; developing models for other, especially low-resource, languages would likely result in additional challenges. We hope that this work will inspire future research on check-worthiness detection, which we see as an under-studied problem, with a focus on developing resources and models across many domains such as Twitter, news media, and spoken rhetoric.

# Acknowledgements

![](images/e0ca7d3c7bcb829d915d5a62a5b884bf6da927822d82553539baa3fbbe1ad667.jpg)

This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 801199.

# References

Muhammad Abulaish, Nikita Kumari, Mohd Fazil, and Basanta Singh. 2019. A Graph-Theoretic Embedding-Based Approach for Rumor Detection in Twitter. In IEEE/WIC/ACM International Conference on Web Intelligence, pages 466-470.
Liesbeth Allein, Isabelle Augenstein, and Marie-Francine Moens. 2020. Time-Aware Evidence Ranking for Fact-Checking.
arXiv preprint arXiv:2009.06402.
Pepa Atanasova, Alberto Barron-Cedeno, Tamer Elsayed, Reem Suwaileh, Wajdi Zaghouani, Spas Kyuchukov, Giovanni Da San Martino, and Preslav Nakov. 2018. Overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims. Task 1: Check-Worthiness. arXiv preprint arXiv:1808.05542.
Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. 2020a. Generating Fact Checking Explanations. In ACL, pages 7352-7364. Association for Computational Linguistics.
Pepa Atanasova, Dustin Wright, and Isabelle Augenstein. 2020b. Generating Label Cohesive and Well-Formed Adversarial Claims. In Proceedings of EMNLP. Association for Computational Linguistics.
Isabelle Augenstein, Christina Lioma, Dongsheng Wang, Lucas Chaves Lima, Casper Hansen, Christian Hansen, and Jakob Grue Simonsen. 2019. MultiFC: A Real-World Multi-Domain Dataset for Evidence-Based Fact Checking of Claims. In EMNLP/IJCNLP (1), pages 4684-4696. Association for Computational Linguistics.
Alberto Barrón-Cedeno, Tamer Elsayed, Preslav Nakov, Giovanni Da San Martino, Maram Hasanain, Reem Suwaileh, and Fatima Haouari. 2020. CheckThat! at CLEF 2020: Enabling the Automatic Identification and Verification of Claims in Social Media. In European Conference on Information Retrieval, pages 499-507. Springer.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. Transactions of the Association for Computational Linguistics, 5:135-146.
Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised Learning of Universal Sentence Representations from Natural Language Inference Data. In EMNLP 2017, pages 670-680.
Francesco De Comité, François Denis, Rémi Gilleron, and Fabien Letouzey. 1999. Positive and Unlabeled Examples Help Learning. In International Conference on Algorithmic Learning Theory, pages 219-230. Springer.
Michela Del Vicario, Alessandro Bessi, Fabiana Zollo, Fabio Petroni, Antonio Scala, Guido Caldarelli, H Eugene Stanley, and Walter Quattrociocchi. 2016. The Spreading of Misinformation Online. Proceedings of the National Academy of Sciences, 113(3):554-559.
François Denis. 1998. PAC Learning From Positive Statistical Queries. In International Conference on Algorithmic Learning Theory, pages 112-126. Springer.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. In *NAACL-HLT* 2019, pages 4171–4186.
Marthinus C Du Plessis, Gang Niu, and Masashi Sugiyama. 2014. Analysis of Learning From Positive and Unlabeled Data. In Advances in Neural Information Processing Systems, pages 703-711.
Charles Elkan and Keith Noto. 2008. Learning Classifiers From Only Positive and Unlabeled Data. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 213-220.
Tamer Elsayed, Preslav Nakov, Alberto Barron-Cedeno, Maram Hasanain, Reem Suwaileh, Giovanni Da San Martino, and Pepa Atanasova. 2019. Overview of the CLEF-2019 CheckThat! Lab: Automatic Identification and Verification of Claims. In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 301-321. Springer.
Pepa Gencheva, Preslav Nakov, Lluis Marquez, Alberto Barron-Cedeno, and Ivan Koychev. 2017. A Context-Aware Approach for Detecting Worth-Checking Claims in Political Debates. In Proceedings of the International Conference on Advances in Natural Language Processing, RANLP 2017, pages 267-276, Varna, Bulgaria. INCOMA Ltd.
Lucas Graves and Federica Cherubini. 2016. The Rise of Fact-Checking Sites in Europe. Reuters Institute for the Study of Journalism.
Casper Hansen, Christian Hansen, Stephen Alstrup, Jakob Grue Simonsen, and Christina Lioma. 2019.
Neural Check-Worthiness Ranking With Weak Supervision: Finding Sentences for Fact-Checking. In Companion Proceedings of the 2019 World Wide Web Conference, pages 994–1000.
Naeemul Hassan, Gensheng Zhang, Fatma Arslan, Josue Caraballo, Damian Jimenez, Siddhant Gawsane, Shohedul Hasan, Minumol Joseph, Aaditya Kulkarni, Anil Kumar Nayak, et al. 2017. ClaimBuster: the First-Ever End-to-End Fact-Checking System. Proceedings of the VLDB Endowment, 10(12):1945-1948.
Jeremy Howard and Sebastian Ruder. 2018. Universal Language Model Fine-tuning for Text Classification. pages 328-339.
Lee Howell et al. 2013. Digital Wildfires in a Hyperconnected World. WEF report, 3:15-94.
Israa Jaradat, Pepa Gencheva, Alberto Barrón-Cedeno, Lluis Márquez, and Preslav Nakov. 2018. ClaimRank: Detecting Check-Worthy Claims in Arabic and English. pages 26-30.
Ryuichi Kiryo, Gang Niu, Marthinus C du Plessis, and Masashi Sugiyama. 2017. Positive-Unlabeled Learning With Non-Negative Risk Estimator. In Advances in Neural Information Processing Systems, pages 1675-1685.
Lev Konstantinovskiy, Oliver Price, Mevan Babakar, and Arkaitz Zubiaga. 2018. Towards Automated Factchecking: Developing an Annotation Schema and Benchmark For Consistent Automated Claim Detection. arXiv preprint arXiv:1809.08193.
Fabien Letouzey, François Denis, and Rémi Gilleron. 2000. Learning From Positive and Unlabeled Examples. In International Conference on Algorithmic Learning Theory, pages 71-85. Springer.
Huayi Li, Zhiyuan Chen, Bing Liu, Xiaokai Wei, and Jidong Shao. 2014. Spotting Fake Reviews Via Collective Positive-Unlabeled Learning. In 2014 IEEE International Conference on Data Mining, pages 899-904. IEEE.
Jing Ma, Wei Gao, and Kam-Fai Wong. 2019. Detect Rumors on Twitter by Promoting Information Campaigns With Generative Adversarial Learning. In The World Wide Web Conference, pages 3049-3055.
Wojciech Ostrowski, Arnav Arora, Pepa Atanasova, and Isabelle Augenstein. 2020.
Multi-Hop Fact Checking of Political Claims. arXiv preprint arXiv:2009.06401.

Minlong Peng, Xiaoyu Xing, Qi Zhang, Jinlan Fu, and Xuanjing Huang. 2019. Distantly Supervised Named Entity Recognition using Positive-Unlabeled Learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2409-2419.

Miriam Redi, Besnik Fetahu, Jonathan Morgan, and Dario Taraborelli. 2019. Citation Needed: A Taxonomy and Algorithmic Assessment of Wikipedia's Verifiability. In The World Wide Web Conference, pages 1567-1578.

Yafeng Ren, Donghong Ji, and Hongbin Zhang. 2014. Positive Unlabeled Learning for Deceptive Reviews Detection. In EMNLP 2014, pages 488-498.

Lucas Sterckx, Thomas Demeester, Chris Develder, and Cornelia Caragea. 2016. Supervised Keyphrase Extraction as Positive Unlabeled Learning. In EMNLP 2016, pages 1-6.

James Thorne and Andreas Vlachos. 2018. Automated Fact Checking: Task Formulations, Methods and Future Directions. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3346-3359, Santa Fe, New Mexico, USA. Association for Computational Linguistics.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. In Advances in Neural Information Processing Systems, pages 5998-6008.

Andreas Vlachos and Sebastian Riedel. 2014. Fact Checking: Task Definition and Dataset Construction. In Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science, pages 18-22.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art Natural Language Processing. arXiv preprint arXiv:1910.03771.

Arkaitz Zubiaga, Elena Kochkina, Maria Liakata, Rob Procter, Michal Lukasik, Kalina Bontcheva, Trevor Cohn, and Isabelle Augenstein.
2018. Discourse-Aware Rumour Stance Classification in Social Media Using Sequential Classifiers. Information Processing & Management, 54(2):273-290.

Arkaitz Zubiaga, Maria Liakata, and Rob Procter. 2017. Exploiting Context for Rumour Detection in Social Media. In International Conference on Social Informatics, pages 109-123. Springer.

Arkaitz Zubiaga, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Peter Tolmie. 2016. Analysing How People Orient to and Spread Rumours in Social Media by Looking at Conversational Threads. PLoS ONE, 11(3).

# A Examples of PUC Improvements for Rumour Detection

Examples of improvements for rumour detection using PUC can be found in Table 5.

# B Reproducibility

# B.1 Computing Infrastructure

All experiments were run on a shared cluster. Requested jobs consisted of 16GB of RAM and 4 Intel Xeon Silver 4110 CPUs. We used a single NVIDIA Titan X GPU with 12GB of RAM.

# B.2 Average Runtimes

See Table 7 for model runtimes.

# B.3 Number of Parameters per Model

We used BERT with a classifier on top for each model, which consists of 109,483,778 parameters.

# B.4 Validation Performance

Validation performances for the tested models are given in Table 8.

# B.5 Evaluation Metrics

The primary evaluation metric used was F1 score. We used the sklearn implementation of precision_recall_fscore_support, which can be found here: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html. Briefly:

$$
p = \frac{tp}{tp + fp} \qquad r = \frac{tp}{tp + fn} \qquad F1 = \frac{2pr}{p + r}
$$

where $tp$ are true positives, $fp$ are false positives, and $fn$ are false negatives.

Additionally, we used the mean average precision calculation from the CLEF-2019 CheckThat!
challenge for political speech data, which can be found here: https://github.com/apepa/clef2019-factchecking-task1/tree/master/scorer. Briefly:

$$
\mathrm{AP} = \frac{1}{|P|} \sum_{i} \frac{tp(i)}{i}
$$
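The metrics above are easy to check in a few lines. The following is a minimal Python sketch of the formulas exactly as stated (the sklearn and CLEF scorers linked above are what was actually used for evaluation; the function names here are illustrative):

```python
def precision_recall_f1(tp, fp, fn):
    """p = tp/(tp+fp), r = tp/(tp+fn), F1 = 2pr/(p+r)."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def average_precision(is_tp):
    """AP = (1/|P|) * sum_i tp(i)/i, with tp(i) the 0/1 indicator that the
    i-th ranked item (1-indexed) is a true positive. mAP is then the mean
    of AP over the query set Q."""
    num_pos = sum(is_tp)
    if num_pos == 0:
        return 0.0
    return sum(1.0 / i for i, t in enumerate(is_tp, start=1) if t) / num_pos
```

For example, a ranking whose first and third items are positives gives AP = (1/2)(1/1 + 1/3) = 2/3.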
| Rumour text | n (PUC) | n (Baseline) |
| --- | --- | --- |
| Germanwings co-pilot had serious depressive episode: Bild newspaper http://t.co/RgSTrehD21 | 13 | 5 |
| Now hearing 148 passengers + crew on board the #A320 that has crashed in southern French Alps. #GermanWings flight. @BBCWorld | 10 | 2 |
| It appears that #Ferguson PD are trying to assassinate Mike Brown's character after literally assassinating Mike Brown. | 13 | 5 |
| #Ferguson cops beat innocent man then charged him for bleeding on them: http://t.co/u1ot9Eh5Cq via @MichaelDalynyc http://t.co/AGJW2Pid1r | 9 | 2 |
+ +Table 5: Examples of rumours which the PUC model judges correctly vs the baseline model with no pretraining on citation needed detection. $\mathrm{n}^{ * }$ is the number of models among the 15 seeds which predicted the correct label (rumour). + +
| Non-Rumour text | n (PUC) | n (Baseline) |
| --- | --- | --- |
| A female hostage stands by the front entrance of the cafe as she turns the lights off in Sydney. #syndneysiege http://t.co/qNfCMv9yZt | 11 | 5 |
| Map shows where gun attack on satirical magazine #CharlieHebdo took place in central Paris http://t.co/5AZAKumpNd http://t.co/ECFYztMVk9 | 10 | 4 |
| "Hands up! Don't shoot!" #ferguson https://t.co/svCE1S0Zek | 12 | 7 |
| Australian PM Abbott: Motivation of perpetrator in Sydney hostage situation is not yet known - @9NewsAUS http://t.co/SI01B997xf | 10 | 6 |
Table 6: Examples of non-rumours which the PUC model judges correctly vs the baseline model with no pretraining on citation needed detection. $\mathrm{n}^{*}$ is the number of models among the 15 seeds which predicted the correct label (non-rumour).

$$
\mathrm{mAP} = \frac{1}{|Q|} \sum_{q \in Q} \mathrm{AP}(q)
$$

where $P$ is the set of positive instances, $tp(i)$ is an indicator function which equals one when the $i$-th ranked sample is a true positive, and $Q$ is the set of queries. In this work, $Q$ consists of the rankings of statements from each split of the political speech data.

# B.6 Links to Data

- Citation Needed Detection (Redi et al., 2019): https://drive.google.com/drive/folders/1zG6orf0_h2jYBvGvsolpSy3ikbNiW0xJ
- PHEME (Zubiaga et al., 2016): https://figshare.com/articles/PHEME_dataset_for_Rumour_Detection_and_Veracity_Classification/6392078
- Political Speeches: We use the same 7 splits as used in Hansen et al. (2019). The first 5 can be found here: http://alt.qcri.org/clef2018-factcheck/data/uploads/clef18factchecking_lab_submissions_and_Scores_and_combinations.zip. The files can be found under "task1_test_set/English/task1-enfile(3,4,5,6,7)". The last two files can be found here: https://github.com/apepa/claim-rank/tree/master/data/transcripts_all_sources. The files are "clinton_acceptance_speech_ann.tsv" and "trump_inauguration_ann.tsv".

# B.7 Hyperparameters

We found that good defaults worked well, and thus did not perform hyperparameter search. The hyperparameters we used are given in Table 9.
| Method | Wikipedia | PHEME | Political Speeches |
| --- | --- | --- | --- |
| BERT | 34m30s | 14m25s | 8m11s |
| BERT + PU | 40m7s | 20m40s | 15m38s |
| BERT + PUC | 40m8s | 21m20s | 15m32s |
| BERT + Wiki | - | 14m28s | 8m50s |
| BERT + WikiPU | - | 14m25s | 8m41s |
| BERT + WikiPUC | - | 14m28s | 8m38s |
| BERT + PU + WikiPU | - | 20m41s | 15m32s |
| BERT + PUC + WikiPUC | - | 21m52s | 15m40s |
Table 7: Average runtime of each tested system for each split of the data.
| Method | Wikipedia | PHEME | Political Speeches |
| --- | --- | --- | --- |
| BERT | 88.9 | 81.6 | 31.3 |
| BERT + PU | 89.0 | 83.7 | 18.2 |
| BERT + PUC | 89.2 | 82.8 | 32.0 |
| BERT + Wiki | - | 80.8 | 32.3 |
| BERT + WikiPU | - | 82.0 | 35.7 |
| BERT + WikiPUC | - | 80.4 | 34.3 |
| BERT + PU + WikiPU | - | 82.9 | 33.3 |
| BERT + PUC + WikiPUC | - | 84.1 | 34.0 |
+ +Table 8: Validation F1 performances for each tested model. + +
| Hyperparameter | Value |
| --- | --- |
| Learning Rate | 3e-5 |
| Weight Decay | 0.01 |
| Batch Size | 8 |
| Dropout | 0.1 |
| Warmup Steps | 200 |
| Epochs | 2 |
Table 9: Hyperparameters used for each tested model. \ No newline at end of file diff --git a/claimcheckworthinessdetectionaspositiveunlabelledlearning/images.zip b/claimcheckworthinessdetectionaspositiveunlabelledlearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..8626f63edb2bdbab1c45562bc8b2570a855c1688 --- /dev/null +++ b/claimcheckworthinessdetectionaspositiveunlabelledlearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b8060de1b74c83256d6138fca619bc0a9c5635bec553ebecc0cd5ec20bafef38 +size 516305 diff --git a/claimcheckworthinessdetectionaspositiveunlabelledlearning/layout.json b/claimcheckworthinessdetectionaspositiveunlabelledlearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..f0733961ab70c139eb8a378da69db2d0562dfc4c --- /dev/null +++ b/claimcheckworthinessdetectionaspositiveunlabelledlearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e27f4025322576dff143d1023804d1d9705aa8ae635816ef6b0ece219e02b3a +size 397569 diff --git a/claracrosslingualargumentregularizerforsemanticrolelabeling/f015df03-e37e-45b7-802c-2b0a38159b3d_content_list.json b/claracrosslingualargumentregularizerforsemanticrolelabeling/f015df03-e37e-45b7-802c-2b0a38159b3d_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..3ffb91fe5ed0b297d772a061891f23f910ffa3a6 --- /dev/null +++ b/claracrosslingualargumentregularizerforsemanticrolelabeling/f015df03-e37e-45b7-802c-2b0a38159b3d_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9f5fd85ea6666c0e8801aae45ab42ba4712effa40acd91f23e7873c85c6b2fe6 +size 88938 diff --git a/claracrosslingualargumentregularizerforsemanticrolelabeling/f015df03-e37e-45b7-802c-2b0a38159b3d_model.json b/claracrosslingualargumentregularizerforsemanticrolelabeling/f015df03-e37e-45b7-802c-2b0a38159b3d_model.json new file mode 100644 index
0000000000000000000000000000000000000000..6c0e28037f4589002ce46cf8965ab4d95c135695 --- /dev/null +++ b/claracrosslingualargumentregularizerforsemanticrolelabeling/f015df03-e37e-45b7-802c-2b0a38159b3d_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4e7f14b5407b0e135a06aff891c22a31861311ce35a8a87aa45ce96ffa810ce8 +size 105363 diff --git a/claracrosslingualargumentregularizerforsemanticrolelabeling/f015df03-e37e-45b7-802c-2b0a38159b3d_origin.pdf b/claracrosslingualargumentregularizerforsemanticrolelabeling/f015df03-e37e-45b7-802c-2b0a38159b3d_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e000bab948418de7a0e74d0b4ef0363507f392b4 --- /dev/null +++ b/claracrosslingualargumentregularizerforsemanticrolelabeling/f015df03-e37e-45b7-802c-2b0a38159b3d_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c338d8b8cd4dd750b9117d0e024db0fad1219543ef44f7733a0c95e9b792fef +size 2126637 diff --git a/claracrosslingualargumentregularizerforsemanticrolelabeling/full.md b/claracrosslingualargumentregularizerforsemanticrolelabeling/full.md new file mode 100644 index 0000000000000000000000000000000000000000..7fcf1a42fd295a0ace1b27cd533674fbc0e366ca --- /dev/null +++ b/claracrosslingualargumentregularizerforsemanticrolelabeling/full.md @@ -0,0 +1,379 @@

# CLAR: A Cross-Lingual Argument Regularizer for Semantic Role Labeling

Ishan Jindal, Yunyao Li, Siddhartha Brahma, and Huaiyu Zhu

IBM Research, Almaden Research Center, CA 95120

Google Research, Berlin, Germany

ishan.jindal@ibm.com, {yunyaoli, huaiyu}@us.ibm.com, sidbrahma@gmail.com

# Abstract

Semantic role labeling (SRL) identifies predicate-argument structure(s) in a given sentence.
Although different languages have different argument annotations, polyglot training, the idea of training one model on multiple languages, has previously been shown to outperform monolingual baselines, especially for low resource languages. In fact, even a simple combination of data has been shown to be effective with polyglot training by representing the distant vocabularies in a shared representation space. Meanwhile, despite the dissimilarity in argument annotations between languages, certain argument labels do share common semantic meaning across languages (e.g. adjuncts have more or less similar semantic meaning across languages). To leverage such similarity in annotation space across languages, we propose a method called Cross-Lingual Argument Regularizer (CLAR). CLAR identifies such linguistic annotation similarity across languages and exploits this information to map the target language arguments using a transformation of the space on which the source language arguments lie. Our experimental results show that CLAR consistently improves SRL performance on multiple languages over monolingual and polyglot baselines for low resource languages.

# 1 Introduction

Semantic Role Labeling (SRL) is the task of labeling each predicate and its corresponding arguments in a given sentence. SRL provides a more stable meaning representation across syntactically different sentences and has been seen to help a wide range of NLP applications such as question answering (Maqsud et al., 2014; Yih et al., 2016) and machine translation (Shi et al., 2016).

![](images/67fdd64235a3b433457ded058d73d432ad4f16fd95028e81baaace49d6bcf0ff.jpg)
Figure 1: Example of predicate-argument structure from the CoNLL 2009 training data for I) Chinese, II) German, and III) English.
Recent end-to-end deep neural networks for SRL, though performing well for languages with large training data (Marcheggiani et al., 2017; Tan et al., 2018; He et al., 2018), are much less effective for low resource languages due to the very limited annotated data for these languages. Methods such as polyglot training (Mulcaire et al., 2018) seek to make these models perform better on low resource languages by combining supervision from multiple languages. The key idea in polyglot training is to combine the training data from multiple languages by using multilingual word embeddings from a shared space and a common encoder model (e.g. an LSTM). The argument sets for the languages are kept separate by using different classification layers, because the semantic label spaces are usually language-specific (Mulcaire et al., 2018).

However, despite the dissimilarity in argument annotations between languages, certain argument labels do share common semantic meaning across languages. Fig. 1 shows three different sentences from Chinese, German, and English, respectively, with defined predicate-argument structures. Although the predicates are essentially the same, their arguments are labeled differently across languages in the training data. For instance, all sentences contain words representing the same underlying semantic meaning, namely temporal, but with different argument labels (TMP in Chinese, A4 in German, AM-TMP in English).

We hypothesize that we can improve the SRL performance of low resource languages during cross-lingual transfer by identifying such arguments with similar semantic meaning across languages and representing them close to each other in the feature space. This requires: (1) detecting the correspondence between the labels in different languages; and (2) representing arguments with similar semantic meaning close to each other in the feature space for better SRL performance.
We propose a method called Cross-Lingual Argument Regularizer (CLAR) with a two-step process:

Step 1: Pair Matching: Detect a number of label pairs between the source and target languages during polyglot training. We call these arguments common arguments. Given the multilingual embedding already used in polyglot training, CLAR does not require additional cross-lingual alignments on parallel data.

Step 2: Regularization: Given the common arguments identified, find a transformation that brings the paired arguments close together. This transformation is learned and used in the polyglot training process so that knowledge of the labels in the source language can be better transferred to the corresponding labels in the target language.

We evaluate CLAR on the SRL portion of the CoNLL 2009 dataset (Hajič et al., 2009) and compare its performance against baseline and polyglot training methods. The main contributions of this work are:

- We propose CLAR, a simple yet effective method for better cross-lingual transfer by detecting similar semantic role arguments between languages without requiring cross-lingual alignments or parallel data, and by learning a transformation for paired labels via regularization during SRL model training.
- We conduct comprehensive empirical studies and demonstrate the effectiveness of CLAR over both monolingual and polyglot baselines.
+ +# 2 Base Model + +The SRL task consists of four subtasks: 1) predicate identification (e.g., reach); 2) sense disambiguation of the identified predicate (e.g., reach.01); 3) argument identification for each predicate (e.g., market) and 4) role classification of the identified arguments (e.g., A0). Following Li et al. (2018) and Mulcaire et al. (2018), we focus on argument labeling and predicate sense disambiguation, both sequence tagging problems. + +Model Architecture As shown in Fig. 2, our model architecture consists of four main modules: (1) sentence encoder takes the raw tokens sequentially and outputs a fixed sentence representation; (2) role labeler takes the sentence encoder output and identify and predicts roles of the tokens; (3) predicate sense disambiguator takes the sentence encoder output and predict the sense for each predicate; and (4) CLAR regularizer first detects the common arguments and then learns a manifold on which the arguments of the target languages lie. We now describe each of the modules in more details. + +# 2.1 Sentence Encoder + +Word Representation Knowing the predicate position has previously been shown to improve the argument labeling task (Li et al., 2018) and since the predicate position is marked in the CoNLL 2009 dataset, we use this information and obtain the predicate-specific word representations for each word in the sentence. In addition to predicate-specific flag $\boldsymbol{w}_i^f$ , we represent each word $\boldsymbol{w}_i$ in the sentence as a concatenation of several word features including randomly initialized word embeddings $\boldsymbol{w}_i^r$ , pre-trained word embeddings $\boldsymbol{w}_i^p$ , randomly initialized lemma embeddings $\boldsymbol{w}_i^l$ and randomly initialized POS tags embeddings $\boldsymbol{w}_i^s$ . Finally, each word is represented as $\boldsymbol{w}_i = [\boldsymbol{w}_i^r, \boldsymbol{w}_i^p, \boldsymbol{w}_i^l, \boldsymbol{w}_i^s, \boldsymbol{w}_i^f]$ . 
![](images/4c4db0b8902dcdc16d61f42896e5fbcf2d602abdf688babdaca164645eeb317e.jpg)
Figure 2: A multitask framework for predicate sense disambiguation and argument classification with CLAR argument regularization

Since we combine the resources from a pair of languages similar to polyglot training (Mulcaire et al., 2018), we use the language-specific pre-trained word embeddings for $\pmb{w}_i^p$ and train the SRL model on the source and target languages simultaneously.

BiLSTM Encoder To model the sequential input we use Bi-directional Long Short Term Memory neural networks (Hochreiter and Schmidhuber, 1997), which take in the concatenated word representations for each word in the $j$-th sentence $x_{j} = (\pmb{w}_{j1}, \pmb{w}_{j2}, \dots, \pmb{w}_{jn})$ and process them sequentially from both directions to obtain contextual representations.

# 2.2 Semantic Role Labeler

Our role labeler consists of Multi-Layer Perceptron (MLP) layers with highway connections (Srivastava et al., 2015). It takes the contextualized word representations from the sentence encoder as input and outputs a probability distribution over the set of argument labels for each word in the sentence. Given a sentence, we maximize the likelihood of the labels for each word by minimizing

$$
\mathcal{L}_{\text{Base}} = -\frac{1}{N} \sum_{i=1}^{N} \log p\left(y^{\prime} = y_{i} \mid \boldsymbol{w}_{i}; \theta\right), \tag{1}
$$

where $y_{i}$ is the argument label, $\pmb{w}_{i}$ represents the input token, $\theta$ represents the model parameters, and $N$ denotes the total number of samples.

# 3 The CLAR Algorithm

The underlying motivation for polyglot training (Mulcaire et al., 2018) is that arguments from different languages often help enhance each other. It is reasonable to assume that if corresponding arguments from source and target languages are located closer in the feature space, their mutual enhancements can be strengthened.
The possibility for doing so is based on the following observation.

In neural network models that generate labels, the last layer is usually a softmax layer of the form

$$
\boldsymbol{y}_{i} = \frac{\exp(\boldsymbol{H}\boldsymbol{a}_{i})}{\sum_{k}\exp(\boldsymbol{h}_{k}\boldsymbol{a}_{i})} \tag{2}
$$

where $\pmb{y}_i\in \mathbb{R}^k$, its $k$ components corresponding to the $k$ output argument labels. Given $\pmb{a}_i\in \mathbb{R}^m$ as a representation of the input token $i$ calculated by previous layers, the rows $\boldsymbol{h}_k$ of the weights $\pmb{H}$ are responsible for distinguishing the different argument labels $k$ from each other. During simple polyglot training, the $k$ argument labels consist of $k_{s}$ for the source language and $k_{t}$ for the target language. Splitting these $\pmb{h}_i$s into two sets, $\pmb{u}_i$ for the source language and $\pmb{v}_i$ for the target language, we observe that for argument labels, the Euclidean distance between $\pmb{u}_i$ and $\pmb{v}_j$ is often small if $i$ and $j$ are corresponding argument labels. These can be brought even closer together by an affine transform (a linear transform and a translation).

We therefore propose the following approach (CLAR) consisting of two steps:

Step 1: Pair Matching: Detect the best pairing of the arguments between a pair of languages.

Step 2: Regularization: Find a transformation that brings the feature vectors corresponding to the paired argument labels close to each other.

These two steps are described in detail below.

Pair Matching: The goal of this step is to identify matching label pairs in the two languages. We start with the simple polyglot training (Mulcaire et al., 2018) for the first few epochs without CLAR and collect the last layer weights for all the target and source language arguments.
+ +Given the $k_{s}$ vectors $\pmb{u}_{i}$ and $k_{t}$ vectors $\pmb{v}_{j}$ , solve this constraint optimization problem + +$$ +\underset {\boldsymbol {T}} {\text {m i n i m i z e}} \sum_ {i} ^ {k _ {s}} \sum_ {j} ^ {k _ {t}} \boldsymbol {T} _ {i j} | | \boldsymbol {u} _ {i} - \boldsymbol {v} _ {j} | | _ {2} ^ {2} +$$ + +subject to + +$$ +\begin{array}{l} \sum_ {i} \boldsymbol {T} _ {i j} \leq 1, j = 1, \dots , k _ {t} \\ \sum_ {j} T _ {i j} \leq 1, i = 1, \ldots , k _ {s} \\ \sum_ {i, j} \boldsymbol {T} _ {i j} \geq \min (k _ {t}, k _ {s}), j = 1, \dots , k _ {t}; \\ i = 1, \ldots , k _ {s} \\ \boldsymbol {T} _ {i j} \in \{0, 1 \}, \forall i, j. \tag {3} \\ \end{array} +$$ + +Intuitively, this requires finding pairings between $i$ and $j$ such that the total squared distance between paired vectors $(\pmb{u}_i,\pmb{v}_j)$ is minimized, subject to the constraint that each source argument matches at most one target argument and vice versa, and that at least $K = \min (k_{t},k_{s})$ argument pairs are identified. This identifies $K$ semantically similar argument pairs in source and target languages, represented in the binary matrix $\pmb{T}$ , where $T_{ij} = 1$ means that argument $i$ in source language and argument $j$ in target language are paired together. Later on (Sec. 4.5) we will show that in certain situations + +it makes sense to relax the "at most one" constraint and allow many-to-one or one-to-many matching. + +This is an Integer Linear Programming problem, for which many excellent solvers exist. We use GLPK solver from CVXOPT2. + +We observe that the frequency distribution of the argument labels is quite skewed in the training dataset: a few labels (e.g., A0, A1) have much larger number of training examples than other labels. Experiments show that low-frequency labels cause noisy pair matching that degrades the output quality. 
Therefore, we consider only labels that have more than $1\%$ of the total number of occurrences in the respective language training data. Typically, $40-50\%$ of the total labels in each language match this criterion. The $k_{s}$ and $k_{t}$ in the general algorithm are replaced by $\hat{k}_{s}$ and $\hat{k}_{t}$, the number of arguments satisfying this criterion in the source and the target language, respectively.

Regularization: The goal of this step is to learn an affine transform that brings the target vectors closest to the corresponding source vectors. This step is performed iteratively during the overall training process.

Given the $K$ pairs $(\pmb{u}_i, \pmb{v}_i)$ detected in the previous step, the overall optimization objective is amended as follows:

$$
\mathcal{L}_{\mathrm{CLAR}} = \mathcal{L}_{\mathrm{Base}} + \lambda \sum_{i=1}^{K} \left\| \boldsymbol{u}_{i} - (\Psi \boldsymbol{v}_{i} + b) \right\|_{2}^{2}, \tag{4}
$$

where $\Psi \pmb{v}_i + b$ is the affine transform that brings $\pmb{v}_i$ close to $\pmb{u}_i$, and $\lambda$ controls the strength of the amendment by the paired labels. The transformation $(\Psi, b)$ is learned iteratively by minimizing (4) during SRL model training.

# 4 Experiments

# 4.1 Dataset

We evaluate CLAR on the CoNLL 2009 Shared Task dataset (Hajič et al., 2009) with English (EN) as the source language and five different languages, namely German (DE), Spanish (ES), Chinese (ZH), Czech (CS) and Catalan (CA), as target languages. The dataset includes no correspondence defined between the argument labels across languages. For instance, the argument label set in English contains
| Method | Setting | EN | CA | CS | DE | ES | ZH | avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Zhao et al. (2009) |  | 86.20 | 80.30 | 85.20 | 76.00 | 83.00 | 77.70 | - |
| Roth and Lapata (2016) |  | 87.70 | - | - | 80.10 | 80.20 | 79.40 | - |
| Marcheggiani et al. (2017) |  | 87.70 | - | 86.00 | - | 80.30 | 81.20 | - |
| Cai et al. (2018) |  | 89.60 | - | - | - | - | 84.30 | - |
| Kasai et al. (2019) |  | 90.20 | - | - | - | 83.00 | - | - |
| Mulcaire et al. (2018) | Monolingual | 86.54 | 77.31 | 84.87 | 66.71 | 75.98 | 81.26 | 77.22 |
|  | Polyglot | - | 79.08 | 84.82 | 69.97 | 76.45 | 81.50 | 78.36 |
| Base SRL + MUSE Embedding | Monolingual | 86.47 | 78.92 | 89.78 | 68.73 | 78.09 | 81.34 | 79.37 |
|  | Polyglot | - | 79.05 | 89.70 | 71.16 | 78.22 | 81.42 | 79.78 |
|  | CLAR | - | 79.26 | 89.77 | 72.50 | 78.83 | 81.85 | 80.44 |
| Base SRL + BERT Embedding | Monolingual | 88.14 | 80.50 | 90.78 | 74.39 | 80.98 | 84.71 | 82.27 |
|  | Polyglot | - | 81.87 | 90.67 | 74.45 | 81.88 | 84.79 | 82.73 |
|  | CLAR | - | 82.18 | 90.81 | 75.33 | 82.13 | 85.04 | 83.09 |
$(\mathtt{A0}, \mathtt{A1}, \dots)$, while the argument label set in Spanish contains (Arg0-agt, Arg0-pat, ...). Further details on the dataset are available in Appendix A.

Table 1: Semantic F1 scores (including sense) on the CoNLL 2009 Shared Task languages. The best reported performance is from (Kasai et al., 2019) for English and Spanish, (Cai et al., 2018) for Chinese, (Roth and Lapata, 2016) for German, (Zhao et al., 2009) for Catalan, and (Marcheggiani et al., 2017) for Czech. Underline shows the best performance among all methods.
| EN | +CA | +CS | +DE | +ES | +ZH |
| --- | --- | --- | --- | --- | --- |
| 86.47 | 87.12 | 86.70 | 87.09 | 86.68 | 86.90 |
Table 2: CLAR Semantic F1 scores (including sense) on the EN test set for each language pair.

# 4.2 Setup

We compare CLAR with several Monolingual and Polyglot methods. For monolingual baselines, we train separate SRL models for each language. For Polyglot and CLAR methods, we train the SRL model on a pair of languages. We use pre-trained multilingual embeddings to allow multilingual sharing between languages. We use Multilingual Unsupervised and Supervised Embeddings (MUSE) (Conneau et al., 2017) for all the languages except Chinese, where we use fastText aligned word embeddings (Joulin et al., 2018). We also use the pre-trained BERT multilingual cased embeddings (Devlin et al., 2019) in place of MUSE pre-trained embeddings to observe the effect of better multilingual embeddings. Details on model hyperparameters are presented in Appendix B. For all the experiments we fix the base model architecture. For the Polyglot training, we implement the simple polyglot sharing setup proposed by Mulcaire et al. (2018). Along with the results reported in Mulcaire et al. (2018), we also report the polyglot results with our model architecture, keeping the same word representation, to avoid any ambiguity in the comparison between Polyglot and CLAR.

# 4.3 Results

Comparison Against Polyglot and Monolingual Training: Table 1 summarizes the performance of CLAR and all baselines for SRL. As can be seen, for both MUSE and BERT embeddings, CLAR results in better SRL models than those obtained via monolingual and polyglot training for all target languages. The improvement is particularly noticeable for the languages with much fewer (< 1/3) training samples than those of EN (e.g. DE and ES). This result confirms that CLAR can effectively transfer knowledge from a high resource language (EN) to other languages with fewer resources.

Note that for CS, neither CLAR nor polyglot training shows performance gain over the baseline.
CLAR outperforms the polyglot baseline but remains on par with the monolingual baseline. We present further investigation on this in Section 4.5.

Comparison Against SoTA: With the powerful BERT multilingual embeddings, CLAR surpasses the best previously reported results on 3 out of 6 languages (Table 1). In fact, its average performance surpasses that of any previously reported single system. The strong performance of CLAR confirms its great promise for cross-lingual transfer.

Cross-Lingual Transfer from Target to Source Language: Interestingly, cross-lingual transfer
| Target+Source | Monolingual (P / R / F1) | Polyglot (P / R / F1) | CLAR (P / R / F1) |
| --- | --- | --- | --- |
| CA+EN | 78.47 / 75.44 / 76.92 | 77.59 / 76.68 / 77.13 | 78.35 / 76.54 / 77.44 |
| CS+EN | 80.36 / 76.00 / 78.12 | 80.32 / 75.69 / 77.93 | 79.91 / 76.50 / 78.17 |
| DE+EN | 69.64 / 64.43 / 66.94 | 71.66 / 69.96 / 70.80 | 73.10 / 71.54 / 72.31 |
| ES+EN | 78.22 / 75.63 / 76.90 | 78.37 / 75.83 / 77.07 | 79.77 / 76.22 / 77.95 |
| ZH+EN | 78.27 / 75.07 / 76.64 | 79.04 / 74.50 / 76.68 | 79.36 / 75.43 / 77.34 |
Table 3: CLAR performance (argument classification only) on the CoNLL 2009 Shared Task languages and comparison with polyglot and monolingual methods.

by CLAR also helps improve the performance of languages with abundant training data. As illustrated in Table 2, transferring knowledge using CLAR from other languages to EN leads to small but consistent improvements for EN.

# CLAR Performance on Arguments Alone:

Since CLAR mainly affects role labeling, we conduct further analysis of its performance on argument classification alone (i.e. predicate sense disambiguation is not evaluated). The results are summarized in Table 3 for Base SRL + MUSE embedding. One can observe that for all target languages, CLAR registers small but noticeable improvements (0.24% to 1.51%) for argument classification in comparison to both monolingual and polyglot methods. The consistent improvements confirm the effectiveness of CLAR in enabling better cross-lingual transfer.

# 4.4 What does CLAR do?

The results of our comparison studies clearly demonstrate that CLAR outperforms both baseline and polyglot training methods. In this subsection we first explain the intuition behind CLAR and then investigate how it regularizes the arguments.

Intuition: During Polyglot training we examine the last layer weights of the base SRL model and hypothesize that there exists a mapping between source and target language arguments. To evaluate this hypothesis, we plot the weights of the output layer using SVD, keeping the two directions corresponding to the top two largest eigenvalues learned by Polyglot (Row I) training in Fig. 3.

We draw a line between the arguments that are paired by Equation (3). As can be seen, the Euclidean distance between some of the paired arguments is similar. For instance, the Euclidean distance between the arguments A1 and ZH-A1 is similar to that between A2 and ZH-A2 in Fig. 3b. This pattern emerges from the training data for most of the target languages.
Further, we observe that the Euclidean distances among the common arguments for the source and target languages are also similar. For example, in Fig. 3b, the Euclidean distance between the source (EN) arguments A1 and A2 is similar to that between the target language arguments ZH-A1 and ZH-A2. This observation holds true for most of the arguments across the target languages (Fig. 3a - 3c).

The above observations confirm that there exist similar arguments in the source and target languages. The arguments in the target language lie on a manifold that is similar in structure, up to some translation and/or rotation, to the manifold on which the source language arguments lie.

Argument Matching and Regularization: Therefore, we first match the arguments with similar meanings in the target and the source language. We observe that almost all the matched argument pairs have similar meanings: some are syntactically visible (e.g. ES-argM-adv in ES and AM-ADV in EN), whereas others are semantically similar (e.g. ES-argM-fin and AM-PNC both meaning purpose). After obtaining the matched argument pairs, we regularize the output layer weights of the matched target arguments by forcing them to lie on the manifold of the matched source arguments, as in (4). A list of matched arguments for various language pairs is provided in Appendix C.

We plot the CLAR-learned weight vectors in Fig. 3 (Row II). We can observe that the lines drawn between paired target and source language arguments are uniform in length. Further, to quantify the length of these lines, we plot the Euclidean distance matrix among the matched source language arguments.
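This distance-matrix comparison can be sketched as follows. Both weight matrices here are random stand-ins; the target is built as a rotated and translated copy of the source, so pairwise distances are preserved and the correlation comes out as 1 by construction.

```python
import numpy as np

def distance_matrix(W):
    """Pairwise Euclidean distances between rows of W."""
    diff = W[:, None, :] - W[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def manifold_correlation(W_src, W_tgt):
    """Correlation between the off-diagonal entries of the source and
    target distance matrices, used to compare argument manifolds."""
    D_s, D_t = distance_matrix(W_src), distance_matrix(W_tgt)
    iu = np.triu_indices_from(D_s, k=1)
    return np.corrcoef(D_s[iu], D_t[iu])[0, 1]

rng = np.random.default_rng(0)
W_src = rng.normal(size=(8, 16))                 # stand-in source weights
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))   # random rotation
W_tgt = W_src @ Q + 0.5                          # rotated + translated copy
print(round(manifold_correlation(W_src, W_tgt), 4))  # 1.0
```

A correlation near 1 indicates the two label sets lie on manifolds of the same shape, up to rotation and translation.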
Among the target language arguments, we compute the correlation coefficients between the Euclidean distance matrices for EN-DE, EN-ZH, and EN-ES to be 0.9984, 0.9531, and 0.9352, respectively.

![](images/3a69a3ee370419e9eb244669479328af7d7fa96327cc61e0e37d06c513a702e7.jpg)

![](images/6dcd06e30a9db6749a7162c86c60996a7d588ba084526b146f1095ea0c440c86.jpg)

![](images/66164a17160452042bbbad84c28e0c5732198de9c6e60aa7de535e0c5e7bface.jpg)

![](images/2f73d05df692fbc2856e80479a78b8ab66e333b460e4b42d7b605035e1212ab3.jpg)
(a) Polyglot German / (d) CLAR German

![](images/157f55e84228f943b3737146770eb91f3988f44b69b94ce707347ac4553c5b38.jpg)
(b) Polyglot Chinese / (e) CLAR Chinese

![](images/95d9dc6a94d2f256d0105294278298a6a75670eff5ff577a35f4a326815310a7.jpg)
(c) Polyglot Spanish / (f) CLAR Spanish

Figure 3: A low dimensional representation of the output layer weights of the matched arguments in the source and target language, as determined by the polyglot-learned weight vectors in Row I and by (3) in Row II.

The fact that all these coefficients are close to one indicates that CLAR is indeed able to detect a manifold for the target language arguments similar to the one for the source language arguments. Our experimental results (Table 3) demonstrate that allowing the paired target language arguments to lie on the detected manifold improves the argument classification performance.

# 4.5 Ablation and Analysis

Effect of $K$: We also observe the impact of $K$ on the argument classification performance in Table 4. We find that regularizing all the arguments obtained from (3), while performing better than polyglot training, is not the best choice overall. We suspect that considering all the paired arguments adds noise to the system.
This is likely because some of the arguments in the target languages are language-specific and might be matched with an argument in the source language which has no close correspondence, for example, the Chinese argument ZH-C-C-A0 has no direct corresponding argument in English. + +Additionally, in some languages, arguments are labeled at a very granular level, and multiple arguments in these languages may correspond to a single argument in the source language. + +For example, multiple arguments in Czech frequently map to only one corresponding argument + +
| Target | 0 | 2 | K/2 | K |
| --- | --- | --- | --- | --- |
| CA | 77.13 | 77.13 | 77.44 | 77.20 |
| CS | 77.93 | 78.12 | 78.17 | 77.45 |
| DE | 70.80 | 72.08 | 72.31 | 71.20 |
| ES | 77.07 | 77.23 | 77.95 | 77.12 |
| ZH | 76.68 | 76.87 | 77.34 | 77.02 |
Table 4: Effect of $K$ on argument classification performance ($K = 0$ represents polyglot training)

in English.

Languages with Similar Linguistic Annotations: To further study the effectiveness of CLAR, we analyze the cross-lingual transfer between languages known to have similar linguistic annotations. We expect to observe better cross-lingual transfer between such language pairs. Specifically, we examine Spanish (ES) and Catalan (CA), both from the AnCora corpus (Taulé et al., 2008). We consider ES as the source language because it has more training samples than CA.

In Table 6 we show the paired arguments detected by CLAR along with the Euclidean distance between them. It can be seen that the Euclidean distances for all paired arguments are close to 1, confirming that CLAR can effectively match semantically similar arguments across languages.

The experimental results are summarized in Table 5. As expected, CLAR surpasses all prior results on CA. With the semantically similar source language ES, the SRL performance on CA is better than with the monolingual and polyglot training methods. Further, we observe a 0.87 point absolute gain in F1 score when the cross-lingual transfer occurs from a language with similar linguistic annotations (ES) rather than from a less similar language (EN), despite a much smaller training data size ($\leq 30\%$ of EN). This observation strengthens our hypothesis that representing semantically similar arguments across languages on similar manifolds improves SRL performance.

| Training | Method | P | R | F1 |
| --- | --- | --- | --- | --- |
| CA | Baseline | 78.47 | 75.44 | 76.92 |
| +ES | Polyglot | 79.10 | 75.90 | 77.47 |
|  | CLAR | 78.72 | 77.91 | 78.31 |
| +EN | Polyglot | 77.59 | 76.68 | 77.13 |
|  | CLAR | 78.35 | 76.54 | 77.44 |

Table 5: Catalan argument classification performance with Spanish as source language

| Target | Source | Pair distance |
| --- | --- | --- |
| CA-argM-temp | ES-argM-temp | 0.9302 |
| CA-argM-cau | ES-argM-cau | 0.9523 |
| CA-argM-atr | ES-argM-atr | 0.9542 |
| CA-arg2-ben | ES-arg2-ben | 0.9608 |
| CA-argM-fin | ES-argM-fin | 0.9657 |
| CA-arg1-null | ES-arg1-null | 0.9672 |
| CA-argM-mnr | ES-argM-mnr | 0.9709 |
| CA-argM-loc | ES-argM-loc | 0.9790 |
| CA-argM-adv | ES-argM-adv | 0.9810 |
| CA-arg0-cau | ES-arg0-cau | 0.9839 |

Table 6: Paired arguments in the source (ES) and the target language (CA)

![](images/67bffb9894d04e8ae3945ed98c856cb4ddcd0e328abe84449a58484ea912c3a4.jpg)
(a) ES

![](images/adff7b731501d5a576209ca53d9eee60cd6155749c915cfaeb4e8cb18177861f.jpg)
(b) CA

Figure 4: Euclidean distance between last layer weights for ES-CA cross-lingual transfer.

To visualize the space on which the common source and target language arguments lie, we plot the heatmap of the Euclidean distances between the last layer weights of the learned model in Fig. 4. We plot separate heatmaps over the paired arguments for the source language (Fig. 4a) and the target language (Fig. 4b). We observe that these two heatmaps are nearly identical in distribution (a very high correlation coefficient of 0.9996 and a low squared Frobenius norm of the difference of 1.793). This means that CLAR transforms the weight vectors of the corresponding target language arguments so that they lie on a manifold similar to the one on which the source language argument weights lie, up to translation and/or rotation. This is also evident from Table 6, where we report the distances between these argument pairs.

Why is Czech an Exception?
Though Czech (CS) has the most training samples in the CoNLL 2009 dataset, the cross-lingual transfer to and from CS is not very significant, as is apparent both from Table 3 and from previous work by Mulcaire et al. (2018). We observe that the arguments in CS are labeled at a significantly finer granularity than those of the other languages. For example, for temporal arguments alone, the argument set in Czech contains 9 different labels at the finest granularity, whereas each of the other languages has a single label for temporal arguments. Since CLAR performs one-to-one mapping to and from the source language, we suspect that CLAR encounters difficulty in choosing one among many fine-grained arguments to map to a coarse argument in English. While it is possible to extend CLAR with many-to-one mapping, based on our preliminary study (Appendix D), it may introduce additional noise. We plan to explore this direction in the future.

# 5 Related Work

Models for SRL largely fall into two categories: syntax-agnostic and syntax-aware. For a long time, syntax was considered a prerequisite for better SRL performance (Punyakanok et al., 2008; Gildea and Jurafsky, 2002). In the absence of syntactic information, these methods struggle to capture the discriminatory features and thus perform poorly.

Recently, end-to-end deep neural models have been shown to extract useful discriminatory features even without syntactic information (Zhou and Xu, 2015; Marcheggiani et al., 2017; Tan et al., 2018; He et al., 2018) and achieve state-of-the-art performance. However, some works (Roth and Lapata, 2016; He et al., 2017; Strubell et al., 2018) argue that, given a high-quality syntax parser, it is possible to further improve SRL performance. Along this line, Marcheggiani and Titov (2017) proposed an SRL model based on graph convolutional networks which incorporates syntactic information from a parser (Kiperwasser and Goldberg, 2016).
Further, Li et al. (2018) propose a more general framework to integrate syntax into SRL tasks. All these methods have been shown to perform well on rich resource languages.

Several recent attempts have been made to transfer knowledge from rich source languages to low resource languages for SRL tasks (Mulcaire et al., 2018, 2019), such that the knowledge transfer helps the model learn better feature representations for low resource languages. In other NLP tasks, such as named entity recognition (Xie et al., 2018) and syntactic dependency parsing (Ammar et al., 2016), this kind of knowledge transfer also appears to help low resource languages. Our experimental results further strengthen this claim and confirm that languages share knowledge at the semantic level as well.

An alternative line of work transfers cross-lingual knowledge to generate semantic labels for low resource languages by exploiting a monolingual SRL model and multilingual parallel data (Akbik et al., 2016; Akbik and Li, 2016), under the assumption that the sentences in parallel corpora are semantically equivalent. Similarly, Prazák and Konopík (2017) convert the monolingual dependency tree to a universal dependency tree for cross-lingual transfer. Though these methods do not require knowledge of semantic roles in the target language, they require the availability of massive parallel corpora. On the other hand, CLAR is able to detect the similarity among arguments between language pairs even with less data.

# 6 Conclusion

We introduce CLAR, a Cross-Lingual Argument Regularizer. It explores linguistic annotation similarity across languages and exploits this information during SRL model training to map the target language arguments as a deformation of the space on which the source language arguments lie.
We confirm the effectiveness of CLAR for SRL on the CoNLL 2009 dataset over monolingual and polyglot methods, without prior knowledge of cross-lingual alignments or parallel data. This paper demonstrates the promise of understanding and exploiting linguistic annotation similarity across languages during polyglot training. We plan to explore other ways of identifying and leveraging linguistic annotation similarity across languages.

# Acknowledgments

The authors would like to thank Ranit Aharonov, Mo Yu, and Tyler Baldwin for their comments on an early draft of this work. We also thank our anonymous reviewers for their constructive comments and feedback.

# References

Alan Akbik, Vishwajeet Kumar, and Yunyao Li. 2016. Towards semi-automatic generation of proposition banks for low-resource languages. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 993-998.
Alan Akbik and Yunyao Li. 2016. Polyglot: Multilingual semantic role labeling with unified labels. In Proceedings of ACL-2016 System Demonstrations, pages 1-6.
Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah A Smith. 2016. Many languages, one parser. Transactions of the Association for Computational Linguistics, 4:431-444.
Jiaxun Cai, Shexia He, Zuchao Li, and Hai Zhao. 2018. A full end-to-end semantic role labeler, syntactic-agnostic over syntactic-aware? In Proceedings of the 27th International Conference on Computational Linguistics, pages 2753-2765.
Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245-288.
Jan Hajič, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Antònia Martí, Lluís Màrquez, Adam Meyers, Joakim Nivre, Sebastian Padó, Jan Štěpánek, et al. 2009. The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task, pages 1-18. Association for Computational Linguistics.

Jan Hajič, Jarmila Panevová, Zdeňka Urešová, Alevtina Bémová, Veronika Kolářová, and Petr Pajas. 2003. PDT-VALLEX: Creating a large-coverage valency lexicon for treebank annotation. In Proceedings of the Second Workshop on Treebanks and Linguistic Theories, volume 9, pages 57-68.
Luheng He, Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2018. Jointly predicting predicates and arguments in neural semantic role labeling. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 364-369.
Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and what's next. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 473-483.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Hervé Jégou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.
Jungo Kasai, Dan Friedman, Robert Frank, Dragomir Radev, and Owen Rambow.
2019. Syntax-aware neural semantic role labeling with supertags. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 701-709. +Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. +Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313-327. +Zuchao Li, Shexia He, Jiaxun Cai, Zhuosheng Zhang, Hai Zhao, Gongshen Liu, Linlin Li, and Luo Si. 2018. A unified syntax-aware framework for semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2401-2411. +Umar Maqsud, Sebastian Arnold, Michael Hulfenhaus, and Alan Akbik. 2014. Nerdle: Topic-specific question answering using wikia seeds. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: System Demonstrations, pages 81-85. + +Diego Marcheggiani, Anton Frolov, and Ivan Titov. 2017. A simple and accurate syntax-agnostic neural model for dependency-based semantic role labeling. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 411-420. +Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1506-1515. +Phoebe Mulcaire, Jungo Kasai, and Noah A Smith. 2019. Polyglot contextual representations improve crosslingual transfer. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3912-3918. +Phoebe Mulcaire, Swabha Swayamdipta, and Noah A Smith. 
2018. Polyglot semantic role labeling. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 667-672.
Ondřej Pražák and Miloslav Konopík. 2017. Cross-lingual SRL based upon universal dependencies. In RANLP, pages 592-600.
Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2):257-287.
Michael Roth and Mirella Lapata. 2016. Neural semantic role labeling with dependency path embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1192-1202.
Chen Shi, Shujie Liu, Shuo Ren, Shi Feng, Mu Li, Ming Zhou, Xu Sun, and Houfeng Wang. 2016. Knowledge-based semantic embedding for machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2245-2254.
Rupesh K Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Training very deep networks. In Advances in Neural Information Processing Systems, pages 2377-2385.
Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5027-5038.
Zhixing Tan, Mingxuan Wang, Jun Xie, Yidong Chen, and Xiaodong Shi. 2018. Deep semantic role labeling with self-attention. In Thirty-Second AAAI Conference on Artificial Intelligence.

Mariona Taulé, M. Antònia Martí, and Marta Recasens. 2008. AnCora: Multilevel annotated corpora for Catalan and Spanish. In LREC 2008.

Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A Smith, and Jaime Carbonell. 2018. Neural cross-lingual named entity recognition with minimal resources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 369-379.
Wen-tau Yih, Matthew Richardson, Chris Meek, Ming-Wei Chang, and Jina Suh. 2016. The value of semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 201-206.

Hai Zhao, Wenliang Chen, Jun'ichi Kazama, Kiyotaka Uchimoto, and Kentaro Torisawa. 2009. Multilingual dependency learning: Exploiting rich features for tagging syntactic and semantic dependencies. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task, pages 61-66. Association for Computational Linguistics.

Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1127-1137.

# A Dataset Description

Table 7 describes the training data statistics for each language. In the dataset, all sentences for every language are marked with predicate-argument structures. The argument label set differs across languages.

# B Hyperparameters

In our experiments, we randomly initialize the word and lemma embeddings of dimension 100 each, the POS embedding of dimension 32, and the flag embedding of dimension 16. We use the same model parameters as mentioned in (Li et al., 2018): a 4-layer BiLSTM with 512-dimensional hidden units and 0.1 dropout rate for the sentence encoder. Our role labeler has 5 MLP highway layers with ReLU activations. We train the model with the Adam optimizer (Kingma and Ba, 2014) and minimize the final categorical cross-entropy objective. We train each model for 20 epochs and use early stopping with patience 5 on the target language development set.
For all the experiments, we repeat training with 3 different initializations and report the average F1 score along with precision and recall.

![](images/54c53cdf1015ed5f6d7116e0d73d245478435c76e21ac65f994185911ae84a79.jpg)

![](images/e25b63c2a18232009688181f7146a26c961d181440bb1ba3bd4a0a5eeee8fa79.jpg)

![](images/f588ab8dc13d828ae4dca91523b7377d28fafd4ed41cf641df7c1de4068f3ad0.jpg)
(a) EN

![](images/c759398f4dc423c2d3f031e75c4541cc74ef61c82383d8bc9e679a427d473a7a.jpg)
(b) DE

![](images/83253fdfbdeb5abb8bf2e350b049c7bcbebf3ddfcf810f866658eab89bc66350.jpg)
(c) EN / (e) EN

![](images/f8ed5ecfda716dd23ec547ba437006e89715f744e27f5c5fcde2c3e3119c9ee2.jpg)
(d) ZH / (f) ES

Figure 5: Euclidean distance between last layer weights for matched arguments. Row I: EN-DE, Row II: EN-ZH, Row III: EN-ES; column I: source language, column II: target language.

# C Paired Arguments

We present the list of matched arguments for source-target language pairs in Table 8. We observe that almost all the argument pairs have similar meanings: some are syntactically visible (e.g. ES-argM-adv in ES and AM-ADV in EN), whereas others are semantically similar (e.g. ES-argM-fin and AM-PNC both meaning purpose).

We also plot the Euclidean distance matrix among the matched source language arguments and among the target language arguments. In Fig. 5 we show the distance matrices for various language pairs. We compute the correlation coefficients between these matrices; all are close to 1, which shows that CLAR is indeed able to detect a manifold for the target language arguments similar to the one for the source language arguments.

# D CLAR Extension to Many-to-one Mapping

We suspect that CLAR has difficulty choosing one among many fine-grained arguments to map to a coarse argument in the source language. Here we perform a preliminary investigation of the many-to-one extension of CLAR. Since CS has
| Dataset | Word | POS | Lemma | Arg Labels | Pred Labels | # Predicate | # Arguments | train/valid/test/ood |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CA | 31,079 | 15 | 22,388 | 39 | 14 | 37,431 | 84,367 | 13K/1.7K/1.8K/- |
| CS | 75,572 | 15 | 35,310 | 62 | 116 | 414,237 | 365,255 | 38K/5.2K/4.2K/1.1K |
| DE | 67,548 | 57 | 48,217 | 10 | 28 | 17,400 | 34,276 | 36K/1.6K/1.7K/707 |
| EN | 30,479 | 49 | 23,727 | 53 | 21 | 179,014 | 393,699 | 39K/1.3K/2.4K/425 |
| ES | 37,908 | 15 | 24,157 | 43 | 13 | 43,824 | 99,054 | 14K/1.6K/1.7K/- |
| ZH | 40,351 | 38 | 40,351 | 37 | 10 | 102,813 | 231,869 | 22K/1.7K/2.5K/- |
+ +Table 7: Train data statistics for each language. Languages are coded with ISO 639-1 codes. + +
| ES (target) | EN (source) | ZH (target) | EN (source) | DE (target) | EN (source) |
| --- | --- | --- | --- | --- | --- |
| ES-argM-adv | AM-ADV | ZH-DIS | AM-DIS | DE-A0 | A0 |
| ES-argM-temp | AM-TMP | ZH-LOC | AM-LOC | DE-A4 | AM-TMP |
| ES-argM-fin | AM-PNC | ZH-C-A0 | AM-REC | DE-A1 | A1 |
| ES-argM-cau | AM-CAU | ZH-ADV | AM-ADV |  |  |
| ES-argL-null | AM-REC | ZH-A0 | A0 |  |  |
| ES-arg2-ext | C-AM-DIR | ZH-TMP | AM-TMP |  |  |
| ES-arg0-agt | A0 | ZH-MNR | AM-MNR |  |  |
| ES-arg1-pat | A1 | ZH-A2 | A2 |  |  |
| ES-argM-mnr | AM-MNR | ZH-A1 | A1 |  |  |
| ES-argM-loc | AM-LOC |  |  |  |  |
Table 8: Paired arguments in the source and the target language detected by the pair matching algorithm during CLAR training (column pairs: EN-ES, EN-ZH, EN-DE).

fine-grained labels and is a good candidate for analyzing many-to-one mapping, we allow many-to-one argument mapping from Czech to English by relaxing a constraint in the final optimization function, updating only this constraint:

$$
\sum_{j} T_{ij} \leq M, \quad i = 1, \dots, \hat{k}_{s}, \tag{5}
$$

while keeping all the other constraints intact. This modification allows at most $M$ arguments in CS to pair with a single argument in EN. Following the training procedure, we observe that CLAR is able to efficiently capture many-to-one mappings with minimal noise. In Table 9, we present the argument pairs matched by CLAR. Interestingly, CLAR detects most of the argument pairs correctly; for example, {TWHEN, THL, THO} in CS are mapped to AM-TMP in EN, as expected. However, there are a few pairs that are wrongly mapped; for instance, DIR3 in CS is mapped to A2 in EN. We find that such noisy pairs are difficult to avoid, as the Prague Dependency Treebank 2.0 (Hajič et al., 2003) (the source of the CS dataset) itself points out the borderline cases associated with each argument label in CS. For example, ACMP in CS has borderline
| CS | EN | CS | EN |
| --- | --- | --- | --- |
| PAT | A1 | MAT | A3 |
| ACT | A0 | BEN | A3 |
| APP | A2 | ACMP | AM-ADV |
| ADDR | A2 | CAUS | AM-ADV |
| DIR3 | A2 | COND | AM-ADV |
| TWHEN | AM-TMP | COMPL | AM-DIS |
| THL | AM-TMP | CPHR | C-A1 |
| THO | AM-TMP | EFF | AM-PNC |
| MANN | AM-MNR | AIM | AM-PNC |
| REG | AM-MNR | EXT | AM-EXT |
| MEANS | AM-MNR | DPHR | AM-DIR |
| LOC | AM-LOC | CRIT | R-AM-TMP |
| RSTR | AM-LOC | TTILL | R-AM-TMP |
| ID | AM-LOC | TSIN | R-AM-TMP |
| COMPL2 | AM-LOC |  |  |
| ORIG | AM-LOC |  |  |
+ +Table 9: Paired arguments in the source (EN) and the target language (CS) + +
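The capacity constraint in (5), at most $M$ target arguments assigned to each source argument, can be illustrated with a small greedy matching sketch. The similarity scores below are toy values, and this greedy procedure is only an illustration; the paper's actual optimization over $T$ may differ.

```python
def capacity_matching(sim, M):
    """Greedy sketch of many-to-one matching: each target argument j is
    assigned to a source argument i, with at most M targets per source
    (the constraint sum_j T_ij <= M in Eq. (5)).  sim[i][j] is a
    similarity score between source i and target j."""
    n_src, n_tgt = len(sim), len(sim[0])
    pairs = sorted(
        ((sim[i][j], i, j) for i in range(n_src) for j in range(n_tgt)),
        reverse=True)
    load = [0] * n_src          # how many targets each source already has
    assigned = {}               # target index -> source index
    for score, i, j in pairs:
        if j not in assigned and load[i] < M:
            assigned[j] = i
            load[i] += 1
    return assigned

# Toy similarities: targets 0 and 1 both prefer source 0.
sim = [[0.9, 0.8, 0.1],
       [0.2, 0.3, 0.7]]
print(capacity_matching(sim, M=2))  # {0: 0, 1: 0, 2: 1}
print(capacity_matching(sim, M=1))  # {0: 0, 2: 1}  (target 1 left unmatched)
```

With $M=1$ the mapping degenerates to the one-to-one case, and targets beyond each source's capacity stay unmatched.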
| CLAR Mapping | P | R | F1 |
| --- | --- | --- | --- |
| one-one | 79.91 | 76.50 | 78.17 |
| many-one | 79.72 | 76.05 | 77.84 |
| many-one (combined) | 82.57 | 75.40 | 78.82 |
Table 10: Czech argument classification performance with many-to-one argument mapping.

cases with both COND and CAUS; therefore, they are mapped together to a single argument in EN.

Although CLAR with many-to-one mapping is able to match multiple target language argument labels to a single source language argument label, it actually leads to a performance drop compared to one-to-one mapping (Table 10). This drop in performance is likely because, while learning many-to-one mappings, CLAR loses its discriminatory power among the multiple arguments that are mapped to a single label. To validate this phenomenon, at test time we combine all the argument labels mapped to a single label, both in the target and in the prediction set; that is, we combine {TWHEN, THL, THO} into a new label (say TWHEN) and observe a 1ppt increase in $F_{1}$ on these combined labels. However, how to effectively leverage CLAR with many-to-one mapping for SRL model training remains an open question and requires further exploration in the future.
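The test-time label combination described above can be sketched as follows. The labels are toy values, and the F1 here is a simplified accuracy-style score (one label per argument, so precision = recall), not the shared-task scorer.

```python
def merge_labels(labels, groups):
    """Collapse each group of fine-grained labels to one representative,
    e.g. {"TWHEN", "THL", "THO"} -> "TWHEN"."""
    lookup = {lab: rep for rep, members in groups.items() for lab in members}
    return [lookup.get(lab, lab) for lab in labels]

def f1(gold, pred):
    """Simplified F1: with one label per argument and equal-length
    sequences, precision = recall = accuracy."""
    correct = sum(g == p for g, p in zip(gold, pred))
    p = r = correct / len(pred)
    return 2 * p * r / (p + r) if p + r else 0.0

groups = {"TWHEN": {"TWHEN", "THL", "THO"}}
gold = ["TWHEN", "THL", "ACT", "PAT"]
pred = ["THL", "THO", "ACT", "PAT"]
print(f1(gold, pred))                                               # 0.5
print(f1(merge_labels(gold, groups), merge_labels(pred, groups)))   # 1.0
```

Merging the fine-grained temporal labels turns confusions among them into matches, which is the mechanism behind the observed F1 gain on combined labels.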
\ No newline at end of file diff --git a/claracrosslingualargumentregularizerforsemanticrolelabeling/images.zip b/claracrosslingualargumentregularizerforsemanticrolelabeling/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..48134a5ca94a0586df29b4d4be5f2458be0bbc18 --- /dev/null +++ b/claracrosslingualargumentregularizerforsemanticrolelabeling/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:196ff7efaefcbf14ae1fd70cc816ce66e3083cb8e4b50ee61d1554ed0d639709 +size 751145 diff --git a/claracrosslingualargumentregularizerforsemanticrolelabeling/layout.json b/claracrosslingualargumentregularizerforsemanticrolelabeling/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..fb3c3c98a1497e6c9a7584d59f12cb2d1f61e718 --- /dev/null +++ b/claracrosslingualargumentregularizerforsemanticrolelabeling/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:42366b4174d4959861a90cc0c17351c43f83a12c0c53ebbefe3fd567d855ff33 +size 404287 diff --git a/codebertapretrainedmodelforprogrammingandnaturallanguages/06ba82fa-8c1f-4383-a4c0-ba852abcd2c5_content_list.json b/codebertapretrainedmodelforprogrammingandnaturallanguages/06ba82fa-8c1f-4383-a4c0-ba852abcd2c5_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..dc568cd255ecccd5f13068f8f563413e4ec850ed --- /dev/null +++ b/codebertapretrainedmodelforprogrammingandnaturallanguages/06ba82fa-8c1f-4383-a4c0-ba852abcd2c5_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2a758a944c258fda056d3a351ed279855e95feab7a768eb6ef928ae17925d80 +size 88294 diff --git a/codebertapretrainedmodelforprogrammingandnaturallanguages/06ba82fa-8c1f-4383-a4c0-ba852abcd2c5_model.json b/codebertapretrainedmodelforprogrammingandnaturallanguages/06ba82fa-8c1f-4383-a4c0-ba852abcd2c5_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..732a8abc96ad8e8938b4bc84ef3aa40e95e16fa1 --- /dev/null +++ b/codebertapretrainedmodelforprogrammingandnaturallanguages/06ba82fa-8c1f-4383-a4c0-ba852abcd2c5_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d54ce787e0348657e510bb8bcec1f6bbd4522f92f091834d9cd9d4c8d586e4e6 +size 101320 diff --git a/codebertapretrainedmodelforprogrammingandnaturallanguages/06ba82fa-8c1f-4383-a4c0-ba852abcd2c5_origin.pdf b/codebertapretrainedmodelforprogrammingandnaturallanguages/06ba82fa-8c1f-4383-a4c0-ba852abcd2c5_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ea9408a4e502ee6d8b0f8ba098081dd9890ab89a --- /dev/null +++ b/codebertapretrainedmodelforprogrammingandnaturallanguages/06ba82fa-8c1f-4383-a4c0-ba852abcd2c5_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b786b0883f6aa3bf5d262133c6bae6e2230182598ebe71538f8cb63da1c89e67 +size 1190743 diff --git a/codebertapretrainedmodelforprogrammingandnaturallanguages/full.md b/codebertapretrainedmodelforprogrammingandnaturallanguages/full.md new file mode 100644 index 0000000000000000000000000000000000000000..f9da9521080140159670dfb2d5ce6af33d84df19 --- /dev/null +++ b/codebertapretrainedmodelforprogrammingandnaturallanguages/full.md @@ -0,0 +1,422 @@ +# CodeBERT: A Pre-Trained Model for Programming and Natural Languages + +Zhangyin Feng $^{1*}$ , Daya Guo $^{2*}$ , Duyu Tang $^{3}$ , Nan Duan $^{3}$ , Xiaocheng Feng $^{1}$ , Ming Gong $^{4}$ , Linjun Shou $^{4}$ , Bing Qin $^{1}$ , Ting Liu $^{1}$ , Daxin Jiang $^{4}$ , Ming Zhou $^{3}$ + +1 Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology, China + +2 The School of Data and Computer Science, Sun Yat-sen University, China + +3 Microsoft Research Asia, Beijing, China + +4 Microsoft Search Technology Center Asia, Beijing, China + +{zyfeng,xcfeng,qinb,tliu}@ir.hit.edu.cn + +guody5@mail2.sysu.edu.cn + 
+{dutang,nanduan,migon,lisho,djiang,mingzhou}@microsoft.com + +# Abstract + +We present CodeBERT, a bimodal pre-trained model for programming language (PL) and natural language (NL). CodeBERT learns general-purpose representations that support downstream NL-PL applications such as natural language code search, code documentation generation, etc. We develop CodeBERT with Transformer-based neural architecture, and train it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators. This enables us to utilize both "bimodal" data of NL-PL pairs and "unimodal" data, where the former provides input tokens for model training while the latter helps to learn better generators. We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters. Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation. Furthermore, to investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and evaluate in a zero-shot setting where parameters of pre-trained models are fixed. Results show that CodeBERT performs better than previous pre-trained models on NL-PL probing.1 + +# 1 Introduction + +Large pre-trained models such as ELMo (Peters et al., 2018), GPT (Radford et al., 2018), BERT (Devlin et al., 2018), XLNet (Yang et al., 2019) + +and RoBERTa (Liu et al., 2019) have dramatically improved the state-of-the-art on a variety of natural language processing (NLP) tasks. These pre-trained models learn effective contextual representations from massive unlabeled text optimized by self-supervised objectives, such as masked language modeling, which predicts the original masked word from an artificially masked input sequence. 
The success of pre-trained models in NLP also drives a surge of multi-modal pre-trained models, such as ViLBERT (Lu et al., 2019) for language-image and VideoBERT (Sun et al., 2019) for language-video, which are learned from bimodal data such as language-image pairs with bimodal self-supervised objectives. + +In this work, we present CodeBERT, a bimodal pre-trained model for natural language (NL) and programming language (PL) like Python, Java, JavaScript, etc. CodeBERT captures the semantic connection between natural language and programming language, and produces general-purpose representations that can broadly support NL-PL understanding tasks (e.g. natural language code search) and generation tasks (e.g. code documentation generation). It is developed with the multilayer Transformer (Vaswani et al., 2017), which is adopted in a majority of large pre-trained models. In order to make use of both bimodal instances of NL-PL pairs and large amounts of available unimodal codes, we train CodeBERT with a hybrid objective function, including standard masked language modeling (Devlin et al., 2018) and replaced token detection (Clark et al., 2020), where unimodal codes help to learn better generators for producing better alternative tokens for the latter objective. + +We train CodeBERT from GitHub code repositories in 6 programming languages, where bimodal datapoints are codes that pair with function-level natural language documentation (Husain et al., 2019). Training is conducted in a setting similar to that of multilingual BERT (Pires et al., 2019), in which case one pre-trained model is learned for 6 programming languages with no explicit markers used to denote the input programming language. We evaluate CodeBERT on two downstream NL-PL tasks, including natural language code search and code documentation generation. Results show that fine-tuning the parameters of CodeBERT achieves state-of-the-art performance on both tasks.
To further investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and test CodeBERT in a zero-shot scenario, i.e. without fine-tuning the parameters of CodeBERT. We find that CodeBERT consistently outperforms RoBERTa, a purely natural language-based pre-trained model. The contributions of this work are as follows: + +- CodeBERT is the first large NL-PL pretrained model for multiple programming languages. +- Empirical results show that CodeBERT is effective in both code search and code-to-text generation tasks. +- We further create a dataset, which is the first one for investigating the probing ability of code-based pre-trained models. + +# 2 Background + +# 2.1 Pre-Trained Models in NLP + +Large pre-trained models (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019; Raffel et al., 2019) have brought dramatic empirical improvements on almost every NLP task in the past few years. Successful approaches train deep neural networks on large-scale plain texts with self-supervised learning objectives. One of the most representative neural architectures is the Transformer (Vaswani et al., 2017), which is also the one used in this work. It contains multiple self-attention layers, and can be conventionally learned with gradient descent in an end-to-end manner as every component is differentiable. The terminology "self-supervised" means that supervisions used for pre-training are automatically collected from raw data without manual annotation. Dominant learning objectives are language modeling and its variations. For example, in GPT (Radford et al., 2018), the learning objective is language modeling, namely predicting the next word $w_{k}$ given the preceding context words $\{w_{1}, w_{2}, \dots, w_{k-1}\}$ .
As the ultimate goal of pretraining is not to train a good language model, it is desirable to consider both preceding and following contexts to learn better general-purpose contextual representations. This leads us to the masked language modeling objective used in BERT (Devlin et al., 2018), which learns to predict the masked words of a randomly masked word sequence given surrounding contexts. Masked language modeling is also used as one of the two learning objectives for training CodeBERT. + +# 2.2 Multi-Modal Pre-Trained Models + +The remarkable success of pre-trained models in NLP has driven the development of multi-modal pre-trained models that learn implicit alignments between inputs of different modalities. These models are typically learned from bimodal data, such as pairs of language-image or pairs of language-video. For example, ViLBERT (Lu et al., 2019) learns from image caption data, where the model learns by reconstructing categories of masked image regions or masked words given the observed inputs, and meanwhile predicting whether the caption describes the image content or not. Similarly, VideoBERT (Sun et al., 2019) learns from language-video data and is trained by video and text masked token prediction. Our work belongs to this line of research as we regard NL and PL as different modalities. Our method differs from previous works in that the fuel for model training includes not only bimodal data of NL-PL pairs, but also larger amounts of unimodal data such as codes without paired documentation. + +A concurrent work (Kanade et al., 2019) uses masked language modeling and next sentence prediction as the objectives to train a BERT model on Python source code, where a sentence is a logical code line as defined by the Python standard.
In terms of the pre-training process, CodeBERT differs from their work in that (1) CodeBERT is trained in a cross-modal style and leverages both bimodal NL-PL data and unimodal PL/NL data, (2) CodeBERT is pre-trained over six programming languages, and (3) CodeBERT is trained with a new learning objective based on replaced token detection. + +# 3 CodeBERT + +We describe the details of CodeBERT in this section, including the model architecture, the input and output representations, the objectives and data used for training CodeBERT, and how to fine-tune CodeBERT when it is applied to downstream tasks. + +# 3.1 Model Architecture + +We follow BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019), and use a multi-layer bidirectional Transformer (Vaswani et al., 2017) as the model architecture of CodeBERT. We will not review the ubiquitous Transformer architecture in detail. We develop CodeBERT by using exactly the same model architecture as RoBERTa-base. The total number of model parameters is 125M. + +# 3.2 Input/Output Representations + +In the pre-training phase, we set the input as the concatenation of two segments with a special separator token, namely [CLS], $w_{1}, w_{2}, \ldots, w_{n}$ , [SEP], $c_{1}, c_{2}, \ldots, c_{m}$ , [EOS]. One segment is natural language text, and the other is code from a certain programming language. [CLS] is a special token in front of the two segments, whose final hidden representation is considered as the aggregated sequence representation for classification or ranking. Following the standard way of processing text in Transformers, we regard a natural language text as a sequence of words, and split it into WordPiece tokens (Wu et al., 2016). We regard a piece of code as a sequence of tokens. + +The output of CodeBERT includes (1) the contextual vector representation of each token, for both natural language and code, and (2) the representation of [CLS], which works as the aggregated sequence representation.
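As a concrete illustration of this input layout, a minimal sketch (plain token strings; the WordPiece tokenizer itself is elided, and the example tokens are hypothetical):

```python
def build_pretraining_input(nl_tokens, code_tokens):
    """Concatenate an NL segment and a PL segment as
    [CLS], w1..wn, [SEP], c1..cm, [EOS] (Section 3.2)."""
    return ["[CLS]"] + list(nl_tokens) + ["[SEP]"] + list(code_tokens) + ["[EOS]"]

# Hypothetical example: a docstring fragment paired with a code fragment.
x = build_pretraining_input(
    ["return", "the", "maximum", "value"],
    ["def", "max", "(", "a", ",", "b", ")", ":"],
)
print(x[0], x[5], x[-1])  # [CLS] [SEP] [EOS]
```

The [CLS] position of such a sequence is what later carries the aggregated representation used for classification and ranking.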
+ +# 3.3 Pre-Training Data + +We train CodeBERT with both bimodal data, which refers to parallel data of natural language-code pairs, and unimodal data, which stands for codes without paired natural language texts and natural language without paired codes. + +We use datapoints from GitHub repositories, where each bimodal datapoint is an individual function with paired documentation, and each unimodal code is a function without paired documentation. Specifically, we use a recent large dataset
| TRAINING DATA | BIMODAL DATA | UNIMODAL CODES |
| --- | --- | --- |
| GO | 319,256 | 726,768 |
| JAVA | 500,754 | 1,569,889 |
| JAVASCRIPT | 143,252 | 1,857,835 |
| PHP | 662,907 | 977,821 |
| PYTHON | 458,219 | 1,156,085 |
| RUBY | 52,905 | 164,048 |
| ALL | 2,137,293 | 6,452,446 |
+ +Table 1: Statistics of the dataset used for training CodeBERT. + +provided by Husain et al. (2019), which includes 2.1M bimodal datapoints and 6.4M unimodal codes across six programming languages (Python, Java, JavaScript, PHP, Ruby, and Go). Data statistics are shown in Table 1. $^{2}$ + +The data comes from publicly available open-source non-fork GitHub repositories and is filtered with a set of constraints and rules. For example, (1) each project should be used by at least one other project, (2) each documentation is truncated to the first paragraph, (3) documentations shorter than three tokens are removed, (4) functions shorter than three lines are removed, and (5) function names with substring "test" are removed. An example of the data is given in Figure 1. $^{3}$ + +![](images/d6672709466d6c723b520de7f9e9fad498bf9d9f20781cd0612e2c2adb8fb8bf.jpg) +Figure 1: An example of the NL-PL pair, where NL is the first paragraph (filled in red) from the documentation (dashed line in black) of a function. + +# 3.4 Pre-Training CodeBERT + +We describe the two objectives used for training CodeBERT here. The first objective is masked language modeling (MLM), which has proven effective in the literature (Devlin et al., 2018; Liu et al., 2019; Sun et al., 2019). We apply masked language modeling on bimodal data of NL-PL pairs. + +![](images/036d7326f5b5358cc1617985da803434a2953df7a82195aee3fd3eef07d5cc4e.jpg) +Figure 2: An illustration of the replaced token detection objective. Both NL and code generators are language models, which generate plausible tokens for masked positions based on surrounding contexts. The NL-Code discriminator is the targeted pre-trained model, which is trained via detecting plausible alternative tokens sampled from the NL and PL generators. The NL-Code discriminator is used for producing general-purpose representations in the fine-tuning step. Both NL and code generators are thrown out in the fine-tuning step.
The second objective is replaced token detection (RTD), which further uses a large amount of unimodal data, such as codes without paired natural language texts. Detailed hyper-parameters for model pre-training are given in Appendix B.1. + +Objective #1: Masked Language Modeling (MLM) Given a datapoint of an NL-PL pair $(x = \{w, c\})$ as input, where $w$ is a sequence of NL words and $c$ is a sequence of PL tokens, we first select a random set of positions for both NL and PL to mask out (i.e. $m^w$ and $m^c$ , respectively), and then replace the selected positions with a special $[MASK]$ token. Following Devlin et al. (2018), $15\%$ of the tokens from $x$ are masked out. + +$$
m_i^{w} \sim \operatorname{unif}\{1, |\boldsymbol{w}|\} \quad \text{for } i = 1 \text{ to } |\boldsymbol{w}| \tag{1}
$$

$$
m_i^{c} \sim \operatorname{unif}\{1, |\boldsymbol{c}|\} \quad \text{for } i = 1 \text{ to } |\boldsymbol{c}| \tag{2}
$$

$$
\boldsymbol{w}^{\mathrm{masked}} = \operatorname{REPLACE}(\boldsymbol{w}, \boldsymbol{m}^{\boldsymbol{w}}, [MASK]) \tag{3}
$$

$$
\boldsymbol{c}^{\mathrm{masked}} = \operatorname{REPLACE}(\boldsymbol{c}, \boldsymbol{m}^{\boldsymbol{c}}, [MASK]) \tag{4}
$$

$$
\boldsymbol{x} = \boldsymbol{w} + \boldsymbol{c} \tag{5}
$$

The MLM objective is to predict the original tokens which are masked out, formulated as follows, where $p^{D_1}$ is the discriminator which predicts a token from a large vocabulary. + +$$
\mathcal{L}_{\mathrm{MLM}}(\theta) = \sum_{i \in \boldsymbol{m}^{\boldsymbol{w}} \cup \boldsymbol{m}^{\boldsymbol{c}}} -\log p^{D_{1}}\big(x_i \mid \boldsymbol{w}^{\mathrm{masked}}, \boldsymbol{c}^{\mathrm{masked}}\big) \tag{6}
$$

Objective #2: Replaced Token Detection (RTD) In the MLM objective, only bimodal data (i.e. datapoints of NL-PL pairs) is used for training.
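Before detailing RTD, note that the masking of Eqs. (1)-(4) above amounts to sampling positions and overwriting them; a minimal sketch (helper names are ours; the 15% rate as above):

```python
import random

MASK = "[MASK]"

def sample_mask_positions(tokens, rate=0.15, rng=random):
    """Pick a random set of positions to mask (Eqs. 1-2), ~15% of tokens."""
    k = max(1, round(len(tokens) * rate))
    return sorted(rng.sample(range(len(tokens)), k))

def replace_at(tokens, positions, value):
    """The REPLACE operator of Eqs. 3-4: overwrite the selected positions."""
    out = list(tokens)
    for i in positions:
        out[i] = value
    return out

rng = random.Random(0)
w = ["return", "the", "maximum", "of", "two", "numbers"]
m_w = sample_mask_positions(w, rng=rng)
w_masked = replace_at(w, m_w, MASK)  # one of the six words becomes [MASK]
```

The same two helpers apply to the PL side, and Eq. (5) is plain concatenation of the two masked segments.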
Here we present the objective of replaced token detection. The RTD objective (Clark et al., 2020) is originally developed for efficiently learning a pre-trained model for natural language. We adapt it to our scenario, with the advantage of using both bimodal and unimodal data for training. Specifically, there are two data generators here, an NL generator $p^{G_w}$ and a PL generator $p^{G_c}$ , both for generating plausible alternatives for the set of randomly masked positions. + +$$
\hat{w}_i \sim p^{G_w}\big(w_i \mid \boldsymbol{w}^{\mathrm{masked}}\big) \quad \text{for } i \in \boldsymbol{m}^{\boldsymbol{w}} \tag{7}
$$

$$
\hat{c}_i \sim p^{G_c}\big(c_i \mid \boldsymbol{c}^{\mathrm{masked}}\big) \quad \text{for } i \in \boldsymbol{m}^{\boldsymbol{c}} \tag{8}
$$

$$
\boldsymbol{w}^{\mathrm{corrupt}} = \operatorname{REPLACE}(\boldsymbol{w}, \boldsymbol{m}^{\boldsymbol{w}}, \hat{\boldsymbol{w}}) \tag{9}
$$

$$
\boldsymbol{c}^{\mathrm{corrupt}} = \operatorname{REPLACE}(\boldsymbol{c}, \boldsymbol{m}^{\boldsymbol{c}}, \hat{\boldsymbol{c}}) \tag{10}
$$

$$
\boldsymbol{x}^{\mathrm{corrupt}} = \boldsymbol{w}^{\mathrm{corrupt}} + \boldsymbol{c}^{\mathrm{corrupt}} \tag{11}
$$

The discriminator is trained to determine whether a word is the original one or not, which is a binary classification problem. It is worth noting that the RTD objective is applied to every position in the input, and it differs from GAN (generative adversarial network) in that if a generator happens to produce the correct token, the label of that token is "real" instead of "fake" (Clark et al., 2020). The loss function of RTD with regard to the discriminator parameterized by $\theta$ is given below, where $\delta(i)$ is an indicator function and $p^{D_2}$ is the discriminator that predicts the probability of the $i$ -th word being original.
+ +$$
\mathcal{L}_{\mathrm{RTD}}(\theta) = \sum_{i=1}^{|\boldsymbol{w}|+|\boldsymbol{c}|} -\Big(\delta(i) \log p^{D_{2}}\big(\boldsymbol{x}^{\mathrm{corrupt}}, i\big) + \big(1 - \delta(i)\big) \log\big(1 - p^{D_{2}}\big(\boldsymbol{x}^{\mathrm{corrupt}}, i\big)\big)\Big) \tag{12}
$$

$$
\delta(i) = \begin{cases} 1, & \text{if } x_i^{\mathrm{corrupt}} = x_i. \\ 0, & \text{otherwise.} \end{cases} \tag{13}
$$

There are many different ways to implement the generators. In this work, we implement two efficient n-gram language models (Jurafsky, 2000) with bidirectional contexts, one for NL and one for PL, and learn them from the corresponding unimodal datapoints, respectively. The approach is easily generalized to learn bimodal generators or to use more complicated generators such as Transformer-based neural architectures learned in a joint manner. We leave these to future work. The PL training data is the unimodal codes as shown in Table 1, and the NL training data comes from the documentations of the bimodal data. One could easily extend these two training datasets to a larger amount. The final loss function is given below. + +$$
\min_{\theta} \mathcal{L}_{\mathrm{MLM}}(\theta) + \mathcal{L}_{\mathrm{RTD}}(\theta) \tag{14}
$$

# 3.5 Fine-Tuning CodeBERT + +We have different settings to use CodeBERT in downstream NL-PL tasks. For example, in natural language code search, we feed the input in the same way as in the pre-training phase and use the representation of [CLS] to measure the semantic relevance between code and natural language query, while in code-to-text generation, we use an encoder-decoder framework and initialize the encoder of a generative model with CodeBERT. Details are given in the experiment section.
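Before moving to experiments, the RTD bookkeeping of Eqs. (9)-(13) can be made concrete with a small sketch; the helper names are ours, and a fixed proposal stands in for the n-gram generators:

```python
import math

def corrupt(tokens, positions, proposals):
    """Eqs. (9)-(10): substitute generator proposals at the masked positions."""
    out = list(tokens)
    for i, tok in zip(positions, proposals):
        out[i] = tok
    return out

def rtd_labels(original, corrupted):
    """Eq. (13): delta(i) = 1 if position i still holds the original token
    ("real"), 0 if it was replaced; a lucky generator hit still counts as real."""
    return [1 if o == c else 0 for o, c in zip(original, corrupted)]

def rtd_loss(labels, p_original):
    """Binary cross-entropy over all positions; p_original[i] is the
    discriminator's probability that position i holds the original token."""
    return -sum(d * math.log(p) + (1 - d) * math.log(1 - p)
                for d, p in zip(labels, p_original))

w = ["return", "the", "maximum", "value"]
w_corrupt = corrupt(w, [2], ["minimum"])  # a generator proposes "minimum"
labels = rtd_labels(w, w_corrupt)         # [1, 1, 0, 1]
```

Note that, per Eq. (13), a position only gets label 0 when the sampled alternative actually differs from the original token.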
+ +# 4 Experiment + +We present empirical results in this section to verify the effectiveness of CodeBERT. We first describe the use of CodeBERT in natural language code search (§4.1), in a way that model parameters of CodeBERT are fine-tuned. After that, we present the NL-PL probing task (§4.2), and evaluate CodeBERT in a zero-shot setting where the parameters of CodeBERT are fixed. Finally, we evaluate CodeBERT on a generation problem, i.e. code documentation generation (§4.3), and further evaluate on a programming language which is never seen in the training phase (§4.4). + +# 4.1 Natural Language Code Search + +Given a natural language query as the input, the objective of code search is to find the most semantically related code from a collection of codes. We conduct experiments on the CodeSearchNet corpus (Husain et al., 2019). We follow the official evaluation metric and calculate the Mean Reciprocal Rank (MRR) for each pair of test data $(c, w)$ over a fixed set of 999 distractor codes. We further calculate the macro-average MRR over all languages as an overall evaluation metric. It is helpful to note that this metric differs from the AVG metric in the original paper, where the answer is retrieved from candidates from all six languages. We fine-tune a language-specific model for each programming language. We train each model with a binary classification loss function, where a softmax layer is connected to the representation of [CLS]. Both training and validation datasets are created in a way that positive and negative samples are balanced. Negative samples consist of a balanced number of instances with randomly replaced NL (i.e. $(c, \hat{w})$ ) and PL (i.e. $(\hat{c}, w)$ ). Detailed hyper-parameters for model fine-tuning are given in Appendix B.2. + +Model Comparisons Table 2 shows the results of different approaches on the CodeSearchNet corpus. The first four rows are reported by Husain et al.
(2019), which are joint embeddings of NL and PL (Gu et al., 2018; Mitra et al., 2018). NBoW represents neural bag-of-words. CNN, BIRNN and SELFATT stand for 1D convolutional neural network (Kim, 2014), bidirectional GRU-based recurrent neural network (Cho et al., 2014), and multi-head attention (Vaswani et al., 2017), respectively. + +We report the remaining numbers in Table 2. We train all these pre-trained models by regarding codes as a sequence of tokens. We also continuously train RoBERTa only on codes from CodeSearchNet with masked language modeling. Results show that CodeBERT consistently performs
| MODEL | RUBY | JAVASCRIPT | GO | PYTHON | JAVA | PHP | MA-AVG |
| --- | --- | --- | --- | --- | --- | --- | --- |
| NBOW | 0.4285 | 0.4607 | 0.6409 | 0.5809 | 0.5140 | 0.4835 | 0.5181 |
| CNN | 0.2450 | 0.3523 | 0.6274 | 0.5708 | 0.5270 | 0.5294 | 0.4753 |
| BIRNN | 0.0835 | 0.1530 | 0.4524 | 0.3213 | 0.2865 | 0.2512 | 0.2580 |
| SELFATT | 0.3651 | 0.4506 | 0.6809 | 0.6922 | 0.5866 | 0.6011 | 0.5628 |
| ROBERTA | 0.6245 | 0.6060 | 0.8204 | 0.8087 | 0.6659 | 0.6576 | 0.6972 |
| PT W/ CODE ONLY (INIT=S) | 0.5712 | 0.5557 | 0.7929 | 0.7855 | 0.6567 | 0.6172 | 0.6632 |
| PT W/ CODE ONLY (INIT=R) | 0.6612 | 0.6402 | 0.8191 | 0.8438 | 0.7213 | 0.6706 | 0.7260 |
| CODEBERT (MLM, INIT=S) | 0.5695 | 0.6029 | 0.8304 | 0.8261 | 0.7142 | 0.6556 | 0.6998 |
| CODEBERT (MLM, INIT=R) | 0.6898 | 0.6997 | 0.8383 | 0.8647 | 0.7476 | 0.6893 | 0.7549 |
| CODEBERT (RTD, INIT=R) | 0.6414 | 0.6512 | 0.8285 | 0.8263 | 0.7150 | 0.6774 | 0.7233 |
| CODEBERT (MLM+RTD, INIT=R) | 0.6926 | 0.7059 | 0.8400 | 0.8685 | 0.7484 | 0.7062 | 0.7603 |
+ +Table 2: Results on natural language code retrieval. Baselines include four joint embeddings (first group) of NL and PL, RoBERTa, and RoBERTa which is continuously trained with masked language modeling on codes only (second group). PT stands for pre-training. We train CodeBERT (third group) with different settings, including using different initialization (from scratch (INIT=S) or initialized with the parameters of RoBERTa (INIT=R)) and using different learning objectives (MLM, RTD, or the combination of both). + +better than RoBERTa and the model pre-trained with code only. CodeBERT (MLM) learned from scratch performs better than RoBERTa. Unsurprisingly, initializing CodeBERT with RoBERTa improves the performance. $^{6}$ + +# 4.2 NL-PL Probing + +In the previous subsection, we show the empirical effectiveness of CodeBERT in a setting where the parameters of CodeBERT are fine-tuned in downstream tasks. In this subsection, we further investigate what type of knowledge is learned in CodeBERT without modifying the parameters. + +Task Formulation and Data Construction Following the probing experiments in NLP (Petroni et al., 2019; Talmor et al., 2019), we study NL-PL probing here. Since there is no existing work towards this goal, we formulate the problem of NL-PL probing and create the dataset by ourselves. Given an NL-PL pair $(c, w)$ , the goal of NL-PL probing is to test the model's ability to correctly predict/recover the masked token of interest (either a code token $c_i$ or word token $w_j$ ) among distractors. There are two major types of distractors: one is the whole target vocabulary used for the masked language modeling objective (Petroni et al., 2019), and the other has fewer candidates which are filtered or curated based on experts' understanding of the ability to be tested (Talmor et al., 2019).
We follow the second direction and formulate NL-PL probing as a multi-choice question answering task, where the question is cloze-style, in which a certain token is replaced by $[MASK]$ and distractor candidate answers are curated based on our expertise. + +Specifically, we evaluate on the NL side and the PL side, respectively. To ease the effort of data collection, we collect data automatically from NL-PL pairs in both the validation and testing sets of CodeSearchNet, both of which are unseen in the pretraining phase. To evaluate on the NL side, we select NL-PL pairs whose NL documentations include one of the six keywords (max, maximize, min, minimize, less, greater), and group them into four candidates by merging the first two keywords and the middle two keywords. The task is to ask pre-trained models to select the correct one instead of the three other distractors. That is to say, the input in this setting includes the complete code and a masked NL documentation. The goal is to select the correct answer from four candidates. For the PL side, we select codes containing the keywords max and min, and formulate the task as a two-choice answer selection problem. Here, the input includes complete NL documentation and masked PL code, and the goal is to select the correct answer from two candidates. Since code completion is an important scenario, we would like to test the model's ability to predict the correct token merely based on preceding PL contexts. Therefore, we add an additional setting for the PL side, where the input includes the complete NL documentation and the preceding PL code. Data statistics are given in the top two rows of Table 3. + +Model Comparisons Results are given in Table 3. We report accuracy, namely the number of correctly predicted instances over the number of all instances, for each programming language. Since
| | RUBY | JAVASCRIPT | GO | PYTHON | JAVA | PHP | ALL |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **Number of datapoints for probing** | | | | | | | |
| PL (2 CHOICES) | 38 | 272 | 152 | 1,264 | 482 | 407 | 2,615 |
| NL (4 CHOICES) | 20 | 65 | 159 | 216 | 323 | 73 | 856 |
| **PL probing** | | | | | | | |
| ROBERTA | 73.68 | 63.97 | 72.37 | 59.18 | 59.96 | 69.78 | 62.45 |
| PRE-TRAIN W/ CODE ONLY | 71.05 | 77.94 | 89.47 | 70.41 | 70.12 | 82.31 | 74.11 |
| CODEBERT (MLM) | 86.84 | 86.40 | 90.79 | 82.20 | 90.46 | 88.21 | 85.66 |
| **PL probing with preceding context only** | | | | | | | |
| ROBERTA | 73.68 | 53.31 | 51.32 | 55.14 | 42.32 | 52.58 | 52.24 |
| PRE-TRAIN W/ CODE ONLY | 63.16 | 48.53 | 61.84 | 56.25 | 58.51 | 58.97 | 56.71 |
| CODEBERT (MLM) | 65.79 | 50.74 | 59.21 | 62.03 | 54.98 | 59.95 | 59.12 |
| **NL probing** | | | | | | | |
| ROBERTA | 50.00 | 72.31 | 54.72 | 61.57 | 61.61 | 65.75 | 61.21 |
| PRE-TRAIN W/ CODE ONLY | 55.00 | 67.69 | 60.38 | 68.06 | 65.02 | 68.49 | 65.19 |
| CODEBERT (MLM) | 65.00 | 89.23 | 66.67 | 76.85 | 73.37 | 79.45 | 74.53 |
+ +datasets in different programming languages are extremely unbalanced, we report the accumulated metric in the same way. We use CodeBERT (MLM) here because its output layer naturally fits probing. Results show that CodeBERT performs better than baselines on almost all languages on both NL and PL probing. The numbers with only preceding contexts are lower than those with bidirectional contexts, which suggests that code completion is challenging. We leave it as future work. + +We further give a case study on PL-NL probing. We mask the NL token and the PL token separately, and report the predicted probabilities of RoBERTa and CodeBERT. Figure 3 illustrates an example on Python code. $^{7}$ We can see that RoBERTa fails in both cases, whereas CodeBERT makes the correct prediction in both NL and PL settings. + +# 4.3 Code Documentation Generation + +Although the pre-training objective of CodeBERT does not include generation-based objectives (Lewis et al., 2019), we would like to investigate to what extent CodeBERT performs on generation tasks. Specifically, we study code-to-NL generation, and report results for the documentation generation task on the CodeSearchNet Corpus in six programming languages. Since the generated documentations are short and higher order n-grams may not overlap, we remedy this problem by using the smoothed BLEU score (Lin and Och, 2004). + +![](images/0768685ca92ca4e0c0343b71bde34a9624c1baa33751963a2154baec8374b569.jpg) + +Table 3: Statistics of the data for NL-PL probing and the performance of different pre-trained models. Accuracies $(\%)$ are reported. Best results in each group are in bold.
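As an illustration of the metric (not the official evaluation script), a simplified single-reference smoothed BLEU-4 in the spirit of Lin and Och (2004) can be computed as follows; helper names are ours:

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    """Multiset of n-grams occurring in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def smoothed_bleu4(candidate, reference):
    """Sentence-level BLEU-4 with add-one smoothing on the n-gram
    precisions. A simplified single-reference sketch only."""
    log_precision = 0.0
    for n in range(1, 5):
        cand = ngram_counts(candidate, n)
        ref = ngram_counts(reference, n)
        matched = sum(min(count, ref[gram]) for gram, count in cand.items())
        total = sum(cand.values())
        # add-one smoothing keeps the geometric mean defined when a
        # higher-order precision would otherwise be zero
        log_precision += math.log((matched + 1) / (total + 1)) / 4
    brevity = min(1.0, math.exp(1 - len(reference) / max(len(candidate), 1)))
    return brevity * math.exp(log_precision)
```

The smoothing matters exactly in the situation described above: short generated documentations whose higher-order n-grams rarely overlap with the reference.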
| | | max | min | less | greater |
| --- | --- | --- | --- | --- | --- |
| NL | RoBERTa | 96.24% | 3.73% | 0.02% | 0.01% |
| | CodeBERT (MLM) | 39.38% | 60.60% | 0.02% | 0.0003% |
| PL | RoBERTa | 95.85% | 4.15% | - | - |
| | CodeBERT (MLM) | 0.001% | 99.999% | - | - |
+ +Figure 3: Case study on the Python language. Masked tokens in NL (in blue) and PL (in yellow) are applied separately. Predicted probabilities of RoBERTa and CodeBERT are given. + +Model Comparisons We compare our model with several baselines, including an RNN-based model with attention mechanism (Sutskever et al., 2014), the Transformer (Vaswani et al., 2017), RoBERTa and the model pre-trained on code only. To demonstrate the effectiveness of CodeBERT on code-to-NL generation tasks, we adopt various pre-trained models as encoders and keep the hyperparameters consistent. Detailed hyper-parameters are given in Appendix B.3. + +Table 4 shows the results with different models for the code-to-documentation generation task. As we can see, models pre-trained on programming language outperform RoBERTa, which illustrates that pre-training models on programming
| MODEL | RUBY | JAVASCRIPT | GO | PYTHON | JAVA | PHP | OVERALL |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SEQ2SEQ | 9.64 | 10.21 | 13.98 | 15.93 | 15.09 | 21.08 | 14.32 |
| TRANSFORMER | 11.18 | 11.59 | 16.38 | 15.81 | 16.26 | 22.12 | 15.56 |
| ROBERTA | 11.17 | 11.90 | 17.72 | 18.14 | 16.47 | 24.02 | 16.57 |
| PRE-TRAIN W/ CODE ONLY | 11.91 | 13.99 | 17.78 | 18.58 | 17.50 | 24.34 | 17.35 |
| CODEBERT (RTD) | 11.42 | 13.27 | 17.53 | 18.29 | 17.35 | 24.10 | 17.00 |
| CODEBERT (MLM) | 11.57 | 14.41 | 17.78 | 18.77 | 17.38 | 24.85 | 17.46 |
| CODEBERT (RTD+MLM) | 12.16 | 14.90 | 18.07 | 19.06 | 17.65 | 25.16 | 17.83 |
language could improve code-to-NL generation. Besides, results in Table 4 show that CodeBERT pre-trained with RTD and MLM objectives brings a gain of 1.3 BLEU score over RoBERTa overall and achieves state-of-the-art performance. $^{8}$ + +# 4.4 Generalization to Programming Languages NOT in Pre-training + +We would like to evaluate CodeBERT on a programming language which is never seen in the pretraining step. To this end, we study the task of generating a natural language summary of a C# code snippet. We conduct experiments on the dataset of CodeNN (Iyer et al., 2016), which consists of 66,015 pairs of questions and answers automatically collected from StackOverflow. This dataset is challenging since its scale is orders of magnitude smaller than the CodeSearchNet Corpus. We evaluate models using smoothed BLEU-4 score and use the same evaluation scripts as Iyer et al. (2016). + +Table 4: Results on Code-to-Documentation generation, evaluated with smoothed BLEU-4 score.
| MODEL | BLEU |
| --- | --- |
| MOSES (KOEHN ET AL., 2007) | 11.57 |
| IR | 13.66 |
| SUM-NN (RUSH ET AL., 2015) | 19.31 |
| 2-LAYER BILSTM | 19.78 |
| TRANSFORMER (VASWANI ET AL., 2017) | 19.68 |
| TREELSTM (TAI ET AL., 2015) | 20.11 |
| CODENN (IYER ET AL., 2016) | 20.53 |
| CODE2SEQ (ALON ET AL., 2019) | 23.04 |
| ROBERTA | 19.81 |
| PRE-TRAIN W/ CODE ONLY | 20.65 |
| CODEBERT (RTD) | 22.14 |
| CODEBERT (MLM) | 22.32 |
| CODEBERT (MLM+RTD) | 22.36 |
+ +Table 5: Code-to-NL generation on C# language. + +Model Comparisons Table 5 shows that our model with MLM and RTD pre-training objectives achieves a 22.36 BLEU score and improves by 2.55 points over RoBERTa, which illustrates that CodeBERT could generalize better to other programming languages never seen in the pre-training step. However, our model achieves slightly lower results than code2seq (Alon et al., 2019). The main reason could be that code2seq makes use of compositional paths in its abstract syntax tree (AST) while CodeBERT only takes the original code as the input. We have trained a version of CodeBERT by traversing the tree structure of the AST following a certain order, but applying that model does not bring improvements on generation tasks. This shows a potential direction to improve CodeBERT by incorporating the AST. + +# 5 Conclusion + +In this paper, we present CodeBERT, which to the best of our knowledge is the first large bimodal pre-trained model for natural language and programming language. We train CodeBERT on both bimodal and unimodal data, and show that finetuning CodeBERT achieves state-of-the-art performance on downstream tasks including natural language code search and code-to-documentation generation. To further investigate the knowledge embodied in pre-trained models, we formulate the task of NL-PL probing and create a dataset for probing. We regard the probing task as a cloze-style answer selection problem, and curate distractors for both NL and PL parts. Results show that, with model parameters fixed, CodeBERT performs better than RoBERTa and a continuously trained model using codes only. + +There are many potential directions for further research in this field. First, one could learn better generators with bimodal evidence or more complicated neural architectures to improve the replaced token detection objective. Second, the loss functions of CodeBERT mainly target NL-PL understanding tasks.
Although CodeBERT achieves strong BLEU scores on code-to-documentation generation, CodeBERT itself could be further improved by generation-related learning objectives. + +How to successfully incorporate AST into the pretraining step is also an attractive direction. Third, we plan to apply CodeBERT to more NL-PL related tasks, and extend it to more programming languages. Flexible and powerful domain/language adaptation methods are necessary to generalize well. + +# Acknowledgments + +Xiaocheng Feng is the corresponding author of this work. We thank the anonymous reviewers for their insightful comments. Zhangyin Feng, Xiaocheng Feng, Bing Qin and Ting Liu are supported by the National Key R&D Program of China via grant 2018YFB1005103 and National Natural Science Foundation of China (NSFC) via grant 61632011 and 61772156. + +# References + +Uri Alon, Shaked Brody, Omer Levy, and Eran Yahav. 2019. code2seq: Generating sequences from structured representations of code. International Conference on Learning Representations. +Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. +Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pretraining text encoders as discriminators rather than generators. In International Conference on Learning Representations. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. +Xiaodong Gu, Hongyu Zhang, and Sunghun Kim. 2018. Deep code search. In 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE), pages 933-944. IEEE. +Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. 2019.
CodeSearchNet challenge: Evaluating the state of semantic code search. arXiv preprint arXiv:1909.09436. +Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code using a neural attention model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2073-2083. + +Dan Jurafsky. 2000. Speech & language processing. Pearson Education India. +Aditya Kanade, Petros Maniatis, Gogul Balakrishnan, and Kensen Shi. 2019. Pre-trained contextual embedding of source code. arXiv preprint arXiv:2001.00059. +Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882. +Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th annual meeting of the association for computational linguistics companion volume proceedings of the demo and poster sessions, pages 177-180. +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. +Chin-Yew Lin and Franz Josef Och. 2004. Orange: a method for evaluating automatic evaluation metrics for machine translation. In Proceedings of the 20th international conference on Computational Linguistics, page 501. Association for Computational Linguistics. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. +Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019.
ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems, pages 13-23. +Bhaskar Mitra, Nick Craswell, et al. 2018. An introduction to neural information retrieval. Foundations and Trends® in Information Retrieval, 13(1):1-126. +Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. +Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? arXiv preprint arXiv:1909.01066. +Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? arXiv preprint arXiv:1906.01502. + +Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf. + +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. + +Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685. + +Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. 2019. VideoBERT: A joint model for video and language representation learning. arXiv preprint arXiv:1904.01766. + +Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112. + +Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015.
Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075. + +Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2019. oLMpics - on what language model pre-training captures. arXiv preprint arXiv:1912.13283. + +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008. + +Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. + +Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. + +# A Data Statistics + +Data statistics of the training/validation/testing data splits for six programming languages are given in Table 6. + +
| CODE SEARCH | TRAINING | DEV | TESTING |
| --- | --- | --- | --- |
| GO | 635,635 | 28,483 | 14,291 |
| JAVA | 908,886 | 30,655 | 26,909 |
| JAVASCRIPT | 247,773 | 16,505 | 6,483 |
| PHP | 1,047,406 | 52,029 | 28,391 |
| PYTHON | 824,342 | 46,213 | 22,176 |
| RUBY | 97,580 | 4,417 | 2,279 |
+ +Table 6: Data statistics about the CodeSearchNet Corpus for natural language code search. + +# B Training Details + +# B.1 Pre-training + +We train CodeBERT on one NVIDIA DGX-2 machine using FP16. The machine combines 16 interconnected NVIDIA Tesla V100 GPUs with 32GB memory each. We use the following set of hyper-parameters to train models: the batch size is 2,048 and the learning rate is 5e-4. We use Adam to update the parameters and set the number of warmup steps to 10K. We set the max sequence length to 512 and the max number of training steps to 100K. Training on 1,000 batches of data takes 600 minutes with the MLM objective and 120 minutes with the RTD objective. + +# B.2 CodeSearch + +In the fine-tuning step, we set the learning rate to 1e-5, the batch size to 64, the max sequence length to 200 and the max number of fine-tuning epochs to 8. As in pre-training, we use Adam to update the parameters. We choose the model that performs best on the development set, and use it to evaluate on the test set. + +# B.3 Code Summarization on Six Programming Languages + +We use a Transformer with 6 layers, 768-dimensional hidden states and 12 attention heads as our decoder in all settings. We set the max lengths of input and inference to 256 and 64, respectively. We use the Adam optimizer to update model parameters. The learning rate and the batch size are 5e-5 and 64, respectively. We tune hyper-parameters and perform early stopping on the development set. + +# B.4 Code Summarization on C# + +Since state-of-the-art methods use an RNN as their decoder, we choose a 2-layer GRU with an attention mechanism as our decoder for comparison. We fine-tune models using a grid search over the following set of hyper-parameters: the batch size is in $\{32,64\}$ and the learning rate is in $\{2e-5,5e-5\}$ . We report the numbers when models achieve the best performance on the development set.
+ +# C Learning Curve of CodeSearch + +From Figure 4, we can see that CodeBERT performs better at the early stage, which reflects that CodeBERT provides a good initialization for learning downstream tasks. + +![](images/538fbd8d70ce5a47461e337b8607a4a91c5827ca9d7b45f0dfd51efe09db49aa.jpg) +Figure 4: Learning curves of different pre-trained models in the fine-tuning step. We show results on Python and Java. + +![](images/f86b24882981ee7278bd008d7c50c119ab543644afa83e589a286bc040a17e36.jpg) + +# D Late Fusion + +In §4.1, we show that CodeBERT performs well in the setting where natural languages and codes have early interactions. Here, we investigate whether CodeBERT is good at working as a unified encoder. We apply CodeBERT to natural language code search in a late fusion setting, where CodeBERT first encodes NL and PL separately, and then calculates the similarity by a dot product. In this way, code search is equivalent to finding the nearest codes in the shared vector space. This scenario also facilitates the use of CodeBERT in an online system, where the representations of codes are calculated in advance. At runtime, a system only needs to compute the representation of the NL query and vector-based dot products. + +We fine-tune CodeBERT with the following objective, which maximizes the dot product of the ground truth while minimizing the dot products of distractors: + +$$
-\frac{1}{N}\sum_{i}\log\left(\frac{\exp\left(Enc(c_{i})^{\intercal}Enc(w_{i})\right)}{\sum_{j}\exp\left(Enc(c_{j})^{\intercal}Enc(w_{i})\right)}\right) \tag{15}
$$ + +Results are given in Table 7. We only run this setting on two languages with a relatively small amount of data. + +We can see that CodeBERT performs better than RoBERTa and the model pre-trained with codes
| MODEL | RUBY | GO |
| --- | --- | --- |
| ROBERTA | 0.0043 | 0.0030 |
| PRE-TRAIN W/ CODE ONLY | 0.1648 | 0.4179 |
| CODEBERT | 0.6870 | 0.8372 |
+ +Table 7: Results on natural language code search by late fusion. + +only. Moreover, late fusion performs comparably to the standard setting, while being more efficient, which makes it suitable for an online system. + +# E Case Study + +To qualitatively analyze the effectiveness of CodeBERT, we give some cases for the code search and code documentation generation tasks. + +Due to limited space, we only give the top-2 results of the query for the Python programming language. As shown in Figure 5, the search results are highly relevant to the query. + +Figure 6 and Figure 7 show the outputs of different models for the code documentation generation task. As we can see, CodeBERT performs better than all baselines. + +Query
```txt
create file and write something
```

Search Results (top2)
```python
# https://github.com/darknessomi/musicbox/blob/master/NEMbox/util.py#L37-L40
def create_file(path, default="\n"):
    if not os.path.exists(path):
        with open(path, "w") as f:
            f.write(default)
```

```python
# https://github.com/datakortet/yamldirs/blob/master/yamldirs/filemaker.py#L114-L118
def make_file(self, filename, content):
    '''Create a new file with name ``filename`` and content ``content``.
    '''
    with open(filename, 'w') as fp:
        fp.write(content)
```

Figure 5: Python CodeSearch example. The results are retrieved from 1,156,085 Python code snippets. We only give the top-2 results due to limited space.

Gold: Add a write error result
```java
public void addWriteErrorResult(final BulkWriteError writeError, final IndexMap indexMap) {
    notNull("writeError", writeError);
    mergeWriteErrors(asList(writeError), indexMap);
}
```

```txt
CodeBERT: Add a write error result.
PRE-TRAIN W/ CODE ONLY: Merges the given write error.
RoBERTa: Add a write operation to the map.
Transformer: Adds an error to the write map.
RNN: Add an error map.
```

Figure 6: Java code documentation generation output example.
+ +```python
def create_or_update(self, list_id, subscriber_hash, data):
    subscriber_hash = check_subscriber_hash(subscriber_hash)
    self.list_id = list_id
    self.subscriber_hash = subscriber_hash
    if 'email_address' not in data:
        raise ValueError('The list member must have an email_address')
    check_email(data['email_address'])
    if 'status_if_new' not in data:
        raise ValueError('The list member must have a status_if_new')
    if data['status_if_new'] not in ['subscribed', 'unsubscribed', 'cleaned', 'pending', 'transactional']:
        raise ValueError('The list member status_if_new must be one of "subscribed", "unsubscribed", "cleaned", "pending", or "transactional"')
    return self._mc_client._put(url=self._build_path(list_id, 'members', subscriber_hash), data=data)
```

Gold: Add or update a list member.
```txt
CodeBERT: Create or update a list member.
PRE-TRAIN W/ CODE ONLY: Create or update a subscriber.
RoBERTa: Create or update an existing record.
Transformer: Create or update a subscription.
RNN: Creates or updates an email address.
```

Figure 7: Python code documentation generation output example.
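The late-fusion objective of Eq. (15) in Appendix D is an in-batch softmax over dot products: each query is scored against every code in the batch, and the ground-truth pair sits on the diagonal. A minimal pure-Python sketch, where the plain vector inputs stand in for precomputed Enc(c) and Enc(w) outputs (the real batching and encoder are of course more involved):

```python
import math

def late_fusion_loss(code_vecs, query_vecs):
    """Sketch of Eq. (15): mean negative log-softmax of the ground-truth
    code's dot-product score, with the other in-batch codes as distractors.
    Row i of code_vecs and query_vecs form a ground-truth (c_i, w_i) pair."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    total = 0.0
    for i, w in enumerate(query_vecs):
        scores = [dot(c, w) for c in code_vecs]      # Enc(c_j)^T Enc(w_i) for all j
        m = max(scores)                              # shift for numerical stability
        log_z = m + math.log(sum(math.exp(s - m) for s in scores))
        total += scores[i] - log_z                   # log p(c_i | w_i)
    return -total / len(query_vecs)
```

With all-zero vectors every code is equally likely, so the loss is exactly log N; with strongly matched pairs it approaches zero.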
\ No newline at end of file diff --git a/codebertapretrainedmodelforprogrammingandnaturallanguages/images.zip b/codebertapretrainedmodelforprogrammingandnaturallanguages/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e874af230734c5a59fa6c1091eaa165e457d21d7 --- /dev/null +++ b/codebertapretrainedmodelforprogrammingandnaturallanguages/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:07e337e48e5a458d128f86065797e4f31fde403fb74a3e064041aeb346a2379a +size 581659 diff --git a/codebertapretrainedmodelforprogrammingandnaturallanguages/layout.json b/codebertapretrainedmodelforprogrammingandnaturallanguages/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..9afd1073a403d11e6e942959c7017454e3a96f53 --- /dev/null +++ b/codebertapretrainedmodelforprogrammingandnaturallanguages/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e5c20a33fba611634d1475c0668c4c2e3257e54b939a5a0c8666a9c263fed912 +size 356862 diff --git a/comingtotermsautomaticformationofneologismsinhebrew/36e539c4-a4ba-40d7-a32b-bd36ae75ffd7_content_list.json b/comingtotermsautomaticformationofneologismsinhebrew/36e539c4-a4ba-40d7-a32b-bd36ae75ffd7_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..e581b9b7e458b1c823f34e82a57ba120c0695ce7 --- /dev/null +++ b/comingtotermsautomaticformationofneologismsinhebrew/36e539c4-a4ba-40d7-a32b-bd36ae75ffd7_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0ca9f2615d99c59322e1a6997dd5989b9b999aaf7091726e0e90fb22fef70b8d +size 86386 diff --git a/comingtotermsautomaticformationofneologismsinhebrew/36e539c4-a4ba-40d7-a32b-bd36ae75ffd7_model.json b/comingtotermsautomaticformationofneologismsinhebrew/36e539c4-a4ba-40d7-a32b-bd36ae75ffd7_model.json new file mode 100644 index 0000000000000000000000000000000000000000..66c05e8aa1a2ee99bfc1c5b35928c23327e036d9 --- /dev/null +++ 
b/comingtotermsautomaticformationofneologismsinhebrew/36e539c4-a4ba-40d7-a32b-bd36ae75ffd7_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fe0d5d3f49389bd81475e8ef26446b86f5d69ce121c43148f1f3d63c72ec5944 +size 103410 diff --git a/comingtotermsautomaticformationofneologismsinhebrew/36e539c4-a4ba-40d7-a32b-bd36ae75ffd7_origin.pdf b/comingtotermsautomaticformationofneologismsinhebrew/36e539c4-a4ba-40d7-a32b-bd36ae75ffd7_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..06ffd2ef2286dd1f06524554e2d6d167905534ee --- /dev/null +++ b/comingtotermsautomaticformationofneologismsinhebrew/36e539c4-a4ba-40d7-a32b-bd36ae75ffd7_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1e109949f007526ea1a4eb376914e351b43a65a74555710a8651d303ae6ea2c2 +size 809846 diff --git a/comingtotermsautomaticformationofneologismsinhebrew/full.md b/comingtotermsautomaticformationofneologismsinhebrew/full.md new file mode 100644 index 0000000000000000000000000000000000000000..5fcb6ea8702972da2836f813fe679810c293cc97 --- /dev/null +++ b/comingtotermsautomaticformationofneologismsinhebrew/full.md @@ -0,0 +1,363 @@ +# Coming to Terms: Automatic Formation of Neologisms in Hebrew + +Moran Mizrahi *, Stav Yardeni Seelig *, Dafna Shahaf + +The Hebrew University of Jerusalem + +{moranmiz, stav.yardeni, dshahaf}@cs.huji.ac.il + +# Abstract + +Spoken languages are ever-changing, with new words entering them all the time. However, coming up with new words (neologisms) today relies exclusively on human creativity. In this paper we propose a system to automatically suggest neologisms. We focus on the Hebrew language as a test case due to the unusual regularity of its noun formation. User studies comparing our algorithm to experts and non-experts demonstrate that our algorithm is capable of generating high-quality outputs, as well as enhance human creativity. 
More broadly, we seek to inspire more computational work around the topic of linguistic creativity, which we believe offers numerous unexplored opportunities. + +# 1 Introduction + +Human languages are always changing, evolving, and adapting to the needs of their speakers. New words regularly enter our vocabulary, while others disappear. For example, the word "selfie" (a self-portrait digital photo, typically taken with a smartphone) has recently become part of everyday English, even spawning variations such as helfie (a selfie of one's hair), welfie (a selfie taken during a workout), and drelfie (a selfie taken while drunk) (Christiansen and Chater, 2016). + +Newly coined words or expressions are termed neologisms. There are many neologism formation mechanisms; common ones include loanwords borrowed from another language (kindergarten), morphological derivation (socialize, simplify), compounding (football, breakwater), blending (spoon + fork = spork), and acronyms (laser). + +Importantly, the coining of novel words relies on human creativity, with the new terms often conveying a lot of information in an inventive way. In this work, we set out to explore the possibility of automating some of this inherently human, creative linguistic process. In other words, we ask whether computers can generate high-quality, novel words on their own, or alternatively help inspire people to find better words. + +We focus on the automatic generation of neologisms in the Hebrew language. Hebrew has several properties which make it particularly interesting for our goal: first, modern Hebrew was revived after a long period of time (Rabin, 1963; Fellman, 1973), which is unique: there are no other cases of a natural language without any native speakers subsequently acquiring millions of native speakers. For this reason, foreign words are very common in Hebrew, and many terms need to be coined. + +Another reason for focusing on Hebrew is its unusual regularity of noun formation.
While portmanteaus (word blends), word combinations and other formation mechanisms do exist in Hebrew, most words are created by combining a root and a pattern. To the best of our knowledge, this method of word generation has not been explored before in a computational context. Our contributions are: + +- We propose a novel task, automating the formation of neologisms in Hebrew, and propose an algorithm mimicking the human process. Our pipeline includes models for learning special-case phonological rules, as well as other statistical properties of the language. We release open-source code and data here. +- We evaluate individual components and then run a user study, comparing our algorithm to both experts and non-experts. While humans are better (as expected), our algorithm is capable of generating high-quality words, winning $27 - 41\%$ of pairwise comparisons in terms of suitability, likability and creativity, as well as having candidates in the top quartile of the overall ranking. +- In addition to comparing our system to human performance, we build on ideas from human-computer interaction to explore how the system can improve human performance. We show that our algorithm's output can enhance human creativity, getting non-experts closer to experts. We believe that this type of evaluation can be beneficial for many NLP tasks, especially creative tasks or tasks where human performance is still significantly superior. + +Beyond the specific task of generating Hebrew neologisms, we hope this work will inspire further research towards automating and supporting creative tasks. + +# 2 Background + +Hebrew is classified as an Afroasiatic, Semitic language. Like Arabic, Hebrew is written right to left. Vowels are indicated by diacritic marks representing the syllabic onset, or by matres lectionis (consonantal letters used as vowels). Everyday printed Hebrew often omits the diacritic marks, resulting in a highly ambiguous text.
For example, the undiacritized בצל can be diacritized as "onion", "in a shadow" or "in the shadow" (Shmidman et al., 2020). + +Hebrew morphology. Hebrew follows nonconcatenative morphology. It is based on roots, consisting of a sequence of consonants (usually three), from which nouns, adjectives and verbs are formed. Thus, different words composed of the same root often have semantically related meanings. For example, the words tizmoret, zamar, and zemer all share the root z-m-r ("sing"), and stand respectively for an orchestra, a singer, and a song. + +While in English words are usually formed by adding prefixes and suffixes, in Hebrew the root letters are combined into patterns, called mishkalim. The patterns are commonly represented by using the arbitrary placeholder letters k-t-l for the root consonants. Patterns usually include diacritics, vowel letters and sometimes prefixes and suffixes. For example, to form the Hebrew word for orchestra (tizmoret), the placeholder letters of the pattern are replaced with the root letters. + +Even though this concept is simple, there is a significant number of special cases requiring modifications to the form of the final word. From a sample of the Even-Shoshan dictionary (Even-Shoshan and Azar, 2003), we estimate that $\sim 2/3$ of the roots require some modification. For example, combining a certain root with a pattern should have resulted in tarpe'a; however, since that root is special (it ends with א), the result becomes trufa. + +Importantly, many patterns denote specific semantic categories. For example, the pattern katal is commonly used to describe professions, as in the words for a singer, a cook, and a reporter. However, not every category has its matching patterns, and some patterns can denote multiple different categories. For example, the pattern katelet can be used for professions in feminine form, but is also a very common pattern for illnesses. + +Formation of Hebrew words.
Many world languages have official language regulators, often referred to as language academies (e.g., the Royal Spanish Academy, L'Académie française, the Council for German Orthography). The regulating body for Hebrew is the Academy of the Hebrew Language. One of the Academy's most important roles is creating new words to replace loanwords derived from other languages (Fellman, 1974). The initiative tends to come from the public, seeking Hebrew alternatives for foreign words common in everyday speech. A committee of scholars of language, linguistics, Judaic studies, and Bible discusses the word and suggests a Hebrew replacement. Most new words are built using the root-pattern system (aca, 2020), although compound nouns and portmanteaus (blends) are also used. + +We note that even with decades of experience, it is difficult to predict whether the new terms will be picked up by the public. Some words catch on immediately, some take years, and some never do. + +# 3 Methodology + +In this section we present our algorithm, ELIEZER BOT-YEHUDA (EBY), named after Eliezer Ben-Yehuda, a lexicographer who was the driving force behind the revival of the Hebrew language in the modern era. We follow the three main ways of forming words used by the Academy of the Hebrew Language: root-pattern, compounds, and portmanteaus. The input to the algorithm is a source word in English, for which we wish to find a Hebrew word. We used English as a mediating language due to the variety of linguistic resources available for it, but the algorithm can work with any other language (see Section 3.3). Figure 1 shows the process for the input word "palette". + +![](images/88952dde90842d8b5161c6d08fbdf48053ee49a607157a30fc736150f712c819.jpg) +Figure 1: The pipeline of the algorithm, including root-pattern, compounds and portmanteaus, demonstrated on the source word "palette" (see dashed squares). The pipeline mimics the human process of generating neologisms.
+ +# 3.1 Root and pattern pipeline + +Root and pattern combination is the most common mechanism for coining Hebrew terms. We now explain how we simulate this process. + +# 3.1.1 Finding potential roots + +The first step towards coming up with a new term is understanding what the word is about. Therefore, we created a document for each English word that appeared in our dictionaries, containing multiple English dictionary definitions (from Wiktionary, the Merriam-Webster dictionary, WordNet (Miller, 1995), ConceptNet (Speer and Havasi, 2012), Wikipedia abstracts and the Easier English Student Dictionary (Rooney and Collin, 2003)). After lemmatizing and removing stop words, we used tfidf (Ramos et al., 2003) to find the 10 most important words in each document (e.g., color, mix, board for "palette"). Despite the simplicity of this process, it proved to be effective in practice (see section 4.3). + +Next, we attempt to identify relevant roots. To do so, we translated the important words into Hebrew, using English Wiktionary, Hebrew Wiktionary, and Hebrew Wordnet (Ordan and Wintner, 2007). Importantly, the output of the translators was diacritized words, from which we extracted roots (identifying the root without diacritics is much harder). Given the translations, we used Hebrew Wiktionary and the Even-Shoshan dictionary$^{1}$ to identify roots. We ranked the roots based on the tfidf scores of their corresponding important words. Extracted roots for "palette" include those of the words for color and mix. + +# 3.1.2 Finding potential patterns + +As mentioned in section 2, many of the patterns in Hebrew convey semantic information. Thus, to find patterns reflecting the word's category, we use Wordnet's hypernym and hyponym relations to extract up to $k = 100$ sister-terms of the original foreign word. We translate these into Hebrew, with the hope that some already have Hebrew translations, which could hint at the appropriate patterns.
+ +Hebrew Wiktionary provided roots and patterns for the translated words, but the Even-Shoshan dictionary provided roots only; see the end of section 3.1.3 for details on how we inferred the patterns for translations with a root only. Finally, we chose the top patterns based on their prevalence. As many semantic categories have several corresponding patterns, and due to the sparsity of our resources, we chose to use the top 4 patterns. In the case of "palette", one pattern found was maktela, used for instruments. + +# 3.1.3 Combining roots and patterns + +A naive combination of a root and a pattern will not necessarily generate the word correctly (section 2). Thus, we trained a seq2seq model to modify the naive root and pattern combination into a valid Hebrew word. We did not use a rule-based model due to the large number of rules and to allow a more general pipeline. + +We curated a dataset of 3365 words, with root and pattern, extracted from Hebrew Wiktionary. We used the naive combination function on the root and the pattern (substituting root letters in the pattern) to create the model's inputs, and trained it to turn them into the correct Hebrew words. The vocabulary size of the dataset was 46 (including Hebrew letters and diacritics). The dataset was divided into train, validation and test sets with $80\%$, $10\%$ and $10\%$ of the data respectively. + +Model architecture and training details. The architecture is a character-based attentional seq2seq model (Bahdanau et al., 2014) with a single GRU layer. We used a bidirectional encoder with character embeddings, and the decoder included dropout. The character embeddings in the encoder were concatenated to binary vectors, indicating for each root letter whether it belongs to different special-case root families (e.g., guttural letters). See the Appendix for the choice of model parameters.
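The "naive combination function" that produces the seq2seq inputs can be sketched as a simple placeholder substitution over the conventional k-t-l letters (קטל); the actual inputs also carry diacritics and the special-case indicator vectors, which are omitted in this sketch:

```python
PLACEHOLDERS = set("קטל")  # conventional k-t-l placeholder letters for root consonants

def naive_combine(root, pattern):
    """Substitute the root's consonants for the pattern's placeholder letters,
    left to right, ignoring all special-case phonological rules. This naive
    output is what the seq2seq model learns to turn into a valid word."""
    consonants = iter(root)
    return "".join(next(consonants) if ch in PLACEHOLDERS else ch for ch in pattern)
```

For instance, mechanically substituting the root z-m-r (זמר) into the profession pattern katal (קָטָל) yields zamar (זָמָר, "singer"), matching the example in Section 2; it is exactly the special-case roots that make this naive output wrong and the learned correction necessary.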
An example output of this stage for "palette" was matsbe'a, a combination of the root of the word for "color" with the instrument pattern maktela. + +The model achieved 0.68 accuracy on the test set. The mean Levenshtein edit distance for errors only (after setting the distance between two diacritic characters that sound alike to zero) was 1.63 characters. Most of the differences from the ground truth were diacritics differences. For further evaluation see section 4.2. + +We also used our model for inferring the patterns of dictionary words that have a root but no pattern in our dictionary. We combined these words' roots with all possible patterns, and let our seq2seq model process them. If the result was identical to the original word, we considered the pattern likely. + +# 3.1.4 Ranking and filtering suggestions + +At this stage we had root and pattern suggestions. Next, we wanted to select the more "Hebrew-looking" words. This was necessary both since the seq2seq model did not fix all of the possible issues, and since we wanted to make sure the new word suggestions fit into the target language in terms of their statistical characteristics. To choose the best root-pattern combinations per root, we used a character-based Hebrew language model. For each combination of root and pattern, the model computed a probability score. We kept the two combinations with the highest probability per root, filtering out words with probability $\leq 0.1$ . + +To train our model, we needed a sufficient number of Hebrew words with diacritics. Therefore, we crawled the Ben Yehuda project website, containing the classics of Hebrew literature$^{2}$. Hebrew is a morphologically rich language; thus, each token in the text may include multiple morphemes. Since we wanted the language model to represent statistical properties of the words themselves, we cleaned them of prefixes according to grammar rules$^{3}$ (see elaboration in the Appendix).
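The scoring-and-filtering step of §3.1.4 can be sketched with a tiny character n-gram model; the order, the add-one smoothing, the geometric-mean probability and the 0.1 threshold below are illustrative choices, not the paper's exact configuration (which is in its appendix):

```python
import math
from collections import Counter

class CharNgramLM:
    """Toy character n-gram model with add-one smoothing, used to score how
    'Hebrew-looking' a candidate word is (a sketch of the ranking model)."""

    def __init__(self, words, n=3):
        self.n = n
        self.counts = Counter()   # n-gram counts
        self.context = Counter()  # (n-1)-gram context counts
        self.vocab = set()
        for w in words:
            padded = "^" * (n - 1) + w + "$"
            self.vocab.update(padded)
            for i in range(len(padded) - n + 1):
                gram = padded[i:i + n]
                self.counts[gram] += 1
                self.context[gram[:-1]] += 1

    def score(self, word):
        """Geometric mean of the smoothed per-character probabilities."""
        padded = "^" * (self.n - 1) + word + "$"
        logp = 0.0
        for i in range(len(padded) - self.n + 1):
            gram = padded[i:i + self.n]
            p = (self.counts[gram] + 1) / (self.context[gram[:-1]] + len(self.vocab))
            logp += math.log(p)
        return math.exp(logp / (len(padded) - self.n + 1))

def best_per_root(lm, candidates, keep=2, min_p=0.1):
    """Keep the `keep` highest-scoring pattern combinations for one root,
    dropping candidates whose score falls at or below `min_p`."""
    scored = sorted(candidates, key=lm.score, reverse=True)
    return [w for w in scored[:keep] if lm.score(w) > min_p]
```

Trained on real words, such a model assigns higher scores to candidates that share the training vocabulary's character statistics than to ones that violate them.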
The final dataset consisted of 514,300 unique words with diacritics, and 4,955,687 characters, with an average word length of 9.6 characters. The number of possible characters (including diacritics) was 46. The data was divided into train, validation and test sets (80%, 10% and 10%, respectively). We used an n-gram character-based language model. See implementation details and parameter choices in the Appendix. Further evaluation of the model is provided in section 4.3. + +To prevent confusion, the last step of the algorithm is to filter out words which are identical to or sound like existing Hebrew words (Levenshtein edit distance of zero, with the substitution weight between two diacritic characters that sound alike set to zero). + +# 3.2 Compound and portmanteau pipeline + +In addition to our main pipeline, we also supported two less-common word formation processes: compounds and portmanteaus (see Figure 1). To create proper grammatical compound nouns for a source word, we translate the important words as before (see section 3.1.1). We filter out all important words without a root, to exclude loanwords. Then, we pair up the remaining important words to create a compound noun, ranking the pairs according to the sum of their tfidf scores. + +To make sure the compound nouns are grammatical, we focus on a specific type of compound noun which is highly prevalent in Hebrew, and check whether the words in the combination are both nouns and have a "genitive case" relation. This was done using the UDPipe POS tagger and dependency parser (Straka and Strakova, 2017). An example of a compound for "palette" was luakh tseva, meaning "color board". + +To form portmanteaus, we attempted to blend the top compounds when possible, according to blending rules (Bat-El, 1996). For "palette", one example was irbuluakh, blending the words for "mix" and "board". + +# 3.3 A note on generalizability + +Even though the scheme we presented focuses on Hebrew, it can be adapted to other languages as well.
First, note that the root-pattern system is also used in Arabic (the fifth most spoken language in the world). By changing the data sources and retraining the seq2seq model, our algorithm should also work for this language. In addition, the compound and portmanteaus strategies discussed in the pipeline are common in languages without Hebrew's root-pattern system. Thus, these formation processes can be used in numerous languages. + +More broadly, we would like to encourage the utilization of our pipeline and its main components (identifying related-content words, identifying potential word forms, word generation via language-dependent manipulations, ranking outputs using language models) when generalizing the algorithm to other languages. We believe it can serve as a useful guide for automating the creative linguistic process of neologism generation in any language. + +# 4 Evaluation of individual components + +Our pipeline (depicted in Figure 1) is composed of several components. In this section we evaluate the contribution of the three main components: important words (tfidf), combining roots and patterns (seq2seq model) and ranking and filtering (language model). For these evaluations, we used student annotators who are native speakers of Hebrew. + +# 4.1 Important words extraction + +For this evaluation, two annotators manually marked words they consider important in 15 English word definitions (20-300 words each). We measured agreement using Jaccard Index, averaged over the words, resulting in 0.4 with std = 0.197. Inspecting the annotations, we note that the annotators tended to mark a relatively small number of important words in each definition. + +We took words chosen by both annotators as ground truth, and measured the mean recall, resulting in 0.7 (std = 0.25). As the main purpose of this component is to capture the important words, we consider the results satisfactory. 
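The agreement and recall measures used in §4.1 reduce to simple set operations; a sketch (the word sets are illustrative):

```python
def jaccard(a, b):
    """Jaccard index between two annotators' sets of marked important words:
    |intersection| / |union|, defined as 1.0 for two empty sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def mean_recall(predicted, ground_truth):
    """Fraction of ground-truth important words recovered by the algorithm."""
    return len(set(predicted) & set(ground_truth)) / len(set(ground_truth))
```

Averaging `jaccard` over the annotated definitions gives the reported agreement of 0.4, and `mean_recall` against the words both annotators chose gives the reported recall of 0.7.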
+ +# 4.2 Root and pattern combination + +A random sample indicated that the seq2seq model applies changes to about $60\%$ of its inputs. Taking a closer look at the results, we noticed that our model was able to learn and correctly apply some Hebrew phonological rules, such as identifying repeating letters and realizing when they should be merged. + +It was also able to correctly add and remove diacritics in words (e.g., recognizing that guttural letters cannot get a gemination mark). One of the model's weaknesses was converting diphthongs to monophthongs. Some examples showing the seq2seq model's ability to apply different rules are shown in the Appendix. + +To evaluate the model more quantitatively, we asked two annotators to look at 100 word pairs and identify the one that seems to follow Hebrew phonological rules more closely. These word pairs were sampled randomly from words changed by the seq2seq model (by at least one character). + +The agreement between the annotators using Cohen's Kappa was significant (0.7). Both annotators agreed that the modified word was better in $75\%$ of the pairs. They agreed that the modified word was worse in only $10\%$ of the pairs. Therefore, we concluded that the seq2seq model indeed improves the root-pattern combinations. + +# 4.3 Language model score + +For the language model evaluation, we used similar methods. First, we qualitatively examined the probabilities assigned by the model to specific words. We found that existing Hebrew words were assigned high probabilities, while words contradicting Hebrew phonological rules, such as those still containing diphthongs, were assigned low probabilities (examples of word probabilities assigned by the language model are shown in the Appendix). + +We created 100 groups of words, sharing a root but using 4 different patterns (as described in 3.1.2). We computed our character-LM score for each word, and extracted the highest and lowest scoring words per group.
We asked two annotators to label the more "Hebrew-looking" word in each of these word pairs. Cohen's kappa agreement was again substantial (0.78). Both annotators agreed that the higher-rated word was better in $69\%$ of the pairs, and that it was worse in $20\%$ of the pairs. We concluded that the LM indeed manages to capture useful information. As the LM was trained on Hebrew classics, we believe its performance can be improved using more modern data containing diacritics.

# 5 Evaluating the algorithm's output

After evaluating the main parts of the algorithm, we continue to evaluate its suggestions (covering root-and-pattern combinations, compounds and portmanteaus). We address two main questions: (1) How do the words our algorithm generated compare to those generated by humans? (2) Can our algorithm's output boost creativity in humans generating new words?

We note that we do not expect our algorithm to beat human performance. Rather, we set out to test whether it can generate plausible suggestions, and whether it can inspire people to suggest better words. We considered the following baselines:

1. Expert suggestions: Hebrew Academy. The officially chosen Hebrew words, as well as runner-up suggestions discussed by the committee.
2. Non-expert suggestions. New word suggestions by human participants (non-experts).
3. Non-expert + EBY. New word suggestions by non-experts, after being exposed to the algorithm's output.

Step 1: Choosing source words. To choose source words for the experiment, we collected recent Hebrew Academy meeting protocols available online ${}^{4}$ . We composed a list of foreign words for which an official Hebrew translation was chosen, along with the runner-up suggestions. We found 91 foreign words with at least two suggestions for a Hebrew alternative and translated them to English (our mediating language).
We filtered out English words our dictionaries had no translations for, as well as words with a well-known official Hebrew alternative (identified through 3 annotators; words known by at least one person were discarded). We sampled 20 random words from the resulting filtered list.

Step 2: Non-experts. We recruited 4 non-expert student volunteers and showed them the 20 foreign words. For each word, the participants had two minutes to suggest Hebrew alternatives; they were then exposed to the algorithm's output and had one more minute to come up with suggestions. We chose these time constraints after holding trial runs and observing that suggestions slowed down considerably after the first minute.

Our algorithm's output and the non-expert baselines yielded many suggestions. To narrow them down and level the playing field, we mimicked the voting process used by the Hebrew Academy when it picks its top suggestions per foreign word: we recruited three more student volunteers, who discussed and agreed on up to 3 top suggestions from our algorithm's outputs and from each of the non-expert baselines independently. The chosen alternatives were then used for the comparison stage.

# 5.1 Evaluation metrics

The assessment of new word suggestions is not trivial, and should take different aspects into consideration. We chose to measure Suitability (does the new word fit the original meaning?), Likability (do you like it?) and Creativity (how creative is it?). We believe these three measures provide a comprehensive view of the fit of the words.

We created an online survey and recruited native Hebrew speakers via student mailing lists and groups. Participation was voluntary. In the survey, the participants saw 5 random source words out of the chosen 20. Each source word was followed by 5-10 Hebrew suggestions from all baselines, in randomized order.
Participants were asked to rate each suggestion with respect to suitability, likability and creativity on a Likert scale of 1-5.

As the Likert scale is an ordinal scale, on which arithmetic operations should not be conducted, we defined binary versions of our measures. We concluded that the suitability rating must be high ( $\geq 4$ ) to pass, as the suggestion has to match the original meaning. For likability and creativity, we settled on the more relaxed threshold of $\geq 3$ . Looking at the distribution of ratings reinforced this decision, as this is also the exact binarization cutoff we would have chosen to get close to $50\%$ positives (see histogram in Appendix). As one could argue for other reasonable thresholds (e.g., 4 for all measures), we report results for them in the Appendix as well.

Finally, we define a combined binary score, Combined, capturing whether the user considers the word a good candidate as a whole. To be positive, a user's rating has to pass all three thresholds: 4 for suitability, 3 for likability and creativity.

# 5.2 Results

The experiment included 177 participants, providing 20-29 ratings for each suggestion. In this section we analyze the results.

Correlation between the three measures. First, we calculated the correlation between all measures using the Spearman coefficient. We found that both suitability and creativity are positively correlated with likability (0.62 and 0.45 respectively), as expected. The link between suitability and creativity was weaker (0.25), which agrees with our intuition (as many suitable suggestions are not necessarily creative).

Experts vs. non-experts. We now compare baselines 1 (experts) and 2 (non-experts). For each source word, we identified the best suggestion from each baseline (the word with the highest percentage of positive binary ratings).
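Under an assumed data layout (each suggestion mapped to a list of per-user rating triples), the binarized Combined measure from section 5.1 and the best-suggestion selection can be sketched as:

```python
def combined_positive(suitability: int, likability: int, creativity: int) -> bool:
    """A rating passes Combined iff suitability >= 4 and likability
    and creativity are both >= 3 (the thresholds from section 5.1)."""
    return suitability >= 4 and likability >= 3 and creativity >= 3

def positive_rate(triples):
    """Fraction of users whose (suitability, likability, creativity)
    rating passes the Combined thresholds."""
    return sum(combined_positive(*t) for t in triples) / len(triples)

def best_suggestion(suggestions):
    """Pick the suggestion with the highest percentage of positive binary
    votes; `suggestions` maps suggestion -> list of rating triples."""
    return max(suggestions, key=lambda s: positive_rate(suggestions[s]))

# Hypothetical ratings for two competing suggestions of one source word.
ratings = {"word_a": [(5, 4, 3), (4, 3, 2), (3, 5, 5)],
           "word_b": [(4, 3, 3), (5, 5, 4), (4, 4, 3)]}
print(best_suggestion(ratings))  # → word_b (3/3 positive vs. 1/3 for word_a)
```

Keeping the measures binary avoids averaging raw Likert values, which is statistically dubious on an ordinal scale.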
We found that the experts' best alternative surpassed the non-experts' best alternative more often in likability and suitability (65% and 55% of the words, respectively). However, this was not the case for creativity (45%). For the combined measure, experts won 70% of the time.

These results are compatible with our belief that experts generally perform better than non-experts. The Hebrew Academy is an official institute, and thus it might put more emphasis on suitability and likability than on creativity.

Algorithm vs. humans: shared suggestions. Automatically coming up with the same words humans thought of (whether experts or non-experts) is an encouraging sign. When considering the human baselines, we used all of their suggestions, before filtering. Our algorithm produced 4 suggestions identical to expert suggestions, and 2 identical to non-expert suggestions. Non-experts generated 7 suggestions identical to the experts'. When focusing on roots only, for 14 out of our 20 source words, at least one root our algorithm selected also appeared in the expert suggestions (and 16 appeared in the non-expert ones). In comparison, for 17 words, at least one of the non-expert roots appeared in the expert suggestions.

Algorithm vs. humans: How did we fare? To compare the algorithm to the baselines, we ranked the suggestions for all of the source words by the percentage of positive (Combined) votes they received. Table 1 shows the distribution of positions in the ranked list for the different baselines (the bottom line shows the percentage of words from each baseline, unrelated to the ranking). Not surprisingly, the expert suggestions dominate the top quartile, followed by the non-experts. However, our algorithm is still well-represented in the top quartiles, despite having fewer candidates in the race. Interestingly, there are more expert suggestions than non-expert ones in the bottom quartile.

|  | EBY | Experts | Non-experts |
| --- | --- | --- | --- |
| Top 25% | 10.3% | 56.4% | 33.3% |
| 50-75% | 18.9% | 43.2% | 37.8% |
| 25-50% | 37.1% | 17.1% | 45.7% |
| Bottom 25% | 52.8% | 33.3% | 13.9% |
| Total | 29.3% | 38.1% | 32.7% |

Table 1: Distribution of words from each baseline in each quartile, where the words are sorted by the percentage of positive combined (binary) votes. "Total" indicates the percentage of suggestions from each baseline. The human baselines are, as expected, winning, but EBY is still well-represented in the top quartiles, despite having fewer total candidates.

![](images/32b9225fbcfaa371cd823936640a1853941ec3251814b832b00d61c213031596.jpg)

![](images/efb597f7d3bf277bc27da2e403cf361c36e087facf1afa5871a9cf6152b588ac.jpg)

![](images/e6f9874f1f1200f645133beba97bdd64cb9068783db89f69336b763dacd1da68.jpg)
Figure 2: Percentages of times the row baseline beat the column baseline in (a) suitability, (b) likability and (c) creativity. Comparisons are computed within participant. Shown are our algorithm (EBY), experts (Exp), non-experts (Non-Exp), and the non-experts' added suggestions after seeing the algorithm's outputs (Non-Exp+EBY).

Likert scores are difficult to compare among different people. Thus, we performed one more evaluation. For each person and each source word they saw, we made pairwise comparisons between every two suggestions they rated, and computed the total percentage of times one baseline beat another. The results are in Figure 2. As these comparisons are computed in the context of the same person, we believe these results reflect user preference. As in the previous evaluation, the human baselines are better than our algorithm, but it does show promise: it wins $35-40\%$ of the time compared to experts, and $27-41\%$ compared to non-experts.

Enhancing human creativity. As noted at the beginning of section 5, we let the non-experts suggest words for two minutes, then showed them EBY's output and collected more suggestions for one minute. We now wish to assess the algorithm's

![](images/23827050df5b1d600123aad1fc3b7fb86867fd9e7a1675c1dade165b61acc713.jpg)
Figure 3: Comparison of the best non-expert suggestion before and after exposure to the algorithm's outputs.
The x axis is the best non-expert suggestion score before exposure, and the y axis after exposure. Points above the diagonal indicate improvement.

potential to be a part of people's creative process.

We start by looking at the number of suggestions. The mean number of suggestions before exposure was 11.15 (std = 2.56), and the mean number of additional suggestions after exposure was 8.35 (std = 2.73). The number of additional suggestions is encouraging, as (1) the time after exposure was shorter, and (2) in preliminary trials (without the algorithm's output) we noticed that suggestions slowed down considerably after the first minute.

After comparing the additional suggestions to the algorithm's outputs, we concluded that in many cases they can be attributed to the algorithm. For example, when translating "guardhouse", participants took a rather rare root suggested by the algorithm (πρ) and combined it with a better pattern associated with places, resulting in the highest-scoring word in the combined measure: πρ(,) (zkifiyah).

Next, we compared the suggestions before and after exposure. Each point in Figure 3 represents a source word. For each suggestion, we compute its score (the percentage of positive ratings in the binary measure). The $x$ axis represents the best suggestion's score before exposure, and the $y$ axis the best non-expert suggestion's score, either before or after. Words above the diagonal are the ones whose suggestions improved. Exposure to the algorithm improved $20\%$ of the words in suitability and likability. For creativity and the combined measure, $35\%$ of the words improved.

The algorithm's outputs brought the non-experts closer to expert performance. In section 5.2 we compared non-experts to experts. After exposure to the algorithm's outputs, the non-experts' best alternative surpassed the experts' best alternative $45\%$ of the time in the combined measure (compared to $30\%$ before), and $70\%$ in creativity (compared to $55\%$ ).
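The within-participant pairwise comparison used for Figure 2 can be sketched as follows; the data layout (one dict per participant mapping a (baseline, suggestion) pair to that participant's rating) is an assumption for illustration:

```python
from itertools import combinations
from collections import defaultdict

def pairwise_wins(participants):
    """For each participant, compare every two suggestions they rated and
    count how often a suggestion from one baseline out-scores a suggestion
    from another; return win percentages per ordered (winner, loser) pair."""
    wins = defaultdict(int)
    totals = defaultdict(int)
    for ratings in participants:
        for (b1, s1), (b2, s2) in combinations(ratings, 2):
            if b1 == b2 or ratings[(b1, s1)] == ratings[(b2, s2)]:
                continue  # compare only across baselines; skip ties
            winner, loser = ((b1, b2) if ratings[(b1, s1)] > ratings[(b2, s2)]
                             else (b2, b1))
            wins[(winner, loser)] += 1
            totals[frozenset((b1, b2))] += 1
    return {pair: wins[pair] / totals[frozenset(pair)] for pair in wins}

# A single hypothetical participant's ratings for one source word.
participant = {("EBY", "w1"): 4, ("Experts", "w2"): 5, ("Non-experts", "w3"): 3}
print(pairwise_wins([participant]))
```

Because every comparison is made within one person's ratings, scale-usage differences between raters cancel out.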
Three words ( $\text{艹艹艹}$ , $\text{艹艹艹}$ , $\text{艹艹艹}$ ) surpassed the expert suggestions in all measures. Also refer to Figure 2 to see the effect in terms of pairwise comparisons. Interestingly, the added suggestions beat both the first-round suggestions and the expert suggestions in terms of creativity.

# 6 Error analysis

We analyzed the algorithm's errors to understand where it is lacking and where to focus future work. We identified two main issues.

Limited resources. In many of the cases in which our algorithm failed to generate appropriate alternatives, the failure appears to be due to a lack of resources: absent or inaccurate Hebrew translations, or missing root / pattern information. For example, consider the word "leggings". One of the important words identified was "fitting", which was inaccurately translated to "appropriate". Another word, "tight", was accurately translated to both $\overline{\mathfrak{p}}\overline{\mathfrak{n}}$ (haduk) and $\overline{\mathfrak{m}}\overline{\mathfrak{n}}$ (matuakh), but our dictionaries did not have their roots. We believe that better Hebrew resources will significantly improve our algorithm.

Connotations. Some of EBY's suggestions received low likability scores. One such word, which was highly disliked, is $\pi \pi -\pi \pi$ (sakal ze'a) for "deodorant". Literally, this is a combination of "to thwart" and "sweat". Even though the meaning is well represented here, both words have a negative connotation. Describing deodorant by the word "sweat" is not appealing, and the Hebrew word for "thwart" also carries negative connotations.

Another example is "periphery", where suggestions focused on roots with meanings of "margin" and "out". This can be offensive to people who live there. In fact, even the Hebrew Academy was unable to reach a decision for this word. After discussing suggestions based on "margin", it was taken off the agenda following public outrage ${}^{5}$ .
We believe a better understanding of connotations can help the algorithm produce more appealing results.

# 7 Related work

Lexical creativity. Lexical creativity has been the subject of many studies. Yet, these studies often focus on creative writing of longer texts, such as literature or songs. For example, Settles (2010) and Castro and Attarian (2018) focused on developing tools assisting songwriters, and Zhu et al. (2009) predicted human judgments of the creativity of sentences. As for lexical creativity work focusing on terms, it mostly explores the cognitive/psychological aspect of the generation process. For example, Costello (2002) studied the processes guiding word choice when creating noun compounds, and Kuznetsova et al. (2013) explored different factors contributing to creativity in word combinations. In contrast, we explore term generation from an algorithmic perspective by trying to mimic this process.

Computational neologism. Much previous computational work on neologisms focused on automatic recognition of neologisms and their meanings (Cook and Stevenson, 2010; Cartier, 2017; Costin-Gabriel and Rebedea, 2014; Veale and Butnariu, 2010; Kerremans and Prokić, 2018). Work on computational generation of neologisms mostly focused on creating compounds and word blends from source words (Smith et al., 2014; Deri and Knight, 2015; Gangal et al., 2017; Kulkarni and Wang, 2018; Özbal and Strapparava, 2012; Simon, 2018). Although our algorithm supports these word formations, the main focus of our work is on word generation via root and pattern combination, previously unexplored in a computational context. In addition to providing an algorithm for the generation of the neologisms themselves, we also show its potential in enhancing human creativity.

# 8 Discussion and future work

Coming up with new words (neologisms) is a hallmark of human creativity.
In this paper we proposed a system to automatically suggest neologisms, using the Hebrew language as a test case. Given a source word, the system identifies related words, roots and patterns and uses them to suggest new terms. We evaluated the system through a user study, comparing it to experts and non-experts, and showed that while humans still perform better, our algorithm is capable of generating high-quality outputs, as well as enhancing human creativity.

In the future, we plan to explore more word formation strategies, such as associations; for example, by using the EAT database (Hees et al., 2016). Another exciting avenue is researching the factors influencing the acceptance of new words by the public. A better understanding of successful neologisms, adopted by speakers of the language, can potentially help in their creation.

Beyond the somewhat-niche nature of Hebrew neologisms, we seek more broadly to inspire more work on automating and supporting creative tasks (such as authoring), especially in human-computer collaborative frameworks. We believe more NLP should be applied to tackle psychological phenomena, and that the intersection of the fields opens up many intriguing research questions.

# Acknowledgements

We thank the anonymous reviewers for their insightful comments, the Hyadata Lab at HUJI for their thoughtful remarks, the Hebrew Academy for their cooperation, and all the participants in our user studies. We would also like to especially thank Gal Vishne and Raviv Yaniv for their support and help during this project. This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant no. 852686, SIAM) and NSF-BSF grant no. 2017741.

# References

2020. The Hebrew Academy official website.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
Outi Bat-El. 1996. Selecting the best of the worst: The grammar of Hebrew blends. *Phonology*, 13(3):283–328.

Emmanuel Cartier. 2017. Neoveille, a web platform for neologism tracking. In Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 95-98.

Pablo Samuel Castro and Maria Attarian. 2018. Combining learned lyrical structures and vocabulary for improved lyric generation. arXiv preprint arXiv:1811.04651.

Morten H Christiansen and Nick Chater. 2016. Creating language: Integrating evolution, acquisition, and processing. MIT Press.

Paul Cook and Suzanne Stevenson. 2010. Automatically identifying the source words of lexical blends in English. Computational Linguistics, 36(1):129-149.

Fintan Costello. 2002. Investigating creative language: People's choice of words in the production of novel noun-noun compounds. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 24.

C. Costin-Gabriel and T. E. Rebedea. 2014. Archaisms and neologisms identification in texts. In 2014 RoEduNet Conference 13th Edition: Networking in Education and Research Joint Event RENAM 8th Conference, pages 1-6.

Aliya Deri and Kevin Knight. 2015. How to make a frenemy: Multitape FSTs for portmanteau generation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 206-210.

Abraham Even-Shoshan and M Azar. 2003. Even Shoshan dictionary. Am Oved, Kineret Zmora Bitan, Dvir and Yediot Aaronot, Tel Aviv, page 2039.

Jack Fellman. 1973. The revival of a classical tongue: Eliezer Ben Yehuda and the modern Hebrew language. 6. Walter de Gruyter.

Jack Fellman. 1974. The academy of the Hebrew language: Its history, structure and function. Linguistics, 12(120):95-104.

Varun Gangal, Harsh Jhamtani, Graham Neubig, Edward Hovy, and Eric Nyberg. 2017.
Charmanteau: Character embedding models for portmanteau creation. arXiv preprint arXiv:1707.01176.

Jörn Hees, Rouven Bauer, Joachim Folz, Damian Borth, and Andreas Dengel. 2016. Edinburgh Associative Thesaurus as RDF and DBpedia mapping. In *The Semantic Web*, pages 17–20, Cham. Springer International Publishing.

Daphné Kerremans and Jelena Prokić. 2018. Mining the web for new words: Semi-automatic neologism identification with the NeoCrawler. Anglia, 136(2):239-268.

Vivek Kulkarni and William Yang Wang. 2018. Simple models for word formation in English slang. arXiv preprint arXiv:1804.02596.

Polina Kuznetsova, Jianfu Chen, and Yejin Choi. 2013. Understanding and quantifying creativity in lexical composition. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1246-1258.

George A Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39-41.

Noam Ordan and Shuly Wintner. 2007. Hebrew WordNet: A test case of aligning lexical databases across languages. International Journal of Translation, 19(1):39-58.

Gözde Özbal and Carlo Strapparava. 2012. A computational approach to the automation of creative naming. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 703-711. Association for Computational Linguistics.

Chaim Rabin. 1963. The revival of Hebrew as a spoken language. The Journal of Educational Sociology, 36(8):388-392.

Juan Ramos et al. 2003. Using TF-IDF to determine word relevance in document queries. In Proceedings of the First Instructional Conference on Machine Learning, volume 242, pages 133-142. Piscataway, NJ.

Kathy Rooney and PH Collin. 2003. Easier English Student Dictionary: Over 32,000 Terms Clearly Defined, Upper intermediate level. Bloomsbury Publishing.

Burr Settles. 2010. Computational creativity tools for songwriters.
In Proceedings of the NAACL HLT 2010 Second Workshop on Computational Approaches to Linguistic Creativity, pages 49-57. Association for Computational Linguistics.

Avi Shmidman, Shaltiel Shmidman, Moshe Koppel, and Yoav Goldberg. 2020. Nakdan: Professional Hebrew diacritizer. arXiv preprint arXiv:2005.03312.

Jonathan A Simon. 2018. Entendrepreneur: Generating humorous portmanteaus using word-embeddings.

Michael R Smith, Ryan S Hintze, and Dan Ventura. 2014. Nehovah: A neologism creator nomen ipsum. In ICCC, pages 173-181.

Robert Speer and Catherine Havasi. 2012. Representing general relational knowledge in ConceptNet 5. In LREC, pages 3679-3686.

Milan Straka and Jana Straková. 2017. Tokenizing, POS tagging, lemmatizing and parsing UD 2.0 with UDPipe. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 88-99, Vancouver, Canada. Association for Computational Linguistics.

Tony Veale and Cristina Butnariu. 2010. Harvesting and understanding on-line neologisms. Cognitive Perspectives on Word Formation, 221:399.

Xiaojin Zhu, Zhiting Xu, and Tushar Khot. 2009. How creative is your writing? In Proceedings of the Workshop on Computational Approaches to Linguistic Creativity, pages 87-93.

# A Appendices

In these appendices we provide more implementation details for the sake of reproducibility, some qualitative evaluations of the models, and a short discussion about the choice of our metrics. We release the source, data and train-validation-test splits here.

# A.1 Implementation details: Seq2seq

For the seq2seq model described in section 3.1.3, we used the Adam optimizer, with learning rate 5e-4, hidden size 100, batch size 2, teacher forcing ratio 0.65, dropout probability 0.1 and 10 epochs. These hyperparameters were chosen based on accuracy after performing a grid search with the following hyperparameter bounds:

- Learning rate: 1e-4 to 5e-3.
- Hidden size: 10 to 150.
- Batch size: 2 to 16.
- Teacher forcing ratio: 0.5 to 0.8.

The number of epochs (10) was chosen based on early stopping.

We also tried other similar models with the same hyperparameter bounds:

- The same architecture, with a unidirectional GRU layer.
- The same architecture without attention.
- No use of character embeddings (one-hot vectors instead).
- No use of the special-case root family information.

The chosen model outperformed all the other options we tried. We trained the seq2seq model on our own laptops, without the use of a GPU.

# A.2 Implementation details: Language model

The language model we used in section 3.1.4 is a character-based n-gram model, with $n = 4$ and add-k smoothing, where $k = \frac{1}{|V|^4}$ and $|V|$ is the size of the vocabulary. We normalized the word probabilities according to their length. We chose this model since it had the lowest perplexity (4.72 on the validation set and 4.67 on the test set) compared to other n-gram models with $n$ between 2 and 6 (see Table 2). It also performed better than a one-layer GRU language model. In many cases, a language model needs to account for long dependencies between elements (e.g., words). However, this is not the case here, and it is reasonable to assume that the influence of characters within a word is limited to a small window.

The data for the training of the model was obtained from the Ben Yehuda project website,
| n | Perplexity |
| --- | --- |
| 2 | 11.41 |
| 3 | 6.0 |
| 4 | 4.72 |
| 5 | 6.37 |
| 6 | 14.64 |
Table 2: Character-based n-gram language model perplexity on the validation set for different n values.

containing the classics of Hebrew literature. We wanted the language model to represent the statistical properties of the words themselves. Thus, we cleaned the words of prefixes $(\text{役}^{\prime \prime})$ using the relevant diacritization rules. The cleaning algorithm used counts of occurrences of words starting with one of the prefix letters, before and after removal of their first letter. If the number of occurrences of the word after cleaning was higher than its number of occurrences before, the letter was removed and the relevant diacritization changes were applied. The prevalence of the definite article ה required special treatment: for words starting with ה, we applied the changes when the number of occurrences after cleaning was higher than a fifth of the occurrences before cleaning. This cleaning procedure was repeated 4 times to account for multiple prefixes (such as in $\text{役}^{\prime \prime}$ ), which should result in $\text{役}^{\prime \prime}$ .

# A.3 Qualitative evaluation of the models

When evaluating the seq2seq and language models in sections 4.2 and 4.3, we used both qualitative and quantitative evaluations. We add here some tables demonstrating their qualitative performance.

In Table 3, we show some examples of phonological rules our seq2seq model was able to learn. In Table 4, we show the top and bottom 3 generated Hebrew alternatives for the English word "allergy" according to the probabilities assigned by the language model. The table shows how existing or well-formed Hebrew words are assigned high probabilities, while words violating Hebrew phonological rules are assigned low probabilities.
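A minimal sketch of the character 4-gram language model from A.2, with add-k smoothing ( $k = 1/|V|^4$ ) and length-normalized word scores; the toy transliterated training corpus and the boundary-padding scheme are illustrative assumptions, not the paper's actual data:

```python
import math
from collections import Counter

class CharNgramLM:
    """Character n-gram LM with add-k smoothing, k = 1 / |V|**n,
    scoring words by a length-normalized probability."""
    def __init__(self, words, n=4):
        self.n = n
        self.vocab = set("".join(words)) | {"^", "$"}  # assumed boundary symbols
        self.k = 1 / len(self.vocab) ** n
        self.ngrams, self.contexts = Counter(), Counter()
        for w in words:
            padded = "^" * (n - 1) + w + "$"
            for i in range(len(padded) - n + 1):
                self.ngrams[padded[i:i + n]] += 1
                self.contexts[padded[i:i + n - 1]] += 1

    def score(self, word):
        padded = "^" * (self.n - 1) + word + "$"
        logp = 0.0
        for i in range(len(padded) - self.n + 1):
            gram = padded[i:i + self.n]
            p = ((self.ngrams[gram] + self.k) /
                 (self.contexts[gram[:-1]] + self.k * len(self.vocab)))
            logp += math.log(p)
        # Normalize by the number of n-grams so short words are not trivially favored.
        return math.exp(logp / (len(padded) - self.n + 1))

lm = CharNgramLM(["shalom", "shalva", "shamayim"])  # toy transliterated corpus
print(lm.score("shalom") > lm.score("xqzw"))  # a seen word scores higher
```

The length normalization returns a per-character geometric mean of the smoothed probabilities, making scores comparable across words of different lengths.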
Table 3: Examples showing the seq2seq model's ability to apply different rules: (a) lenition, (b) uniting repeating letters under a gemination mark, (c) diphthong to monophthong, (d) assimilation followed by gemination, (e) diacritization changes due to guttural letters. (The Hebrew input/output word pairs of this table were not preserved in extraction.)

| Rank | Word | Probability |
| --- | --- | --- |
| 1 | הַעַשׁה | 0.44 |
| 2 | הַעַשׁה | 0.40 |
| 3 | הַעַשׁה | 0.39 |
| 30 | הַעַשׁה | 0.04 |
| 31 | הַעַשׁה | 0.03 |
| 32 | הַעַשׁה | 0.01 |

Table 4: Examples of word probabilities assigned by the language model. We present the top and bottom 3 new Hebrew alternatives for the word "allergy", after sorting all of the outputs according to the language model probabilities. It is evident that the top words are well-formed, sometimes already existing, Hebrew words, while the bottom words do not fit the statistical characteristics of Hebrew words.

# A.4 Evaluation measures

As the Likert scale is an ordinal scale, on which arithmetic operations should not be conducted, in section 5.1 we defined a binary score using a cutoff for each of our measures: suitability, likability and creativity.

We chose the cutoffs based on our intuition that suitability must be high (threshold $\geq 4$ ), but likability and creativity can be more relaxed (threshold of $\geq 3$ ). Looking at the distribution of ratings reinforced this decision, as this is also the exact binarization cutoff we would have chosen to get close to $50\%$ positives. See the histogram of ratings in Figure 4: for suitability, roughly $50\%$ of the participants exceed the $\geq 4$ threshold. However, for likability and creativity to be close to $50\%$ we needed to treat 3 as a positive label as well.

![](images/3c3668aa6513c1b92926e0d6760655a4ac51980f153594f01050df6c11a51a58.jpg)
Figure 4: Histogram of ratings for each measure in the user study.

As one could argue for other reasonable thresholds, we report those results here as well. Tables 5 and 6 are computed in the same way as Table 1 in the paper. For Table 5 we use the $\geq 4$ threshold for all measures; in Table 6 we use the $\geq 3$ threshold for all measures. While the top quartile results are lower, the qualitative effect is the same, and the algorithm still has many suggestions in the top quartiles.

|  | EBY | Experts | Non-experts |
| --- | --- | --- | --- |
| Top 25% | 7.89% | 57.89% | 34.21% |
| 50-75% | 30.56% | 33.33% | 36.11% |
| 25-50% | 32.43% | 32.43% | 35.14% |
| Bottom 25% | 47.22% | 27.78% | 25% |
| Total | 29.3% | 38.1% | 32.7% |

Table 5: Distribution of words from each baseline in each quartile, where the words are sorted by the percentage of positive combined (binary) votes as in Table 1, with binarization cutoff 4 for all three measures.

|  | EBY | Experts | Non-experts |
| --- | --- | --- | --- |
| Top 25% | 5.4% | 59.46% | 35.14% |
| 50-75% | 25.64% | 35.9% | 38.46% |
| 25-50% | 40% | 25.71% | 34.29% |
| Bottom 25% | 47.22% | 30.56% | 22.22% |
| Total | 29.3% | 38.1% | 32.7% |

Table 6: Distribution of words from each baseline in each quartile, where the words are sorted by the percentage of positive combined (binary) votes as in Table 1, with binarization cutoff 3 for all three measures.
\ No newline at end of file diff --git a/comingtotermsautomaticformationofneologismsinhebrew/images.zip b/comingtotermsautomaticformationofneologismsinhebrew/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..93d4fe7305ff50f8e2130623375e6b433e23571f --- /dev/null +++ b/comingtotermsautomaticformationofneologismsinhebrew/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6bbd8fdf3d6db6bc328eede4a378fff768a98f4dc91c3ca2692f683c4d81f0a4 +size 269956 diff --git a/comingtotermsautomaticformationofneologismsinhebrew/layout.json b/comingtotermsautomaticformationofneologismsinhebrew/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..c870757b3c24e9446ef14b083c7dc31a42610e10 --- /dev/null +++ b/comingtotermsautomaticformationofneologismsinhebrew/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b81dbf0916264117c30f8b19a836d4dd93bfded1fd6f786a8ada656fb9ef9af4 +size 375320 diff --git a/commongenaconstrainedtextgenerationchallengeforgenerativecommonsensereasoning/d5a6918d-b9e9-4825-b394-f921522b3cd1_content_list.json
b/commongenaconstrainedtextgenerationchallengeforgenerativecommonsensereasoning/d5a6918d-b9e9-4825-b394-f921522b3cd1_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..c08d8251865725c86fbda74d691cf36dd4111f5b --- /dev/null +++ b/commongenaconstrainedtextgenerationchallengeforgenerativecommonsensereasoning/d5a6918d-b9e9-4825-b394-f921522b3cd1_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2265d909a981424ab2688f0a2f1955a004e2b931da2e07e2e7c3c2095814adf7 +size 121455 diff --git a/commongenaconstrainedtextgenerationchallengeforgenerativecommonsensereasoning/d5a6918d-b9e9-4825-b394-f921522b3cd1_model.json b/commongenaconstrainedtextgenerationchallengeforgenerativecommonsensereasoning/d5a6918d-b9e9-4825-b394-f921522b3cd1_model.json new file mode 100644 index 0000000000000000000000000000000000000000..15c6fcbd22077a09defeae92ce8ba5ad1148fd0b --- /dev/null +++ b/commongenaconstrainedtextgenerationchallengeforgenerativecommonsensereasoning/d5a6918d-b9e9-4825-b394-f921522b3cd1_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8dc3ed027deab1840f6a1c281adbb65af2ff5579f85124cf3eeab483e7165c58 +size 160171 diff --git a/commongenaconstrainedtextgenerationchallengeforgenerativecommonsensereasoning/d5a6918d-b9e9-4825-b394-f921522b3cd1_origin.pdf b/commongenaconstrainedtextgenerationchallengeforgenerativecommonsensereasoning/d5a6918d-b9e9-4825-b394-f921522b3cd1_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..dc5c6bbc9b9c0c07557a5df6d433ad6850709224 --- /dev/null +++ b/commongenaconstrainedtextgenerationchallengeforgenerativecommonsensereasoning/d5a6918d-b9e9-4825-b394-f921522b3cd1_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c1102662f15c7b00f5b2ac2a09611f52bc5b9ffe6f622deaffc053888ddba753 +size 2897135 diff --git a/commongenaconstrainedtextgenerationchallengeforgenerativecommonsensereasoning/full.md 
# CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning

Bill Yuchen Lin $^{\dagger}$ Wangchunshu Zhou $^{\dagger}$ Ming Shen $^{\dagger}$ Pei Zhou $^{\dagger}$ Chandra Bhagavatula $^{\ddagger}$ Yejin Choi $^{\ddagger\diamond}$ Xiang Ren $^{\dagger}$

$^{\dagger}$ University of Southern California $^{\ddagger}$ Allen Institute for Artificial Intelligence

$^{\diamond}$ Paul G. Allen School of Computer Science & Engineering, University of Washington

{yuchen.lin, xiangren}@usc.edu, {chandrab, yejinc}@allenai.org

# Abstract

Recently, large-scale pretrained language models have demonstrated impressive performance on several commonsense-reasoning benchmark datasets. However, building machines with commonsense that can compose realistically plausible sentences remains challenging. In this paper, we present a constrained text generation task, COMMONGEN, associated with a benchmark dataset, to explicitly test machines for the ability of generative commonsense reasoning. Given a set of common concepts (e.g., {dog, frisbee, catch, throw}), the task is to generate a coherent sentence describing an everyday scenario using these concepts (e.g., "a man throws a frisbee and his dog catches it").

The COMMONGEN task is challenging because it inherently requires 1) relational reasoning with background commonsense knowledge, and 2) compositional generalization ability to work on unseen concept combinations. Our dataset, constructed through a combination of crowdsourcing and existing caption corpora, consists of 77k commonsense descriptions over 35k unique concept-sets.
Experiments show that there is a large gap between state-of-the-art text generation models (e.g., T5) and human performance (31.6% vs. 63.5% in the SPICE metric). Furthermore, we demonstrate that the learned generative commonsense reasoning capability can be transferred to improve downstream tasks such as CommonsenseQA (76.9% to 78.4% in dev accuracy) by generating additional context.

# 1 Introduction

Commonsense reasoning, the ability to make acceptable and logical assumptions about ordinary scenes in our daily life, has long been acknowledged as a critical bottleneck of artificial intelligence and natural language processing (Davis and Marcus, 2015). Most recent commonsense reasoning challenges, such as CommonsenseQA (Talmor et al., 2019), SocialIQA (Sap et al., 2019b), WinoGrande (Sakaguchi et al., 2019) and HellaSwag (Zellers et al., 2019b), have been framed as discriminative tasks, i.e., AI systems are required to choose the correct option from a set of choices based on a given context. While significant progress has been made on these discriminative tasks, we argue that commonsense reasoning in text generation poses a distinct complementary challenge. In this paper, we advance machine commonsense towards generative reasoning ability.

Concept-Set: a collection of objects/actions.

dog, frisbee, catch, throw

Generative Commonsense Reasoning

Expected Output: everyday scenarios covering all given concepts.

- A dog leaps to catch a thrown frisbee.

![](images/2854cfa75c179d11e52ac9fa3aa9092339ca03278887f54a319ef6d7ab85093f.jpg)

- The dog catches the frisbee when the boy throws it.
- A man throws away his dog's favorite frisbee expecting him to catch it in the air.

GPT-2: A dog throws a frisbee at a football player.

![](images/8559c75c09bd6567993f2c172d551ae94910c1122bf3cf65e93f9d0c4aab6a38.jpg)

UniLM: Two dogs are throwing frisbees at each other.

BART: A dog throws a frisbee and a dog catches it.

T5: dog catches a frisbee and throws it to a dog

![](images/22979506bf860f3a024314444ad4071fbec498b2dc376b0ed1d05cecbad9cfcb.jpg)
Figure 1: An example of the dataset of COMMONGEN. GPT-2, UniLM, BART and T5 are large pre-trained text generation models, fine-tuned on the proposed task.

Humans acquire the ability to compose sentences by learning to understand and use common concepts that they recognize in their surrounding environment (Tincoff and Jusczyk, 1999). The acquisition of such an ability is regarded as a significant milestone of human development (Moore, 2013). Can machines acquire such generative commonsense reasoning ability? To initiate the investigation, we present $\mathrm{COMMONGEN}^{1}$, a novel constrained generation task that requires machines to generate a sentence describing a day-to-day scene using concepts from a given concept-set. For example, in Figure 1, given a set of concepts: $\{dog, frisbee, catch, throw\}$, machines are required to generate a sentence such as "a man throws a frisbee and his dog catches it in the air."

![](images/814004b706351606d3b7d13b50ab8cd841f72281a92641abb0a338af058208d0.jpg)
Figure 2: Two key challenges of COMMONGEN: relational reasoning with underlying commonsense knowledge about given concepts (left), and compositional generalization for unseen combinations of concepts (right).

![](images/e764590d699a5035bade099feb118d7625f5f9b8849d8fed54c9652bbe1dc63f.jpg)

To successfully solve the task, models need to incorporate two key capabilities: a) relational reasoning, and b) compositional generalization. Grammatically sound sentences may not always be realistic, as they might violate our commonsense (e.g., "a dog throws a frisbee ...").
In order to compose a plausible sentence that describes an everyday scenario, models need to construct a grammatical sentence while adhering to and reasoning over the commonsense relations between the given concepts. Models additionally need compositional generalization ability to infer about unseen concept compounds. This encourages models to reason about a potentially infinite number of novel combinations of familiar concepts, an ability believed to be a limitation of current AI systems (Lake and Baroni, 2017; Keysers et al., 2020).

Therefore, in support of the COMMONGEN task, we present a dataset consisting of 35,141 concept-sets associated with 77,449 sentences. We explicitly design our dataset collection process to capture the key challenges of relational reasoning and compositional generalization described above, through an actively controlled crowd-sourcing process. We establish comprehensive baseline performance for state-of-the-art language generation models with both extensive automatic evaluation and manual comparisons. The best model, based on T5 (Raffel et al., 2019), achieves only 31.60% in the SPICE metric, a significant gap compared to the human performance of 63.50%, demonstrating the difficulty of the task. Our analysis shows that state-of-the-art models struggle at the task, generating implausible sentences, e.g., "dog throws a frisbee ...", "giving massage to a table", etc. Additionally, we show that successful COMMONGEN models can benefit downstream tasks (e.g., commonsense-centric question answering) via generating useful context as background scenarios. We believe these findings point to interesting future research directions for the community of commonsense reasoning.

# 2 Task Formulation and Key Challenges

We formulate the proposed COMMONGEN task with mathematical notations and discuss its inherent challenges with concrete examples.
The input is an unordered set of $k$ concepts $x = \{c_1, c_2, \ldots, c_k\} \in \mathcal{X}$ (i.e., a concept-set), where each concept $c_i \in \mathcal{C}$ is a common object (noun) or action (verb). We use $\mathcal{X}$ to denote the space of all possible concept-sets and use $\mathcal{C}$ to denote the concept vocabulary (a subset of ConceptNet's unigram concepts). The expected output is a simple, grammatical sentence $y \in \mathcal{Y}$ that describes a common scenario in our daily life, using all given concepts in $x$ (morphological inflections are allowed). A scenario can depict either a static situation or a short series of actions. The COMMONGEN task is to learn a function $f: \mathcal{X} \to \mathcal{Y}$, which maps a concept-set $x$ to a sentence $y$. The unique challenges of this task come from two aspects:

Relational Reasoning with Commonsense. Expected generative reasoners should prioritize the most plausible scenarios over many other less realistic ones. As shown in Figure 2, models need to recall necessary relational commonsense facts that are relevant to the given concepts, and then reason about an optimal composition of them for generating a desired sentence. In order to complete a scenario, generative commonsense reasoners also need to reasonably associate additional concepts (e.g., 'woman', 'gym') as agents or background environments for completing a coherent scenario.

This not only requires understanding underlying commonsense relations between concepts, but also incrementally composing them towards a globally optimal scenario. The underlying reasoning chains are inherently based on a variety of background knowledge such as spatial relations, object properties, physical rules, temporal event knowledge, social conventions, etc. However, they may not be recorded in any existing knowledge bases.

Compositional Generalization. Humans can compose a sentence to describe a scenario about concepts that they may have never seen co-occurring.
For example, in Figure 2, there is a testing concept-set $\hat{x} = \{\text{pear}, \text{basket}, \text{pick}, \text{put}, \text{tree}\}$. The concept 'pear' never appears in the training data, and 'pick' never co-occurs with 'basket'. We humans can generalize from the seen scenarios in the training data and infer a plausible output: $\hat{y} =$ "a girl picks some pears from a tree and puts them into her basket." This compositional generalization ability via analogy, i.e., to make "infinite use of finite means" (Chomsky, 1965), is challenging for machines. This analogical challenge not only requires inference about similar concepts (e.g., 'apple' $\rightarrow$ 'pear') but also their latent associations.

# 3 Dataset Construction and Analysis

Figure 3 illustrates the overall workflow of our data construction for the proposed COMMONGEN task. We utilize several existing caption corpora for sampling frequent concept-sets (Sec. 3.1) that reflect common scenarios. We employ AMT crowd workers to collect human-written sentences (Sec. 3.2) for the development and test sets, while carefully monitoring the quality of crowd workers and refining the data dynamically. Finally, we present the statistics of the COMMONGEN dataset and an analysis of its challenges (Sec. 3.4).

# 3.1 Collecting Concept-Sets from Captions

It can be unreasonable to present an arbitrary set of concepts (e.g., $x = \{\text{apple}, \text{fold}, \text{rope}\}$) and ask a reasoner to generate a commonsense scenario, since such an arbitrary set of concepts can be too unrelated. Therefore, our concept-sets are supposed to reflect reasonable concept co-occurrences
As web images and video clips capture diverse everyday scenarios, we use their caption text as a natural resource for collecting concept-sets and their corresponding descriptions of commonsense scenarios. More specifically, we collect visually-grounded sentences from several existing caption datasets, including image captioning datasets, such as Flickr30k (Young et al., 2014), MSCOCO (Lin et al., 2014), Conceptual Captions (Sharma et al., 2018), as well as video captioning datasets including LSMDC (Rohrbach et al., 2017), ActivityNet (Krishna et al., 2017), and VATEX (Wang et al., 2019b). + +We first conduct part-of-speech tagging over all sentences in the corpora such that words in sentences can be matched to the concept vocabulary of ConceptNet. Then, we compute the sentence frequency of concept-sets consisting of $3 \sim 5$ concepts. That is, for each combination of three/four/five concepts in the vocabulary, we know how many sentences are in the corpora covering all concepts. + +Ideally, we want the selected concept-sets in our dataset to reflect the natural distribution of concept-sets in the real world. At first glance, a reasonable solution may seem to sample from the distribution of the concept-sets based on their frequencies in the source datasets. However, we find that this method leads to a rather unnaturally skewed collection of concept-sets, due to the inherent data biases from the source datasets. We therefore design a function to score a concept-set $x$ based on scene diversity and inverse frequency penalty. We denote $S(x)$ as the set of unique sentences that contain all given concepts $\{c_1, c_2, \ldots, c_k\}$ , and then we have + +$$ +\mathrm {s c o r e} (x) = | S (x) | \frac {| \bigcup_ {s _ {i} \in S (x)} \{w | w \in s _ {i} \} |}{\sum_ {s _ {i} \in S (x)} \mathrm {l e n} (s _ {i})} \rho (x), +$$ + +where $\rho(x) = \frac{|\mathcal{X}|}{\max_{c_i \in x} |\{x' | c_i \in x' \text{ and } x' \in \mathcal{X}\}|}$ . 
The first term in $\mathrm{score}(x)$ is the number of unique sentences covering all given concepts in $x$, and the second term represents the diversity of the scenes described in these sentences. The last term $\rho(x)$ is the inverse frequency penalty. Specifically, we find the concept in $x$ that has the maximum "set frequency" (i.e., the number of unique concept-sets containing a particular concept), and then take its inverse, normalized by the number of all concept-sets. This penalty based on inverse set-frequency effectively controls the bias towards highly frequent concepts. With the distribution of such scores over concept-sets, we sample our candidate examples for the next steps.

| Statistics | Train | Dev | Test |
| --- | --- | --- | --- |
| # Concept-Sets | 32,651 | 993 | 1,497 |
| - Size = 3 | 25,020 | 493 | - |
| - Size = 4 | 4,240 | 250 | 747 |
| - Size = 5 | 3,391 | 250 | 750 |
| # Sentences | 67,389 | 4,018 | 6,042 |
| # Sentences per Concept-Set | 2.06 | 4.04 | 4.04 |
| Average Sentence Length | 10.54 | 11.55 | 13.34 |
| # Unique Concepts | 4,697 | 766 | 1,248 |
| # Unique Concept-Pairs | 59,125 | 3,926 | 8,777 |
| # Unique Concept-Triples | 50,713 | 3,766 | 9,920 |
| % Unseen Concepts | - | 6.53% | 8.97% |
| % Unseen Concept-Pairs | - | 96.31% | 100.00% |
| % Unseen Concept-Triples | - | 99.60% | 100.00% |

Table 1: The basic statistics of the COMMONGEN data. We highlight the ratios of concept compositions that are unseen in training data, which assures the challenge in compositional generalization ability.

# 3.2 Crowd-Sourcing References via AMT

In order to ensure the best quality, the references of the evaluation examples are crowdsourced from workers on Amazon Mechanical Turk, which amounts to 10,060 references over 2.5k distinct concept-sets. Note that these newly collected references for the dev and test examples ensure a fair comparison targeting generalization, considering potential data leaks (i.e., recent pre-trained language models might have seen the caption datasets). Each concept-set was assigned to at least 3 workers. In addition to references for the given concept-sets, we also ask the workers to provide rationale sentences explaining what commonsense facts they used, ensuring that the described scenarios are common in daily life (example rationales are shown in Fig. 9).
We control the quality by actively filtering out workers who produced low-quality references, removing their annotations, and finally re-opening the slots only for quality workers. There were 1,492 accepted workers in total and 171 disqualified workers in the end after the active filtering. We use three criteria to efficiently narrow down candidates for further manual removal of low-quality workers: 1) coverage, via part-of-speech tagging, 2) especially high perplexity, via GPT-2, and 3) length of the rationales. Meanwhile, we also dynamically replaced the concept-sets for which the majority of references do not make sense, to ensure the final quality.

![](images/68b52597aff684cbbb32e1320393f1ef48577147ee4496fa876c8b223b2f7d2e.jpg)
Figure 4: Connectivity analysis of 5-size concept-sets in the test set, each of which consists of 10 concept pairs. For example, 12.0 in blue means: 12% of concept-sets have 3 concept pairs with one-hop connections on ConceptNet.

# 3.3 Down-Sampling Training Examples

In order to evaluate the compositional generalization ability, we down-sample the remaining candidate concept-sets to construct a distantly supervised training dataset (i.e., using caption sentences as the human references). We explicitly control the overlap of the concept-sets between training examples and dev and test examples. The basic statistics of the final dataset are shown in Table 1. There are on average four sentences for each example in the dev and test sets, which provides a richer and more diverse test-bed for automatic and manual evaluation. Table 1 also shows the ratio of unseen concept compositions (i.e., concept, concept-pair, and concept-triple) in the dev and test sets. Notably, all pairs of concepts in every test concept-set are unseen in training data and thus pose a challenge for compositional generalization.
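The unseen-composition statistics in Table 1 (e.g., % Unseen Concept-Pairs) amount to checking which concept pairs in dev/test sets never co-occur inside a training concept-set. A minimal illustrative sketch (names are ours, not the authors' code):

```python
from itertools import combinations

def unseen_pair_ratio(train_sets, eval_sets):
    """Fraction of concept pairs in eval concept-sets that never
    co-occur inside any training concept-set."""
    seen = {frozenset(p) for x in train_sets for p in combinations(sorted(x), 2)}
    pairs = {frozenset(p) for x in eval_sets for p in combinations(sorted(x), 2)}
    if not pairs:
        return 0.0
    return sum(1 for p in pairs if p not in seen) / len(pairs)
```

For example, with `train_sets = [{"dog", "frisbee", "catch"}]` and `eval_sets = [{"dog", "ball"}, {"dog", "frisbee"}]`, only the pair (dog, ball) is unseen, giving a ratio of 0.5.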
# 3.4 Analysis of Underlying Common Sense

Here we introduce a deeper analysis of the dataset by utilizing the largest commonsense knowledge graph (KG), ConceptNet (Speer et al., 2017), as a tool to study connectivity and relation types.

Connectivity Distribution. If the concepts inside a given concept-set are more densely connected with each other on the KG, then it is likely to be easier to write a scenario about them. In each 5-size concept-set (i.e., a concept-set consisting of five concepts), there are 10 unique pairs of concepts, whose connections we are interested in. As shown in Figure 4, if we look at the one-hop links on the KG, about 60% of the 5-size concept-sets have less than one link among all concept-pairs. On the other hand, if we consider two-hop links, then nearly 50% of them are almost fully connected (i.e., each pair of concepts has a connection). These two observations together suggest that COMMONGEN has a reasonable difficulty: the concepts are neither too distant nor too close, and thus the inputs are neither too difficult nor too trivial.

Relation Distribution. Furthermore, the relation types of such connections can also tell us what kinds of commonsense knowledge are potentially useful for relational reasoning towards generation. We report the frequency of the different relation types$^2$ of the one/two-hop connections among concept-pairs in the dev and test examples in Fig. 8. To better summarize the distributions, we categorize these relations into five major types and present their distributions in Table 2, respectively for one/two-hop connections between concept pairs.

| Category | Relations | 1-hop | 2-hop |
| --- | --- | --- | --- |
| Spatial knowledge | AtLocation, LocatedNear | 9.40% | 39.31% |
| Object properties | UsedFor, CapableOf, PartOf, ReceivesAction, MadeOf, FormOf, HasProperty, HasA | 9.60% | 44.04% |
| Human behaviors | CausesDesire, MotivatedBy, Desires, NotDesires, Manner | 4.60% | 19.59% |
| Temporal knowledge | Subevent, Prerequisite, First/Last-Subevent | 1.50% | 24.03% |
| General | RelatedTo, Synonym, DistinctFrom, IsA, HasContext, SimilarTo | 74.89% | 69.65% |

Table 2: The distributions of the relation categories on one/two-hop connections.

# 4 Methods

We briefly introduce the baseline methods that are tested on the COMMONGEN task.

Encoder-Decoder Models. Bidirectional RNNs and Transformers (Vaswani et al., 2017) are the two most popular architectures for seq2seq learning. We use them with the addition of an attention mechanism (Luong et al., 2015) with copying ability (Gu et al., 2016), based on the open-source framework OpenNMT-py (Klein et al., 2017). We use bRNN-CopyNet and Trans-CopyNet to denote them, respectively.
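The connectivity analysis above amounts to counting, for the 10 pairs in each 5-size concept-set, how many are directly linked in the KG. A minimal sketch, assuming an undirected toy edge set stands in for ConceptNet (function name is ours):

```python
from itertools import combinations

def connected_pair_count(concept_set, kg_edges):
    """Return (# one-hop-connected pairs, # total pairs) for a concept-set.

    kg_edges: set of frozenset({a, b}) links, a toy stand-in for
    undirected one-hop edges on a commonsense KG such as ConceptNet.
    """
    pairs = [frozenset(p) for p in combinations(sorted(concept_set), 2)]
    return sum(1 for p in pairs if p in kg_edges), len(pairs)
```

Aggregating these counts over all test concept-sets yields the distribution plotted in Figure 4; the two-hop variant would instead check whether the two concepts share a common KG neighbor.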
To alleviate the influence of concept ordering in such sequential learning methods, we randomly permute the concepts multiple times for training and decoding and then average the performance. To explicitly eliminate the order-sensitivity of inputs, we replace the encoder with a mean pooling-based MLP network (MeanPooling-CopyNet).

Non-autoregressive generation. Recent advances (Lee et al., 2018; Stern et al., 2019) in conditional sentence generation have created an emerging interest in (edit-based) non-autoregressive generation models, which iteratively refine generated sequences. We assume that these models would potentially perform better because of their explicit modeling of iterative refinements, and thus study the most recent such model, Levenshtein Transformer (LevenTrans), by Gu et al. (2019). We also include a recent enhanced version, ConstLeven (Susanto et al., 2020), which incorporates lexical constraints in LevenTrans.

Pre-trained Language Generation Models. We also employ various pre-trained language generation models, including GPT-2 (Radford et al., 2019), UniLM (Dong et al., 2019), UniLM-v2 (Bao et al., 2020), BERT-Gen (Bao et al., 2020), BART (Lewis et al., 2019), and T5 (Raffel et al., 2019), to tackle this task and test their generative commonsense reasoning ability. We fine-tune all the above models on our training data in a seq2seq format.

Specifically, to use GPT-2 for this sequence-to-sequence task, we condition the language model on the format $c_{1}c_{2}\ldots c_{k} = y$ during fine-tuning, where $c_{i}$ is a concept in the given concept-set, separated from the other concepts by blanks, and $y$ is a target sentence. For inference, we sample from the fine-tuned GPT-2 model after a prompt of $c_{1}c_{2}\ldots c_{k} =$ with beam search and use the first generated sentence as the output sentence.
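The input formats described above can be sketched as simple string templates. This is a sketch of the described formats only; the exact whitespace and tokenization in the authors' code may differ:

```python
def gpt2_format(concepts, target=None):
    """GPT-2 LM format: 'c1 c2 ... ck = y' for fine-tuning,
    or the prompt 'c1 c2 ... ck =' for inference."""
    prompt = " ".join(concepts) + " ="
    return prompt if target is None else prompt + " " + target

def t5_format(concepts):
    """T5 text-to-text source with the task-description prefix."""
    return "generate a sentence with: " + " ".join(concepts) + "."
```

For example, `gpt2_format(["dog", "frisbee", "catch", "throw"])` yields the inference prompt `"dog frisbee catch throw ="`, while `t5_format` produces `"generate a sentence with: dog frisbee catch throw."`.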
For BERT-Gen, we use the s2s-ft package to fine-tune it in a sequence-to-sequence fashion that is similar to the LM objective employed by UniLM.

As for T5, the state-of-the-art text-to-text pre-trained model, which is pre-trained with a multi-task objective by prepending a task description before the input text, we prefix the input concept-set with a simple prompt and fine-tune the model with source sequences of the format "generate a sentence with: $c_{1}$ $c_{2}$ ... $c_{k}$." For decoding, we employ standard beam search with a beam size of 5 for all compared models. We also report their results with a lexically-constrained decoding method, dynamic beam allocation (DBA) (Post and Vilar, 2018), which does not show improvement over conventional beam search.

| Model \ Metrics | ROUGE-2/L | BLEU-3/4 | METEOR | CIDEr | SPICE | Coverage |
| --- | --- | --- | --- | --- | --- | --- |
| bRNN-CopyNet (Gu et al., 2016) | 7.61 / 27.79 | 10.70 / 5.70 | 15.80 | 4.79 | 15.00 | 51.15 |
| Trans-CopyNet | 8.78 / 28.08 | 11.90 / 7.10 | 15.50 | 4.61 | 14.60 | 49.06 |
| MeanPooling-CopyNet | 9.66 / 31.14 | 10.70 / 6.10 | 16.40 | 5.06 | 17.20 | 55.70 |
| LevenTrans. (Gu et al., 2019) | 10.58 / 32.23 | 19.70 / 11.60 | 20.10 | 7.54 | 19.00 | 63.81 |
| ConstLeven. (Susanto et al., 2020) | 11.82 / 33.04 | 18.90 / 10.10 | 24.20 | 10.51 | 22.20 | 94.51 |
| GPT-2 (Radford et al., 2019) | 17.18 / 39.28 | 30.70 / 21.10 | 26.20 | 12.15 | 25.90 | 79.09 |
| BERT-Gen (Bao et al., 2020) | 18.05 / 40.49 | 30.40 / 21.10 | 27.30 | 12.49 | 27.30 | 86.06 |
| UniLM (Dong et al., 2019) | 21.48 / 43.87 | 38.30 / 27.70 | 29.70 | 14.85 | 30.20 | 89.19 |
| UniLM-v2 (Bao et al., 2020) | 18.24 / 40.62 | 31.30 / 22.10 | 28.10 | 13.10 | 28.10 | 89.13 |
| BART (Lewis et al., 2019) | 22.23 / 41.98 | 36.30 / 26.30 | 30.90 | 13.92 | 30.60 | 97.35 |
| T5-Base (Raffel et al., 2019) | 14.57 / 34.55 | 26.00 / 16.40 | 23.00 | 9.16 | 22.00 | 76.67 |
| T5-Large (Raffel et al., 2019) | 22.01 / 42.97 | 39.00 / 28.60 | 30.10 | 14.96 | 31.60 | 95.29 |
| Human Performance (Upper Bound) | 48.88 / 63.79 | 48.20 / 44.90 | 36.20 | 43.53 | 63.50 | 99.31 |

Table 3: Experimental results of different baseline methods on the COMMONGEN test set. The first group of models are non-pretrained models, while the second group are large pretrained models that we have fine-tuned. The best models are bold and second best ones are underlined within each metric. We highlight the metrics that we use in our official leaderboard. (Results on the dev set are in Table 7.)

# 5 Evaluation

We first introduce the automatic evaluation metrics, then present the main experimental results with manual analysis, and finally introduce a potential application of transferring CommonGen-trained models to other downstream tasks.

# 5.1 Metrics

Following other conventional generation tasks, we use several widely-used automatic metrics to assess performance, such as BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and METEOR (Banerjee and Lavie, 2005), which mainly focus on measuring surface similarities. We also report the concept Coverage, which is the average percentage of input concepts that are present in the lemmatized outputs.

In addition, we argue that it is more suitable to use evaluation metrics specially designed for the captioning task, such as CIDEr (Vedantam et al., 2015) and SPICE (Anderson et al., 2016). They usually assume that system generations and human references use similar concepts, and thus focus on evaluating the associations between mentioned concepts instead of n-gram overlap. For example, the SPICE metric uses dependency parse trees as a proxy of scene graphs to measure the similarity of scenarios.$^{5}$

To estimate human performance within each metric, we treat each reference sentence in the dev/test data as a "system prediction" to be compared with all other references, which is equivalent to computing inter-annotator agreement within each metric. Thus, systems that have better generative ability than average crowd-workers should exceed this.

# 5.2 Experimental Results

Automatic Evaluation. Table 3 presents the experimental results on a variety of metrics. We can see that all fine-tuned pre-trained models (the lower group) outperform non-pretrained models (the upper group) by a significant margin. This is not surprising because their pretraining objectives, including masked language modeling, word ordering, and text infilling which predicts missing words or text spans, are relevant to our task. On the other hand, we find that the key disadvantage of non-pretrained models with CopyNet still falls in the
failure of using all given concepts (i.e., low coverage), which results in worse results.

Among them, UniLM, BART, and T5 perform the best, which may be due to their inherent sequence-to-sequence pre-training frameworks. We find that BART has the best concept coverage, which is probably due to its comprehensive pre-training tasks that aim to recover text with noise. The results suggest that further modifying pre-trained models is a promising direction for generative commonsense.

| Hit rate | ConstLeven | GPT-2 | BERT-Gen | UniLM | BART | T5 |
| --- | --- | --- | --- | --- | --- | --- |
| Hit@1 | 3.2 | 21.5 | 22.3 | 21.0 | 26.3 | 26.8 |
| Hit@3 | 18.2 | 63.0 | 59.5 | 69.0 | 69.0 | 70.3 |
| Hit@5 | 51.4 | 95.5 | 95.3 | 96.8 | 96.3 | 97.8 |

Table 4: Manual evaluation via pair-wise comparisons for ranking. Numbers are hit rates (%) at top 1/3/5.

Concept-Set: {hand, sink, wash, soap}

[bRNN-CopyNet]: a hand works in the sink.

[MeanPooling-CopyNet]: the hand of a sink being washed up

[ConstLeven]: a hand strikes a sink to wash from his soap.

[GPT-2]: hands washing soap on the sink.

[BERT-Gen]: a woman washes her hands with a sink of soaps.

[UniLM]: hands washing soap in the sink

[BART]: a man is washing his hands in a sink with soap and washing them with hand soap.

[T5]: hand washed with soap in a sink.

Human references (collected from AMT):

1. A girl is washing her hands with soap in the bathroom sink.
2. I will wash each hand thoroughly with soap while at the sink.
3. The child washed his hands in the sink with soap.
4. A woman washes her hands with hand soap in a sink.
5. The girl uses soap to wash her hands at the sink.

![](images/a61b10f1c2541b84ad558f1fd60b7342150b2e2b95462a6c8e2ebd3169c55df9.jpg)
Figure 5: A case study with a concept-set {hand, sink, wash, soap} for qualitative analysis of machine generations. Human references are collected from AMT.

Manual Evaluation. We conduct a manual evaluation with a focus on commonsense plausibility, comparing the 6 best-performing models in Table 4. We ask five graduate students to compare 1,500 pairs of model-generated sentences each, ranking the models within 100 concept-sets that are covered by all the models.
The final average ranked results are shown in Table 4, and the inter-annotator agreement is 0.85 in Kendall's rank correlation coefficient.

Note that the coverage-weighted hit@1 rate correlates with the SPICE metric the most, i.e., 0.94 in Spearman's $\rho$ for model ranks, while METEOR and ROUGE-2 are both at 0.88 and BLEU-4 is at 0.78.

Case study. Fig. 5 shows the top generations of different models and human references for an input concept-set: {hand, sink, wash, soap} (more cases are shown in Fig. 9 in the appendix). We find that non-pretrained seq2seq models (e.g., bRNN, MeanPooling, ConstLeven) can successfully use part of the given concepts, while the generated sentences are less meaningful and coherent. On the contrary, the outputs of fine-tuned pre-trained language models are significantly more commonsensical. Most of them use all given concepts in their outputs. ConstLeven tends to make use of frequent patterns to compose a nonsense sentence but uses all concepts. GPT-2 and UniLM incorrectly compose the dependency among hand, wash, and soap. The phrase 'a sink of soaps' in BERT-Gen's output makes it less common. BART and T5 generate relatively reasonable scenarios, but both are not as natural as human references; BART's contains repetitive content while T5's lacks a human agent.

![](images/786a3eecb4c27f14a47bb979a3a6607b2e6271c416e1323b3d6ecd5ff0f5e0f8.jpg)
Figure 6: Learning curve for the transferring study. We use several trained COMMONGEN (CG) models to generate choice-specific context for the CSQA task. Detailed numbers are shown in Tab. 8 in the appendix.

Influence of Dynamic Beam Allocation. Considering that all tested models decode sentences with beam search, one may wonder what happens if we use a decoding method specially designed for constrained decoding. Thus, we employed dynamic beam allocation (DBA) (Post and Vilar, 2018). The results are shown in Table 5.
Note that the models are the same as in Table 3; only the decoding method is changed to DBA. We can see that all methods are negatively impacted by this decoding method. This suggests that for the COMMONGEN task and pre-trained language models, we may need to focus on knowledge-based decoding
or re-ranking as future directions.

| Model \ Metrics | ROUGE-2/L | BLEU-3/4 | METEOR | CIDEr | SPICE | Coverage |
| --- | --- | --- | --- | --- | --- | --- |
| T5-Large + DBA | 16.80 / 36.71 | 27.30 / 18.70 | 25.30 | 8.62 | 24.30 | 83.98 |
| T5-Base + DBA | 15.07 / 34.82 | 24.80 / 16.00 | 23.50 | 9.31 | 21.30 | 76.81 |
| GPT-2 + DBA | 17.56 / 39.45 | 29.40 / 20.60 | 24.90 | 10.85 | 26.80 | 79.51 |
| BART + DBA | 18.15 / 37.02 | 28.30 / 19.10 | 25.50 | 9.82 | 25.10 | 84.78 |

Table 5: Experimental results of models with the DBA decoding method on the test set.

# 5.3 Transferring CommonGen Models

One may wonder how fine-tuned COMMONGEN models can benefit commonsense-centric downstream tasks, such as Commonsense Question Answering (Talmor et al., 2019) (CSQA), with their generative commonsense reasoning ability. To this end, we use the models trained on the COMMONGEN dataset to generate useful context.

We extract the nouns and verbs in the question and all choices respectively, and combine the concepts of the question $q$ and each choice $c_{i}$ to build five concept-sets. Then, we use these concept-sets as inputs to a trained COMMONGEN model (e.g., T5) to generate a scenario sentence $g_{i}$ for each choice as choice-specific context. Finally, we prepend the outputs in front of the questions, i.e., "<s> G: $g_{i}$ | Q: $q$ | C: $c_{i}$ <s>". Note that the state-of-the-art RoBERTa-based models for CSQA use the same form without "G: $g_{i}$" in fine-tuning.

We show the learning-efficiency curve in Fig. 6, where $y$ is the accuracy on the official dev set and $x$ is the number of training steps. The details of the experiments are shown in the appendix.

We highlight the performance of the original RoBERTa-Large as the baseline. We find that some CommonGen models further improve the performance by a large margin, e.g., from 76.9% to 78.4% with UniLM-generated contexts, and they converge at better accuracy in the end. Note that BERT-Gen and ConstLeven cause negative transfer due to the low quality of the generated contexts. In particular, we find that the context generated by the T5-based CommonGen model (CG-T5) helps speed up training by about 2 times, comparing the 550th step of CG-T5 (74.85%) with the 1,250th step of the original RoBERTa (74.77%).
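The context-construction step for CSQA can be sketched as follows. The `generate` callable stands in for a fine-tuned COMMONGEN model (e.g., CG-T5), the separator format follows the paper's description as we read it from the (garbled) extraction, and the helper name `build_csqa_inputs` is ours:

```python
def build_csqa_inputs(question, question_concepts, choices, generate):
    """Build choice-specific CSQA inputs with generated scenario contexts.

    question_concepts: nouns/verbs extracted from the question.
    choices: dict mapping each choice text to its extracted concepts.
    generate: callable mapping a concept list to a scenario sentence.
    """
    inputs = {}
    for choice, choice_concepts in choices.items():
        # One concept-set per (question, choice) pair.
        concept_set = list(question_concepts) + list(choice_concepts)
        g = generate(concept_set)  # choice-specific context g_i
        # Prepend the generated scenario before the question and choice.
        inputs[choice] = f"<s> G: {g} | Q: {question} | C: {choice} <s>"
    return inputs
```

In practice the resulting strings are fed to the RoBERTa-based CSQA classifier in place of its usual question-choice inputs.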
+ +Through manual analysis, we find that the successful COMMONGEN models can generate more reasonable and natural sentences for correct choices, but noisy sentences for wrong choices. For example, with CG (T5), $q =$ "What do people aim to do at work?", $c_{i} =$ "complete job" (✓) with $g_{i} =$ "people work to complete a job aimed at achieving a certain goal"; $c_{j} = \underline{\text{wear hats}}$ (✗) with $g_{j} =$ "people wearing hats aim their guns at each other while working on a construction site." The question concepts and choice concepts used are underlined. + +# 6 Related Work + +Commonsense benchmark datasets. There are many emerging datasets for testing machine commonsense from different angles, such as commonsense extraction (Xu et al., 2018; Li et al., 2016), next situation prediction (SWAG (Zellers et al., 2018), CODAH (Chen et al., 2019), HellaSWAG (Zellers et al., 2019b)), cultural and social understanding (Lin et al., 2018; Sap et al., 2019a,b), visual scene comprehension (Zellers et al., 2019a), and general commonsense question answering (Talmor et al., 2019; Huang et al., 2019; Wang et al., 2019a, 2020). However, the success of fine-tuning pre-trained language models for these tasks does not necessarily mean machines can produce novel assumptions in a more open, realistic, generative setting. We see COMMONGEN as a novel, complementary commonsense reasoning benchmark task for advancing machine commonsense in NLG. + +Constrained Text Generation. Constrained text generation aims to decode sentences with expected attributes such as sentiment (Luo et al., 2019a; Hu et al., 2017), tense (Hu et al., 2017), template (Zhu et al., 2019; J Kurisinkel and Chen, 2019), style (Fu et al., 2018; Luo et al., 2019b; Li et al., 2018), topics (Feng et al., 2018), etc.
Two scenarios related to our task are lexically constrained decoding and word ordering (Zhang and Clark, 2015; Hasler et al., 2018; Dinu et al., 2019; Hokamp and Liu, 2017; Puduppully et al., 2017; Miao et al., 2019). However, they are not easily adapted to the recent pre-trained language models and thus not directly useful for our task. Topical story generation (Fan et al., 2018; Yao et al., 2019) is also a related direction, but it targets generating longer, creative stories around given topics, making it hard to adapt such methods directly to our task. Additionally, the COMMONGEN task brings some more challenges, mentioned in Section 2. Prior constrained generation methods cannot address these issues together in a unified model. + +Incorporating Commonsense for NLG. There are a few recent works that incorporate commonsense knowledge in language generation tasks such as essay generation (Guan et al., 2019; Yang et al., 2019a), image captioning (Lu et al., 2018), video storytelling (Yang et al., 2019b), and conversational systems (Zhang et al., 2020a). These works suggest that generative commonsense reasoning has great potential to benefit downstream applications. Our proposed COMMONGEN, to the best of our knowledge, is the very first constrained sentence generation dataset for assessing and conferring generative machine commonsense, and we hope it can benefit such applications. Our transferring study in Sec. 5.3 also shows the potential benefits of CommonGen-generated contexts. + +# 7 Conclusion + +Our major contributions in this paper are threefold: + +- we present COMMONGEN, a novel constrained text generation task for generative commonsense reasoning, with a large dataset; +- we carefully analyze the inherent challenges of the proposed task, i.e., a) relational reasoning with latent commonsense knowledge, and b) compositional generalization.
+ +- our extensive experiments systematically examine recent pre-trained language generation models (e.g., UniLM, BART, T5) on the task, and find that their performance is still far from humans, generating grammatically sound yet realistically implausible sentences. + +Our study points to interesting future research directions on modeling commonsense knowledge in the language generation process, towards conferring machines with generative commonsense reasoning ability. We hope COMMONGEN will also benefit downstream NLG applications such as conversational systems and storytelling models. + +# Acknowledgements + +This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract No. 2019-19051600007, the DARPA MCS program under Contract No. N660011924033 with the United States Office of Naval Research, the Defense Advanced Research Projects Agency with award W911NF-19-20271, and NSF SMA 18-29268. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. We would like to thank all the collaborators in the USC INK research lab for their constructive feedback on the work. + +# References + +Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016. Spice: Semantic propositional image caption evaluation. In European Conference on Computer Vision, pages 382-398. Springer. +Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72, Ann Arbor, Michigan. Association for Computational Linguistics.
+Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiulei Liu, Yu Wang, Songhao Piao, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2020. Unilmv2: Pseudo-masked language models for unified language model pre-training. arXiv: Computation and Language. +Michael Chen, Mike D'Arcy, Alisa Liu, Jared Fernandez, and Doug Downey. 2019. Codah: An adversarially authored question-answer dataset for common sense. ArXiv, abs/1904.04365. +Noam Chomsky. 1965. Aspects of the theory of syntax. +Ernest Davis and Gary Marcus. 2015. Commonsense reasoning and commonsense knowledge in artificial intelligence. Commun. ACM, 58:92-103. +Georgiana Dinu, Prashant Mathur, Marcello Federico, and Yaser Al-Onaizan. 2019. Training neural machine translation to apply terminology constraints. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3063-3068, Florence, Italy. Association for Computational Linguistics. +Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Advances in Neural Information Processing Systems, pages 13042-13054. + +Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898, Melbourne, Australia. Association for Computational Linguistics. +Xiaocheng Feng, Ming Liu, Jiahao Liu, Bing Qin, Yibo Sun, and Ting Liu. 2018. Topic-to-essay generation with neural networks. In *IJCAI*, pages 4078–4084. +Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In Thirty-Second AAAI Conference on Artificial Intelligence. +Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631-1640, Berlin, Germany. Association for Computational Linguistics. +Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019. Levenshtein transformer. In Advances in Neural Information Processing Systems, pages 11179-11189. +Jian Guan, Yansen Wang, and Minlie Huang. 2019. Story ending generation with incremental encoding and commonsense knowledge. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6473-6480. +Eva Hasler, Adrià de Gispert, Gonzalo Iglesias, and Bill Byrne. 2018. Neural machine translation decoding with terminology constraints. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 506-512, New Orleans, Louisiana. Association for Computational Linguistics. +Chris Hokamp and Qun Liu. 2017. Lexically constrained decoding for sequence generation using grid beam search. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1535-1546, Vancouver, Canada. Association for Computational Linguistics. +Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1587-1596. JMLR.org. +Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos QA: Machine reading comprehension with contextual commonsense reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2391-2401, Hong Kong, China. Association for Computational Linguistics. + +Litton J Kurisinkel and Nancy Chen. 2019. 
Set to ordered text: Generating discharge instructions from medical billing codes. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6165-6175, Hong Kong, China. Association for Computational Linguistics. +Daniel Keysers, Nathanael Scharli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, Dmitry Tsarkov, Xiao Wang, Marc van Zee, and Olivier Bousquet. 2020. Measuring compositional generalization: A comprehensive method on realistic data. In International Conference on Learning Representations. +Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada. Association for Computational Linguistics. +Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. 2017. Dense-captioning events in videos. In Proceedings of the IEEE international conference on computer vision, pages 706-715. +Brenden M Lake and Marco Baroni. 2017. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In . +Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1173-1182, Brussels, Belgium. Association for Computational Linguistics. +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. ArXiv, abs/1910.13461. +Juncen Li, Robin Jia, He He, and Percy Liang. 2018. 
Delete, retrieve, generate: a simple approach to sentiment and style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1865-1874, New Orleans, Louisiana. Association for Computational Linguistics. +Xiang Li, Aynaz Taheri, Lifu Tu, and Kevin Gimpel. 2016. Commonsense knowledge base completion. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1445-1455, Berlin, Germany. Association for Computational Linguistics. + +Bill Yuchen Lin, Frank F. Xu, Kenny Zhu, and Seungwon Hwang. 2018. Mining cross-cultural differences and similarities in social media. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 709-719, Melbourne, Australia. Association for Computational Linguistics. +Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics. +Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer. +Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2018. Neural baby talk. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 7219-7228. IEEE Computer Society. +Fuli Luo, Peng Li, Pengcheng Yang, Jie Zhou, Yutong Tan, Baobao Chang, Zhifang Sui, and Xu Sun. 2019a. Towards fine-grained text sentiment transfer. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2013-2022, Florence, Italy. Association for Computational Linguistics. 
+Fuli Luo, Peng Li, Jie Zhou, Pengcheng Yang, Baobao Chang, Zhifang Sui, and Xu Sun. 2019b. A dual reinforcement learning framework for unsupervised text style transfer. arXiv preprint arXiv:1905.10060. +Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics. +Ning Miao, Hao Zhou, Lili Mou, Rui Yan, and Lei Li. 2019. Cgmh: Constrained sentence generation by metropolis-hastings sampling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6834-6842. +Chris Moore. 2013. The development of commonsense psychology. Psychology Press. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. + +Matt Post and David Vilar. 2018. Fast lexically constrained decoding with dynamic beam allocation for neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1314-1324, New Orleans, Louisiana. Association for Computational Linguistics. +Ratish Puduppully, Yue Zhang, and Manish Shrivastava. 2017. Transition-based deep input linearization. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 643-654, Valencia, Spain. Association for Computational Linguistics. +Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. 
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. +Anna Rohrbach, Atousa Torabi, Marcus Rohrbach, Niket Tandon, Christopher Pal, Hugo Larochelle, Aaron Courville, and Bernt Schiele. 2017. Movie description. International Journal of Computer Vision, 123(1):94-120. +Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Winogrande: An adversarial winograd schema challenge at scale. *ArXiv*, abs/1907.10641. +Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019a. Atomic: An atlas of machine commonsense for if-then reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3027-3035. +Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019b. Social IQa: Commonsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4463-4473, Hong Kong, China. Association for Computational Linguistics. +Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556-2565, Melbourne, Australia. Association for Computational Linguistics. + +Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Thirty-First AAAI Conference on Artificial Intelligence. +Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. 2019. 
Insertion transformer: Flexible sequence generation via insertion operations. arXiv preprint arXiv:1902.03249. +Raymond Hendy Susanto, Shamil Chollampatt, and Li ling Tan. 2020. Lexically constrained neural machine translation with levenshtein transformer. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. To appear. +Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149-4158, Minneapolis, Minnesota. Association for Computational Linguistics. +Ruth Tincoff and Peter W Jusczyk. 1999. Some beginnings of word comprehension in 6-month-olds. *Psychological science*, 10(2):172-175. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008. +Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4566-4575. +Cunxiang Wang, Shuailong Liang, Yili Jin, Yi long Wang, Xiaodan Zhu, and Yue Zhang. 2020. SemEval-2020 task 4: Commonsense validation and explanation. In Proceedings of The 14th International Workshop on Semantic Evaluation. Association for Computational Linguistics. +Cunxiang Wang, Shuailong Liang, Yue Zhang, Xiaonan Li, and Tian Gao. 2019a. Does it make sense? and why? a pilot study for sense making and explanation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4020-4026, Florence, Italy. Association for Computational Linguistics. 
+Xin Wang, Jiawei Wu, Junkun Chen, Lei Li, Yuan-Fang Wang, and William Yang Wang. 2019b. Vatex: A large-scale, high-quality multilingual dataset for video-and-language research. In Proceedings of the IEEE International Conference on Computer Vision, pages 4581-4591. + +Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R'emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. +Frank F. Xu, Bill Yuchen Lin, and Kenny Zhu. 2018. Automatic extraction of commonsense LocatedNear knowledge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 96-101, Melbourne, Australia. Association for Computational Linguistics. +Pengcheng Yang, Lei Li, Fuli Luo, Tianyu Liu, and Xu Sun. 2019a. Enhancing topic-to-essay generation with external commonsense knowledge. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2002–2012, Florence, Italy. Association for Computational Linguistics. +Pengcheng Yang, Fuli Luo, Peng Chen, Lei Li, Zhiyi Yin, Xiaodong He, and Xu Sun. 2019b. Knowledgeable storyteller: a commonsense-driven generative model for visual storytelling. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI, pages 5356-5362. +Lili Yao, Nanyun Peng, Ralph Weischedel, Kevin Knight, Dongyan Zhao, and Rui Yan. 2019. Plan-and-write: Towards better automatic storytelling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7378-7385. +Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67-78. 
+Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019a. From recognition to cognition: Visual commonsense reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6720-6731. +Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 93-104, Brussels, Belgium. Association for Computational Linguistics. +Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019b. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791-4800, Florence, Italy. Association for Computational Linguistics. + +Houyu Zhang, Zhenghao Liu, Chenyan Xiong, and Zhiyuan Liu. 2020a. Grounded conversation generation as guided traverses in commonsense knowledge graphs. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. To appear. +Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020b. "bertscore: Evaluating text generation with bert". In International Conference on Learning Representations. +Yue Zhang and Stephen Clark. 2015. Discriminative syntax-based word ordering for text generation. Computational Linguistics, 41:503-538. +Wanrong Zhu, Zhiting Hu, and Eric P. Xing. 2019. Text infilling. ArXiv, abs/1901.00158. + +# A Supplementary Figures and Tables + +We include additional figures and tables that we mentioned in the main content here. + +- Figure 8 shows the detailed distribution of the commonsense relations between given concepts, the summary of which was shown in Table 2 of the main content. +- Figure 9 presents 4 more case studies with human rationales which we asked our crowd workers to provide. 
+ +- Figure 7 shows the instructions and the AMT interface for crowd-sourcing human references. +- Table 7 shows the model performances on the dev set of COMMONGEN, as a reference for future development. +- Table 8 gives the full results of the learning curve in Figure 5. We highlight the highest checkpoints and the speed-up by CG-T5, which are discussed in Section 5.3. + +# B Experimental Details + +Main experiments. We present some implementation details for training and testing the baseline models in Table 6. The detailed instructions for installing dependencies and all necessary training command lines are given in the instruction 'readme.md' files. The numbers of trainable model parameters are taken directly from either the output of the frameworks or the original papers. We show some key hyper-parameters that we manually tuned on the development set. + +All key hyper-parameters were initialized with the default values suggested by the original authors of the frameworks. Our manual tuning iterates over the magnitudes or the neighboring choices; for example, the learning rates ('lr') of the last seven models are selected from $\{1e-3, \dots, 1e-4, \dots, 1e-5\}$. Then, similarly, the batch size (bsz) is maximized to make full use of the GPU memory. Note that the first three models are implemented with the OpenNMT-py framework $^{6}$ . The LevenTrans $^{7}$ , ConstLeven $^{8}$ , and BART $^{9}$ are adopted from the official authors' releases. The
| Models | Instruction Files | #Para | Key HPs |
| --- | --- | --- | --- |
| bRNN-CopyNet | opennmt_based/README.md | 8.12 M | lr=0.2, bsz=128, layers=2, rnn_size=128, dropout=0 |
| Trans-CopyNet | opennmt_based/README.md | 6.25 M | lr=0.2, bsz=128, layers=1, hidden_size=128, dropout=0.1 |
| MeanPooling-CopyNet | opennmt_based/README.md | 7.76 M | global_attention=mlp, lr=0.15, rnn_size=128, bsz=128 |
| LevenTrans. | fairseq_based/README.md | 55.4 M | lr=5e-4, warmup-init-lr=1e-7, dropout=0.3, warmup=10k |
| ConstLeven | const-levt/readme.md | 55.4 M | lr=5e-4, warmup-init-lr=1e-7, dropout=0.3, warmup=10k |
| GPT-2 | GPT-2/readme.md | 345 M | lr=5e-5, bsz=32*4 |
| BERT-Gen | BERT-based/readme.md | 110 M | lr=3e-5, bsz=32 |
| UniLM | unilm_based/README.md | 340 M | lr=1e-5, bsz=32 |
| UniLMv2 | UniLM_v2/readme.md | 110 M | lr=3e-5, bsz=32 |
| BART | BART/readme.md | 400 M | lr=3e-5, warmup=500, bsz=32 |
| T5-Base | T5/readme.md | 220 M | lr=5e-5, bsz=192 |
| T5-Large | T5/readme.md | 770 M | lr=2e-5, bsz=2*32, warmup_steps=400 |
+ +Table 6: The paths to the instruction files in our submitted code zip file (under the 'methods/' folder), and their numbers of parameters and key hyper-parameters. + +BERT-Gen, UniLM, and UniLMv2 are all based on their official source code $^{10}$ . GPT-2 and T5 are both adopted from the huggingface transformers $^{11}$ framework (Wolf et al., 2019). All models use beam search as their decoding algorithm, and the beam size is mostly 5, selected from {5, 10, 20}. All our models were trained on Quadro RTX 6000 GPUs. The training time of the X-CopyNet and LevenTrans models is less than 12 hours on a single GPU. The second group of models is trained for between 12 and 24 hours, except for T5-large, for which we used 3 GPUs and fine-tuned for about 48 hours. Note that all the above methods are self-contained in our submitted code as long as users follow the associated readme instructions. + +Transferring study experiments. We use the same hyper-parameters for these experiments, which are searched over the baseline RoBERTa-Large model. The best hyper-parameters $^{12}$ of RoBERTa-Large for CommonsenseQA $^{13}$ are: + +- batch size = 16, learning rate = 1e-5, maximum updates = 3,000 ( $\sim 5$ epochs) +- warmup steps = 150, dropout rate = 0.1 +- weight decay = 0.01, adam_epsilon = 1e-6 + +We tried 10 random seeds and used the best one (42). Then, we follow the steps described in Sec. 5.3 to run the other CG-enhanced models with the
| Model \ Metrics | ROUGE-2 | ROUGE-L | BLEU-3 | BLEU-4 | METEOR | CIDEr | SPICE | Coverage |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| bRNN-CopyNet (Gu et al., 2016) | 9.23 | 30.57 | 13.60 | 7.80 | 17.40 | 6.04 | 16.90 | 58.95 |
| Trans-CopyNet | 11.08 | 32.57 | 17.20 | 10.60 | 18.80 | 7.02 | 18.00 | 62.16 |
| MeanPooling-CopyNet | 11.36 | 34.63 | 14.80 | 8.90 | 19.20 | 7.17 | 20.20 | 68.32 |
| LevenTrans. (Gu et al., 2019) | 12.22 | 35.42 | 23.10 | 15.00 | 22.10 | 8.94 | 21.40 | 71.83 |
| ConstLeven. (Susanto et al., 2020) | 13.47 | 35.19 | 21.30 | 12.30 | 25.00 | 11.06 | 23.20 | 96.87 |
| GPT-2 (Radford et al., 2019) | 17.74 | 41.24 | 32.70 | 23.30 | 27.50 | 13.26 | 27.60 | 85.46 |
| BERT-Gen (Bao et al., 2020) | 18.73 | 42.36 | 33.00 | 23.70 | 29.10 | 13.34 | 28.70 | 91.71 |
| UniLM (Dong et al., 2019) | 21.68 | 45.66 | 40.40 | 30.40 | 31.00 | 15.72 | 31.40 | 92.41 |
| UniLM-v2 (Bao et al., 2020) | 19.24 | 43.01 | 33.40 | 24.20 | 29.20 | 13.65 | 29.30 | 93.57 |
| BART (Lewis et al., 2019) | 22.13 | 43.02 | 37.00 | 27.50 | 31.00 | 14.12 | 30.00 | 97.56 |
| T5-Base (Raffel et al., 2019) | 15.33 | 36.20 | 28.10 | 18.00 | 24.60 | 9.73 | 23.40 | 83.77 |
| T5-Large (Raffel et al., 2019) | 21.98 | 44.41 | 40.80 | 30.60 | 31.00 | 15.84 | 31.80 | 97.04 |
| Human Performance | 48.88 | 63.79 | 48.20 | 44.90 | 36.20 | 43.53 | 63.50 | 99.31 |
+ +Table 7: Experimental results of different baseline methods on the COMMONGEN dev set. The first group of models are non-pretrained models, while the second group consists of large pretrained models that we have fine-tuned. The best models are **bold** and second best ones are **underlined** within each metric. + +same hyper-parameters. This suggests that further hyper-parameter search may yield even better performance. + +![](images/0f5c6397511a37a1ae5ff22708aada01f6f7702a9418ab01cefefc988de3072b.jpg) +Figure 7: Our annotation interface on the AMT platform. The upper part is the instruction for the annotators, and we provide an example for them. Note that we give part-of-speech hints (from the caption corpora) to boost the speed of annotation, but we do not remove sentences with the wrong part-of-speech as long as they also make sense. + +![](images/418ad2e36b967ddd245fa563608050a00031043d85a950f8450b19a554cbfca5.jpg) +Figure 8: One/two-hop relation frequency in the COMMONGEN dev & test sets on ConceptNet. + +1) [Input concept-set]: { give, lay, massage, table } + +# [Machine generations] + +[bRNN-CpNet]: Lays massage someone table vertical gives on and the water. +[Trans-CpNet]: Massage lays on the kitchen. +[MP-CpNet]: A massage table being calling with an improvisation lay free speaker. +[LevenTrans]: A man chatting at the table. +[GPT-2]: A man gives a massage to a table. +[BERT-Gen]: A woman lays down on a table and gives a massage to a man. +[UniLM]: A woman lays down a massage on a table and gives a massage. +[UniLM-v2]: A woman is laying down and giving a massage on a table. +[BART]: A man lays on a table and gives a massage to a woman laying on the table. +[T5]: Woman lay on a table and gives a massage. + +2) [Input concept-set]: { cow, horse, lasso, ride } + +# [Machine generations] + +[bRNN-CpNet]: Someone lowers his horse from the wall and lasso glass by cows. +[Trans-CpNet]: A horse having lasso in the bridal cows. +[MP-CpNet]: Cow in a lasso getting the ride.
+[LevenTrans]: A cow rides through a horse. +[GPT-2]: A horse rides on a lasso. +[BERT-Gen]: A cow rides a lasso on a horse. +[UniLM]: A man rides a horse with a lasso at cows. +[UniLM-v2]: A horse rides a cow with a lasso on it. +[BART]: A man rides a horse and a cow on a bridle with a lasso. +[T5]: Lasso to ride a cow on a horse. + +3) [Input concept-set]: { hand, hold, walk, water} + +# [Machine generations] + +[brNN-CpNet]: Process of holds at hands under walk on hours. +[Trans-CpNet]: Hands with a walk in the water. +[MP-CpNet]: Walk across the hold to water. +[LevenTrans]: Hand moored at the water. +[GPT-2]: A woman holds a water walker and holds a hand. +[BERT-Gen]: A man walking and holding a hand in water while walking. +[UniLM]: A man holds hands to walk across the water. +[UniLM-v2]: A man is walking and holding a hand in the water. +[BART]: A man walks with a woman holding her hand as they walk through water. +[T5]: Man holds a bottle of water in his hand as he walks along a river. + +4) [Input concept-set]: { clean, ladder, squeegee, stand, window} + +# [Machine generations] + +[bRNN-CpNet]: The window stands out a ladder but clean the sun to being squeegee. +[Trans-CpNet]: A brown leather ladder with green eyes. +[MP-CpNet]: Window of the zebra are on a tablecloth. +[LevenTrans]: A man on a a on the kitchen. +[GPT-2]: Someone grabs a ladder from a window and squeezes it open. +[BERT-Gen]: A woman is cleaning a window with a ladder and a squeegee. +[UniLM]: Someone stands next to a window and stands on a ladder to clean the squeegee. +[UniLM-v2]: A man is standing on a ladder and using a ladder to clean the window. +[BART]: A man with a squeegee and a ladder standing on the ledge of a window is cleaning the window. +[T5]: Squeegee and ladder on a wooden stand to clean windows and windows. + +# [Human references from AMT] + +1. The man lays down on the massage table and the therapist gives him a massage. 
+[Rationale]: The man must lay down to receive a massage. The therapist is the giver of massages. The table is a massage table. +2. Lay down on the table and the masseuse will give you a neck massage. +[Rationale]: A masseuse is a woman who gives massages professionally. +Massages are usually done on tables. + +3. The woman gives the man who lays on the table a massage. +{Rationale}: Some massages are done laying down; people like to get massages; tables are used for people to get massages; people lay on tables to get massages. + +# [Human references from AMT] + +1. When those men ride a horse for the first time and lasso those cows. +[Rationale]: cowboys ride horses and lasso cows for a living +2. A cowboy can use a lasso to control a horse or cow in order to ride them. +[Rationale]: I understand the words and I can read and write English. +3. The cowboy will lasso the cow while riding on the horse. +[Rationale]: Have seen it. + +# [Human references from AMT] + +1. The couple holds hands as they walk by the water. +[Rationale]: +Couples hold hands when taking walk even by a body of water. +2. The girl is walking holding in her hand a bottle of water. +[Rationale]: I see this reading the words +3. The couple hold hands while they walk by the water. +[Rationale]: People sometimes hold hands. People Like to walk near water. + +# [Human references from AMT] + +1. The window cleaner stands on the ladder to clean the window with a squeegee. +[Rationale]: A squeegee is a tool to clean windows. A ladder is something that people use to reach high places. +2. The man clean the window on the ladder stand by using squeegee. +[Rationale]: man need to clean the window by using squeegee on the ladder stand +3. The man stood beside the ladder and cleaned the window with a squeegee. +[Rationale]: people can stand next to ladders. People clean windows. Squeepees are used to clean windows. + +Figure 9: Four cases for qualitative analysis of machine generations. 
References are collected from AMT crowdworkers and they are required to provide rationales. Note that the third one is a positive case showing that some models can successfully generate reasonable scenarios. However, most models perform poorly on the other cases. + +
| Training Steps | RoBERTa-Large | w/CG(BART) | w/CG(T5) | w/CG(UniLM) | w/CG(BERT-Gen) | w/CG(ConstLever) |
| --- | --- | --- | --- | --- | --- | --- |
| 50 | 0.2252 | 0.1884 | 0.2506 | 0.2244 | 0.2007 | 0.2162 |
| 100 | 0.3088 | 0.2703 | 0.3587 | 0.3153 | 0.2924 | 0.2809 |
| 150 | 0.5053 | 0.2973 | 0.5643 | 0.1851 | 0.3391 | 0.3653 |
| 200 | 0.5717 | 0.4439 | 0.6650 | 0.3833 | 0.5274 | 0.5324 |
| 250 | 0.6020 | 0.5242 | 0.6937 | 0.5348 | 0.5839 | 0.6396 |
| 300 | 0.6388 | 0.6601 | 0.7117 | 0.6323 | 0.6274 | 0.6634 |
| 350 | 0.6675 | 0.6814 | 0.7150 | 0.6503 | 0.6626 | 0.6740 |
| 400 | 0.6830 | 0.6830 | 0.7215 | 0.6847 | 0.6781 | 0.6773 |
| 450 | 0.7027 | 0.7068 | 0.7338 | 0.6921 | 0.7068 | 0.6962 |
| 500 | 0.7019 | 0.7076 | 0.7428 | 0.7011 | 0.6929 | 0.7052 |
| 550 | 0.6978 | 0.7248 | 0.7486 | 0.7256 | 0.7068 | 0.6904 |
| 600 | 0.6790 | 0.7232 | 0.7494 | 0.7338 | 0.7248 | 0.7068 |
| 650 | 0.7150 | 0.7289 | 0.7428 | 0.7469 | 0.7101 | 0.7117 |
| 700 | 0.7142 | 0.7453 | 0.7477 | 0.7387 | 0.7305 | 0.7183 |
| 750 | 0.7027 | 0.7453 | 0.7314 | 0.7527 | 0.7166 | 0.7183 |
| 800 | 0.7158 | 0.7355 | 0.7437 | 0.7371 | 0.7281 | 0.7240 |
| 850 | 0.7174 | 0.7445 | 0.7625 | 0.7420 | 0.7379 | 0.7322 |
| 900 | 0.7191 | 0.7543 | 0.7559 | 0.7502 | 0.7477 | 0.7338 |
| 950 | 0.7355 | 0.7486 | 0.7477 | 0.7387 | 0.7428 | 0.7404 |
| 1000 | 0.7477 | 0.7510 | 0.7461 | 0.7486 | 0.7428 | 0.7363 |
| 1050 | 0.7346 | 0.7502 | 0.7568 | 0.7469 | 0.7412 | 0.7297 |
| 1100 | 0.7428 | 0.7527 | 0.7551 | 0.7494 | 0.7363 | 0.7420 |
| 1150 | 0.7379 | 0.7609 | 0.7576 | 0.7641 | 0.7453 | 0.7437 |
| 1200 | 0.7469 | 0.7477 | 0.7502 | 0.7461 | 0.7420 | 0.7477 |
| 1250 | 0.7477 | 0.7412 | 0.7592 | 0.7518 | 0.7273 | 0.7371 |
| 1300 | 0.7502 | 0.7518 | 0.7617 | 0.7666 | 0.7518 | 0.7412 |
| 1350 | 0.7469 | 0.7502 | 0.7551 | 0.7568 | 0.7437 | 0.7404 |
| 1400 | 0.7420 | 0.7494 | 0.7641 | 0.7559 | 0.7494 | 0.7428 |
| 1450 | 0.7510 | 0.7584 | 0.7625 | 0.7461 | 0.7461 | 0.7461 |
| 1500 | 0.7535 | 0.7674 | 0.7690 | 0.7551 | 0.7412 | 0.7428 |
| 1550 | 0.7461 | 0.7559 | 0.7674 | 0.7510 | 0.7445 | 0.7412 |
| 1600 | 0.7437 | 0.7584 | 0.7584 | 0.7543 | 0.7445 | 0.7420 |
| 1650 | 0.7568 | 0.7609 | 0.7633 | 0.7543 | 0.7494 | 0.7428 |
| 1700 | 0.7551 | 0.7584 | 0.7633 | 0.7625 | 0.7535 | 0.7396 |
| 1750 | 0.7600 | 0.7568 | 0.7699 | 0.7740 | 0.7551 | 0.7518 |
| 1800 | 0.7617 | 0.7559 | 0.7731 | 0.7740 | 0.7527 | 0.7486 |
| 1850 | 0.7690 | 0.7584 | 0.7772 | 0.7707 | 0.7617 | 0.7461 |
| 1900 | 0.7658 | 0.7592 | 0.7805 | 0.7838 | 0.7486 | 0.7445 |
| 1950 | 0.7584 | 0.7617 | 0.7715 | 0.7715 | 0.7510 | 0.7396 |
| 2000 | 0.7510 | 0.7617 | 0.7690 | 0.7715 | 0.7445 | 0.7355 |
| 2050 | 0.7551 | 0.7641 | 0.7731 | 0.7649 | 0.7559 | 0.7477 |
| 2100 | 0.7641 | 0.7617 | 0.7641 | 0.7625 | 0.7559 | 0.7412 |
| 2150 | 0.7584 | 0.7543 | 0.7658 | 0.7641 | 0.7527 | 0.7461 |
| 2200 | 0.7584 | 0.7477 | 0.7649 | 0.7633 | 0.7453 | 0.7371 |
| 2250 | 0.7551 | 0.7559 | 0.7641 | 0.7609 | 0.7461 | 0.7363 |
| 2300 | 0.7535 | 0.7600 | 0.7699 | 0.7674 | 0.7412 | 0.7420 |
| 2350 | 0.7551 | 0.7617 | 0.7682 | 0.7625 | 0.7502 | 0.7412 |
| 2400 | 0.7559 | 0.7649 | 0.7699 | 0.7625 | 0.7559 | 0.7387 |
| 2450 | 0.7584 | 0.7674 | 0.7707 | 0.7658 | 0.7477 | 0.7387 |
| 2500 | 0.7551 | 0.7649 | 0.7600 | 0.7633 | 0.7502 | 0.7363 |
| 2550 | 0.7592 | 0.7658 | 0.7731 | 0.7658 | 0.7518 | 0.7387 |
| 2600 | 0.7559 | 0.7658 | 0.7715 | 0.7600 | 0.7420 | 0.7371 |
| 2650 | 0.7576 | 0.7674 | 0.7690 | 0.7600 | 0.7494 | 0.7420 |
| 2700 | 0.7568 | 0.7707 | 0.7690 | 0.7600 | 0.7461 | 0.7379 |
| 2750 | 0.7568 | 0.7699 | 0.7674 | 0.7649 | 0.7445 | 0.7437 |
| 2800 | 0.7592 | 0.7682 | 0.7690 | 0.7617 | 0.7445 | 0.7453 |
| 2850 | 0.7592 | 0.7641 | 0.7707 | 0.7649 | 0.7461 | 0.7445 |
| 2900 | 0.7609 | 0.7649 | 0.7740 | 0.7658 | 0.7477 | 0.7437 |
| 2950 | 0.7617 | 0.7649 | 0.7740 | 0.7658 | 0.7469 | 0.7437 |
| 3000 | 0.7600 | 0.7658 | 0.7731 | 0.7658 | 0.7437 | 0.7420 |
+ +Table 8: Experimental results of the transferring study on CommonsenseQA dev set. \ No newline at end of file diff --git a/commongenaconstrainedtextgenerationchallengeforgenerativecommonsensereasoning/images.zip b/commongenaconstrainedtextgenerationchallengeforgenerativecommonsensereasoning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..60e9b1b7adadc228a053c720db1af8cfb306bab9 --- /dev/null +++ b/commongenaconstrainedtextgenerationchallengeforgenerativecommonsensereasoning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a543f1cf1e0269a2991b509d4bce575965addc78ad8de6bb9fede421318514fa +size 1143012 diff --git a/commongenaconstrainedtextgenerationchallengeforgenerativecommonsensereasoning/layout.json b/commongenaconstrainedtextgenerationchallengeforgenerativecommonsensereasoning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..466e4b6fbbd3383c0f0f07a3eb3bd2a11a0bfdf2 --- /dev/null +++ b/commongenaconstrainedtextgenerationchallengeforgenerativecommonsensereasoning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4568a7247a2d5ab3acdb105a6d99effbe7b391759874b9a862c1f0433ed2f0b8 +size 585513 diff --git a/composedvariationalnaturallanguagegenerationforfewshotintents/413b48eb-f76b-41fc-90d8-ad194af32b1e_content_list.json b/composedvariationalnaturallanguagegenerationforfewshotintents/413b48eb-f76b-41fc-90d8-ad194af32b1e_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..54c974222fd4d83f55d4fd7296ce0a93576fa660 --- /dev/null +++ b/composedvariationalnaturallanguagegenerationforfewshotintents/413b48eb-f76b-41fc-90d8-ad194af32b1e_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:45b96d98d0be06adea044e1475d406abb1d0cd868f5c9bbc1956048121254572 +size 75002 diff --git 
a/composedvariationalnaturallanguagegenerationforfewshotintents/413b48eb-f76b-41fc-90d8-ad194af32b1e_model.json b/composedvariationalnaturallanguagegenerationforfewshotintents/413b48eb-f76b-41fc-90d8-ad194af32b1e_model.json new file mode 100644 index 0000000000000000000000000000000000000000..e866fd3146fffcbc28f47a8bae6d4097a07c6391 --- /dev/null +++ b/composedvariationalnaturallanguagegenerationforfewshotintents/413b48eb-f76b-41fc-90d8-ad194af32b1e_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b13d3b95ed9ae186cd2a31b99437788097ce00436667796597abcde1edb5203a +size 92303 diff --git a/composedvariationalnaturallanguagegenerationforfewshotintents/413b48eb-f76b-41fc-90d8-ad194af32b1e_origin.pdf b/composedvariationalnaturallanguagegenerationforfewshotintents/413b48eb-f76b-41fc-90d8-ad194af32b1e_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..747fa21b2322c6e6f124fb17cffdf4fed7753768 --- /dev/null +++ b/composedvariationalnaturallanguagegenerationforfewshotintents/413b48eb-f76b-41fc-90d8-ad194af32b1e_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc8773c9e8fe8ad3022ebd96b153f7fc1fec5ff59903bc12f19b65fa3e291a5f +size 399437 diff --git a/composedvariationalnaturallanguagegenerationforfewshotintents/full.md b/composedvariationalnaturallanguagegenerationforfewshotintents/full.md new file mode 100644 index 0000000000000000000000000000000000000000..f58804abbf28db326036d30ae55d15d5ddbccf98 --- /dev/null +++ b/composedvariationalnaturallanguagegenerationforfewshotintents/full.md @@ -0,0 +1,342 @@ +# Composed Variational Natural Language Generation for Few-shot Intents + +Congying Xia $^{1*}$ , Caiming Xiong $^{2}$ , Philip Yu $^{1}$ and Richard Socher $^{2}$ + +1University of Illinois at Chicago, Chicago, IL, USA + +$^{2}$ Salesforce Research, Palo Alto, CA, US + +{cxia8, psyu}@uic.edu,{cxiong, rsocher}@salesforce.com + +# Abstract + +In this paper, we focus on generating 
training examples for few-shot intents in the realistic imbalanced scenario. To build connections between existing many-shot intents and few-shot intents, we consider an intent as a combination of a domain and an action, and propose a composed variational natural language generator (CLANG), a transformer-based conditional variational autoencoder. CLANG utilizes two latent variables to represent the utterances corresponding to two different independent parts (domain and action) in the intent, and the latent variables are composed together to generate natural examples. Additionally, to improve the generator learning, we adopt a contrastive regularization loss that contrasts in-class with out-of-class utterance generation given the intent. To evaluate the quality of the generated utterances, experiments are conducted on the generalized few-shot intent detection task. Empirical results show that our proposed model achieves state-of-the-art performance on two real-world intent detection datasets. + +# 1 Introduction + +Intelligent assistants have gained great popularity in recent years since they provide a new way for people to interact with the Internet conversationally (Hoy, 2018). However, it is still challenging to answer people's diverse questions effectively. Among all the challenges, identifying user intentions from their spoken language is essential for all the downstream tasks. + +Most existing works (Hu et al., 2009; Xu and Sarikaya, 2013; Chen et al., 2016; Xia et al., 2018) formulate intent detection as a classification task and achieve high performance on pre-defined intents with sufficient labeled examples. In this ever-changing world, a realistic scenario is that we have imbalanced training data with existing many-shot intents and insufficient few-shot intents. Previous intent detection models (Yin, 2020; Yin et al., 2019) deteriorate drastically in discriminating the few-shot intents. 
+ +To alleviate this scarce annotation problem, several methods (Wei and Zou, 2019; Malandrakis et al., 2019; Yoo et al., 2019) have been proposed to augment the training data for low-resource spoken language understanding (SLU). Wei and Zou (2019) introduce simple data augmentation rules for language transformation like insert, delete and swap. Malandrakis et al. (2019) and Yoo et al. (2019) utilize variational autoencoders (Kingma and Welling, 2013) with simple LSTMs (Hochreiter and Schmidhuber, 1997) that have limited model capacity to do text generation. Furthermore, these models are not specifically designed to transfer knowledge from existing many-shot intents to few-shot intents. + +In this paper, we focus on transferable natural language generation by learning how to compose utterances with many-shot intents and transferring to few-shot intents. When users interact with intelligent assistants, their goal is to query some information or execute a command in a certain domain (Watson Assistant, 2017). For instance, the intent of the input "what will be the highest temperature next week" is to ask about the weather. The utterance can be decomposed into two parts, "what will be" corresponding to an action "Query" and "the highest temperature" related to the domain "Weather". These actions or domains are very likely to be shared among different intents including the few-shot ones (Xu et al., 2019). For example, there are a lot of actions ("query", "set", "remove") that can be combined with the domain of "alarm". The action "query" also exists in multiple domains like "weather", "calendar" and "movie". Ideally, if we can learn the expressions representing a certain action or domain and how they compose an utterance for existing intents, then we can learn how to compose utterances for few-shot intents naturally. Therefore, we define an intent as a combination of a domain and an action. Formally, we denote the domain as $y_{d}$ and the action as $y_{a}$ . 
Each intent can be expressed as $y = (y_{d}, y_{a})$ . + +A composed variational natural language generator (CLANG) is proposed to learn how to compose an utterance for a given intent with an action and a domain. CLANG is a transformer-based (Vaswani et al., 2017) conditional variational autoencoder (CVAE) (Kingma et al., 2014). It contains a bi-latent variational encoder and a decoder. The bi-latent variational encoder utilizes two independent latent variables to model the distributions of action and domain separately. Special attention masks are designed to guide these two latent variables to focus on different parts of the utterance and disentangle the semantics for action and domain separately. Through decomposing utterances for existing many-shot intents, the model learns to generate utterances for few-shot intents as a composition of the learned expressions for domain and action. + +Additionally, we adopt a contrastive regularization loss to improve our generator learning. During training, an in-class utterance from one intent is contrasted with an out-of-class utterance from another intent. Specifically, the contrastive loss constrains the model to generate the positive example with a higher probability than the negative example, by a certain margin. With the contrastive loss, the model is regularized to focus on the given domain and action, and the probability of generating negative examples is reduced. + +To quantitatively evaluate the effectiveness of CLANG for augmenting training data in low-resource intent detection, experiments are conducted for the generalized few-shot intent detection task (GFSID) (Xia et al., 2020). GFSID aims to discriminate a joint label space consisting of both existing many-shot intents and few-shot intents. + +Our contributions are summarized below. 1) We define an intent as a combination of a domain and an action to build connections between existing many-shot intents and few-shot intents. 
2) A composed variational natural language generator (CLANG) is proposed to learn how to compose an utterance for a given intent with an action and a domain. Utterances are generated for few-shot intents via a composed variational inference process. 3) Experiment results show that CLANG achieves state-of-the-art performance on two real-world intent detection datasets for the GFSID task. + +# 2 Composed Variational Natural Language Generator + +In this section, we introduce the composed variational natural language generator (CLANG). As illustrated in Figure 1, CLANG consists of three parts: input representation, bi-latent variational encoder, and decoder. + +# 2.1 Input Representation + +For a given intent $y$ decomposed into a domain $y_{d}$ and an action $y_{a}$ , and an utterance $x = (w_{1}, w_{2}, \ldots, w_{n})$ with $n$ tokens, we design a BERT-like input format: ([CLS], $y_{d}, y_{a}$ , [SEP], $w_{1}, w_{2}, \ldots, w_{n}$ , [SEP]). In the example in Figure 1, the intent has the domain "weather" and the action "query", and the utterance is "what will be the highest temperature next week". The input is represented as ([CLS], weather, query, [SEP], what, will, be, the, highest, temperature, next, week, [SEP]). + +Texts are tokenized into subword units by WordPiece (Wu et al., 2016). The input embedding of each token is the sum of three embeddings: a token embedding, a position embedding (Vaswani et al., 2017), and a segment embedding (Devlin et al., 2018). The segment embeddings are learned to distinguish the intent part of the input from the utterance part. + +# 2.2 Bi-latent Variational Encoder + +As illustrated in Figure 1, the bi-latent variational encoder encodes the input into two latent variables that capture the disentangled semantics in the utterance corresponding to the domain and the action separately. + +Multiple transformer layers (Vaswani et al., 2017) are utilized in the encoder. 
Through the self-attention mechanism, these transformer layers not only extract semantically meaningful representations for the tokens, but also model the relation between the intent and the utterance. The embeddings of the domain token and the action token output from the last transformer layer are denoted as $\mathbf{e}_d$ and $\mathbf{e}_a$ . We encode $\mathbf{e}_d$ into the variable $z_d$ to model the distribution for the domain, and $\mathbf{e}_a$ is encoded into the variable $z_a$ to model the distribution for the action. + +![](images/6c10ef44846f4bd9ff29c62bc97d99ed49b147b79f73ad622137546b3911edbf.jpg) +Figure 1: The overall framework of CLANG. + +Ideally, we want to disentangle the information for the domain and the action, making $\mathbf{e}_d$ attend to tokens related to the domain and $\mathbf{e}_a$ focus on the expressions representing the action. To achieve that, we modify the attention calculations in the transformer layers to avoid direct interactions between the domain token and the action token in each layer. + +Instead of applying full bidirectional attention to the input, an attention mask matrix $\mathbf{M} \in \mathbb{R}^{N \times N}$ is added to determine whether a pair of tokens can attend to each other (Dong et al., 2019). $N$ is the length of the input. 
For the $l$-th Transformer layer, the output of a self-attention head $\mathbf{A}_l$ is computed via: + +$$
\mathbf{Q} = \mathbf{T}^{l-1}\mathbf{W}_Q^l, \quad \mathbf{K} = \mathbf{T}^{l-1}\mathbf{W}_K^l, \quad \mathbf{V} = \mathbf{T}^{l-1}\mathbf{W}_V^l, \tag{1}
$$ + +$$
\mathbf{A}_l = \operatorname{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{d_k}} + \mathbf{M}\right)\mathbf{V},
$$ + +where the attention mask matrix is calculated as: + +$$
\mathbf{M}_{ij} = \begin{cases} 0, & \text{allow to attend}; \\ -\infty, & \text{prevent from attending}. \end{cases} \tag{2}
$$ + +The output of the previous transformer layer $\mathbf{T}^{l - 1}\in \mathbb{R}^{N\times d_h}$ is linearly projected to a triple of queries, keys and values parameterized by the matrices $\mathbf{W}_Q^l,\mathbf{W}_K^l,\mathbf{W}_V^l\in \mathbb{R}^{d_h\times d_k}$ . $d_h$ is the hidden dimension of the transformer layer, and $d_{k}$ is the hidden dimension of a self-attention head. + +The proposed attention mask for the domain token and the action token is illustrated in Figure 2. The domain $y_{d}$ and the action $y_{a}$ are prevented from attending to each other. All the other tokens are allowed full attention to one another. The elements of the mask matrix for the attention between domain and action are $-\infty$ , and 0 for all the others. + +The disentangled embeddings $\mathbf{e}_d$ and $\mathbf{e}_a$ are encoded into two latent variables $z_{d}$ and $z_{a}$ to model the posterior distributions determined by the intent elements separately: $p(z_d|x,y_d)$ , $p(z_a|x,y_a)$ . The latent variable $z_{d}$ is conditioned on the domain $y_{d}$ , while $z_{a}$ is controlled by the action $y_{a}$ . 
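As a concrete sketch of Eqs. (1)–(2), the snippet below builds the encoder mask that blocks attention only between the domain and action positions and applies it inside a single attention head. This is a minimal NumPy illustration under our own naming, not the paper's code:

```python
import numpy as np

def attention_mask(n, domain_idx, action_idx):
    """Mask matrix M of Eq. (2): 0 where attention is allowed,
    -inf only between the domain token and the action token."""
    M = np.zeros((n, n))
    M[domain_idx, action_idx] = -np.inf
    M[action_idx, domain_idx] = -np.inf
    return M

def masked_attention(Q, K, V, M):
    """Single-head attention of Eq. (1): softmax(Q K^T / sqrt(d_k) + M) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k) + M
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights, weights @ V
```

With the mask in place, the attention weight between the domain and action positions is exactly zero, while every other token pair interacts normally.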
To model the true distributions, $p(z_d|x,y_d)$ and $p(z_a|x,y_a)$ , with a known distribution that is easy to sample from (Kingma et al., 2014), we constrain the prior distributions, $p(z_d|y_d)$ and $p(z_a|y_a)$ , to be multivariate standard Gaussian distributions. The reparametrization trick (Kingma and Welling, 2013) is used to generate the latent vectors $z_{d}$ and $z_{a}$ separately. Gaussian parameters $(\mu_d,\mu_a,\sigma_d^2,\sigma_a^2)$ are projected from $\mathbf{e}_d$ and $\mathbf{e}_a$ : + +$$
\mu_d = \mathbf{e}_d \mathbf{W}_{\mu_d} + b_{\mu_d}, \quad \log(\sigma_d^2) = \mathbf{e}_d \mathbf{W}_{\sigma_d} + b_{\sigma_d}, \tag{3}
$$ + +$$
\mu_a = \mathbf{e}_a \mathbf{W}_{\mu_a} + b_{\mu_a}, \quad \log(\sigma_a^2) = \mathbf{e}_a \mathbf{W}_{\sigma_a} + b_{\sigma_a},
$$ + +where we have $\mathbf{W}_{\mu_d},\mathbf{W}_{\mu_a},\mathbf{W}_{\sigma_d},\mathbf{W}_{\sigma_a}\in \mathbb{R}^{d_h\times d_h}$ and $b_{\mu_d},b_{\mu_a},b_{\sigma_d},b_{\sigma_a}\in \mathbb{R}^{d_h}$ . Noise variables $\varepsilon_{d}\sim \mathcal{N}(0,\mathrm{I})$ and $\varepsilon_{a}\sim \mathcal{N}(0,\mathrm{I})$ are utilized to sample $z_{d}$ and $z_{a}$ from the learned distributions: + +$$
z_d = \mu_d + \sigma_d \cdot \varepsilon_d, \quad z_a = \mu_a + \sigma_a \cdot \varepsilon_a. \tag{4}
$$ + +The KL loss regularizes the posterior distributions of these two latent variables to be close to their Gaussian priors: + +$$
\mathcal{L}_{KL} = \mathbb{D}_{\mathrm{KL}}[q(z_d|x,y_d), p(z_d|y_d)] + \mathbb{D}_{\mathrm{KL}}[q(z_a|x,y_a), p(z_a|y_a)]. \tag{5}
$$ + +A fully connected layer with the Gelu (Hendrycks and Gimpel, 2016) activation function is applied on $z_{d}$ and $z_{a}$ to compose these two latent variables together and output $z$ . The composed latent information $z$ is utilized in the decoder to do generation. + +![](images/93c2ecb8bcccc31efadd22c8dc219e4494129e0c5115e287fc648130457bc2ad.jpg) +Figure 2: The attention map of domain and action in the encoder. + +# 2.3 Decoder + +The decoder utilizes the composed latent information together with the intent to reconstruct the input utterance, modeling $p(x|z_d, z_a, y_d, y_a)$ . As shown in Figure 1, a residual connection is built from the input representation to the decoder to get the embeddings for all the tokens. To keep a fixed length and introduce the composed latent information $z$ into the decoder, we replace the first [CLS] token with $z$ . + +The decoder is built with multiple transformer layers to generate the utterance. Text generation is a sequential process in which the left context is used to predict the next token. To simulate this left-to-right generation process, another attention mask is utilized in the decoder: tokens in the intent can only attend to intent tokens, while tokens in the utterance can attend to both the intent and all the tokens to their left in the utterance. + +The first token $z$ , which holds the composed latent information, is only allowed to attend to itself because of the vanishing latent variable problem: the latent information can be overwhelmed by the information of the other tokens when adapting VAEs to natural language generators, whether built on LSTMs (Zhao et al., 2017) or transformers (Xia et al., 2020). To further increase the impact of the composed latent information $z$ and alleviate the vanishing latent variable problem, we concatenate the token representation of $z$ to all the other token embeddings output from the last transformer layer in the decoder. 
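The sampling step of Eq. (4) and the per-variable KL terms of Eq. (5) can be sketched as follows. The closed-form expression for the KL divergence to a standard Gaussian is the standard VAE identity, which the paper uses implicitly but does not spell out; the function names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Eq. (4): z = mu + sigma * eps, with eps ~ N(0, I), so sampling
    stays differentiable with respect to mu and sigma."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed form of D_KL(N(mu, sigma^2) || N(0, I)), one term of Eq. (5):
    0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2)."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# L_KL of Eq. (5) is then simply the sum of the domain and action terms:
# kl_to_standard_normal(mu_d, log_var_d) + kl_to_standard_normal(mu_a, log_var_a)
```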
+ +The hidden dimension increases to $2 \times d_h$ after the concatenation. To reduce the hidden dimension to $d_h$ and obtain the embeddings used to decode the vocabulary, two fully-connected (FC) layers followed by a layer normalization (Ba et al., 2016) are applied on top of the transformer layers. Gelu is used as the activation function in these two FC layers. The embeddings output from these two FC layers are decoded into tokens in the vocabulary. The embedding at position $i = \{1, \dots, n - 1\}$ is used to predict the next token at position $i + 1$ until the [SEP] token is generated. + +To train the decoder to reconstruct the input, a reconstruction loss is formulated as: + +$$
\mathcal{L}_r = -\mathbb{E}_{q(z_d \mid x, y_d), q(z_a \mid x, y_a)}[\log p(x \mid z_d, z_a, y_d, y_a)]. \tag{6}
$$ + +# 2.4 Learning with contrastive loss + +Although the model can generate utterances for a given intent, such as "are there any alarms set for seven am" for "Alarm Query", some negative utterances are also generated. For example, "am i free between six to seven pm" is generated for the intent "Alarm Query". This is likely because training lacks supervision for distinguishing in-class from out-of-class examples, especially for few-shot intents. To alleviate this problem, we adopt a contrastive loss in the objective function to reduce the probability of generating out-of-class samples. + +Given an intent $y = (y_{d},y_{a})$ , an in-class utterance $x^{+}$ from this intent, and an out-of-class utterance $x^{-}$ from another intent, the contrastive loss constrains the model to generate the in-class example $x^{+}$ with a higher probability than $x^{-}$ . In the same batch, we feed the in-class example $(y_{d},y_{a},x^{+})$ and the out-of-class example $(y_{d},y_{a},x^{-})$ into CLANG to model the likelihoods $P(x^{+}|y)$ and $P(x^{-}|y)$ . 
The chain rule is used to calculate the likelihood of the whole utterance: $p(x|y) = p(w_1|y)p(w_2|y,w_1)\dots p(w_n|y,w_1,\dots,w_{n - 1})$ . In the contrastive loss, the log-likelihood of the in-class example is constrained to be higher than that of the out-of-class example by a certain margin $\lambda$ : + +$$
\mathcal{L}_c = \max \{0, \lambda - \log p(x^{+}|y) + \log p(x^{-}|y)\}. \tag{7}
$$ + +To leverage challenging out-of-class utterances, we choose the most similar utterance with a different intent as the out-of-class utterance. Three indicators are considered to measure the similarity between the in-class utterance and each utterance with a different intent: the number of shared uni-grams between the utterances $s_1$ , the number of shared bi-grams between the utterances $s_2$ , and the number of shared uni-grams between the intent names $s_3$ . The sum of these three numbers, $s = s_1 + s_2 + s_3$ , is utilized to find the out-of-class utterance with the highest similarity. If multiple utterances share the same highest similarity $s$ , we randomly choose one as the negative utterance. + +The overall loss function is the summation of the KL loss, the reconstruction loss, and the contrastive loss: + +$$
\mathcal{L} = \mathcal{L}_{KL} + \mathcal{L}_r + \mathcal{L}_c. \tag{8}
$$ + +# 2.5 Generalized Few-shot Intent Detection + +Utterances for few-shot intents are generated by sampling the two latent variables, $z_{d}$ and $z_{a}$ , separately from multivariate standard Gaussian distributions. Beam search is applied to do the generation. To improve the diversity of the generated utterances, we sample the latent variables $s$ times and save the top $k$ results each time. The overall generation process follows that of Xia et al. (2020). + +These generated utterances are added to the original training dataset to alleviate the scarce annotation problem. We finetune BERT with the augmented dataset to solve the generalized few-shot intent detection task. 
The whole pipeline is referred to as BERT + CLANG in the experiments. + +
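Stepping back to Section 2.4, the negative-example selection and the margin loss of Eq. (7) can be sketched as follows. The helper names are ours and the tokenization is deliberately naive; the margin default of 0.5 is the value reported in Section 3.3:

```python
def similarity(u, v, intent_u, intent_v):
    """s = s1 + s2 + s3 (Sec. 2.4): shared uni-grams and bi-grams between
    the two utterances, plus shared uni-grams between the intent names."""
    s1 = len(set(u) & set(v))
    s2 = len(set(zip(u, u[1:])) & set(zip(v, v[1:])))
    s3 = len(set(intent_u) & set(intent_v))
    return s1 + s2 + s3

def contrastive_loss(log_p_pos, log_p_neg, margin=0.5):
    """Eq. (7): max(0, lambda - log p(x+|y) + log p(x-|y)). The loss is zero
    once the in-class log-likelihood beats the out-of-class one by the margin."""
    return max(0.0, margin - log_p_pos + log_p_neg)
```

For instance, `similarity(["what", "will", "be"], ["what", "will", "he"], ["weather", "query"], ["alarm", "query"])` counts two shared uni-grams, one shared bi-gram, and one shared intent word.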
| Dataset | SNIPS-NLU | NLUED |
| --- | --- | --- |
| Vocab Size | 10,896 | 6,761 |
| #Total Classes | 7 | 64 |
| #Few-shot Classes | 2 | 16 |
| #Few-shots / Class | 1 or 5 | 1 or 5 |
| #Training Examples | 7,858 | 7,430 |
| #Training Examples / Class | 1571.6 | 155 |
| #Test Examples | 2,799 | 1,076 |
| Average Sentence Length | 9.05 | 7.68 |
+ +Table 1: Data Statistics for SNIPS-NLU and NLUED. #Few-shot examples are excluded from the #Training Examples. For NLUED, the statistics are reported for KFold_1. + +# 3 Experiments + +To evaluate the effectiveness of the proposed approach for generating labeled examples for few-shot intents, experiments are conducted for the GFSID task on two real-world datasets. The few-shot intents are augmented with utterances generated from CLANG. + +# 3.1 Datasets + +Following (Xia et al., 2020), two public intent detection datasets are used in the experiments: SNIPS-NLU (Coucke et al., 2018) and NLUED (Xingkun Liu and Rieser, 2019). These two datasets contain utterances from users interacting with intelligent assistants and are annotated with pre-defined intents. Dataset details are illustrated in Table 1. + +SNIPS-NLU1 contains seven intents in total. Two of them (RateBook and AddToPlaylist) are regarded as few-shot intents. The others are used as existing intents with sufficient annotation. We randomly choose $80\%$ of the whole data as the training data and $20\%$ as the test data. + +$\mathbf{NLUED}^{2}$ is a natural language understanding dataset with 64 intents for human-robot interaction in the home domain, in which 16 intents are randomly selected as the few-shot ones. A sub-corpus of 11,036 utterances with 10-fold cross-validation splits is utilized. + +# 3.2 Baselines + +We compare the proposed model with a few-shot learning model and several data augmentation methods. 1) Prototypical Network (Snell et al., 2017) (PN) is a distance-based few-shot learning model. It can be extended to the GFSID task naturally by providing the prototypes for all the intents. BERT is used as the encoder for PN to provide a fair comparison. We fine-tune BERT together with the PN model. This variation is referred to as BERT-PN+. 2) BERT. For this baseline, we oversample the few-shot intents by duplicating the few-shot examples up to the maximum number of training examples for one class. 
3) SVAE (Bowman et al., 2015) is a variational autoencoder built with LSTMs. 4) CGT (Hu et al., 2017) adds a discriminator based on SVAE to classify the sentence attributes. 5) EDA (Wei and Zou, 2019) uses simple data augmentation rules for language transformation. We apply three rules in the experiment, including insert, delete, and swap. 6) CG-BERT (Xia et al., 2020) is the first work that combines CVAE with BERT to do few-shot text generation. BERT is fine-tuned with the augmented training data for these generation baselines. The whole pipelines are referred to as BERT + SVAE, BERT + CGT, BERT + EDA and BERT + CG-BERT in Table 2. An ablation study is also provided to understand the importance of the contrastive loss by removing it from CLANG. + +
| | Many-shot | Few-shot | H-Mean | Many-shot | Few-shot | H-Mean |
| --- | --- | --- | --- | --- | --- | --- |
| | SNIPS-NLU 1-shot | | | SNIPS-NLU 5-shot | | |
| BERT-PN+ | 92.66 ± 4.49 | 60.52 ± 7.58 | 72.99 ± 5.97 | 95.96 ± 1.13 | 86.03 ± 2.00 | 90.71 ± 1.19 |
| BERT | 98.20 ± 0.06 | 44.42 ± 4.35 | 57.74 ± 7.50 | 98.34 ± 0.10 | 81.82 ± 6.16 | 89.22 ± 3.74 |
| BERT + SVAE | 98.24 ± 0.09 | 45.15 ± 5.54 | 61.67 ± 5.11 | 98.34 ± 0.06 | 82.10 ± 4.06 | 89.49 ± 2.47 |
| BERT + CGT | 98.20 ± 0.07 | 45.80 ± 5.68 | 62.30 ± 5.17 | 98.32 ± 0.14 | 82.65 ± 4.31 | 89.78 ± 2.83 |
| BERT + EDA | 98.20 ± 0.08 | 47.52 ± 5.96 | 63.87 ± 5.29 | 98.09 ± 0.18 | 82.00 ± 3.47 | 89.30 ± 2.12 |
| BERT + CG-BERT | 98.13 ± 0.15 | 63.04 ± 5.49 | 76.65 ± 4.24 | 98.30 ± 0.17 | 86.89 ± 4.05 | 92.20 ± 2.32 |
| BERT + CLANG | 98.34 ± 0.10 | 64.63 ± 6.16 | 77.86 ± 4.39 | 98.34 ± 0.06 | 88.04 ± 1.34 | 92.90 ± 0.71 |
| | NLUED 1-shot | | | NLUED 5-shot | | |
| BERT-PN+ | 81.24 ± 2.76 | 18.95 ± 4.42 | 30.67 ± 5.53 | 83.41 ± 2.62 | 60.28 ± 4.19 | 69.93 ± 3.49 |
| BERT | 94.00 ± 0.93 | 7.88 ± 3.28 | 14.39 ± 5.66 | 94.12 ± 0.89 | 51.69 ± 3.19 | 66.67 ± 2.51 |
| BERT + SVAE | 93.80 ± 0.70 | 8.88 ± 3.66 | 16.01 ± 6.06 | 93.60 ± 0.63 | 54.03 ± 3.91 | 68.42 ± 3.06 |
| BERT + CGT | 94.00 ± 0.66 | 9.33 ± 3.68 | 16.78 ± 6.16 | 93.61 ± 0.63 | 54.70 ± 4.06 | 68.96 ± 3.17 |
| BERT + EDA | 93.78 ± 0.66 | 11.65 ± 4.89 | 20.41 ± 7.56 | 93.71 ± 0.64 | 57.22 ± 4.35 | 70.95 ± 3.35 |
| BERT + CG-BERT | 94.01 ± 0.70 | 20.39 ± 5.77 | 33.12 ± 7.92 | 93.80 ± 0.60 | 61.06 ± 4.29 | 73.88 ± 3.10 |
| BERT + CLANG | 93.60 ± 0.79 | 22.03 ± 6.10 | 35.29 ± 8.05 | 93.29 ± 0.86 | 66.44 ± 3.07 | 77.56 ± 2.05 |
+ +Table 2: Generalized few-shot intent detection with 1-shot and 5-shot settings on SNIPS-NLU and NLUED. Many-shot is the accuracy on the many-shot intents $(acc_{m})$ , Few-shot is the accuracy on the few-shot intents $(acc_{f})$ , and H-Mean is the harmonic mean of the two accuracies. + +# 3.3 Implementation Details + +Both the encoder and the decoder use six transformer layers. Pre-trained weights from BERT-base are used to initialize the embeddings and the transformer layers: the weights of the first six layers of BERT-base initialize the transformer layers in the encoder, and the weights of the last six layers initialize the decoder. The Adam optimizer (Kingma and Ba, 2014) is applied in all the experiments. The margin for the contrastive loss is 0.5 in all the settings. All the hidden dimensions used in CLANG are 768. For CLANG, the learning rate is 1e-5 and the batch size is 16. Each epoch has 1000 steps. Fifty examples from the training data are sampled as the validation set. The reconstruction error on the validation set is used to search for the number of training epochs in the range [50, 75, 100]. The reported performances of CLANG and of the contrastive-loss ablation are both obtained with 100 training epochs. + +The hyperparameters of the generation process, including the top index $k$ and the number of sampling times $s$ , are chosen by evaluating the quality of the generated utterances. The quality evaluation is described in Section 3.5. We search $s$ in [10, 20] and $k$ in [20, 30]. We use $k = 30$ and $s = 20$ for BERT + CLANG on NLUED, and $k = 30$ and $s = 10$ for all the other experiments. When fine-tuning BERT for the GFSID task, we fix the hyperparameters as follows: the batch size is 32, the learning rate is 2e-5, and the number of training epochs is 3. + +# 3.4 Experiment Results + +The experiment results for the generalized few-shot intent detection task are shown in Table 2. 
Performance is reported for both datasets under the 1-shot and 5-shot settings. For SNIPS-NLU, the performance is reported as the average and standard deviation over 5 runs. The results on NLUED are reported over 10 folds. + +Three metrics are used to evaluate the models: the accuracy on existing many-shot intents $(acc_{m})$ , the accuracy on few-shot intents $(acc_{f})$ , and their harmonic mean $(H)$ , calculated as: + +$$
H = 2 \times (acc_m \times acc_f) / (acc_m + acc_f). \tag{9}
$$ + +We choose the harmonic mean as our evaluation criterion instead of the arithmetic mean because the arithmetic mean is dominated by the many-shot accuracy $acc_{m}$ at the expense of the few-shot accuracy $acc_{f}$ (Xian et al., 2017). In contrast, the harmonic mean is high only when the accuracies on both many-shot and few-shot intents are high. + +As illustrated in Table 2, the proposed pipeline BERT + CLANG achieves state-of-the-art performance on the accuracy for many-shot intents, few-shot intents, and their harmonic mean on the SNIPS-NLU dataset. On the NLUED dataset, BERT + CLANG outperforms all the baselines on the accuracy for few-shot intents and the harmonic mean, while achieving results comparable to the best baseline on many-shot intents. Since the many-shot intents have sufficient training data, the improvement mainly comes from the few-shot intents with scarce annotation. For example, the accuracy for few-shot intents on NLUED with the 5-shot setting improves by $5\%$ over the best baseline (BERT + CG-BERT). + +Compared to the few-shot learning method, CLANG achieves better performance consistently in all the settings. 
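Eq. (9) is a one-liner; the sketch below (our naming) also shows why the harmonic mean penalizes an imbalanced pair of accuracies far more than the arithmetic mean does:

```python
def harmonic_mean(acc_m, acc_f):
    """Eq. (9): H = 2 * acc_m * acc_f / (acc_m + acc_f). H is dragged
    toward the lower of the two accuracies."""
    return 2 * acc_m * acc_f / (acc_m + acc_f)

# A balanced pair keeps its value, an imbalanced pair is punished:
# harmonic_mean(50.0, 50.0) -> 50.0, while harmonic_mean(90.0, 10.0) -> 18.0,
# even though both pairs have the same arithmetic mean of 50.
```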
BERT-PN+ achieves decent performance on many-shot intents but lacks the ability to produce embeddings that generalize from existing intents to few-shot intents. + +For the data augmentation baselines, CLANG obtains the best performance on few-shot intents and the harmonic mean. These results demonstrate the high quality and diversity of the utterances generated from CLANG. CGT and SVAE barely improve the performance for few-shot intents. They only work well with sufficient training data, and the utterances generated by these two models are almost the same as the few-shot examples. The improvement from EDA is also limited, since it only provides simple language transformations such as insertion and deletion. Compared with CG-BERT, which also incorporates the pre-trained language model BERT, CLANG further improves the ability to generate utterances for few-shot intents with composed natural language generation. + +From the ablation study illustrated in Table 3, removing the contrastive loss decreases the accuracy for few-shot intents and the harmonic mean. This shows that the contrastive loss regularizes the generation process and contributes to the downstream classification task.
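Section 3.3 fixes the contrastive-loss margin at 0.5; the exact form of $\mathcal{L}_v$ is defined earlier in the paper. As a loose, hedged illustration of how a margin-based contrastive term behaves, here is a hinge-style sketch (the functional form below is an assumption for illustration, not CLANG's actual loss):

```python
def contrastive_margin_loss(logp_pos, logp_neg, margin=0.5):
    """Hinge-style contrastive term (illustration only): nonzero unless the
    positive example out-scores the negative one by at least `margin`."""
    return max(0.0, margin - (logp_pos - logp_neg))

print(contrastive_margin_loss(-1.0, -2.0))  # gap 1.0 > margin -> 0.0
print(contrastive_margin_loss(-1.0, -1.2))  # gap 0.2 < margin -> positive loss
```

Once the positive utterance's score exceeds the negative's by the margin, the term contributes nothing, so the regularizer only acts on pairs the model still confuses.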
| NLUED 1-shot | Many-shot | Few-shot | H-Mean |
| --- | --- | --- | --- |
| CLANG | 93.60 ± 0.79 | 22.03 ± 6.10 | 35.29 ± 8.05 |
| CLANG $-\mathcal{L}_v$ | 93.88 ± 0.84 | 21.76 ± 6.44 | 34.92 ± 8.48 |
| **NLUED 5-shot** | | | |
| CLANG | 93.29 ± 0.86 | 66.44 ± 3.07 | 77.56 ± 2.05 |
| CLANG $-\mathcal{L}_v$ | 92.94 ± 0.72 | 65.26 ± 2.95 | 76.64 ± 2.06 |
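The motivation for Eq. (9) can be checked numerically with the CLANG 1-shot numbers reported above. A minimal sketch (the table's H-Mean of 35.29 is averaged per fold, which explains the small difference from applying Eq. (9) to the averaged accuracies):

```python
def harmonic_mean(acc_m, acc_f):
    """Harmonic mean of many-shot and few-shot accuracy, Eq. (9)."""
    return 0.0 if acc_m + acc_f == 0 else 2 * acc_m * acc_f / (acc_m + acc_f)

acc_m, acc_f = 93.60, 22.03  # CLANG on NLUED, 1-shot
print(f"arithmetic: {(acc_m + acc_f) / 2:.1f}")          # 57.8 -- masks the weak few-shot score
print(f"harmonic:   {harmonic_mean(acc_m, acc_f):.1f}")  # 35.7 -- stays low
```

The arithmetic mean still looks moderate despite the 22% few-shot accuracy, while the harmonic mean exposes the imbalance.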
+ +# 3.5 Result Analysis + +To further understand the proposed model, CLANG, result analysis and generation quality evaluation are provided in this section. We take fold 7 of the NLUED dataset with the 5-shot setting as an example. It contains 16 novel intents with 5 examples per intent. + +The intent in this paper is defined as a pair of a domain and an action. The domain or the action might be shared among the many-shot intents and the few-shot intents. A domain/action that exists in many-shot intents is called a seen domain/action; otherwise, it is called a novel domain/action. To analyze how well our model performs on different few-shot intents, we split few-shot intents into four types: a novel domain with a seen action $(\mathrm{Novel}_d)$ , a novel action with a seen domain $(\mathrm{Novel}_a)$ , both domain and action seen $(\mathrm{Dual}_s)$ , and both domain and action novel $(\mathrm{Dual}_u)$ . We compare our proposed model with CG-BERT on these different types. As illustrated in Table 4, CLANG consistently performs better than CG-BERT on all the types. The performance for intents with a seen action and a novel domain improves by $20.90\%$ . This observation indicates that our model is better at generalizing seen actions into novel domains. + +Table 3: Ablation study for removing the contrastive loss ${\mathcal{L}}_{v}$ from CLANG on NLUED.
| | Total | $\mathrm{Novel}_d$ | $\mathrm{Novel}_a$ | $\mathrm{Dual}_s$ | $\mathrm{Dual}_u$ |
| --- | --- | --- | --- | --- | --- |
| Number | 16 | 4 | 8 | 3 | 1 |
| CG-BERT | 58.76% | 47.76% | 60.43% | 67.34% | 63.16% |
| CLANG | 67.88% | 68.66% | 62.58% | 75.51% | 84.21% |
| Improvement | +9.12% | +20.90% | +2.15% | +8.17% | +21.05% |
+ +Table 4: Accuracies on different types of few-shot intents. + +For a few-shot natural language generation model, diversity is an important indicator of quality. We compare the percentage of unique utterances generated by CLANG with CG-BERT. In CG-BERT, the top 20 results are generated for each intent by sampling the hidden variable once, giving 257 unique sentences out of 320 utterances (80.3%). In CLANG, the top 30 results for each intent are generated by sampling the latent variables once. We obtain 479 unique sentences out of 480 utterances (99.8%), much higher than CG-BERT. + +Several generation examples are shown in Table 5. CLANG can generate good examples (indicated by G) that have new slot values (such as time, place, or action) that do not exist in the few-shot examples (indicated by R). For example, G1 has a new time slot and G5 has a new action. Bad cases (indicated by B) like B1 and B5 fill in the sentence with improper slot values. CLANG can also learn sentences from other intents. For instance, G3 transfers the expression in R3 from "Recommendation Events" to "Recommendation Movies".
# Intent: Alarm Query + +R1: what time is my alarm set for tomorrow morning +G1: what time is my alarm set for this weekend +B1: how much my alarm set for tomorrow morning +R2: i need to set an alarm how many do i have set +G2: do i have an alarm set for tomorrow morning +B2: how many emails i have set + +# Intent: Recommendation Movies + +R3 (events): is there anything to do tonight +G3 (movies): are there anything movie tonight +R4 (events): what bands are playing in town this weekend +B4 (movies): what bands are playing in town this weekend + +# Intent: Takeaway Order + +R5: places with pizza delivery near me +G5: search for the delivery near me +B5: compose a delivery near me +G6: places with pizza delivery near my location +B6: places with pizza delivery near my pizza + +A case study is further provided for the Alarm Query intent with human evaluation. There are 121 unique utterances generated in total. As shown in Table 6, $80.99\%$ are good examples and $19.01\%$ are bad cases. Good cases mainly fall into four types: Add/Delete/Replacement, which provides simple data augmentation; New Time slot, which introduces a new time slot value; New Question, which queries the alarm with new question words; and Combination, which combines two utterances. Bad cases either come from a wrong intent (an intent related to Query or Alarm) or use a wrong question word. + +Table 5: Generation examples from CLANG. R are real examples from the few shots, G are good generation examples, and B are bad cases.
| Type | Count | Percent |
| --- | --- | --- |
| Add/Delete/Replacement | 33 | 27.27% |
| New Time slot | 30 | 24.79% |
| New Question | 28 | 23.14% |
| Combination | 7 | 5.79% |
| Total Good Cases | 98 | 80.99% |
| Wrong Intent (Query) | 10 | 8.26% |
| Wrong Intent (Alarm) | 7 | 5.79% |
| Wrong Question | 6 | 4.96% |
| Total Bad Cases | 23 | 19.01% |
+ +Table 6: Generation case study for the Alarm Query intent. + +# 4 Related Work + +Generative Data Augmentation for SLU Generative data augmentation methods alleviate the problem of data scarcity by creating artificial training data with generation models. Recent works (Wei and Zou, 2019; Malandrakis et al., 2019; Yoo et al., 2019) have explored this idea for SLU tasks like intent detection. Wei and Zou (2019) provide data augmentation for natural language with simple transformation rules such as insertion, deletion, and swapping. Malandrakis et al. (2019) and Yoo et al. (2019) utilize variational autoencoders (Kingma and Welling, 2013) to generate training data for SLU tasks. Malandrakis et al. (2019) investigate a template-based text generation model to augment the training data for intelligent artificial agents. Yoo et al. (2019) generate fully annotated utterances to alleviate the data scarcity issue in spoken language understanding tasks. These models use LSTM encoders (Hochreiter and Schmidhuber, 1997), which limits their model capacity. Xia et al. (2020) provide the first work that combines CVAE with BERT to generate utterances for generalized few-shot intent detection. + +Recently, large-scale pre-trained language models have been proposed for conditional text generation tasks (Dathathri et al., 2019; Keskar et al., 2019), but they are only evaluated by human examination. They do not aim at improving downstream classification tasks in low-resource conditions. + +Contrastive Learning in NLP Contrastive learning, which learns to distinguish positive data from negative examples, has been widely used in NLP (Gutmann and Hyvarinen, 2010; Mikolov et al., 2013; Cho et al., 2019). Gutmann and Hyvarinen (2010) leverage the Noise Contrastive Estimation (NCE) metric to discriminate the observed data from artificially generated noise samples. Cho et al.
(2019) introduce contrastive learning for multi-document question generation by generating questions closely related to the positive set but far away from the negative set. Different from previous works, our contrastive loss learns a positive example against a negative example together with label information. + +# 5 Conclusion + +In this paper, we propose a novel model, the Composed Variational Natural Language Generator (CLANG), for few-shot intents. An intent is defined as a combination of a domain and an action to build connections between existing intents and few-shot intents. CLANG has a bi-latent variational encoder that uses two latent variables to learn disentangled semantic features corresponding to different parts of the intent. These disentangled features are composed together to generate training examples for few-shot intents. Additionally, a contrastive loss is adopted to regularize the generation process. Experimental results on two real-world intent detection datasets show that our proposed method achieves state-of-the-art performance for GFSID. + +# Acknowledgments + +We thank the reviewers for their valuable comments. This work is supported in part by NSF under grants III-1763325, III-1909323, and SaTC-1930941. + +# References + +Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. +Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. 2015. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349. +Yun-Nung Chen, Dilek Hakkani-Tur, Gokhan Tur, Jianfeng Gao, and Li Deng. 2016. End-to-end memory networks with knowledge carryover for multi-turn spoken language understanding. In INTERSPEECH, pages 3245-3249. +Woon Sang Cho, Yizhe Zhang, Sudha Rao, Asli Celikyilmaz, Chenyan Xiong, Jianfeng Gao, Mengdi Wang, and Bill Dolan. 2019. Contrastive Multi-document Question Generation. arXiv e-prints, page arXiv:1911.03047.
+Alice Coucke, Alaa Saade, Adrien Ball, Théodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, et al. 2018. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. arXiv preprint arXiv:1805.10190. +Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2019. Plug and play language models: a simple approach to controlled text generation. arXiv preprint arXiv:1912.02164. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. +Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. arXiv preprint arXiv:1905.03197. + +Michael Gutmann and Aapo Hyvarinen. 2010. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 297-304. +Dan Hendrycks and Kevin Gimpel. 2016. Bridging nonlinearities and stochastic regularizers with gaussian error linear units. arXiv preprint arXiv:1606.08415. +Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780. +Matthew B Hoy. 2018. Alexa, siri, cortana, and more: An introduction to voice assistants. Medical reference services quarterly, 37(1):81-88. +Jian Hu, Gang Wang, Fred Lochovsky, Jian-tao Sun, and Zheng Chen. 2009. Understanding user's query intent with wikipedia. In WWW, pages 471-480. +Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. 
In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1587-1596. JMLR.org. +Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858. +Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. +Diederik P Kingma and Max Welling. 2013. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. +Durk P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. 2014. Semisupervised learning with deep generative models. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3581-3589. Curran Associates, Inc. +Nikolaos Malandrakis, Minmin Shen, Anuj Goyal, Shuyang Gao, Abhishek Sethi, and Angeliki Metallinou. 2019. Controlled text generation for data augmentation in intelligent artificial agents. arXiv preprint arXiv:1910.03487. +Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119. +Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pages 4077-4087. + +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008. +IBM Watson Assistant. 2017. Defining intents. In https://cloud.ibm.com/docs/assistant-icp?topic= assistant-icp-intents. +Jason W Wei and Kai Zou. 2019. Eda: Easy data augmentation techniques for boosting performance on text classification tasks. arXiv preprint arXiv:1901.11196. 
+Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. +Congying Xia, Chenwei Zhang, Hoang Nguyen, Jiawei Zhang, and Philip Yu. 2020. Cg-bert: Conditional text generation with bert for generalized few-shot intent detection. arXiv preprint arXiv:2004.01881. +Congying Xia, Chenwei Zhang, Xiaohui Yan, Yi Chang, and Philip S Yu. 2018. Zero-shot user intent detection via capsule neural networks. arXiv preprint arXiv:1809.00385. +Yongqin Xian, Bernt Schiele, and Zeynep Akata. 2017. Zero-shot learning-the good, the bad and the ugly. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4582-4591. +Xingkun Liu, Arash Eshghi, Pawel Swietojanski, and Verena Rieser. 2019. Benchmarking natural language understanding services for building conversational agents. In Proceedings of the Tenth International Workshop on Spoken Dialogue Systems Technology (IWSDS), pages xxx-xxx, Ortigia, Siracusa (SR), Italy. Springer. +Hu Xu, Bing Liu, Lei Shu, and P Yu. 2019. Open-world learning and application to product classification. In The World Wide Web Conference, pages 3413-3419. ACM. +Puyang Xu and Ruhi Sarikaya. 2013. Convolutional neural network based triangular crf for joint intent detection and slot filling. In ASRU, pages 78-83. +Wenpeng Yin. 2020. Meta-learning for few-shot natural language processing: A survey. arXiv preprint arXiv:2007.09604. +Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. arXiv preprint arXiv:1909.00161. + +Kang Min Yoo, Youhyun Shin, and Sang-goo Lee. 2019. Data augmentation for spoken language understanding via joint variational generation.
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7402-7409. +Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. arXiv preprint arXiv:1703.10960. \ No newline at end of file diff --git a/composedvariationalnaturallanguagegenerationforfewshotintents/images.zip b/composedvariationalnaturallanguagegenerationforfewshotintents/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..f42154cc0d8485fc42e47fec9704d87c81b42c69 --- /dev/null +++ b/composedvariationalnaturallanguagegenerationforfewshotintents/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:24664b0c7f7ba2188dfcf80093ae8f644a3d5e38f913a24bfbcb3b5ed0ebf6ea +size 467600 diff --git a/composedvariationalnaturallanguagegenerationforfewshotintents/layout.json b/composedvariationalnaturallanguagegenerationforfewshotintents/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ab7853ae580f34a609c2f5a04cd8de991abdee1c --- /dev/null +++ b/composedvariationalnaturallanguagegenerationforfewshotintents/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ebaaa6e85ec08f520cadc55e06678b123d93a7b2fc7a9ba441fede43a89c1158 +size 396975 diff --git a/compressingtransformerbasedsemanticparsingmodelsusingcompositionalcodeembeddings/ac273aea-8b48-4a3c-a4dd-d52404fdd4bc_content_list.json b/compressingtransformerbasedsemanticparsingmodelsusingcompositionalcodeembeddings/ac273aea-8b48-4a3c-a4dd-d52404fdd4bc_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..5aaa74103846eeea8aac8ea932d58d4ce8d143a0 --- /dev/null +++ b/compressingtransformerbasedsemanticparsingmodelsusingcompositionalcodeembeddings/ac273aea-8b48-4a3c-a4dd-d52404fdd4bc_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:66411792db4b3817c094fda3da52901d8608719d04da56fb5d2aabf933d00de1 +size 51066 diff --git a/compressingtransformerbasedsemanticparsingmodelsusingcompositionalcodeembeddings/ac273aea-8b48-4a3c-a4dd-d52404fdd4bc_model.json b/compressingtransformerbasedsemanticparsingmodelsusingcompositionalcodeembeddings/ac273aea-8b48-4a3c-a4dd-d52404fdd4bc_model.json new file mode 100644 index 0000000000000000000000000000000000000000..37de8755719f6f6e220131119e09e89405cd2d69 --- /dev/null +++ b/compressingtransformerbasedsemanticparsingmodelsusingcompositionalcodeembeddings/ac273aea-8b48-4a3c-a4dd-d52404fdd4bc_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8eae4f5e044c4b011ce40e9dd4e575804865d93de50ceaa7dbc687dd3a72093d +size 63313 diff --git a/compressingtransformerbasedsemanticparsingmodelsusingcompositionalcodeembeddings/ac273aea-8b48-4a3c-a4dd-d52404fdd4bc_origin.pdf b/compressingtransformerbasedsemanticparsingmodelsusingcompositionalcodeembeddings/ac273aea-8b48-4a3c-a4dd-d52404fdd4bc_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..edebb12a95c97a95787f333119827f2b25d408b8 --- /dev/null +++ b/compressingtransformerbasedsemanticparsingmodelsusingcompositionalcodeembeddings/ac273aea-8b48-4a3c-a4dd-d52404fdd4bc_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:78f718445c5edf232b98473f1a42be1a0a66611e2febc1d18c83b03b6dc655b7 +size 261462 diff --git a/compressingtransformerbasedsemanticparsingmodelsusingcompositionalcodeembeddings/full.md b/compressingtransformerbasedsemanticparsingmodelsusingcompositionalcodeembeddings/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a6a91ee90dd98859fba0927ff01a714a580b5bdf --- /dev/null +++ b/compressingtransformerbasedsemanticparsingmodelsusingcompositionalcodeembeddings/full.md @@ -0,0 +1,175 @@ +# Compressing Transformer-Based Semantic Parsing Models using Compositional Code Embeddings + +Prafull Prakash $^{1*}$ , 
Saurabh Kumar Shashidhar $^{1*}$ , Wenlong Zhao $^{1*}$ , Subendhu Rongali $^{1}$ , Haidar Khan $^{2}$ , and Michael Kayser $^{2}$ + +1University of Massachusetts Amherst + +2Amazon Alexa + +{prafullpraka, ssaurabhkuma, wenlongzhao, strongali}@cs.umass.edu {khhaida, mikayser}@amazon.com + +# Abstract + +The current state-of-the-art task-oriented semantic parsing models use BERT or RoBERTa as pretrained encoders; these models have huge memory footprints. This poses a challenge to their deployment for voice assistants such as Amazon Alexa and Google Assistant on edge devices with limited memory budgets. We propose to learn compositional code embeddings to greatly reduce the sizes of BERT-base and RoBERTa-base. We also apply the technique to DistilBERT, ALBERT-base, and ALBERT-large, three already compressed BERT variants which attain similar state-of-the-art performances on semantic parsing with much smaller model sizes. We observe $95.15\% \sim 98.46\%$ embedding compression rates and $20.47\% \sim 34.22\%$ encoder compression rates, while preserving $>97.5\%$ semantic parsing performances. We provide the recipe for training and analyze the trade-off between code embedding sizes and downstream performances. + +# 1 Introduction + +Conversational virtual assistants, such as Amazon Alexa, Google Home, and Apple Siri, have become increasingly popular in recent times. These systems can process queries from users and perform tasks such as playing music and finding locations. A core component in these systems is a task-oriented semantic parsing model that maps natural language expressions to structured representations containing intents and slots that describe the task to perform. For example, the expression Can you play some songs by Coldplay? may be converted to Intent: PlaySong, Artist: Coldplay, and the expression Turn off the bedroom light may be converted to Intent: TurnOffLight, Device: bedroom. 
+ +Task-oriented semantic parsing is traditionally approached as a joint intent classification and slot filling task. Kamath and Das (2018) provide a comprehensive survey of models proposed to solve this task. Researchers have developed semantic parsers based on Recurrent Neural Networks (Mesnil et al., 2013; Liu and Lane, 2016; Hakkani-Tür et al., 2016), Convolutional Neural Networks (Xu and Sarikaya, 2013; Kim, 2014), Recursive Neural Networks (Guo et al., 2014), Capsule Networks (Sabour et al., 2017; Zhang et al., 2019), and slot-gated attention-based models (Goo et al., 2018). + +The current state-of-the-art models on the SNIPS (Coucke et al., 2018), ATIS (Price, 1990), and Facebook TOP (Gupta et al., 2018) datasets are all based on BERT-style (Devlin et al., 2018; Liu et al., 2019) encoders and transformer architectures (Chen et al., 2019; Castellucci et al., 2019; Rongali et al., 2020). It is challenging to deploy these large models on edge devices and enable voice assistants to operate locally instead of relying on central cloud services, due to the limited memory budgets on these devices. However, there has been a growing push towards the idea of TinyAI. + +In this paper, we aim to build space-efficient task-oriented semantic parsing models that produce near state-of-the-art performance by compressing existing large models. We propose to learn compositional code embeddings to significantly compress BERT-base and RoBERTa-base encoders with little performance loss. We further use ALBERT-base/large (Lan et al., 2019) and DistilBERT (Sanh et al., 2019) to establish light baselines that achieve similar state-of-the-art performances, and apply the same code embedding technique. We show that our technique is complementary to the compression techniques used in ALBERT and DistilBERT.
With all variants, we achieve $95.15\% \sim 98.46\%$ embedding compression rates and $20.47\% \sim 34.22\%$ encoder compression rates, with $>97.5\%$ semantic parsing performance preservation. + +# 2 Related Compression Techniques + +# 2.1 BERT Compression + +Many techniques have been proposed to compress BERT (Devlin et al., 2018). Ganesh et al. (2020) provide a survey of these methods. Most existing methods focus on alternative architectures in transformer layers or on learning strategies. + +In our work, we use DistilBERT and ALBERT-base as light pretrained language model encoders for semantic parsing. DistilBERT (Sanh et al., 2019) uses distillation to pretrain a model that is $40\%$ smaller and $60\%$ faster than BERT-base, while retaining $97\%$ of its downstream performance. ALBERT (Lan et al., 2019) factorizes the embeddings, shares parameters among the transformer layers of BERT, and scales better than BERT. ALBERT-xxlarge outperforms BERT-large on GLUE (Wang et al., 2018), RACE (Lai et al., 2017), and SQUAD (Rajpurkar et al., 2016) while using fewer parameters. + +We use compositional code learning (Shu and Nakayama, 2017) to compress the model embeddings, which contain a substantial fraction of the model parameters. ALBERT previously compressed its embeddings via factorization; we find that more compression is possible with code embeddings. + +# 2.2 Embedding Compression + +Various techniques have been proposed to learn compressed versions of non-contextualized word embeddings such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014). Subramanian et al. (2018) use denoising k-sparse autoencoders to obtain binary, sparse, interpretable word embeddings. Chen et al. (2016) achieve sparsity by representing the embeddings of uncommon words as sparse linear combinations of the embeddings of common words. Lam (2018) achieves compression by quantizing the word embeddings to 1-2 bits per parameter. Faruqui et al.
(2015) use sparse coding in a dictionary learning setting to obtain sparse, non-negative word embeddings. Raunak (2017) achieves dense compression of word embeddings using PCA combined with a post-processing algorithm. Shu and Nakayama (2017) propose to represent word embeddings using compositional codes learned directly in an end-to-end fashion with neural networks. Essentially, a few common basis vectors are learned, and each embedding is reconstructed as a composition of them via a discrete code vector specific to each token. This yields a $98\%$ compression rate on sentiment analysis and $94\% - 99\%$ on machine translation tasks without performance loss, with LSTM-based models. All the above techniques are applied to embeddings such as Word2Vec and GloVe, or to LSTM models. + +We aim to learn space-efficient embeddings for transformer-based models. We focus on compositional code embeddings (Shu and Nakayama, 2017) since they maintain the vector dimensions, do not require special kernels for computing in a sparse or quantized space, can be finetuned with transformer-based models end-to-end, and achieve extremely high compression rates. Chen et al. (2018) explore a similar idea to Shu and Nakayama (2017) and experiment with more complex composition functions and guidance for training the discrete codes. Chen and Sun (2019) further show that end-to-end training from scratch of models with code embeddings is possible. Given various pretrained language models, we find that the method proposed by Shu and Nakayama (2017) is straightforward and performs well in our semantic parsing experiments. + +# 3 Method + +# 3.1 Compositional Code Embeddings + +Shu and Nakayama (2017) apply additive quantization (Babenko and Lempitsky, 2014) to learn compositional code embeddings that reconstruct pretrained word embeddings such as GloVe (Pennington et al., 2014), or task-specific model embeddings such as those from an LSTM neural machine translation model.
Compositional code embeddings $E^{C}$ for vocabulary $V$ consist of a set of $M$ codebooks $E_{1}^{C}, E_{2}^{C}, \ldots, E_{M}^{C}$ , each with $K$ basis vectors of the same dimensionality $D$ as the reference embeddings $E$ , and a discrete code vector $(C_{w}^{1}, C_{w}^{2}, \ldots, C_{w}^{M})$ for each token $w$ in the vocabulary. The final embedding for $w$ is composed by summing up the $C_{w}^{i}$ th vector from the $i$ th codebook as $E^{C}(C_{w}) = \sum_{i=1}^{M} E_{i}^{C}(C_{w}^{i})$ . Codebooks and discrete codes are jointly learned using the mean squared distance objective: $(C^{*}, E^{C*}) = \arg \min_{C, E^{C}} \frac{1}{|V|} \sum_{w \in V} ||E^{C}(C_{w}) - E(w)||^{2}$ . For learning compositional codes, the Gumbel-softmax reparameterization trick (Jang et al., 2016; Maddison et al., 2016) is used for one-hot vectors corresponding to each discrete code. + +
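The lookup $E^{C}(C_{w}) = \sum_{i=1}^{M} E_{i}^{C}(C_{w}^{i})$ can be sketched in a few lines of NumPy. Random codebooks and codes stand in for learned ones here; $M = 32$ and $K = 16$ are the defaults used in our experiments, and the vocabulary and dimension match BERT-base:

```python
import numpy as np

V, D = 30522, 768   # BERT-base vocabulary size and embedding dimension
M, K = 32, 16       # number of codebooks and basis vectors per codebook

rng = np.random.default_rng(0)
codebooks = rng.standard_normal((M, K, D)).astype(np.float32)  # E_i^C
codes = rng.integers(0, K, size=(V, M))  # discrete code vector C_w per token

def embed(w):
    """Reconstruct E^C(C_w) = sum over i of E_i^C(C_w^i)."""
    return codebooks[np.arange(M), codes[w]].sum(axis=0)

vec = embed(42)
assert vec.shape == (D,)
```

During finetuning, the codes stay fixed while the codebook vectors are updated with the rest of the non-discrete parameters, as described in Section 3.2.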
| Encoder | Encoder Param# / Size | Emb Param# / Size | Size Ratio | CC Emb Size | CC Encoder Size | Emb Comp | Encoder Comp |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RoBERTa-base | 125.29M / 477.94MB | 38.60M / 147.25MB | 30.81% | 2.27MB | 332.96MB | 98.46% | 30.33% |
| BERT-base-uncased | 110.10M / 420.00MB | 23.44M / 89.42MB | 21.29% | 1.97MB | 332.55MB | 97.80% | 20.82% |
| DistilBERT-base-uncased | 66.99M / 255.55MB | 23.44M / 89.42MB | 34.99% | 1.97MB | 168.10MB | 97.80% | 34.22% |
| ALBERT-large-v2 | 17.85M / 68.09MB | 3.84M / 14.65MB | 21.52% | 0.71MB | 54.15MB | 95.15% | 20.47% |
| ALBERT-base-v2 | 11.81M / 45.05MB | 3.84M / 14.65MB | 32.52% | 0.71MB | 31.11MB | 95.15% | 30.94% |
+ +# 3.2 Transformer-Based Models with Compositional Code Embeddings + +In this work, we learn compositional code embeddings to reduce the size of the embeddings in pretrained contextualized language models. We extract the embedding tables from pretrained RoBERTa-base (Liu et al., 2019), BERT-base (Devlin et al., 2018), DistilBERT-base (Sanh et al., 2019), ALBERT-large-v2 and ALBERT-base-v2 (Lan et al., 2019) from the huggingface transformers library (Wolf et al., 2019) and follow the approach presented by Shu and Nakayama (2017) to learn the code embeddings. We then replace the embedding tables in the transformer models with the compositional code approximations and evaluate the compressed language models by finetuning on downstream tasks. When Shu and Nakayama (2017) feed compositional code embeddings into the LSTM neural machine translation model, they fix the embedding parameters and train the rest of the model from random initial values. In our experiments, we fix the discrete codes, initialize the transformer layers with those from the pretrained language models, initialize the task-specific output layers randomly, and finetune the codebook basis vectors with the rest of the non-discrete parameters. + +# 3.3 Size Advantage of Compositional Code Embeddings + +An embedding matrix $E \in \mathbb{R}^{|V| \times D}$ stored as 32-bit float point numbers, where $|V|$ is the vocabulary size and $D$ is the embedding dimension, requires 32|V|D bits. Its compositional code reconstruction requires 32MKD bits for $MK$ basis vectors, and $M\log_2K$ bits for codes of each of $|V|$ tokens. Since each discrete code takes an integer value in $[1, K]$ , it can be represented using $\log_2K$ bits. + +Table 1 illustrates the size advantage of compositional code embeddings for various pretrained transformer models (Wolf et al., 2019) used in our experiments. 
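The bit-count formulas above can be checked against the BERT-base row of Table 1 (vocabulary size $|V| = 30522$, $D = 768$, with the default $M = 32$ codebooks and $K = 16$ basis vectors):

```python
import math

V, D = 30522, 768       # BERT-base vocabulary size and embedding dimension
M, K = 32, 16           # default codebooks and basis vectors per codebook

orig_bits = 32 * V * D                           # float32 embedding table
cc_bits = 32 * M * K * D + V * M * math.log2(K)  # codebooks + discrete codes

MB = 8 * 1024 * 1024    # bits per mebibyte
print(f"original embedding: {orig_bits / MB:.2f}MB")  # 89.42MB, as in Table 1
print(f"code embedding:     {cc_bits / MB:.2f}MB")    # 1.97MB, as in Table 1
```

Most of the compressed footprint is the $MK = 512$ float32 basis vectors; the $\log_2 K = 4$ bits per code keep the per-token cost negligible.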
While the technique focuses on compressing the embedding table, it is compatible with other compression techniques for transformer models, including parameter sharing among transformer layers and embedding factorization, as used in ALBERT, and distillation, as used to train DistilBERT. In our experiments, we apply the code learning technique to compress the embeddings of five pretrained BERT variants by $95.15\% \sim 98.46\%$ to build competitive but significantly lighter semantic parsing models. + +Table 1: Model compression with compositional code ("cc") embeddings. The embedding layers are compressed by more than $95\%$ with compositional code embeddings in all of the BERT variants. + +# 4 Datasets + +Following Rongali et al. (2020), we evaluate our models on the SNIPS (Coucke et al., 2018), Airline Travel Information System (ATIS) (Price, 1990), and Facebook TOP (Gupta et al., 2018) datasets for task-oriented semantic parsing (Table 2). For SNIPS and ATIS, we use the same train/validation/test split as Goo et al. (2018). + +| Dataset | Train | Valid | Test | #Intent | #Slot |
| --- | --- | --- | --- | --- | --- |
| ATIS | 4,478 | 500 | 893 | 26 | 83 |
| SNIPS | 13,084 | 700 | 700 | 7 | 39 |
| Facebook TOP | 31,279 | 4,462 | 9,042 | 25 | 36 |

Table 2: Statistics for semantic parsing datasets. + +# 5 Experiments and Analyses + +For transformer model training, we base our implementation on the huggingface transformers library v2.6.0 (Wolf et al., 2019). We use the AdamW optimizer (Loshchilov and Hutter, 2017) with $10\%$ warmup steps and linear learning rate decay to 0. For code embedding learning, we base our implementation on that of Shu and Nakayama (2017). By default we learn code embeddings with 32 codebooks and 16 basis vectors per codebook. Unless otherwise specified, hyperparameters are chosen according to validation performance from one random run. We conduct our experiments on a mixture of Tesla M40, Titan X, 1080 Ti, and 2080 Ti GPUs. We use exact match (EM) and intent accuracy as evaluation metrics. Exact match requires correct predictions for all intents and slots in a query, and is our primary metric.
| Model | EM | Intent |
| --- | --- | --- |
| Joint BiRNN (Hakkani-Tür et al., 2016) | 73.2 | 96.9 |
| Attention BiRNN (Liu and Lane, 2016) | 74.1 | 96.7 |
| Slot Gated Full Attention (Goo et al., 2018) | 75.5 | 97.0 |
| CapsuleNLU (Zhang et al., 2019) | 80.9 | 97.3 |
| BERT-Seq2Seq-Ptr (Rongali et al., 2020) | 86.3 | 98.3 |
| RoBERTa-Seq2Seq-Ptr (Rongali et al., 2020) | 87.1 | 98.0 |
| BERT-Joint (Castellucci et al., 2019) | 91.6 | 99.0 |
| Joint BERT (Chen et al., 2019) | 92.8 | 98.6 |

| Ours | epo | lr | wd | EM-v | EM | Intent |
| --- | --- | --- | --- | --- | --- | --- |
| ALBERT-base | - | 5e-5 | 0.05 | 90.71 | 91.29 | 98.86 |
| ALBERT-base_CC | 1100 | 5e-5 | 0.01 | 90.00 | 89.14 | 98.14 |
| ALBERT-large | - | 3e-5 | 0.05 | 91.29 | 92.43 | 98.14 |
| ALBERT-large_CC | 1100 | 2e-5 | 0.05 | 91.14 | 92.43 | 98.71 |
| DistilBERT-base | - | 3e-5 | 0.05 | 90.29 | 91.14 | 98.57 |
| DistilBERT-base_CC | 900 | 6e-5 | 0.01 | 90.14 | 91.24 | 98.43 |
| BERT-base | - | 3e-5 | 0.05 | 92.14 | 92.29 | 99.14 |
| BERT-base_CC | 900 | 6e-5 | 0.05 | 91.29 | 90.71 | 98.71 |
# 5.1 SNIPS and ATIS

We implement a joint sequence-level and token-level classification layer for pretrained transformer models. The intent probabilities are predicted as $y^i = \text{softmax}(\mathrm{W}^i\mathrm{h}_0 + \mathrm{b}^i)$, where $\mathrm{h}_0$ is the hidden state of the [CLS] token. The slot probabilities for each token $j$ are predicted as $y_j^s = \text{softmax}(\mathrm{W}^s\mathrm{h}_j + \mathrm{b}^s)$. We use the cross entropy loss to maximize $p(y^i|x) \prod p(y_j^s|x)$, where $j$ ranges over the first word-piece token of each word in the query. We learn code embeddings for {500, 700, 900, 1100, 1300} epochs. We train transformer models with original and code embeddings all for 40 epochs with batch size 16 and sequence length 128. Uncased BERT and DistilBERT perform better than the cased versions. We experiment with peak learning rates {2e-5, 3e-5, ..., 6e-5} and weight decays {0.01, 0.05, 0.1}. As shown in Tables 3 and 4, we use different transformer encoders to establish strong baselines which achieve EM values that are within $1.5\%$ of the state-of-the-art.

On both datasets, models based on our compressed ALBERT-large-v2 encoder (54MB) preserve $>99.6\%$ EM of the previous state-of-the-art model (Chen et al., 2019), which uses a BERT encoder (420MB). In all settings, our compressed encoders preserve $>97.5\%$ EM of the uncompressed counterparts under the same training settings. We show that our technique is effective on a variety of pretrained transformer encoders.

Table 3: Results on SNIPS. "cc" indicates models with code embeddings. "epo" is the epoch number for offline code embedding learning. "lr" and "wd" are the peak learning rate and weight decay for whole model finetuning. "EM-v", "EM", "Intent" indicate validation exact match, test exact match, and test intent accuracy.
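The joint classification layer described above can be sketched as follows (a NumPy illustration with made-up sizes and random stand-in weights; the real model projects pretrained transformer hidden states):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Illustrative sizes (ours, not from the paper): hidden 768, SNIPS-like
# 7 intents and 39 slot labels; random weights stand in for trained ones.
rng = np.random.default_rng(0)
W_i, b_i = 0.02 * rng.standard_normal((7, 768)), np.zeros(7)
W_s, b_s = 0.02 * rng.standard_normal((39, 768)), np.zeros(39)

def joint_head(h):
    """h: (seq_len, hidden) encoder states, h[0] being the [CLS] state."""
    y_intent = softmax(W_i @ h[0] + b_i)         # y^i = softmax(W^i h_0 + b^i)
    y_slots = softmax(h @ W_s.T + b_s, axis=-1)  # y_j^s = softmax(W^s h_j + b^s)
    return y_intent, y_slots

h = rng.standard_normal((16, 768))
y_intent, y_slots = joint_head(h)
assert y_intent.shape == (7,) and y_slots.shape == (16, 39)
```

During training, cross entropy is applied to the intent distribution and to the slot distribution of the first word-piece of each word, matching the factorized objective $p(y^i|x)\prod_j p(y_j^s|x)$.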
| Model | EM | Intent |
| --- | --- | --- |
| Joint-BiRNN (Hakkani-Tür et al., 2016) | 80.7 | 92.6 |
| Attention-BiRNN (Liu and Lane, 2016) | 78.9 | 91.1 |
| Slot-Gated (Goo et al., 2018) | 82.2 | 93.6 |
| CapsuleNLU (Zhang et al., 2019) | 83.4 | 95.0 |
| BERT-Seq2Seq-Ptr (Rongali et al., 2020) | 86.4 | 97.4 |
| RoBERTa-Seq2Seq-Ptr (Rongali et al., 2020) | 87.1 | 97.4 |
| BERT-Joint (Castellucci et al., 2019) | 88.2 | 97.8 |
| Joint-BERT (Chen et al., 2019) | 88.2 | 97.5 |

| Ours | epo | lr | wd | EM-v | EM | Intent |
| --- | --- | --- | --- | --- | --- | --- |
| ALBERT-base | - | 5e-5 | 0.05 | 93.4 | 86.90 | 97.42 |
| ALBERT-base_CC | 900 | 6e-5 | 0.1 | 94.2 | 87.23 | 96.75 |
| ALBERT-large | - | 5e-5 | 0.05 | 93.8 | 88.02 | 97.54 |
| ALBERT-large_CC | 1100 | 5e-5 | 0.1 | 94.0 | 87.91 | 97.54 |
| DistilBERT-base | - | 4e-5 | 0.05 | 93.6 | 88.13 | 97.42 |
| DistilBERT-base_CC | 1100 | 6e-5 | 0.05 | 93.2 | 87.12 | 97.54 |
| BERT-base | - | 4e-5 | 0.01 | 93.4 | 88.13 | 97.54 |
| BERT-base_CC | 700 | 6e-5 | 0.1 | 93.0 | 87.35 | 97.20 |
+ +Table 4: Results on ATIS. Refer to the caption of Table 3 for abbreviation explanations. + +
| Model | EM | Intent |
| --- | --- | --- |
| RNNG (Gupta et al., 2018) | 78.51 | - |
| Shift Reduce (SR) Parser | 80.86 | - |
| SR with ELMo embeddings | 83.93 | - |
| SR ensemble + ELMo + SVMRank | 87.25 | - |
| BERT-Seq2Seq-Ptr (Rongali et al., 2020) | 83.13 | 97.91 |
| RoBERTa-Seq2Seq-Ptr (Rongali et al., 2020) | 86.67 | 98.13 |

| Ours | EM-v | EM | Intent |
| --- | --- | --- | --- |
| ALBERT-Seq2Seq-Ptr | 84.56 | 85.41 | 98.47 |
| ALBERT-Seq2Seq-Ptr_CC | 83.48 | 84.42 | 98.05 |
| DistilBERT-Seq2Seq-Ptr | 84.25 | 85.12 | 98.50 |
| DistilBERT-Seq2Seq-Ptr_CC | 82.76 | 83.42 | 98.09 |
| BERT-Seq2Seq-Ptr | 83.83 | 85.01 | 98.59 |
| BERT-Seq2Seq-Ptr_CC | 82.36 | 83.34 | 98.25 |
| RoBERTa-Seq2Seq-Ptr | 85.00 | 85.67 | 98.59 |
| RoBERTa-Seq2Seq-Ptr_CC | 83.51 | 83.78 | 98.17 |
Table 5: Results on Facebook TOP. The SR models are by Einolghozati et al. (2019). Refer to the caption of Table 3 for abbreviation explanations.

# 5.2 Facebook TOP

Table 5 presents results on Facebook TOP. We follow Rongali et al. (2020) and experiment with Seq2Seq models. We use different pretrained BERT variants as the encoder, transformer decoder layers with $d_{model} = 768$ (Vaswani et al., 2017), and a pointer generator network (Vinyals et al., 2015) which uses scaled dot-product attention to score tokens. The model is trained using the cross-entropy loss with label smoothing of 0.1. For simplicity, we always train code embeddings for 900 epochs offline. Learning rate 2e-5 and weight decay 0.01 are used for transformer training. BERT and DistilBERT are cased in these experiments. During inference, we employ beam decoding with width 5. Our greatly compressed models preserve $98\sim 99\%$ of the performance of the original models.
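The pointer scoring step can be sketched as follows (an illustrative NumPy fragment in the spirit of the Seq2Seq-Ptr setup of Rongali et al. (2020); all sizes and names are our assumptions):

```python
import numpy as np

# Sketch of pointer scoring with scaled dot-product attention; sizes and
# random values are illustrative stand-ins for real model states.
rng = np.random.default_rng(0)
d_model, src_len, vocab = 768, 12, 100

dec_state = rng.standard_normal(d_model)           # current decoder state
enc_states = rng.standard_normal((src_len, d_model))
vocab_logits = rng.standard_normal(vocab)          # ordinary vocabulary scores

# One pointer score per source position: q · k / sqrt(d_model).
ptr_scores = enc_states @ dec_state / np.sqrt(d_model)

# The decoder chooses between generating a vocabulary token and copying
# source token j by softmaxing over the concatenated scores.
logits = np.concatenate([vocab_logits, ptr_scores])
probs = np.exp(logits - logits.max())
probs /= probs.sum()
assert probs.shape == (vocab + src_len,)
```

Indices past the vocabulary size are interpreted as "copy source token $j$", which is how the parser emits query words verbatim into the parse.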
| Epoch | MeanEucDist | NN-cos | NN-Euc | SNIPS | ATIS | TOP |
| --- | --- | --- | --- | --- | --- | --- |
| 100 | 0.3677±0.25% | 0.66±1.90% | 0.65±2.00% | 79.29 | 82.31 | 78.09 |
| 200 | 0.3254±0.08% | 2.20±0.69% | 2.30±0.84% | 85.43 | 84.99 | 81.59 |
| 300 | 0.3023±0.09% | 3.66±0.92% | 3.96±0.55% | 86.86 | 86.11 | 83.17 |
| 400 | 0.2841±0.23% | 4.84±0.58% | 5.26±0.83% | 89.71 | 87.01 | 83.45 |
| 500 | 0.2685±0.26% | 5.72±0.48% | 6.21±0.78% | 87.71 | 87.23 | 83.82 |
| 600 | 0.2573±0.12% | 6.20±0.39% | 6.72±0.18% | 88.14 | 85.69 | 83.41 |
| 700 | 0.2499±0.20% | 6.42±0.49% | 6.94±0.33% | 88.00 | 87.35 | 84.27 |
| 800 | 0.2444±0.07% | 6.54±0.39% | 7.07±0.15% | 88.57 | 86.90 | 84.09 |
| 900 | 0.2407±0.10% | 6.62±0.31% | 7.14±0.14% | 88.57 | 86.56 | 84.42 |
| 1000 | 0.2380±0.07% | 6.65±0.39% | 7.16±0.10% | 89.14 | 87.12 | 83.86 |
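The MeanEucDist and NN-Euc diagnostics reported in Table 6 can be computed as in the following sketch (random stand-in embeddings instead of real ALBERT ones; NN-cos is analogous with cosine similarity):

```python
import numpy as np

def convergence_metrics(orig, recon, k=20):
    """Mean Euclidean distance between original and reconstructed embeddings,
    and the average number of shared top-k Euclidean nearest neighbours
    (function and variable names are ours, not the paper's)."""
    mean_dist = np.linalg.norm(orig - recon, axis=1).mean()
    top_k = []
    for emb in (orig, recon):
        dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        np.fill_diagonal(dist, np.inf)             # a point is not its own NN
        top_k.append(np.argsort(dist, axis=1)[:, :k])
    nn_euc = np.mean([len(set(a) & set(b)) for a, b in zip(*top_k)])
    return mean_dist, nn_euc

# Stand-in data: a tiny random "vocabulary" with a slightly perturbed copy.
rng = np.random.default_rng(0)
orig = rng.standard_normal((50, 8))
recon = orig + 0.01 * rng.standard_normal((50, 8))
mean_dist, nn_euc = convergence_metrics(orig, recon)
assert mean_dist > 0 and 0 <= nn_euc <= 20
```

Tracking these two quantities over code-learning epochs gives the decreasing-rate behaviour described in Section 5.3.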
Table 6: Analyses for the code embedding learning process (M=32, K=16). MeanEucDist, NN-cos, and NN-Euc are averaged across 5 runs. "SNIPS", "ATIS", and "TOP" are the test exact match achieved on the three datasets.

# 5.3 Analysis for Code Convergence

We study the relationship among a few variables during code learning for the embeddings from pretrained ALBERT-base (Table 6). During the first 1000 epochs, the mean Euclidean distance between the original and reconstructed embeddings decreases at a decreasing rate. The average number of shared top-20 nearest neighbours between the two embeddings, according to both cosine similarity and Euclidean distance, increases at a decreasing rate. We apply code embeddings trained for different numbers of epochs to ALBERT-base-v2 and fine-tune on semantic parsing. On SNIPS and ATIS, we find the best validation setting among learning rates $\{2,3,4,5,6\}$e-5 and weight decays $\{0.01, 0.05, 0.1\}$. We observe that the test exact match plateaus for code embeddings trained for more than 400 epochs. On Facebook TOP, we use learning rate 2e-5 and weight decay 0.01, and observe a similar trend.

# 5.4 Effects of M and K

We use embeddings from pretrained ALBERT-base-v2 as the reference to learn code embeddings with M in $\{8, 16, 32, 64\}$ and K in $\{16, 32, 64\}$. As shown in Table 7, after 700 epochs, the MSE loss for embeddings with larger M and K converges to smaller values in general. With $M = 64$, more epochs are needed for convergence to smaller MSE losses compared to those from smaller M. We apply the embeddings to ALBERT-base-v2 and fine-tune on SNIPS. In general, larger M yields better performances. Effects of K are less clear when M is large.

| M | K | epo | MSE | EM |
| --- | --- | --- | --- | --- |
| 8 | 16 | 700 | 0.3155±0.05% | 85.43 |
| 8 | 32 | 700 | 0.3032±0.04% | 87.43 |
| 8 | 64 | 700 | 0.2944±0.04% | 87.43 |
| 16 | 16 | 700 | 0.2855±0.05% | 88.57 |
| 16 | 32 | 700 | 0.2727±0.09% | 88.00 |
| 16 | 64 | 700 | 0.2669±0.08% | 88.14 |
| 32 | 16 | 700 | 0.2499±0.20% | 89.00 |
| 32 | 32 | 700 | 0.2421±0.20% | 89.14 |
| 32 | 64 | 700 | 0.2396±0.27% | 88.29 |
| 64 | 16 | 700 | 0.2543±0.47% | 88.29 |
| 64 | 16 | 1000 | 0.2256±1.06% | 89.71 |
| 64 | 32 | 700 | 0.2557±0.37% | 89.86 |
| 64 | 32 | 1000 | 0.2159±0.43% | 89.71 |

Table 7: Effects of M and K. Mean squared errors (MSE) are averaged over 5 runs. Best validation exact match (EM) is presented for compressed transformer models trained with 0.05 weight decay and $\{3,4,5,6,7\}$e-5 peak learning rates on SNIPS.

# 6 Conclusion

Current state-of-the-art task-oriented semantic parsing models are based on pretrained RoBERTa-base (478MB) or BERT-base (420MB). We apply DistilBERT (256MB), ALBERT-large (68MB), and ALBERT-base (45MB), and observe near state-of-the-art performances. We learn compositional code embeddings to compress the model embeddings by $95.15\% \sim 98.46\%$ and the pretrained encoders by $20.47\% \sim 34.22\%$, and observe $97.5\%$ performance preservation on SNIPS, ATIS, and Facebook TOP. Our compressed ALBERT-large is 54MB and achieves $99.6\%$ of the performance of the previous state-of-the-art models on SNIPS and ATIS. Our technique has the potential to be applied to more tasks, including machine translation, in the future.

# Acknowledgement

This project is part of the data science industry mentorship program initiated by Andrew McCallum at University of Massachusetts Amherst. We thank the teaching assistants Rajarshi Das and Xiang Lorraine Li for helpful discussion and the instructor Andrew McCallum for valuable feedback. Experiments in this project are conducted on the Gypsum cluster at UMass Amherst. The cluster was purchased with funds from the Massachusetts Technology Collaborative.

# References

Artem Babenko and Victor Lempitsky. 2014. Additive quantization for extreme vector compression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 931-938.
Giuseppe Castellucci, Valentina Bellomaria, Andrea Favalli, and Raniero Romagnoli. 2019. Multilingual intent detection and slot filling in a joint bert-based model. arXiv preprint arXiv:1907.02884.
Qian Chen, Zhu Zhuo, and Wen Wang. 2019. Bert for joint intent classification and slot filling.
Ting Chen, Martin Renqiang Min, and Yizhou Sun. 2018. Learning k-way d-dimensional discrete codes for compact embedding representations. arXiv preprint arXiv:1806.09464.
Ting Chen and Yizhou Sun. 2019.
Differentiable product quantization for end-to-end embedding compression. arXiv preprint arXiv:1908.09756. +Yunchuan Chen, Lili Mou, Yan Xu, Ge Li, and Zhi Jin. 2016. Compressing neural language models by sparse word representations. arXiv preprint arXiv:1610.03950. +Alice Coucke, Alaa Saade, Adrien Ball, Theodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, et al. 2018. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. arXiv preprint arXiv:1805.10190. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. +Arash Einolghozati, Panupong Pasupat, Sonal Gupta, Rushin Shah, Mrinal Mohit, Mike Lewis, and Luke Zettlemoyer. 2019. Improving semantic parsing for task oriented dialog. arXiv preprint arXiv:1902.06000. +Manaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, and Noah Smith. 2015. Sparse overcomplete word vector representations. arXiv preprint arXiv:1506.02004. +Prakhar Ganesh, Yao Chen, Xin Lou, Mohammad Ali Khan, Yin Yang, Deming Chen, Marianne Winslett, Hassan Sajjad, and Preslav Nakov. 2020. Compressing large-scale transformer-based models: A case study on bert. arXiv preprint arXiv:2002.11985. +Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and Yun-Nung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 753-757. + +Daniel Guo, Gokhan Tur, Wen-tau Yih, and Geoffrey Zweig. 2014. Joint semantic utterance classification and slot filling with recursive neural networks. In 2014 IEEE Spoken Language Technology Workshop (SLT), pages 554-559. IEEE. 
+Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Kumar, and Mike Lewis. 2018. Semantic parsing for task oriented dialog using hierarchical representations. arXiv preprint arXiv:1810.07942. +Dilek Hakkani-Tür, Gökhan Tür, Asli Celikyilmaz, Yun-Nung Chen, Jianfeng Gao, Li Deng, and Ye-Yi Wang. 2016. Multi-domain joint semantic frame parsing using bi-directional RNN-LSTM. In *Interspeech*, pages 715-719. +Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144. +Aishwarya Kamath and Rajarshi Das. 2018. A survey on semantic parsing. arXiv preprint arXiv:1812.00978. +Yoon Kim. 2014. Convolutional neural networks for sentence classification. CoRR, abs/1408.5882. +Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683. +Maximilian Lam. 2018. Word2bits-quantized word vectors. arXiv preprint arXiv:1803.05651. +Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942. +Bing Liu and Ian Lane. 2016. Attention-based recurrent neural network models for joint intent detection and slot filling. arXiv preprint arXiv:1609.01454. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. +Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101. +Chris J Maddison, Andriy Mnih, and Yee Whye Teh. 2016. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712. +Grégoire Mesnil, Xiaodong He, Li Deng, and Yoshua Bengio. 2013. 
Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding. In Interspeech, pages 3771-3775. + +Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. +Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543. +Patti Price. 1990. Evaluation of spoken language systems: The atis domain. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990. +Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: $100,000+$ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. +Vikas Raunak. 2017. Simple and effective dimensionality reduction for word embeddings. arXiv preprint arXiv:1708.03629. +Subendhu Rongali, Luca Soldaini, Emilio Monti, and Wael Hamza. 2020. Don't parse, generate! a sequence to sequence architecture for task-oriented semantic parsing. arXiv preprint arXiv:2001.11458. +Sara Sabour, Nicholas Frosst, and Geoffrey E. Hinton. 2017. Dynamic routing between capsules. CoRR, abs/1710.09829. +Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. +Raphael Shu and Hideki Nakayama. 2017. Compressing word embeddings via deep compositional code learning. arXiv preprint arXiv:1711.01068. +Anant Subramanian, Danish Pruthi, Harsh Jhamtani, Taylor Berg-Kirkpatrick, and Eduard Hovy. 2018. Spine: Sparse interpretable neural embeddings. In Thirty-Second AAAI Conference on Artificial Intelligence. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. 
Attention is all you need. In Advances in neural information processing systems, pages 5998-6008. +Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in neural information processing systems, pages 2692-2700. +Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. CoRR, abs/1804.07461. + +Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. +Puyang Xu and Ruhi Sarikaya. 2013. Convolutional neural network based triangular crf for joint intent detection and slot filling. In 2013 IEEE workshop on automatic speech recognition and understanding, pages 78-83. IEEE. +Chenwei Zhang, Yaliang Li, Nan Du, Wei Fan, and Philip Yu. 2019. Joint slot filling and intent detection via capsule neural networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy. Association for Computational Linguistics. 
\ No newline at end of file diff --git a/compressingtransformerbasedsemanticparsingmodelsusingcompositionalcodeembeddings/images.zip b/compressingtransformerbasedsemanticparsingmodelsusingcompositionalcodeembeddings/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..792ff337cfc404d4daf9a4bf19a7e9ca06a0f0d1 --- /dev/null +++ b/compressingtransformerbasedsemanticparsingmodelsusingcompositionalcodeembeddings/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3421f18b4bfa882efe6856156864dbcd69ccb48d60273c4cd79dbed78b2d6de +size 381245 diff --git a/compressingtransformerbasedsemanticparsingmodelsusingcompositionalcodeembeddings/layout.json b/compressingtransformerbasedsemanticparsingmodelsusingcompositionalcodeembeddings/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ec8a8b885e36fe659d54bfcee3f63e938d67eb37 --- /dev/null +++ b/compressingtransformerbasedsemanticparsingmodelsusingcompositionalcodeembeddings/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ec82e1955299ab5e355f792be22209826bb05848b90897916f667e387b36e8cc +size 243904 diff --git a/computerassistedtranslationwithneuralqualityestimationandautomaticpostediting/d2dd4824-fc5f-4df6-95e2-76d452ebda30_content_list.json b/computerassistedtranslationwithneuralqualityestimationandautomaticpostediting/d2dd4824-fc5f-4df6-95e2-76d452ebda30_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..7691cb3cee00a41a86a6bef6fa1b2193b6bb17ed --- /dev/null +++ b/computerassistedtranslationwithneuralqualityestimationandautomaticpostediting/d2dd4824-fc5f-4df6-95e2-76d452ebda30_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e4002eb5a42a8293fc59cdcaab4d63ae066a00107752dfa54f611d7ea6ca36d1 +size 80637 diff --git 
a/computerassistedtranslationwithneuralqualityestimationandautomaticpostediting/d2dd4824-fc5f-4df6-95e2-76d452ebda30_model.json b/computerassistedtranslationwithneuralqualityestimationandautomaticpostediting/d2dd4824-fc5f-4df6-95e2-76d452ebda30_model.json new file mode 100644 index 0000000000000000000000000000000000000000..5a1743c65a65f242db2e4ec5035651efe31814e9 --- /dev/null +++ b/computerassistedtranslationwithneuralqualityestimationandautomaticpostediting/d2dd4824-fc5f-4df6-95e2-76d452ebda30_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3a6601a8d69f6ba611c4401c3ca1a37b456d0925d03b4107e83ec38b502c7a28 +size 94431 diff --git a/computerassistedtranslationwithneuralqualityestimationandautomaticpostediting/d2dd4824-fc5f-4df6-95e2-76d452ebda30_origin.pdf b/computerassistedtranslationwithneuralqualityestimationandautomaticpostediting/d2dd4824-fc5f-4df6-95e2-76d452ebda30_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3de8c6ccea77349fa7981a23463215dba8df1e90 --- /dev/null +++ b/computerassistedtranslationwithneuralqualityestimationandautomaticpostediting/d2dd4824-fc5f-4df6-95e2-76d452ebda30_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2a7da758fc67d77bfc930e43c3b062fea4e6f3df867388539c1dc701d44497e3 +size 735904 diff --git a/computerassistedtranslationwithneuralqualityestimationandautomaticpostediting/full.md b/computerassistedtranslationwithneuralqualityestimationandautomaticpostediting/full.md new file mode 100644 index 0000000000000000000000000000000000000000..90707abad652656da3f1bde1ab37eced3353f926 --- /dev/null +++ b/computerassistedtranslationwithneuralqualityestimationandautomaticpostediting/full.md @@ -0,0 +1,349 @@ +# Computer Assisted Translation with Neural Quality Estimation and Automatic Post-Editing + +Ke Wang*, Jiayi Wang*, Niyu Ge, Yangbin Shi, Yu Zhao, Kai Fan† + +Alibaba Group Inc. 
{moyu.wk, joanne.wjy, niyu.ge, taiwu.syb}@alibaba-inc.com, kongyu@taobao.com, k.fan@alibaba-inc.com

# Abstract

With the advent of neural machine translation, there has been a marked shift towards leveraging and consuming the machine translation results. However, the gap between machine translation systems and human translators needs to be manually closed by post-editing. In this paper, we propose an end-to-end deep learning framework for the quality estimation and automatic post-editing of machine translation output. Our goal is to provide error correction suggestions and to further relieve the burden of human translators through an interpretable model. To imitate the behavior of human translators, we design three efficient delegation modules - quality estimation, generative post-editing, and atomic operation post-editing - and construct a hierarchical model based on them. We examine this approach with the English-German dataset from the WMT 2017 APE shared task, and our experimental results achieve state-of-the-art performance. We also verify in a human evaluation that certified translators can significantly expedite their post-editing with our model.

# 1 Introduction

The explosive advances in the sequence to sequence model (Sutskever et al., 2014; Bahdanau et al., 2014; Vaswani et al., 2017) enable deep learning based neural machine translation (NMT) to approximate and even achieve human parity in some specific language pairs and scenarios. Instead of translating from scratch by human translators, a new translation paradigm has emerged: the computer assisted translation (CAT) system, which includes machine translation and human post-editing. Post-editing is the process whereby humans amend machine-generated translations to achieve an acceptable final product. Practically, the estimated average translation time can be reduced by $17.4\%$ (from 1957.4 to 1617.7 seconds per text) (Läubli et al., 2013).
However, utilizing NMT poses two key challenges. First, neural machine translation quality still continues to vary a great deal across different domains or genres, more or less in proportion to the availability of parallel training corpora. Second, the zero tolerance policy is a common choice in the vast majority of important applications. For example, when business legal documents are translated, even a single incorrect word could bring serious financial or property losses. Therefore, the subsequent human post-editing is indispensable in situations like this. Unfortunately, while NMT systems save time by providing the preliminary translations, the time spent on error corrections by humans (Läubli et al., 2013) remains substantial to the extent that it offsets the efficiency gained by the NMT systems. In this paper, we explore automatic post-editing (APE) in the deep learning framework. Specifically, we adopt an imitation learning approach, where our model first screens the translation candidates by quality prediction and then decides whether to post-edit with the generation or the atomic operation method.

Starting with a wide range of features used in the CAT system, we carefully analyze the human post-editing results to narrow down our framework design into three key modules: quality estimation (QE), generative post-editing, and atomic operation post-editing. These modules are tightly integrated into the transformer neural networks (Vaswani et al., 2017). Our main innovation is a hierarchical model with two modular post-editing algorithms which are conditionally used based on a novel fine-grained quality estimation model.
For each machine translation, our model i) runs the QE model to predict the detailed token level errors, which will be further summarized as an overall quality score to decide whether the machine translation quality is high or not, and ii) conditional on the previous decision, employs the atomic operation post-editing algorithm on the high quality sentence or the generative model to rephrase the translation for the low one.

We examine our approach on the public English-German dataset from the WMT$^{1}$ 2017 APE shared task. Our system outperforms the top ranked methods in both BLEU and TER metrics. In addition, following a standard human evaluation process aimed at achieving impartiality with respect to the efficiency of the CAT system, we ask several certified translators to edit the machine translation outputs with or without our APE assistance. Evaluation results show that our system significantly improves translators' efficiency.

# 2 Related Work

Our work relates to and builds on several intertwined threads of research in machine translation, including QE and APE. We briefly survey the traditional methods and differentiate our approach.

# 2.1 Quality Estimation

Quality estimation is often a desired component for developing and deploying automatic language technologies, and has been extensively researched in machine translation (Barrault et al., 2019). Its purpose is to provide some metrics measuring the overall quality. The current state-of-the-art models mostly originated from the predictor-estimator framework (Kim et al., 2017), where a sequence-to-sequence model is pre-trained to extract sophisticated sequence features to be fed into a sequence level regression or classification network.

Tan et al. (2017) proposed the neural post-editing based quality estimation by streamlining together the traditional QE and APE models. Since our proposed QE module will eventually serve the APE module as well, we consider two modifications accordingly.
First, we re-define the QE as a fine-grained multi-class problem, whose output indicates the number of tokens in four categories: missing, redundant, erroneous, or kept tokens. A similar idea was initially proposed in (Gu et al., 2017) to predict the number of copy occurrences in non-autoregressive neural machine translation.

Table 1: Notation used in the model
| Symbol | Definition |
| --- | --- |
| $\mathbf{s}$ | sentence in the source language |
| $\mathbf{m}$ | machine translated sentence in the target language |
| $\mathbf{t}$ | golden (reference) sentence in the target language |
| $\mathbf{e}$ | post-editing sentence in the target language |
| $s_i$ | the $i$-th token of $\mathbf{s}$; similarly for $m_i$, $t_i$, $e_i$ |
| $P_{\mathrm{MT}}$ | the probabilistic model of machine translation |
| $P_{\mathrm{PE}}$ | the probabilistic model of post-editing |
| $P_{\mathrm{QE}}$ | the probabilistic model of quality estimation |
| $\mathbb{I}_{A}$ | indicator function, $=1$ if $A$ is true, otherwise 0 |
| $\tau$ | threshold to distinguish high/low quality translations |

In this paper, we make significant extensions to include more categories. Secondly, we maximize our QE model performance with a novel conditional BERT architecture. Inspired by the masked language model objective in the encoder BERT (Devlin et al., 2019), we introduce the training objective to the encoder-decoder framework by adapting the decoder to become a memory encoder, allowing us to pre-train the target language model similar to BERT but conditioned on the source language text.

# 2.2 Automatic Post-Editing

Automatic post-editing aims to improve the quality of an existing MT system by learning from human-edited samples, converting "translationese" output into natural text. The traditional APE is based on a round-trip translation loop to mimic errors similar to the ones produced by NMT, and can achieve acceptable performance with large-scale monolingual data only (Freitag et al., 2019). However, the prevalent trend in this area prefers the dual-source encoder-decoder architecture with parallel data (Chatterjee et al., 2017b; Junczys-Dowmunt and Grundkiewicz, 2018; Pal et al., 2018; Lopes et al., 2019), which obtained the best results in WMT competitions (Chatterjee et al., 2019). The dual-source encoder encodes the source text and the machine translation output separately, and the decoder decodes the post-edited results. All these approaches encode each source independently and apply an auto-regressive decoder. They differ in their parameter sharing mechanisms.

While our approach still employs the multi-source APE framework, there are two fundamental differences. First, our APE module, as mentioned above, is built on our re-designed QE model, with which the source and the machine translation are entangled by the encoder and memory-encoder QE module. Second, our decoder consists of a versatile architecture that can choose between the left-to-right auto-regressive generative model and the atomic-operation-based parallel model.
It dynamically determines which model to engage at runtime. The parallelizable model was broadly explored in insertion- or deletion-based transformers (Chan et al., 2019; Stern et al., 2019; Gu et al., 2019), while our decoder supports more functional operations.

# 3 Model and Objective

In order to achieve the automatic post-editing goal, it is essential for the model to find the exact errors appearing in the machine translation and learn how to fix them. Breaking the problem into several subtasks, our proposed pipeline includes three major models, as shown in Figure 1. Setting pre-training aside for the moment, the first step is to investigate the fine-grained quality estimation model with respect to the source text and machine translated text. Its output will provide a fine-grained quality estimation of the machine translation. Based on the corresponding quality, an atomic APE or a generative APE model will be called for further processing.

![](images/ed9c361ca496114baa88f275d9b2f893b98a299c1c66db67de4d9fa4890d1c20.jpg)
Figure 1: The overall pipeline. The QE model outputs fine-grained metrics for the translation quality. Then, a high quality machine translation proceeds to the atomic APE model for a minor fix, while a low quality machine translation goes through a generative APE model for complete rephrasing. Note that the model parameters of the encoder and memory encoder are shared across the three steps. The detailed computational graph is shown in Figure 2.

# 3.1 Fine-Grained Quality Estimation

Table 2: Definition of QE Tags
Labelk>1k=1k=0k=-1
Definitioninsert k-1 tokenskeepdeletereplace
+ +As described in the related work, compared to + +traditional translation QE task in WMT², our QE module is more fine-grained and is recast as a multiclass $\{-1,0,1,\dots,K\}$ sequence labeling problem. The definition of the integer labels is shown in Table 2. If $k <= 1$ , the label denotes one single token operation; otherwise, it means to insert $k - 1$ extra tokens after the current one. The QE tag q for training pair $(\mathbf{m},\mathbf{e})$ can be deterministically calculated by dynamic programming Algorithm 4 in Appendix, which is basically a string matching algorithm. We define a conditionally independent sequence tagging model for the error prediction. + +$$ +P _ {\mathrm {Q E}} (\mathbf {q} | \mathbf {s}, \mathbf {m}) = \prod_ {i} P _ {\mathrm {Q E}} (q _ {i} | \mathbf {s}, \mathbf {m}) \tag {1} +$$ + +A transformer based neural network is employed. We present a novel encoder-memory encoder framework with memory attention as shown in the decomposition of the following equation. + +$$ +\begin{array}{l} P _ {\mathrm {Q E}} (\mathbf {q} | \mathbf {s}, \mathbf {m}) \\ \triangleq \operatorname {S o f t m a x} _ {\mathrm {Q E}} \left(\operatorname {E n c} ^ {M} (\mathbf {m}, \operatorname {E n c} (\mathbf {s}))\right) \\ \end{array} +$$ + +where $\mathrm{Enc}(\cdot)$ is the standard transformer encoder (Vaswani et al., 2017), and $\mathrm{Enc}^M (\cdot)$ is the memory encoder adapted from standard transformer decoder. It removed the future masking in the transformer decoder and use the last state as the output which contains contexts from both SRC and MT. + +During inference, neither the ground truth of post-editing nor the golden translation reference is available. The fine-grained QE model can predict the human translation edit rate (HTER) $h$ through the inferred QE tags $\hat{\mathbf{q}}$ . 
$$
h = \frac {\# \text {predicted edits}}{\text {predicted PE length}} = \frac {\sum_ {i} \left\{\mathbb {I} _ {\hat {q} _ {i} < 1} + (\hat {q} _ {i} - 1)\, \mathbb {I} _ {\hat {q} _ {i} \ge 1} \right\}}{\sum_ {i} | \hat {q} _ {i} |} \tag {3}
$$

On the one hand, the overall metric $h$ quantifies the quality of the machine translation and determines which APE algorithm will be used. On the other hand, the detailed QE tags tell the APE model which atomic operation should be applied. Thus, the QE tagging and the atomic operation APE are trained simultaneously and iteratively, as elaborated in Sections 3.2 and 3.5.

![](images/fc3991058069e2f96515a5e437b93f3d0c4809bbea8cfb66e581d45da0c0d78d.jpg)
Figure 2: The detailed computational graph, including the detailed operations.

![](images/da5e8329535c9d5167d95b1b567cc70ab041d381af8a93cf949ed0045d3f26a7.jpg)
Figure 3: An example illustration of the placeholder inserter and the atomic operation APE.

# 3.2 Atomic Operation Automatic Post-Editing

The key idea of atomic operation APE is to reduce all predefined operations (insertion, deletion, substitution) to a special substitution operation by introducing an artificial placeholder token [PLH].

First, we align the machine translation $\mathbf{m}$ and the post edits $\mathbf{e}$ by inserting [PLH]s, resulting in a new $\tilde{\mathbf{m}}$ of the same length as $\mathbf{e}$. Technically, we insert $q_{i} - 1$ [PLH]s after $m_{i}$ if $q_{i} > 1$; we delete the current token $m_{i}$ if $q_{i} = 0$; and we replace $m_{i}$ with [PLH] if $q_{i} = -1$. For convenience, this process is denoted as $\tilde{\mathbf{m}} = \mathrm{PLH\_INS}(\mathbf{m},\mathbf{q})$.

Second, the original APE task is transformed into another sequence tagging problem, since $|\tilde{\mathbf{m}}| = |\mathbf{e}|$.
$$
P _ {\mathrm {P E}} ^ {A} (\mathbf {e} | \mathbf {s}, \mathbf {m}) = P _ {\mathrm {P E}} ^ {A} (\mathbf {e} | \mathbf {s}, \tilde {\mathbf {m}}) = \operatorname {Softmax} _ {\mathrm {P E}} \left(\operatorname {Enc} ^ {M} \left(\tilde {\mathbf {m}}, \operatorname {Enc} (\mathbf {s})\right)\right) \tag {4}
$$

Notice that i) the encoder and memory encoder share their parameters with the QE model in Equation (2); ii) the softmax layer is different, because the output dimension of APE equals the vocabulary size. An intuitive visualization is given in Figure 3, and the holistic pipeline in Figure 1.

# 3.3 Generative Automatic Post-Editing

The larger the HTER $h$ is, the lower the quality of $\mathbf{m}$ is, and the more atomic operations are required. In this case, the previous APE model may not be powerful enough to learn complicated editing behaviors. We propose a backup APE model via an auto-regressive approach for deteriorated translations. Concretely, we write the dual-source language model in its probabilistic formulation.

$$
P _ {\mathrm {P E}} ^ {G} (\mathbf {e} | \mathbf {s}, \mathbf {m}) = \prod_ {i} P _ {\mathrm {P E}} ^ {G} (e _ {i} | \mathbf {e} _ {< i}, \mathbf {s}, \mathbf {m}) = \prod_ {i} \operatorname {Dec} \left(\mathbf {e} _ {< i}; \operatorname {Enc} ^ {M} (\mathbf {m}, \operatorname {Enc} (\mathbf {s})) ; \operatorname {Enc} (\mathbf {s})\right) \tag {5}
$$

Notice that i) the encoder and memory encoder are still reused here; ii) $\mathrm{Dec}(\cdot ;\cdot ;\cdot)$ is a transformer decoder with hierarchical attention, since the two memory blocks $\mathrm{Enc}^M (\mathbf{m},\mathrm{Enc}(\mathbf{s}))$ and $\mathrm{Enc}(\mathbf{s})$ are both conditional variables of the auto-regressive language model; iii) unlike sequence tagging, the inference of the generative APE is intrinsically non-parallelizable.
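As a concrete illustration, the PLH_INS alignment of Section 3.2 and the HTER estimate of Equation (3) can be written as follows. This is a minimal sketch based on the definitions in the text, not the authors' implementation; all names are ours:

```python
PLH = "[PLH]"

def plh_ins(m, q):
    """PLH_INS(m, q): align MT tokens with the post-edit using QE tags.
    tag > 1 keeps m_i and appends tag - 1 placeholders, tag = 1 keeps,
    tag = 0 deletes m_i, and tag = -1 replaces m_i with [PLH]."""
    out = []
    for tok, tag in zip(m, q):
        if tag == 0:
            continue                            # deletion: drop the token
        out.append(PLH if tag == -1 else tok)   # substitution or keep
        out.extend([PLH] * max(tag - 1, 0))     # insertion slots
    return out

def hter_from_tags(q_hat):
    """Predicted HTER h of Equation (3): #edits over predicted PE length."""
    edits = sum(1 if tag < 1 else tag - 1 for tag in q_hat)
    pe_len = sum(abs(tag) for tag in q_hat)
    return edits / pe_len if pe_len else 0.0
```

For example, tags `[1, -1, 2, 0]` on four MT tokens yield an aligned sequence of length 4 (equal to `|e|`) with two placeholders, and an HTER of 3/4.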
Algorithm 1 Imitation Learning Algorithm
Require: $\mathbf{s}$, $\mathbf{m} = \{m_i\}_{i = 1}^M$, $\mathbf{e} = \{e_i\}_{i = 1}^N$, hyper-parameter $\beta \in (0,1)$
1: Draw a random number $r$ from the uniform distribution on $[0,1]$.
2: if $r > \beta$ then
3: $\tilde{\mathbf{m}} = \mathrm{PLH\_INS}(\mathbf{m},\mathbf{q})$
4: else
5: Randomly replace $20\%$ of $e_i$ with [PLH] to obtain $\tilde{\mathbf{m}}$
6: end if
7: Pseudo data for insertion: Remove all [PLH]s in $\tilde{\mathbf{m}}$ to obtain $\mathbf{m}^i$
8: Pseudo data for substitution: Run APE inference to obtain the prediction $\hat{\mathbf{e}}^s\gets P_{\mathrm{PE}}^A (\cdot |\mathbf{s},\tilde{\mathbf{m}})$
9: Pseudo data for deletion: Randomly insert one or two [PLH]s into each gap in $\mathbf{e}$ with probability 0.15 or 0.025 to obtain the updated $\tilde{\mathbf{m}}$
10: Run APE inference to obtain the prediction $\hat{\mathbf{e}}^d\gets P_{\mathrm{PE}}^A (\cdot |\mathbf{s},\tilde{\mathbf{m}})$
11: return 3 pseudo data points: $\mathbf{m}^i$, $\mathbf{m}^s = \hat{\mathbf{e}}^s$, $\mathbf{m}^d = \hat{\mathbf{e}}^d$

# 3.4 Pre-training and Imitation Learning

Because of the scarcity of human post-editing data, training from scratch is typically difficult. We therefore employ two workarounds to improve model performance.

Pre-training It is worth noting that the reduced atomic operation APE is actually equivalent to the masked language modeling problem popularized by BERT (Devlin et al., 2019). Therefore, we pre-train the encoder-memory encoder model as a conditional BERT with the data pairs $(\mathbf{s},\mathbf{t})$ and $(\mathbf{m},\hat{\mathbf{e}})$, aiming at learning the syntactic and alignment information of the ground truth. To make the pre-training valid on downstream tasks, we consistently use the [PLH] token to randomly mask the reference / post-editing sentence.
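The data-corruption steps of Algorithm 1 (everything except the two APE inference calls in steps 8 and 10, which turn these inputs into pseudo targets) could look as follows. This is a hedged sketch; `pseudo_inputs`, `m_tilde_base` (standing in for PLH_INS(m, q)), and the default `beta` are illustrative assumptions:

```python
import random

PLH = "[PLH]"

def pseudo_inputs(m_tilde_base, e, beta=0.5, rng=None):
    """Sketch of the corruption steps of Algorithm 1."""
    rng = rng or random.Random(0)
    # Steps 1-6: either the tag-aligned input or a 20%-masked post-edit.
    if rng.random() > beta:
        m_tilde = list(m_tilde_base)
    else:
        m_tilde = [PLH if rng.random() < 0.2 else tok for tok in e]
    # Step 7: insertion pseudo input -- drop every placeholder, so the
    # model must learn to re-open the missing slots.
    m_ins = [tok for tok in m_tilde if tok != PLH]
    # Step 9: deletion pseudo input -- spurious [PLH]s in each gap of e:
    # two with probability 0.025, one with probability 0.15.
    m_del = []
    for tok in e:
        r = rng.random()
        m_del += [PLH] * (2 if r < 0.025 else (1 if r < 0.175 else 0))
        m_del.append(tok)
    return m_tilde, m_ins, m_del
```

Whatever the random draws, `m_ins` never contains placeholders, and `m_del` is exactly `e` with spurious placeholders added, so the three specialized tasks stay cleanly separated.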
Imitation Learning As mentioned in Section 3.1, during inference the predicted QE tags are causally tied to the subsequent APE algorithm, because $\tilde{\mathbf{m}}$ is derived from $(\mathbf{m},\hat{\mathbf{q}})$. Although we would like the model to learn to predict all three atomic operations jointly, the small size of real post-editing data severely limits the performance of joint QE tagging. Therefore, we propose a model specialization strategy in which the model learns three separate tasks: deletion, insertion, and substitution. A reasonable amount of training data can be generated for each of the tasks, and the model learns to specialize in each operation. The details are summarized in Algorithm 1.

# 3.5 Training and Inference Algorithms

In this section, we assemble all modules into the final system. Because our model involves

Algorithm 2 APE Training
Require: Pre-training data $\mathcal{P}$ in pairs (s, t or e), QE training data $\mathcal{Q}$ in triplets (s, m, e).
1: Pre-train the encoder-memory encoder model with $\mathcal{P}$ as in Section 3.4.
2: while not converged do
3: Sample a tuple from $\mathcal{Q}$.
4: Call Algorithm 1 to enlarge the training sample fourfold.
5: for each (s, m, e) in the augmented data do
6: Calculate true QE tags $\mathbf{q} =$ Algorithm 4(m, e).
7: Get the machine translation with [PLH]s, $\tilde{\mathbf{m}} =$ PLH_INS(m, q).
8: Update the parameters of the encoder-memory encoder by optimizing the loss $\mathcal{L}_{\mathrm{QE}}(\mathbf{q}, \mathbf{s}, \mathbf{m}) + \mathcal{L}_{\mathrm{PE}}^A(\mathbf{e}, \mathbf{s}, \tilde{\mathbf{m}})$.
9: Update all model parameters by optimizing the loss $\mathcal{L}_{\mathrm{PE}}^G(\mathbf{e}, \mathbf{s}, \mathbf{m})$.
10: end for
11: end while
12: return all model parameters.

Algorithm 3 APE Inference
Require: s, m, HTER threshold $\tau$, iteration steps $S$.
1: $\mathbf{m}^{(0)} = \mathbf{m}$
2: for $i = 1, \dots, S$ do
3: Run QE inference $\hat{\mathbf{q}} \gets P_{\mathrm{QE}}(\cdot | \mathbf{s}, \mathbf{m}^{(i-1)})$.
4: Run Equation (3) to obtain the quality metric $h$.
5: if $i == 1$ and $h > \tau$ then
6: Run generative APE inference $\hat{\mathbf{e}} \gets P_{\mathrm{PE}}^G(\cdot | \mathbf{s}, \mathbf{m})$.
7: return APE $\hat{\mathbf{e}}$.
8: end if
9: $\tilde{\mathbf{m}} = \mathrm{PLH\_INS}(\mathbf{m}^{(i-1)}, \hat{\mathbf{q}})$
10: Run atomic operation APE inference $\mathbf{m}^{(i)} \gets P_{\mathrm{PE}}^A(\cdot | \mathbf{s}, \tilde{\mathbf{m}})$.
11: end for
12: return APE $\hat{\mathbf{e}} = \mathbf{m}^{(S)}$.

a nontrivial pipeline, we describe the details of training and inference separately and summarize them in Algorithms 2 and 3.

Training minimizes the loss function (the negative log-likelihood of the probabilistic models) by stochastic gradient descent (SGD) with respect to the trainable parameters. Our QE and atomic operation APE are both sequence tagging tasks, while the generative APE is a sequence generation task. The three loss functions are uniformly defined as the sequential cross entropy between the predicted and the true sequence. Note that the QE and atomic operation APE share the encoder-memory encoder, so these two losses can be summed for joint optimization. However, the generative APE model has an isolated hierarchical transformer decoder, so we need a second update optimizing the corresponding loss alone.

Inference of our APE system is not quite the same as training. First, the overall inference is a continuously alternating procedure between QE and APE, where the predicted APE output is assigned as a new machine translation for iterative updating, whereas the inner loop of the training algorithm iterates over the augmented data points. Second, we introduce an early stop after the first QE tagging prediction. If the predicted quality is very low (i.e.
the HTER is larger than a cross-validated threshold), the generative APE will be called and the inference will immediately exit without further iterations. Lastly, the APE results are used by professional translators for further editing. In the next section, we validate the gain of APE over machine translation with regard to efficiency.

# 4 Experiments on our Proposed Model

We verify the validity and efficiency of the proposed APE model by conducting a series of APE experiments and a human evaluation on the WMT'17 APE dataset. For convenience, we denote the generative post-editing model as $GM$, the atomic operation post-editing model as $AOM$, and the final hierarchical model as $HM$ in this section.

# 4.1 Setup

Dataset. The publicly available WMT17 Automatic Post-Editing shared task (Bojar et al., 2017) data on English-German (En-De) is widely used for APE experiments. It consists of 23K real triples (source, machine translation & post-editing) for training and another 2K triples for testing, from the Information Technology (IT) domain. Besides, the shared task also provides a large-scale artificial synthetic corpus containing around 500K high-quality and 4 million low-quality synthetic triples. We oversample the real APE data 20 times and merge it with the synthetic data, resulting in roughly 5 million triples for both pre-training and APE training. The details of the training set are shown in Appendix Table 6. We adopt the WMT16 test set of the same task as the development set. Furthermore, we apply a truecaser (Koehn et al., 2007) to all files and encode every sentence into subword units (Kudo, 2018) with a 32K shared vocabulary.

Evaluation Metrics. We mainly evaluate our systems with the bilingual evaluation understudy (BLEU) (Papineni et al., 2002) and translation edit rate (TER) (Snover et al., 2006) metrics, since they are standard and widely employed in the APE shared task.
The BLEU metric indicates how similar the candidate texts are to the reference texts, with values closer to 100 representing higher similarity. TER measures how many edits are required to turn the predicted sentence into the ground-truth sentence; it is calculated as in Equation (3) and multiplied by 100.

![](images/29a7ccb57067fbf40f5830d6ff5cf37a8d510a162c1e3b0b593ba842c86f2b51.jpg)
Figure 4: Results of Our Generative Model on Test Set

Training Details. All experiments are trained on 8 NVIDIA P100 GPUs for a maximum of 100,000 steps, taking about two days until convergence, with a total batch size of around 17,000 tokens per step and the Adam optimizer (Kingma and Ba, 2014). Only the source and post-edited sentence pairs are used for pre-training. During pre-training, $20\%$ of the tokens in the post-editing sentence are masked as [PLH]. Parameters are tuned with 12,000 learning-rate warm-up steps (Vaswani et al., 2017) for both the GM and the AOM. In addition, 5 automatic post-editing iterations (i.e., $S = 5$ in Algorithm 3) are applied during inference for the AOM, owing to its fine-grained editing behavior. Apart from these modifications, we follow the default transformer configuration (Vaswani et al., 2017) for the other hyper-parameters of our models.

# 4.2 APE Systems Comparison

The main results of the automatic post-editing systems are presented in Table 3, compared against the winners of recent years' WMT APE shared tasks and several other top results. Our hierarchical single model achieves state-of-the-art performance on both the BLEU and TER metrics, outperforming not only all other single models but also the ensemble models of the top-ranked systems in the WMT APE tasks.

Note that our hierarchical system is not a two-model ensemble. The standard ensemble method requires inference and combination of results from more than one model.
In contrast, our hierarchical model contains multiple parameter-sharing modules that accomplish multiple tasks, and it only needs a single inference pass through the selected model.

Table 3: Performance Comparison on WMT17 APE En-De Dataset
| Model | BLEU↑ | TER↓ | Note |
| --- | --- | --- | --- |
| Official Baseline | 62.49 | 24.48 | Do nothing with the original machine translation |
| MS-UEdin | 69.72 | 19.49 | Single model (Junczys-Dowmunt and Grundkiewicz, 2018), winner of the WMT18 APE task |
| Levenshtein Transformer | 70.1 | 19.2 | Single model (Gu et al., 2019) |
| Unbabel | 70.66 | 19.03 | Single model (Correia and Martins, 2019), winner of the WMT19 APE task |
| FBK (Ensemble) | 70.07 | 19.60 | Ensemble model (Chatterjee et al., 2017a), winner of the WMT17 APE task |
| MS-UEdin (Ensemble) | 70.46 | 19.03 | Ensemble model (Junczys-Dowmunt and Grundkiewicz, 2018) |
| Unbabel (Ensemble) | 71.90 | 18.07 | Ensemble model (Correia and Martins, 2019) |
| Only GM | 71.52 | 18.44 | Single model, i.e. τ = 0 in Algorithm 3 |
| Only AOM | 68.40 | 20.34 | Single model, i.e. τ = 1 in Algorithm 3 |
| Our HM | 72.07 | 18.01 | Single model, i.e. τ = 0.3, determined on the development dataset |
+ +Table 4: Performance Gain from Pseudo Data + +
| Model | BLEU↑ | TER↓ | ΔBLEU | ΔTER |
| --- | --- | --- | --- | --- |
| AOM w/o pseudo data | 65.65 | 22.14 | - | - |
| AOM with pseudo data | 68.40 | 20.34 | +2.75 | -1.80 |
# 4.2.1 Results of Generative APE Model

As mentioned in Section 3.3, the decoder of our generative model receives the encoder-memory encoder outputs, i.e., the SRC memory and the SRC-MT joint memory. A transformer attention layer encodes the SRC into the SRC memory, and the joint memory is produced by another one, which encodes the original MT conditioned on the SRC memory. These two encoders are pre-trained with sources and post-edits from the full training data.

We design a set of systematic experiments, reported in Figure 4, to verify that our model benefits from this design: (1) To verify that the memory encoder has the ability to learn cross-lingual knowledge, we replace the memory encoder with an ordinary multi-head self-attention encoder, which does not accept the source memory as input, marked as w/o Joint. (2) To prove that the shortcut from the SRC memory to the decoder input is necessary, the shortcut is removed in the w/o Shortcut experiment. (3) To verify that our model can leverage representations from pre-training, we conduct an experiment without pre-training, denoted as w/o Pre-training.

The ablation results clearly demonstrate that our model does benefit from the memory encoder, the SRC memory shortcut, and pre-training: removing any of them results in a performance loss.

# 4.2.2 Results of Atomic Operation APE Model

In each iteration, based on the QE model's output, our AOM refines the MT in parallel with regard to all placeholders. Unlike the GM, the time cost of the AOM depends only on the number of iteration steps, regardless of sentence length. To evaluate the decoding efficiency, we collect the AOM's performance at different iteration steps, as shown in Figure 5.

![](images/1f307c09c0aa6b49a8ca3bcb71123766b19f38f69bb4b8a87d2bfb1f307d91b39bd1a6859df3f1cfbc7e032737c7a2ee1.jpg)
Figure 5: The convergence curves of the AOM inference w.r.t. iteration.
The iterative updating converges within only 3 to 5 steps, which is much smaller than the average number of decoding steps of the GM.

The Role of Pseudo Data. As noted in Section 3.4, the model specialization algorithm is applied to train the model to learn the different kinds of atomic operations. We compare our AOM on the test set with and without pseudo data in Table 4. The results demonstrate that our model specialization algorithm plays a key role: it provides powerful guidance for training and makes up for the lack of a large amount of real APE data.

# 4.2.3 Results of QE Model

The QE model is the prerequisite of the final hierarchical model as well as the basis of our atomic operation model. Therefore, it is necessary to make the QE results as accurate as possible. Unlike the traditional OK/BAD word-level QE task in WMT (Bojar et al., 2017), our model aims to predict fine-grained quality tags, so we cannot make a completely fair comparison with previous works.

Table 5: Results of Fine-Grained QE Model (Pearson = 0.664). Quality tag prediction is evaluated in terms of multi-classification accuracy via F1-scores. The overall MT quality estimation is measured by the Pearson correlation coefficient, indicating the correlation between the predicted and the real MT quality w.r.t. TER.

| | K | E | R | M | OK | BAD |
| --- | --- | --- | --- | --- | --- | --- |
| Precision↑ | 0.877 | 0.710 | 0.563 | 0.622 | 0.898 | 0.783 |
| Recall↑ | 0.951 | 0.471 | 0.480 | 0.540 | 0.962 | 0.559 |
| F1-score↑ | 0.913 | 0.566 | 0.518 | 0.578 | 0.928 | 0.652 |

The fine-grained quality tag of each word predicted by the model falls into one of four labels: $K$ for Kept, $E$ for Erroneous, $R$ for Redundant and $M$ for Missing. Furthermore, we convert the predicted fine-grained QE tags to OK/BAD tags by treating tags $K$ and $M$ as OK and the other two tags as BAD, according to the tagging rules of the WMT17 QE shared task.

We provide our fine-grained QE results on the test dataset of the WMT17 APE task in Table 5, where the ground-truth tags are produced by Algorithm 4 in Appendix A.1. Note that the TER score can easily be computed from the predicted quality tags. The predicted TER score serves as the indicator of MT quality in our hierarchical model: MTs with quality higher than $\tau$ in Algorithm 3 are fed to the GM; otherwise they are sent to the AOM. The hyper-parameter $\tau = 0.3$ is determined by cross validation on the WMT16 development dataset. Afterwards, we apply it to the WMT17 test dataset to select the potentially preferable model, GM or AOM, to generate the final APE result for each SRC and MT pair.

More than $75\%$ of the tokens in the training set are tagged with Keep. Given the huge challenge posed by this unbalanced dataset, our fine-grained quality estimation is quite remarkable, and the performance of our final hierarchical model in Table 3 confirms its effectiveness.

# 4.3 Results of Human Evaluation

We conduct real post-editing experiments with professional translators. There are 6 independent participating translators, randomly divided into 2 groups. They are all native speakers of German and have $10+$ years of experience in translation of En-De in IT-related domains. We follow two different flows in our experiments. For a fair comparison, both groups see the same 100 source sentences picked from the WMT17 test dataset. The MTs are provided to the first group for post-editing, while our model-generated APEs are provided to the second group. The category of the translation is not revealed to the translators. The translators are asked to record the total elapsed time of their work.

![](images/364554e07798690093a0811ff5981731c63c635912a4890c73c48b6bdc5a1e3c.jpg)
Figure 6: Time Spent in Post-Editing by Translators. The averaged total time spent by translators to post-edit the APE decreases significantly, by $26.3\%$.

The statistics of the average post-editing time for the different translators are summarized in Figure 6. Besides the total time, we also analyze the durations for low- and high-quality translations separately (as determined by the QE model). In either case, post-editing from the APE costs less time. We also provide a case study of high-quality vs. low-quality APE in Appendix A.3. From these different perspectives of experimental validation, we conclude that the APE generated by our model can ease the burden of translators and substantially improve post-editing efficiency.

# 5 Conclusion

In this paper, we propose a hierarchical model that utilizes fine-grained word-level QE predictions to select one of the two APE models we propose to generate better translations automatically, achieving state-of-the-art performance. In particular, we design a dynamic deep learning model using imitation learning, which intuitively mimics the editing behaviors of human translators. Our hierarchical model is not a standard ensemble model in the conventional sense; we merely share the parameters of different modules to accomplish different objectives, including QE, AOM and GM. Our experimental findings show that if the characteristics of errors in the machine translation can be accurately simulated, it is highly likely that MT output can be automatically refined by the APE model.
Towards this end, we conduct a rigorous comparison of the machine translation and automatic post-editing based manual post-editing tasks, and it is observed that the latter can significantly increase the efficiency of post-editing. + +# Acknowledgments + +This work is partly supported by National Key R&D Program of China (2018YFB1403202). + +# References + +Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. +Loic Barrault, Ondrej Bojar, Marta R. Costa-jussa, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Muller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1-61, Florence, Italy. Association for Computational Linguistics. +Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. 2017. Findings of the 2017 conference on machine translation (WMT17). In Proceedings of the Second Conference on Machine Translation, pages 169-214, Copenhagen, Denmark. Association for Computational Linguistics. +William Chan, Nikita Kitaev, Kelvin Guu, Mitchell Stern, and Jakob Uszkoreit. 2019. Kermit: Generative insertion-based modeling for sequences. arXiv preprint arXiv:1906.01604. +Rajen Chatterjee, M. Amin Farajian, Matteo Negri, Marco Turchi, Ankit Srivastava, and Santanu Pal. 2017a. Multi-source neural automatic post-editing: Fbkås participation in the wmt 2017 ape shared task. In Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers, pages 630-638, Copenhagen, Denmark. 
Association for Computational Linguistics.
Rajen Chatterjee, M. Amin Farajian, Matteo Negri, Marco Turchi, Ankit Srivastava, and Santanu Pal. 2017b. Multi-source neural automatic post-editing: FBK's participation in the WMT 2017 APE shared task. In Proceedings of the Second Conference on Machine Translation, pages 630-638.

Rajen Chatterjee, Christian Federmann, Matteo Negri, and Marco Turchi. 2019. Findings of the WMT 2019 shared task on automatic post-editing. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 11-28.
Gonçalo M. Correia and André F. T. Martins. 2019. A simple and effective approach to automatic post-editing with transfer learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3050-3056, Florence, Italy. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
Markus Freitag, Isaac Caswell, and Scott Roy. 2019. APE at scale and its implications on MT evaluation biases. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 34-44.
Jiatao Gu, James Bradbury, Caiming Xiong, Victor O. K. Li, and Richard Socher. 2017. Non-autoregressive neural machine translation. arXiv preprint arXiv:1711.02281.
Jiatao Gu, Changhan Wang, and Jake Zhao. 2019. Levenshtein transformer. In Advances in Neural Information Processing Systems.
Marcin Junczys-Dowmunt and Roman Grundkiewicz. 2018. MS-UEdin submission to the WMT2018 APE shared task: Dual-source transformer for automatic post-editing.
In Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, pages 835-839. Association for Computational Linguistics. +Hyun Kim, Hun-Young Jung, Hongseok Kwon, Jong-Hyeok Lee, and Seung-Hoon Na. 2017. Predictor-estimator: Neural quality estimation based on target word prediction for machine translation. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), 17(1):3. +Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. International Conference on Learning Representations. +Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In ACL. + +Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. pages 66-75. +Samuel Läubli, Mark Fishel, Gary Massey, Maureen Ehrensberger-Dow, Martin Volk, Sharon O'Brien, Michel Simard, and Lucia Specia. 2013. Assessing post-editing efficiency in a realistic translation environment. +António V. Lopes, M. Amin Farajian, Gonçalo M. Correia, Jonay Trénous, and André F. T. Martins. 2019. Unbabel's submission to the WMT2019 APE shared task: BERT-based encoder-decoder for automatic post-editing. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 118-123, Florence, Italy. Association for Computational Linguistics. +Santanu Pal, Nico Herbig, Antonio Krüger, and Josef van Genabith. 2018. A transformer-based multi-source automatic post-editing system. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 827-835. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. 
In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics.
Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the Association for Machine Translation in the Americas, pages 223-231.
Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. 2019. Insertion transformer: Flexible sequence generation via insertion operations. In International Conference on Machine Learning, pages 5976-5985.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112.
Yiming Tan, Zhiming Chen, Liu Huang, Lilin Zhang, Maoxi Li, and Mingwen Wang. 2017. Neural post-editing based on quality estimation. In Proceedings of the Second Conference on Machine Translation, pages 655-660.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.

# A Appendix

# A.1 Pseudo code of QE tag computation

The computation of QE tags is quite similar to the classic Minimum Edit Distance problem and can be solved with dynamic programming, as in Algorithm 4.

Algorithm 4 QE tag computation
Require: machine translation $\mathbf{m} = \{m_i\}_{i=1}^M$, post-editing $\mathbf{e} = \{e_i\}_{i=1}^N$.
1: Initialize the edit distance matrix $d_{i,0} = i$, $d_{0,j} = j$ and the QE tags $q_i = 1$.
2: for $i = 1, \dots, M$ do
3: for $j = 1, \dots, N$ do
4: $d_{i,j} = \min \{d_{i-1,j-1} + \mathbb{I}_{m_i \neq e_j},\; d_{i,j-1} + 1,\; d_{i-1,j} + 1\}$
5: end for
6: end for
7: $i \gets M$, $j \gets N$
8: while $i > 0$ or $j > 0$ do
9: if $i > 0$ and $j > 0$ and $d_{i-1,j-1} + 1 = d_{i,j}$ then
10: $q_i \gets -1$, $i \gets i - 1$, $j \gets j - 1$ (substitution)
11: else if $j > 0$ and $d_{i,j-1} + 1 = d_{i,j}$ then
12: $q_i \gets q_i + 1$, $j \gets j - 1$ (insertion)
13: else if $i > 0$ and $d_{i-1,j} + 1 = d_{i,j}$ then
14: $q_i \gets 0$, $i \gets i - 1$ (deletion)
15: else
16: $i \gets i - 1$, $j \gets j - 1$ (match, keep)
17: end if
18: end while
19: return $\mathbf{q} = \{q_i\}_{i=1}^M$

# A.2 Details of the Training Corpus

The WMT APE shared task provided both real APE triplets and a large-scale artificial synthetic corpus containing around 500K high-quality and 4 million low-quality synthetic triples. Table 6 shows the difference between them.

Table 6: Details of the WMT 2017 APE Shared-Task Dataset. The BLEU and TER metrics are evaluated directly on the machine translations, with the post-editings as references.
| Source | # Sentences | Avg. Length | BLEU | TER |
| --- | --- | --- | --- | --- |
| Real Triples | 23,000 | 17.88 | 61.87 | 25.35 |
| Artificial 500K | 526,368 | 20.90 | 60.01 | 25.55 |
| Artificial 4M | 4,391,180 | 16.68 | 46.59 | 35.37 |
| 500K + 20×Real | 986,368 | 19.49 | 60.80 | 25.46 |
| 4M + 500K + 20×Real (full training data) | 5,377,548 | 17.20 | 49.65 | 33.31 |
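Returning to Algorithm 4 of Appendix A.1, the tag computation can be transcribed in Python as follows. This is an illustrative sketch, not the authors' implementation; it assumes a non-empty token list `m`, and it folds any insertion before the first MT token into the first tag:

```python
def qe_tags(m, e):
    """QE tags of Algorithm 4: 1 = keep, 0 = delete, -1 = replace,
    and k > 1 = keep m_i and insert k - 1 tokens after it."""
    M, N = len(m), len(e)
    # Edit-distance table with d[i][0] = i and d[0][j] = j.
    d = [[0] * (N + 1) for _ in range(M + 1)]
    for i in range(M + 1):
        d[i][0] = i
    for j in range(N + 1):
        d[0][j] = j
    for i in range(1, M + 1):
        for j in range(1, N + 1):
            d[i][j] = min(d[i - 1][j - 1] + (m[i - 1] != e[j - 1]),
                          d[i][j - 1] + 1,
                          d[i - 1][j] + 1)
    # Backtrace; every MT token starts as "keep".
    q = [1] * M
    i, j = M, N
    while i > 0 or j > 0:
        if i > 0 and j > 0 and m[i - 1] != e[j - 1] \
                and d[i - 1][j - 1] + 1 == d[i][j]:
            q[i - 1] = -1; i -= 1; j -= 1       # substitution
        elif j > 0 and d[i][j - 1] + 1 == d[i][j]:
            q[max(i - 1, 0)] += 1; j -= 1       # insertion after m_i
        elif i > 0 and d[i - 1][j] + 1 == d[i][j]:
            q[i - 1] = 0; i -= 1                # deletion
        else:
            i -= 1; j -= 1                      # match: keep
    return q
```

For instance, `qe_tags(["a", "c"], ["a", "b", "c"])` returns `[2, 1]`: keep "a" and insert one token after it, then keep "c".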
# A.3 Case Study and Runtime Efficiency

As mentioned in the paper, the AOM is more suitable for translations that require only a few edit operations, while the GM is preferable for low-quality translations. To demonstrate this conclusion and show the effectiveness of our QE-based automatic selector, some cases of translations of different quality are shown in Table 7.

In case 1 and case 2, the translation is quite close to $pe$. Therefore, the AOM only needs to predict tokens for a small number of [PLH]s. When there are relatively complete contexts provided, the AOM

Table 7: Examples of Crowdsourcing after APE. Tokens in "$\langle \rangle$" indicate GM's over-corrections or AOM's inaccurate translations caused by too many consecutive [PLH] predictions, which lead to inadequate contextual information. Tokens in "{}" highlight correct automatic edits.
| Case | Field | Content |
| --- | --- | --- |
| | **High Quality Translation Cases** | |
| Case 1 | SRC | In List view , click any column header to sort by that criteria . |
| | MT | Klicken Sie in der Listenansicht auf eine beliebige Spaltenüberschrift , um nach dieser Kriterien sortieren . |
| | PE | Klicken Sie in der Listenansicht auf eine beliebige Spaltenüberschrift , um nach diesen Kriterien zu sortieren . |
| | MT (sub-word) | `_klicken_Sie_in_der_Listenansicht_auf_eine_beliebige_Spalten überschrift , um_nach_dieser_Kriterien_sortieren_` |
| | Predicted QE Tag | 1 1 1 1 1 1 1 1 1 1 1 1 1 -1 2 1 1 |
| | TER vs Predicted TER | 11.76 vs 11.11 |
| | AOM Input | `_klicken_Sie_in_der_Listenansicht_auf_eine_beliebige_Spalten überschrift , um_nach[PLH] _Kriterien[PLH] _sortieren_` |
| | AOM Output | `_klicken_Sie_in_der_Listenansicht_auf_eine_beliebige_Spalten überschrift , um_nach{...diesen} _Kriterien{...zu} _sortieren_` |
| | GM Output | `_klicken_Sie_in_der_Listenansicht_auf_eine_beliebige_Spalten überschrift , um_nach_dieser_Kriterien{...zu} _sortieren_` |
| | Final Output | Klicken Sie in der Listenansicht auf eine beliebige Spaltenüberschrift , um nach diesen Kriterien zu sortieren . |
| | Translator Edit | no action |
| Case 2 | SRC | You can justify all text in a paragraph either including or excluding the last line . |
| | MT | Sie können den gesamtten Text eines Absatzes mit oder ohne die letzte Zeile . |
| | PE | Sie können den gesamtten Text eines Absatzes mit oder ohne die letzte Zeile ausrichten . |
| | MT (sub-word) | `_Sie_konnen__den_gesamtten__Text_eines_Absatzes__mit__oder__ohne__die__letzte_Zeile_` |
| | Predicted QE Tag | 1 1 1 1 1 1 1 1 1 1 1 2 1 |
| | TER vs Predicted TER | 6.67 vs 6.67 |
| | AOM Input | `_Sie_konnen__den_gesamtten__Text_eines_Absatzes__mit__oder__ohne__die__letzte_Zeile[PLH]_` |
| | AOM Output | `_Sie_konnen__den_gesamtten__Text_eines_Absatzes__mit__oder__ohne__die__letzte_Zeile{...ausrichten}...` |
| | GM Output | `_Sie_konnen__den_gesamtten__Text_eines_Absatzes{...entweder\_einschließlich}oder_ohne__die_letzte_Zeile\_löschen...` |
| | Final Output | Sie können den gesamtten Text eines Absatzes mit oder ohne die letzte Zeile ausrichten . |
| | Translator Edit | no action |
| | **Low Quality Translation Case** | |
| Case 3 | SRC | In Start Number , enter the number to assign to the first PDF on the list . |
| | MT | Wahlen Sie unter “Number ,”geben Sie die Nummer für die erstede PDF-Datei in der listened aus . |
| | PE | Geben Sie unter “Startnummer” die Nummer für die erstede PDF-Datei in der listened ein . |
| | MT (sub-word) | `_wahlen_Sie_unter__“_Number__,”_geben_Sie_die_Nummer_für__die_erateCDF - Datei_in_.der_` |
| | Predicted QE Tag | -1 1 1 2 -1 -1 -1 -1 1 1 0 -1 -1 1 1 1 -1 1 1 -1 1 1 |
| | TER vs Predicted TER | 35.29 vs 54.55 |
| | AOM Input | `[PLH]_Sie_unter__“[PLH][PLH][PLH][PLH][PLH]_die_Nummer[PLH][PLH]_PDF - Datei[PLH]_der_?[Lspe][PLH]_` |
| | AOM Output | `{...geben}_Sie_unter__“_Start{...geben\_Sie\_zum\_Zuweisen\_}”_die_Nummer__der__erstenCDF - Datei_über__der_?[Lspe]{...ein}...` |
GM Output{...geben}_Sie_unter__“{...Start nummer}”_die_Nummer_für__die_erateCDF - Datei_in_.der_?[Lspe]{...an}...
Final OutputGeben Sie unter “Startnummer” die Nummer für die erstede PDF-Datei in der listened an .
Translator Editan→ein
Case4SRCThe Illustrator text is converted to HTML text with basic formatting attributes in the resulting web page .
MTDie Illustrator Text HTML-Text mit grundlegenden Formatierungsattribut in der erstellen Webseite konvertiert wird .
PEDie Illustrator-Text wird in HTML-Text mit grundlegenden Formatierungsattributen in der erstellen Webseite konvertiert
MT (sub-word)_/_dieIllustrator_TextHTML-Textmitgrundlegenden Formatierung s attribute_in_der_erstellen_Webseite_konvertiert_wird_
Predicted QE Tag-1 3 3 1 1 1 1 1 1 -1 1 1 1 1 0 1
TER vs Predicted TER
AOM Input[PLH]_Illustrator[PLH][PLH]_Text[PLH][PLH]_HTML - Text_mit_grundlegenden_Formatierung s{attributen}_in_der_erstellen_Webeite_kenvertiert_
AOM Output_in_Illustrator - Der_S_text_in_ in HTML - Text_mit_grundlegenden_Formatierung s{attributen}_in_der_erstellen_Webeite_kenvertiert_
GM Output_/_derIllustrator{ - Text_wirld_in} HTML - Text_mit_grundlegenden_Formatierung s{attributen}_in_der_erstellen_Webeite_kenvertiert_
Final OutputDer Illustrator-Text wird in HTML-Text mit grundlegenden Formatierungsattributen in der erstellen Webseite konvertiert .
Translator EditDer→Die
 + +can achieve higher performance than the GM. Moreover, after reading the source and the final output, the human translators did not need to take any additional action to improve the translation quality. + +Conversely, as shown in case 3 and case 4, there is a large gap between $mt$ and $pe$ , and the input for the AOM contains a considerable number of placeholders and thus lacks sufficient contextual information. In these cases, our GM can auto-regressively regenerate the translation based on the given $mt$ to guarantee a higher quality for the final output. With the QE selector, the translators only need minimal effort to correct the errors remaining in the final APE output of our model. + +A practical concern for computer-assisted translation via APE is its expense and computational cost. Compared with traditional computer-assisted translation crowdsourcing (machine translation + human post-editing), our additional automatic post-editing does increase the computational cost, roughly by the equivalent of another machine translation model. Crowdsourcing, in contrast, is generally charged by the hour, and the numbers in our findings suggest a promising budget cut for CAT crowdsourcing. The extra APE module may add a latency of about 400 ms, which is still far below the average time spent on human post-editing; even for an online crowdsourcing system, a well-designed concurrency mechanism can keep translators from noticing any delay. From an architectural perspective, the APE model can be deployed on the same processing unit as the machine translation model and called after it in a pipeline. The only requirement is that memory capacity be large enough to store the additional parameters.
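The edit-distance alignment algorithm in the appendix above can be sketched in Python. This is our own reconstruction, under the assumed tag semantics 1 = keep, -1 = replace, 0 = delete, with the tag incremented when extra post-edit tokens must be inserted after a kept position (insertions before the first MT token are ignored in this sketch):

```python
def qe_tags(m, e):
    """Align MT tokens m against post-edit tokens e and emit one QE tag
    per MT token via a Levenshtein table and a backtrace."""
    M, N = len(m), len(e)
    d = [[0] * (N + 1) for _ in range(M + 1)]
    for i in range(M + 1):
        d[i][0] = i                       # deleting i MT tokens
    for j in range(N + 1):
        d[0][j] = j                       # inserting j PE tokens
    for i in range(1, M + 1):
        for j in range(1, N + 1):
            d[i][j] = min(d[i - 1][j - 1] + (m[i - 1] != e[j - 1]),  # sub/match
                          d[i][j - 1] + 1,                           # insertion
                          d[i - 1][j] + 1)                           # deletion
    q = [1] * M                           # default tag: keep the token
    i, j = M, N
    while i > 0 or j > 0:
        if i > 0 and j > 0 and m[i - 1] == e[j - 1] and d[i - 1][j - 1] == d[i][j]:
            i, j = i - 1, j - 1           # match: tag stays 1
        elif i > 0 and j > 0 and d[i - 1][j - 1] + 1 == d[i][j]:
            q[i - 1] = -1                 # substitution: replace MT token i
            i, j = i - 1, j - 1
        elif j > 0 and d[i][j - 1] + 1 == d[i][j]:
            if i > 0:
                q[i - 1] += 1             # a PE token must be inserted here
            j -= 1
        else:
            q[i - 1] = 0                  # deletion: drop MT token i
            i -= 1
    return q
```

For example, `qe_tags("a b c".split(), "a x c".split())` tags the middle token for replacement, while aligning `"a c"` against `"a b c"` raises the first token's tag to 2, signalling an insertion after it.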
# ConceptBert: Concept-Aware Representation for Visual Question Answering + +François Gardères* +Ecole Polytechnique Paris, France + +Maryam Ziaeefard† McGill University Montreal, Canada + +Baptiste Abeloos Thales Montreal, Canada + +Freddy Lecue +Inria, France +Thales, Canada + +# Abstract + +Visual Question Answering (VQA) is a challenging task that has received increasing attention from both the computer vision and the natural language processing communities.
Current works in VQA focus on questions which are answerable by direct analysis of the question and image alone. We present a concept-aware algorithm, ConceptBert, for questions which require common sense or basic factual knowledge from external structured content. Given an image and a question in natural language, ConceptBert requires visual elements of the image and a Knowledge Graph (KG) to infer the correct answer. We introduce a multi-modal representation which learns a joint Concept-Vision-Language embedding. We exploit the ConceptNet KG for encoding the common sense knowledge and evaluate our methodology on the Outside Knowledge-VQA (OK-VQA) and VQA datasets. Our code is available at https://github.com/ZiaMaryam/ConceptBERT + +# 1 Introduction + +Visual Question Answering (VQA) was first introduced to bridge the gap between natural language processing and image understanding applications in the joint space of vision and language (Malinowski and Fritz, 2014). + +Most VQA benchmarks compute a question representation using word embedding techniques and Recurrent Neural Networks (RNNs), and a set of object descriptors comprising bounding box coordinates and image feature vectors. Word and image representations are then fused and fed to a network to train a VQA model. However, these approaches are practical only when no knowledge beyond the visual content is required. + +Incorporating external knowledge offers several advantages. External knowledge and supporting facts can improve the relational representation between the objects detected in the image, or between entities in the question and objects in the image. It also provides information on how the answer can be derived from the question. Therefore, the complexity of the questions can be increased based on the supporting knowledge base.
 + +Large-scale Knowledge Bases (KBs), which organize the world's facts and store them in a structured form, have become important resources for representing external knowledge. A typical KB consists of a collection of subject-predicate-object triplets, each known as a fact. A KB in this form is often called a Knowledge Graph (KG) (Bollacker et al.) due to its graphical representation: the entities are nodes and the relations are the directed edges that link the nodes. Each triplet specifies that two entities are connected by a particular relation, e.g., (Shakespeare, writerOf, Hamlet). + +A VQA system that exploits KGs is an emerging research topic and is not yet well studied. Recent research has started integrating knowledge-based methods into VQA models (Wang et al., 2017, 2016; Narasimhan et al., 2018; Narasimhan and Schwing, 2018; Zhu et al., 2015; Marino et al., 2019). These methods incorporate the external knowledge through two approaches: i) they exploit a set of associated facts for each question provided in VQA datasets (Narasimhan et al., 2018; Narasimhan and Schwing, 2018), or ii) they collect possible search queries for each question-image pair and use a search API to retrieve the answers (Wang et al., 2017, 2016; Zhu et al., 2015; Marino et al., 2019). However, we go one step further and implement an end-to-end VQA model that is fully trainable. Our model does not require knowledge annotations in VQA datasets or search queries. + +![](images/633c29ea5b571c74481aa35755fe1c783eb51b94f4af801900f8f25af4cef2cb.jpg) +Figure 1: Model architecture of the proposed ConceptBert. + +Most of the recent works are still based on the idea of context-free word embeddings rather than a pre-trained language representation (LR) model. While pre-trained LR models such as BERT (Devlin et al., 2018) are an emerging direction, there is little work on their fusion with KG and image representations in VQA tasks. Liu et al.
propose a knowledge-based language representation and use BERT as the token embedding method. However, this model is also a query-based method. It collects entity names involved in questions and queries their corresponding triples from the KG. Then, it injects queried entities into questions. + +In this paper, we introduce a model which jointly learns from visual, language, and KG embeddings and captures image-question-knowledge specific interactions. The pipeline of our approach is shown in Figure 1. We compute a set of object, question, and KG embeddings. The embedded inputs are then passed through two main modules: i) the vision-language representation, and ii) the concept-language representation. The vision-language representation module jointly enhances both the image and question embeddings, each improving its context representation with the other one. The concept-language representation uses a KG embedding to incorporate relevant external information in the question embedding. The outputs of these two modules are then aggregated to represent concept-vision-language embeddings and then fed to a classifier to predict the answer. + +Our model is different from the previous methods since we use pre-trained image and language features and fuse them with KG embeddings to + +incorporate the external knowledge into the VQA task. Therefore, our model does not need additional knowledge annotations or search queries and reduces computational costs. Furthermore, our work represents an end-to-end pipeline that is fully trainable. + +In summary, the main contributions of our work are: + +1. Novel methodology to incorporate common sense knowledge to VQA models (Figure 1) +2. Concept-aware representation to use knowledge graph embeddings in VQA models (Figure 2-b) +3. 
Novel multimodal Concept-Visual-Language embeddings (Section 3.4) + +# 2 Problem formulation + +Given a question $q \in \mathcal{Q}$ grounded in an image $I \in \mathcal{I}$ and a knowledge graph $\mathcal{G}$ , the goal is to predict a meaningful answer $a \in \mathcal{A}$ . Let $\Theta$ be the parameters of the model $p$ that needs to be trained. Therefore, the predicted answer $\hat{a}$ of our model is: + +$$ +\hat {a} = \arg \max _ {a \in \mathcal {A}} p _ {\Theta} (a | I, q, \mathcal {G}) \tag {1} +$$ + +In order to retrieve the correct answer, we aim to learn a joint representation $z \in R^{d_z}$ of $q, I$ , and $\mathcal{G}$ such that: + +$$ +a ^ {*} = \hat {a} = \underset {a \in \mathcal {A}} {\arg \max } p _ {\Theta} (a | z) \tag {2} +$$ + +where $a^*$ is the ground-truth answer. $d_z$ is a hyperparameter that represents the dimension of the + +joint space $z$ . $d_{z}$ is selected based on a trade-off between the capability of the representation and the computational cost. + +# 3 Our approach + +# 3.1 Input representations + +The input to our model, ConceptBert, consists of an image representation, a question representation, and a knowledge graph representation module (cf. the blue-dashed box in Figure 1) which are discussed in detail below. + +Image representation: We use pre-trained Faster R-CNN features (Anderson et al., 2017) to extract a set of objects $\mathcal{V} = \{v_{i} \mid i = 1, \dots, n_{v}\}$ per image, where each object $v_{i}$ is associated with a visual feature vector $v_{i} \in \mathbb{R}^{d_{v}}$ and bounding-box coordinates $b_{i} \in \mathbb{R}^{d_{b}}$ . + +Question representation: Given a question consisting of $n_T$ tokens, we use BERT embeddings (Devlin et al., 2018) to generate question representation $q \in \mathbb{R}^{n_T \times d_q}$ . BERT operates over sequences of discrete tokens consisting of vocabulary words and a small set of special tokens, i.e., SEP, CLS, and MASK. 
The representation of each token is a sum of a token-specific learned embedding and encodings for position and segment. Position refers to the token's index in the sequence and segment shows the index of the token's sentence if multiple sentences exist. + +Knowledge graph representation: We use ConceptNet (Speer et al., 2016) as the source of common sense knowledge. ConceptNet is a multilingual knowledge base, representing words and phrases that people use and the common sense relationships between them. ConceptNet is a knowledge graph built from several different sources (mostly from Wiktionary, Open Mind Common Sense (Singh et al., 2002) and Games with a purpose such as Ahn et al.). It contains over 21 million edges and over 8 million nodes. In this work, we focus on the English vocabulary which contains approximately 1.5 million nodes. To avoid the step of the query construction and take full advantage of the large scale KG, we exploit ConceptNet embedding proposed in (Malaviya et al., 2020) and generate the KG representation $\pmb{k} \in \mathbb{R}^{n_T \times d_k}$ . + +This method uses Graph Convolutional Networks (Kipf and Welling, 2016) to incorporate information from the local neighborhood of a node in the graph. It includes an encoder and a decoder. + +A graph convolutional encoder takes a graph as input, and encodes each node. The encoder operates by sending messages from a node to its neighbors, weighted by the relation type defined by the edge. This operation occurs in multiple layers, incorporating information multiple hops away from a node. The last layer's representation is used as the graph embedding of the node. + +# 3.2 Vision-Language representation + +To learn joint representations of language $\mathbf{q}$ and visual content $\mathcal{V}$ , we generate vision-attended language features $V$ and language-attended visual features $Q$ (cf. the orange box in Figure 1) inspired by VilBERT model (Lu et al., 2019). 
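The graph-convolutional encoding described in Section 3.1 can be illustrated with a minimal toy sketch. This is our own illustration with made-up nodes and random weights, not the Malaviya et al. (2020) implementation: each node aggregates messages from its neighbors, transformed by a relation-specific matrix, plus a self-loop term.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                              # toy embedding size
nodes = ["cat", "animal", "pet"]
# subject-predicate-object triplets, ConceptNet-style
triplets = [("cat", "IsA", "animal"), ("cat", "IsA", "pet")]
h = {n: rng.normal(size=d) for n in nodes}         # input node embeddings
W_rel = {"IsA": rng.normal(size=(d, d))}           # per-relation weights
W_self = rng.normal(size=(d, d))                   # self-loop weights

def gcn_layer(h, triplets):
    out = {n: W_self @ v for n, v in h.items()}    # self-loop message
    counts = {n: 1 for n in h}
    for s, r, o in triplets:                       # message passing s -> o
        out[o] = out[o] + W_rel[r] @ h[s]
        counts[o] += 1
    # average the incoming messages, then apply a nonlinearity
    return {n: np.tanh(out[n] / counts[n]) for n in out}

h1 = gcn_layer(h, triplets)    # one layer = one hop of neighborhood info
h2 = gcn_layer(h1, triplets)   # stacking layers widens the receptive field
```

Stacking such layers is what lets the final layer's representation carry information from several hops away, as described above.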
+ +Our vision-language module is mainly based on two parallel BERT-style streams, which operate over image regions and text segments (cf. Figure 2-a). Each stream is a succession of transformer blocks and co-attentional transformer layers to enable information exchange between image and text modalities. These exchanges are restricted between specific layers and the text features go through more processing than visual features. The final set of image features represent high-level information of language features, and final text features include high-level vision features. + +# 3.3 Concept-Language representation + +The vision-language module represents the interactions between the image and the question. However, this module alone is not able to answer questions that require insights that are neither in the image, nor in the question. To this end, we propose the concept-language representation to produce language features conditioned on knowledge graph embeddings (cf. the red box in Figure 1). It performs knowledge-conditioned language attention in the concept stream (Figure 2-b). With this system, the model is able to incorporate common sense knowledge to the question, and enhance the question comprehension with the information found in the knowledge graph. + +The entities in the knowledge graph have both contextual and relational information that we desire to integrate in the question embedding. To this purpose, we use an attentional transformer layer which is a multi-layer bidirectional Transformer using the encoder part of the original Transformer (Vaswani et al., 2017). 
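As a concrete sketch of this knowledge-conditioned attention (a hypothetical single-head NumPy version, not the paper's multi-head implementation): queries come from the question token embeddings, while keys and values come from the KG embeddings of the same tokens.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def concept_attention(Q_w, K_G, V_G):
    """Scaled dot-product attention: question tokens attend to KG entries."""
    d_k = K_G.shape[-1]
    weights = softmax(Q_w @ K_G.T / np.sqrt(d_k))   # (n_T, n_T) attention map
    return weights @ V_G                             # knowledge-attended tokens

n_T, d = 16, 8                     # 16-token question, toy embedding size
rng = np.random.default_rng(0)
q = rng.normal(size=(n_T, d))      # question token embeddings ("queries")
k = rng.normal(size=(n_T, d))      # KG embeddings ("keys" and "values")
out = concept_attention(q, k, k)   # keys double as values in this sketch
```

Each output row is a question token enriched by a weighted mixture of KG embeddings, which is the intuition behind the concept stream.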
+ +The concept-language module is a series of + +![](images/8a2d84f0552994943503362016503178d110c1ff93125029bfa49fed69fbefc5.jpg) +a) Vision-Language representation + +![](images/397d1dcbe433357ab9daef03dae176d4057c79ea4f37a32cfe178613e9f6a2a8.jpg) +b) Concept-Language representation +Figure 2: Attention-based representation modules + +Transformer blocks that attend to question tokens based on KG embeddings. Given input question tokens $\{w_0,\dots ,w_T\}$ represented as $q$ and their KG embeddings represented as $k$ , our model outputs a final representation $G$ . + +The input consists of "queries" from question embeddings and "keys" and "values" of KG embeddings. We use Multi-Head Attention with scaled dot-product. Therefore, we pack a set of $\mathbf{q}$ into a matrix $Q_{w}$ , and $\mathbf{k}$ into a matrix $K_{G}$ and $V_{G}$ . + +$$ +\operatorname {A t t} \left(Q _ {w}, K _ {G}, V _ {G}\right) = \operatorname {s o f t m a x} \left(\frac {Q _ {w} \cdot K _ {G} ^ {\top}}{\sqrt {d _ {k}}}\right) \cdot V _ {G} \tag {3} +$$ + +The output of the final Transformer block, $G$ , is a new representation of the question, enhanced with common sense knowledge extracted from the knowledge graph. Figure 2-b shows an intermediate representation $H_{C}$ . + +# 3.4 Concept-Vision-Language embedding module + +We aggregate the outputs of the three streams to create a joint concept-vision-language representation. The aggregator needs to detect high-level interactions between the three streams to provide a meaningful answer, without erasing the lower-level interactions extracted in the previous steps. + +We design the aggregator by applying the Compact Trilinear Interaction (CTI) (Do et al., 2019) to question, answer, and image features and generate a vector to jointly represent the three features. 
 + +Given $V\in \mathbb{R}^{n_v\times d_v}$ , $Q\in \mathbb{R}^{n_T\times d_q}$ , and $G\in \mathbb{R}^{n_T\times d_k}$ , we generate a joint representation $z\in \mathbb{R}^{d_z}$ of the three embeddings. The joint representation $z$ is computed by applying CTI to $(V, Q, G)$ : + +$$
z = \sum_ {i = 1} ^ {n _ {v}} \sum_ {j = 1} ^ {n _ {T}} \sum_ {k = 1} ^ {n _ {T}} \mathcal {M} _ {i j k} \left(V _ {i} W _ {z _ {v}} \circ Q _ {j} W _ {z _ {q}} \circ G _ {k} W _ {z _ {g}}\right) \tag {4}
$$ + +where $\mathcal{M}$ is an attention map $\mathcal{M} \in \mathbb{R}^{n_v \times n_T \times n_T}$ : + +$$
\mathcal {M} = \sum_ {r = 1} ^ {R} \llbracket \mathcal {G} _ {r}; V W _ {v _ {r}}, Q W _ {q _ {r}}, G W _ {g _ {r}} \rrbracket \tag {5}
$$ + +where $W_{z_v}, W_{z_q}, W_{z_g}, W_{v_r}, W_{q_r}, W_{g_r}$ are learnable factor matrices, and $\circ$ is the Hadamard product. $R$ is a slicing parameter, establishing a trade-off between the decomposition rate and the performance, and $\mathcal{G}_r \in \mathbb{R}^{d_{q_r} \times d_{v_r} \times d_{g_r}}$ is a learnable Tucker tensor. + +The joint embedding computes more efficient and more compact representations than simply concatenating the embeddings. It creates a joint representation in a single space of the three different embedding spaces. In addition, we overcome the issue of dimensionality faced when concatenating large matrices. + +The output of the aggregator is a joint concept-vision-language representation which is then fed to a classifier to predict the answer. + +# 4 Experiments + +We evaluate the performance of our proposed model using the standard evaluation metric recommended in the VQA challenge (Agrawal et al., 2017): + +$$
Acc(ans) = \min \left(1, \frac {\# \{\text {humans that provided } ans \}}{3}\right) \tag {6}
$$ + +# 4.1 Datasets + +All experiments have been performed on VQA 2.0 (Goyal et al., 2016) and Outside Knowledge-VQA (OK-VQA) (Marino et al., 2019) datasets.
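The trilinear aggregation of Eq. 4 reduces to a single tensor contraction once the attention map $\mathcal{M}$ is given. The following NumPy sketch (our illustration with random toy tensors, not the CTI implementation of Do et al., 2019, where $\mathcal{M}$ and the factor matrices are learned) shows the contraction:

```python
import numpy as np

rng = np.random.default_rng(0)
n_v, n_T, d_v, d_q, d_k, d_z = 36, 16, 8, 8, 8, 10   # toy dimensions

V = rng.normal(size=(n_v, d_v))      # language-attended visual features
Q = rng.normal(size=(n_T, d_q))      # vision-attended question features
G = rng.normal(size=(n_T, d_k))      # knowledge-enhanced question features
W_zv = rng.normal(size=(d_v, d_z))   # factor matrices (learned in the model)
W_zq = rng.normal(size=(d_q, d_z))
W_zg = rng.normal(size=(d_k, d_z))
M = rng.normal(size=(n_v, n_T, n_T)) # attention map (built via Eq. 5)

# z_d = sum_{i,j,k} M_ijk * (V_i W_zv)_d * (Q_j W_zq)_d * (G_k W_zg)_d
z = np.einsum("ijk,id,jd,kd->d", M, V @ W_zv, Q @ W_zq, G @ W_zg)
```

The Hadamard products of Eq. 4 become the repeated output index `d` in the einsum, so the three modalities are fused coordinate-wise into one $d_z$-dimensional vector.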
 + +VQA 2.0 is a public dataset containing about 1.1 million questions and 204,721 images extracted from the 265,016 images of the COCO dataset. At least 3 questions (5.4 questions on average) are provided per image, and each question is associated with 10 different answers obtained by crowdsourcing. Since VQA 2.0 is a large dataset, we only consider questions whose set of answers has at least 9 identical ones. With this common practice, we can cast aside questions without a clear consensus answer. The questions are divided into three categories: Yes/No, Number, and Other. We are especially interested in the "Other" category, which can require external knowledge to find the correct answer. + +OK-VQA: To evaluate the performance of our proposed model, we require questions which are not answerable by direct analysis of the objects detected in the image or the entities in the question. Most knowledge-based VQA datasets impose hard constraints on their questions, such as being generated by templates (KB-VQA (Wang et al., 2015)) or directly obtained from existing knowledge bases (FVQA (Wang et al., 2016)). We select OK-VQA, which is the only VQA dataset that requires handling unstructured knowledge to answer natural questions about images. + +The OK-VQA dataset is composed of 14,031 images and 14,055 questions. For each question, we select the unanimous answer as the ground-truth answer. OK-VQA is divided into eleven categories: vehicles and transportation (VT); brands, companies and products (BCP); objects, materials and clothing (OMC); Sports and Recreation (SR); Cooking and Food (CF); Geography, History, Language and Culture (GHLC); People and Everyday Life (PEL); plants and animals (PA); science and technology (ST); weather and climate (WC). If a question was classified as belonging to different categories by different people, it was categorized as "Other".
+ +# 4.2 Implementation details + +In this section, we provide the implementation details of our proposed model in different building blocks. + +Image embedding: Each image has a total of + +36 image region features ( $n_v = 36$ ), each represented by a bounding box and an embedding vector computed by pre-trained Faster R-CNN features where $d_v = 2048$ . Each bounding box includes a 5-dimensional spatial coordinate ( $d_b = 5$ ) corresponding to the coordinates of the top-left point of the bounding box, the coordinates of the bottom-right point of the bounding box, and the covered fraction of the image area. + +Question embedding: The input questions are embedded using BERT's BASE model. Therefore, each word is represented by a 768-D word embedding $(d_q = 768)$ . Each question is divided into 16-token blocks $(n_T = 16)$ , starting with a [CLS] token and ending with a [SEP] token. The answers are transformed to one-hot encoding vectors. + +Knowledge graph embedding: During our experiments, we explored different node embeddings for ConceptNet (e.g. GloVe (Pennington et al., 2014), NumberBatch (Speer et al., 2016), and (Malaviya et al., 2020)). We found that the embedding generated by (Malaviya et al., 2020) works best in our model. + +Vision-Language representation: We initialize our vision-language representation with pretrained ViLBERT features. The ViLBERT model is built on the Conceptual Captions dataset (Sharma et al., 2018), which is a collection of 3.3 million image-caption pairs, to capture the diversity of visual content and learn some interactions between images and text. Our vision-language module includes 6 layers of Transformer blocks with 8 and 12 attention heads in the visual stream and linguistic streams, respectively. + +Concept-Language representation: We train the concept stream of our ConceptBert from scratch. The module includes 6 layers of Transformer blocks with 12 attention heads. 
+ +Concept-Vision-Language embedding: We have tested our concept-vision-language representation with $d_z = 512$ and $d_z = 1024$ . The best results were reached using $d_z = 1024$ . Our hypothesis is that we can improve the capability of the module by increasing $d_z$ . However, it leads to an increase in the computational cost. We set $R = 32$ in Equation 5, the same value as in the CTI (Do et al., 2019) for the slicing parameter. + +Classifier: We use a binary cross-entropy loss with a batch size of 1024 over a maximum of 20 epochs on 8 Tesla GPUs. We use the BertAdam + +
| Dataset | L | VL | CL | CVL |
| --- | --- | --- | --- | --- |
| VQA 2.0 | 26.68 | 67.9 | 38.24 | 69.95 |
| OK-VQA | 14.93 | 31.35 | 22.12 | 33.66 |
+ +Table 1: Evaluation results on VQA 2.0 and OK-VQA validation sets for ablation study + +
| Model | Overall | Yes/No | Number | Other |
| --- | --- | --- | --- | --- |
| Up-Down | 59.6 | 80.3 | 42.8 | 55.8 |
| XNM Net | 64.7 | - | - | - |
| ReGAT | 67.18 | - | - | - |
| ViLBERT | 67.9 | 82.56 | 54.27 | 67.15 |
| SIMPLE | 67.9 | 82.70 | 54.37 | 67.21 |
| CONCAT | 68.1 | 82.96 | 54.57 | 68.00 |
| ConceptBert | 69.95 | 83.99 | 55.29 | 70.59 |
 + +Table 2: Evaluation results of our model compared with existing algorithms on VQA 2.0 validation set. + +optimizer with an initial learning rate of 4e-5. A linear decay learning rate schedule with warm-up is used to train the model. + +# 4.3 Experimental results + +This sub-section provides experimental results on the VQA 2.0 and OK-VQA datasets. + +Ablation Study: In Table 1, we compare three ablated instances of ConceptBert with its complete form. Specifically, we validate the importance of incorporating the external knowledge into VQA pipelines on top of the vision and language embeddings. Table 1 reports the overall accuracy on the VQA 2.0 and OK-VQA validation sets in the following settings: + +- $L$ : Only question features $\mathbf{q}$ are fed to the classifier. +- $VL$ : Only the outputs of the Vision-Language representation module $[V;Q]$ are concatenated and fed to the classifier. +- $CL$ : Only the output of the Concept-Language representation module $G$ is fed to the classifier. +- $CVL$ : ConceptBert complete form; the outputs of both Vision-Language and Concept-Language modules are fused (cf. Section 3.4) and fed to the classifier. + +Comparison between the $L$ and $CL$ instances shows the importance of incorporating the external knowledge to accurately predict answers. Adding the KG embeddings to the model leads to a gain of $11.56\%$ and $7.19\%$ on the VQA and OK-VQA datasets, respectively. + +We also note that the $VL$ model outperforms the $CL$ model. The reason is that most of the questions in both VQA 2.0 and OK-VQA datasets are related to objects found in the images. Therefore, the accuracy drops without the detected object features. Compared to $VL$ and $CL$ , the $CVL$ model gives the highest accuracy, which indicates the effectiveness of the joint concept-vision-language representation. + +Results on VQA 2.0 dataset: The performance of our complete model on VQA 2.0 validation set is compared with the existing models in Table 2.
Up-Down model (Anderson et al., 2017) combines the bottom-up and top-down attention mechanisms, enabling attention to be calculated at the level of objects. XNM Net (Shi et al., 2018) and ReGAT (Li et al., 2019) are designed to answer semantically complicated questions. In addition to the existing approaches, we developed two other baselines: (i) SIMPLE: First, we create the embedding $G$ , which is the output of the concept-language module. Then, we use $G$ and the image embedding, feed them to the vision-language module, and send its output to a classifier to predict the answer. (ii) CONCAT: we concatenate the embeddings from the question and ConceptNet to form a mixed embedding $Q_{KB}$ . Then, we send $Q_{KB}$ and the image embedding to the vision-language module, and feed its output to a classifier to predict the answer. It is worth noting that SIMPLE and CONCAT do not involve CTI. The results show that our model outperforms the existing models. Since we report our results on the validation set, we removed the validation set from the training phase, so that the model relies only on the training set. + +Results on OK-VQA dataset: Table 3 shows the performance of our complete model on the OK-VQA validation set. Since there exists only one prior work on the OK-VQA dataset in the literature, we apply a few state-of-the-art models to OK-VQA and report their performance. We also ran the SIMPLE and CONCAT baselines on the OK-VQA dataset. In the OK-VQA study (Marino et al., 2019), the best results are obtained by fusing MUTAN and ArticleNet (MUTAN + AN) as a knowledge-based baseline. AN retrieves articles from Wikipedia for each question-image pair and then trains a network to predict whether and where the ground-truth answers appear in the article and in each sentence. + +From the table, we observe that our model surpasses the baselines and SOTA models in almost every category, which indicates the usefulness of
| Model | Overall | VT | BCP | OMC | SR | CF | GHLC | PEL | PA | ST | WC | Other |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| XNM Net | 25.61 | 26.84 | 21.86 | 18.22 | 33.02 | 23.93 | 23.83 | 20.79 | 24.81 | 21.43 | 42.64 | 24.39 |
| MUTAN+AN | 27.58 | 25.56 | 23.95 | 26.87 | 33.44 | 29.94 | 20.71 | 25.05 | 29.70 | 24.76 | 39.84 | 23.62 |
| ViLBERT | 31.35 | 27.92 | 26.74 | 29.72 | 35.24 | 31.93 | 34.04 | 26.54 | 30.49 | 27.38 | 46.20 | 28.72 |
| SIMPLE | 31.37 | 28.12 | 26.84 | 29.77 | 35.77 | 31.99 | 29.09 | 26.99 | 31.09 | 27.66 | 46.28 | 28.81 |
| CONCAT | 31.95 | 28.66 | 27.01 | 29.81 | 35.88 | 32.89 | 31.04 | 26.94 | 31.99 | 28.01 | 46.33 | 29.01 |
| ConceptBert | 33.66 | 30.38 | 28.02 | 30.65 | 37.85 | 35.08 | 32.91 | 28.55 | 35.88 | 32.38 | 47.13 | 31.47 |
+ +Table 3: Evaluation results of our model compared with the SOTA algorithms on OK-VQA validation set. + +![](images/ff3d4bd330b76aa6bdeb2160b4ce93e28d9a0c272a70480bf229cdaa8c3f0e46.jpg) + +![](images/561fc0352bc6615f1fcd45c620cad5100872a84fdbc8630e886f5cb5b2c8f167.jpg) + +![](images/f0cfb9be0b9f80871cd93f97b2940c54fc7677d478ab7717b86012d900fb3487.jpg) + +![](images/6a50e6e83defd1e7f8207fc9403339ea4d043facac360055b5336c12f17074c9.jpg) +Q: What is the likely relationship of these animals? VL: friends; CVL: mother and child +Q: What condiment is hanging out of the sandwich? VL: mustard; CVL: onion +Figure 3: VQA examples in the category "Other": ConceptBert complete form $CVL$ outperforms the $VL$ model on the question Q. + +![](images/4cc0e66c365afa78618c67c17381788cefa19a1401258ac622582d1344499422.jpg) +Q: What is the lady looking at? VL: phone; CVL: camera +Q: What is laying on a banana? +VL: nothing; CVL: sticker + +![](images/1b1e4b1ee9db4fb1a509125917be3e8b02157d1fde8138736fba8cd44a6f1189.jpg) +Q: What metal do the minute hands are made of? VL: metal; CVL: steel +Q: What vegetable is on the lower most plate? VL: celery; CVL: carrot + +external knowledge in predicting answers. ConceptBert performs especially well in the "Cooking and Food" (CF), "Plants and Animals" (PA), and "Science and Technology" (ST) categories, with a gain larger than $3\%$ . The answers to these types of questions are often entities beyond the main entities mentioned in the question and the visual features in the image. Therefore, the information extracted from the knowledge graph plays an important role in determining the answer. ViLBERT performs better than ConceptBert in the category "Geography, History, Language and Culture" (GHLC), since "dates" are not entities in ConceptNet. + +# 4.4 Qualitative results + +We illustrate some qualitative results of the ConceptBert complete form $CVL$ by comparing it with the $VL$ model.
In particular, we aim to illustrate the advantage of adding (i) the external knowledge extracted from the ConceptNet knowledge graph, and (ii) concept-vision-language embedding representations. + +Figure 3 and Figure 4 illustrate some qualitative results on the VQA 2.0 and OK-VQA validation sets, respectively. + +From the figures, we observe that the $VL$ model is influenced by the objects detected in the picture. However, the $CVL$ model is able to identify the correct answer rather than focusing only on the visual features. For example, in the third row of Figure 4, the $CVL$ model uses the facts that an elephant is herbivorous and that a black cat is associated with Halloween to find the correct answers. + +It is worth noting that the $CVL$ answers remain consistent from a semantic perspective even in the case of wrong answers. For example, for How big is the distance between the two players?, the $CVL$ model provides a distance, as opposed to the $VL$ model, which provides a Yes/No answer (cf. Figure 5). In another example, for the question Sparrows need to hide to avoid being eaten by what?, the $CVL$ model mentions an animal species that can eat sparrows, while the $VL$ model returns an object found in the image. From these visualization results, we observe that the knowledge strongly favours the capture of interactions between objects, which contributes + +![](images/86f5a091292755f2b974e4822c75772c38fc6b3dabcc2d32a45d2d75c4e3a65b.jpg) +Q: What event is this? + +![](images/49f4fcb3aca554e46e77aaebabb4157144f1366518a47b638add0deb93256465.jpg) +VL: birthday; CVL: wedding +Q: The box features the logo from which company? + +![](images/98a47a5374c1e822569bb78199da5245c056a54de1a2728b5d5866d2f7d72ac3.jpg) +VL: delta; CVL: amazon +Q: What holiday is associated with this animal? +VL: sleep; CVL: halloween + +![](images/86266179c27b3fa54e5c865be12daf6fe80390f1584ea5901073b55b36ce2017.jpg) +Q: Why does this animal have this object?
+ +![](images/72bbff23d63b073cd69663d85368029ce21d5b4b758df9b37df68898ca4bf59c.jpg) +VL: warmth; CVL: soccer +Q: What would you describe this place? + +![](images/420062d87b73eecffc016bdec71895faede0bca0bb9ab7200511cef0ffc28751.jpg) +VL: airport; CVL: market +Q: What do these animals eat? +VL: water; CVL: plant + +![](images/f705a990eafbaccca60ef041b648e14ce8232cd3567f8dfcb67250491ce60db8.jpg) +Q: What is the red item used for? + +![](images/49b0b773e233a316897ca2c1c7cbe7ff1a07569222c8546177310e664a7f6883.jpg) +VL: stop; CVL: water +Q: What type of tool is she using for her hair? + +![](images/06033a0998f866be8dddfcd9ffe9fbe05e0319fa8ea389daeece698e30237e70.jpg) +VL: clip; CVL: brush +Q: What is the red building called? +VL: bell; CVL: lighthouse + +![](images/ad11b93c05448cd79bd8ce3f6ee3595a708293cc3bd6804b5a573d34812f8645.jpg) +Figure 4: OK-VQA examples: ConceptBert complete form $CVL$ outperforms the $VL$ model on the question Q. +Q: What is the company that designs the television? + +![](images/3756407631a1ec504a5f152cf741a67a001245bea94ad4f75c7b2a7519ab9be2.jpg) +VL: table; CVL: lg +GT: samsung +Q: Where can you buy contemporary furniture? +VL: couch; CVL: store +GT: ikea +Figure 5: ConceptBert complete form $CVL$ identifies answers of the same type as the ground-truth answer (GT) compared with the $VL$ model on the question Q. VQA and OK-VQA examples are shown in the first and second rows, respectively. + +![](images/dacb6ddb35a6179a1fe2631cfd1449a701d7fe4e1c140d7a97dade99a0ffa78a.jpg) +Q: How big is the distance between the two players? + +![](images/ee11695d9628a29b72b48283183f5a555b7ede90989a0cc30b43e89193ddcbde.jpg) +VL: yes; CVL: 20ft +GT: 10ft +Q: What kind of boat is this? +VL: ship; CVL: freight +GT: tug + +![](images/303cb1fd9fd26514767639ef4561a94675063db6983dd0e7146075431be37aef.jpg) +Q: What play is advertised on the side of the bus? 
+ +![](images/8498523ba07ccb1235798ec997d5e70d512879feff85ba5380fe351215c19eb7.jpg) +VL: nothing; CVL: movie +GT: smurfs +Q: Sparrows need to hide to avoid being eaten by what? +VL: leaf; CVL: bird +GT: hawks + +to a better alignment between image regions and questions. + +# 5 Conclusions + +In this paper, we present ConceptBert, a concept-aware end-to-end pipeline for questions which require knowledge from external structured content. We introduce a new representation of questions enhanced with the external knowledge, exploiting Transformer blocks and knowledge graph embeddings. We then aggregate vision, language, and concept embeddings to learn a joint concept-vision-language embedding. The experimental results demonstrate the performance of our proposed model on the VQA 2.0 and OK-VQA datasets. + +For future work, we will investigate how to integrate the explicit relations between entities and objects. We believe that exploiting the provided relations in knowledge graphs and integrating them with relations found between objects in questions/images can improve the predictions. + +# References + +Aishwarya Agrawal, Jiasen Lu, Stanislaw Antol, Margaret Mitchell, C. Lawrence Zitnick, Devi Parikh, and Dhruv Batra. 2017. VQA: Visual question answering. Int. J. Comput. Vision, 123(1):4-31. +Luis Von Ahn, Mihir Kedia, and Manuel Blum. Verbosity: a game for collecting common-sense facts. In Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems, volume 1 of Games, pages 75-78. ACM Press. +Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2017. Bottom-up and top-down attention for image captioning and VQA. CoRR, abs/1707.07998. +Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247-1250.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. +Tuong Do, Thanh-Toan Do, Huy Tran, Erman Tjiputra, and Quang D. Tran. 2019. Compact trilinear interaction for visual question answering. +Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2016. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. +Thomas N. Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. CoRR, abs/1609.02907. +Linjie Li, Zhe Gan, Yu Cheng, and Jingjing Liu. 2019. Relation-aware graph attention network for visual question answering. +Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. K-BERT: Enabling language representation with knowledge graph. In Proceedings of AAAI 2020. +Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. +Chaitanya Malaviya, Chandra Bhagavatula, Antoine Bosselut, and Yejin Choi. 2020. Commonsense knowledge base completion with structural and semantic context. Proceedings of the 34th AAAI Conference on Artificial Intelligence. +Mateusz Malinowski and Mario Fritz. 2014. A multi-world approach to question answering about real-world scenes based on uncertain input. In Advances in neural information processing systems, pages 1682-1690. +Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. 2019. OK-VQA: A visual question answering benchmark requiring external knowledge. CoRR, abs/1906.00067. +Medhini Narasimhan, Svetlana Lazebnik, and Alexander G. Schwing. 2018. Out of the box: Reasoning with graph convolution nets for factual visual question answering. CoRR, abs/1811.00538. +Medhini Narasimhan and Alexander G. Schwing. 2018.
Straight to the facts: Learning knowledge base retrieval for factual visual question answering. CoRR, abs/1809.01124. +Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543. +Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of ACL. +Jiaxin Shi, Hanwang Zhang, and Juanzi Li. 2018. Explainable and explicit visual reasoning over scene graphs. +Push Singh et al. 2002. The public acquisition of commonsense knowledge. + +Robyn Speer, Joshua Chin, and Catherine Havasi. 2016. ConceptNet 5.5: An open multilingual graph of general knowledge. CoRR, abs/1612.03975. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. CoRR, abs/1706.03762. +Peng Wang, Qi Wu, Chunhua Shen, Anthony Dick, and Anton van den Hengel. 2017. Image captioning and visual question answering based on attributes and external knowledge. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI'17, pages 1290-1296. +Peng Wang, Qi Wu, Chunhua Shen, Anton van den Hengel, and Anthony Dick. 2016. FVQA: Fact-based visual question answering. +Peng Wang, Qi Wu, Chunhua Shen, Anton van den Hengel, and Anthony R. Dick. 2015. Explicit knowledge-based reasoning for visual question answering. CoRR, abs/1511.02570. +Yuke Zhu, Ce Zhang, Christopher Ré, and Li Fei-Fei. 2015. Building a large-scale multimodal knowledge base for visual question answering. CoRR, abs/1507.05670.
\ No newline at end of file diff --git a/conceptbertconceptawarerepresentationforvisualquestionanswering/images.zip b/conceptbertconceptawarerepresentationforvisualquestionanswering/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c57017dfe4b489e81ea2c1e8d16d86e932fd25fe --- /dev/null +++ b/conceptbertconceptawarerepresentationforvisualquestionanswering/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3ac7175a0da755212c667cbdf1a87663ea0fd6d5cfc89f3d5cc6c34a29582b8a +size 682174 diff --git a/conceptbertconceptawarerepresentationforvisualquestionanswering/layout.json b/conceptbertconceptawarerepresentationforvisualquestionanswering/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..226540f650ab0edccba89563a554e2247d98472f --- /dev/null +++ b/conceptbertconceptawarerepresentationforvisualquestionanswering/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eb731db58914da682cd15475d5c692d0ba37d1080f74c5480e0ae046f3785456 +size 412200 diff --git a/conditionalneuralgenerationusingsubaspectfunctionsforextractivenewssummarization/e6253770-6131-4f13-b0f1-ce087a1e3be0_content_list.json b/conditionalneuralgenerationusingsubaspectfunctionsforextractivenewssummarization/e6253770-6131-4f13-b0f1-ce087a1e3be0_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..f8ee877d00a32f2db0d5350ee4ce9b41d7caa4fa --- /dev/null +++ b/conditionalneuralgenerationusingsubaspectfunctionsforextractivenewssummarization/e6253770-6131-4f13-b0f1-ce087a1e3be0_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:077f2991a47214a72a80a13c0370941626fe707cbb97d0763a947f5f3ecf4cff +size 69965 diff --git a/conditionalneuralgenerationusingsubaspectfunctionsforextractivenewssummarization/e6253770-6131-4f13-b0f1-ce087a1e3be0_model.json 
b/conditionalneuralgenerationusingsubaspectfunctionsforextractivenewssummarization/e6253770-6131-4f13-b0f1-ce087a1e3be0_model.json new file mode 100644 index 0000000000000000000000000000000000000000..110783769e0109dbd1eb819c129c699ccd5dcc25 --- /dev/null +++ b/conditionalneuralgenerationusingsubaspectfunctionsforextractivenewssummarization/e6253770-6131-4f13-b0f1-ce087a1e3be0_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0cc6462586beedd613e5b84faf5b476019ce940bdaa52ff1d97c66902b05d154 +size 85819 diff --git a/conditionalneuralgenerationusingsubaspectfunctionsforextractivenewssummarization/e6253770-6131-4f13-b0f1-ce087a1e3be0_origin.pdf b/conditionalneuralgenerationusingsubaspectfunctionsforextractivenewssummarization/e6253770-6131-4f13-b0f1-ce087a1e3be0_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..2a6f408433a4c7ce8d969b9e5f2d39f3f88afd58 --- /dev/null +++ b/conditionalneuralgenerationusingsubaspectfunctionsforextractivenewssummarization/e6253770-6131-4f13-b0f1-ce087a1e3be0_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f4db2a0209ec793de8139f4b15d4a8469c008f734f94e5804f30d2bee5bd3fe9 +size 2376411 diff --git a/conditionalneuralgenerationusingsubaspectfunctionsforextractivenewssummarization/full.md b/conditionalneuralgenerationusingsubaspectfunctionsforextractivenewssummarization/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a342159b356fb8cc468a30b69923f0ad7a923195 --- /dev/null +++ b/conditionalneuralgenerationusingsubaspectfunctionsforextractivenewssummarization/full.md @@ -0,0 +1,246 @@ +# Conditional Neural Generation using Sub-Aspect Functions for Extractive News Summarization + +Zhengyuan Liu, Ke Shi, Nancy F. 
Chen +Institute for Infocomm Research, A*STAR, Singapore +{liu_zhengyuan, shi_ke, nfychen}@i2r.a-star.edu.sg + +# Abstract + +Much progress has been made in text summarization, fueled by neural architectures using large-scale training corpora. However, in the news domain, neural models easily overfit by leveraging position-related features due to the prevalence of the inverted pyramid writing style. In addition, there is an unmet need to generate a variety of summaries for different users. In this paper, we propose a neural framework that can flexibly control summary generation by introducing a set of sub-aspect functions (i.e. importance, diversity, position). These sub-aspect functions are regulated by a set of control codes to decide which sub-aspect to focus on during summary generation. We demonstrate that extracted summaries with minimal position bias are comparable with those generated by standard models that take advantage of position preference. We also show that news summaries generated with a focus on diversity can be preferred by human raters. These results suggest that a more flexible neural summarization framework providing more control options could be desirable in tailoring to different user preferences, which is useful since it is often impractical to articulate such preferences for different applications a priori. + +# 1 Introduction + +Text summarization aims to automatically generate a shorter version of the source content while retaining the most important information. As a straightforward and effective method, extractive summarization creates a summary by selecting and subsequently concatenating the most salient semantic units in a document. Recently, neural approaches, often trained in an end-to-end manner, have achieved favorable improvements on various large-scale benchmarks (Nallapati et al., 2017; Narayan et al., 2018a; Liu and Lapata, 2019).
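As a concrete illustration of the extractive setting described above, the sketch below scores every sentence, keeps the top-k, and concatenates them in document order. The salience score (average corpus frequency of a sentence's words) is a deliberately naive stand-in for a learned neural scorer, an assumption of this sketch rather than a method from the paper:

```python
# Minimal extractive summarizer: score each sentence, keep the top-k,
# and concatenate them in their original document order. The salience
# score used here is a naive stand-in for a learned scorer.

def extract_summary(sentences, k=3):
    doc_words = [w for s in sentences for w in s.lower().split()]
    counts = {w: doc_words.count(w) for w in set(doc_words)}

    def salience(sent):
        words = sent.lower().split()
        # average corpus frequency of the sentence's words
        return sum(counts[w] for w in words) / max(len(words), 1)

    ranked = sorted(range(len(sentences)),
                    key=lambda i: salience(sentences[i]), reverse=True)[:k]
    return " ".join(sentences[i] for i in sorted(ranked))

doc = ["The cat sat on the mat.", "Dogs bark loudly.", "The cat ran away."]
print(extract_summary(doc, k=2))
```

Because the selected indices are re-sorted before joining, the summary preserves the reading order of the source document regardless of the scores.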
+ +Despite renewed interest and avid development in extractive summarization, there are still long-standing, unresolved challenges. One major problem is position bias, which is especially common in the news domain, where the majority of summarization research is conducted. In many news articles, sentences appearing earlier tend to be more important for summarization tasks (Hong and Nenkova, 2014), and this preference is reflected in the reference summaries of public datasets. However, while this tendency is common due to the classic textbook writing style of the "inverted pyramid" (Scanlan, 1999), news articles can be presented in various ways. Other journalism writing styles include the anecdotal lead, question-and-answer format, and chronological organization (Stovall, 1985). Therefore, salient information could also be scattered across the entire article, instead of being concentrated in the first few sentences, depending on the chosen writing style of the journalist. + +As the "inverted pyramid" style is widespread in news articles (Kryscinski et al., 2019), neural models easily overfit on position-related features in extractive summarization tasks because the data-driven learning setup latches onto the features that correlate most with the output. As a result, such models select the sentences at the very beginning of a document as the best candidates without considering the full context, resulting in sub-optimal models with elaborate neural architectures that do not generalize well to other domains (Kedzie et al., 2018). + +Additionally, according to Nenkova et al. (2007): "Content selection is not a deterministic process (Salton et al., 1997; Marcu, 1997; Mani, 2001). Different people choose different sentences to include in a summary, and even the same person can select different sentences at different times (Rath et al., 1961).
Such observations lead to concerns about the advisability of using a single human model ...". These observations suggest that individuals differ on what they consider key information under different circumstances. This reflects the need to generate application-specific summaries, which is challenging without establishing appropriate expectations and knowledge of targeted readers prior to model development and ground-truth construction. However, publicly available datasets only provide one associated reference summary per document. Without explicit instructions, targeted applications, or user preferences, ground-truth construction for summarization becomes an under-constrained assignment (Kryscinski et al., 2019). Therefore, it is challenging for end-to-end models to generate alternative summaries without proper anchoring from reference summaries, making it harder for such models to reach their full potential. + +In this work, we propose a flexible neural summarization framework that provides more explicit control options when automatically generating summaries (see Figure 1). Since summarization has been regarded as a combination of sub-aspect functions (e.g. information, layout) (Carbonell and Goldstein, 1998; Lin and Bilmes, 2012), we follow the spirit of sub-aspect theory and adopt control codes on sub-aspects to condition summary generation. The advantages are two-fold: (1) It provides a systematic approach to investigate and analyze how one might minimize position bias in extractive news summarization with neural modeling. Most, if not all, previous work, such as Jung et al. (2019) and Kryscinski et al. (2019), only focuses on analyzing the degree and prevalence of position bias. In this work, we take one step further and propose a research methodology to disentangle position bias from important and non-redundant summary content.
(2) Text summarization needs are often domain- or application-specific, and it is difficult to articulate a priori what the user preferences are, thus requiring potential iterations to adapt and refine. However, human ground-truth construction for summarization is time-consuming and labor-intensive. Therefore, a more flexible summary generation framework could minimize manual labor and generate useful summaries more efficiently. + +An ideal set of sub-aspect control codes should characterize different aspects of summarization well in a comprehensive manner but at the same time delineate a relatively clear boundary between one another to minimize the set size (Higgins et al., 2017). To achieve this, we adopt the sub-aspects defined by Jung et al. (2019): IMPORTANCE, DIVERSITY, and POSITION, and assess their characterization capability on the CNN/Daily Mail news corpus (Hermann et al., 2015) via quantitative analyses and unsupervised clustering. We utilize control codes based on these three sub-aspect functions to label the training data and implement our conditional generation approach with a neural selector model. Empirical results show that given different control codes, the model can generate output summaries of alternative styles while maintaining performance comparable to the state-of-the-art model; modulation with semantic sub-aspects can reduce systemic bias learned on a news corpus and improve potential generality across domains. + +![](images/da6233d671824d1b7eda7a22448f0c9dffe31b87c364141c3c9ce0736f610368.jpg) +Figure 1: Proposed conditional generation framework exploiting sub-aspect functions. + +# 2 In Relation to Other Work + +In text summarization, most benchmark datasets focus on the news domain, such as NYT (Sandhaus, 2008) and CNN/Daily Mail (Hermann et al., 2015), where the human-written summaries are used in both abstractive and extractive paradigms (Gehrmann et al., 2018).
To improve the performance of extractive summarization, non-neural approaches explore various linguistic and statistical features such as lexical characteristics (Kupiec et al., 1995), latent topic information (Chang and Chien, 2009), discourse analysis (Hirao et al., 2015; Liu and Chen, 2019), and graph-based modeling (Erkan and Radev, 2004; Mihalcea and Tarau, 2004). In contrast, neural approaches learn the features in a data-driven manner. Based on recurrent neural networks, SummaRuNNer is one of the earliest neural models (Nallapati et al., 2017). Much development in extractive summarization has been made via reinforcement learning (Narayan et al., 2018b), joint learning of scoring and ranking (Zhou et al., 2018), and deep contextual language models (Liu and Lapata, 2019). + +Despite much development in recent neural approaches, there are still challenges such as corpus bias resulting from the prevalent "inverted pyramid" journalism writing style (Lin and Hovy, 1997), and system bias (Jung et al., 2019) stemming from position preference in the ground truth. However, to date only analysis work has been done to characterize the position-bias problem and its ramifications, such as the inability to generalize across corpora or domains (Kedzie et al., 2018; Kryscinski et al., 2019). Few, if any, have attempted to resolve this long-standing problem of position bias using neural approaches. In this work, we take a first stab at introducing sub-aspect functions for conditional extractive summarization. We explore the possibility of disentangling, during the summary generation process, the three sub-aspects that are commonly used to characterize summarization: POSITION for choosing sentences by their position, IMPORTANCE for choosing relevant and repeating content across the document, and DIVERSITY for ensuring minimal redundancy between summary sentences (Jung et al., 2019). In particular, we use these three sub-aspects as control codes for conditional training. To the best of our knowledge, this is the first work to apply auxiliary conditional codes for extractive summary generation. + +In other NLP tasks, topic information is used as a conditional signal for dialogue response generation (Xing et al., 2017) and pretraining of large-scale language models (Keskar et al., 2019), while sentiment polarity is used in text style transfer (John et al., 2019). In image style transfer, codes specifying color or texture are used to train conditional generative models (Mirza and Osindero, 2014; Higgins et al., 2017). + +# 3 Extractive Oracle Construction + +# 3.1 Similarity Metric: Semantic Affinity vs. Lexical Overlap + +For benchmark corpora that are widely adopted, e.g. CNN/Daily Mail (Hermann et al., 2015), there are only gold abstractive summaries written by humans, with no corresponding extractive oracle summaries. To convert the human-written abstracts to extractive oracle summaries, most previous work used the ROUGE score (Lin, 2004), which counts contiguous n-gram overlap, as the similarity criterion to rank and select sentences from the source content. Since ROUGE scores only perform lexical matching using word-overlap algorithms, salient sentences from the source content paraphrased by human editors could be overlooked as their ROUGE scores would be low, while sentences with a high count of common words could get an inflated ROUGE score (Kryscinski et al., 2019). + +To tackle this drawback of ROUGE, we propose to apply the semantic similarity metric BertScore (Zhang et al., 2020) to rank the candidate sentences. BertScore has performed better than ROUGE and BLEU in sentence-level semantic similarity assessment (Zhang et al., 2020). Moreover, BertScore includes recall measures between reference and candidate sequences, a more suitable metric than distance-based similarity measures (Wieting et al., 2019; Reimers and Gurevych, 2019) for summarization-related tasks, where there is an asymmetrical relationship between the reference and the generated text. + +# 3.2 Oracle Construction and Evaluation + +To build oracles with semantic similarity, we first segment sentences in source documents and human-written gold summaries. Then we convert the text to a semantically rich distributed vector space. For each sentence in a gold summary, we use BertScore to calculate its semantic similarity with candidates from the source content, then the sentence with the highest recall score is chosen. Candidates with a recall score lower than 0.5 are excluded to streamline the selection process. + +![](images/8befd2c8f0d389f3b15fcadbd3a5f3150a18b2c958bd1f55372324d275816a48.jpg) +Figure 2: Cumulative position distribution of oracles built on ROUGE (Blue) and BertScore (Orange). X axis is the ratio of article length. Y axis is the cumulative percentage of summary sentences. + +We observed that the oracle summaries generated through semantic similarity differ from those chosen by n-gram overlap. The positional distributions of the two schemes are different, where early sentence bias is less significant for the BertScore scheme (see Figure 2). To further evaluate the effectiveness of this oracle construction approach,
| | ROUGE-1 F1 Score | ROUGE-2 F1 Score |
| --- | --- | --- |
| ROUGE Oracle | 51.84 | 31.08 |
| BertScore Oracle | 50.56 | 29.41 |

| Similarity Evaluation | Score |
| --- | --- |
| Gold Summaries | - |
| ROUGE Candidates | 0.70 |
| BertScore Candidates | 0.84 |

| QA Paradigm Evaluation | Accuracy |
| --- | --- |
| *Entity and Event Questions:* | |
| Gold Summaries | 0.95 |
| ROUGE Candidates | 0.54 |
| BertScore Candidates | 0.72 |
| *Extended Questions:* | |
| Gold Summaries | 0.87 |
| ROUGE Candidates | 0.52 |
| BertScore Candidates | 0.70 |
+ +Table 1: ROUGE and Human evaluation scores of oracle summaries built on BertScore and ROUGE. + +we conducted two assessments. ROUGE scores were computed with the gold summaries. Table 1 shows oracle summaries derived from BertScore are comparable though slightly lower than those from ROUGE, which is not unexpected given that BertScore is mismatched with the ROUGE metric. We also conducted two human evaluations. First, we ranked the candidate summary pairs of 50 news samples based on their similarity to human-written gold summaries (Narayan et al., 2018a). Four linguistic analyzers were asked to consider two aspects: informativeness and coherence (Radev et al., 2002). The evaluation score represents the likelihood of a higher ranking, and is normalized to [0, 1]. Next, we adopted the question-answering paradigm (Liu and Lapata, 2019) to evaluate 30 selected samples. For each sentence in the gold summary, questions were constructed based on key information such as events and named entities. Questions whose answer can only be obtained by comprehending the full summary were also included. Human annotators were asked to answer these questions given an oracle summary. The extractive summaries constructed with BertScore score significantly higher in all human evaluations (see Table 1). + +# 4 Sub-Aspect Control Codes + +# 4.1 Sub-Aspect Features in News Summarization + +Conditional generation often uses control codes as an auxiliary vector to adjust pre-defined style features. Classic examples include sentiment polarity in style transfer (John et al., 2019) or physical attributes (e.g. color) in image generation (Higgins et al., 2017). However, for summarization it is challenging to pinpoint such intuitive or well-defined features, as the writing style could vary according to genre, topic, or editor preference. + +![](images/4f585b36200358d5818cf38809fcb0d3b3237517f4e052c89902f5901b1d4e7a.jpg) +Figure 3: Sample-level distribution of sub-aspect functions of the BertScore oracle. Values are the percentage in categorized samples, which add up to $60.03\%$ of the CNN/Daily Mail training set. The remaining $39.97\%$ do not belong to any of these 3 sub-aspects. + +In this work, we adopt position, importance and diversity as a set of sub-function features to characterize extractive news summarization (Jung et al., 2019). Considerations include: (1) the "inverted pyramid" writing style is common in news articles, thus making layout or position a salient sub-aspect for summarization; (2) the importance sub-aspect reflects the assumption that repeatedly occurring content in the source document contains more important information; (3) the diversity sub-aspect suggests that selected salient sentences should maximize the semantic volume in a distributed semantic space (Lin and Bilmes, 2012; Yogatama et al., 2015). + +# 4.2 Summary-Level Quantitative Analysis + +We apply two methods to evaluate the compatibility and effectiveness of the sub-aspects we choose for extractive news summarization. First, we conduct a quantitative analysis on the CNN/Daily Mail corpus, based on the assumption that the writing style variability of summaries can be characterized through different combinations of sub-aspects (Lin and Bilmes, 2012). + +For each source document, we converted all sentences to vector representations with the pre-trained contextual language model BERT (Devlin et al., 2019). For each sentence, we averaged the hidden states of all tokens as the sentence embedding. Similar to Jung et al. (2019), to obtain the subset of sentences which correspond to the importance sub-aspect,
we adopted an N-Nearest method which calculates an averaged Pearson correlation between one sentence and the rest over all source sentence vectors, and collected the top-$k$ candidates with the highest scores ($k$ equals the oracle summary length). To obtain the subset which corresponds to the diversity sub-aspect, we used an implementation of the QuickHull algorithm (Barber et al., 1996) to find vertices, which can be regarded as the sentences that maximize the semantic volume in a projected semantic space. For the subset that corresponds to the position sub-aspect, the first 4 sentences in the source document were chosen.

With these three sets of sub-aspects, we quantified the distribution of different sub-aspects over the extractive oracle constructed in Section 3. An oracle summary is mapped to the importance sub-aspect when at least two of its sentences are in the subset of the importance sub-aspect. For oracle summaries shorter than 3 sentences (occupying $19\%$ of the oracle), only one sentence was required to determine which sub-aspect they would be mapped to. Note that the mapping is many-to-many; i.e. each summary can be mapped to more than one sub-aspect. Figure 3 displays the distribution of the three sub-aspect functions over the oracle summaries, where position occupies the largest area. This visualization shows that the three sub-aspects represent distinct linguistic attributes but can overlap with one another.

# 4.3 Sentence-Level Unsupervised Analysis

According to the mapping algorithm in the previous section, $39\%$ of the summaries were not mapped to any sub-aspect. This finding motivated us to investigate the distribution of sub-aspect functions at the sentence level. Thus, we conducted unsupervised clustering,

![](images/718ccdbd40f29c8e9c110bb1d8611646364cfa81b2bfd0f6488fd4a19e61d318.jpg)
Figure 5: Sentence-level clustering result labeled with sub-aspect features. X axis is the cluster index.
Y axis is the proportion of sub-aspect features in each cluster.

assuming that samples within one cluster are most similar to each other and can be represented by the dominant feature.

As shown in Figure 4, we use an autoencoder architecture with adversarial training to model the correlation between document and summary sentences in the semantic space. The encoding component receives the source document representation and one summary sentence representation as input, and compresses them into a latent feature vector. Then, the latent vector and the document vector are concatenated and fed to the decoding component to reconstruct the sentence vector. To obtain a compact yet effective latent vector representing the correlation between the source and the summary, we adopt an adversarial training strategy as in (John et al., 2019). More specifically, the adversarial decoder we include aims to reconstruct the sentence vector directly from the latent vector. During training, we update the parameters of the autoencoder with an adversarial penalty (see Appendix B for implementation details). After training this autoencoder, we conduct k-means clustering $(k = 5)$ on the latent representation vectors. Then, we analyze the clustering output with the sentence-level labels of sub-aspect functions as defined in Section 4.2. As shown in Figure 5, sentences with the position sub-aspect are distributed relatively evenly across the clusters, while importance and diversity each dominate in different clusters. Based on the clustering results, we assign the dominant sub-aspect function to unmapped sentences in the same cluster. For instance, diversity is assigned to unmapped sentences in clusters 0 and 1, while importance is assigned to those in clusters 3 and 4. By doing this, we reduce $\approx 78\%$ of unmapped sentences and further reduce $35\%$ of unmapped summaries using the same criteria as in Section 4.2.
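Putting Sections 4.2 and 4.3 together, the three subset heuristics and the summary-to-sub-aspect mapping can be sketched as below. This is a hedged stand-in, not the paper's implementation: `diversity_subset` uses a 2-D monotone-chain convex hull in place of the QuickHull implementation the authors used, and assumes sentence vectors have already been projected to two dimensions.

```python
import numpy as np

def importance_subset(sent_vecs, k):
    """N-Nearest heuristic: rank sentences by their averaged Pearson
    correlation with all other source sentences; keep the top-k."""
    corr = np.corrcoef(sent_vecs)            # pairwise Pearson matrix
    np.fill_diagonal(corr, 0.0)              # ignore self-correlation
    scores = corr.sum(axis=1) / (len(sent_vecs) - 1)
    return set(np.argsort(-scores)[:k].tolist())

def diversity_subset(points_2d):
    """Convex-hull vertices ~ sentences maximizing semantic volume.
    Monotone-chain stand-in for QuickHull, on 2-D projected vectors."""
    pts = points_2d
    idx = sorted(range(len(pts)), key=lambda i: (pts[i][0], pts[i][1]))
    def cross(o, a, b):
        return ((pts[a][0] - pts[o][0]) * (pts[b][1] - pts[o][1])
                - (pts[a][1] - pts[o][1]) * (pts[b][0] - pts[o][0]))
    hull = []
    for seq in (idx, idx[::-1]):             # lower hull, then upper hull
        chain = []
        for i in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], i) <= 0:
                chain.pop()
            chain.append(i)
        hull += chain[:-1]
    return set(hull)

def position_subset(n_first=4):
    """First sentences of the source document."""
    return set(range(n_first))

def summary_to_code(summary_ids, imp, div, pos):
    """Map an oracle summary to its [importance, diversity, position] code:
    a sub-aspect is ON with >= 2 overlapping sentences (1 if the summary
    is shorter than 3 sentences), following Section 4.2."""
    need = 2 if len(summary_ids) >= 3 else 1
    sel = set(summary_ids)
    return [int(len(sel & s) >= need) for s in (imp, div, pos)]
```

A summary can switch several bits on at once, matching the many-to-many mapping described above.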
# 5 Conditional Neural Generation

In this section, we construct a set of control codes to specify the three sub-aspect features described in Section 4 and use them to label the oracle summaries constructed in Section 3. We then propose a neural extractive model with a conditional learning strategy for more flexible summary generation.

# 5.1 Control Code Specification Scheme

The control codes are constructed in the form of [importance, diversity, position] to specify sub-aspect features. We can flexibly indicate the 'ON' and 'OFF' state of each sub-aspect by switching its corresponding value to 1 or 0, thus enabling the disentanglement of each sub-aspect function. For instance, the control code $[1,0,0]$ tells the model to focus more on importance during sentence scoring and selection, while $[0,1,1]$ focuses on both diversity and position. Notably, switching the position code to 0 helps the model minimize position bias. This does not mean the first few sentences would never be selected, as there is overlap between position, importance and diversity (shown in Figure 3). There are 8 control codes under this specification scheme, and we expect this code design to provide the model with sub-aspect conditions for generating summaries.

# 5.2 Neural Extractive Selector

Given a document $D$ containing sentences $[s_0, s_1, \ldots, s_n]$, the content selector assigns a score $y_i \in [0, 1]$ to each sentence $i$, indicating its probability of being included in the summary. A neural model can be trained as an extractive selector for text summarization tasks by contextually modeling the source content.

Here, we implemented and adapted the neural extractive selector in a sequence labeling manner (Kedzie et al., 2018). As shown in Figure 6, the model consists of three components: a contextual encoding component, a selection modeling component and an output component.
First, we used BERT in the contextual encoding component to obtain feature-rich sentence-level representations. Then, during training, we concatenated these sentence embeddings with the pre-calculated control code vector and fed them to the next layer, which models the contextual hidden states with the conditional signals. Next, a linear layer with a sigmoid function receives the hidden states and produces a score between 0 and 1 for each segment

![](images/1b5c30b9c831ba93ff3564a0624ced2240787375ebcaa46f4cb1abce65ba3dee.jpg)
Figure 6: Overview of the neural selector architecture.

![](images/a0fb752ad09cefe60085494af4bb6b8852492b6db7e88827797b7920802a9814.jpg)
Figure 7: Position distribution of generated summaries from a strong baseline model BertEXT and our conditional summarization model with position code set to 0 (3 implementations). X axis is the position ratio. Y axis is the sentence-level proportion.

as the probability of extractive selection. While this architecture is straightforward, it has been shown to be competitive when combined with state-of-the-art contextual representations (Liu and Lapata, 2019).

In our setting, sentences were processed by a subword tokenizer (Wu et al., 2016), and their embeddings were initialized with the 768-dimension "base-uncased" BERT (Devlin et al., 2019) and fixed during training. Lengthy source documents were not truncated. For the selection modeling component, we experimented with a multi-layer bidirectional LSTM (Schuster and Paliwal, 1997) and a Transformer network (Vaswani et al., 2017), and empirically found that a two-layer Bi-LSTM performed best (see Appendix C for more implementation details). During testing, the sentences with the top-3 selection probabilities were extracted as the output summary, and we used the Trigram Blocking strategy (Paulus et al., 2017) to reduce redundancy.
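The Trigram Blocking step mentioned above admits a compact sketch. This is a minimal version; the exact tokenization follows Paulus et al. (2017) and is assumed here to be simple whitespace splitting.

```python
def trigrams(tokens):
    """All word trigrams of a tokenized sentence."""
    return {tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)}

def select_with_trigram_blocking(ranked_sentences, k=3):
    """ranked_sentences: (index, tokens) pairs sorted by selection
    probability, descending. A candidate is skipped if it shares any
    trigram with the sentences already selected."""
    chosen, seen = [], set()
    for idx, toks in ranked_sentences:
        tg = trigrams(toks)
        if tg & seen:          # redundant with an already-selected sentence
            continue
        chosen.append(idx)
        seen |= tg
        if len(chosen) == k:
            break
    return chosen
```

For example, if the second-ranked sentence repeats a trigram of the top-ranked one, it is skipped and the third-ranked sentence is selected instead.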
# 6 Experimental Results and Analysis

# 6.1 Quantitative Analysis

To test the possibility of reducing position bias by conditioning summary generation, we switched the position code to 0 and compared the positions of selected sentences in summaries generated by our model to those of the state-of-the-art baseline BertEXT, which is based on fine-tuning BERT (Liu and Lapata, 2019).

![](images/8ea8c061029546454a63be88fd70698686782377882fb119ee9cda68839bb335.jpg)
Figure 8: Sub-aspect mapping of generated summary with importance-focus code [1,0,0]. Left panel: one sentence in the summary belongs to importance sub-aspect. Right panel: two sentences in the summary belong to importance sub-aspect. Contour lines denote the number of generated summaries.

The results show that BertEXT has a $50\%$ chance of choosing the first $10\%$ of sentences in the document. While the proposed framework still has a stronger tendency to choose sentences from the first $30\%$ of the document, its position distribution is flattened compared to that of BertEXT.

We then switched the importance and diversity codes to 1, respectively, and categorized the generated summaries into the subset of each sub-aspect function as in Section 4.2. As shown in Figures 8 and 9, summaries in the subsets of importance and diversity weigh higher when the corresponding control codes are ON. Together, these results demonstrate the feasibility of our proposed framework, which can generate output summaries in alternative styles when given different control codes.

# 6.2 Automatic Evaluation

We calculated F1 ROUGE scores for the summaries generated under the 8 control codes, and compared them with the BertScore oracle (see Section 3), the Lead-3 baseline, which selects the first 3 sentences as the summary, and several competitive extractive models: SummaRuNNer (Nallapati et al., 2017), TransformerEXT and BertEXT (Liu and Lapata, 2019).
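At its core, the F1 ROUGE-n computation reduces to clipped n-gram overlap. The reported scores presumably come from the standard ROUGE toolkit (with stemming and other preprocessing omitted in this minimal sketch):

```python
from collections import Counter

def rouge_n_f1(candidate_tokens, reference_tokens, n=1):
    """Minimal, unsmoothed ROUGE-n F1 on pre-tokenized text."""
    def ngrams(toks):
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    c, r = ngrams(candidate_tokens), ngrams(reference_tokens)
    overlap = sum((c & r).values())      # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(c.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)
```

`Counter` intersection (`&`) takes the minimum count of each n-gram, which implements the clipping used by ROUGE precision and recall.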
From Table 2 we observe that: (1) summaries generated from code [0,0,1] are similar to LEAD-3 but dynamically learn positional features not limited to the first 3 sentences, while isolating out the diversity and importance features; (2) focusing only on the importance sub-aspect leads to the worst performance, but performance improves when other sub-aspects are also considered; (3) focusing on the diversity sub-aspect (i.e. code [0,1,0]) generates results comparable to the strong baselines.

![](images/619a52c859805c2c6a213e9cba2a8e7a49a6fb732f6b05d3f30687e2947c3139.jpg)
Figure 9: Sub-aspect mapping of generated summary with diversity-focus code [0,1,0]. Left panel: one sentence in the summary belongs to diversity sub-aspect. Right panel: two sentences in the summary belong to diversity sub-aspect. Contour lines denote the number of generated summaries.
| Model | ROUGE-1 | ROUGE-2 |
|---|---|---|
| Oracle (BertScore) | 50.56 | 29.41 |
| LEAD-3 | 40.42 | 17.62 |
| SummaRuNNer* | 39.60 | 16.20 |
| TransformerEXT* | 40.90 | 18.02 |
| BertEXT* | 43.23 | 20.24 |
| Code [0,0,0] | 39.44 | 17.37 |
| Code [0,0,1] | 40.21 | 18.25 |
| Code [0,1,0] | 39.18 | 17.11 |
| Code [0,1,1] | 40.70 | 18.42 |
| Code [1,0,0] | 36.72 | 14.74 |
| Code [1,0,1] | 40.33 | 17.90 |
| Code [1,1,0] | 37.59 | 15.68 |
| Code [1,1,1] | 40.87 | 18.50 |
Table 2: ROUGE F1 score evaluation with various control codes, in the form of [importance, diversity, position]. * denotes the results from the corresponding paper.

# 6.3 Human Evaluation

In addition to the automatic evaluation, a human evaluation was conducted by experienced linguistic analysts using Best-Worst Scaling (Louviere et al., 2015). Analysts were given 50 news articles randomly chosen from the CNN/Daily Mail test set and the corresponding summaries from 6 systems: the oracle, BertEXT, three codes disabling the position sub-aspect, and one code enabling it. They were asked to decide the best and the worst summaries for each document in terms of informativeness and coherence (Radev et al., 2002; Narayan et al., 2018a). We collected judgments from 5 human evaluators for each comparison. For each evaluator, the documents were randomized differently, and the order of summaries for each document was also shuffled differently. The score of a model was calculated as the percentage of times it was labeled as best minus the percentage of times it was labeled as worst, ranging from $-1.0$ to $1.0$. Since these labels come in pairs, the evaluation scores of all summary types sum to zero. We observed that
| Model | Evaluation Score |
|---|---|
| Oracle | 0.0458 |
| BertEXT | 0.0332 |
| Code [1,0,0] | -0.062 |
| Code [0,1,0] | 0.0198 |
| Code [0,0,1] | -0.071 |
| Code [1,1,0] | 0.0350 |

Table 3: Human evaluation on samples from baselines and our model with control codes, in the form of [importance, diversity, position].
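The Best-Worst Scaling score described in Section 6.3 (percentage of times labeled best minus percentage labeled worst) can be computed directly from the per-comparison judgments; the names below are illustrative:

```python
from collections import Counter

def bws_scores(judgments):
    """judgments: (best_system, worst_system) pairs, one per comparison.
    Score = P(labeled best) - P(labeled worst), in [-1, 1].
    Because each comparison contributes one +1 and one -1, the scores
    of all systems sum to zero."""
    n = len(judgments)
    best = Counter(b for b, _ in judgments)
    worst = Counter(w for _, w in judgments)
    systems = set(best) | set(worst)
    return {s: (best[s] - worst[s]) / n for s in systems}
```
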
summaries under the diversity code are more favored than those under the importance code, and their combination produces even better results (see Table 3). These findings resonate with those from the automatic evaluation, suggesting that whether the evaluation metric is lexical overlap (ROUGE) or human judgement, the diversity sub-aspect plays a more salient role than importance. Moreover, both the automatic and human evaluations show that summarizing with semantic-related sub-aspect condition codes achieves reasonable summaries. Examples in Appendix D show that the generated summaries are not position-biased yet still preserve key information from the source content.

# 6.4 Inference on Samples of Shuffled Sentences

To further assess how well the sub-aspect signals are decoupled from the positional information learned by the model, we conducted an experiment on samples with shuffled sentences, similar to the document shuffle in (Kedzie et al., 2018). In our setting, we introduce the shuffling only in the inference phase: we shuffled the sentences of all test samples used in Section 6.2, then applied the trained model to generate the predicted summaries. As shown in Table 4, outputs under the position sub-aspect and BertEXT suffer a significant drop in performance when we shuffle the sentence order. By comparison, there is far less decrease between the shuffled and in-order samples under the diversity and importance control codes, demonstrating that the latent features of these two

| Model | ROUGE-1 | ROUGE-2 |
|---|---|---|
| BertEXT | 36.78 (-6.45) | 14.95 (-5.29) |
| Code [1,0,0] | 33.94 (-2.78) | 13.04 (-1.70) |
| Code [0,1,0] | 36.59 (-2.59) | 14.33 (-2.78) |
| Code [0,0,1] | 30.34 (-9.87) | 8.90 (-9.35) |

Table 4: Inference scores on samples with shuffled sentences. Control codes are in the form of [importance, diversity, position]. Values in brackets: absolute decrease from the scores on the original in-order samples.
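The shuffled-inference protocol of Section 6.4 can be sketched as a small harness around any trained selector; `model_predict` is a hypothetical callable returning selected sentence indices:

```python
import random

def shuffled_inference(model_predict, documents, seed=0):
    """Shuffle sentence order only at inference time (the model itself
    was trained on in-order documents), then summarize each document."""
    rng = random.Random(seed)
    outputs = []
    for sents in documents:
        shuffled = list(sents)
        rng.shuffle(shuffled)                 # destroy positional cues
        pred_idx = model_predict(shuffled)    # indices into the shuffled doc
        outputs.append([shuffled[i] for i in pred_idx])
    return outputs
```

Comparing ROUGE on these outputs against the in-order outputs gives the bracketed decreases reported in Table 4.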
| Model | R-1 F1 | R-2 F1 | R-2 Recall |
|---|---|---|---|
| Oracle | - | - | 8.70* |
| Baseline | - | - | 6.10* |
| BertEXT | 26.91 | 3.70 | 2.98 |
| Code [1,0,0] | 34.81 | 6.23 | 6.34 |
| Code [0,1,0] | 31.79 | 5.32 | 4.62 |
| Code [0,0,1] | 29.67 | 3.98 | 3.47 |
Table 5: Inference scores on the AMI corpus from baselines and our model with control codes, in the form of [importance, diversity, position]. * denotes results from (Kedzie et al., 2018).

semantic-related sub-aspects rely less on the position information, suggesting that applying semantic sub-aspects in the training process can reduce the systematic bias learned by the model on a corpus with a strong position preference.

# 6.5 Inference on AMI Meeting Corpus

We also conducted an inference experiment on a less position-biased corpus. The AMI corpus (Carletta et al., 2005) is a collection of meetings annotated with text transcriptions and human-written summaries. Different from news summarization, the meeting summaries are abstractive with extracted keywords. Unlike the previous comparison work in (Kedzie et al., 2018), we did not train the model from scratch on the AMI training set. Instead, we only applied the pre-trained model from Section 6 (without any fine-tuning) for summarization inference on its test set (20 meeting transcript-summary pairs). Table 5 shows that summaries under the importance code obtain the highest ROUGE-1 and ROUGE-2 scores, better than the best-reported model in (Kedzie et al., 2018). Not surprisingly, summaries under the position code do not perform well, as there is less position bias in AMI. These findings suggest that our models with semantic-related control codes generalize across domains.

# 7 Conclusion

We proposed a neural framework for conditional extractive news summarization. In particular, the sub-aspect functions of importance, diversity and position are used to condition summary generation. This framework enables us to reduce position bias, a long-standing problem in news summarization, in the generated summaries while preserving performance comparable to other standard models. Moreover, our results suggest that with conditional learning, summaries can be more efficiently tailored to different user preferences and application needs.
# Acknowledgments

This research was supported by funding from the Institute for Infocomm Research (I2R) under A*STAR ARES, Singapore. We thank Ai Ti Aw, Bin Chen, Shen Tat Goh, Ridong Jiang, Jung Jae Kim, Ee Ping Ong, and Zeng Zeng at I2R for insightful discussions. We also thank the anonymous reviewers for their valuable feedback, which helped improve and extend this work.

# References

C. Bradford Barber, David P. Dobkin, and Hannu Huhdanpaa. 1996. The quickhull algorithm for convex hulls. ACM Transactions on Mathematical Software (TOMS), 22(4):469-483.
Jaime Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '98, pages 335-336, New York, NY, USA. ACM.
Jean Carletta, Simone Ashby, Sebastien Bourban, Mike Flynn, Mael Guillemot, Thomas Hain, Jaroslav Kadlec, Vasilis Karaiskos, Wessel Kraaij, Melissa Kronenthal, et al. 2005. The AMI meeting corpus: A pre-announcement. In International workshop on machine learning for multimodal interaction, pages 28-39. Springer.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Günes Erkan and Dragomir R. Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457-479.
Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization.
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098-4109, Brussels, Belgium. Association for Computational Linguistics. +Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in neural information processing systems, pages 1693-1701. + +Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. 2017. beta-vae: Learning basic visual concepts with a constrained variational framework. *ICLR*, 2(5):6. +Tsutomu Hirao, Masaaki Nishino, Yasuhisa Yoshida, Jun Suzuki, Norihito Yasuda, and Masaaki Nagata. 2015. Summarizing a document by trimming the discourse tree. IEEE/ACM Trans. Audio, Speech and Lang. Proc., 23(11):2081-2092. +Kai Hong and Ani Nenkova. 2014. Improving the estimation of word importance for news multi-document summarization. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 712-721, Gothenburg, Sweden. Association for Computational Linguistics. +Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. 2019. Disentangled representation learning for non-parallel text style transfer. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 424-434, Florence, Italy. Association for Computational Linguistics. +Taehee Jung, Dongyeop Kang, Lucas Mentch, and Eduard Hovy. 2019. Earlier isn't always better: Subaspect analysis on corpus and system biases in summarization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3315-3326, Hong Kong, China. Association for Computational Linguistics. +Chris Kedzie, Kathleen McKeown, and Hal Daume III. 2018. 
Content selection in deep learning models of summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1818-1828, Brussels, Belgium. Association for Computational Linguistics. +Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858. +Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference for Learning Representations. +Wojciech Kryscinski, Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 540-551, Hong Kong, China. Association for Computational Linguistics. + +Julian Kupiec, Jan Pedersen, and Francine Chen. 1995. A trainable document summarizer. In Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '95, page 68-73, New York, NY, USA. Association for Computing Machinery. +Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74-81, Barcelona, Spain. Association for Computational Linguistics. +Chin-Yew Lin and Eduard Hovy. 1997. Identifying topics by position. In Fifth Conference on Applied Natural Language Processing, pages 283-290, Washington, DC, USA. Association for Computational Linguistics. +Hui Lin and Jeff Bilmes. 2012. Learning mixtures of submodular shells with application to document summarization. In Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence, UAI'12, pages 479-490, Arlington, Virginia, United States. 
AUAI Press. +Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3721-3731, Hong Kong, China. Association for Computational Linguistics. +Zhengyuan Liu and Nancy Chen. 2019. Exploiting discourse-level segmentation for extractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 116-121, Hong Kong, China. Association for Computational Linguistics. +Jordan J Louviere, Terry N Flynn, and Anthony Alfred John Marley. 2015. Best-worst scaling: Theory, methods and applications. Cambridge University Press. +I. Mani. 2001. Summarization evaluation: An overview. In ACL/EACL-97 summarization workshop. +Daniel Marcu. 1997. From discourse structures to text summaries. In Intelligent Scalable Text Summarization. +Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404-411, Barcelona, Spain. Association for Computational Linguistics. +Mehdi Mirza and Simon Osindero. 2014. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784. + +Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarrunner: A recurrent neural network based sequence model for extractive summarization of documents. In *Thirty-First AAAI Conference on Artificial Intelligence*. +Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018a. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797-1807, Brussels, Belgium. Association for Computational Linguistics. +Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018b. 
Ranking sentences for extractive summarization with reinforcement learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1747-1759, New Orleans, Louisiana. Association for Computational Linguistics.
Ani Nenkova, Rebecca Passonneau, and Kathleen McKeown. 2007. The pyramid method: Incorporating human content selection variation in summarization evaluation. ACM Trans. Speech Lang. Process., 4(2).
Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. In Proceedings of the 6th International Conference on Learning Representations (ICLR).
Dragomir R. Radev, Eduard Hovy, and Kathleen McKeown. 2002. Introduction to the special issue on summarization. Computational Linguistics, 28(4):399-408.
G. J. Rath, A. Resnick, and T. R. Savage. 1961. The formation of abstracts by the selection of sentences. Part I. Sentence selection by men and machines. American Documentation, 12(2):139-141.
Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3980-3990, Hong Kong, China. Association for Computational Linguistics.
Gerard Salton, Amit Singhal, Mandar Mitra, and Chris Buckley. 1997. Automatic text structuring and summarization. Information Processing & Management, 33(2):193-207.
Evan Sandhaus. 2008. The New York Times annotated corpus. Linguistic Data Consortium, Philadelphia, 6(12):e26752.
Christopher Scanlan. 1999. Reporting and writing: basics for the 21st century. Oxford University Press.

Mike Schuster and Kuldip K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681.
+Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083. +Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958. +James Glen Stovall. 1985. Writing for the mass media. Prentice-Hall. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008. +John Wieting, Kevin Gimpel, Graham Neubig, and Taylor Berg-Kirkpatrick. 2019. Simple and effective paraphrastic similarity from parallel translations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4602-4608, Florence, Italy. Association for Computational Linguistics. +Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. +Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In Thirty-First AAAI Conference on Artificial Intelligence. +Ying-Lang Chang and J. Chien. 2009. Latent dirichlet learning for document summarization. In 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 1689-1692. +Dani Yogatama, Fei Liu, and Noah A. Smith. 2015. Extractive summarization by maximizing semantic volume. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1961-1966, Lisbon, Portugal. 
Association for Computational Linguistics. +Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2020. *Bertscore: Evaluating text generation with bert.* In *Proceedings of the Eighth International Conference on Learning Representations (ICLR)*. + +Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, and Tiejun Zhao. 2018. Neural document summarization by jointly learning to score and select sentences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 654-663, Melbourne, Australia. Association for Computational Linguistics. \ No newline at end of file diff --git a/conditionalneuralgenerationusingsubaspectfunctionsforextractivenewssummarization/images.zip b/conditionalneuralgenerationusingsubaspectfunctionsforextractivenewssummarization/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..2455c52c9776e67ac0e252606aae0c9be31d22fc --- /dev/null +++ b/conditionalneuralgenerationusingsubaspectfunctionsforextractivenewssummarization/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:157b38635d20965c85e9031edd15d34c97193c0a1528475b5b8a26fe25da8adf +size 317459 diff --git a/conditionalneuralgenerationusingsubaspectfunctionsforextractivenewssummarization/layout.json b/conditionalneuralgenerationusingsubaspectfunctionsforextractivenewssummarization/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..9231c52729fbeba35f5ef336e3a57ca9c661b099 --- /dev/null +++ b/conditionalneuralgenerationusingsubaspectfunctionsforextractivenewssummarization/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b845ebcee89be0eaca8e404cd777b91a1fb6a3614909729d3082b2b12baf5f8b +size 280462 diff --git a/connectingthedotsaknowledgeablepathgeneratorforcommonsensequestionanswering/8134efba-4e35-40b7-8b63-8c2ac1f206b1_content_list.json 
b/connectingthedotsaknowledgeablepathgeneratorforcommonsensequestionanswering/8134efba-4e35-40b7-8b63-8c2ac1f206b1_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..0f5097fa225c2978a319d9c4180481b5ec2c1be2 --- /dev/null +++ b/connectingthedotsaknowledgeablepathgeneratorforcommonsensequestionanswering/8134efba-4e35-40b7-8b63-8c2ac1f206b1_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3fd097a45a2081ef0530eb28d4ab24bf875d2a24000ffc0d5449effcbcfad089 +size 88480 diff --git a/connectingthedotsaknowledgeablepathgeneratorforcommonsensequestionanswering/8134efba-4e35-40b7-8b63-8c2ac1f206b1_model.json b/connectingthedotsaknowledgeablepathgeneratorforcommonsensequestionanswering/8134efba-4e35-40b7-8b63-8c2ac1f206b1_model.json new file mode 100644 index 0000000000000000000000000000000000000000..4f72b7fa44dbcea1e01964520405948199b49c5f --- /dev/null +++ b/connectingthedotsaknowledgeablepathgeneratorforcommonsensequestionanswering/8134efba-4e35-40b7-8b63-8c2ac1f206b1_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:24680c8f8ee2c6b0080059e9484942a3d6bce853c1ccb0e34d8f1b8cfaaac1f4 +size 111646 diff --git a/connectingthedotsaknowledgeablepathgeneratorforcommonsensequestionanswering/8134efba-4e35-40b7-8b63-8c2ac1f206b1_origin.pdf b/connectingthedotsaknowledgeablepathgeneratorforcommonsensequestionanswering/8134efba-4e35-40b7-8b63-8c2ac1f206b1_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..695e4679acc1b3e6bce3f12f6aad85f73b6722e1 --- /dev/null +++ b/connectingthedotsaknowledgeablepathgeneratorforcommonsensequestionanswering/8134efba-4e35-40b7-8b63-8c2ac1f206b1_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:75e57869b572282d7d52476fe148fe4da1c8629fa7ae9ce2a0a3f665e3e43cdb +size 592357 diff --git a/connectingthedotsaknowledgeablepathgeneratorforcommonsensequestionanswering/full.md 
# Connecting the Dots: A Knowledgeable Path Generator for Commonsense Question Answering

Peifeng Wang $^{1,3}$ , Nanyun Peng $^{1,2,3}$ , Filip Ilievski $^{3}$ , Pedro Szekely $^{1,3}$ , Xiang Ren $^{1,3}$

$^{1}$ Department of Computer Science, University of Southern California

$^{2}$ Department of Computer Science, University of California, Los Angeles

$^{3}$ Information Sciences Institute, University of Southern California

{peifengw, xiangren}@usc.edu, violetpeng@cs.ucla.edu

{ilievski,pszekely}@isi.edu

# Abstract

Commonsense question answering (QA) requires background knowledge which is not explicitly stated in a given context. Prior works use commonsense knowledge graphs (KGs) to obtain this knowledge for reasoning. However, relying entirely on these KGs may not suffice, considering their limited coverage and the contextual dependence of their knowledge. In this paper, we augment a general commonsense QA framework with a knowledgeable path generator. By extrapolating over existing paths in a KG with a state-of-the-art language model, our generator learns to connect a pair of entities in text with a dynamic, and potentially novel, multi-hop relational path. Such paths can provide structured evidence for solving commonsense questions without fine-tuning the path generator. Experiments on two datasets show the superiority of our method over previous works which fully rely on knowledge from KGs (with up to $6\%$ improvement in accuracy), across various amounts of training data. Further evaluation suggests that the generated paths are typically interpretable, novel, and relevant to the task.
# 1 Introduction

Solving commonsense QA tasks requires filling gaps with external knowledge. For instance, given the multiple-choice question in Figure 1, a system needs to know that fungus grows in moist environments, such as caves, and that a cave is a type of geological feature. Such commonsense knowledge is obvious for humans but most existing QA systems do not have it or cannot reason with it.

![](images/6484429b0f1799c355da8d67fea69233d5486b48caa189f7538f9b844adcf9eb.jpg)
Figure 1: Our path generator learns to connect the question entities (in red) and choice entities (in blue). The dashed arrow indicates a missing link in a static KG.
Q: In what geological feature will you find fungus growing?
A: shower stall B: toenails C: basement D: forest E: cave

Although recent advances in pre-trained language models (LMs) have resulted in impressive performance on commonsense-related benchmarks (Zellers et al., 2018; Bhagavatula et al., 2019; Huang et al., 2019), it is unclear whether this is due to commonsense reasoning or to capturing spurious correlations in the data (Niven and Kao, 2019). Pre-trained LMs may answer a question correctly for wrong reasons, making them highly uninterpretable (Mitra et al., 2019).

Alternatively, a set of systems retrieve external knowledge either from large text corpora or knowledge graphs (KGs). A corpus, however, might not be an ideal source of commonsense knowledge, as such knowledge is seldom stated explicitly in text (Storks et al., 2019). In contrast, commonsense KGs, like ConceptNet (Speer et al., 2017) and ATOMIC (Sap et al., 2019), provide structured evidence about the relevant entities, thus enabling effective reasoning and higher interpretability. Existing systems retrieve knowledge from a KG in the form of triplets (Mihaylov and Frank, 2018), multi-hop paths (Lin et al., 2019; Bauer et al., 2018), or subgraphs (Kapanipathi et al., 2019).
Despite the aforementioned benefits, exploiting these KGs poses the following challenges. Firstly, as KGs are known to suffer from sparsity (Li et al., 2016), they might not contain the knowledge needed to fill the gaps between the question and the answer. For example, a missing link (cave, IsA, geological feature) in Figure 1 might prevent the QA system from choosing the correct answer. Recent work on commonsense KG completion (Li et al., 2016; Bosselut et al., 2019; Bosselut and Choi, 2019) is limited to predicting the tail of a statement with known head and relation, or a single-hop relation between entities. Secondly, due to the large size and heterogeneity of modern KGs, contextualization, i.e., identifying a set of KG facts which are relevant or needed to answer a question, is also difficult (Fadnis et al., 2019). Simply retrieving all paths could introduce noisy information and potentially harm reasoning.

To address this gap between LMs and KGs, we propose a knowledgeable path generator (PG) that generalizes over the facts stored in a KG, rather than only retrieving them. We call our method neural KG due to its neural generalization over structured KGs, and, in contrast, we use the term static KG for methods which rely exclusively on existing facts in a KG. Our PG connects a pair of question and answer entities with a (novel) multi-hop path, which may not exist in the KG, allowing for missing facts like (cave, IsA, geological feature) in Figure 1 to be considered during inference.

To learn such a generator, we: (1) sample a set of random walk instances from a static commonsense KG based on rules and constraints for informativeness and relevance ( $\S 3.1$ ); (2) fine-tune a pre-trained language model, GPT-2 (Radford et al., 2019), on the sampled paths ( $\S 3.2$ ). By doing so, we transfer the rich knowledge encoded in GPT-2 to our PG. This is expected to both enhance the generalization ability of the PG and combat the sparsity of KGs.
Also, by generating high-quality missing links between the question and answer entities, we contextualize the task with relevant commonsense knowledge. To understand the impact of our multi-hop PG on downstream commonsense QA tasks, we integrate the PG in an augmented version of a general QA framework ( $\S 3.3$ ).

We run experiments on two benchmark datasets, CommonsenseQA (Talmor et al., 2018) and OpenBookQA (Mihaylov et al., 2018). The results show that our method performs better than previous systems augmented with static KGs by up to $6\%$ in accuracy, which also reveals its potential as a plug-in module for various datasets and as a vital complement to existing KG structures. In the low-resource setting, the accuracy gain over the baselines grows as the training data decreases, indicating a larger inductive bias of our generator. We also assess the quality and interpretability of our paths through both automatic and human evaluation.

![](images/7f79e82b53a1583edb575fc139e685fbafe616b7dd21cd5ae6e014da3e95ed6d.jpg)
Figure 2: Our KG-augmented QA Framework. The reasoning module leverages both the unstructured context and structured knowledge to answer a question.

To summarize, our key contributions are:

1. We propose a method to generate task-relevant knowledge paths that may not exist in the original KG, thus addressing the contextualization and sparsity challenges of KGs.
2. We design and implement a framework with three variants of our PG, to understand the role of local and global graph information.
3. Extensive experiments on two benchmark datasets demonstrate the effectiveness of our method compared to previous methods, as well as its robustness to limited training data.

# 2 Preliminaries

Our multiple-choice commonsense QA setup follows prior work (Talmor et al., 2018; Mihaylov et al., 2018; Bisk et al., 2020): given a question $q$ , a system selects exactly one of the choices $a$ as an answer.
To experiment with contextualized background knowledge, we adopt a general framework (Figure 2) consisting of a context module, a knowledge module and a reasoning module. The context module encodes both the question $q$ and a choice $a$ as unstructured evidence, while the knowledge module encodes external facts as structured evidence. Both the unstructured and the structured evidence are fed to the reasoning module, which produces a score for a question-choice pair. The choice with the highest score is the predicted answer. Next, we introduce each module in detail.

Context Module We concatenate a question $q$ and one of its choices $a$ with a special token, and feed the sequence into a contextual encoder. This encoder generates an embedding $\mathbf{c}$ , which serves as unstructured evidence to our system. As commonly done for textual input, we use a bidirectional pre-trained language model (Devlin et al., 2018; Liu et al., 2019) as the contextual encoder.

Knowledge Module Given a commonsense KG $\mathcal{G} = (\mathcal{E},\mathcal{R})$ , where $\mathcal{E}$ is the entity set and $\mathcal{R}$ is the relation set, we seek a set of relevant knowledge facts for a question-choice pair $\{q,a\}$ , which serve as structured evidence to support reasoning. We employ an entity recognition system to extract the relevant entity mentions in the question (denoted by $\mathcal{E}^q = \{e^q\}$ ) and in one of the choices ( $\mathcal{E}^a = \{e^a\}$ ). We connect each pair of question-choice entities with a multi-hop path, either by retrieving existing paths (as in previous methods) or by generating paths (see §3.3). Formally, a path is $p(e^{q},e^{a}) = \{e^{q},r_{0},e_{1},r_{1},\dots,r_{T - 1},e^{a}\}$ , where $T$ is the number of hops. Note that when $T = 1$ , the path is a single triplet. The set of paths is denoted by $\mathcal{P} = \{p(e^{q},e^{a})\mid e^{q}\in \mathcal{E}^{q},e^{a}\in \mathcal{E}^{a}\}$ .
Naturally, we employ a Relational Network (RN) (Santoro et al., 2017) to aggregate the retrieved paths into a static knowledge embedding $\mathbf{k}$ , which serves as structured evidence. In essence, an RN is a composite function over the set $\mathcal{P}$ :

$$
\mathbf{k} = f_{\phi}\left(\left\{g_{\theta}(p) \mid p \in \mathcal{P}\right\}\right), \tag{1}
$$

where $f_{\phi}$ could be any aggregation function and $g_{\theta}$ could be any neural network which projects a discrete path $p$ into a fixed-size continuous embedding $\mathbf{p}$ . We expect that not all paths contribute equally to choosing the right answer. Therefore, we construct the function $f_{\phi}$ as an attention network:

$$
\mathbf{k} = \sum_{p \in \mathcal{P}} \alpha_{p} \mathbf{p}. \tag{2}
$$

We compute the attention weight $\alpha_{p}$ by using the context embedding $\mathbf{c}$ as a query:

$$
\alpha_{p} = \frac{\exp(\hat{\alpha}_{p})}{\sum_{p^{\prime}} \exp(\hat{\alpha}_{p^{\prime}})}, \tag{3}
$$

where the context embedding $\mathbf{c}$ guides (as an attention query) the encoding of the structured evidence:

$$
\hat{\alpha}_{p} = \mathbf{c}^{\top} \tanh(\mathbf{W}_{att} \cdot \mathbf{p} + \mathbf{b}_{att}). \tag{4}
$$

Here, the attention network is parameterized by $(\mathbf{W}_{att},\mathbf{b}_{att})$ and $\tanh(\cdot)$ is a nonlinear activation function. Regarding the function $g_{\theta}$ , we employ its original formulation:

$$
g_{\theta}(p) = \mathrm{MLP}[\mathbf{e}^{\mathbf{q}}; (\mathbf{r}_{0} \circ \dots \circ \mathbf{r}_{T-1}); \mathbf{e}^{\mathbf{a}}], \tag{5}
$$

where $[;]$ is vector concatenation and $\circ$ stands for element-wise multiplication. The components (entities and relations) of a path are represented by their feature vectors.
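As an illustration, Eqs. 1-5 can be sketched in NumPy. Everything below is a toy stand-in: random vectors replace learned entity, relation, and context embeddings, and a single tanh layer replaces the MLP of Eq. 5.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding size (illustrative)

# Random stand-ins for learned feature vectors: one question entity, one
# answer entity, and the relation vectors of two candidate paths.
e_q, e_a = rng.normal(size=d), rng.normal(size=d)
paths = [
    [rng.normal(size=d), rng.normal(size=d)],  # relations of a 2-hop path
    [rng.normal(size=d)],                      # relations of a 1-hop path
]
W_mlp = rng.normal(size=(d, 3 * d))            # single-layer stand-in for the MLP
W_att, b_att = rng.normal(size=(d, d)), np.zeros(d)
c = rng.normal(size=d)                         # context embedding from the encoder

def g_theta(rels):
    """Eq. 5: MLP over [e_q ; r_0 * ... * r_{T-1} ; e_a]."""
    r = rels[0]
    for r_t in rels[1:]:
        r = r * r_t                            # element-wise product of relations
    return np.tanh(W_mlp @ np.concatenate([e_q, r, e_a]))

p_embs = [g_theta(rels) for rels in paths]

# Eqs. 3-4: attention scores with the context embedding c as the query.
scores = np.array([c @ np.tanh(W_att @ p + b_att) for p in p_embs])
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()

# Eq. 2: knowledge embedding k as the attention-weighted sum of path embeddings.
k = sum(a * p for a, p in zip(alpha, p_embs))
```

The softmax subtracts the maximum score before exponentiating, a standard numerical-stability trick that leaves Eq. 3 unchanged.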
Reasoning Module This module leverages the unstructured evidence (the context embedding $\mathbf{c}$ ) and the structured one (the knowledge embedding $\mathbf{k}$ ) to compute the plausibility of a question-choice pair. We concatenate $\mathbf{c}$ with $\mathbf{k}$ and feed them to the final classification layer, which is a linear transformation that scores a question-choice pair $\{q, a\}$ :

$$
f(q, a) = \mathbf{W}_{cls} \cdot [\mathbf{c}; \mathbf{k}] + \mathbf{b}_{cls}. \tag{6}
$$

The linear classification layer is parameterized by $(\mathbf{W}_{cls},\mathbf{b}_{cls})$ . We obtain the final probability over all choices by normalizing with softmax.

# 3 Knowledgeable Path Generator

Extracting the structured evidence by retrieving paths (or subgraphs) from a static KG, as in prior work (Mihaylov et al., 2018; Lin et al., 2019; Kapanipathi et al., 2019), faces two key challenges: sparsity and contextualization ( $\S 1$ ). We thus propose a knowledgeable path generator (PG), which learns to connect a question-choice entity pair $(e^q, e^a)$ with a multi-hop path. The generated paths are used as structured evidence in the knowledge module. Next, we detail the construction of training data ( $\S 3.1$ ), the learning of our path generator over this data ( $\S 3.2$ ), and the integration of the generator into the reasoning module ( $\S 3.3$ ). Figure 3 presents an overview of our adapted knowledge module.

# 3.1 Knowledge Path Sampling

We sample paths from a commonsense KG using random walks, in order to provide training data for our PG. Such paths are expected to contain useful knowledge for commonsense QA tasks. Given a KG $\mathcal{G} = (\mathcal{E},\mathcal{R})$ , each sampled path $p = \{e_0,r_0,e_1,r_1,\dots,r_{T - 1},e_T\}$ is a random walk on the graph, where $e_t\in \mathcal{E}$ and $r_t\in \mathcal{R}$ . The number of hops, $T$ , is a hyperparameter in our method.
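A minimal sketch of one such random walk follows; the miniature KG is hypothetical, reverse relations carry the "_" prefix described later in this section, and relation types are kept distinct within a walk for informativeness.

```python
import random

# Hypothetical miniature KG as adjacency lists: entity -> [(relation, neighbor)].
kg = {
    "cave": [("IsA", "geological feature"), ("AtLocation", "mountain")],
    "geological feature": [("_IsA", "cave")],
    "mountain": [("_AtLocation", "cave"), ("IsA", "landform")],
    "landform": [("_IsA", "mountain")],
}

def sample_path(kg, start, num_hops):
    """One random walk of num_hops steps, with distinct relation types."""
    path, used_rels, node = [start], set(), start
    for _ in range(num_hops):
        choices = [(r, e) for r, e in kg.get(node, []) if r not in used_rels]
        if not choices:
            return None                      # dead end: the caller may resample
        rel, node = random.choice(choices)
        used_rels.add(rel)
        path += [rel, node]
    return path

random.seed(0)
# Mixed numbers of hops, so the generator sees paths of variable length.
walks = [p for p in (sample_path(kg, "cave", t) for t in (1, 2, 3)) if p]
```

Each returned path alternates entities and relations, matching the form $p = \{e_0, r_0, e_1, \dots, e_T\}$ above.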
To improve the quality of the paths, we adopt two heuristic strategies. For relevance, we define a subset of relation types that are useful for answering commonsense questions, e.g., AtLocation and IsA, and filter out the remaining ones, e.g., RelatedTo, prior to sampling (see Appendix B for the discarded relations). For informativeness, we require all relation types in a path to be distinct.

We explore two sampling strategies in order to select the starting node of the random walks:

Local Sampling. The random walks start from the entities that appear in the questions and answer choices of the training set of a benchmark. This strategy is expected to favor generation of paths that are tailored to the task.

Global Sampling. We conduct random walks starting from each entity in $\mathcal{E}$ . This may divert our PG away from biasing on the local structure of the KG and enhance its generalizability to unseen data.

![](images/83bd7e0abc03b93753b2faf680d18a17c1ff3e27016efd1d62e2d629982af943.jpg)
Figure 3: Overview of our adapted knowledge module. (1) Extraction of entities from a question and its answer choices. (2) Generation of a multi-hop knowledge path with our PG to connect each pair of question and answer entities. (3) Aggregation of the generated paths into a knowledge embedding.

To include entities that are connected only with inverse triplets in a path, we add a reverse relation $r^{-1}$ for each relation $r$ . We also sample paths with a mixed number of hops $T$ , so our generator can learn to connect entities using paths of variable length, when needed. The full path sampling procedure is described by Algorithm 1 in the Appendix.

# 3.2 Generating Paths to Connect Entities

We employ GPT-2 (Radford et al., 2019) as the backbone of our path generator. GPT-2 is a pre-trained language model that encodes rich unstructured knowledge from large text corpora.
We foresee two benefits of combining a pre-trained model such as GPT-2 and a static KG: (1) the language model would be able to generate commonsense knowledge paths, by being enriched with relevant structured knowledge; (2) the unstructured knowledge encoded in the language model would help to alleviate the sparsity challenge of the static KGs.

Unlike COMET (Bosselut et al., 2019), which fine-tunes GPT (an earlier version of GPT-2) with independent triplets, we fine-tune GPT-2 with consecutive triplets that form paths (see Section 3.1). To do so, we first use GPT-2's Byte-Pair Encoding (Sennrich et al., 2016) to convert each symbolic path $p$ to its textual form as a sequence $\{\mathbf{x}_0,\mathbf{y}_0,\mathbf{x}_1,\mathbf{y}_1,\dots ,\mathbf{y}_{T - 1},\mathbf{x}_T\}$ , where $\mathbf{x}_t = \{x_t^1,x_t^2,\ldots ,x_t^{|e_t|}\}$ are phrase tokens of the entity $e_t$ and $\mathbf{y}_t = \{y_t^1,y_t^2,\dots ,y_t^{|r_t|}\}$ are phrase tokens of the relation $r_t$ . The reverse relations are represented by adding a special prefix token "\_". The resulting paths mimic natural language sentences to facilitate optimal usage of the knowledge encoded in the pre-trained language model. At inference time, in order to connect the question-choice entities, we also add the last entity phrase tokens $\mathbf{x}_T$ together with a separator token [SEP] at the beginning of each path sequence, which produces the final transformation $\mathbf{s}^p$ . This informs the generator about the last entity it should output when generating a path. Table 1 provides an example path transformation.

Table 1: Example transformation of a symbolic path into text.

{predator, DistinctFrom, prey, IsA, animal}
$\rightarrow$ {animal, [SEP], predator, distinct, from, prey, is, a, animal}

The PG learns to maximize the probability of the observed paths given the entity pairs.
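For concreteness, the transformation of Table 1 might be implemented as follows; `path_to_sequence` is a hypothetical helper, and plain word splitting stands in for GPT-2's Byte-Pair Encoding.

```python
import re

def path_to_sequence(path):
    """Transform a symbolic path [e_0, r_0, e_1, ..., e_T] into the token
    sequence fed to GPT-2, mirroring Table 1: the target entity and a [SEP]
    token are prepended to the verbalized path."""
    entities, relations = path[0::2], path[1::2]

    def verbalize(rel):
        # CamelCase relation names become lowercase words; reverse relations
        # keep the special "_" prefix on their first word.
        words = [w.lower() for w in re.findall(r"[A-Z][a-z]*", rel)]
        if rel.startswith("_"):
            words[0] = "_" + words[0]
        return words

    seq = entities[-1].split() + ["[SEP]"]      # target entity + separator
    for ent, rel in zip(entities, relations):
        seq += ent.split() + verbalize(rel)
    seq += entities[-1].split()                 # the path ends at the target
    return seq

print(path_to_sequence(["predator", "DistinctFrom", "prey", "IsA", "animal"]))
# → ['animal', '[SEP]', 'predator', 'distinct', 'from', 'prey', 'is', 'a', 'animal']
```

At inference time, only the prefix up to and including the first entity (the shaded part in Table 1) would be fed to the generator, which then decodes the rest of the path.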
We use the negative conditional log-likelihood as a loss function:

$$
\mathcal{L} = - \sum_{t = |\mathbf{x}_{0}| + |\mathbf{x}_{T}| + 1}^{|\mathbf{s}^{p}|} \log P\left(s_{t}^{p} \mid s_{< t}^{p}\right), \tag{7}
$$

where the conditional probability is defined as:

$$
P\left(s_{t}^{p} \mid s_{< t}^{p}\right) = \operatorname{softmax}\left(\mathbf{W}_{\text{vocab}} \cdot \mathbf{h}_{\mathbf{t}}\right). \tag{8}
$$

Here $\mathbf{h}_{\mathbf{t}}$ denotes the final GPT-2 representation for $s_t^p$ , and $\mathbf{W}_{\text{vocab}}$ is the embedding matrix for the token-based vocabulary used by GPT-2, which generalizes well to unseen words. During inference, the target entity ( $e^a$ ), the [SEP] token, and the starting entity ( $e^q$ ) are fed to our generator (the shaded part in Table 1), and greedy decoding is used to generate a path connecting the two entities. Other constrained decoding strategies are left as future work.

# 3.3 Adapted Commonsense QA Framework

To facilitate integration of the structured evidence from our path generator instead of a static KG, we adapt the knowledge module from §2 slightly.

We construct the path set $\mathcal{P}$ by generating a multi-hop path $p(e^{q},e^{a})$ for each pair of a question entity $e^q$ and a choice entity $e^a$ with our PG and greedy decoding. To represent each path with an embedding, we perform mean pooling of the hidden states from the last layer of GPT-2 (before the softmax layer in Eq. 8) as a new formulation for the function $g_{\theta}$ :

$$
g_{\theta}(p) = \mathrm{MEAN}(\{\mathbf{h_{1}}, \mathbf{h_{2}}, \dots, \mathbf{h_{|s^{p}|}}\}). \tag{9}
$$

Since GPT-2 has been pre-trained on a large corpus, we believe such a representation should be sufficient for preserving the information of the paths. Then, the knowledge embedding obtained with the function $f_{\phi}$ of the RN (Eq.
2-4) is concatenated with the original static knowledge embedding as our new definition of $\mathbf{k}$ . + +The whole pipeline is optimized by minimizing its cross-entropy loss. The set of learnable parameters excludes the parameters of our proposed PG, because we observed that fixing their values yields optimal performance. This points to another advantage of our PG: after being fine-tuned on the sampled random walks from a KG, the PG could be integrated within an existing QA system with no further training. + +# 4 Experiments + +# 4.1 Datasets + +We evaluate our method on two commonsense QA benchmarks: CommonsenseQA (Talmor et al., 2018) and OpenBookQA (Mihaylov et al., 2018). As the test set of CommonsenseQA is not publicly available, the predictions for it can only be evaluated once every two weeks via the official leaderboard. Thus, we report our test score on the leaderboard, and perform more extensive comparisons on the data split used in Lin et al. (2019). Besides questions and answers, OpenBookQA provides a collection of background facts in a textual form. We use the correspondence between these facts and their questions, prepared by Clark et al. (2019), as an additional input to the context module for all methods, except RoBERTa-large (see §4.5). + +# 4.2 KG and Path Data Preparation + +Entity Recognition We employ ConceptNet (Speer et al., 2017), a popular commonsense KG. As stated in §3.1, we disregard triplets that belong to a predefined set of relations (see Appendix). Similar to previous work (Lin et al., 2019), we use lexical matching to ground the entities mentioned in the question and the answer choices to our KG. One exception is that each answer choice in CommonsenseQA is treated as a single entity, as these tend to correspond directly to concepts in ConceptNet. + +Path Sampling We sample a set of paths with varying lengths, ranging from 1 to 3 hops. 
Global sampling generates 2,825,692 paths, while local sampling results in 133,612 paths for CommonsenseQA and 105,155 for OpenBookQA. We split them into training/dev/test sets at a $90:5:5$ ratio.

# 4.3 Baselines

As baselines, we consider a fine-tuned LM, static KG-augmented models, and a 1-hop link predictor on the question and the answer entities.

Fine-tuned LM. To examine the role of the external knowledge, we compare to a "Fine-tuned LM" ablation of our QA framework without the knowledge module (§2).

Static KG Models. We compare to three static KG variants of our QA framework that model the knowledge module with path/graph encoders: (1) an RN degenerate version of our system, which computes a knowledge embedding by an attention mechanism over the retrieved paths for each question-choice entity pair; (2) Relational Graph Convolutional Networks (RGCN) (Schlichtkrull et al., 2018), which encode local graphs by using graph convolutional networks with relation-specific weight matrices; (3) GconAttn (Wang et al., 2019), which models the alignment between entities via attention and pools over all entity embeddings.

Link Prediction Model. This baseline predicts the relation between question and answer entities instead of creating or finding knowledge paths. Namely, we employ TransE (Bordes et al., 2013) to learn a representation for every entity and relation in ConceptNet, which is then leveraged to predict a 1-hop relation for each pair of question and answer entities. The representations for each resulting triplet are used as 1-hop path embeddings. The rest of this baseline is identical to our QA framework.
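A minimal sketch of this baseline's 1-hop prediction step; the vectors below are random stand-ins for TransE embeddings that would actually be trained on ConceptNet, so the predicted relation here is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16  # embedding size (illustrative)

# Hypothetical entity/relation vocabularies with random "TransE" embeddings.
entities = {e: rng.normal(size=d)
            for e in ["cave", "fungus", "geological feature"]}
relations = {r: rng.normal(size=d)
             for r in ["IsA", "AtLocation", "CausesDesire"]}

def transe_score(h, r, t):
    """TransE plausibility of (h, r, t): -||h + r - t||, i.e. higher is
    better under the translation assumption h + r ≈ t."""
    return -np.linalg.norm(entities[h] + relations[r] - entities[t])

def predict_relation(e_q, e_a):
    """Pick the best-scoring 1-hop relation linking a question entity to an
    answer entity; its triplet then serves as a 1-hop path embedding."""
    return max(relations, key=lambda r: transe_score(e_q, r, e_a))

rel = predict_relation("cave", "geological feature")
print(rel, transe_score("cave", rel, "geological feature"))
```

With trained embeddings, the resulting triplet representation would be fed to the same attention and reasoning modules as the retrieved or generated paths.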
Parts of the results for baselines are reported from our another work (Feng et al., 2020). + +
| Methods | BERT-large 20% Train | BERT-large 60% Train | BERT-large 100% Train | RoBERTa-large 20% Train | RoBERTa-large 60% Train | RoBERTa-large 100% Train |
| --- | --- | --- | --- | --- | --- | --- |
| Fine-tuned LM (w/o KG) | 46.25 (±0.63) | 52.30 (±0.16) | 55.39 (±0.40) | 55.28 (±0.35) | 65.56 (±0.76) | 68.69 (±0.56) |
| + RN | 45.12 (±0.69) | 54.23 (±0.28) | 58.92 (±0.14) | 61.32 (±0.68) | 66.16 (±0.28) | 69.59 (±3.80) |
| + RGCN | 48.67 (±0.28) | 54.71 (±0.37) | 57.13 (±0.36) | 58.58 (±0.17) | 68.33 (±0.85) | 68.41 (±0.66) |
| + GconAttn | 47.95 (±0.11) | 54.96 (±0.69) | 56.94 (±0.77) | 57.53 (±0.31) | 68.09 (±0.63) | 69.88 (±0.47) |
| + Link Prediction | 47.10 (±0.79) | 53.96 (±0.56) | 56.02 (±0.55) | 60.84 (±1.36) | 66.29 (±0.29) | 69.33 (±0.98) |
| + PG-Local | 50.20 (±0.31) | 55.68 (±0.07) | 56.81 (±0.73) | 61.56 (±0.72) | 67.77 (±0.83) | 70.43 (±0.65) |
| + PG-Global | 49.89 (±1.03) | 55.47 (±0.92) | 57.21 (±0.45) | 62.93 (±0.82) | 68.65 (±0.02) | 71.55 (±0.99) |
| + PG-Full | 51.97 (±0.26) | 57.53 (±0.19) | 59.07 (±0.30) | 63.72 (±0.77) | 69.46 (±0.23) | 72.68 (±0.42) |
+ +Table 3: Test accuracy on OpenBookQA. Methods with AristoRoBERTa leverage the textual evidence by Clark et al. (2019) as an additional input to the context module. + +
| Methods | RoBERTa-large | AristoRoBERTa |
| --- | --- | --- |
| Fine-tuned LM (w/o KG) | 64.80 (±2.37) | 78.40 (±1.64) |
| + RN | 65.20 (±1.18) | 75.35 (±1.39) |
| + RGCN | 62.45 (±1.57) | 74.60 (±2.53) |
| + GconAttn | 64.75 (±1.48) | 71.80 (±1.21) |
| + Link Prediction | 66.30 (±0.48) | 77.25 (±1.11) |
| + PG-Local | 70.05 (±1.33) | 79.80 (±1.45) |
| + PG-Global | 68.40 (±0.31) | 80.05 (±0.68) |
| + PG-Full | 71.20 (±0.96) | 79.15 (±0.78) |
# 4.4 Model Variations

We experiment with three variants of our method, which differ in terms of the knowledge embedding: (1) PG-Full: a combination of our global PG and a static RN as detailed in §3.3; (2) PG-Local: a local PG which is trained on both local and global paths; (3) PG-Global: a global, data-independent PG which is trained on global paths only. We note that PG-Local and PG-Global do not include the static knowledge embedding.

# 4.5 Results

Main Results For all systems, we experiment with several encoders as a context module: BERT-large (Devlin et al., 2018) and RoBERTa-large (Liu et al., 2019) for CommonsenseQA, and RoBERTa-large and AristoRoBERTa (Clark et al., 2019) for OpenBookQA. Tables 2 and 3 show the results for CommonsenseQA and OpenBookQA, respectively. On both datasets, we observe consistent improvements brought by our method with different context encoders. Our full model, which combines both generated and static knowledge, achieves the best performance overall, suggesting that these two knowledge sources are complementary. Typically, either our local or our global variant yields the second-best results, demonstrating the effectiveness of the generated paths as structured evidence and their superiority over the static KG methods. The comparable performance of Link Prediction to the static KG methods indicates that even predicting 1-hop knowledge paths helps to address the KG sparsity.

Table 4: Test accuracy on CommonsenseQA's official leaderboard. Note that the SOTA system, UnifiedQA, is impractical (11B parameters) in an academic setting.

| Methods | Single | Ensemble |
| --- | --- | --- |
| RoBERTa (Liu et al., 2019) | 72.1 | 72.5 |
| RoBERTa+FreeLB (Zhu et al., 2019) | - | 73.1 |
| RoBERTa+HyKAS (Ma et al., 2019) | 73.2 | - |
| XLNet+DREAM | 73.3 | - |
| RoBERTa+KE | - | 73.3 |
| RoBERTa+KEDGN | - | 74.4 |
| XLNet+GraphReason (Lv et al., 2019) | 75.3 | - |
| Albert (Lan et al., 2019) | - | 76.5 |
| UnifiedQA* (Khashabi et al., 2020) | 79.1 | - |
| Albert+PG-Full | 75.6 | 78.2 |

Furthermore, we report comparable results to the other systems on the official test sets, accessible via the leaderboards (Tables 4 and 5). Notably, the two best-performing systems, UnifiedQA (Khashabi et al., 2020) and TTTTT (Raffel et al., 2019), are based on the T5 language model (Raffel et al., 2019), which requires excessive computational resources and is impractical in an academic setting. Excluding these, our full method achieves the best performance on both datasets.

Less Labeled Data To compare the robustness of our model and the baselines to sparsity, we perform experiments with $\{20\%, 40\%, 60\%, 80\%, 100\%\}$ of the training data from both datasets. The results, displayed in Table 2 and Figure 4, show that our method (with RoBERTa) performs better than or equal to the baselines with any amount of training data. The performance gain brought by either our Global or our Full model is higher when less data is used, which shows that introducing structured evidence as inductive bias helps in a low-resource setting.

Table 5: Test accuracy on the OpenBookQA leaderboard. All listed methods leverage the provided science facts as additional textual input. Note that the top 2 systems, UnifiedQA (11B parameters) and TTTTT (3B parameters), are computationally expensive and impractical in an academic setting.
| Methods | Test |
| --- | --- |
| Careful Selection (Banerjee et al., 2019) | 72.0 |
| AristoRoBERTa | 77.8 |
| KF + SIR (Banerjee and Baral, 2020) | 80.0 |
| Albert + KB | 81.0 |
| TTTTT* (Raffel et al., 2019) | 83.2 |
| UnifiedQA* (Khashabi et al., 2020) | 87.2 |
| AristoRoBERTa + PG-Full | 80.2 |
| Albert + PG-Full | 81.8 |
![](images/c7ecf4d40422d0a70ac7a0c44a093fe2dd79e53468d4cee56e1146f5f94546a.jpg)
Figure 4: Test accuracy on CommonsenseQA (left) and OpenBookQA (right) with different proportions of training data.

![](images/d9d922ca1dd84c43d4b5f872bf2226ebf9c50926eadec874cd979469e9b860a9.jpg)

Ablation Study We study the contribution of different strategies for learning our generator based on the performance of our Global and Local variants in Tables 2-3. We also include another variant by training our path generator from scratch, i.e., training a randomly-initialized model with the same architecture as GPT-2 instead of fine-tuning a pre-trained one. This Scratch variant achieves 68.75 and 65.50 accuracy on the CommonsenseQA and OpenBookQA test sets, respectively, with RoBERTa-large as the text encoder. Its performance thus resembles that of the static KG baselines, while our Full method achieves 72.68 and 71.20. This demonstrates that learning paths from scratch approximates what a static KG already has, whereas the unstructured knowledge stored in a pre-trained GPT-2 helps to complement missing knowledge in a static KG. When coupled with a more powerful encoder like RoBERTa or Albert, our Global variant achieves comparable or better results than our Local variant, without fitting the paths to the task, and thus holds promise to enhance generalization on a wider range of datasets.

# 4.6 Study of Path Quality & Interpretability

Automatic Evaluation We perform automatic evaluation of the validity and novelty of the generated paths from our Global and Scratch PG variants. To automatically measure validity, we analyze (1) the proportion of paths which successfully connect the head and the tail entities (Connection), and (2) the proportion of entities/relations found in ConceptNet (Valid Entity / Valid Relation). We also leverage a commonsense knowledge base completion model, Bilinear AVG (Li et al., 2016), which produces a score for a given triplet. This model reportedly achieves $92.5\%$ accuracy on commonsense knowledge completion and has been used in previous work (Bosselut et al., 2019). We average the scores of all the triplets in a path which are missing in ConceptNet as its Score. We compute novelty as the proportion of paths which contain at least one triplet missing in ConceptNet (Novelty).

Table 6: Automatic and human evaluation of the generated paths on the task test set. All scores are scaled to be percentage-based.

| Metric | CommonsenseQA Global | CommonsenseQA Scratch | OpenBookQA Global | OpenBookQA Scratch |
| --- | --- | --- | --- | --- |
| Connection | 97.33 | 91.16 | 96.03 | 96.01 |
| Valid Entity | 98.64 | 97.78 | 99.21 | 97.97 |
| Valid Relation | 100.00 | 100.00 | 100.00 | 100.00 |
| Score | 59.31 | 53.27 | 57.74 | 50.62 |
| Novelty | 75.82 | 58.18 | 78.93 | 53.81 |
| H-Valid | 89.20 | 60.13 | 84.93 | 53.73 |
| H-Relevance | 87.53 | 70.53 | 88.13 | 74.00 |

The results are presented in Table 6. Firstly, our two generator variants are able to connect a vast majority of the entity pairs with a valid path (over $90\%$ Connection). For this purpose, our generators only use the relations in the relation set instead of other, out-of-KG phrases ( $100\%$ Valid Relation). In addition, the novel paths from the Global generator are of higher quality compared with the ones from the Scratch generator, given that any fact with a score over 0.5 is classified as positive by Bilinear AVG; this is later confirmed by our human evaluation as well. The Global generator also has a higher Novelty, indicating the necessity of transferring knowledge from a pre-trained GPT-2 to complement a static KG.

Human Evaluation We also conduct human evaluation on two dimensions of the generated paths: (1) validity (How valid are the paths?) and (2) relevance (How relevant are the paths to the question?). We randomly sample 50 paths from our Global and Scratch generators for different question-choice entity pairs in the test datasets.

Table 7: Paths from question to gold answer entities, with novel and valid triplets in boldface.

| Example |
| --- |
| Q1: Where would you find magazines along side many other printed works? A: doctor. B*: bookstore. C: market. D: train station. E: mortuary.<br>PG-Global (2-hop): {magazine, IsA, book, AtLocation, bookstore}<br>PG-Scratch: {magazine, IsA, magazine, AtLocation, bookstore} |
| Q2: If you want harmony, what is something you should try to do with the world? A: take time. B: make noise. C: make war. D*: make peace. E: make haste.<br>PG-Global (2-hop): {harmony, \_MotivatedByGoal, make better world, HasPrerequisite, make peace}<br>PG-Scratch: {harmony, \_UsedFor, committing perjury, Causes, make peace} |
| Q3: Janet was watching the film because she liked what? A: rejection. B: laughter. C*: being entertained. D: fear. E: bordem.<br>PG-Global (1-hop): {film, \_CausesDesire, being entertained}<br>PG-Scratch: {film, HasContext, being entertained} |

For each path, we provide the corresponding question and answer
+ +choices as the context. We ask three annotators to score each path from 1 (Not at all) to 5 (Very), resulting in a total of 150 scores for each dimension/generator/dataset. The averages of these scores are reported as H-Valid and H-Relevance in Table 6. For both dimensions, our Global generator achieves higher scores, showing the ability of fine-tuning a pre-trained GPT-2 as our generator to learn the path distribution which is of high quality and relevant to commonsense QA. + +Path Interpretability. In Table 7, we compare example paths generated by our Global and Scratch variants to connect the question entities to the gold answer entities. In Q1, our Global generator provides knowledge about the location of an entity with a 2-hop path, which helps with answering such "Where" questions. Although the path from our Scratch generator also contains the AtLocation relation, its first generated hop (IsA) is less informative. In Q2, our Global generator is able to connect complex ideas about harmony and making peace with a 2-hop path, while the path from the Scratch variant contains incorrect information: peace is caused by committing perjury. In Q3, the path from our Global generator is able to predict the relevant property of an entity and realizes that a 1-hop relation suffices in this case. Our Scratch variant, however, predicts a less precise relation (HasContext). These cases show the path generalization ability of the fine-tuned pre-trained GPT-2, owed to its unstructured knowledge. We refer readers to Table 12 in Appendix for more cases. + +# 5 Related Work + +Multi-hop Reasoning on KGs. Recent benchmarks for commonsense QA and related tasks like open domain QA (Yang et al., 2018) and reading comprehension (Welbl et al., 2018), require systems to conduct multi-hop reasoning. Existing systems typically employ entity linking to recognize + +the relevant entities, ground them to a KG, and retrieve the paths from the local graph neighborhood around the entities. 
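As a concrete picture of this retrieval step, a bounded breadth-first search over a local neighborhood might look like the following sketch; the adjacency-list representation is our own illustrative assumption, not code from any of the cited systems:

```python
from collections import deque

# Minimal sketch of static path retrieval: enumerate all paths of up to
# `max_hops` relations between two grounded entities. `graph` maps an
# entity to (relation, neighbor) pairs from the local KG neighborhood.

def extract_paths(graph, head, tail, max_hops=2):
    paths, queue = [], deque([(head, [head])])
    while queue:
        node, path = queue.popleft()
        hops = len(path) // 2            # entities and relations alternate
        if node == tail and hops >= 1:
            paths.append(path)
            continue
        if hops >= max_hops:
            continue
        for rel, nxt in graph.get(node, ()):
            if nxt not in path[0::2]:    # do not revisit entities
                queue.append((nxt, path + [rel, nxt]))
    return paths
```

Such retrieval only succeeds when every hop already exists in the static KG, which is exactly the limitation the generative approach addresses.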
The retrieved paths are scored or ranked using graph-based metrics (e.g., PageRank, centrality) (Paul and Frank, 2019; Fadnis et al., 2019; Bauer et al., 2018), handcrafted rules (Kapanipathi et al., 2019), or neural methods (e.g., attention mechanisms) (Kundu et al., 2018; Lin et al., 2019). Rather than relying on a static KG, our PG is able to generate knowledge paths dynamically, even when they are absent from the KG.

Dynamic Knowledge Path Generation. Several methods generate knowledge paths instead of extracting them from static KGs. Asai et al. (2019) learn reasoning paths by forming sequences of evidence documents; however, their approach relies on inter-document hyperlinks to establish relations in the constructed KG. The extractor of Fu et al. (2019) retrieves missing facts in order to address the sparsity of KGs. Unlike our work, their setting is limited to knowledge graph completion, where both a query entity and a single query relation are given. The most similar existing work to ours is that by Bosselut and Choi (2019), which also leverages GPT-2 to dynamically generate knowledge paths. We see two key differences between this method and ours: (1) they expand their paths gradually by predicting the next entity one at a time, while we generate the paths in an end-to-end manner; (2) their method is restricted to a setting where the context can be treated as a single entity and the question as a query relation, which is not a limitation of our method.
Our QA framework enhanced with this generator outperforms both pre-trained language models and prior KG-augmented methods on two commonsense QA benchmarks. The accuracy gain grows as the amount of training data decreases. Furthermore, automatic- and human-based evaluations of the generated paths yield high scores for their validity, novelty, and relevance. Future research should investigate how to optimally fuse the knowledge and the context embeddings. It should also address the ambiguity of the entity mentions in the questions, the answers, and the lexical nodes in ConceptNet.

# Acknowledgments

We thank the anonymous reviewers for their insightful comments. This material is based upon work sponsored by the DARPA MCS program under Contract No. N660011924033 with the United States Office Of Naval Research.

# References

Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong. 2019. Learning to retrieve reasoning paths over wikipedia graph for question answering. arXiv preprint arXiv:1911.10470.
Pratyay Banerjee and Chitta Baral. 2020. Knowledge fusion and semantic knowledge ranking for open domain question answering. arXiv preprint arXiv:2004.03101.
Pratyay Banerjee, Kuntal Kumar Pal, Arindam Mitra, and Chitta Baral. 2019. Careful selection of knowledge to solve open book question answering. arXiv preprint arXiv:1907.10738.
Lisa Bauer, Yicheng Wang, and Mohit Bansal. 2018. Commonsense for generative multi-hop question answering tasks. arXiv preprint arXiv:1809.06309.
Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Scott Wen-tau Yih, and Yejin Choi. 2019. Abductive commonsense reasoning. arXiv preprint arXiv:1908.05739.
Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. 2020. Piqa: Reasoning about physical commonsense in natural language. In Thirty-Fourth AAAI Conference on Artificial Intelligence.
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in neural information processing systems, pages 2787-2795.
Antoine Bosselut and Yejin Choi. 2019. Dynamic knowledge graph construction for zero-shot commonsense question answering. arXiv preprint arXiv:1911.03876.
Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. Comet: Commonsense transformers for automatic knowledge graph construction. arXiv preprint arXiv:1906.05317.
Peter Clark, Oren Etzioni, Tushar Khot, Bhavana Dalvi Mishra, Kyle Richardson, Ashish Sabharwal, Carissa Schoenick, Oyvind Tafjord, Niket Tandon, Sumithra Bhakthavatsalam, et al. 2019. From 'f' to 'a' on the ny regents science exams: An overview of the aristo project. arXiv preprint arXiv:1909.01958.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Kshitij Fadnis, Kartik Talamadupula, Pavan Kapanipathi, Haque Ishfaq, Salim Roukos, and Achille Fokoue. 2019. Heuristics for interpretable knowledge graph contextualization. arXiv preprint arXiv:1911.02085.
Yanlin Feng, Xinyue Chen, Bill Yuchen Lin, Peifeng Wang, Jun Yan, and Xiang Ren. 2020. Scalable multi-hop relational reasoning for knowledge-aware question answering. arXiv preprint arXiv:2005.00646.
Cong Fu, Tong Chen, Meng Qu, Woojeong Jin, and Xiang Ren. 2019. Collaborative policy learning for open knowledge graph reasoning. arXiv preprint arXiv:1909.00230.
Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos qa: Machine reading comprehension with contextual commonsense reasoning. arXiv preprint arXiv:1909.00277.
+Pavan Kapanipathi, Veronika Thost, Siva Sankalp Patel, Spencer Whitehead, Ibrahim Abdelaziz, Avinash Balakrishnan, Maria Chang, Kshitij Fadnis, Chulaka Gunasekara, Bassem Makni, et al. 2019. Infusing knowledge into the textual entailment task using graph convolutional networks. arXiv preprint arXiv:1911.02060. +Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. Unifiedqa: Crossing format boundaries with a single qa system. arXiv preprint arXiv:2005.00700. +Souvik Kundu, Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. Exploiting explicit paths for multi-hop reading comprehension. arXiv preprint arXiv:1811.01127. +Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942. + +Xiang Li, Aynaz Taheri, Lifu Tu, and Kevin Gimpel. 2016. Commonsense knowledge base completion. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1445-1455. +Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. 2019. Kagnet: Knowledge-aware graph networks for commonsense reasoning. arXiv preprint arXiv:1909.02151. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. +Shangwen Lv, Daya Guo, Jingjing Xu, Duyu Tang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, and Songlin Hu. 2019. Graph-based reasoning over heterogeneous external knowledge for commonsense question answering. arXiv preprint arXiv:1909.05311. +Kaixin Ma, Jonathan Francis, Quanyang Lu, Eric Nyberg, and Alessandro Oltramari. 2019. Towards generalizable neuro-symbolic systems for commonsense question answering. arXiv preprint arXiv:1910.14087. 
+Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. In EMNLP. +Todor Mihaylov and Anette Frank. 2018. Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge. arXiv preprint arXiv:1805.07858. +Arindam Mitra, Pratyay Banerjee, Kuntal Kumar Pal, Swaroop Mishra, and Chitta Baral. 2019. Exploring ways to incorporate additional knowledge to improve natural language commonsense question answering. arXiv preprint arXiv:1909.08855. +Timothy Niven and Hung-Yu Kao. 2019. Probing neural network comprehension of natural language arguments. arXiv preprint arXiv:1907.07355. +Debjit Paul and Anette Frank. 2019. Ranking and selecting multi-hop knowledge paths to better predict human needs. arXiv preprint arXiv:1904.00676. +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. + +Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. 2017. A simple neural network module for relational reasoning. In Advances in neural information processing systems, pages 4967-4976. +Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019. Atomic: an atlas of machine commonsense for if-then reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3027-3035. +Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. 2018. 
Modeling relational data with graph convolutional networks. In European Semantic Web Conference, pages 593-607. Springer. +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics. +Robert Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Thirty-First AAAI Conference on Artificial Intelligence. +Shane Storks, Qiaozi Gao, and Joyce Y Chai. 2019. Commonsense reasoning for natural language understanding: A survey of benchmarks, resources, and approaches. arXiv preprint arXiv:1904.01172. +Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2018. Commonsenseqa: A question answering challenge targeting commonsense knowledge. arXiv preprint arXiv:1811.00937. +Xiaoyan Wang, Pavan Kapanipathi, Ryan Musa, Mo Yu, Kartik Talamadupula, Ibrahim Abdelaziz, Maria Chang, Achille Fokoue, Bassem Makni, Nicholas Mattei, et al. 2019. Improving natural language inference using external knowledge in the science questions domain. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7208-7215. +Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. Transactions of the Association for Computational Linguistics, 6:287-302. +Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600. + +Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. arXiv preprint arXiv:1808.05326. 
Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Thomas Goldstein, and Jingjing Liu. 2019. Freelb: Enhanced adversarial training for language understanding. arXiv preprint arXiv:1909.11764.

# A Algorithm for Path Sampling

# Algorithm 1 Path Sampling

Input: $\mathcal{G} = (\mathcal{E},\mathcal{R})$ and a set of all the question entities $\{e^q\}$

Output: A set of triplet paths $\{p\}$.

1: repeat
2: if Do Global Sampling then
3: current_node $u\gets$ uniform_sample($\mathcal{E}$)
4: else
5: current_node $u\gets$ uniform_sample($\{e^q\}$)
6: end if
7: $p \gets \{u\}$
8: for $t = 1$ to $T$ do
9: $N\gets \text{Neighbor}(u)$
10: next_node $v\gets$ uniform_sample($N$)
11: $M\gets \text{AllRelations}(u,v)$
12: while TRUE do
13: $r\gets \text{uniform\_sample}(M)$
14: if $r$ not in $p$ then
15: BREAK
16: end if
17: end while
18: $p\gets p\cup \{r,v\}$
19: $u\gets v$
20: end for
21: until Maximum number of paths achieved.

# B Discarded Relations

When sampling knowledge paths, we discard some relation types that we regard as uninformative and of little help in answering the questions. They include RelatedTo, Synonym, Antonym, DerivedFrom, FormOf, EtymologicallyDerivedFrom and EtymologicallyRelatedTo.

Table 8: QA Dataset Statistics.
| | Train | Dev | Test |
| --- | --- | --- | --- |
| CommonsenseQA (official) | 9,741 | 1,221 | 1,140 |
| CommonsenseQA (Lin et al.) | 8,500 | 1,221 | 1,241 |
| OpenBookQA | 4,957 | 500 | 500 |
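Algorithm 1 (Appendix A) can be realized as a short random-walk sampler. The sketch below is our own illustration over a simple adjacency-list KG; unlike the pseudocode's while-loop (lines 12-17), it falls back to reusing a relation when every relation between $u$ and $v$ already occurs in the path, to avoid a potential infinite loop:

```python
import random

# Hypothetical sketch of Algorithm 1 (path sampling). `graph` maps an entity
# to its neighbor entities; `relations[(u, v)]` lists all relations between
# u and v; T is the number of hops per path.

def sample_path(graph, relations, question_entities, T=2, global_sampling=True):
    # Line 2-6: pick the start node globally or from the question entities.
    u = random.choice(list(graph) if global_sampling else question_entities)
    path = [u]
    for _ in range(T):
        v = random.choice(graph[u])                       # line 10
        # Line 12-17: prefer a relation not already used in this path.
        candidates = [r for r in relations[(u, v)] if r not in path]
        if not candidates:                                # fallback (our addition)
            candidates = relations[(u, v)]
        path += [random.choice(candidates), v]            # line 18
        u = v                                             # line 19
    return path
```

Repeating `sample_path` until the desired number of paths is reached corresponds to the outer repeat-until loop of the pseudocode.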
# C Datasets Split

Both CommonsenseQA $^3$ and OpenBookQA $^4$ have their datasets available on their leaderboard pages.

The dataset split used in Lin et al. (2019) is also available on request, and we have included it as supplementary material.

Table 9: Learning rate and batch size of different context modules for CommonsenseQA.
| Model | Learning Rate | Batch Size |
| --- | --- | --- |
| BERT-large | 2e-5 | 32 |
| RoBERTa-large | 2e-6 | 16 |
| Albert-xxlarge-v2 | 1e-5 | 16 |
Table 10: Learning rate and batch size of different context modules for OpenBookQA.
| Model | Learning Rate | Batch Size |
| --- | --- | --- |
| RoBERTa-large | 1e-5 | 32 |
| AristoRoBERTa | 2e-5 | 16 |
| Albert-xxlarge-v2 | 1e-5 | 16 |
# D Implementation Details

Path Generator Training We employ a pretrained GPT2-base model (Radford et al., 2019) to initialize our generator. Then we fine-tune the generator with an initial learning rate of 1e-5 and a batch size of 64. The learning rate follows a warm-up period of 500 mini-batches and is then linearly decayed. The training lasts until the loss on the development set no longer decreases for 2 epochs.

Training on the Task Datasets We search for the optimal hyper-parameters based on the classification accuracy on the development set. The learning rate for the context module is chosen from {2e-6, 5e-6, 1e-5, 2e-5, 5e-5}. The learning rate for the rest of the parameters is set to 1e-3. The batch size is chosen from {8, 16, 32, 64, 128}. A large batch size is achieved by accumulating gradients over several small batches. The training lasts until the accuracy on the development set no longer increases for 2 epochs. The optimal hyperparameters for both datasets are listed in Tables 9-10.

Model Size We list the model size of the major modules in our QA framework in Table 11. These include the different pre-trained LMs used as a context module, the backbone of our PG (GPT-2), and the RN used for the static knowledge module.

Table 11: Number of parameters of the major modules in our QA framework.
| Module | # Parameters |
| --- | --- |
| BERT-large | 340M |
| RoBERTa-large | 355M |
| AristoRoBERTa | 355M |
| Albert-xxlarge-v2 | 223M |
| GPT2-base | 117M |
| RN | 399K |
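Two details from Appendix D, the linear warm-up/decay schedule and gradient accumulation for large effective batch sizes, can be sketched as follows. This is a framework-agnostic illustration with hypothetical function names, not the authors' training code:

```python
def lr_schedule(step, total_steps, warmup=500, base_lr=1e-5):
    """Linear warm-up for `warmup` steps, then linear decay to zero."""
    if step < warmup:
        return base_lr * step / warmup
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup))

def train_with_accumulation(micro_batches, grad_fn, apply_step, accum_steps):
    """Average gradients over `accum_steps` micro-batches, then take one
    optimizer step, emulating a batch `accum_steps` times larger."""
    acc, steps = 0.0, 0
    for i, batch in enumerate(micro_batches, 1):
        acc += grad_fn(batch) / accum_steps
        if i % accum_steps == 0:
            apply_step(acc)
            acc, steps = 0.0, steps + 1
    return steps
```

With `accum_steps = 4` and micro-batches of 8, for example, each optimizer step sees an effective batch of 32.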
+ +Table 12: More Paths from questions to gold answer entities, with novel and valid triplets in boldface. + +
Q1: He spent all summer in his room playing video games, because of this it wasn't surprising for Mother to find a stack of dirty dishes in her what? +A*: son's room. B: party. C: dishwasher. D: restaurant kitchen. E: shoes +PG-Global: {play Video, UsedFor, computer, AtLocation, son's room} +PG-Scratch: {play Video, UsedFor, machine, IsA, son's room}
Q2: What do people typically do while playing guitar? +A: cry. B: hear sounds. C*: singing. D: arthritis. E: making music. +PG-Global: {guitar, Usedfor, playing music, HasSubevent, singing} +PG-Scratch: {guitar, HasContext, music, Causes, singing}
Q3: Blue read material outside of his comfort zone because he wanted to gain what? +A*: new perspective. B: entertained. C: understanding. D: hunger. E: tired eyes. +PG-Global: {reading material, HasPrerequisite, learning about subject, Causes, new perspective} +PG-Scratch: {reading material, HasSubevent, reading, Causes, new perspective}
Q4: Bob the lizard lives in a warm place with lots of water. Where does he probably live? +A: rock. B*: tropical rainforest. C: jazz club. D: new mexico. E: rocky places. +PG-Global: {warm place, AtLocation, forest, IsA, tropical rainforest} +PG-Scratch: {warm place, AtLocation, tropical rainforest}
Q5: She was always helping at the senior center, it brought her what? +A: satisfaction. B: heart. C: feel better. D: pay. E: happiness. +PG-Global: {help, UsedFor, giving assistance, Causes, happiness} +PG-Scratch: {help, HasSubevent, giving assistance, MotivatedByGoal, happiness}
Q6: What is likely to satisfy someone's curiosity? +A*: hear news. B: read book. C: see favorite show. D: comedy show. E: go somewhere. +PG-Global: {curiosity, CausesDesire, find information, HasSubevent, read, Hasprerequisite, hear news} +PG-Scratch: {curiosity, CausesDesire, hear news}
Q7: Where would a person be doing when having to wait their turn? +A: have patience. B: get in line. C: sing. D*: stand in line. E: turn left. +PG-Global: {wait, HasPrerequisite, stand in line} +PG-Scratch: {wait, HasPrerequisite, stand in line}
Q8: It's easier for human's to survive in: +A: a cave. B: the ocean. C*: a town. D: alone. +PG-Global: {survive, _MotivatedByGoal, live, UsedFor, townhouse, AtLocation, town} +PG-Scratch: {survive, HasProperty, town}
Q9: A man wanted to find the United States on a visual, where should he look? +A: history book. B*: atlas. C: tv channels. D: northern hemisphere. E: map. +PG-Global: {visual, HasContext, map, AtLocation, atlas} +PG-Scratch: {visual, IsA, atlas}
Q10: What leads to someone going to to bed? +A: bad dreams. B: lazyness. C: get pregnant. D*: sleepiness. E: rest. +PG-Global: {bed, UsedFor, sleeping, Causes, sleepiness} +PG-Scratch: {bed, UsedFor, sleepiness}
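The brace notation used in Tables 7 and 12 can be unpacked into (head, relation, tail) triplets with a small helper; this parser is our own illustration, not part of the described system:

```python
def parse_path(s):
    # Parse the brace notation of Tables 7 and 12, e.g.
    # "{magazine, IsA, book, AtLocation, bookstore}" -> list of triples.
    items = [t.strip() for t in s.strip("{}").split(",")]
    # Entities sit at even positions, relations at odd positions.
    return list(zip(items[0::2], items[1::2], items[2::2]))
```

A 1-hop path yields one triplet, a 2-hop path two, and so on.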
# Consistent Response Generation with Controlled Specificity

Junya Takayama and Yuki Arase

Graduate School of Information Science and Technology, Osaka University

{takayama.junya, arase}@ist.osaka-u.ac.jp

# Abstract

We propose a method to control the specificity of responses while maintaining consistency with the utterances for open-domain conversation systems. We first design a metric based on pointwise mutual information, which measures the co-occurrence degree between an utterance and a response.
To control the specificity of the generated responses, we add distant supervision based on the co-occurrence degree and a PMI-based word prediction mechanism to a sequence-to-sequence model. Using these mechanisms, our model outputs words with the desired specificity for a given specificity level. In experiments with open-domain dialogue corpora, automatic and human evaluation results confirm that our model controls the specificity of the responses more sensitively than the conventional model and can generate highly consistent responses.

# 1 Introduction

Open-domain response generation is a task of generating human-like responses in chit-chat. There are many end-to-end response generation models (Vinyals and Le, 2015; Sordoni et al., 2015; Mei et al., 2017) that apply a sequence-to-sequence (Seq2Seq) (Sutskever et al., 2014) architecture, which allows the generation of fluent responses. However, the Seq2Seq model suffers from a tendency to generate safe but overly typical responses (i.e., dull responses), such as "Yes" and "I don't understand." To solve this problem, several studies proposed methods to increase the specificity of the generated responses (Li et al., 2016a; Zhang et al., 2018b; Jiang et al., 2019); however, simply maximizing the specificity of the response results in a degenerative solution that generates specific but inconsistent responses.

In this study, we define the conditions that an automatically generated response is expected to satisfy as (i) being consistent with an input utterance, (ii) being specific enough to provide informative content, and (iii) being controllable.

![](images/a0fca064e7c25cd4f0c9a870c3a731b595a9e33faf8ce9fa077c5a761f0bf12c.jpg)
Figure 1: An example of the relationship between utterance and response. There are several possible responses to an utterance with various specificity.
As shown in Figure 1, in a human conversation, an utterance could have various responses with different specificity (Csaky et al., 2019). Humans then control the specificity of the response as necessary. Thus, instead of only generating highly specific responses, the specificity should be controllable in response generation tasks.

We propose a method to control the specificity of responses while maintaining their consistency with the utterances. Following the observation that a response uniquely co-occurring with a specific utterance in a corpus is both specific and consistent for the utterance, we design a metric called MaxPMI, which measures the co-occurrence degree between an utterance and a response on the basis of positive pointwise mutual information (PPMI). We apply distant supervision to our model using automatically annotated MaxPMI scores of the training set. At inference, the specificity of the generated responses can be controlled by inputting a desired specificity level. We also propose a method to automatically set the specificity level by estimating the maximum MaxPMI score for an input utterance, which allows the generation of a response that has the maximum mutual information with the input.

We conducted both automatic and human evaluations using DailyDialog and Twitter corpora. The results confirmed that our method largely outperformed the methods in previous studies and achieved sensitive control of the specificity of the output responses.

# 2 Related Work

Previous studies focus on addressing the problem of dull responses generated by Seq2seq models. Li et al. (2016a) rerank the $N$ -best generated responses using an objective function to maximize the mutual information between the utterance and generated sentences. Because this method is post-processing, it ceases to be effective if there are no appropriate response candidates among the $N$ -best responses.
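This mutual-information reranking step can be illustrated schematically as follows; the weighting is a simplification of Li et al. (2016a)'s objective, and the precomputed log-likelihoods are assumed inputs:

```python
def mmi_rerank(candidates, lam=0.5):
    # candidates: list of (response, log_p_fwd, log_p_bwd), where
    # log_p_fwd ~ log p(response | utterance) from the forward model and
    # log_p_bwd ~ log p(utterance | response) from a backward model.
    # Score each candidate by log p(T|S) + lam * log p(S|T) and sort.
    return sorted(candidates,
                  key=lambda c: c[1] + lam * c[2],
                  reverse=True)
```

A dull response scores well under the forward model alone but poorly under the backward term, since it is compatible with almost any utterance.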
To directly improve the specificity of each generated response, previous studies devised training mechanisms for Seq2seq models that penalize the generation of dull responses, eventually training models to generate specific responses. Yao et al. (2016) and Li et al. (2016b) apply reinforcement learning, and Xu et al. (2017) and Zhang et al. (2018b) apply generative adversarial networks, to directly generate specific responses. Based on the hypothesis that the specificity of sentences increases with the number of low-frequency words, Nakamura et al. (2019) and Jiang et al. (2019) propose loss functions weighted by word frequency. In contrast, to ensure both specificity and consistency, Takayama and Arase (2019) propose a model that directly promotes the generation of words that co-occur with uttered sentences on the basis of PPMI. Their model includes a mechanism for deciding whether or not to generate words of high co-occurrence with the utterance at each decoding step. In this study, we apply this method to our model for proactively generating specific words in a response.

Controlling the properties of generated responses is also related to our study. Xu et al. (2019) and Ko et al. (2019) allow for the control of dialogue-acts, length, and specificity of responses; however, they are resource-intensive and thus require an external annotated corpus. In contrast, SC-Seq2Seq (Zhang et al., 2018a) achieves control of response specificity without dependence on external resources, which is most relevant to our study. Moreover, SC-Seq2Seq applies distant supervision, but uses word frequency in responses as a measure of specificity. At inference, SC-Seq2Seq requires a desired specificity level, to be realized in the response, as input.

We measure specificity based on PPMI between an utterance and a response; hence, our method can maintain both specificity and consistency with the utterance.
Additionally, our method can estimate the maximum specificity for each input utterance and automatically adjust the specificity of generated responses.

# 3 Proposed method

The proposed method is depicted in Figure 2. First, each utterance-response pair in the training data is automatically annotated with a label that indicates its co-occurrence degree, namely its MaxPMI score (Section 3.1). The model generates sentences on the basis of the previously calculated PPMI and MaxPMI (Section 3.2). Training is performed within the framework of distant supervision, based on the utterance-response pairs and their pre-computed MaxPMI scores (Section 3.3). At inference, responses are generated by inputting either a manually determined specificity level or a level automatically estimated from the input utterance (Section 3.4).

Since we aim to explicitly control the amount of information in responses to utterances, we use the decoder architecture of Takayama and Arase (2019), which has an output gating mechanism that controls whether or not to generate specific words at each decoding time-step.

# 3.1 MaxPMI: Co-occurrence measure between response and utterance

We propose a simple PPMI-based co-occurrence measure, called MaxPMI, which is based on the observation that a consistent and highly specific response contains words that highly co-occur with a specific utterance.

First, the PPMI of each word is calculated in advance using the entire training corpus. $X = \{x^{1}, x^{2}, \ldots, x^{|X|}\}$ is a word sequence in an utterance sentence, and $Y = \{y^{1}, y^{2}, \ldots, y^{|Y|}\}$ is a word sequence in a response sentence.
Let $p_{X}(x)$ denote the probability of word $x$ appearing in an utterance sentence, $p_{Y}(y)$ the probability of word $y$ appearing in a response sentence, and $p(x, y)$ the probability of words $x$ and $y$ appearing together in an utterance-response pair. The PPMI is then calculated as follows: +

$$
\mathrm{PPMI}(x, y) = \max\left(\log_{2}\frac{p(x, y)}{p_{X}(x) \cdot p_{Y}(y)}, 0\right).
$$

![](images/050b9bfa8a07f1decec8fb7363117e19d11f985ae6a58abce0067c6fd23c9a2d.jpg)
Figure 2: Model architecture

MaxPMI is defined as follows: +

$$
\operatorname{MaxPMI}(\boldsymbol{X}, \boldsymbol{Y}) = \max_{x \in \boldsymbol{X},\, y \in \boldsymbol{Y}} \operatorname{PPMI}(x, y).
$$

When training the model, MaxPMI is normalized to the range $[0, 1]$ by min-max normalization. +

# 3.2 Model Architecture

Our model is based on the Seq2seq architecture, which consists of an encoder and a decoder, as follows. +

Encoder As in a standard Seq2seq, the tokens of the input sentence are first vectorized using the embedding layer, after which the input sentence is encoded with gated recurrent units (GRU) (Cho et al., 2014) to obtain the vector $h_{GRU}$. In addition, the proposed method includes a multilayer perceptron (MLP) that encodes the input MaxPMI score (MaxPMI($X, Y$)) as $h_s$. Subsequently, $h_{GRU}$ and $h_s$ are concatenated to form a vector $h_e = \{h_{GRU}; h_s\}$, which is input to the decoder. The vector $h_s$ conveys to the decoder the level of specificity with which the response should be generated. +

Decoder The decoder has the same architecture as that of Takayama and Arase (2019), which promotes the generation of words of high co-occurrence with an input utterance. Let $V$ be the vocabulary of the decoder.
A word co-occurrence degree $d_v$ between a word $v \in V$ and an input sentence $X$ is defined as follows: +

$$
d_{v} = \sum_{x \in X} \operatorname{PPMI}(x, v).
$$

The decoder first receives a vector $\boldsymbol{v}_f = [d_{0}, \dots, d_{|V|}] \in \mathbb{R}^{|V|}$ that contains the word co-occurrence degrees of all the vocabulary words. It then encodes $\boldsymbol{v}_f$ into a vector $\boldsymbol{h}_v$ using a multi-layer perceptron (MLP). +

The initial state $\boldsymbol{h} = \{h_e; h_v\}$ of the decoder is the concatenation of $\boldsymbol{h}_v$ and the encoder output $\boldsymbol{h}_e$. Consequently, the decoder obtains information about which words co-occur easily with the input. In addition, $\boldsymbol{v}_f$ is added, with a weight, to the output vector $\pi^i$ of the decoder at each time step $i$ to amplify the output probability of words having a high amount of mutual information with the input sentence. The final output $\tilde{\pi}^i$ of the decoder is given as follows: +

$$
\tilde{\boldsymbol{\pi}}^{i} = (1 - \lambda^{i}) \cdot \boldsymbol{\pi}^{i} + \lambda^{i} \cdot \boldsymbol{v}_{f},
$$

where the generation of specific words is controlled by a parameter $\lambda$. We employ a gating mechanism with a sigmoid function (See et al., 2017) to determine the value of $\lambda$. Although previous literature noted that a sigmoid function can cause the vanishing gradient problem (Goldberg and Hirst, 2017, p. 46), See et al. (2017) have shown that sigmoid-based gating is highly stable. $\lambda^i$ is computed from the decoder's current intermediate state $\boldsymbol{h}^i$ as follows: +

$$
\lambda^{i} = \operatorname{sigmoid}\left(W_{\text{gate}} \boldsymbol{h}^{i} + \boldsymbol{b}_{\text{gate}}\right),
$$

where $W_{\text{gate}}$ is a trainable weight matrix and $\boldsymbol{b}_{\text{gate}}$ is the bias term.
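To make these quantities concrete, here is a small self-contained sketch of PPMI, MaxPMI, the co-occurrence degree vector $\boldsymbol{v}_f$, and the convex mixing of the decoder output. The toy corpus and helper names are our own illustrative assumptions, not the authors' implementation.

```python
# Sketch of PPMI, MaxPMI, d_v, and the gated output mixing described
# above. The toy corpus and helper names are our own assumptions.
import math
from collections import Counter

pairs = [  # (utterance words, response words)
    (["i", "love", "cats"], ["cats", "are", "cute"]),
    (["hello"], ["hi"]),
    (["i", "love", "movies"], ["movies", "are", "fun"]),
]

cx, cy, cxy = Counter(), Counter(), Counter()
for X, Y in pairs:
    cx.update(set(X))
    cy.update(set(Y))
    cxy.update((x, y) for x in set(X) for y in set(Y))
n = len(pairs)

def ppmi(x, y):
    # PPMI(x, y) = max(log2(p(x, y) / (p_X(x) * p_Y(y))), 0)
    if cxy[(x, y)] == 0:
        return 0.0
    return max(math.log2((cxy[(x, y)] / n) / ((cx[x] / n) * (cy[y] / n))), 0.0)

def max_pmi(X, Y):
    # MaxPMI(X, Y) = max over x in X, y in Y of PPMI(x, y)
    return max(ppmi(x, y) for x in X for y in Y)

vocab = sorted(cy)  # toy decoder vocabulary

def cooccurrence_degrees(X):
    # v_f: d_v = sum over x in X of PPMI(x, v), for every decoder word v
    return [sum(ppmi(x, v) for x in X) for v in vocab]

def mix(pi, v_f, lam):
    # tilde_pi = (1 - lambda) * pi + lambda * v_f at one decoding step
    return [(1.0 - lam) * p + lam * d for p, d in zip(pi, v_f)]

X, Y = pairs[0]
print(max_pmi(X, Y) > 0.0)  # True: "cats" co-occurs across this pair
```

In the actual model, $\pi^i$ is the decoder's output distribution and $\lambda^i$ comes from the learned sigmoid gate; `mix` only illustrates the convex combination.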
+

# 3.3 Distant Supervision

The MaxPMI score of each utterance-response pair $(\boldsymbol{X}, \boldsymbol{Y})$ in the training corpus is calculated beforehand for distant supervision (Section 3.1). These scores are then input to the decoder as $h_s$ for training. The cross-entropy loss is used as the loss function: +

$$
\mathcal{L} = -\sum_{(\boldsymbol{X}, \boldsymbol{Y}) \in \mathcal{D}} \log P(\boldsymbol{Y} \mid \boldsymbol{X}, \operatorname{MaxPMI}(\boldsymbol{X}, \boldsymbol{Y}); \theta),
$$

where $\mathcal{D}$ denotes the training set and $\theta$ denotes the model parameters. Intuitively, this loss function allows the model to learn what response should be generated conditioned on an utterance and a specificity level. +

# 3.4 Inference

At inference, we can control the specificity of a response by inputting a score $s \in [0, 1]$ to the model. A larger $s$ makes the response more specific, i.e. the response contains words that frequently co-occur in the utterance-response pairs of the training corpus. Users of our conversation model determine the desired specificity according to their use cases. +

Situations also arise in which users prefer automatic control of the response specificity rather than controlling it themselves. An appropriate value of $s$ depends on the input utterance, i.e. some utterances admit specific responses while others admit only typical responses. For example, the utterance in Figure 1 may have specific responses as depicted, but the utterance "Hello." most likely has typical responses like "Hi." Hence, we propose a method for estimating the $s$ that generates the most specific response possible for the utterance. We define the upper bound of MaxPMI, $s_{max}$, for an input sentence $X$ as: +

$$
s_{max} = \max_{x \in \boldsymbol{X},\, v \in \boldsymbol{V}} \operatorname{PPMI}(x, v),
$$

which can be calculated using the precomputed PPMI values.
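As a sketch (our own; `ppmi_table` and the normalization bounds `lo`/`hi` are hypothetical names), estimating $s_{max}$ is a single maximization over the precomputed PPMI entries, followed by the same min-max normalization applied to MaxPMI during training:

```python
# Hypothetical sketch of estimating s_max for an input utterance X,
# assuming PPMI values were precomputed into a dict keyed by
# (utterance_word, vocab_word). Names are our own, not the paper's code.
def estimate_s_max(X, vocab, ppmi_table, lo, hi):
    raw = max(ppmi_table.get((x, v), 0.0) for x in X for v in vocab)
    # Min-max normalize into [0, 1], mirroring the MaxPMI normalization
    # used at training time (lo/hi are the corpus-level min and max).
    return min(max((raw - lo) / (hi - lo), 0.0), 1.0)

table = {("cats", "cute"): 1.6, ("cats", "cats"): 1.6}
print(estimate_s_max(["cats", "please"], ["cute", "cats", "hi"], table, 0.0, 2.0))  # 0.8
```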
By using $s_{max}$, the model is expected to generate the most specific response among the possible responses of varying specificity to $X$ (we refer to this as information-maximization decoding). +

# 4 Experimental Settings

To evaluate whether our model can control the specificity of responses while maintaining their consistency with the utterances, we conducted response-generation experiments using Japanese and English chit-chat dialogue corpora. +

# 4.1 Experiment Corpora

We used two corpora: Twitter (Japanese) and DailyDialog (English). The details of each corpus are as follows. +

Twitter We crawled online conversations on Japanese Twitter by using mentions ("@") as clues. A single-turn dialogue corpus was constructed by considering a tweet and its reply as an utterance-response pair. The sizes of the training/validation/test sets were 1,383,424/24,123/25,010 utterance-response pairs, respectively. Each utterance-response pair was divided into subwords using the BertJapaneseTokenizer (bert-base-japanese) in transformers$^{1}$ (version 2.5.1). +

DailyDialog This corpus was constructed by Li et al. (2017) by crawling various websites that teach English dialogues for daily use. It consists of multi-turn dialogues, which we converted to single-turn dialogues by considering two consecutive utterances as an utterance-response pair. The sizes of the training/validation/test sets were 76,052/7,069/6,740 utterance-response pairs, respectively. Each utterance-response pair was divided into subwords using the BertTokenizer (BERT-base-uncased) in transformers. +

As pre-processing, subwords with frequencies of less than 50 were excluded from the PPMI calculation for both corpora. +

# 4.2 Comparison Methods

We compared our model with previous models. The baseline is the standard Seq2Seq (Seq2Seq). We also compared our model with SC-Seq2Seq (Zhang et al., 2018a), as it is the most relevant method for controlling the specificity of responses.
+

SC-Seq2Seq is a response generation model that can control the specificity of output sentences using distant supervision. It hypothesizes that the lower the frequencies of the words in a sentence, the higher the specificity of the sentence. As a measure of sentence specificity, it uses a frequency-based metric: the inverse frequency of words. Moreover, SC-Seq2Seq has a word prediction mechanism based on a Gaussian kernel layer in addition to the output layer of the decoder. Unlike our model, which takes into account the co-occurrence between utterances and responses, this word prediction layer takes into account the rarity of words. At inference, the specificity of a response is controlled by inputting a specificity score $s \in [0, 1]$. +

# 4.3 Metrics for Automatic Evaluation

We employed several automatic-evaluation metrics typically used in the evaluation of conversation systems. +

Metrics for Validity First, we evaluated the validity of the generated sentences against the reference sentences (responses) using BLEU and NIST. BLEU (Papineni et al., 2002) measures the correspondence between the $n$-grams in generated responses and those in the reference sentences. Liu et al. (2016) empirically show that BLEU has a higher Spearman's correlation with 5-scale human evaluation than some other reference-based metrics in experiments on an English Twitter corpus. NIST (Doddington, 2002) also measures the correspondence between generated responses and reference sentences. Unlike BLEU, NIST places lower weights on frequent $n$-grams, i.e. NIST regards content words as more important than function words. Thus, we regard NIST as more suitable for evaluating the specificity of responses. We used the Natural Language Toolkit$^{2}$ to calculate the BLEU and NIST scores. +

Metrics for Diversity Second, we evaluated the diversity of the generated responses using dist and ent.
Dist (Li et al., 2016a) is defined as the number of distinct $n$-grams in the generated responses divided by the total number of generated tokens. In contrast, ent (Zhang et al., 2018b) considers the frequency of $n$-grams in the generated responses as follows: +

$$
\mathrm{ent} = -\frac{1}{\sum_{w} F(w)} \sum_{w \in Y} F(w) \log \frac{F(w)}{\sum_{w} F(w)},
$$

where $Y$ is the set of $n$-grams output by the system, and $F(w)$ is the frequency of each $n$-gram $w$. Compared with dist, which simply counts the types of words used in a response, ent focuses on the specificity of the response. +

Metrics for Fluency Finally, we evaluated the repetition rate (Le et al., 2017) on the test set, which measures the meaningless repetition of words: +

$$
\mathrm{repetition\_rate} = \frac{1}{N} \sum_{i=1}^{N} \frac{1 + r\left(\widetilde{Y}^{i}\right)}{1 + r(Y^{i})},
$$

where $\widetilde{Y}^i$ is the $i$-th generated sentence, $Y^i$ is its reference, and $N$ is the total number of test sentences. The function $r(\cdot)$ measures repetition as the difference between the number of words and the number of unique words in a sentence: +

$$
r(Y) = \mathrm{len}(Y) - \mathrm{len}(\mathrm{set}(Y)),
$$

where $Y$ denotes a sentence, $\mathrm{len}(Y)$ computes the number of words in $Y$, and $\mathrm{set}(Y)$ removes the duplicate words in $Y$. +

# 4.4 Human Evaluation Settings

Because the appropriate responses to a given utterance are diverse, human evaluation is crucial for properly evaluating conversation systems. We conducted a human evaluation using the Japanese Twitter corpus. Specifically, we recruited six raters via crowd-sourcing, all of whom were Japanese native speakers and active users of Twitter. The raters evaluated the quality of 300 responses that were generated for utterances randomly sampled from the test set.
All raters annotated the same set in parallel; each rater evaluated all the systems. In addition, we shuffled the set of responses to each utterance so that the raters could not tell which model produced each response. The raters were recruited via Lancers,$^{3}$ a popular Japanese crowd-sourcing service. +

The evaluation criteria were the same as those used by Zhang et al. (2018a): $+2$: the response is not only semantically consistent and grammatical, but also specific; $+1$: the response is grammatical and can be used as a response to the utterance, but is too trivial (e.g., "I don't know"); $+0$: the response is semantically inconsistent or ungrammatical (e.g., contains grammatical errors). After collecting the results, we adopted the judgments of five raters and excluded the one rater whose agreement with the others was extremely low. +

# 4.5 Model Settings

We used Adam (Kingma and Lei Ba, 2015) as the optimizer for training all the models, with the learning rate set to 0.0002. We also used gradient clipping
| Model | BLEU-1 | BLEU-2 | NIST | dist-1 | dist-2 | ent-4 | rep | length |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Proposed ($s = s_{max}$) | 6.90 | 4.22 | 0.66 | 0.063 | 0.19 | 8.47 | 2.68 | 6.08 |
| Proposed ($s = 0.5$) | 6.71 | 4.09 | 0.64 | 0.057 | 0.17 | 8.26 | 2.90 | 6.51 |
| SC-Seq2Seq ($s = 0.8$) | 6.54 | 4.00 | 0.62 | 0.010 | 0.02 | 5.65 | 1.90 | 5.45 |
| Seq2Seq | 5.36 | 3.53 | 0.41 | 0.008 | 0.02 | 4.00 | 1.56 | 4.08 |
| Reference | 100.00 | 100.00 | 16.85 | 0.110 | 0.51 | 11.17 | 1.00 | 6.11 |
+ +Table 1: Automatic evaluation results on the Twitter corpus (Japanese) + +
| Model | BLEU-1 | BLEU-2 | NIST | dist-1 | dist-2 | ent-4 | rep | length |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Proposed ($s = s_{max}$) | 22.30 | 17.62 | 2.87 | 0.083 | 0.41 | 10.77 | 1.46 | 11.89 |
| Proposed ($s = 0.5$) | 22.06 | 17.41 | 2.85 | 0.085 | 0.41 | 10.74 | 1.41 | 11.63 |
| SC-Seq2Seq ($s = 0.5$) | 13.32 | 8.18 | 1.40 | 0.098 | 0.36 | 10.34 | 1.29 | 10.09 |
| Seq2Seq | 13.75 | 9.00 | 1.54 | 0.096 | 0.37 | 10.31 | 1.26 | 9.70 |
| Reference | 100.00 | 100.00 | 16.70 | 0.127 | 0.54 | 10.91 | 1.00 | 11.67 |
+

Table 2: Automatic evaluation results on the DailyDialog corpus (English) +

to avoid the exploding gradient problem, with a threshold of 5. For all the models, the numbers of dimensions of the hidden and embedding layers were 512 and 256, respectively. The training was performed for up to 40 epochs on the Twitter corpus and 200 epochs on the DailyDialog corpus, and the evaluation was conducted using the model with the highest BLEU score on the validation set. +

SC-Seq2Seq has a hyper-parameter $\sigma^2$, which determines the variance of the Gaussian kernel layer. $\sigma^2$ was set to 0.1 for Twitter and 0.2 for DailyDialog, chosen from 0.1, 0.2, 0.5, and 1.0 to maximise the BLEU score on the validation set. +

All the code used in the experiments was written using PyTorch$^{4}$ (version 1.0.0). We used a single GPU (NVIDIA Tesla V100 SXM2, 32 GB memory) for both training and testing. +

# 5 Results and Discussion

# 5.1 Automatic Evaluation Results

The automatic evaluation results on the test sets are presented in Tables 1 (Twitter) and 2 (DailyDialog), where the last column shows the average number of words per response. The proposed method ($s = s_{max}$; information-maximization decoding) achieved the highest scores on the validity and diversity metrics (BLEU, NIST, dist, and ent) in most cases. These results confirm that information-maximization decoding can generate a highly specific response by estimating the appropriate specificity level $s$. Compared with the other methods, our model achieved much higher BLEU and NIST scores on DailyDialog. We hypothesize that this is because our model explicitly incorporates the co-occurrence statistics of words, which may complement the training of Seq2seq on a smaller corpus. +

SC-Seq2seq showed BLEU and NIST scores comparable to our model on the Twitter corpus; however, its dist and ent scores were as low as those of Seq2seq.
In contrast, SC-Seq2seq scored high on dist and ent on the DailyDialog corpus, but its BLEU and NIST scores were lower than those of the standard Seq2seq. These results indicate that the effectiveness of SC-Seq2seq is domain dependent. We conjecture that this is caused by its specificity estimation, which is based on word frequencies irrespective of the utterance-response relation and is thus easily affected by the occurrence of rare words. +

As an adverse effect of the proposed method, its repetition rate is higher than those of Seq2Seq and SC-Seq2Seq on both corpora. The longer average response length and higher NIST and BLEU scores of the proposed model indicate that words that highly co-occur with the utterance (and appear in the references) are generated repeatedly. This is because the probability of generating such words is always high, regardless of the state of the decoder. We will address this problem by adjusting $\boldsymbol{v}_f$ at each time-step in future work. +

# 5.2 Controllability Evaluation Results

We evaluated the controllability of the specificity of the generated responses using the automatic evalu
| Model | BLEU-1 | BLEU-2 | NIST | dist-1 | dist-2 | ent-4 | rep | length |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Proposed ($s = 0.0$) | 0.05 | 0.03 | 0.00 | 0.007 | 0.03 | 2.39 | 0.94 | 1.28 |
| Proposed ($s = 0.2$) | 4.71 | 2.96 | 0.36 | 0.035 | 0.10 | 6.39 | 1.81 | 4.09 |
| Proposed ($s = 0.5$) | 6.95 | 4.26 | 0.68 | 0.058 | 0.17 | 8.15 | 2.93 | 6.54 |
| Proposed ($s = 0.8$) | 5.91 | 3.45 | 0.56 | 0.046 | 0.15 | 8.12 | 3.97 | 8.25 |
| Proposed ($s = 1.0$) | 5.63 | 3.23 | 0.53 | 0.039 | 0.13 | 8.09 | 4.23 | 8.72 |
| Proposed ($s = s_{max}$) | 7.20 | 4.46 | 0.70 | 0.064 | 0.19 | 8.41 | 2.68 | 6.06 |
| SC-Seq2Seq ($s = 0.0$) | 3.72 | 2.68 | 0.13 | 0.013 | 0.04 | 6.06 | 0.98 | 2.99 |
| SC-Seq2Seq ($s = 0.2$) | 4.05 | 2.88 | 0.17 | 0.013 | 0.04 | 6.01 | 0.99 | 3.11 |
| SC-Seq2Seq ($s = 0.5$) | 5.34 | 3.65 | 0.40 | 0.013 | 0.03 | 5.48 | 1.42 | 3.89 |
| SC-Seq2Seq ($s = 0.8$) | 6.74 | 4.16 | 0.66 | 0.011 | 0.03 | 5.71 | 1.82 | 5.36 |
| SC-Seq2Seq ($s = 1.0$) | 6.31 | 3.86 | 0.57 | 0.009 | 0.02 | 5.66 | 2.85 | 6.36 |
+

Table 3: Controllability evaluation on the Twitter corpus (Japanese)

| Model | +2 (%) | +1 (%) | +0 (%) | Kappa |
| --- | --- | --- | --- | --- |
| Proposed ($s = s_{max}$) | 24.8 | 19.8 | 55.4 | 0.42 |
| Proposed ($s = 0.5$) | 26.6 | 17.9 | 55.5 | 0.41 |
| Proposed ($s = 0.0$) | 0.6 | 53.8 | 45.6 | 0.02 |
| Seq2Seq | 10.1 | 59.7 | 30.3 | 0.42 |
| SC-Seq2Seq ($s = 1.0$) | 11.5 | 33.7 | 54.8 | 0.56 |
| SC-Seq2Seq ($s = 0.8$) | 9.8 | 54.7 | 35.5 | 0.50 |
| SC-Seq2Seq ($s = 0.0$) | 10.9 | 56.5 | 32.6 | 0.44 |
| Proposed (hybrid) | 22.0 | 38.2 | 39.8 | - |
| SC-Seq2Seq (hybrid) | 12.1 | 50.2 | 37.7 | - |
+

Table 4: Human evaluation results on the test set of the Twitter corpus (Japanese) +

ation metrics. For each utterance in the validation set, responses were generated using our model and SC-Seq2Seq. +

The results for the Twitter corpus are summarized in Table 3. Our model responds more sensitively to changes in $s$ than SC-Seq2Seq. In particular, in the range $s \leq 0.5$, dist (which indicates diversity) and NIST (which indicates the validity of responses) increase as $s$ increases. However, in the range $s \geq 0.5$, almost all the scores decrease as $s$ increases. These results show that an appropriate response cannot be generated when the input specificity level $s$ exceeds the feasible range for the input utterance. It is also evident that the repetition rate ('rep' in Table 3) and the average response length increased as $s$ became larger. This is because the decoder prefers words that co-occur with the utterance in accordance with a large $s$; consequently, it repeatedly generated words highly specific to the utterance. +

The proposed method ($s = s_{max}$) achieved the highest scores on all of BLEU, NIST, dist, and ent. Further, it achieved a lower repetition rate than the proposed method ($s = 0.5$), which performed best among the fixed settings of $s$. These results show that the optimal $s$ for each input utterance can be estimated by information-maximization decoding. The same tendency was observed on the DailyDialog corpus, whose results are omitted due to space limitations. +

# 5.3 Human Evaluation Results

The human evaluation results on the test set of the Twitter corpus are presented in Table 4. Except for the proposed method ($s = 0.0$), the Kappa values for all the methods exceed 0.4. These Kappa values are similar to those obtained in the human evaluations performed by Zhang et al. (2018a).
The low Kappa value of 0.02 for the proposed method ($s = 0.0$) is caused by its frequent output of very short responses$^{5}$ such as "?" and "huh?", which makes it difficult to judge whether a response is acceptable. +

The proposed methods ($s = 0.5$) and ($s = s_{max}$) received more "+2"s than the proposed method ($s = 0.0$), which shows that our model generates more specific responses as $s$ increases. The ratio of "+2"s changes more sharply with $s$ for our model than for SC-Seq2Seq; thus, our model offers more sensitive specificity control. However, both the proposed method and SC-Seq2Seq show a significant increase in the rate of "+0" as $s$ increases, compared with Seq2seq. This is because the fluency of the responses deteriorated when the model was forced to output more specific words, which negatively affected the language-generation ability of the decoder. In particular, as mentioned in Section 5.2, many responses likely lost fluency because of repeated words. +

To address this problem, we tried a simple heuristic that switches between the proposed method and the plain Seq2seq. If the proportion of unique words in a re
| Model | Response |
| --- | --- |
| Utterance | I'm always watching cat movies in free times. Cats of other families all look cute. (暇さえあれは猫動画観て。他所の猫がわいく見た) |
| Proposed ($s = s_{max}$) | Movies of cat cafés* remind me how cute they are (猫カフロー動画見てけは可愛い) |
| Proposed ($s = 1.0$) | Cats are cute (猫可愛ですよね) |
| Proposed ($s = 0.8$) | Movies of cat cafés remind me how cute they are (猫カフロー動画見てけは可愛い) |
| Proposed ($s = 0.5$) | Movies of cat cafés remind me how cute they are (猫カフロー動画見てけは可愛い) |
| Proposed ($s = 0.2$) | Cute (可愛い) |
| Proposed ($s = 0.0$) | Thank you (のはがよう) |
| SC-Seq2Seq ($s = 1.0$) | Can't agree more (那儿) |
| SC-Seq2Seq ($s = 0.8$) | Are you interested in me? (私のごとが気にしてる?) |
| SC-Seq2Seq ($s = 0.5$) | Can't agree more (那儿) |
| SC-Seq2Seq ($s = 0.2$) | Can't agree more! (那儿) |
| SC-Seq2Seq ($s = 0.0$) | Can't agree more! (那儿) |
| Seq2Seq | Cute (可愛い) |
| Utterance | But, I'm afraid it's going to be a nightmare as tomorrow's a national holiday (くださ热线日んだらやこう) |
| Proposed ($s = s_{max}$) | It can't be helped. A national holiday is always like that. (祝日んだらから仕方ない) |
| Proposed ($s = 1.0$) | It's not because of the national holiday, right? (祝日は関係ないから) |
| Proposed ($s = 0.8$) | It can't be helped. A national holiday is a national holiday. (祝日は祝日だから仕方ない) |
| Proposed ($s = 0.5$) | It can't be helped. A national holiday is always like that. (祝日はだから仕方ない) |
| Proposed ($s = 0.2$) | Hang in there (がんばろがんばろ) |
| Proposed ($s = 0.0$) | Thank you (のはがよう) |
| SC-Seq2Seq ($s = 1.0$) | No way, I can't stand it (那儿は無理だわ) |
| SC-Seq2Seq ($s = 0.8$) | I should go to a part-time job now (俺は今からバイ卜だら) |
| SC-Seq2Seq ($s = 0.5$) | No way, I can't stand it (那儿は無理だわ) |
| SC-Seq2Seq ($s = 0.2$) | I have to work tomorrow (明日は仕事だわ) |
| SC-Seq2Seq ($s = 0.0$) | Good morning! (お願いいたします!) |
| Seq2Seq | Can't agree more (那儿) |
+

Table 5: Examples of generated responses from the test set of the Twitter corpus. The English sentences in the table were translated from the original Japanese sentences, shown in parentheses. (*A "cat cafe" is a cafe where people can play with cats.) +

sponse sentence generated by our model falls below a threshold $T$ (we set $T$ to 0.95), i.e. the response contains repeated words, we switch to the plain Seq2seq and use its response instead. The results obtained after applying this heuristic to the proposed method ($s = s_{max}$) and to SC-Seq2Seq ($s = 1.0$) are listed in Table 4 as the proposed method (hybrid) and SC-Seq2Seq (hybrid), respectively. For both hybrids, the ratio of "+0" decreases by more than 15 percentage points, while that of "+2" remains almost unchanged. This problem will be addressed with a more sophisticated approach in future work. +

# 5.4 Case Study

Table 5 presents two examples of generated responses sampled from the test set of the Twitter corpus. In the range $s \geq 0.5$, our model generated highly specific responses to the utterances. However, it repeatedly generated the same phrase when $s$ was too large, e.g. the response for $s = 0.8$ in the second example. As mentioned in Section 5.1, this is an adverse effect of forcing the model to output more specific words than is feasible. In contrast, information-maximization decoding ($s = s_{max}$) avoids this problem by adaptively setting an appropriate $s$ for each input utterance. +

SC-Seq2Seq often produced more specific responses than Seq2Seq, as shown in the second example. However, the change in the specificity of its responses is limited even when a large value of $s$ is input, as in the first example. Specifically, the response by SC-Seq2Seq ($s = 0.8$) in the first example ignores the input utterance and thus is inconsistent.
We conjecture that this is because SC-Seq2Seq estimates specificity independently of the relation between utterances and responses. For the same example, our model can output words that are associated with the utterance, such as "cat", "movie", and "cute". +

# 6 Conclusion

We empirically showed that the co-occurrence relationship between words in an utterance and words in its response helps to control the specificity in response generation. The conventional specificity-control model often generates responses that are less consistent with the utterances. In contrast, our model can control the specificity of responses while maintaining their consistency with the utterance. +

As future work, we shall improve the proposed method to maintain fluency in responses by addressing the repeated-word problem. Further, the appropriate specificity level of a response depends on the previous utterances and responses, i.e. conversation systems that always return highly specific responses are annoying. Hence, we intend to propose a method for adjusting the specificity level in consideration of the conversation history. +

# Acknowledgments

This work was supported by JSPS KAKENHI Grant Number JP18K11435. +

# References

Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734.
+Richard Csaky, Patrik Purgai, and Gabor Recski. 2019. Improving Neural Conversational Models with Entropy-Based Data Filtering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), pages 5650-5669.
+George Doddington. 2002. Automatic Evaluation of Machine Translation Quality Using N-gram Co-Occurrence Statistics.
In Proceedings of the Second International Conference on Human Language Technology Research (HLT).
+Yoav Goldberg and Graeme Hirst. 2017. Neural Network Methods in Natural Language Processing. Morgan & Claypool Publishers.
+Shaojie Jiang, Pengjie Ren, Christof Monz, and Maarten de Rijke. 2019. Improving Neural Response Diversity with Frequency-Aware Cross-Entropy Loss. In Proceedings of the Web Conference.
+Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR). +

Wei-Jen Ko, Greg Durrett, and Junyi Jessy Li. 2019. Linguistically-Informed Specificity and Semantic Plausibility for Dialogue Generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Volume 1 (Long and Short Papers), pages 3456-3466, Minneapolis, Minnesota.
+An Nguyen Le, Ander Martinez, Akifumi Yoshimoto, and Yuji Matsumoto. 2017. Improving Sequence to Sequence Neural Machine Translation by Utilizing Syntactic Dependency Information. In Proceedings of the 8th International Joint Conference on Natural Language Processing (IJCNLP), pages 21-29.
+Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A Diversity-Promoting Objective Function for Neural Conversation Models. In Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 110-119.
+Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016b. Deep Reinforcement Learning for Dialogue Generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1192-1202.
+Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A manually labelled multi-turn dialogue dataset.
In Proceedings of the Eighth International Joint Conference on Natural Language Processing (IJCNLP), Volume 1: Long Papers, pages 986-995. +Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2122-2132. +Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2017. Coherent Dialogue with Attention-Based Language Models. In Proceedings of the National Conference on Artificial Intelligence (AAAI), San Francisco, CA. +Ryo Nakamura, Katsuhito Sudoh, Koichiro Yoshino, and Satoshi Nakamura. 2019. Another Diversity-Promoting Objective Function for Neural Dialogue Generation. In Proceedings of The Second AAAI Workshop on Reasoning and Learning for Human-Machine Dialogues (DEEP-DIAL). +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311-318. + +Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), Volume 1: Long Papers, pages 1073-1083. +Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 196-205. +Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. 
In Proceedings of The Twenty-eighth Conference on Neural Information Processing Systems (NeurIPS), pages 3104-3112. +Junya Takayama and Yuki Arase. 2019. Relevant and Informative Response Generation using Pointwise Mutual Information. In Proceedings of the First Workshop on NLP for Conversational AI, pages 133-138. +Oriol Vinyals and Quoc V Le. 2015. A Neural Conversational Model. In Proceedings of the 32nd International Conference on Machine Learning (ICML). +Can Xu, Wei Wu, Chongyang Tao, Huang Hu, Matt Schuerman, and Ying Wang. 2019. Neural Response Generation with Meta-words. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), pages 5416-5426. +Zhen Xu, Bingquan Liu, Baoxun Wang, Chengjie Sun, Xiaolong Wang, Zhuoran Wang, and Chao Qi. 2017. Neural Response Generation via GAN with an Approximate Embedding Layer. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 617-626. +Kaisheng Yao, Baolin Peng, Geoffrey Zweig, and KamFai Wong. 2016. An Attentional Neural Conversation Model with Improved Specificity. arXiv preprint arXiv:1606.01292. +Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Jun Xu, and Xueqi Cheng. 2018a. Learning to control the specificity in neural response generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1108-1117. +Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018b. Generating Informative and Diverse Conversational Responses via Adversarial Information Maximization. In Proceedings of 32nd Conference on Neural Information Processing Systems (NeurIPS). 
\ No newline at end of file diff --git a/consistentresponsegenerationwithcontrolledspecificity/images.zip b/consistentresponsegenerationwithcontrolledspecificity/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..cb161040def5662fdfc5508912b029e0676d0bb4 --- /dev/null +++ b/consistentresponsegenerationwithcontrolledspecificity/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ec970188557f218bbbea0e9152417892831ac8ee9d12506e166510768f37002b +size 533936 diff --git a/consistentresponsegenerationwithcontrolledspecificity/layout.json b/consistentresponsegenerationwithcontrolledspecificity/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..fe2204e15766d86c2c78069a73e02c717d998a03 --- /dev/null +++ b/consistentresponsegenerationwithcontrolledspecificity/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9ecd98f0a05c924a8156debafe0f0fe16fca8e374474bc4cdf00440e609d3d33 +size 341365 diff --git a/constraineddecodingforcomputationallyefficientnamedentityrecognitiontaggers/bc251353-bd18-4c48-b76f-24149b09aa10_content_list.json b/constraineddecodingforcomputationallyefficientnamedentityrecognitiontaggers/bc251353-bd18-4c48-b76f-24149b09aa10_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..b3b346017230feaf649fae819566e2571cdd11e1 --- /dev/null +++ b/constraineddecodingforcomputationallyefficientnamedentityrecognitiontaggers/bc251353-bd18-4c48-b76f-24149b09aa10_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a72c0f686df40ea0fb7c06b13972d73e227d564441c9594db264d1c8b321c765 +size 54583 diff --git a/constraineddecodingforcomputationallyefficientnamedentityrecognitiontaggers/bc251353-bd18-4c48-b76f-24149b09aa10_model.json b/constraineddecodingforcomputationallyefficientnamedentityrecognitiontaggers/bc251353-bd18-4c48-b76f-24149b09aa10_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..afe40af7f6dc863c3ef4c17d8a53cc8103d6c5fe --- /dev/null +++ b/constraineddecodingforcomputationallyefficientnamedentityrecognitiontaggers/bc251353-bd18-4c48-b76f-24149b09aa10_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dca66bf8fc4b24def2487f6ca936a093b7f1d5e1d9e8d577587c8526e3f2b4a6 +size 64770 diff --git a/constraineddecodingforcomputationallyefficientnamedentityrecognitiontaggers/bc251353-bd18-4c48-b76f-24149b09aa10_origin.pdf b/constraineddecodingforcomputationallyefficientnamedentityrecognitiontaggers/bc251353-bd18-4c48-b76f-24149b09aa10_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7a7ffe3b3a847fa39366f0455b3eebcc986348e2 --- /dev/null +++ b/constraineddecodingforcomputationallyefficientnamedentityrecognitiontaggers/bc251353-bd18-4c48-b76f-24149b09aa10_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a2278dc1a46f5be2a90f06f453870c8d33713ee4a7c24ce9183da098e5f95076 +size 275733 diff --git a/constraineddecodingforcomputationallyefficientnamedentityrecognitiontaggers/full.md b/constraineddecodingforcomputationallyefficientnamedentityrecognitiontaggers/full.md new file mode 100644 index 0000000000000000000000000000000000000000..9e5d21a59a131b7f2e2c16c3db56326c39ba83f1 --- /dev/null +++ b/constraineddecodingforcomputationallyefficientnamedentityrecognitiontaggers/full.md @@ -0,0 +1,214 @@ +# Constrained Decoding for Computationally Efficient Named Entity Recognition Taggers + +Brian Lester, Daniel Pressel, Amy Hemmeter, Sagnik Ray Choudhury, and Srinivas Bangalore + +Interactions, Ann Arbor MI 48104 + +{blester,dpressel,ahemmeter,schoudhury,sbangalore} + +@interactions.com + +# Abstract + +Current state-of-the-art models for named entity recognition (NER) are neural models with a conditional random field (CRF) as the final layer. 
Entities are represented as per-token labels with a special structure in order to decode them into spans. Current work eschews prior knowledge of how the span encoding scheme works and relies on the CRF learning which transitions are illegal and which are not to facilitate global coherence. We find that by constraining the output to suppress illegal transitions we can train a tagger with a cross-entropy loss twice as fast as a CRF with differences in F1 that are statistically insignificant, effectively eliminating the need for a CRF. We analyze the dynamics of tag co-occurrence to explain when these constraints are most effective and provide open source implementations of our tagger in both PyTorch and TensorFlow. + +# 1 Introduction + +Named entity recognition (NER) is the task of finding phrases of interest in text that map to real world entities such as organizations ("ORG") or locations ("LOC"). This is normally cast as a sequence labeling problem where each token is assigned a label that represents its entity type. Multi-token entities are handled by having special "Beginning" and "Inside" indicators that specify which tokens start, continue, or change the type of an entity. Ratinov and Roth (2009) show that the IOBES tagging scheme, where entity spans must begin with a "B" token, end with an "E" token and where single token entities are labeled with an "S", performs better than the traditional BIO scheme. The IOBES tagging scheme dictates that some token sequences are illegal. For example, one cannot start an entity with an "E" tag (such as a transition from an "O", meaning it is outside of an entity, to "E-ORG") nor can they change types in the middle of an entity—for + +example, transitioning from "I-ORG" to "I-LOC". Most approaches to NER rely on the model learning which transitions are legal from the training data rather than injecting prior knowledge of how the encoding scheme works. 
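To make these rules concrete, here is a minimal sketch of a legality check for IOBES transitions (our own illustration, not code from the paper), assuming tags are strings of the form `PREFIX-TYPE` with a bare `O` for outside tokens:

```python
def is_legal_iobes(prev: str, curr: str) -> bool:
    """Return True if moving from tag `prev` to tag `curr` is legal under IOBES."""
    prev_prefix, _, prev_type = prev.partition("-")
    curr_prefix, _, curr_type = curr.partition("-")
    # An entity may only be continued ("I-") or ended ("E-") from a
    # "B-" or "I-" tag of the *same* type.
    if curr_prefix in ("I", "E"):
        return prev_prefix in ("B", "I") and prev_type == curr_type
    # "O", "B-", and "S-" may only follow tags that close a span: "O", "E-", "S-".
    return prev_prefix in ("O", "E", "S")

assert is_legal_iobes("O", "B-ORG")
assert not is_legal_iobes("O", "E-ORG")      # cannot start an entity with an end tag
assert not is_legal_iobes("I-ORG", "I-LOC")  # cannot switch types mid-entity
```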
It is conventional wisdom that, for NER, models with a linear-chain conditional random field (CRF) (Lafferty et al., 2001) layer perform better than those without, yielding relative performance increases between 2 and 3 percent in F1 (Ma and Hovy, 2016; Lample et al., 2016). A CRF with Viterbi decoding promotes, but does not guarantee, global coherence while simple greedy decoding does not (Collobert et al., 2011). Therefore, in a bidirectional LSTM (biLSTM) model with a CRF layer, illegal transitions are rare compared to models that select the best scoring tag for each token.

Due to the high variance observed in the performance of NER models (Reimers and Gurevych, 2017) it is important to have fast training times to allow for multiple runs of these models. However, as the CRF forward algorithm is $O(NT^2)$, where $N$ is the length of the sentence and $T$ is the number of possible tags, it slows down training significantly. Moreover, substantial effort is required to build an optimized, correct implementation of this layer. Alternatively, training with a cross-entropy loss runs in $O(N)$ for sparse labels, and popular deep learning toolkits provide an easy-to-use, parallel version of this loss which brings the runtime down to $O(\log N)$.

We believe that, due to the strong contextualized local features with infinite context created by today's neural models, global features used in the CRF do little more than enforce the rules of an encoding scheme. Instead of traditional CRF training, we propose training with a cross-entropy loss and using Viterbi decoding (Forney, 1973) with heuristically determined transition probabilities that prohibit illegal transitions. We call this constrained decoding and find that it allows us to train models in half the time while yielding F1 scores comparable to CRFs.
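For reference, the $O(NT^2)$ forward recursion that computes the CRF's normalization factor can be sketched as follows (an illustrative NumPy sketch with assumed array shapes, not the paper's or Mead-Baseline's implementation):

```python
import numpy as np

def log_sum_exp(x: np.ndarray, axis: int) -> np.ndarray:
    """Numerically stable log-sum-exp reduction along one axis."""
    m = x.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def crf_log_partition(emissions: np.ndarray, transitions: np.ndarray) -> float:
    """Log of the normalizer over all T^N tag sequences (forward algorithm).

    emissions:   (N, T) per-token tag scores from the network
    transitions: (T, T) score of moving from tag i to tag j
    """
    alpha = emissions[0]              # (T,) scores of length-1 prefixes
    for scores in emissions[1:]:      # N - 1 steps, each reducing over a
        # T x T grid of (previous tag, current tag) pairs: O(N * T^2) total.
        alpha = log_sum_exp(alpha[:, None] + transitions, axis=0) + scores
    return float(log_sum_exp(alpha, axis=0))

# With all-zero scores every one of the T^N sequences has weight 1,
# so the partition function is exactly T^N.
assert np.isclose(crf_log_partition(np.zeros((3, 2)), np.zeros((2, 2))), 3 * np.log(2))
```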
# 2 Method

Training a tagger with a CRF is normally done by minimizing the negative log likelihood of the sequence of gold tags given the input, parameterized by the model, where the probability of the sequence is given by

$$
P(y \mid x; \theta) = \frac{e^{\sum_{i} \sum_{j} w_{j} f_{j}(y_{i-1}, y_{i}, x, i)}}{\sum_{y' \in Y} e^{\sum_{i} \sum_{j} w_{j} f_{j}(y'_{i-1}, y'_{i}, x, i)}}
$$

By creating a feature function, $f_{j}$, that is span-encoding-scheme-aware, we can introduce constraints that penalize any sequence that includes an illegal transition by returning a large negative value. Note the summation over all possible tag sequences. While efficient dynamic programs exist to make this sum tractable for linear-chain CRFs with Markov assumptions, this is still a costly normalization factor to compute.

In neural models, these feature functions are represented as a transition matrix that scores moving from one tag at index $i$ to another at $i + 1$. We implement a mask that effectively eliminates invalid IOBES transitions by setting those scores to large negative values. By applying this mask to the transition matrix we can simulate feature functions that down-weight illegal transitions.

Contrast the CRF loss with the token-level cross-entropy loss, where $y$ denotes the gold labels and $\hat{y}$ the model's predictions:

$$
L_{\text{cross-entropy}} = -\sum_{i} y_{i} \log(\hat{y}_{i})
$$

Here we can see that the loss for each element in the input $i$ can be computed independently due to the lack of a global normalization factor. This lack of a global view is potentially harmful, as we lose the ability to condition on the previous label decision to avoid making illegal transitions.
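As an illustration of the mask just described, the following sketch (our own, with an assumed tag list; not the paper's code) builds a transition matrix whose illegal IOBES entries are set to a large negative value:

```python
import numpy as np

MASK_VALUE = -1e4  # large negative score standing in for -infinity

def iobes_transition_mask(tags: list[str]) -> np.ndarray:
    """(T, T) matrix of transition scores: 0 where moving from tag i to
    tag j is legal under IOBES, MASK_VALUE where it is illegal."""
    def legal(prev: str, curr: str) -> bool:
        pp, _, pt = prev.partition("-")
        cp, _, ct = curr.partition("-")
        if cp in ("I", "E"):                         # continue or end an open entity...
            return pp in ("B", "I") and pt == ct     # ...of the same type only
        return pp in ("O", "E", "S")                 # O/B-/S- need the previous span closed

    mask = np.zeros((len(tags), len(tags)))
    for i, prev in enumerate(tags):
        for j, curr in enumerate(tags):
            if not legal(prev, curr):
                mask[i, j] = MASK_VALUE
    return mask

tags = ["O", "B-ORG", "I-ORG", "E-ORG", "S-ORG", "S-LOC"]
mask = iobes_transition_mask(tags)
assert mask[tags.index("O"), tags.index("E-ORG")] == MASK_VALUE  # illegal entity start
assert mask[tags.index("B-ORG"), tags.index("I-ORG")] == 0       # legal continuation
```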
We hypothesize that, using our illegal transition heuristics, we can create feature functions that do not have to be trained, but can be applied at test time and allow for contextual coherence while using a cross-entropy loss.

We can use the mask directly as the transition matrix to calculate the maximum probability sequence while avoiding illegal transitions for models that were not trained with a CRF. Using these transition scores in conjunction with cross-entropy trained models, we can achieve comparable models that train more quickly. We call this method constrained decoding.

Constrained decoding is relatively easy to implement: given a working CRF implementation, all one needs to do is apply the transition mask to the CRF transition parameters to create a constrained CRF. Replacing the transition parameters with the mask yields our constrained decoding model. Starting from scratch, one only needs to implement Viterbi decoding, using the mask as transition parameters, to implement the constrained decoding model—avoiding the need for the CRF forward algorithm and the CRF loss.

For constrained decoding, we leverage the IOBES tagging scheme rather than BIO tagging, allowing us to inject more structure into the decoding mask. Early experiments with BIO tagging failed to show the large gains we realized using IOBES tagging for the reasons mentioned in Section 4.

# 3 Experiments & Results

To test if we can replace the CRF with constrained decoding we use two sequential prediction tasks: NER (CoNLL 2003 (Tjong Kim Sang and De Meulder, 2003), WNUT-17 (Derczynski et al., 2017), and OntoNotes (Hovy et al., 2006)) and slot-filling (Snips (Coucke et al., 2018)). For each (task, dataset) pair we use common embeddings and hyperparameters from the literature.
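The constrained decoder described in Section 2 — a standard Viterbi pass with the heuristic mask as fixed transition scores — might look like the following (an illustrative NumPy sketch, not the Mead-Baseline implementation; `emissions` are per-token scores from a cross-entropy-trained tagger, and `mask` holds 0 for legal IOBES transitions and a large negative value for illegal ones):

```python
import numpy as np

def constrained_viterbi(emissions: np.ndarray, mask: np.ndarray) -> list[int]:
    """Highest-scoring tag sequence under per-token emission scores plus a
    fixed (T, T) transition mask; illegal transitions are never selected
    because their mask entries are overwhelmingly negative."""
    N, T = emissions.shape
    score = emissions[0].copy()            # best score ending in each tag
    back = np.zeros((N, T), dtype=int)     # backpointers
    for i in range(1, N):
        cand = score[:, None] + mask + emissions[i][None, :]  # (prev, curr)
        back[i] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]           # best final tag...
    for i in range(N - 1, 0, -1):          # ...then walk the backpointers
        path.append(int(back[i, path[-1]]))
    return path[::-1]

# Toy tag set 0="O", 1="B-X", 2="E-X"; only O->O/B, B->E, E->O/B are legal.
NEG = -1e4
mask = np.array([[0.0, 0.0, NEG],
                 [NEG, NEG, 0.0],
                 [0.0, 0.0, NEG]])
emissions = np.array([[3.0, 0.0, 4.0],
                      [0.0, 0.0, 5.0]])
assert list(emissions.argmax(axis=1)) == [2, 2]        # greedy picks illegal E->E
assert constrained_viterbi(emissions, mask) == [1, 2]  # constrained yields legal B->E
```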
The baseline models are biLSTM-CRFs with character compositional features based on convolutional neural networks (Dos Santos and Zadrozny, 2014) and our models are identical except we train with a cross-entropy loss and use the encoding scheme constraints as transition probabilities instead of learning them with a CRF. Our hyper-parameters mostly follow Ma and Hovy (2016), except we use multiple pre-trained word embeddings concatenated together (Lester et al., 2020). For Ontonotes we follow Chiu and Nichols (2016). See Section A.7 or the configuration files in our implementation for more details. + +As seen in Table 1, in three out of four datasets constrained decoding performs comparably or better than the CRF in terms of F1. OntoNotes is + +
| Dataset | Model | mean | std | max |
| --- | --- | --- | --- | --- |
| CoNLL | CRF | 91.61 | 0.25 | 92.00 |
| | Constrain | 91.44 | 0.23 | 91.90 |
| WNUT-17 | CRF | 40.33 | 1.13 | 41.99 |
| | Constrain | 40.59 | 1.06 | 41.71 |
| Snips | CRF | 96.04 | 0.28 | 96.35 |
| | Constrain | 96.07 | 0.17 | 96.29 |
| OntoNotes | CRF | 87.43 | 0.26 | 87.57 |
| | Constrain | 86.13 | 0.17 | 86.72 |
Table 1: Tagging results on a variety of datasets. The CRF model is a standard biLSTM-CRF while the Constrain model is a biLSTM trained with a cross-entropy loss that uses heuristic transition scores, created from the illegal transitions, for test time decoding. OntoNotes is the only dataset where the difference in performance between the CRF and constrained decoding is statistically significant $(p < 0.05)$. All scores are entity-level F1 and are reported across 10 runs.

the only dataset with a statistically significant difference in performance. We explore this discrepancy in Section 4. Similarly, Table 2 shows that when we apply constrained decoding to a variety of internal datasets, which span a diverse set of specific domains, we do not observe any statistically significant differences in F1 between CRF and constrained decoding models.

The models were trained using Mead-Baseline (Pressel et al., 2018), an open-source framework for creating, training, evaluating and deploying models for NLP. The constrained decoding tagger performs much faster at training time. Even when compared to the optimized, batched CRF provided by Mead-Baseline, it trained in $51.2\%$ of the time the CRF required.

In addition to faster training times, training our constrained models produces only $65\%$ of the $\mathrm{CO}_{2}$ emissions that the CRF does. While GPU computations for the constrained model draw 1.3 times more power—due to the greater degree of possible parallelism in the cross-entropy loss function—than the CRF, the reduction in training time results in smaller carbon emissions as calculated in Strubell et al. (2019).

Constrained decoding can also be applied to a CRF. The CRF does not always learn the rules of a transition scheme, especially in early training iterations. Applying the constraints to the CRF can improve both F1 and convergence speed. We establish this by training biLSTM-CRF models with and without constraints on CoNLL 2003. We find that
| Task | Domain | Δ |
| --- | --- | --- |
| NER | Generic NER | 0.80 |
| Slot Filling | Customer Service | 0.21 |
| | Automotive | -0.68 |
| | Cyber Security | 0.84 |
+ +Table 2: Entity-level F1 comparing a constrained CRF model with a constrained decoding model. Due to the nature of the data we present the relative performance difference between the two models. We see some improvements and some drops in performance but, once again, there is not a statistically significant difference between the CRF and constrained decoding. + +
| Task | Dataset | Δ |
| --- | --- | --- |
| NER | CoNLL | -0.03 |
| | WNUT-17 | 0.65 |
| | OntoNotes | -1.48 |
| | Snips | 0.03 |
Table 3: Results on well-known datasets presented as relative differences to help frame the results in Table 2.

the constraint mask yields a small (albeit statistically insignificant) boost in F1 as shown in Table 4.

Our experiments suggest that injecting prior knowledge of the transition scheme helps the model to focus on learning the features for sequence tagging tasks (and not the transition rules themselves) and train faster. Table 5 shows that our constrained model converged on CoNLL 2003 faster on average than an unconstrained CRF.

# 4 Analysis

The relatively poor performance of constrained decoding on OntoNotes suggests that there are several classes of transition that it cannot model. For example, the transition distribution between entity types,
| Model | mean | std | max |
| --- | --- | --- | --- |
| Unconstrained | 91.55 | 0.26 | 91.79 |
| Constrained | 91.61 | 0.25 | 92.00 |
+ +Table 4: Results of biLSTM-CRF models with and without constraints evaluated with entity-level F1 on the CoNLL 2003 dataset. Scores are reported across 10 runs. We see that while, in theory, the CRF should learn the constraints, injecting this knowledge gives a gain in performance. + +
| Model | mean | std | min | max |
| --- | --- | --- | --- | --- |
| Unconstrained | 72.4 | 21.0 | 16 | 97 |
| Constrained | 60.6 | 23.3 | 37 | 89 |
Table 5: Using the constraints while training a biLSTM-CRF tagger on the CoNLL dataset results in a statistically significant $(p < 0.05)$ decrease in the number of epochs until convergence. Scores are reported across 30 runs.

or the prior distribution of entities. We analyzed the datasets to identify the characteristics that cause constrained decoding to fail.

One such presumably obvious characteristic is the number of entity types. However, our experiments suggest that the number of entity types does not affect performance: Snips has more entity types than OntoNotes yet constrained decoding works better for Snips.

We define an ambiguous token as a token whose type has multiple tag values in the dataset. For example, the token "Chicago" could be "I-LOC" or "I-ORG" in the phrases "the Chicago River" and "the Chicago Bears" respectively. Such ambiguous tokens are the ones for which we expect global features to be particularly useful. A "strictly dominated token" is defined as a token that can only take on a single value due to the legality of the transition from the previous tag. In the above example, given that "the" was a "B-LOC", then "Chicago" is strictly dominated and forced to be an "I-LOC". Contrast this with a non-strictly dominated token that can still have multiple possible tag values when conditioned on the previous tag. As constrained decoding eliminates illegal transitions we would expect that it would perform well on datasets where a large proportion of ambiguous tokens are strictly dominated. This tends to hold true—only $15.9\%$ of OntoNotes' ambiguous tokens are strictly dominated while $70.7\%$ of CoNLL's tokens are and for WNUT-17 $73.6\%$ are.

We believe that the ambiguity of the first and last token of an entity also plays a role. Once we start an entity, constrained decoding vastly narrows the scope of decisions that need to be made.
Instead of making a decision over the entire set of tags, we only decide if we should continue the entity with an "I-" or end it with an "E-". Therefore, we expect constrained decoding to work well with datasets that have fairly unambiguous entity starts and ends. We quantify this by finding the proportion of entities that begin (or end) with an unambiguous type, that is, the first token of an entity only has a single label throughout the dataset; for example, "Kuwait" is only labeled with "S-LOC" in the CoNLL dataset. We call these metrics "Easy First" and "Easy Last" respectively and find that datasets with higher constrained decoding performance also have higher percentages of entities with an easy first or last token. A summary of these characteristics for each dataset is found in Table 6.

This also explains why constrained decoding doesn't work as well for BIO-encoded CoNLL as it does for IOBES. When using the IOBES format, more tokens are strictly dominated. The other stark difference is the proportion of "Easy Last" entities. Without the "E-" token, much less structure can be injected into the model, resulting in decreased performance of constrained decoding. These trends also hold true in internal datasets, where the Automotive dataset had the fewest incidences of each of these phenomena.

While not perfect predictors for the performance of constrained decoding, the metrics chosen are good proxies and can be used as a prescriptive measure for new datasets.

# 5 Previous Work

Our approach is similar in spirit to previous work in NLP where constraints are introduced during training and inference time (Roth and Yih, 2005; Punyakanok et al., 2005) to lighten the computational load, and to Strubell et al. (2018) where prior knowledge is injected into the model by manual manipulation. In our approach, however, we focus specifically on manipulating the model weights themselves rather than model features.
There have been attempts to eliminate the CRF layer. Notably, Shen et al. (2017) found that an additional LSTM greedy decoder layer is competitive with the CRF layer, though their baseline is much weaker than the models found in other work. Additionally, their decoder has an auto-regressive relationship that is difficult to parallelize and, in practice, there is still significant overhead at training time. Chiu and Nichols (2016) mention good results with a similar technique but don't provide in-depth analysis or metrics, nor test its generality.

# 6 Conclusion

For sequence tagging tasks, a CRF layer introduces substantial computational cost. We propose replacing it with a lightweight technique, constrained
| Dataset | Tag Types | Ambiguity | Strictly Dominated | Easy First | Easy Last |
| --- | --- | --- | --- | --- | --- |
| CoNLL (IOBES) | 4 | 8.8% | 71.2% | 58.3% | 94.0% |
| CoNLL (BIO) | 4 | 7.4% | 59.6% | 68.5% | 57.4% |
| WNUT-17 | 6 | 3.6% | 74.3% | 82.9% | 97.0% |
| OntoNotes | 18 | 14.9% | 15.9% | 16.2% | 55.9% |
| Snips | 39 | 24.5% | 26.7% | 32.4% | 91.1% |
+ +Table 6: Analysis of the tag dynamics and co-occurrence. We see that OntoNotes is an outlier in the percentage of ambiguous tokens that are strictly dominated by their context, the entities that have easy to spot starting tokens, and entities with clearly defined ends. All of these quirks of the data help explain why we only see a statistically significant performance drop for OntoNotes. + +decoding, which doubles the speed of training with comparable F1 performance. We analyze the algorithm to understand where it might work or fail and propose prescriptive measures for using it. + +The broad theme of the work is to find simple and computationally efficient modifications of current networks and suggest possible failure cases. While larger models have shown significant improvements, we believe there is still relevance in investigating small, targeted changes. In the future, we want to explore similar techniques in other common NLP tasks. + +# References + +Jason P.C. Chiu and Eric Nichols. 2016. Named Entity Recognition with Bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4:357-370. +Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research, 12:2493-2537. +Alice Coucke, Alaa Saade, Adrien Ball, Théodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, Maël Primet, and Joseph Dureau. 2018. Snips Voice Platform: an Embedded Spoken Language Understanding System for Private-by-design Voice Interfaces. arXiv preprint, arXiv:1805.10190. +Leon Derczynski, Eric Nichols, Marieke van Erp, and Nut Limsopatham. 2017. Results of the WNUT2017 Shared Task on Novel and Emerging Entity Recognition. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 140-147, Copenhagen, Denmark. 
Association for Computational Linguistics. +Cicero Nogueira Dos Santos and Bianca Zadrozny. 2014. Learning Character-level Representations for Part-of-speech Tagging. In Proceedings of the 31st + +International Conference on International Conference on Machine Learning - Volume 32, ICML'14, pages II-1818-II-1826. JMLR.org. +G.D. Forney. 1973. The Viterbi Algorithm. Proceedings of the IEEE, 61(3):268-278. +Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: The $90\%$ Solution. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, NAACL-Short '06, pages 57-60, Stroudsburg, PA, USA. Association for Computational Linguistics. +John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01, pages 282-289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. +Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural Architectures for Named Entity Recognition. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 260-270. +Brian Lester, Daniel Pressel, Amy Hemmeter, Sagnik Ray Choudhury, and Srinivas Bangalore. 2020. Multiple Word Embeddings for Increased Diversity of Representation. arXiv preprint arXiv:2009.14394. +Xuezhe Ma and Eduard Hovy. 2016. End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064-1074, Berlin, Germany. Association for Computational Linguistics. +Tomas Mikolov, Kai Chen, Greg S. Corrado, and Jeffrey Dean. 2013. 
Efficient Estimation of Word Representations in Vector Space. +Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word + +Representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543. +Daniel Pressel, Sagnik Ray Choudhury, Brian Lester, Yanjie Zhao, and Matt Barta. 2018. Baseline: A Library for Rapid Modeling, Experimentation and Development of Deep Learning Algorithms Targeting NLP. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 34-40. Association for Computational Linguistics. +Vasin Punyakanok, Dan Roth, Wen-tau Yih, and Dav Zimak. 2005. Learning and Inference over Constrained Output. In Proceedings of the 19th International Joint Conference on Artificial Intelligence, IJCAI'05, pages 1124-1129, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. +Lev Ratinov and Dan Roth. 2009. Design Challenges and Misconceptions in Named Entity Recognition. In CoNLL 2009 - Proceedings of the Thirteenth Conference on Computational Natural Language Learning, pages 147-155. +Nils Reimers and Iryna Gurevych. 2017. Reporting Score Distributions Makes a Difference: Performance Study of LSTM-networks for Sequence Tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338-348, Copenhagen, Denmark. Association for Computational Linguistics. +Dan Roth and Wen-tau Yih. 2005. Integer Linear Programming Inference for Conditional Random Fields. In Proceedings of the 22Nd International Conference on Machine Learning, ICML '05, pages 736-743, New York, NY, USA. ACM. +Yanyao Shen, Hyokun Yun, Zachary Lipton, Yakov Kronrod, and Animashree Anandkumar. 2017. Deep Active Learning for Named Entity Recognition. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 252-256, Vancouver, Canada. Association for Computational Linguistics. 
+Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 15(56):1929-1958. +Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and Policy Considerations for Deep Learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645-3650, Florence, Italy. Association for Computational Linguistics. +Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-Informed Self-Attention for Semantic Role Labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural + +Language Processing, pages 5027-5038, Brussels, Belgium. Association for Computational Linguistics. +Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 Shared Task: Language-independent Named Entity Recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4, CONLL '03, pages 142-147, Stroudsburg, PA, USA. Association for Computational Linguistics. +Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, Stefan J. van der Walt, Matthew Brett, Joshua Wilson, K. Jarrod Millman, Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric Larson, CJ Carey, Ilhan Polat, Yu Feng, Eric W. Moore, Jake VanderPlas, Denis Laxalde, Josef Perktold, Robert Cimrman, Ian Henriksen, E. A. Quintero, Charles R Harris, Anne M. Archibald, Antonio H. Ribeiro, Fabian Pedregosa, Paul van Mulbregt, and SciPy 1.0 Contributors. 2020. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods, 17:261-272. + +# A Reproducibility + +# A.1 Hyperparameters + +Mead/Baseline is a configuration file driven model training framework. 
All hyperparameters are fully specified in the configuration files included with the source code for our experiments. + +# A.2 Statistical Significance + +For all claims of statistical significance we use a t-test as implemented in scipy (Virtanen et al., 2020) and using an alpha value of 0.05. + +# A.3 Computational Resources + +All models were trained on a single NVIDIA 1080Ti. While multiple GPUs were used for training many models in parallel to facilitate testing many datasets and to estimate the variability of the method, the actual model can easily be trained on a single GPU. + +# A.4 Evaluation + +To calculate metrics, entity-level F1 is used for NER and slot-filling. In entity-level F1, entities are created from the token-level labels and compared to the gold entities. Entities that match on both type and boundaries are considered correct while a mismatch in either causes an error. The F1 score is then calculated using these entities. We use the + +
| Dataset | Model | Parameters |
| --- | --- | --- |
| CoNLL | CRF | 4,658,190 |
| | Constrain | 4,657,790 |
| | Unconstrained CRF | 4,658,190 |
| WNUT-17 | CRF | 12,090,032 |
| | Constrain | 12,089,248 |
| Snips | CRF | 5,940,866 |
| | Constrain | 5,924,737 |
| OntoNotes | CRF | 12,090,032 |
| | Constrain | 12,089,248 |
Table 7: The number of parameters for different models.

evaluation code that ships with the framework we use, MEAD/Baseline, which we have bundled with the source code for our experiments.

# A.5 Model Size

The number of parameters in different models can be found in Table 7.

# A.6 Dataset Information

Relevant information about datasets can be found in Table 8. The majority of data is used as distributed, except we convert NER and slot-filling datasets to the IOBES format. All public datasets are included in the supplementary material. A quick overview of each dataset follows:

CoNLL: A NER dataset based on news text. We converted the IOB labels into the IOBES format. There are 4 entity types, MISC, LOC, PER, and ORG.

WNUT-17: A NER dataset of new and emerging entities based on noisy user text. We converted the BIO labels into the IOBES format. There are 6 entity types, corporation, creative-work, group, location, person, and product.

OntoNotes: A much larger NER dataset. We converted the labels into the IOBES format. There are 18 entity types, CARDINAL, DATE, EVENT, FAC, GPE, LANGUAGE, LAW, LOC, MONEY, NORP, ORDINAL, ORG, PERCENT, PERSON, PRODUCT, QUANTITY, TIME, and WORK_OF_ART.

Snips: A slot-filling dataset focusing on commands one would give a virtual assistant. We converted the dataset from its normal format of two associated files, one containing surface terms and one containing labels, into the more standard CoNLL file format and converted the labels into the IOBES format.
There are 39 entity types: album, artist, best_rating, city, condition_description, condition_temperature, country, cuisine, current_location, entity_name, facility, genre, geographic_poi, location_name, movie_name, movie_type, music_item, object_location_type, object_name, object_part_of_series_type, object_select, object_type, party_size_description, party_size_number, playlist, playlist_owner, poi, rating_unit, rating_value, restaurant_name, restaurant_type, served_dish, service, sort, spatial_relation, state, timeRange, track, and year.

# A.7 Hyper Parameters

Table 9 details the various hyper-parameters used to train models for each dataset. For all datasets the only difference between the baseline CRF model and the model using constrained decoding is that the CRF has learnable transition parameters in the final layer while the constrained decoding model sets these transition parameters manually based on the rules of the span encoding scheme. The framework we use, Mead-Baseline, is configuration file driven and we have included the configuration files used in our experiments in the supplementary material.
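The BIO-to-IOBES conversion applied to these datasets can be sketched as follows (an illustrative implementation assuming well-formed BIO input, not the actual preprocessing code shipped with Mead-Baseline):

```python
def bio_to_iobes(tags: list[str]) -> list[str]:
    """Convert well-formed BIO tags to IOBES: single-token entities become
    "S-" tags and the final token of a multi-token entity becomes "E-"."""
    out = []
    for i, tag in enumerate(tags):
        nxt = tags[i + 1] if i + 1 < len(tags) else "O"
        # A span ends here if the next tag does not continue it with "I-".
        ends = not nxt.startswith("I-")
        if tag.startswith("B-"):
            out.append(("S-" if ends else "B-") + tag[2:])
        elif tag.startswith("I-"):
            out.append(("E-" if ends else "I-") + tag[2:])
        else:
            out.append(tag)  # "O" passes through unchanged
    return out

assert bio_to_iobes(["B-ORG", "I-ORG", "O", "B-LOC"]) == ["B-ORG", "E-ORG", "O", "S-LOC"]
```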
| Dataset | | Train | Dev | Test | Total |
| --- | --- | --- | --- | --- | --- |
| CoNLL | Examples | 14,987 | 3,466 | 3,684 | 22,137 |
| | Tokens | 204,567 | 51,578 | 46,666 | 302,811 |
| WNUT-17 | Examples | 3,394 | 1,009 | 1,287 | 5,690 |
| | Tokens | 62,730 | 15,733 | 23,394 | 101,857 |
| OntoNotes | Examples | 59,924 | 8,528 | 8,262 | 76,714 |
| | Tokens | 1,088,503 | 147,724 | 152,728 | 1,388,955 |
| Snips | Examples | 13,084 | 700 | 700 | 14,484 |
| | Tokens | 117,700 | 6,384 | 6,354 | 130,438 |
+ +Table 8: Example and token count statistics for public datasets used. + +
| Hyper-parameter | CoNLL | OntoNotes | Snips | WNUT-17 |
| --- | --- | --- | --- | --- |
| Embedding | 6B + Senna | 6B + Senna | 6B + GN | 27B + w2v-30M + 840B |
| Character Filter Size | 3 | 3 | 3 | 3 |
| Character Feature Size | 30 | 30 | 30 | 30 |
| Character Embed Size | 30 | 20 | 30 | 30 |
| RNN Type | biLSTM | biLSTM | biLSTM | biLSTM |
| RNN Size | 400 | 400 | 400 | 200 |
| RNN Layers | 1 | 2 | 1 | 1 |
| Drop In | 0.1 | 0.1 | 0.1 | 0.0 |
| Drop Out | 0.5 | 0.63 | 0.5 | 0.5 |
| Batch Size | 10 | 9 | 10 | 20 |
| Epochs | 100 | 100 | 100 | 60 |
| Learning Rate | 0.015 | 0.008 | 0.015 | 0.008 |
| Momentum | 0.9 | 0.9 | 0.9 | 0.9 |
| Gradient Clipping | 5.0 | 5.0 | 5.0 | 5.0 |
| Optimizer | SGD | SGD | SGD | SGD |
| Patience | 40 | 40 | 40 | 20 |
| Early Stopping Metric | f1 | f1 | f1 | f1 |
| Span Type | IOBES | IOBES | IOBES | IOBES |
Table 9: Hyper-parameters used for each dataset. "Embedding" is the type of pre-trained word embeddings used. 6B, 27B, and 840B are GloVe embeddings (Pennington et al., 2014) with 27B having been trained on Twitter, Senna is embeddings from Collobert et al. (2011), GN is vectors trained on Google News with word2vec from Mikolov et al. (2013) and w2v-30M are word2vec vectors trained on Twitter from Pressel et al. (2018). "Character Filter Size" is the number of tokens the character compositional convolutional neural network covers in a single window, "Character Feature Size" is the number of convolutional feature maps used, and "Character Embed Size" is the dimensionality of the vectors each character is mapped to before it is input to the convolutional network. The "RNN Size" is the size of the output after the RNN, which means that bidirectional RNNs are composed of two RNNs, one in each direction, where both are half the "RNN Size". "Drop In" is the probability that an entire token will be dropped out from the input, while "Drop Out" is the probability that individual neurons are dropped out (Srivastava et al., 2014).
\ No newline at end of file diff --git a/constraineddecodingforcomputationallyefficientnamedentityrecognitiontaggers/images.zip b/constraineddecodingforcomputationallyefficientnamedentityrecognitiontaggers/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e80615eceb2259062f1f94a5cbedacc4a4afe8d5 --- /dev/null +++ b/constraineddecodingforcomputationallyefficientnamedentityrecognitiontaggers/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:084b624b3ebe504446d42ea88d79c25b4d08c6cddfe1a80236469483e0ebfb90 +size 333738 diff --git a/constraineddecodingforcomputationallyefficientnamedentityrecognitiontaggers/layout.json b/constraineddecodingforcomputationallyefficientnamedentityrecognitiontaggers/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..35285e35f723dd036d1a49720002838fc3ded2ec --- /dev/null +++ b/constraineddecodingforcomputationallyefficientnamedentityrecognitiontaggers/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6015569499f1fd99484277dd8cdb1e50d2979e2729129cfc96de64b916db95b0 +size 216643 diff --git a/contextanalysisforpretrainedmaskedlanguagemodels/684a5676-5b32-47d8-bfbd-ebf3e88a7017_content_list.json b/contextanalysisforpretrainedmaskedlanguagemodels/684a5676-5b32-47d8-bfbd-ebf3e88a7017_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ffe895d30c9f7300803cea6ff4758fdfed96bb6b --- /dev/null +++ b/contextanalysisforpretrainedmaskedlanguagemodels/684a5676-5b32-47d8-bfbd-ebf3e88a7017_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c0424d7c1fbb85cdeb28b263417391b8e11526697080397ec9b790c3c15f0413 +size 97593 diff --git a/contextanalysisforpretrainedmaskedlanguagemodels/684a5676-5b32-47d8-bfbd-ebf3e88a7017_model.json b/contextanalysisforpretrainedmaskedlanguagemodels/684a5676-5b32-47d8-bfbd-ebf3e88a7017_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..7201c5a8280e3d5579bf2ae111685fa5e56ce9bd --- /dev/null +++ b/contextanalysisforpretrainedmaskedlanguagemodels/684a5676-5b32-47d8-bfbd-ebf3e88a7017_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d270e1535583bdb5e03d6c41d83a6d41e84bece869919da5306d4b886ef873c1 +size 118330 diff --git a/contextanalysisforpretrainedmaskedlanguagemodels/684a5676-5b32-47d8-bfbd-ebf3e88a7017_origin.pdf b/contextanalysisforpretrainedmaskedlanguagemodels/684a5676-5b32-47d8-bfbd-ebf3e88a7017_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..58b89aabc51b034736f8679baf6e36d0e324605e --- /dev/null +++ b/contextanalysisforpretrainedmaskedlanguagemodels/684a5676-5b32-47d8-bfbd-ebf3e88a7017_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:708f8effdcd4fb30e23366082ca1f840ec43314914b71b4f9b08b218fffd7f33 +size 1094198 diff --git a/contextanalysisforpretrainedmaskedlanguagemodels/full.md b/contextanalysisforpretrainedmaskedlanguagemodels/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c89a84d978a9e71d37f0d4652c2772db00439d3a --- /dev/null +++ b/contextanalysisforpretrainedmaskedlanguagemodels/full.md @@ -0,0 +1,377 @@ +# Context Analysis for Pre-trained Masked Language Models + +Yi-An Lai Garima Lalwani Yi Zhang + +AWS AI HLT + +{yianl, glalwani, yizhngn}@amazon.com + +# Abstract + +Pre-trained language models that learn contextualized word representations from a large un-annotated corpus have become a standard component for many state-of-the-art NLP systems. Despite their successful applications in various downstream NLP tasks, the extent of contextual impact on the word representation has not been explored. In this paper, we present a detailed analysis of contextual impact in Transformer- and BiLSTM-based masked language models. 
We follow two different approaches to evaluate the impact of context: a masking-based approach that is architecture agnostic, and a gradient-based approach that requires back-propagation through networks. The findings suggest significant differences in contextual impact between the two model architectures. Through a further breakdown of the analysis by syntactic categories, we find that the contextual impact in the Transformer-based MLM aligns well with linguistic intuition. We further explore Transformer attention pruning based on our findings in the contextual analysis.

# 1 Introduction

Pre-trained masked language models (MLM) such as BERT (Devlin et al., 2019) and ALBERT (Lan et al., 2019) have set state-of-the-art performance on a broad range of NLP tasks. The success is often attributed to their ability to capture complex syntactic and semantic characteristics of word use across diverse linguistic contexts (Peters et al., 2018). Yet, how these pre-trained MLMs make use of the context remains largely unanswered.

Recent studies have started to inspect the linguistic knowledge learned by pre-trained LMs such as word sense (Liu et al., 2019a), syntactic parse trees (Hewitt and Manning, 2019), and semantic relations (Tenney et al., 2019). Others directly analyze a model's intermediate representations and attention weights to understand how they work (Kovaleva et al., 2019; Voita et al., 2019).

While previous works either assume access to a model's internal states or take advantage of a model's special structures such as self-attention maps, these analyses are difficult to generalize as the architectures evolve. In this paper, our work complements these previous efforts and provides a richer understanding of how pre-trained MLMs leverage context without assumptions on architectures. We aim to answer the following questions: (i) How much context is relevant to and used by pre-trained MLMs when composing representations? (ii) How far do MLMs look when leveraging context?
That is, what are their effective context window sizes? We further define a target word's essential context as the set of context words whose absence makes the MLM unable to discriminate in its prediction. We analyze linguistic characteristics of these essential context words to better understand how MLMs manage context.

We investigate the contextual impacts in MLMs via two approaches. First, we propose a context perturbation analysis methodology that gradually masks out context words following a predetermined procedure and measures the change in the target word probability. For example, we iteratively mask the words whose masking causes the least change in the target word probability, until the probability deviates too much from the start. At this point, the remaining words are relevant to and used by the MLM to represent the target word, since further perturbation causes a notable prediction change. Being model agnostic, our approach looks into the contextualization in the MLM task itself and quantifies it only at the output layer. We refrain from inspecting internal representations since new architectures might not have a clear notion of "layer" with inter-leaving jump connections such as those in Guo et al. (2019) and Yao et al. (2020).

The second approach is adapted from Falenska and Kuhn (2019) and estimates the impact of an input subword on the target word probability via the norm of the gradients. We study pre-trained MLMs based on two different architectures: Transformer and BiLSTM. The former is essentially BERT and the latter resembles ELMo (Peters et al., 2018). Although the scope of this work is limited to the comparison between two popular architectures, the same methodology can be readily applied to multilingual models as well as other Transformer-based models pre-trained with MLM.
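The first, perturbation-based procedure can be sketched in a few lines of pure Python (an illustration with our own names; `nll(masked)` stands in for the pre-trained MLM's negative log likelihood on the masked target, and the deviation threshold is a free parameter):

```python
def effective_context(num_words, nll, threshold=0.2):
    """Iteratively mask the context word whose masking changes the NLL
    least, until the NLL deviates from the unperturbed input by more
    than `threshold`; the still-unmasked words are deemed relevant.
    `nll(masked)` returns -log P(w_t | X with `masked` indices masked)."""
    base = nll(set())
    masked = set()
    while len(masked) < num_words:
        candidates = [i for i in range(num_words) if i not in masked]
        # the word whose additional masking perturbs the prediction least
        best = min(candidates, key=lambda i: nll(masked | {i}))
        if nll(masked | {best}) - base > threshold:
            break  # any further masking notably changes the prediction
        masked.add(best)
    return set(range(num_words)) - masked
```

With a toy `nll` in which two of four context words carry all the signal, the procedure returns exactly those two indices.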
From our analysis, when encoding words using sentence-level inputs, we find that BERT is able to leverage $75\%$ of the context on average in terms of sentence length, while BiLSTM has an effective context size of around $30\%$ . The gap is compelling for long-range context more than 20 words away, where BERT still has a $65\%$ chance of leveraging a word, compared with $10\%$ or less for BiLSTM. In addition, when restricted to a local context window around the target word, we find that the effective context window size of BERT is around $78\%$ of the sentence length, whereas BiLSTM has a much smaller window size of around $50\%$ . Building on this extensive study of how different pre-trained MLMs operate when producing contextualized representations and what detailed linguistic behaviors can be observed, we devise a pilot application: attention pruning that restricts the attention window of BERT based on our findings. Results show that performance remains the same while efficiency improves. Our main contributions can be briefly summarized as:

- Standardize the pre-training setup (model size, corpus, objective, etc.) for a fair comparison between different underlying architectures.
- Novel design of a straight-forward and intuitive perturbation-based analysis procedure to quantify the impact of context words.
- Gain insights into how different architectures behave differently when encoding contexts, in terms of the number of relevant context words, effective context window sizes, and a more fine-grained breakdown with respect to POS and dependency structures.
- Leverage insights from our analysis to conduct a pilot application of attention pruning on a sequence tagging task.
# 2 Related Work

Pre-training language models (LM) to learn contextualized word representations from a large amount of unlabeled text has been shown to benefit downstream tasks (Howard and Ruder, 2018; Peters et al., 2018; Radford et al., 2019). Masked language modeling (MLM), introduced in BERT (Devlin et al., 2019), has been widely used as the pre-training task in works including RoBERTa (Liu et al., 2019b), SpanBERT (Joshi et al., 2020), and ALBERT (Lan et al., 2019). Many of them employ the Transformer architecture (Vaswani et al., 2017), which uses multi-head self-attention to capture context.

To assess the linguistic knowledge learned by pre-trained LMs, the probing task methodology suggests training supervised models on top of the word representations (Ettinger et al., 2016; Hupkes et al., 2018; Belinkov and Glass, 2019; Hewitt and Liang, 2019). Investigated linguistic aspects span across morphology (Shi et al., 2016; Belinkov et al., 2017; Liu et al., 2019a), syntax (Tenney et al., 2019; Hewitt and Manning, 2019), and semantics (Conneau et al., 2018; Liu et al., 2019a).

Another line of research inspects internal states of pre-trained LMs such as attention weights (Kovaleva et al., 2019; Clark et al., 2019) or intermediate word representations (Coenen et al., 2019; Ethayarajh, 2019) to facilitate our understanding of how pre-trained LMs work. In particular, Voita et al. (2019) study the evolution of representations from the bottom to the top layers and find that, for MLM, the token identity tends to be recreated at the top layer. The work closest to ours is Khandelwal et al. (2018), who conduct context analysis on LSTM language models to learn how much context is used and how nearby and long-range context is represented differently.

Our work complements prior efforts by analyzing how models pre-trained by MLM make use of context and provides insights that different architectures can have different patterns to capture context.
Distinct from previous works, we rely on neither a specific model architecture nor intermediate representations while performing the context analysis.

Another related topic is generic model interpretation methods including LIME (Ribeiro et al., 2016), SHAP (Lundberg and Lee, 2017), and Ancona et al. (2017). Despite the procedural similarity, our work focuses on analyzing how pre-trained MLMs behave when encoding contexts, and our methodology is both model-agnostic and training-free.
| Model | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT (Devlin et al., 2019) | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
| BiLSTM + ELMo | 72.9/73.4 | 65.6 | 71.7 | 90.2 | 35.0 | 64.0 | 80.8 | 50.1 | 67.1 |
| BERT (ours) | 84.6/84.0 | 71.0 | 91.5 | 93.6 | 55.7 | 86.2 | 88.6 | 67.4 | 80.3 |
| BiLSTM (ours) | 70.9/70.2 | 63.0 | 73.7 | 90.6 | 30.5 | 67.6 | 81.2 | 54.6 | 66.9 |
Table 1: GLUE benchmark test results. BiLSTM+ELMo numbers are cited from Wang et al. (2018). The comparable performance to previous works validates our pre-training process.

# 3 Masked Language Modeling

Given a sentence $X = (w_{1}, w_{2}, \dots, w_{L})$ where each word $w_{i}$ is tokenized into $l_{i}$ subwords $(s_{i1}, \dots, s_{il_i})$ , a portion of tokens are randomly masked with the [MASK] token. MLMs are trained to recover the original identity of the masked tokens by minimizing the negative log likelihood (NLL). In practice, BERT (Devlin et al., 2019) randomly selects $15\%$ of tokens and, of these, replaces the token with [MASK] in $80\%$ of cases, keeps the original token $10\%$ of the time, and replaces it with a random token in the remaining $10\%$ of cases.

For context analysis, we perform the masking and predictions at the word level. Given a target word $w_{t}$ , all its subwords are masked: $X_{\backslash t} = (\dots s_{(t - 1)l_{t - 1}}, [\text{MASK}], \dots, [\text{MASK}], s_{(t + 1)1}\dots)$ .

Following Devlin et al. (2019), the conditional probability of $w_{t}$ can be computed from the outputs of MLMs with the independence assumption between subwords:

$$
P(w_{t} \mid X_{\backslash t}) = P(s_{t1} \dots s_{tl_{t}} \mid X_{\backslash t}) = \prod_{i=1}^{l_{t}} P(s_{ti} \mid X_{\backslash t}). \tag{1}
$$

To investigate how MLMs use context, we propose procedures to perturb the input sentence from $X_{\backslash t}$ to $\widetilde{X}_{\backslash t}$ and monitor the change in the target word probability $P(w_{t}|X_{\backslash t})$ .
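The corruption scheme of Section 3 can be sketched as follows (a toy re-implementation for illustration only; the helper name and token strings are our own, not BERT's actual code):

```python
import random

def mlm_corrupt(tokens, vocab, rng, select_p=0.15):
    """BERT-style corruption: select ~15% of positions as prediction
    targets; replace 80% of them with [MASK], keep 10% unchanged, and
    swap 10% for a random vocabulary token."""
    out, targets = list(tokens), []
    for i in range(len(tokens)):
        if rng.random() >= select_p:
            continue  # position not selected for prediction
        targets.append(i)
        r = rng.random()
        if r < 0.8:
            out[i] = "[MASK]"
        elif r < 0.9:
            pass  # keep the original token (the model must still predict it)
        else:
            out[i] = rng.choice(vocab)
    return out, targets
```

The word-level masking used for the context analysis corresponds to forcing every subword of the chosen target word into the `[MASK]` branch.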
The first one is based on masking or perturbation of the input context, which is architecture agnostic. The second, gradient-based approach requires back-propagation through networks.

Our first approach performs context perturbation analysis on pre-trained LMs at inference time and measures the change in the masked target word probabilities. To answer each question, we start from $X_{\backslash t}$ and design a procedure $\Psi$ that iteratively processes the sentence from the last perturbation: $\widetilde{X}_{\backslash t}^{k + 1} = \Psi (\widetilde{X}_{\backslash t}^{k})$ . The patterns of $P(w_{t}|\widetilde{X}_{\backslash t}^{k})$ offer insights into our question. An example of $\Psi$ is to mask out a context word that causes the least or a negligible change in $P(w_{t}|\widetilde{X}_{\backslash t}^{k})$ . It is worth mentioning that, as pre-trained LMs are often used off-the-shelf as general language encoders, we do not further fine-tune the model on the analysis dataset but directly analyze how it makes use of context. In practice, we loop over a sentence word by word, setting each word as the target and using the rest of the words as the context for our masking process. Since we do the context analysis only with model inference, the whole process is fast: around half a day on a 4-GPU machine to process 12k sentences.

Our second approach estimates the impact of an input subword $s_{ij}$ on $P(w_t|X_{\backslash t})$ by using derivatives. Specifically, we adapt the IMPACT score proposed in Falenska and Kuhn (2019) to our questions.
The score $\mathrm{IMPACT}(s_{ij},w_t)$ can be computed from the gradients of the negative log likelihood (NLL) with respect to the subword embedding:

$$
\operatorname{IMPACT}(s_{ij}, w_{t}) = \frac{\left\| \frac{\partial\left(-\log P(w_{t} \mid X_{\backslash t})\right)}{\partial s_{ij}} \right\|}{\sum_{m=1}^{L} \sum_{n=1}^{l_{m}} \left\| \frac{\partial\left(-\log P(w_{t} \mid X_{\backslash t})\right)}{\partial s_{mn}} \right\|}. \tag{2}
$$

The $l_{2}$ -norm of the gradient is used as the impact measure and is normalized over all the subwords in a sentence. In practice, we report the impact of a context word $w_{i}$ by adding up the scores of its subwords: $\sum_{j}^{l_i}\mathrm{IMPACT}(s_{ij},w_t)$ .

We investigate two different encoder architectures for pre-trained MLMs. The first one is BERT, which employs 12 Transformer encoder layers, a hidden dimension of 768, a feed-forward hidden size of 3072, and 110 million parameters. The other uses a standard bidirectional LSTM (Hochreiter and Schmidhuber,

![](images/a5505052f71a1c191dbf7b320fd240f096be088562ab412759b4946f3e8dbb60.jpg)
(a) Masking-based context impacts

![](images/1b73c3ce5701e252121b83f2e76d3b02a82c73e70aa18b6d4e6bc2caff55495e.jpg)
(b) Gradient-based context impacts
Figure 1: Analysis of how much context is used by MLMs. (a) Context words at all relative positions have significantly higher probabilities of being considered by BERT, compared with BiLSTM. (b) The gradient-based IMPACT score also shows that BERT considers more distant context than BiLSTM; impact scores are normalized to $100\%$ .
| | EWT | GUM |
| --- | --- | --- |
| Sentences | 9,673 | 3,197 |
| Words | 195,093 | 67,585 |
| Mean Length | 20.17 | 21.14 |
| Median Length | 17 | 19 |
| Max Length | 159 | 98 |
Table 2: Statistics of the datasets used for analysis.

1997) that has 3 layers, an embedding dimension of 768, a hidden size of 1200, and around 115 million parameters. The BiLSTM model parameters are chosen so that they resemble ELMo while being close to BERT in model size. For a fair comparison, we pre-train both encoders from scratch on the uncased Wikipedia-book corpus (wikibook) with the same pre-training setup as in Devlin et al. (2019). For BiLSTM, we add a linear layer and a LayerNorm (Ba et al., 2016) on top to project outputs into 768 dimensions. We validate our pre-trained models by fine-tuning them on the GLUE benchmark (Wang et al., 2018) in a single-task manner and report test performance comparable to previous works in Table 1. Our pre-trained BiLSTM-based MLM also achieves results comparable to ELMo (Peters et al., 2018).

We perform MLM context analysis on two English datasets from the Universal Dependencies (UD) project, the English Web Treebank (EWT) (Silveira et al., 2014) and the Georgetown University Multilayer corpus (GUM) (Zeldes, 2017). Datasets from the UD project provide consistent and rich linguistic annotations across diverse genres, enabling us to gain insights into the contexts in MLMs. We use the training set of each dataset for analysis. EWT consists of 9,673 sentences from web blogs, emails, reviews, and social media, with a median length of 17 and a maximum length of 159 words. GUM comprises 3,197 sentences from Wikipedia, news articles, academic writing, fiction, and how-to guides, with a median length of 19 and a maximum length of 98 words. The statistics of the datasets are summarized in Table 2.

# 5 How much context is used?

Self-attention is designed to encode information from any position in a sequence, whereas BiLSTMs model context through the combination of long- and short-term memories in both left-to-right and right-to-left directions.
For MLMs, the entire sequence is provided to produce contextualized representations, but it is unclear how much context in the sequence is used by different MLMs.

In this section, we first propose a perturbation procedure $\Psi$ that iteratively masks out the context word contributing the least absolute change to the target word probability $P(w_{t}|\widetilde{X}_{\backslash t}^{k})$ . That is, we incrementally eliminate, one by one, words that do not penalize the MLM's predictions, until further masking causes $P(w_{t}|\widetilde{X}_{\backslash t}^{k})$ to deviate too much from the original probability $P(w_{t}|X_{\backslash t})$ . At this point, the remaining unmasked words are considered to be used by the MLM, since corrupting any of them causes a notable change in the target word prediction.

In practice, we identify deviations using the negative log likelihood (NLL), which corresponds to the loss of MLMs. Assuming the NLL has a variance of $\epsilon$ at the start of masking, we stop the perturbation procedure when the increase in NLL, $\log P(w_{t}|X_{\backslash t}) - \log P(w_{t}|\tilde{X}_{\backslash t}^{k})$ , exceeds $2\epsilon$ . We observe that NLLs fluctuate around $[-0.1, 0.1]$ at the start of masking, hence we terminate our procedure when the NLL increase reaches 0.2. We report the effective context size as a percentage of the sentence length to normalize the length impact. The analysis process is repeated using each word in a sentence as the target word, for all sentences in the dataset.

![](images/24e010c2741d0294fa4c284fad513d40ff55963aa3534c3721681640f0082b1a.jpg)
Figure 2: Context usage analysis for MLMs via elimination of irrelevant context. BERT uses about $75\%$ of context while BiLSTM uses around $30\%$ .

For our second approach, we follow Equation 2 to calculate the normalized impact of each subword on the target word and aggregate them for each context word to get IMPACT $(w_{i},w_{t})$ .
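The normalization and per-word aggregation of Equation 2 reduce to a few lines once the gradient norms are available (a sketch with hypothetical inputs; in practice the norms come from back-propagating the NLL through the network):

```python
def impact_per_word(grad_norms):
    """grad_norms: one list per context word, holding the l2 norms of the
    NLL gradient w.r.t. each of that word's subword embeddings.
    Returns IMPACT(w_i, w_t): each norm normalized over all subwords in
    the sentence (Eq. 2), then summed within each word."""
    total = sum(n for word in grad_norms for n in word)
    return [sum(n / total for n in word) for word in grad_norms]
```

By construction, the per-word scores sum to 1 over the sentence, which is why the gradient-based analysis shows relative rather than absolute impact.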
We group the IMPACT scores by the relative position of a word $w_{i}$ to the target word $w_{t}$ and plot the average. To compare with our first approach, we also use the masking-based method to estimate, for a word at a specific relative position, the probability of its being used by an MLM.

BERT uses distant context more than BiLSTM. After our masking process, a subset of context words is tagged as "being used" by the pre-trained LM. In Figure 1a, we aggregate results in terms of relative positions (context word to target word) for all targets and sentences. "Probability of being used %" denotes how likely a context word appearing at a given relative position to the target is to be relevant to the pre-trained LM.

Figure 1a shows that context words at all relative positions have substantially higher probabilities of being considered by BERT than by BiLSTM, and that BiLSTM focuses sharply on local context words, while BERT leverages words at almost all positions. A notable observation is that both models much more often consider words within a distance of around $[-10, 10]$ , and BERT has as high as a $90\%$ probability of using the words just before and after the target word. Using gradient-based analysis, Figure 1b shows similar results: BERT considers more distant context than BiLSTM, and local words have more impact on both models than distant words.

![](images/5f031702feb3181dc30a3eba29cdae7c5a806e764fcd3793730fc0c7b5d14ca6.jpg)
(a) Masking-based: Different syntactic categories

![](images/2b21a72e4ecc1739b19666bc4dccfcf5a6b70e6dc330375e9bee2ce12dcc1fff.jpg)
(b) Masking-based: Different length buckets
Figure 3: Context usage analysis for MLMs, with instances bucketed by syntactic categories of target words or input lengths. (a) More context is used to model content words than function words. (b) BERT uses fixed amounts of context while BiLSTM's context usage percentage varies by input length.
There are notable differences between the two analysis approaches. Since the gradient-based IMPACT score is normalized into a distribution across all positions, it does not show the magnitude of the context impact on the two different models. On the other hand, the masking-based analysis shows that BERT uses words at each position more than BiLSTM, based on absolute probability values. Another important difference is that the gradient-based approach is a glass-box method and requires back-propagation through networks, assuming the models to be differentiable. In contrast, the masking-based approach treats the model as a black box and makes no differentiability assumption. In the following sections, we continue the analysis with the masking-based approach.

BERT uses $75\%$ of the words in a sentence as context while BiLSTM considers $30\%$ . Figure 2 shows the increase in NLL when gradually masking out the least relevant words. BERT's NLL increases considerably once $25\%$ of the context is masked, suggesting that BERT uses around $75\%$ of the context. For BiLSTM, the NLL rises sharply after $70\%$ of the context words are masked, meaning that it considers around $30\%$ of the context. Despite having the same capacity, BERT takes more than twice as many context words into account as BiLSTM. This could explain the superior fine-tuning performance of BERT on tasks demanding more context. We observe that the pre-trained MLMs behave consistently across the two datasets, which cover different genres; for the following analysis, we therefore report results combining the EWT and GUM datasets.

Content words need more context than function words. We bucket instances based on the part-of-speech (POS) annotation of the target word. Our analysis covers content words including nouns, verbs, and adjectives, and function words including adpositions and determiners.
Figure 3a shows that both models use significantly more context to represent content words than function words, which aligns with linguistic intuitions (Boyd-Graber and Blei, 2009). The findings also show that MLMs handle content and function words in a manner similar to regular language models, as previously analyzed by Wang and Cho (2016) and Khandelwal et al. (2018).

BiLSTM's context usage percentage varies by input sentence length, whereas BERT's does not. We categorize sentences shorter than 25 words as short, between 25 and 50 as medium, and longer than 50 as long. Figure 3b shows that BiLSTM uses $35\%$ of the context for short sentences, $20\%$ for medium, and only $10\%$ for long sentences. In contrast, BERT leverages a fixed $75\%$ of the context words regardless of sentence length.

# 6 How far do MLMs look?

In the previous section, we looked at how much context is relevant to the two MLMs via an elimination procedure. From Figures 1a and 1b, we also observe that local context is more impactful than long-range context for MLMs. In this section, we investigate this notion of locality of context further and try to answer the question of how far away MLMs actually look in practice, i.e., what is the effective context window size (cws) of each MLM.

For the context perturbation analysis, we introduce a locality constraint to the perturbation procedure while masking words. We aim to identify how local versus distant context impacts the target word probability differently. We start by masking all the words around the target, i.e., the model only relies on its priors learned during pre-training ( $cws \sim 0\%$ )1. We iteratively increase the $cws$ on both sides until all the surrounding context is available ( $cws \sim 100\%$ ).

![](images/3fa0c6ae07a5ed022eff452f0624f87468695409f1b6136172584cb439d1e06f.jpg)
Figure 4: Change in NLL as the context window size around the target word (left and right combined) changes.
Details of the masking procedure can be found in the Appendix. We report the increase in NLL compared to when the entire context is available, $\log P(w_t | X_{\backslash t}) - \log P(w_t | \widetilde{X}_{\backslash t}^k)$ , with respect to the increasing $cws$ . This process is repeated using each word as the target word, for all the sentences in the dataset. We aggregate and visualize the results as in Section 5 and use the same threshold (0.2) as before to mark the turning point.

As shown in Figure 4, increasing the $cws$ around the target word reduces the change in NLL until a point where the gap is closed. The plot clearly highlights the differences in the behavior of the two models: for BERT, words within a $cws$ of $78\%$ impact the model's ability to make target word predictions, whereas for BiLSTM, only words within a $cws$ of $50\%$ affect the target word probability. This shows that BERT, leveraging the entire sequence via self-attention, looks at a much wider context window (effective $cws \sim 78\%$ ) than the recurrent BiLSTM (effective $cws \sim 50\%$ ). Moreover, BiLSTM shows a clear notion of contextual locality, in that it tends to consider very local context for target word prediction.

Furthermore, we investigate the symmetry of the $cws$ on either side by following the same procedure, but now separately on each side of the target word. We iteratively increase the $cws$ on either the left or the right side while keeping the rest of the words unmasked.
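The window-restricted masking can be sketched as below (our own toy helper over word sequences; the actual procedure operates on the subword inputs fed to the MLM, and the one-sided variant simply applies the window test to a single side):

```python
def window_context(words, t, k, mask="[MASK]"):
    """Mask the target word t and every context word farther than k
    positions away, keeping only the local window [t-k, t+k] visible."""
    return [w if (i != t and abs(i - t) <= k) else mask
            for i, w in enumerate(words)]
```

Sweeping `k` from 0 to the sentence length traces the NLL curve whose turning point defines the effective context window size.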
![](images/0fbb6e115c65a182beab58ed927e8d132ca337b20690fc31fe34430b5ff406a9.jpg)
(a) Target word belonging to POS - NOUN

![](images/cb156e3ac61dfbbfde0e932322467bfb56194c93e1f43c333394952c669ef2ac.jpg)
(b) Target word belonging to POS - DET
Figure 5: Symmetry analysis of context window size for two target word syntactic categories from short sentences $l \leq 25$ . (a) For NOUN as target, BERT looks at words within the window [-16, 16], while BiLSTM has the context window [-7, 7]. (b) When the target word is DET, BERT looks at words within the window [-14, 18], while BiLSTM has the context window [-1, 3].

More details of the analysis procedure can be found in the Appendix. The analysis results are further bucketed by the POS categories of target words as well as input sentence lengths, as in Section 5, to gain more fine-grained insights. In Figure 5, we show the symmetry analysis of the $cws$ for short sentences and target words with the POS tags NOUN and DET. The remaining plots, for medium and long sentences with target words from other POS tags, are shown in the Appendix due to lack of space.

From Figure 5, both models show similar behaviors across different POS tags when leveraging symmetric/asymmetric context. The $cws$ attended to on either side is rather similar when target words are NOUN, whereas for DET we observe both models paying more attention to right context words than to the left. This observation aligns well with linguistic intuitions for the English language. We can also observe the striking difference between the two models in effective $cws$ , with BERT attending to a much larger $cws$ than BiLSTM.

![](images/a9703174d95781cf55f1d901c8dbcaa320e2f8a747b584084de0e36355ccc32c.jpg)
Figure 6: Identifying essential context by masking the most important words. $35 - 40\%$ of context is critical to BERT while BiLSTM sees about $20\%$ as essential.

The difference in
the left and right $cws$ for DET appears to be more pronounced for BiLSTM than for BERT. We hypothesize that this is due to BiLSTM's overall smaller $cws$ (left + right), which makes it attend only to the most important words, and these happen to be mostly in the right context.

# 7 What kind of context is essential?

There is often a core set of context words that is essential to capture the meaning of a target word. Consider "Many people think cotton is the most comfortable [MASK] to wear in hot weather." Although most of the context is helpful for understanding the masked word fabric, cotton and wear are essential, as it would be almost impossible to make a guess without them.

In this section, we define essential context as words whose absence leaves the MLM with no clue about the target word identity, i.e., the target word probability becomes close to that of masking out the entire sequence, $P(w_{t}|\widetilde{X}_{mask\_all})$ . To identify essential context, we design the perturbation $\Psi$ to iteratively mask the words bringing the largest drop in $P(w_{t}|\widetilde{X}_{\backslash t}^{k})$ until we reach the point where the increase in NLL just exceeds the $100\%$ -mask setting, $\log P(w_{t}|X_{\backslash t}) - \log P(w_{t}|\widetilde{X}_{mask\_all})$ . The words masked by the above procedure are labelled as essential context words. We further analyze linguistic characteristics of the identified essential context words.

BERT sees $35\%$ of context as essential, whereas BiLSTM perceives around $20\%$ . Figure 6 shows that, on average, BERT recognizes around $35\%$ of context as essential when making predictions, i.e., the point at which the increase in NLL is on par with masking all context. On the other hand, BiLSTM sees only $20\%$ of context as essential. This implies that BERT would be more robust than the BiLSTM-
ContextDistanceAll targetsNOUNADJVERBDETADP
Full-contextLinear9.379.339.238.979.479.47
BERT-essentialLinear6.256.425.895.875.656.11
BiLSTM-essentialLinear5.496.436.036.324.203.77
Full-contextTree3.633.373.732.834.134.31
BERT-essentialTree2.912.662.882.203.183.46
BiLSTM-essentialTree2.742.662.902.282.742.73
+ +Table 3: Mean distances from essential context words to target words. Linear means linear positional distance and Tree denotes the dependency tree walk distance. Results are bucketed by part-of-speech tags of target words. + +![](images/52b7b91f667ac2079f37cac1cff9c1d02517243a467835d36eb844475c8a8de9.jpg) +Figure 7: Essential context identified by BERT along with POS tags and dependency trees. Words in brackets are targets. Words underlined are essential. + +based encoder in the presence of noisy input, a finding also supported by Yin et al. (2020); Jin et al. (2019), as it will be harder to confuse the model completely given larger size of essential context words set in comparison to BiLSTM. + +# Essential words are close to target words in both linear position and on dependency tree. + +Table 3 calculates the mean distances from identified essential words to the target words on combined EWT and GUM datasets. Both the models tend to identify words much closer to the target as essential, whether we consider linear positional distance or node distance in dependency trees. We use annotated dependency relations to extract the traversal paths from each essential word to the target word in dependency tree. We find that the top 10 most frequent dependency paths often correspond with the common syntactic structures in natural language. For example, when target words are NOUN, the top 3 paths are DET(up:det) $\Rightarrow$ NOUN, ADP (up:case) $\Rightarrow$ NOUN, ADJ (up:amod) $\Rightarrow$ NOUN for both models. Further, we also look at the dependency paths of essential words which are unique to each model. The comparison shows + +that words of common dependency paths are sometimes identified as essential by BERT but not by BiLSTM and vice versa. This suggests that there is room to improve MLMs by making them consistently more aware of input's syntactic structures, possibly by incorporating dependency relations into pre-training. 
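The essential-context identification procedure of Section 7 can be sketched in a few lines. Here `log_prob` is a placeholder scoring function standing in for the MLM's $\log P(w_t \mid \text{context})$; the greedy loop and toy interface are assumptions of this sketch, not the authors' implementation.

```python
def identify_essential_context(tokens, target_idx, log_prob, mask="[MASK]"):
    """Greedy sketch: repeatedly mask the context word whose removal causes
    the largest drop in the target word's log-probability, stopping once the
    cumulative NLL increase reaches the mask-all-context baseline.
    Returns the positions of the masked (essential) words."""
    base = log_prob(tokens, target_idx)                  # log P(w_t | full context)
    all_masked = [t if i == target_idx else mask for i, t in enumerate(tokens)]
    floor = log_prob(all_masked, target_idx)             # log P(w_t | no context)
    cur, cur_lp = list(tokens), base
    remaining = [i for i in range(len(tokens)) if i != target_idx]
    essential = []
    while remaining and cur_lp > floor:
        drops = {}
        for i in remaining:                              # score each candidate mask
            trial = list(cur)
            trial[i] = mask
            drops[i] = cur_lp - log_prob(trial, target_idx)
        best = max(drops, key=drops.get)                 # largest drop wins
        cur[best] = mask
        cur_lp -= drops[best]
        essential.append(best)
        remaining.remove(best)
    return sorted(essential)
```

Any MLM scorer (e.g., BERT's probability at the target position) can be plugged in as `log_prob`; the toy additive scorer in a test merely illustrates the stopping behavior.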
The full lists of top dependency paths are presented in the Appendix. + +Figure 7 shows examples of essential words from BERT with POS tags and dependency relations. Words in square brackets are target words and the underlined words are essential words. We observe that words close to the target in the sentence as well as in the dependency tree are often seen as essential. We can also see that BERT often includes the root of the dependency tree as an essential word. + +# 8 Application: Attention Pruning for Transformer + +As a pilot application, we leverage insights from analysis in previous sections to perform attention pruning for Transformer. Transformer has achieved impressive results in NLP and has been used for long sequences with more than 10 thousand tokens (Liu et al., 2018). Self-attention for a sequence of + +
| Model | Dev F1 | Test F1 |
|---|---|---|
| BERT - Full | 94.9 (0.2) | 90.8 (0.1) |
| BERT - Dynamic Pruning | 94.7 (0.2) | 90.6 (0.2) |
| BERT - Static Pruning | 94.5 (0.2) | 90.3 (0.1) |
Table 4: CoNLL-2003 Named Entity Recognition results (5 seeds). The attention pruning based on our findings gives comparable results to the original BERT.

length $L$ is of $\mathcal{O}(L^2)$ complexity in computation and memory. Many works attempt to improve the efficiency of self-attention by restricting the number of tokens that each input query can attend to (Child et al., 2019; Kitaev et al., 2020).

Our analysis in Section 6 shows that BERT has an effective $cws$ of around $78\%$. We perform dynamic attention pruning by making self-attention neglect the furthest $22\%$ of tokens. Due to the $\mathcal{O}(L^2)$ complexity, this could save around $39\%$ of the computation in self-attention, since $1 - 0.78^2 \approx 0.39$. We apply this locality constraint to self-attention when fine-tuning BERT on a downstream task. Specifically, we use the CoNLL-2003 Named Entity Recognition (NER) dataset (Sang and Meulder, 2003), with 200k words for training. We fine-tune BERT for NER in the same way as in Devlin et al. (2019). We also explore a static attention pruning that restricts the attention span to be within $[-5, +5]$. Results in Table 4 show that BERT with attention pruning has comparable performance to the original BERT, implying a successful application of our analysis findings. Note that we use an uncased vocabulary, which could explain the gap compared to Devlin et al. (2019).

# 9 Conclusion

In our context analysis, we have shown that BERT has an effective context size of around $75\%$ of the input length, while BiLSTM has about $30\%$. The difference in context usage is striking for long-range context beyond 20 words. Our extensive analysis of context window size demonstrates that BERT uses a much larger context window than BiLSTM. Besides, both models often identify words with common syntactic structures as essential context.
These findings not only help to better understand contextual impact in masked language models, but also encourage model improvements in efficiency and effectiveness in future work. On top of that, diving deeper into the connection between our context analysis and a model's robustness to noisy texts is also an interesting topic to explore.

# Acknowledgments

The authors would like to acknowledge the entire AWS Lex Science team for thoughtful discussions, honest feedback, and full support. We are also very grateful to the reviewers for insightful comments and helpful suggestions.

# References

Marco Ancona, Enea Ceolini, Cengiz Öztireli, and Markus Gross. 2017. Towards better understanding of gradient-based attribution methods for deep neural networks. arXiv preprint arXiv:1711.06104.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450.
Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. Transactions of the Association for Computational Linguistics, 7:49-72.
Yonatan Belinkov, Lluis Márquez i Villodre, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James R. Glass. 2017. Evaluating layers of representation in neural machine translation on part-of-speech and semantic tagging tasks. In IJCNLP.
Jordan L Boyd-Graber and David M Blei. 2009. Syntactic topic models. In Advances in neural information processing systems, pages 185-192.
Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating long sequences with sparse transformers. ArXiv, abs/1904.10509.
Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does bert look at? an analysis of bert's attention. ArXiv, abs/1906.04341.
Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda B. Viégas, and Martin Wattenberg. 2019. Visualizing and measuring the geometry of bert. In NeurIPS.
+Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single vector: Probing sentence embeddings for linguistic properties. In ACL. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. ArXiv, abs/1810.04805. +Kawin Ethayarajh. 2019. How contextual are contextualized word representations? comparing the geometry of bert, elmo, and gpt-2 embeddings. ArXiv, abs/1909.00512. + +Allyson Ettinger, Ahmed Elgohary, and Philip Resnik. 2016. Probing for semantic evidence of composition by means of simple classification tasks. In RepEval@ACL. +Agnieszka Falenska and Jonas Kuhn. 2019. The (non-)utility of structural features in bilstm-based dependency parsers. In ACL. +Zhijiang Guo, Yan Zhang, Zhiyang Teng, and Wei Lu. 2019. Densely connected graph convolutional networks for graph-to-sequence learning. Transactions of the Association for Computational Linguistics, 7:297-312. +John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In EMNLP/IJCNLP. +John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In *NAACL-HLT*. +Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780. +Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In ACL. +Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907-926. +Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019. Is bert really robust? a strong baseline for natural language attack on text classification and entailment. arXiv: Computation and Language. 
+Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77. +Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. 2018. Sharp nearby, fuzzy far away: How neural language models use context. In ACL. +Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficient transformer. ArXiv, abs/2001.04451. +Olga V. Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of bert. In EMNLP/IJCNLP. +Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942. + +Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019a. Linguistic knowledge and transferability of contextual representations. ArXiv, abs/1903.08855. +Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. ArXiv, abs/1801.10198. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. +Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Advances in neural information processing systems, pages 4765-4774. +Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. ArXiv, abs/1802.05365. +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9. 
+Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135-1144. +Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. ArXiv, cs.CL/0306050. +Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural mt learn source syntax? In EMNLP. +Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Christopher D. Manning. 2014. A gold standard dependency corpus for English. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014). +Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? probing for sentence structure in contextualized word representations. ArXiv, abs/1905.06316. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all + +you need. In Advances in neural information processing systems, pages 5998-6008. +Elena Voita, Rico Sennrich, and Ivan Titov. 2019. The bottom-up evolution of representations in the transformer: A study with machine translation and language modeling objectives. In EMNLP/IJCNLP. +Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. +Tian Wang and Kyunghyun Cho. 2016. Larger-context language modelling with recurrent neural network. In ACL. +Shaowei Yao, Tianming Wang, and Xiaojun Wan. 2020. Heterogeneous graph transformer for graph-to-sequence learning. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7145-7154.
Fan Yin, Quanyu Long, Tao Meng, and Kai-Wei Chang. 2020. On the robustness of language encoders against grammatical errors. arXiv preprint arXiv:2005.05683.
Amir Zeldes. 2017. The GUM corpus: Creating multilayer resources in the classroom. Language Resources and Evaluation, 51(3):581-612.

# A Appendix

# B Context Window Size Analysis

# B.1 Masking Strategies for Context Window Size Analysis

As mentioned in Section 6, to analyze how far masked LMs look within the available context, we follow a masking strategy with locality constraints. The strategy is as follows: we start with no context available, i.e., all context words masked, and iteratively increase the available context window size ($cws$) on both sides simultaneously until the entire context is available. This procedure is also depicted in Figure 8. For the symmetry analysis of $cws$, we follow a similar process but consider each side of the target word separately. Hence, when considering context words to the left, we iteratively increase the $cws$ on the left of the target word, keeping the context words on the right unmasked, as shown in Figure 9.

![](images/9ba95b68fd37633598027b0b3d39d5cf7594c1a18147fe3b7edc6063f93798c3.jpg)
Figure 8: Masking strategy for context window size analysis

# B.2 Additional Plots for Symmetricity Analysis of Context Window Size

In Figure 10, we show various plots investigating how the context around the target word impacts model performance when we look at left and right context separately. Figures 10a, 10d, 10g, 10j, 10m show left and right $cws$ for sentences in the short length category ($l \leq 25$). The trends show that, whereas NOUN, ADJ, and VERB leverage somewhat symmetric context windows, DET and ADP show asymmetric behavior, relying more heavily on right context words, for both models - BERT and BiLSTM.
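The two masking strategies of Appendix B.1 (Figures 8 and 9) can be sketched as follows. This is a toy illustration assuming whole-word tokens; handling of the target position itself (which is replaced by the mask token for prediction) is omitted, and the function names are ours, not the authors'.

```python
def cws_variants(tokens, target_idx, mask="[MASK]"):
    """Inputs for the context-window-size analysis: start with all context
    masked (cws = 0) and grow the visible window symmetrically around the
    target until the full context is available."""
    n = len(tokens)
    max_cws = max(target_idx, n - 1 - target_idx)
    out = []
    for cws in range(max_cws + 1):
        seq = [t if abs(i - target_idx) <= cws else mask
               for i, t in enumerate(tokens)]
        out.append((cws, seq))
    return out


def left_cws_variants(tokens, target_idx, mask="[MASK]"):
    """For the symmetry analysis: grow the window on the left only, keeping
    all right-context words unmasked (the Figure 9 setting)."""
    out = []
    for cws in range(target_idx + 1):
        seq = [t if i >= target_idx - cws else mask
               for i, t in enumerate(tokens)]
        out.append((cws, seq))
    return out
```

Each `(cws, seq)` pair is one model input; scoring the target word on each variant traces the curves shown in Figure 10.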
Similar observations can be made for sentences in the medium length bucket ($l > 25$ and $l \leq 50$), with ADP being an exception for which BiLSTM shows more symmetric context use than BERT, as shown in Figures 10b, 10e, 10h, 10k, 10n. However, for sentences belonging to

![](images/f817886afcf2e2f4b8663c0b05ab0579afbdd41b253082eaa694f6ea599b91a6.jpg)
Figure 9: Masking strategy for symmetry analysis of $cws$ on the left

the long length bucket ($l > 50$), left and right context window sizes are leveraged quite differently.

We can also see that BiLSTM leverages roughly the same number of context words as we move to buckets of longer sentence lengths, in comparison to BERT, which can leverage more context when it is available. This is aligned with our observation from Section 5.

# C Dependency Paths from Essential Words to Target Words

Given a target word, BERT or BiLSTM identifies a subset of context words as essential. Based on the dependency relations provided in the datasets, we extract the dependency paths starting from each essential word to the target word, i.e., the path traversed from an essential word to the given target word in the dependency tree. We summarize the top 10 most frequent dependency paths recognized by BERT or BiLSTM given target words of a specific part-of-speech category. Tables 5, 6, 7, 8, and 9 show the results for NOUN, ADJ, VERB, DET, and ADP, respectively. The up and down denote the direction of traversal, followed by the corresponding relations in the dependency tree. We can see that the top dependency paths for BERT and BiLSTM largely overlap with each other. We also observe that these most frequent dependency paths often align with common syntactic patterns. For example, the top 3 paths for NOUN are DET =(up:det)⇒ NOUN, which could be "the" cat; ADP =(up:case)⇒ NOUN, which could be "at" home; and ADJ =(up:amod)⇒ NOUN, which could be "white" car.
This implies that both models could be aware of the common syntactic structures in the natural language. + +To further compare the behaviors of BERT and BiLSTM when identifying essential context, we count the occurrence of dependency paths based on the disjoint essential words. That is, given an input sentence, we only count the dependency paths of + +![](images/2efdb0bf87ae0a317e25e7787faf71970e25ea0985a6913a32b2eb16bb71391e.jpg) +(a) BERT looking at context window size [-16, 16]; biLSTM looking at context window size [-7, 7] + +![](images/856c8ab7ac5b6b4133163ed448a652e99081a9993fd147a8d4e08fb806da48d8.jpg) +(d) BERT looking at context window size [-15, 16]; biLSTM looking at context window size [-5, 5] + +![](images/20633c1675f6b8db4e4b6039991e7f14540dad294c942e1e4df29ab70e798f6b.jpg) + +![](images/2feaffd7a2117d2b4443c55f5942c7266bfe596342cb4d7959234d0a4ce8d713.jpg) +(g) BERT looking at context window size [-14, 16]; biLSTM looking at context window size [-7, 6] + +![](images/0845f9d70f7ca9c75b6faa1a655492036c3c25b124bc0e2a8aac7af0975bc607.jpg) +(j) BERT looking at context window size [-14, 18]; biLSTM looking at context window size [-1, 3] +(m) BERT looking at context window size [-13, 16]; biLSTM looking at context window size [-2, 3] +Figure 10: Symmetricity analysis of context window size for different syntactic categories of target word belonging to sentences from buckets of different lengths; along the rows, we consider sentences of different lengths for a given syntactic category: (a) - (c) analysis for NOUN; (d) - (f) analysis for ADJ; (g) - (i) analysis for VERB; (j) - (l) analysis for DET; (m) - (o) analysis for ADP; along the columns, we consider different syntactic categories for given bucket ranging from short (first column), medium (second column) to long (third column) + +![](images/fbc16128ef16c4b58f245f0578658a33007cd912316b5d111eca8181cee99f9f.jpg) +(b) BERT looking at context window size [-29, 32]; biLSTM looking at context window size [-12, 
12]

![](images/74ec854609dc3de8a1165d0bc6359dc56b75f0bc02879b5e4adb5f57981aae42.jpg)
(e) BERT looking at context window size [-28, 30]; biLSTM looking at context window size [-7, 6]

![](images/462b7f33b30a9baa20c65b5e3a0feb2b4bd675dfcb91a510c1d321c7ab3daf36.jpg)

![](images/abaaee4bbcf078a331ef532c43b9212e9a633e0663298608025b172dfe1b85fc.jpg)
(h) BERT looking at context window size [-28, 30]; biLSTM looking at context window size [-9, 8]

![](images/1c2423bb868352f23d371d3ff0d425b3df9fe26711d22f73f608bdc95a0f9d73.jpg)
(k) BERT looking at context window size [-25, 31]; biLSTM looking at context window size [-1, 2]
(n) BERT looking at context window size [-25, 30]; biLSTM looking at context window size [-3, 3]

![](images/771c1f79f551000c13b3e81a15728bfe9f35bccaf8ae087bfbe88d23dfc9edc8.jpg)
(c) BERT looking at context window size [-135, 72]; biLSTM looking at context window size [-19, 5]

![](images/b2bca17c53dff1c6cf8a52e5d47a6dbc8e48d80eb055dfb876a70dec95d57e84.jpg)
(f) BERT looking at context window size [-54, 77]; biLSTM looking at context window size [-6, 4]

![](images/ca1257233dc8be74d36a2ee83f2c26b3e37d717c18fbb47b01ff9f1ef74afb81.jpg)
(i) BERT looking at context window size [-102, 148]; biLSTM looking at context window size [-10, 9]

![](images/b2c6c75b0d78b08c23e73d6146daca00d680235e2c984039782a353ca7b9676b.jpg)

![](images/ee8a496786c207ae7e01def6c8fec0c585e8063b6c8ca8b7f338f68b6a5e4983.jpg)
(l) BERT looking at context window size [-50, 75]; biLSTM looking at context window size [-2, 2]
(o) BERT looking at context window size [-99, 113]; biLSTM looking at context window size [-3, 3]
| BERT | BiLSTM |
|---|---|
| DET=(up:det)⇒ NOUN | DET=(up:det)⇒ NOUN |
| ADP=(up:case)⇒ NOUN | ADP=(up:case)⇒ NOUN |
| ADJ=(up:amod)⇒ NOUN | ADJ=(up:amod)⇒ NOUN |
| VERB=(down:obj)⇒ NOUN | VERB=(down:obj)⇒ NOUN |
| ADP=(up:case)⇒ NOUN=(up:nmod)⇒ NOUN | VERB=(down:obl)⇒ NOUN |
| NOUN=(down:compound)⇒ NOUN | ADP=(up:case)⇒ NOUN=(up:nmod)⇒ NOUN |
| NOUN=(up:compound)⇒ NOUN | NOUN=(up:nmod)⇒ NOUN |
| NOUN=(up:nmod)⇒ NOUN | NOUN=(down:nmod)⇒ NOUN |
| NOUN=(down:nmod)⇒ NOUN | NOUN=(down:compound)⇒ NOUN |
| VERB=(down:obl)⇒ NOUN | NOUN=(up:compound)⇒ NOUN |
+ +Table 5: Top 10 most frequent dependency paths when the target words are NOUN. + +
| BERT | BiLSTM |
|---|---|
| NOUN=(down:amod)⇒ ADJ | NOUN=(down:amod)⇒ ADJ |
| DET=(up:det)⇒ NOUN=(down:amod)⇒ ADJ | DET=(up:det)⇒ NOUN=(down:amod)⇒ ADJ |
| ADP=(up:case)⇒ NOUN=(down:amod)⇒ ADJ | ADP=(up:case)⇒ NOUN=(down:amod)⇒ ADJ |
| AUX=(up:cop)⇒ ADJ | AUX=(up:cop)⇒ ADJ |
| VERB=(down:obj)⇒ NOUN=(down:amod)⇒ ADJ | ADV=(up:advmod)⇒ ADJ |
| ADV=(up:advmod)⇒ ADJ | VERB=(down:obj)⇒ NOUN=(down:amod)⇒ ADJ |
| ADJ=(up:amod)⇒ NOUN=(down:amod)⇒ ADJ | ADJ=(up:amod)⇒ NOUN=(down:amod)⇒ ADJ |
| ADP=(up:case)⇒ NOUN=(up:nmod)⇒ NOUN=(down:amod)⇒ ADJ | PUNCT=(up:punct)⇒ ADJ |
| PUNCT=(up:punct)⇒ NOUN=(down:amod)⇒ ADJ | PUNCT=(up:punct)⇒ NOUN=(down:amod)⇒ ADJ |
| PUNCT=(up:punct)⇒ ADJ | |
+ +Table 6: Top 10 most frequent dependency paths when the target words are ADJ. + +
| BERT | BiLSTM |
|---|---|
| PRON=(up:nsubj)⇒ VERB | PRON=(up:nsubj)⇒ VERB |
| NOUN=(up:obj)⇒ VERB | NOUN=(up:obj)⇒ VERB |
| PUNCT=(up:punct)⇒ VERB | PUNCT=(up:punct)⇒ VERB |
| AUX=(up:aux)⇒ VERB | AUX=(up:aux)⇒ VERB |
| ADV=(up:advmod)⇒ VERB | ADV=(up:advmod)⇒ VERB |
| ADP=(up:case)⇒ NOUN=(up:obl)⇒ VERB | ADP=(up:case)⇒ NOUN=(up:obl)⇒ VERB |
| NOUN=(up:obl)⇒ VERB | NOUN=(up:obl)⇒ VERB |
| PART=(up:mark)⇒ VERB | PART=(up:mark)⇒ VERB |
| DET=(up:det)⇒ NOUN=(up:obj)⇒ VERB | DET=(up:det)⇒ NOUN=(up:obj)⇒ VERB |
| SCONJ=(up:mark)⇒ VERB | SCONJ=(up:mark)⇒ VERB |
+ +Table 7: Top 10 most frequent dependency paths when the target words are VERB. + +
| BERT | BiLSTM |
|---|---|
| NOUN=(down:det)⇒ DET | NOUN=(down:det)⇒ DET |
| ADP=(up:case)⇒ NOUN=(down:det)⇒ DET | ADP=(up:case)⇒ NOUN=(down:det)⇒ DET |
| ADJ=(up:amod)⇒ NOUN=(down:det)⇒ DET | ADJ=(up:amod)⇒ NOUN=(down:det)⇒ DET |
| VERB=(down:obj)⇒ NOUN=(down:det)⇒ DET | VERB=(down:obj)⇒ NOUN=(down:det)⇒ DET |
| ADP=(up:case)⇒ NOUN=(up:nmod)⇒ NOUN=(down:det)⇒ DET | ADP=(up:case)⇒ NOUN=(up:nmod)⇒ NOUN=(down:det)⇒ DET |
| VERB=(down:obl)⇒ NOUN=(down:det)⇒ DET | VERB=(down:obl)⇒ NOUN=(down:det)⇒ DET |
| NOUN=(up:compound)⇒ NOUN=(down:det)⇒ DET | NOUN=(down:nmod)⇒ NOUN=(down:det)⇒ DET |
| PROPN=(down:det)⇒ DET | NOUN=(up:compound)⇒ NOUN=(down:det)⇒ DET |
| NOUN=(down:nmod)⇒ NOUN=(down:det)⇒ DET | PROPN=(down:det)⇒ DET |
| NOUN=(up:nmod)⇒ NOUN=(down:det)⇒ DET | NOUN=(up:nmod)⇒ NOUN=(down:det)⇒ DET |
+ +Table 8: Top 10 most frequent dependency paths when the target words are DET. + +
| BERT | BiLSTM |
|---|---|
| NOUN=(down:case)⇒ ADP | NOUN=(down:case)⇒ ADP |
| DET=(up:det)⇒ NOUN=(down:case)⇒ ADP | DET=(up:det)⇒ NOUN=(down:case)⇒ ADP |
| VERB=(down:obl)⇒ NOUN=(down:case)⇒ ADP | VERB=(down:obl)⇒ NOUN=(down:case)⇒ ADP |
| NOUN=(down:nmod)⇒ NOUN=(down:case)⇒ ADP | NOUN=(down:nmod)⇒ NOUN=(down:case)⇒ ADP |
| PROPN=(down:case)⇒ ADP | PROPN=(down:case)⇒ ADP |
| ADJ=(up:amod)⇒ NOUN=(down:case)⇒ ADP | ADJ=(up:amod)⇒ NOUN=(down:case)⇒ ADP |
| DET=(up:det)⇒ NOUN=(down:nmod)⇒ NOUN=(down:case)⇒ ADP | DET=(up:det)⇒ NOUN=(down:nmod)⇒ NOUN=(down:case)⇒ ADP |
| PUNCT=(up:punct)⇒ VERB=(down:obl)⇒ NOUN=(down:case)⇒ ADP | PRON=(up:nmod:pos)⇒ NOUN=(down:case)⇒ ADP |
| NOUN=(down:nmod)⇒ PROPN=(down:case)⇒ ADP | NOUN=(down:nmod)⇒ PROPN=(down:case)⇒ ADP |
| PRON=(up:nmod:pos)⇒ NOUN=(down:case)⇒ ADP | PRON=(down:case)⇒ ADP |
Table 9: Top 10 most frequent dependency paths when the target words are ADP.

essential words which are unique to each model, e.g., words essential to BERT but not essential to BiLSTM. Our goal is to see whether, for these essential words unique to a model, some special dependency paths are captured by that model. Tables 10, 11, 12, 13, and 14 show the results for NOUN, ADJ, VERB, DET, and ADP, respectively. We observe that roughly the top 5 dependency paths for essential words unique to BERT or BiLSTM mostly overlap with each other, as well as with the results in Tables 5, 6, 7, 8, and 9. This implies that words of common dependency paths are sometimes identified as essential by BERT while BiLSTM fails to do so, and sometimes it is the other way around. In other words, there is room to make models more consistently aware of the syntactic structure of an input. The observation suggests that explicitly incorporating dependency relations into pre-training could potentially benefit masked language models.
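The up:/down: path notation used in the tables can be reproduced with a small tree walk. The head-index input format below (`heads[i]` is the head of token `i`, `-1` for the root, `deprels[i]` its relation to that head) is a toy assumption for illustration, not the paper's code.

```python
def dependency_path(heads, deprels, src, tgt):
    """Render the traversal path from an essential word `src` to the
    target word `tgt` in the dependency tree as a list of
    '(up:rel)' / '(down:rel)' steps."""
    def chain_to_root(i):
        chain = [i]
        while heads[chain[-1]] != -1:
            chain.append(heads[chain[-1]])
        return chain

    up, down = chain_to_root(src), chain_to_root(tgt)
    lca = next(node for node in up if node in down)  # lowest common ancestor
    steps = []
    node = src
    while node != lca:                               # climb from src to the LCA
        steps.append(f"(up:{deprels[node]})")
        node = heads[node]
    for node in reversed(down[:down.index(lca)]):    # then descend to tgt
        steps.append(f"(down:{deprels[node]})")
    return steps
```

For "the cat" with "cat" as target, this yields the single step `(up:det)`, matching the table entry DET=(up:det)⇒ NOUN.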
| BERT | BiLSTM |
|---|---|
| DET=(up:det)⇒ NOUN | ADP=(up:case)⇒ NOUN |
| ADP=(up:case)⇒ NOUN | DET=(up:det)⇒ NOUN |
| ADJ=(up:amod)⇒ NOUN | VERB=(down:obl)⇒ NOUN |
| PUNCT=(up:punct)⇒ NOUN | VERB=(down:obj)⇒ NOUN |
| VERB=(down:obl)⇒ NOUN | PUNCT=(up:punct)⇒ NOUN |
| VERB=(down:obj)⇒ NOUN | NOUN=(up:nmod)⇒ NOUN |
| ADP=(up:case)⇒ NOUN=(up:nmod)⇒ NOUN | PUNCT=(up:punct)⇒ VERB=(down:obj)⇒ NOUN |
| NOUN=(up:nmod)⇒ NOUN | NOUN=(down:nmod)⇒ NOUN |
| NOUN=(down:nmod)⇒ NOUN | PUNCT=(up:punct)⇒ VERB=(down:obl)⇒ NOUN |
| NOUN=(up:compound)⇒ NOUN | PRON=(up:nsubj)⇒ VERB=(down:obj)⇒ NOUN |
+ +Table 10: Top 10 dependency paths from essential words unique to each model to the target words that are NOUN. + +
| BERT | BiLSTM |
|---|---|
| NOUN=(down:amod)⇒ ADJ | NOUN=(down:amod)⇒ ADJ |
| DET=(up:det)⇒ NOUN=(down:amod)⇒ ADJ | PUNCT=(up:punct)⇒ ADJ |
| ADP=(up:case)⇒ NOUN=(down:amod)⇒ ADJ | ADP=(up:case)⇒ NOUN=(down:amod)⇒ ADJ |
| VERB=(down:obj)⇒ NOUN=(down:amod)⇒ ADJ | VERB=(down:obl)⇒ NOUN=(down:amod)⇒ ADJ |
| PUNCT=(up:punct)⇒ NOUN=(down:amod)⇒ ADJ | PUNCT=(up:punct)⇒ NOUN=(down:amod)⇒ ADJ |
| AUX=(up:cop)⇒ ADJ | NOUN=(up:nmod)⇒ NOUN=(down:amod)⇒ ADJ |
| ADJ=(up:amod)⇒ NOUN=(down:amod)⇒ ADJ | DET=(up:det)⇒ NOUN=(down:amod)⇒ ADJ |
| PUNCT=(up:punct)⇒ ADJ | VERB=(down:obl)⇒ NOUN=(down:amod)⇒ ADJ |
| ADP=(up:case)⇒ NOUN=(up:nmod)⇒ NOUN=(down:amod)⇒ ADJ | PUNCT=(up:punct)⇒ VERB=(down:obl)⇒ NOUN=(down:amod)⇒ ADJ |
| VERB=(down:obl)⇒ NOUN=(down:amod)⇒ ADJ | PRON=(up:subj)⇒ ADJ |
+ +Table 11: Top 10 dependency paths from essential words unique to each model to the target words that are ADJ. + +
| BERT | BiLSTM |
|---|---|
| PUNCT=(up:punct)⇒ VERB | PUNCT=(up:punct)⇒ VERB |
| ADP=(up:case)⇒ NOUN=(up:obl)⇒ VERB | NOUN=(up:obl)⇒ VERB |
| NOUN=(up:obj)⇒ VERB | NOUN=(up:obj)⇒ VERB |
| NOUN=(up:obl)⇒ VERB | PRON=(up:nsubj)⇒ VERB |
| DET=(up:det)⇒ NOUN=(up:obj)⇒ VERB | DET=(up:det)⇒ NOUN=(up:obl)⇒ VERB |
| PRON=(up:nsubj)⇒ VERB | VERB=(up:advcl)⇒ VERB |
| ADV=(up:advmod)⇒ VERB | ADP=(up:case)⇒ NOUN=(up:obl)⇒ VERB |
| NOUN=(up:nsubj)⇒ VERB | VERB=(down:advcl)⇒ VERB |
| CCONJ=(up:cc)⇒ VERB | VERB=(up:conj)⇒ VERB |
| SCONJ=(up:mark)⇒ VERB | VERB=(down:conj)⇒ VERB |
+ +Table 12: Top 10 dependency paths from essential words unique to each model to the target words that are VERB. + +
| BERT | BiLSTM |
|---|---|
| NOUN=(down:det)⇒ DET | NOUN=(down:det)⇒ DET |
| ADP=(up:case)⇒ NOUN=(down:det)⇒ DET | VERB=(down:obl)⇒ NOUN=(down:det)⇒ DET |
| VERB=(down:obj)⇒ NOUN=(down:det)⇒ DET | NOUN=(up:nmod)⇒ NOUN=(down:det)⇒ DET |
| NOUN=(up:nmod)⇒ NOUN=(down:det)⇒ DET | NOUN=(down:nmod)⇒ NOUN=(down:det)⇒ DET |
| VERB=(down:obl)⇒ NOUN=(down:det)⇒ DET | PUNCT=(up:punct)⇒ VERB=(down:obl)⇒ NOUN=(down:det)⇒ DET |
| ADJ=(up:amod)⇒ NOUN=(down:det)⇒ DET | PUNCT=(up:punct)⇒ NOUN=(down:det)⇒ DET |
| ADP=(up:case)⇒ NOUN=(up:nmod)⇒ NOUN=(down:det)⇒ DET | ADP=(up:case)⇒ NOUN=(down:det)⇒ DET |
| PRON=(up:subj)⇒ VERB=(down:obj)⇒ NOUN=(down:det)⇒ DET | ADP=(up:case)⇒ NOUN=(up:nmod)⇒ NOUN=(down:det)⇒ DET |
| NOUN=(up:compound)⇒ NOUN=(down:det)⇒ DET | VERB=(down:obj)⇒ NOUN=(down:det)⇒ DET |
| DET=(up:det)⇒ NOUN=(down:nmod)⇒ NOUN=(down:det)⇒ DET | PUNCT=(up:punct)⇒ VERB=(down:obj)⇒ NOUN=(down:det)⇒ DET |
+ +Table 13: Top 10 dependency paths from essential words unique to each model to the target words that are DET. + +
| BERT | BiLSTM |
|---|---|
| NOUN=(down:case)⇒ ADP | NOUN=(down:case)⇒ ADP |
| DET=(up:det)⇒ NOUN=(down:case)⇒ ADP | VERB=(down:obl)⇒ NOUN=(down:case)⇒ ADP |
| VERB=(down:obl)⇒ NOUN=(down:case)⇒ ADP | PUNCT=(up:punct)⇒ VERB=(down:obl)⇒ NOUN=(down:case)⇒ ADP |
| ADJ=(up:amod)⇒ NOUN=(down:case)⇒ ADP | DET=(up:det)⇒ NOUN=(down:case)⇒ ADP |
| PUNCT=(up:punct)⇒ VERB=(down:obl)⇒ NOUN=(down:case)⇒ ADP | ADJ=(up:amod)⇒ NOUN=(down:case)⇒ ADP |
| PROPN=(down:case)⇒ ADP | AUX=(up:aux)⇒ VERB=(down:obl)⇒ NOUN=(down:case)⇒ ADP |
| NOUN=(down:nmod)⇒ NOUN=(down:case)⇒ ADP | PROPN=(down:case)⇒ ADP |
| DET=(up:det)⇒ NOUN=(down:nmod)⇒ NOUN=(down:case)⇒ ADP | PRON=(up:nsubj)⇒ VERB=(down:obl)⇒ NOUN=(down:case)⇒ ADP |
| ADP=(up:case)⇒ NOUN=(down:nmod)⇒ NOUN=(down:case)⇒ ADP | NOUN=(up:nmod)⇒ NOUN=(down:case)⇒ ADP |
| NOUN=(up:compound)⇒ NOUN=(down:case)⇒ ADP | ADP=(up:case)⇒ NOUN=(up:nmod)⇒ NOUN=(down:case)⇒ ADP |
+ +Table 14: Top 10 dependency paths from essential words unique to each model to the target words that are ADP. \ No newline at end of file diff --git a/contextanalysisforpretrainedmaskedlanguagemodels/images.zip b/contextanalysisforpretrainedmaskedlanguagemodels/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..83b460239c1c2c3273c41ca63ff0898d2f82974c --- /dev/null +++ b/contextanalysisforpretrainedmaskedlanguagemodels/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1c519a9441ffd96d5b5e139db60300982c313b0f8a17f65c9644e5db298901ef +size 1479198 diff --git a/contextanalysisforpretrainedmaskedlanguagemodels/layout.json b/contextanalysisforpretrainedmaskedlanguagemodels/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d9a6641278b14cb15f0304ea50347bd60d9f6746 --- /dev/null +++ b/contextanalysisforpretrainedmaskedlanguagemodels/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dc4918475f068845533477fbbe1b2e4242cb63352ead2a9d1db6546cfa15c48f +size 490936 diff --git a/contextawarestandaloneneuralspellingcorrection/dbcea614-fdf1-4bb1-975d-d9d671e0ea02_content_list.json b/contextawarestandaloneneuralspellingcorrection/dbcea614-fdf1-4bb1-975d-d9d671e0ea02_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..7ee6b44d0fb9f07e6b0dbc28b46fb21b8c26ab2e --- /dev/null +++ b/contextawarestandaloneneuralspellingcorrection/dbcea614-fdf1-4bb1-975d-d9d671e0ea02_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8f4a594e2a18c56e60b581ac206c5035a38c71f7612c8a2095a12470502c312e +size 51093 diff --git a/contextawarestandaloneneuralspellingcorrection/dbcea614-fdf1-4bb1-975d-d9d671e0ea02_model.json b/contextawarestandaloneneuralspellingcorrection/dbcea614-fdf1-4bb1-975d-d9d671e0ea02_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..1e4bf8bdb9fd43121ed78cedb8388afb1a8a6b41 --- /dev/null +++ b/contextawarestandaloneneuralspellingcorrection/dbcea614-fdf1-4bb1-975d-d9d671e0ea02_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9af07814c564cfe24cdfbee10d4001953de079c0447613bd0b35c36e34c049f7 +size 65210 diff --git a/contextawarestandaloneneuralspellingcorrection/dbcea614-fdf1-4bb1-975d-d9d671e0ea02_origin.pdf b/contextawarestandaloneneuralspellingcorrection/dbcea614-fdf1-4bb1-975d-d9d671e0ea02_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..55518e4c447fbcc27e9f3458d023f4bbb8d9182a --- /dev/null +++ b/contextawarestandaloneneuralspellingcorrection/dbcea614-fdf1-4bb1-975d-d9d671e0ea02_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a055f895df6010ed51e03b23ce215a2508c948fc6b74d7c05c3e426689525651 +size 370059 diff --git a/contextawarestandaloneneuralspellingcorrection/full.md b/contextawarestandaloneneuralspellingcorrection/full.md new file mode 100644 index 0000000000000000000000000000000000000000..d795574bbff3b1d1fce282360f86f9138f300296 --- /dev/null +++ b/contextawarestandaloneneuralspellingcorrection/full.md @@ -0,0 +1,208 @@ +# Context-aware Stand-alone Neural Spelling Correction + +Xiangci Li * + +University of Texas at Dallas + +Richardson, TX + +Hairong Liu + +Baidu USA + +Sunnyvale, CA + +Liang Huang + +Baidu USA + +Sunnyvale, CA + +lixiangci8@gmail.com liuhairong@baidu.com lianghuang@baidu.com + +# Abstract + +Existing natural language processing systems are vulnerable to noisy inputs resulting from misspellings. On the contrary, humans can easily infer the corresponding correct words from their misspellings and surrounding context. 
Inspired by this, we address the standalone spelling correction problem, which only corrects the spelling of each token without additional token insertion or deletion, by utilizing both spelling information and global context representations. We present a simple yet powerful solution that jointly detects and corrects misspellings as a sequence labeling task by fine-tuning a pre-trained language model. Our solution outperforms the previous state-of-the-art result by $12.8\%$ absolute $F_{0.5}$ score. + +# 1 Introduction + +A spelling corrector is an important and ubiquitous pre-processing tool in a wide range of applications, such as word processors, search engines and machine translation systems. Equipped with a surprisingly robust language processing system that denoises scrambled spellings, humans can solve spelling correction with relative ease (Rawlinson, 1976). However, spelling correction is a challenging task for a machine, because words can be misspelled in various ways, and a machine has difficulties in fully utilizing the contextual information. + +Misspellings can be categorized into non-word misspellings, which are out-of-vocabulary, and real-word misspellings, which are not (Klabunde, 2002). The dictionary look-up method can detect non-word misspellings, while real-word spelling errors are harder to detect, since these misspellings are in the vocabulary (Mays et al., 1991; Wilcox-O'Hearn et al., 2008). In this work, we address the stand-alone (Li et al., 2018) spelling correction problem. It only
+ +![](images/401f8896dd612952a28c77f61bff47bf5057cc695b12791da9add8fe44c25030.jpg) + +corrects the spelling of each token without introducing new tokens or deleting tokens, so that the original information is maximally preserved for the downstream tasks. + +We formulate the stand-alone spelling correction as a sequence labeling task and jointly detect and correct misspellings. Inspired by the human language processing system, we propose a novel solution in the following aspects: (1) We encode both spelling information and global context information in the neural network. (2) We enhance the real-word correction performance by initializing the model from a pre-trained language model (LM). (3) We strengthen the model's robustness to unseen non-word misspellings by augmenting the training dataset with synthetic character-level noise. As a result, our best model ${}^{1}$ outperforms the previous state-of-the-art result (Wang et al., 2019) by ${12.8}\%$ absolute ${F}_{0.5}$ score. + +# 2 Approach + +We use the transformer-encoder (Vaswani et al., 2017) to encode the input sequences and denote it as Encoder. As illustrated in Figure 1, we present both the Word+Char encoder and the Subword encoder, because we believe the former is better at encoding spelling information, while the latter has the benefit of utilizing a large pre-trained LM. + +Word+Char encoder. We use a word encoder to extract global context information and a character encoder to encode spelling information. As shown in equation 1, in order to denoise the noisy word sequence $S^*$ to the clean sequence $S$ , we first separately encode $S^*$ using a word-level transformer-encoder Encoder_word and each noisy spelling sequence $C_k^*$ of token $k$ via a character-level transformer-encoder Encoder_char. For Encoder_word, we replace non-word misspellings, i.e. OOV words, with a $\langle unk \rangle$ token.
For Encoder_char, we treat each character as a token and each word as a "sentence", so each word's character-sequence embedding $h_{char}^k$ is computed independently of the others. Since the transformer-encoder (Vaswani et al., 2017) computes contextualized token representations, we take the [CLS] token representation of each character sequence as the local character-level representation of $S^*$. Finally, we jointly predict $S$ by concatenating the local and global context representations. + +$$ +\begin{aligned} h_{word} &= \mathrm{Encoder}_{word}(S^{*}) \\ h_{char}^{k} &= \mathrm{Encoder}_{char}(C_{k}^{*}) \\ h_{char} &= \left[\mathrm{CLS}(h_{char}^{1}), \mathrm{CLS}(h_{char}^{2}), \dots, \mathrm{CLS}(h_{char}^{n})\right] \\ h_{S} &= [h_{word}; h_{char}] \\ p(S) &= \mathrm{softmax}(W h_{S} + b) \end{aligned} \tag{1} +$$ + +Subword encoder. Alternatively, we use subword tokenization to simultaneously address the spelling and context information. Formally, as shown in equation 2, given a noisy subword token sequence $S_{sub}^{*}$, we encode it using a transformer-encoder Encoder_sub and simply use an affine layer to predict the sequence of each subword token's corresponding correct word token $S_{sub}$ in the BIO2 tagging scheme (Sang and Veenstra, 1999). + +$$ +\begin{aligned} h_{sub} &= \mathrm{Encoder}_{sub}(S_{sub}^{*}) \\ p(S_{sub}) &= \mathrm{softmax}(W_{sub} h_{sub} + b_{sub}) \end{aligned} \tag{2} +$$ + +Furthermore, we fine-tune our Subword encoder model with a pre-trained LM initialization to enhance the real-word misspelling correction performance. + +We use cross-entropy loss as our training objective.
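In shape terms, equation (1) reduces to concatenating the global and local representations and applying an affine layer plus a softmax. A minimal NumPy sketch, with random projections standing in for the trained encoders and toy dimensions chosen for illustration (not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, d_word, d_char, vocab = 5, 512, 256, 1000  # toy sizes, not the paper's

# Stand-ins for Encoder_word(S*) and the per-word [CLS] outputs of Encoder_char
h_word = rng.normal(size=(n_words, d_word))
h_char = rng.normal(size=(n_words, d_char))

# h_S = [h_word; h_char]: concatenate global and local representations
h_s = np.concatenate([h_word, h_char], axis=-1)

# p(S) = softmax(W h_S + b): one distribution over clean words per position
W = rng.normal(size=(d_word + d_char, vocab)) * 0.01
b = np.zeros(vocab)
logits = h_s @ W + b
p = np.exp(logits - logits.max(axis=-1, keepdims=True))
p /= p.sum(axis=-1, keepdims=True)

print(p.shape)  # (5, 1000)
```

Each row of `p` is a probability distribution over the word vocabulary for one input position, which is exactly what the cross-entropy training objective consumes.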
+ +Finally, in addition to the natural misspelling noise, we apply synthetic character-level noise to the training set to enhance the model's robustness to unseen misspelling patterns. The details are introduced in Section 3.1. + +# 3 Experiments + +# 3.1 Dataset + +Since we cannot find a sentence-level misspelling dataset, we create one by using the sentences in the 1-Billion-Word-Language-Model-Benchmark (Chelba et al., 2013) as gold sentences and randomly replacing words with misspellings from a word-level natural misspelling list (Mitton, 1985; Belinkov and Bisk, 2017) to generate noisy input sentences. In a real scenario, there will always be unseen misspellings after model deployment, regardless of the size of the misspelling list used for training. Therefore, we only use $80\%$ of our full word-level misspelling list for the train and dev sets. In order to strengthen the robustness of the model to various noisy spellings, we also add noise from a character-level synthetic misspelling list (Belinkov and Bisk, 2017) to the training set. As a result, real-word misspellings account for approximately $28\%$ of the total misspellings in both the dev and test sets. The details are described in Section A.1. + +# 3.2 Results + +Performance Metrics. We compare word-level precision, recall and $F_{0.5}$ score, which emphasizes precision more. We also provide accuracy for reference in Table 1, because both of the baselines were evaluated with accuracy score. Table 3 shows the definition of true positive (TP), false positive (FP), false negative (FN) and true negative (TN) in this work to avoid confusion.
We calculate them using the following equations: + +$$ +\begin{aligned} accuracy &= \frac{TP + TN}{TP + FP + FN + TN} \\ precision &= \frac{TP}{TP + FP} \\ recall &= \frac{TP}{TP + FN} \\ F_{\beta} &= (1 + \beta^{2}) \cdot \frac{precision \cdot recall}{(\beta^{2} \cdot precision) + recall} \end{aligned} +$$ + +where $\beta = 0.5$ in this work. + +
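For concreteness, the metrics can be computed directly from the four counts defined in Table 3; the counts below are illustrative values only, not results from our experiments:

```python
def correction_metrics(tp, fp, fn, tn, beta=0.5):
    """Accuracy, precision, recall and F_beta from token-level counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_beta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
    return accuracy, precision, recall, f_beta

# Illustrative counts: 90 fixed misspellings, 10 spurious edits,
# 20 missed misspellings, 880 correct tokens left untouched.
acc, p, r, f05 = correction_metrics(tp=90, fp=10, fn=20, tn=880)
print(acc, round(p, 3), round(r, 3), round(f05, 3))  # 0.97 0.9 0.818 0.882
```

Note that with $\beta = 0.5$ the score lands closer to precision (0.9) than recall (0.818), reflecting that spurious "corrections" are penalized more heavily than missed ones.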
| # | Model | Acc (dev) | P (dev) | R (dev) | $F_{0.5}$ (dev) | Acc (test) | P (test) | R (test) | $F_{0.5}$ (test) |
|---|-------|-----------|---------|---------|-----------------|------------|----------|----------|------------------|
| 1 | ScRNN (Sakaguchi et al., 2017) | 0.958 | 0.823 | 0.890 | 0.836 | 0.946 | 0.755 | 0.865 | 0.775 |
| 2 | MUDE (Wang et al., 2019) | 0.966 | 0.829 | 0.952 | 0.851 | 0.952 | 0.751 | 0.928 | 0.781 |
| 3 | Char Encoder | 0.883 | 0.517 | 0.819 | 0.559 | 0.870 | 0.458 | 0.802 | 0.501 |
| 4 | Word Encoder | 0.932 | 0.565 | 0.949 | 0.615 | 0.924 | 0.521 | 0.903 | 0.570 |
| 5 | Word + Char Encoder | 0.988 | 0.959 | 0.959 | 0.959 | 0.974 | 0.882 | 0.929 | 0.891 |
| 6 | + random char | 0.986 | 0.953 | 0.947 | 0.951 | 0.976 | 0.898 | 0.927 | 0.904 |
| 7 | Subword Encoder | 0.986 | 0.934 | 0.972 | 0.941 | 0.968 | 0.831 | 0.950 | 0.852 |
| 8 | + Char Encoder | 0.980 | 0.908 | 0.959 | 0.917 | 0.963 | 0.808 | 0.939 | 0.831 |
| 9 | + random char | 0.985 | 0.931 | 0.966 | 0.938 | 0.973 | 0.866 | 0.950 | 0.881 |
| 10 | + LM pre-train | 0.990 | 0.951 | 0.982 | 0.957 | 0.975 | 0.866 | 0.962 | 0.883 |
| 11 | + LM pre-train + random char | 0.989 | 0.946 | 0.979 | 0.952 | 0.980 | 0.896 | 0.964 | 0.909 |
+ +Table 1: Model performance and ablation studies measured by accuracy, precision, recall and $F_{0.5}$ . + +
| # | Model | Real-word P (dev) | R (dev) | $F_{0.5}$ (dev) | P (test) | R (test) | $F_{0.5}$ (test) | Non-word P (dev) | P (test) |
|---|-------|-------------------|---------|-----------------|----------|----------|------------------|------------------|----------|
| 1 | ScRNN (Sakaguchi et al., 2017) | 0.507 | 0.592 | 0.522 | 0.456 | 0.523 | 0.468 | 0.952 | 0.873 |
| 2 | MUDE (Wang et al., 2019) | 0.595 | 0.825 | 0.630 | 0.533 | 0.747 | 0.566 | 0.945 | 0.855 |
| 3 | Char Encoder | 0.106 | 0.304 | 0.122 | 0.099 | 0.296 | 0.113 | 0.886 | 0.792 |
| 4 | Word Encoder | 0.916 | 0.889 | 0.911 | 0.835 | 0.792 | 0.826 | 0.438 | 0.414 |
| 5 | Word + Char Encoder | 0.900 | 0.851 | 0.900 | 0.819 | 0.750 | 0.804 | 0.979 | 0.903 |
| 6 | + random char | 0.902 | 0.807 | 0.881 | 0.819 | 0.741 | 0.802 | 0.969 | 0.924 |
| 7 | Subword Encoder | 0.804 | 0.897 | 0.821 | 0.715 | 0.827 | 0.735 | 0.988 | 0.877 |
| 8 | + Char Encoder | 0.740 | 0.848 | 0.759 | 0.664 | 0.786 | 0.685 | 0.978 | 0.867 |
| 9 | + random char | 0.799 | 0.876 | 0.813 | 0.718 | 0.819 | 0.736 | 0.984 | 0.925 |
| 10 | + LM pre-train | 0.850 | 0.935 | 0.866 | 0.771 | 0.870 | 0.789 | 0.988 | 0.877 |
| 11 | + LM pre-train + random char | 0.845 | 0.922 | 0.860 | 0.787 | 0.872 | 0.803 | 0.987 | 0.941 |
+ +Table 2: Real-word and non-word performance measured by precision, recall and $F_{0.5}$. Non-word recall is 1.000 for all models. + +
| = Ground Truth? | Noisy Input | Prediction |
|---|---|---|
| True Positive | $\times$ | $\checkmark$ |
| False Positive | $\checkmark$ | $\times$ |
| False Negative | $\times$ | $\times$ |
| True Negative | $\checkmark$ | $\checkmark$ |
+ +Table 3: Definition of True Positive (TP), False Positive (FP), False Negative (FN) and True Negative (TN). $\checkmark$ means the noisy input token or the prediction is the same as the ground truth; $\times$ means it differs. + +Baselines. Sakaguchi et al. (2017) proposed the semi-character recurrent neural network (ScRNN), which takes the first and the last character, together with a bag of the remaining characters, as the features for each word. Then they used an LSTM (Hochreiter and Schmidhuber, 1997) to predict each original word. Wang et al. (2019) proposed MUDE, which uses a transformer-encoder (Vaswani et al., 2017) to encode character sequences as word representations and an LSTM (Hochreiter and Schmidhuber, 1997) to correct each word. They also used gated recurrent units (GRU) (Cho et al., 2014) to perform character-level correction as an auxiliary task during training. We train ScRNN (Sakaguchi et al., 2017) and MUDE (Wang et al., 2019), both of which are stand-alone neural spelling correctors, on our dataset as baselines. + +Overview. As row 11 of Table 1 shows, fine-tuning the Subword (WordPiece (Peters et al., 2018)) encoder model with LM initialization (ERNIE 2.0 (Sun et al., 2019)) on the augmented dataset with synthetic character-level misspellings yields the best performance. Without leveraging a pre-trained LM, the Word+Char Encoder model trained on the augmented dataset with synthetic character-level misspellings performs the best (row 6). In fact, the differences between these approaches are small. + +In Table 2, we calculate real-word and non-word correction performance to explain the effect of each training technique applied. Note that as shown in Figure 1, because non-word misspellings are preprocessed already, the detection of these non-word misspellings can be trivially accomplished, which results in all models having non-word recall of 1.000.
+ +As Table 2 shows, strong models overall perform well on both real-word and non-word misspellings. Although our models perform better on non-word misspellings than on real-word misspellings, the significant improvement of our models over the baselines comes from the real-word misspellings, due to the usage of the pre-trained LM. In the following paragraphs, we state our claims and support them with our experimental results. + +Spelling correction requires both spelling and context information. As Table 2 shows, without the context information, the character encoder model (row 3) performs poorly on real-word misspellings. In contrast, the word encoder model (row 4) performs well on real-word misspellings but poorly on non-word misspellings, due to the lack of spelling information. The combined Word+Char encoder model (row 5) leverages both spelling and context information and thus improves $F_{0.5}$ by nearly $40\%$ absolute in Table 1. It even outperforms the LM-initialized model (row 10). Both of the baseline models (rows 1 and 2) perform poorly, because they perform spelling corrections upon character sequences, which disregards the semantics of the context, as their poor real-word performance in Table 2 rows 1 and 2 suggests. On the other hand, since subword embeddings essentially subsume character embeddings, an additional character encoder does not improve the performance of the Subword encoder model (Table 1 row 8). + +Pre-trained LM facilitates spelling correction. As row 10 of Table 1 shows, fine-tuning the model with a pre-trained LM weight initialization improves both precision and recall over the Subword encoder model (row 7). The LM pre-training mainly improves real-word recall, as Table 2 row 10 suggests. Pre-trained LMs are trained with multiple unsupervised pre-training tasks on a much larger corpus than ours, which virtually expands the training task and the training set.
+ +Because most neural language models are trained on the subword level, we are not able to obtain a pre-trained LM initialized version of the Word+Char encoder model (row 5). Nonetheless, we hypothesize that such a model would yield very promising performance given sufficient training data and proper LM pre-training tasks. + +Training on additional synthetic character-level noise improves model robustness. As rows 6, 9 and 11 of Tables 1 and 2 show, in addition to frequently occurring natural misspellings, training models on texts with synthetic character-level noise improves the test performance, which is mainly contributed by the improvement of precision on non-word misspellings. Note that the train and dev sets only cover $80\%$ of the candidate natural misspellings. Adding character-level noise to the training data essentially increases the variety of the misspelling patterns, which makes the model more robust to unseen misspelling patterns. + +# 4 Related Work and Background + +Many approaches have been proposed for spelling correction (Formiga and Fonollosa, 2012; Kukich, 1992; Whitelaw et al., 2009; Zhang et al., 2006; Flor, 2012; Carlson and Fette, 2007; Flor and Futagi, 2012), such as edit-distance based approaches (Damerau, 1964; Levenshtein, 1966; Bard, 2007; Kukich, 1992; Brill and Moore, 2000; De Amorim and Zampieri, 2013; Pande, 2017), approaches based on statistical machine translation (Chiu et al., 2013; Hasan et al., 2015; Liu et al., 2013), and spelling correction for user search queries (Cucerzan and Brill, 2004; Gao et al., 2010). Most of them do not use contextual information, and some use simple contextual features (Whitelaw et al., 2009; Flor, 2012; Carlson and Fette, 2007; Flor and Futagi, 2012). + +In recent years, there have been some attempts to develop better spelling correction algorithms based on neural nets (Etoori et al., 2018). Similar to our baselines ScRNN (Sakaguchi et al., 2017) and MUDE (Wang et al., 2019), Li et al.
(2018) proposed a nested RNN to hierarchically encode characters into word representations, then correct each word using a nested GRU (Cho et al., 2014). However, these previous works either train models only on natural misspellings (Sakaguchi et al., 2017) or only on synthetic misspellings (Wang et al., 2019), and focus solely on denoising the input texts from an orthographic perspective without leveraging the retained semantics of the noisy input. + +On the other hand, Tal Weiss proposed Deep Spelling (Weiss), which uses the sequence-to-sequence architecture (Sutskever et al., 2014; Bahdanau et al., 2014) to generate corrected sentences. Note that Deep Spelling is essentially not a spelling corrector, since spelling correction must focus only on the misspelled words, not on transforming whole sentences. For similar reasons, spelling correction is also different from grammatical error correction (GEC) (Zhang and Wang, 2014; Junczys-Dowmunt et al., 2018). + +As background, pre-trained neural LMs (Peters et al., 2018; Devlin et al., 2018; Yang et al., 2019; Radford et al., 2019; Sun et al., 2019) trained on large corpora with various pre-training tasks have recently achieved enormous success on various benchmarks. These LMs capture the probability of a word or a sentence given its context, which plays a crucial role in correcting real-word misspellings. However, all of the LMs mentioned are based on subword embeddings, such as WordPiece (Peters et al., 2018) or Byte Pair Encoding (Gage, 1994), to avoid OOV words. + +# 5 Conclusion + +We leverage novel approaches to combine spelling and context information for stand-alone spelling correction, and achieve state-of-the-art performance. Our experiments give insights on how to build a strong stand-alone spelling corrector: (1) combine both spelling and context information, (2) leverage a pre-trained LM, and (3) use synthetic character-level noise. + +# References + +Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014.
Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. +Gregory V Bard. 2007. Spelling-error tolerant, order-independent passphrases via the Damerau-Levenshtein string-edit distance metric. In Proceedings of the fifth Australasian symposium on ACSW frontiers - Volume 68, pages 117-124. Citeseer. +Yonatan Belinkov and Yonatan Bisk. 2017. Synthetic and natural noise both break neural machine translation. arXiv preprint arXiv:1711.02173. +Eric Brill and Robert C Moore. 2000. An improved error model for noisy channel spelling correction. In Proceedings of the 38th annual meeting on association for computational linguistics, pages 286-293. Association for Computational Linguistics. +Andrew Carlson and Ian Fette. 2007. Memory-based context-sensitive spelling correction at web scale. In Sixth International Conference on Machine Learning and Applications (ICMLA 2007), pages 166-171. IEEE. +Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillip Koehn, and Tony Robinson. 2013. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005. +Hsun-wen Chiu, Jian-cheng Wu, and Jason S Chang. 2013. Chinese spelling checker based on statistical machine translation. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, pages 49-53. +Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. +Silviu Cucerzan and Eric Brill. 2004. Spelling correction as an iterative process that exploits the collective knowledge of web users. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 293-300. +Fred J Damerau. 1964. A technique for computer detection and correction of spelling errors.
Communications of the ACM, 7(3):171-176. +Renato Cordeiro De Amorim and Marcos Zampieri. 2013. Effective spell checking methods using clustering algorithms. In Proceedings of the International Conference on Recent Advances in Natural Language Processing RANLP 2013, pages 172-178. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. +Pravallika Etoori, Manoj Chinnakotla, and Radhika Mamidi. 2018. Automatic spelling correction for resource-scarce languages using deep learning. In Proceedings of ACL 2018, Student Research Workshop, pages 146-152. +Michael Flor. 2012. Four types of context for automatic spelling correction. TAL, 53(3):61-99. + +Michael Flor and Yoko Futagi. 2012. On using context for automatic correction of non-word misspellings in student essays. In Proceedings of the seventh workshop on building educational applications Using NLP, pages 105-115. Association for Computational Linguistics. +Lluis Formiga and José AR Fonollosa. 2012. Dealing with input noise in statistical machine translation. In Proceedings of COLING 2012: Posters, pages 319-328. +Philip Gage. 1994. A new algorithm for data compression. C Users Journal, 12(2):23-38. +Jianfeng Gao, Xiaolong Li, Daniel Micol, Chris Quirk, and Xu Sun. 2010. A large scale ranker-based system for search query spelling correction. In Proceedings of the 23rd international conference on computational linguistics, pages 358-366. Association for Computational Linguistics. +Sasa Hasan, Carmen Heger, and Saab Mansour. 2015. Spelling correction of user search queries through statistical machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 451-460. +Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780. 
+Marcin Junczys-Dowmunt, Roman Grundkiewicz, Shubha Guha, and Kenneth Heafield. 2018. Approaching neural grammatical error correction as a low-resource machine translation task. arXiv preprint arXiv:1804.05940. +Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. +Ralf Klabunde. 2002. Daniel Jurafsky/James H. Martin, Speech and Language Processing. Zeitschrift für Sprachwissenschaft, 21(1):134-135. +Karen Kukich. 1992. Techniques for automatically correcting words in text. ACM Computing Surveys (CSUR), 24(4):377-439. +Vladimir I Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. In Soviet physics doklady, volume 10, pages 707-710. +Hao Li, Yang Wang, Xinyu Liu, Zhichao Sheng, and Si Wei. 2018. Spelling error correction using a nested rnn model and pseudo training data. arXiv preprint arXiv:1811.00238. +Xiaodong Liu, Kevin Cheng, Yanyan Luo, Kevin Duh, and Yuji Matsumoto. 2013. A hybrid Chinese spelling correction using language model and statistical machine translation with reranking. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, pages 54-58. + +Eric Mays, Fred J Damerau, and Robert L Mercer. 1991. Context based spelling correction. Information Processing & Management, 27(5):517-522. +Roger Mitton. 1985. Corpora of misspellings for download. +Harshit Pande. 2017. Effective search space reduction for spell correction using character neural embeddings. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 170-174. +Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL. +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
OpenAI Blog, 1(8):9. +Graham Ernest Rawlinson. 1976. The significance of letter position in word recognition. Ph.D. thesis, University of Nottingham. +Keisuke Sakaguchi, Kevin Duh, Matt Post, and Benjamin Van Durme. 2017. Robsut wrod reocginiton via semi-character recurrent neural network. In Thirty-First AAAI Conference on Artificial Intelligence. +Erik F Sang and Jorn Veenstra. 1999. Representing text chunks. In Proceedings of the ninth conference on European chapter of the Association for Computational Linguistics, pages 173-179. Association for Computational Linguistics. +Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2019. Ernie 2.0: A continual pre-training framework for language understanding. arXiv preprint arXiv:1907.12412. +Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104-3112. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008. +Zhiwei Wang, Hui Liu, Jiliang Tang, Songfan Yang, Gale Yan Huang, and Zitao Liu. 2019. Learning multi-level dependencies for robust word recognition. arXiv preprint arXiv:1911.09789. +Tal Weiss. Deep spelling: Rethinking spelling correction in the 21st century. + +Casey Whitelaw, Ben Hutchinson, Grace Y Chung, and Gerard Ellis. 2009. Using the web for language independent spellchecking and autocorrection. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2-Volume 2, pages 890-899. Association for Computational Linguistics. +Amber Wilcox-O'Hearn, Graeme Hirst, and Alexander Budanitsky. 2008. Real-word spelling correction with trigrams: A reconsideration of the mays, damerau, and mercer model. 
In International conference on intelligent text processing and computational linguistics, pages 605-616. Springer. +Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. +Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems, pages 5754-5764. +Longkai Zhang and Houfeng Wang. 2014. A unified framework for grammar error correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 96-102. +Yang Zhang, Pilian He, Wei Xiang, and Mu Li. 2006. Discriminative reranking for spelling correction. In Proceedings of the 20th Pacific Asia Conference on Language, Information and Computation, pages 64-71. + +# A Appendices + +# A.1 Dataset Details + +We keep the most frequent words in the 1-Billion-Word-Language-Model-Benchmark dataset (Chelba et al., 2013) as our word vocabulary $\Psi_w$ , and all characters in $\Psi_w$ to form our character vocabulary $\Psi_c$ . After deleting sentences containing OOV words, we randomly divide them into three datasets $S_{train}$ , $S_{dev}$ and $S_{test}$ . We merge the two word-level misspelling lists (Mitton, 1985; Belinkov and Bisk, 2017) to get a misspelling list $\Omega$ . We randomly choose $80\%$ of all misspellings in $\Omega$ to form a known-misspelling-list, $\hat{\Omega}$ . + +To strengthen the robustness of the model to various noisy spellings, we also utilize the methods in Belinkov and Bisk (2017), namely, swap, middle random, fully random and keyboard type, to generate character-level synthetic misspellings. 
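As an illustration, two of these noise types (swap and keyboard type) can be sketched as follows. This is a simplified demonstration; the keyboard-neighbour map below is a small made-up subset, not the one used by Belinkov and Bisk (2017):

```python
import random

# Tiny illustrative keyboard-neighbour map (an assumption, not from the paper).
KEY_NEIGHBORS = {"a": "qwsz", "e": "wsdr", "o": "iklp", "s": "awedxz"}

def swap_noise(word, rng):
    """'swap': exchange two adjacent internal characters of the word."""
    if len(word) < 4:
        return word
    i = rng.randrange(1, len(word) - 2)  # keep first and last characters fixed
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def keyboard_noise(word, rng):
    """'keyboard type': replace one character with a keyboard neighbour."""
    positions = [i for i, c in enumerate(word) if c in KEY_NEIGHBORS]
    if not positions:
        return word
    i = rng.choice(positions)
    return word[:i] + rng.choice(KEY_NEIGHBORS[word[i]]) + word[i + 1:]

rng = random.Random(0)
print(swap_noise("spelling", rng))
print(keyboard_noise("correct", rng))
```

Applying such perturbations to clean tokens yields non-word misspellings that never appear in the natural misspelling list, which is what makes the augmented training set more varied.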
To encourage the model to learn contextual information, we add an additional method, random generate, to generate arbitrary character sequences as misspellings. + +While replacing gold words with misspellings, for a sentence with $n$ words, the number of replaced words is $m = \max (\lfloor \alpha n\rfloor ,1)$ , where $\alpha = \min (|\mathcal{N}(0,0.2)|,1.0)$ and $\mathcal{N}$ represents a Gaussian distribution. + +The dev set is created with misspellings from the sampled list $\hat{\Omega}$ , and the test set is created with misspellings from the full list $\Omega$ . We compare two training sets: the first contains only natural misspellings from $\hat{\Omega}$ , while the second contains natural as well as synthetic misspellings, denoted as $+\text{random char}$ in Section 3.2. We always use the same dev set and test set that only contain natural misspellings for comparison. + +Table 4 shows the parameters of our stand-alone spelling correction dataset. We will release the dataset and code after this paper is published. + +# A.2 Implementation Details + +We use PaddlePaddle ${}^{2}$ for the network implementation and keep the same configuration for the Subword encoders as ERNIE 2.0 (Sun et al., 2019). We tune the models by grid search on the dev set according to the ${F}_{0.5}$ score. The detailed hyper-parameters are shown in Table 5. In addition, we use the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 5e-5 as well as linear decay. We used
| Parameter Name | Value |
|---|---|
| $\lvert\Psi_w\rvert$ | 50000 |
| $\lvert\Psi_c\rvert$ | 130 |
| max_sent_len | 200 |
| max_word_len | 20 |
| $\lvert S_1\rvert$ | 17971548 |
| $\lvert S_2\rvert$ | 5985 |
| $\lvert S_3\rvert$ | 5862 |
+ +Table 4: Parameters of our stand-alone spelling correction dataset. + +
| Parameter Name | Word | Subword | Char |
|---|---|---|---|
| max seq length | 256 | 256 | 20 |
| hidden size | 512 | 768 | 256 |
| # hidden layers | 6 | 12 | 4 |
| # attention heads | 8 | 12 | 8 |
+ +Table 5: Hyper-parameters of word encoders, Subword(WordPiece (Wu et al., 2016)) encoders and character encoders. + +10 GeForce GTX 1080 Ti or RTX 2080Ti to train each model until convergence, which takes a few days. \ No newline at end of file diff --git a/contextawarestandaloneneuralspellingcorrection/images.zip b/contextawarestandaloneneuralspellingcorrection/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..95d5ef246cc5df05e3fbb973fb4d972921327199 --- /dev/null +++ b/contextawarestandaloneneuralspellingcorrection/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f21a4cc0e4b53bc883282cdcc586b0138531492576287fc412245edf9812d26e +size 379314 diff --git a/contextawarestandaloneneuralspellingcorrection/layout.json b/contextawarestandaloneneuralspellingcorrection/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..46ca87d8d155392ca2ee992dc425b4a0773abd8b --- /dev/null +++ b/contextawarestandaloneneuralspellingcorrection/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c74557f5c33f5a5e3f0587918abce4eae55ec4091da16163299fd10860d128d +size 251805 diff --git a/contextualmodulationforrelationlevelmetaphoridentification/7447c641-fc3f-4b06-932a-b07c5969ca45_content_list.json b/contextualmodulationforrelationlevelmetaphoridentification/7447c641-fc3f-4b06-932a-b07c5969ca45_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..2c7066964c58784aca8e7965fea701e573e7a562 --- /dev/null +++ b/contextualmodulationforrelationlevelmetaphoridentification/7447c641-fc3f-4b06-932a-b07c5969ca45_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e2d06635d979beca2394b277ece9bcd92e96d05677e50193eff3199328d4ab7f +size 105231 diff --git a/contextualmodulationforrelationlevelmetaphoridentification/7447c641-fc3f-4b06-932a-b07c5969ca45_model.json 
b/contextualmodulationforrelationlevelmetaphoridentification/7447c641-fc3f-4b06-932a-b07c5969ca45_model.json new file mode 100644 index 0000000000000000000000000000000000000000..f68d14b9f50c731d3e2eddcbd834694189ec1c14 --- /dev/null +++ b/contextualmodulationforrelationlevelmetaphoridentification/7447c641-fc3f-4b06-932a-b07c5969ca45_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e20a5ea7388238e3aae4ae9dd2805d6c087310d058c032c42155c3ce456e27a7 +size 127835 diff --git a/contextualmodulationforrelationlevelmetaphoridentification/7447c641-fc3f-4b06-932a-b07c5969ca45_origin.pdf b/contextualmodulationforrelationlevelmetaphoridentification/7447c641-fc3f-4b06-932a-b07c5969ca45_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..eb75f7343ae0e21df2e628e756a39c1629f226f8 --- /dev/null +++ b/contextualmodulationforrelationlevelmetaphoridentification/7447c641-fc3f-4b06-932a-b07c5969ca45_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a552adbf5a67dc3242278efaa76af76e6479aeaf787c13105e9139dee8c3e97b +size 1229046 diff --git a/contextualmodulationforrelationlevelmetaphoridentification/full.md b/contextualmodulationforrelationlevelmetaphoridentification/full.md new file mode 100644 index 0000000000000000000000000000000000000000..8635ca0dafed67b2769a7183c275f9856fed99c2 --- /dev/null +++ b/contextualmodulationforrelationlevelmetaphoridentification/full.md @@ -0,0 +1,326 @@ +# Contextual Modulation for Relation-Level Metaphor Identification + +Omnia Zayed, John P. McCrae, Paul Buitelaar + +Insight SFI Research Centre for Data Analytics + +Data Science Institute + +National University of Ireland Galway + +IDA Business Park, Lower Dangan, Galway, Ireland + +{omnia.zayed, john.mccrae, paul.buitelaar}@insight-centre.org + +# Abstract + +Identifying metaphors in text is very challenging and requires comprehending the underlying comparison. 
The automation of this cognitive process has gained wide attention lately. However, the majority of existing approaches concentrate on word-level identification by treating the task as either single-word classification or sequential labelling without explicitly modelling the interaction between the metaphor components. On the other hand, while existing relation-level approaches implicitly model this interaction, they ignore the context where the metaphor occurs. In this work, we address these limitations by introducing a novel architecture for identifying relation-level metaphoric expressions of certain grammatical relations based on contextual modulation. In a methodology inspired by works in visual reasoning, our approach is based on conditioning the neural network computation on the deep contextualised features of the candidate expressions using feature-wise linear modulation. We demonstrate that the proposed architecture achieves state-of-the-art results on benchmark datasets. The proposed methodology is generic and could be applied to other textual classification problems that benefit from contextual interaction. + +# 1 Introduction + +Despite its fuzziness, metaphor is a fundamental feature of language that defines the relation between how we understand things and how we express them (Cameron and Low, 1999). A metaphor is a figurative device containing an implied mapping between two conceptual domains. These domains are represented by its two main components, namely the tenor (target domain) and the vehicle (source domain) (End, 1986). According to the conceptual metaphor theory (CMT) of Lakoff and Johnson (1980), which we adopt in this work, a + +concept such as "liquids" (source domain/vehicle) can be borrowed to express another such as "emotions" (target domain/tenor) by exploiting single or common properties. 
Therefore, the conceptual metaphor "Emotions are Liquids" can be manifested through the use of linguistic metaphors such as "pure love", "stir excitement" and "contain your anger". The interaction between the target and the source concepts of the expression is important to fully comprehend its metaphoricity. + +Over the last couple of years, there has been an increasing interest towards metaphor processing and its applications, either as part of natural language processing (NLP) tasks such as machine translation (Koglin and Cunha, 2019), text simplification (Wolska and Clausen, 2017; Clausen and Nastase, 2019) and sentiment analysis (Rentoumi et al., 2012) or in more general discourse analysis use cases such as in analysing political discourse (Charteris-Black, 2011), financial reporting (Ho and Cheng, 2016) and health communication (Semino et al., 2018). + +Metaphor processing comprises several tasks including identification, interpretation and cross-domain mappings. Metaphor identification is the most studied among these tasks. It is concerned with detecting the metaphoric words or expressions in the input text and could be done either on the sentence, relation or word levels. The difference between these levels of processing is extensively studied in (Zayed et al., 2020). Identifying metaphors on the word-level could be treated as either sequence labelling by deciding the metaphoricity of each word in a sentence given the context or single-word classification by deciding the metaphoricity of a targeted word. On the other hand, relation-level identification looks at specific grammatical relations such as the dobj or amod dependencies and checks the metaphoricity of the verb or the adjective given its association with the noun. 
In relation-level identification, both the source and target domain words (the tenor and vehicle) are classified as either a metaphoric or a literal expression, whereas in word-level identification only the source domain words (vehicle) are labelled. These levels of analysis (paradigms) are already established in the literature and adopted by previous research in this area, as will be explained in Section 2. The majority of existing approaches, as well as the available datasets, pertaining to metaphor processing focus on the metaphorical usage of verbs and adjectives either on the word or relation levels. This is because these syntactic types exhibit metaphoricity more frequently than others according to corpus-based analysis (Cameron, 2003; Shutova and Teufel, 2010). + +Although the main focus of both relation-level and word-level metaphor identification is discerning the metaphoricity of the vehicle (source domain words), the interaction between the metaphor components is less explicit in word-level analysis, whether the task is treated as sequence labelling or single-word classification. Relation-level analysis could be viewed as a deeper level of analysis that captures information not captured on the word-level by modelling the influence of the tenor (e.g. noun) on the vehicle (e.g. verb/adjective). Some downstream tasks, such as metaphor interpretation and cross-domain mapping, would benefit from having such information (i.e. explicitly marked relations). Moreover, employing the wider context around the expression is essential to improve the identification process. + +This work focuses on relation-level metaphor identification represented by verb-noun and adjective-noun grammar relations. We propose a novel approach for context-based textual classification that utilises affine transformations.
In order to integrate the interaction of the metaphor components in the identification process, we utilise affine transformation in a novel way to condition the neural network computation on the contextualised features of the given expression. The idea of affine transformations has been used in NLP-related tasks such as visual question-answering (de Vries et al., 2017), dependency parsing (Dozat and Manning, 2017), semantic role labelling (Cai et al., 2018), coreference resolution (Zhang et al., 2018), visual reasoning (Perez et al., 2018) and lexicon features integration (Margatina et al., 2019). + +Inspired by the works on visual reasoning, we use the candidate expression of certain grammatical relations, represented by deep contextualised features, as an auxiliary input to modulate our computational model. Affine transformations can be utilised to process one source of information in the context of another. In our case, we want to integrate: 1) the deep contextualised features of the candidate expression (represented by ELMo sentence embeddings) with 2) the syntactic/semantic features of a given sentence. In this task, affine transformations have a similar role to attention but with more parameters, which allows the model to better exploit context. Therefore, they could be regarded as a more sophisticated form of attention. Whereas the current "straightforward" attention models are overly simplistic, our model prioritises the contextual information of the candidate to discern its metaphoricity in a given sentence. + +Our proposed model consists of an affine transform coefficients generator that captures the meaning of the candidate to be classified, and a neural network that encodes the full text in which the candidate needs to be classified. We demonstrate that our model significantly outperforms the state-of-the-art approaches on existing relation-level benchmark datasets.
The unique characteristics of tweets and the availability of Twitter data motivated us to identify metaphors in such content. Therefore, we evaluate our proposed model on a newly introduced dataset of tweets (Zayed et al., 2019) annotated for relation-level metaphors. + +# 2 Related Work + +Over the last decades, the focus of computational metaphor identification has shifted from rule-based (Fass, 1991) and knowledge-based approaches (Krishnakumaran and Zhu, 2007; Wilks et al., 2013) to statistical and machine learning approaches including supervised (Gedigian et al., 2006; Turney et al., 2011; Dunn, 2013a,b; Tsvetkov et al., 2013; Hovy et al., 2013; Mohler et al., 2013; Klebanov et al., 2014; Bracewell et al., 2014; Jang et al., 2015; Gargett and Barnden, 2015; Rai et al., 2016; Bulat et al., 2017; Köper and Schulte im Walde, 2017), semi-supervised (Birke and Sarkar, 2006; Shutova et al., 2010; Zayed et al., 2018) and unsupervised methods (Shutova and Sun, 2013; Heintz et al., 2013; Strzalkowski et al., 2013). These approaches employed a variety of features to design their models. With the advances in neural networks, the focus started to shift towards employing more sophisticated models to identify metaphors. This section focuses on current research that employs neural models for metaphor identification on both word and relation levels. + +Word-Level Processing: Do Dinh and Gurevych (2016) were the first to utilise a neural architecture to identify metaphors. They approached the problem as sequence labelling where a traditional fully-connected feed-forward neural network is trained using pre-trained word embeddings. The authors highlighted the limitation of this approach when dealing with short and noisy conversational texts.
As part of the NAACL 2018 Metaphor Shared Task (Leong et al., 2018), many researchers proposed neural models that mainly employ LSTMs (Hochreiter and Schmidhuber, 1997) with pre-trained word embeddings to identify metaphors on the word-level. The best performing systems are: THU NGN (Wu et al., 2018), OCOTA (Bizzoni and Ghanimifard, 2018) and bot.zen (Stemle and Onysko, 2018). Gao et al. (2018) were the first to employ the deep contextualised word representation ELMo (Peters et al., 2018), combined with pre-trained GloVe (Pennington et al., 2014) embeddings to train bidirectional LSTM-based models. The authors introduced a sequence labelling model and a single-word classification model for verbs. They showed that incorporating the context-dependent representation of ELMo with context-independent word embeddings improved metaphor identification. Mu et al. (2019) proposed a system that utilises a gradient boosting decision tree classifier. Document embeddings were employed in an attempt to exploit wider context to improve metaphor detection in addition to other word representations including GloVe, ELMo and skip-thought (Kiros et al., 2015). Mao et al. (2018, 2019) explored the idea of selectional preferences violation (Wilks, 1978) in a neural architecture to identify metaphoric words. Mao's proposed approaches emphasised the importance of the context to identify metaphoricity by employing context-dependent and context-independent word embeddings. Mao et al. (2019) also proposed employing multi-head attention to compare the targeted word representation with its context. An interesting approach was introduced by Dankers et al. (2019) to model the interplay between metaphor identification and emotion regression. The authors introduced multiple multi-task learning techniques that employ hard and soft parameter sharing methods to optimise LSTM-based and BERT-based models. + +Relation-Level Processing: Shutova et al.
(2016) focused on identifying the metaphoricity of adjective/verb-noun pairs. This work employed multimodal embeddings of visual and linguistic features. Their model employs the cosine similarity of the candidate expression components based on word embeddings to classify metaphors using an optimised similarity threshold. Rei et al. (2017) introduced a supervised similarity network to detect adjective/verb-noun metaphoric expressions. Their system utilises word gating, vector representation mapping and a weighted similarity function. Pre-trained word embeddings and attribute-based embeddings (Bulat et al., 2017) were employed as features. This work explicitly models the interaction between the metaphor components. Gating is used to modify the vector of the verb/adjective based on the noun; however, the surrounding context is ignored because only the candidates are fed as input to the neural model, which might lead to losing important contextual information. + +Limitations: As discussed, the majority of previous works adopted the word-level paradigm to identify metaphors in text. The main distinction between the relation-level and the word-level paradigms is that the former makes the context more explicit than the latter by providing information not only about where the metaphor is in the sentence but also about how its components come together, hinting at the relation between the tenor and the vehicle. Stowe and Palmer (2018) showed that the type of syntactic construction a verb occurs in influences its metaphoricity. On the other hand, existing relation-level approaches (Tsvetkov et al., 2014; Shutova et al., 2016; Bulat et al., 2017; Rei et al., 2017) ignore the context where the expression appears and only classify a given syntactic construction as metaphorical or literal. Studies showed that the context surrounding a targeted expression is important to discern its metaphoricity and fully grasp its meaning (Mao et al., 2018; Mu et al., 2019).
Therefore, current relation-level approaches will only be able to capture commonly used conventionalised metaphors. In this work, we address these limitations by introducing a novel approach to textual classification which employs contextual information from both the targeted expression under study and the wider context surrounding it. + +# 3 Proposed Approach + +Feature-wise transformation techniques such as feature-wise linear modulation (FiLM) have been recently employed in many applications showing improved performance. They became popular in image processing applications such as image style transfer (Dumoulin et al., 2017); then they found their way into multi-modal tasks, specifically visual question-answering (de Vries et al., 2017; Perez et al., 2018). They have also been shown to be effective approaches for relational problems as mentioned in Section 1. The idea behind FiLM is to condition the computation carried out by a neural model on the information extracted from an auxiliary input in order to capture the relationship between multiple sources of information (Dumoulin et al., 2018). + +Our approach adopts Perez's (2018) formulation of FiLM on visual reasoning for metaphor identification. In visual reasoning, image-related questions are answered by conditioning the image-based neural network (visual pipeline) on the question context via a linguistic pipeline. In metaphor identification, the role of the image is played by the sentence containing a metaphoric candidate, and the auxiliary input is the linguistic interaction between the components of the candidate itself. This will allow us to condition the computation of a sequential neural model on the contextual information of the candidate and leverage the feature-wise interactions between the conditioning representation and the conditioned network.
To the best of our knowledge, we are the first to propose such contextual modulation for textual classification in general and for metaphor identification specifically. + +Our proposed architecture consists of a contextual modulation pipeline and a metaphor identification linguistic pipeline as shown in Figure 1. The input to the contextual modulator is the deep contextualised representation of the candidate expression under study (which we will refer to as the targeted expression $^1$ ) to capture the interaction between its components. The linguistic pipeline employs an LSTM encoder which produces a contextual representation of the provided sentence where the targeted expression appeared. The model is trained end-to-end to identify relation-level metaphoric expressions focusing on verb-noun and adjective-noun grammatical relations. Our model takes as input a sentence (or a tweet) and a targeted expression of a certain syntactic construction and identifies whether the candidate in question is used metaphorically or literally by going through the following steps: + +Condition: In this step, the targeted expression is used as the auxiliary input to produce a conditioning representation. We first embed each candidate of verb-direct object pairs $^2$ ( $v, n$ ) using ELMo sentence embeddings to learn context-dependent aspects of word meanings $c_{vn}$ . We used the 1,024-dimensional ELMo embeddings pre-trained on the One Billion Word benchmark corpus (Chelba et al., 2014). The sentence embeddings of the targeted expression are then prepared by implementing an embeddings layer that loads these pre-trained ELMo embeddings from the TensorFlow Hub $^3$ . The layer takes in the raw text of the targeted expression and outputs a fixed mean-pooled vector representation of the input as the contextualised representation. This representation is then used as an input to the main component of this step, namely a contextual modulator.
The contextual modulator consists of a fully-connected feed-forward neural network (FFNN) that produces the conditioning parameters (i.e. the shifting and scaling coefficients) that will later modulate the linguistic pipeline computations. Given that $c_{vn}$ is the conditioning input, the contextual modulator outputs $\gamma$ and $\beta$ , the context-dependent scaling and shifting vectors, as follows: + +$$ +\begin{array}{l} \gamma(c_{vn}) = W_{\gamma} c_{vn} + b_{\gamma}, \tag{1} \\ \beta(c_{vn}) = W_{\beta} c_{vn} + b_{\beta} \end{array} +$$ + +where $W_{\gamma}, W_{\beta}, b_{\gamma}, b_{\beta}$ are learnable parameters. + +Embed: Given a labelled dataset of sentences, the model begins by embedding the tokenised sentence $S$ of words $w_{1},w_{2},\ldots ,w_{n}$ , where $n$ is the number of words in $S$ , into vector representations using GloVe embeddings. We used the uncased 200-dimensional GloVe embeddings pre-trained on $\sim 2$ billion tweets, with a vocabulary of 1.2 million words. + +Encode: The next step is to train a neural network with the obtained embeddings. Since context is important for identifying metaphoricity, a sentence encoder is a sensible choice. We use an LSTM sequence model to obtain a contextual representation which summarises the syntactic and semantic features of the whole sentence. The output of the LSTM is a sequence of hidden states $h_1, h_2, \dots, h_n$ , where $h_i$ is the hidden state at the $i^{th}$ time-step. + +![](images/bdfd1e9fc8c0809237d6da10eada77056dfa56950341942a68bf7d29de88a747.jpg) +Figure 1: The proposed framework for relation-level metaphor identification showing the contextual modulation in detail. The attention process is greyed out as we experimented with and without it.
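The conditioning and modulation above can be pictured numerically. The NumPy sketch below uses made-up small dimensions and random weights purely for illustration (the actual model uses 1,024-d ELMo vectors and 512 LSTM units, with all weights learned end-to-end in Keras); it generates the scaling and shifting vectors of Eq. 1 from a mean-pooled candidate representation and applies them feature-wise to the encoder's hidden states:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only; the paper uses 1,024-d ELMo inputs and an LSTM with 512 units.
d_cond, d_hid, n_words = 8, 6, 5

# Mean-pooled contextualised representation c_vn of the targeted expression,
# standing in for the ELMo sentence embedding of a pair such as ("stir", "excitement").
token_vectors = rng.normal(size=(2, d_cond))
c_vn = token_vectors.mean(axis=0)

# Contextual modulator (Eq. 1): one linear map each for the scaling and shifting
# vectors. W_gamma, W_beta, b_gamma, b_beta are learnable in the real model.
W_gamma = rng.normal(size=(d_hid, d_cond))
W_beta = rng.normal(size=(d_hid, d_cond))
b_gamma = np.zeros(d_hid)
b_beta = np.zeros(d_hid)
gamma = W_gamma @ c_vn + b_gamma
beta = W_beta @ c_vn + b_beta

# Hidden states h_1..h_n of the sentence encoder (one row per word), then the
# feature-wise modulation: the same gamma/beta scale and shift every time-step.
h = rng.normal(size=(n_words, d_hid))
f = gamma * h + beta  # broadcasting applies gamma/beta to each word's vector

assert f.shape == (n_words, d_hid)
```

Note how a single pair of vectors modulates every position of the sequence, which is what distinguishes this conditioning from per-position attention weights.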
+ +Feature-wise Transformation: In this step, an affine transformation layer, hereafter AffineTrans layer, applies a feature-wise linear modulation to its inputs, which are: 1) the hidden states from the encoding step; 2) the scaling and shifting parameters from the conditioning step. By feature-wise, we mean that scaling and shifting are applied to each encoded vector for each word in the sentence. + +$$ +f(h_{i}, c_{vn}) = \gamma(c_{vn}) \odot h_{i} + \beta(c_{vn}) \tag{2} +$$ + +Attend: Recently, attention mechanisms have become useful to select the most important elements in a given representation while minimising information loss. In this work, we employ an attention layer based on the mechanism presented in (Lin et al., 2017). It takes the output from the AffineTrans layer as an input, in addition to a randomly initialised weight matrix $W$ , a bias vector $b$ and a learnable context vector $u$ , to produce the attended output as follows: + +$$ +e_{i} = \tanh(W f_{i} + b) \tag{3} +$$ + +$$ +\alpha_{i} = \operatorname{softmax}(u e_{i}) \tag{4} +$$ + +$$ +r = \sum_{i = 1}^{n} \alpha_{i} f_{i} \tag{5} +$$ + +Our model is trained and evaluated with and without the attention mechanism in order to differentiate between the effect of the feature modulation and the attention on the model performance. + +Predict: The last step is to make the final prediction using the output from the previous step (the attended output when attention is used, or the AffineTrans layer output when it is skipped). We use a fully-connected feed-forward layer with a sigmoid activation that returns a single (binary) class label to identify whether the targeted expression is metaphoric or not. + +# 4 Datasets + +The choice of annotated dataset for training the model and evaluating its performance is determined by the level of metaphor identification. Given the distinction between the levels of analysis, approaches addressing the task on the word-level are not fairly comparable to relation-level approaches since each task addresses metaphor identification differently. Therefore, the tradition of previous work in this area is to compare approaches addressing the task on the same level against each other on level-specific annotated benchmark datasets (Zayed et al., 2020). + +Following prior work in this area and in order to compare the performance of our proposed approach with other relation-level metaphor identification approaches, we utilise available annotated datasets that support this level of processing. The existing datasets are either originally prepared to directly support relation-level processing, such as the TSV (Tsvetkov et al., 2014) dataset and the Tweets dataset by Zayed et al. (2019), or adapted from other word-level benchmark datasets to suit relation-level processing, such as the adaptation of the benchmark datasets TroFi (Birke and Sarkar, 2006) and VU Amsterdam metaphor corpus (VUAMC) (Steen et al., 2010) by Zayed et al. (2020) and the adaptation of the MOH (Mohammad et al., 2016) dataset by Shutova et al. (2016). Due to space limitation, we include in Appendix A: 1) examples of annotated instances from these datasets showing their format as: sentence, targeted expression and the provided label; 2) the statistics of these datasets including their size and percentage of metaphors. + +Relation-Level Datasets: These datasets focus on expressions of certain grammatical relations. Obtaining these relations could be done either automatically by employing a dependency parser or manually by highlighting targeted expressions in a specific corpus. Then, these expressions are manually annotated for metaphoricity given the surrounding context. There exist two benchmark datasets of this kind, namely the TSV dataset and the Zayed et al. (2019) Tweets dataset, hereafter the ZayTw dataset.
The former focuses on discerning the metaphoricity of adjective-noun expressions in sentences collected from the Web and Twitter, while the latter focuses on verb-direct object expressions in tweets. + +Adapted Word-Level Datasets: Annotated datasets that support word-level metaphor identification are not suitable to support relation-level processing due to the annotation difference (Shutova, 2015; Zayed et al., 2020). To overcome the limited availability of relation-level datasets, there has been a growing effort to enrich and extend benchmark datasets annotated on the word-level to suit relation-level metaphor identification. Although it is non-trivial and requires extra annotation effort, Shutova et al. (2016) and Zayed et al. (2020) introduced adapted versions of the MOH, TroFi and VUAMC datasets to train and evaluate models that identify metaphors on the relation-level. Since the MOH dataset was originally created to identify metaphoric verbs on the word-level, its adaptation by Shutova et al. (2016), also referred to as MOH-X in several papers, focused on extracting the verb-noun grammar relations using a dependency parser. The dataset is relatively small and contains short and simple sentences that are originally sampled from the example sentences of each verb in WordNet (Fellbaum, 1998). The TroFi dataset was designed to identify the metaphoricity of 50 selected verbs on the word-level from the 1987-1989 Wall Street Journal (WSJ) corpus. The VUAMC (Steen et al., 2010) is the largest corpus annotated for metaphors and has been employed extensively by models developed to identify metaphors on the word-level. However, models designed to support relation-level metaphor identification cannot use it in its current state. Therefore, previous research focusing on relation-level processing (Rei et al., 2017; Bulat et al., 2017; Shutova et al., 2016; Tsvetkov et al., 2014) did not train, evaluate or compare their approaches using it.
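The candidate extraction these adaptations rely on can be pictured with a toy filter over pre-parsed dependency triples. The token format below is hypothetical and purely illustrative (it is not the Stanford parser's actual output, and the real datasets add manual filtering and annotation on top):

```python
# Each token: (index, word, head_index, dependency_label); head_index -1 marks the root.
# Hand-written toy parse for "she stirred pure excitement".
parse = [
    (0, "she", 1, "nsubj"),
    (1, "stirred", -1, "root"),
    (2, "pure", 3, "amod"),
    (3, "excitement", 1, "dobj"),
]

def extract_candidates(parse):
    """Collect (relation, head_word, dependent_word) for dobj and amod relations."""
    words = {i: w for i, w, _, _ in parse}
    return [(dep, words[head], w)
            for i, w, head, dep in parse
            if dep in ("dobj", "amod") and head >= 0]

print(extract_candidates(parse))
# [('amod', 'excitement', 'pure'), ('dobj', 'stirred', 'excitement')]
```

Each extracted pair ("stir excitement", "pure excitement") would then be judged metaphoric or literal given its sentence.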
Recently, a subset of the VUAMC was adapted to suit relation-level analysis by focusing on the training and test splits provided by the NAACL metaphor shared task. This corpus subset as well as the TroFi dataset are adapted by Zayed et al. (2020) to suit identifying metaphoric expressions on the relation-level, focusing on verb-direct object grammar relations (i.e. dobj dependencies). The Stanford dependency parser was utilised to extract these relations, which were then filtered to ensure quality. + +# 5 Experiments + +# 5.1 Experimental Setup + +We employ a single-layer LSTM model with 512 hidden units. The Adadelta algorithm (Zeiler, 2012) is used for optimisation during the training phase and binary cross-entropy is used as a loss function to fine-tune the network. The reported results are obtained using a batch size of 256 instances for the ZayTw dataset and 128 instances for the other employed datasets. $L_{2}$ -regularisation weight of 0.01 is used to constrain the weights of the contextual modulator. In all experiments, we zero-pad the input sentences to the longest sentence length in the dataset. All the hyper-parameters were optimised on a randomly separated development set (validation set) by assessing the accuracy. We present here the best performing design choices based on experimental results, but we highlight some other attempted considerations in Appendix B. We implemented our models using Keras (Chollet et al., 2015) with the TensorFlow backend. We are making the source code and best models publicly available $^4$ . To ensure reproducibility, we include the sizes of the training, validation and test sets in Appendix B as well as the best validation accuracy obtained on each validation set. All the results presented in this paper are obtained after running the experiments five times with different random seeds and taking the average. + +In this work, we selected the following state-of-the-art models pertaining to relation-level metaphor identification for comparisons: the cross-lingual model by Tsvetkov et al. (2014), the multimodal system of linguistic and visual features by Shutova et al. (2016), the ATTR-EMBED model by Bulat et al. (2017) and the supervised similarity network (SSN) by Rei et al. (2017). We consider the SSN system as our baseline. For fair comparisons, we utilised the same data splits on the five employed benchmark datasets described in Section 4. + +# 5.2 Excluding AffineTrans + +We implemented a simple LSTM model to study the effect of employing affine transformations on the system performance. The input to this model is the tokenised sentence $S$ , which is embedded as a sequence of vector representations using GloVe. These sequences of word embeddings are then encoded using the LSTM layer to compute a contextual representation. Finally, this representation is fed to a feed-forward layer with a sigmoid activation to predict the class label. We used this model with and without the attention mechanism. + +# 5.3 Results + +We conduct several experiments to better understand our proposed model. First, we experiment with the simple model introduced in Section 5.2. + +Then, we train the proposed models on the benchmark datasets discussed in Section 4. We experiment with and without the attention layer to assess its effect on the model performance. Furthermore, we compare our model to the current work that addresses the task on the relation-level, in line with our peers in this area. Tables 1 and 2 show our model performance in terms of precision, recall, F1-score and accuracy. + +Since the source code of Rei's (2017) system is available online $^5$ , we trained and tested their model using the ZayTw dataset as well as the adapted VUAMC and TroFi datasets in an attempt to study the ability of their model to generalise when applied to a corpus of a different text genre with wider metaphoric coverage, including less common (conventionalised) metaphors. + +# 6 Discussion + +Overall performance. We analysed the model performance by inspecting the classified instances. We noticed that it did a good job identifying conventionalised metaphors as well as uncommon ones. Appendix A shows examples of instances classified by our system from the employed benchmark datasets. Our model achieves significantly better F1-score over the state-of-the-art SSN system (Rei et al., 2017) under the one-tailed paired $t$ -test (Yeh, 2000) at $p$ -value $< 0.01$ on three of the five employed benchmark datasets. Moreover, our architecture showed improved performance over the state-of-the-art approaches on the TSV and MOH datasets. It is worth mentioning that the size of their test sets is relatively small; therefore any change in a single annotated instance drastically affects the results. Moreover, the approach proposed by Tsvetkov et al. (2014) relies on hand-coded lexical features, which justifies its high F1-score. + +The effect of contextual modulation. When excluding the AffineTrans layer and only using the simple LSTM model, we observe a significant performance drop that shows the effectiveness of leveraging linear modulation. This layer adaptively influences the output of the model by conditioning the identification process on the contextual information of the targeted expression itself, which significantly improved the system performance, as observed from the results. Moreover, employing the contextualised representation of the targeted expression, through ELMo sentence embeddings,
| Model | ZayTw (test-set) Prec. | Recall | F1-score | Acc. | TSV (test-set) Prec. | Recall | F1-score | Acc. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Tsvetkov et al. (2014) | - | - | - | - | - | - | 0.85 | - |
| Shutova et al. (2016) (multimodal) | - | - | - | - | 0.67 | 0.96 | 0.79 | - |
| Bulat et al. (2017) (ATTR-EMBED) | - | - | - | - | 0.85 | 0.71 | 0.77 | - |
| Rei et al. (2017) (SSN) | 0.543 | 1.0 | 0.704 | 0.543 | 0.903 | 0.738 | 0.811 | 0.829 |
| Simple LSTM | 0.625 | 0.758 | 0.685 | 0.621 | 0.690 | 0.58 | 0.630 | 0.66 |
| Simple LSTM (+Attend) | 0.614 | 0.866 | 0.718 | 0.631 | 0.655 | 0.55 | 0.598 | 0.63 |
| Our AffineTrans | 0.804 | 0.769 | 0.786* | 0.773 | 0.869 | 0.80 | 0.834 | 0.84 |
| Our AffineTrans (+Attend) | 0.758 | 0.812 | 0.784* | 0.757 | 0.875 | 0.77 | 0.819 | 0.83 |
+ +Table 1: Our proposed architecture performance compared to the state-of-the-art approaches on the benchmark datasets ZayTw and TSV. *Statistically significant (p-value<0.01) compared to the SSN system (Rei et al., 2017). + +
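The asterisks in the results tables mark significance under a one-tailed paired t-test. A minimal sketch of the underlying test statistic is shown below, using hypothetical per-run F1 scores (the real comparison uses the paper's five runs; the p-value is then read from the t distribution with n-1 degrees of freedom, which is omitted here):

```python
import numpy as np

def paired_t_statistic(a, b):
    """One-tailed paired t-test statistic for H1: mean(a) > mean(b)."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    n = d.size
    return d.mean() / (d.std(ddof=1) / np.sqrt(n))

# Hypothetical per-run F1 scores over five random seeds, for illustration only.
ours = [0.79, 0.78, 0.80, 0.77, 0.79]
ssn = [0.70, 0.71, 0.70, 0.72, 0.69]
t = paired_t_statistic(ours, ssn)
```

Pairing by run matters here: the two systems are evaluated on the same splits, so differencing per run removes the shared variance before testing.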
| Model | adapted MOH (10-fold) Prec. | Recall | F1-score | Acc. | adapted TroFi (test-set) Prec. | Recall | F1-score | Acc. | adapted VUAMC (test-set) Prec. | Recall | F1-score | Acc. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Rei et al. (2017) (SSN) | 0.736 | 0.761 | 0.742 | 0.748 | 0.620 | 0.892 | 0.732 | 0.628 | 0.475 | 0.532 | 0.502 | 0.558 |
| Simple LSTM | 0.757 | 0.773 | 0.759 | 0.759 | 0.70 | 0.751 | 0.725 | 0.674 | 0.510 | 0.339 | 0.407 | 0.587 |
| Simple LSTM (+ Attend) | 0.746 | 0.782 | 0.757 | 0.752 | 0.759 | 0.853 | 0.803* | 0.761 | 0.575 | 0.423 | 0.487 | 0.627 |
| Our AffineTrans | 0.804 | 0.748 | 0.771 | 0.780 | 0.852 | 0.909 | 0.879* | 0.858 | 0.712 | 0.639 | 0.673* | 0.741 |
| Our AffineTrans (+ Attend) | 0.753 | 0.813 | 0.779 | 0.773 | 0.841 | 0.870 | 0.856* | 0.832 | 0.686 | 0.679 | 0.683* | 0.736 |
+ +Table 2: Our proposed architecture performance compared to the state-of-the-art approaches on the adapted benchmark datasets MOH, TroFi and VUAMC. *Statistically significant $(p$ -value $< 0.01)$ compared to the SSN system (Rei et al., 2017). We could not include Shutova et al. (2016) results on the MOH dataset since they used different test settings, thus their results will not be strictly comparable. + +was essential to explicitly capture the interaction between the verb/adjective and its accompanying noun. Then, the AffineTrans layer was able to modulate the network based on this interaction. + +The effect of attention. It is worth noting that the attention mechanism did not help much in our AffineTrans model because affine transformation itself could be seen as playing a similar role to attention, as discussed in Section 1. In attention mechanisms, important elements are given higher weight based on weight scaling, whereas in linear affine transformation, scaling is done in addition to shifting, which gives prior importance (probability) to particular features. We are planning to perform an in-depth comparison of using affine transformation versus attention in our future work. + +Error analysis. An error analysis is performed to determine the model flaws by analysing the predicted classifications. We examined the false positives and false negatives obtained by the best performing model, namely AffineTrans (without attention). Interestingly, the majority of false negatives are from the political tweets in the ZayTw dataset. Table 3 lists some examples of misclassified instances in the TSV and ZayTw datasets. Some instances could be argued as being correctly classified by the model. For instance, "spend capital" could be seen as a metaphor in that the noun is an abstract concept referring to actual money. Examples of misclassified instances from the other employed datasets are presented in Appendix A.
Interestingly, we noticed that the model was able to spot mistakenly annotated instances. Although the adapted VUAMC subset contains various expressions which should help the model perform better, we noticed annotation inconsistency in some of them. For example, the verb "choose" associated with the noun "science" is annotated once as metaphor and twice as literal in very similar contexts. This aligns well with the findings of Zayed et al. (2020), who questioned the annotation of around $5\%$ of the instances in this subset, mainly due to annotation inconsistency.

Analysis of some misclassified verbs. We noticed that the model sometimes got confused when identifying the metaphoricity of expressions whose verb is related to emotion and cognition, such as: "accept, believe, discuss, explain, experience, need, recognise, and want". Our model tends to classify them as not metaphors. We include different examples from the ZayTw dataset of the verbs "experience" and "explain" with different associated nouns, along with their gold and predicted classifications, in Appendix A. Our model's predictions seem reasonable given that the instances in the training set were labelled as not metaphors. It is
| | ZayTw Tweet | Prob. | TSV Sentence | Prob. |
|---|---|---|---|---|
| False Negative | hard to resist the feeling that remain is further [...] | 0.46 | You have a shiny goal in mind that is distracting you with its awesomeness. | 0.49 |
| | @abpi uk: need #euref final facts? read why if [...] | 0.08 | The first hours of a shaky ceasefire are not "the best of times". | 0.14 |
| | #ivoted with a black pen. do not trust pencils. [...] | 0.003 | The French bourgeoisie has rushed into a blind alley. | 0.00 |
| False Positive | [...] this guy would spend so much political capital trying to erase the [...] | 0.96 | I could hear the shrill voices of his sisters as they dash about their store helping customers. | 0.98 |
| | #pencilgate to justify vitriolic backlash if #remain wins [...] | 0.94 | [...] flavoring used in cheese, meat and fish to give it a smoky flavor could in fact be toxic. | 0.82 |
| | @anubhuti921 @prasannas it adds technology to worst of old police state practices, [...] | 0.76* | Usually an overly dry nose is a precursor to a bloody nose. | 0.64 |
Table 3: Misclassified examples by our AffineTrans model (without attention) from the ZayTw and TSV test sets. Sentences are truncated due to space limitations. *Our model was able to spot some mistakenly annotated instances.

not clear why the gold label for "explain this mess" is not a metaphor while it is metaphor for "explain implications"; similarly for the nouns "inspirations" and "emotions" with the verb "experience".

# 7 Conclusions

In this paper, we introduced a novel architecture to identify metaphors by utilising feature-wise affine transformation and deep contextual modulation. Our approach employs a contextual modulation pipeline to capture the interaction between the metaphor components. This interaction is then used as an auxiliary input to modulate a metaphor identification linguistic pipeline. We showed that such modulation allows the model to dynamically highlight the key contextual features needed to identify the metaphoricity of a given expression. We applied our approach to relation-level metaphor identification, classifying expressions of certain syntactic constructions for metaphoricity as they occur in context. We significantly outperform the state-of-the-art approaches for this level of analysis on benchmark datasets. Our experiments also show that our contextual modulation-based model can generalise well to identify the metaphoricity of unseen instances in different text types, including the noisy user-generated text of tweets. Our model was able to identify both conventionalised common metaphoric expressions as well as less common ones. To the best of our knowledge, this is the first attempt to computationally identify metaphors in tweets and the first approach to study the employment of feature-wise linear modulation on metaphor identification in general. The proposed methodology is generic and can be applied to a wide variety of text classification tasks, including sentiment analysis or term extraction.
# Acknowledgments

This work was supported by Science Foundation Ireland under grant number SFI/12/RC/2289_2 (Insight).

We would like to thank the anonymous reviewers of this paper for their helpful comments and feedback. Special thanks to the anonymous meta-reviewer for steering an effective and constructive discussion about this paper through an experienced, extensive and beneficial meta-review. Sincere thanks to Mennatullah Siam for the insightful discussions about the technical part of this paper.

# References

Julia Birke and Anoop Sarkar. 2006. A clustering approach for nearly unsupervised recognition of nonliteral language. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics, EACL '06, pages 329-336, Trento, Italy.
Yuri Bizzoni and Mehdi Ghanimifard. 2018. Bigrams and BiLSTMs: two neural networks for sequential metaphor detection. In Proceedings of the Workshop on Figurative Language Processing, pages 91-101, New Orleans, LA, USA.
David B. Bracewell, Marc T. Tomlinson, Michael Mohler, and Bryan Rink. 2014. A tiered approach to the recognition of metaphor. Computational Linguistics and Intelligent Text Processing, 8403:403-414.
Luana Bulat, Stephen Clark, and Ekaterina Shutova. 2017. Modelling metaphor with attribute-based semantics. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL '17, pages 523-528, Valencia, Spain.
Jiaxun Cai, Shexia He, Zuchao Li, and Hai Zhao. 2018. A full end-to-end semantic role labeler, syntactic-agnostic over syntactic-aware? In Proceedings of the 27th International Conference on Computational Linguistics, COLING '18, pages 2753-2765, Santa Fe, NM, USA. Association for Computational Linguistics.
Lynne Cameron. 2003. Metaphor in Educational Discourse. Advances in Applied Linguistics. Continuum, London, UK.
Lynne Cameron and Graham Low. 1999. Researching and Applying Metaphor. Cambridge Applied Linguistics. Cambridge University Press, Cambridge, UK.
Jonathan Charteris-Black. 2011. Metaphor in political discourse. In Politicians and Rhetoric: The Persuasive Power of Metaphor, pages 28-51. Palgrave Macmillan UK, London.
Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Philipp Koehn, and Tony Robinson. 2014. One billion word benchmark for measuring progress in statistical language modeling. In The 15th Annual Conference of the International Speech Communication Association, INTERSPEECH '14, pages 2635-2639, Singapore.
François Chollet et al. 2015. Keras.
Yulia Clausen and Vivi Nastase. 2019. Metaphors in text simplification: To change or not to change, that is the question. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 423-434, Florence, Italy.
Verna Dankers, Marek Rei, Martha Lewis, and Ekaterina Shutova. 2019. Modelling the interplay of metaphor and emotion through multitask learning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP '19, pages 2218-2229, Hong Kong, China. Association for Computational Linguistics.
Erik-Lan Do Dinh and Iryna Gurevych. 2016. Token-level metaphor detection using neural networks. In Proceedings of the 4th Workshop on Metaphor in NLP, pages 28-33, San Diego, CA, USA.
Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In Proceedings of the 5th International Conference on Learning Representations, ICLR '17, Toulon, France.
Vincent Dumoulin, Ethan Perez, Nathan Schucher, Florian Strub, Harm de Vries, Aaron Courville, and Yoshua Bengio. 2018. Feature-wise transformations. Distill. https://distill.pub/2018/feature-wise-transformations.
Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur. 2017. A learned representation for artistic style. In Proceedings of the 5th International Conference on Learning Representations, ICLR '17, Toulon, France.
Jonathan Dunn. 2013a. Evaluating the premises and results of four metaphor identification systems. In Proceedings of the 14th International Conference on Computational Linguistics and Intelligent Text Processing, volume 7816 of CICLing '13, pages 471-486, Samos, Greece. Springer Berlin Heidelberg.
Jonathan Dunn. 2013b. What metaphor identification systems can tell us about metaphor-in-language. In Proceedings of the First Workshop on Metaphor in NLP, pages 1-10, Atlanta, GA, USA. Association for Computational Linguistics.
Laurel J. End. 1986. Grounds for metaphor comprehension. Knowledge and Language, pages 327-345.
Dan Fass. 1991. met*: A method for discriminating metonymy and metaphor by computer. Computational Linguistics, 17(1):49-90.
Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press.
Ge Gao, Eunsol Choi, Yejin Choi, and Luke Zettlemoyer. 2018. Neural metaphor detection in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP '18, pages 1412-1424, Brussels, Belgium.
Andrew Gargett and John Barnden. 2015. Modeling the interaction between sensory and affective meanings for detecting metaphor. In Proceedings of the Third Workshop on Metaphor in NLP, pages 21-30, Denver, CO, USA. Association for Computational Linguistics.
Matt Gedigian, John Bryant, Srini Narayanan, and Branimir Ciric. 2006. Catching metaphors. In Proceedings of the 3rd Workshop on Scalable Natural Language Understanding, ScaNaLU '06, pages 41-48, New York City, NY, USA.
Ilana Heintz, Ryan Gabbard, Mahesh Srivastava, Dave Barner, Donald Black, Marjorie Friedman, and Ralph Weischedel. 2013. Automatic extraction of linguistic metaphors with LDA topic modeling. In Proceedings of the 1st Workshop on Metaphor in NLP, pages 58-66, Atlanta, GA, USA.
Janet Ho and Winnie Cheng. 2016. Metaphors in financial analysis reports: How are emotions expressed? English for Specific Purposes, 43:37-48.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Dirk Hovy, Shashank Srivastava, Sujay Kumar Jauhar, Mrinmaya Sachan, Kartik Goyal, Huiying Li, Whitney Sanders, and Eduard Hovy. 2013. Identifying metaphorical word use with tree kernels. In Proceedings of the 1st Workshop on Metaphor in NLP, pages 52-56, Atlanta, GA, USA.
Hyeju Jang, Seungwhan Moon, Yohan Jo, and Carolyn Rose. 2015. Metaphor detection in discourse. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL '15, pages 384-392, Prague, Czech Republic.
Ryan Kiros, Yukun Zhu, Ruslan R. Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems, NIPS '15, pages 3294-3302.
Beata Beigman Klebanov, Ben Leong, Michael Heilman, and Michael Flor. 2014. Different texts, same metaphors: Unigrams and beyond. In Proceedings of the Second Workshop on Metaphor in NLP, pages 11-17, Baltimore, MD, USA. Association for Computational Linguistics.
Arlene Koglin and Rossana Cunha. 2019. Investigating the post-editing effort associated with machine-translated metaphors: a process-driven analysis. The Journal of Specialised Translation, 31(01):38-59.
Maximilian Köper and Sabine Schulte im Walde. 2017. Improving verb metaphor detection by propagating abstractness to words, phrases and individual senses. In Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications, SENSE '17, pages 24-30, Valencia, Spain.
Saisuresh Krishnakumaran and Xiaojin Zhu. 2007. Hunting elusive metaphors using lexical resources. In Proceedings of the Workshop on Computational Approaches to Figurative Language, pages 13-20, Rochester, NY, USA.
George Lakoff and Mark Johnson. 1980. Metaphors We Live By. University of Chicago Press, Chicago, USA.
Chee Wee (Ben) Leong, Beata Beigman Klebanov, and Ekaterina Shutova. 2018. A report on the 2018 VUA metaphor detection shared task. In Proceedings of the Workshop on Figurative Language Processing, pages 56-66, New Orleans, LA, USA.
Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In Proceedings of the 5th International Conference on Learning Representations, ICLR '17, Toulon, France.
Rui Mao, Chenghua Lin, and Frank Guerin. 2018. Word embedding and WordNet based metaphor identification and interpretation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL '18, pages 1222-1231, Melbourne, Australia.
Rui Mao, Chenghua Lin, and Frank Guerin. 2019. End-to-end sequential metaphor identification inspired by linguistic theories. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL '19, pages 3888-3898, Florence, Italy.
Katerina Margatina, Christos Baziotis, and Alexandros Potamianos. 2019. Attention-based conditioning methods for external knowledge integration. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL '19, pages 3944-3951, Florence, Italy.
Saif M. Mohammad, Ekaterina Shutova, and Peter D. Turney. 2016. Metaphor as a medium for emotion: An empirical study. In Proceedings of the 5th Joint Conference on Lexical and Computational Semantics, *SEM '16, pages 23-33, Berlin, Germany.
Michael Mohler, David Bracewell, Marc Tomlinson, and David Hinote. 2013. Semantic signatures for example-based linguistic metaphor detection. In Proceedings of the 1st Workshop on Metaphor in NLP, pages 27-35, Atlanta, GA, USA.
Jesse Mu, Helen Yannakoudakis, and Ekaterina Shutova. 2019. Learning outside the box: Discourse-level features improve metaphor identification. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT '19, Minneapolis, MN, USA.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP '14, pages 1532-1543, Doha, Qatar.
Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron C. Courville. 2018. FiLM: Visual reasoning with a general conditioning layer. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, AAAI '18, New Orleans, LA, USA.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT '18, New Orleans, LA, USA.
Sunny Rai, Shampa Chakraverty, and Devendra K. Tayal. 2016. Supervised metaphor detection using conditional random fields. In Proceedings of the 4th Workshop on Metaphor in NLP, pages 18-27, San Diego, CA, USA.
Marek Rei, Luana Bulat, Douwe Kiela, and Ekaterina Shutova. 2017. Grasping the finer point: A supervised similarity network for metaphor detection. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP '17, pages 1537-1546, Copenhagen, Denmark.
Vassiliki Rentoumi, George A. Vouros, Vangelis Karkaletsis, and Amalia Moser. 2012. Investigating metaphorical language in sentiment analysis: A sense-to-sentiment perspective. ACM Transactions on Speech and Language Processing, 9(3):1-31.
Elena Semino, Zsofia Demjen, Andrew Hardie, Sheila Alison Payne, and Paul Edward Rayson. 2018. Metaphor, Cancer and the End of Life: A Corpus-based Study. Routledge, London, UK.
Ekaterina Shutova. 2015. Design and evaluation of metaphor processing systems. Computational Linguistics, 41(4):579-623.
Ekaterina Shutova, Douwe Kiela, and Jean Maillard. 2016. Black holes and white rabbits: Metaphor identification with visual features. In Proceedings of the 2016 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT '16, pages 160-170, San Diego, CA, USA.
Ekaterina Shutova and Lin Sun. 2013. Unsupervised metaphor identification using hierarchical graph factorization clustering. In Proceedings of the 2013 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT '13, pages 978-988, Atlanta, GA, USA.
Ekaterina Shutova, Lin Sun, and Anna Korhonen. 2010. Metaphor identification using verb and noun clustering. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10, pages 1002-1010, Beijing, China.
Ekaterina Shutova and Simone Teufel. 2010. Metaphor corpus annotated for source-target domain mappings. In Proceedings of the 7th International Conference on Language Resources and Evaluation, LREC '10, pages 255-261, Malta.
Gerard J. Steen, Aletta G. Dorst, J. Berenike Herrmann, Anna Kaal, Tina Krennmayr, and Trijntje Pasma. 2010. A Method for Linguistic Metaphor Identification: From MIP to MIPVU. Converging Evidence in Language and Communication Research. John Benjamins Publishing Company.
Egon Stemle and Alexander Onysko. 2018. Using language learner data for metaphor detection. In Proceedings of the Workshop on Figurative Language Processing, pages 133-138, New Orleans, LA, USA.
Kevin Stowe and Martha Palmer. 2018. Leveraging syntactic constructions for metaphor identification. In Proceedings of the Workshop on Figurative Language Processing, pages 17-26, New Orleans, LA, USA.
Tomek Strzalkowski, George Aaron Broadwell, Sarah Taylor, Laurie Feldman, Samira Shaikh, Ting Liu, Boris Yamrom, Kit Cho, Umit Boz, Ignacio Cases, and Kyle Elliot. 2013. Robust extraction of metaphor from novel data. In Proceedings of the 1st Workshop on Metaphor in NLP, pages 67-76, Atlanta, GA, USA.
Yulia Tsvetkov, Leonid Boytsov, Anatole Gershman, Eric Nyberg, and Chris Dyer. 2014. Metaphor detection with cross-lingual model transfer. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL '14, pages 248-258, Baltimore, MD, USA.
Yulia Tsvetkov, Elena Mukomel, and Anatole Gershman. 2013. Cross-lingual metaphor detection using common semantic features. In Proceedings of the 1st Workshop on Metaphor in NLP, pages 45-51, Atlanta, GA, USA.
Peter D. Turney, Yair Neuman, Dan Assaf, and Yohai Cohen. 2011. Literal and metaphorical sense identification through concrete and abstract context. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, EMNLP '11, pages 680-690, Edinburgh, Scotland, UK.
Harm de Vries, Florian Strub, Jérémie Mary, Hugo Larochelle, Olivier Pietquin, and Aaron Courville. 2017. Modulating early visual processing by language. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS '17, pages 6597-6607, Long Beach, CA, USA.
Yorick Wilks. 1978. Making preferences more active. Artificial Intelligence, 11(3):197-223.
Yorick Wilks, Adam Dalton, James Allen, and Lucian Galescu. 2013. Automatic metaphor detection using large-scale lexical resources and conventional metaphor extraction. In Proceedings of the 1st Workshop on Metaphor in NLP, pages 36-44, Atlanta, GA, USA.
Magdalena Wolska and Yulia Clausen. 2017. Simplifying metaphorical language for young readers: A corpus study on news text. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 313-318, Copenhagen, Denmark. Association for Computational Linguistics.
Chuhan Wu, Fangzhao Wu, Yubo Chen, Sixing Wu, Zhigang Yuan, and Yongfeng Huang. 2018. Neural metaphor detecting with CNN-LSTM model. In Proceedings of the Workshop on Figurative Language Processing, pages 110-114, New Orleans, LA, USA.
Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of the 18th Conference on Computational Linguistics, volume 2 of COLING '00, pages 947-953, Saarbruecken, Germany.
Omnia Zayed, John Philip McCrae, and Paul Buitelaar. 2018. Phrase-level metaphor identification using distributed representations of word meaning. In Proceedings of the Workshop on Figurative Language Processing, pages 81-90, New Orleans, LA, USA.
Omnia Zayed, John Philip McCrae, and Paul Buitelaar. 2019. Crowd-sourcing a high-quality dataset for metaphor identification in tweets. In Proceedings of the 2nd Conference on Language, Data and Knowledge, LDK '19, Leipzig, Germany.
Omnia Zayed, John Philip McCrae, and Paul Buitelaar. 2020. Adaptation of word-level benchmark datasets for relation-level metaphor identification. In Proceedings of the Second Workshop on Figurative Language Processing, Online.
Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701.
Rui Zhang, Cicero Nogueira dos Santos, Michihiro Yasunaga, Bing Xiang, and Dragomir Radev. 2018. Neural coreference resolution with deep biaffine attention by joint mention detection and mention clustering. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, volume 2 (short papers) of ACL '18, pages 102-107, Melbourne, Australia. Association for Computational Linguistics.
# A Datasets Statistics and Analysis

# A.1 Benchmark Datasets Statistics

Table 4 shows the statistics of the benchmark datasets employed in this work, namely the relation-level datasets $\mathrm{TSV^6}$ and ZayTw, in addition to the adapted TroFi $^7$, $\mathrm{VUAMC^8}$ and $\mathrm{MOH^9}$ datasets. Table 5 shows examples of annotated instances from each dataset.

# A.2 Datasets Analysis

Examples of correctly classified instances from the employed datasets: We show examples of instances correctly classified by our best performing model. Table 6 comprises examples from the relation-level datasets TSV and ZayTw. Table 7 lists examples from the adapted MOH and TroFi datasets as well as the adapted VUAMC.

Examples of misclassified instances by our model: Examples of misclassified instances from the TSV and ZayTw datasets as well as the adapted MOH, TroFi and VUAMC datasets are given in Table 8. Our model spotted some instances that are mistakenly annotated in the original datasets.

Misclassified verbs: Table 9 shows examples from the ZayTw dataset of the verbs "experience" and "explain" with different associated nouns, along with their gold and predicted classifications.

# B Design Considerations

# B.1 Experimental Settings

The word embeddings layer is initialised with the pre-trained GloVe embeddings. We used the uncased 200-dimensional GloVe embeddings pre-trained on $\sim 2$ billion tweets, which contain 1.2 million words. We did not update the weights of these embeddings during training. Table 10 shows the sizes of the training, validation and test sets of each employed dataset, as well as the corresponding best validation accuracy obtained by the AffineTrans model (without attention). All experiments were run on an NVIDIA Quadro M2000M GPU, and the average running time for the proposed models is around 1 hour for a maximum of 100 epochs.
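Initialising an embedding layer from pre-trained vectors and keeping it frozen is a standard pattern; the sketch below illustrates it with a tiny hypothetical vocabulary and 2-dimensional vectors (the paper uses 200-dimensional Twitter GloVe; all names here are illustrative):

```python
import numpy as np

# Hypothetical mini "GloVe" table; real GloVe files map word -> dense vector.
pretrained = {
    "foggy": np.array([0.1, 0.9]),
    "night": np.array([0.4, 0.2]),
}

vocab = ["<pad>", "<unk>", "foggy", "night", "brexit"]  # "brexit" is OOV here
dim = 2

rng = np.random.default_rng(0)
embedding_matrix = np.zeros((len(vocab), dim))
for i, word in enumerate(vocab):
    if word in pretrained:
        embedding_matrix[i] = pretrained[word]             # copy pre-trained vector
    elif word != "<pad>":
        embedding_matrix[i] = rng.normal(0, 0.1, dim)      # random init for <unk>/OOV

# "Frozen" embeddings: never apply gradient updates to embedding_matrix
# during training (e.g. trainable=False on a Keras Embedding layer).
```

Out-of-vocabulary handling (random init versus mapping to `<unk>`) is a design choice the paper does not detail; this sketch simply gives each OOV word its own random vector.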
# B.2 Other Trials

Sentence Embedding: We experimented with representations other than GloVe to embed the input sentence. We tried employing the contextualised pre-trained embeddings ELMo and BERT, either instead of the GloVe embeddings or as additional features, but no further improvements were observed on either the validation or test sets over the best performance obtained. Furthermore, we experimented with different pre-trained GloVe embeddings, including the uncased 300-dimensional vectors pre-trained on the Common Crawl dataset, but we did not notice any significant improvements.

Sentence Encoding: The choice of a simple LSTM to encode the input was based on several experiments on the validation set. We tried a bidirectional LSTM but observed no further improvement. This is due to the nature of the relation-level metaphor identification task itself: the tenor (e.g. noun) affects the metaphoricity of the vehicle (e.g. verb or adjective), so single-direction processing was sufficient.
| Dataset | Syntactic structure | Text type | Size | % Metaphors | Average Sentence Length |
|---|---|---|---|---|---|
| The adapted TroFi Dataset | verb-direct object | 50 selected verbs (News) | 1,535 sentences | 59.15% | 48.5 |
| The adapted VUAMC (NAACL Shared Task subset) | verb-direct object | known-corpus (The BNC) | 5,820 sentences | 38.87% | 63.5 |
| The adapted MOH Dataset | verb-direct object; subject-verb | selected examples (WordNet) | 647 sentences | 48.8% | 11 |
| The TSV Dataset | adjective-noun | selected examples (Web/Tweets) | 1,964 sentences | 50% | 43.5 |
| The ZayTw Dataset | verb-direct object | Tweets (general and political topics) | 2,531 tweets | 54.8% | 34.5 |
Table 4: Statistics of the benchmark datasets employed to train and evaluate our proposed models, highlighting the experimental setting used; links to the data sources are given in the footnotes. The adapted versions are available upon request from their corresponding authors.
| Dataset | Sentence | Targeted Expression | Gold Label |
|---|---|---|---|
| TSV | Chicago is a big city, with a lot of everything to offer. | big city | 0 |
| | It's a foggy night and there are a lot of cars on the motorway. | foggy night | 0 |
| | Their initial icy glares had turned to restless agitation. | icy glares | 1 |
| | And he died with a sweet smile on his lip. | sweet smile | 1 |
| ZayTw | insanity. ok to abuse children by locking them in closet, dark room and damage their psyche, but corporal punishment not ok? twisted! | abuse children | 0 |
| | nothing to do with your lot mate #ukip ran hate nothing else and your bloody poster upset the majority of the country regardless in or out | upset the majority | 0 |
| | nothing breaks my heart more than seeing a person looking into the mirror with anger & disappointment, blaming themselves when someone left. | breaks my heart | 1 |
| | how quickly will the warring tories patch up their differences to preserve power? #euref | patch up their differences | 1 |
| The adapted TroFi | A Middle Eastern analyst says Lebanese usually drink coffee at such occasions; Palestinians drink tea. | drink coffee | 0 |
| | In addition, the eight-warhead missiles carry guidance systems allowing them to strike Soviet targets precisely. | strike Soviet targets | 0 |
| | He now says that specialty retailing fills the bill, but he made a number of profitable forays in the meantime. | fills the bill | 1 |
| | A survey of U.K. institutional fund managers found most expect London stocks to be flat after the fiscal 1989 budget is announced, as Chancellor of the Exchequer Nigel Lawson strikes a careful balance between cutting taxes and not overstimulating the economy. | strikes a careful balance | 1 |
| The adapted VUAMC (NAACL Shared Task) | Among the rich and famous who had come to the salon to have their hair cut, tinted and set, Paula recognised Dusty Springfield, the pop singer, her eyes big and sooty, her lips pearly pink, and was unable to suppress the thrill of excitement which ran through her. | recognised Dusty Springfield | 0 |
| | But until they get any money back, the Tysons find themselves in the position of the gambler who gambled all and lost. | get any money | 0 |
| | The Labour Party Conference: Policy review throws a spanner in the Whitehall machinery | throws a spanner | 1 |
| | Otherwise Congress would have to face the consequences of automatic across-the-board cuts under the Gramm-Rudman-Hollings budget deficit reduction law. | face the consequences | 1 |
| MOH-X | commit a random act of kindness. | commit a random act | 0 |
| | The smoke clouded above the houses. | smoke clouded | 0 |
| | His political ideas color his lectures. | ideas color | 1 |
| | flood the market with tennis shoes. | flood the market | 1 |
Table 5: Examples of annotated instances from the employed relation-level datasets, showing their format as: sentence, targeted expression and the provided label.
| Model Classification | ZayTw Expression | Prob. | TSV Expression | Prob. |
|---|---|---|---|---|
| Metaphor | poisoning our democracy | 0.999 | rich history | 0.999 |
| | binding the country | 0.942 | rocky beginning | 0.928 |
| | see greater diversity | 0.892 | foggy brain | 0.873 |
| | patch up their differences | 0.738 | steep discounts | 0.723 |
| | seeking information | 0.629 | smooth operation | 0.624 |
| | retain eu protection | 0.515 | dumb luck | 0.512 |
| Not Metaphor | shake your baby | 0.420 | filthy garments | 0.393 |
| | enjoy a better climate | 0.375 | clear day | 0.283 |
| | improve our cultural relations | 0.292 | slimy slugs | 0.188 |
| | placate exiters | 0.225 | sour cherries | 0.102 |
| | betrayed the people | 0.001 | short walk | 0.014 |
| | washing my car | 0.000 | hot chocolate | 0.000 |
Table 6: Examples of correctly classified instances by our AffineTrans model (without attention) from the ZayTw and TSV datasets, showing the classification probability.
| Model Classification | adapted MOH Expression | Prob. | adapted TroFi Expression | Prob. | adapted VUAMC Expression | Prob. |
|---|---|---|---|---|---|---|
| Metaphor | absorbed the knowledge | 0.987 | grasped the concept | 0.985 | bury their reservations | 0.999 |
| | steamed the young man | 0.899 | strike fear | 0.852 | reinforce emotional reticence | 0.871 |
| | twist my words | 0.770 | ate the rule | 0.781 | possess few doubts | 0.797 |
| | color my judgment | 0.701 | planted a sign | 0.700 | suppress the thrill | 0.647 |
| | poses an interesting question | 0.543 | examined the legacy | 0.599 | considers the overall effect | 0.568 |
| | wears a smile | 0.522 | pumping money | 0.529 | made no attempt | 0.517 |
| Not Metaphor | shed a lot of tears | 0.484 | pumping power | 0.427 | send the tape | 0.482 |
| | abused the policeman | 0.361 | poured acid | 0.314 | asking pupils | 0.389 |
| | tack the notice | 0.274 | ride his donkey | 0.268 | removes her hat | 0.276 |
| | stagnate the waters | 0.148 | fixed the dish | 0.144 | enjoying the reflected glory | 0.188 |
| | paste the sign | 0.002 | lending the credit | 0.069 | predict the future | 0.088 |
| | heap the platter | 0.000 | destroy coral reefs | 0.000 | want anything | 0.000 |
Table 7: Examples of correctly classified instances by our AffineTrans model (without attention) from the adapted MOH, TroFi and VUAMC datasets, showing the classification probability.
| | Dataset | Sentence | Prob. |
|---|---|---|---|
| False Negative | TroFi | Unself-consciously, the littlest cast member with the big voice steps into the audience in one number to open her wide cat-eyes and throat to melt the heart of one lucky patron each night. | 0.295 |
| | | Lillian Vernon Corp., a mail-order company, said it is experiencing delays in filling orders at its new national distribution center in Virginia Beach, Va. | 0.006 |
| | VUAMC | It is a curiously paradoxical foundation upon which to build a theory of autonomy. | 0.410 |
| | | It has turned up in Canberra with Japan to develop Asia Pacific Economic Co-operation (APEC) and a new 12-nation organisation which will mimic the role of the Organisation for Economic Co-operation and Development in Europe. | 0.000 |
| | MOH | When does the court of law sit? | 0.499 |
| | | The rooms communicated. | 0.000 |
| | TSV | It was great to see a warm reception for it on twitter. | 0.488 |
| | | An honest meal at a reasonable price is a rarity in Milan. | 0.000 |
| | ZayTw | #brexit? we explain likely implications for business insurances on topic of #eureferendum | 0.2863 |
| | | @abpi uk: need #euref final facts? read why if you care about uk life sciences we're #strongerin. | 0.0797 |
| False Positive | TroFi | As the struggle enters its final weekend, any one of the top contenders could grasp his way to the top of the greasy pole. | 0.998* |
| | | Southeastern poultry producers fear withering soybean supplies will force up prices on other commodities. | 0.507 |
| | VUAMC | Or after we followed the duff advice of a legal journalist in a newspaper? | 0.999* |
| | | Aristotle said something very interesting in that extract from the Politics which I quoted earlier; he said that women have a deliberative faculty but that it lacks full authority. | 0.525 |
| | MOH | All our planets condensed out of the same material. | 0.999 |
| | | He bowed before the King. | 0.868 |
| | TSV | Bags two and three will only have straight edges along the top and the bottom. | 0.846 |
| | | Mountain climbers at high altitudes quickly acquire a tan from the sun. | 0.986 |
| | ZayTw | delayed flight in fueturventura due to french strikes restricting access across french airspace =/ hopefully get back in time to #voteleave | 0.9589 |
| | | in manchester more young people are expected to seek help in the coming months and years #cypiapt #mentalhealth | 0.7055* |
Table 8: Misclassified examples by our AffineTrans model (without attention) from the TSV test set as well as the adapted MOH, TroFi and VUAMC test sets. *Our model was able to spot some mistakenly annotated instances in the dataset.
| Expression | Tweet | Predicted | Prob. | Gold |
| --- | --- | --- | --- | --- |
| experience the inspiration | relive the show , re - listen to her messages, re - experience the inspiration, refuel your motivation | 0 | 0.220 | 1 |
| experience your emotions | do not be afraid to experience your emotions; they are the path to your soul. trust yourself enough to feel what you feel. | 0 | 0.355 | 0 |
| experience this shocking behaviour | a friend voted this morning & experienced this shocking behaviour. voting is everyone 's right. #voteremain | 0 | 0.009 | 0 |
| explain likely implications | #brexit? we explain likely implications for business insurances on topic of #eureferendum | 0 | 0.2866 | 1 |
| explain this mess | @b_hanbin28 ikr same here :D imagine hansol & shua trynna explain this mess to other members :D | 0 | 0.109 | 0 |
| explain the rise | loss aversion partly explains the rise of trump and ukip | 1 | 0.618 | 1 |
Table 9: Examples of classified instances of the verbs "experience" and "explain" in the ZayTw test set.
| Dataset | Train | Validation | Test | Split % | Validation Accuracy @ epoch |
| --- | --- | --- | --- | --- | --- |
| The adapted TroFi Dataset | 1,074 | 150 | 312 | 70-10-20 | 0.914 @ 40 |
| The adapted VUAMC | 3,535 | 885 | 1,398 | - | 0.748 @ 20 |
| The adapted MOH Dataset | 582 per fold | - | 65 per fold | 10-fold cross-validation | - |
| The TSV Dataset | 1,566 | 200 | 200 | - | 0.905 @ 68 |
| The ZayTw Dataset | 1,661 | 360 | 510 | 70-10-20 | 0.808 @ 29 |
Table 10: Experimental information of the five benchmark datasets, including the best obtained validation accuracy by the AffineTrans model (without attention). We preserved the splits used in the literature for the VUAMC and TSV datasets.

# Contextual Text Style Transfer

Yu Cheng $^{1}$ , Zhe Gan $^{1}$ , Yizhe Zhang $^{2}$ , Oussama Elachqar $^{2}$ , Dianqi Li $^{3}$ , Jingjing Liu $^{1}$

$^{1}$ Microsoft Dynamics 365 AI Research $^{2}$ Microsoft Research $^{3}$ University of Washington

{yu.cheng,zhe.gan,yizhe.zhang,ouelachq,jinjl}@microsoft.com, dianqili@uw.edu

# Abstract

We introduce a new task, Contextual Text Style Transfer - translating a sentence into a desired style with its surrounding context taken into account. This brings two key challenges to existing style transfer approaches: (i) how to preserve the semantic meaning of the target sentence and its consistency with the surrounding context during transfer; (ii) how to train a robust model with limited labeled data accompanied by context.
To realize high-quality style transfer with natural context preservation, we propose a Context-Aware Style Transfer (CAST) model, which uses two separate encoders for each input sentence and its surrounding context. A classifier is further trained to ensure contextual consistency of the generated sentence. To compensate for the lack of parallel data, additional self-reconstruction and back-translation losses are introduced to leverage non-parallel data in a semi-supervised fashion. Two new benchmarks, Enron-Context and Reddit-Context, are introduced for formality and offensiveness style transfer. Experimental results on these datasets demonstrate the effectiveness of the proposed CAST model over state-of-the-art methods across style accuracy, content preservation and contextual consistency metrics. $^{1}$

# 1 Introduction

Text style transfer has been applied to many applications (e.g., sentiment manipulation, formalized writing) with remarkable success. Early work relies on parallel corpora with a sequence-to-sequence learning framework (Bahdanau et al., 2015; Jhamtani et al., 2017). However, collecting parallel annotations is highly time-consuming and expensive. There have also been studies on developing text style transfer models with non-parallel data (Hu et al., 2017; Li et al., 2018; Prabhumoye et al., 2018; Subramanian et al., 2018), assuming that disentangling style information from semantic content can be achieved in an auto-encoding fashion with the introduction of additional regularizers (e.g., adversarial discriminators (Shen et al., 2017), language models (Yang et al., 2018)).

Despite promising results, these techniques still have a long way to go for practical use. Most existing models focus on sentence-level rewriting. However, in real-world applications, sentences typically reside in a surrounding paragraph context.
In formalized writing, the rewritten span is expected to align well with the surrounding context to keep a coherent semantic flow. For example, to automatically replace a gender-biased sentence in a job description document, a style transfer model taking the sentence out of context may not be able to understand the proper meaning of the statement and the intended message. Taking a single sentence as the sole input of a style transfer model may fail to preserve topical coherency between the generated sentence and its surrounding context, leading to low semantic and logical consistency on the paragraph level (see Example C in Table 4). Similar observations can be found in other style transfer tasks, such as offensive-to-non-offensive and political-to-neutral translations.

Motivated by this, we propose and investigate a new task - Contextual Text Style Transfer. Given a paragraph, the system aims to translate sentences into a desired style, while keeping the edited section topically coherent with its surrounding context. To achieve this goal, we propose a novel Context-Aware Style Transfer (CAST) model, which jointly considers style translation and context alignment. To leverage parallel training data, CAST employs two separate encoders to encode the source sentence and its surrounding context, respectively. With the encoded sentence and context embeddings, a decoder is trained to translate the joint features into a new sentence in a specific style. A pre-trained style classifier is applied for style regularization, and a coherence classifier learns to regularize the generated target sentence to be consistent with the context. To overcome the data sparsity issue, we further introduce a set of unsupervised training objectives (e.g., self-reconstruction loss, back-translation loss) to leverage non-parallel data in a hybrid approach (Shang et al., 2019). The final CAST model is jointly trained with both parallel and non-parallel data via end-to-end training.

As this is a newly proposed task, we introduce two new datasets, Enron-Context and Reddit-Context, collected via crowdsourcing. The former contains 14,734 formal vs. informal paired samples from Enron (Klimt and Yang, 2004) (an email dataset), and the latter contains 23,158 offensive vs. non-offensive paired samples from Reddit (Serban et al., 2017). Each sample contains an original sentence and a human-rewritten one in the target style, accompanied by its paragraph context. In experiments, we also leverage 60k formal/informal sentences from GYAFC (Rao and Tetreault, 2018) and 100k offensive/non-offensive sentences from Reddit (dos Santos et al., 2018) as additional non-parallel data for model training.

The main contributions of this work are summarized as follows: (i) We propose a new task - Contextual Text Style Transfer, which aims to translate a sentence into a desired style while preserving its style-agnostic semantics and topical consistency with the surrounding context. (ii) We introduce two new datasets for this task, Enron-Context and Reddit-Context, which provide strong benchmarks for evaluating contextual style transfer models. (iii) We present a new model - Context-Aware Style Transfer (CAST), which jointly optimizes the generation quality of the target sentence and its topical coherency with adjacent context. Extensive experiments on the new datasets demonstrate that the proposed CAST model significantly outperforms state-of-the-art style transfer models.

# 2 Related Work

# 2.1 Text Style Transfer

Text style transfer aims to modify an input sentence into a desired style while preserving its style-independent semantics. Previous work has explored this as a sequence-to-sequence learning task using parallel corpora with paired source/target sentences in different styles. For example, Jhamtani et al. (2017) pre-trained word embeddings by leveraging external dictionaries mapping Shakespearean words to modern English words and additional text.
However, available parallel data in different styles are very limited. Therefore, there is a recent surge of interest in considering a more realistic setting, where only non-parallel stylized corpora are available. A typical approach is: $(i)$ disentangling the latent space into content and style features; then $(ii)$ generating stylistic sentences by tweaking the style-relevant features and passing them through a decoder, together with the original content-relevant features (Xu et al., 2018).

Many of these approaches borrowed the idea of an adversarial discriminator/classifier from the Generative Adversarial Network (GAN) framework (Goodfellow et al., 2014). For example, Shen et al. (2017); Fu et al. (2018); Lample et al. (2018) used adversarial classifiers to force the decoder to transfer the encoded source sentence into a different style/language. Alternatively, Li et al. (2018) achieved disentanglement by filtering stylistic words from input sentences. Another direction for text style transfer without parallel data is using back-translation (Prabhumoye et al., 2018) with a de-noising auto-encoding objective (Logeswaran et al., 2018; Subramanian et al., 2018).

Regarding tasks, sentiment transfer is one of the most widely studied problems. Transferring from informality to formality (Rao and Tetreault, 2018; Li et al., 2019) is another direction of text style transfer, aiming to change the style of a given sentence to more formal text. dos Santos et al. (2018) presented an approach to transferring offensive text to non-offensive text based on social network data. In Prabhumoye et al. (2018), the authors proposed the political slant transfer task. However, none of these previous studies directly considered context-aware text style transfer, which is the main focus of this work.
# 2.2 Context-aware Text Generation

Our work is related to context-aware text generation (Mikolov and Zweig, 2012; Tang et al., 2016), which can be applied to many NLP tasks (Mangrulkar et al., 2018). For example, previous work has investigated language modeling with context information (Wang and Cho, 2015; Wang et al., 2017; Li et al., 2020), treating the preceding sentences as context. There are also studies on response generation for conversational systems (Sordoni et al., 2015b; Wen et al., 2015), where the dialogue history is treated as context. Zang and Wan (2017) introduced a neural model to generate long reviews from aspect-sentiment scores given the topics. Vinyals and Le (2015) proposed a model to predict the next sentence given the previous sentences in a dialogue session. Sordoni et al. (2015a) presented a hierarchical recurrent encoder-decoder model to encode dialogue context. Our work is the first to explore context information in the text style transfer task.

![](images/5a53a75f8ba215c6fad0aaca06fff1457e1f7230a715756aca271f28c73403df.jpg)

![](images/a5943251a087cee35c3cbc020af4a1e41eb602bd5c8c04ed38576467bdf3231a.jpg)
Figure 1: Model architecture of the proposed CAST model for contextual text style transfer. Both training paths share the same sentence encoder and decoder. See Sec. 3 for details.

# 3 Context-Aware Style Transfer

In this section, we first describe the problem definition and provide an overview of the model architecture in Section 3.1. Section 3.2 presents the proposed Context-Aware Style Transfer (CAST) model with supervised training objectives, and Section 3.3 further introduces how to augment the CAST model with non-parallel data in a hybrid approach.

# 3.1 Overview

Problem Definition The problem of contextual text style transfer is defined as follows.
A style-labelled parallel dataset $\mathcal{P} = \{(\mathbf{x}_i, l_i), (\mathbf{y}_i, \tilde{l}_i), \mathbf{c}_i\}_{i=1}^M$ includes: (i) the $i$ -th instance containing the original sentence $\mathbf{x}_i$ with a style $l_i$ , (ii) its corresponding rewritten sentence $\mathbf{y}_i$ in another style $\tilde{l}_i$ , and (iii) the paragraph context $\mathbf{c}_i$ . $\mathbf{x}_i$ and $\mathbf{y}_i$ are expected to encode the same semantic content, but in different language styles (i.e., $l_i \neq \tilde{l}_i$ ). The goal is to transform $\mathbf{x}_i$ in style + +$l_{i}$ to $\mathbf{y}_i$ in style $\tilde{l}_i$ , while keeping $\mathbf{y}_i$ semantically coherent with its context $\mathbf{c}_i$ . In practice, labelled parallel data may be difficult to garner. Ideally, additional non-parallel data $\mathcal{U} = \{\left(\mathbf{x}_i,l_i\right)\}_{i = 1}^N$ can be leveraged to enhance model training. + +Model Architecture The architecture of the proposed CAST model is illustrated in Figure 1. The hybrid model training process consists of two paths, one for parallel data and the other for non-parallel data. In the parallel path, a Seq2Seq loss and a contextual coherence loss are included, for the joint training of two encoders (Sentence Encoder and Context Encoder) and the Sentence Decoder. The non-parallel path is designed to further enhance the Sentence Encoder and Decoder with three additional losses: (i) a self-reconstruction loss; (ii) a back-translation loss; and (iii) a style classification loss. 
The final training objective, uniting both parallel and non-parallel paths, is formulated as: + +$$ +\begin{array}{l} L _ {f i n a l} ^ {\mathcal {P}, \mathcal {U}} = L _ {c - s 2 s} ^ {\mathcal {P}} + \lambda_ {1} L _ {c o h e r e} ^ {\mathcal {P}} + \lambda_ {2} L _ {r e c o n} ^ {\mathcal {U}} \tag {1} \\ + \lambda_ {3} L _ {b t r a n s} ^ {\mathcal {U}} + \lambda_ {4} L _ {s t y l e} ^ {\mathcal {U}}, \\ \end{array} +$$ + +where $\lambda_1, \lambda_2, \lambda_3$ and $\lambda_4$ are hyper-parameters to balance different objectives. Each of these loss terms will be explained in the following subsections. + +# 3.2 Supervised Training Objectives + +In this subsection, we discuss the training objective associated with parallel data, consisting of: $(i)$ a contextual Seq2Seq loss; and $(ii)$ a contextual coherence loss. + +Contextual Seq2Seq Loss When parallel data is available, a Seq2Seq model can be directly learned for text style transfer. We denote the Seq2Seq model as $(E,D)$ , where the semantic representation of sentence $\mathbf{x}_i$ is extracted by the encoder $E$ , and the decoder $D$ aims to learn a conditional distribution of $\mathbf{y}_i$ given the encoded feature $E(\mathbf{x}_i)$ and style $\tilde{l}_i$ : + +$$ +L _ {s 2 s} ^ {\mathcal {P}} = - \underset {\mathbf {x} _ {i}, \mathbf {y} _ {i} \sim \mathcal {P}} {\mathbb {E}} \log p _ {D} (\mathbf {y} _ {i} | E (\mathbf {x} _ {i}), \tilde {l} _ {i}). \tag {2} +$$ + +However, in such a sentence-to-sentence style transfer setting, the context in the paragraph is ignored, which if well utilized, could help improve generation quality such as paragraph-level topical coherence. + +Thus, to take advantage of the paragraph context $\mathbf{c}_i$ , we use two separate encoders $E_{s}$ and $E_{c}$ to encode the sentence and the context independently. 
The outputs of the two encoders are combined via a linear layer to obtain a context-aware sentence representation, which is then fed to the decoder to generate the target sentence. The model is trained to minimize the following loss:

$$
L_{c\text{-}s2s}^{\mathcal{P}} = -\mathbb{E}_{\mathbf{x}_i, \mathbf{c}_i, \mathbf{y}_i \sim \mathcal{P}} \log p_D\left(\mathbf{y}_i \mid E_s(\mathbf{x}_i), E_c(\mathbf{c}_i), \tilde{l}_i\right). \tag{3}
$$

Compared with Eqn. (2), the use of $E_{c}(\mathbf{c}_{i})$ makes the text style transfer process context-dependent. The generated sentence can be denoted as $\tilde{\mathbf{y}}_i = D(E_s(\mathbf{x}_i),E_c(\mathbf{c}_i),\tilde{l}_i)$ .

Contextual Coherence Loss To enforce contextual coherence (i.e., to ensure the generated sentence $\mathbf{y}_i$ aligns with the surrounding context $\mathbf{c}_i$ ), we train a coherence classifier that judges whether $\mathbf{c}_i$ is the context of $\mathbf{y}_i$ , by adopting a language model with an objective similar to next sentence prediction (Devlin et al., 2019).

Specifically, assume that $\mathbf{y}_i$ is the $t$ -th sentence of a paragraph $\mathbf{p}_i$ (i.e., $\mathbf{y}_i = \mathbf{p}_i^{(t)}$ ), and $\mathbf{c}_i = \{\mathbf{p}_i^{(0)}, \ldots, \mathbf{p}_i^{(t-1)}, \mathbf{p}_i^{(t+1)}, \ldots, \mathbf{p}_i^{(T)}\}$ is its surrounding context. We first reconstruct the paragraph $\mathbf{p}_i = \{\mathbf{p}_i^{(0)}, \ldots, \mathbf{p}_i^{(T)}\}$ by inserting $\mathbf{y}_i$ into the proper position in $\mathbf{c}_i$ , denoted as $[\mathbf{c}_i; \mathbf{y}_i]$ . Based on this, we obtain a paragraph representation $\mathbf{u}_i$ via a language model encoder.
Then, we apply a linear layer to the representation, followed by a tanh function and a softmax layer, to predict a binary label $s_i$ , which indicates whether $\mathbf{c}_i$ is the right context for $\mathbf{y}_i$ :

$$
\mathbf{u}_i = \mathrm{LM}([\mathbf{c}_i; f(\mathbf{y}_i)]), \tag{4}
$$

$$
p_{\mathrm{LM}}\left(s_i \mid \mathbf{c}_i, \mathbf{y}_i\right) = \mathrm{softmax}(\tanh(\mathbf{W}\mathbf{u}_i + \mathbf{b})),
$$

where LM represents the language model encoder, and $s_i = 1$ indicates that $\mathbf{c}_i$ is the context of $\mathbf{y}_i$ . $f(\cdot)$ is a softmax function with temperature $\tau$ , where the logits are the predicted network output with a dimension of the vocabulary size. Note that since the tokens of $\tilde{\mathbf{y}}_i$ are discrete and non-differentiable, we use the continuous feature $f(\tilde{\mathbf{y}}_i)$ in place of $\tilde{\mathbf{y}}_i$ as the input of the language model. We construct paired data $\{\mathbf{y}_i,\mathbf{c}_i,s_i\}_{i = 1}^N$ for training the classifier, where the negative samples are created by replacing a sentence in a paragraph with another random sentence. After pre-training, the coherence classifier is used to obtain the contextual coherence loss:

$$
L_{cohere}^{\mathcal{P}} = -\mathbb{E}_{\mathbf{x}_i, \mathbf{c}_i \sim \mathcal{P}} \log p_{\mathrm{LM}}\left(s_i = 1 \mid \mathbf{c}_i, f(\tilde{\mathbf{y}}_i)\right). \tag{5}
$$

Intuitively, minimizing $L_{cohere}^{\mathcal{P}}$ encourages $\tilde{\mathbf{y}}_i$ to blend better into its context $\mathbf{c}_i$ . Note that the coherence classifier is pre-trained, and remains fixed during the training of the CAST model. The above coherence loss can be used to update the parameters of $E_s$ , $E_c$ and $D$ during model training.
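As a purely illustrative sketch of the two supervised components above, the snippet below implements the linear fusion of the two encoder outputs (cf. Eqn. (3)) and the coherence head $\mathrm{softmax}(\tanh(\mathbf{W}\mathbf{u}_i + \mathbf{b}))$ (cf. Eqn. (4)) in NumPy, with random vectors standing in for the real encoder and language-model outputs; all names and dimensions are our own assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # illustrative hidden size

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# --- Context-aware sentence representation (cf. Eqn. (3)) ---
e_sent = rng.normal(size=d)            # stands in for E_s(x_i)
e_ctx = rng.normal(size=d)             # stands in for E_c(c_i)
W_fuse = rng.normal(size=(2 * d, d))   # linear layer combining the two encoders
h = np.concatenate([e_sent, e_ctx]) @ W_fuse  # fed to the decoder D

# --- Coherence classifier head (cf. Eqn. (4)) ---
u = rng.normal(size=d)                 # stands in for LM([c_i; f(y_i)])
W_coh = rng.normal(size=(d, 2))
b_coh = np.zeros(2)
p = softmax(np.tanh(u @ W_coh + b_coh))  # p[1] = p_LM(s_i = 1 | c_i, y_i)

# Per-example coherence loss (cf. Eqn. (5)): negative log-likelihood of s_i = 1.
loss_cohere = -np.log(p[1])
```

Minimizing `loss_cohere` pushes the generated sentence toward representations the coherence classifier accepts; in the actual model the gradient flows through the continuous feature $f(\tilde{\mathbf{y}}_i)$ rather than through discrete tokens.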
# 3.3 Unsupervised Training Objectives

For the contextual style transfer task, there are not many parallel datasets available with style-labeled paragraph pairs. To overcome the data sparsity issue, we propose a hybrid approach to leverage additional non-parallel data $\mathcal{U} = \{\left(\mathbf{x}_i,l_i\right)\}_{i = 1}^N$ , which are abundant and less expensive to collect. In order to fully exploit $\mathcal{U}$ to enhance the training of the Sentence Encoder and Decoder $(E_{s},D)$ , we introduce three additional training losses, detailed below.

Reconstruction Loss The reconstruction loss encourages $E_{s}$ and $D$ to reconstruct the input sentence itself, if the desired style is the same as the input style. The corresponding objective is similar to Eqn. (2):

$$
L_{recon}^{\mathcal{U}} = -\mathbb{E}_{\mathbf{x}_i \sim \mathcal{U}} \log p_D\left(\mathbf{x}_i \mid E_s(\mathbf{x}_i), l_i\right). \tag{6}
$$

Compared to Eqn. (2), here we encourage the decoder $D$ to recover $\mathbf{x}_i$ 's original style properties as accurately as possible, given the style label $l_i$ . The self-reconstructed sentence is denoted as $\hat{\mathbf{x}}_i = D(E_s(\mathbf{x}_i), l_i)$ .

Back-Translation Loss The back-translation loss requires the model to reconstruct the input sentence after a transformation loop. Specifically, the input sentence $\mathbf{x}_i$ is first transferred into the target style, i.e., $\tilde{\mathbf{x}}_i = D(E_s(\mathbf{x}_i),\tilde{l}_i)$ . Then the generated target sentence is transferred back into its original style, i.e., $\hat{\mathbf{x}}_i = D(E_s(\tilde{\mathbf{x}}_i),l_i)$ .
The back-translation loss is defined as:

$$
L_{btrans}^{\mathcal{U}} = -\mathbb{E}_{\mathbf{x}_i \sim \mathcal{U},\, \tilde{\mathbf{x}}_i \sim p_D(\mathbf{y}_i \mid E_s(\mathbf{x}_i), \tilde{l}_i)} \log p_D(\mathbf{x}_i \mid E_s(\tilde{\mathbf{x}}_i), l_i), \tag{7}
$$

where the source and target styles are denoted as $l_{i}$ and $\tilde{l}_i$ , respectively.

Style Classification Loss To further boost the model, we use $\mathcal{U}$ to train a classifier that predicts the style of a given sentence, and regularize the training of $(E_s,D)$ with the pre-trained style classifier. The objective is defined as:

$$
L_{style} = -\mathbb{E}_{\mathbf{x}_i \sim \mathcal{U}} \log p_C\left(l_i \mid \mathbf{x}_i\right), \tag{8}
$$

where $p_C(\cdot)$ denotes the style classifier. After the classifier is trained, we keep its parameters fixed, and apply it to update the parameters of $(E_s,D)$ . The resulting style classification loss utilizing the pre-trained style classifier is defined as:

$$
L_{style}^{\mathcal{U}} = -\mathbb{E}_{\mathbf{x}_i \sim \mathcal{U}} \Big[ \mathbb{E}_{\hat{\mathbf{x}}_i \sim p_D(\hat{\mathbf{x}}_i \mid E_s(\mathbf{x}_i), l_i)} \log p_C(l_i \mid \hat{\mathbf{x}}_i) + \mathbb{E}_{\tilde{\mathbf{x}}_i \sim p_D(\tilde{\mathbf{x}}_i \mid E_s(\mathbf{x}_i), \tilde{l}_i)} \log p_C(\tilde{l}_i \mid \tilde{\mathbf{x}}_i) \Big]. \tag{9}
$$

# 4 New Benchmarks

Existing text style transfer datasets, either parallel or non-parallel, do not contain contextual information, and are thus unsuitable for the contextual transfer task.
To provide benchmarks for evaluation, we introduce two new datasets: Enron-Context and Reddit-Context, derived from two existing datasets - Enron (Klimt and Yang, 2004) and Reddit Politics (Serban et al., 2017).

1) Enron-Context To build a formality transfer dataset with paragraph contexts, we randomly sampled emails from the Enron corpus (Klimt and Yang, 2004). After pre-processing and filtering with NLTK (Bird et al., 2009), we asked Amazon Mechanical Turk (AMT) annotators to identify informal sentences within each email, and rewrite them in a more formal style. Then, we asked a different group of annotators to verify whether each rewritten sentence is more formal than the original sentence.

2) Reddit-Context Another typical style transfer task is offensive vs. non-offensive, for which we collected another dataset from the Reddit Politics corpus (Serban et al., 2017). First, we identified offensive sentences in the original dataset with sentence-level classification. After filtering out extremely long/short sentences, we randomly selected a subset of sentences (10% of the whole dataset) and asked AMT annotators to rewrite each offensive sentence into two non-offensive alternatives.

After manually removing wrong or duplicate annotations, we obtained a total of 14,734 rewritten sentences for Enron-Context, and 23,158 for Reddit-Context. We also limited the vocabulary size by replacing words with a frequency of less than 20/70 in the Enron/Reddit datasets with a special unknown token. Table 1 provides the statistics on the two datasets. More details on AMT data collection are provided in the Appendix.

# 5 Experiments

In this section, we compare our model with state-of-the-art baselines on the two new benchmarks, and provide both quantitative analysis and human evaluation to validate the effectiveness of the proposed CAST model.
# 5.1 Datasets and Baselines

In addition to the two new parallel datasets, we also leverage non-parallel datasets for CAST model training. For formality transfer, one choice is Grammarly's Yahoo Answers Formality Corpus (GYAFC) (Rao and Tetreault, 2018), crawled and annotated from two domains in Yahoo Answers. This corpus contains paired informal-formal sentences without context. We randomly selected a subset of sentences (28,375/29,774 formal/informal) from the GYAFC dataset as our training dataset. For offensiveness transfer, we utilize the Reddit dataset. Following dos Santos et al. (2018), we used a pre-trained classifier to extract 53,028/53,714 offensive/non-offensive sentences from Reddit posts as our training dataset.

Table 2 provides the statistics of the parallel and non-parallel datasets used for the two style transfer tasks. For the non-parallel datasets, we split them into two: one part for CAST model training ('Train'), and the other for the style classifier pre-training.
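The two-way split just described can be sketched as follows; this is a toy illustration with invented corpus sizes and a hypothetical `split_corpus` helper, not the authors' code.

```python
import random

def split_corpus(sentences, n_classifier, seed=0):
    """Partition a non-parallel corpus into two disjoint subsets:
    one for CAST model training, one for style-classifier pre-training."""
    pool = list(sentences)
    random.Random(seed).shuffle(pool)
    return pool[n_classifier:], pool[:n_classifier]

# Toy stand-in for a non-parallel corpus such as GYAFC (sizes are illustrative).
corpus = [f"sentence-{i}" for i in range(100)]
train, clf_pretrain = split_corpus(corpus, n_classifier=20)
```

The same kind of disjoint split is applied to the parallel training sets, which are divided between CAST training and coherence-classifier pre-training.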
| Dataset | # sent. | # rewritten sent. | # words per sent. | # words per paragraph | # vocabulary |
| --- | --- | --- | --- | --- | --- |
| Enron-Context | 14,734 | 14,734 | 9.4 | 38.5 | 4,622 |
| Reddit-Context | 23,158 | 25,259 | 7.6 | 25.9 | 2,196 |
Table 1: Statistics on the Enron-Context and Reddit-Context datasets.
| Task | Non-parallel | Train | Style classifier | Parallel | Train | Dev | Test | Coherence classifier |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Formality Transfer | GYAFC | 58k | 12k | Enron-Context | 13k | 0.5k | 1k | 2.5k |
| Offensiveness Transfer | REDDIT | 106k | 15k | Reddit-Context | 22k | 0.5k | 1k | 3.5k |
Table 2: Statistics of the parallel and non-parallel datasets on two text style transfer tasks.

Similarly, for the parallel datasets, the training sets are divided into two as well, for the training of CAST ('Train/Dev/Test') and the coherence classifier, respectively.

We compare the CAST model with several baselines: (i) Seq2Seq: a Transformer-based Seq2Seq model (Eqn. (2)), taking sentences as the only input, trained on parallel data only; (ii) Contextual Seq2Seq: a Transformer-based contextual Seq2Seq model (Eqn. (3)), taking both context and sentence as input, trained on parallel data only; (iii) Hybrid Seq2Seq (Xu et al., 2019): a Seq2Seq model leveraging both parallel and non-parallel data; (iv) ControlGen (Hu et al., 2017, 2018): a state-of-the-art text transfer model using non-parallel data; (v) MulAttGen (Subramanian et al., 2018): another state-of-the-art style transfer model that allows flexible control over multiple attributes.

# 5.2 Evaluation Metrics

The contextual style transfer task requires a model to generate sentences that: $(i)$ preserve the original semantic content and structure of the source sentence; $(ii)$ conform to the pre-specified style; and $(iii)$ align with the surrounding context in the paragraph. Thus, we consider the following automatic metrics for evaluation:

Content Preservation. We assess the degree of content preservation during transfer by measuring BLEU scores (Papineni et al., 2002) between generated sentences and human references. Following Rao and Tetreault (2018), we also use GLEU as an additional metric for the formality transfer task, which was originally introduced for the grammatical error correction task (Napoles et al., 2015). For offensiveness transfer, we include perplexity (PPL) as used in dos Santos et al. (2018), which is computed by a word-level LSTM language model pre-trained on non-offensive sentences.

Style Accuracy.
Similar to prior work, we measure style accuracy using the prediction accuracy of the pre-trained style classifier over generated sentences (Acc.).

Context Coherence. We use the prediction accuracy of the pre-trained coherence classifier to measure how well a generated sentence matches its surrounding context (Coherence).

The evaluation classifiers are trained separately from those used to train CAST, following dos Santos et al. (2018). For formality transfer, the style classifier and coherence classifier reach $91.35\%$ and $86.78\%$ accuracy, respectively, on their held-out evaluation sets. For offensiveness transfer, the corresponding accuracies are $93.47\%$ and $84.96\%$.

# 5.3 Implementation Details

The context encoder, sentence encoder and sentence decoder are all implemented as one-layer Transformers with 4 attention heads. The hidden dimension of each head is 256, and the hidden dimension of the feed-forward sub-layer is 1024. The context encoder takes at most 50 words from the surrounding context of the target sentence. For the style classifier, we use a standard CNN-based sentence classifier (Kim, 2014).

Since the non-parallel corpus $\mathcal{U}$ contains more samples than the parallel one $\mathcal{P}$, we down-sample $\mathcal{U}$ so that each mini-batch contains the same number of parallel and non-parallel samples. This balances training and alleviates the 'catastrophic forgetting problem' described in Howard and Ruder (2018). We train the model using the Adam optimizer with a mini-batch size of 64 and a learning rate of 0.0005. The validation set is used to select the best hyper-parameters. Hard-sampling (Logeswaran et al., 2018) is used to back-propagate the loss through discrete tokens from the pre-trained classifiers to the model.

For the ControlGen (Hu et al., 2017) baseline, we use the code provided by the authors with their default hyper-parameter setting. For Hybrid Seq2Seq (Xu et al., 2019) and MulAttGen (Subramanian et al., 2018), we re-implement the models following the original papers.

| Model | Acc. | Coherence | BLEU | GLEU | Acc. | Coherence | BLEU | PPL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Seq2Seq | 64.05 | 78.09 | 24.16 | 10.46 | 83.05 | 80.28 | 17.22 | 140.39 |
| Contextual Seq2Seq | 64.28 | 81.25 | 23.72 | 10.37 | 83.42 | 81.69 | 18.74 | 138.42 |
| Hybrid Seq2Seq | 65.09 | 79.62 | 24.35 | 10.93 | 83.28 | 84.87 | 20.78 | 107.12 |
| ControlGen | 62.18 | 73.66 | 14.32 | 8.72 | 82.15 | 78.81 | 10.44 | 92.14 |
| MulAttGen | 63.36 | 72.97 | 15.14 | 8.91 | 82.71 | 78.45 | 11.03 | 92.56 |
| CAST | 68.04 | 85.47 | 26.38 | 15.06 | 88.45 | 85.98 | 23.92 | 93.03 |

Table 3: Quantitative evaluation results of different models on two style transfer tasks (columns 2-5: formality transfer; columns 6-9: offensiveness transfer).

| Ex. | Model | Sentence | Context |
| --- | --- | --- | --- |
| | | **Task: informal to formal transfer** | |
| A | Input | I'm assuming that you'd set up be part of that meeting? | I'll call him back to a meeting. [Input]. I asked him what sort of deals they're working on. |
| A | ControlGen | I'm guessing that you would be set up that call? | |
| A | MulAttGen | I'm guessing that you would be set up that meeting? | |
| A | C-Seq2Seq | I am assuming that you would part of that person . | |
| A | H-Seq2Seq | I am assuming that you would be part of that party ? | |
| A | CAST | Am I correct to assume that you would attend that meeting ? | |
| B | Input | Do y'all interface with C/P . | Thanks . Can someone let the C/P know that the deals are good ? [Input]. If not deal confirmations could but they need the deal details . |
| B | ControlGen | Do you compete with them ? | |
| B | MulAttGen | Do you interface with them ? | |
| B | C-Seq2Seq | Do we interface with them ? | |
| B | H-Seq2Seq | Do we interface with them ? | |
| B | CAST | Do you all interface with C/P ? | |
| | | **Task: offensive to non-offensive transfer** | |
| C | Input | You are ugly . | With the glasses , [Input]. I don't need them because I never read . How do i look ? |
| C | ControlGen | You bad guy ! | |
| C | MulAttGen | You are sad . | |
| C | C-Seq2Seq | Have a bad day . | |
| C | H-Seq2Seq | What a bad day ! | |
| C | CAST | You look not good . | |

Table 4: Examples from the two datasets, where [Input] marks the position of the sentence to be transferred within its context (C-Seq2Seq: Contextual Seq2Seq; H-Seq2Seq: Hybrid Seq2Seq).

# 5.4 Experimental Results

Formality Transfer. Results on the formality transfer task are summarized in Table 3. The CAST model achieves better performance than all the baselines. In particular, CAST boosts the GLEU and Coherence scores by a large margin. Hybrid Seq2Seq also achieves good performance by utilizing non-parallel data. By incorporating context information, Contextual Seq2Seq also improves over the vanilla Seq2Seq model. As expected, ControlGen does not perform well, since only non-parallel data is used for training.

Offensiveness Transfer. Results are also summarized in Table 3. CAST achieves the best performance on all metrics except $PPL$. In terms of Coherence, the two models that leverage context information, Contextual Seq2Seq and CAST, achieve better performance than the Seq2Seq baseline. Contextual Seq2Seq also improves $BLEU$, which differs from the observation on the formality transfer task. On $PPL$, CAST produces slightly worse performance than ControlGen and MulAttGen.
We hypothesize that this is because our model tends to use the same non-offensive word to replace an offensive word, producing some atypical sentences, as discussed in dos Santos et al. (2018).

Qualitative Analysis. Table 4 presents generation examples from different models. We observe that CAST is better at replacing informal words with formal ones (Examples B and C), and generates more context-aware sentences (Examples A and C), possibly due to the use of the coherence and style classifiers. We also observe that exploiting context information can help the model preserve the semantic content of the original sentence (Example B).

| Model | Acc. | Coherence | BLEU | GLEU | Acc. | Coherence | BLEU | PPL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CAST | 68.04 | 85.47 | 26.38 | 15.06 | 88.45 | 85.98 | 23.92 | 93.03 |
| w/o context encoder | 65.35 | 82.90 | 23.98 | 14.17 | 84.15 | 80.96 | 20.54 | 127.02 |
| w/o coherence classifier | 65.47 | 80.16 | 14.82 | 14.45 | 85.11 | 79.37 | 21.97 | 115.57 |
| w/o both | 62.19 | 74.47 | 15.88 | 10.46 | 72.69 | 78.15 | 13.14 | 147.31 |
| w/o non-parallel data | 60.19 | 75.49 | 13.50 | 9.88 | 70.84 | 78.72 | 10.53 | 151.08 |

Table 5: Ablation study of CAST on two style transfer tasks (columns 2-5: formality transfer; columns 6-9: offensiveness transfer).

Ablation Study. To investigate the effectiveness of each component of the CAST model, we conduct detailed ablation studies and summarize the results in Table 5. The experiments show that the context encoder and the coherence classifier play an important role in the proposed model. The context encoder improves content preservation and style transfer accuracy, demonstrating the effectiveness of using context. The coherence classifier helps improve the coherence score, but not much for style accuracy. By using these two components, our model strikes a proper balance between translating to the correct style and maintaining contextual consistency. When both of them are removed (the 4th row), performance on all metrics drops significantly. We also observe that without using non-parallel data, the model performs poorly, showing the benefit of using a hybrid approach and more data for this task.

| Task | Aspect | vs. Contextual Seq2Seq (win/lose/tie) | vs. Hybrid Seq2Seq (win/lose/tie) | vs. ControlGen (win/lose/tie) |
| --- | --- | --- | --- | --- |
| Formality Transfer | Style Control | 57.1 / 28.3 / 14.6 | 46.9 / 26.1 / 28.0 | 72.1 / 12.6 / 25.3 |
| | Content Preservation | 59.7 / 22.1 / 18.2 | 50.4 / 20.8 / 28.2 | 68.8 / 14.5 / 17.7 |
| | Context Consistency | 56.4 / 23.1 / 20.5 | 51.5 / 19.7 / 28.8 | 70.1 / 10.6 / 19.3 |
| Offensiveness Transfer | Style Control | 58.6 / 25.3 / 16.1 | 50.1 / 29.2 / 20.3 | 54.8 / 19.9 / 25.3 |
| | Content Preservation | 62.3 / 26.5 / 11.2 | 54.0 / 17.5 / 28.5 | 53.1 / 30.2 / 16.7 |
| | Context Consistency | 60.1 / 32.4 / 17.5 | 55.3 / 24.9 / 20.8 | 58.1 / 35.8 / 16.7 |

Table 6: Results of pairwise human evaluation between CAST and three baselines on two style transfer tasks. Win/lose/tie indicate the percentage of results generated by CAST being better than, worse than, or equal to the reference model.

Human Evaluation. Considering the subjective nature of this task, we conduct a human evaluation of model outputs regarding content preservation, style control and context consistency. Given an original sentence along with its corresponding context and a pair of generated sentences from two different models, AMT workers were asked to select the better one based on these three aspects. The AMT interface also allows a neutral option if a worker considers both sentences equally good in a certain aspect. We randomly sampled 200 sentences from the test set and collected three human responses for each pair. Table 6 reports the pairwise comparison results on both tasks.
Based on human judgment, the quality of the sentences transferred by CAST is significantly higher than that of the other methods across all three aspects. This is consistent with the experimental results on the automatic metrics discussed earlier.

# 6 Conclusion

In this paper, we present a new task: Contextual Text Style Transfer. Two new benchmark datasets are introduced for this task, which contain annotated sentence pairs accompanied by paragraph context. We also propose the new CAST model, which effectively enforces content preservation and context coherence by exploiting abundant non-parallel data in a hybrid approach. Quantitative and human evaluations demonstrate that the CAST model significantly outperforms baseline methods that do not consider context information. We believe our model takes a first step towards modeling context information for text style transfer. In future work, we will explore more advanced solutions, e.g., using a better encoder/decoder such as GPT-2 (Radford et al., 2019) or BERT (Devlin et al., 2019), adversarial learning (Zhu et al., 2020), or knowledge distillation (Chen et al., 2019).

# References

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.

Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O'Reilly Media.

Yen-Chun Chen, Zhe Gan, Yu Cheng, Jingzhou Liu, and Jingjing Liu. 2019. Distilling the knowledge of BERT for text generation. arXiv preprint arXiv:1911.03829.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL.

Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In AAAI.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In NeurIPS.
Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In ACL.

Zhiting Hu, Haoran Shi, Zichao Yang, Bowen Tan, Tiancheng Zhao, Junxian He, Wentao Wang, Lianhui Qin, Di Wang, et al. 2018. Texar: A modularized, versatile, and extensible toolkit for text generation. arXiv preprint arXiv:1809.00794.

Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward controlled generation of text. In ICML.

Harsh Jhamtani, Varun Gangal, Eduard Hovy, and Eric Nyberg. 2017. Shakespearizing modern language using copy-enriched sequence-to-sequence models. arXiv preprint arXiv:1707.01161.

Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882.

Bryan Klimt and Yiming Yang. 2004. Introducing the Enron corpus. In CEAS.

Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018. Unsupervised machine translation using monolingual corpora only. In ICLR.

Dianqi Li, Yizhe Zhang, Zhe Gan, Yu Cheng, Chris Brockett, Ming-Ting Sun, and Bill Dolan. 2019. Domain adaptive text style transfer. arXiv preprint arXiv:1908.09395.

Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, and Bill Dolan. 2020. Contextualized perturbation for textual adversarial attack. arXiv preprint arXiv:2009.07502.

Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: A simple approach to sentiment and style transfer. In NAACL.

Lajanugen Logeswaran, Honglak Lee, and Samy Bengio. 2018. Content preserving text generation with attribute controls. In NeurIPS.

Sourab Mangrulkar, Suhani Shrivastava, Veena Thenkanidiyoor, and Dileep Aoor Dinesh. 2018. A context-aware convolutional natural language generation model for dialogue systems. In SIGDIAL.

Tomas Mikolov and Geoffrey Zweig. 2012. Context dependent recurrent neural network language model. In IEEE Spoken Language Technology Workshop (SLT).
Courtney Napoles, Keisuke Sakaguchi, Matt Post, and Joel Tetreault. 2015. Ground truth for grammatical error correction metrics. In ACL.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In ACL.

Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W. Black. 2018. Style transfer through back-translation. arXiv preprint arXiv:1804.09000.

Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.

Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer. In NAACL.

Cicero Nogueira dos Santos, Igor Melnyk, and Inkit Padhi. 2018. Fighting offensive language on social media with unsupervised text style transfer. In ACL.

Iulian Vlad Serban, Chinnadhurai Sankar, Mathieu Germain, Saizheng Zhang, Zhouhan Lin, Sandeep Subramanian, Taesup Kim, Michael Pieper, Sarath Chandar, Nan Rosemary Ke, Sai Mudumba, Alexandre de Brébisson, Jose Sotelo, Dendi Suhubdy, Vincent Michalski, Alexandre Nguyen, Joelle Pineau, and Yoshua Bengio. 2017. A deep reinforcement learning chatbot. arXiv preprint arXiv:1709.02349.

Mingyue Shang, Piji Li, Zhenxin Fu, Lidong Bing, Dongyan Zhao, Shuming Shi, and Rui Yan. 2019. Semi-supervised text style transfer: Cross projection in latent space. In EMNLP-IJCNLP.

Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In NeurIPS.

Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and Jian-Yun Nie. 2015a. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In CIKM.

Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015b.
A neural network approach to context-sensitive generation of conversational responses. In NAACL.

Sandeep Subramanian, Guillaume Lample, Eric Michael Smith, Ludovic Denoyer, Marc'Aurelio Ranzato, and Y-Lan Boureau. 2018. Multiple-attribute text style transfer. arXiv preprint arXiv:1811.00552.

Jian Tang, Yifan Yang, Samuel Carton, Ming Zhang, and Qiaozhu Mei. 2016. Context-aware natural language generation with recurrent neural networks. arXiv preprint arXiv:1611.09900.

Oriol Vinyals and Quoc V. Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869.

Tian Wang and Kyunghyun Cho. 2015. Larger-context language modelling. arXiv preprint arXiv:1511.03729.

Wenlin Wang, Zhe Gan, Wenqi Wang, Dinghan Shen, Jiaji Huang, Wei Ping, Sanjeev Satheesh, and Lawrence Carin. 2017. Topic compositional neural language model. arXiv preprint arXiv:1712.09783.

Tsung-Hsien Wen, Milica Gasic, Dongho Kim, Nikola Mrksic, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking. In SIGDIAL.

Jingjing Xu, Xu Sun, Qi Zeng, Xuancheng Ren, Xiaodong Zhang, Houfeng Wang, and Wenjie Li. 2018. Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach. arXiv preprint arXiv:1805.05181.

Ruochen Xu, Tao Ge, and Furu Wei. 2019. Formality style transfer with hybrid textual annotations. arXiv preprint arXiv:1903.06353.

Zichao Yang, Zhiting Hu, Chris Dyer, Eric P. Xing, and Taylor Berg-Kirkpatrick. 2018. Unsupervised text style transfer using language models as discriminators. In NeurIPS.

Hongyu Zang and Xiaojun Wan. 2017. Towards automatic generation of product reviews from aspect-sentiment scores. In INLG.

Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. 2020.
Freelb: Enhanced adversarial training for natural language understanding. In ICLR.
# Continual Learning for Natural Language Generation in Task-oriented Dialog Systems

Fei Mi$^{1*}$, Liangwei Chen$^{1}$, Mengjie Zhao$^{2}$, Minlie Huang$^{3*}$ and Boi Faltings$^{1}$

$^{1}$LIA, EPFL, Lausanne, Switzerland

$^{2}$CIS, LMU, Munich, Germany

$^{3}$CoAI, DCST, Tsinghua University, Beijing, China

{fei.mi, liangwei.chen, boi.faltings}@epfl.ch

mzhao@cis.lmu.de, aihuang@tsinghua.edu.cn

# Abstract

Natural language generation (NLG) is an essential component of task-oriented dialog systems.
Despite the recent success of neural approaches for NLG, they are typically developed in an offline manner for particular domains. To better fit real-life applications where new data come in a stream, we study NLG in a "continual learning" setting to expand its knowledge to new domains or functionalities incrementally. The major challenge towards this goal is catastrophic forgetting, meaning that a continually trained model tends to forget the knowledge it has learned before. To this end, we propose a method called ARPER (Adaptively Regularized Prioritized Exemplar Replay) by replaying prioritized historical exemplars, together with an adaptive regularization technique based on Elastic Weight Consolidation. Extensive experiments to continually learn new domains and intents are conducted on MultiWoZ-2.0 to benchmark ARPER with a wide range of techniques. Empirical results demonstrate that ARPER significantly outperforms other methods by effectively mitigating the detrimental catastrophic forgetting issue. + +# 1 Introduction + +As an essential part of task-oriented dialog systems (Wen et al., 2015b; Bordes et al., 2016), the task of Natural Language Generation (NLG) is to produce a natural language utterance containing the desired information given a semantic representation (so-called dialog act). Existing NLG models (Wen et al., 2015c; Tran and Nguyen, 2017; Tseng et al., 2018) are typically trained offline using annotated data from a single or a fixed set of domains. However, a desirable dialog system in real-life applications often needs to expand its knowledge to new domains and functionalities. Therefore, it is crucial to develop an NLG approach with the capability + +of continual learning after a dialog system is deployed. Specifically, an NLG model should be able to continually learn new utterance patterns without forgetting the old ones it has already learned. 
+ +The major challenge of continual learning lies in catastrophic forgetting (McCloskey and Cohen, 1989; French, 1999). Namely, a neural model trained on new data tends to forget the knowledge it has acquired on previous data. We diagnose in Section 4.4 that neural NLG models suffer such detrimental catastrophic forgetting issues when continually trained on new domains. A naive solution is to retrain the NLG model using all historical data every time. However, it is not scalable due to severe computation and storage overhead. + +To this end, we propose storing a small set of representative utterances from previous data, namely exemplars, and replay them to the NLG model each time it needs to be trained on new data. Methods using exemplars have shown great success in different continual learning (Rebuffi et al., 2017; Castro et al., 2018; Chaudhry et al., 2019) and reinforcement learning (Schaul et al., 2016; Andrychowicz et al., 2017) tasks. In this paper, we propose a prioritized exemplar selection scheme to choose representative and diverse exemplar utterances for NLG. We empirically demonstrate that the prioritized exemplar replay helps to alleviate catastrophic forgetting by a large degree. + +In practice, the number of exemplars should be reasonably small to maintain a manageable memory footprint. Therefore, the constraint of not forgetting old utterance patterns is not strong enough. To enforce a stronger constraint, we propose a regularization method based on the well-known technique, Elastic Weight Consolidation (EWC (Kirkpatrick et al., 2017)). The idea is to use a quadratic term to elastically regularize the parameters that are important for previous data. Besides the wide application in computer vision, EWC has been recently + +applied to the domain adaptation task for Neural Machine Translation (Thompson et al., 2019; Saunders et al., 2019). In this paper, we combine EWC with exemplar replay by approximating the Fisher Information Matrix w.r.t. 
the carefully chosen exemplars so that not all historical data need to be stored. Furthermore, we propose to adaptively adjust the regularization weight to consider the difference between new and old data to flexibly deal with different new data distributions. + +To summarize our contribution, (1) to the best of our knowledge, this is the first attempt to study the practical continual learning configuration for NLG in task-oriented dialog systems; (2) we propose a method called Adaptively Regularized Prioritized Exemplar Replay (ARPER) for this task, and benchmark it with a wide range of state-of-the-art continual learning techniques; (3) extensive experiments are conducted on the MultiWoZ-2.0 (Budzianowski et al., 2018) dataset to continually learn new tasks, including domains and intents using two base NLG models. Empirical results demonstrate the superior performance of ARPER and its ability to mitigate catastrophic forgetting. Our code is available at https://github.com/MiFei/Continual-Learning-for-NLG + +# 2 Related Work + +Continual Learning. The major challenge for continual learning is catastrophic forgetting (McCloskey and Cohen, 1989; French, 1999), where optimization over new data leads to performance degradation on data learned before. Methods designed to mitigate catastrophic forgetting fall into three categories: regularization, exemplar replay, and dynamic architectures. Methods using dynamic architectures (Rusu et al., 2016; Maltoni and Lomonaco, 2019) increase model parameters throughout the continual learning process, which leads to an unfair comparison with other methods. In this work, we focus on the first two categories. + +Regularization methods add specific regularization terms to consolidate knowledge learned before. Li and Hoiem (2017) introduced the knowledge distillation (Hinton et al., 2015) to penalize model logit change, and it has been widely employed in Rebuffi et al. (2017); Castro et al. (2018); Wu et al. (2019); Hou et al. 
(2019); Zhao et al. (2019). Another direction is to regularize parameters crucial to old knowledge according to various importance measures (Kirkpatrick et al., 2017; + +Zenke et al., 2017; Aljundi et al., 2018). + +Exemplar replay methods store past samples, a.k.a exemplars, and replay them periodically. Instead of selecting exemplars at random, Rebuffi et al. (2017) incorporated the Herding technique (Welling, 2009) to choose exemplars that best approximate the mean feature vector of a class, and it is widely used in Castro et al. (2018); Wu et al. (2019); Hou et al. (2019); Zhao et al. (2019); Mi et al. (2020a,b). Ramalho and Garnelo (2019) proposed to store samples that the model is least confident. Chaudhry et al. (2019) demonstrated the effectiveness of exemplars for various continual learning tasks in computer vision. + +Catastrophic Forgetting in NLP. The catastrophic forgetting issue in NLP tasks has raised increasing attention recently (Mou et al., 2016; Chronopoulou et al., 2019). Yogatama et al. (2019); Arora et al. (2019) identified the detrimental catastrophic forgetting issue while fine-tuning ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019). To deal with this issue, He et al. (2019) proposed to replay pre-train data during fine-tuning heavily, and Chen et al. (2020) proposed an improved Adam optimizer to recall knowledge captured during pretraining. The catastrophic forgetting issue is also noticed in domain adaptation setups for neural machine translation (Saunders et al., 2019; Thompson et al., 2019; Varis and Bojar, 2019) and the reading comprehension task (Xu et al., 2019). + +Lee (2017) firstly studied the continual learning setting for dialog state tracking in task-oriented dialog systems. However, their setting is still a one-time adaptation process, and the adopted dataset is small. Shen et al. 
(2019) recently applied progressive network (Rusu et al., 2016) for the semantic slot filling task from a continual learning perspective similar to ours. However, their method is based on a dynamic architecture that is beyond the scope of this paper. Liu et al. (2019) proposed a Boolean operation of "conceptor" matrices for continually learning sentence representations using linear encoders. Li et al. (2020) combined continual learning and language systematic compositionality for sequence-to-sequence learning tasks. + +Natural Language Generation (NLG). In this paper, we focus on NLG for task-oriented dialog systems. A series of neural methods have been proposed to generate accurate, natural, and diverse utterances, including HLSTM (Wen et al., 2015a), + +SCLSTM (Wen et al., 2015c), Enc-Dec (Wen et al., 2015b), RALSTM (Tran and Nguyen, 2017), SCVAE (Tseng et al., 2018). + +Recent works have considered the domain adaptation setting. Tseng et al. (2018); Tran and Nguyen (2018b) proposed to learn domain-invariant representations using VAE (Kingma and Welling, 2013). They later designed two domain adaptation critics (Tran and Nguyen, 2018a). Recently, Mi et al. (2019); Qian and Yu (2019); Peng et al. (2020) studied learning new domains with limited training data. However, existing methods only consider a one-time adaptation process. The continual learning setting and the corresponding catastrophic forgetting issue remain to be explored. + +# 3 Model + +In this section, we first introduce the background of neural NLG models in Section 3.1, and the continual learning formulation in Section 3.2. In Section 3.3, we introduce the proposed method ARPER. + +# 3.1 Background on Neural NLG Models + +The NLG component of task-oriented dialog systems is to produce natural language utterances conditioned on a semantic representation called dialog act (DA). 
Specifically, the dialog act $\mathbf{d}$ is defined as the combination of an intent $\mathbf{I}$ and a set of slot-value pairs $S(\mathbf{d}) = \{(s_i,v_i)\}_{i = 1}^p$:

$$
\mathbf{d} = \Big[\underbrace{\mathbf{I}}_{\text{Intent}},\ \underbrace{(s_{1}, v_{1}), \dots, (s_{p}, v_{p})}_{\text{Slot-value pairs}}\Big], \tag{1}
$$

where $p$ is the number of slot-value pairs. The intent $\mathbf{I}$ controls the utterance functionality, while the slot-value pairs contain the information to express. For example, "There is a restaurant called [La Margherita] that serves [Italian] food." is an utterance corresponding to the DA "[Inform, (name=La Margherita, food=Italian)]".

Neural models have recently shown promising results for NLG tasks. Conditioned on a DA, a neural NLG model generates an utterance containing the desired information word by word. For a DA $\mathbf{d}$ with the corresponding ground-truth utterance $\mathbf{Y} = (y_{1}, y_{2}, \dots, y_{K})$, the probability of generating $\mathbf{Y}$ is factorized as:

$$
f_{\theta}(\mathbf{Y}, \mathbf{d}) = \prod_{k = 1}^{K} p_{y_{k}} = \prod_{k = 1}^{K} p\left(y_{k} \mid y_{< k}, \mathbf{d}, \theta\right), \tag{2}
$$

where $f_{\theta}$ is the NLG model parameterized by $\theta$ and $p_{y_k}$ is the output probability (i.e., softmax of logits) of the ground-truth token $y_{k}$ at position $k$. The typical objective function for an utterance $\mathbf{Y}$ with DA $\mathbf{d}$ is the average cross-entropy loss w.r.t. all tokens in the utterance (Wen et al., 2015c,b; Tran and Nguyen, 2017; Peng et al., 2020):

$$
L_{CE}(\mathbf{Y}, \mathbf{d}, \theta) = -\frac{1}{K} \sum_{k = 1}^{K} \log\left(p_{y_{k}}\right) \tag{3}
$$

![](images/4def2e362d3cd1ea4fc7a853df8e8d1634bdccbaef1066ffb1b3d2cfe6e3a5e5.jpg)
Figure 1: An example of an NLG model continually learning new domains. The model needs to perform well on all domains it has seen before. For example, $f_{\theta_3}$ needs to deal with all three previous domains (Attraction, Restaurant, Hotel).

# 3.2 Continual Learning of NLG

In practice, an NLG model needs to continually learn new domains or functionalities. Without loss of generality, we assume that new data arrive task by task (Rebuffi et al., 2017; Kirkpatrick et al., 2017). In a new task $t$, new data $\mathbf{D}_t$ are used to train the NLG model $f_{\theta_{t-1}}$ obtained up to the last task. The updated model $f_{\theta_t}$ needs to perform well on all tasks so far. An example setting of continually learning new domains is illustrated in Figure 1. A task can be defined with different modalities to reflect diverse real-life applications. In subsequent experiments, we consider continually learning new domains and intents in Eq. (1).

We emphasize that the setting of continual learning differs from that of domain adaptation. The latter is a one-time adaptation process whose focus is to optimize performance on a target domain transferred from source domains, without considering a potential performance drop on the source domains (Mi et al., 2019; Qian and Yu, 2019; Peng et al., 2020). In contrast, continual learning requires an NLG model to continually learn new tasks over multiple transfers, and the goal is to make the model perform well on all tasks learned so far.

# 3.3 Adaptively Regularized Prioritized Exemplar Replay (ARPER)

We introduce the proposed method (ARPER), which combines prioritized exemplar replay with an adaptive regularization technique to further alleviate the catastrophic forgetting issue.

# 3.3.1 Prioritized Exemplar Replay

To prevent the NLG model from catastrophically forgetting the utterance patterns of earlier tasks, a small subset of each task's utterances are selected as exemplars, and the exemplars of previous tasks are replayed in later tasks.
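As a concrete illustration of the per-utterance loss in Eq. (3), here is a minimal pure-Python sketch; the per-step distributions are toy values, whereas in the model they come from the decoder's softmax over the vocabulary:

```python
import math

def avg_cross_entropy(probs_per_step, target_ids):
    """Eq. (3): L_CE(Y, d) = -(1/K) * sum_k log p(y_k | y_<k, d).

    probs_per_step: K softmax distributions over the vocabulary,
                    one per decoding step.
    target_ids:     the K ground-truth token ids (y_1, ..., y_K).
    """
    assert len(probs_per_step) == len(target_ids)
    K = len(target_ids)
    return -sum(math.log(p[y]) for p, y in zip(probs_per_step, target_ids)) / K

# Toy example: vocabulary of 3 tokens, ground-truth utterance (y_1, y_2) = (0, 2).
probs = [
    [0.7, 0.2, 0.1],  # step 1: p(. | d)
    [0.1, 0.1, 0.8],  # step 2: p(. | y_1, d)
]
loss = avg_cross_entropy(probs, [0, 2])  # = -(log 0.7 + log 0.8) / 2
```

The sequence probability of Eq. (2) is then recovered as $\exp(-K \cdot L_{CE})$, i.e., $0.7 \times 0.8$ in this toy case.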
When training the NLG model $f_{\theta_t}$ for task $t$, the set of exemplars from previous tasks, denoted as $\mathbf{E}_{1:t-1} = \{\mathbf{E}_1, \dots, \mathbf{E}_{t-1}\}$, is replayed by joining it with the data $\mathbf{D}_t$ of the current task. Therefore, the training objective with exemplar replay can be written as: + +$$ +L_{ER}\left(\theta_{t}\right) = \sum_{\left\{\mathbf{Y}, \mathbf{d}\right\} \in \mathbf{D}_{t} \cup \mathbf{E}_{1:t - 1}} L_{CE}\left(\mathbf{Y}, \mathbf{d}, \theta_{t}\right). \tag{4} +$$ + +The set of exemplars of task $t$, referred to as $\mathbf{E}_t$, is selected after $f_{\theta_t}$ has been trained, and will be replayed in later tasks. + +The quality of exemplars is crucial to preserving the performance on previous tasks. We propose a prioritized exemplar selection method to select representative and diverse utterances as follows. + +Representative utterances. The first criterion is that exemplars $\mathbf{E}_t$ of a task $t$ should be representative of $\mathbf{D}_t$. We propose to select $\mathbf{E}_t$ as a priority list from $\mathbf{D}_t$ that minimizes a priority score: + +$$ +U(\mathbf{Y}, \mathbf{d}) = L_{CE}(\mathbf{Y}, \mathbf{d}, \theta_{t}) \cdot |S(\mathbf{d})|^{\beta}, \tag{5} +$$ + +where $S(\mathbf{d})$ is the set of slots in $\mathbf{Y}$, and $\beta$ is a hyper-parameter. This formula correlates the representativeness of an utterance with its $L_{CE}$. Intuitively, the NLG model $f_{\theta_t}$ trained on $\mathbf{D}_t$ should be confident about representative utterances of $\mathbf{D}_t$, i.e., assign them low $L_{CE}$. However, $L_{CE}$ is agnostic to the number of slots. We found that an utterance with many common slots in a task could also have very low $L_{CE}$, yet using such utterances as exemplars may lead to overfitting and thus forgetting of previous general knowledge.
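Computing the priority score of Eq. (5) is a one-liner; a minimal sketch in Python (function and argument names are illustrative, not from the authors' code):

```python
def priority_score(ce_loss: float, num_slots: int, beta: float = 1.0) -> float:
    """Priority score U(Y, d) = L_CE(Y, d, theta) * |S(d)|^beta (Eq. 5).

    Lower is better: a trained model should be confident (low
    cross-entropy) on representative utterances, while the
    |S(d)|^beta factor (with beta > 0) inflates the score of
    utterances whose low loss merely reflects many common slots.
    """
    return ce_loss * (num_slots ** beta)
```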
The second term $|S(\mathbf{d})|^{\beta}$ controls how strongly the number of slots affects an utterance's priority as an exemplar. We empirically found in Appendix A.1 that the best $\beta$ is greater than 0. + +Diverse utterances. The second criterion is that exemplars should contain diverse slots of the task, rather than being similar or repetitive. A drawback of the above priority score is that similar or duplicated utterances containing the same set of frequent slots could be prioritized over utterances with a diverse set of slots. To encourage diversity of the selected exemplars, we propose an iterative approach to add data from $\mathbf{D}_t$ to the priority list $\mathbf{E}_t$ based on the above priority score. At each iteration, if the set of slots of the current utterance is already covered by utterances in $\mathbf{E}_t$, we skip it and move on to the data with the next best priority score. + +Algorithm 1 select_exemplars: Prioritized exemplar selection procedure of ARPER for task $t$ +1: procedure select_exemplars $(\mathbf{D}_t,f_{\theta_t},m)$ +2: $\mathbf{E}_t\gets$ new Priority_list() +3: $\mathbf{D}_t\gets$ sort $(\mathbf{D}_t,\text{key} = U,\text{order} = \text{asc})$ +4: while $|\mathbf{E}_t| < m$ do +5: $\mathbf{S}_{\text{seen}}\gets$ new Set() +6: for $\{\mathbf{Y},\mathbf{d}\} \in \mathbf{D}_t$ do +7: if $S(\mathbf{d})\in \mathbf{S}_{\text{seen}}$ then continue +8: else +9: $\mathbf{D}_t.\text{remove}(\{\mathbf{Y},\mathbf{d}\})$ +10: $\mathbf{E}_t.\text{insert}(\{\mathbf{Y},\mathbf{d}\})$ +11: $\mathbf{S}_{\text{seen}}.\text{insert}(S(\mathbf{d}))$ +12: if $|\mathbf{E}_t| == m$ then +13: return $\mathbf{E}_t$ + +Algorithm 1 shows the procedure to select $m$ exemplars as a priority list $\mathbf{E}_t$ from $\mathbf{D}_t$. The outer loop allows multiple passes through $\mathbf{D}_t$, so that several utterances for the same set of slots $S(\mathbf{d})$ can be selected.
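The selection loop can be rendered as a short Python sketch, a simplified reimplementation of Algorithm 1 under assumed types: each example is a (utterance, slot-set) pair with hashable slot sets, and the priority score is supplied by the caller:

```python
def select_exemplars(data, priority, m):
    """Prioritized, diversity-aware exemplar selection (Algorithm 1 sketch).

    data:     list of (utterance, slot_set) pairs; slot_set is a frozenset.
    priority: callable returning the score U for a pair (lower = better).
    m:        exemplar budget.
    """
    pool = sorted(data, key=priority)  # ascending priority score
    exemplars = []
    while pool and len(exemplars) < m:
        seen = set()  # slot sets already covered in this pass
        for item in list(pool):
            _, slots = item
            if slots in seen:
                continue  # defer duplicate slot sets to a later pass
            pool.remove(item)
            exemplars.append(item)
            seen.add(slots)
            if len(exemplars) == m:
                break
    return exemplars
```

Sorting once up front and deferring duplicate slot sets to later passes mirrors the outer while loop of Algorithm 1: every distinct slot set is covered before any slot set is represented twice.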
+ +# 3.3.2 Reducing Exemplars in Previous Tasks + +Algorithm 1 requires the number of exemplars to be given. A straightforward choice is to store the same, fixed number of exemplars for each task, as in Castro et al. (2018); Wu et al. (2019); Hou et al. (2019). However, this method has two drawbacks: (1) the memory usage increases linearly with the number of tasks; (2) it does not discriminate between tasks with different difficulty levels. + +To address this, we propose to store a fixed total number of exemplars throughout the entire continual learning process to maintain a bounded memory footprint, as in Rebuffi et al. (2017). As more tasks are continually learned, exemplars of previous tasks are gradually reduced by keeping only the ones at the front of the priority list$^{1}$, and the exemplar size of a task is set to be proportional to the training data size of the task to reflect the task's difficulty. To be specific, suppose $M$ exemplars are kept in total. The number of exemplars for a task is: + +$$ +\left| \mathbf{E}_{i} \right| = M \cdot \frac{\left| \mathbf{D}_{i} \right|}{\sum_{j = 1}^{t} \left| \mathbf{D}_{j} \right|}, \quad \forall i \in 1, \dots, t, \tag{6} +$$ + +where $M$ is set to 250 or 500 in our experiments. + +# 3.3.3 Constraint with Adaptive Elastic Weight Consolidation + +Although exemplars of previous tasks are stored and replayed, the number of exemplars should be reasonably small ($M \ll |\mathbf{D}_{1:t}|$) to reduce memory overhead. As a consequence, the constraint made so far to prevent the NLG model from catastrophically forgetting previous utterance patterns is not strong enough. To enforce a stronger constraint, we propose a regularization method based on the well-known Elastic Weight Consolidation (EWC, Kirkpatrick et al., 2017) technique. + +Elastic Weight Consolidation (EWC). EWC utilizes a quadratic term to elastically regularize parameters important for previous tasks.
The loss function of using the EWC regularization together with exemplar replay for task $t$ can be written as: + +$$ +L_{ER-EWC}\left(\theta_{t}\right) = L_{ER}\left(\theta_{t}\right) + \lambda \sum_{i = 1}^{N} F_{i}\left(\theta_{t,i} - \theta_{t - 1,i}\right)^{2}, \tag{7} +$$ + +where $N$ is the number of model parameters; $\theta_{t - 1,i}$ is the $i$-th converged parameter of the model trained till the previous task; $F_{i} = \nabla^{2}L_{CE}^{\mathbf{E}_{1:t - 1}}(\theta_{t - 1,i})$ is the $i$-th diagonal element of the Fisher Information Matrix approximated w.r.t. the set of previous exemplars $\mathbf{E}_{1:t - 1}$. $F_{i}$ measures the importance of $\theta_{t - 1,i}$ to previous tasks represented by $\mathbf{E}_{1:t - 1}$. Typical usages of EWC compute $F_{i}$ w.r.t. a uniformly sampled subset of historical data. Instead, we propose to compute $F_{i}$ w.r.t. the carefully chosen $\mathbf{E}_{1:t - 1}$, so that not all historical data need to be stored. The scalar $\lambda$ controls the contribution of the quadratic regularization term. The idea is to elastically penalize changes to parameters important to previous tasks (those with large $F_{i}$), while assigning more plasticity to parameters with small $F_{i}$. + +Adaptive regularization. In practice, new tasks have different difficulties and similarities compared to previous tasks. Therefore, the degree to which previous knowledge needs to be preserved varies.
To this end, we propose an adaptive weight $(\lambda)$ for the EWC regularization term as follows: + +$$ +\lambda = \lambda_{base} \sqrt{V_{1:t - 1} / V_{t}}, \tag{8} +$$ + +where $V_{1:t-1}$ is the old word vocabulary size of previous tasks, $V_t$ is the new word vocabulary size of the current task $t$, and $\lambda_{base}$ is a hyper-parameter. In general, $\lambda$ increases when the ratio of the old word vocabulary size to the new one increases. In other words, the regularization term becomes more important when the new task contains fewer new words to learn. + +Algorithm 2 learn_task: Procedure of ARPER to continually learn task $t$ +1: procedure learn_task( $\mathbf{D}_t$ , $\mathbf{E}_{1:t-1}$ , $f_{\theta_{t-1}}$ , $M$ ) +2: $\theta_t \gets \theta_{t-1}$ +3: while $\theta_t$ not converged do +4: $\theta_t \gets \text{update}(L_{ER-EWC}(\theta_t))$ +5: $m \gets M \cdot \frac{|\mathbf{D}_t|}{\sum_{j=1}^{t} |\mathbf{D}_j|}$ +6: $\mathbf{E}_t \gets \text{select\_exemplars}(\mathbf{D}_t, f_{\theta_t}, m)$ +7: for $j = 1$ to $t - 1$ do +8: $\mathbf{E}_j \gets \mathbf{E}_j.\text{top}(M \cdot \frac{|\mathbf{D}_j|}{\sum_{k=1}^{t} |\mathbf{D}_k|})$ +9: return $f_{\theta_t}$ , $\mathbf{E}_t$ + +Algorithm 2 summarizes the continual learning procedure of ARPER for task $t$. $\theta_t$ is initialized with $\theta_{t-1}$, and it is trained with prioritized exemplar replay and adaptive EWC in Eq. (7). After training $\theta_t$, exemplars $\mathbf{E}_t$ of task $t$ are computed by Algorithm 1, and exemplars of previous tasks are reduced by keeping the most prioritized ones to preserve the total exemplar size. + +# 4 Experiments + +# 4.1 Dataset + +We use the MultiWoZ-2.0 dataset $^{2}$ (Budzianowski et al., 2018) containing six domains (Attraction, Hotel, Restaurant, Booking, Taxi and Train) and seven DA intents ("Inform, Request, Select, Recommend, Book, Offer-Booked, No-Offer"). The original train/validation/test splits are used.
For methods using exemplars, both the training and validation sets are continually expanded with exemplars extracted from previous tasks. + +To support experiments on continual learning of new domains, we pre-processed the original dataset by segmenting multi-domain utterances into single-domain ones. For instance, the utterance "The ADC Theatre is located on Park Street. Before I find your train, could you tell me where you would like to go?" is split into two utterances with domains "Attraction" and "Train", respectively. If multiple sentences of the same domain exist in the original utterance, they are kept in one utterance after pre-processing. In each continual learning task, all training data of one domain are used to train the NLG model, as illustrated in Figure 1. Similar pre-processing is done at the granularity of DA intents for the experiments in Section 4.6. The statistics of the pre-processed MultiWoZ-2.0 dataset are shown in Figure 2. The resulting datasets and the pre-processing scripts are open-sourced. + +![](images/e675ecf135796a7f92311c463f65d1f0b957439585e3a472911c2e6ee277d3b9.jpg) +Figure 2: Venn diagram visualizing intents in different domains. The number of utterances of each domain (bold) and intents (italic) is indicated in parentheses. + +# 4.2 Evaluation Metrics + +Following previous studies, we use the slot error rate (SER) and the BLEU-4 score (Papineni et al., 2002) as evaluation metrics. SER is the ratio of the number of missing and redundant slots in a generated utterance to the total number of ground truth slots in the DA.
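As an illustration, SER for a single utterance can be computed as follows, assuming slots have already been extracted from the generated text as a set of (slot, value) pairs (a simplification; matching slot values against surface realizations is more involved in practice):

```python
def slot_error_rate(generated_slots, ground_truth_slots):
    """SER = (#missing + #redundant slots) / #ground-truth slots.

    Both arguments are sets of (slot, value) pairs. Slots in the
    ground truth but absent from the generation are missing; generated
    slots absent from the ground truth are redundant.
    """
    missing = len(ground_truth_slots - generated_slots)
    redundant = len(generated_slots - ground_truth_slots)
    return (missing + redundant) / len(ground_truth_slots)
```

Note that SER can exceed 1 (i.e., 100%) when an utterance is flooded with redundant slots, which is why values above 100% appear for Finetune in the result tables.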
+ +To better evaluate the continual learning ability, we use two additional commonly used metrics (Kemker et al., 2018) for both SER and BLEU-4: + +$$ +\Omega_{all} = \frac{1}{T} \sum_{i = 1}^{T} \Omega_{all,i}, \quad \Omega_{first} = \frac{1}{T} \sum_{i = 1}^{T} \Omega_{first,i}, +$$ + +where $T$ is the total number of continual learning tasks; $\Omega_{all,i}$ is the test performance on all the tasks after the $i^{th}$ task has been learned; $\Omega_{first,i}$ is that on the first task after the $i^{th}$ task has been learned. Since $\Omega$ can be either SER or BLEU-4, both $\Omega_{all}$ and $\Omega_{first}$ have two versions. $\Omega_{all}$ evaluates the overall performance, while $\Omega_{first}$ evaluates the ability to alleviate catastrophic forgetting. + +# 4.3 Baselines + +Two methods without exemplars are as below: + +- Finetune: At each task, the NLG model is initialized with the model obtained till the last task, and then fine-tuned with the data from the current task. +- Full: At each task, the NLG model is trained with the data from the current and all historical tasks. This is the "upper bound" performance for continual learning w.r.t. $\Omega_{all}$. + +Several exemplar replay (ER) methods trained with Eq. (4) using different exemplar selection schemes are compared: + +- $ER_{herding}$ (Welling, 2009; Rebuffi et al., 2017): This scheme chooses exemplars that best approximate the mean DA vector over all training examples of the task. +- $ER_{random}$: This scheme selects exemplars at random. Despite its simplicity, the distribution of the selected exemplars matches the distribution of the current task in expectation. +- $ER_{prio}$: The proposed prioritized scheme (cf. Algorithm 1) to select representative and diverse exemplars.
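The $\Omega_{all}$ and $\Omega_{first}$ metrics of Section 4.2 can be derived from a matrix of per-task test results; a minimal sketch, where $\Omega_{all,i}$ is approximated as the unweighted mean over the tasks seen so far (an assumption for illustration):

```python
def continual_metrics(results):
    """Average Omega_all and Omega_first over T continual learning steps.

    results[i][j] is the test score on task j after the (i+1)-th task
    has been learned (entries with j > i are ignored). Omega_all,i is
    taken here as the mean score over all tasks seen so far;
    Omega_first,i is the score on the first task.
    """
    T = len(results)
    omega_all = sum(sum(results[i][: i + 1]) / (i + 1) for i in range(T)) / T
    omega_first = sum(results[i][0] for i in range(T)) / T
    return omega_all, omega_first
```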
+ +Based on $ER_{prio}$, four regularization methods (including ours) to further alleviate catastrophic forgetting are compared: + +- L2: A static L2 regularization obtained by setting $F_{i} = 1$ in Eq. (7); it regularizes all parameters equally. +- KD (Rebuffi et al., 2017; Wu et al., 2019; Hou et al., 2019): The widely used knowledge distillation (KD) loss (Hinton et al., 2015) is adopted, distilling the prediction logits of the current model w.r.t. those of the model trained till the last task. More implementation details are included in Appendix A.1. +- Dropout (Mirzadeh et al., 2020): Dropout (Hinton et al., 2012) was recently shown by Mirzadeh et al. (2020) to effectively alleviate catastrophic forgetting. We tuned the dropout rates assigned to the non-recurrent connections. +- ARPER (cf. Algorithm 2): The proposed method using adaptive EWC with $ER_{prio}$. + +We utilized the well-recognized semantically-conditioned LSTM (SCLSTM; Wen et al., 2015c) as
| Method | $\Omega_{all}$ SER% (250) | $\Omega_{all}$ BLEU-4 (250) | $\Omega_{first}$ SER% (250) | $\Omega_{first}$ BLEU-4 (250) | $\Omega_{all}$ SER% (500) | $\Omega_{all}$ BLEU-4 (500) | $\Omega_{first}$ SER% (500) | $\Omega_{first}$ BLEU-4 (500) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Finetune | 64.46 | 0.361 | 107.27 | 0.253 | 64.46 | 0.361 | 107.27 | 0.253 |
| $ER_{herding}$ | 16.89 | 0.535 | 9.89 | 0.532 | 12.25 | 0.555 | 4.53 | 0.568 |
| $ER_{random}$ | 10.93 | 0.552 | 6.96 | 0.553 | 8.36 | 0.569 | 4.41 | 0.572 |
| $ER_{prio}$ | 9.67** | 0.578 | 5.28** | 0.578 | 7.48** | 0.597 | 3.59* | 0.620 |
| $ER_{prio}$+L2 | 14.94 | 0.579 | 5.31** | 0.587 | 10.51 | 0.596 | 4.28** | 0.605 |
| $ER_{prio}$+KD | 8.65** | 0.586 | 6.87 | 0.601 | 7.37** | 0.596 | 4.89 | 0.617 |
| $ER_{prio}$+Dropout | 7.15** | 0.588 | 5.53** | 0.594 | 6.09* | 0.595 | 4.51** | 0.616 |
| ARPER | **5.22** | **0.590** | **2.99** | **0.624** | **5.12** | **0.598** | **2.81** | **0.627** |
| Full | 4.26 | 0.599 | 3.60 | 0.616 | 4.26 | 0.599 | 3.60 | 0.616 |
+ +Table 1: Average performance of continually learning 6 domains using 250/500 exemplars. The best performance excluding "Full" is in bold in each column. In each column, $\star$ indicates $p < 0.05$ and $\star\star$ indicates $p < 0.01$ for a one-tailed t-test comparing ARPER to the three top-performing competitors except Full. + +![](images/41b4b463a18d89afa9df95299057a26d23c49ccfa796f4ada8d5cfe65b778cbf.jpg) +Figure 3: Diagnosing the catastrophic forgetting issue in NLG. SER (Left) and BLEU-4 (Right) on the test data of "Attraction" at different epochs when a model pre-trained on the "Attraction" domain is continually trained on another domain, "Train". + +![](images/eb82fb60728a158d4d9467247ddd9df3990adc105896b675a05de1e7183fa68d.jpg) + +the base NLG model $f_{\theta}$ with one hidden layer of size 128. Dropout is not used by default; it is evaluated as a separate regularization technique (cf. $ER_{prio}$+Dropout). For all the above methods, the learning rate of Adam is set to 5e-3, the batch size is set to 128, and the maximum number of epochs used to train each task is set to 100. Early stopping is adopted to avoid over-fitting when the validation loss does not decrease for 10 consecutive epochs. To fairly compare different methods, they are trained with an identical configuration on the first task to have a consistent starting point. Hyper-parameters of different methods are included in Appendix A.1. + +# 4.4 Diagnose Forgetting in NLG + +Before proceeding to our main results, we first diagnose whether the catastrophic forgetting issue exists when training an NLG model continually. As an example, a model pre-trained on the "Attraction" domain is continually trained on the "Train" domain. We present test performance on "Attraction" at different epochs in Figure 3 with 250 exemplars.
+ +We can observe: (1) catastrophic forgetting indeed exists, as indicated by the sharp performance drop of Finetune; (2) replaying carefully chosen exemplars helps to alleviate catastrophic forgetting by a large degree, and $ER_{prio}$ does a better job than $ER_{random}$; and (3) ARPER greatly mitigates catastrophic forgetting, achieving similar or even better performance compared to Full. + +# 4.5 Continual Learning New Domains + +In this experiment, the data from six domains are presented sequentially. We test 6 runs with different domain order permutations. Each domain is selected as the first task once, and the remaining five domains are randomly ordered $^{4}$. Results averaged over 6 runs using 250 and 500 total exemplars are presented in Table 1. Several interesting observations can be noted: + +- All methods except Finetune perform worse on all seen tasks $(\Omega_{all})$ than on the first task $(\Omega_{first})$. This is due to the diverse knowledge among different tasks, which increases the difficulty of handling all the tasks. Finetune performs poorly on both metrics because of the detrimental catastrophic forgetting issue. +- Replaying exemplars helps to alleviate the catastrophic forgetting issue. The three ER methods substantially outperform Finetune. Moreover, the proposed prioritized exemplar selection scheme is effective, as indicated by the superior performance of $ER_{prio}$ over $ER_{herding}$ and $ER_{random}$. + +![](images/680ff40a999f2633db3115ca2a6fe20d4d4a5ac78870eed3bdbfbf2b8977b446.jpg) +Figure 4: SER on all seen domains (solid) and on the first domain (dashed) when more domains are continually learned using 250 exemplars. + +- ARPER significantly outperforms the three ER methods and the other regularization-based baselines. Compared to the three closest competitors, ARPER is significantly better with $p$-value $< 0.05$ w.r.t. SER.
+- The improvement margin of ARPER is significant w.r.t. SER, which is critical for measuring an output's fidelity to a given dialog act. Different methods demonstrate similar performance w.r.t. BLEU-4, where several of them approach Full and are thus very close to the upper bound performance. +- ARPER achieves performance comparable to the upper bound (Full) on all seen tasks ($\Omega_{all}$) even with a very limited number of exemplars. Moreover, it outperforms Full on the first task ($\Omega_{first}$), indicating that ARPER mitigates forgetting the first task better than Full, which is still interfered with by data from later domains. + +Dynamic Results in Continual Learning. In Figure 4, several representative methods are compared as more domains are continually learned. With more tasks continually learned, ARPER performs consistently better than the other methods on all seen tasks (solid lines), and it is comparable to Full. On the first task (dashed lines), ARPER outperforms all the methods, including Full, at every continual learning step. These results illustrate the advantage of ARPER throughout the entire continual learning process. + +# 4.6 Continual Learning New DA Intent + +It is also essential for a task-oriented dialog system to continually learn new functionalities, namely, supporting new DA intents. To test this ability,
| Method | $\Omega_{all}$ SER% | $\Omega_{all}$ BLEU-4 | $\Omega_{first}$ SER% | $\Omega_{first}$ BLEU-4 |
| --- | --- | --- | --- | --- |
| Finetune | 49.94 | 0.382 | 44.00 | 0.375 |
| $ER_{herding}$ | 13.96 | 0.542 | 8.50 | 0.545 |
| $ER_{random}$ | 8.58 | 0.626 | 5.53 | 0.618 |
| $ER_{prio}$ | 8.21 | 0.684 | 5.20 | 0.669 |
| $ER_{prio}$+L2 | 6.87 | 0.693 | 4.92 | 0.661 |
| $ER_{prio}$+KD | 10.59 | 0.664 | 10.87 | 0.649 |
| $ER_{prio}$+Dropout | 6.32 | 0.689 | 5.55 | 0.658 |
| ARPER | **3.63** | **0.701** | **3.52** | **0.685** |
| Full | 3.08 | 0.694 | 2.98 | 0.672 |
+ +Table 2: Performance of continually learning 7 DA intents using 250 exemplars. The best performance excluding "Full" is in bold. + +
| Method | $\Omega_{all}$ SER% | $\Omega_{all}$ BLEU-4 | $\Omega_{first}$ SER% | $\Omega_{first}$ BLEU-4 |
| --- | --- | --- | --- | --- |
| ARPER | 4.82 | 0.592 | 3.88 | 0.569 |
| w/o ER | 6.41 | 0.584 | 5.85 | 0.559 |
| w/o PE | 5.53 | 0.587 | 5.85 | 0.562 |
| w/o AR | 5.57 | 0.587 | 4.57 | 0.563 |
+ +Table 3: Ablation study for ARPER. ER / PE / AR stand for the Exemplar Replay loss / Prioritized Exemplars / Adaptive Regularization, respectively. + +the data of seven DA intents are presented sequentially in the order of decreasing data size, i.e., "Inform, Request, Book, Recommend, Offer-Booked, No-Offer, Select". Results using 250 exemplars are presented in Table 2. We can observe that ARPER still largely outperforms the other methods, and similar observations for ARPER can be made as before. Therefore, we conclude that ARPER is able to continually learn new functionalities. + +Compared to the previous experiments, the performance of $ER_{prio}$+KD degrades, while the performance of $ER_{prio}$+L2 improves due to the very large data size of the first task ("Inform"), which suggests that both are sensitive to task order. + +# 4.7 Ablation Study + +In Table 3, we compare several simplified versions of ARPER to understand the effects of different components. Comparisons are based on continually learning 6 domains starting with "Attraction". We can observe that: (1) $L_{ER}$ is beneficial, because dropping it ("w/o ER") degrades the performance of ARPER; (2) using prioritized exemplars is advantageous, because using random exemplars ("w/o PE") for ARPER impairs its performance; (3) adaptive regularization is also effective, as indicated by the superior performance of ARPER compared to using fixed regularization weights ("w/o AR").
| Model | Utterance |
| --- | --- |
| Dialog act | Recommend (Addr=regent street, Fee=free, Name=Downing College) |
| Reference | [Downing College] is my favorite. It's located on [regent street] and it's [free] to get in. |
| $ER_{prio}$+Dropout | [Downing College] is located in the city and it's located in the [regent street]. it's located at located at! it's located in the [Slot-Hotel-Area]. (missing: Fee=free) |
| ARPER | I would recommend [Downing College]. It is located at [regent street] and has a entrance fee of [free]. (correct) |
| Dialog act | Recommend (Area=centre of town, Name=saints church, Type=architecture destination) |
| Reference | There is a [saints church] that is an [architecture destination] in the [centre of town], would you like that? |
| $ER_{prio}$+Dropout | I recommend [saints church] in the [centre of town]. it is a nice. it is a guest house in a in a [Slot-Restaurant-Food]. (missing: Type=architecture destination) |
| ARPER | [saints church] is a [architecture destination] in the [centre of town]. (correct) |
+ +Table 4: Sample utterances generated for the first domain ("Attraction") after the NLG is continually trained on all 6 domains using 250 exemplars. Redundant and missing slots are colored in orange and blue respectively. Obvious grammar mistakes (redundant repetitions) are colored in purple. + +
| Method | SCVAE $\Omega_{all}$ | SCVAE $\Omega_{first}$ | GPT-2 $\Omega_{all}$ | GPT-2 $\Omega_{first}$ |
| --- | --- | --- | --- | --- |
| Finetune | 60.83 | 98.86 | 28.69 | 31.76 |
| $ER_{herding}$ | 17.95 | 11.48 | 11.95 | 10.48 |
| $ER_{random}$ | 9.31 | 7.52 | 9.87 | 8.85 |
| $ER_{prio}$ | 8.92 | 6.16 | 8.72 | 8.20 |
| $ER_{prio}$+L2 | 12.47 | 6.67 | 10.51 | 9.20 |
| $ER_{prio}$+KD | 6.32 | 6.09 | 8.41 | 8.09 |
| $ER_{prio}$+Dropout | 8.01 | 8.77 | 7.60 | 7.72 |
| ARPER | **4.45** | **4.04** | **5.32** | **5.05** |
| Full | 3.99 | 4.03 | 4.75 | 4.53 |
+ +Table 5: SER in % using SCVAE and GPT-2 as $f_{\theta}$. The best performance excluding "Full" is in bold. + +# 4.8 Case Study + +Table 4 shows two examples generated by ARPER and the closest competitor ($ER_{prio}$+Dropout) on the first domain ("Attraction") after the NLG model is continually trained on all 6 domains starting with "Attraction". In both examples, $ER_{prio}$+Dropout fails to generate the slot "Fee" or "Type"; instead, it mistakenly generates slots belonging to later domains ("Hotel" or "Restaurant") with several obvious redundant repetitions colored in purple. This means that the NLG model is interfered with by utterance patterns from later domains, and it forgets some old patterns it has learned before. In contrast, ARPER succeeds in both cases without forgetting previously learned patterns. + +# 4.9 Results using Other NLG Models + +In this experiment, we changed the base NLG model from SCLSTM to SCVAE (Tseng et al., 2018) and GPT-2 (Radford et al., 2019). For GPT-2, we used the pre-trained model with 12 layers and 117M parameters. As in Peng et al. (2020), exact slot values are not replaced by special placeholders during training, as they are in SCLSTM and SCVAE. The dialog act is concatenated with the corresponding utterance before being fed into GPT-2. More details are included in Appendix A.1. + +Results of using 250 exemplars to continually learn 6 domains starting with "Attraction" are presented in Table 5. Thanks to its large-scale pre-training, GPT-2 suffers less from the catastrophic forgetting issue, as indicated by the better performance of Finetune. In general, the relative performance patterns of different methods are similar to those observed in Sections 4.5 and 4.6. Therefore, we can claim that the superior performance of ARPER generalizes to different base NLG models. + +# 5 Conclusion + +In this paper, we study the practical continual learning setting of language generation in task-oriented dialog systems.
To alleviate catastrophic forgetting, we present ARPER which replays representative and diverse exemplars selected in a prioritized manner, and employs an adaptive regularization term based on EWC (Elastic Weight Consolidation). Extensive experiments on MultiWoZ-2.0 in different continual learning scenarios reveal the superior performance of ARPER. The realistic continual learning setting and the proposed technique may inspire further studies towards building more scalable task-oriented dialog systems. + +# References + +Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. 2018. Memory aware synapses: Learning what (not) to forget. In Proceedings of the European Conference on Computer Vision (ECCV), pages 139-154. +Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, OpenAI Pieter Abbeel, and Wojciech Zaremba. 2017. Hindsight experience replay. In Advances in neural information processing systems, pages 5048-5058. +Gaurav Arora, Afshin Rahimi, and Timothy Baldwin. 2019. Does an lstm forget more than a cnn? an empirical study of catastrophic forgetting in nlp. In Proceedings of the The 17th Annual Workshop of the Australasian Language Technology Association, pages 77-86. +Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2016. Learning end-to-end goal-oriented dialog. arXiv preprint arXiv:1605.07683. +Pawel Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Inigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gasic. 2018. Multiwoz-a large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016-5026. +Francisco M Castro, Manuel J Marín-Jiménez, Nicolás Guil, Cordelia Schmid, and Karteek Alahari. 2018. End-to-end incremental learning. In Proceedings of the European Conference on Computer Vision (ECCV), pages 233-248. 
+Arslan Chaudhry, Marcus Rohrbach, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K Dokania, Philip HS Torr, and Marc'Aurelio Ranzato. 2019. Continual learning with tiny episodic memories. arXiv preprint arXiv:1902.10486. +Sanyuan Chen, Yutai Hou, Yiming Cui, Wanxiang Che, Ting Liu, and Xiangzhan Yu. 2020. Recall and learn: Fine-tuning deep pretrained language models with less forgetting. arXiv preprint arXiv:2004.12651. +Alexandra Chronopoulou, Christos Baziotis, and Alexandros Potamianos. 2019. An embarrassingly simple approach for transfer learning from pretrained language models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2089-2095. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for + +Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186. +Robert M French. 1999. Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences, pages 128-135. +Tianxing He, Jun Liu, Kyunghyun Cho, Myle Ott, Bing Liu, James Glass, and Fuchun Peng. 2019. Mixreview: Alleviate forgetting in the pretrain-finetune framework for neural language generation models. arXiv preprint arXiv:1910.07117. +Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. +Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580. +Saihui Hou, Xinyu Pan, Chen Change Loy, Zilei Wang, and Dahua Lin. 2019. Learning a unified classifier incrementally via rebalancing. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 831-839. +Ronald Kemker, Marc McClure, Angelina Abitino, Tyler L Hayes, and Christopher Kanan. 2018. Measuring catastrophic forgetting in neural networks. In Thirty-second AAAI conference on artificial intelligence. +Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114. +James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13):3521-3526. +Sungjin Lee. 2017. Toward continual learning for conversational agents. arXiv preprint arXiv:1712.09943. +Yuanpeng Li, Liang Zhao, Kenneth Church, and Mohamed Elhoseiny. 2020. Compositional continual language learning. +Zhizhong Li and Derek Hoiem. 2017. Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence, 40(12):2935-2947. +Tianlin Liu, Lyle Ungar, and João Sedoc. 2019. Continual learning for sentence representations using conceptors. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3274-3279. + +Davide Maltoni and Vincenzo Lomonaco. 2019. Continuous learning in single-incremental-task scenarios. Neural Networks, 116:56-73. +Michael McCloskey and Neal J Cohen. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. In *Psychology of learning and motivation*, volume 24, pages 109-165. Elsevier. +Fei Mi, Minlie Huang, Jiyong Zhang, and Boi Faltings. 2019. Meta-learning for low-resource natural language generation in task-oriented dialogue systems. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 3151-3157. 
AAAI Press. +Fei Mi, Lingjing Kong, Tao Lin, Kaicheng Yu, and Boi Faltings. 2020a. Generalized class incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 240-241. +Fei Mi, Xiaoyu Lin, and Boi Faltings. 2020b. Ader: Adaptively distilled exemplar replay towards continual learning for session-based recommendation. In Fourteenth ACM Conference on Recommender Systems, pages 408-413. +Seyed-Iman Mirzadeh, Mehrdad Farajtabar, and Hassan Ghasemzadeh. 2020. Dropout as an implicit gating mechanism for continual learning. arXiv preprint arXiv:2004.11545. +Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2016. How transferable are neural networks in nlp applications? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 479-489. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311-318. +Baolin Peng, Chenguang Zhu, Chunyuan Li, Xiujun Li, Jinchao Li, Michael Zeng, and Jianfeng Gao. 2020. Few-shot natural language generation for task-oriented dialog. arXiv preprint arXiv:2002.12328. +Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237. +Kun Qian and Zhou Yu. 2019. Domain adaptive dialog generation via meta learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2639-2649. + +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. 
Tiago Ramalho and Marta Garnelo. 2019. Adaptive posterior learning: few-shot learning with a surprise-based memory module. In International Conference on Learning Representations (ICLR).
Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. 2017. iCaRL: Incremental classifier and representation learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2001-2010.
Matthew Riemer, Tim Klinger, Djallel Bouneffouf, and Michele Franceschini. 2019. Scalable recollections for continual lifelong learning. In AAAI, volume 33, pages 1352-1359.
Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. 2016. Progressive neural networks. arXiv preprint arXiv:1606.04671.
Danielle Saunders, Felix Stahlberg, Adrià de Gispert, and Bill Byrne. 2019. Domain adaptive inference for neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 222-228.
Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. 2016. Prioritized experience replay. In International Conference on Learning Representations (ICLR).
Yilin Shen, Xiangyu Zeng, and Hongxia Jin. 2019. A progressive model to enable continual learning for semantic slot filling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1279-1284.
Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. 2017. Continual learning with deep generative replay. In Advances in Neural Information Processing Systems, pages 2990-2999.
Brian Thompson, Jeremy Gwinnup, Huda Khayrallah, Kevin Duh, and Philipp Koehn. 2019. Overcoming catastrophic forgetting during domain adaptation of neural machine translation.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2062-2068. +Van-Khanh Tran and Le-Minh Nguyen. 2017. Natural language generation for spoken dialogue system using rnn encoder-decoder networks. In Proceedings of the 21st Conference on Computational Natural Language Learning, pages 442-451. + +Van-Khanh Tran and Le-Minh Nguyen. 2018a. Adversarial domain adaptation for variational neural language generation in dialogue systems. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1205-1217. +Van-Khanh Tran and Le-Minh Nguyen. 2018b. Dual latent variable model for low-resource natural language generation in dialogue systems. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 21-30. +Bo-Hsiang Tseng, Florian Kreyssig, Pawel Budzianowski, Inigo Casanueva, Yen-chen Wu, Stefan Ultes, and Milica Gasic. 2018. Variational cross-domain natural language generation for spoken dialogue systems. In 19th Annual SIG-dial Meeting on Discourse and Dialogue, pages 338-343. +Dusan Varis and Ondrej Bojar. 2019. Unsupervised pretraining for neural machine translation using elastic weight consolidation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 130-135. +Max Welling. 2009. Herding dynamical weights to learn. In Proceedings of the 26th International Conference on Machine Learning, pages 1121-1128. ACM. +Tsung-Hsien Wen, Milica Gašic, Dongho Kim, Nikola Mrkšic, Pei-Hao Su, David Vandyke, and Steve Young. 2015a. Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking. In 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, page 275. 
+Tsung-Hsien Wen, Milica Gašic, Nikola Mrksic, Lina M Rojas-Barahona, Pei-Hao Su, David Vandyke, and Steve Young. 2015b. Toward multidomain language generation using recurrent neural networks. In NIPS Workshop on Machine Learning for Spoken Language Understanding and Interaction. +Tsung-Hsien Wen, Milica Gasic, Nikola Mrkšić, PeiHao Su, David Vandyke, and Steve Young. 2015c. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711-1721. +Yue Wu, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, and Yun Fu. 2019. Large scale incremental learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 831-839. +Ying Xu, Xu Zhong, Antonio Jose Jimeno Yepes, and Jey Han Lau. 2019. Forget me not: Reducing catastrophic forgetting for domain adaptation in reading comprehension. arXiv preprint arXiv:1911.00202. + +Dani Yogatama, Cyprien de Masson d'Autume, Jerome Connor, Tomas Kocisky, Mike Chrzanowski, Lingpeng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, et al. 2019. Learning and evaluating general linguistic intelligence. arXiv preprint arXiv:1901.11373. +Friedemann Zenke, Ben Poole, and Surya Ganguli. 2017. Continual learning through synaptic intelligence. In Proceedings of the 34th International Conference on Machine Learning, pages 3987-3995. JMLR.org. +Bowen Zhao, Xi Xiao, Guojun Gan, Bin Zhang, and Shutao Xia. 2019. Maintaining discrimination and fairness in class incremental learning. arXiv preprint arXiv:1911.07053. + +# Appendix + +# A Reproducibility Checklist + +# A.1 Model Details and Hyper-parameters + +We first elaborate implementation details of the knowledge distillation (KD) baseline compared in our paper. 
We use the following loss term:

$$
L_{KD}(\mathbf{Y}, \mathbf{d}, f_{\theta_{t-1}}, f_{\theta_{t}}) = -\sum_{k=1}^{K} \sum_{i=1}^{|L|} \hat{p}_{k,i} \cdot \log(p_{k,i})
$$

where $L$ is the vocabulary that appears in previous tasks but not in task $t$. At each position $k$ of $\mathbf{Y}$, $[\hat{p}_{k,1},\dots,\hat{p}_{k,|L|}]$ is the predicted distribution over $L$ given by $f_{\theta_{t-1}}$, and $[p_{k,1},\dots,p_{k,|L|}]$ is the distribution given by $f_{\theta_t}$. $L_{KD}$ penalizes prediction changes on the vocabulary specific to earlier tasks. For all $\{\mathbf{Y},\mathbf{d}\} \in \mathbf{D}_t \cup \mathbf{E}_{1:t-1}$, $L_{KD}$ is linearly interpolated with $L_{ER}$ as $L_{ER} + \eta \cdot L_{KD}$, with $\eta$ tuned as a hyper-parameter.

Hyper-parameters of SCVAE reported in Section 4.9 are set by default according to https://github.com/andy194673/nlg-scvae, except that the learning rate is set to 2e-3. For GPT-2, we used the implementation pipeline from https://github.com/pengbaolin/SC-GPT. We pre-processed the dialog act $\mathbf{d}$ into the format $\mathbf{d}' = [\mathbf{I}(s_1 = v_1,\dots,s_p = v_p)]$, and the corresponding utterance $\mathbf{Y}$ is extended to $\mathbf{Y}'$ with a special start token [BOS] and an end token [EOS]. $\mathbf{d}'$ and $\mathbf{Y}'$ are concatenated before being fed into GPT-2. The learning rate of the Adam optimizer is set to 5e-5 without weight decay. As GPT-2 converges faster, we train at most 10 epochs for each task, with early stopping based on 3 consecutive epochs without improvement.

Hyper-parameters of different methods are tuned to minimize $\mathrm{SER}_{all}$ using grid search, and the optimal settings of different methods in various experiments are summarized in Table 6.

# A.2 Domain Order Permutations

In Table 7, we provide the exact domain order permutations of the 6 runs used in the experiments in Table 1 and Figure 4.
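Returning to the KD baseline of A.1, the loss term can be sketched in a few lines of numpy. This is a minimal toy illustration, not the released code: the distributions, the placeholder replay loss `l_er`, and the value of `eta` are all hypothetical.

```python
import numpy as np

def kd_loss(p_old, p_new, eps=1e-12):
    # Cross-entropy between the old model's distributions (teacher) and
    # the new model's distributions over the retained vocabulary L:
    # L_KD = -sum_k sum_i p_hat[k, i] * log(p[k, i])
    return -np.sum(p_old * np.log(p_new + eps))

# Toy distributions over |L| = 3 old-task words at K = 2 positions of Y.
p_old = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])   # from f_{theta_{t-1}}
p_new = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.7, 0.1]])   # from f_{theta_t}

l_kd = kd_loss(p_old, p_new)
l_er = 1.0                            # placeholder replay loss L_ER
eta = 0.5                             # interpolation hyper-parameter
total = l_er + eta * l_kd             # L_ER + eta * L_KD
```

By Gibbs' inequality the term is minimized when the new model reproduces the old model's distribution exactly, which is what makes it a forgetting penalty.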
+ +# A.3 Computation Resource + +All experiments are conducted using a single GPU (GeForce GTX TITAN X). In Table 8, we compared the average training time of one epoch using + +
| Method | Domains | DA Intents |
| --- | --- | --- |
| $ER_{prio}$ ($\beta$) | 0.5 / 0.5 / 0.5 / 0.5 | 0.5 |
| L2 (weight on $L_2$) | 1e-3 / 1e-3 / 1e-3 / 5e-4 | 1e-2 |
| KD (weight on $L_{KD}$) | 2.0 / 3.0 / 2.0 / 0.5 | 5.0 |
| Dropout (rate) | 0.25 / 0.25 / 0.25 / 0.1 | 0.25 |
| ARPER ($\lambda_{base}$) | 300k / 350k / 200k / 30k | 100k |

Table 6: Optimal hyper-parameters of the methods evaluated in this paper. The four values in the column "Domains" correspond to using 250 exemplars in both Table 1 and Table 2 / 500 exemplars in Table 1 / SCVAE as $f(\theta)$ in Table 5 / GPT-2 as $f(\theta)$ in Table 5, respectively.
| Run | Domain order |
| --- | --- |
| Run 1 | 0, 5, 2, 1, 3, 4 |
| Run 2 | 1, 4, 0, 5, 3, 2 |
| Run 3 | 2, 0, 3, 1, 4, 5 |
| Run 4 | 3, 2, 4, 0, 1, 5 |
| Run 5 | 4, 2, 1, 5, 0, 3 |
| Run 6 | 5, 3, 2, 0, 1, 4 |

Table 7: Each row corresponds to a domain order permutation. The mapping from domain to id is: {"Attraction": 0, "Booking": 1, "Hotel": 2, "Restaurant": 3, "Taxi": 4, "Train": 5}.
| Finetune | $ER_{prio}$ | L2 | KD | Dropout | ARPER | Full |
| --- | --- | --- | --- | --- | --- | --- |
| 17.5s | 18.5s | 19.5s | 24.6s | 15.5s | 39.5s | 242.5s |
Table 8: Average training time of one epoch at the last task when continually learning 6 domains starting with "Attraction" using 250 exemplars. Methods other than Finetune and Full are applied on top of $ER_{prio}$.

different methods. Full incurs more than 200s of extra computation per epoch compared with the other methods, which use bounded exemplars. ARPER takes slightly longer to train than all methods except Full. Nevertheless, considering its superior performance, we contend that ARPER achieves a desirable resource-performance trade-off. In addition, 250 exemplars amount to less than $1\%$ of the historical data at the last task, and the memory needed to store such a small number of exemplars is trivial.

# B Supplementary Empirical Results

# B.1 Comparison to Pseudo Exemplar Replay

Instead of storing raw samples as exemplars, Shin et al. (2017); Riemer et al. (2019) generate "pseudo" samples akin to past data. The NLG model itself can generate pseudo exemplars. In this experiment, we replace the 500 raw exemplars of $ER_{random}$, $ER_{prio}$, and $ARPER$ by pseudo samples generated by the continually trained NLG model using the dialog acts of the same raw exemplars

![](images/f379a8bdb1a569d44edc1cc96f22fa001786ab89197be11c06a4b0248e0fc84e.jpg)

![](images/913bde216088b72e8684b23f3d19379b3bf298828e9b8cdfdd66056f8f833a78.jpg)

![](images/d09de595d951f85ccf307dea8e22ae15399637734e962591cb0a55a901013c75.jpg)
Figure 5: A visualization of the change of SCLSTM's hidden layer weights obtained from two consecutive tasks of ARPER (Top) and $ER_{prio} + Dropout$ (Bottom). Two sample task transitions (from "Attraction" to "Train", and then from "Train" to "Hotel") are shown. High-temperature areas of ARPER are highlighted by red bounding boxes for better visualization.

![](images/56d548fc3e1e7cb8e15a601657b679e8d70e2e830226c1de5882226bc2f477e4.jpg)
| Method | $\Omega_{all}$ SER% | $\Omega_{all}$ BLEU-4 | $\Omega_{first}$ SER% | $\Omega_{first}$ BLEU-4 |
| --- | --- | --- | --- | --- |
| $ER_{random}$ | 9.82 | 0.495 | 8.64 | 0.405 |
| Pseudo-$ER_{random}$ | 9.26 | 0.551 | 6.88 | 0.519 |
| $ER_{prio}$ | 7.84 | 0.573 | 6.20 | 0.523 |
| Pseudo-$ER_{prio}$ | 8.87 | 0.557 | 6.37 | 0.521 |
| ARPER | 4.43 | 0.597 | 3.40 | 0.574 |
| Pseudo-ARPER | 5.07 | 0.590 | 3.51 | 0.570 |
Table 9: Comparison with Pseudo Exemplar Replay.

as input. Results of using pseudo versus raw exemplars to continually learn 6 domains starting with "Attraction" are shown in Table 9. We can see that using pseudo exemplars performs better for $ER_{random}$, but worse for $ER_{prio}$ and $ARPER$. This indicates that pseudo exemplars help when exemplars are chosen randomly, while carefully chosen exemplars (cf. Algorithm 1) are better than pseudo exemplars. Explorations on utilizing pseudo exemplars for NLG are orthogonal to our work and are left as future work.

# B.2 Flow of Parameters Update

To further understand the superior performance of ARPER, we investigated how parameters are updated throughout the continual learning process. Specifically, we compared SCLSTM's hidden layer weights obtained from consecutive tasks, and the pairwise $L_{1}$ difference of two sample transitions is shown in Figure 5.

We can observe that $ER_{prio} + Dropout$ tends to update almost all parameters, while ARPER only updates a small fraction of them. Furthermore, ARPER has different sets of important parameters for distinct tasks, indicated by different high-temperature areas in the weight-update heat maps. In comparison, parameters of $ER_{prio} + Dropout$ seem to be updated uniformly across task transitions. These observations verify that ARPER indeed elastically allocates different network parameters to distinct NLG tasks to mitigate catastrophic forgetting.
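The heat maps in Figure 5 are simply the element-wise $L_1$ difference between the hidden-layer weights saved after two consecutive tasks. A minimal numpy sketch of this analysis follows; the shapes, random data, and the "few rows updated" pattern are toy stand-ins, not the actual SCLSTM weights.

```python
import numpy as np

def weight_update_map(w_prev, w_next):
    # Element-wise L1 difference between hidden-layer weights of two
    # consecutive tasks; high values mark heavily updated parameters.
    return np.abs(w_next - w_prev)

rng = np.random.default_rng(1)
w_after_task1 = rng.normal(size=(8, 8))          # toy hidden-layer weights
w_after_task2 = w_after_task1.copy()
w_after_task2[:2, :] += rng.normal(size=(2, 8))  # ARPER-like: few rows change

delta = weight_update_map(w_after_task1, w_after_task2)
frac_updated = np.mean(delta > 1e-6)  # fraction of parameters updated
```

Plotting `delta` as an image reproduces the style of the paper's heat maps: a uniformly bright map corresponds to the $ER_{prio} + Dropout$ behavior, while concentrated bright bands correspond to ARPER's sparse updates.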
\ No newline at end of file diff --git a/continuallearningfornaturallanguagegenerationintaskorienteddialogsystems/images.zip b/continuallearningfornaturallanguagegenerationintaskorienteddialogsystems/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..12bf0efd0d9c73ce82fb661ceabae06bec05e852 --- /dev/null +++ b/continuallearningfornaturallanguagegenerationintaskorienteddialogsystems/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1e1e19a5edb3423bf19081bc69df40f48aef18b3e93153ec56ad00e591fe071d +size 712615 diff --git a/continuallearningfornaturallanguagegenerationintaskorienteddialogsystems/layout.json b/continuallearningfornaturallanguagegenerationintaskorienteddialogsystems/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..7aa8010502646b36d5f1ff4cd55bf88ef3fda77f --- /dev/null +++ b/continuallearningfornaturallanguagegenerationintaskorienteddialogsystems/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:238c262857637cde5423c439dda4f03b80f770a442eba4691adab57df73b6cfb +size 573314 diff --git a/continuallearninglongshorttermmemory/773f951e-7887-4b40-8766-e30d0ea82c09_content_list.json b/continuallearninglongshorttermmemory/773f951e-7887-4b40-8766-e30d0ea82c09_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..a5669b8cbe2663258478df3b06e39fea41096a9d --- /dev/null +++ b/continuallearninglongshorttermmemory/773f951e-7887-4b40-8766-e30d0ea82c09_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:caf26b8647db202e0a60a8afc6fa1784bf4daa7c031246bc241c0e1e2a651738 +size 51462 diff --git a/continuallearninglongshorttermmemory/773f951e-7887-4b40-8766-e30d0ea82c09_model.json b/continuallearninglongshorttermmemory/773f951e-7887-4b40-8766-e30d0ea82c09_model.json new file mode 100644 index 0000000000000000000000000000000000000000..58fa7aca64f564bf4df4d58ac4b94479f688d278 --- 
/dev/null +++ b/continuallearninglongshorttermmemory/773f951e-7887-4b40-8766-e30d0ea82c09_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c44b059bfdf627906ec10c3fd460f5c6a319a5dfd006f1a852e015f7790b9389 +size 62221 diff --git a/continuallearninglongshorttermmemory/773f951e-7887-4b40-8766-e30d0ea82c09_origin.pdf b/continuallearninglongshorttermmemory/773f951e-7887-4b40-8766-e30d0ea82c09_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..438586805a998832cde62ccc7cc23d9d75b64e5c --- /dev/null +++ b/continuallearninglongshorttermmemory/773f951e-7887-4b40-8766-e30d0ea82c09_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4d820d29ddfc68854cf04d38c7ed90cc0aa1ea0b8c496d197dc918ba56035855 +size 347705 diff --git a/continuallearninglongshorttermmemory/full.md b/continuallearninglongshorttermmemory/full.md new file mode 100644 index 0000000000000000000000000000000000000000..fe118d3784e88f1480abb4e396c223939ac75294 --- /dev/null +++ b/continuallearninglongshorttermmemory/full.md @@ -0,0 +1,269 @@ +# Continual Learning Long Short Term Memory + +Xin Guo* + +University of Delaware + +guoxin@udel.edu + +Yu Tian* + +Rutgers University + +yt219@cs.rutgers.edu + +Qinghan Xue + +IBM + +qinghan.xue@ibm.com + +Panos Lampropoulos IBM + +panos11@ibm.com + +Steven Eliuk IBM + +steven.eliuk@ibm.com + +Kenneth Barner + +University of Delaware + +barner@udel.edu + +Xiaolong Wang† + +IBM + +xiaolong.wang@ibm.com + +# Abstract + +Catastrophic forgetting in neural networks indicates the performance decreasing of deep learning models on previous tasks while learning new tasks. To address this problem, we propose a novel Continual Learning Long Short Term Memory (CL-LSTM) cell in Recurrent Neural Network (RNN) in this paper. 
CL-LSTM considers not only the state of each individual task's output gates but also the correlation of the states between tasks, so that deep learning models can incrementally learn new tasks without catastrophically forgetting previous tasks. Experimental results demonstrate significant improvements of CL-LSTM over state-of-the-art approaches on spoken language understanding (SLU) tasks.

# 1 Introduction

The AI community has enjoyed a substantial performance boost from the emergence of deep learning technologies, thanks to the availability of big data and computing resources. One of the most recent and realistic challenges for deep learning models on streaming data is the capability of continual learning. When new data becomes available, re-training a brand new model with all the old and new data is the ideal way to achieve high performance on both tasks. However, several factors prevent storing old data for the entire lifetime, such as memory restrictions and data governance. When learning without all the old data, the performance on old tasks drops dramatically; this phenomenon is called catastrophic forgetting (McClelland et al., 1995).

Catastrophic forgetting occurs in neural networks due to the stability-plasticity dilemma (Abraham and Robins, 2005): the network requires sufficient plasticity to capture new tasks, but large weight variations may disrupt previously learned representations. Continual learning methods are proposed to prevent catastrophic forgetting when only a limited amount of old data is available.

Several approaches have been proposed to solve this problem in the deep learning field (Awasthi and Sarawagi, 2019; Rusu et al., 2016; Zhizhong Li, 2018; Kirkpatrick et al., 2016; Riemer et al., 2019; Serra et al., 2018; Hou et al., 2018). A popular trend is to use expandable networks to store/learn old/new knowledge and then require a task ID to select the corresponding module during the inference stage.
(Rusu et al., 2016; Mallya et al., 2018; Yoon et al., 2017; Mallya and Lazebnik, 2017).

In contrast, only a few attempts have been made to address catastrophic forgetting in the natural language processing (NLP) field. Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2016) has been adapted to visual question answering (Greco et al., 2019) and language modeling (Wolf et al., 2019). The Progressive Neural Network proposed in reinforcement learning (Rusu et al., 2016) has been adopted for semantic slot filling in (Shen et al., 2019). A continual learning architecture preventing catastrophic forgetting via block-sparsity and orthogonality constraints is presented in (Pasunuru and Bansal, 2019) on diverse sentence-pair classification tasks.

![](images/2b6e42ed94fb1f795c20106348bdc0af0f89b1ef406c6f73b103840d2be071d3.jpg)
Figure 1: Deep neural networks a) with and b) without the requirement of task IDs in the inference stage.

To the best of our knowledge, none of the previous works in NLP considers the interactions between tasks at the LSTM cell level. Moreover, the requirement of task IDs at inference is infeasible and impractical in real scenarios, as shown in Fig. 1. Therefore, a novel Continual Learning Long Short Term Memory (CL-LSTM) cell is proposed to prevent catastrophic forgetting. The contributions of this paper are: (a) a novel LSTM cell for continual learning is proposed; the proposed CL-LSTM includes separate modules for different tasks; (b) each task further has a broadcast module to send its hidden states to all of the old tasks, and a collect module to take hidden states as inputs from all of the old tasks, so the output gates of each task integrate information from all tasks; (c) the proposed model does not require task IDs to perform inference, which is more practical in real-world scenarios. We evaluate the proposed CL-LSTM on both slot filling and intent detection in spoken language understanding.
Experimental results show that the proposed CL-LSTM outperforms state-of-the-art methods by a large margin. Code is available at https://github.com/IBM-GCDO/EMNLP-CL-LSTM.

# 2 Method

# 2.1 Preliminary: LSTM

LSTM (Long Short Term Memory) (Hochreiter and Schmidhuber, 1997) operates as a parameterized function $R$ that takes an input vector $x_{t}$ with a state vector $(c_{t-1}, h_{t-1})$ and returns a state vector $(c_{t}, h_{t}) = R(x_{t}, c_{t-1}, h_{t-1})$.

![](images/d1be6fe04d5cc41b1b5f7830af651bc231e872da7945cf2b157d6921ecb21930.jpg)
Figure 2: CL-LSTM with three tasks. For the third task, old modules are frozen (grey) and $M_3$, $M_3^c$, $M_3^b$ (yellow) are trained for information sharing. $h_{out}^{(t)}$ is the aggregation of all hidden states.

Specifically, it incorporates a gating mechanism, taking the form:

$$
f_{t} = W^{f} x_{t} + U^{f} h_{t-1} + b^{f}, \tag{1}
$$

$$
i_{t} = W^{i} x_{t} + U^{i} h_{t-1} + b^{i}, \tag{2}
$$

$$
o_{t} = W^{o} x_{t} + U^{o} h_{t-1} + b^{o}, \tag{3}
$$

$$
\tilde{c}_{t} = W^{c} x_{t} + U^{c} h_{t-1} + b^{c}, \tag{4}
$$

where the $W$s and $U$s are learnable matrices and the $b$s are biases. If we stack the $W$s and $U$s into one single matrix $W$, combine the $b$s into $b$, and concatenate $x_{t}$ and $h_{t-1}$, we have:

$$
\left[ f_{t}, i_{t}, o_{t}, \tilde{c}_{t} \right] = W \left[ x_{t}, h_{t-1} \right] + b. \tag{5}
$$

The outputs $c_{t}$ and $h_{t}$ can be obtained from:

$$
c_{t} = \sigma(f_{t}) \circ c_{t-1} + \sigma(i_{t}) \circ \tanh(\tilde{c}_{t}), \tag{6}
$$

$$
h_{t} = \sigma(o_{t}) \circ g(c_{t}), \tag{7}
$$

where $\sigma$ denotes the sigmoid function, $\circ$ the Hadamard product, and $g$ either tanh or the identity function.
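Eqs. (5)-(7) can be sketched as a single numpy step. This is a minimal toy illustration of the standard LSTM cell (random weights, toy dimensions), independent of the paper's released implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    # Eq. (5): the fused matrix W maps [x_t, h_{t-1}] to the stacked
    # pre-activations [f_t, i_t, o_t, c_tilde_t].
    z = W @ np.concatenate([x_t, h_prev]) + b
    f_t, i_t, o_t, c_tilde = np.split(z, 4)
    # Eq. (6): new cell state.
    c_t = sigmoid(f_t) * c_prev + sigmoid(i_t) * np.tanh(c_tilde)
    # Eq. (7): new hidden state, with g = tanh.
    h_t = sigmoid(o_t) * np.tanh(c_t)
    return h_t, c_t

rng = np.random.default_rng(0)
x_dim, h_dim = 4, 3                   # toy dimensions
W = rng.normal(scale=0.1, size=(4 * h_dim, x_dim + h_dim))
b = np.zeros(4 * h_dim)
h, c = np.zeros(h_dim), np.zeros(h_dim)
for t in range(5):                    # unroll over a toy input sequence
    h, c = lstm_step(rng.normal(size=x_dim), h, c, W, b)
```

In the notation of Eq. (8) below, the pair `(W, b)` is exactly the module $M$ whose per-task copies CL-LSTM keeps separate.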
In this paper, we are interested in the hidden states: for a standard LSTM cell with parameters $\{W,b\}$ included within one module $M$, the update of $h_t$ can be represented as:

$$
h_{t} = M(x_{t}, h_{t-1}). \tag{8}
$$

# 2.2 CL-LSTM

As discussed above, the model parameters $\{W,b\}$ in the standard LSTM cell keep updating once the given cell starts to learn a new task, which makes it difficult to avoid catastrophic forgetting. To mitigate this phenomenon, we propose a novel cell named CL-LSTM, illustrated in Fig. 2, which is mainly composed of the following components:

Task-oriented Modules. Assume that the model learns $K$ tasks sequentially. The training data is $X = \{X_{1},X_{2},\dots,X_{K}\}$, where $X_{k}$ denotes the training dataset for the $k^{th}$ task, and $C_k$ different classes are included in task $k$. When the first task arrives, CL-LSTM starts with a single module $M_1 = \{W_1,b_1\}$. $M_{1}$ is updated like a standard LSTM with the training data $x\in X_{1}$:

$$
h_{1}^{(t)} = M_{1}\left(x^{(t)}, h_{1}^{(t-1)}\right), \quad t \in \{1, 2, \dots, T\}, \tag{9}
$$

where $h_1^{(t)}$ is the hidden state at timestamp $t$, $T$ represents the length of the sequential data $x$, and $c_1^{(t)}$ is updated by Eq. 6. When starting to work on a new task $k > 1$, parameters of old tasks $(M_{